
You never said anything like that in your previous email. However, you do realize that the standard says very little about binary layout? For example, sizeof(struct { char a; int b; }) is usually greater than sizeof(char) + sizeof(int) because of padding.
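A quick illustration (the exact numbers depend on the compiler and ABI; on a typical platform with 4-byte int alignment this prints "8 1 4"):

#include <iostream>

struct S { char a; int b; };

int main() {
    // The compiler typically inserts 3 bytes of padding after 'a' so that
    // 'b' lands on a 4-byte boundary, making sizeof(S) larger than 1 + 4.
    std::cout << sizeof(S) << ' ' << sizeof(char) << ' ' << sizeof(int) << '\n';
}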
I have been coding in C and assembler for more than 20 years, so yes, I know this. Sometimes memory usage matters despite the alignment-related performance issues: using 16-bit instead of 32-bit integers reduces memory usage a lot.
You can also write your own template type that takes a bit size (or value range) and resolves to the smallest native integral type able to satisfy the requirement.
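Something along these lines (the name smallest_uint is mine, not a standard facility; it uses std::conditional from <type_traits>, and the same idea can be hand-rolled with class template specializations on older compilers):

#include <cstdint>
#include <type_traits>

// Map a bit count to the smallest standard unsigned type that can hold it.
template <unsigned Bits>
struct smallest_uint {
    typedef typename std::conditional<Bits <= 8,  std::uint8_t,
            typename std::conditional<Bits <= 16, std::uint16_t,
            typename std::conditional<Bits <= 32, std::uint32_t,
                                                  std::uint64_t
            >::type>::type>::type type;
};

// On typical platforms a 12-bit requirement resolves to a 2-byte type.
static_assert(sizeof(smallest_uint<12>::type) == 2, "12 bits -> 16-bit type");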
Sure, we all know that, and fortunately C/C++ allows us to do it. But if tasks like cross-platform serialization are so common, why not introduce them into the standard, as was done with the STL?
wchar_t is horribly defined. Sometimes it is 1 byte, on Windows it is 2 bytes, on most Unix platforms 4 bytes. Not only the size differs but also the encoding.
So you are saying the standard should have decided on a byte size for wchar_t?
What I mean is that there should be a cross-platform way to represent all characters. Unicode does this; wchar_t does not. So wchar_t is useless for real cross-platform i18n applications. On Windows it even breaks the requirement that one wide character be able to represent every character the system supports: for characters outside the Basic Multilingual Plane you need two wchar_t (a surrogate pair).
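A small demonstration of both problems (the musical G clef, U+1D11E, is just one example of a code point outside the BMP; the output depends on the platform, typically "2" and "2" on Windows versus "4" and "1" on Linux):

#include <cwchar>
#include <iostream>

int main() {
    // The size of wchar_t differs per platform.
    std::cout << "sizeof(wchar_t) = " << sizeof(wchar_t) << '\n';

    // With a 2-byte wchar_t (UTF-16) this character needs two code units,
    // with a 4-byte wchar_t (UTF-32) only one.
    const wchar_t clef[] = L"\U0001D11E";
    std::cout << "wchar_t units:   " << std::wcslen(clef) << '\n';
}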
No, here I think you are wrong. wchar_t is not supposed to help you work with Unicode.
The effect is that wchar_t is confusing and useless for cross-platform i18n. Once you realize this, you simply avoid using wchar_t. Not exactly great for something that is part of the standard. I want to create applications that run on different platforms without it being a nightmare.
You are saying that besides the native integral types (and native character types), C++ should offer some fixed-size integer types (and some fixed-encoding character types). C++0x will offer both, from what I understand.
Yes, fixed-size integral types are enough; a Unicode code point is just a number. Of course, a standard function for converting between the current locale encoding and Unicode would be welcome, and better still if I could write Unicode text directly to an I/O stream.
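For example, the facilities C++0x introduces cover both points: fixed-width integers from <cstdint> and the new char32_t type, where a char32_t value holds a Unicode code point directly (a rough sketch):

#include <cstdint>
#include <string>

int main() {
    // Fixed-width integer types, the same on every conforming platform.
    std::uint16_t small = 0xBEEF;
    std::int32_t  value = -42;

    // char32_t holds any Unicode code point; a u32string is a sequence of
    // code points (UTF-32), independent of the platform's wchar_t.
    char32_t       gclef = U'\U0001D11E';
    std::u32string text  = U"caf\u00E9";

    (void)small; (void)value; (void)gclef; (void)text;
}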
Yes, that's what everyone does in their portability layer: they create portable types. But note that integer size and byte endianness are not all it takes to make serialization portable.
Since it's so widely needed, why not introduce into the standard a common serialization encoding for (fixed-size) integral types that all applications can use? Why must everyone reinvent the wheel?
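This is the kind of thing every project ends up writing by hand (the function names here are purely illustrative, not a standard or library API): pick a fixed width and a fixed byte order, and convert explicitly regardless of the host's endianness.

#include <cstdint>

// Write a 32-bit unsigned value in big-endian byte order.
void put_u32_be(std::uint32_t v, unsigned char* out) {
    out[0] = static_cast<unsigned char>((v >> 24) & 0xFF);
    out[1] = static_cast<unsigned char>((v >> 16) & 0xFF);
    out[2] = static_cast<unsigned char>((v >> 8)  & 0xFF);
    out[3] = static_cast<unsigned char>( v        & 0xFF);
}

// Read it back, independent of the host's native byte order.
std::uint32_t get_u32_be(const unsigned char* in) {
    return (static_cast<std::uint32_t>(in[0]) << 24) |
           (static_cast<std::uint32_t>(in[1]) << 16) |
           (static_cast<std::uint32_t>(in[2]) << 8)  |
            static_cast<std::uint32_t>(in[3]);
}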
I haven't described this to advertise the library, just to say that IMO it is quite normal for a low-level language like C++ to provide native types whose representation depends largely on the platform.
Yes, I know this very well. In the past, performance was much more important than it is today, and 30 years ago issues like cross-platform serialization and i18n barely existed. But today it's different.
Huh? Where does the standard say that the unsigned type has to use all the bits?
I want my code to be portable. I get an overflow because on common platforms the unsigned type uses all its bits when handling signed/unsigned values. Are you telling me to compile only on a Unisys MCP to avoid the overflow?
Of course there is a need (otherwise they wouldn't be in C++0x). I'm just saying that the C++ native types are not flawed for not providing it. They are simply not the tool for that, plain and simple.
The simple fact that -1 < 2U evaluates to false without giving me any warning is an issue. C++ is IMHO the best language in the world; I love it, and I have loved C since the first day, a long time ago. But there are issues... there is plenty of room to improve it. Do you really think a language that is over 30 years old has no issues today? The best path is to extend and improve it, so that existing code benefits from the new features.
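For instance (assuming the common case where int and unsigned int are both 32 bits):

#include <iostream>

int main() {
    // The usual arithmetic conversions turn -1 into a huge unsigned value
    // before the comparison, so this prints "false".
    std::cout << std::boolalpha << (-1 < 2U) << '\n';

    // It is effectively comparing 4294967295 < 2 on a 32-bit-int platform.
    std::cout << (static_cast<unsigned int>(-1) < 2U) << '\n';
}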