
-----Original Text----- Dizzy wrote:
So you have sizeof().
You missed my point! Or do you really think I have never heard of sizeof? :-) When I define my class/struct, it often happens that, to avoid wasting space, I want to define it with exactly the size needed. For example, I want a member to be 16 bits (sorry for the bit terminology, it's an old habit). The only way to do that is with preprocessor directives (because it is platform dependent) that create something like int16, int32, int64. With small data I don't care, but when I work on a huge amount of data the difference between int16 and int32 is double the amount of used memory. And this matters.

wchar_t is horribly defined: sometimes 1 byte, on Windows 2 bytes, on most Unix platforms 4 bytes. Not only is the size different, the encoding is different too! What a wrong choice. Have you ever worked with Unicode? wchar_t is supposed to help with this, but it requires you to add a lot of platform-dependent code. When I use UTF-8, UTF-16 or UTF-32 I NEED to know the size of the integer type when I define it: char for UTF-8, but today I use platform-dependent preprocessor directives for UTF-16/32. It would be simpler if C/C++ offered int16, int32, int64 types.

Also, when I want to store the data in cross-platform compatible files, I am supposed to always use the same byte size for integer values. So explicitly using an int32/int64 data type (and handling endianness manually) is an easy way to handle this. Those types don't exist, so we all have to define them manually with stuff like #if... + typedef int int32, etc. etc.
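A minimal sketch of the kind of hand-rolled typedef block described above (the limits-macro tests are one illustrative way to detect the widths; <boost/cstdint.hpp> already packages the same idea as boost::int16_t, boost::int32_t and friends):

    #include <climits>

    // Hand-rolled fixed-width typedefs of the "#if... + typedef" kind.
    #if USHRT_MAX == 0xFFFF
    typedef short          int16;
    typedef unsigned short uint16;
    #else
    #  error "no 16-bit integer type found"
    #endif

    #if UINT_MAX == 0xFFFFFFFF
    typedef int            int32;
    typedef unsigned int   uint32;
    #elif ULONG_MAX == 0xFFFFFFFF
    typedef long           int32;
    typedef unsigned long  uint32;
    #else
    #  error "no 32-bit integer type found"
    #endif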
Technically if you compare/add/sub two signed/unsigned values of the same byte size then you can get an overflow, because negative signed values don't exist in the unsigned realm and you have double the amount of unsigned values. Not necessarily true (that the unsigned type reuses the corresponding signed type's sign bit for value). There can be unsigned types for which the range of values is the same as that of the signed type (except, of course, the negative values).
How can a signed integer type include all the values of an unsigned integer type if both have the same bit/byte size? Or am I missing your point?
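For what it's worth, on mainstream platforms the unsigned type really does have double the positive range; dizzy's caveat is about what the standard guarantees, since unsigned types are allowed to carry padding bits. A quick sketch to check a given platform:

    #include <iostream>
    #include <limits>

    int main() {
        // Typical output: 32767 / 65535 -- the unsigned type has double
        // the positive range. The standard only guarantees that the
        // non-negative signed values are a subset of the unsigned ones.
        std::cout << std::numeric_limits<short>::max() << " / "
                  << std::numeric_limits<unsigned short>::max() << '\n';
    }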
For what, for the issues Zeljko described or for the fixed integer sizes you mentioned? For the former, it's technically impossible to have integer types that are both native (fast) and checked without runtime cost.
Checked integer types are useful for debug builds. The runtime cost is not that heavy: usually on overflow a CPU flag is set (in some cases even an interrupt is raised), so a simple assembler jump handles it. But again, only for debug/special builds, since we all know there is a runtime overhead.
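A minimal sketch of such a debug-only checked addition, using a portable pre-test rather than the CPU overflow flag (which standard C++ cannot inspect directly; gcc/clang's __builtin_add_overflow builtin gets closer to the single flag-check-plus-jump cost described):

    #include <cassert>
    #include <limits>

    // Debug-build checked addition: asserts instead of silently
    // overflowing. With NDEBUG defined the asserts compile away and
    // only the plain add remains, so release builds pay nothing.
    int checked_add(int a, int b) {
        assert(!(b > 0 && a > std::numeric_limits<int>::max() - b)
               && "signed overflow in checked_add");
        assert(!(b < 0 && a < std::numeric_limits<int>::min() - b)
               && "signed underflow in checked_add");
        return a + b;
    }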
For the latter, of course you can, as you can even make classes for fixed integer types (I have something like this in my serialization framework, and the code takes fast code paths if it detects at compile time that the current platform matches the fixed integer sizes the user asked for).
You see... you too admit that you needed to use integer types whose size you knew, i.e. "fixed integer types".

-----Original Message-----
From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On behalf of dizzy
Sent: Monday, 18 August 2008 11:35
To: boost-users@lists.boost.org
Subject: Re: [Boost-users] size_type doubts / integer library..

On Sunday 17 August 2008 22:16:29 Andrea Denzler wrote:
I may add that C/C++ have different integer sizes on different platforms, adding even more confusion.
How do you suggest that C++ offer integer types that are native to the platform then?
I understand that a basic int has the size of the processor register, but when I handle and store data values I want to know its size.
So you have sizeof().
When I want a 16-bit integer then I want to use a 16-bit integer, so that I don't waste space with a 32-bit one or end up with data structures of different sizes.
I don't think bits matter much in storage, because there is no system interface I am aware of that works with bits; POSIX file I/O, socket I/O, etc. all work with bytes, the native platform byte. And sizeof() tells you the size in bytes, so you have all you need to know how much space it takes to store that type.
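For instance, a quick sketch (the printed sizes are typical, not guaranteed; only sizeof(char) == 1 is fixed by the standard):

    #include <iostream>

    int main() {
        // Storage sizes in bytes; typical 32-bit platform output: 1 2 4 4.
        std::cout << sizeof(char)  << ' ' << sizeof(short) << ' '
                  << sizeof(int)   << ' ' << sizeof(long)  << '\n';
    }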
Even worse is the size of wchar_t for i18n.
sizeof(wchar_t) works as well.
A signed/unsigned compare should always generate a warning, but I just found out it doesn't if you use constant values like -1 < 2U. Funny.

    signed int a = -1;
    unsigned int b = 2U;
    bool result = a < b; // as usual I get the signed/unsigned warning
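For reference, here is what the constant case evaluates to: the usual arithmetic conversions turn -1 into a huge unsigned value, so the expression is false, warning or not. A minimal sketch:

    #include <iostream>

    int main() {
        // -1 is converted to unsigned int by the usual arithmetic
        // conversions, becoming UINT_MAX, so UINT_MAX < 2 is false.
        std::cout << std::boolalpha << (-1 < 2U) << '\n'; // prints: false
    }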
Technically if you compare/add/sub two signed/unsigned values of the same byte size then you can get an overflow, because negative signed values don't exist in the unsigned realm and you have double the amount of unsigned values.
Not necessarily true (that the unsigned type reuses the corresponding signed type's sign bit for value). There can be unsigned types for which the range of values is the same as that of the signed type (except, of course, the negative values).
That's why a class (or new standard integer types) handling those confusions is really welcome.
For what, for the issues Zeljko described or for the fixed integer sizes you mentioned? For the former, it's technically impossible to have integer types that are both native (fast) and checked without runtime cost. For the latter, of course you can, as you can even make classes for fixed integer types (I have something like this in my serialization framework, and the code takes fast code paths if it detects at compile time that the current platform matches the fixed integer sizes the user asked for).
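This is not the actual framework code, but a minimal sketch of that compile-time dispatch idea (the emulated_int32 fallback is made up for illustration): pick the native type when its width matches what the user asked for, otherwise fall back to a slower portable representation.

    #include <climits>

    // Hypothetical slow fallback that stores a 32-bit value as raw
    // bytes; only its existence matters for this sketch.
    struct emulated_int32 { unsigned char bytes[4]; };

    // Primary template: use the emulated representation.
    template <bool NativeMatches>
    struct int32_storage { typedef emulated_int32 type; };

    // Specialization: the platform's int is exactly 32 bits, so use
    // it directly -- the "fast code path" selected at compile time.
    template <>
    struct int32_storage<true> { typedef int type; };

    typedef int32_storage<sizeof(int) * CHAR_BIT == 32>::type fast_int32;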
Until now I have relied on cross-platform integer sizes (uint16, uint32, uint64, etc.) and compiler warnings. I think compiler warnings are important because you always know there is something to take care of. A runtime overflow assert may or may not fire, depending on the data it sees.
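A minimal sketch of the manual endianness handling mentioned above when writing such fixed-size values to a cross-platform file (big-endian as the file byte order is an arbitrary choice here, and uint32 stands for whatever 32-bit typedef is in use):

    #include <cstdio>

    typedef unsigned int uint32; // assumes unsigned int is 32 bits

    // Write a 32-bit value in a fixed byte order (big-endian), no
    // matter the host's native endianness, so a file written on one
    // platform reads back identically on another.
    void write_u32_be(std::FILE* f, uint32 v) {
        unsigned char b[4];
        b[0] = (unsigned char)(v >> 24);
        b[1] = (unsigned char)(v >> 16);
        b[2] = (unsigned char)(v >> 8);
        b[3] = (unsigned char)(v);
        std::fwrite(b, 1, 4, f);
    }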
Of course, there is no doubt compile-time checks are better than runtime ones (that's the main reason I like to use C++ rather than C: it gives me more compile-time semantics, letting me express more invariants at compile time using familiar syntax).
So ideally we should handle explicitly, at compile time, all incompatible integral types (signed/unsigned of the same size) and have runtime asserts (at least in debug mode) for any kind of integer overflow (through casts between signed and unsigned, and through the basic operations: add, sub, mul, div, etc.).
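A minimal sketch of the runtime half of that idea, debug-mode checked conversions (the compile-time half would simply refuse to compile mixed signed/unsigned operations):

    #include <cassert>
    #include <limits>

    // Debug-mode checked signed -> unsigned conversion: asserts that
    // no value is lost instead of silently wrapping around.
    unsigned int to_unsigned(int v) {
        assert(v >= 0 && "negative value lost in signed->unsigned cast");
        return static_cast<unsigned int>(v);
    }

    // Debug-mode checked unsigned -> signed conversion.
    int to_signed(unsigned int v) {
        assert(v <= static_cast<unsigned int>(std::numeric_limits<int>::max())
               && "value too large for signed int");
        return static_cast<int>(v);
    }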
I think his library idea does exactly this. Still, that doesn't mean the C++ native integrals are flawed :)

--
Dizzy
"Linux is obsolete" -- AST

_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users