On Monday 18 August 2008 13:06:08 Lang Stefan wrote:
C) Built-in integral types are based on internal representations of numbers.
Yes, because they are the lowest-level ones.
From the perspective of a designer this is just plain wrong! It is a violation of the principle of information hiding! Types shouldn't reveal anything about their internal representation, and after 20+ years of history in object-oriented programming I really don't understand why we are still forced to use such types!

When a developer selects an integral type for a variable, he considers the preconditions and postconditions and decides the range of values his variable may legally assume. However, he shouldn't be forced to map this range onto one of a few predefined ranges, just because those predefined ranges happen to fit the compiler's preferred internal representation. If a variable can assume values in the range [0..999999], then is 'long' the 'correct' type? Or is it 'unsigned long'? My answer is: neither!
Depending on the platform, it may be neither.
If the developer chooses either type, others looking at the code might not recognize that certain values outside that range (but well within the limits of the chosen type) will cause problems or might indicate an error elsewhere.
That's the job of the programmer. You have the native types with platform-dependent ranges. Good. That's a good start (without which you couldn't do anything anyway). Now, if you need integer types defined in terms of ranges, you can do that with templates (a rough sketch is appended below the signature). Arbitrary-precision arithmetic does not belong in the C++ standard IMO; it sits perfectly fine in third-party library space. There was also a BigNum Boost library proposal for GSoC. Once that is done, maybe everyone will be happy about this.

--
Dizzy

"Linux is obsolete" -- AST
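For illustration only (not from the thread, and not the Boost proposal mentioned above), here is a minimal sketch of what a range-defined integer template could look like. The class name ranged_int and the exception-based checking policy are assumptions; a real library would also pick the storage type automatically and provide the full operator set.

#include <iostream>
#include <stdexcept>

// An integer type defined by its legal value range rather than by its
// internal representation. The range is part of the type.
template <long Min, long Max>
class ranged_int {
public:
    // Construction checks the precondition the programmer had in mind.
    explicit ranged_int(long v) : value_(check(v)) {}

    ranged_int& operator+=(long rhs) {
        value_ = check(value_ + rhs);   // re-check after every mutation
        return *this;
    }

    long get() const { return value_; }

private:
    static long check(long v) {
        if (v < Min || v > Max)
            throw std::out_of_range("value outside allowed range");
        return v;
    }

    long value_;   // storage is hidden behind the interface
};

int main() {
    ranged_int<0, 999999> n(999998);   // the [0..999999] example from the post
    n += 1;                            // fine: 999999 is still in range

    try {
        n += 1;                        // 1000000 exceeds the range -> throws
    } catch (const std::out_of_range& e) {
        std::cout << "range violation: " << e.what() << '\n';
    }

    std::cout << n.get() << '\n';      // still 999999; the bad update was rejected
}

The point of the sketch is that values outside [0..999999] are rejected at the type boundary, so a reader of the code sees the intended range instead of having to infer it from a comment or from the width of 'long'.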