-----Original Message-----
If the developer chooses either type, others looking at the code might not recognize that certain values outside that range (but well within the limits of the chosen type) will cause problems or might indicate an error elsewhere.
That's the job of the programmer. You have the native types with platform-dependent ranges. Good. That's a good start (without which you couldn't do anything anyway). Now, if you need integer types defined in terms of ranges, ok, you can do that with templates. Arbitrary-precision arithmetic does not belong in the C++ standard IMO; it sits perfectly fine in third-party library space. There was also a BigNum Boost library proposal for GSoC. Once that is done, maybe everyone will be happy about this.
-- Dizzy "Linux is obsolete" -- AST
I am not sure you got my meaning. What I was trying to point out is that today, just as thirty years ago, when a programmer looks at a variable, he has to consider which integral type it fits in. And he needs to do so in terms of the machine! (Can you say DWORD?) This implies two anachronisms:

1) The programmer needs to know the exact ranges the built-in integral types cover. While this is not a big problem, the mere fact that a programmer needs to know such internal details indicates a design problem in the language.

2) The programmer cannot easily express the true limits of the integral variables he uses: if he needs to make sure a variable does not exceed a certain limit, this requires excessive testing, or alternatively substituting a custom class for his type. On top of that, he needs to devise mechanisms that prevent other programmers from circumventing these limitations.

These two artifacts of the C language are the cause of uncountable runtime errors. If a programmer weren't forced to think about internal representations and could instead just define the type he needs in terms of the range his variable should be tied to, then a lot of runtime errors could never happen!

Yes, it is the job of the programmer to avoid such errors. Today. As it was thirty years ago. But it shouldn't be! Of course, in many cases there is no need to check a variable's range, simply because it is used inside a very limited scope. But even then, the ability to do simple range checking without added effort could help detect runtime errors!

Stefan