
On Fri, Jun 02, 2006 at 11:53:18PM +0200, Maarten Kronenburg wrote:
> The base type unsigned int is a fact.
It grew historically out of the need for that extra bit, back when CPUs didn't have 32 bits yet. If all you have is 8 bits, then it makes a big difference whether you can store -128...127 or 0...255.
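To make the ranges concrete, a minimal sketch (it assumes an 8-bit char, i.e. CHAR_BIT == 8, which holds on virtually every platform):

    #include <iostream>
    #include <limits>

    int main()
    {
      // Assumes CHAR_BIT == 8; the int casts are there so the
      // values print as numbers rather than as characters.
      std::cout << "signed char  : "
                << int(std::numeric_limits<signed char>::min()) << "..."
                << int(std::numeric_limits<signed char>::max()) << '\n'   // -128...127
                << "unsigned char: "
                << int(std::numeric_limits<unsigned char>::min()) << "..."
                << int(std::numeric_limits<unsigned char>::max()) << '\n'; // 0...255
    }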
> The modular_integer is a mathematical fact, and the base type unsigned int is modular.
Only because that is the natural way for an overflow to occur. I don't think it's relevant that an (unsigned) integer is modulo 2^32 -- anyone USING that fact is doing something wrong imho (I wouldn't like to hire them!)
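For illustration, a minimal sketch of that wrap-around (assuming a 32-bit unsigned int); it is well-defined by the standard, but anyone relying on it really wanted modular arithmetic:

    #include <iostream>

    int main()
    {
      unsigned int x = 0;
      --x;  // Well-defined: wraps modulo 2^32 to 4294967295 (assuming 32-bit unsigned int).
      std::cout << x << '\n';
    }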
> And users that want an integer that is of infinite precision, but want to know for sure that it will never become negative,
That is a very weird demand. Are those users not sure what they are coding? "Oh, I don't have any idea what I'm doing, but instead of using an assert, I'll "recover" from possible bugs by forcing this variable to become 4294967295 instead of -1..." (as if that won't crash the program).

Sorry, but it makes no sense whatsoever to use unsigned integers for the sole reason that then you are sure they won't become negative. A bug is a bug: if a variable shouldn't become negative, then it won't. If you don't feel secure about it, add an assert.

If the fact that an unsigned can't become negative is used in a _legit_ way, then that can only mean one thing: they REALLY needed modular arithmetic. In the case of this precision library, that would be a modular_integer.

Thus, if anyone needs an infinite precision unsigned integer, then they are doing something seriously wrong. Instead they should use an infinite precision integer and add an assertion to make sure it never becomes negative. I see no reason to replace the "bug catcher" (the assertion) with a special type that throws an exception!
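To make that concrete, a minimal sketch (the typedef merely stands in for the proposed infinite precision integer type; I assume nothing about its actual interface beyond ordinary signed semantics):

    #include <cassert>

    // Stand-in for the proposed infinite precision integer type.
    typedef long long integer;

    integer withdraw(integer balance, integer amount)
    {
      integer result = balance - amount;
      assert(result >= 0);  // The bug catcher: a negative balance is a bug,
                            // not something to "recover" from at run time.
      return result;
    }

    int main()
    {
      integer b = withdraw(100, 30);  // fine: b == 70
      // withdraw(10, 30);            // would trip the assert in a debug build
      return b == 70 ? 0 : 1;
    }

If withdraw() is ever called with amount > balance, the assert fires in a debug build and points straight at the bug, instead of silently wrapping to some huge positive value.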
Carlo Wood <carlo@alinoe.com>