Re: [boost] Infinite precision integer draft

"Jonathan Ray" <ray.jonathan.w@gmail.com> wrote in message news:6520d1d70606021309w6a4bb366h30935112e4e68c28@mail.gmail.com...

Unsigned integers shouldn't exist. If the programmer wants to check whether the number is greater than 0, or greater than 42, he can do it himself a lot faster than a try{} catch{} block, in terms of both programming and runtime. Creating the extra datatype would add completely unnecessary complexity to the situation. Unnecessary complexity is the root of all evil.
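For illustration, a minimal sketch (the reject handler and the bound 42 are hypothetical, not from the draft): a plain comparison can enforce whatever bound the program actually needs, not just the fixed bound of zero that a dedicated type gives you.

    #include <iostream>

    // Hypothetical handler, not part of any proposal.
    void reject(long n) { std::cerr << "bad value: " << n << '\n'; }

    void use(long n)
    {
        if (n <= 42) {      // any bound the program actually needs,
            reject(n);      // not only the fixed bound 0 of a checked type
            return;
        }
        // ... proceed with n
    }

    int main() { use(7); use(100); return 0; }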

The base type unsigned int is a fact. The modular_integer is a mathematical fact, and the base type unsigned int is modular. Users who want an infinite precision integer, but also want the guarantee that it will never become negative, have the option of using unsigned_integer. I don't see any evil in this. The other side of the story is that if we don't provide an unsigned_integer, people will start making it themselves, and then many unsigned integers will be floating around, all a little bit different. Think about that too. Regards, Maarten.

On Fri, Jun 02, 2006 at 11:53:18PM +0200, Maarten Kronenburg wrote:
The base type unsigned int is a fact.
Historically it grew out of the need for that extra bit, back when CPUs didn't have 32 bits yet. If all you have is 8 bits, it makes a big difference whether you can represent -128...127 or 0...255.
The modular_integer is a mathematical fact, and the base type unsigned int is modular.
Only because it's the natural way an overflow occurs. I don't think it's relevant that an (unsigned) integer is modulo 2^32 -- anyone USING that fact is doing something wrong imho (I wouldn't like to hire them!)
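To make the wraparound concrete, a minimal sketch (assuming the common case of a 32-bit unsigned int):

    #include <iostream>

    int main()
    {
        unsigned int a = 0;
        --a;                    // wraps modulo 2^32: well-defined, but rarely what you meant
        std::cout << a << '\n'; // prints 4294967295 when unsigned int is 32 bits
        return 0;
    }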
Users who want an infinite precision integer, but also want the guarantee that it will never become negative,
That is a very weird demand. Are those users not sure what they are coding? "Oh, I have no idea what I'm doing, but instead of using an assert, I'll 'recover' from possible bugs by forcing this variable to become 4294967295 instead of -1..." (as if that won't crash the program).

Sorry, but it makes no sense whatsoever to use unsigned integers for the sole reason that only then are you sure they won't be negative. A bug is a bug: if a variable shouldn't become negative, then it won't. If you don't feel secure about it, add an assert. If the fact that an unsigned can't become negative is used in a _legit_ way, that can only mean one thing: they REALLY needed modular arithmetic. In the case of this precision library, that would be a modular_integer.

Thus, if anyone needs an infinite precision unsigned integer, they are doing something seriously wrong. Instead they should use an infinite precision integer and add an assertion to make sure it never becomes negative. I see no reason to replace the "bug catcher" (the assertion) with a special type that throws an exception!

Carlo Wood <carlo@alinoe.com>

The assertion for unsigned_integer is an interesting possibility. In "C++ in a Nutshell" I read: "If the expression evaluates to 0, assert prints a message to the standard error file and calls abort." Personally I would rather throw an exception, so that the user can determine what should happen if an unsigned_integer becomes negative. I see it as equivalent to taking the square root of a negative number. Whether unsigned_integer should assert or throw is not part of the interface, but the same is true for taking a negative square root. In the document I will present both possibilities, assertion and exception, and give a few pros and cons. Regards, Maarten.
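P.S. A minimal sketch of the exception behaviour I have in mind (hypothetical names, using a builtin stand-in for brevity; this is not the draft's actual interface):

    #include <stdexcept>

    // Hypothetical wrapper: throws instead of wrapping when the
    // result would be negative.
    class checked_uint {
        unsigned long value_;
    public:
        explicit checked_uint(unsigned long v = 0) : value_(v) {}
        checked_uint& operator--()
        {
            if (value_ == 0)    // decrementing would go below zero
                throw std::range_error("checked_uint: result would be negative");
            --value_;
            return *this;
        }
        unsigned long value() const { return value_; }
    };

The user can then catch std::range_error and decide what should happen, rather than having the program abort.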

On Sat, Jun 03, 2006 at 02:54:53PM +0200, Maarten Kronenburg wrote:
The assertion for unsigned_integer is an interesting possibility. In "C++ in a Nutshell" I read: "If the expression evaluates to 0, assert prints a message to the standard error file and calls abort." Personally I would rather throw an exception, so that the user can determine what should happen if an unsigned_integer becomes negative.
There is nothing to decide: it is a bug when that happens. The best you can do is dump a core (which is what assert does), so that the developer can examine what happened as close to the bug as possible.

Once the program works, an integer that should never become negative will never become negative anymore, and no testing for less than zero is needed. You can still use a normal signed integer and simply omit the test (the assertion). Thus,

    void Foo::release(void)
    {
        assert(x > 0);
        if (--x == 0)
            do_something();
    }

where x can perfectly well be a signed integer and no testing overhead remains in the final application, rather than

    void Foo::release(void)
    {
        if (--x == 0)
            do_something();
    }

where x is a special type, solely to throw an exception in case the program is bugged. That requires not only a totally new type (with no mathematical meaning, and no similarity to the built-in types*), but also the cost of a test as part of the pre-decrement operator, a cost which can't be removed from the final application.

*) Because at least unsigned int is modular: both int (assuming no overflow occurs) and unsigned int are rings, while an infinite precision unsigned integer wouldn't even be a proper group: subtraction can leave the set, so elements have no additive inverses.

-- Carlo Wood <carlo@alinoe.com>
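P.S. The "no testing overhead" part is just standard <cassert> behaviour; a minimal sketch (release builds normally define NDEBUG, typically via -DNDEBUG):

    #define NDEBUG            // usually passed by the build system
    #include <cassert>

    int main()
    {
        int x = 0;
        assert(x > 0);        // expands to ((void)0) under NDEBUG: no test, no overhead
        return 0;
    }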

Carlo, Yes, the unsigned_integer pre-decrement operator does a test, but it is only a test of the sign, which will be fast. Users that are VERY performance critical still have the option of not using unsigned_integer. Yes, the unsigned_integer is not a proper group; on the other hand, when a user runs the following program:

    unsigned int x = -2;
    cout << x << endl;   // prints 4294967294 with a 32-bit unsigned int

and gets 4294967294, most users will be surprised and not understand what is going on. The following program:

    unsigned_integer x = -2;   // the proposed type
    cout << x << endl;

will generate an error, which in my opinion is still more intuitive for most users, who don't know what a modular integer is. Regards, Maarten.

Carlo, "Before aborting, assert() outputs the name of its source file and the number of the line on which it appears. This makes assert() a useful debugging aid." Now as users will be not interested in my source code, I propose to let the run-time error be an exception. Regards, Maarten. "Maarten Kronenburg" <M.Kronenburg@inter.nl.net> wrote in message news:e5uf5p$53j$1@sea.gmane.org...

"Maarten Kronenburg" <M.Kronenburg@inter.nl.net> writes:
The base type unsigned int is a fact.
Yes, one that's born of the need not to lose any bits when expressing numbers that can't be negative, a need that doesn't apply to infinite precision integers.
The modular_integer is a mathematical fact,
Yes, one that's totally incompatible with the idea of infinite precision.
and the base type unsigned int is modular.
Point being?
Users who want an infinite precision integer, but also want the guarantee that it will never become negative, have the option of using unsigned_integer. I don't see any evil in this.
Do you not acknowledge the costs of unnecessary complexity?
The other side of the story is that if we don't provide an unsigned_integer, people will start making it themselves
I doubt it. Have you met anyone who would go to the trouble to do so? Does an unsigned infinite precision integer type exist anywhere today?
... and then many unsigned integers will be floating around, all a little bit different.
The separate range-limiting wrapper is a separate library.

-- Dave Abrahams
Boost Consulting
www.boost-consulting.com
participants (4): Carlo Wood, David Abrahams, Jonathan Ray, Maarten Kronenburg