Infinite precision integer draft

If you don't mind, I'll start a new thread here. The unsigned infinite-precision integer is different from the built-in type unsigned int, which is actually a modular integer with modulus 2^n. Therefore two classes derived from integer are needed: unsigned_integer and modular_integer.

The unsigned_integer is an infinite-precision integer that can only be positive or zero. The negate() of a non-zero unsigned_integer will always throw an exception, and a subtraction that results in a negative value will do the same. In my opinion there is no fundamental problem with this, as negation is subtraction from zero.

The modular_integer has a static method static void set_modulus( const integer & ). When the modulus is not set, it is zero, and in that case the modular_integer is identical to integer. Users who want an unsigned integer with a negate() that always works will have to use a modular_integer and set its modulus to a positive value.

In the document I will specify unsigned_integer and modular_integer, so that implementations can provide them. Regards, Maarten.
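The hierarchy described above might be sketched like this. Everything here is illustrative rather than the draft's actual interface: a plain long long stands in for the arbitrary-precision representation, and the exception types are placeholders.

```cpp
#include <stdexcept>

// Illustrative sketch only: `long long` stands in for an
// arbitrary-precision representation, and the exception types
// are placeholders, not the draft's actual interface.
class integer {
public:
    integer(long long v = 0) : value_(v) {}
    virtual ~integer() = default;
    virtual void negate() { value_ = -value_; }        // always succeeds
    long long value() const { return value_; }
protected:
    long long value_;
};

class unsigned_integer : public integer {
public:
    unsigned_integer(long long v = 0) : integer(v) {
        if (v < 0) throw std::domain_error("negative unsigned_integer");
    }
    // negate() of a non-zero value throws; for zero it does nothing.
    void negate() override {
        if (value_ != 0)
            throw std::domain_error("negate of non-zero unsigned_integer");
    }
};

class modular_integer : public integer {
public:
    modular_integer(long long v = 0) : integer(v) { reduce(); }
    static void set_modulus(const integer& m) { modulus_ = m.value(); }
    void negate() override { value_ = -value_; reduce(); }  // always works
private:
    void reduce() {                      // modulus zero: behave like integer
        if (modulus_ > 0) {
            value_ %= modulus_;
            if (value_ < 0) value_ += modulus_;
        }
    }
    static long long modulus_;
};
long long modular_integer::modulus_ = 0;
```

With the modulus set to 8, negating a modular_integer holding 3 yields 5, while negating a non-zero unsigned_integer throws.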

I do not agree with you. First of all, the function does work for a zero argument, namely by leaving the argument unchanged. Second, negation is subtraction from zero, so when subtractions with a negative result throw an exception, negation of non-zero values should also throw an exception. That is how the mathematics is defined. Regards, Maarten.

"Bronek Kozicki" <brok@rubikon.pl> wrote in message news:447C063F.6060209@rubikon.pl...
Maarten Kronenburg wrote:
The negate() of a non-zero unsigned_integer will always throw an exception.
if a function never works, it should not belong to the interface.
B.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

On Tue, May 30, 2006 at 10:31:01AM +0200, Maarten Kronenburg wrote:
If you don't mind I start a new thread here. The unsigned infinite precision integer is different from the base type unsigned int, which is actually a modular integer with modulus 2^n.
I'm not convinced that unsigned_integer is needed. Its only benefit would be that operator- and unary negation aren't defined, helping the user catch some possible glitches at compile time. However, if someone makes the mistake of using subtraction anyway (which, if it existed, should return a normal integer), then I don't think that was a typo. So apparently you want to support unsigned_integers for which subtraction IS defined, so you can do: unsigned_integer x = 8; x -= 3; Then what is the use of it being unsigned? If the result becomes negative, it doesn't exist. Is it really useful to have an exception thrown when operator- or operator-= results in a negative value? We (at least, someone on this list, and I agree) concluded that negation shouldn't be defined for unsigned_integer because of unnecessary overhead. I think the same argument holds for subtracting two unsigned_integers. It isn't worth the try{}catch overhead: if the result can become negative and you don't want that, then use normal integers and simply test whether the result is < 0!
Therefore two integer derived classes are needed: unsigned_integer and modular_integer. The unsigned_integer is an infinite precision integer which can only be positive or zero. The negate() of a non-zero unsigned_integer will always throw an exception. A subtraction which results in a negative value will do the same; therefore in my opinion there is no fundamental problem with this, as negation is subtraction from zero.
Well, if unsigned_integer has to be there, then I guess this is what makes most sense.
The modular_integer has a static method static void set_modulus( const integer & ). When the modulus is not set, it is zero, in that case the modular_integer is identical to integer. Users that like an unsigned integer with a negate() that always works, will have to use a modular_integer and set its modulus to a positive value. In the document I will specify unsigned_integer and modular_integer, and thus implementations can provide them. Regards, Maarten.
Except for the possible overhead of too many tests in relatively simple operations, I think this is a (mathematically) sound design. I hadn't realized before that you made a distinction between modular_integer and unsigned_integer. I thought you were using unsigned_integer to implement modular integers :p (I guess I joined the thread too late). -- Carlo Wood <carlo@alinoe.com>
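The "test the result" style Carlo argues for might look like this; try_sub is a hypothetical helper (not part of the draft), and long long again stands in for the arbitrary-precision integer:

```cpp
// Hypothetical helper: subtract in the signed type and report the sign,
// so the caller handles "would be negative" without any try/catch.
bool try_sub(long long a, long long b, long long& out) {
    out = a - b;
    return out >= 0;   // false: the result is negative
}
```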

There is however one catch to the unsigned_integer. Consider the expression a-b-c and look in [expr.add]: "The additive operators group left-to-right." The integer binary operator- first clones the rhs, negates it, and then adds the lhs to it. This guarantees that when b or c is a non-zero unsigned_integer, an exception is ALWAYS thrown. But what about a-(b+c)? Now (b+c) returns a temporary integer (not unsigned_integer), which is negated (NO exception), after which a is added to it. So although the behaviour is compiler-independent because of the [expr.add] rule, the use of parentheses in expressions may change the behaviour of an unsigned_integer expression: while a-b-c may throw an exception, a-(b+c) may not. Regards, Maarten.
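The difference can be made concrete with a small self-contained model of the dispatch described above; the long long payload and the clone/negate/add members are stand-ins for the draft's actual interface:

```cpp
#include <memory>
#include <stdexcept>

// Model only: `val` stands in for an arbitrary-precision payload.
struct integer {
    long long val;
    integer(long long v = 0) : val(v) {}
    virtual ~integer() = default;
    virtual std::unique_ptr<integer> clone() const {
        return std::make_unique<integer>(val);
    }
    virtual void negate() { val = -val; }
    virtual void add(const integer& rhs) { val += rhs.val; }
};

struct unsigned_integer : integer {
    unsigned_integer(long long v = 0) : integer(v) {}
    std::unique_ptr<integer> clone() const override {
        return std::make_unique<unsigned_integer>(val);
    }
    void negate() override {
        if (val != 0)
            throw std::domain_error("negate of non-zero unsigned_integer");
    }
};

// As described above: clone the rhs, negate the clone, add the lhs.
// The clone keeps the dynamic type of rhs, so a non-zero unsigned rhs throws.
integer operator-(const integer& lhs, const integer& rhs) {
    auto tmp = rhs.clone();
    tmp->negate();
    tmp->add(lhs);
    return integer(tmp->val);   // the returned temporary is a plain integer
}

integer operator+(const integer& lhs, const integer& rhs) {
    auto tmp = rhs.clone();
    tmp->add(lhs);
    return integer(tmp->val);
}
```

With non-zero unsigned_integers a, b, c, the expression a-b-c throws as soon as the cloned b is negated, while a-(b+c) negates only a plain integer temporary and succeeds.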

Because the integer binary operators must clone either the rhs or the lhs, the temporaries in expressions can be of type integer or of a derived type. As each derived type has an operator=( const integer & ) which converts an integer temporary back to the derived type, the result after assignment is always identical. But the temporaries themselves can still be of either type: the expression a-b-c can produce a different type of temporary than a-(b+c). Which type it is, however, is compiler-independent, because of the [expr.add] rule. The question is: given that unsigned_integer throws an exception when it becomes negative, is it acceptable that for non-zero a and b, a-b-c throws an exception while a-(b+c) does not? Regards, Maarten.

The situation is a little different. The integer binary operators like operator+ and operator- return an integer, so the type of these temporaries is integer. Each type derived from integer has an operator=( const integer & ) which converts the temporary integer back to the derived type, so after assignment the result is identical. Then there are the temporaries that are cloned inside the binary operators, on which the corresponding virtual member function is called. Because a binary operator must choose which object to clone, the rhs or the lhs, the values of these clones and of the integer temporaries can differ between a-b-c and a-(b+c). That in itself is harmless, but because unsigned_integer throws an exception when it becomes negative, the parentheses may decide whether an exception is thrown. The question is whether this can be accepted; if not, the unsigned_integer must go. I will also mention this issue in the document. Regards, Maarten.

This problem with unsigned_integer throwing an exception when negative is already apparent in the two expressions a-b and -a+b. If the integer binary operator- cloned the lhs, then the second effectively performs a negate but the first does not; the first would then not throw an exception, while the second would. If the unsigned_integer is dropped because of this problem, the user can only obtain an unsigned integer by using the modular_integer and setting its modulus (for example to 2^n). But as this limits the values to below 2^n, it is not really infinite precision anymore. So we have a dilemma: either we have a true unsigned infinite-precision integer that may or may not throw exceptions in equivalent expressions, or we have only an unsigned integer that is actually a modular integer with a modulus, and hence not really infinite precision. The modular_integer will be there anyway, and may serve as an unsigned integer via: typedef modular_integer unsigned_integer; Regards, Maarten.
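The asymmetry between a-b and -a+b can be modelled the same way, this time with a binary operator- that clones the lhs and subtracts, so that only the unary minus performs a negate (again a sketch with a long long payload, not the draft's interface):

```cpp
#include <memory>
#include <stdexcept>

// Model only: `val` stands in for an arbitrary-precision payload.
struct integer {
    long long val;
    integer(long long v = 0) : val(v) {}
    virtual ~integer() = default;
    virtual std::unique_ptr<integer> clone() const {
        return std::make_unique<integer>(val);
    }
    virtual void negate() { val = -val; }
    virtual void add(const integer& rhs) { val += rhs.val; }
    virtual void subtract(const integer& rhs) { val -= rhs.val; }
};

struct unsigned_integer : integer {
    unsigned_integer(long long v = 0) : integer(v) {}
    std::unique_ptr<integer> clone() const override {
        return std::make_unique<unsigned_integer>(val);
    }
    void negate() override {
        if (val != 0)
            throw std::domain_error("negate of non-zero unsigned_integer");
    }
};

// Binary minus: clone the lhs, subtract the rhs -- no negate involved.
integer operator-(const integer& lhs, const integer& rhs) {
    auto tmp = lhs.clone();
    tmp->subtract(rhs);
    return integer(tmp->val);
}
// Unary minus: clone the operand and negate it -- throws for a
// non-zero unsigned_integer operand.
integer operator-(const integer& x) {
    auto tmp = x.clone();
    tmp->negate();
    return integer(tmp->val);
}
integer operator+(const integer& lhs, const integer& rhs) {
    auto tmp = lhs.clone();
    tmp->add(rhs);
    return integer(tmp->val);
}
```

In this variant a-b never negates anything and succeeds, while -a+b throws as soon as the cloned a is negated, even though mathematically a-b equals -(-a+b) only up to sign.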

There is another solution: in the binary integer operators the lhs or rhs is still cloned, but the integer member operators are called, not the derived ones. This seems reasonable, as the binary integer operators return integer by value anyway. The unary operator- also returns integer by value, and can call the integer negate(), not the derived one. Only when the result is assigned is the derived operator=( const integer & ) called, which converts the integer back to the derived class and, in the case of unsigned_integer, throws an exception when the result is negative. This solves our unsigned_integer problem. There is however one price: efficiency. For modular_integer, for example, the temporary results are never reduced modulo the modulus, so they may become larger than if the derived member operators had been called. So this is another dilemma. But as the arithmetic binary and unary operators return integer by value anyway, in my opinion the integer member operators should be called, not the derived ones, even though this may be less efficient in some cases. Regards, Maarten.

This is the last one in this thread. The arithmetic non-member operators returning integer by value are changed to non-virtual. The virtual integer & operator=( const integer & ); now has the following note: "Derived classes must override this member operator and use it to convert (temporary) objects of type integer back to the derived class." It is also explained that the arithmetic non-member operators and the arithmetic member operators returning integer by value must call the integer member functions and operators, not the derived ones. In expressions with unsigned_integer variables, temporary results can now be negative; only when a negative end result is assigned will an exception be thrown. So when a is 3 and b is 4, x = -a + b; will not throw an exception. Regards, Maarten.
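The final design can be sketched as follows: the operators work on plain integer values, so temporaries may go negative freely, and only unsigned_integer's assignment from integer rejects a negative end result. As before, long long stands in for the arbitrary-precision payload and the exception type is a placeholder:

```cpp
#include <stdexcept>

// Sketch only: non-virtual operators on plain `integer` values;
// the sign is checked once, at assignment to unsigned_integer.
struct integer {
    long long val;
    integer(long long v = 0) : val(v) {}
};
integer operator-(const integer& x) { return integer(-x.val); }
integer operator+(const integer& a, const integer& b) { return integer(a.val + b.val); }
integer operator-(const integer& a, const integer& b) { return integer(a.val - b.val); }

struct unsigned_integer : integer {
    unsigned_integer(long long v = 0) : integer(v) { check(v); }
    unsigned_integer& operator=(const integer& rhs) {
        check(rhs.val);     // the only place a negative result is rejected
        val = rhs.val;
        return *this;
    }
private:
    static void check(long long v) {
        if (v < 0) throw std::range_error("negative unsigned_integer");
    }
};
```

With a = 3 and b = 4, x = -a + b produces the plain integer temporary -3 without an exception and assigns 1; x = a - b throws, but only at the assignment.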
There is another solution: in the binary integer operators the lhs or rhs is cloned, but not the derived member operators are called, but the integer ones. This seems reasonable as the binary integer operators return integers by value anyway. Also the unary operator- returns integer by value, and can call the integer negate(), not the derived one. Only when the result is assigned, the derived operator=( const integer & ) is called, which converts the integer back to the derived class, and in the case of unsigned_integer, throw an exception when the result is negative. This solves our unsigned_integer problem. There is however one price: efficiency. For example for modular_integer, the temporary results are never made modular, so may become larger than when the derived member operators would have been called. So this is another dilemma. But as those integer arithmetic binary and unary operators return integer by value anyway, in my opinion the integer member operators should be called, and not the derived ones, although this may be less efficient in some cases. Regards, Maarten.
"Maarten Kronenburg" <M.Kronenburg@inter.nl.net> wrote in message news:e5ku28$533$1@sea.gmane.org...
This problem with unsigned_integer throwing an exception when negative is already apparent for the two expressions a-b and -a+b When the integer binary operator- would clone the lhs, then the second effectively does a negate, but the first does not. Then the first would not throw an exception, but the second would. When because of this problem the unsigned_integer goes, then the user can only make an unsigned integer by using the modular_integer and setting its modulus (for example to 2^n). But as this would limit the values to under 2^n, it is not really infinite precision anymore. So we have a dilemma: or we have a true unsigned infinite precision integer which may or may not throw exceptions in equivalent expressions, or we only have an unsigned integer which is actually a modular integer with a modulus, so which is not really infinite precision. But the modular_integer will be there anyway, which may serve as an unsigned integer by: typedef modular_integer unsigned_integer; Regards, Maarten.
"Maarten Kronenburg" <M.Kronenburg@inter.nl.net> wrote in message news:e5k7bh$chh$1@sea.gmane.org...
The situation is a little bit different. The integer binary operators like operator+ and operator- return an integer, so the type of these temporaries is integer. Each type derived from type integer has an operator=( const integer & ) which converts the temporary integer back to the derived type. So after assigning the result, the result is identical. Then there are the temporaries that are cloned inside the integer binary operators, and the corresponding virtual member function is called of this clone. Because the binary operator must choose which object to clone, the rhs or the lhs, the values of these clones and the values of the integer temporaries can be different between a-b-c and a-(b+c). So the values of the clones and the integer temporaries can be different anyway, but as the unsigned_integer throws and exception when it becomes negative, the brackets may decide whether an exception is thrown. The question is if this can be accepted, if not the unsigned_integer must go. In the document I will also mention this issue. Regards, Maarten.
"Maarten Kronenburg" <M.Kronenburg@inter.nl.net> wrote in message news:e5k67i$7hr$1@sea.gmane.org...
Because the integer binary operators must clone the rhs OR the lhs, the temporaries in expressions can be of type integer OR derived type. Now as each derived type has an operator=( const integer & ) which turns the integer temporary back to the derived type, the result after assignment is always identical. But the temporaries can still be either integer type or derived type. This expression a-b-c can have another type of temporary then a-(b+c) However which type it is is independent of compiler, because of the [expr.add] remark. Now the question is that because unsigned_integer throws an exception when it becomes negative, is it acceptable that when a and b nonzero, a-b-c then throws an exception and a-(b+c) not? Regards, Maarten.
"Maarten Kronenburg" <M.Kronenburg@inter.nl.net> wrote in message news:e5id0q$9ev$1@sea.gmane.org...
There is however one catch to the unsigned_integer: the expression a-b-c Then look in [expr.add]: "The additive operators group from left-to-right". The integer binary operator- first clones the rhs, negates it, and then adds to it the lhs. This guarantees that when b or c are non-zero unsigned_integer, then ALWAYS an exception is thrown. BUT what about a-(b+c) Now (b+c) returns a temporary integer (not unsigned_integer), which is negated (NO exception) and a is added to it. So although the behaviour is compiler-independent because of the [expr.add] remark, the use of braces in expressions may change the behaviour of an unsigned_integer expression, that is while a-b-c may throw an exception, a-(b+c) may not. Regards, Maarten.
"Maarten Kronenburg" <M.Kronenburg@inter.nl.net> wrote in message news:e5gvs8$jbm$1@sea.gmane.org...
If you don't mind I start a new thread here. The unsigned infinite precision integer is different from the base type unsigned int, which is actually a modular integer with modulus 2^n. Therefore two integer derived classes are needed: unsigned_integer and modular_integer. The unsigned_integer is an infinite precision integer which can only be positive or zero. The negate() of a non-zero unsigned_integer will always throw an exception. A subtraction which results in a negative value will do the same; therefore in my opinion there is no fundamental problem with this, as negation is subtraction from zero. The modular_integer has a static method static void set_modulus( const integer & ). When the modulus is not set, it is zero, in that case the modular_integer is identical to integer. Users that like an unsigned integer with a negate() that always works, will have to use a modular_integer and set its modulus to a positive value. In the document I will specify unsigned_integer and modular_integer, and thus implementations can provide them. Regards, Maarten.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Maarten Kronenburg <M.Kronenburg@inter.nl.net> wrote:
"Derived classes must override this member operator and use it to convert (temporary) objects of type integer back to the derived class."
Also it is explained that the arithmetic non-member operators and the arithmetic member operators returning integer by value must call integer member functions and operators, not the derived ones.
this complexity is a clear indication to me that there is something seriously wrong with the design of the interface. B.

I know I'm entering this discussion a little late, so forgive me if this has already been said, but I fail to see the point of having an unsigned_integer. I understand that certain quantities are intrinsically non-negative and therefore the idea of an unsigned_integer has aesthetic value, but my experience with the built-in types is that unsigned integers create more problems than they solve. (I'm talking about subtraction and comparison to signed types.) An infinite precision signed integer can represent all the same values as an unsigned integer, so from a practical point of view, why bother with the unsigned type at all? It seems to me that it just introduces a lot of unnecessary complexity. D.

On Wed, May 31, 2006 at 05:35:16PM -0500, Daniel Mitchell wrote:
already been said, but I fail to see the point of having an unsigned_integer.
So do I. Regards -Gerhard -- Gerhard Wesp ZRH office voice: +41 (0)44 668 1878 ZRH office fax: +41 (0)44 200 1818 For the rest I claim that raw pointers must be abolished.

Daniel Mitchell <danmitchell@mail.utexas.edu> writes:
I know I'm entering this discussion a little late, so forgive me if this has already been said, but I fail to see the point of having an unsigned_integer. I understand that certain quantities are intrinsically non-negative and therefore the idea of an unsigned_integer has aesthetic value, but my experience with the built-in types is that unsigned integers create more problems than they solve. (I'm talking about subtraction and comparison to signed types.) An infinite precision signed integer can represent all the same values as an unsigned integer, so from a practical point of view, why bother with the unsigned type at all? It seems to me that it just introduces a lot of unnecessary complexity.
Agreed 100%. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Daniel, Users that don't like the unsigned_integer and want to use integer, although it will never become negative, are free to do so. But users that want to make sure that variables never become negative, and still want those variables to really have infinite precision, have the option to use unsigned_integer. Regards, Maarten. "Daniel Mitchell" <danmitchell@mail.utexas.edu> wrote in message news:200605311735.17167.danmitchell@mail.utexas.edu...
I know I'm entering this discussion a little late, so forgive me if this has already been said, but I fail to see the point of having an unsigned_integer. I understand that certain quantities are intrinsically non-negative and therefore the idea of an unsigned_integer has aesthetic value, but my experience with the built-in types is that unsigned integers create more problems than they solve. (I'm talking about subtraction and comparison to signed types.) An infinite precision signed integer can represent all the same values as an unsigned integer, so from a practical point of view, why bother with the unsigned type at all? It seems to me that it just introduces a lot of unnecessary complexity.
D.

I don't think anyone really objects to unsigned_integer, though some question its utility. The problem seems to me to be the idea that it inherits from integer -- an almost classic example of misuse of inheritance. Pardon me if I lost the thread somewhere, but that seems to be what is being proposed. As others have said, inheritance should represent ISA both in interface and conceptually. If unsigned_integer followed this relationship then there would be no argument here -- the negative of an unsigned_integer would be clearly and unambiguously defined -- it just wouldn't happen to be an unsigned_integer. But that seems to eliminate the point of having a separate unsigned_integer type. Inheriting unsigned_integer from integer means that I can never be sure that negating an integer (which might actually be an unsigned_integer) is a reasonable thing to do. VERY BAD DESIGN.

Personally I would vote for an unsigned_integer which does not inherit from integer, though I think it is lower priority than the infinite (more accurately, indefinite) precision (more accurately, "magnitude") integer. I do think that there is some use for an integral type that provides assurance that it is never negative. I would drop negate, and would include both an exception-throwing and a modular subtract. Just don't have it inherit from integer -- that makes integer useless except under tightly controlled circumstances (e.g., I could never use it as part of an API). Topher At 08:13 AM 6/1/2006, you wrote:
Daniel, Users that don't like the unsigned_integer and want to use integer although it will never become negative, are free to do so. But users that want to make sure that variables never become negative, but still want those variables to be really with infinite precision, have the option to use unsigned_integer. Regards, Maarten.
"Daniel Mitchell" <danmitchell@mail.utexas.edu> wrote in message news:200605311735.17167.danmitchell@mail.utexas.edu...
I know I'm entering this discussion a little late, so forgive me if this has already been said, but I fail to see the point of having an unsigned_integer. I understand that certain quantities are intrinsically non-negative and therefore the idea of an unsigned_integer has aesthetic value, but my experience with the built-in types is that unsigned integers create more problems than they solve. (I'm talking about subtraction and comparison to signed types.) An infinite precision signed integer can represent all the same values as an unsigned integer, so from a practical point of view, why bother with the unsigned type at all? It seems to me that it just introduces a lot of unnecessary complexity.
D.

Topher Cooper <topher@topherc.net> writes:
I don't think anyone really objects to unsigned_integer, though some question its utility.
I do object. IMO it adds complexity for little or no benefit, and complexity is a big problem. Also, in my opinion, the job of limiting the allowed range of values for a type should be provided by a wrapper template, so that other types can benefit. For example:

typedef range_checked<
    infinite_precision_integer
  , non_negative  // a predicate
> infinite_precision_unsigned_integer;

typedef range_checked<
    double
  , abs_less_or_equal_to_1
> result_of_sin_or_cos;
-- Dave Abrahams Boost Consulting www.boost-consulting.com

That's really an issue of utility: is the additional utility it provides (which some seem to feel is essentially zero) worth the additional complexity? A general range-restriction wrapper, as suggested, might give you more bang for the buck, but support for unsigned integers is not just another range restriction. Non-negative integers -- i.e., numbers representing cardinality, how *many* of some kind of thing there is -- are a uniquely useful and meaningful range restriction with special mathematical and practical properties. It isn't "just" a range restriction of the integers. In fact, historically, the opposite is the case: the integers were an extension of the counting numbers.

Much of the utility of range restriction will come out of restrictions based on cardinality (e.g., the number of elements in a collection, or the number of ASCII characters), so most use of range-restricted integers would conceptually be range restrictions of unsigned integers, whether or not those are instantiated as a separate type. In other words, a range restriction of 0..infinity is "natural", useful and broadly meaningful, while a range restriction of, say, -13..87 is arbitrary, and only useful and meaningful within some restricted context. That isn't to say it's worth implementing separately, however. It's just the same kind of question as whether it's worth having an externally visible implementation of indefinite-magnitude integer if you are going to have indefinite precision/magnitude rationals. Topher At 03:46 PM 6/1/2006, you wrote:
Topher Cooper <topher@topherc.net> writes:
I don't think anyone really objects to unsigned_integer, though some question its utility.
I do object. IMO it adds complexity for little or no benefit, and complexity is a big problem. Also, in my opinion, the job of limiting the allowed range of values for a type should be provided by a wrapper template, so that other types can benefit. For example:
typedef range_checked<
    infinite_precision_integer
  , non_negative  // a predicate
> infinite_precision_unsigned_integer;

typedef range_checked<
    double
  , abs_less_or_equal_to_1
> result_of_sin_or_cos;
-- Dave Abrahams Boost Consulting www.boost-consulting.com

Topher Cooper wrote:
<>
The problem seems to me to be the idea that it inherits from integer -- an almost classic example of misuse of inheritance. Pardon me if I lost the thread somewhere but that seems to be what is being proposed.
As others have said, inheritance should represent ISA both in interface and conceptually. If unsigned_integer followed this relationship than there would be no argument here -- the negative of an unsigned_integer would be clearly and unambiguously defined -- it just wouldn't happen to be an unsigned_integer. But that seems to eliminate the point of having a separate unsigned_integer type. Inheriting unsigned_integer from integer means that I can never be sure that negating an integer (which might actually be an unsigned_integer) is a reasonable thing to do. VERY BAD DESIGN.
Seems to me that, if the author is really set on using inheritance here, integer should inherit from unsigned_integer, since integer can do everything that an unsigned can do, but not vice-versa. Integer extends unsigned with the ability to represent negative values and the result of subtraction. - Marsh

Marsh J. Ray wrote:
Seems to me that, if the author is really set on using inheritance here, integer should inherit from unsigned_integer, since integer can do everything that an unsigned can do, but not vice-versa. Integer extends unsigned with the ability to represent negative values and the result of subtraction.
No, that's not true. (See Scott Meyers, Effective C++, Item 35) If a signed integer IS-A unsigned integer, then all invariants of unsigned integer must hold true for signed integer - including the one about negative values. ("There's no such thing.") Sebastian Redl

Sebastian Redl <sebastian.redl@getdesigned.at> writes:
Marsh J. Ray wrote:
Seems to me that, if the author is really set on using inheritance here, integer should inherit from unsigned_integer, since integer can do everything that an unsigned can do, but not vice-versa. Integer extends unsigned with the ability to represent negative values and the result of subtraction.
No, that's not true. (See Scott Meyers, Effective C++, Item 35) If a signed integer IS-A unsigned integer, then all invariants of unsigned integer must hold true for signed integer - including the one about negative values. ("There's no such thing.")
IMO the whole idea of using inheritance here is so misguided in the first place that which order you do it in is probably not worth arguing about. -- Dave Abrahams Boost Consulting www.boost-consulting.com

(A voice from the sidelines...) David Abrahams writes:
Sebastian Redl <sebastian.redl@getdesigned.at> writes:
Marsh J. Ray wrote:
Seems to me that, if the author is really set on using inheritance here, integer should inherit from unsigned_integer, since integer can do everything that an unsigned can do, but not vice-versa. Integer extends unsigned with the ability to represent negative values and the result of subtraction.
No, that's not true. (See Scott Meyers, Effective C++, Item 35) If a signed integer IS-A unsigned integer, then all invariants of unsigned integer must hold true for signed integer - including the one about negative values. ("There's no such thing.")
IMO the whole idea of using inheritance here is so misguided in the first place that which order you do it in is probably not worth arguing about.
I agree, but turn it around; this is another point in favor of not using inheritance at all. Signed can't inherit from unsigned, as stated above. But unsigned can't inherit from signed, since signeds can be negative, so an unsigned is-NOT-a signed. This is the classic "squares and rectangles can't derive from each other" situation. If you really wanted to use inheritance, both would have to inherit from a base class. I suspect that most (maybe all) of the operations would have to be virtual, with all the attendant design and performance issues that implies. I'm with many other here, in thinking that there's no need for an unsigned infinite integer. Make it signed, and make it do as the ints do. No inheritance, no templates, no muss, no fuss, simple, fast, easy to understand, easy to maintain. Also note that Boost libraries can change. An unsigned version could be added later, _after_ you've got use cases for it. Start with the simple thing. <lurk mode on> ---------------------------------------------------------------------- Dave Steffen, Ph.D. Fools ignore complexity. Software Engineer IV Pragmatists suffer it. Numerica Corporation Some can avoid it. ph (970) 419-8343 x27 Geniuses remove it. fax (970) 223-6797 -- Alan Perlis dgsteffen at numerica dot us

Dave Steffen <dgsteffen@numerica.us> writes:
IMO the whole idea of using inheritance here is so misguided in the first place that which order you do it in is probably not worth arguing about.
I agree, but turn it around; this is another point in favor of not using inheritance at all.
That's not a "but." I was saying exactly the same thing: using inheritance here at all is misguided. ...
I'm with many other here, in thinking that there's no need for an unsigned infinite integer.
I'm also saying that.
An unsigned version could be added later, _after_ you've got use cases for it.
Amen, brother!
Start with the simple thing.
...and the thing that's known to be useful. If there's no other widely used infinite precision unsigned integer library out there by now, it's probably a pretty good indicator that it isn't useful. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Sebastian Redl wrote:
No, that's not true. (See Scott Meyers, Effective C++, Item 35) If a signed integer IS-A unsigned integer, then all invariants of unsigned integer must hold true for signed integer - including the one about negative values. ("There's no such thing.")
If this type is meant to have an invariant "about negative values", then perhaps it shouldn't be called UNsigned_integer (i.e. lacking the quality of sign)? Perhaps a better name would be nonnegative_integer. I think I'll ask the waiter if he can unsweeten my tea . . . on second thought, maybe I'd better not. - Marsh

That's even worse. ISA relationships are about *specialization*, not generalization. If A is a subclass of B, then the set of objects which are A's should be special cases of objects which are B's, and by *dint of those special circumstances* have additional capabilities. To give an example from mathematical theory rather than practical programming: a positive integer is a (ISA) number, but it is a special kind of number that by dint of that specialness has the special capability that it is meaningful to ask about its prime factorization.

The practical test is whether under virtually all circumstances you would be comfortable "receiving" a (signed) integer when your spec requests an unsigned_integer. I rather doubt it. It is reasonable, however, to receive an unsigned_integer in place of a signed integer IF unsigned_integer is defined transparently -- e.g., so that negate returns the negative of the number (which would have to be a signed integer if the input were a non-zero unsigned_integer). If you wish to have an unsigned_integer class which is closed under negation and subtraction, then either you should give it no formal relationship to signed integer, or have them both inherit from an abstract class, call it abstract_integer, whose interface would probably turn out very similar to that of unsigned_integer. Although the spec may be identical, the meaning is different: while neither would have negate, or negate would be spec'd to perhaps throw an exception, an abstract_integer might be negative but an unsigned_integer never would. Topher At 04:03 PM 6/1/2006, you wrote:
Topher Cooper wrote:
<>
The problem seems to me to be the idea that it inherits from integer -- an almost classic example of misuse of inheritance. Pardon me if I lost the thread somewhere but that seems to be what is being proposed.
As others have said, inheritance should represent ISA both in interface and conceptually. If unsigned_integer followed this relationship than there would be no argument here -- the negative of an unsigned_integer would be clearly and unambiguously defined -- it just wouldn't happen to be an unsigned_integer. But that seems to eliminate the point of having a separate unsigned_integer type. Inheriting unsigned_integer from integer means that I can never be sure that negating an integer (which might actually be an unsigned_integer) is a reasonable thing to do. VERY BAD DESIGN.
Seems to me that, if the author is really set on using inheritance here, integer should inherit from unsigned_integer, since integer can do everything that an unsigned can do, but not vice-versa. Integer extends unsigned with the ability to represent negative values and the result of subtraction.
- Marsh
participants (10)
- Bronek Kozicki
- Carlo Wood
- Daniel Mitchell
- Dave Steffen
- David Abrahams
- Gerhard Wesp
- Maarten Kronenburg
- Marsh J. Ray
- Sebastian Redl
- Topher Cooper