Interest in an arbitrary precision library?

I've used several different arbitrary precision libraries with C++ and I have always been somewhat disappointed. And now I'm finally disappointed enough to start on my own. Here are the attributes that I think a C++ arbitrary precision library needs:

0) As much as possible, the library types should be drop-in replacements for the built-in types.
   a) This probably means some sort of numeric_limits support. I am unclear on the best implementation of this.
   b) Any automatic type conversions should behave in an obvious manner.
   c) Ease of use may take priority over efficiency.
1) The library should be both useful for number theory and a way of improving code robustness by minimizing overflow errors.
2) It should also provide equivalents of the basic C++ math functions. Implementing TR1 functions should be a priority.
3) As well as arbitrary precision, it should provide error range math: 2.00 * 2.00 is not generally the same thing as 2.0 * 2.0
4) It should use fast algorithms for multiplication and division, but instead of trying to compete with other libraries in computation speed (GMP uses hand-coded assembly in places), it should concentrate on code portability and ease of extension.
5) I do not envision any fancy memory management. The "big int" class will probably use a std::vector<int> to store its implementation data.
6) Providing math functions beyond those already in C++ will not be a priority.
   a) GCD will be at least one exception to this.
7) It should work well with other Boost libraries where possible.
8) Divide by zero errors for integers should be handled with an exception.
9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
10) Drop-in algorithm replacement for various operations might be a nice feature.

Comments, suggestions, desired features? Some of the parts of this list are off the top of my head, so feel free to suggest changes.

Joel Eidsath

Hi, "Joel Eidsath" <jeidsath@gmail.com> wrote in message news:4318989B.7000503@gmail.com...
I've used several different arbitrary precision libraries with C++ and I have always been somewhat disappointed. And now I'm finally disappointed enough to start on my own.
You are not the only one; I started and completed my own implementation of big ints for much the same reasons.
Here are the attributes that I think a C++ arbitrary precision library needs:
0) As much as possible, the library types should be drop-in replacements for the built-in types. a) This probably means some sort of numeric_limits support. I am unclear on the best implementation of this.
It is not clear to me how you are planning on doing this, and I cannot say that I fully agree. Arbitrary precision is just that.
b) Any automatic type conversions should behave in an obvious manner.
just like C++ ;-)
c) Ease of use may take priority over efficiency.
Do not agree. You should have ease of use _and_ efficiency. Modern C++ techniques allow for this.
1) The library should be both useful for number theory and a way of improving code robustness by minimizing overflow errors.
It is not clear to me what you mean by "useful for number theory". You want the library to handle Z[sqrt(2)](log(2)) arbitrarily well? (this question is not rhetorical)
2) It should also provide equivalents of the basic C++ math functions. Implementing TR1 functions should be a priority.
90% agree. Priorities always change.
3) As well as arbitrary precision, it should provide error range math: 2.00 * 2.00 is not generally the same thing as 2.0 * 2.0
Do not fully understand, please expand this idea.
4) It should use fast algorithms for multiplication and division, but instead of trying to compete with other libraries in computation speed (GMP uses hand-coded assembly in places), it should concentrate on code portability and ease of extension.
Agree.
5) I do not envision any fancy memory management. The "big int" class will probably use a std::vector<int> to store its implementation data.
Agree.
6) Providing math functions beyond those already in C++ will not be a priority. a) GCD will be at least one exception to this.
I am not sure about this, but let's say I agree.
7) It should work well with other Boost libraries where possible.
Agree.
8) Divide by zero errors for integers should be handled with an exception.
I do not fully like the idea of exceptions in the middle of my computations; I prefer to set the 'state' of the object to NaN. Even for big ints.
9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
I see no point in using rational numbers, but maybe I am missing something.
10) Drop-in algorithm replacement for various operations might be a nice feature.
A must if you ask me.
Comments, suggestions, desired features? Some of the parts of this list are off the top of my head, so feel free to suggest changes.
Joel Eidsath

Do not agree. You should have ease of use _and_ efficiency. Modern C++ techniques allow for this.
I was trying to give myself wiggle room, but I pretty much agree with you. I don't expect anything as fast as GMP, but I imagine that a plain C++ library could still do very well.
It is not clear to me what you mean by "useful for number theory". You want the library to handle Z[sqrt(2)](log(2)) arbitrarily well? (this question is not rhetorical)
You would set the precision of your calculations with a function call. (Specify the number of digits of accuracy.) Example:

    set_precision(3);
    cout << arbitrary_sqrt(2) << endl;
    set_precision(10);
    cout << arbitrary_sqrt(2);

Output:

    1.414
    1.4142135624
3) As well as arbitrary precision, it should provide error range math: 2.00 * 2.00 is not generally the same thing as 2.0 * 2.0
Do not fully understand, please expand this idea.
For example:

    1.0 + 1.00 = 2.0 (plus or minus .05)
    1.00 + 1.00 = 2.00 (plus or minus .005)
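To illustrate the idea in code, here is a minimal sketch (the "ranged" type and all of its names are hypothetical, just for illustration): each value carries an explicit half-width error term, and on addition the half-widths add -- slightly more conservative than the significant-figures shorthand above:

    #include <iostream>

    // Hypothetical error-carrying value: the true number is assumed
    // to lie in [value - err, value + err].
    struct ranged {
        double value;
        double err;
    };

    // Addition: the result's half-width is the sum of the operands'.
    ranged operator+(ranged a, ranged b)
    {
        return { a.value + b.value, a.err + b.err };
    }

    int main()
    {
        ranged coarse{ 1.0,  0.05  };   // "1.0",  known to +/- .05
        ranged fine  { 1.00, 0.005 };   // "1.00", known to +/- .005
        ranged sum = coarse + fine;     // 2.0 +/- 0.055
        std::cout << sum.value << " +/- " << sum.err << "\n";
    }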
I do not fully like the idea of exceptions in the middle of my computations; I prefer to set the 'state' of the object to NaN. Even for big ints.
That would work too. Whatever the solution is though, I don't want it to attempt a normal int division by zero internally if asked to do a big int division by zero. There is no way to handle that sort of error at run time.
9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
I see no point in using rational numbers, but maybe I am missing something.
By rational, I mean a type that is "more than just integers." (Mathematics is my field, not CS.) Built-in types like float and double hold rational numbers. So I am not talking about a struct that holds a numerator and a denominator, if that's what you were thinking.

Joel Eidsath

Hi, "Joel Eidsath" <jeidsath@gmail.com> wrote in message news:431931ED.7060901@gmail.com...
Do not agree. You should have ease of use _and_ efficiency. Modern C++ techniques allow for this.
I was trying to give myself wiggle room, but I pretty much agree with you. I don't expect anything as fast as GMP, but I imagine that a plain C++ library could still do very well.
Agree, I do not want to compete with GMP on performance, but, without sacrificing ease of use, have the best performance that a fully conformant C++ implementation allows.
It is not clear to me what you mean by "useful for number theory". You want the library to handle Z[sqrt(2)](log(2)) arbitrarily well? (this question is not rhetorical)
You would set the precision of your calculations with a function call. (Specify the number of digits of accuracy.)
Example:

    set_precision(3);
    cout << arbitrary_sqrt(2) << endl;
    set_precision(10);
    cout << arbitrary_sqrt(2);

Output:

    1.414
    1.4142135624
Ahh, OK. I was thinking about a lazily evaluated, arbitrary precision, FP/big int library.
3) As well as arbitrary precision, it should provide error range math: 2.00 * 2.00 is not generally the same thing as 2.0 * 2.0
Do not fully understand, please expand this idea.
For example:

    1.0 + 1.00 = 2.0 (plus or minus .05)
    1.00 + 1.00 = 2.00 (plus or minus .005)
Ok, understood.
I do not fully like the idea of exceptions in the middle of my computations; I prefer to set the 'state' of the object to NaN. Even for big ints.
That would work too. Whatever the solution is though, I don't want it to attempt a normal int division by zero internally if asked to do a big int division by zero. There is no way to handle that sort of error at run time.
Agree.
9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
I see no point in using rational numbers, but maybe I am missing something.
By rational, I mean a type that is "more than just integers." (Mathematics is my field, not CS.) Built-in types like float and double hold rational numbers. So I am not talking about a struct that holds a numerator and a denominator, if that's what you were thinking.
Maths is my field too (I thought you might figure that out from the expression 'Z[sqrt(2)](log(2))'), but now I do not understand your original point (9). Anyway, I can say I agree with you on almost everything. Do you want to move forward with this idea?
Joel Eidsath
Lucas Galfaso

Maths is my field too (I thought you might figure that out from the expression 'Z[sqrt(2)](log(2))'), but now I do not understand your original point (9)
Sorry about that. It looks like some words got cut out from my original post at 9). Basically it's a choice between:

    // first implementation of "set_precision"
    rational a, b;
    a.set_precision(50);  // a's precision now set at 50
    b.set_precision(150); // b's precision now set at 150, a's precision still at 50
    // Do some math
    cout << (a + b) << endl << (b + a); // issues with the return values of the expressions here

and:

    // second implementation
    rational a, b;
    set_precision(25); // a's and b's precision now at 25

I much prefer doing things the second way, but I wanted to get some comments. Thread safety will be an issue the second way.
Anyway I can say I agree with you on almost everything. Do you want to move forward with this idea?
I'd like to. I discovered N1692 and N1794 while looking through the C++0x proceedings yesterday. Here is the second: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1781.pdf

It is a proposal for an arbitrary integer library. It doesn't really say anything about implementation, but I think that I'll follow its guidelines -- whether it is adopted or not (has it been already? -- I don't know the committee process), it's a good interface. Once that is finished, I'll get started on the rationals side of things.

Joel Eidsath

On Sat, 03 Sep 2005 15:00:23 -0600, Joel Eidsath wrote

Haven't been following this thread closely, but I just want to register interest in this effort. There's a wide variety of C++ applications that could benefit from having this in Boost.

Jeff

Ditto. Paul

PS There are several available with quite free licence terms. My favorite is NTL see http://shoup.net/ntl/

PS There are several available with quite free licence terms. My favorite is NTL see
My favorite is NTL too. It's what I use now. My main dissatisfaction is that ZZ and RR don't quite fit as drop-in replacements for int and double. Also it's GPL (not LGPL), which is an issue for me. Joel Eidsath

Would it be worth trying to get Victor Shoup to change the licence terms to Boost? I suspect it is GPL only as that was the simplest licence option when it came out - many years ago. He might also be willing to change it to the Boost style, or let someone else do it?

Paul

| My main dissatisfaction is that ZZ and RR don't quite fit as drop-in
| replacements for int and double.

Is this a reasonable ambition? Isn't the fact that they are not built-in POD types always a problem?

Paul

| My main dissatisfaction is that ZZ and RR don't quite fit as drop-in | replacements for int and double.
Is this a reasonable ambition?
Isn't the fact that they are not built-in POD types always a problem?
Paul
Yes. But NTL could do with a few more overloads for operator= and operator+ and so on. Not a big deal by any means. The larger issue is the GPL license. If Shoup would be willing to release under a different license, like you suggest, that would be great. (As long as he didn't use any GMP code in building NTL. I think it's from scratch, but I don't know.) Joel Eidsath

Using GMP with NTL is an option (for speed), but there is NTL code which does not use GMP.

We definitely need a big int and an arbitrary precision type in Boost, but I am unclear how to proceed (and I am not volunteering ;-). Views?

Paul

Paul A Bristow wrote:
We definitely need a big int and an arbitrary precision type in Boost, but I am unclear how to proceed (and I am not volunteering ;-).
Well, here is how I'm starting off with the big integer side of things: I am implementing most of the interface of N1744: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1744.pdf What I am saving for last or never is the bitshift operators.

Multiplication will use a Fast Fourier Transform, so it should equal GMP multiplication with large enough numbers. Knuth claims that this method, Schonhage-Strassen, is O(n) for practical purposes. The division algorithm uses the multiplication algorithm, and has a complexity of that times a constant. Evaluation of powers will use a binary algorithm from Knuth. GCD will use the binary gcd algorithm. That's pretty much what I've got so far.

On the non-integer side of things, I have not spent much time looking for algorithms yet, but I'll probably get most of them from Knuth. A number of these algorithms can be improved on for special cases. That's too much work for one person though, so I'll just concentrate on getting a library out that can be tweaked later, once it's working.

Joel Eidsath
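P.S. For the curious, the binary gcd is simple enough to sketch on built-in unsigned integers (a sketch only -- a big int version would strip factors of two limb by limb, but the structure is the same):

    #include <cstdint>
    #include <utility>

    // Binary (Stein) gcd: only shifts, parity tests, and subtraction,
    // no division -- which is what makes it attractive for a big int.
    std::uint64_t binary_gcd(std::uint64_t u, std::uint64_t v)
    {
        if (u == 0) return v;
        if (v == 0) return u;

        int shift = 0;
        while (((u | v) & 1) == 0) {    // factor out common powers of two
            u >>= 1; v >>= 1; ++shift;
        }
        while ((u & 1) == 0) u >>= 1;   // u is now odd
        do {
            while ((v & 1) == 0) v >>= 1;   // make v odd as well
            if (u > v) std::swap(u, v);     // keep u <= v
            v -= u;                         // odd - odd = even
        } while (v != 0);

        return u << shift;              // restore the common factor of two
    }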

"Lucas Galfaso" <lgalfaso@gmail.com> writes:
Hi,
"Joel Eidsath" <jeidsath@gmail.com> wrote in message news:431931ED.7060901@gmail.com...
Do not agree. You should have ease of use _and_ efficiency. Modern C++ techniques allow for this.
I was trying to give myself wiggle room, but I pretty much agree with you. I don't expect anything as fast as GMP, but I imagine that a plain C++ library could still do very well.
Agree, I do not want to compete with GMP on performance, but, without sacrificing ease of use, have the best performance that a fully conformant C++ implementation allows.
IMO, if you don't want to compete with GMP, you should at least allow the library to be built as a wrapper over GMP. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
IMO, if you don't want to compete with GMP, you should at least allow the library to be built as a wrapper over GMP.
I was thinking of something that would allow for the user to switch out algorithms easily. Using the GMP as an example implementation of this would probably be good. It's not quite that I don't want to compete with GMP's speed -- but I don't think that a portably designed, pure C++ library is going to be able to do it. Joel Eidsath

"David Abrahams" <dave@boost-consulting.com> wrote in message news:u3boli8k0.fsf@boost-consulting.com...
"Lucas Galfaso" <lgalfaso@gmail.com> writes:
Hi,
"Joel Eidsath" <jeidsath@gmail.com> wrote in message news:431931ED.7060901@gmail.com...
Do not agree. You should have ease of use _and_ efficiency. Modern C++ techniques allow for this.
I was trying to give myself wiggle room, but I pretty much agree with you. I don't expect anything as fast as GMP, but I imagine that a plain C++ library could still do very well.
Agree, I do not want to compete with GMP on performance, but, without sacrificing ease of use, have the best performance that a fully conformant C++ implementation allows.
IMO, if you don't want to compete with GMP, you should at least allow the library to be built as a wrapper over GMP.
Sorry if my statement was misunderstood, but when I said that I do not want to compete with GMP I did not mean that I am not considering speed, just that I will not write assembler to get a 5% improvement. The implementation I placed at the sandbox a few months ago does have one of the fastest pure C++ implementations of modular exponentiation that I know about [and it has one of the poorest implementations of serialization/deserialization]. My goal in pursuing an implementation in Boost of big integers is not to have the _best_ implementation possible, but one that:
.) is really easy to use.
.) is fast (not fastest ever, just fast).
.) has a Boost-like license.
.) uses C++ (not C, as GMP does).
Regards, LG
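P.S. For readers who have not seen it, the usual scheme behind a fast modular exponentiation is right-to-left square-and-multiply. A sketch on built-in integers (not the sandbox code; the modulus is assumed nonzero and to fit in 32 bits so the 64-bit products cannot overflow):

    #include <cstdint>

    // Computes base^exp mod m by repeated squaring.
    std::uint64_t mod_pow(std::uint64_t base, std::uint64_t exp, std::uint64_t m)
    {
        std::uint64_t result = 1 % m;   // the "% m" handles m == 1
        base %= m;
        while (exp != 0) {
            if (exp & 1)
                result = (result * base) % m;   // fold in the low bit
            base = (base * base) % m;           // square for the next bit
            exp >>= 1;
        }
        return result;
    }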

Lucas Galfaso wrote:
The implementation I placed at the sandbox a few months ago does have one of the fastest pure C++ implementations of modular exponentiation that I know about [and it has one of the poorest implementations of serialization/deserialization]. My goal in pursuing an implementation in Boost of big integers is not to have the _best_ implementation possible, but one that:
.) is really easy to use.
.) is fast (not fastest ever, just fast).
.) has a Boost-like license.
.) uses C++ (not C, as GMP does).
Regards, LG
I looked around for this but couldn't find it. Could you post a link? Joel Eidsath

I looked around for this but couldn't find it. Could you post a link?
Sorry, it is in the file section, not the sandbox. Here is the link http://groups.yahoo.com/group/boost/files/Number/

On 9/2/05 2:23 PM, "Joel Eidsath" <jeidsath@gmail.com> wrote:
I've used several different arbitrary precision libraries with C++ and I have always been somewhat disappointed. And now I'm finally disappointed enough to start on my own.
Here are the attributes that I think a C++ arbitrary precision library needs:
0) As much as possible, the library types should be drop-in replacements for the built-in types. a) This probably means some sort of numeric_limits support. I am unclear on the best implementation of this.
I have an attempt at this in the Sandbox CVS. It includes a "numeric_limits" specialization. Look for the math/big_whole library.
b) Any automatic type conversions should behave in an obvious manner. c) Ease of use may take priority over efficiency. 1) The library should be both useful for number theory and a way of improving code robustness by minimizing overflow errors. 2) It should also provide equivalents of the basic C++ math functions. Implementing TR1 functions should be a priority.
Be careful. Many functions return irrational (particularly transcendental) results. You have to define your cut-off philosophy. You can optionally make variants of the math functions that include a parameter for a cut-off specification.
3) As well as arbitrary precision, it should provide error range math: 2.00 * 2.00 is not generally the same thing as 2.0 * 2.0
Will this be a separate class? A pure computation class doesn't have to make this kind of distinction.
4) It should use fast algorithms for multiplication and division, but instead of trying to compete with other libraries in computation speed (GMP uses hand-coded assembly in places), it should concentrate on code portability and ease of extension. 5) I do not envision any fancy memory management. The "big int" class will probably use a std::vector<int> to store its implementation data.
What about std::deque?
6) Providing math functions beyond those already in C++ will not be a priority. a) GCD will be at least one exception to this.
Will access to the internals improve the implementation of GCD? If not, we already have GCD and LCM functions in Boost.
7) It should work well with other Boost libraries where possible. 8) Divide by zero errors for integers should be handled with an exception. 9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
Your rational numbers would be distinct from boost::rational<your::integer>? What would be the advantage?
10) Drop-in algorithm replacement for various operations might be a nice feature.
Comments, suggestions, desired features? Some of the parts of this list are off the top of my head, so feel free to suggest changes.
-- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

I think that I need to define the mathematical concept of rational numbers here. A rational number is any number that _can_ be written as a / b. But a rational class does not have to be stored as a separate numerator and denominator. In an arbitrary precision library, you definitely do not want to implement rational numbers as a struct with a numerator and denominator.

Examples of basic rational number types in C++: float, double, etc. An arbitrary precision rational class will do the same thing as double, but to as many digits as you tell it to. See NTL's "RR" class.

Irrational numbers can only be handled symbolically by a computer. There are no mathematical functions in C++ or TR1 that return irrational numbers.

Daryle Walker wrote:
I have an attempt at this in the Sandbox CVS. It includes a "numeric_limits" specialization. Look for the math/big_whole library.
I'll take a look at it. It just provides support for arbitrary size integers? What sort of algorithms are you using?
Be careful. Many functions return irrational (particularly transcendental) results. You have to define your cut-off philosophy. You can optionally make variants of the math functions that include a parameter for a cut-off specification.
See top.
3) As well as arbitrary precision, it should provide error range math: 2.00 * 2.00 is not generally the same thing as 2.0 * 2.0
Will this be a separate class? A pure computation class doesn't have to make this kind of distinction.
I imagine that there will be an option to disable it if you don't like the computation hit.
4) It should use fast algorithms for multiplication and division, but instead of trying to compete with other libraries in computation speed (GMP uses hand-coded assembly in places), it should concentrate on code portability and ease of extension. 5) I do not envision any fancy memory management. The "big int" class will probably use a std::vector<int> to store its implementation data.
What about std::deque?
Will I need to insert objects at the front at any time?
Will access to the internals improve the implementation of GCD? If not, we already have GCD and LCM functions in Boost.
Yeah, I knew about those. I hadn't gotten around to checking them out yet. If they use Euclid's algorithm rather than the binary algorithm, then I won't use them.
7) It should work well with other Boost libraries where possible. 8) Divide by zero errors for integers should be handled with an exception. 9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
Your rational numbers would be distinct from boost::rational<your::integer>? What would be the advantage?
See top. Joel Eidsath

On 9/5/05 2:43 PM, "Joel Eidsath" <jeidsath@gmail.com> wrote:
I think that I need to define the mathematical concept of rational numbers here. A rational number is any number that _can_ be written as a / b. But a rational class does not have to be stored as a separate numerator and denominator. In an arbitrary precision library, you definitely do not want to implement rational numbers as a struct with a numerator and denominator.
Then how would you implement them?
Examples of basic rational number types in C++: float, double, etc.
An arbitrary precision rational class will do the same thing as double, but to as many digits as you tell it to. See NTL's "RR" class.
Irrational numbers can only be handled symbolically by a computer. There are no mathematical functions in C++ or TR1 that return irrational numbers.
You missed the point of my question. Physically, your assertion is true (since an irrational result would need infinite memory to hold it). However, many of the standard functions conceptually return irrational results. We simply get an approximation returned. The precision of the approximation is automatically determined by the properties of the built-in numeric type. What would be the limitation on your version of these functions? Would you let a single result fill up as much memory as possible? If not, what's the cut-off? Would the cut-off be fixed or determined at run-time (noting that the latter could change the function signature by requiring an extra parameter for the cut-off limit's data)?
Daryle Walker wrote:
I have an attempt at this in the Sandbox CVS. It includes a "numeric_limits" specialization. Look for the math/big_whole library.
I'll take a look at it. It just provides support for arbitrary size integers? What sort of algorithms are you using?
Be careful. Many functions return irrational (particularly transcendental) results. You have to define your cut-off philosophy. You can optionally make variants of the math functions that include a parameter for a cut-off specification.
See top.
3) As well as arbitrary precision, it should provide error range math: 2.00 * 2.00 is not generally the same thing as 2.0 * 2.0
Will this be a separate class? A pure computation class doesn't have to make this kind of distinction.
I imagine that there will be an option to disable it if you don't like the computation hit.
I don't think it can be a simple disabling-flag. It changes the semantics of the type and its functions.
4) It should use fast algorithms for multiplication and division, but instead of trying to compete with other libraries in computation speed (GMP uses hand-coded assembly in places), it should concentrate on code portability and ease of extension. 5) I do not envision any fancy memory management. The "big int" class will probably use a std::vector<int> to store its implementation data.
What about std::deque?
Will I need to insert objects at the front at any time?
Maybe. And remove from the front too. These would be needed for digit-shift operations (<< and >>).
Will access to the internals improve the implementation of GCD? If not, we already have GCD and LCM functions in Boost.
Yeah, I knew about those. I hadn't gotten around to checking them out yet. If they use Euclid's algorithm rather than the binary algorithm, then I won't use them.
I think they're all Euclid-based. (They can't assume that the numeric type is binary-based.)
7) It should work well with other Boost libraries where possible. 8) Divide by zero errors for integers should be handled with an exception. 9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
Your rational numbers would be distinct from boost::rational<your::integer>? What would be the advantage?
See top.
But would it really be a cost savings? -- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

Daryle Walker wrote:
Then how would you implement them?
Here would be a very naive implementation just to illustrate the concept: use a vector where each element represents a digit. Store the index where the integer part ends and the fractional part begins. I.e., to store 186.0007, the vector will hold: 1, 8, 6, 0, 0, 0, 7. The index where the integer part ends is 2.
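In code, the naive illustration would amount to something like this (purely illustrative, not a proposed layout):

    #include <cstddef>
    #include <vector>

    // 186.0007 stored digit by digit, most significant first.
    // last_int_index marks where the integer part ends: index 2
    // here (the digit 6); everything after it is fractional.
    struct naive_decimal {
        std::vector<int> digits;      // {1, 8, 6, 0, 0, 0, 7}
        std::size_t last_int_index;   // 2
    };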
You missed the point of my question. Physically, your assertion is true (since an irrational result would need infinite memory to hold it). However, many of the standard functions conceptually return irrational results. We simply get an approximation returned. The precision of the approximation is automatically determined by the properties of the built-in numeric type. What would be the limitation on your version of these functions? Would you let a single result fill up as much memory as possible? If not, what's the cut-off? Would the cut-off be fixed or determined at run-time (noting that the latter could change the function signature by requiring an extra parameter for the cut-off limit's data)?
A call to something like boost::set_precision(150) will give you 150 decimal digit precision in all your calculations. A call to boost::set_precision(200) would give you 200 decimal digit precision. That is the entire point of any "arbitrary precision library." If you have never used one, try taking a look at NTL or something to get an idea of how it works.
I don't think it can be a simple disabling-flag. It changes the semantics of the type and its functions.
No it doesn't. You would use separate storage for error range data.
Maybe. And remove from the front too. These would be needed for digit-shift operations (<< and >>).
I doubt that those operations will be implemented. Bit-shift operators don't have much meaning for non built-in types. And faking it will be prohibitively expensive. As a side point, remember that there will be no "digits." Numbers will be stored in base-"numeric_limits<unsigned int>::max()".
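In storage terms that would be something like the following sketch (assuming the radix is really numeric_limits<unsigned int>::max() + 1, i.e. 2^32 for 32-bit unsigned ints, so that each vector element is one full-width digit):

    #include <vector>

    // Hypothetical limb layout: value = sum over i of
    // limbs[i] * 2^(32*i), least significant limb first.
    struct big_int_storage {
        std::vector<unsigned int> limbs;
        bool negative;
    };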
Will access to the internals improve the implementation of GCD? If not, we already have GCD and LCM functions in Boost.
Yeah, I knew about those. I hadn't gotten around to checking them out yet. If they use Euclid's algorithm rather than the binary algorithm, then I won't use them.
I think they're all Euclid-based. (They can't assume that the numeric type is binary-based.)
No. The binary gcd algorithm. See Knuth, "Seminumerical Algorithms."
7) It should work well with other Boost libraries where possible. 8) Divide by zero errors for integers should be handled with an exception. 9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
Your rational numbers would be distinct from boost::rational<your::integer>? What would be the advantage?
See top.
But would it really be a cost savings?
Read this and get back to me: http://www.shoup.net/ntl/doc/RR.txt Joel Eidsath

On 9/5/05 8:21 PM, "Joel Eidsath" <jeidsath@gmail.com> wrote:
Daryle Walker wrote:
[SNIP]
You missed the point of my question. Physically, your assertion is true (since an irrational result would need infinite memory to hold it). However, many of the standard functions conceptually return irrational results. We simply get an approximation returned. The precision of the approximation is automatically determined by the properties of the built-in numeric type. What would be the limitation on your version of these functions? Would you let a single result fill up as much memory as possible? If not, what's the cut-off? Would the cut-off be fixed or determined at run-time (noting that the latter could change the function signature by requiring an extra parameter for the cut-off limit's data)?
A call to something like boost::set_precision(150) will give you 150 decimal digit precision in all your calculations. A call to boost::set_precision(200) would give you 200 decimal digit precision. That is the entire point of any "arbitrary precision library." If you have never used one, try taking a look at NTL or something to get an idea of how it works.
OK. When you say "arbitrary precision," you mean that a precision limit must be set (at run-time) before an operation. Most people use "arbitrary precision" to mean unlimited precision, not your "run-time cut-off" precision.
I don't think it can be a simple disabling-flag. It changes the semantics of the type and its functions.
No it doesn't. You would use separate storage for error range data.
Is this separate from the precision data stored in each number? If you don't change the signatures of each standard math function to include a parameter for a disable-flag or error-range, then you need to implement them as a global value (with all the disadvantages globals have).
Maybe. And remove from the front too. These would be needed for digit-shift operations (<< and >>).
I doubt that those operations will be implemented. Bit-shift operators don't have much meaning for non built-in types. And faking it will be prohibitively expensive. As a side point, remember that there will be no "digits." Numbers will be stored in base-"numeric_limits<unsigned int>::max()".
The operators would shift by digit places. Your precision APIs already assume that your numbers act like having radix-10 place-value semantics, so the shift operators would work with radix-10 too (i.e. multiplying or dividing by powers of 10). But as you said, if you don't actually implement the numbers that way, like using an INT_MAX radix, then faking it would be expensive.
Will access to the internals improve the implementation of GCD? If not, we already have GCD and LCM functions in Boost.
Yeah, I knew about those. I hadn't gotten around to checking them out yet. If they use Euclid's algorithm rather than the binary algorithm, then I won't use them.
I think they're all Euclid-based. (They can't assume that the numeric type is binary-based.)
No. The binary gcd algorithm. See Knuth, "Seminumerical Algorithms."
Did you think I mistakenly said "binary-based" for "binary gcd"? I didn't; they mean separate things. I have the book; I've read the algorithm. Even though the binary GCD algorithm works for any representation, it's optimal for radix-2 representations, especially in computers with bit-level operations. (If a type matches that, then the algorithm can be done with shifts and masks.) I don't know if the binary GCD algorithm is more efficient if you cannot assume that the numeric type has an optimal representation. (Also, I didn't know of the binary GCD algorithm when I wrote the code.)
7) It should work well with other Boost libraries where possible. 8) Divide by zero errors for integers should be handled with an exception. 9) Precision for rational numbers may be set as a static member variable or it may not. In the second case, expressions involving rational numbers of different.
Your rational numbers would be distinct from boost::rational<your::integer>? What would be the advantage?
See top.
[The top was SNIPped.] I didn't see any advantage or disadvantage there.
But would it really be a cost savings?
Read this and get back to me: http://www.shoup.net/ntl/doc/RR.txt
I didn't see a cost savings (or penalty) there. It does something akin to the built-in floating-point types on common platforms (which are IEEE-754) with a binary mantissa and exponent. It's a tradeoff, giving up arbitrary denominators for quicker manipulation. -- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

On 9/11/05, Daryle Walker <darylew@hotmail.com> wrote:
Will access to the internals improve the implementation of GCD? If not, we already have GCD and LCM functions in Boost.
Yeah, I knew about those. I hadn't gotten around to checking them out yet. If they use Euclid's algorithm rather than the binary algorithm, then I won't use them.
I think they're all Euclid-based. (They can't assume that the numeric type is binary-based.)
No. The binary gcd algorithm. See Knuth "Semi-numerical algorithms."
Did you think I mistakenly said "binary-based" for "binary gcd"? I didn't; they mean separate things. I have the book; I've read the algorithm. Even though the binary GCD algorithm works for any representation, it's optimal for radix-2 representations, especially in computers with bit-level operations. (If a type matches that, then the algorithm can be done with shifts and masks.) I don't know if the binary GCD algorithm is more efficient if you cannot assume that the numeric type has an optimal representation.
it is possible to devise efficient algorithms resembling the binary GCD for any base with only a few prime factors (if you have access to the ACM digital library: i've recently found an article there discussing the merits of base-12 GCD computation, but i don't remember the title or author, so you'll have to search), but i guess for a binary architecture the binary GCD (and representation) is the natural (and optimal) choice. br, andras

I didn't see a cost savings (or penalty) there.
I don't even know why I need to say this. For one thing, just look at the GCD call at the creation of every boost::rational. You must understand the costs involved there when numbers get big.
It does something akin to the built-in floating-point types on common platforms (which are IEEE-754) with a binary mantissa and exponent. It's a tradeoff, giving up arbitrary denominators for quicker manipulation.
Explain how you'd implement the square root function for a boost::rational type class. (Or a host of other functions.) Lots of calls to set_precision, I expect. Joel Eidsath

From: Joel Eidsath <jeidsath@gmail.com>
I didn't see a cost savings (or penalty) there.
I don't even know why I need to say this.
There can be many reasons why you might need to say "this." Perhaps it's because you aren't adequately explaining yourself, you are significantly more knowledgeable about the matter than those you're interacting with, or you're missing the points being raised. Perhaps it's something else, entirely. Nevertheless, your reply sounds rude to me.
For one thing, just look the GCD call at the creation of every boost::rational. You must understand the costs involved there when numbers get big.
How about just saying, "The thing that stands out most to me is that boost::rational's default constructor does a lot of work for large numbers due to the GCD call." What you wrote sounds condescending.
It does something akin to the built-in floating-point types on common platforms (which are IEEE-754) with a binary mantissa and exponent. It's a tradeoff, giving up arbitrary denominators for quicker manipulation.
Explain how you'd implement the square root function for a boost::rational type class. (Or a host of other functions.) Lots of calls to set_precision, I expect.
Here, rather than addressing the perceived misconception, you seemingly attack the OP. Granted, asking someone to solve a particular problem can often lead them to realize something you're trying to convey, but wouldn't it be simpler to just state your point in the interest of fostering communication? -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

On 9/11/05, Joel Eidsath <jeidsath@gmail.com> wrote:
I don't even know why I need to say this. For one thing, just look the GCD call at the creation of every boost::rational. You must understand the costs involved there when numbers get big.
just for fun, let's first compare [built-in] floating point and [boost::]rational: creating a floating point number from a numerator and a denominator requires an O( log n ) division; creating a rational representation from a numerator and a denominator requires an O( log n ) normalization. this means the speed difference is not theoretical, it is practical:

- the current boost::rational implementation is not using the binary gcd algorithm, so the constant multiplier is fairly big
- for floats initialized with compile-time constants the calculation is done at compile time -- this may be emulated for rationals with some inconvenience for the user, using template trickery
- what remains is due to the fact that we are comparing a hw and a sw implementation

so construction is not a good example, addition would be much better -- there we do have different O() complexity.

let's then compare arbitrary-precision user-defined radix and rational arithmetic in general: what imho misleads you is that some operators (+,-) have lower time complexity for radix representations, and the rest (*,/,%) may have lower constant factors -- that is true, but on the other hand (imo) in many application areas you will get more accurate results using 64 bit rational arithmetic than using 1024 bit radix arithmetic, so in practice the latter can be faster.

afaik we are not using floats because they are good and fast (as a matter of fact, we know that they are bad), but for historical reasons; when they were introduced, they were a natural next step after fixed point, we did not yet know how bad they are, but we did know how to implement them efficiently (in hw). since then, we are stuck with them the same way as we are stuck with the QWERTY keyboard: we know that there are better solutions (in every sense of better), but the enormous amount of existing implementations makes it impossible to change.

when we implement and use our own arbitrary precision arithmetic, we make a compromise between speed and accuracy, and it is not as trivial as you seem to think that radix representation is the best of those compromises.

br, andras

Andras Erdei wrote: <snip> Your comparison wasn't quite right in places, but I'll skip that for now. What is important is what happens when you try to implement sqrt for boost::rational numbers. Or sin. Or log. Or a host of other functions. For example, what sort of answer would you get for sin(60 degrees) * sqrt(3) / 3? (The exact answer is 1/2.) Joel Eidsath

On 9/14/05, Joel Eidsath <jeidsath@gmail.com> wrote:
What is important is what happens when you try to implement sqrt for boost::rational numbers. Or sin. Or log. Or a host of other functions. For example, what sort of answer would you get for sin(60 degrees) * sqrt(3) / 3? (The exact answer is 1/2.)
(i am worried that this is just troll bait, but assuming it is an honest question) my guess would be "something closer to 1/2 than calculating with floating point arithmetic using the same number of bits (or maybe even the same number of CPU cycles)"

in case of boost::rational, i doubt it will ever have sin() -- or, more precisely: the current boost::rational tries to do two things at the same time: being a fixed precision type (where the name is analogous to float, referring to the implementation), and that may have sin(), and everything else; and being an unlimited precision type (where the name refers to the mathematical concept, and accidentally the current implementation method), which is unlikely to ever have a sin() implemented

br, andras

Andras Erdei wrote:
my guess would be "something closer to 1/2 than calculating with floating point arithmetic using the same number of bits (or maybe even the same number of CPU cycles)"
Wrong. Given n bits one can only represent 2^n numbers at most. (In the case of boost::rational, substantially less, since 1/2 = 4/8 and so on.) That simply translates to a set of points on the number line. If one were to choose a random real number, which would be less on average: its distance to the nearest boost::rational representation or its distance to the nearest n-bit floating point representation? Since the n-bit floats are equally spaced and all representations are unique, the answer is that with n bits, you can get closer to an arbitrary real number with a floating point representation than with boost::rational.

Getting to the issue of CPU cycles. Your previous email has several problems. For one thing, arbitrary floating points are not constructed with a numerator and denominator. You can construct them with a string of digits, for example. Further, you neglect the fact that gcd calls have to be made after every arithmetical operation, not simply on construction. This represents a huge efficiency hit, not a small efficiency hit. Lastly, you didn't compare worst cases.

So the problems with using boost::rational instead of an arbitrary floating point class are these:

1) It takes more memory on average.
2) It's not just a little slower, it's far slower, and the worst case quickly gets uncomputable.
3) And most importantly: it's nearly useless for science and engineering because you cannot use infinite series functions like sin, sqrt, and so on. (You can hack it with series approximations, but you wind up redoing all the work of an arbitrary float class.)

It is quite true that there is a problem space where boost::rational is the most appropriate solution. It's just not as big as you seem to think it is.

Joel Eidsath

On 9/14/05, Joel Eidsath <jeidsath@gmail.com> wrote:
Given n bits one can only represent 2^n numbers at most. (In the case of boost::rational, substantially less, since 1/2 = 4/8 and so on.)
substantially less is not as bad as one might think, the loss is about 1 bit, depending on n (but always less than two bits)
That simply translates to a set of points on the number line. If one were to choose a random real number, which would be less on average: its distance to the nearest boost:rational representation or its distance to the nearest n-bit floating point representation? Since the n-bit floats are equally spaced
[i think fixed point and fixed slash are (asymptotically) uniformly distributed, and floating point and floating slash are (asymptotically) log uniformly distributed]
and all representations are unique, the answer is that with n-bits, you can get closer to an arbitrary real number with a floating point representation than with boost::rational.
the distribution of numbers representable with num/den pairs is arithmetic friendly, and of those representable as floating point is arithmetic unfriendly, but the explanation is very long and math intensive, so i am cornered; the best i can offer here is an authoritarian "proof" of one particular difference. you were referring to knuth, so let's quote him:

"For example, suppose we are doing [fixed precision rational arithmetic] so that the representable numbers (u/u') have -128 < u < 128 and 0 <= u' < 255. This isn't much precision, but it is enough to give us a feel [] Suppose we have a calculation that would take the overall form 22/7 = 314/159 + 1300/1113 if we were working in exact rational arithmetic, but the intermediate quantities have had to be rounded to representable numbers. In this case 314/159 would round to (79/40) and 1300/1113 would round to (7/6). The rounded terms sum to 79/40 + 7/6 = 377/120, which rounds to (22/7); so we have obtained the correct answer even though three roundings were required. This example was not specially contrived. When the answer to a problem is a simple fraction [fixed precision rational arithmetic] tends to make the intermediate rounding errors cancel out." D.E. Knuth: The Art of Computer Programming, Volume 2, Chapter 4.5.1

(in contrast, floating point tends to accumulate rounding errors)
Getting to the issue of CPU cycles. Your previous email has several problems. For one thing, arbitrary floating points are not constructed with a numerator and denominator. You can construct them with a string of digits, for example.
[to my understanding you were referring to the boost::rational ctor that takes a num and a den, so i compared that kind of initialisation, but] in theory, if a float or a rational is constructed using a num and a den, both take O( log n ) steps; if they are constructed using a string of digits, both still take O( log n ) steps; if the float is constructed using a precalculated float representation and a rational using a precalculated normalized num and den, both take O( 1 ).

the practical difference is that for built-in floats you have compiler and hw support, and it happens at compile time, and for user-defined rationals you do not, and it happens at run time, in sw. to an extent, the compiler support can be emulated; but we are unlikely to ever have hw support for rationals (there are several articles about efficient hw implementation of rationals, but i don't know about any PCs shipped with such hw :O)
Further, you neglect the fact that gcd calls have to be made after every arithmetical operation, not simply on construction. This represents a huge effciency hit, not a small effciency hit.
that's why i said you should have compared addition, not construction, as that is the operation that really has different time complexity (and also reciprocation, in the opposite direction). but there are other considerations: users may be happier with getting the correct result slightly slower than getting a bad answer a bit faster
Lastly, you didn't compare worst cases.
?
1) It takes more memory on average.
or less
2) It's not just a little slower, it's far slower,
addition is the only operation with worse time-complexity (i assume "far slower" means that), and i think it is up to the user whether he can live with that compromise
and worst case quickly gets uncomputable.
?
3) And most importantly: It's nearly useless for science and engineering because you cannot use infinite series functions like sin, sqrt, and so on. (You can hack it with series approximations, but you wind up redoing all the work of an arbitrary float class.)
swap the words float and rational and the above sentence still stands :O)
It is quite true that there is a problem space where boost::rational is the most appropriate solution. It's just not as big as you seem to think it is.
that could be right br, andras

"Daryle Walker" <darylew@hotmail.com> wrote
OK. When you say "arbitrary precision," you mean that a precision limit must be set (at run-time) before an operation. Most people use "arbitrary precision" to mean unlimited precision, not your "run-time cut-off" precision.
Are there really libraries that have unlimited precision? What happens when the result of a computation is irrational? regards Andy Little

On 9/13/05, Andy Little <andy@servocomm.freeserve.co.uk> wrote:
"Daryle Walker" <darylew@hotmail.com> wrote
OK. When you say "arbitrary precision," you mean that a precision limit must be set (at run-time) before an operation. Most people use "arbitrary precision" to mean unlimited precision, not your "run-time cut-off" precision.
Are there really libraries that have unlimited precision? What happens when the result of a computation is irrational?
To have unlimited precision you would need unlimited space... There's no way to have unlimited precision for every number. If you have n bits to represent numbers, then you'll have 2^n numbers represented, or else you'll have two numbers being represented in the same way, which would lead to ambiguity on the way back.
-- Felipe Magno de Almeida Developer from synergy and Computer Science student from State University of Campinas(UNICAMP). Unicamp: http://www.ic.unicamp.br Synergy: http://www.synergy.com.br "There is no dark side of the moon really. Matter of fact it's all dark."

On Tuesday, 13 September 2005 at 12:28 +0100, Andy Little wrote:
"Daryle Walker" <darylew@hotmail.com> wrote
OK. When you say "arbitrary precision," you mean that a precision limit must be set (at run-time) before an operation. Most people use "arbitrary precision" to mean unlimited precision, not your "run-time cut-off" precision.
Are there really libraries that have unlimited precision? What happens when the result of a computation is irrational?
It all depends on which representation you choose for your numbers; rational representations are not the only ones. Some formats can represent any real number as long as it is constructible [1] (and you have enough memory, but no need for an infinite memory for a given number). There are lots of such formats and an abundant literature about them, so I will only give one single example: functional Cauchy sequences of rational numbers. Best regards, Guillaume [1] Rational numbers are constructible. Irrational numbers are constructible. Pi, E, and other common mathematical constants are constructible. Numbers derived from them are constructible. Omega (the halting problem encoding) is not constructible.
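P.S. To make that one example concrete, a functional Cauchy sequence can be encoded as a function that, for any requested index n >= 1, returns a rational within 1/n of the number it represents. A sketch (the 'rational' here is a bare stand-in, not a real library type):

    #include <functional>

    struct rational { long num, den; };   // stand-in type

    // A computable real: approx(n) is within 1/n of the number.
    using real = std::function<rational(long)>;

    // Any rational embeds exactly: every approximation is the number itself.
    real embed(rational q)
    {
        return [q](long) { return q; };
    }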

On 9/13/05 8:17 AM, "Guillaume Melquiond" <guillaume.melquiond@ens-lyon.fr> wrote: [SNIP]
[1] Rational numbers are constructible. Irrational numbers are constructible. Pi, E, and other common mathematical constants are constructible. Numbers derived from them are constructible. Omega (the halting problem encoding) is not constructible.
I couldn't find this mathematical definition of "constructible;" is it known by another term? (The only definition I know of is for those ancient Greek compass & [unmarked] straight-edge puzzles. But all the associated numbers for those are a subset of algebraic numbers. By that definition, pi and e are not constructible, since they're transcendental.) I looked in the Wikipedia, BTW. But maybe you mistyped; rational and irrational numbers cover _every_ real number, so you didn't have to specify pi, e, common constants, or derived values. -- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

On 9/13/05 7:28 AM, "Andy Little" <andy@servocomm.freeserve.co.uk> wrote:
"Daryle Walker" <darylew@hotmail.com> wrote
OK. When you say "arbitrary precision," you mean that a precision limit must be set (at run-time) before an operation. Most people use "arbitrary precision" to mean unlimited precision, not your "run-time cut-off" precision.
Are there really libraries that have unlimited precision? What happens when the result of a computation is irrational?
You can't store it conventionally, since such numbers would need an infinite amount of memory. You would just give up and have to define some sort of rounding/cut-off philosophy. As another poster said, you could store the irrational components of a number with some sort of formula (but only for algebraically irrational numbers, not transcendentally irrational numbers). -- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

On 9/13/05, Andy Little <andy@servocomm.freeserve.co.uk> wrote:
Are there really libraries that have unlimited precision? What happens when the result of a computation is irrational?
As I understand the possible range of arithmetic types, there are:

- _fixed_ precision representations, like the built-in int or float, or (extended precision) user-defined types, like a 2048-bit integer for RSA operations (likely implemented using modular arithmetic); we are all familiar with these.

- _arbitrary_ precision numbers (same as above, but the precision is set at run time rather than selected at compile time); if I understand correctly, Joel is proposing this kind of library. I have never used such a beast, nor seen it used, so I only have guesses about it.

- _unlimited_ precision types (I think these are referred to as 'unlimited' rather than 'infinite' -- except, intentionally, by the current C++ Standard Library proposal :O). These usually do not have functions like sin/log/sqrt, as they are used for problems like solving a set of linear equations, or maintaining your account balance in a bank, in both cases exactly. Sometimes they do have some of those functions, coupled with the capability to represent the results, but I have only read about such ones in articles, never seen or used one.

- _symbolic_ calculation. I have never used such a thing (beyond the capabilities of my HP128S calculator long ago), but I think a really general implementation is impossible (e.g. finding a primitive function could be a hard AI problem :O). Still, there are such things in use (I think Mathematica is an example, although I could be wrong).

So my answer is: unlimited precision types do not have operations that would lead out from the set of numbers representable in that type. This does not mean that boost::rational cannot have an sqrt() function, but it will work just like the one for float, returning an approximate result. br, andras
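That last point can be made concrete. A minimal sketch, assuming a hypothetical my::rational type (not boost::rational) that supports construction from integers and the usual arithmetic and comparison operators: Newton's method produces exact rational iterates, but since the limit sqrt(x) is generally irrational, it has to stop at a caller-chosen tolerance.

    // Sketch only: my::rational is hypothetical; assume construction from
    // integers and the operators +, -, *, /, <, >, and unary minus.
    my::rational rational_abs(const my::rational& v)
    {
        return v < my::rational(0) ? -v : v;
    }

    // Newton iteration r' = (r + x/r) / 2, for x > 0.  Every iterate is an
    // exact rational, but the limit usually is not, so -- just like float
    // sqrt -- we stop once r*r is within eps of x and return an approximation.
    my::rational sqrt_approx(const my::rational& x, const my::rational& eps)
    {
        my::rational r = x;                      // rough initial guess
        while (rational_abs(r * r - x) > eps)
            r = (r + x / r) / my::rational(2);
        return r;
    }

One caveat: with exact rational arithmetic the numerators and denominators of the iterates grow very quickly, which is one argument for the rounding rational representation discussed elsewhere in this thread.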

"Andy Little" <andy@servocomm.freeserve.co.uk> wrote in message news:dg6d4r$la2$1@sea.gmane.org...
"Daryle Walker" <darylew@hotmail.com> wrote
OK. When you say "arbitrary precision," you mean that a precision limit must be set (at run-time) before an operation. Most people use "arbitrary precision" to mean unlimited precision, not your "run-time cut-off" precision.
Are there really libraries that have unlimited precision? What happens when the result of a computation is irrational?
Just to shed some light on this discussion: there are unlimited precision libraries, e.g. http://keithbriggs.info/xrc.html If you ask yourself how to represent an irrational number, first read about computable numbers. Once you see that, the solution should be right there. Salú, LG

"Joel Eidsath" <jeidsath@gmail.com> wrote in message news:431C91C1.3030502@gmail.com...
I think that I need to define the mathematical concept of rational numbers here. A rational number is any number that _can_ be written as a / b. But a rational class does not have to be stored as a separate numerator and denominator. In an arbitrary precision library, you definitely do not want to implement rational numbers as a struct with a numerator and denominator.
Examples of basic rational number types in C++: float, double, etc.
On the contrary, conversion from rational to float is an example of a "lossy conversion". regards Andy Little

Andy Little wrote:
On the contrary, conversion from rational to float is an example of a "lossy conversion".
Wrong. It is sometimes lossy, it is sometimes exact. Numbers like 1.3, 134.5858, and -0.87 are all rational numbers, and all of them can be stored exactly in a decimal floating-point representation. 1/3 is a rational number that cannot be stored exactly as a float. Where you keep going wrong, for some reason, is thinking an arbitrary precision rational class should store _any_ rational number with infinite precision. It's an arbitrary precision class, not an infinite precision class. Numbers like 1/3 will be stored to an approximation only (to however many digits of accuracy you ask for). Joel Eidsath

"Joel Eidsath" <jeidsath@gmail.com> wrote in message news:431DCDEF.1020501@gmail.com...
Andy Little wrote:
On the contrary, conversion from rational to float is an example of a "lossy conversion".
Wrong. It is sometimes lossy, it is sometimes exact.
That's true but irrelevant, because you can't tell, given only the resulting floating-point representation, whether your floating-point value represents an approximation to some value which may be rational or irrational, or alternatively is an exact representation of a rational value. regards Andy Little

That's true but irrelevant, because you can't tell, given only the resulting floating-point representation, whether your floating-point value represents an approximation to some value which may be rational or irrational, or alternatively is an exact representation of a rational value.
And so...? I fail to see your point. Are you trying to support the original poster who said just to use boost::rational? Joel Eidsath

"Joel Eidsath" <jeidsath@gmail.com> wrote in message news:431E2D2F.7090501@gmail.com...
That's true but irrelevant, because you can't tell, given only the resulting floating-point representation, whether your floating-point value represents an approximation to some value which may be rational or irrational, or alternatively is an exact representation of a rational value.
And so...? I fail to see your point.
My point is that a floating-point representation of a number is in no way a peer concept of a rational number. IOW, your assertion: "Examples of basic rational number types in C++: float, double, etc." is equivalent to trying to compare two C++ types that don't have a comparison operator. The proper use for a rational number is as a dimensionless constant in an equation. In this case its numerator and denominator values are rarely anything other than small integers. Floating-point values, on the other hand, are useful to represent measured analogue quantities (physical measurement by nature being inexact), for example the distance between two points, or the height of the tide at an analogue instant of time. Physical quantities are never rational numbers but rather analogue values given meaning by the convention of units. The following is an example of how I would consistently use floating-point types and rationals together to give a theoretically better precision than if using floating-point types only. (Of course, due to the lossiness of floats, any multiplication/division of a float by a rational and vice versa must result in a float rather than a rational.)

    // get distance travelled from u, a, t
    double u = U, t = T, a = A;  // analogue quantities in some units of measure
    double s = u * t + my::rational(1,2) * a * pow(t,2);

    // find volume of sphere
    const double pi = 3.14159265358979;
    double radius = R;
    double volume = my::rational(4,3) * pi * pow(radius,3);

Note: having said that, boost::rational can't be used in this way because there is no multiplication or division between boost::rational and floating-point types. This in no way invalidates the examples. regards Andy Little

My point is that a floating-point representation of a number is in no way a peer concept of a rational number. IOW, your assertion:
"Examples of basic rational number types in C++: float, double, etc."
Rational numbers are a mathematical concept, not a computer science concept. Floats, doubles and so on hold rational numbers, not irrationals, not integers. Now, are you seriously claiming, as the first poster did, that boost::rational will work whenever you need an arbitrary precision rational number? Or do you have some other point? I really fail to see what you are trying to get at. The only point that you seemed to be trying to make with your examples was that "boost::rational has its place too..." Again, I fail to see what that has to do with the discussion at hand. No one has said anything different. (That place is just not as an arbitrary precision library's rational class. For example, notice the calls to gcd() in the constructor. You're toast as soon as you start dealing with large primes.) Joel Eidsath

On 9/7/05, Joel Eidsath <jeidsath@gmail.com> wrote:
My point is that a floating-point representation of a number is in no way a peer concept of a rational number. IOW, your assertion:
"Examples of basic rational number types in C++: float, double, etc."
Rational numbers are a mathematical concept, not a computer science concept.
It is a CS concept as well. If you are interested, a good text on representing numbers as rationals (num/den pairs) is Matula & Kornerup, "Foundations of Finite Precision Rational Arithmetic", Computing Suppl. 2, 1980, pp. 85-111.
Floats, doubles and so on hold rational numbers, not irrationals, not integers.
Yes, but they are only capable of holding rationals with power-of-two denominators, which is a severe limitation.
Now, are you seriously claiming, as the first poster did, that boost::rational will work whenever you need an arbitrary precision rational number?
Not the current implementation, as it has no rounding.
Or do you have some other point?
I think he is suggesting that radix representation may be good (efficient etc.) for integers, but is certainly not a good way to represent non-integers. Num/den pairs are much better, and there are several other alternatives (representations based on exponential functions, continued fraction expansions, composition of linear fractional transformations, etc.).
The only point that you seemed to be trying to make with your examples was that "boost::rational has its place too..." Again, I fail to see what that has to do with the discussion at hand. No one has said anything different. (That place is just not as an arbitrary precision library's rational class. For example, notice the calls to gcd() in the constructor. You're toast as soon as you start dealing with large primes.)
With the current boost::rational, yes, but with a rounding implementation, no. br, andras
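The power-of-two-denominator point is easy to demonstrate: a binary double cannot hold 1/10 exactly, while 1/8 is exact. A small self-contained check:

    #include <cstdio>

    int main() {
        double tenth  = 0.1;    // nearest double to 1/10: denominator must be a power of two
        double eighth = 0.125;  // 1/8 already has a power-of-two denominator
        // Printing 20 significant digits exposes the binary approximation.
        std::printf("%.20g\n", tenth);   // 0.10000000000000000555...
        std::printf("%.20g\n", eighth);  // 0.125 exactly
    }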

Andras Erdei wrote:
I think he is suggesting that radix representation may be good (efficient etc.) for integers, but is certainly not a good way to represent non-integers.
float is worthless, huh?
Num/den pairs are much better, and there are several other alternatives (representations based on exponential functions, continued fraction expansions, composition of linear fractional transformations, etc.).
I don't think that you have a good conception of the efficiency costs involved in doing large-number math with those things. Joel Eidsath

"Joel Eidsath" <jeidsath@gmail.com> wrote in message news:431E4A69.9020805@gmail.com...
My point is that a floating-point representation of a number is in no way a peer concept of a rational number. IOW, your assertion:
"Examples of basic rational number types in C++: float, double, etc."
Rational numbers are a mathematical concept, not a computer science concept. Floats, doubles and so on hold rational numbers, not irrationals, not integers.
Now, are you seriously claiming, as the first poster did, that boost::rational will work whenever you need an arbitrary precision rational number? Or do you have some other point? I really fail to see what you are trying to get at.
The only point that you seemed to be trying to make with your examples was that "boost::rational has its place too..." Again, I fail to see what that has to do with the discussion at hand. No one has said anything different. (That place is just not as an arbitrary precision library's rational class. For example, notice the calls to gcd() in the constructor. You're toast as soon as you start dealing with large primes.)
It is your usage of the word "rational number" that I am arguing with. It's misleading because (I think) you are using it to mean an arbitrary precision floating-point representation of a real value rather than its proper sense; see e.g.: http://mathworld.wolfram.com/RationalNumber.html OTOH, if you are using the term correctly, then there is no such concept as "setting the precision of a rational number", because a generic rational number has exactly one representation (e.g. as a fraction). The examples you have given, e.g.:

    rational a, b;
    a.set_precision(50);   // a's precision now set at 50
    b.set_precision(150);  // b's precision now set at 150; a's precision is unchanged

make no sense for rational numbers, because they are not generically representable in floating point of any but infinite precision. Perhaps an alternative name to 'rational' for your type would be less confusing. regards Andy Little

Andy Little wrote:
It is your usage of the word "rational number" that I am arguing with. It's misleading because (I think) you are using it to mean an arbitrary precision floating-point representation of a real value rather than its proper sense; see e.g.:
No, I was precise in my language. To be very clear: all arbitrary precision floating point numbers are rational, though not all rational numbers can be represented (exactly) as arbitrary precision floating points. That's all I ever meant, and all I ever used it to mean.
Perhaps an alternative name to 'rational' for your type would be less confusing.
Okay, so this whole thing was just a naming issue? You have nothing to say about how I should code things? Sure, no problem. I was following Shoup's NTL, which names its classes "ZZ" (Z stands for the integers in mathematics) and "RR" (R stands for the reals) respectively. All I'm working on right now is implementation. Suggest a name, and I'll note it. Joel Eidsath

"Joel Eidsath" <jeidsath@gmail.com> wrote
Okay, so this whole thing was just a naming issue?
It started as a misunderstanding on my part because of my preconceptions about what a rational is, but yes, it is just a naming issue. I believe that renaming would result in greater clarity, which surely can't be a bad thing. Apologies for making so much noise.
You have nothing to say about how I should code things?
Certainly not! Your ideas so far sound exciting. I will certainly be interested to follow your progress.
Sure, no problem. I was following Shoup's NTL, which names its classes "ZZ" (Z stands for the integers in mathematics) and "RR" (R stands for the reals) respectively. All I'm working on right now is implementation. Suggest a name, and I'll note it.
How about APF, standing for arbitrary precision float? regards Andy Little

From: Joel Eidsath <jeidsath@gmail.com>
Andy Little wrote:
How about APF, standing for arbitrary precision float?
Sounds pretty good. I'll either use that or "abfloat" to go along with integer.
I hope you meant "apfloat" at least. Nevertheless, such an abbreviation is not Boost style, so I'd suggest "arbitrary_precision_float." Users can always typedef it to something shorter. -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

arbitrary_precision_float is rather long, even for Boost's preference of clarity over curtness -- but apfloat and the like are far too cryptic. How about just precision_float, with the "arbitrary" (or rather user-defined) part implied? Paul Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com www.hetp.u-net.com

"Rob Stewart" <stewart@sig.com> wrote in message news:200509081609.j88G9D60027298@shannonhoon.balstatdev.susq.com...
From: Joel Eidsath <jeidsath@gmail.com>
Andy Little wrote:
How about APF, standing for arbitrary precision float?
Sounds pretty good. I'll either use that or "abfloat" to go along with integer.
I hope you meant "apfloat" at least. Nevertheless, such an abbreviation is not Boost style, so I'd suggest "arbitrary_precision_float." Users can always typedef it to something shorter.
(The downside is that you can get very long error messages, and typedefs don't help with this.) The library will need its own namespace anyway. Why not (in the boost namespace) apm::float_<> and apm::int_<>, where apm stands for arbitrary-precision math? regards Andy Little

From: Joel Eidsath <jeidsath@gmail.com>
Daryle Walker wrote:
What about std::deque?
Will I need to insert objects at the front at any time?
It may need to grow and std::deque is more efficient at that than std::vector because it doesn't need to copy the existing data. OTOH, std::deque usually acquires more excess memory than std::vector when the string of digits is small. -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

On 9/2/05, Joel Eidsath <jeidsath@gmail.com> wrote:
I've used several different arbitrary precision libraries with C++ and I have always been somewhat disappointed. And now I'm finally disappointed enough to start on my own.
I, too, have started to implement my own, for different reasons (I'm interested in the rational library, but my impression was that to get changes through, we have to have an unlimited precision integer library first).
3) As well as arbitrary precision, it should provide error range math: 2.00 * 2.00 is not generally the same thing as 2.0 * 2.0
If I understood your later comments regarding this issue correctly, this would lock the implementation to a decimal-based radix (otherwise you can't specify the precision as a number of decimal digits), which seems to be a very severe limitation. The most obvious choice for a 32-bit machine is a 2^31-based radix representation, and to my understanding your precision requirements rule that out.
4) It should use fast algorithms for multiplication and division, but instead of trying to compete with other libraries in computation speed (GMP uses hand-coded assembly in places), it should concentrate on code portability and ease of extension.
My secret hope is that, using expression templates, GMP and its ilk can be beaten :O) If you want to calculate a+b+c using a C library, you have to make two loops over the digits (e.g. to add a and b, and then to add c to the result), while in C++ you can add together all three in a single loop. (IIRC, GMP does have a few hacks built in for fast computation of frequently appearing cases like x*x+y*y and a+b+c, but even if memory serves me right, it is just a few special cases, not a general solution.) If GMP really cannot be beaten, then we have to have a wrapper around it for those who need the efficiency and don't care about the licensing.
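A minimal expression-template sketch of that single-loop evaluation, using a toy digit-vector type with no carry handling and equal lengths assumed (an illustration of the fusion mechanism only, not a real bignum):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Toy "number": just a vector of digits; carries are ignored so the
    // fusion mechanism stays visible.
    struct Num {
        std::vector<int> digits;
        int operator[](std::size_t i) const { return digits[i]; }

        // Assigning from any expression evaluates digit by digit: the
        // whole tree a + b + c collapses into this one loop.
        template <class Expr>
        Num& operator=(const Expr& e) {
            for (std::size_t i = 0; i < digits.size(); ++i)
                digits[i] = e[i];
            return *this;
        }
    };

    // An unevaluated sum node: holds references, computes on demand.
    template <class L, class R>
    struct Sum {
        const L& l; const R& r;
        int operator[](std::size_t i) const { return l[i] + r[i]; }
    };

    // Deliberately unconstrained for brevity; a real library would
    // restrict this operator to its own expression types.
    template <class L, class R>
    Sum<L, R> operator+(const L& l, const R& r) { return Sum<L, R>{l, r}; }

    int main() {
        Num a{{1, 2, 3}}, b{{4, 5, 6}}, c{{7, 8, 9}}, result{{0, 0, 0}};
        result = a + b + c;   // one pass over the digits, no temporaries
        for (int d : result.digits) std::cout << d << ' ';   // 12 15 18
    }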
5) I do not envision any fancy memory management. The "big int" class will probably use a std::vector<int> to store it's implementation data.
I do not yet have benchmarks (partly because I do not know what to benchmark -- if someone were willing to develop some benchmarks derived from real-life applications, it would be very helpful), but I suspect that memory management will be an important factor in the speed of the end result. Currently I have three methods in use:
- a linked list of single digits (std::list)
- contiguous storage for all digits (std::vector)
- a linked list of ever-growing slices of digits (at the moment I have two versions: one has nodes with 1, 2, 4, 8, ... 2^n digits, and the other with 1, 2, 3, 5, 8, ... F(n) digits)
br, andras

Andras Erdei wrote:
this would lock the implementation to a decimal-based radix (otherwise you can't specify the precision as a number of decimal digits), which seems to be a very severe limitation.
The most obvious choice for a 32-bit machine is a 2^31-based radix representation, and to my understanding your precision requirements rule that out.
No fears about losing the 2^32 radix (using unsigned int instead of int). What you quoted was just an example explaining what error ranges are. To implement them, I imagine that something like this would be reasonable:

    rational r = 1.234;
    r.error_range = 0.02;

This is meant to show that r can range from 1.214 to 1.254. Arithmetic operations would update the error range as you go along. This lets you handle a lot more than just decimal error ranges, but it also lets you handle normal decimal error ranges simply:

    rational r = 2.3;
    r.error_range = 0.05;

Now r ranges from 2.25 to 2.35, which is what the notation "2.3" generally means.
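A rough sketch of how arithmetic could propagate such ranges, using plain doubles for brevity (the Approx type and its bounds are illustrative assumptions; a real implementation would use the arbitrary precision types and round the error bounds outward):

    #include <cmath>
    #include <cstdio>

    // Sketch only: a value plus a half-width error; the true value is
    // assumed to lie in [value - error, value + error].
    struct Approx {
        double value;
        double error;
    };

    Approx operator+(Approx a, Approx b) {
        // (a +/- ea) + (b +/- eb) = (a + b) +/- (ea + eb)
        return {a.value + b.value, a.error + b.error};
    }

    Approx operator*(Approx a, Approx b) {
        // |(a +/- ea)(b +/- eb) - ab| <= |a|*eb + |b|*ea + ea*eb
        double e = std::fabs(a.value) * b.error
                 + std::fabs(b.value) * a.error
                 + a.error * b.error;
        return {a.value * b.value, e};
    }

    int main() {
        Approx r{2.3, 0.05};                // "2.3" meaning [2.25, 2.35]
        Approx s = r * r;
        std::printf("%g +/- %g\n", s.value, s.error);  // 5.29 +/- 0.2325
    }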
4) It should use fast algorithms for multiplication and division, but instead of trying to compete with other libraries in computation speed (GMP uses hand-coded assembly in places), it should concentrate on code portability and ease of extension.
My secret hope is that, using expression templates, GMP and its ilk can be beaten :O)
If you want to calculate a+b+c using a C library, you have to make two loops over the digits (e.g. to add a and b, and then to add c to the result), while in C++ you can add together all three in a single loop.
(IIRC, GMP does have a few hacks built in for fast computation of frequently appearing cases like x*x+y*y and a+b+c, but even if memory serves me right, it is just a few special cases, not a general solution.)
If GMP really cannot be beaten, then we have to have a wrapper around it for those who need the efficiency and don't care about the licensing.
Wow. That's the first time I'd heard of expression templates. I can see them being very useful.
5) I do not envision any fancy memory management. The "big int" class will probably use a std::vector<int> to store it's implementation data.
I do not yet have benchmarks (partly because I do not know what to benchmark -- if someone were willing to develop some benchmarks derived from real-life applications, it would be very helpful), but I suspect that memory management will be an important factor in the speed of the end result.
Currently I have three methods in use:
- a linked list of single digits (std::list)
- contiguous storage for all digits (std::vector)
- a linked list of ever-growing slices of digits (at the moment I have two versions: one has nodes with 1, 2, 4, 8, ... 2^n digits, and the other with 1, 2, 3, 5, 8, ... F(n) digits)
By fancy memory management, I meant custom allocators. I can envision using containers other than vector if they would speed things up. Joel Eidsath

IMO, we should have a few classes that handle most needs without too much overhead. I think we should support some policies (a sketch of how they might compose follows below):

A) Allocation
- fixed width (any multiple of 8 bits)
- variable width (dynamic memory)
- variable width (dynamic allocation), but with static allocation for small numbers

B) Precision
- integer (multiple of one)
- rational (a numerator and a denominator)
- fixed point (base 2 or 10, or maybe arbitrary); base 2 would be faster, as we would be able to do some shifting in many computations, while base 10 would be useful for numbers that are typically displayed to the user (currency, length, coordinates, ...) at a given decimal precision
- floating point (a mantissa and an exponent); this would be useful to compress floating-point values or to have different ranges/precisions than the standard ones

C) Signed or not

D) Computation of the error range or not

E) Handling of errors like overflow, NaN, ...
- ignore
- special values like NaN, Infinity, ...
- exceptions

F) Maybe the possibility to use GMP for those who want it and can accept the license; this would be optional (at least for most operations), and the speed should never be worse than twice as slow in the worst case (or something like that)

Not every combination of A, B, C, D, E and F needs to be supported. Also, we might want to support some mixed-type computations. In general, the use of GMP should not be required. In my case, the things that interest me the most are:

a) being able to support larger integers (64 or 128 bits) that are really fast for additions and multiplications. They could be used to add a bunch of 32-bit values, or even the product of two 32-bit integers (as a 64-bit integer), without any overflow (or precision loss).

b) compressed floating-point values (or integers) when space matters (for example on a handheld). In our case, the handheld stores data that comes from a data logger that can record a few MB, and we could have a few data loggers active. Since these devices are slower and have limited memory, compressed numbers that are almost as fast as the built-in types would be very interesting. In some cases, it might be OK for us to convert to built-in types at some point in the computation. For example, we might do all accumulations on large integers and then use the floating-point emulation of the device to compute the final stats.

Another thing that would be interesting is to have some performance comparisons on a variety of platforms, so that we can select the best type to use more easily.
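A skeletal sketch of how two of those policy axes (allocation and error handling) could compose in C++; all policy names and the big_int shell here are hypothetical, invented only to illustrate the composition:

    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    // A) Allocation policies (hypothetical).
    struct fixed_storage_128 {
        std::uint32_t limbs[4] = {};          // 128 bits, no heap allocation
    };
    struct dynamic_storage {
        std::vector<std::uint32_t> limbs;     // grows as needed
    };

    // E) Error policies (hypothetical).
    struct ignore_errors {
        static void overflow() {}             // silently wrap
    };
    struct throw_errors {
        static void overflow() { throw std::overflow_error("big_int overflow"); }
    };

    // The number type mixes in whichever policies the user picks.
    // Signed covers axis C); it is unused in this sketch.
    template <class Storage, class Errors, bool Signed = true>
    class big_int : Storage {
    public:
        big_int& operator+=(const big_int& rhs) {
            (void)rhs;  // limb-wise addition over Storage::limbs would go
                        // here; a fixed-width Storage would call
                        // Errors::overflow() when the carry runs off the top
            return *this;
        }
    };

    // Users pick the combination they need:
    using fast_int128   = big_int<fixed_storage_128, ignore_errors>;
    using checked_int   = big_int<fixed_storage_128, throw_errors>;
    using unlimited_int = big_int<dynamic_storage,   throw_errors>;

    int main() {
        fast_int128 a;
        a += fast_int128{};   // placeholder arithmetic, sketch only
    }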

Hello everybody. This is just my opinion on a subject regarding the big integer library discussion. A lot of people are talking about performance, and how whatever is proposed should have the best performance ever and be able to compete with the best. That is absurd; it is like saying that if you have no car, your first car should be a Ferrari. Did anyone _really_ benchmark BGL? That library cannot compete with commercial products, and nobody ever asked it to. What about uBLAS? That library has major improvements still to be made. Some people asked for benchmarks for the Parameters library that, I believe, were never made. The for_each library defines 'extra' variables that are not always used. Does that mean that those libraries cannot be part of Boost? It is just too easy to complain and demand excellence from everyone else. Regards, LG

On 9/8/05, Lucas Galfaso <lgalfaso@gmail.com> wrote:
A lot of people are talking about performance, and how whatever is proposed should have the best performance ever and be able to compete with the best. That is absurd; it is like saying that if you have no car, your first car should be a Ferrari. Did anyone _really_ benchmark BGL? That library cannot compete with commercial products, and nobody ever asked it to. What about uBLAS? That library has major improvements still to be made. Some people asked for benchmarks for the Parameters library that, I believe, were never made. The for_each library defines 'extra' variables that are not always used. Does that mean that those libraries cannot be part of Boost?
There is some truth in what you are saying, but I think this is a special case; most users of (unlimited precision) arithmetic are interested in speed above all else. Of course Boost doesn't have to be better than everyone else (and it depends on the application field what is faster), but IMHO it has to be in the same league as existing implementations to be accepted -- things like ease of use won't help here. br, andras
participants (12)
- Andras Erdei
- Andy Little
- Daryle Walker
- David Abrahams
- Felipe Magno de Almeida
- Guillaume Melquiond
- Jeff Garland
- Joel Eidsath
- Lucas Galfaso
- Paul A Bristow
- Philippe Mori
- Rob Stewart