
Greetings. This is my first post to the mailing list, so I'll try not to act stupid. :)

The recent release of Boost (1.39.0), in the boost::rational library documentation, broadly hints at (and expresses a fond wish for) an arbitrary-precision integer ("bignum") library; however, none currently exists within Boost. There are numerous such libraries out in the wild (many targeted towards C), though some of them are under incompatible licenses.

(One interesting starting point would be libtommath, by Tom St. Denis; it appears to be a pretty robust and functional C library, which now has a C++ wrapper--albeit one that doesn't meet Boost standards. It also contains numerous other useful number-theoretic functions, many of which appear not to depend on the underlying bigint type. And it's in the public domain, or at least appears to be...)

Searching through the mailing list, I've seen numerous references to creating such a thing, some dating back nearly ten years--but none yet seems to exist. This is something, I think, which would be Really Useful to add to Boost. I wasn't able to locate any current Boost project or proposal to create such a thing--though it's certainly possible I looked up the wrong keyword(s).

So, my possibly impertinent newbie question: is anyone (seriously) working on such a thing, for inclusion in Boost? Are there any nasty (political) obstacles to such a thing being added?

Thanks!

-- engineer_scotty (no, not that one) If life gives you lemons, drink Hefeweizen

I'm definitely interested in this field (a pun for those who care). Most serious users though have no problem with the GNU license and a whole lot of very good quality work goes into GMP. Like, on a scale comparable to the entire boost effort. I don't have a big appetite for investing in a lib that's just good enough, when I could instead contribute to the one that does it best. Not for my research work, anyway. Boost should probably include something usable though, just like it needs to include enough linear algebra infrastructure to handle common tasks. Something comparable to the Java or .NET bignum support, enough for basic crypto and the usual demo programs. Duplicating GMP's production values would mean duplicating their investment in platform-specific optimizations, something incompatible with existing boost organization, practices, and personnel interests. I don't see anything ambitious happening in this area, but I'm just an interested observer. Others will likely have other views. -swn

On Mon, Aug 17, 2009 at 4:57 PM, Stephen Nuchia<snuchia@statsoft.com> wrote:
[Stephen's message quoted in full; snipped]
I would *love* to have a bignum library in Boost. Although I like GMP, its difficulty of building on VS and its highly restrictive license make it nearly worthless for me. My projects are completely free, and I give out source when asked, but the GPL will not work with the bulk of the code and the purposes of the projects I work on; as an open-source license I much prefer BSD- or Boost-style. All the work that has gone into GMP seems wasted as a result: a bignum library should be such a generic thing, yet the license completely destroys its usefulness. So yes, I would love one in Boost; it would be one less library I would have to manage separately, and it would have a lot of work done on it as well. I think work mainly done for x86(-64) and the other top 3 or 4 platforms (ppc, arm, etc.), with slower pure C++ fallbacks for everything else, would be more than enough to make it very useful.

I'm working on this in sandbox/mp_math, and I also used libtommath as a starting point. It probably does not compile under VC because I'm developing it with gcc. Currently it has a GMP backend, a libtommath backend, and a native Boost backend. Since I have become the new maintainer of libtommath, I am also in the process of making custom modifications to it so that it integrates better with a C++-style library. In fact, mp_math no longer compiles with the latest official libtommath release. I'm hosting the libtommath repo at sourceforge.net/projects/tommath.

Kevin Sopp wrote:
I'm working on this in sandbox/mp_math and I also used libtommath as starting point. It probably does not compile under VC because I'm developing this using gcc.
Hi Kevin, It (mp_math) was compiling under vc9 about a year ago when I tried some examples with the lazy exact filter I had done. Have there been any updates since then? Cheers, Brandon

Hi Brandon, there have been big changes in the sandbox repository for this library, I wouldn't use it until I make some kind of release again. On Thu, Sep 3, 2009 at 2:18 PM, Brandon Kohn<blkohn@hotmail.com> wrote:
[previous messages quoted in full; snipped]
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Scott Johnson wrote:
[Scott's original post quoted in full; snipped]
I would love to have such a library and have been begging for it for quite a while. I don't have much experience in assembly, especially 32-bit assembly, else I might have attempted this myself. If you do attempt this, please keep the following things in mind.

1) On the C++0x Standard Library wishlist (revision 5), found at http://www.open-std.org/JTC1/sc22/WG21/docs/papers/2005/n1901.html, there are these items:
a) Infinite-precision integer arithmetic
b) Full-width integer operations
"a" is what you are referring to, but it has a dependency for efficiency on "b", so it may be good to propose both libraries and split the work among multiple people. This also shows that there is great interest.

2) Don't limit yourself to cryptographic applications. Big numbers have applications in identification and other scientific fields. As such, one might be perfectly content with fixed-size, stack-allocated [unsigned] int types. Ex. std::bitset<512> -> uint<512>

3) Think pluggable. Start with a standard C++ implementation, even if it is slow, as long as it works, and then gradually, in later releases, replace it or allow choosing faster implementations. Who knows, this might allow others to plug GMP or other implementations into it. [Consider virtual functions, function pointers, facets. Most in past conversations have offered strong resistance to virtual functions.]
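[Editorial aside: point 2 above, the fixed-size, stack-allocated uint<512> idea, could be sketched roughly as follows. The name fixed_uint and everything in this snippet are invented for illustration; a real library would need the full operator set, multiplication, conversions, I/O, etc.]

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Illustrative sketch of a fixed-size, stack-allocated unsigned integer
// in the spirit of "std::bitset<512> -> uint<512>".
template <std::size_t Bits>
struct fixed_uint {
    static_assert(Bits % 64 == 0, "sketch assumes whole 64-bit words");
    std::array<std::uint64_t, Bits / 64> words{};  // least significant word first

    fixed_uint(std::uint64_t v = 0) { words[0] = v; }

    // Schoolbook addition with carry propagation; wraps modulo 2^Bits,
    // like the built-in unsigned types.
    friend fixed_uint operator+(const fixed_uint& a, const fixed_uint& b) {
        fixed_uint r;
        std::uint64_t carry = 0;
        for (std::size_t i = 0; i < r.words.size(); ++i) {
            std::uint64_t s = a.words[i] + b.words[i];
            std::uint64_t c = (s < a.words[i]);  // did a+b overflow this word?
            r.words[i] = s + carry;
            carry = c + (r.words[i] < s);        // did adding the carry overflow?
        }
        return r;  // final carry discarded
    }
};
```

Because the storage is a std::array, such a type needs no heap allocation at all, which is exactly what makes it attractive for the non-crypto uses mentioned above.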

A few other thoughts on things which might be useful--

1) a natural type (an unsigned arbitrary-precision type). Comes up often, and many crypto apps work on unsigned rather than signed ints.
2) different division policies as template parameters. One policy option would be how division by zero is handled (undefined, exception, use of NaN/INF encodings); another would be rounding mode (floor, ceiling, to zero, bias-free, throw-if-not-exact, unspecified, etc.). Default options, I would think, would be round-to-floor and throw-on-div-by-zero.
3) potentially, boost::rational could support NaN/INF encodings as well, for those applications where division by zero shouldn't throw.
4) certain "combo" operations, such as multiply-accumulate, ternary add and subtract, simultaneous div/mod, and modular operations (plus, minus, times), ought to be part of the bigint class. Efficient algorithms for things like multiplication ought to be as well (and selected by default as appropriate).
5) crypto-specific algorithms, and things which can be efficiently implemented on top of standard arithmetic, would best be put in a separate numerical library.

On Tue, Aug 18, 2009 at 5:50 AM, Jarrad Waterloo <jwaterloo@dynamicquest.com> wrote:
[Jarrad's message and list footer snipped]
-- engineer_scotty (no, not that one) If life gives you lemons, drink Hefeweizen
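[Editorial aside: Scott's item 2, division policies as template parameters, could be sketched very minimally like this. The policy names and the checked_int stand-in are invented for the example; a real bignum would also carry the rounding-mode policy he mentions.]

```cpp
#include <cstdint>
#include <stdexcept>

// Divide-by-zero handling as a template policy, with a plain 64-bit int
// standing in for the bignum type.
struct throw_on_div_by_zero {
    static std::int64_t handle() { throw std::domain_error("division by zero"); }
};
struct zero_on_div_by_zero {  // for callers that prefer a sentinel result
    static std::int64_t handle() { return 0; }
};

template <class DivZeroPolicy = throw_on_div_by_zero>
struct checked_int {
    std::int64_t value;
    checked_int(std::int64_t v = 0) : value(v) {}

    friend checked_int operator/(checked_int a, checked_int b) {
        if (b.value == 0)
            return checked_int(DivZeroPolicy::handle());
        return checked_int(a.value / b.value);  // rounds toward zero, like built-ins
    }
};
```

The default policy matches Scott's suggested throw-on-div-by-zero behavior, while a crypto or NaN-style application could select a different policy without any runtime cost.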

On Tue, Aug 18, 2009 at 3:49 PM, Scott Johnson<engineerscotty@gmail.com> wrote:
A few other thoughts for things which might be useful--
1) a natural type (an unsigned arbitrary precision type). Comes up often, and many crypto apps work on unsigned rather than signed ints.
2) different division policies as template paramters. One policy option would be how division by zero is handled (undefined, exception, use of NaN/INF encodings); another would be rounding mode (floor, ceiling, to zero, bias-free, throw-if-not-exact, unspecified, etc). Default options, I would be think, would be round-to-floor and throw-on-div-by-zero.
3) potentially, boost::rational could support NaN/INF encodings as well, for those applications where division by zero shouldn't throw.
4) certain "combo" operations, such as multipy-accumulate, ternary add and subtract, simultaenous div/mod, and modular operations (plus, minus, times) ought to be part of the bigint class. Efficient algorithms for things like multiplication ought to as well (and selected by default as appropriate).
5) crypto-specific algorithms, and things which can be efficiently implemented on top of standard arithmetic, should best be put in a separate numerical library.
[nested quotes snipped]
I vote for this and would certainly use it. There are many times I need a bignum but do not rely on speed. Oddly enough, at those times I usually include Boost.Python and use its integer class for my numbers; it handles things *well*, although it is a bit of overkill. :)

2) Don't limit yourself to cryptographic applications. Big numbers have applications in identification and other scientific fields. As such one might be perfectly content with fixed sized stack allocated [unsigned] int types. Ex. std::bitset<512> -> uint<512>
I agree on this
3) Think pluggable. Start with a standard C++ implementation, even if it is slow, as long as it works, and then gradually, in later releases, replace it or allow choosing faster implementations. Who knows, this might allow others to plug GMP or other implementations into it.

This may end up as a test case for the general numerical functions library I'm trying to generate out of my Boost.SIMD proposal. In broad scope, it provides a large number of stub functions that are tag-dispatched rather efficiently based on "arithmetic"-like value types. Basically, it defines 150+ functions that look at their parameters and decide which implementation to use at compile time. I use that to discriminate between scalar and SIMD functions, but if you "tag" this bignum class correctly and follow my extension mechanism, then we can statically choose between implementations. Roughly:

  bignum<> x;
  bignum<impl::gmp> y;
  x = abs(x); <-- calls the "standard bignum" implementation
  y = abs(y); <-- calls the GMP implementation

[Consider virtual functions, function pointers, facets. Most in past conversations have offered strong resistance to virtual functions.]

The more that is done at compile time, the better, so traits + tag dispatch is the way to go compared to virtual functions.
-- ___________________________________________ Joel Falcou - Assistant Professor PARALL Team - LRI - Universite Paris Sud XI Tel : (+33)1 69 15 66 35
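[Editorial aside: a stripped-down sketch of the compile-time tag dispatch joel describes. The tag names mirror his example; the toy `long` representation and the abs_impl overloads are placeholders (a real GMP backend would call mpz_abs on an mpz_t).]

```cpp
#include <cstdlib>

// Backend selection as a type tag: the overload of abs_impl is picked
// statically from the tag, with no virtual functions involved.
namespace impl { struct classic {}; struct gmp {}; }

template <class Backend = impl::classic>
struct bignum { long value = 0; };  // toy representation for illustration

inline long abs_impl(long v, impl::classic) { return v < 0 ? -v : v; }
inline long abs_impl(long v, impl::gmp) { return std::labs(v); }  // placeholder body

template <class Backend>
bignum<Backend> abs(bignum<Backend> x) {
    x.value = abs_impl(x.value, Backend{});  // overload chosen at compile time
    return x;
}
```

Since the tag is part of the type, mixing backends by accident becomes a compile error rather than a runtime surprise, which is the usual argument for this design over function pointers.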

on 19.08.2009 at 9:50 joel wrote :
[joel's description of the tag-dispatch mechanism quoted in full; snipped]
i consider it a poor solution
if i want the debug version to use native c++ code and the release version to use a back end, i would be forced to type something like

  #ifdef NDEBUG
  typedef bignum<impl::gmp> mybignum; //release
  #else
  typedef bignum<> mybignum; //debug
  #endif

and to replicate it in every compilation unit
what comes to my mind is, for example,

  build main.cpp funcs.cpp -dUSE_SPECIFIC_BACK_END -dNDEBUG

for the release version and

  build main.cpp funcs.cpp

for the debug version (i hope you got the point)
note that i decide whether to use it by leaving the code unchanged; or, to enable a back end explicitly in code, i can

  #include <bignum/specific_back_end.h>

-- Pavel

What about:

  #ifdef NDEBUG
  #define GMP_MODE gmp
  #else
  #define GMP_MODE classic
  #endif

then in the other unit:

  bignum<GMP_MODE> x,y;

-- ___________________________________________ Joel Falcou - Assistant Professor PARALL Team - LRI - Universite Paris Sud XI Tel : (+33)1 69 15 66 35

on 20.08.2009 at 0:37 joel wrote :
What about : #ifdef NDEBUG #define GMP_MODE gmp #else #define GMP_MODE classic #endif then in other unit : bignum<GMP_MODE> x,y;
or type everywhere

  typedef bignum<IMPL> mybignum;

and then

  #define IMPL

or

  #define IMPL specific_impl

(or set it in the project options separately for the release and debug versions)
but this is a hack; it shouldn't be like that

-- Pavel

Scott Johnson wrote:
A few other thoughts for things which might be useful--
1) a natural type (an unsigned arbitrary precision type). Comes up often, and many crypto apps work on unsigned rather than signed ints.
2) different division policies as template parameters. One policy option would be how division by zero is handled (undefined, exception, use of NaN/INF encodings); another would be rounding mode (floor, ceiling, to zero, bias-free, throw-if-not-exact, unspecified, etc.). Default options, I would think, would be round-to-floor and throw-on-div-by-zero.
3) potentially, boost::rational could support NaN/INF encodings as well, for those applications where division by zero shouldn't throw.
4) certain "combo" operations, such as multiply-accumulate, ternary add and subtract, simultaneous div/mod, and modular operations (plus, minus, times), ought to be part of the bigint class. Efficient algorithms for things like multiplication ought to be as well (and selected by default as appropriate).
5) crypto-specific algorithms, and things which can be efficiently implemented on top of standard arithmetic, should best be put in a separate numerical library.
[nested quotes and list footer snipped]
Number 4, "certain 'combo' operations", is a thought similar to "Full-width integer operations". If you want to know what should go into its public interface, consider looking at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2849.pdf. This is the current proposal for decimal floating-point types. Being another numeric type for C++, this proposal should give you an idea of what functions are needed, at least as a goal. I agree with the need for Number 1, an unsigned large-number type. If this is considered, please also add partition methods: a function that can get a bigint from a bigint, starting at a bit offset and running for a bit length, i.e. bitfields where the struct is a bigint.
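[Editorial aside: the "partition" operation Jarrad asks for, shown here on a single 64-bit word for illustration. The function name is invented; a bigint version would do the same extraction across the word array, stitching at word boundaries.]

```cpp
#include <cstdint>

// Extract `length` bits starting at bit `offset`, treating the integer
// as one big bitfield.
inline std::uint64_t partition(std::uint64_t v, unsigned offset, unsigned length) {
    std::uint64_t mask = (length >= 64) ? ~std::uint64_t(0)
                                        : ((std::uint64_t(1) << length) - 1);
    return (v >> offset) & mask;
}
```

The guard on `length >= 64` matters: shifting a 64-bit value by 64 is undefined behavior in C++, so the all-ones mask has to be special-cased.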

on 19.08.2009 at 17:07 Jarrad Waterloo wrote :
If you want to know what should go into its public interface consider looking at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2849.pdf. This is the current proposal for decimal floating point types. Being another numeric type for C++ this proposal should give you an idea of what functions are needed; at least as a goal.

indeed
my suggestion would be: if a floating-point type (with higher precision) is provided, it should be conformant with iec559 (ieee754)
http://en.wikipedia.org/wiki/IEC_559
http://en.wikipedia.org/wiki/IEEE_754-2008
then i suppose it'll look like

  float64 f1;
  float128 f2;
  float256 f3; //etc.

-- Pavel

I had drafted an earlier reply, but perhaps I botched sending it as I've not seen it come through. My opinion then was that this (a bignum library) is a poor fit for the boost community, considering the amount and kind of platform-specific micro-optimization that goes on in the GMP library camp.

But today I'm writing unit tests for some tricky floating-point calculation sequences, and it occurs to me that a reliable arbitrary-precision library with a very good API would be just the thing for calculating the "should be" values. Support for crypto and calculator demo programs rounds out the justification. A bignum library focused primarily on portable, provably correct "reference implementations" of the algorithms, rather than on performance, would have a lot of value.

My toolbag for writing tests like these includes testing properties of the answers rather than checking their values against predictions, choosing input data for which the output is easy to calculate, or writing a model of the calculation in Mathematica, executing it manually for pre-selected values, and pasting the answers into an expected-value table.

The way Mathematica answers the question "what is the value of this expression, rounded to 53 bits?" is to evaluate the expression using a heuristically determined amount of extra precision and interval arithmetic. If both ends of the result interval round to the same number, we have an answer; otherwise the calculation is repeated with increasing precision until the results converge.

Having just recently taken a look at the boost::proto stuff, I think it might be possible to directly implement this concept with expressions written in C++. Now that would be a trip!

The boost community is not set up to duplicate what GMP does. But I think we could do something else very well if we focus on correctness, portability, and metaprogramming API tastiness instead of making the tires smoke.
If I grok the proto stuff correctly, it should be possible to write expressions that can be evaluated in different contexts. One could be native hardware FP arithmetic, one could be the reference bignum library, and one could even use GMP. Right?

-swn

Lambda expressions in a statically typed, compiled language. It's actually making me giddy!

Self-identified old fart story: I implemented lambda expressions in C using a preprocessor in 1987 or '88, while working in the Sun Windows API. It had a very fine-grained callback design, and the resulting namespace pollution and lack of locality when reading code drove me to it. It actually worked really well for that application.
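[Editorial aside: not Proto itself, but the essence of the question above can be shown with a plain function template: write the expression once, evaluate it in different numeric "contexts", here modeled simply as different number types. A Proto expression tree would let the same tree be re-evaluated in a hardware-FP context, a reference-bignum context, or a GMP context without rewriting it.]

```cpp
// One expression definition, many evaluation contexts (number types).
template <class Number>
Number poly(Number a, Number b) { return a * a + b; }
```

Instantiating `poly<double>` gives the native hardware FP evaluation, while instantiating it with a bignum or GMP wrapper type would reuse the identical expression at higher precision.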

I tend to agree--if we do it, we ought to do a reasonable reference implementation. (If it gets standardized eventually, then compiler vendors can tune it up, after all.)

On Tue, Aug 18, 2009 at 2:25 PM, Stephen Nuchia <snuchia@statsoft.com> wrote:
I had drafted an earlier reply but perhaps I botched sending it as I've not seen it come through. My opinion then was that this (a bignum library) is a poor fit for the boost community, considering the amount and kind of platform-specific microoptimization that goes on in the GMP library camp.
But today I'm writing unit tests for some tricky floating-point calculation sequences and it occurs to me that a reliable arbitrary-precision library with a very good API would be just the thing for calculating the "should be" values. Support for crypto and calculator demo programs round out the justification.
A bignum library focused primarily on portable, provably correct "reference implementations" of the algorithms rather than on performance would have a lot of value.
My toolbag for writing tests like these includes testing properties of the answers rather than checking their values against predictions, choosing input data for which the output is easy to calculate, or writing a model of the calculation in Mathematica, executing it manually for pre-selected values, and pasting the answers into an expected value table.
The way Mathematica answers the questions "what is the value of this expression, rounded to 53 bits?" is to evaluate the expression using a heuristically determined amount of extra precision and interval arithmetic. If both ends of the result interval round to the same number we have an answer, otherwise the calculation is repeated with increasing precision until the results converge.
Having just recently taken a look at the boost::proto stuff I think it might be possible to directly implement this concept with expressions written in C++. Now that would be a trip!
The boost community is not set up to duplicate what GMP does. But I think we could do something else very well if we focus on correctness, portability, and metaprogramming API tastiness instead of making the tires smoke.
If I grok the proto stuff correctly it should be possible to write expressions that can be evaluated in different contexts. One could be native hardware FP arithmetic, one could be the reference bignum library, and one could even use GMP. Right?
-swn
Lambda expressions in a statically typed, compiled language. It's actually making me giddy!
Self-identified old fart story: I implemented lambda expressions in C using a preprocessor in 1987 or 88, while working in the Sun Windows API. It had a very fine-grained callback design and the resulting namespace pollution and lack of locality when reading code drove me to it. It actually worked really well for that application.
-- engineer_scotty (no, not that one) If life gives you lemons, drink Hefeweizen

Stephen Nuchia wrote: ...
If I grok the proto stuff correctly it should be possible to write expressions that can be evaluated in different contexts. One could be native hardware FP arithmetic, one could be the reference bignum library, and one could even use GMP. Right?
-swn
FWIW, I've implemented and used an arithmetic expression evaluation library for use in geometric computations. It uses expression templates (via Boost.Proto) to construct statically-typed expression trees, which can then be evaluated with any user-defined evaluator (e.g., with Boost.Interval, an extended-precision floating point type, an exact rational type, etc.). This allows arithmetic expressions to be adaptively evaluated until a certain geometric test is conclusive. For example, to determine if 2 segments intersect, one can compute the relevant signed areas using Boost.Interval, and if rounding errors cause the test to be inconclusive, you can reevaluate with a more precise data type. Our application currently only involves polytopic primitives (points, segments, triangles, tetrahedrons), so expressions involving +, -, *, and / are all we need.
There's also a generic expression type that type-erases a statically-typed expression (similar to Boost.Function), which doubles as a cache of evaluation results. (The specification of the caching strategy is, however, somewhat more complicated than I'd like, since one must specify how a cached value can be converted to the various possible evaluation types.)
FWIW2, to exactly evaluate expressions, we use a C++ implementation of J. R. Shewchuk's expansion arithmetic algorithms: http://www.cs.cmu.edu/~quake/robust.html The idea is to represent a number as a sum of fixed-precision floating point numbers (floats or doubles) and operate on these sums. Our motivation was the ease with which a float or double can be converted to this representation, and the operations can generally be very fast, but I regret I haven't done any benchmarks myself to compare the computational speed of these algorithms to alternatives.
It would seem to me that an expansion arithmetic type would probably only be useful if you know the expression trees in your application aren't too deep, and you're willing to accept the possibility of running out of exponent space (overflowing or underflowing). It does happen to work quite well for our computational geometry application. Just throwing out some ideas and my own experience in this (or a possibly tangential) area. - Jeff

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Stephen Nuchia Sent: 18 August 2009 22:25 To: boost@lists.boost.org Subject: Re: [boost] Arbitrary precision arithmetic
A bignum library focused primarily on portable, provably correct "reference implementations" of the algorithms rather than on performance would have a lot of value.
Indeed - the high accuracy of the Boost.Math distributions and functions is largely because John Maddock put a lot of effort into getting reference values using NTL (a GPL licensed library). (Some of them made my CPU smoke for days ;-)
The boost community is not set up to duplicate what GMP does. But I think we could do something else very well if we focus on correctness, portability, and metaprogramming API tastiness instead of making the tires smoke.
Or better, as recently proposed, also allow one to use GMP if desired. That should get the tires warm, if not smoking.
Although the people who want speed want it badly, most people don't really care. What most people also care about is control of the behaviour at the limits (NaN/inf/divzero/overflow). So a policy-based implementation is essential. But I suspect that, with some forethought, that can be added later.
We seem to have several nearly finished offerings, but none reviewed for the Official Boost Logo.
Paul
--- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

Hi, Paul A. Bristow wrote:
The boost community is not set up to duplicate what GMP does. But I think we could do something else very well if we focus on correctness, portability, and metaprogramming API tastiness instead of making the tires smoke.
Or better, as recently proposed, also allow one to use GMP if desired. That should get the tires warm, if not smoking.
You might refer to our mails this spring. We (Bruno and I) are implementing a 'numeric adaptor' library, to be optionally used in our generic geometry library, but developed completely separately. It is currently in the Boost Sandbox: https://svn.boost.org/svn/boost/sandbox/numeric_adaptor/
It provides interfaces to GMP and CLN (another big number library), which can be used the same way as normal (IEEE) integers / doubles can be used. We recently changed and simplified the design. It now works more or less as Joel mailed today. It is template based, no virtual functions here. So Joel's sample would become:
double x;
boost::numeric_adaptor::gmp_value_type y;
x = abs(x); <-- call the IEEE implementation
y = abs(y); <-- call the GMP implementation
Although the people who want speed, want it badly, most people don't really care.
What most people also care about, is control of the behaviour at the limits (NaN/inf/divzero/overflow).
So a policy based implementation is essential. But I suspect that, with some forethought, that can be added later.
We seem to have several nearly finished offerings, but none reviewed for the Official Boost Logo.
It is not finished ('nearly finished' indeed) but you might have a look. Regards, Barend

You might refer to our mails this spring. We (Bruno and I) are implementing a 'numeric adaptor' library, to be optionally used in our generic geometry library, but developed completely separately.
It is currently in the Boost Sandbox, https://svn.boost.org/svn/boost/sandbox/numeric_adaptor/
It provides interfaces to GMP and CLN (another big number library), which can be used the same way as normal (IEEE) integers / doubles can be used. We recently changed and simplified the design. It now works more or less as Joel mailed today. It is template based, no virtual functions here. So Joel's sample would become:
double x;
boost::numeric_adaptor::gmp_value_type y;
x = abs(x); <-- call the IEEE implementation
y = abs(y); <-- call the GMP implementation
Also don't forget that Boost.Math has simple bindings for NTL and MPFR (the latter using GMP internally) that allow these to be used as any other floating point type would be (including std lib transcendental functions etc). Cheers, John.

Barend Gehrels wrote:
Hi,
Paul A. Bristow wrote:
The boost community is not set up to duplicate what GMP does. But I think we could do something else very well if we focus on correctness, portability, and metaprogramming API tastiness instead of making the tires smoke.
Or better, as recently proposed, also allow one to use GMP if desired. That should get the tires warm, if not smoking.
You might refer to our mails this spring. We (Bruno and I) are implementing a 'numeric adaptor' library, to be optionally used in our generic geometry library, but developed completely separately.
It is currently in the Boost Sandbox, https://svn.boost.org/svn/boost/sandbox/numeric_adaptor/
It provides interfaces to GMP and CLN (another big number library), which can be used the same way as normal (IEEE) integers / doubles can be used. We recently changed and simplified the design. It now works more or less as Joel mailed today. It is template based, no virtual functions here. So Joel's sample would become:
double x;
boost::numeric_adaptor::gmp_value_type y;
x = abs(x); <-- call the IEEE implementation
y = abs(y); <-- call the GMP implementation
Interesting. I may give this a shot using my framework and see how it goes.
-- ___________________________________________ Joel Falcou - Assistant Professor PARALL Team - LRI - Universite Paris Sud XI Tel : (+33)1 69 15 66 35
participants (12)
- Barend Gehrels
- Brandon Kohn
- DE
- Jarrad Waterloo
- Jeffrey Hellrung
- joel
- John Maddock
- Kevin Sopp
- OvermindDL1
- Paul A. Bristow
- Scott Johnson
- Stephen Nuchia