[Boost] Introduction of numeric_adaptor

Hi,

I would like to introduce the numeric_adaptor library that Barend and I have developed to address a recurring problem: the integration of big numbers. We have already talked about it in the past, when discussions about geometry would lead to robustness. We're introducing it now because of the upcoming formal review of GGL.

The purpose of this library is to turn the big numbers proposed by the most well-known big-number libraries into value types, (almost) usable like any number. This way, a library developer who has a robustness-sensitive function to write can take numbers as template parameters and use them in such a way that they can be a fundamental type or one of the value types provided by numeric_adaptor.

This is basically done by:
- defining in all the big-number types all the operators needed to manipulate them like fundamental types
- defining some free functions in the boost namespace for other operations (sin, cos, hypot, conversions...), which have a default overload for fundamental types and a specific overload for each big-number type proposed

Example:

template <class T>
T my_robustness_sensitive_function(T value)
{
    value /= 2;
    T value2 = boost::to<T>("123.0123456789");
    value += boost::sqrt(value2);
    return value;
}

This function is numeric_adaptor-ready. If T is a fundamental type, boost::to() will forward to boost::lexical_cast and boost::sqrt() will forward to std::sqrt(). If T is one of the big-number types provided by the library, specific overloads will be used. The precision of the function will thus depend entirely on the type passed. If the user chooses to sacrifice precision and use fundamental types, there won't be any run-time overhead. If the user wants something more precise and uses a big-number type, the precision of the result will be that of the chosen type.

The library is extensible. Anyone can add a new big-number type with all the needed operators, and overload the free functions (or specialize a dedicated traits class - see boost::to()).

Note: for conversions, we would have liked to extend boost::lexical_cast directly, but the library is not designed in that sense. That's why we wrapped it into a boost::to() function which is explicitly designed to be extended, via traits. Last week on this list a generic convert_to was discussed, and that system could be used alternatively.

The library is in the Boost sandbox: https://svn.boost.org/svn/boost/sandbox/numeric_adaptor/

The library currently proposes 2 big-number types: CLN and GMP.

This is not a formal review request, rather a preview and a reference that we will possibly use during the review of GGL. However, we would really like to have your feedback and to know if such a library would make sense.

Regards
Bruno
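A minimal sketch of the overload scheme described above: the namespace "num" stands in for the boost namespace used by numeric_adaptor, and my_bignum is an invented stand-in for a CLN- or GMP-backed value type; this is not the actual library code.

// Default overloads forward fundamental types to std:: / lexical_cast; a
// specific overload/specialization is picked up when T is the big-number type.
#include <cmath>
#include <boost/lexical_cast.hpp>

namespace num
{
    template <typename T> T sqrt(T x)         { return std::sqrt(x); }
    template <typename T> T to(char const* s) { return boost::lexical_cast<T>(s); }
}

// Invented big-number value type (a real adaptor would wrap mpf_t, cl_F, ...).
struct my_bignum
{
    double v;                                   // placeholder representation
    explicit my_bignum(double x = 0) : v(x) {}
    my_bignum& operator/=(int d)       { v /= d; return *this; }
    my_bignum& operator+=(my_bignum o) { v += o.v; return *this; }
};

namespace num
{
    // Specific overload/specialization chosen when T is my_bignum.
    inline my_bignum sqrt(my_bignum x) { return my_bignum(std::sqrt(x.v)); }

    template <>
    inline my_bignum to<my_bignum>(char const* s)
    {
        return my_bignum(boost::lexical_cast<double>(s));
    }
}

// The function from the post, written against the stand-in namespace.
template <class T>
T my_robustness_sensitive_function(T value)
{
    value /= 2;
    T value2 = num::to<T>("123.0123456789");
    value += num::sqrt(value2);
    return value;
}

int main()
{
    double d    = my_robustness_sensitive_function(8.0);            // std::sqrt path
    my_bignum b = my_robustness_sensitive_function(my_bignum(8.0)); // my_bignum overloads
    (void)d; (void)b;
}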

On Sun, Nov 1, 2009 at 2:16 PM, Bruno Lalande <bruno.lalande@gmail.com> wrote:
I would like to introduce the numeric_adaptor library that Barend and I have developed to address a recurring problem: the integration of big numbers. [...]
Looks quite fascinating, now we just need a Boost.BigNum library. :)

Looks quite fascinating, now we just need a Boost.BigNum library. :)
There have been some discussions about that already, and one of the conclusions seems to be that it would be complex to try to do something at least as efficient as the existing libraries. Moreover the choice between those can depend on what you need exactly. So this library proposes to simply plug whatever bignum you want in.

Regards
Bruno

On Mon, Nov 2, 2009 at 1:52 AM, Bruno Lalande <bruno.lalande@gmail.com> wrote:
Looks quite fascinating, now we just need a Boost.BigNum library. :)
There have been some discussions about that already, and one of the conclusions seems to be that it would be complex to try to do something at least as efficient as the existing libraries. Moreover the choice between those can depend on what you need exactly. So this library proposes to simply plug whatever bignum you want in.
Let me rephrase that then: Looks quite fascinating, now we just need a boost-licensed BigNum library. :)

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Bruno Lalande Sent: Monday, November 02, 2009 8:53 AM To: boost@lists.boost.org Subject: Re: [boost] [Boost] Introduction of numeric_adaptor
Looks quite fascinating, now we just need a Boost.BigNum library. :)
There have been some discussions about that already, and one of the conclusions seems to be that it would be complex to try to do something at least as efficient as the existing libraries. Moreover the choice between those can depend on what you need exactly. So this library proposes to simply plug whatever bignum you want in.
I think this is a good way to proceed. This proposal will make it easy to switch BigNum implementations.

Many users very much want a BigNum but don't really care much about efficiency (though there are others who care very much indeed).

It would allow those who cannot use the best GMP bignum (because of its LGPL licence) to use a Boost.BigNum - and that would make this much more useful. Kevin Sopp has been promising to finish his Boost-compatible version for some time - hint!

Paul

---
Paul A. Bristow
Prizet Farmhouse
Kendal, UK LA8 8AB
+44 1539 561830, mobile +44 7714330204
pbristow@hetp.u-net.com

I got the hint ;-) Because of severe time constraints I haven't been able to do anything for far too long now. I hope that I will find some time soon to finish this thing.

On Mon, Nov 2, 2009 at 12:34 PM, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
[...] Kevin Sopp has been promising to finish his Boost-compatible version for some time - hint!

The purpose of this library is to turn the big numbers proposed by the most well-known big-number libraries into value types, (almost) usable like any number. [...]
This looks like a good idea, but how does it compare to the thin wrappers for NTL and mpfr/gmp in boost/math/bindings? Unfortunately, for many applications we really need a complete std libm implementation: the Boost.Math bindings get close to that - how about these?

Cheers, John.

This looks like a good idea, but how does it compare to the thin wrappers for NTL and mpfr/gmp in boost/math/bindings? Unfortunately, for many applications we really need a complete std libm implementation: the Boost.Math bindings get close to that - how about these?
Yep, I remember now that you had already talked about that the last time we mentioned our work; I completely forgot to look at the Boost.Math bindings since then. However, I can't find any real documentation about what they are exactly (rationale + use case(s)). If the purpose is the same as our work, there would be no problem migrating our work into yours (adding CLN, for instance).

Bruno

Yep, I remember now that you had already talked about that the last time we mentioned our work; I completely forgot to look at the Boost.Math bindings since then. However, I can't find any real documentation about what they are exactly (rationale + use case(s)). If the purpose is the same as our work, there would be no problem migrating our work into yours (adding CLN, for instance).
Browse the topics starting here: http://www.boost.org/doc/libs/1_40_0/libs/math/doc/sf_and_dist/html/math_too... The bindings add just enough syntactic sugar to make the two arbitrary-precision libraries supported so far conform to our concept requirements (http://www.boost.org/doc/libs/1_40_0/libs/math/doc/sf_and_dist/html/math_too...). In other words, to make them look "just like a regular std number type". Both NTL::RR and mpfr get pretty close to this already BTW; there were just a few features and Boost.Math-specific traits/helper functions that were missing.

Cheers, John.

Hi John,

Just tried the Boost.Math bindings. Technically, they do exactly the same thing as our numeric_adaptor indeed:
- providing a wrapper class to make big numbers behave like any number
- ensuring std functions can be performed on them
- and, in addition to ours, doing the same for Boost.Math functions as well

So I'm definitely interested. The only thing that bothers me is the way in which std functions must be invoked. One purpose of numeric_adaptor is to provide a common namespace (boost, but it might be anything) from which library writers can call those functions (I place myself from the point of view of a function template that doesn't know in advance what type it gets as input). So:

boost::sqrt(n); // works with fundamental types, GMP types, CLN types, etc...

In the Boost.Math bindings, those functions have a specific namespace for each specific type provided. So a library writer doesn't know what to type:

std::sqrt(n); // if it's a fundamental type
boost::math::ntl::sqrt(n); // if it's a boost::math::ntl::RR type
etc....

But maybe the principle of your bindings is that the library writer only relies on ADL, and thus always simply writes "sqrt(n)"? If that's the case and you think this is an acceptable way to do it, then I would definitely adopt the Boost.Math bindings.

Regards
Bruno

On Sun, Nov 1, 2009 at 4:16 PM, Bruno Lalande <bruno.lalande@gmail.com> wrote:
Example:
template <class T>
T my_robustness_sensitive_function(T value)
{
    value /= 2;
    T value2 = boost::to<T>("123.0123456789");
    using boost::sqrt;
    value += sqrt(value2);
    return value;
}
This will get a sqrt(T) if defined, else boost::sqrt(T). Not sure about the to<> - it is a bit small/ambiguous to not have a namespace. Tony
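A small self-contained illustration of the lookup behaviour Tony describes; bignum_lib, big_real and the fallback namespace are all invented for the example and only stand in for a real big-number library and the proposed boost namespace.

#include <cmath>
#include <iostream>

namespace bignum_lib
{
    struct big_real { double v; };   // stand-in for an arbitrary-precision type

    // Lives next to big_real, so argument-dependent lookup can find it.
    inline big_real sqrt(big_real x) { return big_real{ std::sqrt(x.v) }; }
}

namespace fallback
{
    // Generic default used when ADL finds nothing better.
    template <typename T> T sqrt(T x) { return std::sqrt(x); }
}

template <class T>
T root(T x)
{
    using fallback::sqrt;   // make the generic default visible...
    return sqrt(x);         // ...but let ADL pick bignum_lib::sqrt for big_real
}

int main()
{
    double d = root(4.0);                                      // fallback::sqrt -> std::sqrt
    bignum_lib::big_real b = root(bignum_lib::big_real{4.0});  // bignum_lib::sqrt via ADL
    std::cout << d << " " << b.v << "\n";                      // prints "2 2"
}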

On Monday 02 November 2009 19:51:34, Gottlob Frege wrote:
On Sun, Nov 1, 2009 at 4:16 PM, Bruno Lalande <bruno.lalande@gmail.com> wrote:
Example:
template <class T>
T my_robustness_sensitive_function(T value)
{
    value /= 2;
    T value2 = boost::to<T>("123.0123456789");
    return value;
}
Not sure about the to<> - it is a bit small/ambiguous to not have a namespace.
What's wrong with lexical_cast? It would put the requirement of operator>> on the number type T, but I don't see how that is worse than specializing a new boost::to template for each number type. A number type T most likely supports streaming anyway, and lexical_cast is implemented by streaming from a std::stringstream.
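A toy illustration of that requirement, with an invented fixed_point type: once the type has a stream extraction operator, boost::lexical_cast can construct it from a string.

#include <cmath>
#include <istream>
#include <iostream>
#include <boost/lexical_cast.hpp>

struct fixed_point
{
    long long units;   // value scaled by 1000 (a real big number would keep full precision)
};

// operator>> is what lexical_cast<fixed_point>("...") requires of the target type.
std::istream& operator>>(std::istream& is, fixed_point& x)
{
    double d;
    if (is >> d)
        x.units = std::llround(d * 1000);
    return is;
}

int main()
{
    fixed_point p = boost::lexical_cast<fixed_point>("123.012");
    std::cout << p.units << "\n";   // 123012
}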

What's wrong with lexical_cast? It would put the requirement of operator>> on the number type T
Actually my concern about going through a streaming operator was about performance, but on second thought there would be no performance hit doing it that way; I think I was confusing it with another use case of streaming operators. I will revise that, and will be happy to go back to lexical_cast if there's no problem.

Bruno

What's wrong with lexical_cast? It would put the requirement of operator>> on the number type T
Actually my concern about going through a streaming operator was about performance, but on second thought there would be no performance hit doing it that way; I think I was confusing it with another use case of streaming operators. I will revise that, and will be happy to go back to lexical_cast if there's no problem.
Also take a look at the machinery we defined in boost/math/tools/constants.hpp. Usage is a little clunky, for example:

BOOST_DEFINE_MATH_CONSTANT(name, digits, extra-digits, exponent);

after which you can use name<T>() to refer to the constant. Built-in types simply return an appropriately defined numeric constant (so no lexical_cast overhead), while for UDTs the constant is defined as a string containing the digits, which gets lexical_cast'ed to the value.

The remaining issue we haven't sorted out is thread safety: for UDTs the cast value is stored in a static variable for efficiency reasons, so only the first call results in a lexical_cast, but of course this isn't strictly thread safe (for built-in types, there is no such issue).

The "extra-digits" argument is required BTW because many compilers reject numeric constants with too many digits of precision, so we put enough decimal places in "digits" to ensure 128-bit long double precision, and everything else (used only in extreme UDT cases) in "extra-digits".

John.
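A rough sketch of the caching scheme John describes - this is not the actual expansion of BOOST_DEFINE_MATH_CONSTANT; pi_constant and pi<T>() are invented for illustration. Built-in types get a literal, any other type gets the digit string lexical_cast'ed once and cached in a function-local static.

#include <iostream>
#include <boost/lexical_cast.hpp>

template <typename T>
struct pi_constant
{
    // Generic (UDT) path: parse once, cache in a static. As noted in the post,
    // initialisation of the static is the part that is not strictly thread safe.
    static T get()
    {
        static const T value =
            boost::lexical_cast<T>("3.14159265358979323846264338327950288");
        return value;
    }
};

// Built-in types bypass lexical_cast entirely and just return a literal.
template <> struct pi_constant<float>
{ static float get()       { return 3.141592653589793238F; } };
template <> struct pi_constant<double>
{ static double get()      { return 3.141592653589793238; } };
template <> struct pi_constant<long double>
{ static long double get() { return 3.141592653589793238L; } };

// name<T>()-style accessor, as in the post.
template <typename T>
inline T pi() { return pi_constant<T>::get(); }

int main()
{
    std::cout.precision(17);
    std::cout << pi<double>() << "\n";   // literal path, no lexical_cast involved
}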
participants (7)
- Bruno Lalande
- Gottlob Frege
- John Maddock
- Kevin Sopp
- OvermindDL1
- Paul A. Bristow
- Stefan Strasser