[review] Multiprecision review (June 8th - 17th, 2012)

Okay, it's that time. Apologies for not sending a reminder, but there had been a lot of pre-review comments over the last week or so, so I think this has been on at least some people's radar. The original review announcement is given below. On Tue, May 29, 2012 at 2:08 PM, Jeffrey Lee Hellrung, Jr. < jeffrey.hellrung@gmail.com> wrote:
Hi all,
The review of the proposed Boost.Multiprecision library authored by John Maddock and Christopher Kormanyos has been scheduled for
June 8th - June 17th, 2012
and will be managed by myself.
From the Introduction:
-------- "The Multiprecision Library provides *User-defined* integer, rational and floating-point C++ types which try to emulate as closely as practicable the C++ built-in types, but provide for more range and precision. Depending upon the number type, precision may be arbitrarily large (limited only by available memory), fixed at compile time values, for example 50 decimal digits, or a variable controlled at run-time by member functions. The types are expression-template-enabled for better performance than naive user-defined types." --------
And from the original formal review request from John:
-------- Features:
* Expression template enabled front end.
* Support for Integer, Rational and Floating Point types.
Supported Integer backends:
* GMP
* libtommath
* cpp_int
cpp_int is an all-C++, Boost-licensed backend that supports both arbitrary precision types (with Allocator support) and signed and unsigned fixed precision types (with no memory allocation).
There are also some integer specific functions - for Miller Rabin testing, bit fiddling, random numbers. Plus interoperability with Boost.Rational (though that loses the expression template frontend).
Supported Rational Backends:
* GMP
* libtommath
* cpp_int (as above)
Supported Floating point backends:
* GMP
* MPFR
* cpp_dec_float
cpp_dec_float is an all C++ Boost licensed type, adapted from Christopher Kormanyos' e_float code (published in TOMS last year).
All the floating point types have full std lib support (cos, sin, exp, pow etc), as well as full interoperability with Boost.Math.
There's nothing in principle to prevent extension to complex numbers and interval arithmetic types (plus any other number types I've forgotten!), but I've run out of energy for now ;-)
Code is in the sandbox under /big_number/.
Docs can be viewed online here: http://svn.boost.org/svn/boost/sandbox/big_number/libs/multiprecision/doc/ht... --------
Any review discussion should take place on the developers' list (boost@lists.boost.org), and anyone may submit a formal review, either publicly to the entire list or privately to just myself.
As usual, please consider the following questions in your formal review:
What is your evaluation of the design? What is your evaluation of the implementation? What is your evaluation of the documentation? What is your evaluation of the potential usefulness of the library? Did you try to use the library? With what compiler? Did you have any problems? How much effort did you put into your evaluation? A glance? A quick reading? In-depth study? Are you knowledgeable about the problem domain?
And, most importantly, please explicitly answer the following question:
Do you think the library should be accepted as a Boost library?
Lastly, please consider that John and Christopher have compiled a TODO list [1] based on pre-review comments. Feel free to comment on the priority and necessity of such TODO items, and whether any might be show-stoppers or warrant conditional acceptance of the library.
Thanks in advance; looking forward to the discussion!
- Jeff (& John & Christopher)
[1] http://svn.boost.org/svn/boost/sandbox/big_number/libs/multiprecision/doc/ht...

-----Original Message----- From: Jeffrey Lee Hellrung, Jr. Subject: [boost] [review] Multiprecision review (June 8th - 17th, 2012) <snip>
What is your evaluation of the design?
Boost.Multiprecision is an exciting development because it provides two items that Boost has long needed in its toolkit: fixed and arbitrary precision integer types, and fixed and arbitrary precision floating-point types. That Boost.Math functions can be called directly is a massive step forward - even just for 'trivial' tasks like pre-computing math constants. Wheee! - we can now use brute force and effortlessly hurl heaps of bits at any recalcitrant problems. Suddenly, we can do all sorts of tricks at altogether monstrous precision and range! Getting more precision for some part of a calculation is suddenly painless. (Of course, limiting the hundreds-of-digits calculations to the nitty-bitty bits will reduce the speed penalty). (Not to mention random and rational; and it is looking distinctly possible that complex and fixed-point might also finally be made fully generic, which would be even more wonderful).
What is your evaluation of the implementation?
License and backend
=================
Allowing a choice of backend has crucial license advantages, allowing the 'gold standard' optimised GMP, but also allowing the Boost licensed version with remarkably little loss of speed. (I note the price we are paying for the commercial greed and patent abuse that has made maintaining the GPL license status of GMP such a quasi-religious issue).
Crucially, the implementation works hard to be as near as possible a plug-in replacement for the C++ built-in types, including providing std::numeric_limits (where these - mostly - make sense). In general, all the iostream functions do what you could (reasonably) expect. And it is a very strong plus-point that fpclassify and all the usual suspects of regular C99 functions exist: this should mean that most moves from built-in floating-point to multiprecision should 'just work'.
(Any potential license problems arising from the ACM's copyright on Christopher Kormanyos's e_float have been resolved).
Speed
=====
Fast enough for many purposes, especially if it is possible to use a GPL backend. Optional expression-template enabling is cool.
Testing
======
There is a big suite of test programs written by Christopher Kormanyos to test his e_float type (whose engine was hi-jacked by John Maddock and extended to (optionally) use expression templates). These provide good assurance that the underlying integer and floating point types work correctly and that the library is going to work when used in anger.
Unsurprisingly, I was able to run the test package using MSVC VS 10 OK. Don't hold your breath!
(Testing iostream is, of course, a nightmare - there is an infinity of possible I/O combinations, the standard is sketchy in places, and there are some differences between major platforms, so portability is never going to be 100%. But I got the impression that it works as expected).
Writing a simple loopback stream output and re-input, I found that using Boost.Test to compare values can mislead. A patch is at https://svn.boost.org/trac/boost/ticket/5758 (#5758: Boost.Test floating-point comparison diagnostic output does not support radix 10 - not enough digits displayed). For example, it leads to nonsensical reports from a loopback test like
[1e+2776234983093287513 != 1e+2776234983093287513]
when the true situation should be obvious from this:
[1e+2776234983093287513 != 9.999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999e+2776234983093287512]
However, the underlying Boost.Test macros appeared to work fine and are used by the comprehensive set of tests provided (dealing with the complications of multiple backends and error handling policies).
What is your evaluation of the documentation?
I was able to write some trivia before I needed to dig in. Nice. Convincing user examples. Warns of some dragons waiting to burn the unwary. (Very few typos - nice proof-reading ;-))
What is your evaluation of the potential usefulness of the library?
When you need it, you need it very badly. So essential.
Did you try to use the library? With what compiler?
Used with MSVC 10 for a few experiments and to calculate high precision constants. Did 'what it said on the tin', and agreed with Mathematica and other sources.
Did you have any problems?
Shamefacedly, I fell into the pits noted below and was duly singed by the dragons lurking therein :-(
How much effort did you put into your evaluation?
Reasonable, including playing with e_float. I wrote a simple loopback stream output and re-input using the random package:

ss << std::setprecision(std::numeric_limits<cpp_dec_float_100>::max_digits10) << b << std::endl;

It was essential to use max_digits10, not just digits10. This ran until I got bored. Careful reading of docs. Enough use to be confident it works OK.
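A minimal self-contained sketch of such a loopback test (assuming the sandbox header <boost/multiprecision/cpp_dec_float.hpp> and its cpp_dec_float_100 typedef; the random-value generation is left out and a caller-supplied value used instead):

    #include <iomanip>
    #include <limits>
    #include <sstream>
    #include <boost/multiprecision/cpp_dec_float.hpp>

    using boost::multiprecision::cpp_dec_float_100;

    // Stream the value out at full precision, read it back, and compare.
    // With digits10 instead of max_digits10 the round trip can fail,
    // because the guard digits are not written out.
    bool loopback(const cpp_dec_float_100& b)
    {
        std::stringstream ss;
        ss << std::setprecision(std::numeric_limits<cpp_dec_float_100>::max_digits10)
           << b;
        cpp_dec_float_100 readback;
        ss >> readback;
        return readback == b;
    }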
Are you knowledgeable about the problem domain?
Faintly.
Do you think the library should be accepted as a Boost library?
Definitely YES.
Lastly, please consider that John and Christopher have compiled a TODO list [1] based on pre-review comments. Feel free to comment on the priority and necessity of such TODO items, and whether any might be show-stoppers or warrant conditional acceptance of the library.
No showstoppers
==============
It is ready for use. I am sure that the library will be refined in the light of wider user experience (and that will only really come when it is issued as a Boost library).
Implicit conversion
===============
Initially, e_float (like NTL - used during the development of Boost.Math as an example of a multiprecision type, and for calculation of constants) forbade implicit conversion. But it soon became clear that it was impractical to make everything explicit, and we patched NTL to permit it (and to provide the usual suspects of std::numeric_limits and functions too). This leaves some dragons waiting for the unwary, so those who write

cpp_dec_float_100 v1234567890 = 1.23456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890;

will get what they asked for, and deserve - the catastrophic loss of accuracy starting at the 17th decimal digit! (std::numeric_limits<double>::max_digits10 = 17 for the common 64-bit representation). This loss of accuracy will rarely jump out at you :-(
(If I got £1, $1, or 1 euro from everyone who makes this mistake (or falls into one of the many other complex pits discussed in the docs), I believe I would become rich ;-)
It would be nice to catch these mistakes, but not at the price of losing use of all the Boost.Math functionality (and much more). (I fantasize about a macro that can switch intelligently between explicit and implicit to protect the hapless user from his folly, but advising on loss of accuracy is probably really a compiler task).
On the other hand, it is really, really cool that using a string works:

cpp_dec_float_50 df = "3.14159265358979323846264338327950288419716939937510";

Guard digits
==========
The existence and (surprising) number of these has already been discussed, and I see no problem with the way it works. It will be an exceptional program that really needs to use max_digits10 rather than digits10. (Boost.Test is an example, to avoid nonsensical display of [2 != 2] when guard digits differ - see above).
Paul
--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com
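A self-contained illustration of the two constructions Paul contrasts above (assuming the sandbox cpp_dec_float header; the digit strings are identical, so the two variables would compare equal if no precision were lost):

    #include <iostream>
    #include <boost/multiprecision/cpp_dec_float.hpp>

    using boost::multiprecision::cpp_dec_float_50;

    int main()
    {
        // The literal is parsed as a double before cpp_dec_float_50 sees it,
        // so everything beyond about the 17th significant digit is lost.
        cpp_dec_float_50 lossy = 1.23456789012345678901234567890123456789012345678901;

        // The string constructor preserves all the requested digits.
        cpp_dec_float_50 exact("1.23456789012345678901234567890123456789012345678901");

        std::cout << (lossy == exact) << std::endl; // prints 0
    }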

On Sat, 9 Jun 2012, Paul A. Bristow wrote:
License and backend ================= Allowing a choice of backend has crucial license advantages, allowing the 'gold standard' optimised GMP, but also allowing the Boost licensed version with remarkably little loss of speed. (I note the price we are paying for the commercial greed and patent abuse that has made maintaining the GPL license status of GMP such a quasi-religious issue).
Let us try to be factual in a section called "License": the license of GMP is *L*GPL. I know everything GNU is anathema in BSD-land, but it does make a difference. (end of digression, please no license flame war) -- Marc Glisse

Paul: Thanks for taking the time to make such a detailed review! On Sat, Jun 9, 2012 at 2:40 AM, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
<snip>

I'd like to encourage those interested in John and Christopher's Multiprecision library to take a close look at it at some point this week, provide suggestions for improvement and/or discuss any potential issues, and submit a formal review. Thanks! Original announcement below. On Fri, Jun 8, 2012 at 7:28 AM, Jeffrey Lee Hellrung, Jr. < jeffrey.hellrung@gmail.com> wrote:
<snip>

Hi, here is my review, which is a summary of the discussion we have had over the past week. The scope of the library must be stated more clearly; I don't think the library is able to manage fixed-point arithmetic (at least as I understand it). The reasons why mixed arithmetic is not provided must be added to the scope. The disadvantages of using mp_number instead of the underlying back-end, when expression templates are not used and the back-end is lightweight, should also be documented.
What is your evaluation of the design?
While I was expecting a more open design/interface, I think that the current design is good enough for some applications.
* Separating the expression template and the specific number representations (front-end/back-end) is a good design decision.
* Mixing back-ends: It is unfortunate that the front-end doesn't take care of mixed arithmetic yet. I would expect the result of unary operator-() to always be signed. This operation is not defined in the front end, nor is unary operator+(). I would expect the result of binary operator-() to always be signed. Adding an mp_uint128_t and an mp_int128_t and assigning the result to an mp_int256_t needs two explicit conversions. This interface limits some possible optimizations. I think it should be possible to mix back-ends without too much complexity, at least when expression templates are not used. When ETs are used, the ET needs to take as parameters the back-ends of the parameters and the result. If mixing independent back-ends (that is, BEs provided by third parties) is not desired, I think that mixing BEs of the same "family" (that is, provided by the same library/developer) is a must in some domains. The back-end could contain this meta-data in some way (a typedef).
* Back-end requirements: It should be clear what the back-end requirements are: which are mandatory, which are used only if a specific operation of the front-end is used, and which are used only if present, as an optimization. Each back-end should document the requirements it satisfies.
* I would expect the library to provide a back-end for representing N-bit integers with exactly (N/8)+1 bytes or something like that.
* And also an overflow-aware back-end for fixed precision integers.
* Allocators and precision are orthogonal concepts, and the library should allow an allocator to be associated with a fixed precision. Adding a 3rd parameter to state whether it is fixed or arbitrary precision should help in this concern.
* Conversions: IIUC, convert_to is used to emulate C++11 explicit conversions on C++98 compilers. Could the explicit conversion operator be added on compilers supporting it? The front-end should make the difference between implicit and explicit construction. On C++11 compilers providing explicit conversion, couldn't the convert_to function be replaced by an explicit conversion operator?
* Shift operations: An overload of the shift operations taking an unsigned should be provided (in addition to the signed one) in order to avoid the sign check.
* Uses of bool: The use of bool in template parameters could be improved by the use of an enum class, which would be more explicit, e.g.

    enum class expression_template {disabled, enabled};
    enum class sign {unsigned_, signed_};

* Default for the ExpressionTemplates parameter: The ExpressionTemplates parameter could be defaulted to e.g.

    ExpressionTemplates = expression_template_trait<Backend>::value

where expression_template_trait could be defined with a defaulted value as

    template <typename Backend>
    struct expression_template_trait
    {
        static const expression_template value = Backend::expression_template_value;
    };

Another defaulted value would also be possible, such as disabled (or enabled) by default:

    template <typename Backend>
    struct expression_template_trait
    {
        static const expression_template value = expression_template::disabled;
    };

But I think a default expressed by the backend is a better default.
* noexcept: The library interface should use the noexcept (BOOST_NOEXCEPT, ...) facilities.
* constexpr: It is unfortunate that the generic mp_number front end can not make use of constexpr, as not all the backends can ensure this. It will be worth seeing, however, how far we can go.
* Literals: The library doesn't provide any kind of literals. I think that the mp_number class should provide a way to create literals if the backend is able to. For example, to_nonnegative<2884,-4>() would produce a nonnegative fixed-point constant with a range and resolution just sufficient to hold the value 2884*2^-4. Adding a move constructor from the backend could maybe help:

    constexpr mp_number<BE>::mp_number(const BE&&);
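On the literals point, C++11 raw literal operators already allow something along these lines; a hedged sketch (the _df50 suffix is invented for illustration and is not part of the proposed library), routing the token through the string constructor so no digits pass through a double:

    #include <boost/multiprecision/cpp_dec_float.hpp>

    using boost::multiprecision::cpp_dec_float_50;

    // Raw literal operator: receives the literal's spelling as a string,
    // so all 50 digits survive.
    cpp_dec_float_50 operator "" _df50(const char* s)
    {
        return cpp_dec_float_50(s);
    }

    // usage:
    // cpp_dec_float_50 pi = 3.14159265358979323846264338327950288419716939937510_df50;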
What is your evaluation of the implementation?
The performance of mp_number<a_trivial_adaptor<float>, false> with respect to float, and of mp_number<a_trivial_adaptor<int>, false> with respect to int, should be given to show the cost of using the generic interface. Boost.Move should be used to emulate move semantics on compilers that don't provide it.
What is your evaluation of the documentation?
The documentation is quite good, but there are some issues that need to be taken into account, in particular:
* The documentation should contain Throws specifications on the mp_number operations and the backend requirements.
* The tutorial should add more examples concerning implicit and explicit conversions.
* The reference should state clearly which kind of rounding is applied when converting between types; this must be documented.
* The reference should contain the type of boost::multiprecision::number_category<B>::type for all the provided backends; and why not also add B::signed_types, B::unsigned_types, B::float_types, B::exponent_type?
* The reference section must contain the relation between files and what is provided by them.
* The documentation must explain how move semantics helps in this domain and what the backend needs to do to profit from this optimization.
* Deviations from built-in behavior: the library says that the behavior follows as far as possible the behavior of the built-in types. The documentation should state clearly (maybe in a specific section) the specific differences.
What is your evaluation of the potential usefulness of the library?
This library provides a uniform interface for third-party multiprecision number libraries, adding some optimization using expression templates, which is in itself quite useful.
Did you try to use the library? With what compiler? Did you have any problems?
I tried it with some compilers and found just some errors with clang-2.9. Note that the errors are not present with clang-3.0.
How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
An in-depth study of the interface, to report what is missing in order to use it for my fixed-point library. I have not really taken a look at the implementation.
Are you knowledgeable about the problem domain?
I don't know too much about floating point. For the time being, I'm only interested in fixed-point types.
And, most importantly, please explicitly answer the following question:
Do you think the library should be accepted as a Boost library?
Even if the current state of the library doesn't allow me to use it for my fixed-point library, I guess that the authors will add the needed features before I could provide similar features myself. Anyway, the library provides enough features that are missing in Boost, so YES, this library should be accepted.
By order of priority, first all the documentation issues:
* Document the size requirements of fixed precision ints. Yes, this should be in the documentation before release. A fixed precision integer backend reducing the size as much as possible would be welcome.
* Be a bit clearer on the effects of the sign-magnitude representation of cpp_int - min == -max etc. Yes.
* Document cpp_dec_float precision, rounding, and exponent size. Yes, this will of course be welcome.
* Can we be clearer in the docs that mixed arithmetic doesn't work? I would add this even to the scope. And I think this is a must-have feature for future versions of the library.
* Document round functions' behaviour better (they behave as in C99). Yes.
* Document limits on the size of cpp_dec_float. Yes.
Next, some not too difficult features:
* Can we differentiate between explicit and implicit conversions in the mp_number constructor (may need some new type traits)? This is a must-have, as the backend could have different expectations.
* Make fixed precision orthogonal to the Allocator type in cpp_int. Possible solution - add an additional MaxBits template argument that defaults to 0 (meaning keep going till no more space/memory). Yes, this should be taken into account.
* Can the non-expression-template operators be further optimised with rvalue reference support? I think this must be analyzed.
* Can any functions - in particular value initialization - be made constexpr? Unfortunately I think that the generic mp_number class can not take care of this completely, but it should where possible.
I don't know enough about the domain to have an opinion on the following:
* Add support for fused multiply-add (and subtract). GMP mpz_t could use this.
* Document std lib function accuracy.
* Make the exponent type for cpp_dec_float a template parameter, maybe including support for big-integer exponents. Open question - what should be the default - int32_t or int64_t?
* Can ring types (exact floating point types) be supported? The answer should be yes, but someone needs to write it.
* Should there be a choice of rounding mode (probably MPFR specific)?
Good luck, Vicente
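The MaxBits suggestion above amounts to a template shape roughly like the following (purely illustrative; the name and parameter order are invented here, not the library's):

    #include <memory>

    // MaxBits == 0 would mean "arbitrary precision: keep allocating as needed";
    // any other value caps the integer at a fixed precision, with the
    // Allocator still available for the capped-but-large case.
    template <unsigned MaxBits = 0,
              bool Signed = true,
              class Allocator = std::allocator<unsigned long> >
    struct cpp_int_backend_sketch;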

Vicente, Many thanks for the review, some specific comment follow...
I would expect the result of unary operator-() to always be signed. This operation is not defined in the front end, nor is unary operator+().
The operators are defined as non-members.
I would expect the result of binary operator-() to always be signed.
That's probably possible, although it makes the behaviour noticeably different from built-in integers.
Adding an mp_uint128_t and an mp_int128_t and assigning the result to an mp_int256_t needs two explicit conversions. This interface limits some possible optimizations.
I'll do what I can in this area, but the code is already pretty complex and I worry about it becoming untestable, as well as about the issues in deciding what the "right thing to do" is for mixed arithmetic.
* I would expect the library to provide a back-end for representing N-bit integers with exactly (N/8)+1 bytes or something like that.
* And also an overflow aware back-end for fixed precision integers.
OK, I'll see what I can do.
* shift operations
An overload of the shift operations taking an unsigned should be provided (in addition to the signed one) in order to avoid the sign check.
It's there already - the backend is always fed an unsigned value, and mp_number only range checks the value provided if it's signed.
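A small sketch of the dispatch John describes (a hypothetical helper, not the library's actual code): the range check applies only to signed counts and disappears entirely for unsigned ones.

    #include <stdexcept>
    #include <type_traits>

    // Signed shift counts are range checked...
    template <class I>
    typename std::enable_if<std::is_signed<I>::value, unsigned long>::type
    shift_count(I n)
    {
        if (n < 0)
            throw std::range_error("negative shift count");
        return static_cast<unsigned long>(n);
    }

    // ...unsigned counts are passed straight through to the backend.
    template <class I>
    typename std::enable_if<std::is_unsigned<I>::value, unsigned long>::type
    shift_count(I n)
    {
        return static_cast<unsigned long>(n);
    }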
* noexcept:
The library interface should use the noexcept (BOOST_NOEXCEPT, ...) facilities.
There are some interfaces where that's trivial (for example the operator overloads returning an expression template). For mp_number's member functions, I don't know enough about noexcept in non-trivial situations to know if it's possible (can a template member function be noexcept when whether it throws or not depends on a dependent type?). There are also plenty of functions which should be noexcept as they only call external C library functions, but probably can't be marked as such without modifying third party headers (gmp.h for example).
* constexpr:
It is unfortunate that the generic mp_number front end can not make use of constexpr, as not all the backends can ensure this. It will be worth seeing, however, how far we can go.
I think the functions that return an expression template can be constexpr. Will investigate other uses.
* literals
The library doesn't provide any kind of literals. I think that the mp_number class should provide a way to create literals if the backend is able to. For example, to_nonnegative<2884,-4>() would produce a nonnegative fixed-point constant with a range and resolution just sufficient to hold the value 2884*2^-4.
Again, I don't yet know enough about constexpr in non-trivial use cases to know if we can do this. Will investigate.
Adding a move constructor from the backend could maybe help.
constexpr mp_number<BE>::mp_number(const BE&&);
Will add.
Boost.Move should be used to emulate move semantics on compilers that don't provide it.
Will investigate. Regards, John.

Hello, I just found this unsent draft. I don't remember what else I wanted to say... On Sun, 17 Jun 2012, John Maddock wrote:
* noexcept:
The library interface should use the noexcept (BOOST_NOEXCEPT, ...) facilities.
There are some interfaces where that's trivial (for example the operator overloads returning an expression template). For mp_number's member functions, I don't know enough about noexcept in non-trivial situations to know if it's possible (can a template member function be noexcept when whether it throws or not depends on a dependent type?)
noexcept(noexcept(expression)) works great.
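For example (a minimal sketch, with 'wrapper' standing in for mp_number and Backend a dependent type):

    #include <utility>

    template <class Backend>
    struct wrapper
    {
        Backend value;

        // noexcept exactly when assignment of the dependent Backend type
        // is itself noexcept.
        wrapper& operator=(const wrapper& other)
            noexcept(noexcept(std::declval<Backend&>() = std::declval<const Backend&>()))
        {
            value = other.value;
            return *this;
        }
    };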
There are also plenty of functions which should be noexcept as they only call external C library functions, but probably can't be marked as such without modifying third party headers (gmp.h for example).
GMP functions that are noexcept are marked as such. Functions that may allocate may throw and thus can't be noexcept (although a macro that says: "assume there is enough memory" might make sense).
Adding a move constructor from the backend could maybe help: constexpr mp_number<BE>::mp_number(const BE&&);
I am not sure I understand what that's good for?
Just to mention a few other things:
For mixed arithmetic, having operations like

number<common_type<A,B>::type> operator+(number<A>, number<B>)

that call eval_add(common_type<A,B>::type&, A, B) seems good. If A==B, it doesn't change anything. If a backend is implicitly convertible to another, operations just work. And the backend writer can still overload eval_add to optimize some operations. You could even consider turning that into a result_trait<T...,op> that defaults to common_type<T...>::type but can be specialized by backend writers, and determines mp_exp<...>::result_type (I notice you already have combine_expression, which could be extended).
Having the public type (mp_number) and the expression type (mp_exp) be specializations of the same template could save a few lines of code?
You seem to go further than I did in gmpxx in removing temporaries by reusing the result: you check for aliasing within any expression, whereas I stopped at depth one (I had a prototype for arbitrary depth, and it simplified the code, but I was scared of the potential quadratic cost; I probably shouldn't be). Although I am surprised that in a=expr1+expr2, when a appears in expr1 but not expr2, you don't try to take advantage of it (not that it would help much now, but it might if you eventually add FMA).
Good luck with the reviews, -- Marc Glisse
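A toy rendering of the common_type idea above (every name here is a stand-in for illustration; eval_add and backend() mirror the thread's vocabulary, but this is not the library's code):

    #include <type_traits>

    template <class Backend>
    struct number_sketch
    {
        Backend data;
        Backend& backend() { return data; }
        const Backend& backend() const { return data; }
    };

    // Default addition over any backends supporting r = a + b; a real
    // backend author would overload this for his own types.
    template <class R, class A, class B>
    void eval_add(R& r, const A& a, const B& b) { r = a + b; }

    // The result backend of mixed arithmetic defaults to common_type,
    // but remains specializable by backend writers.
    template <class A, class B>
    struct result_trait { typedef typename std::common_type<A, B>::type type; };

    template <class A, class B>
    number_sketch<typename result_trait<A, B>::type>
    operator+(const number_sketch<A>& a, const number_sketch<B>& b)
    {
        number_sketch<typename result_trait<A, B>::type> r;
        eval_add(r.backend(), a.backend(), b.backend());
        return r;
    }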

On Sat, 23 Jun 2012, Marc Glisse wrote:
I don't remember what else I wanted to say...
Ah, I probably wanted to mention the possibility of reusing the same temporaries in two subtrees. When evaluating a=expr1+expr2, expr1 and expr2 are evaluated independently and may both create temporaries. Having them share these temporaries is possible, but complicated enough that I haven't bothered yet. -- Marc Glisse

On Sat, 23 Jun 2012, Marc Glisse wrote:
I don't remember what else I wanted to say...
One other thing: I am not sure where it is said in the doc that the expression template mechanism reassociates operations (turning a+(b+c) into (a+b)+c), which may be undesirable for some floating point backends. Turning the bool that determines the use of expression templates into an enum with more details might be too painful, but adding a few sentences to the doc would already help. -- Marc Glisse
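A plain-double illustration of why reassociation is observable in floating point (the same concern applies to any rounded backend):

    #include <cstdio>

    int main()
    {
        double a = 1e16, b = -1e16, c = 1.0;
        // b + c rounds back to -1e16 at double precision, so the two
        // groupings give different answers: prints "1 0".
        std::printf("%g %g\n", (a + b) + c, a + (b + c));
    }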

A couple more remarks (sorry for sending comments as they come):
- Is mp_int128_t an alias for __int128 when gcc provides it? (Or for long long on platforms where it is 128 bits long.)
- Boost contains a library (Proto) for writing expression templates. Not eating your own dog food always sends a strange signal. Maybe a couple of words in the doc could help with that?
-- Marc Glisse

Thanks for your detailed and clear review, Vicente.
Hi, here is my review, which is a summary of the discussion we have had over the past week.
The scope of the library must be stated more clearly; I don't think the library is able to manage fixed-point arithmetic (at least as I understand it).
We can extend the docs, also as per your list in your review. <snip>
Even if the current state of the library doesn't allow me to use it for my fixed-point library, I guess that the authors will add the needed features before I could provide similar features myself.
This is something that I would really like to see work---even if fixed-point is slow in multiprecision. It helps prove the concept of generic mathematical programming in C++. It would also show that we boosters act as a coherent development community, attempting to obtain uniform treatment for non-built-in types. Perhaps we can work together and get your requirements done so you can plug your future fixed-point into multiprecision. Thanks again. Best regards, Chris.

The review of Multiprecision is due to end on Sunday, June 17th, 2012, but AFAIK we only have two (very detailed and informative!) formal reviews so far, which I fear is not enough to establish community consensus for acceptance. I want to encourage more reviews from the community. If anyone would like to extend the review deadline to give himself or herself more time to look at the library, discuss it, and/or write a review, please let me know! Thanks, - Jeff (Original review announcement below.) On Fri, Jun 8, 2012 at 7:28 AM, Jeffrey Lee Hellrung, Jr. < jeffrey.hellrung@gmail.com> wrote: [...]

On 6/16/2012 3:08 PM, Jeffrey Lee Hellrung, Jr. wrote:
<snip>
I would like to see the review period extended by at least another full week.

<snip>
I would like to see the review period extended by at least another full week.
Edward, would you have time to take part in the review then? Thank you, Chris.

On 6/17/2012 6:16 AM, Christopher Kormanyos wrote:
<snip>
Edward, would you have time to take part in the review then?
Yes.

<snip>
Thank you, Edward.

On Sat, Jun 16, 2012 at 8:38 PM, Edward Diener <eldiener@tropicsoft.com>wrote:
On 6/16/2012 3:08 PM, Jeffrey Lee Hellrung, Jr. wrote:
<snip>
Edward and others, I got the green light from Ron Garcia to extend the review period until next Sunday, June 24th, 2012. Hopefully this gives sufficient time for additional members to submit formal reviews. - Jeff

<snip>
Relating to that, it would be useful to know if the lack of reviews is due to:
1) No time.
2) No interest.
3) Don't like the library but are too polite to say so ;-)
4) Doesn't address the specific use case you're interested in.
Many thanks, John.

<snip>
I really did expect a better turnout. I suppose everyone is just busy. Then again, the European Soccer Championship is also on. Did we get the wrong name? Did we focus too much on generics? Did we fail to address bare metal types like UINT128? Are big numbers just too much like school and doing homework? If we got it wrong, please contribute to the process of getting it right. We do, however, actually need to work together on a consensus. Boost truly needs support for large integers, rationals and floats. If there is anything I can do to reduce the administrative burden such as providing more examples, etc., please let me know. Best regards, Chris.

<snip>
I have written extended integer classes which have been used in commercial software, and I have read all the discussion on multiprecision. Therefore I am sure I am not qualified to review the submission in any detail. However, if I have a vote, it is for acceptance. p.s. Add to John's list: 5) Know enough to know how little one knows about the problem domain.

On Sun, Jun 17, 2012 at 6:25 AM, Keith Burton <kb@xtramax.co.uk> wrote:
<snip>
Sounds like you're being too humble! If you get a chance, despite your comments above, I think we would be collectively interested in your thoughts regarding the proposed Multiprecision library within the context of your past experience with "big integer" types. (As review manager, it's my job to do a little prodding here and there.) - Jeff

On Jun 17, 2012, at 2:00 AM, John Maddock wrote:
<snip>
I know very little about this domain and don't currently have a need for it, but I've pinged a few coworkers who may be interested in this area. I have a couple of questions though. As one who runs tests, I am mildly curious how big the library footprint is (disk space, peak memory, compile times, runtimes, ...). Many ET libraries can't compile in the 5 minutes nightly testing provides; will this be an issue for your library? Is the testing infrastructure set up to automatically locate dependent libraries (GMP, MPFR, ...) and enable those tests if found, or is this a manual configuration step? -- Noel

<snip>
Both compile times and runtimes are an issue; that said, I am aware of the testing limitations and have tried to live within them - exactly how successful that will prove to be will no doubt become apparent if the library heads off to the test runners! As for dependencies - those should be found automatically if they're in the compiler's search paths already. John.

On 6/17/2012 4:00 AM, John Maddock wrote:
The review of Multiprecision is due to end on Sunday, June 17th, 2012, but AFAIK we only have two (very detailed and informative!) formal reviews so far, which I fear is not enough to establish community consensus for acceptance. I want to encourage more reviews from the community. If anyone would like to extend the review deadline to give himself or herself more time to look at the library, discuss it, and/or write a review, please let me know!
Relating to that it would be useful to know if the lack of reviews is due to:
1) No time. 2) No interest.
I am very interested in the library but have not had the time to try it out. Next weekend I can take a number of hours to do so.

AMDG

Here are my comments from reading through the tutorial.

Documentation:

Introduction:
* "precision may be arbitrarily large ..., fixed ..., or a variable ..." You shouldn't combine adjective, adjective, noun with "or" like this. The elements should be the same part of speech.
* "An expression-template-enabled front-end mp_number" "An expression-template-enabled front-end*,* mp_number" A non-restrictive appositive should have a comma.
* "Separation of ... but provides Boost license alternatives ..." The subject of "provides" is "separation," which doesn't make a whole lot of sense.
* "Which is to say some back-ends rely on 3rd party libraries ..." Is it really necessary to restate the previous sentence?
* "typedef mp::mp_number<mp::mpfr_float_backend<300> >  my_float;" Is there a reason to have two spaces before "my_float?"
* "Note that mixing arithmetic operations using types of different precision is strictly forbidden:" The example uses cpp_int, which hasn't been introduced before. Does this rule apply to all backends? If so, it would flow better if you continued to use mpfr. Also, if the rule is anything more complex than "mixing different specializations of mp_number is illegal," then I'd like to see a link to a more detailed discussion of what kind of mixing is legal and what kind of mixing isn't.

Expression Templates:
* "For example, lets suppose we're evaluating a polynomial via Horners method..." s/lets/let's/ s/Horners/Horner's/
* "... Horners method, something like this:" This doesn't parse. "like," which could have introduced a dependent clause, modifies "something," which doesn't have any legitimate place in the sentence.
* "... the mpfr_class C++ wrapper for MPFR - then this expression ..." Is there any particular reason to use a hyphen here? Normally, it should be a comma.
* "Had we used ... things would have been ... and no less that 24 temporaries are created" s/that/than/ Also, don't mix moods and tenses like this. (i.e. it should read "... temporaries would have been created")
* "... rather than the number of temporaries directly" "directly" is an adverb, but it's being used to modify a noun "number." You can fix this by inserting a gerund: "rather than measuring the number of temporaries directly."
* "... directly, note also that the mpf_class ..." Run-on sentence.
* "Finally if we use this library with " Comma after "Finally".
* "Of course, should you write: x = sin(x); Then we clearly ..." "Then" shouldn't be capitalized, because it only introduces an independent clause, not a new sentence. (There are many other instances of this.)
* "... during the calculation, so then a temporary variable ..." Delete "then." It sounds awkward to me.
* "... that expression-templates are some kind of universal-panacea" "universal-panacea" doesn't need a hyphen, since it's just an adjective modifying a noun, not a compound word. Not to mention that "universal" is redundant.
* "than their simpler cousins, they're also harder" Run-on sentence.
* "Having said that, these situations don't occur that often - or indeed not at all for non-template functions" There are way too many negatives in this sentence.
* "that the lifetimes of a, b and c will outlive that" Missing comma before "and".
* "... multiplication, where as operator* can use the target ..." "where as" doesn't make sense. Try "but".
* "Even so the transformation is more efficient than creating the extra temporary variable" There should be a comma after "Even so". The transformation is not more efficient than anything. I think you mean that the transformation makes the /transformed code/ more efficient.
* "... argument, which, when set to false disables all the ..." There should be a comma after "false."
* "... between these three libraries, again, all are using ..." Run-on sentence.

General notes:
* There are a lot of places where I feel like an article would make the text read more smoothly.
* Please decide whether you want sentences starting with "For example", "Instead", etc. to have a comma. You aren't consistent.
* Maybe try to use fewer gerunds. It's starting to annoy me.

Tutorial:

Integer Types:
* "Very versatile, Boost licensed, all C++ integer type which support both " s/support/supports/
* "Very fast and efficient back-end." The redundancy doesn't do anything for me.

cpp_int:
* "When the Allocator parameter is type void, then this field" "then" should really only be used with "if".
* "Note that for arbitrary precision types ..., then this parameter ..." Remove "then".
* "For fixed precision types then this type ..." Remove "then".
* "Default constructed cpp_int_backends have the value zero." So far you've only used mp_number<cpp_int_backend<...> > and the relationship between default constructing cpp_int_backend and constructing mp_number has not been specified.
* "In might be tempting to use a 127-bit type instead ..." s/In/It/
* ".. identical (apart from the sign), where as they ..." "where as" doesn't make sense. Try "although."
* "Attempting to print negative values as either an Octal or Hexadecimal string ..." Mismatched number. (values vs. a string)
* "...a std::runtime_error being thrown, this is a direct consequence ..." Run-on sentence.

gmp_int:
* "It's also possible to access the underlying mpz_t via the data() member function of gmp_int" I'm assuming from this that mpz_int ends up inheriting data() from gmp_int? Otherwise, this wouldn't be terribly useful. Anyway, (I had a similar comment for cpp_int) I think you should either specify everything in terms of the appropriate specializations of mp_number or say somewhere early on that mp_number inherits all the behavior of the backend. Wait... I see v.backend().data() in the example.
* ".... notation for negative values, as a result performing formatted ..." Run-on sentence.

tom_int:
* "... for the builtin integer types, it should be noted ..." Run-on sentence.
* "it is a rather strange beast as it's a signed type that is not a 2's complement type" isn't this true of GMP and cpp_int as well? Why the special warning for tom_int?

Examples: Factorials:

Bit Operations:
* "... integer may be manipulated, we'll start with an often ..." Run-on sentence.
* "... just set the n'th bit in the ..." Oh, come on. I'm sure there's a way to use a proper superscript.

Floating Point Numbers:
* I'd like to see the same order as in Integer Types. (i.e. cpp_dec_float before GMP).

gmp_float:
* "... overflow or division by zero. That latter will trigger ..." s/That/The/

mpfr_float:

cpp_dec_float:
* "There is full standard library and numeric_limits support available for this type." It's better to avoid "there is." It just adds verbosity. i.e. "Full ... support is available...." or use the active voice: "This type has full support for ..."
* On a related note, what does "standard library support" mean? ... In the example, I see "log(b)." I think this terminology is imprecise. log(b) does not call a standard library function at all.
* "Narrowing conversions are truncating." This is unfortunate. I'd prefer it to round according to the current rounding mode.

Examples: Area of Circle:

Defining a Lambda Function:
* The name of the section is misleading because "Lambda" has a totally different meaning in programming. Maybe find a different example.
* "Now lets implement the ..." s/lets/let's/
* "... non-template examples, lets repeat the code ..." s/lets/let's/
* "...mixed-argument functions, here's how the new ..." Run-on sentence.
* I'm a little concerned that if the arguments of the final version, JEL5 are expression templates, they would be evaluated multiple times. Is this the case?

Calculating a Derivative:

Calculating an Integral:
* "Similar to the ... in a similar manner" Don't use similar twice.
* You should probably explain that the algorithm uses the trapezoid rule and iteratively cuts the step size in half.
* It would be nice if there were a way to specify the minimum number of steps. Otherwise, it's tricky in cases like: \int_0^{2\pi} sin(x) dx. (Of course, this is only example code...)
* "... how the function can be called, we begin by defining ..." Run-on sentence.

Polynomial Evaluation:

Rational Number Types:

gmp_rational:

cpp_rational:

tommath_rational:

Use With Boost.Rational:

rational_adapter:
* "class rational_adpater;" Typo.

Constructing and Interconverting Between Number Types:
* "Any number type will interoperate with the builtin type in arithmetic expressions:" There's more than one builtin type. Maybe "any builtin arithmetic type?"
* "An mp_number can be converted to any built in type, via the convert_to member function" In C++11, can we use an explicit conversion operator?

Generating Random Numbers:
* boost/multiprecision/random.hpp should go away. I'll try to work out the changes needed to make Boost.Random work directly.

Primality Testing:
* "... test for primality, if the result is false ..." Run-on sentence.
* "... when producing random primes then you should ..." Remove "then".

In Christ,
Steven Watanabe

Here are my comments from reading through the tutorial.
Many thanks for the detailed comments.
* "It's also possible to access the underlying mpz_t via the data() member function of gmp_int" I'm assuming from this that mpz_int ends up inheriting data() from gmp_int? Otherwise, this wouldn't be terribly useful. Anyway, (I had a similar comment for cpp_int) I thing you should either specify everything in terms of the appropriate specializations of mp_number or say somewhere early on that mp_number inherits all the behavior of the backend. Wait... I see v.backend().data() in the example.
Right, the backend type is accessible via the backend() member function.
* "it is a rather strange beast as it's a signed type that is not a 2's complement type" isn't this true of GMP and cpp_int as well? Why the special warning for tom_int?
Libtommath's bitwise operations behave differently from a 2's complement type. In contrast, gmp and cpp_int "simulate" 2's complement behaviour when performing bitwise ops on negative numbers. Note that this is an area where the behaviour of builtin types is undefined in the std anyway; there is, however, a general expectation that they will actually behave in a certain way.
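For example (a sketch with builtin ints; the claim in the comments about the multiprecision backends is my reading of the description above, not taken from the docs):

#include <iostream>

int main()
{
   // With builtin 2's complement ints:
   //   -5  = ...11111011  (infinite sign extension)
   //    12 =    00001100
   //   AND =    00001000  = 8
   std::cout << (-5 & 12) << '\n'; // prints 8
   // Backends that "simulate" 2's complement (gmp_int, cpp_int) aim to
   // give the same answer for negative multiprecision operands, whereas
   // libtommath's sign-magnitude bitwise operations would not.
   return 0;
}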
* "... just set the n'th bit in the ..." Oh, come on. I'm sure there's a way to use a proper superscript.
I'm not sure I follow - do you mean make mp_number subscriptable to access the individual bits? Obviously that's possible to do; I'd be interested in others' thoughts as to whether it's the correct interface choice.
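In the meantime, setting the n'th bit is already expressible with plain shifting - a sketch, assuming the cpp_int typedef and header layout from the reviewed sandbox (a dedicated bit-fiddling interface, if added, would supersede this):

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

// Set bit n by OR-ing in a shifted one. The "1" is widened to the
// multiprecision type first, so the shift can't overflow a builtin int.
template <typename Int>
void set_nth_bit(Int& x, unsigned n)
{
   x |= Int(1) << n;
}

int main()
{
   boost::multiprecision::cpp_int x = 0;
   set_nth_bit(x, 200);
   std::cout << x << '\n'; // prints 2^200
   return 0;
}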
* On a related note, what does "standard library support" mean? ... In the example, I see "log(b)." I think this terminology is imprecise. log(b) does not call a standard library function at all.
I'll try and be more specific - it means that all the functions (from cmath) and traits (numeric_limits) that operate on builtin floating point types are also defined for these types.
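For example (a sketch; the float50 typedef is mine, spelled out via mp_number<cpp_dec_float<50> > as in the docs, and the header path is as in the sandbox):

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>
#include <limits>

int main()
{
   typedef boost::multiprecision::mp_number<
      boost::multiprecision::cpp_dec_float<50> > float50;

   float50 b = 2;
   // The cmath-style names (log, sin, pow, ...) are found for the
   // multiprecision type by argument-dependent lookup - no std:: call:
   std::cout << log(b) << '\n';
   // ...and numeric_limits is specialized too:
   std::cout << std::numeric_limits<float50>::digits10 << '\n';
   return 0;
}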
* "Narrowing conversions are truncating." This is unfortunate. I'd prefer it to round according to the current rounding mode.
Chris will have to answer that one.
Defining a Lambda Function:
* The name of the section is misleading because "Lambda" has a totally different meaning in programming. Maybe find a different example.
Point taken - shame, it made a nice simple example :-(
* I'm a little concerned that if the arguments of the final version, JEL5 are expression templates, they would be evaluated multiple times. Is this the case?
My bad, yes that would be the case. Will fix the example.
* "An mp_number can be converted to any built in type, via the convert_to member function" In C++11, can we use an explicit conversion operator?
Not yet, but it's already been asked for and is trivial to add.
Generating Random Numbers:
* boost/multiprecision/random.hpp should go away. I'll try to work out the changes needed to make Boost.Random work directly.
I hoped you'd say that ;-) However, I've deliberately held off submitting patches or suggesting any changes until/unless the interface on this library is accepted. No point in chasing a moving target. Just consider the code in random.hpp a proof of concept for now - I figure folks would complain about it not working if I didn't provide it. Regards, John.

<snip>
* "Narrowing conversions are truncating." This is unfortunate. I'd prefer it to round according to the current rounding mode.
Chris will have to answer that one.
Yes. I agree. I would also prefer cpp_dec_float to round when converting to lesser digits or built-in types. Unfortunately, though, cpp_dec_float in its current (*well-tested*) state does not support rounding. It only rounds to nearest when creating an output string. The cpp_dec_float class uses an internal base-10 representation. This has some advantages, such as providing an intuitive form allowing for ease of visualization with a debugger. There are, however, unfortunate drawbacks with a base-10 representation. In particular, conversions between base-10 and base-2 representations are difficult to implement. Several reviewers have been disappointed with my back-end. (That's a funny sentence, isn't it?) By that I mean disappointed in the lack of rounding thereof. If Boost.Multiprecision gets accepted and if I can find a reliable, efficient way to round to nearest when performing narrowing conversions with cpp_dec_float, then I will implement this rounding. Brute force conversion to string would be the last resort, but atrociously inefficient---not my first choice. John, could you please add this to the ToDo list? <snip>
Defining a Lambda Function:
* The name of the section is misleading because "Lambda" has a totally different meaning in programming. Maybe find a different example.
Point taken, shame, it made a nice simple example :-(
The full name of the function is the Jahnke-Emden Lambda function. Maybe this is enough for the documentation. Alternatively, there is also the classic "sinc" function, defined as sinc(x) = sin(x) / x. Calculating the sinc function uses a very simple strategy: a Taylor series for small arguments and straightforward evaluation for large arguments. The example would, therefore, remain terse and easy to understand. I could probably come up with another pithy, witty example. I would, however, need more time to conjure one up. Best regards, Chris.
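A sketch of what the sinc example might look like (untested; the cutoff and the truncation order of the series are illustrative, not tuned):

#include <cmath>
#include <iostream>

// sinc(x) = sin(x)/x: Taylor series near zero (where x == 0 would divide
// by zero and sin(x)/x loses accuracy), direct evaluation elsewhere.
template <typename T>
T sinc(const T& x)
{
   using std::fabs;
   using std::sin; // lets ADL find multiprecision overloads as well
   if(fabs(x) < T(1) / 1000)
   {
      // sin(x)/x = 1 - x^2/6 + x^4/120 - ...
      const T x2 = x * x;
      return 1 - x2 / 6 + (x2 * x2) / 120;
   }
   return sin(x) / x;
}

int main()
{
   std::cout << sinc(0.0) << ' ' << sinc(3.14159) << '\n'; // 1 and ~0
   return 0;
}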

Here are my comments from reading through the tutorial.
Documentation:
<snip Steven's comments on docs>
In Christ, Steven Watanabe
Thank you for your detailed comments and suggestions, Steven. John, should I try to look into Steven's documentation issues during the upcoming weekend or would you prefer to keep going on that? <snip> Best regards, Chris.

Thank you for your detailed comments and suggestions, Steven.
John, should I try to look into Steven's documentation issues during the upcoming weekend or would you prefer to keep going on that?
If you've got some time, then yes by all means go for it! Thanks, John.

Thank you for your detailed comments and suggestions, Steven.
John, should I try to look into Steven's documentation issues during the upcoming weekend or would you prefer to keep going on that?
If you've got some time, then yes by all means go for it! Thanks, John.
OK. I'll try to do it. Best regards, Chris.

Thank you for your detailed comments and suggestions, Steven.
John, should I try to look into Steven's documentation issues during the upcoming weekend or would you prefer to keep going on that?
If you've got some time, then yes by all means go for it! Thanks, John.
I worked over the documentation based on Steven's suggestions and incorporated some editing of my own---primarily in the first portions of the text. There can be further improvements. For example, I suggest adding a table, which is in TBD-form in the text. John, with your permission, may I commit the changes to the documentation? When building the documentation, I only get the XML target. I am such a beginner, it hurts! Is the warning about "standalone" below important?

warn: Unable to construct ./standalone
warn: Unable to construct ./standalone
...patience...
...patience...
...found 1705 targets...
...updating 1 target...
quickbook.quickbook-to-boostbook bin\msvc-10.0\debug\big_number.xml
Generating Output File: bin\msvc-10.0\debug\big_number.xml
...updated 1 target...

Best regards, Chris.

Thank you for your detailed comments and suggestions, Steven.
John, should I try to look into Steven's documentation issues during the upcoming weekend or would you prefer to keep going on that?
If you've got some time, then yes by all means go for it! Thanks, John.
John, with your permission, may I commit the changes to the documentation?
Sure - but please check the built html looks OK first ;-)
When building the documentation, I only get the XML target. I am such a beginner, it hurts! Is the warning about "standalone" below important?
Yup. Do you have xsltproc and docbook configured in your user-config.jam? If not, please see: https://svn.boost.org/trac/boost/wiki/BoostDocs/GettingStarted (note I've never tried this with the cygwin toolchain, but working with the Win32 binaries works OK). HTH, John.
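The relevant user-config.jam entries look something like this (paths are illustrative; the wiki page above has the authoritative instructions):

# In user-config.jam - tell Boost.Build where the doc tools live:
using xsltproc : "C:/xml/bin/xsltproc.exe" ;
using boostbook
    : "C:/xml/docbook-xsl"    # DocBook XSL stylesheets
    : "C:/xml/docbook-dtd" ;  # DocBook DTD
using quickbook ;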

Here is my late review. Apologies for the delays.

What is your evaluation of the design?

Boost.Multiprecision is interesting as it brings Boost into an area of computer science where it is not very present yet. Providing a high-level fixed/arbitrary precision integer/float type library to numerical scientists is a great move. The library is easy to use and behaves as one could expect when looking at the function name or operator. One can thus easily move any mathematical calculation from a usual POD double to a cpp_dec_float or mpf_float. A beginner will understand rapidly what a given piece of code using this library does. I especially like the wrapper around GMP, which is a standard among numerical scientists. The conversions are however sometimes tricky to understand, but I do not know if there is a solution to this issue which would satisfy everyone in every situation.

What is your evaluation of the implementation?

As already mentioned, I really like the ability to use pre-existing libraries such as GMP which are well-known and well-tested. The speed difference is really minimal and is probably worth the comfort. All functions do what you expect, and the list of supported functions and constants is really comprehensive. Everyday use should not be a problem. The support of numeric_limits is also a good thing for re-use in other libraries. The decomposition into front-end and back-end is particularly good for future extensions. The optional use of expression templates is a great feature. The proposed improvements present on the TODO list aiming at improving the speed of the implementation will reduce the difference between cpp_int and the GMP equivalent even further.

What is your evaluation of the documentation?

The documentation is really comprehensive and presents interesting examples. The tricky points (conversions, ...) are discussed, and a beginner should be able to write code without losing time reading an obscure documentation.

What is your evaluation of the potential usefulness of the library?

Useful to anyone needing high precision mathematical computations.

Did you try to use the library? With what compiler? Did you have any problems?

Used with GCC 4.6 and ICC 12.1 to test my own library proposal (complex numbers). No problems nor warnings. The results obtained always agreed with the values given by Maple. As mentioned by others, one must still be careful with conversions, but any user of extended precision libraries should be aware of these caveats anyway. Furthermore, these are well documented. It was easy to use the cpp_dec_float or gmp_float classes in the context of complex numbers.

How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?

Good effort at understanding and testing most of the floating point library in the context presented above. Tested the integer types for this review.

Are you knowledgeable about the problem domain?

Yes. See the extended complex number class proposal.

Do you think the library should be accepted as a Boost library?

If boost wants to move into the realm of extended precision numbers, then yes.

Lastly, please consider that John and Christopher have compiled a TODO list [1] based on pre-review comments. Feel free to comment on the priority and necessity of such TODO items, and whether any might be show-stoppers or warrant conditional acceptance of the library.

In my opinion, these are not show-stoppers. Adding more back-ends is always good, but the main library in this field (GMP) can already be used. So this should not be a reason to reject this library. The same is true of the possible improvements to cpp_int_backend. Users wanting really fast code can use the GMP back-end. Users in the real world may also make suggestions for other possible improvements or express their needs. Having a version of this library in the real world is a good thing to spot its possible weaknesses and will also allow the authors to work on the elements of their TODO list. Regards, Matthieu -- Matthieu Schaller

AMDG On 06/24/2012 04:25 PM, Matthieu Schaller wrote:
Do you think the library should be accepted as a Boost library?
If boost wants to move into the realm of extended precision numbers, then yes.
What do you mean, "If boost wants to..."? There is no way to determine this apart from the review process, /which you are a part of by submitting a review/. In Christ, Steven Watanabe

Here is my late review. Apologies for the delays.
Thank you for your review, Matthieu.
The optional use of expression template is a great feature. The proposed improvements present on the TODO list aiming at improving the speed of the implementation will reduce the difference between cpp_int and the GMP equivalent even further.
...And get the number of digits higher, if we can find the right algorithms. <snip>
It was easy to use the cpp_dec_float or gmp_float classes in the context of complex numbers.
Excellent! Thank you. We are looking forward to extended complex as well.
Do you think the library should be accepted as a Boost library?
If boost wants to move into the realm of extended precision numbers, then yes.
Matthieu, you need to say yes or no. The reviewers decide and it's Boolean. <snip>
Regards, Matthieu
Best regards, Chris.

On 6/8/2012 10:28 AM, Jeffrey Lee Hellrung, Jr. wrote:
Any review discussion should take place on the developers' list ( boost@lists.boost.org), and anyone may submit a formal review, either publicly to the entire list or privately to just myself.
As usual, please consider the following questions in your formal review:
This is my review of the multiprecision library. I am going to use the term "general type" to refer to either 'integer', 'rational number', or 'floating point' type no matter what the backend is.
What is your evaluation of the design?
The design is logical. I like the fact that various backends are supported and that there is always a Boost backend to fall back on for a general type. I also like it very much that future backends are also supported via the backend requirements.
What is your evaluation of the implementation?
I looked at the implementation from the point of view of an end-user but did not look at the details. I did not have the chance to test the implementation, so my comments are based on the doc. It seems very easy to use the implementation. The hardest part is that some of the backends have their own rules, which are not entirely consistent with other backends of the same general type (integer, rational number, or floating point). This does not occur very often, but when it does the end-user has to be aware of it. These slight inconsistencies are documented, but I would like to see an attempt to regularize them by the front-end, perhaps via a compile time trait. As a single example of this, the gmp_int backend triggers a division by 0 signal when one tries to divide the integer by 0; the tom_int raises a hardware signal on division by 0; the cpp_int throws a std::runtime_error on division by 0. I would like to see some means by which I could use any integer backend and know that a std::runtime_error would be thrown by a division by 0. It would be nice if other particularities could be regularized in a similar manner. This would make it a little easier to use the library generically. The library allows conversions between values of a general type no matter what the backend. This is good. The library allows what I think of as widening conversions, from integer to rational or float, and from rational to float. This is good. Both follow a similar idea in C++ itself. In C++ one can do a narrowing conversion if a static_cast is used; otherwise a compiler error ensues. The document also explains that a narrowing conversion will produce a compiler error. Can a static_cast be used to do a narrowing conversion? The introduction states that "mixing arithmetic operations using types of different precision is strictly forbidden". I was disappointed not to read any discussion of why this would be so. In C++ this is not the case with integer and floating point types. Considering that this library does allow conversions within the same general type and widening conversions, it would seem that doing operations with different types could be technically allowed fairly easily, by converting all values in an operation to the largest type and/or greatest precision.
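To illustrate the restriction with the mpfr example from the introduction (this is my reading of the docs' rule; the typedefs and the explicit-conversion workaround are illustrative, not quoted from the documentation):

#include <boost/multiprecision/mpfr.hpp>

namespace mp = boost::multiprecision;

typedef mp::mp_number<mp::mpfr_float_backend<50> >  float50;
typedef mp::mp_number<mp::mpfr_float_backend<100> > float100;

int main()
{
   float50  a = 2;
   float100 b = 3;
   // float100 c = a + b;        // ill-formed: mixed precision in one expression
   float100 c = float100(a) + b; // OK: convert explicitly first
   (void)c;
   return 0;
}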
What is your evaluation of the documentation?
The doc is totally adequate.
What is your evaluation of the potential usefulness of the library?
Tremendously important to C++. Although I am not a mathematician myself, my background in math and science suggests that a multiprecision library involving huge and/or highly accurate numbers is an absolute necessity for serious mathematical and scientific calculations.
Did you try to use the library? With what compiler? Did you have any problems?
I did not have time to try it out with any compiler. I originally wanted to modify some of the tests in the library so I could try them out with the compilers I have, but the tests were too complicated for me to understand easily. I am going to try to cobble together some simple tests for myself during the upcoming week, but I wanted to submit my review nonetheless before the period for it was over.
How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I put a good deal of effort into attempting to understand the pluses and negatives of the design, and into thinking about the problems involved.
Are you knowledgeable about the problem domain?
I am fairly knowledgeable about mathematics without being a mathematician or claiming expertise in any particular area of mathematics.
And, most importantly, please explicitly answer the following question:
Do you think the library should be accepted as a Boost library?
I think the library should be accepted as a Boost library. I do think more work needs to be done to make the library usable, but I have no doubt the authors of the library are capable of doing so.
Lastly, please consider that John and Christopher have compiled a TODO list [1] based on pre-review comments. Feel free to comment on the priority and necessity of such TODO items, and whether any might be show-stoppers or warrant conditional acceptance of the library.
I would like to see more work done in the two areas I mentioned: regularizing the backends and performing operations with different precisions. I do realize that accuracy is paramount when using the library, but in the tradition of C++, as long as the end-user knows any possible shortcomings of these two areas, I think they should be allowed if technically feasible. Eddie Diener

AMDG On 06/24/2012 07:58 PM, Edward Diener wrote:
As a single example of this, the gmp_int backend triggers a division by 0 signal when one tries to divide the integer by 0; the tom_int raises a hardware signal on division by 0; the cpp_int throws a std::runtime_error on division by 0. I would like to see some means by which I could use any integer backend and know that a std::runtime_error would be thrown by a division by 0.
IMO, you should just avoid division by zero period. As far as C++ itself is concerned, division by 0 is undefined behavior, so any code that wants to handle any numeric type including builtins can't assume anything about division by zero. In Christ, Steven Watanabe

As a single example of this, the gmp_int backend triggers a division by 0 signal when one tries to divide the integer by 0; the tom_int raises a hardware signal on division by 0; the cpp_int throws a std::runtime_error on division by 0. I would like to see some means by which I could use any integer backend and know that a std::runtime_error would be thrown by a division by 0.
IMO, you should just avoid division by zero period. As far as C++ itself is concerned, division by 0 is undefined behavior, so any code that wants to handle any numeric type including builtins can't assume anything about division by zero.
True, but.... I have some sympathy for Edward's POV, and we could be more newbie-friendly in that area. My original thought was "don't second guess the backend" - in other words, let the backend just do whatever it does for each operation, and don't try to manhandle different backends towards the same behavior. However, on reflection, division by zero is relatively cheap to test for compared to a full division anyway, so we could check that particular case (or add an extra template param to the backend, I guess). John.
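Something along these lines at the frontend, perhaps (an untested sketch - not committing to this interface, and the function name is mine):

#include <stdexcept>

// Portable guard: works for builtins and multiprecision types alike,
// since division by zero is undefined for the builtins anyway.
template <typename I>
I checked_divide(const I& a, const I& b)
{
   if(b == 0)
      throw std::runtime_error("division by zero");
   return a / b;
}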

On 6/25/2012 4:10 AM, John Maddock wrote:
As a single example of this, the gmp_int backend triggers a division by 0 signal when one tries to divide the integer by 0; the tom_int raises a hardware signal on division by 0; the cpp_int throws a std::runtime_error on division by 0. I would like to see some means by which I could use any integer backend and know that a std::runtime_error would be thrown by a division by 0.
IMO, you should just avoid division by zero period. As far as C++ itself is concerned, division by 0 is undefined behavior, so any code that wants to handle any numeric type including builtins can't assume anything about division by zero.
True, but.... I have some sympathy for Edward's POV, and we could be more newbie-friendly in that area.
My original thought was "don't second guess the backend" - in other words, let the backend just do whatever it does for each operation, and don't try to manhandle different backends towards the same behavior. However, on reflection, division by zero is relatively cheap to test for compared to a full division anyway, so we could check that particular case (or add an extra template param to the backend, I guess).
FYI, several years ago, implementing a Runge-Kutta integrator and solver, we found that removing the explicit divide-by-zero check and relying on the hardware signal to propagate as an exception gave a 40-60% performance improvement. Though I'm not sure how that translates to today's hardware/compilers and data types used. Jeff

FYI, several years ago, implementing a Runge-Kutta integrator and solver, we found that removing the explicit divide-by-zero check and relying on the hardware signal to propagate as an exception gave a 40-60% performance improvement. Though I'm not sure how that translates to today's hardware/compilers and data types used.
Interesting - thanks! I take it this was with native ints and not extended precision ones? I ask because typical long division algorithms are rather expensive - as in O(N^2) with a biggish constant as well. That said, values that are small enough to fit in an int, but are actually in an extended type "just in case" are an important use case, and as supporting that case efficiently still needs more work, I guess I shouldn't hamper that effort too much.... so I guess it's back to a compile time switch.... John.

On 6/25/2012 11:15 AM, John Maddock wrote:
FYI, several years ago, implementing a Runge-Kutta integrator and solver, we found that removing the explicit divide-by-zero check and relying on the hardware signal to propagate as an exception gave a 40-60% performance improvement. Though I'm not sure how that translates to today's hardware/compilers and data types used.
Interesting - thanks!
I take it this was with native ints and not extended precision ones? I
Yes, these were purely native types.
ask because typical long division algorithms are rather expensive - as in O(N^2) with a biggish constant as well.
That said, values that are small enough to fit in an int, but are actually in an extended type "just in case" are an important use case, and as supporting that case efficiently still needs more work, I guess I shouldn't hamper that effort too much.... so I guess it's back to a compile time switch....
In our case the incidence of divide by zero was a very small percentage of the total iterations. The real cost of the if(...) was during all of the non-zero iterations. As you say it could be negligible compared to the other work being done. Jeff

On 6/25/2012 4:10 AM, John Maddock wrote:
As a single example of this, the gmp_int backend triggers a division by 0 signal when one tries to divide the integer by 0; the tom_int raises a hardware signal on division by 0; the cpp_int throws a std::runtime_error on division by 0. I would like to see some means by which I could use any integer backend and know that a std::runtime_error would be thrown by a division by 0.
IMO, you should just avoid division by zero period. As far as C++ itself is concerned, division by 0 is undefined behavior, so any code that wants to handle any numeric type including builtins can't assume anything about division by zero.
True, but.... I have some sympathy for Edward's POV, and we could be more newbie-friendly in that area.
My original thought was "don't second guess the backend" - in other words, let the backend just do whatever it does for each operation, and don't try to manhandle different backends towards the same behavior.
The problem from the end-user's POV is that when different backends do different things the end user has to be aware of the backend implementation and take action accordingly. This complicates generic use of your library somewhat for an end-user.
However, on reflection, division by zero is relatively cheap to test for compared to a full division anyway, so we could check that particular case (or add an extra template param to the backend I guess).
I think choosing division by zero as an example was flawed as Steven pointed out. My general point is that the library should strive to provide compile-time alternatives which would regularize any differences in backends as much as possible. Of course if this comes at the cost of numeric interoperability and ease of use, I would be against any such regularization. But I think that the individual cases should be considered carefully.

on Mon Jun 25 2012, John Maddock <boost.regex-AT-virgin.net> wrote:
As a single example of this, the gmp_int backend triggers a division by 0 signal when one tries to divide the integer by 0; the tom_int raises a hardware signal on division by 0; the cpp_int throws a std::runtime_error on division by 0. I would like to see some means by which I could use any integer backend and know that a std::runtime_error would be thrown by a division by 0.
IMO, you should just avoid division by zero period. As far as C++ itself is concerned, division by 0 is undefined behavior, so any code that wants to handle any numeric type including builtins can't assume anything about division by zero.
True, but.... I have some sympathy for Edward's POV, and we could be more newbie-friendly in that area.
My original thought was "don't second guess the backend" - in other words, let the backend just do whatever it does for each operation, and don't try to manhandle different backends towards the same behavior. However, on reflection, division by zero is relatively cheap to test for compared to a full division anyway, so we could check that particular case (or add an extra template param to the backend, I guess).
Not only that, but not everybody is writing platform-independent code, and this goes double for people involved in HPC. A long-running calculation that goes down because of divide-by-zero can be extremely expensive. A nod should be given toward ways of handling these issues. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 6/28/2012 8:09 AM, Dave Abrahams wrote:
on Mon Jun 25 2012, John Maddock <boost.regex-AT-virgin.net> wrote:
As a single example of this, the gmp_int backend triggers a division by 0 signal when one tries to divide the integer by 0; the tom_int raises a hardware signal on division by 0; the cpp_int throws a std::runtime_error on division by 0. I would like to see some means by which I could use any integer backend and know that a std::runtime_error would be thrown by a division by 0.
IMO, you should just avoid division by zero period. As far as C++ itself is concerned, division by 0 is undefined behavior, so any code that wants to handle any numeric type including builtins can't assume anything about division by zero.
True, but.... I have some sympathy for Edward's POV, and we could be more newbie-friendly in that area.
My original thought was "don't second guess the backend" - in other words, let the backend just do whatever it does for each operation, and don't try to manhandle different backends towards the same behavior. However, on reflection, division by zero is relatively cheap to test for compared to a full division anyway, so we could check that particular case (or add an extra template param to the backend, I guess).
Not only that, but not everybody is writing platform-independent code, and this goes double for people involved in HPC. A long-running calculation that goes down because of divide-by-zero can be extremely expensive. A nod should be given toward ways of handling these issues.
I did not want to focus on just divide-by-zero. I wanted to suggest that any anomalies between backends using the same general type be normalized as much as it is possible to do effectively, in order to promote generic use of the backend. As a further example, the doc for the cpp_int type says: "Construction from a string that contains invalid non-numeric characters results in a std::runtime_error being thrown." I do not see any mention of whether this is the case for gmp_int and/or tom_int. If it is not the same, then a programmer constructing an mp_number using a cpp_int_backend<> who passes a string potentially containing an invalid non-numeric character must handle the mp_number constructor differently than when constructing using a gmp_int or a tom_int backend. But ideally one wants to choose any effective backend and then have all functionality be consistently the same and hopefully equally as correct as possible. My point is simply that I believe the library should normalize (or "regularize") any differences as much as possible, as long as this does not affect the accuracy (and perhaps speed) of the library. Of course there are always tradeoffs, but a generic library should always work to minimize differences wherever possible.

I do not see any mention of whether this is the case for gmp_int and/or tom_int. If it is not the same, then a programmer constructing an mp_number using a cpp_int_backend<> who passes a string potentially containing an invalid non-numeric character must handle the mp_number constructor differently than when constructing using a gmp_int or a tom_int backend. But ideally one wants to choose any effective backend and then have all functionality be consistently the same and hopefully equally as correct as possible. My point is simply that I believe the library should normalize (or "regularize") any differences as much as possible, as long as this does not affect the accuracy (and perhaps speed) of the library. Of course there are always tradeoffs, but a generic library should always work to minimize differences wherever possible.
Nod. I will certainly try and normalize behavior where practical, and those are good examples to look at further. Thanks, John.

AMDG

I really haven't had time to go through this library as thoroughly as I would have liked, but, since it's the last day of the review: *I vote to accept Multiprecision into Boost*

Here are my comments on the code so far. It's not very much, since I only got through a few files. I'll try to expand on this as I have time, but no promises.

Revision: sandbox/big_number@79066

concepts/mp_number_architypes.hpp:
* The file name is misspelled. It should be "archetypes."
* The #includes aren't quite right. The file does not use trunc, and does use fpclassify.
* line 28: mpl::list requires #include <boost/mpl/list.hpp>
* line 79: stringstream requires #include <sstream>

depricated:
* Should be deprecated.

detail/functions/constants.hpp:
* line 12: typedef typename has_enough_bits<...> pred_type; If you're going to go through all this trouble to make sure that you have a type with enough bits, you shouldn't assume that unsigned has enough. IIRC, the standard only guarantees 16 bits for unsigned.
* line 38: if(digits < 3640) Ugh. I'm assuming that digits means binary digits and 3640 = floor(log_2 10^1100)
* line 51: I would appreciate a brief description of the formula. This code isn't very readable.
* line 105: It looks like this is using the formula e \approx \sum_0^n{\frac{1}{i!}} = (\sum_0^n\frac{n!}{i!})/n! and induction using the rule: \sum_0^n\frac{n!}{i!} = (\sum_0^(n-1)\frac{(n-1)!}{i!}) * n + 1
Please explain the math or point to somewhere that explains it.
* line 147: ditto for pi
* line 251: calc_pi(result, ...digits2<...>::value + 10); Any particular reason for +10? I'd prefer calc_pi to do whatever adjustments are needed to get the right number of digits.

detail/functions/pow.hpp:
* line 18 ff: pow_imp I don't like the fact that this implementation creates log2(p) temporaries on the stack and multiplies them all together on the way out of the recursion. I'd prefer to only use a constant number of temporaries.
* line 31: I don't see how the cases > 1 buy you anything. They don't change the number of calls to eval_multiply. (At least, it wouldn't if the algorithm were implemented correctly.)
* line 47: ???: The work of this loop is repeated at every level of recursion. Please fix this function. It's a well-known algorithm. It shouldn't be hard to get it right.
* line 76: pow_imp(denom, t, -p, mpl::false_()); This is broken for p == INT_MIN with twos-complement arithmetic.
* line 185: "The ldexp function..." I think you mean the "exp" function.
* line 273: if(x.compare(ll) == 0) This is only legal if long long is in T::int_types. Better to use intmax_t which is guaranteed to be in T::int_types.
* line 287: const bool b_scale = (xx.compare(float_type(1e-4)) > 0); Unless I'm missing something, this will always be true because xx > 1 > 1e-4 as a result of the test on line 240.
* line 332: if(&arg == &result) This is totally unnecessary, since you already copy arg into a temporary using frexp on line 348.
* line 351: if(t.compare(fp_type(2) / fp_type(3)) <= 0) Is the rounding error evaluating 2/3 important? I've convinced myself that it works, but it took a little thought. Maybe add a comment to the effect that the exact value of the boundary doesn't matter as long as it's about 2/3?
* line 413: "The fabs function " s/fabs/log10/.
* line 464: eval_convert_to(&an, a); This makes the assumption that conversion to long long is always okay even if the value is out of the range of long long. This assumption makes me nervous.
* line 465: a.compare(an) compare(long long)
* line 474: eval_subtract(T, long long) Also, this value of da is never used. It gets overwritten on line 480 or 485.
* line 493: (eval_get_sign(x) > 0) This expression should always be true at this point.
* line 614: *p_cosh = x; cosh is an even function, so this is wrong when x = -\inf
* line 625: T e_px, e_mx; These are only used in the generic implementation. It's better to restrict the scope of variables to the region that they're actually used in.
* line 643: eval_divide(*p_sinh, ui_type(2)); I notice that in other places you use ldexp for division by powers of 2. Which is better?

detail/mp_number_base.hpp:
* line 423: (std::numeric_limits<T>::digits + 1) * 1000L This limits the number of bits to LONG_MAX/1000. I'd like there to be some way to know that as long as I use less than k bits, the library won't have any integer overflow problems. k can depend on INT_MAX, LONG_MAX, etc, but the implementation limits need to be clearly documented somewhere. I really hate the normal ad hoc situation where you just ignore the problem and hope that all the values are small enough to avoid overflow.

Other notes:

Backend Requirements:
* Do we always require that all number types be assignable from intmax_t, uintmax_t, and long double? Why must an integer be assignable from long double?
* The code seems to assume that it's okay to pass int as the exp parameter of ldexp. These requirements don't mandate that this will work.

In Christ,
Steven Watanabe

Here are my comments on the code so far. It's not very much, since I only got through a few files. I'll try to expand on this as I have time, but no promises.
Many thanks Steven.
concepts/mp_number_architypes.hpp:
* The file name is misspelled. It should be "archetypes."
Indeed, fixed in my local copy.
* The #includes aren't quite right. The file does not use trunc, and does use fpclassify.
* line 28: mpl::list requires #include <boost/mpl/list.hpp> * line 79: stringstream requires #include <sstream>
Fixed in my local copy.
depricated:
* Should be deprecated.
Will go away if the library is accepted anyway (just contains bits and bobs not in the submission that I didn't want to delete just yet).
detail/functions/constants.hpp:
* line 12: typedef typename has_enough_bits<...> pred_type; If you're going to go through all this trouble to make sure that you have a type with enough bits, you shouldn't assume that unsigned has enough. IIRC, the standard only guarantees 16 bits for unsigned.
Nod. There's some new code that simplifies all that as well, simplified and fixed in my local copy.
* line 38: if(digits < 3640) Ugh. I'm assuming that digits means binary digits and 3640 = floor(log_2 10^1100)
Comments to that effect throughout.
* line 51: I would appreciate a brief description of the formula. This code isn't very readable.
Nod. Added locally:

// We calculate log2 using the formula:
//
// ln(2) = 3/4 * SUM[n>=0] ((-1)^n * n!^2 / (2^n * (2n+1)!))
//
// Numerator and denominator are calculated separately and then
// divided at the end; we also precalculate the terms up to n = 5
// since these fit in a 32-bit integer anyway.
//
// See Gourdon, X., and Sebah, P. The logarithmic constant: log 2, Jan. 2004.
// Also http://www.mpfr.org/algorithms.pdf.

Also spotted a missing optimisation in the arithmetic.
* line 105: It looks like this is using the formula e \approx \sum_0^n{\frac{1}{i!}} = (\sum_0^n\frac{n!}{i!})/n! and induction using the rule: \sum_0^n\frac{n!}{i!} = (\sum_0^(n-1)\frac{(n-1)!}{i!}) * n + 1
Please explain the math or point to somewhere that explains it.
Nod, adding locally:

// Standard evaluation from the definition of e:
// http://functions.wolfram.com/Constants/E/02/
* line 147: ditto for pi
Nod. Adding locally:

// This algorithm is from:
// Schonhage, A., Grotefeld, A. F. W., and Vetter, E. Fast Algorithms: A Multitape Turing
// Machine Implementation. BI Wissenschaftverlag, 1994.
// Also described in MPFR's algorithm guide: http://www.mpfr.org/algorithms.pdf.
//
// Let:
//   a[0] = A[0] = 1
//   B[0] = 1/2
//   D[0] = 1/4
// Then iterate:
//   S[k+1] = (A[k]+B[k]) / 4
//   b[k]   = sqrt(B[k])
//   a[k+1] = (a[k]+b[k]) / 2
//   A[k+1] = a[k+1]^2
//   B[k+1] = 2(A[k+1]-S[k+1])
//   D[k+1] = D[k] - 2^k(A[k+1]-B[k+1])
// Stop when |A[k]-B[k]| <= 2^(k-p),
// and then PI = B[k]/D[k].

Currently testing those changes before moving on to the other comments. Thanks again, John.

* line 251: calc_pi(result, ...digits2<...>::value + 10); Any particular reason for +10? I'd prefer calc_pi to do whatever adjustments are needed to get the right number of digits.
It looks to be an "implementation artifact"; I'm investigating removing it.
detail/functions/pow.hpp:
* line 18 ff: pow_imp I don't like the fact that this implementation creates log2(p) temporaries on the stack and multiplies them all together on the way out of the recursion. I'd prefer to only use a constant number of temporaries.
* line 31: I don't see how the cases > 1 buy you anything. They don't change the number of calls to eval_multiply. (At least, it wouldn't if the algorithm were implemented correctly.) * line 47: ???: The work of this loop is repeated at every level of recursion. Please fix this function. It's a well-known algorithm. It shouldn't be hard to get it right.
Chris?
* line 76: pow_imp(denom, t, -p, mpl::false_()); This is broken for p == INT_MIN with twos-complement arithmetic.
Fixed locally.
* line 185: "The ldexp function..." I think you mean the "exp" function.
Nod, fixed locally.
* line 273: if(x.compare(ll) == 0) This is only legal if long long is in T::int_types. Better to use intmax_t which is guaranteed to be in T::int_types.
Nod. Fixed locally, there were a couple of other cases as well.
* line 287: const bool b_scale = (xx.compare(float_type(1e-4)) > 0); Unless I'm missing something, this will always be true because xx > 1 > 1e-4 as a result of the test on line 240.
Yep. Fixed locally. My fault for "improving" Chris's original algorithm.
* line 332: if(&arg == &result) This is totally unnecessary, since you already copy arg into a temporary using frexp on line 348.
Nod, fixed locally.
* line 351: if(t.compare(fp_type(2) / fp_type(3)) <= 0) Is the rounding error evaluating 2/3 important? I've convinced myself that it works, but it took a little thought. Maybe add a comment to the effect that the exact value of the boundary doesn't matter as long as it's about 2/3?
Not sure, Chris?
* line 413: "The fabs function " s/fabs/log10/.
Fixed locally.
* line 464: eval_convert_to(&an, a); This makes the assumption that conversion to long long is always okay even if the value is out of the range of long long. This assumption makes me nervous.
Needs a try{}catch{} at the very least. Fixed locally.
* line 465: a.compare(an) compare(long long)
Yep.
* line 474: eval_subtract(T, long long) Also, this value of da is never used. It gets overwritten on line 480 or 485.
Nod. Fixed locally.
* line 493: (eval_get_sign(x) > 0) This expression should always be true at this point.
Nod. Fixed locally.
* line 614: *p_cosh = x; cosh is an even function, so this is wrong when x = -\inf
Nod. Fixed locally.
* line 625: T e_px, e_mx; These are only used in the generic implementation. It's better to restrict the scope of variables to the region that they're actually used in.
Nod. Fixed locally.
* line 643: eval_divide(*p_sinh, ui_type(2)); I notice that in other places you use ldexp for division by powers of 2. Which is better?
Good question - this is translated from Chris's original code, which was specific to his decimal floating point type, where division by 2 is (slightly) cheaper 'cos there's no simple ldexp. In the general case though, ldexp is usually a better bet as it's often a simple bit-fiddle. Changed locally.
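i.e. (a sketch with builtin doubles; the same reasoning applies to a binary multiprecision backend):

#include <cmath>

double halve(double x)
{
   // ldexp adjusts the binary exponent directly - a cheap bit-twiddle
   // for binary representations, with no division machinery involved.
   return std::ldexp(x, -1); // same value as x / 2
}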
detail/mp_number_base.hpp:
* line 423: (std::numeric_limits<T>::digits + 1) * 1000L This limits the number of bits to LONG_MAX/1000. I'd like there to be some way to know that as long as I use less than k bits, the library won't have any integer overflow problems. k can depend on INT_MAX, LONG_MAX, etc, but the implementation limits need to be clearly documented somewhere. I really hate the normal ad hoc situation where you just ignore the problem and hope that all the values are small enough to avoid overflow.
If there are really more digits than fit in a long then we'll likely have other issues - such as not enough memory to store them all in. However, I've added a static_assert and a note to this effect.
Other notes:
Backend Requirements:
* Do we always require that all number types be assignable from intmax_t, uintmax_t, and long double? Why must an integer be assignable from long double?
Good question. It feels right to me that wherever possible all types should have uniform requirements, but given that long double -> builtin int conversions are always possible, why not long double -> extended int? In other words it follows the idea that multiprecision types should as far as possible behave like builtin types. The conversion should probably be explicit in mp_number though (that's one on the TODO list).
* The code seems to assume that it's okay to pass int as the exp parameter of ldexp. These requirements don't mandate that this will work.
That's a bug in the requirements, will fix. Many thanks for the detailed comments, John.

detail/functions/pow.hpp:
* line 18 ff: pow_imp I don't like the fact that this implementation creates log2(p) temporaries on the stack and multiplies them all together on the way out of the recursion. I'd prefer to only use a constant number of temporaries.
* line 31: I don't see how the cases > 1 buy you anything. They don't change the number of calls to eval_multiply. (At least, it wouldn't if the algorithm were implemented correctly.) * line 47: ???: The work of this loop is repeated at every level of recursion. Please fix this function. It's a well-known algorithm. It shouldn't be hard to get it right.
Chris?
Good suggestion. The function has been rewritten in the sandbox using the S-and-X binary method, as described in D. E. Knuth, "The Art of Computer Programming", Vol. 2, Section 4.6.3. Oh no, I put it in the sandbox. I should have fixed it locally, as we are in a review. My bad. I am sorry if this is wrong. John, a merge with your checked out copy will be needed. <snip>
* line 351: if(t.compare(fp_type(2) / fp_type(3)) <= 0) Is the rounding error evaluating 2/3 important? I've convinced myself that it works, but it took a little thought. Maybe add a comment to the effect that the exact value of the boundary doesn't matter as long as it's about 2/3?
Not sure, Chris?
I need to look at this function more closely. It needs more comments through-and-through. I will perform some boundary checking and add comments. I need a day or two for this change request. <snip>
Many thanks for the detailed comments, John.
Thanks for the great suggestions, Chris.

Oh no, I put it in the sandbox. I should have fixed it locally, as we are in a review. My bad. I am sorry if this is wrong. John, a merge with your checked out copy will be needed.
No problem, it merged OK, however it does fail some of the tests (incorrect conceptual assumptions)... testing a fix now, John.

Oh no, I put it in the sandbox. I should have fixed it locally, as we are in a review. My bad. I am sorry if this is wrong. John, a merge with your checked out copy will be needed.
No problem, it merged OK, however it does fail some of the tests (incorrect conceptual assumptions)... testing a fix now, John.
Oh, I see what you mean. The root of the problem runs deeper than my trivial optimization of the pow() function. It's a bit beyond my comprehension right now. I re-wrote the assignment to one in about ten different ways, and the affected tests fail on all re-writes. The solution must go deeper. John, did your fix work? Thanks, Chris.

No problem, it merged OK, however it does fail some of the tests (incorrect conceptual assumptions)... testing a fix now, John.
I re-wrote the assignment to one in about ten different ways, and the affected tests fail on all re-writes. The solution must go deeper.
John, did your fix work?
Yes, I haven't committed yet though as there are other changes mixed in with it that I haven't finished with yet, John.

Oh, I finally noticed that my proposed pow_imp() function is in the abstraction layer that can only call the "eval_"-style functions. My assignment to the value of 1 would potentially need an "eval_set()" to avoid ambiguous resolution. The workaround below does pass the tests. In order to set the result of pow_imp() to one, however, I first multiply by zero to clear the result, and subsequently add 1 to get 1, which is roundabout at best. Do we need an eval_set() that is callable from the "detail" layer? Thanks, Chris.

----------------- Bad WORKAROUND -----------------------------

namespace detail{

template<typename T, typename U>
inline void pow_imp(T& result, const T& t, const U& p, const mpl::false_&)
{
   // Compute the pure power of typename T t^p.
   // Use the S-and-X binary method, as described in
   // D. E. Knuth, "The Art of Computer Programming", Vol. 2,
   // Section 4.6.3. The resulting computational complexity
   // is of order log2[abs(p)].

   U p2(p);

   // Determine if p is even or odd.
   const bool p_is_odd = (U(p2 % 2) != U(0));

   if(p_is_odd)
   {
      result = t;
   }
   else
   {
      eval_multiply(result, 0LL);
      eval_add(result, 1LL);
   }

   // The variable x stores the binary powers of t.
   T x(t);

   while(U(p2 /= 2) != U(0))
   {
      // Square x for each binary power.
      eval_multiply(x, x);

      const bool has_binary_power = (U(p2 % U(2)) != U(0));

      if(has_binary_power)
      {
         // Multiply the result with each binary power contained in the exponent.
         eval_multiply(result, x);
      }
   }
}

...
}

Do we need an eval_set() that is callable from the "detail" layer?
Nope, there's already an operator= for all the backends, but the argument must be a type listed in one of the backends typelists, so the fix is:
// Find the type as wide as U that T is assignable from:
typedef typename boost::multiprecision::detail::canonical<U, T>::type int_type;

// This will store the result.
if(U(p % U(2)) != U(0))
{
   result = t;
}
else
   result = int_type(1);
John.
Thanks yet again. I got your update from the sandbox and tested it. The tests in my checkout of the trunk (without specfun) are OK with VS2010. It's amazing how quickly and reliably you work! Best regards, Chris.
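For completeness, here is how John's fix slots into the earlier pow_imp() workaround - a consolidated sketch assembled from the two postings above, not a verbatim copy of the committed code:

namespace detail{

template<typename T, typename U>
inline void pow_imp(T& result, const T& t, const U& p, const mpl::false_&)
{
   // S-and-X binary method: order log2[abs(p)] multiplications.
   // Find the type as wide as U that T is assignable from:
   typedef typename boost::multiprecision::detail::canonical<U, T>::type int_type;

   U p2(p);

   if(U(p2 % 2) != U(0))
      result = t;           // odd exponent: seed the result with t
   else
      result = int_type(1); // even exponent: seed with 1 via the backend's operator=

   T x(t); // x stores the successive binary powers of t

   while(U(p2 /= 2) != U(0))
   {
      eval_multiply(x, x);          // square for each binary power
      if(U(p2 % U(2)) != U(0))
         eval_multiply(result, x);  // fold in the set bits of the exponent
   }
}

} // namespace detail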

AMDG

On 06/25/2012 10:44 AM, John Maddock wrote:
* line 423: (std::numeric_limits<T>::digits + 1) * 1000L This limits the number of bits to LONG_MAX/1000. I'd like there to be some way to know that as long as I use less than k bits, the library won't have any integer overflow problems. k can depend on INT_MAX, LONG_MAX, etc, but the implementation limits need to be clearly documented somewhere. I really hate the normal ad hoc situation where you just ignore the problem and hope that all the values are small enough to avoid overflow.
If there are really more digits than fit in a long then we'll likely have other issues - such as not enough memory to store them all in - however, I've added a static_assert and a note to this effect.
Taking into account the fact that numeric_limits::digits is in bits and the extra factor of 1000, it only takes 500 KB to run into a problem when long is 32 bits. This is small enough that running out of memory is not necessarily an issue. In Christ, Steven Watanabe
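A sketch of the kind of guard John mentions (the names and the factor of 1000 are assumptions taken from the line quoted above; the committed check may differ):

#include <climits>
#include <limits>

// Hypothetical guard: refuse to instantiate when (digits + 1) * 1000L
// could overflow a long.
template <class T>
struct digit_limit_guard
{
   static_assert(std::numeric_limits<T>::digits < LONG_MAX / 1000L - 1,
                 "precision too large: internal exponent arithmetic would overflow long");
};

int main()
{
   digit_limit_guard<double> ok; // passes: 53 bits is far below the limit
   (void)ok;
}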

Hi

On Friday, June 08, 2012 04:28:09 PM Jeffrey Lee Hellrung, Jr. wrote:
Okay, it's that time. Apologies for not sending a reminder, but there had been a lot of pre-review comments over the last week or so, so I think this has been on at least some people's radar. The original review announcement is given below.
I hope it is not too late to give a short review of my experience with the Multiprecision library.

First, a general comment: this review is from a "naive user" perspective, someone who just wants to use multiprecision and is happy to find a C++ library for it.

My test case is to use floating-point multiprecision types as the basis for our odeint library (www.odeint.com). As odeint is fully templatized, I could just plug in, say, cpp_dec_float_50, and the basic routines worked out of the box. The only problem I had was that there seems to be no support for min/max functions for expressions of multiprecision types. So I cannot write

x = max( a+b , a*b )

when x, a, b are some mp types. However,

a1 = a+b
a2 = a*b
x = max( a1 , a2 )

works, and I used it as a workaround. Still, I would like to see full support of min/max in Multiprecision (maybe I just missed something?).

I cannot say anything on the design or implementation as I did not look into it. My only remark is that the design is such that mp types work nicely with large numerical libraries, and the implementation seems to be robust. Some basic performance tests of cpp_dec_float_50 vs mpf_float_50 gave me a factor of about 1.8 in favor of mpf, as expected from what I read in the docs.

As for the documentation, I didn't go beyond the introduction, because I didn't need to. This is exactly how it should be: if you just want to use a library, the Introduction/Tutorial should be all you have to read.

To conclude this very brief review, I would be happy to see this library become a part of Boost. Best, Mario

On Thu, 28 Jun 2012, Mario Mulansky wrote:
My test case is to use floating point multiprecision types as the basis for our odeint library (www.odeint.com). As odeint is fully templatized I could just plug in, say, the cpp_dec_float_50 and the basic routines worked out of the box. The only problem I had was that there seems to be no support for min / max functions for expressions of multiprecision types. So can not write
x = max( a+b , a*b )
x = max<cpp_dec_float_50>( a+b , a*b )

maybe? (assuming they don't overload max for this library)

-- Marc Glisse

On Thu, 2012-06-28 at 19:38 +0200, Marc Glisse wrote:
On Thu, 28 Jun 2012, Mario Mulansky wrote:
My test case is to use floating point multiprecision types as the basis for our odeint library (www.odeint.com). As odeint is fully templatized I could just plug in, say, the cpp_dec_float_50 and the basic routines worked out of the box. The only problem I had was that there seems to be no support for min / max functions for expressions of multiprecision types. So can not write
x = max( a+b , a*b )
x = max<cpp_dec_float_50>( a+b , a*b ) maybe?
(assuming they don't overload max for this library)
I think this would work, but I don't want to do that, because then any user-defined non-templatized function would not be found (see the example below). I think the better way is an explicit cast of the arguments:

max( static_cast< mp_type >( a+b ) , static_cast< mp_type >( a*b ) );

as also written in the MP introduction, as I just found out.

Example where the user-defined function is not used because the call names an explicit template instance:

#include <algorithm>
#include <iostream>

struct my_type { };

// for std::max to compile:
bool operator <( const my_type x , const my_type y ) { return true; }

// user-defined max:
bool max( const my_type x , const my_type y )
{
    std::cout << "test" << std::endl;
    return true; // return added; the original snippet fell off the end of the function
}

int main()
{
    my_type a , b;
    using std::max;
    max( a , b );          // calls user-defined max
    max<my_type>( a , b ); // calls std::max, not good
    // user-defined max, I think that's the right way:
    max( static_cast<my_type>(a) , static_cast<my_type>(b) );
}

Thank you for taking the time to try the library, Mario.
I hope it is not too late to give a short brief review of my experience with the Multiprecision library. First a general comment: this review is from a "naive user" perspective who just wants to use multiprecision and is happy finding a C++ library for that.
Users just wanting big number types are *the* target group. In fact, I consider myself to be a naive user.
My test case is to use floating point multiprecision types as the basis for our odeint library (www.odeint.com). As odeint is fully templatized I could just plug in, say, the cpp_dec_float_50 and the basic routines worked out of the box. The only problem I had was that there seems to be no support for min / max functions for expressions of multiprecision types. So can not write
x = max( a+b , a*b )
It works when the template parameter of std::max() is explicitly provided. The code below, for example, successfully compiles.

boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(1) / 3);
boost::multiprecision::cpp_dec_float_100 b(boost::multiprecision::cpp_dec_float_100(1) / 7);
boost::multiprecision::cpp_dec_float_100 c = std::max<boost::multiprecision::cpp_dec_float_100>(a + b, a * b);

It does seem strange that the compiler needs an explicit template parameter to disambiguate the results of binary arithmetic. I don't have enough karma to know. John, what's your opinion?
when x,a,b are some mp types, however
a1 = a+b
a2 = a*b
x = max( a1 , a2 )
works and I used it as a workaround, however I would like to see full support of min/max in Multiprecision (maybe I just missed something?)
<snip>

Thanks again, Mario. Best regards, Chris.

Mario wrote:
x = max( a+b , a*b )

Christopher wrote:

It works when the template parameter of std::max() is explicitly provided. The code below, for example, successfully compiles.

boost::multiprecision::cpp_dec_float_100 c = std::max<boost::multiprecision::cpp_dec_float_100>(a + b, a * b);

It does seem strange that the compiler needs an explicit template parameter to disambiguate the results of binary arithmetic.
The result of the operator function call is an expression template (I'm guessing he had expression templates enabled). You are forcing an implicit cast to temporaries of the number type by specifying the max template parameter. a+b has a different expression template type than a*b. Since max takes only one template parameter, you can't specify two different types as arguments and expect it to be able to deduce:

template <class T> const T& max(const T& a, const T& b);

Presumably disabling expression templates would have worked for Mario, because a+b and a*b would result in temporaries both of type cpp_dec_float_100 rather than temporaries of different expression template types.

Regards, Luke
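To see concretely why deduction fails, here is a toy model of the situation (made-up stand-in types, nothing from the library):

#include <algorithm>

// Two distinct stand-ins for the expression-template types of a+b and a*b.
struct number   { double v; };
struct add_expr { double v; operator number() const { return number{v}; } };
struct mul_expr { double v; operator number() const { return number{v}; } };

int main()
{
    add_expr s{5.0};
    mul_expr p{6.0};
    // std::max(s, p);   // would not compile: T deduced as both add_expr and mul_expr
    number a = s, b = p; // converting to the common number type first...
    double m = std::max(a.v, b.v); // ...makes deduction trivial again
    (void)m;
}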

The only problem I had was that there seems to be no support for min / max functions for expressions of multiprecision types. So can not write
x = max( a+b , a*b )
There is a mention of this in the introduction, about 2/3 of the way down: http://svn.boost.org/svn/boost/sandbox/big_number/libs/multiprecision/doc/ht...

The problem is this: the result of a+b is an expression template, and the result of a*b is a different expression template type, so:

1) Template argument deduction for std::max fails.
2) Even if template argument deduction had succeeded, you would be passing the "wrong type" to the function - an expression template and not a number. For example, given:

foo(a+b)

template arg deduction would succeed for foo (assuming foo is a template function), but you'd likely get inscrutable errors inside the body of foo as a result of passing something that's not actually a number to it.

So you either have to:

* Explicitly cast the arguments to the underlying number type when calling a template function. Or,
* Call an explicit template instance by passing the function the template args inside <>.

Note that none of the above applies when calling non-template functions, as all the usual conversions apply. I've also fixed up the multiprecision and Math libs so they interoperate without having to worry about any of this.

I'd forgotten all about std::min/max, but I'll add overloads for those too (and any other std lib functions I spot).

Cheers, John.
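Both workarounds side by side, as a sketch using the cpp_dec_float_50 typedef from the docs (assuming expression templates are enabled, as in Mario's setup):

#include <algorithm>
#include <boost/multiprecision/cpp_dec_float.hpp>

using boost::multiprecision::cpp_dec_float_50;

int main()
{
    cpp_dec_float_50 a = 2, b = 3;
    // 1) Cast the expression-template arguments to the number type:
    cpp_dec_float_50 x = std::max(cpp_dec_float_50(a + b), cpp_dec_float_50(a * b));
    // 2) Name the template argument explicitly, forcing the conversions:
    cpp_dec_float_50 y = std::max<cpp_dec_float_50>(a + b, a * b);
    (void)x; (void)y;
}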

Hello,

Not quite a review, as it's way too late :), but some comments on a probably interesting experience I've had while trying out multiprecision from the sandbox together with Boost.UBLAS. Both libraries are generic, in the sense that one can be used instead of built-in float and double, and the other should support such types.

I've found out that they don't play well together, probably due to the fact that UBLAS uses its own expression templates. A simple example of code that fails to compile is below:

------------------
#include <boost/numeric/ublas/vector.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>

typedef boost::multiprecision::mp_number<
    boost::multiprecision::cpp_dec_float<50> , true > float_type;

int main(int argc, char* argv[])
{
    boost::numeric::ublas::c_vector<float_type, 3> v1, v2;
    inner_prod( v1, v2 );
}
-----------------

It fails with MSVC 9 and a little old clang from trunk (--version prints clang version 3.2 (trunk 156987)). The easy fix is to turn off expression templates inside mp_number.

It also fails, even with expression templates turned off, whenever it tries to compare an mp_number with a convertible type:

-------------------------
#include <boost/multiprecision/cpp_dec_float.hpp>

typedef boost::multiprecision::mp_number<
    boost::multiprecision::cpp_dec_float<50> , false > float_type;

struct A
{
    operator float_type() const { return (float_type)1.0f; }
};

int main(int argc, char* argv[])
{
    float_type f = 123.0f;
    A a;
    bool r = (a < f);
}
-------------------------

Such constructions appear sometimes within UBLAS (e.g., lu_factorize with NDEBUG undefined). The fix is probably non-trivial. I've successfully overcome it by modifying is_valid_comparison_imp:

------------------------
template <class Exp1, class Exp2>
struct is_valid_comparison_imp
{
    <.......skipped.......>
        mpl::and_<
            is2,
            mpl::or_<
                is1,
                is_arithmetic<Exp1>
            >
        >,
+       mpl::and_<
+           is_convertible<Exp1,Exp2>,
+           is2
+       >,
+       mpl::and_<
+           is_convertible<Exp2,Exp1>,
+           is1
+       >
    >::type type;
};
----------------------------

Though I didn't run the tests - it might break something. Also, running the tests for UBLAS with mp_number put in place of floats/doubles would be interesting.
Do you think the library should be accepted as a Boost library?
Whenever issues with UBLAS interoperability are fixed (or some reasonable decision is made).

-- Sergey Mitsyn.

Not quite a review as it's way too late :), but some comments on probably interesting experience I've got while trying out multiprecision from sandbox together with Boost.UBLAS . Both libraries are generic in the way that one can be used instead of builtin float and double, and the other should support such types.
I've found out that they don't play well together, probably due to the fact that UBLAS uses its own expression templates. A simple example of code that fails to compile is below:
------------------
#include <boost/numeric/ublas/vector.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>

typedef boost::multiprecision::mp_number<
    boost::multiprecision::cpp_dec_float<50> , true > float_type;

int main(int argc, char* argv[])
{
    boost::numeric::ublas::c_vector<float_type, 3> v1, v2;
    inner_prod( v1, v2 );
}
I'm not surprised; third-party template libraries would have to be "expression template safe" to be used unchanged with mp_number. A typical example of what fails is:

foo(a+b)

where foo is a template function. This can only be fixed inside uBlas, unfortunately.
Fails with MSVC 9 and a little old clang from thunk (--version prints clang version 3.2 (trunk 156987)).
The easy fix is to turn off expression templates inside mp_number.
It also fails even with expression templates being turned off whenever it tries to compare mp_number with a convertible type:
-------------------------
#include <boost/multiprecision/cpp_dec_float.hpp>
typedef boost::multiprecision::mp_number<
    boost::multiprecision::cpp_dec_float<50> , false > float_type;

struct A
{
    operator float_type() const { return (float_type)1.0f; }
};

int main(int argc, char* argv[])
{
    float_type f = 123.0f;
    A a;
    bool r = (a < f);
}
-------------------------
Such constructions appear sometimes within UBLAS (e.g., lu_factorize with NDEBUG undefined). The fix is probably non-trivial. I've successfully overcome it by modifying is_valid_comparison_imp:
Sigh... that's just nasty. I probably need to rewrite the comparison code anyway for better efficiency; I'll try to fix this along the way.
Also running the tests for UBLAS with mp_number put in place of floats/doubles would be interesting.
Nod, will try.
Do you think the library should be accepted as a Boost library?
Whenever issues with UBLAS interoperability are fixed (or some reasonable decision is made).
Many thanks for the feedback, John.

On Sun, Jul 29, 2012 at 9:13 AM, Sergey Mitsyn <svm@jinr.ru> wrote:
Hello,
Not quite a review as it's way too late :), but some comments on probably interesting experience I've got while trying out multiprecision from sandbox together with Boost.UBLAS . Both libraries are generic in the way that one can be used instead of builtin float and double, and the other should support such types.
[...] Thanks Sergey for your observations. I've added them to my partial list of notes. (I'll finish up aggregating the discussion and post the review results within a couple weeks, I hope.) - Jeff

Also running the tests for UBLAS with mp_number put in place of floats/doubles would be interesting.
I've added modified versions of uBlas tests 1-7 to the multiprecision lib. Most of the tests pass, but there are two outstanding issues:

* Test1: I had to disable a couple of tests because they trigger debug assertions inside the STL. I get the same errors when testing with type double, so I conclude it's a uBlas issue; not sure why they aren't triggered in the regular regression tests though...
* Test3: I can't get this to compile, and the errors are so inscrutable I don't know what the issue is; I'd need some help from a uBlas expert to figure those out.

Note that the above is with expression templates turned off in multiprecision. Getting uBlas to work with an expression-template-enabled number type would require extensive (but not difficult) modification of uBlas. If anyone wants to take that on I can offer advice.

Cheers, John.
participants (16)
- Belcourt, Kenneth
- Christopher Kormanyos
- Dave Abrahams
- Edward Diener
- Jeff Flinn
- Jeffrey Lee Hellrung, Jr.
- John Maddock
- Keith Burton
- Marc Glisse
- Mario Mulansky
- Matthieu Schaller
- Paul A. Bristow
- Sergey Mitsyn
- Simonson, Lucanus J
- Steven Watanabe
- Vicente J. Botet Escriba