[review] Multiprecision review scheduled for June 8th - 17th, 2012

Hi all, The review of the proposed Boost.Multiprecision library authored by John Maddock and Christopher Kormanyos has been scheduled for June 8th - June 17th, 2012 and will be managed by myself.
From the Introduction:
--------
"The Multiprecision Library provides *User-defined* integer, rational and floating-point C++ types which try to emulate as closely as practicable the C++ built-in types, but provide for more range and precision. Depending upon the number type, precision may be arbitrarily large (limited only by available memory), fixed at compile time values, for example 50 decimal digits, or a variable controlled at run-time by member functions. The types are expression-template-enabled for better performance than naive user-defined types."
--------
And from the original formal review request from John:
--------
Features:
* Expression template enabled front end.
* Support for Integer, Rational and Floating Point types.

Supported Integer backends:
* GMP.
* Libtommath.
* cpp_int.

cpp_int is an all-C++ Boost-licensed backend; it supports both arbitrary precision types (with Allocator support) and signed and unsigned fixed precision types (with no memory allocation). There are also some integer-specific functions - for Miller-Rabin testing, bit fiddling, random numbers. Plus interoperability with Boost.Rational (though that loses the expression template frontend).

Supported Rational backends:
* GMP
* libtommath
* cpp_int (as above)

Supported Floating-point backends:
* GMP
* MPFR
* cpp_dec_float

cpp_dec_float is an all-C++ Boost-licensed type, adapted from Christopher Kormanyos' e_float code (published in TOMS last year). All the floating-point types have full std lib support (cos, sin, exp, pow etc.), as well as full interoperability with Boost.Math. There's nothing in principle to prevent extension to complex numbers and interval arithmetic types (plus any other number types I've forgotten!), but I've run out of energy for now ;-)

Code is in the sandbox under /big_number/. Docs can be viewed online here: http://svn.boost.org/svn/boost/sandbox/big_number/libs/multiprecision/doc/ht...
-------- I hope everyone interested can reserve some time to read through the documentation, try the code out, and post a formal review, either during the formal review window or before. I expect to conduct the review in the "traditional" manner, i.e., entirely within the regular boost developers' mailing list (boost@lists.boost.org). I will send a reminder of review process details closer to the review window. Thanks! - Jeff

On 29/05/12 23:08, Jeffrey Lee Hellrung, Jr. wrote:
Hi all,
The review of the proposed Boost.Multiprecision library authored by John Maddock and Christopher Kormanyos has been scheduled for
June 8th - June 17th, 2012
and will be managed by myself.
I hope everyone interested can reserve some time to read through the documentation, try the code out, and post a formal review, either during the formal review window or before.
Hi, glad to see that the library will be reviewed soon. I have spent some hours reading the documentation. Here are some comments and a lot of questions.

* As all the classes are in the multiprecision namespace, why name the main class mp_number and not just number?

typedef mp::number<mp::mpfr_float_backend<300> > my_float;

* I think that the fact that operands of different backends cannot be mixed in the same operation rules out some interesting operations:
  - I would expect the result of unary operator-() to always be signed. Is this operation defined for unsigned backends?
  - I would expect the result of binary operator-() to always be signed. Is this operation defined for unsigned backends? What is the behavior of mp_uint128_t(0) - mp_uint128_t(1)?
  - It would be great if the tutorial could show that it is nevertheless possible to add an mp_uint128_t and an mp_int256_t - or isn't it possible? I guess it is, but a conversion is needed before adding the operands. I don't know whether this behavior hides some possible optimizations. I think it should be possible to mix backends without too much complexity, and the library could provide a mechanism so that the backend developer could tell the library how to perform the operation and what the result should be.

* Anyway, if the library authors don't want to open up this feature, the limitation should be stated more clearly, e.g. in the reference documentation "The arguments to these functions must contain at least one of the following: An mp_number. An expression template type derived from mp_number." - there is nothing there to suggest that mixing backends is not supported.

* What about replacing the second bool template parameter by an enum class expression_template {disabled, enabled}; which would be more explicit?
That is

typedef mp::mp_number<mp::mpfr_float_backend<300>, false> my_float;

versus

typedef mp::mp_number<mp::mpfr_float_backend<300>, mp::expression_template::disabled> my_float;

* As I posted on this ML already, I think that allocators and precision are orthogonal concepts, and the library should allow an allocator to be associated with fixed precision types. What about adding a 3rd parameter to state whether the precision is fixed or arbitrary?

* Why doesn't cpp_dec_float have a template parameter to give the integral digits? Or, as in the C++ standard proposal from Lawrence Crowl (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html), take the range and resolution as template parameters?

* What about adding a Throws specification to the mp_number and backend requirements operations documentation?

* Can the user define a backend for fixed int types that needs to deal with overflow?

* Why is bit_set a free function?

* I don't see anything about overflow for cpp_dec_float backend operations. I guess it is up to the user to avoid overflow, as for integers. What would the result be on overflow? Could this be added to the documentation?

* Can we convert from a cpp_dec_float_100 to a cpp_dec_float_50? If yes, which rounding policy is applied? Do you plan to let the user configure the rounding policy? BTW, I see in the reference "Type mp_number is default constructible, and both copy constructible and assignable from: ... Any type that the Backend is constructible or assignable from." I would expect to have this information in some form in the tutorial. I would also appreciate it if the section "Constructing and Interconverting Between Number Types" said something about the convert_to<T> member function. If not, what about an mp_number_cast function taking a rounding policy as a parameter?

* Does the cpp_dec_float backend satisfy any of the Optional Requirements? The same question for the other backends.

* Is there a difference between implicit and explicit construction?
* On C++11 compilers providing explicit conversion, couldn't the convert_to function be replaced by an explicit conversion operator?

* Are implicit conversions possible?

* Do you plan to add constexpr and noexcept to the interface? After thinking a little, I'm wondering whether this is even possible when using third-party library backends that don't provide them.

* Why do you allow the argument of left and right shift operations to be signed, and throw an exception when it is negative? Why not just forbid signed types?

* Why can the "Non-member standard library function support" be used only with floating-point backend types? Why not with fixed-point types?

* What is the type of boost::multiprecision::number_category<B>::type for all the provided backends? Could the specialization of boost::multiprecision::number_category<B>::type be added to the documentation of each backend? And why not also add B::signed_types, B::unsigned_types, B::float_types and B::exponent_type?

* Why have you chosen the following requirements for the backend?
  - negate instead of operator-()
  - eval_op instead of operator op=()
  - eval_convert_to instead of explicit operator T()
  - eval_floor instead of floor
Optimization? Is this optimization valid for short types (e.g. up to 4/8 bytes)?

* As the developer needs to define a class with some constraints to be a model of backend, what are the advantages of requiring free functions instead of member functions?

* Couldn't these be optional if the backend defines the usual operations?

* Or could the library provide a trivial backend adaptor that requires the backend just to provide the usual operations instead of the eval_xxx ones?

* How will the performance of mp_number<this_trivial_adaptor<float>, false> compare with float?

* I don't see in the reference section the relation between the files and what is provided by them. Could this be added?

* And last, I don't see anything related to rvalue references and move semantics.
Have you analyzed whether their use could improve the performance of the library? Good luck with the review. Really good work. Vicente

As per Jeff's comments I'm replying to the boost-list not boost-users.
I have spent some hours reading the documentation. Here are some comments and a lot of questions.
* As all the classes are in the multiprecision namespace, why name the main class mp_number and not just number?
typedef mp::number<mp::mpfr_float_backend<300> > my_float;
Good question :) I don't have a particularly strong view whether it's "number" or "mp_number", but would like to know what others think.
* I think that the fact that operands of different backends cannot be mixed in the same operation rules out some interesting operations:
I would expect the result of unary operator-() to always be signed. Is this operation defined for unsigned backends?
It is, but I'm not sure it's useful. Currently there's only one unsigned backend, and it does the equivalent of a two's complement negate - i.e. unary minus is equivalent to (~i + 1). It does this because it is used to implement some of the operations (at both frontend and backend level), so it's hard to change. It might be possible to poison the unary minus operator at the top level so it doesn't compile for unsigned integer types, but I'd have to investigate that. Basically unsigned types are frankly horrible :(
I would expect the result of binary operator-() to always be signed. Is this operation defined for unsigned backends? What is the behavior of mp_uint128_t(0) - mp_uint128_t(1)?
It's a mp_uint128_t, and the result is the same as you would get for a built in 128 bit unsigned type that does 2's complement arithmetic. This is intentional, as the intended use for fixed precision cpp_int's is as a replacement for built in types.
It would be great if the tutorial could show that it is nevertheless possible to add an mp_uint128_t and an mp_int256_t - or isn't it possible? I guess it is, but a conversion is needed before adding the operands. I don't know whether this behavior hides some possible optimizations.
Not currently possible (compiler error). I thought about mixed operations early on and decided it was such a can of worms that I wouldn't go there at this time. Basically there are enough design issues to argue about already ;-) One option would be to have a further review for that specific issue at a later date. However, consider this: in almost any non-trivial scenario I can think of, if mixed operations are allowed, then expression-template-enabled operations will yield a different result to non-expression-template operations. In fact it's basically impossible for the user to reason about what expression templates might do in the face of mixed precision operations, and when/if promotions might occur. For that reason I'm basically against them, even if, as you say, it might allow for some optimisations in some cases.
* Anyway, if the library authors don't want to open up this feature, the limitation should be stated more clearly, e.g. in the reference documentation "The arguments to these functions must contain at least one of the following: An mp_number. An expression template type derived from mp_number." - there is nothing there to suggest that mixing backends is not supported.
Nod, will fix.
* What about replacing the second bool template parameter by an enum class expression_template {disabled, enabled}; which would be more explicit? That is
typedef mp::mp_number<mp::mpfr_float_backend<300>, false> my_float;
versus
typedef mp::mp_number<mp::mpfr_float_backend<300>, mp::expression_template::disabled> my_float;
Not a bad idea actually, I'd like to know what others think.
* As I posted on this ML already, I think that allocators and precision are orthogonal concepts, and the library should allow an allocator to be associated with fixed precision types. What about adding a 3rd parameter to state whether the precision is fixed or arbitrary?
I could do that yes.
* Why doesn't cpp_dec_float have a template parameter to give the integral digits? Or, as in the C++ standard proposal from Lawrence Crowl (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html), take the range and resolution as template parameters?
I don't understand, how is that different from the number of decimal digits?
* What about adding a Throws specification to the mp_number and backend requirements operations documentation?
Well mostly it would be empty ;-) But yes, there are a few situations where throwing is acceptable, but it's never a requirement.
* Can the user define a backend for fixed int types that needs to deal with overflow?
For sure, just flag an error (throw for example) for any operation that overflows.
* Why is bit_set a free function?
Why not? At the time, that seemed the natural way to go, but now you mention it I guess it could be an enable_if'ed member function. I guess I have no strong views either way.
* I don't see anything about overflow for cpp_dec_float backend operations. I guess it is up to the user to avoid overflow, as for integers. What would the result be on overflow? Could this be added to the documentation?
It supports infinities and NaNs - this should be mentioned somewhere, and I'll add it to the reference section. So basically the behaviour is the same as for double/float/long double.
* Can we convert from a cpp_dec_float_100 to a cpp_dec_float_50? If yes, which rounding policy is applied? Do you plan to let the user configure the rounding policy?
Yes you can convert, and the rounding is currently poorly defined :-( I'll let Chris answer about rounding policies, but basically it's a whole lot of work. The aim is not to try and compete with say MPFR, but be "good enough" for most purposes. For some suitable definition of "good enough" obviously ;-)
BTW, I see in the reference "Type mp_number is default constructible, and both copy constructible and assignable from: ... Any type that the Backend is constructible or assignable from." I would expect to have this information in some form in the tutorial.
It should be in the "Constructing and Interconverting Between Number Types" section of the tutorial, but will check.
I would also appreciate it if the section "Constructing and Interconverting Between Number Types" said something about the convert_to<T> member function.
Nod will do.
If not, what about an mp_number_cast function taking a rounding policy as a parameter?
I think it would be very hard to define a coherent set of rounding policies that would be applicable to all backends... including third-party ones that haven't been thought of yet. Basically ducking that issue at present :-(
* Does the cpp_dec_float backend satisfy any of the Optional Requirements? The same question for the other backends.
Yes, but it's irrelevant / an implementation detail. The optional requirements are there for optimisations; the user shouldn't be able to detect which ones a backend chooses to support.
* Is there a difference between implicit and explicit construction?
Not currently.
* On C++11 compilers providing explicit conversion, couldn't the convert_to function be replaced by an explicit conversion operator?
I don't know, I'd have to think about that, what compilers support that now?
* Are implicit conversions possible?
To an mp_number, never from it.
* Do you plan to add constexpr and noexcept to the interface? After thinking a little, I'm wondering whether this is even possible when using third-party library backends that don't provide them.
I'm also not sure whether it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example.
* Why do you allow the argument of left and right shift operations to be signed, and throw an exception when it is negative? Why not just forbid signed types?
Good question, although:
* I think it's pretty common to write "mynumber << 4" and expect it to compile.
* I don't want implicit conversions from signed to unsigned in this case, as it can lead to hard-to-track-down errors if the signed value really is negative.
* Why can the "Non-member standard library function support" be used only with floating-point backend types? Why not with fixed-point types?
Because we don't currently have any to test this with. Is supporting pow or exp with a fixed point type a good idea?
* What is the type of boost::multiprecision::number_category<B>::type for all the provided backends? Could the specialization of boost::multiprecision::number_category<B>::type be added to the documentation of each backend? And why not also add B::signed_types, B::unsigned_types, B::float_types and B::exponent_type?
OK.
* Why have you chosen the following requirements for the backend?
  - negate instead of operator-()
  - eval_op instead of operator op=()
  - eval_convert_to instead of explicit operator T()
  - eval_floor instead of floor
* Non-member functions are required if defaults are to be provided for the optional requirements.
* There are some non-members that can't be written as overloaded non-member operators but can be named free functions (sorry, I forget which ones, but I remember seeing one or two along the way).
* Explicit conversions aren't well supported at present.
* Compiler bug workaround (older GCC versions); there's a note in the requirements section: "The non-member functions are all named with an "eval_" prefix to avoid conflicts with template classes of the same name - in point of fact this naming convention shouldn't be necessary, but rather works around some compiler bugs."
Optimization? Is this optimization valid for short types (e.g. up to 4/8 bytes)?
What optimisation?
* As the developer needs to define a class with some constraints to be a model of backend, what are the advantages of requiring free functions instead of member functions?
Easier for the library to provide default versions for the optional requirements.
* Couldn't these be optional if the backend defines the usual operations?
Well you can meta-program around anything I guess, doesn't mean I want to though...
* Or could the library provide a trivial backend adaptor that requires the backend just to provide the usual operations instead of the eval_xxx?
There is such a backend (undocumented) in SVN - it's called arithmetic_backend. However it's not nearly as useful as you might think - there are still a bunch of things that have to be written specifically for each backend type. That's why it's not part of the library submission.
* How will the performance of mp_number<this_trivial_adaptor<float>, false> compare with float?
No idea, might be interesting to find out, will investigate.
* I don't see in the reference section the relation between files and what is provided by them. Could this be added?
Nod.
* And last, I don't see anything related to rvalue references and move semantics. Have you analyzed if its use could improve the performances of the library?
Analyzed no, but rvalue references are supported for copying if the backend also supports it. I do seem to recall seeing different compilers which both claim to support rvalue refs doing different things with the code though - if I remember rightly gcc is much more willing to use rvalue based move semantics than VC++. Thanks for the comments, John.

Thank you for your comments, Vicente.
I have spent some hours reading the documentation. Here are some comments and a lot of questions.
Did you get a chance to use it also? You always have such good comments that we would benefit from some of your experience with use-cases.
* As all the classes are in the multiprecision namespace, why name the main class mp_number and not just number? typedef mp::number<mp::mpfr_float_backend<300> > my_float;
Good question :) I don't have a particularly strong view whether it's "number" or "mp_number", but would like to know what others think.
I actually *do* have a slight preference for "mp_number", with the *mp_* part. It simply reminds me that I'm actually doing something with a multiple-precision type. An experienced booster once told me that boost favors clarity over terseness. Perhaps this is a case thereof.
* I think that the fact that operands of different backends can not be mixed on the same operation limits some interesting operations:
I understand your comment, Vicente. In my experience, the reduced error resulting from forbidding non-explicit mixing of back-ends far outweighs the potential benefits of allowing it. Of course, this is only an opinion. Others will disagree. When we discussed potential fixed-point types, you may remember my recommendation. I usually recommend:
* Do support seamless interaction with built-in types.
* Forbid implicit conversion of non-same specialized types.
* Potentially support explicit construction of one type from another.

Accordingly, we do support this:

boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
boost::multiprecision::cpp_dec_float_50 b(boost::multiprecision::cpp_dec_float_50(456) / 100);
boost::multiprecision::cpp_dec_float_50 c = boost::multiprecision::cpp_dec_float_50(a) * b;

But we do not support this:

boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
boost::multiprecision::cpp_dec_float_50 b(boost::multiprecision::cpp_dec_float_50(456) / 100);
boost::multiprecision::cpp_dec_float_50 c = a * b;
* What about replacing the second bool template parameter by an enum class expression_template {disabled, enabled}; which would be more explicit?
Not a bad idea actually, I'd like to know what others think.
I like the suggestion.
* Can we convert from a cpp_dec_float_100 to a cpp_dec_float_50? If yes, which rounding policy is applied? Do you plan to let the user configure the rounding policy?
Yes you can convert, and the rounding is currently poorly defined :-( I'll let Chris answer about rounding policies, but basically it's a whole lot of work. The aim is not to try and compete with say MPFR, but be "good enough" for most purposes. For some suitable definition of "good enough" obviously ;-)
Yes, I confirm John's statement. You can explicitly convert. Unfortunately, there simply is no support for rounding at this time in the cpp_dec_float back-end. To be quite honest, I do not have the time to work out a sensible rounding scheme for the base-10 back-end in a reasonable time schedule. One of the difficulties of base-10 is its unruly nature regarding rounding. I see the present review candidate of Boost.Multiprecision as a good start on a long-term development with many potential future improvements. In particular, my long-term goal with a potential Boost.Multiprecision is to create a greatly improved base-2 floating-point back-end in the future. At that time, I would want to target vastly improved performance, sensible allocation/deallocation, clear rounding semantics, etc. One of the advantages of John's architecture is that mp_number can be used with any back-end fulfilling the requirements. So one could potentially phase out cpp_dec_float via deprecation in the future and support a future cpp_bin_float.
If not, what about an mp_number_cast function taking a rounding policy as a parameter?
I think it would be very hard to define a coherent set of rounding policies that would be applicable to all backends... including third-party ones that haven't been thought of yet. Basically ducking that issue at present :-(
Yes, and keep ducking. We won't get it done correctly on a reasonable time scale. In my opinion, we can consider it in association with an improved BPL base-2 floating-point back-end in the future. Backwards compatibility could be achieved with a potential rounding policy of *no rounding*. MPFR, GMP, MPIR and the rest will get the rounding that they have or don't have.
* Do you plan to add constexpr and noexcept to the interface? After thinking a little, I'm wondering whether this is even possible when using third-party library backends that don't provide them.
I'm also not sure whether it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example.
If you are aiming at compile-time constant mp_numbers, then I do not believe it is possible within the low-complexity constraints specified for constexpr functions and objects. This works with state-of-the-art compilers today:

constexpr double pi = 3.14159265358979323846;

But this may not work soon, or ever:

constexpr boost::multiprecision::cpp_dec_float_50 pi("3.14159265358979323846264338327950288419716939937510582097494");
* Why can the "Non-member standard library function support" be used only with floating-point backend types? Why not with fixed-point types?
Because we don't currently have any to test this with.
I performed some rudimentary tests with Boost.Multiprecision and my own fixed-point type for microcontrollers. The results were preliminarily positive and I do see potential here. I am greatly looking forward to progress with fixed-point.
* How will the performance of mp_number<this_trivial_adaptor<float>, false> compare with float?
No idea, might be interesting to find out, will investigate.
I haven't tested it yet. It interests me as well for my research in generic numeric programming. Thank you for your detailed comments and suggestions. Best regards, Chris.

On 31/05/12 22:10, Christopher Kormanyos wrote:
Thank you for your comments, Vicente.
I have spent some hours reading the documentation. Here are some comments and a lot of questions.
Did you get a chance to use it also? You always have such good comments that we would benefit from some of your experience with use-cases.
I would like to create a backend for my fixed_point library before the review.
I have run the tests and I'm getting a lot of errors for test_float_io_cpp_dec_float

bjam toolset=clang-2.9,clang-2.9x -j2
...
Precision is: 13
Got: -987654312.0000000000000
Expected: -0.0000000008567
Testing value -8.5665356058806096939e-10
Formatting flags were: fixed showpos
Precision is: 13
Got: -987654312.0000000000000
Expected: -0.0000000008567
9360 errors detected.
EXIT STATUS: 1
====== END OUTPUT ======

and others like

darwin.compile.c++ bin/floating_point_examples.test/darwin-4.2.1/debug/floating_point_examples.o
../example/floating_point_examples.cpp:11:17: error: array: No such file or directory
../example/floating_point_examples.cpp: In function 'mp_type mysin(const mp_type&)':
../example/floating_point_examples.cpp:361: error: expected initializer before '<' token
../example/floating_point_examples.cpp:666: error: expected `}' at end of input
"g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -g -dynamic -gdwarf-2 -fexceptions -fPIC -I"../../.." -I"/Users/viboes/boost/trunk" -c -o "bin/floating_point_examples.test/darwin-4.2.1/debug/floating_point_examples.o" "../example/floating_point_examples.cpp"
* As all the classes are in the multiprecision namespace, why name the main class mp_number and not just number? typedef mp::number<mp::mpfr_float_backend<300> > my_float;
Good question :) I don't have a particularly strong view whether it's "number" or "mp_number", but would like to know what others think.
I actually *do* have a slight preference for "mp_number", with the *mp_* part. It simply reminds me that I'm actually doing something with a multiple-precision type. An experienced booster once told me that boost favors clarity over terseness. Perhaps this is a case thereof.
OK. I see.
* I think that the fact that operands of different backends cannot be mixed in the same operation rules out some interesting operations:
I understand your comment, Vicente. In my experience, the reduced error resulting from forbidding non-explicit mixing of back-ends far outweighs the potential benefits of allowing it.
Of course, this is only an opinion. Others will disagree. When we discussed potential fixed-point types, you may remember my recommendation.
I usually recommend: * Do support seamless interaction with built-in types. * Forbid implicit conversion of non-same specialized types. * Potentially support explicit construction of one type from another.
Accordingly, we do support this:
boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
boost::multiprecision::cpp_dec_float_50 b(boost::multiprecision::cpp_dec_float_50(456) / 100);
boost::multiprecision::cpp_dec_float_50 c = boost::multiprecision::cpp_dec_float_50(a) * b;
But we do not support this:
boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
boost::multiprecision::cpp_dec_float_50 b(boost::multiprecision::cpp_dec_float_50(456) / 100);
boost::multiprecision::cpp_dec_float_50 c = a * b;

This is OK, as there is no implicit conversion from cpp_dec_float_100 to cpp_dec_float_50. But I would expect

boost::multiprecision::cpp_dec_float_100 d = a * b;

to compile, but it doesn't. Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?
* Can we convert from a cpp_dec_float_100 to a cpp_dec_float_50? If yes, which rounding policy is applied? Do you plan to let the user configure the rounding policy?
Yes you can convert, and the rounding is currently poorly defined :-( I'll let Chris answer about rounding policies, but basically it's a whole lot of work. The aim is not to try and compete with say MPFR, but be "good enough" for most purposes. For some suitable definition of "good enough" obviously ;-)
Yes, I confirm John's statement. You can explicitly convert. Unfortunately, there simply is no support for rounding at this time in the cpp_dec_float back-end.
To be quite honest, I do not have the time to work out a sensible rounding scheme for the base-10 back-end in a reasonable time schedule. One of the difficulties of base-10 is its unruly nature regarding rounding.
I see the present review candidate of Boost.Multiprecision as a good start on a long-term development with many potential future improvements.
In particular, my long-term goal with a potential Boost.Multiprecision is to create a greatly improved base-2 floating-point back-end in the future. At that time, I would want to target vastly improved performance, sensible allocation/deallocation, clear rounding semantics, etc.
One of the advantages of John's architecture is that mp_number can be used with any back-end fulfilling the requirements. So one could potentially phase out cpp_dec_float via deprecation in the future and support a future cpp_bin_float.
Yes, having a common interface is a good thing and it can be obtained with different approaches. I believe the main goal of mp_number is to put the expression templates in a common class.
I understand your words as meaning that the user cannot configure the way rounding is done. But AFAIU it is valid to do a conversion, so the documentation should state which kind of rounding is applied.

using namespace boost::multiprecision;
{
  cpp_dec_float_50 a(cpp_dec_float_50(-1) / 3);
  cpp_dec_float_100 b = a;
  {
    boost::io::ios_precision_saver ifs(std::cout);
    std::cout << "setprecision(N) " << std::setprecision(50) << a << std::endl;
    std::cout << "setprecision(max_digits10)= " << std::setprecision(std::numeric_limits<cpp_dec_float_50>::max_digits10) << a << std::endl;
  }
  {
    boost::io::ios_precision_saver ifs(std::cout);
    std::cout << "setprecision(N) " << std::setprecision(100) << b << std::endl;
    std::cout << "setprecision(max_digits10)= " << std::setprecision(std::numeric_limits<cpp_dec_float_100>::max_digits10) << b << std::endl;
  }
}
std::cout << "*****************" << std::endl;
{
  cpp_dec_float_100 a(cpp_dec_float_100(-1) / 3);
  cpp_dec_float_50 b = cpp_dec_float_50(a);
  {
    boost::io::ios_precision_saver ifs(std::cout);
    std::cout << "setprecision(N) " << std::setprecision(100) << a << std::endl;
    std::cout << "setprecision(max_digits10)= " << std::setprecision(std::numeric_limits<cpp_dec_float_100>::max_digits10) << a << std::endl;
  }
  {
    boost::io::ios_precision_saver ifs(std::cout);
    std::cout << "setprecision(N) " << std::setprecision(100) << b << std::endl;
    std::cout << "setprecision(max_digits10)= " << std::setprecision(std::numeric_limits<cpp_dec_float_50>::max_digits10) << b << std::endl;
  }
}
std::cout << "*****************" << std::endl;
{
  cpp_dec_float_50 a(cpp_dec_float_50(1) / 3);
  cpp_dec_float_100 b = a;
  {
    boost::io::ios_precision_saver ifs(std::cout);
    std::cout << "setprecision(N) " << std::setprecision(50) << a << std::endl;
    std::cout << "setprecision(max_digits10)= " << std::setprecision(std::numeric_limits<cpp_dec_float_50>::max_digits10) << a << std::endl;
  }
  {
    boost::io::ios_precision_saver ifs(std::cout);
    std::cout << "setprecision(N) " << std::setprecision(100) << b << std::endl;
    std::cout << "setprecision(max_digits10)= " << std::setprecision(std::numeric_limits<cpp_dec_float_100>::max_digits10) << b << std::endl;
  }
}
std::cout << "*****************" << std::endl;
{
  cpp_dec_float_100 a(cpp_dec_float_100(1) / 3);
  cpp_dec_float_50 b = cpp_dec_float_50(a);
  {
    boost::io::ios_precision_saver ifs(std::cout);
    std::cout << "setprecision(N) " << std::setprecision(100) << a << std::endl;
    std::cout << "setprecision(max_digits10)= " << std::setprecision(std::numeric_limits<cpp_dec_float_100>::max_digits10) << a << std::endl;
  }
  {
    boost::io::ios_precision_saver ifs(std::cout);
    std::cout << "setprecision(N) " << std::setprecision(100) << b << std::endl;
    std::cout << "setprecision(max_digits10)= " << std::setprecision(std::numeric_limits<cpp_dec_float_50>::max_digits10) << b << std::endl;
  }
}

gives

setprecision(N) -0.33333333333333333333333333333333333333333333333333
setprecision(max_digits10)= -0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(N) -0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(max_digits10)= -0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
*****************
setprecision(N) -0.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(max_digits10)= -0.33333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(N) -0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(max_digits10)= -0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
*****************
setprecision(N) 0.33333333333333333333333333333333333333333333333333
setprecision(max_digits10)= 0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(N) 0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(max_digits10)= 0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
*****************
setprecision(N) 0.3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(max_digits10)= 0.33333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(N) 0.33333333333333333333333333333333333333333333333333333333333333333333333333333333
setprecision(max_digits10)= 0.33333333333333333333333333333333333333333333333333333333333333333333333333333333

so I guess the rounding is towards zero.
If not, what about a mp_number_cast function taking as parameter a rounding policy? I think it would be very hard to define a coherent set of rounding policies that were applicable to all backends... including third-party ones that haven't been thought of yet. Basically ducking that issue at present :-( Yes, and keep ducking. We won't get it done correctly in a reasonable time scale. In my opinion, we can consider it in association with an improved BPL base-2 floating-point back-end in the future. Backwards compatibility could be achieved with a potential rounding policy of *no rounding*. MPFR, GMP, MPIR and the rest will get the rounding that they have or don't have.
Or *round_to_zero* instead of *no rounding*.
* Do you plan to add constexpr and noexcept to the interface? After thinking a little bit I'm wondering whether this is possible when using third-party library backends that don't provide them? I'm also not sure if it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example. If you are aiming at compile-time constant mp_numbers, then I do not believe it is possible with the low-complexity constraints specified for constexpr functions and objects.

This works with state-of-the-art compilers today.

    constexpr double pi = 3.14159265358979323846;

But this may not work soon, or ever.

    constexpr boost::multiprecision::cpp_dec_float_50 pi("3.14159265358979323846264338327950288419716939937510582097494");

Not all operations can be defined as constexpr, but I would expect some to be.

    constexpr boost::multiprecision::cpp_dec_float_50 pi(3.14159265358979323846);

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html also proposes to have some kind of literals through some factory methods. |to_nonnegative<2884,-4>()| will produce a |nonnegative| constant with a range and resolution just sufficient to hold the value 2884*2^-4. I don't know if this kind of factory could be applicable to the current backends.
* How will the performance of mp_number<this_trivial_adaptor<float>, false> compare with float? No idea, might be interesting to find out, will investigate. I never tested it yet. It interests me as well for my research in generic numeric programming.
Let us know as soon as you have some results. Best, Vicente

Thank you once again for your detailed report, Vicente.
I have spent some hours reading the documentation. Here are some comments and a lot of questions.
Did you get a chance to use it also? You always have such good comments that we would benefit from some of your experience with use-cases.
I would like to create a backend for my fixed_point library before the review.
Wow! That was fast. It seems like just yesterday when we were preliminarily talking about your development.
I have run the tests and I'm getting a lot of errors for test_float_io_cpp_dec_float:

    bjam toolset=clang-2.9,clang-2.9x -j2

    Precision is: 13
    Got: -987654312.0000000000000
    Expected: -0.0000000008567
    Testing value -8.5665356058806096939e-10
    Formatting flags were: fixed showpos
    Precision is: 13
    Got: -987654312.0000000000000
    Expected: -0.0000000008567
    9360 errors detected.
    EXIT STATUS: 1
    ====== END OUTPUT ======
OK. This one looks like a real nugget of a find. You can never do enough testing. I just ran the tests on Visual Studio 2010 and they all pass. I don't have clang, but if we can't track this down relatively easily, I can build it.

The error report tells me a few things.

* The number is completely wrong. So it's not just a rounding thing.
* It might be related to the conversion to string.
* On the other hand, it could be related to the actual algorithms.

Before I download and build that compiler from source, could you please do two things for me?

1) Could you please zip and send the entire error text? Or if all 9360 errors are simply too big of a file, maybe the first 1000 error text messages or so. I would like to see if there is a visible trend in the kinds of numbers failing.

2) Could you please (with your compiler) create the reported number from string as cpp_dec_float_50("-8.5665356058806096939e-10")? Then simply print it out with precision(13), fixed and showpos. This helps me see if the error is merely in the printout or rather goes deeper into the class algorithms.

I used the code below and got a sensible result of -0.0000000008567.

    #include <iomanip>
    #include <iostream>

    #include <boost/multiprecision/cpp_dec_float.hpp>

    int main(int, char**)
    {
        boost::multiprecision::cpp_dec_float_50 x("-8.5665356058806096939e-10");
        std::cout << std::setprecision(13) << std::fixed << std::showpos << x << std::endl;
    }
and others like
    darwin.compile.c++ bin/floating_point_examples.test/darwin-4.2.1/debug/floating_point_examples.o
    ../example/floating_point_examples.cpp:11:17: error: array: No such file or directory
    ../example/floating_point_examples.cpp: In function 'mp_type mysin(const mp_type&)':
    ../example/floating_point_examples.cpp:361: error: expected initializer before '<' token
    ../example/floating_point_examples.cpp:666: error: expected `}' at end of input
"g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -g -dynamic -gdwarf-2 -fexceptions -fPIC -I"../../.." -I"/Users/viboes/boost/trunk" -c -o "bin/floating_point_examples.test/darwin-4.2.1/debug> /floating_point_examples.o" "../example/floating_point_examples.cpp"
These are easy to understand. It's probably *my bad* on this one because I used some of C++11 in selected samples. For example, in the polynomial expansion of my_sin(), I use std::array. In the functional derivative and integral examples, I use a C++11 anonymous (lambda) function. Please tell me if I need to avoid using C++11 in the examples due to boost policy. I would like to use these examples because they exhibit where I want to go with the language. But if boost policy demands, I can re-write for C++03.
Accordingly, we do support this:
    boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
    boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);
    boost::multiprecision::cpp_dec_float_50  c = boost::multiprecision::cpp_dec_float_50(a) * b;
But we do not support this:
    boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
    boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);
    boost::multiprecision::cpp_dec_float_50  c = a * b;
This is OK as there is no implicit conversion from cpp_dec_float_100 to cpp_dec_float_50. But I would expect

    boost::multiprecision::cpp_dec_float_100 d = a * b;

to compile, but it doesn't. Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?
Well, I guess this is simply a design choice. At this time, we decided to prohibit both narrowing as well as widening implicit conversions in binary arithmetic operations. This means that you need to explicitly convert both larger as well as smaller digit ranges in binary arithmetic operations. For example,

    boost::multiprecision::cpp_dec_float_100 d = a * boost::multiprecision::cpp_dec_float_100(b);
To be quite honest, I do not have the time to work out a sensible rounding scheme for the base-10 back-end in a reasonable time schedule. One of the difficulties of base-10 is its unruly nature regarding rounding.
I understand your words as: the user cannot configure the way rounding is done. But AFAIU it is valid to do a conversion, so the documentation should state which kind of rounding is applied.
<snip>
so I guess the rounding is towards zero.
Yes, you are right. My previous post was also misleading. The cpp_dec_float back-end does not round on operations or conversions from 50 to 100 digits, etc. It does, however, round when preparing an output string. I believe that I used round-to-zero. John has indicated a slight preference not to document these internal details.
I see the present review candidate of Boost.Multiprecision as a good start on a long-term development with many potential future improvements.
In particular, my long-term goal with a potential Boost.Multiprecision is to create a greatly improved base-2 floating-point back-end in the future. At that time, I would want to target vastly improved performance, sensible allocation/deallocation, clear rounding semantics, etc.
One of the advantages of John's architecture is that mp_number can be used with any back-end fulfilling the requirements. So one could potentially phase out cpp_dec_float via deprecation in the future and support a future cpp_bin_float.
Yes, having a common interface is a good thing and it can be obtained with different approaches. I believe the main goal of mp_number is to put the expression templates in a common class.
Yes. The whole of Boost.Multiprecision is, however, larger in its scope. It also should provide BPL big number float, integer and rational support. And as soon as we make more progress with fixed-point, then that as well. Today's collection of back-ends is sort of jumbled. But mp_number puts a generic interface on them. My future goal is to improve the floating-point back-end and also try to find any sensible unification for integer, float, rational and fixed-point. This goal does, however, go far beyond the scope of the upcoming review.
Or *round_to_zero* instead of *no rounding*. OK.
* Do you plan to add constexpr and noexcept to the interface? After thinking a little bit I'm wondering whether this is possible when using third-party library backends that don't provide them? I'm also not sure if it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example. If you are aiming at compile-time constant mp_numbers, then I do not believe it is possible with the low-complexity constraints specified for constexpr functions and objects.

This works with state-of-the-art compilers today.

    constexpr double pi = 3.14159265358979323846;

But this may not work soon, or ever.

    constexpr boost::multiprecision::cpp_dec_float_50 pi("3.14159265358979323846264338327950288419716939937510582097494");

Not all operations can be defined as constexpr, but I would expect some to be.

    constexpr boost::multiprecision::cpp_dec_float_50 pi(3.14159265358979323846);

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html also proposes to have some kind of literals through some factory methods. |to_nonnegative<2884,-4>()| will produce a |nonnegative| constant with a range and resolution just sufficient to hold the value 2884*2^-4. I don't know if this kind of factory could be applicable to the current backends.

I suspect it cannot be applied to the current back-ends because the creation of a big number from a string or a built-in type simply exceeds the low-complexity limit required for constexpr. Maybe I'm wrong here. Your idea is certainly a valid goal to look for in the future.
* How will the performance of mp_number<this_trivial_adaptor<float>, false> compare with float? No idea, might be interesting to find out, will investigate. I never tested it yet. It interests me as well for my research in generic numeric programming.
Let us know as soon as you have some results. Best, Vicente
OK. I need to find a day for the benchmark. Best regards, Chris.

On 03/06/12 18:37, Christopher Kormanyos wrote:
Thank you once again for your detailed report, Vicente.
I have spent some hours reading the documentation. Here are some comments and a lot of questions.
Did you get a chance to use it also? You always have such good comments that we would benefit from some of your experience with use-cases.

I would like to create a backend for my fixed_point library before the review.

Wow! That was fast. It seems like just yesterday when we were preliminarily talking about your development.

Hrr, unfortunately I have not implemented closed arithmetic operations on my fixed_point library, so I cannot create the backend yet.

I have run the tests and I'm getting a lot of errors for test_float_io_cpp_dec_float:

    bjam toolset=clang-2.9,clang-2.9x -j2

    Precision is: 13
    Got: -987654312.0000000000000
    Expected: -0.0000000008567
    Testing value -8.5665356058806096939e-10
    Formatting flags were: fixed showpos
    Precision is: 13
    Got: -987654312.0000000000000
    Expected: -0.0000000008567
    9360 errors detected.
    EXIT STATUS: 1
    ====== END OUTPUT ======

OK. This one looks like a real nugget of a find. You can never do enough testing. I just ran the tests on Visual Studio 2010 and they all pass. I don't have clang, but if we can't track this down relatively easily, I can build it.
The error report tells me a few things. * The number is completely wrong. So it's not just a rounding thing. * It might be related to the conversion to string. * On the other hand, it could be related to the actual algorithms.
Before I download and build that compiler from source, could you please do two things for me?
1) Could you please zip and send the entire error text? Or if all 9360 errors are simply too big of a file, maybe the first 1000 error text messages or so. I would like to see if there is a visible trend in the kinds of numbers failing.
2) Could you please (with your compiler) create the reported number from string as cpp_dec_float_50("-8.5665356058806096939e-10")? Then simply print it out with precision(13), fixed and showpos. This helps me see if the error is merely in the printout or rather goes deeper into the class algorithms.
I used the code below and got a sensible result of -0.0000000008567.
    #include <iomanip>
    #include <iostream>

    #include <boost/multiprecision/cpp_dec_float.hpp>

    int main(int, char**)
    {
        boost::multiprecision::cpp_dec_float_50 x("-8.5665356058806096939e-10");
        std::cout << std::setprecision(13) << std::fixed << std::showpos << x << std::endl;
    }

The result is

    -0.0000000008567
and others like

    darwin.compile.c++ bin/floating_point_examples.test/darwin-4.2.1/debug/floating_point_examples.o
    ../example/floating_point_examples.cpp:11:17: error: array: No such file or directory
    ../example/floating_point_examples.cpp: In function 'mp_type mysin(const mp_type&)':
    ../example/floating_point_examples.cpp:361: error: expected initializer before '<' token
    ../example/floating_point_examples.cpp:666: error: expected `}' at end of input

    "g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -g -dynamic -gdwarf-2 -fexceptions -fPIC -I"../../.." -I"/Users/viboes/boost/trunk" -c -o "bin/floating_point_examples.test/darwin-4.2.1/debug/floating_point_examples.o" "../example/floating_point_examples.cpp"

These are easy to understand. It's probably *my bad* on this one because I used some of C++11 in selected samples. For example, in the polynomial expansion of my_sin(), I use std::array. In the functional derivative and integral examples, I use a C++11 anonymous (lambda) function.
Please tell me if I need to avoid using C++11 in the examples due to boost policy. I would like to use these examples because they exhibit where I want to go with the language. But if boost policy demands, I can re-write them for C++03.

There is no issue using C++11-specific features, but the examples should either be run only in C++11 mode or compile the C++11 parts conditionally.
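Compiling the C++11 parts conditionally can be sketched like this (a hedged sketch: the __cplusplus test and the coeff_array name are illustrative assumptions, not code from the actual examples; Boost code would more likely use Boost.Config macros such as BOOST_NO_CXX11_HDR_ARRAY):

```cpp
#include <cstddef>

#if defined(__cplusplus) && (__cplusplus >= 201103L)
  // C++11 path: use std::array directly.
  #include <array>
  typedef std::array<double, 3> coeff_array;
#else
  // C++03 fallback: a minimal aggregate standing in for std::array.
  struct coeff_array
  {
      double elems[3];
      double&       operator[](std::size_t i)       { return elems[i]; }
      const double& operator[](std::size_t i) const { return elems[i]; }
  };
#endif

// The example code itself then compiles unchanged in both modes.
double sum3(const coeff_array& a) { return a[0] + a[1] + a[2]; }
```

The same pattern would let the lambda-based examples degrade to ordinary function objects under C++03.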
Accordingly, we do support this:
    boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
    boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);
    boost::multiprecision::cpp_dec_float_50  c = boost::multiprecision::cpp_dec_float_50(a) * b;
But we do not support this:
    boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
    boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);
    boost::multiprecision::cpp_dec_float_50  c = a * b;

This is OK as there is no implicit conversion from cpp_dec_float_100 to cpp_dec_float_50. But I would expect

    boost::multiprecision::cpp_dec_float_100 d = a * b;

to compile, but it doesn't. Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?

Well, I guess this is simply a design choice. At this time, we decided to prohibit both narrowing as well as widening implicit conversions in binary arithmetic operations. This means that you need to explicitly convert both larger as well as smaller digit ranges in binary arithmetic operations. For example,
boost::multiprecision::cpp_dec_float_100 d = a * boost::multiprecision::cpp_dec_float_100(b);
This does not answer my question "Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?". Do you have an idea?
To be quite honest, I do not have the time to work out a sensible rounding scheme for the base-10 back-end in a reasonable time schedule. One of the difficulties of base-10 is its unruly nature regarding rounding.

I understand your words as: the user cannot configure the way rounding is done. But AFAIU it is valid to do a conversion, so the documentation should state which kind of rounding is applied. <snip>
so I guess the rounding is towards zero.

Yes, you are right. My previous post was also misleading. The cpp_dec_float back-end does not round on operations or conversions from 50 to 100 digits, etc. It does, however, round when preparing an output string. I believe that I used round-to-zero. John has indicated a slight preference not to document these internal details.
Yes, but it rounds while converting 100 digits to 50, and this should be documented.
* Do you plan to add constexpr and noexcept to the interface? After thinking a little bit I'm wondering whether this is possible when using third-party library backends that don't provide them? I'm also not sure if it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example. If you are aiming at compile-time constant mp_numbers, then I do not believe it is possible with the low-complexity constraints specified for constexpr functions and objects.

This works with state-of-the-art compilers today.

    constexpr double pi = 3.14159265358979323846;

But this may not work soon, or ever.

    constexpr boost::multiprecision::cpp_dec_float_50 pi("3.14159265358979323846264338327950288419716939937510582097494");

Not all operations can be defined as constexpr, but I would expect some to be.

    constexpr boost::multiprecision::cpp_dec_float_50 pi(3.14159265358979323846);

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html also proposes to have some kind of literals through some factory methods. |to_nonnegative<2884,-4>()| will produce a |nonnegative| constant with a range and resolution just sufficient to hold the value 2884*2^-4. I don't know if this kind of factory could be applicable to the current backends.

I suspect it cannot be applied to the current back-ends because the creation of a big number from a string or a built-in type simply exceeds the low-complexity limit required for constexpr. Maybe I'm wrong here. Your idea is certainly a valid goal to look for in the future.
Does this mean that we cannot create a generic class mp_number that provides some of the interfaces the backend can provide? The user would then need to be able to create instances of mp_number from instances of the backend. Could the following constructor be implemented as constexpr if the move of BE is constexpr?

    constexpr mp_number<BE>::mp_number(const BE&&);

Best, Vicente

Thanks again Vicente.
I have run the tests and I'm getting a lot of errors for test_float_io_cpp_dec_float:

    bjam toolset=clang-2.9,clang-2.9x -j2

    Precision is: 13
    Got: -987654312.0000000000000
    Expected: -0.0000000008567
    Testing value -8.5665356058806096939e-10
    Formatting flags were: fixed showpos
    Precision is: 13
    Got: -987654312.0000000000000
    Expected: -0.0000000008567
    9360 errors detected.
    EXIT STATUS: 1
    ====== END OUTPUT ======

OK. This one looks like a real nugget of a find. You can never do enough testing. I just ran the tests on Visual Studio 2010 and they all pass. I don't have clang, but if we can't track this down relatively easily, I can build it.
2) Could you please (with your compiler) create the reported number from string as cpp_dec_float_50("-8.5665356058806096939e-10")? Then simply print it out with precision(13), fixed and showpos. This helps me see if the error is merely in the printout or rather goes deeper into the class algorithms.
The result is -0.0000000008567
This is an issue of great concern. So the small number above can be successfully created and printed. Something else must be going on. I'll bet there is an initialization issue somewhere. We tested with GCC and MSVC. Clang does something different (either rightly or wrongly). You know, the number 12345678 is frequently used in the float I/O test case. And this number appears in a corrupted form in the error report. I wonder if clang has a different opinion on the default initialization of boost::array? That's how the limbs in cpp_dec_float are stored. Could you please send more examples of test case error reports that you receive? I really need to see more examples of the error text before I break down and get this other compiler running. <snip>
Accordingly, we do support this:
    boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
    boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);
    boost::multiprecision::cpp_dec_float_50  c = boost::multiprecision::cpp_dec_float_50(a) * b;
But we do not support this:
    boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
    boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);
    boost::multiprecision::cpp_dec_float_50  c = a * b;
This is OK as there is no implicit conversion from cpp_dec_float_100 to cpp_dec_float_50. But I would expect

    boost::multiprecision::cpp_dec_float_100 d = a * b;

to compile, but it doesn't. Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?
Well, I guess this is simply a design choice. At this time, we decided to prohibit both narrowing as well as widening implicit conversion of binary arithmetic operations. This means that you need to explicitly convert both larger as well as smaller digit ranges in binary arithmetic operations. For example,
boost::multiprecision::cpp_dec_float_100 d = a * boost::multiprecision::cpp_dec_float_100(b);
This does not answer my question "Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?". Do you have an idea?
I think we simply have different interpretations of explicit and implicit construction here. Perhaps mine will be wrong.

Basically, you have d = a * b, where both d and a have 100 digits, and b has 50 digits. The problem is with the (a * b) part of the expression. When computing (a * b), we are asking the compiler to do binary math such as

    "cpp_dec_float_100" = "cpp_dec_float_50" * "cpp_dec_float_100"

Your expectation is quite reasonable. I mean, everything like this works.

    "std::uint32_t" = "std::uint16_t" * "std::uint32_t"

But in fact, to support your desired conversion, we would have to write global template operators for mixed-digit binary arithmetic. Like this:

    template <unsigned digits_left, unsigned digits_right>
    cpp_dec_float<digits_left> operator+(const cpp_dec_float<digits_left>& left,
                                         const cpp_dec_float<digits_right>& right)
    {
        return cpp_dec_float<digits_left>(left) += cpp_dec_float<digits_left>(right);
    }

There is certainly no technical reason not to do this, even though we would dread the typing. The fact of the matter is, we simply decided not to support these operators. <snip>
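The situation can be reproduced in miniature with a self-contained toy (a hedged analogue: fixed_digits<N> is a hypothetical stand-in for cpp_dec_float<N> that stores a double instead of N decimal digits):

```cpp
#include <cassert>

// Toy stand-in for cpp_dec_float<N>: the precision lives only in the type.
template <unsigned Digits>
struct fixed_digits
{
    double v;
    explicit fixed_digits(double x = 0.0) : v(x) {}

    // Cross-precision conversion must be requested explicitly,
    // mirroring the library's design choice.
    template <unsigned Other>
    explicit fixed_digits(const fixed_digits<Other>& other) : v(other.v) {}

    fixed_digits& operator+=(const fixed_digits& other) { v += other.v; return *this; }
};

// Without a mixed-digit template operator like this one, an expression such
// as fixed_digits<100>() + fixed_digits<50>() simply fails to compile --
// the same situation as with cpp_dec_float_100 and cpp_dec_float_50.
template <unsigned L, unsigned R>
fixed_digits<L> operator+(const fixed_digits<L>& left, const fixed_digits<R>& right)
{
    fixed_digits<L> result(left);
    result += fixed_digits<L>(right);
    return result;
}
```

Because fixed_digits<100> and fixed_digits<50> are distinct types, no implicit conversion is ever consulted for the operator's arguments; only an explicitly written mixed-precision template overload makes the expression well-formed.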
To be quite honest, I do not have the time to work out a sensible rounding scheme for the base-10 back-end in a reasonable time schedule. One of the difficulties of base-10 is its unruly nature regarding rounding.
I understand your words as: the user cannot configure the way rounding is done. But AFAIU it is valid to do a conversion, so the documentation should state which kind of rounding is applied. <snip>
so I guess the rounding is towards zero.

Yes, you are right. My previous post was also misleading. The cpp_dec_float back-end does not round on operations or conversions from 50 to 100 digits, etc. It does, however, round when preparing an output string. I believe that I used round-to-zero. John has indicated a slight preference not to document these internal details.
Yes, but it rounds while converting 100 digits to 50, and this should be documented.
No. I wish we *would* already have rounding in these cases. But we don't! In fact, the class constructor has a TODO-style comment indicating that it does not round. Conversion to another digit range does not round. Since rounding occurs when printing the number, and because there are guard digits, you must be experiencing the "illusion" of rounding via a rounded printout.
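The distinction being made can be sketched with plain digit strings (a hedged illustration, not library code: truncate_digits and round_digits are hypothetical helpers). Dropping trailing digits is round-toward-zero by definition, while true rounding would inspect the first dropped digit; with a value like 1/3, whose digits are all 3, the two give the same answer, which is why a printout can create the illusion of rounding.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Keep the first n digits, drop the rest: round-toward-zero (truncation).
std::string truncate_digits(const std::string& digits, std::size_t n)
{
    return digits.substr(0, n);
}

// Simplified round-half-up on the digit string (no carry propagation
// across '9' -- enough for this illustration).
std::string round_digits(const std::string& digits, std::size_t n)
{
    std::string kept = digits.substr(0, n);
    if (n < digits.size() && digits[n] >= '5' && kept[n - 1] != '9')
        ++kept[n - 1];
    return kept;
}
```

For the digits of 2/3 ("666...") truncation and rounding differ in the last kept digit, but for the digits of 1/3 ("333...") they coincide, matching the output dumps earlier in the thread.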
* Do you plan to add constexpr and noexcept to the interface? After thinking a little bit I'm wondering whether this is possible when using third-party library backends that don't provide them? I'm also not sure if it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example. If you are aiming at compile-time constant mp_numbers, then I do not believe it is possible with the low-complexity constraints specified for constexpr functions and objects.

This works with state-of-the-art compilers today.

    constexpr double pi = 3.14159265358979323846;

But this may not work soon, or ever.

    constexpr boost::multiprecision::cpp_dec_float_50 pi("3.14159265358979323846264338327950288419716939937510582097494");

Not all operations can be defined as constexpr, but I would expect some to be.

    constexpr boost::multiprecision::cpp_dec_float_50 pi(3.14159265358979323846);

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html also proposes to have some kind of literals through some factory methods. |to_nonnegative<2884,-4>()| will produce a |nonnegative| constant with a range and resolution just sufficient to hold the value 2884*2^-4. I don't know if this kind of factory could be applicable to the current backends.

I suspect it cannot be applied to the current back-ends because the creation of a big number from a string or a built-in type simply exceeds the low-complexity limit required for constexpr. Maybe I'm wrong here. Your idea is certainly a valid goal to look for in the future.
Does this mean that we cannot create a generic class mp_number that provides some of the interfaces the backend can provide? The user would then need to be able to create instances of mp_number from instances of the backend.
Could the following constructor be implemented as constexpr if the move of BE is constexpr?
constexpr mp_number<BE>::mp_number(const BE&&); Best, Vicente
Ahhhhh! I see! You are already thinking about high-level optimizations in your work with fixed-point. Finally I am starting to get your point. Now that I get it, I don't know because I don't know enough about C++11 constexpr to fully comprehend and answer your question. Your question remains open from my side. Best regards, Chris.
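For what it is worth, the question can be explored with a toy (a hedged sketch: trivial_backend and number_wrapper are hypothetical stand-ins for a literal-type back-end BE and mp_number<BE>; whether any real back-end can be a literal type is exactly the open question here):

```cpp
#include <cassert>

// Hypothetical back-end that happens to be a C++11 literal type.
struct trivial_backend
{
    double data;
    constexpr trivial_backend(double d) : data(d) {}
};

// Hypothetical stand-in for mp_number<BE>: if construction of BE is
// constexpr, the wrapper's constructor can be constexpr too.
template <class BE>
struct number_wrapper
{
    BE backend;
    constexpr number_wrapper(const BE& be) : backend(be) {}
};

// A compile-time constant built through the wrapper.
constexpr number_wrapper<trivial_backend> half(trivial_backend(0.5));
static_assert(half.backend.data == 0.5, "compile-time constant expected");
```

So in principle the constexpr constructor Vicente proposes is possible, but only for back-ends that are themselves literal types, which rules out allocator-based and string-initialized back-ends.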

On 03/06/12 21:24, Christopher Kormanyos wrote:
Thanks again Vicente.
I have run the test and I'm getting a lot of errors for
2) Could you please (with your compiler) create the reported number from string as cpp_dec_float_50("-8.5665356058806096939e-10")? Then simply print it out with precision(13), fixed and showpos. This helps me see if the error is merely in the printout or rather goes deeper into the class algorithms.

The result is

    -0.0000000008567

This is an issue of great concern. So the small number above can be successfully created and printed. Something else must be going on. I'll bet there is an initialization issue somewhere. We tested with GCC and MSVC. Clang does something different (either rightly or wrongly).
You know, the number 12345678 is frequently used in the float I/O test case. And this number appears in a corrupted form in the error report. I wonder if clang has a different opinion on the default initialization of boost::array? That's how the limbs in cpp_dec_float are stored.
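The suspected pitfall can be shown with std::array (a hedged sketch; boost::array behaves the same way, and zeroed_limbs is an illustrative helper, not library code): a default-initialized array of built-in types has indeterminate contents, while value-initialization zeroes every element. If the limbs were read before being written, corrupted numbers like those in the log would follow.

```cpp
#include <array>
#include <cassert>

// std::array<unsigned, 4> limbs;        // default-init: indeterminate values
//                                       // (reading them is undefined behavior)
std::array<unsigned, 4> zeroed_limbs()
{
    std::array<unsigned, 4> limbs = {};  // value-init: every element is zero
    return limbs;
}
```

Since reading indeterminate values is undefined behavior, a compiler difference here would be "rightly" on clang's part: code relying on zeroed default-initialized limbs works only by accident.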
Could you please send more examples of test case error reports that you receive? I really need to see more examples of the error text before I break down and get this other compiler running.
I have committed a log file error.txt:

    svn ci -m "Multiprecision: added error log"
    Adding         test/error.txt
    Transmitting file data .
    Committed revision 78803.
    pc3:test viboes$ pwd
    /sand/big_number/libs/multiprecision/test

Sorry for the unorthodox transfer. Please feel free to remove it :-)
<snip>
Well, I guess this is simply a design choice. At this time, we decided to prohibit both narrowing as well as widening implicit conversion of binary arithmetic operations. This means that you need to explicitly convert both larger as well as smaller digit ranges in binary arithmetic operations. For example,
    boost::multiprecision::cpp_dec_float_100 d = a * boost::multiprecision::cpp_dec_float_100(b);

This does not answer my question "Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?". Do you have an idea?

I think we simply have different interpretations of explicit and implicit construction here. Perhaps mine will be wrong.
Basically, you have d = a * b, where both d and b have 100 digits, and a has 50 digits. The problem is with the (a * b) part of the expression. When computing (a * b), we are asking the compiler to do binary math such as
"cpp_dec_float_100" = "cpp_dec_float_50" * "cpp_dec_float_100"
I would expect the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 to be found here, but I don't know why the compiler doesn't find it.
Your expectation is quite reasonable. I mean everything like this works.
"std::uint32_t" = "std::uint16_t" * "std::uint32_t"
Yes, but it rounds while converting 100 digits to 50, and this should be documented. No. I wish we *would* already have rounding in these cases. But we don't! In fact, the class constructor has a TODO-style comment indicating that it does not round.
Well, I don't know how a type that has fewer bytes to store the result could convert without rounding. Maybe you mean that you have just taken the first 50 digits, but this is a kind of rounding.
Conversion to another digit range does not round. Since rounding occurs when printing the number, and because there are guard digits, you must be experiencing the "illusion" of rounding via a rounded printout.
Hmm, I must be missing something trivial :( Best, Vicente

Could you please send more examples of test case error
reports that you receive? I really need to see more examples of the error text before I break down and get this other compiler running.
I have committed a log file error.txt under
svn ci -m "Multiprecision: added error log"
Adding test/error.txt
Transmitting file data .
Committed revision 78803.
pc3:test viboes$ pwd
/sand/big_number/libs/multiprecision/test
Thanks. I'll take a look soon. <snip>
I would expect the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 to be found here, but I don't know why the compiler doesn't find it.
Because cpp_dec_float_50 and cpp_dec_float_100 are *different types*. The compiler does not find binary arithmetic involving these two different types because it does not exist. We decided not to support it. This design choice has both advantages as well as disadvantages. In our opinion, the advantages outweigh the disadvantages. <snip>
Yes, but it rounds while converting 100 digits to 50, and this should be documented.
No. I wish we *would* already have rounding in these cases. But we don't! In fact, the class constructor has a TODO-style comment indicating that it does not round.
Well, I don't know how a type that has fewer bytes to store the result could convert without rounding. Maybe you mean that you have just taken the first 50 digits, but this is a kind of rounding.
Yes, that's exactly what I mean! My implementation takes the first 50 digits plus the guard digits of the 100 digit type and copies these to the 50 digit type. I don't call that rounding. But you do. Sorry about the confusion.
Conversion to another digit range does not round. Since rounding occurs when printing the number, and because there are guard digits, you must be experiencing the "illusion" of rounding via a rounded printout.
Hmm, I must be missing something trivial :( Best, Vicente
No, you are right. Rather, I did not understand your questions. Based on your continued clarifications and my previous misunderstanding, you have got it. For cpp_dec_float, rounding is truncation. The exception is conversion of cpp_dec_float to string, which rounds. But I forget whether it rounds toward zero, to nearest, or toward infinity. I will look it up and tell you. Best regards, Chris.

I would expect the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 to be found here, but I don't know why the compiler doesn't find it. Because cpp_dec_float_50 and cpp_dec_float_100 are *different types*. The compiler does not find binary arithmetic involving these two different types because it does not exist. We decided not to support it.
This design choice has both advantages as well as disadvantages. In our opinion, the advantages outweigh the disadvantages.
<snip>
Yes, but it rounds while converting 100 digits to 50, and this should be documented. No. I wish we *would* already have rounding in these cases. But we don't! In fact, the class constructor has a TODO-style comment indicating that it does not round. Well, I don't know how a type that has fewer bytes to store the result could convert without rounding. Maybe you mean that you have just taken the first 50 digits, but this is a kind of rounding. Yes, that's exactly what I mean!
My implementation takes the first 50 digits plus the guard digits of the 100 digit type and copies these to the 50 digit type. I don't call that rounding. But you do. Sorry about the confusion.
Conversion to another digit range does not round. Since rounding occurs when printing the number, and because there are guard digits, you must be experiencing the "illusion" of rounding via a rounded printout. Hmm, I must be missing something trivial :(
On 04/06/12 01:09, Christopher Kormanyos wrote:
No, you are right. Rather, I did not understand your questions. Based on your continued clarifications and my previous misunderstanding, you have got it. For cpp_dec_float, rounding is truncation.
No, I was wrong and was still missing something important. The example I was using made me think that rounding was done because all the digits were fractional, but if they were integral the operation would be a truncation keeping the most significant digits.
The exception is conversion of cpp_dec_float to string, which rounds. But I forgot if it rounds toward zero, nearest or infinity. I will look it up and tell you.
Sorry for the noise. Vicente

I have run the test and I'm getting a lot of errors for test_float_io_cpp_dec_float
bjam toolset=clang-2.9,clang-2.9x -j2
Works for me with clang SVN trunk on Ubuntu x86 Linux (except for one internal compiler error in another test). John.
I am trying to investigate the issue with the float I/O test that Vicente reported. But I am stuck. Preliminary: I have not used clang before and I'm a beginner with building boost. I started my Ubuntu x86_64 machine and checked out the trunk. I can successfully build trunk with b2 and GCC 4.5. I can also successfully run the multiprecision tests with GCC 4.5. I got the x86_64-linux-gnu clang 2.9 binaries and unpacked the tarball. Then I added the clang bin directory to the path. Then I try to run the multiprecision tests with the clang toolchain. But even though clang 2.9 gets invoked, it can't find all of the C++ standard library. It seems to default to using the Linux default includes and tries to use GCC's library headers. I'm real dumb. My apologies. But can I get clang running in a short time frame to try to reproduce Vicente's error report? How can this compiler find its C++ standard library and STL? Thanks for any help. Best regards, Chris.

I am trying to investigate the issue with float I/O test that Vicente reported. But I am stuck.
Preliminary: I have not used clang before and I'm a beginner with building boost.
I started my Ubuntu x86_64 machine and checked out the trunk. I can successfully build trunk with b2 and GCC 4.5. I can also successfully run the multiprecision tests with GCC 4.5.
But even though clang 2.9 gets invoked, it can't find all of the C++ standard library. It seems to default to using the Linux default includes and tries to use GCC's library headers.
I believe clang does use GCC's std lib headers on Linux - it always has for me - but like you I know almost nothing about Clang. This may be the problem actually - on the Mac, I believe Clang uses libc++ as the standard lib, but that's only supported on the Mac :-( John.

* How will the performance of mp_number<this_trivial_adaptor<float>, false> compare with float? No idea, might be interesting to find out, will investigate. I haven't tested it yet. It interests me as well for my research in generic numeric programming.
Let us know as soon as you have some results.
Tested the time taken to run through all the Boost.Math Bessel function tests, with type double, real_concept (a simple wrapper around double from Boost.Math's test suite) and mp_number<float_backend<double> > (a trivial mp_number backend that wraps a floating point type):
Time for double = 0.0157549 seconds
Time for real_concept = 0.0166421 seconds
Time for float_backend<double> = 0.0408635 seconds
Time for float_backend<double> - no expression templates = 0.0253047 seconds
So as you can see, expression templates are a dis-optimization for such simple types, and even without them you take a noticeable hit. I'm a little disappointed by this, but not that surprised; mp_number is a rather heavyweight solution for lightweight types, if you see what I mean. In any case I'll have a look at the assembly to see if there are any obvious non-optimizations going on. John.

On 31/05/12 20:17, John Maddock wrote:
* I think that the fact that operands of different backends cannot be mixed in the same operation limits some interesting operations:
I would expect the result of unary operator-() to always be signed. Is this operation defined for signed backends?
It is, but I'm not sure it's useful.
I can't find it now in the documentation for mp_number, nor in the code. Could you point me to where it is defined?
Currently there's only one unsigned backend, and it does the equivalent of a two's complement negate - i.e. unary minus is equivalent to (~i + 1). It does this because it is used to implement some of the operations (at both frontend and backend level), so it's hard to change. It might be possible to poison the unary minus operator at the top level so it doesn't compile for unsigned integer types, but I'd have to investigate that.
Basically unsigned types are frankly horrible :(
I agree, but they are also quite useful ;-)
I would expect the result of binary operator-() to always be signed. Is this operation defined for signed backends? What is the behavior of mp_uint128_t(0) - mp_uint128_t(1)?
It's an mp_uint128_t, and the result is the same as you would get for a built-in 128-bit unsigned type that does 2's complement arithmetic. This is intentional, as the intended use for fixed precision cpp_int's is as a replacement for built-in types.
I can understand that you want the class cpp_int to behave like the built-in types, but I can also understand that others expect a high-level numeric class not to suffer from the inconveniences the built-in types suffer, and to be closer to the mathematical model. I expected mp_number to accommodate these different expectations using different backends, but maybe my expectations are wrong.
It would be great if the tutorial could show that it is nevertheless possible to add an mp_uint128_t and an mp_int256_t, or isn't it possible? I guess this is possible, but a conversion is needed before adding the operands. I don't know whether this behavior hides some possible optimizations.
Not currently possible (compiler error). Why? Isn't mp_uint128_t convertible to mp_int256_t?
I thought about mixed operations early on and decided it was such a can of worms that I wouldn't go there at this time. Basically there are enough design issues to argue about already ;-) Such as?
One option would be to have a further review for that specific issue at a later date. Maybe this is a good compromise.
However, consider this: in almost any non-trivial scenario I can think of, if mixed operations are allowed, then expression template enabled operations will yield a different result to non-expression template operations. Why? Could you clarify?
In fact it's basically impossible for the user to reason about what expression templates might do in the face of mixed precision operations, and when/if promotions might occur. For that reason I'm basically against them, even if, as you say, it might allow for some optimisations in some cases. It is not only an optimization matter. When working with fixed precision it is important to know the precision of the result type of an arithmetic operation, so that no information is lost through overflow or loss of resolution.
* What about replacing the second bool template parameter by an enum class expression_template {disabled, enabled}; which would be more explicit? That is,
typedef mp::mp_number<mp::mpfr_float_backend<300>, false> my_float;
versus
typedef mp::mp_number<mp::mpfr_float_backend<300>, mp::expression_template::disabled> my_float;
Not a bad idea actually, I'd like to know what others think. The same applies to the sign.
* Why cpp_dec_float doesn't have a template parameter to give the integral digits? or as the C++ standard proposal from Lawrence Crow (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html), take the range and resolution as template parameters?
I don't understand, how is that different from the number of decimal digits? Oh, I get it now. The decimal digits concern the whole mantissa and not just the digits of the fractional part?
* What about adding Throws specification on the mp_number and backend requirements operations documentation?
Well, mostly it would be empty ;-) But yes, there are a few situations where throwing is acceptable, but it's never a requirement. By empty, do you mean that the operation throws nothing? If yes, this is an important feature and/or requirement.
* Can the user define a backend for fixed int types that needs to manage with overflow?
For sure, just flag an error (throw, for example) for any operation that overflows. I guess then that most of the operations could throw if the backend throws, so the Throws specification should take care of this.
* Why bit_set is a free function?
Why not?
At the time, that seemed the natural way to go, but now you mention it I guess it could be an enable_if'ed member function.
I guess I have no strong views either way. I just wanted to know if there were some strong reasons. An alternative could be to follow the std::bitset<> or dynamic_bitset interfaces.
* I don't see anything about overflow for the cpp_dec_float backend operations. I guess it is up to the user to avoid overflow, as for integers. What would be the result on overflow? Could this be added to the documentation?
It supports infinities and NaNs - this should be mentioned somewhere, but I'll add it to the reference section. So basically the behaviour is the same as for double/float/long double. OK. I see.
* can we convert from a cpp_dec_float_100 to a cpp_dec_float_50? if yes, which rounding policy is applied? Do you plan to let the user configure the rounding policy?
Yes you can convert, and the rounding is currently poorly defined :-(
I'll let Chris answer about rounding policies, but basically it's a whole lot of work. The aim is not to try and compete with say MPFR, but be "good enough" for most purposes. For some suitable definition of "good enough" obviously ;-)
BTW, I see in the reference "Type mp_number is default constructible, and both copy constructible and assignable from: ... Any type that the Backend is constructible or assignable from. " I would expect to have this information in some way on the tutorial.
It should be in the "Constructing and Interconverting Between Number Types" section of the tutorial, but will check. I didn't find it there.
If not, what about a mp_number_cast function taking as parameter a rounding policy?
I think it would be very hard to define a coherent set of rounding policies that were applicable to all backends... including third-party ones that haven't been thought of yet. Basically ducking that issue at present :-( Could we expect this as an improvement in future releases?
* Does the cpp_dec_float back end satisfies any of the Optional Requirements? The same question for the other backends?
Yes, but it's irrelevant / an implementation detail. The optional requirements are there for optimisations; the user shouldn't be able to detect which ones a backend chooses to support. Even the conversion constructors?
* Is there a difference between implicit and explicit construction?
Not currently. So, I guess that only implicit construction is supported. I really think that mp_number should provide both constructors if the backend provides them.
* On C++11 compilers providing explicit conversion, couldn't the convert_to function be replaced by an explicit conversion operator?
I don't know, I'd have to think about that. What compilers support that now? GCC and Clang at least. Does MSVC 11?
* Are implicit conversion possible?
To an mp_number, never from it. Do you mean that there is no implicit conversion from mp_number to a builtin type?
* Do you plan to add constexpr and noexcept to the interface? After thinking a little bit, I'm wondering if this is possible when using third-party library backends that don't provide them.
I'm also not sure if it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example. It depends on the backend. But construction from builtins and most of the arithmetic operations could be constexpr.
Note that noexcept is a different matter.
* Why do you allow the argument of left and right shift operations to be signed and throw an exception when negative? Why don't just forbid it for signed types?
Good question, although:
* I think it's pretty common to write "mynumber << 4" and expect it to compile.
Is it so hard to write "mynumber << 4u"?
* I don't want implicit conversions from signed to unsigned in this case, as it can lead to hard-to-track-down errors if the signed value really is negative. I agree. Unsigned can be converted to signed, but not the opposite.
* Why the "Non-member standard library function support" can be used only with floating-point Backend types? Why not with fixed-point types?
Because we don't currently have any to test this with. Well, you could change the documentation to say just that.
Is supporting pow or exp with a fixed-point type a good idea? I don't know if they will be used often. I'm currently working on a logarithm for fixed-point types.
* Why have you chosen the following requirements for the backend?
- negate instead of operator-()
- eval_op instead of operator op=()
- eval_convert_to instead of explicit operator T()
- eval_floor instead of floor
* Non-member functions are required if defaults are to be provided for the optional requirements.
* There are some non-members that can't be written as overloaded non-member operators but can be named free functions (sorry, I forget which ones, but I remember seeing one or two along the way).
* Explicit conversions aren't well supported at present.
* Compiler bug workaround (older GCC versions); there's a note in the requirements section: "The non-member functions are all named with an "eval_" prefix to avoid conflicts with template classes of the same name - in point of fact this naming convention shouldn't be necessary, but rather works around some compiler bugs."
OK. Now I understand why.
Optimization? Is this optimization valid for short types (e.g. up to 4/8 bytes)?
What optimisation? I thought this was related to optimization of the expression templates. * Or could the library provide a trivial backend adaptor that requires the backend to provide just the usual operations instead of the eval_xxx functions?
There is such a backend (undocumented) in SVN - it's called arithmetic_backend.
However it's not nearly as useful as you might think - there are still a bunch of things that have to be written specifically for each backend type. That's why it's not part of the library submission. I will take a look. * And last, I don't see anything related to rvalue references and move semantics. Have you analyzed whether their use could improve the performance of the library?
Analyzed no, but rvalue references are supported for copying if the backend also supports it.
I do seem to recall seeing different compilers which both claim to support rvalue refs doing different things with the code though - if I remember rightly gcc is much more willing to use rvalue based move semantics than VC++.
I don't know why, but I thought rvalue references and move semantics should help to optimize in this domain. Maybe some experts could say a word on this. Best, Vicente

<snip>
* Why cpp_dec_float doesn't have a template parameter to give the integral digits? or as the C++ standard proposal from Lawrence Crow (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html), take the range and resolution as template parameters?
I don't understand, how is that different from the number of decimal digits? Oh, I get it now. The decimal digits concern the whole mantissa and not just the digits of the fractional part?
<snip>
Best, Vicente
Actually, I believe there might be a subtle difference. Crow's paper describes fixed-point types. In my opinion, a fixed-point number is a specialized real data type, optionally signed, that has a fixed number of digits before the decimal point and another fixed number of digits after the decimal point. The cpp_dec_float back-end, however, more closely emulates a floating-point type rather than a fixed-point type---even though the internal storage and manipulation mechanisms are quite similar for both. So, in fact, the number of decimal digits of precision of a cpp_dec_float is for the *whole thing*, including both the integer and fractional parts of the mantissa. There is also an exponent field, but this does not count toward the precision. So when you create the cpp_dec_float_50 representation of (1 / 3), you get 50 total digits of precision. In this case, this means that there are 50 individual threes after the decimal point (plus so-called *guard* digits if you really dig deep into it). For example,
using boost::multiprecision::cpp_dec_float_50;
// 0.33333 33333 33333 33333 33333 33333 33333 33333 33333 33333
const cpp_dec_float_50 third(cpp_dec_float_50(1) / 3);
Thanks for your comments. Best regards, Chris.

* I think that the fact that operands of different backends cannot be mixed in the same operation limits some interesting operations:
I would expect the result of unary operator-() to always be signed. Is this operation defined for signed backends?
It is, but I'm not sure it's useful. I can't find it now in the documentation for mp_number, nor in the code. Could you point me to where it is defined?
In code? mp_number_base.cpp#583. In the docs, looks like I missed it :-( Will add (likewise unary +).
It's an mp_uint128_t, and the result is the same as you would get for a built-in 128-bit unsigned type that does 2's complement arithmetic. This is intentional, as the intended use for fixed precision cpp_int's is as a replacement for built-in types. I can understand that you want the class cpp_int to behave like the built-in types, but I can also understand that others expect a high-level numeric class not to suffer from the inconveniences the built-in types suffer, and to be closer to the mathematical model. I expected mp_number to accommodate these different expectations using different backends, but maybe my expectations are wrong.
There are a lot of different behaviours possible and only a limited amount of time :-( At this stage cpp_int is intended to be a basic "proof of principle" implementation - useful, but it doesn't provide everything that could be done.
It would be great if the tutorial could show that it is nevertheless possible to add an mp_uint128_t and an mp_int256_t, or isn't it possible? I guess this is possible, but a conversion is needed before adding the operands. I don't know whether this behavior hides some possible optimizations.
Not currently possible (compiler error).
Why? Isn't mp_uint128_t convertible to mp_int256_t?
Because we deliberately chose not to provide it. On a technical level the code looks like:
template <class Backend>
some-return-type operator+(const mp_number<Backend>&, const mp_number<Backend>&);
So the operator overload cannot be deduced for differing backends (template argument deduction does not consider conversions). Interestingly, had these been non-templates then it would have worked as you expected (the conversion would have been found).
I thought about mixed operations early on and decided it was such a can of worms that I wouldn't go there at this time. Basically there are enough design issues to argue about already ;-)
As for example?
Well, you've raised quite a few ;-) Interface, naming conventions, expression templates, scope....
However, consider this: in almost any non-trivial scenario I can think of, if mixed operations are allowed, then expression template enabled operations will yield a different result to non-expression template operations. Why? Could you clarify?
Consider the Horner polynomial evaluation example:
a = (c1 * x + c2) * x + c3;
Expression templates transform this into:
a = c1 * x; // evaluated in place, using "a" as temporary storage if required
a += c2;
a *= x;
a += c3;
Now suppose that the constants cN, x and a all have different precisions. Rounding will change depending on whether you evaluate using temporaries as "(c1 * x + c2) * x + c3" and then assign (and possibly round) to "a", or evaluate in place as above.
In fact it's basically impossible for the user to reason about what expression templates might do in the face of mixed precision operations, and when/if promotions might occur. For that reason I'm basically against them, even if, as you say, it might allow for some optimisations in some cases. It is not only an optimization matter. When working with fixed precision it is important to know the precision of the result type of an arithmetic operation, so that no information is lost through overflow or loss of resolution.
Right. So make sure you're using the correct type. Basically we're saying "if you want to do mixed precision arithmetic, then you have to decide (by using casts) which of the precisions is the correct one, and specify that explicitly in the code".
I don't understand, how is that different from the number of decimal digits? Oh I got it now. the decimal digits concern the mantissa and not the digits of the fractional part?
Yes, all the digits in the mantissa, both the whole and fractional parts. Remember this is *floating* point, not fixed point.
* What about adding Throws specification on the mp_number and backend requirements operations documentation?
Well, mostly it would be empty ;-) But yes, there are a few situations where throwing is acceptable, but it's never a requirement. By empty, do you mean that the operation throws nothing? If yes, this is an important feature and/or requirement.
No, I mean we have nothing to say about it - the front end doesn't *require* that the backend do any particular thing, throw or not throw. It's up to the backend to decide if throwing is or is not appropriate.
BTW, I see in the reference "Type mp_number is default constructible, and both copy constructible and assignable from: ... Any type that the Backend is constructible or assignable from. " I would expect to have this information in some way on the tutorial.
It should be in the "Constructing and Interconverting Between Number Types" section of the tutorial, but will check. I didn't find it there.
It's rather brief, but it's the last item:
"Other interconversions may be allowed as special cases, whenever the backend allows it:
mpf_t m; // Native GMP type.
mpf_init_set_ui(m, 0); // set to a value;
mpf_float i(m); // copies the value of the native type."
There's more specifics in the tutorial for each backend; for example the mpfr section has:
"As well as the usual conversions from arithmetic and string types, instances of mp_number<mpfr_float_backend<N> > are copy constructible and assignable from:
The GMP native types mpf_t, mpz_t, mpq_t.
The MPFR native type mpfr_t.
The mp_number wrappers around those types: mp_number<mpfr_float_backend<M> >, mp_number<mpf_float<M> >, mp_number<gmp_int>, mp_number<gmp_rational>."
If not, what about a mp_number_cast function taking as parameter a rounding policy?
I think it would be very hard to define a coherent set of rounding policies that were applicable to all backends... including third-party ones that haven't been thought of yet. Basically ducking that issue at present :-( Could we expect this as an improvement in future releases?
Maybe ;-) I hope we can improve cpp_dec_float at some point, a coherent interface for all types I suspect may elude us as we can't force a particular backend that's mostly third party implemented to follow some model. It would be pretty hard to impose a rounding model on gmp's mpf_t for example - short of reimplementing mpfr - and I can't see us ever doing that!
Yes, but it's irrelevant / an implementation detail. The optional requirements are there for optimisations, the user shouldn't be able to detect which ones a backend choses to support. Even the conversion constructors?
OK, you got me on those; I was thinking of, say, the various eval_add / eval_multiply / eval_subtract overloads which are just there to optimise certain operations.
* Is there a difference between implicit and explicit construction?
Not currently. So, I guess that only implicit construction is supported. I really think that mp_number should provide both constructors if the backend provides them.
Yes, it's now on the list of things to try and implement - I want to keep the code pretty stable at present though, as the review is imminent.
I don't know, I'd have to think about that. What compilers support that now? GCC and Clang at least. Does MSVC 11?
I don't think so.
* Are implicit conversion possible?
To an mp_number, never from it. Do you mean that there is no implicit conversion from mp_number to a builtin type?
Correct, that would be unsafe and surprising IMO.
* Do you plan to add constexpr and noexcept to the interface? After thinking a little bit, I'm wondering if this is possible when using third-party library backends that don't provide them.
I'm also not sure if it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example. It depends on the backend. But construction from builtins and most of the arithmetic operations could be constexpr.
I'd have to think about that and experiment (it's fixing up the internals to be constexpr-safe that could be tricky). The expression template arithmetic ops could never be constexpr; possibly the non-expression template ones could be though.
Good question, although:
* I think it's pretty common to write "mynumber << 4" and expect it to compile. Is it so hard to write "mynumber << 4u"?
Hard no, surprising that you have to do that, yes. I can just hear the support requests coming in now...
* Why the "Non-member standard library function support" can be used only with floating-point Backend types? Why not with fixed-point types?
Because we don't currently have any to test this with. Well, you could change the documentation to say just that.
OK. Regards, John.

On Mon, Jun 4, 2012 at 1:25 AM, John Maddock <boost.regex@virgin.net> wrote:
Yes, it's now on the list of things to try and implement - I want to keep the code pretty stable at present though, as the review is imminent.
John, I think it will help potential reviewers during the formal review if you provide a publicly available TODO list or list of planned changes, so such things aren't needlessly rehashed during the formal review period. If you want, I can give you (and Christopher) access to my notes on the pre-review comments (it's a Google document). - Jeff

Yes, it's now on the list of things to try and implement - I want to keep the code pretty stable at present though, as the review is imminent.
John, I think it will help potential reviewers during the formal review if you provide a publicly available TODO list or list of planned changes, so such things aren't needlessly rehashed during the formal review period. If you want, I can give you (and Christopher) access to my notes on the pre-review comments (it's a Google document).
Thanks, I've been maintaining a TODO list in the docs, hopefully now up to date: http://svn.boost.org/svn/boost/sandbox/big_number/libs/multiprecision/doc/ht... John.

On Thu, May 31, 2012 at 11:17 AM, John Maddock <boost.regex@virgin.net> wrote: [...]
* Why have you chosen the following requirements for the backend?
- negate instead of operator-()
- eval_op instead of operator op=()
- eval_convert_to instead of explicit operator T()
- eval_floor instead of floor
* Non-member functions are required if defaults are to be provided for the optional requirements.
* There are some operations that can't be written as overloaded non-member operators but can be named free functions (sorry, I forget which ones, but I remember seeing one or two along the way).
* Explicit conversions aren't well supported at present.
* Compiler bug workaround (older GCC versions); there's a note in the requirements section: "The non-member functions are all named with an "eval_" prefix to avoid conflicts with template classes of the same name - in point of fact this naming convention shouldn't be necessary, but rather works around some compiler bugs."
One alternative to using a collection of free functions and metafunctions to extend an interface for a given type is to first mediate this customization via a traits class with static member functions and nested typedefs and metafunctions. Indeed, the default implementation of the traits class would just use the (perhaps overridden) free functions. John: Have you considered this extra level of indirection, or do you have any comments on this? - Jeff

No, but from a purely practical point of view I believe the free-function approach is easier for both the library developer and a backend developer - overloading free functions is easy to do, and the overload resolution rules are, I believe, well understood. In addition, if someone wants to experiment with modifying an existing backend, extending its functionality by providing additional function overloads is simple and requires no modification to the original header. John.

On Wed, May 30, 2012 at 10:43 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
* As all the classes are at the multi-precision namespace, why name the main class mp_number and not just number?
I might prefer to see less redundancy in the naming as well, but we'll see if anyone else brings this up. - Jeff

On Wed, May 30, 2012 at 10:43 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
On 29/05/12 at 23:08, Jeffrey Lee Hellrung, Jr. wrote:
Hi all,
The review of the proposed Boost.Multiprecision library authored by John Maddock and Christopher Kormanyos has been scheduled for
June 8th - June 17th, 2012
and will be managed by myself.
I hope everyone interested can reserve some time to read through the documentation, try the code out, and post a formal review, either during the formal review window or before.
Hi,
glad to see that the library will be reviewed soon.
I have spent some hours reading the documentation. Here are some comments and a lot of questions.
[...]
Good luck with the review. Really good work.
Thanks, Vicente, for your detailed pre-review comments! I'm doing my best to keep some notes regarding your comments and concerns and John and Christopher's responses. - Jeff
participants (4)
- Christopher Kormanyos
- Jeffrey Lee Hellrung, Jr.
- John Maddock
- Vicente J. Botet Escriba