
Thanks again Vicente.
I have run the test and I'm getting a lot of errors for
test_float_io_cpp_dec_float:

bjam toolset=clang-2.9,clang-2.9x -j2

Precision is: 13
Got:      -987654312.0000000000000
Expected: -0.0000000008567
Testing value -8.5665356058806096939e-10
Formatting flags were: fixed showpos
Precision is: 13
Got:      -987654312.0000000000000
Expected: -0.0000000008567

9360 errors detected.
EXIT STATUS: 1
====== END OUTPUT ======

OK. This one looks like a real nugget of a find. You can never do enough testing. I just ran the tests on Visual Studio 2010 and they all pass. I don't have clang, but if we can't track this down relatively easily, I can build it.
2) Could you please (with your compiler) create the reported number from string as cpp_dec_float_50(" -8.5665356058806096939e-10")? Then simply print it out with precision(13), fixed and showpos. This helps me see if the error is merely in printout or rather goes deeper into the class algorithms.
The result is -0.0000000008567
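For reference, a check along these lines reproduces that output (a minimal sketch; the exact header and stream setup are my assumption):

#include <iomanip>
#include <iostream>
#include <boost/multiprecision/cpp_dec_float.hpp>

int main()
{
   // Construct the reported value from its string representation.
   const boost::multiprecision::cpp_dec_float_50 x("-8.5665356058806096939e-10");

   // Print it with precision(13), fixed and showpos, as in the failing test.
   std::cout << std::setprecision(13) << std::fixed << std::showpos << x << std::endl;
}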
This is an issue of great concern. So the small number above can be successfully created and printed. Something else must be going on. I'll bet there is an initialization issue somewhere. We tested with GCC and MSVC. Clang does something different (either rightly or wrongly). You know, the number 12345678 is frequently used in the float I/O test case. And this number appears in a corrupted form in the error report. I wonder if clang has a different opinion on the default initialization of boost::array? That's how the limbs in cpp_dec_float are stored. Could you please send more examples of test case error reports that you receive? I really need to see more examples of the error text before I break down and get this other compiler running. <snip>
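To illustrate the kind of initialization issue I am suspecting with boost::array (a minimal sketch, not the actual cpp_dec_float limb storage):

#include <boost/array.hpp>

boost::array<unsigned, 4> a;            // default-initialized: the elements are indeterminate
boost::array<unsigned, 4> b = {{ 0 }};  // aggregate-initialized: all elements are zero

If the limb array were ever read before being explicitly filled, different compilers could legitimately produce different garbage values.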
Accordingly, we do support this:
boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);
boost::multiprecision::cpp_dec_float_50  c = boost::multiprecision::cpp_dec_float_50(a) * b;
But we do not support this:
boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);
boost::multiprecision::cpp_dec_float_50  c = a * b;
This is OK, as there is no implicit conversion from cpp_dec_float_100 to cpp_dec_float_50. But I would expect

boost::multiprecision::cpp_dec_float_100 d = a * b;

to compile, and it doesn't. Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?
Well, I guess this is simply a design choice. At this time, we decided to prohibit both narrowing and widening implicit conversions in binary arithmetic operations. This means that you need to explicitly convert both larger and smaller digit ranges in binary arithmetic operations. For example,
boost::multiprecision::cpp_dec_float_100 d = a * boost::multiprecision::cpp_dec_float_100(b);
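Putting it together with the declarations above, the explicitly widened form is the one that should compile (the same thing, written out in full as a sketch):

boost::multiprecision::cpp_dec_float_100 a(boost::multiprecision::cpp_dec_float_100(123) / 100);
boost::multiprecision::cpp_dec_float_50  b(boost::multiprecision::cpp_dec_float_50(456) / 100);

// Widen b explicitly so that both operands of '*' have 100 decimal digits.
boost::multiprecision::cpp_dec_float_100 d = a * boost::multiprecision::cpp_dec_float_100(b);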
This does not answer my question: "Why doesn't the implicit conversion from cpp_dec_float_50 to cpp_dec_float_100 help?" Do you have an idea?
I think we simply have different interpretations of explicit and implicit construction here. Perhaps mine will be wrong. Basically, you have d = a * b, where both d and a have 100 digits, and b has 50 digits. The problem is with the (a * b) part of the expression. When computing (a * b), we are asking the compiler to do binary math such as

"cpp_dec_float_100" = "cpp_dec_float_50" * "cpp_dec_float_100"

Your expectation is quite reasonable. I mean, everything like this works:

"std::uint32_t" = "std::uint16_t" * "std::uint32_t"

But in fact, to support your desired conversion, we would have to write global template operators for mixed-digit binary arithmetic, like this:

template<unsigned digits_left, unsigned digits_right>
cpp_dec_float<digits_left> operator+(const cpp_dec_float<digits_left>& left,
                                     const cpp_dec_float<digits_right>& right)
{
   return cpp_dec_float<digits_left>(left) += cpp_dec_float<digits_left>(right);
}

There is certainly no technical reason not to do this, even though we would dread the typing. The fact of the matter is, we simply decided not to support these operators. <snip>
To be quite honest, I do not have the time to work out a sensible rounding scheme for the base-10 back-end on a reasonable schedule. One of the difficulties of base-10 is its unruly nature regarding rounding.
I understand your words to mean that the user cannot configure the way rounding is done. But AFAIU it is valid to do a conversion, so the documentation should state which kind of rounding is applied. <snip>
so I guess the rounding is towards zero. Yes, you are right. My previous post was also misleading. The cpp_dec_float back-end does not round on operations or on conversions from 50 to 100 digits, etc. It does, however, round when preparing an output string. I believe that I used round-to-zero. John has indicated a slight preference not to document these internal details.
Yes, but it rounds while converting 100 digits to 50, and this should be documented.
No. I wish we *would* already have rounding in these cases. But we don't! In fact, the class constructor has a TODO-style comment indicating that it does not round. Conversion to another digit range does not round. Since rounding occurs when printing the number, and because there are guard digits, you must be experiencing the "illusion" of rounding via a rounded printout.
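To make the distinction concrete, here is a small sketch of what I mean (assuming the usual stream output; the value chosen is arbitrary):

// The narrowing conversion itself does not round; it simply carries over as
// many digits as fit (plus the guard digits).
boost::multiprecision::cpp_dec_float_100 x(boost::multiprecision::cpp_dec_float_100(2) / 3);
boost::multiprecision::cpp_dec_float_50  y(x);

// Any rounding you observe happens here, when the output string is prepared.
std::cout << std::setprecision(50) << y << std::endl;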
* Do you plan to add constexpr and noexcept to the interface? After thinking a little bit, I'm wondering whether this is possible when using third-party library back-ends that don't provide them?
I'm also not sure if it's possible, or even what we would gain - I can't offhand think of any interfaces that could use constexpr, for example.
If you are aiming at compile-time constant mp_numbers, then I do not believe it is possible with the specified low-complexity constraints on constexpr functions and objects.
This works with state-of-the-art compilers today:

constexpr double pi = 3.14159265358979323846;
But this may not work soon, or ever:

constexpr boost::multiprecision::cpp_dec_float_50 pi("3.14159265358979323846264338327950288419716939937510582097494");
Not all operations can be defined as constexpr, but I would expect some to be:

constexpr boost::multiprecision::cpp_dec_float_50 pi(3.14159265358979323846);

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html also proposes having some kind of literals through factory methods. |to_nonnegative<2884,-4>()| will produce a |nonnegative| constant with a range and resolution just sufficient to hold the value 2884*2^-4. I don't know if this kind of factory could be applicable to the current back-ends.
I suspect it cannot be applied to the current back-ends because the creation of a big number from a string or a built-in type simply exceeds the low-complexity limit required for constexpr. Maybe I'm wrong here. Your idea is certainly a valid goal to look for in the future.
Does this mean that we cannot create a generic class mp_number that provides some of the interfaces the back-end provides? The user will need to be able to create instances of mp_number from instances of the back-end.
Could the following constructor be implemented as constexpr if the move constructor of BE is constexpr?
constexpr mp_number<BE>::mp_number(BE&&);

Best,
Vicente
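P.S. To be clearer about what I have in mind, a hypothetical sketch (not the actual mp_number code) of such a forwarding constructor:

template <class BE>
class mp_number
{
public:
   // constexpr is only viable here if the move constructor of BE is itself constexpr.
   constexpr mp_number(BE&& be) : m_backend(static_cast<BE&&>(be)) { }

private:
   BE m_backend;
};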
Ahhhhh! I see! You are already thinking about high-level optimizations in your work with fixed-point. Finally I am starting to get your point. Now that I get it, I don't know, because I don't know enough about C++11 constexpr to fully comprehend and answer your question. Your question remains open from my side.

Best regards,
Chris