Decimal Floating Point Library Beta
Hello,

For the past year Chris Kormanyos and I have been working on a new library to implement the IEEE 754 decimal floating point types. We are pleased to announce the library is now in beta. It can be found at https://github.com/cppalliance/decimal, and the documentation is at https://cppalliance.org/decimal/decimal.html.

First, what are Decimal Floating Point Numbers? They are floating point numbers where the significand is stored in base-10 (decimal) instead of base-2 (binary). This means that numbers can be represented exactly, avoiding cases such as the famous 0.1 + 0.2 != 0.3: https://0.30000000000000004.com.

The library provides the types decimal32, decimal64, and decimal128 as specified in IEEE 754, along with STL-like functionality. The library is header-only, has no dependencies, and requires only C++14. It provides most of the STL functionality you are familiar with, such as <cmath>, <cstdlib>, <charconv>, etc. We are releasing a beta now rather than pursuing a full Boost review because we are still missing STL features such as the C++17 special math functions, and we believe we can continue to increase performance. We do intend to go through the review process at a later time.

Please give the library a go, and let us know how we can make it better. We look forward to any and all feedback. If you use the Cpplang Slack, I am active (from the Central European time zone) on both #boost and #boost-decimal.

On a personal note, as of today I have also moved from being employed half-time to full-time with the C++ Alliance. This affords me a greater opportunity to develop and maintain new and existing Boost libraries.

Matt Borland
--
C++ Alliance Staff Engineer
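A minimal sketch of the contrast described above, assuming the single <boost/decimal.hpp> convenience header and the (significand, exponent) constructor discussed later in this thread:

#include <boost/decimal.hpp>
#include <cassert>

int main()
{
    // Binary doubles cannot store 0.1, 0.2, or 0.3 exactly, so the sum
    // famously misses: 0.1 + 0.2 == 0.30000000000000004.
    assert(0.1 + 0.2 != 0.3);

    // The decimal types store these values exactly: 1e-1 + 2e-1 == 3e-1.
    using boost::decimal::decimal64;
    constexpr decimal64 a {1, -1};  // 0.1
    constexpr decimal64 b {2, -1};  // 0.2
    constexpr decimal64 c {3, -1};  // 0.3
    assert(a + b == c);
}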
This is a really needed library for native C++ big number math, with better precision for money calculations, to avoid floating point pitfalls. I'll give it a try, nice job!
On May 13, 2024, at 16:44, Matt Borland via Boost wrote:
A first look convinces me that you have done a competent job, and that I might use the library, but I have concerns about the accompanying wording.
First, what are Decimal Floating Point Numbers? They are floating point numbers where the significand is stored in base-10 (decimal) instead of base-2 (binary). This means that numbers can be represented exactly ...
This does not mean that real numbers can be represented exactly, only that some DECIMAL fractions can be represented exactly. The advantage (if any) comes not from storing the significand in base-10 but from the base of the floating exponent being 10: n*10^e, where n and e are integers. BTW, there is no requirement to store n and e in decimal.

Similarly, in the documentation, I see the dubious statement: "... floating point types store the significand ... as binary digits. Famously this leads to rounding errors:" The behaviour of 0.1+0.2 mentioned is not even a rounding error; probably the technical term is "representation error".

And also, in the Use Cases section: "The use case for Decimal Floating Point numbers is where rounding errors are significantly impactful such as finance. In applications where integer or fixed-point arithmetic are used to combat this issue Decimal Floating Point numbers can provide a significantly greater range of values." I don't see how one could say they provide a significantly greater range of values.

Already, somebody replied that "This is a really needed library for native C++ big number math"??? Sorry, but no. What DFP promises to do is correct elementary arithmetic with decimal fractions. As such, it is not clear who could possibly need math special functions in decimal... but OK, the standard says so.

Instead, how do I do fixed-point arithmetic with Decimal Floating Point, i.e. what are the facilities in this library for calculating mortgage interest in dollars and cents, rounding the cents correctly at each stage? As it is, the doc is completely silent about this.

Finally, no references in the documentation; a good review of the state of the art might be:

Michael F. Cowlishaw, "Decimal Floating-Point: Algorism for Computers", https://www.cs.tufts.edu/~nr/cs257/archive/mike-cowlishaw/decimal-arith.pdf

Cheers,
Kostas
On Tuesday, May 14th, 2024 at 1:29 PM, Kostas Savvidis via Boost wrote:
A first look convinces me that you have done a competent job, and that I might use the library, but I have concerns about the accompanying wording.
First, what are Decimal Floating Point Numbers? They are floating point numbers where the significand is stored in base-10 (decimal) instead of base-2 (binary). This means that numbers can be represented exactly ...
This does not mean that real numbers can be represented exactly, only that some DECIMAL fractions can be represented exactly. The advantage (if any) comes not from storing the significand in base-10 but from the base of the floating exponent being 10: n*10^e, where n and e are integers. BTW, there is no requirement to store n and e in decimal.
The ways to store n and e are provided by IEEE 754, which offers two encodings: Binary Integer Decimal (BID) and Densely Packed Decimal (DPD). We use the former, as it was targeted at software implementations; the latter is for hardware designers.
Similarly, in the documentation, I see the dubious statement: "... floating point types store the significand ... as binary digits. Famously this leads to rounding errors:" The behaviour of 0.1+0.2 mentioned is not even a rounding error; probably the technical term is "representation error".
I will change the wording.
And also, in the Use Cases section: "The use case for Decimal Floating Point numbers is where rounding errors are significantly impactful such as finance. In applications where integer or fixed-point arithmetic are used to combat this issue Decimal Floating Point numbers can provide a significantly greater range of values."
I don't see how one could say they provide a significantly greater range of values.
I like the example from Wikipedia for this: "For example, while a fixed-point representation that allocates 8 decimal digits and 2 decimal places can represent the numbers 123456.78, 8765.43, 123.00, and so on, a floating-point representation with 8 decimal digits could also represent 1.2345678, 1234567.8, 0.000012345678, 12345678000000000, and so on." I will add this to the docs.
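A small sketch of that range point, assuming the <boost/decimal.hpp> header and the two-argument (significand, exponent) constructor:

#include <boost/decimal.hpp>

using boost::decimal::decimal64;

int main()
{
    // The same 8 significant digits can sit at very different magnitudes,
    // which a fixed-point layout with 2 fixed decimal places cannot do.
    constexpr decimal64 fixed_style {12345678, -2};   // 123456.78
    constexpr decimal64 tiny        {12345678, -12};  // 0.000012345678
    constexpr decimal64 huge        {12345678, 9};    // 12345678000000000
    (void)fixed_style; (void)tiny; (void)huge;
}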
Already, somebody replied that "This is a really needed library for native C++ big number math"??? Sorry, but no. What DFP promises to do is correct elementary arithmetic with decimal fractions. As such, it is not clear who could possibly need math special functions in decimal... but OK, the standard says so.
The answer is two-fold. One of the goals of the library is that it will seamlessly plug into Boost.Math, so you have a huge companion library from the start. By providing an implementation of, say, beta, you will be able to use the beta distribution from Boost.Math without worrying about implicit conversions to binary floating point and back (which are disallowed by default anyway). Second, our longer-term goal is ISO standardization, so by providing a complete reference implementation we hope our proposal will be more compelling.
Instead, how do I do fixed-point arithmetic with Decimal Floating Point, i.e. what are the facilities in this library for calculating mortgage interest in dollars and cents, rounding the cents correctly at each stage? As it is, the doc is completely silent about this.
I can add a section on simple financial calculations. A trading firm that uses the library adopted it because the precision of a bitcoin (1 bitcoin = 100,000,000 satoshis) exceeded what their in-house fixed-point arithmetic library could represent. They said the basic math operations and the functionality of <charconv> are what they need for now. Derivatives pricing calculations will need the aforementioned special functions and Boost.Math integration (the CDF of the normal distribution requires erf).
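As an illustration of the cents-level rounding Kostas asks about, here is a minimal sketch; it assumes the <boost/decimal.hpp> header and a <cmath>-style round() overload for decimal64, which the announcement says the library provides:

#include <boost/decimal.hpp>

using boost::decimal::decimal64;

// Round a monetary amount to whole cents (two decimal places).
decimal64 round_to_cents(decimal64 amount)
{
    constexpr decimal64 cents_per_unit {100, 0};
    return round(amount * cents_per_unit) / cents_per_unit;
}

int main()
{
    const decimal64 principal {100000, 0};    // $100,000.00
    const decimal64 monthly_rate {4125, -6};  // 0.4125% per month, stored exactly

    // Interest for one month, rounded to cents at this stage: $412.50.
    const decimal64 interest = round_to_cents(principal * monthly_rate);
    const decimal64 new_balance = principal + interest;
    (void)new_balance;
}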
Finally, no references in the documentation; a good review of the state of the art might be:
Michael F. Cowlishaw, "Decimal Floating-Point: Algorism for Computers", https://www.cs.tufts.edu/~nr/cs257/archive/mike-cowlishaw/decimal-arith.pdf
Will add to the docs.
Cheers, Kostas
Thanks for the feedback, Matt
On 13.05.24 15:44, Matt Borland via Boost wrote:
I don't really see the use for this. For actual physical measurements (which are inexact by nature), binary floating point is more efficient and more accurate. For money values (which are exact by nature), I would stick to integer or fixed point types.

Floating point types, binary or decimal, are inherently inexact: the number of (binary) digits is fixed, so as more digits are added on the left, digits start getting dropped on the right. Worse, adding extra digits after the decimal point is just plain wrong. If I have to pay 25% of an amount rounded to cents, then correctly rounding to cents is as important as calculating the 25% correctly.

If I wanted actually exact fractions, without any rounding, I would use boost::rational. Fixed point and floating point are both equally incorrect, and using decimal instead of binary does nothing to make things more accurate. What's the point of 0.2+0.3=0.5 if 3*(1/3)!=1?

The only real advantage of decimal floating point is lossless conversion to and from decimal text representations. I guess decimal floating point could be useful for a calculator app or a spreadsheet, where the text representation of a number is paramount, but not much else.

--
Rainer Deyke (rainerd@eldwood.com)
The only real advantage of decimal floating point is lossless conversion to and from decimal text representations. I guess decimal floating point could be useful for a calculator app or a spreadsheet, where the text representation of a number is paramount, but not much else.
That's a large class of applications. One of our known users is a trading firm, and the majority of their use is add/sub/mul/div and conversion to/from text via `to_chars` and `from_chars`. Not only are we able to do these text conversions exactly, we can also do them fast, since we have deep familiarity with the problem set: https://cppalliance.org/decimal/decimal.html#benchmarks_charconv.

Matt
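A minimal round-trip sketch of those text conversions; it assumes the <boost/decimal.hpp> header and that the library's to_chars/from_chars mirror the <charconv> signatures:

#include <boost/decimal.hpp>
#include <cassert>
#include <system_error>

using boost::decimal::decimal64;

int main()
{
    const decimal64 price {12345, -2};  // exactly 123.45

    char buffer[64] {};
    const auto to_result = to_chars(buffer, buffer + sizeof(buffer), price);
    assert(to_result.ec == std::errc{});

    decimal64 round_tripped {};
    const auto from_result = from_chars(buffer, to_result.ptr, round_tripped);
    assert(from_result.ec == std::errc{});

    // Unlike binary floating point, the decimal value survives the text
    // round trip with no change in value.
    assert(round_tripped == price);
}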
Hi Matt,
What is the distinction to Boost.Multiprecision?
https://www.boost.org/doc/libs/1_85_0/libs/multiprecision/doc/html/index.htm...

This also has a float128 type and GMP and other backends....
Bye Georg
working on a new library to implement IEEE 754 decimal floating point types
What is the distinction to [Boost.Multiprecision]?

We are glad you asked. Matt will add more but I'll start (in fact I will expound). Decimal intends to provide the decimal types that are explicitly specified in IEEE 754-2008. These will allow clients to perform base-10 numerical calculations that retain portability. Here we mean portability in the sense of changing your company, compiler, operating system or whatever and still being able to use your portable code. C++ has that attribute, partially, today for binary floats but not yet for decimal. And a secondary intent of Decimal is to provide a reference implementation for potential specification (in the sense of C++/SG21/LEWG) for actually getting these into the Standard.
Boost.Multiprecision does not intend to provide those types that are exactly specified in IEEE 754-2008.
We had this discussion along the way, and as recently as yesterday, since we are both co-authors of all of Decimal/Math/Multiprecision. Should Decimal be part of Multiprecision, or vice versa? We found the answer to be NO. Decimal is not Multiprecision.
So when we approach LEWG and say, look, we have decimal(32/64/128)_t, this will ultimately beg a larger, yet unanswered, question. OK, great, decimal(32/64/128)_t. So where is float(32/64/128)_t in a mandatory spec? And the corollary query: what about int/uint(32/64/128)_t?
These questions will ultimately need to be answered. But that's a long-term discussion. Following all that, the intent is that Multiprecision will swing in for unlimited precision beyond the so-called basics of 32/64/128.
Best, Christopher.
The closest approximation in Multiprecision is cpp_dec_float (non-IEEE 754 compliant): https://www.boost.org/doc/libs/1_85_0/libs/multiprecision/doc/html/boost_mul.... Chris is actually the original author of that type. The decimal library is partly an evolution from it, since we know what did and did not work well. We are also able to extract a lot more performance out of Decimal since, instead of operating on an arbitrary number of limbs, we have fixed-width calculations that are specialized for each of the three types. We are also able to pre-compute polynomials for cmath functions instead of running decimal CORDIC or other approximation algorithms at runtime: https://github.com/cppalliance/decimal/blob/develop/include/boost/decimal/de....

Matt
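A generic sketch of what evaluating such a pre-computed polynomial looks like (Horner's scheme with placeholder coefficients, not the library's actual approximations):

#include <array>
#include <cstddef>

// Evaluate a polynomial with pre-computed coefficients (highest order first)
// using Horner's scheme. The coefficient values below are placeholders only.
template <typename T, std::size_t N>
constexpr T horner(T x, const std::array<T, N>& coeffs)
{
    T result = coeffs[0];
    for (std::size_t i = 1; i < N; ++i)
    {
        result = result * x + coeffs[i];
    }
    return result;
}

int main()
{
    // A placeholder cubic evaluated at x = 0.5.
    constexpr std::array<double, 4> coeffs {1.0, -2.0, 3.0, -4.0};
    const double y = horner(0.5, coeffs);
    (void)y;
}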
On Mon, May 13, 2024 at 9:44 AM Matt Borland wrote:
For the past year Chris Kormanyos and I have been working on a new library to implement IEEE 754 decimal floating point types.
The library provides types decimal32, decimal64, and decimal128 as specified in IEEE 754 and STL-like functionality.
Matt, do we have benchmarks comparing yours to the decimal64 implementation in GCC? i.e. the C++ std::decimal::decimal64 in <decimal/decimal>, as well as the C _Decimal64 (https://gcc.gnu.org/onlinedocs/gcc/Decimal-Float.html)? Those are backed by Intel's BID library.
Not yet, but I will add them. Our focus has been on ensuring correctness first; we only recently began optimizing routines for performance.
Your implementation of decimal64 also stores a uint64_t of the IEEE 754 Decimal64 (presumably BID not DPD) format, correct?
Correct, we use the BID format.
This also means that on every operation you have to decode significand, exponent, sign out of this - which isn't trivial given the format.
It's a series of bit-fiddling operations: https://github.com/cppalliance/decimal/blob/develop/include/boost/decimal/de.... Encoding from significand, exponent, and sign is a much harder operation: https://github.com/cppalliance/decimal/blob/develop/include/boost/decimal/de...

Matt
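For readers unfamiliar with the format, here is a standalone sketch of what decoding a decimal32 BID encoding into sign, biased exponent, and significand involves. It illustrates the bit fiddling in question, is not the library's actual code, and ignores the special encodings for infinity and NaN:

#include <cstdint>

struct decoded32
{
    bool          sign;
    std::uint32_t biased_exponent;  // the exponent bias for decimal32 is 101
    std::uint32_t significand;      // at most 7 decimal digits
};

decoded32 decode_bid32(std::uint32_t bits)
{
    decoded32 r {};
    r.sign = (bits >> 31) != 0;

    // The two bits after the sign steer how the combination field is split.
    if (((bits >> 29) & 0x3u) != 0x3u)
    {
        // Common case: an 8-bit biased exponent, then a 23-bit significand.
        r.biased_exponent = (bits >> 23) & 0xFFu;
        r.significand     = bits & 0x7FFFFFu;
    }
    else
    {
        // Steering bits 11: the exponent field shifts down by two bits and
        // the significand gains an implicit 0b100 prefix (21 explicit bits).
        r.biased_exponent = (bits >> 21) & 0xFFu;
        r.significand     = 0x800000u | (bits & 0x1FFFFFu);
    }
    return r;
}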
On Wednesday, May 15, 2024, Matt Borland wrote:
It's a series of bit-fiddling operations: https://github.com/cppalliance/decimal/blob/develop/include/boost/decimal/decimal64.hpp#L1006
I know, because I also implemented them at one point. :) For real usage the repeated cost wasn't negligible. Which is why I want to think about (or compare against) an implementation where a decimal64 would contain { significand; exponent; } instead of { bid64; }.

Regarding comparison to Intel's implementation (BID based), which is used in both GCC's decimal64 and also in Bloomberg, that's especially important because it used some impressively large tables to get performance (which also bloats binaries).

Glen
Which is why I want to think about (or compare against) an implementation where a decimal64 would contain { significand; exponent; } instead of { bid64; }
When you first asked me about decimal 2 or so years ago I toyed with an implementation that skipped the combination field: https://github.com/mborland/decimal/blob/main/include/boost/decimal/decimal3.... It sacrificed some of the range of the exponent to make that happen.
Regarding comparison to Intel's implementation (bid based) which is used in both GCC's decimal64 and also in Bloomberg, that's especially important because it used some impressively large tables to get performance (which also bloats binaries)
So far we don't rely on giant tables. Chris added an STM board under QEMU to our CI that checks our ROM usage, among other things. I'll work on some benchmarks and see if it's worth building out the above non-IEEE 754 decimal32.

Matt
I have added benchmarks for the GCC builtin _Decimal32, _Decimal64, and _Decimal128. For the operations add, sub, mul, and div, the geometric mean of the runtime ratios (boost.decimal runtime / GCC runtime) is:

decimal32: 0.932
decimal64: 1.750
decimal128: 4.837

It's interesting that for every operation the GCC _Decimal64 is faster than _Decimal32, whereas our run time increases with size. In any event, I should be able to use all of the existing boost::decimal::decimal32 implementations for the basic operations (since they are already benchmarking faster than the reference) with a class that directly stores the sign, exponent, and significand, to see if it's noticeably faster.

Matt
On 13 May 2024 at 15:44, "Matt Borland via Boost" wrote:
The library provides types decimal32, decimal64, and decimal128 as specified in IEEE 754 and STL-like functionality. The library is header-only, has no dependencies, and only requires C++14. It provides most of the STL functionality that you are familiar with such as <cmath>, <cstdlib>, <charconv>, etc.
Thanks for sharing.

During the boost charconv review, it was pointed out that std from_chars has a serious design defect in case of ERANGE, because the return value cannot differentiate between different range errors (value too big, too small, positive / negative, etc.), despite the information being reliably available from the parsing. IIRC boost charconv works around this by modifying the provided value argument, which is forbidden by std. Is the same workaround used for decimal? (In which case the documentation should state this.) Or should it be seen as an opportunity for fixing the from_chars interface / providing better error reporting?

Another note about the documentation: some examples should use literal suffixes. The current way:

constexpr decimal64 b {2, -1}; // 2e-1 or 0.2

is pretty unreadable / ugly. Also, there should be a statement on the differences between:

constexpr auto b1 = 0.2_DD;
constexpr auto b2 = decimal64(0.2);
constexpr auto b3 = decimal64(2, -1);

I expect the first and third ones to be identical and yield precise decimal values, and the second to yield imprecise values, although I could not find a pathological case from quick tests where we would have b2 != b3.

Regards,
Julien
Thanks for sharing.
During the boost charconv review, it was pointed out that std from_chars has a serious design defect in case of ERANGE, because the return value cannot differentiate between different range errors (value too big, too small, positive / negative, etc.), despite the information being reliably available from the parsing.
IIRC boost charconv works around this by modifying the provided value argument, which is forbidden by std.
Is the same workaround used for decimal? (in which case the documentation should state this). Or should it be seen as an opportunity for fixing the from_chars interface / providing a better error reporting?
Yes, the same workaround is applied to decimal. I will make a note in the docs about the behavior.
Another note about the documentation: some examples should use literal suffixes
The current way:
constexpr decimal64 b {2, -1}; // 2e-1 or 0.2
is pretty unreadable / ugly. Also, there should be a statement on the differences between:
constexpr auto b1 = 0.2_DD; constexpr auto b2 = decimal64(0.2); constexpr auto b3 = decimal64(2, -1);
I expect the first and third ones to be identical and yield precise decimal values, and the second to yield imprecise values, although i could not find a pathological case from quick tests where we would have b2 != b3.
Since it's a limitation of the language, 0.2_DD would actually be interpreted as decimal64(0.2L), whereas "0.2"_DD would be equivalent to decimal64(2, -1), so b1 == b2 and b2 != b3. I will annotate this potential pitfall in the docs. Thanks for the feedback.

Matt
participants (8)

- Christopher Kormanyos
- Georg Gast
- Glen Fernandes
- Julien Blanc
- Kostas Savvidis
- Matt Borland
- Rainer Deyke
- Virgilio Fornazin