Decimal: Formal review begins.

The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.

You will find documentation here: https://cppalliance.org/decimal/decimal.html

And the code repository is here: https://github.com/cppalliance/decimal/

Boost.Decimal is an implementation of IEEE 754 <https://standards.ieee.org/ieee/754/6210/> and ISO/IEC DTR 24733 <https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf> Decimal Floating Point numbers. The library is header-only, has no dependencies, and requires C++14.

Please provide feedback on the following general topics:

- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library? Do you already use it in industry?
- Did you try to use the library? With which compiler(s)? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?

Be sure to explicitly include with your review: ACCEPT, REJECT, or CONDITIONAL ACCEPT (with acceptance conditions).

Best, John Maddock (review manager).

No dia 15 de jan. de 2025, às 10:34, John Maddock via Boost <boost@lists.boost.org> escreveu:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
You will find documentation here: https://cppalliance.org/decimal/decimal.html
A small typo I spotted: s/convince header/convenience header

Joaquín M López Muñoz

Joaquín M López Muñoz wrote:
No dia 15 de jan. de 2025, às 10:34, John Maddock via Boost <boost@lists.boost.org> escreveu:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
You will find documentation here: https://cppalliance.org/decimal/decimal.html
A small typo I spotted: s/convince header/convenience header
Some quick comments from me as well:

"The final number constructed is in the form (sign || coeff < 0 ? -1 : 1) x abs(coeff) x 10^exp."

This doesn't make much sense formatting-wise; the "(sign || coeff < 0 ? -1 : 1) x abs(coeff) x 10^exp" part should be formatted differently and not mix code and ad-hoc 'x' multiplication notations.

namespace boost { namespace decimal {

// Paragraph numbers are from ISO/IEC DTR 24733

// 3.2.2.1 construct/copy/destroy
constexpr decimal32() noexcept = default;

This doesn't compile. "class decimal32 {" is missing.

The default "decimal32" name is occupied by the storage-optimized form and not by the operation-optimized form. I don't think this is the right decision. The storage-optimized form should only exist in memory, and everything else should use the operation-optimized form.

E.g. if we have (illustrative)

decimal32 f( decimal32 a, decimal32 b ) { return a+b; }
decimal32 g( decimal32 a, decimal32 b ) { return a*b; }

int main() { decimal32 a, b, c, d, e; e = f( g(a, b), g(c, d) ); }

this currently would do a few unnecessary packs and unpacks. But if f/g take and return the operation-optimized form, these unnecessary pack+unpack operations are avoided.

That is, if "decimal32" is the operation-optimized form, and we have e.g. "decimal32bid" for the BID encoded form, main would be

int main() { decimal32bid a, b, c, d, e; e = f( g(a, b), g(c, d) ); }

So the actual storage ("at rest") is still the same, but operations are more efficient.

Note that in this case decimal32bid has no operations, only conversions from and to decimal32. The actual arithmetic is always performed over the operation-optimized form.

Some quick comments from me as well:
"The final number constructed is in the form (sign || coeff < 0 ? -1 : 1) x abs(coeff) x 10^exp."
This doesn't make much sense formatting-wise; the "(sign || coeff < 0 ? -1 : 1) x abs(coeff) x 10^exp" part should be formatted differently and not mix code and ad-hoc 'x' multiplication notations.
See discussion here: https://github.com/cppalliance/decimal/pull/785. I previously had only coeff x 10^exp.
namespace boost { namespace decimal {
// Paragraph numbers are from ISO/IEC DTR 24733
// 3.2.2.1 construct/copy/destroy
constexpr decimal32() noexcept = default;
This doesn't compile. "class decimal32 {" is missing.
It was previously described here: https://cppalliance.org/decimal/decimal.html#generic_decimal_ so I did not duplicate anything from that block.
The default "decimal32" name is occupied by the storage-optimized form and not by the operation-optimized form. I don't think this is the right decision. The storage-optimized form should only exist in memory, and everything else should use the operation-optimized form.
E.g. if we have (illustrative)
decimal32 f( decimal32 a, decimal32 b ) { return a+b; }
decimal32 g( decimal32 a, decimal32 b ) { return a*b; }
int main() { decimal32 a, b, c, d, e; e = f( g(a, b), g(c, d) ); }
this currently would do a few unnecessary packs and unpacks.
But if f/g take and return the operation-optimized form, these unnecessary pack+unpack operations are avoided.
That is, if "decimal32" is the operation-optimized form, and we have e.g. "decimal32bid" for the BID encoded form, main would be
int main() { decimal32bid a, b, c, d, e; e = f( g(a, b), g(c, d) ); }
So the actual storage ("at rest") is still the same, but operations are more efficient.
Note that in this case decimal32bid has no operations, only conversions from and to decimal32. The actual arithmetic is always performed over the operation-optimized form.
The name "decimal32" is occupied by the IEEE-754 compliant type. The naming scheme matches published standards and existing practice such as uint32_t vs uint_fast32_t. There's a clear statement of intent using the latter. I don't think people should pick decimal32 over decimal32_fast in the general case, but I also think it would be a bad idea to diverge from IEEE in our naming scheme. Matt

Matt Borland wrote:
namespace boost { namespace decimal {
// Paragraph numbers are from ISO/IEC DTR 24733
// 3.2.2.1 construct/copy/destroy
constexpr decimal32() noexcept = default;
This doesn't compile. "class decimal32 {" is missing.
It was previously described here: https://cppalliance.org/decimal/decimal.html#generic_decimal_ so I did not duplicate anything from that block.
It doesn't compile. The synopsis must be valid C++. Just because you have a forward declaration

class decimal32;

doesn't mean that you can just start writing member declarations directly in the namespace and expect anything to compile or make sense.

On Wednesday, January 15th, 2025 at 3:03 PM, Peter Dimov <pdimov@gmail.com> wrote:
Matt Borland wrote:
namespace boost { namespace decimal {
// Paragraph numbers are from ISO/IEC DTR 24733
// 3.2.2.1 construct/copy/destroy
constexpr decimal32() noexcept = default;
This doesn't compile. "class decimal32 {" is missing.
It was previously described here: https://cppalliance.org/decimal/decimal.html#generic_decimal_ so I did not duplicate anything from that block.
It doesn't compile.
The synopsis must be valid C++. Just because you have a forward declaration
class decimal32;
doesn't mean that you can just start writing member declarations directly in the namespace and expect anything to compile or make sense.
Oh, I see what you're saying. I'll fix it shortly. Matt
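For reference, the fix Peter describes amounts to wrapping the quoted members in the class definition. A minimal sketch of what the corrected synopsis would presumably look like (all other members elided):

namespace boost { namespace decimal {

// Paragraph numbers are from ISO/IEC DTR 24733
class decimal32 {
public:
    // 3.2.2.1 construct/copy/destroy
    constexpr decimal32() noexcept = default;

    // ... remaining members ...
};

}} // namespace boost::decimal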

Matt Borland wrote:
The name "decimal32" is occupied by the IEEE-754 compliant type. The naming scheme matches published standards and existing practice such as uint32_t vs uint_fast32_t. There's a clear statement of intent using the latter. I don't think people should pick decimal32 over decimal32_fast in the general case, but I also think it would be a bad idea to diverge from IEEE in our naming scheme.
Fair enough, N2849/TR 24733 do say that decimal32 occupies four octets. (Although this gives significantly more guarantees in 24733 because you could bit_cast the standard decimal32 to uint32_t and observe the IEEE representation, whereas I don't think you can do that for this library's decimal32.)

However... even if we accept that the name decimal32 belongs to the compact version, shouldn't there at least be a way to implicitly convert decimal32 to decimal32_fast? Shouldn't there be a way to (explicitly or implicitly) convert decimal32_fast to decimal32? Or at minimum, assign decimal32_fast to decimal32? E.g.

void f( decimal32& x, decimal32& y )
{
    decimal32_fast x2 = x;
    decimal32_fast y2 = y;

    // do calculations here using decimal32_fast

    x = <some result of type decimal32_fast>;
    y = <some result of type decimal32_fast>;
}

Incidentally, the synopsis of decimal32_fast says

explicit constexpr operator decimal64() const noexcept;
explicit constexpr operator decimal128() const noexcept;

but this doesn't seem correct to me. It also doesn't seem to correspond to what the source code says.

Furthermore, consider

void f( decimal32& x, decimal32& y )
{
    decimal32_fast z = x + y;
    decimal32_fast w = x - y;

    // do calculations here using decimal32_fast

    x = <some result of type decimal32_fast>;
    y = <some result of type decimal32_fast>;
}

These lines

decimal32_fast z = x + y;
decimal32_fast w = x - y;

still perform an unnecessary pack and unpack. Shouldn't we be worried about that? Maybe a compiler can optimize this out, maybe it can't; I'll need to put Decimal on Compiler Explorer somehow to check.

This can be avoided by making operator+(decimal32, decimal32) return decimal32_fast. That's not unheard of because it's for instance what operator+(short, short) does, but it is not what TR 24733 specifies.

The name "decimal32" is occupied by the IEEE-754 compliant type. The naming scheme matches published standards and existing practice such as uint32_t vs uint_fast32_t. There's a clear statement of intent using the latter. I don't think people should pick decimal32 over decimal32_fast in the general case, but I also think it would be a bad idea to diverge from IEEE in our naming scheme.
Fair enough, N2849/TR 24733 do say that decimal32 occupies four octets.
(Although this gives significantly more guarantees in 24733 because you could bit_cast the standard decimal32 to uint32_t and observe the IEEE representation, whereas I don't think you can do that for this library's decimal32.)
You can do that, and it's actually how I debugged the library in the infancy stage. You'll find tests against pre-defined bitsets I generated with pen and paper.
However... even if we accept that the name decimal32 belongs to the compact version, shouldn't there at least be a way to implicitly convert decimal32 to decimal32_fast? Shouldn't there be a way to (explicitly or implicitly) convert decimal32_fast to decimal32? Or at minimum, assign decimal32_fast to decimal32?
E.g.
void f( decimal32& x, decimal32& y )
{
    decimal32_fast x2 = x;
    decimal32_fast y2 = y;

    // do calculations here using decimal32_fast

    x = <some result of type decimal32_fast>;
    y = <some result of type decimal32_fast>;
}
Incidentally, the synopsis of decimal32_fast says
explicit constexpr operator decimal64() const noexcept;
explicit constexpr operator decimal128() const noexcept;
but this doesn't seem correct to me. It also doesn't seem to correspond to what the source code says.
Furthermore, consider
void f( decimal32& x, decimal32& y )
{
    decimal32_fast z = x + y;
    decimal32_fast w = x - y;

    // do calculations here using decimal32_fast

    x = <some result of type decimal32_fast>;
    y = <some result of type decimal32_fast>;
}
These lines
decimal32_fast z = x + y;
decimal32_fast w = x - y;
still perform an unnecessary pack and unpack. Shouldn't we be worried about that? Maybe a compiler can optimize this out, maybe it can't; I'll need to put Decimal on Compiler Explorer somehow to check.
This can be avoided by making operator+(decimal32, decimal32) return decimal32_fast. That's not unheard of because it's for instance what operator+(short, short) does, but it is not what TR 24733 specifies.
Every decimal type can be explicitly converted to any other decimal type. In the promotion system, if you have an operation like decimal32 + decimal32_fast, the result will be promoted to decimal32_fast since we consider it to be higher precedence (like how decimal32 + decimal64 yields decimal64).

Right now the definition of BOOST_DECIMAL_DEC_EVAL_METHOD only allows internal promotion of width like FLT_EVAL_METHOD: https://en.cppreference.com/w/cpp/types/climits/FLT_EVAL_METHOD. What could be added is additional values that specify that if we have f(decimal32, decimal32), all internal calculations should be done with decimal32_fast. That would likely offer a decent speedup in the <cmath> implementations.

Matt
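A small sketch of the promotion rule Matt describes (illustrative only; the two-argument (coefficient, exponent) constructor is inferred from the documentation snippet quoted earlier in the thread, so treat it as an assumption):

#include <boost/decimal.hpp>
#include <type_traits>

int main()
{
    boost::decimal::decimal32      a {5, -1};   // 0.5
    boost::decimal::decimal32_fast b {25, -2};  // 0.25

    // Mixed operands promote to the higher-precedence type, decimal32_fast,
    // just as decimal32 + decimal64 yields decimal64.
    auto c = a + b;
    static_assert(std::is_same<decltype(c), boost::decimal::decimal32_fast>::value, "");
}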

On 1/15/25 12:33, John Maddock via Boost wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
You will find documentation here: https://cppalliance.org/decimal/decimal.html
Links from https://cppalliance.org do not open for me, the connection times out. Is it just me?

Links from https://cppalliance.org do not open for me
And this link - https://cppalliance.org/decimal/decimal.html? It is working for me. Those sites are hosted on GitHub Pages and then proxied via Cloudflare, which is the DNS provider.

proxied via Cloudflare
Andrey,

A new development... From https://adguard-dns.io/en/blog/encrypted-client-hello-misconceptions-future.html

"In November 2024, Russia began blocking Cloudflare’s implementation of Encrypted Client Hello (ECH), a privacy-focused extension of the TLS protocol. “This technology is a means of circumventing restrictions on access to information banned in Russia. Its use violates Russian law and is restricted by the Technical Measure to Combat Threats (TSPU),” the statement by the Russian Internet regulator read. Russia, known for its tight control over internet access, views ECH as a tool for bypassing geo-restrictions, though that was never its intended purpose."

On 1/16/25 00:03, Sam Darwin wrote:
proxied via Cloudflare
Andrey,
A new development... From https://adguard-dns.io/en/blog/encrypted-client-hello-misconceptions-future.html
"In November 2024, Russia began blocking Cloudflare’s implementation of Encrypted Client Hello (ECH), a privacy-focused extension of the TLS protocol. “This technology is a means of circumventing restrictions on access to information banned in Russia. Its use violates Russian law and is restricted by the Technical Measure to Combat Threats (TSPU),” the statement by the Russian Internet regulator read. Russia, known for its tight control over internet access, views ECH as a tool for bypassing geo-restrictions, though that was never its intended purpose. "
I see. Thanks for looking into this.

On 1/15/25 12:33, John Maddock via Boost wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
A quick question. Why are the UDLs not in a dedicated nested namespace, e.g. boost::decimal::literals? This is a very useful convention established by the standard library that allows one to enable just the literals without importing the entire library namespace into the user's scope.

On Wednesday, January 15th, 2025 at 7:15 PM, Andrey Semashev via Boost <boost@lists.boost.org> wrote:
On 1/15/25 12:33, John Maddock via Boost wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
A quick question. Why are the UDLs not in a dedicated nested namespace, e.g. boost::decimal::literals? This is a very useful convention established by the standard library that allows one to enable just the literals without importing the entire library namespace into the user's scope.
There's no compelling reason. I can change it to match the STL, since that is established practice. Matt
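The convention Andrey refers to would look roughly like this (an illustrative sketch of the std-style layout, not the library's current one; the _df suffix is taken from the documentation):

namespace boost { namespace decimal {
inline namespace literals {

// Raw literal operator: the literal's token sequence arrives as text.
constexpr decimal32 operator ""_df(const char* str);

}}} // namespaces

// User code can then opt in to just the literals:
//
// using namespace boost::decimal::literals;
// auto d = 1.25_df;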

On Wed, 15 Jan 2025 at 10:34, John Maddock via Boost <boost@lists.boost.org> wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
You will find documentation here: https://cppalliance.org/decimal/decimal.html
And the code repository is here: https://github.com/cppalliance/decimal/
Boost.Decimal is an implementation of IEEE 754 <https://standards.ieee.org/ieee/754/6210/> and ISO/IEC DTR 24733 <https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf> Decimal Floating Point numbers. The library is header-only, has no dependencies, and requires C++14.
What is the status of the DTR? It looks to be dated from 2009. Is it realistic to think it will get into std at some point? Thanks, Ruben.

What is the status of the DTR? It looks to be dated from 2009. Is it realistic to think it will get into std at some point?
Thanks, Ruben.
The only update I found was https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3407.html. I emailed Dietmar a while back to ask why it was never accepted, but I did not receive a response. Do I think decimal numbers should be in the standard? Yes, they're a type specified by IEEE 754 since 2008. SG6 has decimal floating point listed in their blurb of focus areas on the ISO Cpp website. It would be an uphill battle because Chris and I are outsiders to the committee, and this would be a huge proposal that touches a large percentage of the standard library. It would then take many years to get implemented. Last I checked libc++ still does not support C++17 math special functions and this would add to that. Chris and I said we would revisit talks on whether or not this is worth the effort once it's been accepted in boost, because then we at least have the reference implementation in hand. Matt

On Thu, 16 Jan 2025 at 14:06, Matt Borland <matt@mattborland.com> wrote:
What is the status of the DTR? It looks to be dated from 2009. Is it realistic to think it will get into std at some point?
Thanks, Ruben.
The only update I found was https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3407.html. I emailed Dietmar a while back to ask why it was never accepted, but I did not receive a response.
Do I think decimal numbers should be in the standard? Yes, they're a type specified by IEEE 754 since 2008. SG6 has decimal floating point listed in their blurb of focus areas on the ISO Cpp website. It would be an uphill battle because Chris and I are outsiders to the committee, and this would be a huge proposal that touches a large percentage of the standard library. It would then take many years to get implemented. Last I checked libc++ still does not support C++17 math special functions and this would add to that. Chris and I said we would revisit talks on whether or not this is worth the effort once it's been accepted in boost, because then we at least have the reference implementation in hand.
Thanks. I'm trying to understand whether sticking to the TR has enough value. If it doesn't, maybe we could consider applying Peter's comments about making decimal32 be the fast one, or dropping sprintf.

Some more questions:

1. As a user, when should I pick decimal64 vs decimal64_fast? I intend to implement support for your decimal types in the static interface of Boost.MySQL as part of this review - should I support decimalXY, decimalXY_fast, or both?

2. Is there a rationale behind only supporting the convenience header, as stated in the docs? Including the entire library on my machine (gcc-12, Ubuntu 22.04, -std=c++23) takes about 4.5 seconds - similar to the entire Asio in magnitude. Including only decimal32.hpp cuts the time to around 1s.

3. In the line of the previous question, is there a reason to have BOOST_DECIMAL_DISABLE_IOSTREAM instead of splitting iostream functionality to a separate header? In my experience, the more config macros you have, the more chances of getting bugs. Also, is the test suite being run with these macros defined?

4. From sprintf's documentation: "In the interest of safety sprintf simply calls snprintf with buf_size equal to sizeof(buffer)." This doesn't look right. This is what the implementation looks like:

template <typename... T>
inline auto sprintf(char* buffer, const char* format, T... values) noexcept
#ifndef BOOST_DECIMAL_HAS_CONCEPTS
    -> std::enable_if_t<detail::is_decimal_floating_point_v<std::common_type_t<T...>>, int>
#else
    -> int requires detail::is_decimal_floating_point_v<std::common_type_t<T...>>
#endif
{
    return detail::snprintf_impl(buffer, sizeof(buffer), format, values...);
}

If I'm reading this correctly, this is calling snprintf_impl with the sizeof a pointer, which is probably not what we want. You'd need to template the function on the buffer size and take a char array reference to make this secure.

Regards, Ruben.

Thanks. I'm trying to understand whether sticking to the TR has enough value. If it doesn't, maybe we could consider applying Peter's comments about making decimal32 be the fast one, or dropping sprintf.
The TR was a good starting point, but yes, we have added much since then to make the types as seamlessly interoperable with the STL as we can. For example, two weeks ago someone asked for <format> support, and we provide that - which obviously won't be in a TR from 2009. Is there any real gain from dropping a standard function that's already generally implemented? I don't think so, especially if we pursue standardization. It's not a totally complete implementation of sprintf, but it would lay the groundwork for bolting on support to existing std:: implementations.
Some more questions: 1. As a user, when should I pick decimal64 vs decimal64_fast? I intend to implement support for your decimal types in the static interface of Boost.MySQL as part of this review - should I support decimalXY, decimalXY_fast, or both?
It would be a pretty easy template to support decimalXY and decimalXY_fast. I know for a fact decimalXY is being used out in industry, but I'm not sure about decimalXY_fast. It would be more space efficient to store as decimalXY.
2. Is there a rationale behind only supporting the convenience header, as stated in the docs? Including the entire library on my machine (gcc-12, Ubuntu 22.04, -std=c++23) takes about 4.5 seconds - similar to the entire Asio in magnitude. Including only decimal32.hpp cuts the time to around 1s.
I never tried many of the permutations of headers outside of the convenience one. The library is structured to match the STL so that it is unsurprising to the average user. I think you could pick and choose if you wanted to.
3. In the line of the previous question, is there a reason to have BOOST_DECIMAL_DISABLE_IOSTREAM instead of splitting iostream functionality to a separate header? In my experience, the more config macros you have, the more chances of getting bugs. Also, is the test suite being run with these macros defined?
We have options to disable a bunch of the clib functionality so that the library can run on embedded platforms. We do have QEMU emulation of an STM board in the CI which tests all of this. Why test embedded, you ask? It's not uncommon for finance devs to run on bare metal platforms.
4. From sprintf's documentation: "In the interest of safety sprintf simply calls snprintf with buf_size equal to sizeof(buffer). ". This doesn't look right. This is what the implementation looks like:
template <typename... T>
inline auto sprintf(char* buffer, const char* format, T... values) noexcept
#ifndef BOOST_DECIMAL_HAS_CONCEPTS
    -> std::enable_if_t<detail::is_decimal_floating_point_v<std::common_type_t<T...>>, int>
#else
    -> int requires detail::is_decimal_floating_point_v<std::common_type_t<T...>>
#endif
{
    return detail::snprintf_impl(buffer, sizeof(buffer), format, values...);
}
If I'm reading this correctly, this is calling snprintf_impl with the sizeof a pointer, which is probably not what we want. You'd need to template the function on the buffer size and take a char array reference to make this secure.
I'll take a look. Matt
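For reference, the array-reference fix Ruben suggests would look roughly like this (a sketch only, reusing the names from the quoted implementation; the concepts branch is omitted):

template <std::size_t N, typename... T>
inline auto sprintf(char (&buffer)[N], const char* format, T... values) noexcept
    -> std::enable_if_t<detail::is_decimal_floating_point_v<std::common_type_t<T...>>, int>
{
    // N is deduced from the array type, so the real capacity reaches
    // snprintf_impl instead of sizeof(char*).
    return detail::snprintf_impl(buffer, N, format, values...);
}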

On Thu, 16 Jan 2025 at 16:05, Matt Borland <matt@mattborland.com> wrote:
Thanks. I'm trying to understand whether sticking to the TR has enough value. If it doesn't, maybe we could consider applying Peter's comments about making decimal32 be the fast one, or dropping sprintf.
The TR was a good starting point, but yes, we have added much since then to make the types as seamlessly interoperable with the STL as we can. For example, two weeks ago someone asked for <format> support, and we provide that - which obviously won't be in a TR from 2009. Is there any real gain from dropping a standard function that's already generally implemented? I don't think so, especially if we pursue standardization. It's not a totally complete implementation of sprintf, but it would lay the groundwork for bolting on support to existing std:: implementations.
I didn't make my point here clear, sorry. I completely agree with including every cstdio function, except for sprintf. There's no way to communicate the length of the input buffer to sprintf, so it's very easy to end up with buffer overflows. I can't think of a use case where sprintf should be used over snprintf, which just adds the length parameter. So that's the only function I was advising to be removed.
Some more questions: 1. As a user, when should I pick decimal64 vs decimal64_fast? I intend to implement support for your decimal types in the static interface of Boost.MySQL as part of this review - should I support decimalXY, decimalXY_fast, or both?
It would be a pretty easy template to support decimalXY and decimalXY_fast. I know for a fact decimalXY is being used out in industry, but I'm not sure about decimalXY_fast. It would be more space efficient to store as decimalXY.
I will implement both. It's true that from my library's perspective it doesn't matter. But I think as a user, I'd like to see some guidelines on which use cases I should employ each one for.
2. Is there a rationale behind only supporting the convenience header, as stated in the docs? Including the entire library on my machine (gcc-12, Ubuntu 22.04, -std=c++23) takes about 4.5 seconds - similar to the entire Asio in magnitude. Including only decimal32.hpp cuts the time to around 1s.
I never tried many of the permutations of headers outside of the convenience one. The library is structured to match the STL so that it is unsurprising to the average user. I think you could pick and choose if you wanted to.
I tried picking charconv and it didn't work :) I think that the actual header structure is good, matching STL as you said. I don't agree with recommending users to always include the entire library - I think that increases compile times without much benefit. Hence I was asking whether there was an actual reason to do it.
3. In the line of the previous question, is there a reason to have BOOST_DECIMAL_DISABLE_IOSTREAM instead of splitting iostream functionality to a separate header? In my experience, the more config macros you have, the more chances of getting bugs. Also, is the test suite being run with these macros defined?
We have options to disable a bunch of the clib functionality so that the library can run on embedded platforms. We do have QEMU emulation of an STM board in the CI which tests all of this. Why test embedded, you ask? It's not uncommon for finance devs to run on bare metal platforms.
I understand the objective, and I think it's great having tests for that. But I don't think the method is the best.

I've reviewed all uses of BOOST_DECIMAL_DISABLE_IOSTREAM, and if I'm reading this correctly, they all guard functions that are exclusively used in the tests. I don't think these functions should be in the headers shipped to users, but in the tests.

I acknowledge that these functions require access to private members of public classes, so I guess that's why they are defined there. I use a dummy friend struct placed in the detail namespace when I have such problems (I think I copied the pattern from Boost.Json). I think you can get rid of all the iostream includes altogether doing this (except for the ones in io.hpp, which are actually not guarded by the macro).

BOOST_DECIMAL_DISABLE_CLIB ifdefs-out entire headers - wouldn't it be simpler to have a subset of headers allowable in embedded systems, with others just labelled as "not supported"?

Regards, Ruben.
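The friend-struct pattern Ruben describes might look roughly like this (an illustrative sketch; decimal_accessor and the bits_ member are hypothetical names, not the library's actual API):

namespace boost { namespace decimal { namespace detail {

// Sole friend of the public classes; lives in detail so users won't touch it.
struct decimal_accessor
{
    template <typename Decimal>
    static constexpr auto bits(const Decimal& d) noexcept { return d.bits_; }
};

}}} // namespaces

// In the class body:  friend struct detail::decimal_accessor;
//
// The tests can then stream detail::decimal_accessor::bits(x) themselves,
// keeping all iostream includes out of the shipped headers.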

I didn't make my point here clear, sorry. I completely agree with including every cstdio function, except for sprintf. There's no way to communicate the length of the input buffer to sprintf, so it's very easy to end up with buffer overflows. I can't think of a use case where sprintf should be used over snprintf, which just adds the length parameter. So that's the only function I was advising to be removed.
That makes sense to me. I think all the major compilers issue warnings on sprintf usage already.
Some more questions: 1. As a user, when should I pick decimal64 vs decimal64_fast? I intend to implement support for your decimal types in the static interface of Boost.MySQL as part of this review - should I support decimalXY, decimalXY_fast, or both?
It would be a pretty easy template to support decimalXY and decimalXY_fast. I know for a fact decimalXY is being used out in industry, but I'm not sure about decimalXY_fast. It would be more space efficient to store as decimalXY.
I will implement both. It's true that from my library's perspective it doesn't matter. But I think as a user, I'd like to see some guidelines on which use cases I should employ each one for.
A guidelines section would be easy enough to add. Since I expect Peter Turcan to provide a laundry list of solid feedback on the docs, we will hold off for now.
2. Is there a rationale behind only supporting the convenience header, as stated in the docs? Including the entire library on my machine (gcc-12, Ubuntu 22.04, -std=c++23) takes about 4.5 seconds - similar to the entire Asio in magnitude. Including only decimal32.hpp cuts the time to around 1s.
I never tried many of the permutations of headers outside of the convenience one. The library is structured to match the STL so that it is unsurprising to the average user. I think you could pick and choose if you wanted to.
I tried picking charconv and it didn't work :)
I think that the actual header structure is good, matching STL as you said. I don't agree with recommending users to always include the entire library - I think that increases compile times without much benefit. Hence I was asking whether there was an actual reason to do it.
So there are things I have to manually set that most people never worry about for builtin types, like the global rounding mode and float evaluation method. Some impl headers need forward declarations of the types, some don't. It's pretty convenient from a design perspective to make things just work with no effort on the part of the user. I'm sure everything could be made to work piecemeal, but for a difference of maybe 3 seconds of compile time it's not worth the effort.
3. In the line of the previous question, is there a reason to have BOOST_DECIMAL_DISABLE_IOSTREAM instead of splitting iostream functionality to a separate header? In my experience, the more config macros you have, the more chances of getting bugs. Also, is the test suite being run with these macros defined?
We have options to disable a bunch of the clib functionality so that the library can run on embedded platforms. We do have QEMU emulation of an STM board in the CI which tests all of this. Why test embedded, you ask? It's not uncommon for finance devs to run on bare metal platforms.
I understand the objective, and I think it's great having tests for that. But I don't think the method is the best.
I've reviewed all uses of BOOST_DECIMAL_DISABLE_IOSTREAM, and if I'm reading this correctly, they all guard functions that are exclusively used in the tests. I don't think these functions should be in the headers shipped to users, but in the tests.
I acknowledge that these functions require access to private members of public classes, so I guess that's why they are defined there. I use a dummy friend struct placed in the detail namespace when I have such problems (I think I copied the pattern from Boost.Json). I think you can get rid of all the iostream includes altogether doing this (except for the ones in io.hpp, which are actually not guarded by the macro).
BOOST_DECIMAL_DISABLE_CLIB ifdefs-out entire headers - wouldn't it be simpler to have a subset of headers allowable in embedded systems, with others just labelled as "not supported"?
The functions are only used in tests because they are for the end user. We have no need for streaming in the implementation. Since this is a header-only library I am not worried about library incompatibilities from different configurations. Matt

I never tried many of the permutations of headers outside of the convenience one. The library is structured to match the STL so that it is unsurprising to the average user. I think you could pick and choose if you wanted to.
I tried picking charconv and it didn't work :)
I think that the actual header structure is good, matching STL as you said. I don't agree with recommending users to always include the entire library - I think that increases compile times without much benefit. Hence I was asking whether there was an actual reason to do it.
So there are things I have to manually set that most people never worry about for builtin types, like the global rounding mode and float evaluation method. Some impl headers need forward declarations of the types, some don't. It's pretty convenient from a design perspective to make things just work with no effort on the part of the user. I'm sure everything could be made to work piecemeal, but for a difference of maybe 3 seconds of compile time it's not worth the effort.
I'm afraid I don't agree here. Since this is a header-only library, this is 3 seconds added to both direct and indirect users of the library. If all Boost libraries did this, compile times would become unmanageable. Also, Boost doesn't have the best reputation in this aspect, so I think taking care of this is valuable. Having a set of public headers that work is established practice in Boost, and I'd advise to follow it.
3. In the line of the previous question, is there a reason to have BOOST_DECIMAL_DISABLE_IOSTREAM instead of splitting iostream functionality to a separate header? In my experience, the more config macros you have, the more chances of getting bugs. Also, is the test suite being run with these macros defined?
We have options to disable a bunch of the clib functionality so that the library can run on embedded platforms. We do have QEMU emulation of an STM board in the CI which tests all of this. Why test embedded, you ask? It's not uncommon for finance devs to run on bare metal platforms.
I understand the objective, and I think it's great having tests for that. But I don't think the method is the best.
I've reviewed all uses of BOOST_DECIMAL_DISABLE_IOSTREAM, and if I'm reading this correctly, they all guard functions that are exclusively used in the tests. I don't think these functions should be in the headers shipped to users, but in the tests.
I acknowledge that these functions require access to private members of public classes, so I guess that's why they are defined there. I use a dummy friend struct placed in the detail namespace when I have such problems (I think I copied the pattern from Boost.Json). I think you can get rid of all the iostream includes altogether doing this (except for the ones in io.hpp, which are actually not guarded by the macro).
BOOST_DECIMAL_DISABLE_CLIB ifdefs-out entire headers - wouldn't it be simpler to have a subset of headers allowable in embedded systems, with others just labelled as "not supported"?
The functions are only used in tests because they are for the end user. We have no need for streaming in the implementation. Since this is a header-only library I am not worried about library incompatibilities from different configurations.
I think we might be talking about different things here. Grepping for BOOST_DECIMAL_DISABLE_IOSTREAM, it protects the following functions:

* debug_pattern: not documented and excluded from coverage
* bit_string: not documented
* Streaming of the native/emulated, signed/unsigned 128/256-bit integer types, all of which are in namespace detail

Is the end user expected to use any of these?
Regards, Ruben.

Another quick question: decimal::to_chars returns a structure that contains an error code and a pointer, implying that the function can fail. Having a quick glance at the implementation, it returns an error code if an invalid character range is supplied (e.g. end_pointer < begin_pointer). Other than that, can to_chars fail anyhow? Thanks, Ruben.

Another quick question:
decimal::to_chars returns a structure that contains an error code and a pointer, implying that the function can fail. Having a quick glance at the implementation, it returns an error code if an invalid character range is supplied (e.g. end_pointer < begin_pointer). Other than that, can to_chars fail anyhow?
It's designed to match the STL interface (and you can use the STL interface if you want with >= C++17). The only other fail case is not being able to fit the printed value in the buffer at the specified precision (or precision 6 for unspecified). You get std::errc::value_too_large in that case. Matt
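In practice that means the usual std::to_chars-style check (a sketch, assuming the result struct exposes ptr and ec members like std::to_chars_result; the thread only confirms it holds a pointer and an error code):

#include <boost/decimal.hpp>
#include <system_error>

void print(boost::decimal::decimal32 d)
{
    char buf[64];
    auto r = boost::decimal::to_chars(buf, buf + sizeof(buf) - 1, d);
    if (r.ec == std::errc::value_too_large)
    {
        // Buffer too small for the value at the requested precision.
        return;
    }
    *r.ptr = '\0'; // to_chars-style interfaces do not null-terminate
}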

On Fri, 17 Jan 2025 at 21:01, Matt Borland <matt@mattborland.com> wrote:
Another quick question:
decimal::to_chars returns a structure that contains an error code and a pointer, implying that the function can fail. Having a quick glance at the implementation, it returns an error code if an invalid character range is supplied (e.g. end_pointer < begin_pointer). Other than that, can to_chars fail anyhow?
It's designed to match the STL interface (and you can use the STL interface if you want with >= C++17). The only other fail case is not being able to fit the printed value in the buffer at the specified precision (or precision 6 for unspecified). You get std::errc::value_too_large in that case.
This makes sense to me. Thanks.

Question: I've noticed that this:

1.23456789098787237982792742932938492382342382342002934932_df

compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long" and maybe error? I have never worked with custom literals before, so I don't know if this is possible or makes sense. Thanks, Ruben.

On Friday, January 17, 2025 at 10:18:21 PM GMT+1, Ruben Perez via Boost <boost@lists.boost.org> wrote:

Question: I've noticed that this: 1.23456789098787237982792742932938492382342382342002934932_df Compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"

It would absolutely make sense, but then you would need the same thing to happen for float, double and long double in the Standard, which isn't going to happen.

So Decimal follows suit.

On Fri, 17 Jan 2025 at 22:22, Christopher Kormanyos <e_float@yahoo.com> wrote:
Question:
I've noticed that this:
1.23456789098787237982792742932938492382342382342002934932_df
Compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"
It would absolutely make sense, but then you would need the same thing to happen for float, double and long double in the Standard, which isn't going to happen.
So Decimal follows suit.
Hm, would you? If I understood correctly, decimals are suited for exact calculations, while standard floating point values are intended for approximate calculations. In this line of thought, I think it would make sense for decimals to be much more strict with losing precision than floats. That is, rounding will almost always happen with a float literal, while rounding with a decimal literal is more likely to be a programmer error (I'd say). Of course, you know your users and the field better than me.

Christopher Kormanyos wrote:
Question: I've noticed that this:
1.23456789098787237982792742932938492382342382342002934932_df

Compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"

It would absolutely make sense, but then you would need the same thing to happen for float, double and long double in the Standard, which isn't going to happen. So Decimal follows suit.
That's not the same. Base 2 floats don't need a fixed number of decimal digits to be represented unambiguously, but decimal floats do.
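Peter's point, in code (illustrative; the seven-digit significand is the IEEE 754 decimal32 format, and the _df suffix is from the library's documentation):

using namespace boost::decimal;

// decimal32 carries exactly 7 significant decimal digits, so a literal
// either is representable exactly or demonstrably is not:
auto exact = 1.234567_df;  // stored exactly as 1234567 x 10^-6
auto lossy = 1.2345678_df; // 8 digits: silently rounds to 1.234568

// A binary float, by contrast, almost never represents its decimal
// literal exactly, so "the literal didn't fit" is not a meaningful
// diagnostic for float/double in the first place.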

Question: I've noticed that this:

1.23456789098787237982792742932938492382342382342002934932_df

Compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"

It would absolutely make sense, but then you would need the same thing to happen for float, double and long double in the Standard, which isn't going to happen. So Decimal follows suit.

That's not the same. Base 2 floats don't need a fixed number of decimal digits to be represented unambiguously, but decimal floats do.

Oh that's a good point. It goes into the direction of language specification.

When we talk about adding value to this library today, we could have the discussion. I guess we would need to figure out a portable way to provide such compile-time info.

This might seem a bit skeptical from my side, but, as much as I agree, I don't feel that it really leads anywhere in the case of Decimal TODAY.

I don't believe it's within the scope or purpose of Decimal to figure out how to inform the client "Hey, your fixed-point representation just walked off its digits". That would be something for the Standard, and perhaps even something for fixed-point in general.

- Chris

On Fri, 17 Jan 2025 at 22:57, Christopher Kormanyos via Boost <boost@lists.boost.org> wrote:
Question: I've noticed that this:

1.23456789098787237982792742932938492382342382342002934932_df

Compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"

It would absolutely make sense, but then you would need the same thing to happen for float, double and long double in the Standard, which isn't going to happen. So Decimal follows suit.

That's not the same. Base 2 floats don't need a fixed number of decimal digits to be represented unambiguously, but decimal floats do.

Oh that's a good point. It goes into the direction of language specification.

When we talk about adding value to this library today, we could have the discussion. I guess we would need to figure out a portable way to provide such compile-time info.

This might seem a bit skeptical from my side, but, as much as I agree, I don't feel that it really leads anywhere in the case of Decimal TODAY.

I don't believe it's within the scope or purpose of Decimal to figure out how to inform the client "Hey, your fixed-point representation just walked off its digits". That would be something for the Standard, and perhaps even something for fixed-point in general.
I was thinking of something along the lines of:

BOOST_DECIMAL_EXPORT consteval auto operator ""_dd(const char* str) -> decimal64
{
    decimal64 d;
    bool has_precision_error = detail::from_chars_report_loss_of_precision(str, str + detail::strlen(str), d);
    if (has_precision_error)
        throw std::runtime_error("literal loses precision"); // throwing in a consteval function just yields a compile-time error
    return d;
}

In a similar spirit to what fmtlib does to error at compile time when you make a mistake with format arguments. Of course, this requires a bunch of ifdefs to make it work in C++14. So if you think it doesn't add enough value, I understand it. Regards, Ruben.

Question: I've noticed that this:

1.23456789098787237982792742932938492382342382342002934932_df

Compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"

<snip>

In a similar spirit to what fmtlib does to error at compile time when you make a mistake with format arguments. Of course, this requires a bunch of ifdefs to make it work in C++14.

I think we have all those idioms within a C++14-constexpr context. So we would just need to collect them.

So if you think it doesn't add enough value, I understand it.

I was just thinking to myself, prior to reading this post: hey, wouldn't it make more sense if we catch this at compile-time?

Now I'm liking this idea.

If we end up doing it or something like it, why would we throw() instead of static_assert-ing it? Wouldn't static_assert be better?

- Chris

Question: I've noticed that this:

1.23456789098787237982792742932938492382342382342002934932_df

Compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"

<snip>

So if you think it doesn't add enough value, I understand it.

Now I'm liking this idea.

But my final advice goes like this, and you probably won't like it.

It will detract value. Clients making data tables will constantly wonder why they are hitting all kinds of compiler problems.

When writing code for several widths of types, there might need to be separate sets of literal constants for different decimal widths (unless the highest width were taken for table entries and static_cast-ing were judiciously used).

And this whole great idea will turn into more of a mess than a help. Clients would get discouraged and this would ultimately reduce library acceptance and traction gaining.

I would advise avoiding theoretically cool-seeming compile-time asserts like these.

- Chris

On Sat, 18 Jan 2025 at 10:26, Christopher Kormanyos <e_float@yahoo.com> wrote:
Question: I've noticed that this:
1.23456789098787237982792742932938492382342382342002934932_df Compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"
<snip>
So if you think it doesn't add enough value, I understand it.
Now I'm liking this idea.
But my final advice goes like this, and you probably won't like it.
Well, I don't have a strong opinion about this. You have much more field experience about this than me, so you're best suited to make the final decision. I'd like to understand your reasoning though.
It will detract value. Clients making data tables will constantly wonder why they are hitting all kinds of compiler problems.
When writing code for several widths of types, there might need to be separate sets of literal constants for different decimal widths (unless the highest width were taken for table entries and static_cast-ing were judiciously used).
I think I'm not following you here. Could you illustrate such a use case? Concretely, what do you mean by several widths of types? Do you mean decimal32 vs decimal64? With the syntax you have today, you already have distinct suffixes for each type (i.e. _df always constructs decimal32, _dd always constructs decimal64, and so on).
And this whole great idea will turn into more of a mess than a help. Clients would get discouraged and this would ultimately reduce library acceptance and traction gaining.
I would advise avoiding theoretically cool-seeming compile-time asserts like these.
- Chris

It will detract value. Clients making data tables will constantly wonder why they are hitting all kinds of compiler problems.

When writing code for several widths of types, there might need to be separate sets of literal constants for different decimal widths (unless the highest width were taken for table entries and static_cast-ing were judiciously used).

I think I'm not following you here.

I think honestly I'm being a bit unfairly terse here.

Could you illustrate such a use case? Concretely, what do you mean by several widths of types? Do you mean decimal32 vs decimal64?

Yes. Let's say you made a program that uses one data table for a type T that could be decimal32, 64 or 128. But you only wanted one table.

You would make the table entries with the highest width suffix and decorate each table entry with a static-cast to T in order to get all the table entries into type T.

To be fair, everyone already has to do something like this with normal float, double and long double.

With the syntax you have today, you already have distinct suffixes for each type (i.e. _df always constructs decimal32, _dd always constructs decimal64, and so on).

Yes. And I gave one potential recipe on how to deal with this in generic code above. My judgement may have been hurried and might seem unfair.

But my intuition tells me, a whole bunch of clients would run into compilation failures. Until they come up with their own recipes for generic literal values.

And the result would be more frustration than had we simply done nothing.

This is what my intuition and experience tells me.

- Chris
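The recipe Chris describes would look something like this in code (an illustrative sketch; _dl is assumed here to be the decimal128 suffix, by analogy with the documented _df and _dd):

#include <boost/decimal.hpp>

using namespace boost::decimal;

// One table, written once with the widest literals, narrowed per type T.
template <typename T>
const T constants[] = {
    static_cast<T>(3.14159265358979323846264338327950288_dl),
    static_cast<T>(2.71828182845904523536028747135266250_dl),
};

// constants<decimal32>, constants<decimal64> and constants<decimal128>
// all come from the same source lines, at the cost of a static_cast each.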

Could you illustrate such a use case? Concretely, what do you mean by several widths of types? Do you mean decimal32 vs decimal64?
Yes. Let's say you made a program that uses one data table for a type T that could be decimal32, 64 or 128. But you only wanted one table.
You would make the table entries with the highest width suffix and decorate each table entry with a static_cast to T in order to get all the table entries into type T.
To be fair, everyone already has to do something like this with normal float, double and long double.
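To make the recipe concrete, a minimal sketch (illustrative, not library code; it assumes a _dl suffix for decimal128 by analogy with _df/_dd, and that the literal suffixes are visible after using namespace boost::decimal):

#include <boost/decimal.hpp>
#include <array>

using namespace boost::decimal;

// One table of constants, written once at the highest width (decimal128)
// and narrowed to T on access; T may be decimal32, decimal64 or decimal128.
template <typename T>
std::array<T, 3> make_table()
{
    return {{
        static_cast<T>(1.414213562373095048801688724209698_dl),
        static_cast<T>(2.718281828459045235360287471352662_dl),
        static_cast<T>(3.141592653589793238462643383279503_dl)
    }};
}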
With the syntax you have today, you already have distinct suffixes for each type (i.e. _df always constructs decimal32, _dd always constructs decimal64, and so on).
Yes. And I gave one potential recipe on how to deal with this in generic code above. My judgement may have been hurried and might seem unfair.
But my intuition tells me a whole bunch of clients would run into compilation failures until they come up with their own recipes for generic literal values.
And the result would be more frustration than had we simply done nothing.
This is what my intuition and experience tells me.
I will trust your experience here, as it's much wider than mine. Thanks, Ruben.
- Chris
On Saturday, January 18, 2025 at 12:35:58 PM GMT+1, Ruben Perez <rubenperez038@gmail.com> wrote:
On Sat, 18 Jan 2025 at 10:26, Christopher Kormanyos <e_float@yahoo.com> wrote:
Question: I've noticed that this:
1.23456789098787237982792742932938492382342382342002934932_df
compiles, although it truncates the value to 1.234568. Would it make sense to somehow tell the user "hey, this literal is too long"?
<snip>
So if you think it doesn't add enough value, I understand it.
Now I'm liking this idea.
But my final advice goes like this, and you probably won't like it.
Well, I don't have a strong opinion about this. You have much more field experience about this than me, so you're best suited to make the final decision.
I'd like to understand your reasoning though.
It will detract value. Clients making data tables will constantly wonder why they are hitting all kinds of compiler problems.
When writing code for several widths of types, there might need to be separate sets of literal constants for different decimal widths (unless the highest width were taken for table entries and static_cast-ing were judiciously used).
I think I'm not following you here. Could you illustrate such a use case? Concretely, what do you mean by several widths of types? Do you mean decimal32 vs decimal64? With the syntax you have today, you already have distinct suffixes for each type (i.e. _df always constructs decimal32, _dd always constructs decimal64, and so on).
And this whole great idea will turn into more of a mess than a help. Clients would get discouraged and this would ultimately reduce library acceptance and traction.
I would advise avoiding theoretically cool-seeming compile-time asserts like these.
- Chris

When writing code for several widths of types, there might need to be separate sets of literal constants for different decimal widths (unless the highest width were taken for table entries and static_cast-ing were judiciously used).
I think I'm not following you here. Could you illustrate such a use case? Concretely, what do you mean by several widths of types? Do you mean decimal32 vs decimal64? With the syntax you have today, you already have distinct suffixes for each type (i.e. _df always constructs decimal32, _dd always constructs decimal64, and so on).
And this whole great idea will turn into more of a mess than a help. Clients would get discouraged and this would ultimately reduce library acceptance and traction.
I would advise avoiding theoretically cool-seeming compile-time asserts like these.
Chris, Ruben, I *think* you are somewhat talking about different things here...
Ruben's point was that if you have a literal such as
1.2345678912345678_DD
Then the type computed is a decimal64 and the result will be rounded to 16 decimal places. This is true regardless of whether it is subsequently static_cast to something else (potentially causing double rounding).
So... I'm *reasonably* sure that Chris's argument doesn't hold so much water.
But, I do see one argument against a compile time assert, which is the "this is a right pain in the butt" argument, basically that users may not want to be forced to round all the arguments themselves, and/or that this may be error prone: have you counted exactly 16 digits and not accidentally rounded to 15? Plus string streaming will presumably round excess digits so it kind of makes sense to be consistent.
But... I'm supposed to be an impartial judge, so I'll shut up now and let you get on with it ;) I just thought you were perhaps misunderstanding each other.
John.
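To spell out the double-rounding scenario, a small sketch (illustrative; it assumes the _dd suffix and the 16/7-digit precisions discussed in the thread):

#include <boost/decimal.hpp>

using namespace boost::decimal;

void double_rounding_example()
{
    // 17 significant digits in the source text: rounded once to
    // decimal64's 16 digits when the literal operator runs...
    auto d64 = 1.2345678912345678_dd;

    // ...and a second time to decimal32's 7 digits here. The result can
    // differ from rounding the original 17-digit text straight to 7 digits.
    auto d32 = static_cast<decimal32>(d64);
    (void)d32;
}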

John Maddock wrote:
Chris, Ruben, I *think* you are somewhat talking about different things here...
Ruben's point was that if you have a literal such as
1.2345678912345678_DD
Then the type computed is a decimal64 and the result will be rounded to 16 decimal places. This is true regardless of whether it is subsequently static_cast to something else (potentially causing double rounding).
So... I'm *reasonably* sure that Chris's argument doesn't hold so much water.
But, I do see one argument against a compile time assert, which is the "this is a right pain in the butt" argument, basically that users may not want to be forced to round all the arguments themselves, and/or that this may be error prone: have you counted exactly 16 digits and not accidentally rounded to 15? Plus string streaming will presumably round excess digits so it kind of makes sense to be consistent.
A better option than failing here is probably to issue a warning, but constexpr_warn_str is still years away. https://isocpp.org/files/papers/P2758R4.html

For anyone who is interested, Braden Ganetsky was kind enough to put together pretty printers for the library, which can be found here: https://github.com/cppalliance/decimal/compare/develop...k3DW:decimal:gdb-pr.... Debugging decimalXX can be a bit challenging because all GDB will show is the value of the underlying unsigned integer being manipulated. These should be considered beta for now and will be merged into the library once they are complete. Thanks for the help Braden. Matt

On 18/01/2025 17:56, Peter Dimov via Boost wrote:
John Maddock wrote:
Chris, Ruben, I *think* you are somewhat talking about different things here...
Ruben's point was that if you have a literal such as
1.2345678912345678_DD
Then the type computed is a decimal64 and the result will be rounded to 16 decimal places. This is true regardless of whether it is subsequently static_cast to something else (potentially causing double rounding).
So... I'm *reasonably* sure that Chris's argument doesn't hold so much water.
But, I do see one argument against a compile time assert, which is the "this is a right pain in the butt" argument, basically that users may not want to be forced to round all the arguments themselves, and/or that this may be error prone: have you counted exactly 16 digits and not accidentally rounded to 15? Plus string streaming will presumably round excess digits so it kind of makes sense to be consistent.
A better option than failing here is probably to issue a warning, but constexpr_warn_str is still years away.
Indeed, this would be perfect for that!
https://isocpp.org/files/papers/P2758R4.html

Ruben's point was that if you have a literal such as
1.2345678912345678_DD
Then the type computed is a decimal64 and the result will be rounded to 16 decimal places. This is true regardless of whether it is subsequently static_cast to something else (potentially causing double rounding).
So... I'm *reasonably* sure that Chris's argument doesn't hold so much water.
Agreed. Yes. I do think, however, we all kind of wound up figuring this out. My original argumentation might have been off. Maybe intuitively I sensed the "users will get frustrated" part. So we are all on the same page now.
But, I do see one argument against a compile time assert, which is the "this is a right pain in the butt" argument, basically that users may not want to be forced to round all the arguments themselves, and/or that this may be error prone: have you counted exactly 16 digits and not accidentally rounded to 15? Plus string streaming will presumably round excess digits so it kind of makes sense to be consistent.
My advice remains the same. Do not bother our clients by forcing them to create their own exact representations for tables, exact values in subroutines and the like. This would lead to a higher level of client dissatisfaction overall.
Thanks Ruben, thanks John, I think in my/our roundabout way, we figured this out.
- Chris

On Sat, Jan 18, 2025 at 6:57 PM Peter Dimov via Boost <boost@lists.boost.org> wrote:
A better option than failing here is probably to issue a warning, but constexpr_warn_str is still years away.
Why not let users check the exactness of the initialization themselves? Something like:
static constexpr const char* init = "3.1415926535897932384626433";
and then the user can do
template<typename Dec>
inline constexpr auto my_pi = exact_from_chars<Dec>(init);
where exact_from_chars would fail (assert/throw/return ec/...) when the decimal number in the string can not be exactly represented by Dec. Obviously this should be done in a way that makes it easy to transform the init array (potentially multidimensional) into an array of Dec, besides supporting just the scalar scenario, but this seems like an OK solution to me.
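A minimal sketch of that idea (exact_from_chars is hypothetical; it assumes the library's charconv-style from_chars and specialized numeric_limits, and uses digits10 as a crude exactness proxy — trailing zeros are counted even though they would be representable; a constexpr variant would need a constexpr-capable from_chars):

#include <boost/decimal.hpp>
#include <cstring>
#include <limits>
#include <stdexcept>

template <typename Dec>
Dec exact_from_chars(const char* s)
{
    // Count significant decimal digits, ignoring leading zeros.
    std::size_t digits = 0;
    bool seen_nonzero = false;
    for (const char* p = s; *p != '\0'; ++p)
    {
        if (*p >= '0' && *p <= '9')
        {
            if (*p != '0') { seen_nonzero = true; }
            if (seen_nonzero) { ++digits; }
        }
    }
    if (digits > static_cast<std::size_t>(std::numeric_limits<Dec>::digits10))
    {
        throw std::invalid_argument("literal has more digits than Dec holds exactly");
    }
    Dec value {};
    boost::decimal::from_chars(s, s + std::strlen(s), value); // error handling omitted
    return value;
}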

A better option than failing here is probably to issue a warning, but constexpr_warn_str is still years away.
Why not let users check the exactness of the initialization themselves?
Is there a <cmath> equivalent doing the same for float, double, etc.?
I don't want to be some kind of a "downer" but it is not the place of decimal to extend the standard library or its interpretation of IEEE-754. Just because you can do something does not mean you should.
- Chris

Am 18.01.25 um 12:48 schrieb Christopher Kormanyos via Boost:
Could you illustrate such a use case? Concretely, what do you mean by several widths of types? Do you mean decimal32 vs decimal64?
Yes. Let's say you made a program that uses one data table for a type T that could be decimal32, 64 or 128. But you only wanted one table. You would make the table entries with the highest width suffix and decorate each table entry with a static_cast to T in order to get all the table entries into type T. To be fair, everyone already has to do something like this with normal float, double and long double.
With the syntax you have today, you already have distinct suffixes for each type (i.e. _df always constructs decimal32, _dd always constructs decimal64, and so on).
Yes. And I gave one potential recipe on how to deal with this in generic code above. My judgement may have been hurried and might seem unfair. But my intuition tells me a whole bunch of clients would run into compilation failures until they come up with their own recipes for generic literal values. And the result would be more frustration than had we simply done nothing.
I'm not sure I understand this. How can you have "generic code" with different suffixes where there are compilation failures that would not be there in non-generic code? From "one data table for a type T" I imagine something like
`T table[] = {1.25_df, 3.2_df, 51_df}`
So while your table type is generic the constants used are not. If the above errors due to using `1.25352397826723672375_df` you ought to use `_dd` to be able to use it in `decimal64 table[]` anyway. I don't see a scenario where it would be ok to truncate a constant and then store it in a type that would have been able to store the non-truncated value. The other way round is more reasonable: use rounding/truncation when really required by the target type. Can you explain which case you had in mind?

one potential recipe on how to deal with this in generic code above.
I'm not sure I understand this. How can you have "generic code" with different suffixes where there are compilation failures that would not be there in non-generic code?
From "one data table for a type T" I imagine something like
`T table[] = {1.25_df, 3.2_df, 51_df}`
So while your table type is generic the constants used are not.
You are right and my rhetoric was flawed. In fact, that is exactly what I meant. My description got all confused and was not quite right. You might cast each table member to T via static_cast, but that is the general idea.
Now one thing I am not confused about is that I do not want compilation errors nor warnings if the client puts not-enough/too-many digits in the string literals. That was the main issue I was attempting to debate.
Thank you and good catch.
- Chris

On Friday, January 17th, 2025 at 2:35 PM, Ruben Perez <rubenperez038@gmail.com> wrote:
I never tried many of the permutations of headers outside of the convenience one. The library is structured to match the STL so that it is unsurprising to the average user. I think you could pick and choose if you wanted to.
I tried picking charconv and it didn't work :)
I think that the actual header structure is good, matching STL as you said. I don't agree with recommending users to always include the entire library - I think that increases compile times without much benefit. Hence I was asking whether there was an actual reason to do it.
So there are things I have to manually set that most never worry about for builtin types, like the global rounding mode and float evaluation method. Some impl headers need forward declarations of the types, some don't. It's pretty convenient from a design perspective to make things just work with no effort on the part of the user. I'm sure everything could be made to work piecemeal, but for a difference of maybe 3 seconds of compile time it's not worth the effort.
I'm afraid I don't agree here. Since this is a header-only library, this is 3 seconds added to both direct and indirect users of the library. If all Boost libraries did this, compile times would become unmanageable. Also, Boost doesn't have the best reputation in this aspect, so I think taking care of this is valuable.
Having a set of public headers that work is established practice in Boost, and I'd advise to follow it.
3. In the line of the previous question, is there a reason to have BOOST_DECIMAL_DISABLE_IOSTREAM instead of splitting iostream functionality to a separate header? In my experience, the more config macros you have, the more chances of getting bugs. Also, is the test suite being run with these macros defined?
We have the option to disable a bunch of the clib functionality so that the library can run on embedded platforms. We do have QEMU emulation of an STM board in the CI which tests all of this. Why test embedded, you ask? It's not uncommon for finance devs to run on bare metal platforms.
I understand the objective, and I think it's great having tests for that. But I don't think the method is the best.
I've reviewed all uses of BOOST_DECIMAL_DISABLE_IOSTREAM, and if I'm reading this correctly, they all guard functions that are exclusively used in the tests. I don't think these functions should be in the headers shipped to users, but in the tests.
I acknowledge that these functions require access to private members of public classes, so I guess that's why they are defined there. I use a dummy friend struct placed in the detail namespace when I have such problems (I think I copied the pattern from Boost.Json). I think you can get rid of all the iostream includes altogether doing this (except for the ones in io.hpp, which are actually not guarded by the macro).
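For reference, a minimal sketch of that dummy-friend pattern (all names here are illustrative, not the library's actual members):

#include <cstdint>

namespace boost { namespace decimal {

namespace detail { struct test_access; }

class decimal32 // simplified stand-in for the real class
{
public:
    // ... public interface ...
private:
    friend struct detail::test_access;
    std::uint32_t bits_ {}; // placeholder name for the real storage member
};

namespace detail {
// Tests call test_access::bits(x) and do their own streaming,
// so no <iostream> include is needed in the shipped headers.
struct test_access
{
    static std::uint32_t bits(const decimal32& d) noexcept { return d.bits_; }
};
} // namespace detail

}} // namespace boost::decimal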
BOOST_DECIMAL_DISABLE_CLIB ifdefs-out entire headers - wouldn't it be simpler to have a subset of headers allowable in embedded systems, with others just labelled as "not supported"?
The functions are only used in tests because they are for the end user. We have no need for streaming in the implementation. Since this is a header-only library I am not worried about library incompatibilities from different configurations.
I think we might be talking about different things here. Grepping for BOOST_DECIMAL_DISABLE_IOSTREAM, it protects the following functions:
* debug_pattern: not documented and excluded from coverage
* bit_string: not documented
* Streaming native/emulated, signed/unsigned 128/256 integer types, all of which are in namespace detail.
Is the end user expected to use any of these?
No, but if you define BOOST_DECIMAL_DISABLE_CLIB like you previously mentioned it defines for you BOOST_DECIMAL_DISABLE_IOSTREAM (if not already defined). The guard in detail/io.hpp should be the latter instead of the former for consistency as detailed in the docs. We use BOOST_DECIMAL_DISABLE_CLIB in the metal CI. Matt

BOOST_DECIMAL_DISABLE_CLIB ifdefs-out entire headers - wouldn't it be simpler to have a subset of headers allowable in embedded systems, with others just labelled as "not supported"?
Detailed comments follow below, but I personally doubt there would be anything remotely simple about having different amounts of headers. To the contrary, it would add more complexity.
It has been a while since we invented these compiler switches and I actually needed to review them myself.
BOOST_DECIMAL_DISABLE_CLIB seems to be going in the direction (if you follow this) of the freestanding movement in C++26. Basically, BOOST_DECIMAL_DISABLE_CLIB disables all potentially heavyweight components. I can list what these are in a second post. So you might not get <string> or <charconv>-like support on the metal.
BOOST_DECIMAL_DISABLE_CLIB has the secondary effect of *also* defining BOOST_DECIMAL_DISABLE_IOSTREAM (see below). Disabling I/O streaming only cuts out headers like <iostream>, <sstream> and their buddies.
The functions are only used in tests because they are for the end user. We have no need for streaming in the implementation. Since this is a header-only library I am not worried about library incompatibilities from different configurations.
I think we might be talking about different things here. Grepping for BOOST_DECIMAL_DISABLE_IOSTREAM, it protects the following functions:
* debug_pattern: not documented and excluded from coverage
* bit_string: not documented
* Streaming native/emulated, signed/unsigned 128/256 integer types, all of which are in namespace detail.
BOOST_DECIMAL_DISABLE_IOSTREAM does pretty much *literally* what it says. It disables the I/O streaming functions.
In contrast, BOOST_DECIMAL_DISABLE_CLIB disables a lot more, like <string> and the like, and a bunch of stuff that's not already by its nature more or less freestanding.
To be honest, I am completely satisfied with the inner workings of these compiler options at the moment. We don't have freestanding yet, so this situation sort of mocks that up.
- Chris

On Fri, 17 Jan 2025 at 22:16, Christopher Kormanyos <e_float@yahoo.com> wrote:
BOOST_DECIMAL_DISABLE_CLIB ifdefs-out entire headers - wouldn't it be simpler to have a subset of headers allowable in embedded systems, with others just labelled as "not supported"?
Detailed comments follow below, but I personally doubt if there would be anything remotely simple about having different amounts of headers. To the contrary, it would add more complexity.
It has been a while since we invented these compiler switches and I actually needed to review them myself.
BOOST_DECIMAL_DISABLE_CLIB seems to be going in the direction (if you follow this) of the freestanding movement in C++26. Basically, BOOST_DECIMAL_DISABLE_CLIB disables all potentially heavyweight components. I can list what these are in a second post. So you might not get <string> or <charconv>-like support on the metal.
No, I haven't been following. What follows are my thoughts, and upfront apologies if I got something wrong.
From what I've seen, there's a __STDC_HOSTED__ macro that tells you whether you're in bare metal (value 0) or not (value 1). Then, some headers have some parts of them ifdef-ed out when the macro is set to 0 (freestanding mode). Peeking at the libstdc++ (gcc-13) implementation, which supports this, there are headers that do this (when some definitions are present on freestanding implementations and others are not), and headers that require a hosted implementation to be included. For instance, fstream, which is not supported in freestanding mode, does the following:
#include <bits/requires_hosted.h> // iostreams
// Rest of the contents, unguarded
This file will issue an error in freestanding mode:
#include <bits/c++config.h>
#if !_GLIBCXX_HOSTED // this is defined to __STDC_HOSTED__
# error "This header is not available in freestanding mode."
#endif
All the headers in decimal I've seen would fall in the second category: their contents are either fully available in freestanding mode, or not at all. So I think it would make sense to follow what libstdc++ does.
In any case (and apologies if I got this wrong), it looks like the approach you have stems from only supporting the convenience header, and not the other smaller headers. I'd say that something like:
// Freestanding OK
#include <boost/decimal/decimal32.hpp>
#include <boost/decimal/decimal32_fast.hpp>
// ...
#ifndef BOOST_DECIMAL_DISABLE_CLIB
#include <boost/decimal/cstdlib.hpp>
#include <boost/decimal/detail/io.hpp>
// ...
#endif
would make sense. Of course, this requires you to slightly rework some parts of the implementation.
Regards, Ruben.
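Translated to this library, the libstdc++-style guard could look something like this (a sketch; the header name is hypothetical and not in the repo):

// boost/decimal/detail/requires_hosted.hpp (hypothetical)
#ifndef BOOST_DECIMAL_DETAIL_REQUIRES_HOSTED_HPP
#define BOOST_DECIMAL_DETAIL_REQUIRES_HOSTED_HPP

#ifdef BOOST_DECIMAL_DISABLE_CLIB
# error "This header is not available when BOOST_DECIMAL_DISABLE_CLIB is defined."
#endif

#endif

// Then e.g. boost/decimal/cstdlib.hpp starts with
// #include <boost/decimal/detail/requires_hosted.hpp>
// and the rest of its contents stay unguarded.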

I'd say that something like:
// Freestanding OK
#include <boost/decimal/decimal32.hpp>
#include <boost/decimal/decimal32_fast.hpp>
// ...
#ifndef BOOST_DECIMAL_DISABLE_CLIB
#include <boost/decimal/cstdlib.hpp>
#include <boost/decimal/detail/io.hpp>
// ...
#endif
Would make sense. Of course, this requires you to slightly rework some parts of the implementation.
Regards, Ruben.
Yeah, that would be fine, Ruben. I'll wait for Matt too.
Basically you just moved the PP queries to a higher point upstream.
I thought you wanted us to actually have two physical sets of headers, which kind of scared me.
Chris

On Friday, January 17th, 2025 at 4:54 PM, Christopher Kormanyos <e_float@yahoo.com> wrote:
I'd say that something like:
// Freestanding OK
#include <boost/decimal/decimal32.hpp>
#include <boost/decimal/decimal32_fast.hpp>
// ...
#ifndef BOOST_DECIMAL_DISABLE_CLIB
#include <boost/decimal/cstdlib.hpp>
#include <boost/decimal/detail/io.hpp>
// ...
#endif
Would make sense. Of course, this requires you to slightly rework some parts of the implementation.
Regards, Ruben.
Yeah, that would be fine, Ruben. I'll wait for Matt too.
Basically you just moved the PP queries to a higher point upstream.
I thought you wanted us to actually have two physical sets of headers, which kind of scared me.
Chris
Yeah, I don't see any issue with this suggestion. Matt

Ruben Perez wrote:
4. From sprintf's documentation: "In the interest of safety sprintf simply calls snprintf with buf_size equal to sizeof(buffer). ". This doesn't look right. This is what the implementation looks like:
template <typename... T>
inline auto sprintf(char* buffer, const char* format, T... values) noexcept
#ifndef BOOST_DECIMAL_HAS_CONCEPTS
    -> std::enable_if_t<detail::is_decimal_floating_point_v<std::common_type_t<T...>>, int>
#else
    -> int requires detail::is_decimal_floating_point_v<std::common_type_t<T...>>
#endif
...
It would be better to make the constraint say what we actually mean, namely, "each type in T... should be integral, floating point, or decimal floating point". Using something entirely different just because we think it's equivalent to the above isn't superior in any way to just saying what we actually mean, even if it were actually equivalent. (It isn't because common_type can be specialized by the user.) (Providing sprintf in 2025 is a bit debatable, std::format is the way to go here.)

On Thursday, January 16th, 2025 at 11:45 AM, Peter Dimov via Boost <boost@lists.boost.org> wrote:
Ruben Perez wrote:
4. From sprintf's documentation: "In the interest of safety sprintf simply calls snprintf with buf_size equal to sizeof(buffer). ". This doesn't look right. This is what the implementation looks like:
template <typename... T>
inline auto sprintf(char* buffer, const char* format, T... values) noexcept
#ifndef BOOST_DECIMAL_HAS_CONCEPTS
    -> std::enable_if_t<detail::is_decimal_floating_point_v<std::common_type_t<T...>>, int>
#else
    -> int requires detail::is_decimal_floating_point_v<std::common_type_t<T...>>
#endif
...
It would be better to make the constraint say what we actually mean, namely, "each type in T... should be integral, floating point, or decimal floating point".
Using something entirely different just because we think it's equivalent to the above isn't superior in any way to just saying what we actually mean, even if it were actually equivalent. (It isn't because common_type can be specialized by the user.)
I guess I don't know a "superior" way to write "all types in T should be decimal floating point types". I thought that was pretty succinct and communicated the requirements fine.
(Providing sprintf in 2025 is a bit debatable, std::format is the way to go here.)
I found that to get <format> to actually work correctly you need GCC >= 13, Clang >= 18, and MSVC >= 1940 which is a relatively high bar. Matt

Matt Borland wrote:
I guess I don't know a "superior" way to write all types in T should be decimal floating point types.
(is_decimal_floating_point_v<T> && ...)
when fold expressions are available, and
std::conjunction_v<is_decimal_floating_point<T>...>
when not. (Or mp11::mp_all instead of std::conjunction.)
But that's not what the common_type constraint said. common_type<int, decimal32> is decimal32, so it would accept (int, decimal32). (But would not accept int by itself, or (int, int).)
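A sketch of what saying it directly could look like in the library's C++14 mode (a hand-rolled conjunction, since std::conjunction and fold expressions are C++17; the trait here is a placeholder standing in for the class template behind detail::is_decimal_floating_point_v):

#include <type_traits>

// Placeholder trait; the real one lives in the library's detail namespace.
template <typename T>
struct is_decimal_floating_point : std::false_type {};

// C++14 stand-in for std::conjunction.
template <typename...>
struct all_of : std::true_type {};
template <typename T, typename... Rest>
struct all_of<T, Rest...>
    : std::conditional<T::value, all_of<Rest...>, std::false_type>::type {};

// "Each type in T... is a decimal floating point type" - rejects (int, decimal32).
template <typename... T>
using enable_if_all_decimal_t =
    typename std::enable_if<all_of<is_decimal_floating_point<T>...>::value, int>::type;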

On Thu, Jan 16, 2025 at 12:08, Peter Dimov <pdimov@gmail.com> wrote:
Matt Borland wrote:
I guess I don't know a "superior" way to write all types in T should be decimal floating point types.
(is_decimal_floating_point_v<T> && ...) when fold expressions are available, and std::conjunction_v<is_decimal_floating_point<T>...>
I did not know conjunction existed. Learn something new every day. Thanks.
when not. (Or mp11::mp_all instead of std::conjunction.) But that's not what the common_type constraint said. common_type<int, decimal32> is decimal32, so it would accept (int, decimal32). (But would not accept int by itself, or (int, int).)

On Thu, Jan 16, 2025 at 2:07 PM Matt Borland via Boost <boost@lists.boost.org> wrote:
The only update I found was https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3407.html. I emailed Dietmar a while back to ask why it was never accepted, but I did not receive a response.
I found this SO answer <https://stackoverflow.com/a/4689840/> quite informative:
I happened to be at the meeting where IBM initially proposed the decimal types to WG14 and WG21. Their initial proposal was to provide them as native types, which is pretty much the only solution in C. However, WG21 wasn't entirely convinced and pointed out that C++ already has std::complex<> as a mathematical type in the library, so why not std::decimal<>? Initial confusion about the performance overhead was quickly ended when it was pointed out that std::decimal could obviously wrap a _Decimal compiler extension.
After pointing out that this could be done in a library, the next question was then whether this should be in the Standard library. It's after all a specialized domain in which this is useful. The most commonly thought-of domain, finance, doesn't actually need it (they really need decimal fixed-point, not decimal floating point). IBM didn't push their proposal a lot further, after this feedback.
These types do not resolve the issue of floating-point inaccuracy. 1/3 still isn't representable. However, 1/5 is.

The only update I found was https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3407.html. I emailed Dietmar a while back to ask why it was never accepted, but I did not receive a response.
I found this SO answer quite informative:
I happened to be at the meeting where IBM initially proposed the decimal types to WG14 and WG21. Their initial proposal was to provide them as native types, which is pretty much the only solution in C. However, WG21 wasn't entirely convinced and pointed out that C++ already has std::complex<> as a mathematical type in the library, so why not std::decimal<>? Initial confusion about the performance overhead was quickly ended when it was pointed out that std::decimal could obviously wrap a _Decimal compiler extension.
After pointing out that this could be done in a library, the next question was then whether this should be in the Standard library. It's after all a specialized domain in which this is useful. The most commonly thought-of domain, finance, doesn't actually need it (they really need decimal fixed-point, not decimal floating point). IBM didn't push their proposal a lot further, after this feedback.
These types do not resolve the issue of floating-point inaccuracy. 1/3 still isn't representable. However, 1/5 is.
That's a decent perspective. Assuming Decimal makes it into Boost it will be broadly and easily accessible anyway.
Again, there is also the implementation factor. This library took Chris and me a solid 18 months in our specialty domain. Is it worth the time for all the standard library implementers to re-implement something that's already portable and accessible?
SLOCCount's analysis is a bit more draconian (generated using David A. Wheeler's 'SLOCCount'):
Total Physical Source Lines of Code (SLOC) = 58,068
Development Effort Estimate, Person-Years (Person-Months) = 14.23 (170.74) (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months) = 1.47 (17.63) (Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule) = 9.69
Total Estimated Cost to Develop = $1,922,097 (average salary = $56,286/year, overhead = 2.40)
Matt
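As a quick sanity check of those Basic COCOMO figures: 2.4 x 58.068^1.05 ~ 170.7 person-months of effort, and 2.5 x 170.74^0.38 ~ 17.6 months of schedule, matching the output above.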

On Sat, 18 Jan 2025 at 15:03, Ivan Matek <libbooze@gmail.com> wrote:
On Thu, Jan 16, 2025 at 2:07 PM Matt Borland via Boost <boost@lists.boost.org> wrote:
The only update I found was https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3407.html. I emailed Dietmar a while back to ask why it was never accepted, but I did not receive a response.
I found this SO answer, quite informative:
I happened to be at the meeting where IBM initially proposed the decimal types to WG14 and WG21. Their initial proposal was to provide them as native types, which is pretty much the only solution in C. However, WG21 wasn't entirely convinced and pointed out that C++ already has std::complex<> as a mathematical type in the library, so why not std::decimal<>? Initial confusion about the performance overhead was quickly ended when it was pointed out that std::decimal could obviously wrap a _Decimal compiler extension.
After pointing out that this could be done in a library, the next question was then whether this should be in the Standard library. It's after all a specialized domain in which this is useful. The most commonly thought-of domain, finance, doesn't actually need it (they really need decimal fixed-point, not decimal floating point).
Actually, MySQL DECIMAL is fixed-point, not floating-point [1]. You can select the precision using DECIMAL(p, s), where p is the precision (between 1 and 65) and s is the scale (number of decimals). I'm assuming that, as long as p <= the precision of the corresponding Boost.Decimal type, things will work fine. For instance, decimal32 would be interoperable with DECIMAL(7) and anything less precise, and so on. Is my assumption correct? Are there any caveats I should be aware of? Thanks, Ruben. [1] https://dev.mysql.com/doc/refman/8.4/en/precision-math-decimal-characteristi...

Actually, MySQL DECIMAL is fixed-point, not floating-point [1]. You can select the precision using DECIMAL(p, s), where p is the precision (between 1 and 65) and s is the scale (number of decimals). I'm assuming that, as long as p <= the precision of the corresponding Boost.Decimal type, things will work fine. For instance, decimal32 would be interoperable with DECIMAL(7) and anything less precise, and so on.
Is my assumption correct? Are there any caveats I should be aware of?
Using the example from the webpage DECIMAL(5,2) gives us the range -999.99 to 999.99, but decimal will store it as -9.9999e+02 to 9.9999e+02. As long as you limit yourself to chars_format::fixed on any potential output to the user you should be fine. One of the original users wanted to add bitcoin to their trading platform but the smallest divisible unit of a bitcoin, a satoshi = 1/100 million of a bitcoin, exceeded the precision of their in-house fixed point system so they switched to using decimal64. Matt
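For illustration, producing that fixed output might look like this (a sketch; it assumes the library's charconv-style to_chars accepts a precision argument the way std::to_chars does for binary floats):

#include <boost/decimal.hpp>

using namespace boost::decimal;

void print_price()
{
    decimal32 price {12345, -2}; // exactly 123.45: coefficient 12345, exponent -2

    char buf[32] {};
    // Fixed notation with two digits after the point, matching DECIMAL(p,2).
    auto r = to_chars(buf, buf + sizeof(buf) - 1, price, chars_format::fixed, 2);
    (void)r; // on success buf holds "123.45"; r.ec should be checked in real code
}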

On 18.01.25 17:44, Matt Borland via Boost wrote:
Using the example from the webpage DECIMAL(5,2) gives us the range -999.99 to 999.99, but decimal will store it as -9.9999e+02 to 9.9999e+02. As long as you limit yourself to chars_format::fixed on any potential output to the user you should be fine. One of the original users wanted to add bitcoin to their trading platform but the smallest divisible unit of a bitcoin, a satoshi = 1/100 million of a bitcoin, exceeded the precision of their in-house fixed point system so they switched to using decimal64.
Using floating point when you mean fixed point is a really bad practice. You are going to end up with fractional satoshis, and you won't even see them because you are using chars_format::fixed. All you'll see is that 0.000000001 + 0.000000001 is sometimes mysteriously 0.000000003 instead of 0.000000002. You might as well use binary instead of decimal floating point at that point. -- Rainer Deyke - rainerd@eldwood.com
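A small sketch of the effect Rainer describes (illustrative; it assumes construction from coefficient and exponent and round-to-nearest arithmetic at decimal64's 16 digits):

#include <boost/decimal.hpp>

using namespace boost::decimal;

void fractional_satoshi()
{
    decimal64 satoshi {1, -8};    // 0.00000001 BTC
    decimal64 fee = satoshi / 3;  // ~3.333333333333333e-09: a fractional satoshi

    // Under chars_format::fixed with 8 decimals, fee prints as 0.00000000,
    // and fee + fee + fee lands just below one satoshi - the residue is
    // real but invisible in the fixed output.
}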

On 18 Jan 2025, at 16:03, Ivan Matek via Boost <boost@lists.boost.org> wrote:
The most commonly thought-of domain, finance, doesn't actually need it (they really need decimal fixed-point, not decimal floating point).
It's an interesting point, and I was interested to see if the authors would make some strong case that decimal floating-point is needed in finance. Obviously, as you mention, if financial calculations are supposed to be rounded to cents at every stage, one could write the entire software operating with integers (cents), and only convert to dollars and cents at the report printing stage. Similarly, in the bitcoin domain, AFAIK, transfers and accounting are supposed to be done not in floating point but in integer satoshi.
But should it be done this way or in floating decimals? Is it possible to emulate decimal fixed-point using decimal floating-point? The user guide is silent on any of these questions, so I have not been able to make up my mind whether this library is needed in finance.
For general mathematical use, and any sort of scientific calculations, I am now 100% convinced decimals must not be used.
The only other conceivable use is to write spreadsheet and/or calculator software, but again there is nothing in the docs to inform us how this is supposed to be accomplished.
Cheers, Kostas

On Sun, Jan 19, 2025 at 9:15 PM Kostas Savvidis via Boost <boost@lists.boost.org> wrote:
For general mathematical use, and any sort of scientific calculations, I am now 100% convinced decimals must not be used.
The only other conceivable use, to write a spreadsheet and/or calculator software, but again there is nothing in the docs to inform us how is this supposed to be accomplished.
I presume you consider Machine Learning scientific computation, and they use fp8. :) It all depends on what you need. Decimal is nice because humans use base 10. That does not mean we can represent all numbers with it; besides obvious ones like 𝜋 or ℯ, another example is 1/7 = 0.142857142857... (note the repeating period). I think this is a nice short recap: https://www.reddit.com/r/swift/comments/qpi486/comment/hjtxkv7/
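To see that in code, a small sketch (assuming the coefficient/exponent constructor and the stream output used elsewhere in this thread):

    #include <boost/decimal.hpp>
    #include <iostream>

    int main()
    {
        using namespace boost::decimal;

        decimal32 one{1, 0};
        decimal32 seven{7, 0};

        // 1/7 has no finite decimal expansion, so the quotient is rounded
        // to decimal32's 7 significant digits: 0.1428571
        std::cout << one / seven << '\n';
    }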

It's an interesting point, and I was interested to see if the authors would make some strong case that decimal floating-point is needed in finance. Obviously, as you mention, if financial calculations are supposed to be rounded to cents at every stage, one could write the entire software operating with integers (cents), only converting to dollars and cents at the report printing stage.
Similarly, in the bitcoin domain, AFAIK, transfers and accounting are supposed to be done not in floating point but in integer satoshi.
But should it be done this way or in floating decimals? Is it possible to emulate decimal fixed-point using decimal floating-point?
In Ruben's MySQL case, which was answered previously, I don't see any reason we can't replicate fixed-point. The general case would obviously be different, but for restricted subsets it's fine. We also have the rescale function you brought up in your pre-review comments.
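As an illustrative sketch (not taken from the library docs; it assumes the library's cmath-style round() overload for decimal types), DECIMAL(p, 2)-style semantics can be emulated by rounding every intermediate result back to two fractional digits:

    #include <boost/decimal.hpp>
    #include <iostream>

    using namespace boost::decimal;

    // Round to two fractional digits: scale up, round to an integer, scale back.
    decimal64 to_cents(decimal64 x)
    {
        const decimal64 hundred{100, 0};
        return round(x * hundred) / hundred;
    }

    int main()
    {
        decimal64 price{1999, -2}; // 19.99
        decimal64 tax{107, -2};    // 7% tax -> multiplier 1.07
        std::cout << to_cents(price * tax) << '\n'; // 21.3893 rounds to 21.39
    }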
The user guide is silent on any of these questions, so I have not been able to make up my mind whether this library is needed in finance or not.
For general mathematical use, and any sort of scientific calculations, I am now 100% convinced decimals must not be used.
The only other conceivable use is to write spreadsheet and/or calculator software, but again there is nothing in the docs to inform us how this is supposed to be accomplished.
There are actually a few examples of financial calculations discussed in the docs, since you asked for this in pre-review. This one even shows you how to parse and use CSV data, since that's tied to calculations with your spreadsheets: https://github.com/cppalliance/decimal/blob/develop/examples/moving_average.... and this one shows you how to parse the CSV data and then leverage our designed integration with Boost.Math: https://github.com/cppalliance/decimal/blob/develop/examples/statistics.cpp Matt

On 20 Jan 2025, at 16:41, Matt Borland <matt@mattborland.com> wrote:
There are actually a few examples of financial calculations discussed in the docs, since you asked for this in pre-review. This one even shows you how to parse and use CSV data, since that's tied to calculations with your spreadsheets: https://github.com/cppalliance/decimal/blob/develop/examples/moving_average.... and this one shows you how to parse the CSV data and then leverage our designed integration with Boost.Math: https://github.com/cppalliance/decimal/blob/develop/examples/statistics.cpp
Thanks, the examples indeed show how to do it. Is it a good thing or a bad thing that the result of those calculations is expected to be exactly the same as with doubles (within machine epsilon)? It's a good thing, but it leaves the question "when should we use decimal?" unanswered. I became interested in this subject when I was working with some stock exchange data and saw numbers like 32.1100006103 in the CSV. Obviously the price of the stock was 32.11. How is that even possible? Doubles and floats do not represent this number exactly, but if you do double s=32.11; std::cout << s; you will get 32.11 back. To get back e.g. 32.11000061 you have to do float s=32.11; std::cout << std::setprecision(8) << std::fixed << s << "\n"; The fact that the exchange and/or data provider somehow did muck this up may or may not mean this could have been avoided by using decimal. The mistake is: printing with more digits than machine epsilon. I am more puzzled than ever about what this library is for. Cheers, Kostas

> Is it a good thing or a bad thing that the result of those calculations is expected to be exactly the same as with doubles? (within machine epsilon)
That is a great question. In addition to good/bad, I would include subtle. It is a subtle detail. Numbers like 1/10 will be exact in the decimal system, but slightly non-exact in a binary representation. It is hard to believe, but this can lead to incorrect results when doing decimal calculations with binary floating-point types. [snip]
> about what this library is for.
It is for doing exact decimal calculations, if you want to do them. - Chris
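To illustrate the 1/10 point, a minimal sketch (again assuming the coefficient/exponent constructor and the comparison operators):

    #include <boost/decimal.hpp>
    #include <iostream>

    int main()
    {
        using namespace boost::decimal;

        decimal64 a{1, -1}; // 0.1, represented exactly
        decimal64 b{2, -1}; // 0.2
        decimal64 c{3, -1}; // 0.3

        std::cout << std::boolalpha
                  << (a + b == c) << '\n'        // true: exact in decimal
                  << (0.1 + 0.2 == 0.3) << '\n'; // false: inexact in binary
    }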

On 31 Jan 2025, at 19:51, Christopher Kormanyos <e_float@yahoo.com> wrote:
It is for doing exact decimal calculations, if you want to do them.
This we know cannot be done. Floating point calculations with exact decimal fractions give non-exact results when you hit the limits of machine precision, as has been discussed already in this thread by other people. For reference, near the limits of machine precision decimal64 is worse than double, and here is just one example; it is a very general phenomenon: decimal64 d = 12345678900123456; cout << std::setprecision(16) << std::fixed << d << "\n"; double s = 12345678900123456; cout << std::setprecision(16) << std::fixed << s << "\n"; 12345678900123460.0000000000000000 12345678900123456.0000000000000000 Cheers, Kostas

>> It is for doing exact decimal calculations, if you want to do them.
> This we know cannot be done. Floating point calculations with exact decimal fractions give non-exact results when you hit the limits of machine precision, as has been discussed already in this thread by other people.
> For reference, near the limits of machine precision decimal64 is worse than double, and here is just one example; it is a very general phenomenon:
I'm not sure, but I think you might be thinking too wide. At the extreme portion of the precision, binary and decimal have similar (I guess) exactness. This whole thing is for calculations with things like 10, 100, or 1/10, or even 1/100-million, which seems to be that new crypto currency cent. It is these kinds of things that get better within the middle of the range. If you don't like Decimal, then go tell them to take it out of IEEE-754:2008, and also get them to remove _Decimal from C23, please. When you get these two things done, we can haggle about inclusion (or not). - Chris

Ruben Perez wrote:
On Wed, 15 Jan 2025 at 10:34, John Maddock via Boost <boost@lists.boost.org> wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
You will find documentation here: https://cppalliance.org/decimal/decimal.html
And the code repository is here: https://github.com/cppalliance/decimal/
Boost.Decimal is an implementation of IEEE 754 <https://standards.ieee.org/ieee/754/6210/> and ISO/IEC DTR 24733 <https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf> Decimal Floating Point numbers. The library is header-only, has no dependencies, and requires C++14.
What is the status of the DTR? It looks to be dated from 2009. Is it realistic to think it will get into std at some point?
It's a real TR: https://www.iso.org/standard/38843.html The draft just doesn't cost CHF 199 to read.

On Wed, 15 Jan 2025, John Maddock via Boost wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
You will find documentation here: https://cppalliance.org/decimal/decimal.html
There exists hardware that supports decimal numbers, but I don't see that discussed anywhere (how does the boost type interact with the native one?). Did I look for the wrong keywords? -- Marc Glisse

On Wed, 15 Jan 2025, John Maddock via Boost wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
You will find documentation here: https://cppalliance.org/decimal/decimal.html
There exists hardware that supports decimal numbers, but I don't see that discussed anywhere (how does the boost type interact with the native one?). Did I look for the wrong keywords?
-- Marc Glisse
There is no interoperability with hardware types. Machines with a decimal floating point unit are rare, and we have no access to one in order to make interoperability work. Matt

On Thu, 16 Jan 2025, Matt Borland wrote:
Machines with a decimal floating point unit are rare, and we have no access to one in order to make interoperability work.
I thought it was present on POWER systems (wikipedia says so), which you can freely access on https://portal.cfarm.net/ for open source development? -- Marc Glisse

On Thu, Jan 16, 2025 at 12:10, Marc Glisse <marc.glisse@inria.fr> wrote:
I thought it was present on POWER systems (wikipedia says so), which you can freely access on https://portal.cfarm.net/ for open source development?
My understanding is POWER6 and 7 had it and 8+ don't. IBM z9 and z10, and variants of SPARC64 also had support. Unless I'm missing a modern architecture that has support, I don't think investigating interoperability is worthwhile. The RISC-V spec has a section for a decimal floating point unit, but it's just a placeholder. Matt

On Thu, 16 Jan 2025, Matt Borland via Boost wrote:
My understanding is POWER6 and 7 had it and 8+ don’t.
I just tested on cfarm120 (POWER10), and I can compile and run programs using _Decimal64 just fine (and it generates instructions like "ddiv", not calls to library emulation). All the documentation I can find says that this processor has both QPFP (__float128) and DFP (_Decimal*). -- Marc Glisse
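A minimal way to reproduce that check from C++ (illustrative sketch; it assumes libstdc++'s <decimal/decimal> extension header, which implements ISO/IEC TR 24733 on top of the native types):

    #include <decimal/decimal> // GCC extension implementing ISO/IEC TR 24733

    // Compile with: g++ -O2 -S test.cpp, then look for "ddiv" in test.s
    std::decimal::decimal64 quotient(std::decimal::decimal64 a, std::decimal::decimal64 b)
    {
        // Hardware DFP targets (e.g. POWER10) should emit ddiv here;
        // others fall back to a software routine.
        return a / b;
    }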

I just tested on cfarm120 (POWER10), and I can compile and run programs using _Decimal64 just fine (and it generates instructions like "ddiv", not calls to library emulation). All the documentation I can find says that this processor has both QPFP (__float128) and DFP (_Decimal*).
-- Marc Glisse
Interesting, I will have to look into that. We have encoding conversion functions which could be the foundation for interoperability. Matt

Has any thought been given to making decimal32 and its brethren "safe" in the sense that Boost.Safe Numerics types are "safe"? Seems to me a worthy idea given the types of applications that decimal32 is meant to support. Robert Ramey

Has any thought been given to making decimal32 and its brethren "safe" in the sense that Boost.Safe Numerics types are "safe"? Seems to me a worthy idea given the types of applications that decimal32 is meant to support.
We have not looked into a safe version, but we follow the normal IEEE 754 convention where overflow, underflow, division by 0, etc. result in +/-INF and NAN. I think (but have not tried) that the decimal types fulfill all of the Safe Numerics conceptual requirements for Numeric<T>, and could then be used with Safe Numerics if that safety is desired. Matt
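To see those conventions in code, a minimal sketch (assuming the cmath-style isinf/isnan overloads that accompany the fpclassify/issubnormal family mentioned later in this thread):

    #include <boost/decimal.hpp>
    #include <iostream>

    int main()
    {
        using namespace boost::decimal;

        decimal32 one{1, 0};
        decimal32 zero{0, 0};

        std::cout << std::boolalpha
                  << isinf(one / zero) << ' '    // division by zero -> +INF
                  << isnan(zero / zero) << '\n'; // 0/0 -> NAN
    }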

Matt Borland wrote:
Has any thought been given to making decimal32 and its brethren "safe" in the sense that Boost.Safe Numerics types are "safe"? Seems to me a worthy idea given the types of applications that decimal32 is meant to support.
We have not looked into a safe version, but we follow the normal IEEE 754 convention where overflow, underflow, division by 0, etc. result in +/-INF and NAN. I think (but have not tried) that the decimal types fulfill all of the Safe Numerics conceptual requirements for Numeric<T>, and could then be used with Safe Numerics if that safety is desired.
IEEE floating point doesn't have undefined behavior, so it's always "safe" in the "no undefined behavior" sense. (Assuming that floating point exceptions are disabled. When they are enabled, it's not so clear.)

IEEE floating point doesn't have undefined behavior, so it's always "safe" in the "no undefined behavior" sense. (Assuming that floating point exceptions are disabled. When they are enabled, it's not so clear.)
In that sense there's little to be done here, too. We'll never set anything that can be queried by std::fetestexcept (since internally it's all integer maths), and I don't think we gain anything by calling std::feraiseexcept with the right arguments, because we'd have to add a bunch of workarounds to keep constexpr. More importantly, there's not a lot of support, and even fewer people who care. Matt

On 1/16/25 10:25 AM, Peter Dimov via Boost wrote:
IEEE floating point doesn't have undefined behavior, so it's always "safe" in the "no undefined behavior" sense. (Assuming that floating point exceptions are disabled. When they are enabled, it's not so clear.)
For better or worse, Safe Numerics goes beyond undefined behavior: it traps behavior which fails to replicate the arithmetical behavior of the underlying type. This is meant to guarantee that we always get a valid arithmetic result or a trap. In any case, knowing that decimal32 et al. fulfill the requirements for Numeric<T> should be good enough. If it's not, that's something that should be addressed within Safe Numerics. On a related note, the interval arithmetic library should also work with decimal32. I don't know how much this library is used or even whether it's maintained, but I would be curious if it's usable with decimal32. I had looked at the library, as the implementation of Safe Numerics requires this functionality, but I found it unsuitable for non-floating types, i.e. integer types.

Hi, Some problems I've found: The decimal32 doc page states that the minimum normalized value is 1e-95. The std::numeric_limits<>::min() function agrees with that. However, fpclassify and similar functions don't seem to agree: // Both of the following print "1" std::cout << issubnormal(std::numeric_limits<boost::decimal::decimal32>::min()) << std::endl; std::cout << issubnormal(1.0e-95_df) << std::endl; Printing very small values seems to do nothing: // This prints an empty line auto v = 1.0e-94_df; std::cout << v << std::endl; Calling to_chars with subnormal values fails with std::errc::not_supported, regardless of the chars_format used: // This prints: Error: generic:95 void f(decimal32 v) { char buff[64]{}; auto r = boost::decimal::to_chars(buff, buff + sizeof(buff), v); std::cout << "Error: " << std::make_error_code(r.ec) << std::endl; } int main() { auto v = 1e-95_df; f(v); } Am I doing something stupid? Regards, Ruben.

On Sun, Jan 19, 2025 at 08:04, Ruben Perez <rubenperez038@gmail.com> wrote:
> Am I doing something stupid?
Nope. See issue 794 and PR 796. Matt

Hi all, This is my review of the proposed Boost.Decimal. First of all, thanks to Matt and Chris for proposing the library and answering my questions, and to John for managing the review.
- What is your evaluation of the design?
Good and clean in general. I like that it follows STL structure as much as it can, making it predictable. Some comments:
1. Docs state that only the convenience header (boost/decimal.hpp) should be used, as opposed to individual headers (e.g. boost/decimal/charconv.hpp). I think this might not be best. It goes against the "don't pay for what you don't use" principle (in terms of compile times), and doesn't follow the established Boost practice. Including boost/decimal.hpp takes around 4.5s, while single headers like boost/decimal/decimal32.hpp take just 1.5s. Boost has been criticized for its compile times a lot, and I think we shouldn't be indulgent with this. Also, tools like clangd tend to follow include-what-you-use schemes, so fixing this helps them. My proposals around this issue are:
1.a. Encourage the use of individual headers in the docs.
1.b. Move all public functionality to public headers. For example, this means moving io.hpp from boost/decimal/detail to boost/decimal.
1.c. Make individual headers standalone. This means that the code for suppressing warnings in boost/decimal.hpp should be moved to a separate header that is included in all the public headers. See how Boost.Asio does it for an example.
1.d. Test that individual headers work. For instance, including boost/decimal/charconv.hpp errors out on my machine. Boost.Beast does this, for instance.
2. For each decimal size, there are two types: decimal32 is the storage-optimized one, and decimal32_fast is the operation-optimized one. Like Peter, I'm not convinced that making the storage-optimized one the default is adequate. I understand that this is what the TR states, though. I'd advise considering Peter's suggestions [1] [2]. I don't have enough field experience to know whether this is sufficiently relevant, though. In any case, the documentation should state when to use the standard ones vs. the fast ones.
3. I found conversions between decimal types unintuitive. I'd expect conversions that don't imply loss of precision to be implicit. For instance, I think that these should be implicit:
* decimal32 to {decimal32_fast, decimal64, decimal64_fast, decimal128, decimal128_fast}.
* decimal64 to {decimal64_fast, decimal128, decimal128_fast}.
* decimal128 to decimal128_fast.
* (Same for the fast types).
At the moment, all decimal <=> decimal conversions are explicit, regardless of whether they imply loss of precision or not. I'd consider re-thinking this.
4. I was surprised to find support for an inherently insecure function like sprintf. sprintf(char* buffer, const char* format, T... values) writes characters to buffer but doesn't accept a buffer length, making it prone to buffer overflow attacks. The current implementation looks to be incorrect, as it calls snprintf(buffer, sizeof(buffer), ...), where buffer is a pointer, effectively assuming the buffer size is always sizeof(void*). No tests cover this and no-one has reported it, making me think that no-one might be using it. I'd advise removing sprintf.
5. I'm also not convinced about the value added by the other cstdio functions, namely printf, fprintf and snprintf. They only support decimal arguments. This means that printf("Value for id=%s is %H", id, value) is an error if id is a const char*.
Integers work because they are cast to decimal types, which I found surprising, too. Given an int i, printf("Value for %x is %H", i, value) works but doesn't apply the hex specifier. The integer is promoted to a decimal type, which is then printed. I'd have expected the latter to be a compile-time error. I'd say that this will be the case if you replace std::common_type by std::conjunction as Peter suggested.
If you're not going to support all standard type specifiers, I'd think of exposing a std::format-like function that works in C++14. Something like decimal::format_decimal and decimal::print_decimal. This is just a suggestion, though. I understand that, if this is pushed to the standard, having these functions might be valuable.
6. Literals should be in a separate namespace, namely boost::decimal::literals or similar, and not in the boost::decimal namespace.
7. Are concept names supposed to be public? If they are, please document them as regular entities. If they are not, please move them into the detail namespace.
8. The interface exposes functions taking and returning detail::int128 objects. I don't think this should happen. If a type appears in a function signature, it should be public. The documentation actually lists uint128 with its members, which makes me think that the type is in fact public, and it should be placed in the boost::decimal namespace.
9. Is boost::decimal::to_string public? It's handy, and in the public namespace, but in a detail/ header and not documented. I'd advise making it public by moving the header to boost/decimal/ and documenting it.
- What is your evaluation of the implementation?
I'm not knowledgeable enough about decimal floating point math to evaluate the mathematical correctness of the implementation. My comments here address general C++ aspects of the implementation. In general, I've been able to follow it easily. It's well written. There are some points that should likely be addressed before the library goes into Boost, though:
* The significand_type, exponent_type, and biased_exponent_type typedefs in the decimal types are public but undocumented. If they're private, they should be placed in private scope. Otherwise, they should be documented.
* Code shipped to the user (i.e. anything in include/) should not contain functions or definitions that are exclusive to the tests. The following should be moved:
  * BOOST_DECIMAL_REDUCE_TEST_DEPTH (include/boost/decimal/detail/config.hpp)
  * debug_pattern (include/boost/decimal32.hpp)
  * bit_string (include/boost/decimal128.hpp)
  * operator<< for detail::uint128_t and detail::uint128
* The macro BOOST_DECIMAL_DISABLE_IOSTREAM seems dubious. Most of the bits it disables seem to be either test-exclusive functions (which shouldn't be there) or leftover includes (as there is no other matching #ifdef in the file). I think it can be removed with a bit of refactoring. On the other hand, BOOST_DECIMAL_DISABLE_CLIB looks good to me.
* sprintf and fprintf seem to have no tests.
* BOOST_DECIMAL_ALLOW_IMPLICIT_CONVERSIONS affects decimal32, decimal64 and decimal128, but is only tested for decimal64. Also, it doesn't seem to affect the fast types. Is there a reason for it?
* I'd advise avoiding includes in config.hpp. They get propagated everywhere, violating the include-what-you-use principle. I've also found them problematic in my modularization efforts in Boost.Charconv.
* The build.jam file lists the library name as "boost_core" instead of "boost_decimal".
* I'd advise defining BOOST_DECIMAL_CONSTEXPR in config.hpp, rather than in a file containing definitions. This is a pattern that has also caused me trouble in Charconv. Also, BOOST_DECIMAL_CONSTEXPR may be a confusing name, since its semantics don't match BOOST_CONSTEXPR's semantics in Boost.Config.
* BOOST_DECIMAL_FAST_MATH seems to heavily affect many of the mathematical functions, but there is only a single test for it. Are we sure that's enough coverage? Would it make sense to have a full CI build defining BOOST_DECIMAL_FAST_MATH?
* I can see some fuzz testing, which is great. It would be great to extend it to cover other character formatting functions like to_chars or snprintf.
* LCOV_EXCL_START/LCOV_EXCL_STOP should only be used in places where code flow is unreachable. These places should use BOOST_DECIMAL_UNREACHABLE. Other usages distort the code coverage metric. Concretely, [4] is related to [5], which is marked as excluded from coverage but is reachable.
* The charconv functions issue some spurious warnings under gcc [7].
* I found some problems with numeric_limits and subnormal value formatting [4] which had already been reported and are in the process of being addressed.
As per Matt's request, I've also included some comments about the C++20 module support that this library includes:
* If you want to follow the conventions that we're using in the initiative I've been writing, the module name should be "boost.decimal", instead of "boost2.decimal". The module interface file should be named boost_decimal.cppm, instead of decimal.cxx.
* The UINT64_C/UINT32_C macro definitions in config.hpp look unnecessary, and may cause trouble in some platforms/configurations [3]. You shouldn't need to define these; you get access to them by including cstdint in the global module fragment, which you are doing.
* The export namespace std block in the interface unit should be removed. I don't know for sure, but it's likely undefined behavior in theory. Specializations don't need to be exported (although MSVC has several bugs here). You're including the entire library in the purview, so I don't think these bugs affect you.
* You shouldn't need to export the forward declarations, either.
* Have you tried to run quick_test.cpp with import std? I don't think import std is as easy as checking a config macro - you need to enable it at the build system level.
- What is your evaluation of the documentation?
It's a bit terse. The information you need is mainly there, but it could be made better for the newcomer. Concretely, a longer exposition explaining the available types and operations would have helped me when I started. I feel it jumps into the reference section too soon. I'd say it also assumes that the reader has read the decimal TR and the IEEE 754 spec, which may not be the case. I think making the docs more self-contained would be beneficial. A section on when to use decimalXY vs. decimalXY_fast would be very useful, too.
A comparison section to other libraries would also be great. Intel seems to have a similar library (although it's C, and I'm not sure about its license). I've also found libmpdec [8], which powers Python's Decimal [9]. I think its scope is completely different to Boost.Decimal's, though (fixed-point arithmetic only and arbitrary precision).
A section on "what if I just need fixed-point arithmetic?" would also be great. From my own experience and the comments I've seen in this review, this seems to be a common use case.
I understood that the use case is supported, as long as the longest precision you need is <= the max precision your decimal type supports (e.g. 7 digits for decimal32). A small section stating this would suffice.
The reference lists all the functions with proper links to the equivalent standard library functions, which is helpful. However, I'd prefer these links to be just informative, with the reference containing a proper description, preconditions, exception specification, and so on. As an example, decimal::to_chars with chars_format::fixed changed behavior with one of the fixes that was merged during the review: 9.1_df would be formatted as "9.1000000" before the fix, and as "9.1" after it. Both things made sense to me. It would have been great if the reference explained this, for instance.
The documentation is a single page, which I always find hard to navigate.
Some minor points:
* All code snippets have extra indentation in their first line, making them look strange.
* The "Note" and "Important" admonitions look incorrectly formatted. It looks like these should have contained icons, and a fallback piece of text is being displayed.
* There's a typo in basics.adoc ("souce" instead of "source") that prevents the second code snippet in the docs from being syntax highlighted.
* Concepts are not listed in the reference.
* In the synopsis of decimal32 and siblings, members are not indented.
* It would be nice to make clearer that BID is "fast but more space" and DPD is "slower but compact" here: https://cppalliance.org/decimal/decimal.html#conversions
* The numeric_limits synopsis includes an #ifdef _MSC_VER that is really an implementation detail. I don't think that MSVC declaring numeric_limits as a class is relevant to the user anyhow.
* It would be great if to_chars listed the possible errors under which the function fails, and the error codes it produces.
* In the documentation, numbers like e_v are in the boost::decimal namespace, which doesn't match the actual code (boost::decimal::numbers namespace). I think the latter is correct.
* In the documentation, numbers are defined as static constexpr, not matching the code. They should be listed as "inline constexpr".
* The scroll bar is located in the center of the screen, rather than on the right side.
- What is your evaluation of the potential usefulness of the library? Do you already use it in industry?
I think it's useful. According to the author, it's already in use in some financial companies, which is great. SQL databases also have DECIMAL types. I think that having it in Boost adds value.
- Did you try to use the library? With which compiler(s)? Did you have any problems?
I've used it to implement support for the DECIMAL type in Boost.MySQL. MySQL's DECIMAL(p, s) type [10] is a fixed-point decimal with a configurable precision p. p ranges from 1 to 65 digits. s is the decimal scale, with 0 <= s <= p. That is, DECIMAL(5, 2) stores values like 134.15. I've added support for using decimal32, decimal64 and decimal128 (and their fast counterparts) in the following contexts:
* In the static interface, so you can define row types like "struct employee { decimal::decimal32 salary; };". MySQL sends the type's precision before the rows, so I check it before incurring rounding errors (using a decimal32 with a DECIMAL(10) is always an error).
* In SQL formatting, so you can issue queries containing decimals like this: conn.execute(mysql::with_params("SELECT * FROM employee WHERE salary > {}", salary), result);
PR is here: https://github.com/boostorg/mysql/pull/399
This exercises the charconv API and some utility functions. I've tested it in my CIs, covering gcc >= 5, clang >= 3.6, MSVC >= 14.1. Aside from some problems in older compilers (not listed as supported in the docs), things have worked fine. I've encountered a couple of bugs that I reported and have been promptly fixed.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I've spent most of this week reading the documentation, the DTR and the public includes, building the MySQL prototype, asking questions, reporting potential problems and writing this review. I've spent around 30h in the process.
- Are you knowledgeable about the problem domain?
I'm no expert in decimal number arithmetic - just a user. I know nothing about their internals.
- Affiliation disclosure
I'm currently affiliated with the C++ Alliance, as is Matt Borland.
Ensure to explicitly include with your review: ACCEPT, REJECT, or CONDITIONAL ACCEPT (with acceptance conditions).
My recommendation is to CONDITIONALLY ACCEPT the proposed library into Boost, under the following conditions:
* Individual include files (e.g. boost/decimal/charconv.hpp) are supported, tested, and their usage encouraged.
* sprintf should be fixed or, better, removed.
I recommend applying the rest of the feedback, but that's up to the authors. My general feeling about the library is good. A lot of effort from the authors has gone into this. The library covers real use cases and is used as of today. I think Boost is better with this library than without it. Many thanks Matt, Chris and John for your work. Regards, Ruben.
[1] https://lists.boost.org/Archives/boost/2025/01/259012.php
[2] https://lists.boost.org/Archives/boost/2025/01/259021.php
[3] https://stackoverflow.com/questions/52490211/purpose-of-using-uint64-c
[4] https://github.com/cppalliance/decimal/issues/794
[5] https://github.com/cppalliance/decimal/blob/394daf2624f9a001de6cf8c392e42398... (not guaranteed to trigger)
[6] https://github.com/cppalliance/decimal/blob/394daf2624f9a001de6cf8c392e42398...
[7] https://github.com/cppalliance/decimal/issues/801
[8] https://www.bytereef.org/mpdecimal/doc/libmpdec/index.html
[9] https://docs.python.org/3/library/decimal.html
[10] https://dev.mysql.com/doc/refman/8.4/en/precision-math-decimal-characteristi...

Ruben Perez wrote:
3. I found conversions between decimal types unintuitive. I'd expect conversions that don't imply loss of precision to be implicit. For instance, I think that these should be implicit:
* decimal32 to {decimal32_fast, decimal64, decimal64_fast, decimal128, decimal128_fast}. * decimal64 to {decimal64_fast, decimal128, decimal128_fast}. * decimal128 to decimal128_fast. * (Same for the fast types).
That's what the TR specifies, and it was what the documentation said when I looked at it last time, although it's been changed now. I, too, would expect this behavior. Or, if there are reasons to not do it, maybe the authors can explain what they are.
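For illustration, the conversion rules being requested would look like this in user code (a sketch of the proposed behavior, not what the library currently compiles; today every decimal-to-decimal conversion is explicit):

    #include <boost/decimal.hpp>

    int main()
    {
        boost::decimal::decimal32 small{314, -2}; // 3.14

        // Proposed: widening is lossless, so it would be implicit.
        // boost::decimal::decimal64 wide = small;

        // Current behavior: every decimal <-> decimal conversion is explicit.
        auto wide = static_cast<boost::decimal::decimal64>(small);

        // Narrowing can lose precision, so it would remain explicit either way.
        auto back = static_cast<boost::decimal::decimal32>(wide);
        (void)back;
    }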

Hi all,
This is my review of the proposed Boost.Decimal. First of all, thanks to Matt and Chris for proposing the library and answering my questions, and to John for managing the review.
- What is your evaluation of the design?
Good and clean in general. I like that it follows STL structure as much as it can, making it predictable, in general. Some comments:
1. Docs state that only the convenience header (boost/decimal.hpp) should be used, as opposed to individual headers (e.g. boost/decimal/charconv.hpp). I think this might not be best. It goes against the "don't pay for what you don't use" principle (in terms of compile times), and doesn't follow the established Boost practice. Including boost/decimal.hpp takes around 4.5s, while single headers like boost/decimal/decimal32.hpp take just 1.5s. Boost has been criticized for its compile times a lot, and I think we shouldn't be indulgent with this. Also, tools like clangd tend to follow include-what-you-use schemes, so fixing this helps them. My proposals around this issue are: 1.a. Encourage the use of individual headers in the docs. 1.b. Move all public functionality to public headers. For example, this means moving io.hpp from boost/decimal/detail to boost/decimal. 1.c. Make individual headers standalone. This means that the code for suppressing warnings in boost/decimal.hpp should be moved to a separate header that is included in all the public headers. See how Boost.Asio does it for an example. 1.d. Test that individual headers work. For instance, including boost/decimal/charconv.hpp errors out on my machine. Boost.Beast does this, for instance.
2. For each decimal size, there are two types: decimal32 is the storage-optimized one, and decimal32_fast is the operation-optimized one. Like Peter, I'm not convinced that making the storage-optimized one the default is adequate. I understand that this is what the TR states, though. I'd advise considering Peter's suggestions [1] [2]. I don't have enough field experience to know whether this is sufficiently relevant, though. In any case, the documentation should state when to use the standard ones vs. the fast ones.
In this case it is the naming scheme provided by IEEE-754, which I think we would be remiss to ignore. I think the suggestion from Peter (and you, below in bullet 3) to add implicit lossless conversions is good. <stdfloat> has far stricter rules than the builtin floating point types: it allows implicit promotion but errors on shortening.
3. I found conversions between decimal types unintuitive. I'd expect conversions that don't imply loss of precision to be implicit. For instance, I think that these should be implicit:
* decimal32 to {decimal32_fast, decimal64, decimal64_fast, decimal128, decimal128_fast}. * decimal64 to {decimal64_fast, decimal128, decimal128_fast}. * decimal128 to decimal128_fast. * (Same for the fast types).
At the moment, all decimal <=> decimal conversions are explicit,
regardless of whether they imply loss of precision or not. I'd consider re-thinking this.
4. I was surprised to find support for an inherently insecure function like sprintf. sprintf(char* buffer, const char* format, T... values) writes characters to buffer but doesn't accept a buffer length, making it prone to buffer overflow attacks. The current implementation looks to be incorrect, as it calls snprintf(buffer, sizeof(buffer), ...), where buffer is a pointer, effectively assuming the buffer size is always sizeof(void*). No tests cover this and no-one has reported this, making me think that no-one might be using it. I'd advise removing sprintf.
Dropping sprintf should be fine in 2025. I concur that I don't think anyones using it.
5. I'm also not convinced about the value added by the other cstdio functions, namely printf, fprintf and snprintf. They only support decimal arguments. This means that printf("Value for id=%s is %H", id, value) is an error if id is a const char*. Integers work because they are cast to decimal types, which I found surprising, too. Given an int i, printf("Value for %x is %H", i, value) works but doesn't apply the hex specifier. The integer is promoted to a decimal type, which is then printed. I'd have expected the latter to be a compile-time error. I'd say that this will be the case if you replace std::common_type by std::conjunction as Peter suggested.
If you're not going to support all standard type specifiers, I'd think of exposing a std::format-like function that works in C++14. Something like decimal::format_decimal and decimal::print_decimal. This is just a suggestion, though. I understand that, if this is pushed to the standard, having these functions might be valuable.
The implementations of those functions were more in the realm of a reference for how you would add the functionality to the existing STL should they be standardized. Making a complete working reimplementation of all those clib functions would be a huge undertaking for little benefit. Since <charconv> is available at C++14, doesn't that generally cover what format_decimal and print_decimal would do? I specifically avoided using the STL here because requiring C++17 for a two-member struct and a four-member enum seemed silly.
6. Literals should be in a separate namespace, namely boost::decimal::literals or similar, and not in the boost::decimal namespace.
Agreed, and there's an open issue for that.
7. Are concept names supposed to be public? If they are, please document them as regular entities. If they are not, please move them into the detail namespace.
8. The interface exposes functions taking and returning detail::int128 objects. I don't think this should happen. If a type appears in a function signature, it should be public. The documentation actually lists uint128 with its members, so this makes me think that the type is in fact public, and should be placed in the boost::decimal namespace.
That's fair.
9. Is boost::decimal::to_string public? It's handy, and in the public namespace, but in a detail/ header and not documented. I'd advise to make it public by moving the header to boost/decimal/ and documenting it.
I'll move it. I think it goes along with your printing point, and is generally something I would expect to be provided for a numeric type.
- What is your evaluation of the implementation?
I'm not knowledgeable of decimal floating point math as to evaluate the mathematical correctness of the implementation. My comments here address general C++ aspects of the implementation.
In general, I've been able to follow it easily. It's well written. There are some points that should likely be addressed before the library goes into Boost, though:
* The significand_type, exponent_type, and biased_exponent_type typedefs in the decimal types are public but undocumented. If they're private, they should be placed in private scope. Otherwise, they should be documented. * Code shipped to the user (i.e. anything in include/) should not contain functions or definitions that are exclusive to the tests. The following should be moved: * BOOST_DECIMAL_REDUCE_TEST_DEPTH (include/boost/decimal/detail/config.hpp) * debug_pattern (include/boost/decimal32.hpp) * bit_string (include/boost/decimal128.hpp) * operator<< for detail::uint128_t and detail::uint128 * The macro BOOST_DECIMAL_DISABLE_IOSTREAM seems dubious. Most of the bits it disables seem to be either test-exclusive functions (which shouldn't be there) or leftover includes (as there is no other matching #ifdef in the file). I think it can be removed with a bit of refactoring. On the other hand, BOOST_DECIMAL_DISABLE_CLIB looks good to me. * sprintf and fprintf seem to have no tests. * BOOST_DECIMAL_ALLOW_IMPLICIT_CONVERSIONS affects decimal32, decimal64 and decimal128, but is only tested for decimal64. Also, it doesn't seem to affect the fast types. Is there a reason for it? * I'd advise to avoid includes in config.hpp. They get propagated everywhere, violating the include-what-you-use principle. I've also found them problematic in my modularization efforts in Boost.Charconv. * The build.jam file lists the library name as "boost_core" instead of "boost_decimal". * I'd advise to define BOOST_DECIMAL_CONSTEXPR in config.hpp, rather than in a file containing definitions. This is a pattern that has also caused me trouble in charconv. Also, BOOST_DECIMAL_CONSTEXPR may be a confusing name, since its semantics don't match with BOOST_CONSTEXPR semantics in Boost.Config. * BOOST_DECIMAL_FAST_MATH seems to heavily affect many of the mathematical functions, but there is only a single test for it. Are we sure that's enough coverage? Would it make sense to have a full CI build defining BOOST_DECIMAL_FAST_MATH? * I can see some fuzz testing, which is great. It would be great to extend it to cover other character formatting functions like to_chars or snprintf. * LCOV_EXCL_START/LCOV_EXCL_STOP should only be used in places where code flow is unreachable. These places should use BOOST_DECIMAL_UNREACHABLE. Other usages distort the code coverage metric. Concretely, [4] is related to [5], which is marked as excluded from coverage but is reachable. * The charconv functions issue some spurious warnings under gcc [7]. * I found some problems with numeric_limits and subnormal value formatting [4] which had already been reported and are in the process of being addressed.
As per Matt's request, I've also included some comments about the C++20 module support that this library includes:
* If you want to follow the conventions that we're using in the initiative I've been writing, the module name should be "boost.decimal", instead of "boost2.decimal". The module interface file should be named boost_decimal.cppm, instead of decimal.cxx. * The UINT64_C/UINT32_C macro definitions in config.hpp look unnecessary, and may cause trouble in some platforms/configurations [3]. You shouldn't need to define these. You get access to these by including cstdint in the global module fragment, which you are doing.
I didn't think they were needed, but I have run into issues where the compiler threw a million errors about undefined macros.
* The export namespace std block in the interface unit should be removed. I don't know for sure, but it's likely undefined behavior in theory. Specializations don't need to be exported (although MSVC has several bugs here). You're including the entire library in the purview, so I don't think these bugs affect you. * You shouldn't need to export the forward declarations, either.
I'll remove them and see how it goes; at one point they were needed.
* Have you tried to run quick_test.cpp with import std? I don't think import std is as easy as checking a config macro - you need to enable it at the build system level.
- What is your evaluation of the documentation?
It's a bit terse. The information you need is mainly there, but it could be made better for the newcomer.
Concretely, a longer exposition explaining the available types and operations would have helped me when I started. I feel it jumps into the reference section too soon. I'd say it also assumes that the reader has read the decimal TR and the IEEE754 spec, which may not be the case. I think making the docs more self-contained would be beneficial. A section on when to use decimalXY vs. decimalXY_fast would be very useful, too.
A comparison section to other libraries would also be great. Intel seems to have a similar library (although it's C, and not sure about its license). I've also found libmpdec [8], which powers Python's Decimal [9]. I think its scope is completely different to Boost.Decimal, though (fixed-point arithmetic only and arbitrary precision).
That's fair since I already include benchmarks against a different library.
A section on "what if I just need fixed-point arithmetic?" would also be great. From my own experience and the comments I've seen in this review, this seems to be a common use case. I understood that the use case is supported, as long as the longest precision you need is <= the max precision your decimal type supports (e.g. 7 digits for decimal32). A small section stating this would suffice.
The reference lists all the functions with proper links to the equivalent standard library functions, which is helpful. However, I'd prefer these links to be just informative, with the reference containing a proper description, preconditions, exception specification, and so on. As an example, decimal::to_chars with chars_format::fixed changed behavior with one of the fixes that was merged during the review. 9.1_df would be formatted as "9.1000000" before the fix, and as "9.1" after the fix. Both things made sense to me. It'd been great if the reference explained this, for instance.
The documentation is single page, which I always find hard to navigate.
Some minor points:
* All code snippets have extra indentation in their first line, making them look strange. * The "Note" and "Important" admonitions look incorrectly formatted. It looks like these should have contained icons, and a fallback piece of text is being displayed.
I messaged the guys who maintain the stylesheet.
* There's a typo in basics.adoc ("souce" instead of "source") making the second code snippet in the docs to not be syntax highlighted. * Concepts are not listed in the reference. * In the synopsis of decimal32 and siblings, members are not indented. * It would be nice to make clearer that BID is "fast but more space" and DPD is "slower but compact" here: https://cppalliance.org/decimal/decimal.html#conversions * The numeric_limits synopsis includes an #ifdef _MSC_VER that is really an implementation detail. I don't think that MSVC declaring numeric_limits as a class is relevant to the user anyhow. * It would be great if to_chars listed the possible errors under which the function fails, and the error codes it produces. * In the documentation, numbers like e_v are in the boost::decimal namespace, which doesn't match the actual code (boost::decimal::numbers namespace). I think the latter is correct. * In the documentation, numbers are defined as static constexpr, not matching the code. They should be listed as "inline constexpr". * The scroll bar is located in the center of the screen, rather than on the right side.
- What is your evaluation of the potential usefulness of the library? Do you already use it in industry?
I think it's useful. According to the author, it's already in use in some financial companies, which is great. SQL databases also have DECIMAL types. I think that having it in Boost adds value.
- Did you try to use the library? With which compiler(s)? Did you have any problems?
I've used it to implement support for the DECIMAL type in Boost.MySQL. MySQL's DECIMAL(p, s) type [10] is a fixed-point decimal with a configurable precision p. p ranges from 1 to 65 digits. s is the decimal scale, with 0 <= s <= p. That is, DECIMAL(5, 2) stores values like 134.15. I've added support for using decimal32, decimal64 and decimal128 (and their fast counterparts) in the following contexts:
* In the static interface, so you can define row types like "struct employee { decimal::decimal32 salary; };". MySQL sends the type's precision before the rows, so I check it before incurring in rounding errors (using a decimal32 with a DECIMAL(10) is always an error). * In SQL formatting, so you can issue queries containing decimals like this:
conn.execute(mysql::with_params("SELECT * FROM employee WHERE salary >
{}"), result);
PR is here: https://github.com/boostorg/mysql/pull/399
This exercises the charconv API and some utility functions. I've tested it in my CIs, covering gcc >=5, clang >= 3.6, MSVC >= 14.1.
Aside from some problems in older compilers (not listed as supported in the docs), things have worked fine. I've encountered a couple of bugs that I reported and have been promptly fixed.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I've spent most of this week reading the documentation, the DTR and the public includes, building the MySQL prototype, asking questions, reporting potential problems and writing this review. I've spent around 30h in the process.
- Are you knowledgeable about the problem domain?
I'm no expert in decimal number arithmetic - just a user. I know nothing about their internals.
- Affiliation disclosure
I'm currently affiliated with the C++ Alliance, as is Matt Borland.
Ensure to explicitly include with your review: ACCEPT, REJECT, or CONDITIONAL ACCEPT (with acceptance conditions).
My recommendation is to CONDITIONALLY ACCEPT the proposed library into Boost, under the following conditions:
* Individual include files (e.g. boost/decimal/charconv.hpp) are supported, tested and its usage encouraged. * sprintf should be fixed or, better, removed.
I recommend applying the rest of the feedback, but that's up to the authors.
My general feeling about the library is good. A lot of effort from the authors has gone into this. The library covers real use cases and is used as of today. I think Boost is better with this library than without it.
Many thanks Matt, Chris and John for your work.
Regards, Ruben.
Thank you for your detailed review. Your conditions are fair and I'll add issues and tag you in them.
[1] https://lists.boost.org/Archives/boost/2025/01/259012.php [2] https://lists.boost.org/Archives/boost/2025/01/259021.php [3] https://stackoverflow.com/questions/52490211/purpose-of-using-uint64-c [4] https://github.com/cppalliance/decimal/issues/794 [5] https://github.com/cppalliance/decimal/blob/394daf2624f9a001de6cf8c392e42398... (not guaranteed to trigger) [6] https://github.com/cppalliance/decimal/blob/394daf2624f9a001de6cf8c392e42398... [7] https://github.com/cppalliance/decimal/issues/801 [8] https://www.bytereef.org/mpdecimal/doc/libmpdec/index.html [9] https://docs.python.org/3/library/decimal.html [10] https://dev.mysql.com/doc/refman/8.4/en/precision-math-decimal-characteristi...

2. For each decimal size, there's two types: decimal32 is the storage optimized one, and decimal32_fast is the operation-optimized one. Like Peter, I'm not convinced that making the storage optimized be the default one is adequate. I understand that this is what the TR states, though. I'd advise to consider Peter's suggestions [1] [2]. I don't have enough field experience to know whether this is sufficiently relevant, though. In any case, the documentation should state when to use the standard ones vs the fast ones.
In this case it is the naming scheme provided by IEEE-754, which I think we would be remiss to ignore. I think the suggestion from Peter (and you below in bullet 3) to add implicit lossless conversions is good. <stdfloat> has far stricter rules than the builtin floating point types: it allows implicit promotion but errors on shortening.
That's fair.
If you're not going to support all standard type specifiers, I'd think of exposing a std::format-like function that works in C++14. Something like decimal::format_decimal and decimal::print_decimal. This is just a suggestion, though. I understand that, if this is pushed to the standard, having these functions might be valuable.
The implementations of those functions were more in the realm of a reference for how you would add the functionality to the existing STL should they be standardized. Making a complete working reimplementation of all those clib functions would be a huge undertaking for little benefit.
Since <charconv> is available at C++14, doesn't that generally cover what format_decimal and print_decimal would do? I specifically avoided using the STL here because requiring C++17 for a two-member struct and a four-member enum seemed silly.
Yes, that's fair, too. Charconv should be enough.
9. Is boost::decimal::to_string public? It's handy, and in the public namespace, but in a detail/ header and not documented. I'd advise to make it public by moving the header to boost/decimal/ and documenting it.
I'll move it. I think it goes along with your printing point, and is generally something I would expect to be provided for a numeric type.
I discovered it through the IDE and find it useful; I think this is the right thing to do. Thanks, Ruben.

On Wed, Jan 15, 2025 at 5:34 PM John Maddock via Boost < boost@lists.boost.org> wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
You will find documentation here: https://cppalliance.org/decimal/decimal.html
And the code repository is here: https://github.com/cppalliance/decimal/
Boost.Decimal is an implementation of IEEE 754 <https://standards.ieee.org/ieee/754/6210/> and ISO/IEC DTR 24733 <https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf> Decimal Floating Point numbers. The library is header-only, has no dependencies, and requires C++14.
Thanks to Matt & Chris for submitting your library for review. From glancing at the docs it looks great. However, I have a very basic question: What are the actual applications for decimal floating point? Where is it used, or where would potential users want to use it? Why wouldn't one just use fixed-point arithmetic when binary floating point doesn't provide the proper rounding behaviour?

On Monday, January 20th, 2025 at 11:52 AM, Klemens Morgenstern via Boost <boost@lists.boost.org> wrote:
On Wed, Jan 15, 2025 at 5:34 PM John Maddock via Boost < boost@lists.boost.org> wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
Thanks to Matt & Chris for submitting your library for review. From glancing at the docs it looks great. However, I have a very basic question:
What are the actual applications for decimal floating points? Where are they used or where would potential users want to use them?
The canonical example is finance. We have an example [1] where a user can parse a CSV of stock pricing data without the rounding error that would come with binary floating point, and then perform analysis leveraging Boost.Math.
Why wouldn't one just use fixed-point arithmetic when binary floating point doesn't provide the proper rounding behaviour?
In one of Ruben's questions he asked about emulating fixed point, and I believe that is possible, but I have not tried to do it myself. We have a user who was using fixed-point arithmetic, but when they needed to support Bitcoin it exceeded the precision of their fixed-point system, so they either had to rewrite it or use an off-the-shelf solution like the one we offer here. They chose the latter, and to my knowledge it has worked out for them. Matt [1] https://github.com/cppalliance/decimal/blob/develop/examples/statistics.cpp
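As a minimal illustration of the rounding point (a sketch; the coefficient/exponent constructor is used to build exact decimal values):

#include <boost/decimal.hpp>
#include <iostream>

int main()
{
    using boost::decimal::decimal64;
    const double bin = 0.1 + 0.2;                              // binary: 0.30000000000000004...
    const decimal64 dec = decimal64{1, -1} + decimal64{2, -1}; // 0.1 + 0.2 in base 10
    std::cout << (bin == 0.3) << '\n';                         // 0: binary representation error
    std::cout << (dec == decimal64{3, -1}) << '\n';            // 1: exact in base 10
}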

On Wed, Jan 15, 2025 at 10:34 AM John Maddock via Boost < boost@lists.boost.org> wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
Hi everyone,

First of all, a big thank you to Matt and Chris for your hard work; I personally find this library very useful. Here are a few quick comments:

1. Literals. I agree with earlier suggestions about placing them in a separate namespace, such as boost::decimal::literals, rather than boost::decimal. This matches typical Boost conventions.

2. Headers. I also concur with the feedback regarding compile times. Relying exclusively on <boost/decimal.hpp> can be slow. Please provide smaller headers.

3. Fast vs. Non-Fast Variants. I think the authors' choice of default is probably fine, though I'm not 100% certain. In any case, clearer documentation on when to pick the standard type vs. the fast variant would be really helpful. Some guidance on weighing storage vs. speed would benefit end users.

4. <cstdio> Support. I share the concerns about potentially unsafe functions and limited specifier support. Maybe focusing on <format> (for C++20) or a safer, custom formatter would be more straightforward and less error-prone.

5. Concepts. If the concepts defined here are intended for user code, I agree they should be public and documented. I would like to use them in my C++20 (and beyond) code.

6. Hardware Decimal Support. I don't see explicit mention of how this library could leverage hardware decimal capabilities, but I imagine that could be added in the future, possibly via a pull request from someone who has access to that hardware. It is a nice-to-have feature but not a must-have; I would not stop the library from being released without it.

7. Compiler Explorer Support. It would be great if this library were supported on Compiler Explorer for doing some testing. I know it is not related to the library itself, but it helps a lot for reviewing it.

8. Benchmarks. I plan to run some additional benchmarks on my own machines. One thing I'm curious about is how the built-in GCC _Decimal32/_Decimal64 types compare. Do they behave more like the "standard" or the fast variants in practice?

Thanks again! I think this library is shaping up wonderfully and can't wait to see how it evolves.

Best, Fernando

On Monday, January 20th, 2025 at 2:29 PM, Fernando Pelliccioni via Boost <boost@lists.boost.org> wrote:
On Wed, Jan 15, 2025 at 10:34 AM John Maddock via Boost < boost@lists.boost.org> wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
Hi everyone,
First of all, a big thank you to Matt and Chris for your hard work, I personally find this library very useful. Here are a few quick comments:
1. Literals I agree with earlier suggestions about placing them in a separate namespace, such as boost::decimal::literals, rather than boost::decimal. This matches typical Boost conventions.
2. Headers I also concur with the feedback regarding compile times. Relying exclusively on <boost/decimal.hpp> can be slow. Please provide smaller headers.
There are issues open for 1 and 2 to be addressed.
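For 1, once the literals live in their own namespace, usage would look roughly like this (the suffix shown is illustrative; the final spellings may differ):

#include <boost/decimal.hpp>

int main()
{
    using namespace boost::decimal::literals; // explicit opt-in
    const auto price = 19.99_DD;              // a decimal64 literal; suffix name assumed
    (void)price;
    return 0;
}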
3. Fast vs. Non-Fast Variants I think the authors’ choice of default is probably fine, though I’m not 100% certain. In any case, clearer documentation on when to pick the standard type vs. the fast variant would be really helpful. Some guidance on weighing storage vs. speed would benefit end users.
4. <cstdio> Support I share the concerns about potentially unsafe functions and limited specifier support. Maybe focusing on <format> (for C++20) or a safer, custom formatter would be more straightforward and less error-prone.
I think pushing people to charconv is the better move. <format> support needs GCC >= 13, Clang >= 18, or _MSC_VER >= 1940. That's a high bar for a C++14 library.
5. Concepts If the Concepts defined here are intended for user code, I agree they should be public and documented. I would like to use them in my C++20 (and beyond) code.
6. Hardware Decimal Support I don't see explicit mention of how this library could leverage hardware decimal capabilities, but I imagine that could be added in the future, possibly via a pull request from someone who has access to that hardware. It is a nice to have feature but not a must have. I would not stop the library from being released without it.
7. Compiler-Explorer support It would be great if this library was supported on Compiler-Explorer for doing some testing. I know it is not related to the library but it helps a lot for reviewing it.
8. Benchmarks I plan to run some additional benchmarks on my own machines. One thing I'm curious about is how the built-in GCC _Decimal32/_Decimal64 types compare. Do they behave more like the "standard" or the fast variants in practice?
"Standard", but I will note the behavior of the decimalXX, and decimalXX_fast are identical except the latter does not support sub-normals.
Thanks again! I think this library is shaping up wonderfully and can't wait to see how it evolves.
Best, Fernando

John

Here is my review of the Decimal library documentation. I am the Technical Writer for the CppAlliance. I appreciate the effort that has gone in to try to make this doc set complete.

The main issues for me are: the absence of a table of contents for the docs (though perhaps a build script will create this?); moving the finance examples to a much higher-profile position within the docs, as this seems to be the main use case; perhaps articulating other compelling (or author-favorite) use cases; and adding sentences explaining what the given examples/functions/macros do.

Hope the following review helps! - best Peter

Decimal library documentation review

Great that Use Cases is present as part of the introductory topics. What about scenarios other than finance - perhaps statistical applications (variance, standard deviation, correlations), astronomy (planetary orbits over long time frames, gravitation), mathematical proofs (verifying conjectures, symbolic algebra - where exactness is key), cryptography, machine learning (optimization algorithms, autonomous driving!), or graphics (to avoid artifacts)? If you have any favorites, mention them, ideally with an example or two.

Perhaps this would be created by a build script, but the docs could really use a Table of Contents, with all the headers and subheaders clear to see. This also helps us understand the structure and scope of the library.

Odd formatting of the "Note" boxes, with "Note" centered vertically in the box; it should probably be top left. Same throughout the docs for "Important" topics as well as Note.

Some odd leading spaces in the examples and source code: " #include" rather than "#include", for example.

"A short example of the basic usage:" - it would be better to explain what the example shows, even if it does seem obvious - make it clear for the first-time user of this type of library. As finance has been explicitly named as a compelling use case, perhaps add a meaningful (commented) finance example to the "Basic Usage" section. Or move the Financial Applications section up here - or both.

API Reference

Add an intro sentence as to what is available in the reference, such as "this section contains complete details on all the types, structures, enums, constants and macros that are defined in this library" - and make sure that this sentence is true: have you added complete details on all this stuff, no matter how peripheral it may be?

Good to see links to the API components.

Not sure why there is /* see 3.2.8 */ in that section, yet "3.2.9" does not get one. Perhaps remove the /* see 3.2.8 */ comment - the introductory comment should do it. However, ensure there are NO exceptions to a general statement like this. If there are, add a comment to the individual entry to illuminate.

Perhaps add a bit more information to the Formatted input: section - OK, these are "locale dependent", but what do they do? I assume they take an input stream of characters - but consider explaining this, and what terminates the input stream? And the same comment for the output.

The Description of decimal32 is given as a list of bullet points, whereas the information is more suited to a table. The first column would be something like "Attribute", the second "Value", such as:

Attribute       Value
Storage width   32 bits
Precision       7 decimal digits
Max exponent    ...

The point is: bullet points are overused and the content suggests a table. Tables look nicer than bullet lists. Same comment on bullets versus tables for all the other entries in the API reference.

For all the functions, consider listing the Errors and Exceptions that might be thrown - if any. For maximum value, consider adding what to do about it if a particular error is thrown. For functions, add an introductory sentence under the name of the function stating what it does - even if it seems obvious. If there are any limitations or gotchas, this is a good place to articulate them.

cfloat support - syntax styling and coloring appear to have been lost.

Macros - the doc does describe what is disabled or defined, but consider adding a sentence as to why you would do this. "Disabling the use of I/O streaming enables........what?" Same for all the other macros. Add the use case or benefit.

Examples

Add a sentence, or two, as to what each example shows. Great if limitations and gotchas are noted here too. For example, "Generic Programming" - great that there is example code, but what does the example show?

Financial Applications

Great - but consider moving this much closer to the top of the documentation. Example use cases are some of the first things a dev wants to see.

Design Decisions

Perhaps move this section ABOVE the API Reference. It is short and explanatory as to the thinking behind the library, and is important to understand early on. References and Copyright etc. are OK at the end of the doc.

On 21/01/2025 05:53, Peter Turcan via Boost wrote:
John
Here is my review of the Decimal library documentation. I am the Technical Writer for the CppAlliance.
Thank you Peter, but I don't see a conclusion (accept/reject/conditional etc)? Best, John.

John,
From an information/education perspective - ACCEPT - conditionally on the few key doc updates mentioned in my intro.
Good luck with the review process! - Peter

On Tue, Jan 21, 2025 at 1:30 AM John Maddock via Boost <boost@lists.boost.org> wrote:
Thank you Peter, but I don't see a conclusion (accept/reject/conditional etc)?
Best, John.

Peter, As always thanks for your review of the docs. We will incorporate your changes shortly. Some of the weird styling (e.g. the important block) has been reported to the guys that maintain the stylesheet. Matt

Boost.Decimal Review

I want to thank Matt Borland and Chris Kormanyos for their work.
Does this library bring real benefit to C++ developers for real-world use cases?
Yes. It should be helpful in "human-centric calculations". It might be useful in financial applications, although I have some questions about this below.
Do you have an application for this library?
Not at the moment.
Does the API match with current best practices?
My biggest point of disagreement is the single-header design. The documentation says "The entire library should be accessed using the convenience header `<boost/decimal.hpp>`". Although providing a single header as an extra option is common practice for Boost libraries, I don't think it should be a recommendation (with the word "should"), let alone enforced as the only way to consume the library. The rest of the documentation always assumes the user will include `<boost/decimal.hpp>`.

- The sections "cmath support", "cstdlib support", "format support", and so on don't even tell me what header I need to include only that functionality.
- The documentation implies `<boost/decimal.hpp>` includes way more than anyone needs; even the C++ standard library splits that functionality into different headers, while Boost.Decimal doesn't. `<boost/decimal.hpp>` forces users to unnecessarily include a big chunk of the standard library, and this functionality is so diverse that it's very unlikely the user really wants even 10% of what's being included.
- A single-header design is not a way "to make things just work with no effort on the user's part" because it forces the user to use the library this way, rather than simply allowing them to, like all other Boost libraries.
- If a potential user says, "X seconds of compile time is a problem," it's unhelpful to reply with something like "I don't think X seconds of compile time is a problem *for you*" - especially when you consider that it could be 3 seconds per translation unit. Most recent libraries are making an effort to reduce compile times by breaking up the headers *and* avoiding header-only libraries. This is a big complaint people have about Boost. The library is already header-only, and making no effort to break up the headers makes no sense to me.
- Telling the user to rely on macros to disable optional functionality doesn't sound good either, because now users have to blacklist rather than whitelist what they want. Blacklisting what you don't want with macros is a huge workaround: it introduces a new convention to solve a design problem that has already been solved. (See the sketch after these points for what whitelisting could look like.)
- Implying this design matches the C++ standard library is also unreasonable, because it brings in many different headers from the standard library. What would match the STL interface, almost by definition, is one header for cmath support, one header for cstdlib support, one header for charconv support, and so on. Even the documentation is structured by the different standard library headers it replicates the functionality for.

A minor point:

- The public API probably shouldn't return `detail::uint128`. It can't be considered an implementation detail if the user can be passing it around. It could be a "see-below" type, where the user should ignore information about the members but can still use it.
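A sketch of the whitelisting I mean - the granular header names here are hypothetical; they do not exist in the library today:

// Hypothetical granular headers - whitelisting only what I use:
#include <boost/decimal/decimal64.hpp>   // just the type
#include <boost/decimal/charconv.hpp>    // just to_chars/from_chars

int main()
{
    boost::decimal::decimal64 d {1, -1};
    (void)d;
}

instead of `<boost/decimal.hpp>`, which pulls in everything.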
Is the documentation helpful and clear?
Yes. The main point I'd like to discuss is the explanation of the trade-offs.

The documentation says "Decimal floating point numbers avoid this issue by storing the significand in base-10 (decimal)", and the "issue" mentioned in the previous sentence is "representation errors". This seems misleading. It makes it sound like `decimalX` is a variant of a bigint type that attempts to avoid all representation errors. But if I understand correctly, decimal numbers avoid much more specific types of errors: representation errors when the base is not a multiple of a prime factor. It has the same representation errors as base-2 floating point numbers when 1) the base is not a multiple of a prime factor (although there are more multiples of prime factors) and 2) there are more and more rounding errors in the value as the exponent gets larger and larger. So the final trade-off is *fewer* representation errors (because there are more multiples of prime factors) at the cost of performance ("ratio to double" between ~30 and ~150 for most operations). I might be wrong about these details, but I still believe that sentence in the documentation is misleading.

There's also a technical question I still don't understand about 1) the relationship between decimal numbers and fixed-point real numbers and 2) the impact of this relationship on potential applications. For instance, Bitcoin uses integers (satoshis) to represent "decimal" units of BTC. This representation is just an integer and a fixed/implicit scaling factor/exponent. The same representation can be used for dollars: the internal units are cents and the main unit is a dollar. (A concrete sketch of the fixed-point representation I mean follows the minor points below.) When we compare that with decimal floating point numbers, the trade-off becomes 1) faster operations (even faster than native base-2 floating point types), 2) no precision errors / exact accuracy for numbers in the predefined scale, and 3) reliable precision regardless of the best exponent to represent the number; at the cost of a more limited range of values that can be represented.

With that trade-off in mind, my intuition is decimal floating point numbers are not so useful for financial applications (the only application discussed in the "Examples" section). With 32-bit fixed-point numbers, I could represent more dollars (or cents) or BTC (or satoshis) than will ever exist (the limited range of values is not a problem) while getting the performance and precision benefits of integers. If the number is larger than that maximum, that means either I made a mistake (for instance, in currency conversion) and I should definitely fix this mistake (I can't lose $1000000 and ignore it because my best exponent is now Y for some reason - $1000000 is still a lot and I can't move on until this "precision error" is fixed). If this is not a mistake, then I'm in a subdomain where the precision errors don't matter anymore because I need those large exponents to represent something else. The problem is the extra precision from more multiples of prime factors is now not as relevant to me, and base-2 numbers are probably fine (for instance, in some statistics about the data).

I might be wrong about all of that, but if that's the case I assume other people will also be wrong about it. The docs could correct these mistakes or, if I'm wrong, provide other examples of useful applications.
That's why I think the main application of the library is "human-centric calculations", and it's only useful in other fields when there's some intersection with that, like there is when reading data in base-10 format that might not fit a fixed-point representation. This discussion could also open possibilities for representations using integers in this library or other libraries, in the sense that base-10 floating point is, in a way, fixed point with a dynamic exponent.

Other minor points:

- It seems like most code blocks start with some whitespace in the first line for some reason.
- Constructing a decimal from floating point numbers leads to precision errors, and constructing it from independent integers makes it difficult to read. Since we can construct it from strings (Literals Support), this could be listed in "Basic Usage". I couldn't figure out how to construct these values from literals without using the literals in the "Literals Support" section.
- The signatures could contain the explicit requires clauses and concepts instead of "Where types `T` and `T2` are integral types" in the exposition.
- "Lastly the sign follows the convention of `signbit`": This could be linked to https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/signbit or removed from the beginning of the exposition. The reader might think it is important information to understand the rest of the exposition and stop reading the documentation to google it. Also, `bool sign` only needs to be explained so much because it's unintuitive. If it were called `bool is_negative`, or something like that, there would be nothing to explain.
- The "3.2.8 Note" could just be part of the reference documentation. This way the user would already have this information when looking at the function. It would also be useful to explain how that behavior compares with other numeric types.
- Typo in "Integeral".
- I missed some better explanation about the difference between fast and regular types, like in Boost.Unordered, where they explain the data structures and then the trade-off.
- I'm used to the API reference being at the end of the documentation. I was surprised to see there's a lot of exposition after the reference: Examples, Benchmarks, "Design Decisions", ...
- "Comparisons" section: "between vec[i] and vec[i + 1]": where the numbers come from in the implementation is irrelevant. Only the number of replicates (20,000,000) is relevant. "This is repeated 5 times to generate stable results" doesn't add entropy to help stability; it just means you have 100,000,000 replicates with a bias. You could just generate 100,000,000 values. It's also not relevant to the experiment that the values are in a vector. In fact, unless there's a technical reason for that, generating the numbers for the tests and discarding them is much easier. In any case, 100,000,000 is way too much for any probability distribution. After much less than that you'll have a stable p-value.
- Comparison results: The tables need some explaining so the reader doesn't have to look at each column and header to infer what's happening. It's usual to include something like "The second column means ____. Lower values are better." It's also common practice to include the standard deviation in the table.
- The tables are not sorted by the values. This is particularly confusing because `GCC _Decimal<X>` always beats `decimal32`. This is not discussed in the document. `GCC _Decimal<X>` isn't even defined and discussed.
- There should be a conclusion about the benchmark results. Did anything change from one platform to the other, etc.? If I read the results correctly, the conclusion is I should always use `decimalX_fast`, then `GCC _DecimalX` if I need other properties, then `decimalX` if not using GCC.
- If section "Basic Operations" is about "+, -, *, /", then the first comparison subsection "Comparisons" should be renamed to reflect what operations it compares: ">, >=, <, <=, ==, and !=" (relational operators).
- Both columns in the table have the same information at a different scale. This means all benchmarks for a platform could be in a single bar plot where the x-axis has each number type and, for each number type, a bar with the ratio to double for each operation.
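Here is the fixed-point sketch promised above; 64-bit integers are used to sidestep the 32-bit range question, and the scale is fixed at cents:

#include <cstdint>

// Fixed-point dollars: a 64-bit count of cents with an implied scale of 10^-2.
using cents = std::int64_t;

constexpr cents from_dollars(std::int64_t dollars, std::int64_t c)
{
    return dollars * 100 + c;
}

int main()
{
    const cents a = from_dollars(19, 99);
    const cents b = from_dollars(0, 1);
    const cents sum = a + b;   // exact integer arithmetic, no rounding anywhere
    return sum == 2000 ? 0 : 1;
}

The trade-off is exactly the one described: integer speed and exactness, at the cost of a fixed scale and a bounded range.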
Did you try to use it? What problems or surprises did you encounter?
Yes. I compiled the code and ran the examples.
What is your evaluation of the implementation?
It seems like performance is still much worse than other implementations. I don't think this is a reason to reject the library; I just hope the authors will keep an eye on that, implementing improvements over time if possible. Or maybe some other native types or implementations could be wrapped in the library.
Please explicitly state that you either *accept* or *reject* the inclusion of this library into boost.
Also please indicate the time & effort spent on the evaluation and give the reasons for your decision.
I recommend accepting the library. I read the documentation and the source code. I compiled the library and ran the examples. I spent time on and off over the last week. In total, I spent about 2 days evaluating the library.
Disclaimer
I'm affiliated with the C++ Alliance.

On Wed, 15 Jan 2025 at 06:34, John Maddock via Boost <boost@lists.boost.org> wrote:
The review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 22nd Jan.
-- Alan Freitas https://alandefreitas.github.io/alandefreitas/ <https://github.com/alandefreitas>

> Does this library bring real benefit to C++ developers for real-world use cases?
> Yes. It should be helpful in "human-centric calculations". It might be useful in financial applications, although I have some questions about this below.
<snip>

Thank you, Alan, for your detailed, clear and rich review of this decimal implementation. We read with great clarity all of the excellent points you make, stretching from compile time, then on to efficiency, and to granularity of the library (or lack thereof).

> My biggest point of disagreement is this single-header design.

Which is one thing I kind of like about it. <snip>

All of the points you mention could, would or should be addressed in a potential evolution of Decimal. This might ultimately reach or influence standardization of Decimal (if that were ever to be) in the sense of the language Standard. So you are not alone with your concerns. We will follow these in evolution.

As it turns out, the last "number" that C++ specified was std::(u)int64_t, a full 12 years after C99 did it. Hardly a rich numerical contribution. It goes without saying that a real specification would address each, every and even more of your/these concerns.

But we have something here. And we have a lot. The proposed Boost.Decimal movement drives this forward. Our clients are using it very happily and successfully even as we write these notes.

We offer:
* Highly practical, usable today
* Portable to many compilers
* Even suitable for embedded
* Or suitable for GPU/FPGA research
* Quality
* Testing
* Documented behavior
* Standards-capable interfacing

And this is really a lot for a numerical type. I'm not saying that Decimal should or would ever be specified in C++. Nor do I imply that that specification would or should stem from a Boost implementation thereof. But we do have an implementation. And I believe that your review points are well received within this context.

- Chris

On Wednesday, January 22nd, 2025 at 3:20 PM, Alan de Freitas via Boost <boost@lists.boost.org> wrote:
Boost.Decimal Review
I want to thank Matt Borland and Chris Kormanyos for their work.
Does this library bring real benefit to C++ developers for real-world use cases?
Yes. It should be helpful in "human-centric calculations". It might be useful in financial applications, although I have some questions about this below.
Do you have an application for this library?
Not at the moment.
Does the API match with current best practices?
My biggest point of disagreement is the single-header design. The documentation says "The entire library should be accessed using the convenience header `<boost/decimal.hpp>`". Although providing a single header as an extra option is common practice for Boost libraries, I don't think it should be a recommendation (with the word "should"), let alone enforced as the only way to consume the library. The rest of the documentation always assumes the user will include `<boost/decimal.hpp>`.
- The sections "cmath support", "cstdlib support", "format support", and so on don't even tell me what header I need to include only that functionality.
- The documentation implies `<boost/decimal.hpp>` includes way more than anyone needs; even the C++ standard library splits that functionality into different headers, while Boost.Decimal doesn't. `<boost/decimal.hpp>` forces users to unnecessarily include a big chunk of the standard library, and this functionality is so diverse that it's very unlikely the user really wants even 10% of what's being included.
- A single-header design is not a way "to make things just work with no effort on the user's part" because it forces the user to use the library this way, rather than simply allowing them to, like all other Boost libraries.
- If a potential user says, "X seconds of compile time is a problem," it's unhelpful to reply with something like "I don't think X seconds of compile time is a problem for you" - especially when you consider that it could be 3 seconds per translation unit. Most recent libraries are making an effort to reduce compile times by breaking up the headers and avoiding header-only libraries. This is a big complaint people have about Boost. The library is already header-only, and making no effort to break up the headers makes no sense to me.
- Telling the user to rely on macros to disable optional functionality doesn't sound good either, because now users have to blacklist rather than whitelist what they want. Blacklisting what you don't want with macros is a huge workaround: it introduces a new convention to solve a design problem that has already been solved.
- Implying this design matches the C++ standard library is also unreasonable, because it brings in many different headers from the standard library. What would match the STL interface, almost by definition, is one header for cmath support, one header for cstdlib support, one header for charconv support, and so on. Even the documentation is structured by the different standard library headers it replicates the functionality for.
See: https://github.com/cppalliance/decimal/issues/804. There's an active branch `modular` where I am working on resolving the issue.
A minor point:
- The public API probably shouldn't return `detail::uint128`. It can't be considered an implementation detail if the user can be passing it around. It could be a "see-below" type, where the user should ignore information about the members but can still use it.
Yes, this change makes sense, and several others have brought it up. I tagged you in an issue to track it, since you're the most recent one to mention it.
Is the documentation helpful and clear?
Yes. The main point I'd like to discuss is the explanation of the trade-offs. The documentation says "Decimal floating point numbers avoid this issue by storing the significand in base-10 (decimal)", and the "issue" mentioned in the previous sentence is "representation errors". This seems misleading. It makes it sound like `decimalX` is a variant of a bigint type that attempts to avoid all representation errors. But if I understand correctly, decimal numbers avoid much more specific types of errors: representation errors when the base is not a multiple of a prime factor. It has the same representation errors as base-2 floating point numbers when 1) the base is not a multiple of a prime factor (although there are more multiples of prime factors) and 2) there are more and more rounding errors in the value as the exponent gets larger and larger. So the final trade-off is fewer representation errors (because there are more multiples of prime factors) at the cost of performance ("ratio to double" between ~30 and ~150 for most operations). I might be wrong about these details, but I still believe that sentence in the documentation is misleading.
There's also a technical question I still don't understand about 1) the relationship between decimal numbers and fixed-point real numbers and 2) the impact of this relationship on potential applications. For instance, Bitcoin uses integers (satoshis) to represent "decimal" units of BTC. This representation is just an integer and a fixed/implicit scaling factor/exponent. The same representation can be used for dollars: the internal units are cents and the main unit is a dollar. When we compare that with decimal floating point numbers, the trade-off becomes 1) faster operations (even faster than native base-2 floating point types), 2) no precision errors / exact accuracy for numbers in the predefined scale, and 3) reliable precision regardless of the best exponent to represent the number; at the cost of a more limited range of values that can be represented.
With that trade-off in mind, my intuition is that decimal floating point numbers are not so useful for financial applications (the only application discussed in the "Examples" section). With 32-bit fixed-point numbers, I could represent more dollars (or cents) or BTC (or satoshis) than will ever exist (so the limited range is not a problem) while getting the performance and precision benefits of integers.
In dollars, with a 32-bit unsigned integer you could represent ~4.3 billion USD. The United States government spends on the order of 10,000 billion USD a year.
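(For scale: 2^32 = 4,294,967,296, so with whole dollars as the unit the ceiling is about 4.3 billion USD, and with cents as the unit it drops to roughly 43 million USD.)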
If the number is larger than that maximum, that means either I made a mistake (for instance, in a currency conversion) and I should definitely fix it (I can't lose $1,000,000 and ignore it because my best exponent is now Y for some reason; $1,000,000 is still a lot, and I can't move on until this "precision error" is fixed). If it's not a mistake, then I'm in a subdomain where the precision errors don't matter anymore, because I need those large exponents to represent something else. The problem is that the extra precision from base 10's additional prime factors is now not as relevant to me, and base-2 numbers are probably fine (for instance, in some statistics about the data).
I might be wrong about all of that, but if that's the case I assume other people will also be wrong about it. The docs could correct these mistakes or, if I'm wrong, provide other examples of useful applications. That's why I think the main application of the library is "human-centric calculations", and it's only useful in other fields when there's some intersection with that, as there is when reading data in base-10 format that might not fit a fixed-point representation. This discussion could also open possibilities for integer-based representations in this library or other libraries, in the sense that base-10 floating point is, in a way, a fixed-point number with a dynamic exponent.
I think including something like what Ruben is planning to do, where he synthesizes a variable fixed-point type, could be a useful addition.
Other minor points:
- It seems like most code blocks start with some whitespace on the first line for some reason.
- Constructing a decimal from floating point numbers leads to precision errors, and constructing it from separate integers makes the value difficult to read. Since we can construct it from strings (see "Literals Support"), this could be listed in "Basic Usage". I couldn't figure out how to construct these values other than by using the literals in the "Literals Support" section.
That's fair. Since the beginning of the review I have also moved the literals into a literals namespace, as several people mentioned that as well.
- The signatures could contain explicit requires clauses and concepts instead of the exposition "Where types `T` and `T2` are integral types" (see the sketch below).
- "Lastly the sign follows the convention of `signbit`": this could be linked to https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/signbit or removed from the beginning of the exposition. The reader might think it is important information for understanding the rest of the exposition and stop reading the documentation to google it. Also, `bool sign` only needs this much explanation because it's unintuitive. If it were called `bool is_negative`, or something like that, there would be nothing to explain.
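For illustration, the documented signature could read something like the following. This is exposition only: the library targets C++14, so the real code can't use concepts, and the exact parameter list here is assumed from the constructor description in this thread:

// Documentation-style signature with the constraint made explicit
// (assumed parameter list; C++20 syntax used for exposition only):
template <typename T, typename T2>
    requires std::integral<T> && std::integral<T2>
constexpr decimal32(T coeff, T2 exp, bool sign = false) noexcept;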
I'll add an explanation of signbit rather than assuming people know it, since it's counterintuitive.
- The "3.2.8 Note" could just be part of the reference documentation. This way the user would already have this information when looking at the function. It would also be useful to explain how that behavior compares with other numeric types. - Typo in "Integeral" - I missed some better explanation about the difference between fast and regular types. Like in Boost.Unordered where they explain the data structures and then the trade-off. - I'm used to the API reference being at the end of the documentation. I was surprised to see there's a lot of exposition after the reference: Examples, Benchmarks, "Design Decisions", ... - "Comparisons" section: "between vec[i] and vec[i + 1]": where the numbers come from in the implementation is irrelevant. Only the number of replicates (20,000,000) is relevant. "This is repeated 5 times to generate stable results" doesn't add entropy to help stability. It just means you have 100,000,000 replicates with a bias. You could just generate 100,000,000 values. It's also not relevant to the experiment that the values are in a vector. In fact, unless there's a technical reason for that, generating the numbers for the tests and discarding them is much easier. In any case, 100,000,000 is way too much for any probability distribution. After much less than that you'll have a stable p-value. - Comparison results: The tables need some explaining so the reader doesn't have to look at each column and header to infer what's happening. It's usual to include something like "The second column means ____. Lower values are better.". It's also common practice to include the standard deviation in the table. - The tables are not sorted by the values. This is particularly confusing because `GCC _Decimal<X>` always beats `decimal32`. This is not discussed
in the document. `GCC_Decimal<X>` isn't even defined and discussed.
I believe Peter Turcan brought up similar points. I'll add a link to the GCC docs at the top explaining what it is.
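A minimal sketch of the simpler methodology suggested in the benchmark bullet above: generate values on the fly, discard them, and accumulate a visible result so the comparisons can't be optimized away. The type parameter and iteration count are placeholders:

#include <chrono>
#include <cstddef>
#include <iostream>
#include <random>

// Times n relational comparisons on freshly generated values. The
// construction cost is included in the measurement here; a real harness
// would measure and subtract a baseline.
template <typename T>
double bench_less(std::size_t n)
{
    std::mt19937_64 rng {42};
    std::uniform_int_distribution<int> dist {1, 1000};
    std::size_t hits = 0;

    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
    {
        T a(dist(rng));
        T b(dist(rng));
        hits += static_cast<std::size_t>(a < b);
    }
    auto stop = std::chrono::steady_clock::now();

    std::cout << "hits: " << hits << '\n'; // keep the result observable
    return std::chrono::duration<double>(stop - start).count();
}

int main()
{
    std::cout << bench_less<double>(20'000'000) << " s\n";
}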
- There should be a conclusion about the benchmark results: did anything change from one platform to another, etc.? If I read the results correctly, the conclusion is that I should always use `decimalX_fast`, then `GCC _DecimalX` if I need other properties, then `decimalX` if not using GCC.
- If the section "Basic Operations" is about "+, -, *, /", then the first comparison subsection, "Comparisons", should be renamed to reflect the operations it compares: ">, >=, <, <=, ==, and !=" (the relational operators).
- Both columns in the table convey the same information at different scales. This means all benchmarks for a platform could be shown in a single bar plot where the x-axis has each number type and, for each number type, there is one bar per operation giving its ratio to double.
Did you try to use it? What problems or surprises did you encounter?
Yes. I compiled the code and ran the examples.
What is your evaluation of the implementation?
It seems like performance is still much worse than that of other implementations. I don't think this is a reason to reject the library; I just hope the authors will keep an eye on it, implementing improvements over time where possible. Or maybe some other native types or implementations could be wrapped by the library.
Chris and I have been discussing offering a wrapper around the Intel library for those who have it and want to use it. It's the same model that Multiprecision uses.
Please explicitly state that you either accept or reject the inclusion of this library into Boost.
I recommend accepting the library.
Also please indicate the time & effort spent on the evaluation and give the reasons for your decision.
I read the documentation and the source code. I compiled the library and ran the examples.
I spent time on and off over the last week. In total, I spent about 2 days evaluating the library.
Thank you for your review and comments. Matt
participants (18)
- Alan de Freitas
- Alexander Grund
- Andrey Semashev
- Christopher Kormanyos
- Fernando Pelliccioni
- Ivan Matek
- Joaquín M López Muñoz
- John Maddock
- Klemens Morgenstern
- Kostas Savvidis
- Marc Glisse
- Matt Borland
- Peter Dimov
- Peter Turcan
- Rainer Deyke
- Robert Ramey
- Ruben Perez
- Sam Darwin