
ick comments from me as well:
"The final number constructed is in the form (sign || coeff < 0 ? -1 : 1) x abs(coeff) x 10^exp."
This doesn't read well as written; the "(sign || coeff < 0 ? -1 : 1) x abs(coeff) x 10^exp" part should be set off and formatted consistently rather than mixing C++ code with ad-hoc 'x' multiplication notation.
See discussion here: https://github.com/cppalliance/decimal/pull/785. I previously had only coeff x 10^exp.
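For illustration, here is how I read the intended semantics in code. This is only a sketch: it assumes the documented coefficient/exponent/sign constructor decimal32(coeff, exp, sign) and the <boost/decimal.hpp> convenience header.

#include <boost/decimal.hpp>
#include <cassert>

int main()
{
    using boost::decimal::decimal32;

    // value = (sign || coeff < 0 ? -1 : 1) * |coeff| * 10^exp
    decimal32 a {123, -2};        // 123 * 10^-2  ==  1.23
    decimal32 b {123, -2, true};  // sign flag set     -> -1.23
    decimal32 c {-123, -2};       // negative coeff    -> -1.23

    assert(b == c);
    assert(a == -b);
}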
namespace boost { namespace decimal {
// Paragraph numbers are from ISO/IEC DTR 24733
// 3.2.2.1 construct/copy/destroy
constexpr decimal32() noexcept = default;
This doesn't compile. "class decimal32 {" is missing.
It was previously described here: https://cppalliance.org/decimal/decimal.html#generic_decimal_ so I did not duplicate anything from that block.
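For context, a self-contained version of the quoted fragment would need the enclosing class definition, roughly like this (purely illustrative, member list abbreviated):

namespace boost { namespace decimal {

class decimal32
{
public:
    // 3.2.2.1 construct/copy/destroy
    constexpr decimal32() noexcept = default;
    // ... remaining members elided ...
};

}} // namespace boost::decimal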
The default "decimal32" name is occupied by the storage-optimized form and not by the operation-optimized form. I don't think this is the right decision. The storage-optimized form should only exist in memory, and everything else should use the operation-optimized form.
E.g. if we have (illustrative)
decimal32 f( decimal32 a, decimal32 b ) { return a+b; }
decimal32 g( decimal32 a, decimal32 b ) { return a*b; }
int main() { decimal32 a, b, c, d, e; e = f( g(a, b), g(c, d) ); }
this currently would do a few unnecessary packs and unpacks.
But if f/g take and return the operation-optimized form, these unnecessary pack+unpack operations are avoided.
That is, if "decimal32" is the operation-optimized form, and we have e.g. "decimal32bid" for the BID encoded form, main would be
int main() { decimal32bid a, b, c, d, e; e = f( g(a, b), f(c, d) ); }
So the actual storage ("at rest") is still the same, but operations are more efficient.
Note that in this case decimal32bid has no operations, only conversions to and from decimal32. The actual arithmetic is always performed on the operation-optimized form.
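A rough sketch of what that split could look like. This is hypothetical: decimal32bid is not a type in the library under review, and its layout here is an assumption.

#include <cstdint>

class decimal32; // the operation-optimized form, as argued above

class decimal32bid
{
public:
    decimal32bid() = default;

    // Implicit conversions, so the main() above works unchanged:
    decimal32bid(decimal32 v) noexcept;   // pack: compute form -> BID bits
    operator decimal32() const noexcept;  // unpack: BID bits -> compute form

    // Deliberately no operator+, operator*, etc.: arithmetic is always
    // done on decimal32, never on the storage ("at rest") form.

private:
    std::uint32_t bits_ {}; // IEEE 754 decimal32 BID encoding
};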
The name "decimal32" is occupied by the IEEE-754 compliant type. The naming scheme matches published standards and existing practice such as uint32_t vs uint_fast32_t. There's a clear statement of intent using the latter. I don't think people should pick decimal32 over decimal32_fast in the general case, but I also think it would be a bad idea to diverge from IEEE in our naming scheme. Matt