
When writing code for several widths of types, separate sets of literal constants might be needed for the different decimal widths (unless the widest type were used for table entries and static_cast applied judiciously).

I think I'm not following you here. Could you illustrate such a use case? Concretely, what do you mean by several widths of types? Do you mean decimal32 vs. decimal64? With the syntax you have today, each type already has its own distinct suffix (i.e. _df always constructs a decimal32, _dd always constructs a decimal64, and so on).
And this whole great idea will turn into more of a mess than a help. Clients would get discouraged, which would ultimately reduce the library's acceptance and slow its adoption.
I would advise against compile-time asserts like these, however appealing they seem in theory.
Chris, Rubin, I *think* you are somewhat talking about different things here...

Rubin's point was that if you have a literal such as 1.2345678912345678_DD, then the computed type is decimal64 and the result will be rounded to 16 decimal places. This is true regardless of whether it is subsequently static_cast to something else (potentially causing double rounding). So I'm *reasonably* sure that Chris's argument doesn't hold much water.

But I do see one argument against a compile-time assert, which is the "this is a right pain in the butt" argument: users may not want to be forced to round all the arguments themselves, and doing so may be error prone. Have you counted exactly 16 digits, and not accidentally rounded to 15? Plus, string streaming will presumably round excess digits, so it kind of makes sense to be consistent.

But... I'm supposed to be an impartial judge, so I'll shut up now and let you get on with it ;) I just thought you were perhaps misunderstanding each other.

John.