
On Sun, 6 Mar 2005 20:50:12 -0700, Jonathan Turkanis <technews@kangaroologic.com> wrote:
> We might want to allow different rounding policies.
I'll try to write up an explanation of why I think there is a single Right rounding method, but it will take time. In case you convince me, or I can't convince you, and we end up with several rounding policies, give a thought to this: rounding is a property of the operations, not of the representation, so one may want + to round upwards and * to round downwards (or this particular + to round up, and the next one to truncate)...
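To make that concrete, here is a minimal sketch with invented names (it rounds to a fixed denominator bound and ignores 64-bit overflow for brevity); the point is only that the rounding direction travels with each call, so consecutive operations can round differently:

    #include <cstdint>

    enum class rounding { down, up, truncate };

    struct rational { std::int64_t num, den; }; // invariant: den > 0

    // round the exact value num/den to a fraction over den_max, in the
    // requested direction
    rational round_to(std::int64_t num, std::int64_t den,
                      std::int64_t den_max, rounding r) {
        std::int64_t q   = num * den_max / den;  // truncated quotient
        std::int64_t rem = num * den_max % den;
        if (r == rounding::down && rem < 0) --q; // toward -infinity
        if (r == rounding::up   && rem > 0) ++q; // toward +infinity
        return rational{q, den_max};
    }

    // each operation takes its own rounding policy...
    rational add(rational a, rational b, std::int64_t den_max, rounding r) {
        return round_to(a.num * b.den + b.num * a.den, a.den * b.den, den_max, r);
    }
    rational mul(rational a, rational b, std::int64_t den_max, rounding r) {
        return round_to(a.num * b.num, a.den * b.den, den_max, r);
    }

    // ...so this + can round up while the next * truncates
    rational example(rational a, rational b, rational c) {
        rational s = add(a, b, 1000, rounding::up);
        return mul(s, c, 1000, rounding::truncate);
    }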
> Also, we might want assertion failures instead of exceptions.
ok, that's a new variant, then :O)
The "legacy" code doesn't do too well.
I'm not sure how indicative a test with 10000 random numbers is; maybe doing the operations on the results of previous operations would be closer to real-life scenarios. But at the moment the statistics are a side-effect only; the test bed is only intended to catch blatant errors in the implementations.
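For what it's worth, the chained variant could look something like this (assuming a Rational with an (int, int) constructor and the usual operators; everything else here is made up):

    #include <cstdlib>

    // feed each result into the next operation, so errors accumulate the
    // way they would in real calculations instead of being independent
    template <class Rational>
    Rational chained_test(unsigned seed, int steps) {
        std::srand(seed);
        Rational acc(1, 1);
        for (int i = 0; i < steps; ++i) {
            Rational operand(std::rand() % 1000 + 1, std::rand() % 1000 + 1);
            switch (std::rand() % 3) {
            case 0: acc = acc + operand; break;
            case 1: acc = acc * operand; break;
            case 2: acc = acc / operand; break; // operand is never zero
            }
        }
        return acc; // compare this across implementations, or against
                    // exact (unbounded) arithmetic
    }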
> The most obvious signature would be
>
>     template< typename Elem,
>               typename Rounding = use_default,
>               typename Checking = use_default >
>     class rational;
[I assume checking would mean reacting with assertion failures, throwing exceptions etc. Did I misunderstand?]

Theory: rounding (or not) and checking are an either/or. If you round (or ignore the overflows), you have a result and the operation succeeded; if you assert/throw, the operation failed, and there is no need to bother with rounding a result you can't return.

Implementation: with assertions and exceptions it seems polite to leave the operands unchanged (so if a *= a fails, you can still print the offending a), but that requires temporaries, and maintaining them can seriously degrade the performance of the rounding/ignoring code.
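A minimal sketch of the temporaries point, with invented policy and member names (not a proposed interface); the checking path computes into temporaries and commits only on success, which is exactly the bookkeeping the rounding/ignoring variants would rather avoid:

    #include <cstdint>
    #include <limits>
    #include <stdexcept>

    // hypothetical checking policy: throw if a 64-bit intermediate no
    // longer fits the 32-bit representation
    struct throw_on_overflow {
        static void check(std::int64_t v) {
            if (v > std::numeric_limits<std::int32_t>::max() ||
                v < std::numeric_limits<std::int32_t>::min())
                throw std::overflow_error("rational overflow");
        }
    };

    template <class Checking>
    struct checked_rational {
        std::int32_t num, den;

        checked_rational& operator*=(const checked_rational& rhs) {
            // compute into temporaries: *this stays untouched if check()
            // throws, so after a failed a *= a the caller can still print
            // the offending a
            std::int64_t n = std::int64_t(num) * rhs.num;
            std::int64_t d = std::int64_t(den) * rhs.den;
            Checking::check(n);
            Checking::check(d);
            num = static_cast<std::int32_t>(n); // commit only after both
            den = static_cast<std::int32_t>(d); // checks have passed
            return *this;
        }
    };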
> I don't like this. Shouldn't rational<Elem> make sense for any type Elem for which the Euclidean algorithm makes sense?
Yes, it should. I was thinking more about what a user wants: "I'm writing an architectural CAD application; my I/O numbers will be in meters, with millimeter precision, largest value less than a hundred meters; the calculations are this complex, so I want intermediate results with .01mm precision, and volumes at most 1e6 m^3; thus I need rational<1000000>." It is less obvious why a user would say "I want rational<long>" (and if the user is developing for multiple platforms, that one gets ugly). Still, rational<1000, round> will end up triggering a policy to be used, so a hard-line user can still create her own policy to enforce a rational with an unsigned char numerator and an unsigned long long denominator for her special high-precision calculations with non-negative numbers less than 1.
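Roughly what I have in mind, with all the names invented (only the shape of the interface, no arithmetic):

    #include <cstdint>

    // hypothetical user-facing form: the template parameter is the
    // precision the user asked for (the largest denominator kept after
    // rounding), not the representation type
    template <std::int64_t DenBound>
    class bounded_rational {
        std::int64_t num_ = 0, den_ = 1; // representation is the
                                         // library's business
    public:
        bounded_rational() = default;
        bounded_rational(std::int64_t n, std::int64_t d) : num_(n), den_(d) {}
        // arithmetic would round every result back into den_ <= DenBound
    };

    // the CAD example from above: .01mm precision for values up to ~100m
    // means denominators up to 1e6
    using cad_length = bounded_rational<1000000>;

    // hypothetical hard-line policy for a rational<Bound, Policy> variant:
    // the user dictates the representation herself, e.g. for non-negative
    // values less than 1
    struct tiny_num_huge_den {
        using num_type = unsigned char;
        using den_type = unsigned long long;
    };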
> I think you can get the effect of restricting the absolute value of the <snip>
[The intent was not to bother users with what I feel are implementation details of the representation, but] I was wondering whether it makes sense to allow restricting the num/den ranges: why would anyone ever deliberately want less precise results than she can get for free? OTOH one may want to get the exact same results on a two's complement 32-bit binary machine as on her other, signed decimal architecture; the question is whether something like this will ever happen... (If not, always using the native limits may allow some implementation optimizations; I don't know yet.) Adding an explicit round(num_max, den_max) or round(num_min, num_max, den_max) function might be a much more useful alternative.

br,
andras
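p.s. A sketch of what such a round() could look like, bounding only the denominator (the num_max/num_min handling is omitted, and the continued-fraction approach is my assumption; since it only walks the convergents and skips semiconvergents, it returns a good approximation, not always the nearest one):

    #include <cstdint>

    struct rational64 { std::int64_t num, den; };

    // a close fraction to num/den with denominator <= den_max, via
    // continued-fraction convergents; assumes num >= 0, den > 0
    rational64 round_den(std::int64_t num, std::int64_t den,
                         std::int64_t den_max) {
        std::int64_t p0 = 1, q0 = 0;         // convergent k-1
        std::int64_t p1 = num / den, q1 = 1; // convergent k
        std::int64_t a = num % den, b = den; // Euclid state
        while (a != 0) {
            std::int64_t t = b / a;          // next partial quotient
            std::int64_t p2 = t * p1 + p0, q2 = t * q1 + q0;
            if (q2 > den_max) break;         // next convergent too fine
            p0 = p1; q0 = q1; p1 = p2; q1 = q2;
            std::int64_t r = b % a; b = a; a = r;
        }
        return rational64{p1, q1};
    }

    // e.g. round_den(314159, 100000, 1000) yields 355/113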