
"Peder Holt" <peder.holt@gmail.com> wrote in message news:4c1c5632050924115073d9251c@mail.gmail.com...
What should we do about the accuracy of double_ operations?
The implementation of plus, minus, times and divide mimics the behaviour of the runtime equivalent, double. This means that the mantissa is truncated from 61 to 52 bits for every fundamental operation. As a result, complex functions such as sine and exponential will differ from their runtime counterparts unless a specialization is made for double_. The problem would disappear if we allowed calculations with double_ to be more accurate than calculations with double. Is this a problem?
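(For illustration only, not part of double_ itself: the runtime analogy below shows how rounding after every operation can change a result compared with keeping wider intermediates and rounding only once at the end. It evaluates a short Taylor series for sin(x) twice, once entirely in double and once in long double; treating long double as an 80-bit extended type is an assumption that holds on x86 but not everywhere, and the final difference may be zero or an ulp or two depending on the platform.)

#include <cstdio>

// Evaluate sin(x) ~= x - x^3/3! + x^5/5! - x^7/7! + x^9/9!
// carrying whatever precision the type T keeps between operations.
template<class T>
T sin_series(T x)
{
    T term = x;
    T sum  = x;
    for (int n = 1; n <= 4; ++n) {
        term *= -x * x / T((2 * n) * (2 * n + 1));
        sum  += term;
    }
    return sum;
}

int main()
{
    double x = 0.5;
    double per_step = sin_series<double>(x);              // rounded to 52 mantissa bits at every step
    double widened  = double(sin_series<long double>(x)); // extra bits kept, rounded once at the end
    std::printf("per-step rounding : %.17g\n", per_step);
    std::printf("wide intermediates: %.17g\n", widened);
    std::printf("difference        : %.17g\n", per_step - widened);
    return 0;
}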
There is never-ending discussion on comp.lang.c++.moderated regarding differences between runtime floats on various platforms. IMO this is an ideal opportunity to create a platform-independent floating point type, IOW one with an exact length of mantissa and exponent specified, and with a consistent policy on rounding etc. I think this is how it is done in Java, though the only link I can find is: http://www.concentric.net/~Ttwang/tech/javafloat.htm

Other question is ... what sort of rounding do you use?

Also ... I'm assuming it is simple enough to program the length of mantissa and exponent? By making them length adjustable you would be able to regain platform dependence via typedefs where required, thus providing the best of both worlds.

regards
Andy Little
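(A rough sketch of the kind of interface being suggested here; the names float_, round_to_nearest_even and so on are made up for illustration and are not taken from any existing library.)

#include <cstddef>

// Rounding policies (just tags in this sketch).
struct round_to_nearest_even {};
struct round_toward_zero {};

// A floating point type parameterised on mantissa width, exponent width
// and rounding policy.  The arithmetic itself is omitted; the idea is that
// plus, minus, times and divide would round every result back to
// MantissaBits according to RoundingPolicy.
template<
    std::size_t MantissaBits,
    std::size_t ExponentBits,
    class RoundingPolicy = round_to_nearest_even
>
struct float_
{
    static const std::size_t mantissa_bits = MantissaBits;
    static const std::size_t exponent_bits = ExponentBits;
};

// Exact, platform-independent formats:
typedef float_<23, 8>  ieee_single_;    // IEEE 754 single (23 explicit mantissa bits)
typedef float_<52, 11> ieee_double_;    // IEEE 754 double (52 explicit mantissa bits)

// Platform dependence regained via a typedef where required, e.g. an
// 80-bit extended double on x86:
typedef float_<64, 15> x86_extended_;

With every intermediate result rounded back to MantissaBits under the chosen policy, the same computation would give the same answer on every platform, while the platform-specific typedefs recover native behaviour where that is what is wanted.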