
On Sat, Jan 23, 2010 at 10:25 AM, Thomas Klimpel <Thomas.Klimpel@synopsys.com> wrote:
I have to admit that I read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" only superficially: http://www.engrng.pitt.edu/hunsaker/3097/floatingpoint.pdf http://docs.sun.com/source/806-3568/ncg_goldberg.html
I also have to admit that I write code with floating-point-driven logic from time to time, when it seems unavoidable to me, and that this code often has bugs that are only discovered months after the code is finished. But I have never found a bug in such code that stemmed from a fundamental problem with floating point; it was always just one of the "usual" floating-point traps I keep overlooking by mistake.
I remember working with an app that used doubles to store monetary amounts. The rounding errors inherent in double precision didn't show up until we started aggregating enough money, and accounting for the resulting discrepancies was not a fun problem to solve.

When I try to break in someone who is new to this problem, it's usually with a simple exercise:

    double dime = 0.1, ten_dimes = 1.0, pocket = 0.0;
    for (int i = 0; i < 10; i++)
        pocket += dime;
    assert(pocket == ten_dimes); // fails: 0.1 has no exact binary representation

I then follow that up with a similar exercise involving 0.3333 (base 10), to help them understand why the prior example doesn't work: just as 1/3 cannot be written exactly in decimal, 1/10 cannot be written exactly in binary. So far, whenever a guaranteed level of precision is needed (such as with money), we always end up working with integer types.
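To make the integer approach concrete, here is a minimal sketch, assuming money is held as a 64-bit count of the smallest currency unit; the Cents alias and the amounts are purely illustrative, not from the app described above:

    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    // Illustrative alias: money as an integer count of cents.
    using Cents = std::int64_t;

    int main() {
        const Cents dime = 10;  // $0.10, stored exactly
        Cents pocket = 0;
        for (int i = 0; i < 10; ++i)
            pocket += dime;
        assert(pocket == 100);  // holds: integer addition is exact
        std::printf("$%lld.%02lld\n",
                    static_cast<long long>(pocket / 100),
                    static_cast<long long>(pocket % 100));
        return 0;
    }

The same ten additions that miss 1.0 with doubles land exactly on 100 cents here, which is why an integer representation sidesteps the aggregation discrepancies entirely (up to the point where totals overflow the integer type).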