
In practice, nothing can be assumed to be exactly rounded once it has been through a reasonable set of optimizations, since the compiler gets to choose when values move in and out of the 80-bit registers.
Ah, that's a whole other issue: having an extended 80-bit double really screws things up, because you can get double rounding of a result, pushing it off by one bit. Machines that lack that data type don't have that problem, and neither, I believe, does an Intel FPU forced into 64-bit mode. I'm not entirely sure, but I believe that the AMD64 model effectively deprecates the old x87 80-bit registers in favour of 64-bit SIMD (SSE) registers, so again the problem goes away there.
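
For the curious, here is a small C sketch of that double-rounding effect; the constants are my own, chosen so that the exact sum falls just above a halfway case for doubles, but the deciding 2^-78 tail is discarded when the result is first rounded to the x87's 64-bit extended significand. On 32-bit x86 with GCC you would typically see the two different answers by building with -mfpmath=387 versus -msse2 -mfpmath=sse.

    #include <stdio.h>

    int main(void) {
        /* Exact sum is 1 + 2^-53 + 2^-78.
         * Rounded once, straight to double: 1 + 2^-52 (the 2^-78 tail breaks the tie upward).
         * Rounded first to 80-bit extended: the 2^-78 tail is lost, leaving the exact
         * halfway case 1 + 2^-53, which then rounds to even, i.e. to 1.0. */
        volatile double a = 1.0;
        volatile double b = 0x1.0000008p-53;   /* 2^-53 + 2^-78 */
        double sum = a + b;

        printf("a + b = %a\n", sum);   /* 0x1.0000000000001p+0 or 0x1p+0, one bit apart */
        return 0;
    }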
Actually I have not read it in its entirety, though I will do that shortly.
Good luck; it's a tough read in places, but very useful.
I have skimmed it and read similar things in the past. However, I couldn't find any mention of determinism in that document. There are plenty of discussions of non-portability, but the determinism question is separate. Specifically, I've been assuming the following:
If I have a function that accesses no global source of nondeterminism (e.g., global variables, threads, etc.), and I compile it once into a separate translation unit from whatever calls it (to avoid inlining or other interprocedural weirdness), and call it twice on the same machine at different times with exactly the same bits as input, I will get the same result.
Correct.
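
(For concreteness, a minimal sketch of that kind of check, assuming a hypothetical function f defined in its own translation unit, say f.c, so the caller cannot inline or constant-fold it; the bit patterns are compared with memcmp rather than ==, so NaN payloads and signed zeros are checked exactly.)

    /* main.c */
    #include <stdio.h>
    #include <string.h>

    double f(double x);   /* hypothetical; defined in f.c, e.g. return x * x + 1.0; */

    int main(void) {
        double x = 0.1;            /* the same input bits on both calls */
        double r1 = f(x);
        double r2 = f(x);

        /* Compare raw bit patterns, not values, so that NaN payloads
         * and -0.0 vs +0.0 are also required to match. */
        if (memcmp(&r1, &r2, sizeof r1) == 0)
            printf("bit-identical results\n");
        else
            printf("results differ\n");
        return 0;
    }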
I also usually assume that the compiler is deterministic given the same set of optimization flags on the same machine with the same environment.
Yep, but the IEEE standard is much stronger than that: you will get exactly the same result from the same input on different machines and/or architectures. In practice, certain optimisations can mess things up, as can the 80-bit double-rounding problem, but we're remarkably close to that result even now. Of course, this assumes you don't make any standard library calls, since the quality of implementation of exp/pow etc. can vary quite a bit.

John.
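
(A rough illustration of that last point, as my own sketch: IEEE 754 requires +, -, *, / and sqrt to be correctly rounded, so their bit patterns should agree on conforming implementations, whereas exp and pow come from the maths library and may legitimately differ in the last bit or two between platforms. Printing results in hex makes such differences easy to spot when comparing machines.)

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        volatile double x = 2.0;

        /* Required by IEEE 754 to be correctly rounded:
         * should be bit-identical on any conforming platform. */
        printf("sqrt(2) = %a\n", sqrt(x));

        /* Library function: the last bit can vary between libm implementations. */
        printf("exp(1)  = %a\n", exp(1.0));
        return 0;
    }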