
"Robert Ramey" <ramey@rrsd.com> writes:
David Abrahams wrote:
"Robert Ramey" <ramey@rrsd.com> writes:
c) "The IEEE standard strongly recommends that implementations allow trap handlers to be installed." C++ doesn't permit this.
Incorrect. C++ absolutely does permit implementations to allow trap handlers to be installed. C++ simply does not require it.
My basis for citing this is page 822 of "The C++ Programming Language" by Stroustrup, copyright 2000, reprinted May 2003 with corrections. It says, "In particular, underflow, overflow and division by zero do not throw standard exceptions". If that's wrong, incomplete or out of date, I would be curious to know about it.
What do you want me to tell you about it? B.S. was probably writing colloquially, as in "there is no guarantee that the implementation will throw a standard exception."
It seems to comport with my personal experience with C++ numeric operations.
That means nothing about what implementations are allowed to do, and I'm pretty sure I can set VC++ up to throw a C++ exception in these cases. Yep, there it is: http://tinyurl.com/9my88 (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dv_vstechar...)
Feel free to expand upon this.
No, I'm not gonna expand upon it. Don't make me do the legwork; get a copy of the standard and read what it says. Describing to you what's plainly written in the standard until you believe me is a big time and bandwidth waster.
d) "Another ambiguity in most language definitions concerns what happens on overflow, underflow and other exceptions. The IEEE standard precisely specifies the behavior of exceptions, and so languages that use the standard as a model can avoid any ambiguity on this point. " But C++ doesn't permit exceptions to be thrown in these instances.
Incorrect. Exceptions can be thrown anywhere that undefined behavior is specified. Overflow, underflow, and divide-by-zero all induce undefined behavior.
Hmmm. I suppose that anything can happen when undefined behavior is specified.
Yes, that's intentional latitude for the implementation to be able to behave gracefully or not, as the situation demands.
So writing a program that depends upon an undefined operation yielding a NaN would be a bad idea - wouldn't it?
No. If your implementation says it is IEEE-754 compliant, as many are, it's a perfectly good idea. Just like it's a perfectly good idea to depend on pthreads if you know your implementation is on POSIX, or to depend on the presence of 64-bit integers if your implementation tells you it supports long long, or...
b) is just a special case of a). I will agree that eliminating undefined floating point behaviors will make C++ more predictable.
Until one of the above (or maybe something else) is done. There can really be no unambiguous resolution to the problem of passing results from undefined operations from one machine to another.
Of course there can be. All you need to do is write a specification for it that describes what happens in all cases, and it will be unambiguous. If you can do this for ints that have nonportable values greater than 32767, you can do it for floats and doubles, too.
Besides writing such a specification, wouldn't C++ vendors have to agree to implement it?
No, we're talking about a specification for serialization. That's not the job of the C++ vendor. The C++ vendor already tells you everything you need to know, e.g. "I support quiet NaN (or not)" and "here's how to produce a quiet NaN if I support them," etc.
Obviously, I believe that the adoption of b) above would result in fewer programs with hidden bugs.
That's almost certainly wrong. Floating point divide-by-zero is almost never due to a program bug. And you can get the same effects when dividing by a nonzero number if the result can't be represented.
The kind of situation I'm thinking of is more like the following. I've got a program which among its operations is a matrix inversion. The program correctly implements the chosen algorithm. Now I load a near-singular matrix and invoke the matrix inversion operation. The sequence of operations results in over/underflows in some intermediate results. No exception is thrown but some NaNs are propagated through the calculations. The final result matrix may or may not have one or more NaNs. So now I have a result that is wrong but do not know it and have no way of knowing it.
If the implementation supports NaNs, of course you do. Check to see if there are NaNs in the matrix. This is no different from a calculation on ints that may have produced intermediate values greater than 32767, except that the condition is easier to detect because NaNs are sticky. In this regard, the usual implementation of floating point math is much less error-prone than the usual implementation of integer math.
In FORTRAN this was never a problem as the program aborts at the first overflow/underflow or whatever.
I guarantee you the FORTRAN spec doesn't say that the program will abort upon "overflow/underflow or whatever." FORTRAN was probably the first language to ever implement IEEE-754. Just google for "fortran nan" and you'll see what I mean.
What am I expected to do here? I could recode the matrix inversion to check each intermediate result to see if it's a NaN?
No, NaNs propagate into every calculation they touch, so if you got them in the matrix, you'll see them in the output. And if you multiply that matrix by a vector, the resulting non-NaN elements will still be meaningful (provided the original matrix was well-conditioned, which is a whole other matter).
I can't imagine that's what I'm expected to do. How do people handle this now?
I am not a numerics expert, but I know enough to understand that they do handle it in predictable ways, and that doing so is important to them. Why don't you do a little research yourself? I'm sure a few well-aimed web searches will yield a wealth of information. -- Dave Abrahams Boost Consulting www.boost-consulting.com