
Chad Nelson wrote:
On 03/30/2010 11:45 AM, Peter Dimov wrote:
Which is only a problem if you're doing something that produces a NaN. In XInt, that's rare, and all such cases are very carefully documented.
Right. What is the purpose of the NaN? Its function is to delay the exception until the first use of the result, making the point of the erroneous calculation harder to trace. In what scenario is this a good thing?
If you ignore NaN results from functions that return them, then yes, that is the result. If you use them properly, and check for NaN as soon as any function that can return one returns, you will know the exact line that it came from.
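For concreteness, the discipline being described looks something like this. It is a toy sketch, not XInt's actual interface; `big_int`, `divide_nothrow`, and `is_nan` are invented names standing in for whatever the library really provides:

    #include <cstdio>

    // Toy stand-in for a big-integer type; the NaN is just a flag here,
    // not XInt's actual representation.
    struct big_int {
        long value;
        bool nan;
        bool is_nan() const { return nan; }
    };

    // Nothrow-style division: reports a zero denominator as a NaN result
    // instead of throwing.
    big_int divide_nothrow(big_int a, big_int b) {
        if (b.value == 0) return big_int{0, true};
        return big_int{a.value / b.value, false};
    }

    int main() {
        big_int a{42, false}, b{0, false};
        big_int q = divide_nothrow(a, b);
        if (q.is_nan()) {                    // checked immediately, so the
            std::puts("division failed");    // failing call site is this line
            return 1;
        }
        std::printf("%ld\n", q.value);
    }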
So, in other words, the purpose of the NaN is to be a "throwing error code". You are supposed to always check for it, and if you omit a check, it throws when someone looks at it.
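In code, the semantics under discussion would be roughly this (again with invented names, just to pin down what "throws when someone looks at it" means):

    #include <cstdio>
    #include <stdexcept>

    // Sketch of a "throwing error code": the result carries a failure
    // flag and throws only when the value is finally used.
    class checked_int {
        long value_;
        bool nan_;
    public:
        explicit checked_int(long v) : value_(v), nan_(false) {}
        static checked_int nan() { checked_int r(0); r.nan_ = true; return r; }
        bool is_nan() const { return nan_; }  // the check you must not omit
        long get() const {                    // "looking at" the result
            if (nan_) throw std::overflow_error("use of NaN result");
            return value_;
        }
    };

    int main() {
        checked_int r = checked_int::nan();   // stands in for a failed op
        try {
            std::printf("%ld\n", r.get());    // omitted check: throws here,
        } catch (const std::overflow_error& e) { // far from the failed call
            std::puts(e.what());
        }
    }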
In what scenario is this a bad thing?
In general, but this leads us into the generic exceptions vs error codes debate. Error handling philosophies aside, in this specific case, I don't see how checking the result of every operator/ for NaN is in any way better, more readable, or more expressive than checking the denominator for being zero beforehand, but to each his own.
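Side by side, with plain built-in arithmetic standing in for the big-integer type, the two styles differ only in where the same check sits:

    #include <cstdio>

    int main() {
        long numerator = 42, denominator = 0;

        // Style A: check the denominator beforehand; the error is
        // handled where the inputs are still in hand.
        if (denominator == 0)
            std::puts("bad input: zero denominator");
        else
            std::printf("%ld\n", numerator / denominator);

        // Style B, the NaN approach, would read (with a NaN-capable type):
        //
        //   result = numerator / denominator;
        //   if (result.is_nan()) { /* handle it here instead */ }
        //
        // Same error, same handling; it has merely moved after the call.
    }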
I'm sorry you don't see the logic behind it. I do, and n1744 (the "standardese" for the large-integer library proposal) specifies it, on page two: "all integer operations which return a (temporary) integer and many member operators may fail due to insufficient memory. Such errors are reported as overflow_error exceptions (as available memory defines the range of the integer class)."
I see the logic, I just don't agree with it. Term abuse aside, the correct response to std::bad_alloc (making more memory available, if possible) typically differs from the correct response to std::overflow_error. If you insist on masking std::bad_alloc, it would be better to choose a distinct exception type, rather than reporting it as overflow_error, so that people can still treat it as a bad_alloc.
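One way to do that, sketched with a hypothetical name (`xint_bad_alloc` is not from the proposal): derive the distinct type from std::bad_alloc itself, so generic out-of-memory handlers still catch it.

    #include <new>      // std::bad_alloc
    #include <cstdio>

    // Hypothetical: a distinct type for "ran out of memory while growing
    // an integer", derived from std::bad_alloc so existing handlers that
    // catch bad_alloc (and perhaps free memory and retry) still see it.
    struct xint_bad_alloc : std::bad_alloc {
        const char* what() const noexcept override {
            return "integer operation failed: insufficient memory";
        }
    };

    int main() {
        try {
            throw xint_bad_alloc();          // stands in for a failed big-int op
        } catch (const std::bad_alloc& e) {  // reached: treated as a bad_alloc
            std::puts(e.what());
        }
    }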