
Martin Bonner wrote:
Robert Ramey wrote:
Geoffrey Irving wrote:
Arguing that 1/0 should crash because that's what happens in "math" is the same as arguing that 2*max_int should crash because that's what happens in math...unless you have modulo arithmetic.
True - I would argue that as well
Except that 2*max_int doesn't crash in mathematics. If we are talking about the integers, then depending on /exactly/ which set of integers we are talking about, either there isn't a value 'max_int', or it is positive infinity (and doubling it leaves it unchanged). If we are talking about modulo arithmetic, then doubling the largest possible integer is perfectly well defined - it's just that the result is smaller than the original value.
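For instance, C++'s own unsigned arithmetic is already defined modulo 2^N, so this behavior is easy to demonstrate (the output comments assume a 32-bit unsigned int):

    #include <iostream>
    #include <limits>

    int main() {
        // Unsigned arithmetic in C++ is defined modulo 2^N, so doubling the
        // largest value is perfectly well defined - the result just wraps
        // around to something smaller than the original.
        unsigned int max_uint = std::numeric_limits<unsigned int>::max();
        std::cout << max_uint << '\n';      // 4294967295
        std::cout << 2 * max_uint << '\n';  // 4294967294 - wrapped, not crashed
    }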
This is a little bit off topic - but not too far.
From "The C++ Programming Language" by Stroustrup, page 122:
"This [example] will (eventually) try to increase i past the largest integer. What happens is undefined, but typically the value "wraps around" to a negative number" Of course this is extremely common behavior and many programs depend upon it. Is this a good thing? Hmmm - I never questioned it but now I'm beginning to wonder. In some ways its even worse than the situation with floating point as we get a value which is indistinguishable from a legitimate integer and we lose the "overflow" characterization. Now I'm thinking we need "safe_integer" as well as "safe_float". I can hear the screaming. At the bottom of all this is really different ways of looking at what we do when writing code. There are at least two ways. a) One can think of himself as writing a set of instructions for a machine to execute to achieve a desired result. To do this, one constantly keeps in mind the fact that a real machine is going to execute the code and is continually consious of things like overflows, Nan's etc. b) Another way is to consider a programming language as a means to express abstract ideas like mathmatics. Here the programmer doesn't think of the machine. He defines the behavior of his program by the composition of other components each of whose behavior is well defined. Its more like construction of a mathematical proof rather than the construction of a machine. C++ permits either view and there exist working programs which reflect both views. I would argue that a) would be more likely to produce more fragil and less portable programs. (Though they might be faster). While b) would produce more robust and more portable programs. Actually b) seems these days to be reflected in scripting languages which are considered "easier to use" because (in my view) they don't really even support the view a). One (literally) man's opinion. Robert Ramey