
I could be way off base here, but I assumed the question related to the use of try/catch blocks, which are very expensive.
What is that claim based on? The old myth saying exceptions are slower than return values, while it's actually the opposite?
Hmm. The last time I cared about the efficiency of exception handling, I was programming in C#, which takes a severe performance hit every time it enters a try block. My cursory research backs you up: the performance hit in C++ is fairly minor unless an exception is actually thrown. Sorry about assuming something instead of looking it up.

What is that supposed to mean? C++ has no notion of managed/unmanaged code.
Even when considering the Microsoft extensions, I still don't see what
exceptions have to do with managed code. (Since you compare exceptions to "unmanaged" code, you must consider exceptions themselves "managed".)
I wasn't using "managed" in the technical sense, only to indicate that there's some additional code laid down on the stack inside a try block that's used to trace exceptions in a way that doesn't happen otherwise. It's obviously not managed in the sense that the code starts to perform boundary checking on arrays or whatnot.
The divide-by-zero question is easy to answer: unless one of the
values is volatile, it's more efficient to check the denominator before the division than to put the division in a try/catch block.
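A minimal sketch of that check-first approach (the function name and exception type are my choices, not from the thread): the common path costs one compare-and-branch, and no exception machinery is exercised unless the denominator really is zero.

```cpp
#include <stdexcept>

// Check the denominator before dividing; throw only in the error case.
int safe_div(int n, int d) {
    if (d == 0)
        throw std::domain_error("division by zero");
    return n / d;
}
```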
I don't understand how not using exceptions allows better efficiency.
If you want to detect the error, you have to make a test in both cases.
I think I was confusing myself. Arithmetic errors aren't handled using exceptions in C++, so code like

  try { ... } catch (Exception e) { throw DivideByZero; }

won't work if the try block contains just the division. I was thinking of comparing that to

  if (d == 0) throw DivideByZero; else { ... }

which would be less silly and more efficient, but I have seen the former, especially in Java. In this case, I think I'm supposed to be comparing the second example to something that doesn't check for or throw errors at all, which seems like bad practice to me, but is how ordinary integer operations are handled. Right? Am I still crazy?

Regards,
Hugh
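The two fragments above can be sketched side by side (identifiers are illustrative, not from the thread). The key difference: in Java, integer division by zero throws ArithmeticException, so wrapping the division in try/catch works there; in standard C++ it is undefined behavior and raises no C++ exception, so the catch in the first pattern can never fire for the division itself.

```cpp
#include <stdexcept>

// Pattern 1, the Java-style idiom: rely on the division itself to signal
// the error. In standard C++ this is broken -- dividing by zero is
// undefined behavior, not an exception, so the catch is dead code here.
int pattern1(int n, int d) {
    try {
        return n / d;  // UB in C++ when d == 0; nothing is thrown
    } catch (...) {
        throw std::domain_error("DivideByZero");  // never reached for d == 0
    }
}

// Pattern 2, the explicit test: portable, and no costlier than the branch
// any correct version must perform anyway.
int pattern2(int n, int d) {
    if (d == 0)
        throw std::domain_error("DivideByZero");
    return n / d;
}
```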