On Thu, Feb 11, 2016 at 3:32 PM, Gavin Lambert
On 12/02/2016 11:57, Emil Dotchevski wrote:
It appears that you think that C++ exceptions are "unusual" in the same way OS exceptions are unusual: the OS detected something bad going on and raises an exception, as many OSes do for example in the case of dereferencing a null pointer. That's not at all what C++ exceptions are. They are in fact specifically designed to replace the need to return error codes, so that handling errors is simpler, safer and more testable.
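The contrast being described can be sketched briefly; the parsing function and its names below are illustrative, not from the thread:

```cpp
#include <stdexcept>
#include <string>

// Error-code style: every caller must remember to check the return value,
// and the error silently disappears if anyone forgets.
bool parse_port_code(const std::string& s, int& out) {
    try { out = std::stoi(s); } catch (...) { return false; }
    return out > 0 && out < 65536;
}

// Exception style: failure propagates automatically to whatever code can
// actually handle it, and the happy path stays uncluttered.
int parse_port(const std::string& s) {
    int port = std::stoi(s);  // throws std::invalid_argument on non-numeric input
    if (port <= 0 || port >= 65536)
        throw std::out_of_range("port out of range: " + s);
    return port;
}
```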
Perhaps I'm biased by mostly developing on Windows+MSVC, but since that implements C++ exceptions by raising OS exceptions, and OS exceptions can be caught and processed just like C++ exceptions, I don't see any distinction between the two. (Though I'm aware that the OS exception handling mechanism is much more broken on non-Windows.)
It's criminal that MSVC can translate OS exceptions into C++ exceptions. :) It's not that this mechanism is broken on other OSes; other compilers don't do it because it's wrong. I do not recommend turning that MSVC option on.
And yes, I think that exceptions should be reserved for unexpected cases. If a method has a case where it is expected to sometimes not produce a value, then it should use optional<T> or similar.
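A minimal sketch of the optional<T> approach being advocated here (the lookup function and names are illustrative):

```cpp
#include <map>
#include <optional>
#include <string>

// A lookup that is *expected* to sometimes find nothing: the absent case is
// an ordinary outcome, so it is encoded in the return type rather than thrown.
std::optional<std::string> find_user(const std::map<int, std::string>& users,
                                     int id) {
    auto it = users.find(id);
    if (it == users.end())
        return std::nullopt;  // expected, not exceptional
    return it->second;
}
```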
I can see that from your responses, but you're wrong. In C++, you cannot get an exception unexpectedly: the only way to throw an exception in C++ is to use the throw keyword. Contrast this with hardware exceptions, which may be raised by virtually any CPU instruction and have nothing to do with C++ exceptions.
As for shared_ptr::operator*, I know it could be made to throw, but that would be incorrect design. I mentioned shared_ptr to make the point that you're wrong that this kind of design decision cannot be made in generic C++ code. The STL, too, is full of generic functions that throw to indicate a failure, and others that do not, without giving the user a choice. Even at the language level, consider that C++ constructors don't give you the option to return an error code; the only way for them to fail is by throwing. Why? Because that is the correct design.
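The point about constructors can be made concrete; the class below is an illustrative example, not from the thread:

```cpp
#include <stdexcept>

// A constructor has no return value to carry an error code, so throwing is
// its only failure channel -- and it guarantees that no half-constructed
// object is ever visible to the caller.
class Percentage {
public:
    explicit Percentage(int value) : value_(value) {
        if (value < 0 || value > 100)
            throw std::invalid_argument("percentage out of range");
    }
    int value() const { return value_; }
private:
    int value_;
};
```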
The only reason that shared_ptr::operator* does not throw is that the class author decided that this is likely a hot path and the calling code has *probably* already checked for null, so it is more *efficient* to omit the check entirely (and cause undefined behavior if called in violation of that assumption).
He can speak for himself, but I bet that his motivation was to avoid overhead in the extremely common use case when the programmer *knows* that the shared_ptr isn't null. If I have a shared_ptr (that I didn't get from weak_ptr::lock), more often than not it would be a logic error for it to be null, so it would be dumb to check if it is null when dereferencing it.
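That trade-off is commonly expressed with an assert, which documents the precondition in debug builds and costs nothing in release builds; the function below is an illustrative sketch:

```cpp
#include <cassert>
#include <memory>

// When the pointer is *known* non-null by construction, re-checking on every
// dereference is wasted work. The assert documents the precondition and
// disappears when NDEBUG is defined, leaving no runtime branch.
int double_value(const std::shared_ptr<int>& p) {
    assert(p && "precondition: p must not be null");
    return *p * 2;
}
```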
(Although shared_ptr has an additional bias: if the operator* precondition is violated, then even if the assert is omitted it's almost certainly going to immediately cause an OS fault anyway due to a null (or nearly-null) pointer access, which is arguably equivalent to always-throw behaviour.)
Absolutely not. Dereferencing a null pointer in C++ is undefined behavior, which is not at all the same as throwing exceptions or raising OS exceptions. When you throw an exception, the standard specifies exactly what is going to happen, but in the case of undefined behavior all bets are off. Maybe your program will crash, maybe it'll send a nasty email to your boss. :)

Emil
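The distinction drawn here can be shown side by side; the guarded wrapper below is an illustrative sketch, not part of shared_ptr's interface:

```cpp
#include <memory>
#include <stdexcept>

// Defined behaviour: an explicit check that throws. The standard specifies
// exactly what happens, and a try/catch can handle it.
int read_checked(const std::shared_ptr<int>& p) {
    if (!p)
        throw std::logic_error("null shared_ptr");
    return *p;
}

// Undefined behaviour: dereferencing a null shared_ptr directly. No C++
// exception is thrown; a try/catch around this call cannot rescue the
// program, and the standard makes no promise about what happens.
// int read_unchecked(const std::shared_ptr<int>& p) { return *p; }
```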