Christopher Kohlhoff wrote:
The error_code class itself deliberately does *not* imbue zero values with the meaning 'success' and non-zero values with the meaning 'failure'. An error_code simply represents an integer and a category, where the category identifies the source of a particular integer value. The specification of the error_code class carefully avoids making any judgement as to whether a particular value represents success or failure. The construct:
if (ec) ...
does not, in and of itself, mean 'if an error ...'. Instead, operator bool is specified to behave as the ints do, and the above construct should simply be read as 'if non-zero ...'. ... Instead, the correspondence of particular error_code values to success or failure is context specific and is defined by an API.
I don't agree. In the general case,

```cpp
void f( error_code& ec )
{
    ec.clear();

    do_f1( ec );
    if( ec ) return;

    do_f2( ec );
    if( ec ) return;

    do_f3( ec );
}
```

`f` should not need to know what errors are returned by `do_f1`, `do_f2`, `do_f3` in order to check whether the calls succeeded; the whole point of using error_code is that errors from different domains can be handled in a generic manner. This is similar to exceptions: `f` need not know what its callees can throw. Today, `do_f1` is implemented with backend A, so it throws A-flavored exceptions; tomorrow, it switches to backend B and throws B-flavored exceptions. The logic in `f` doesn't change. In the same way, whether `do_f1` returns error codes of category A or category B should not matter to the logic in `f`. There must exist a generic way to test for failure. If `!ec` is not it, well, then `!ec` shouldn't be used, and we need something that is and should be.