On 14/01/2018 18:02, Peter Dimov wrote:
> `f` should not need to know what errors are being returned by `do_f1`, `do_f2`, `do_f3` in order to check whether the calls succeeded; the whole point of using error_code is that errors from different domains can be handled in a generic manner.
> This is similar to exceptions; `f` need not know what its callees can throw. Today, `do_f1` is implemented with backend A, so it throws A-flavored exceptions; tomorrow, it switches to backend B and throws B-flavored exceptions. The logic in `f` doesn't change.
> In the same way, whether `do_f1` returns error codes of category A or category B should not matter for the logic in `f`. There must exist a generic way to test for failure. If `!ec` is not it, well, then `!ec` shouldn't be used and we need something that is and should be used.
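To make that concrete, here's the shape of caller I understand is being described. The `do_fN` bodies below are obviously made up; the only generic thing `f` relies on is the "`!ec` means success" convention:

```
#include <system_error>

// Made-up callees: today they might return backend-A errors, tomorrow
// backend-B errors; f() neither knows nor cares which category it gets.
std::error_code do_f1() { return {}; }
std::error_code do_f2() { return std::make_error_code(std::errc::io_error); }
std::error_code do_f3() { return {}; }

// The only generic thing f() relies on is error_code's contextual
// conversion to bool -- i.e. the "!ec means success" convention.
std::error_code f()
{
    if (auto ec = do_f1()) return ec;
    if (auto ec = do_f2()) return ec;
    if (auto ec = do_f3()) return ec;
    return {};
}
```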
I think Christopher's assertion works in a world where error_codes are never blindly propagated upwards; regardless of whether library X uses library A or library B under the hood, it must only return standard errors or library X errors to its external callers, never library A or B errors. (It's up to the library itself when to do the conversion, but it usually makes the most sense to do it as soon as possible.)

I think the general consensus from the Outcome discussion was that this is not the desired practice, and that error_codes are indeed supposed to be propagated unmodified for the most part. I haven't quite worked out yet where I stand on that; I think I'd prefer it if the error codes were converted but it was still possible to obtain a "history" of an error, somewhat like nested exceptions. But I also recognise that that could have significant performance drawbacks.
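For what it's worth, the "convert at the boundary" style would look roughly like the sketch below; `libx`, its `errc` enum and `translate` are all invented names, and the mapping is deliberately crude:

```
#include <string>
#include <system_error>

namespace libx {

// Hypothetical library-X error vocabulary: external callers only ever
// see these (or standard errors), never backend A's or B's codes.
enum class errc { connection_lost = 1, bad_request = 2 };

class category_impl : public std::error_category
{
public:
    const char* name() const noexcept override { return "libx"; }
    std::string message(int ev) const override
    {
        switch (static_cast<errc>(ev))
        {
        case errc::connection_lost: return "connection lost";
        case errc::bad_request:     return "bad request";
        default:                    return "unknown libx error";
        }
    }
};

inline const std::error_category& category()
{
    static category_impl instance;
    return instance;
}

inline std::error_code make_error_code(errc e)
{
    return std::error_code(static_cast<int>(e), category());
}

// The boundary conversion: translate whatever the backend reported into
// libx's own codes as soon as possible.  The original backend code is
// lost here -- which is exactly why an attached "history" (like nested
// exceptions) would be attractive, and also where the cost would lie.
inline std::error_code translate(std::error_code backend_ec)
{
    if (!backend_ec)
        return {};
    if (backend_ec == std::errc::connection_reset)   // generic comparison
        return make_error_code(errc::connection_lost);
    return make_error_code(errc::bad_request);        // crude catch-all
}

} // namespace libx
```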