The lively debates during the Outcome review show that there is a great deal of interest in solving the problem of error handling in environments where C++ exception handling is unavailable.

Noexcept is a new C++11 library that implements a different approach to solving the same problem. Any feedback is welcome.

https://zajo.github.io/boost-noexcept/

Emil
Thanks for sharing. The library looks interesting.

It is my understanding that the caller can decide whether they want to catch exceptions (with try_), or let the exception pass: in this case you read the value returned from a function directly, which may result in obtaining a "default value" of the returned type. Right?

Regards,
&rzej;
On Mon, Jun 12, 2017 at 1:58 AM, Andrzej Krzemienski via Boost <boost@lists.boost.org> wrote:
Yes, if you do not use exceptions there must be a special return value that signals an error; there is no way around that. So, what you are describing is a failure detected in a function which doesn't (can't?) handle it. Usually (e.g. if the function returns optional<>) it looks like this:

```
if( auto r = f() )
{
    // good, use r.result()
}
else
{
    // error! cleanup and return
    return throw_();
}
```

Emil
On 12/06/2017 09:22, Emil Dotchevski via Boost wrote:
The use of functional throw(), try() and catch() was a design approach rejected very early by me and most who have looked into this problem.

Nobody wants to reimplement exception handling via a library when exceptions are disabled. It's an impoverished experience, and leads to brittle code.

Just enable C++ exceptions if you want exceptions.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
On Mon, Jun 12, 2017 at 2:42 AM, Niall Douglas via Boost <boost@lists.boost.org> wrote:
The use of functional throw(), try() and catch() was a design approach rejected very early by me and most who have looked into this problem.
Nobody wants to reimplement exception handling via a library when exceptions are disabled. It's an impoverished experience, and leads to brittle code.
Can you elaborate? My understanding is that the problem with exception handling is the unpredictability of the performance you'll get. Noexcept directly addresses that issue by not introducing a return type of its own which may or may not get optimized.

It also removes the redundancy of requiring types which already have a useful empty state to be wrapped into something like outcome<>. Nobody would return optional<FILE *> from a function that may fail; they'd just return FILE *, with nullptr signaling failure. Returning outcome<FILE *> is similarly redundant and possibly inefficient.
Just enable C++ exceptions if you want exceptions.
I agree, the question is what to do if you can't.

Emil
2017-06-12 20:07 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org> :
There are a number of expectations people have, or might have, of an error-handling framework:

1. Predictable times.
2. When I forget to check for an error, the computation should not silently proceed.
3. Not polluting the function return type.
4. Explicit control flows.
5. No explicit control flows.
6. Neutrality for some functions, especially those extern "C".
7. Being fast.
8. Being able to carry any payload.

Obviously, a framework cannot guarantee all of these, and trade-offs need to be made. One thing that both exceptions and outcome<>/expected<> have is #2: when you forget that the function might fail, and it fails, the dependent functions will not get called. In case of exceptions this is owing to stack unwinding. In case of outcome<>, it is because the program will not compile. In Noexcept, when I forget to check, the default value is passed to subsequent functions, which may or may not be prepared for that. I think this is what Niall is saying.

Regards,
&rzej;
On Mon, Jun 12, 2017 at 1:15 PM, Andrzej Krzemienski via Boost <boost@lists.boost.org> wrote:
Obviously, a framework cannot guarantee all of these, and trade-offs need to be made. One thing that both exceptions and outcome<>/expected<> have is #2: when you forget that the function might fail, and it fails, the dependent functions will not get called.
What do you mean "dependent functions"?
In case of exceptions this is owing to stack unwinding. In case of outcome, it is because the program will not compile.
Can you post an actual example so we're not talking in the abstract?
2017-06-12 22:28 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org> :
Can you post an actual example so we're not talking in the abstract?
```
int job(int x)
{
    int y = f(x); // f might fail, but I forgot
    int z = g(y); // g might fail, but I forgot
    return h(z);
}
```

If for some reason I have forgotten that f() might fail (and signal failure), does function g() get called? In case of exceptions: no, because if f() throws, then g() is never called. In case of outcome<>: no, because f() returns `outcome<int>`, so the above will fail to compile, and I will be forced to rewrite function job(). Does that make sense?

Regards,
&rzej;
On Mon, Jun 12, 2017 at 1:50 PM, Andrzej Krzemienski via Boost <boost@lists.boost.org> wrote:
If you choose to write this code, in Noexcept you will get an assert. Even in NDEBUG builds, let's not forget that if f() fails, y is invalid and g() should reject it (think of it as e.g. fread getting a 0 for its FILE * parameter).

Perhaps your point is that ideally f() shouldn't return int but a different type with special semantics. Okay. OTOH let's say you're returning a file descriptor. I'd think that optional<int> is overkill in this case; in fact you're probably obfuscating the fact that the int is a FD -- what does it mean if you get a -1 FD in an optional<int> (or for that matter in an outcome<int>)? It's redundant, and after all it's a safe assumption that you won't get silent failures from functions if you pass -1 for the FD.

That said, Noexcept works great with optional, too:

```
int job(int x)
{
    optional<int> y = f(x); // f might fail, but I forgot
    int z = g(y); // if g takes int, you'll get a compile error, just like with Outcome
    return h(z);
}
```

(If you did y.result() it would throw in case of failure (optional<> or not), so if you call .result() you can't accidentally pass the bad value to g.)
2017-06-12 23:26 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org> :
If you choose to write this code, in Noexcept you will get an assert. Even in NDEBUG builds,
At which point? Where is the assertion located?
let's not forget that if f() fails, y is invalid and g() should reject it (think of it as e.g. fread getting a 0 for its FILE * parameter).
I absolutely agree that it is wrong to pass an invalid `y` to `g()`; I think we are just disagreeing on who is responsible for preventing this from happening. When I write function `job` (from the example above), I should make sure that it does not happen. But I might simply forget about it for some reason. The question now is: does the framework help me if I forget?

In case of C++ exceptions, the answer is "yes": the call to `g()` is automatically skipped if I forget. In case of outcome<>, the answer is "yes": the compiler refuses to compile the code. You say that in case of Noexcept there is an assertion, but I fail to see where you could put this assertion so that a call to `g()` is prevented. Maybe you could elaborate?
Perhaps your point is that ideally f() shouldn't return int but a different type with special semantics. Okay.
OTOH let's say you're returning a file descriptor. I'd think that optional<int> is an overkill in this case, in fact you're probably obfuscating the fact that the int is a FD -- what does it mean if you get -1 FD in an optional<int> (or for that matter in an outcome<int>)? It's redundant, and after all it's a safe assumption that you won't get silent failures from functions if you pass -1 for the FD.
A valid concern. A solution to that has been provided by Vicente: just use a type `uncertain<T>`, which is just a `T` inside, but because it is a different type, you cannot silently use it in place of `T`. But this would compromise your other goal: that some functions want to remain exception-neutral.

C++ exceptions can both prevent invalid invocations of `g()` and remain exception-neutral. This is fine. `outcome<>` prevents invalid invocations of `g()` but compromises exception neutrality (still fine by some standards, and sometimes desirable). In case of Noexcept, you provide neutrality, but compromise the guarantee that invalid invocations of `g()` are prevented. This might be considered a wrong trade-off.

Regards,
&rzej;
On Tue, Jun 13, 2017 at 12:25 AM, Andrzej Krzemienski via Boost <boost@lists.boost.org> wrote:
let's not forget that if f() fails, y is invalid and g() should reject it (think of it as e.g. fread getting a 0 for its FILE * parameter).
I absolutely agree that it is wrong to pass an invalid `y` to `g()`, I think we are just disagreeing on who is responsible for preventing this from happening.
I think that I don't disagree at all, but I am pointing out that 1) no matter what, g() *should* assert on its preconditions anyway, and 2) if there is a danger for g() to silently not fail when given bad data, then the choice of return type for f() is poor. For example, usually I'm not terribly worried about the following bug escaping undetected:

```
shared_ptr<foo> f();
void g( foo & );
....
g(*f()); //bug, no error check
```

That's because dereferencing an empty shared_ptr fails dramatically -- in fact (if exceptions are disabled) just as dramatically as if I had:

```
result<shared_ptr<foo> > f();
void g( foo & );
....
g(*f().value()); //.value checks for errors
```

You say that in case of Noexcept, there is an assertion, but I fail to see where you could put this assertion, so that a call to `g()` is prevented. Maybe you could elaborate?
Misunderstanding. The call to g() is not prevented; what's prevented is the attempt of g() to call throw_ if the error from f() wasn't handled.

But again, I agree with all of your concerns; it's just that they're (mostly, not 100%) orthogonal to what Noexcept lets you do. If you feel that you want to put a FILE * into some result<T>, more power to you (do note that since in Noexcept that type doesn't have to transport errors, it can be implemented in 2 lines in terms of optional<T>).
A valid concern. A solution to that has been provided by Vicente: just use type `uncertain<T>`, which is just a `T` inside, but because it is a different type, you cannot silently use it in place of `T`. But this would compromise your other goal: that some functions want to remain exception-neutral.
How does it compromise it? This uncertain<T> must still have an invalid state, a value that is returned when there is an error. If that value is simply uncertain<T>() (which, obviously, it is), you can just return throw_(my_error()) from a neutral function that returns uncertain<T>.
2017-06-13 23:01 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org> :
Misunderstanding. The call to g() is not prevented, what's prevented is the attempt of g() to call throw_ if the error from f() wasn't handled.
Ok.
How does it compromise it? This uncertain<T> must still have an invalid state, a value that is returned when there is an error. If that value is simply uncertain<T>() (which, obviously, it is), you can just return throw_(my_error()) from a neutral function that returns uncertain<T>.
Just to clarify: when I say `uncertain<T>`, I mean something like:

```
template <typename T>
struct uncertain
{
    uncertain(T&& v) : value(std::move(v)) {}
    T value;
};
```

It cannot hold any special value that would signal an error. It is more like an opaque typedef on T. Its only purpose is to signal a compile-time error when I obliviously type:

```
auto y = f(x); // returns uncertain<Y>
g(y);          // error: cannot convert from uncertain<Y> to Y
```

And now you have to decide: either apply your `try_` or some invented `ignore_`.

Regards,
&rzej;
Can you elaborate? My understanding is that the problem with exception handling is the unpredictability of the performance you'll get.
No, not at all. This is a major misunderstanding.

The big problem with exception handling is that it introduces a non-obvious i.e. unwritten control flow path. In the hands of programmers who understand C++ very well, that's fine. Very high quality, predictable-latency code can be written. Indeed I've done it myself many a time -- a few contracts back I wrote a fixed-latency audio processing implementation which delivered flawless audio with sub-4ms latency irrespective of CPU and GPU load on Windows, using the STL and with C++ exceptions enabled.

The problem is the average programmer. If they come along to some code carefully written to keep maximum latency bounded, they will more often than not wreck the code in subtle, very hard to detect ways. Problems can also emerge with STL implementations changing after a compiler upgrade. There is also an argument that auditing exception-throwing code is harder in code with a 10+ year lifespan, as is properly unit testing code under development, since you may have up to twice the execution paths which need testing.

These are the reasons you globally disable C++ exceptions: you chop off a major source of performance uncertainty in a large code base being changed over more than a decade. It's also why you don't want to replace unwritten control flow with another unwritten control flow. You want to make programmers explicitly write down what happens when failure occurs. It makes them think about it properly, cost its performance properly, and lets peer review audit it properly.
Just enable C++ exceptions if you want exceptions.
I agree, the question is what to do if you can't.
You're looking at it in terms of working around a company's C++ design policy. That's the wrong way to look at it.

In large orgs with varying talent, it can make managerial sense to globally disable C++ exceptions: they judge that, for the benefit, the potential cost is not worth it. This is why the majority of finance and games users of C++ globally disable C++ exceptions, and indeed Google and a few others. It's not mostly a technical rationale, though for those who like to write out CPU cycle budgets for hot code paths, for obvious reasons you never permit C++ exception throws and you never call malloc. Globally disabling exceptions at least eliminates some arsehole bringing the former in, and writing a malloc link monkey-patcher routine to prevent linking hot-path objects which call malloc can help enforce the latter. You need to make the CI fail builds where people commit stupid stuff; that makes them back out the commit and try harder.

Niall
On Mon, Jun 12, 2017 at 3:22 PM, Niall Douglas via Boost <boost@lists.boost.org> wrote:
The big problem with exception handling is that it introduces a non-obvious i.e. unwritten control flow path.
Where is this control flow path in Noexcept? If your function returns a value, in Noexcept you must explicitly return a value. I mean, it's how C++ works if you disable exceptions.
In large orgs with varying talent, it can make managerial sense to globally disable C++ exceptions: they judge that for the benefit, the potential cost is not worth it. This is why the majority of finance and games users of C++ globally disable C++ exceptions.
I know; my background is in games. I didn't think of disabling exception handling as an arbitrary decision; the assumption is that this is the best option for the team you have. Maybe people don't want to deal with exception safety, and that's legit.

Still, I fail to see how Noexcept differs from Outcome in this aspect. Semantically the only difference is that Noexcept doesn't force users to use a special template in return types, but that's a good thing. If it's preferable, they can still use a special template, and if they do, it's trivial to design because it doesn't have to transport errors -- Noexcept takes care of that for you. I'll spell it out:

Noexcept + optional<> ≈ Outcome
Still, I fail to see how Noexcept differs from Outcome in this aspect. Semantically the only difference is that Noexcept doesn't force users to use a special template in return types, but that's a good thing. If it's preferable, they can still use a special template, and if they do, it's trivial to design because it doesn't have to transport errors -- Noexcept takes care of that for you.
You *want* APIs to clearly indicate their failure contract. Relying on TLS trickery hides control flow paths. And if people fail to write the check, errors get lost or pop out in the wrong locations.

Forcing a wrapper type to be used also allows [[nodiscard]] to be leveraged, and in the future static analysis to be applied. Neither works with your scheme, which is why I rejected it very early on.

Finally, Rust and Swift have adopted a Result<T, E> model. It is generally viewed as a good design choice for its problem domain. Varying significantly from what the other systems languages are doing needs a very strong rationale.

Niall
On Mon, Jun 12, 2017 at 4:01 PM, Niall Douglas via Boost < boost@lists.boost.org> wrote:
Still, I fail to see how Noexcept differs from Outcome in this aspect. Semantically the only difference is that Noexcept doesn't force users to use a special template in return types, but that's a good thing. If it's preferable, they can still use a special template, and if they do, it's trivial to design because it doesn't have to transport errors -- Noexcept takes care of that for you.
You *want* APIs to clearly indicate their failure contract.
Okay.
Relying on TLS trickery hides control flow paths.
How? Which control path is hidden by a return throw_(error())?
And if people fail to write the check, errors get lost or pop out in the wrong locations.
So, use optional<> with Noexcept.
Forcing a wrapper type to be used also allows [[nodiscard]] to be leveraged, and in the future static analysis to be applied. Neither works with your scheme, which is why I rejected it very early on.
Yes, I understand the argument for a wrapper type, what I've demonstrated is that this is a separate issue. Yes, do use a wrapper type if it is appropriate for the reasons you're providing.
Finally, Rust and Swift have adopted a Result<T, E> model. It is generally viewed as a good design choice for its problem domain. Varying significantly from what the other system languages are doing needs to have very strong rationale.
The rationale is that this is a better design choice for C++, or else you're arguing that C++ exception handling, which also doesn't burden return values with having to transport errors, is a bad design choice.
On 13/06/2017 at 01:01, Niall Douglas via Boost wrote:
Still, I fail to see how Noexcept differs from Outcome in this aspect. Semantically the only difference is that Noexcept doesn't force users to use a special template in return types, but that's a good thing. If it's preferable, they can still use a special template, and if they do, it's trivial to design because it doesn't have to transport errors -- Noexcept takes care of that for you. You *want* APIs to clearly indicate their failure contract.
Relying on TLS trickery hides control flow paths. And if people fail to write the check, errors get lost or pop out in the wrong locations.
Forcing a wrapper type to be used also allows [[nodiscard]] to be leveraged, and in the future static analysis to be applied. Neither works with your scheme, which is why I rejected it very early on.
Finally, Rust and Swift have adopted a Result<T, E> model. It is generally viewed as a good design choice for its problem domain. Varying significantly from what the other system languages are doing needs to have very strong rationale.
AFAIK [1], the proposed library and the Swift error handling mechanism are very close. Swift has alternatively also used Result<T,E>, much as we could use expected<T,E>. The main difference I see is that one is library based and the other language based.

In Swift you signal that a function can throw by adding throws to the signature. Swift has builtin optionals, and adding throws is almost like declaring the function to return T? (optional<T>). You cannot call such a function without using try, try! or try?.

IIUC, with Noexcept you cannot require this, as it is a library. However, when the user uses try_ they are able to control whether the call succeeds or fails. The closest way to force it is to use a return type that tells you that there could be errors, such as return_<T>. I would say that if Noexcept required this return_<T> type, it would be like outcome<T>, except that the error is transported using TLS instead of the stack (please let me know if I'm wrong). However, if Noexcept doesn't require a return_<T>, then it is much more difficult to force the use of the try functions. But it still works.

I see advantages in this approach, and I don't know which one is more efficient in the success and failure cases. Some measurements would be more than welcome.

I'd rename Noexcept to ErrorTLS.

In summary: do we want an error handling mechanism in C++ based on Swift error handling ;-) ? Do we want a library that emulates it, such as Boost.Noexcept, in Boost? Do we want monadic error handling in C++, such as Result<T,E>, in Boost? Do we want both in Boost? I believe both merit to be tried.

Vicente

P.S. I've not read the full documentation yet, but this seems promising.

[1] https://www.cocoawithlove.com/blog/2016/08/21/result-types-part-one.html
On Mon, Jun 12, 2017 at 10:54 PM, Vicente J. Botet Escriba via Boost < boost@lists.boost.org> wrote:
However if Noexcept doesn't require a return_<T> then it is much difficult to force the use of the try functions.
In Noexcept, it is not correct to always use try_, for the same reason you don't always use try with exception handling. That is only used if you want to _handle_ errors, not just check for errors; see the second Q&A here: https://zajo.github.io/boost-noexcept/#qanda. Error checking with Noexcept depends on your choice of return type. For example, if your return type is T *, you'd check for 0, if your return type is shared_ptr<T> or optional<T>, you'd check using the conversion to bool.
On 13/06/2017 18:12, Emil Dotchevski wrote:
In Noexcept, it is not correct to always use try_, for the same reason you don't always use try with exception handling. That is only used if you want to _handle_ errors, not just check for errors; see the second Q&A here: https://zajo.github.io/boost-noexcept/#qanda.
Error checking with Noexcept depends on your choice of return type. For example, if your return type is T *, you'd check for 0, if your return type is shared_ptr<T> or optional<T>, you'd check using the conversion to bool.
Is it still legal to have a function return the designated error value without actually setting an error? Perhaps sometimes returning an empty shared_ptr is the successful return. This in turn suggests that it's necessary to call try_ or similar to verify the difference between success and error, assuming that it does so using the TLS state rather than by inspecting the return value. (Or perhaps only inspects the TLS state when the return value is the error value, as a performance optimisation.)
On Tue, Jun 13, 2017 at 1:28 AM, Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 13/06/2017 18:12, Emil Dotchevski wrote:
In Noexcept, it is not correct to always use try_, for the same reason you don't always use try with exception handling. That is only used if you want to _handle_ errors, not just check for errors; see the second Q&A here: https://zajo.github.io/boost-noexcept/#qanda.
Error checking with Noexcept depends on your choice of return type. For example, if your return type is T *, you'd check for 0, if your return type is shared_ptr<T> or optional<T>, you'd check using the conversion to bool.
Is it still legal to have a function return the designated error value without actually setting an error?
It could happen, but note that it has to be explicit: you have to actually return the invalid value, can't happen on its own. Similarly, you can pass 0 for std::error_code to something like outcome<T>. In fairness, the latter is perhaps more explicit, but it could still happen (I suppose, I may be wrong about this.) Regardless, the correct way to check for errors is to see if the result is valid. This is the price one must pay when not using exception handling.
Perhaps sometimes returning an empty shared_ptr is the successful return.
I can't imagine a function which returns shared_ptr<T>, and which could fail, returning an empty one in case of success. That said, in that case all it means is that you can't use throw_ directly in a return expression; or you could partially specialize throw_return<> for a specific type of shared_ptr, if that's appropriate. In an earlier version I had the main throw_return template undefined, requiring users to specialize it in order to use throw_ in a function returning a given type, but I got convinced that it is not worth it.
This in turn suggests that it's necessary to call try_ or similar to verify the difference between success and error, assuming that it does so using the TLS state rather than by inspecting the return value. (Or perhaps only inspects the TLS state when the return value is the error value, as a performance optimisation.)
try_ only inspects the TLS state. Similarly, try...catch also doesn't care about return values. Granted, if you throw you can't end up using an invalid value, but that's just the nature of the beast when you can't throw: you must check for bad results, and perhaps you should in some cases use a wrapper. I am not arguing against that, but it is a good thing that Noexcept doesn't require it, since a wrapper is almost certainly an overkill if you're returning, say, shared_ptr<T>.
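Since try_ is described as inspecting only the TLS state, a toy model of that contract might look like this (try_call, try_result, and tls_error are invented names for illustration; the real library's API differs): the result of the guarded call converts to bool based solely on the thread-local slot, never on the callee's return value.

```cpp
#include <optional>
#include <string>
#include <utility>

thread_local std::optional<std::string> tls_error; // stand-in for Noexcept's TLS slot

struct try_result
{
    // Success is judged by the TLS slot alone, mirroring
    // "try_ only inspects the TLS state".
    explicit operator bool() const { return !tls_error.has_value(); }

    // Consume ("handle") the pending error, clearing the slot.
    std::string catch_all()
    {
        std::string e = tls_error.value_or("");
        tls_error.reset();
        return e;
    }
};

template <class F>
try_result try_call(F&& f)
{
    std::forward<F>(f)(); // the callee may set tls_error
    return try_result{};
}
```

Note that the callee's return value never enters the picture; the model checks the TLS slot, which is exactly why an invalid return value alone cannot be mistaken for an error here.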
I can't imagine a function which returns shared_ptr<T> *and* which could fail, to return an empty one in case of success. That said, in that case all it means is that you can't use throw_ directly in a return expression; or you could partially specialize throw_return<> for a specific type of shared_ptr, if that's appropriate.
On 13/06/2017 21:39, Emil Dotchevski wrote:

private:

    void create_X_if_needed() noexcept
    {
        if (!m_X)
        {
            try
            {
                m_X = make_shared<X>();
            }
            catch (internal_error const& e)
            {
                throw_(e.code());
            }
        }
    }

public:

    shared_ptr<X> get_X_if_enabled() noexcept
    {
        if (feature_X_enabled())
        {
            create_X_if_needed();
        }
        return m_X;
    }

get_X_if_enabled() could return an empty pointer in two cases:

1. If feature_X_enabled() returns false during all calls, this is a successful return with no pointer.
2. If X's constructor throws an internal_error, this is an error code return with no pointer.
Regardless, the correct way to check for errors is to see if the result is valid. This is the price one must pay when not using exception handling.
Isn't the correct way to check for errors (without explicitly catching errors) in this case to call has_current_error()?

Granted in this case get_X_if_enabled doesn't actually need to explicitly check for errors, since the following statements will work regardless of whether there was an error state or not. But say it wanted to do some logging only in the case where create_X_if_needed didn't fail; that would have to be written something like this:

    shared_ptr<X> get_X_if_enabled() noexcept
    {
        if (feature_X_enabled())
        {
            create_X_if_needed();
            if (!has_current_error())
            {
                log("has X");
            }
        }
        return m_X;
    }

The other possibility is to catch and rethrow the error, but that seems more cumbersome and error-prone.

(Related: if the original throw_ was a custom type not derived from std::exception, and catch_<> is used to catch it as a std::exception and throw_ it again, can it still be caught later with a catch_<custom_error>? Does the answer depend on whether throw_() or throw_(e) was used?)
On Tue, Jun 13, 2017 at 4:33 PM, Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 13/06/2017 21:39, Emil Dotchevski wrote:
I can't imagine a function which returns shared_ptr<T> *and* which could fail, to return an empty one in case of success. That said, in that case all it means is that you can't use throw_ directly in a return expression; or you could partially specialize throw_return<> for a specific type of shared_ptr, if that's appropriate.
private:

    void create_X_if_needed() noexcept
    {
        if (!m_X)
        {
            try
            {
                m_X = make_shared<X>();
            }
            catch (internal_error const& e)
            {
                throw_(e.code());
            }
        }
    }
public:

    shared_ptr<X> get_X_if_enabled() noexcept
    {
        if (feature_X_enabled())
        {
            create_X_if_needed();
        }
        return m_X;
    }
get_X_if_enabled() could return an empty pointer in two cases:
1. If feature_X_enabled() returns false during all calls, this is a successful return with no pointer. 2. If X's constructor throws an internal_error, this is an error code return with no pointer.
Regardless, the correct way to check for errors is to see if the result is
valid. This is the price one must pay when not using exception handling.
Isn't the correct way to check for errors (without explicitly catching errors) in this case to call has_current_error()?
Gavin, thank you for this question. Yes, in this case has_current_error() would be correct, since the return value itself can't communicate success or failure.
Granted in this case get_X_if_enabled doesn't actually need to explicitly check for errors since the following statements will work regardless of whether there was an error state or not.
That seems wrong to me. It's difficult to reason about something like this in the abstract, but presumably failing to create_X is a Big Problem while the feature_X being disabled is not. If I've enabled feature_X, I probably don't want it to fail to work silently.
But say if it wanted to do some logging only in the case where create_X_if_needed didn't fail, that would have to be written something like this:
    shared_ptr<X> get_X_if_enabled() noexcept
    {
        if (feature_X_enabled())
        {
            create_X_if_needed();
            if (!has_current_error())
            {
                log("has X");
            }
        }
        return m_X;
    }
The other possibility is to catch and rethrow the error, but that seems more cumbersome and error-prone.
I think it's not cumbersome at all. Just like when using exceptions, with Noexcept the ability to catch_, do some work, then throw_ is an important feature. Consider that in general in this context you might not have the slightest idea what errors may pass through it; so you'd:

    if( auto tr=try_(....) )
    {
        //ok good, do work then return a "good" value
    }
    else
    {
        log(BOOST_DIAGNOSTIC_INFORMATION(*tr.catch_<>()));
        return throw_();
    }

Except it won't work; you've uncovered an omission in the Noexcept API. The problem is that catch_<> will flag the error as handled, and in this case you don't want that. I need to add another function similar to catch_<> which only gets the error without handling it. Not sure what to call it, or maybe instead I can add a member function tr.throw_() which flags the error as unhandled.
(Related: if the original throw_ was a custom type not derived from std::exception, and catch<> is used to catch it as a std::exception and throw_ it again, can it still be caught later with a catch_<custom_error>? Does the answer depend on whether throw_() or throw_(e) was used?)
throw_() is a noop, except that it converts to anything for return throw_(), returning throw_return<R>::value() where R is the return type of the function. throw_(my_error()) will inject std::exception as a base if my_error doesn't already derive from it.
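The throw_return<R>::value() mechanism described here can be sketched as an ordinary customization-point template (the specializations below are illustrative guesses, not the library's actual definitions): it supplies the "invalid" value a function returns alongside the TLS-recorded error.

```cpp
#include <memory>
#include <optional>

// Primary template left undefined for unsupported return types;
// users specialize it per type, as discussed earlier in the thread.
template <class R> struct throw_return;

template <class T> struct throw_return<T*>
{
    static T* value() { return nullptr; } // the "invalid" pointer result
};

template <class T> struct throw_return<std::optional<T>>
{
    static std::optional<T> value() { return std::nullopt; }
};

template <class T> struct throw_return<std::shared_ptr<T>>
{
    static std::shared_ptr<T> value() { return {}; } // empty shared_ptr
};
```

With such a trait in place, `return throw_();` can convert to the right "invalid" value for whatever type the enclosing function returns.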
On 14/06/2017 12:29, Emil Dotchevski wrote:
Except it won't work, you've uncovered an omission in the Noexcept API. The problem is that catch_<> will flag the error as handled, and in this case you don't want that. I need to add another function similar to catch_<> which only gets the error without handling it. Not sure what to call it, or maybe instead I can add a member function tr.throw_() which flags the error as unhandled.
(Related: if the original throw_ was a custom type not derived from std::exception, and catch<> is used to catch it as a std::exception and throw_ it again, can it still be caught later with a catch_<custom_error>? Does the answer depend on whether throw_() or throw_(e) was used?)
throw_() is a noop, except that it converts to anything for return throw_(), returning throw_return<R>::value() where R is the return type of the function.
Ok, I assumed that throw_() was the equivalent of throw; ie. a way to flag a previously-caught exception as unhandled again so that it continues to propagate.

If you're modelling this based on try-catch, that seems like essential functionality -- in code that wants to implement a sort of try-finally without using RAII classes, it's not uncommon to see patterns like this:

    init_something();
    try
    {
        // do something that might throw
    }
    catch (...)
    {
        cleanup_something();
        throw;
    }
    cleanup_something(); // if needed in the success case as well

You might argue (correctly) that RAII is better for this, but the pattern still exists. And it's sometimes used for other things, such as logging, or where the cleanup has to occur in a particular order and you don't want to rely on the order that destructors are called.
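The RAII alternative mentioned above can be as small as a scope guard that runs its cleanup on both the success and failure paths (a sketch with invented names, not a library facility):

```cpp
#include <utility>

// Runs the supplied cleanup when the guard goes out of scope,
// replacing the catch/cleanup/rethrow pattern shown above.
template <class F>
class scope_guard
{
    F cleanup_;
public:
    explicit scope_guard(F f) : cleanup_(std::move(f)) {}
    scope_guard(scope_guard const&) = delete;
    ~scope_guard() { cleanup_(); }
};
```

In the quoted pattern, `scope_guard g([&]{ cleanup_something(); });` placed after init_something() runs the cleanup whether or not an exception escapes, with no catch block needed.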
throw_(my_error()) will inject std::exception as a base if my_error doesn't already derive from it.

I read that; I was wondering how it handled rethrows. Which I guess the answer in the current state is "it doesn't".
On Tue, Jun 13, 2017 at 6:00 PM, Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 14/06/2017 12:29, Emil Dotchevski wrote:
Except it won't work, you've uncovered an omission in the Noexcept API. The problem is that catch_<> will flag the error as handled, and in this case you don't want that. I need to add another function similar to catch_<> which only gets the error without handling it. Not sure what to call it, or maybe instead I can add a member function tr.throw_() which flags the error as unhandled.
(Related: if the original throw_ was a custom type not derived from
std::exception, and catch<> is used to catch it as a std::exception and throw_ it again, can it still be caught later with a catch_<custom_error>? Does the answer depend on whether throw_() or throw_(e) was used?)
throw_() is a noop, except that it converts to anything for return throw_(), returning throw_return<R>::value() where R is the return type of the function.
Ok, I assumed that throw_() was the equivalent of throw; ie. a way to flag a previously-caught exception as unhandled again so that it continues to propagate.
If you're modelling this based on try-catch, that seems like essential functionality -- in code that wants to implement a sort of try-finally without using RAII classes it's not uncommon to see patterns like this:
    init_something();
    try
    {
        // do something that might throw
    }
    catch (...)
    {
        cleanup_something();
        throw;
    }
    cleanup_something(); // if needed in the success case as well
Absolutely. The reason why this fell through the cracks is that with Noexcept you could do this without a try_:

    if( auto r=foo() )
        //success, all good
    else
        return throw_();

It's just the case with try_ that doesn't work right now, but yes, it has to be fixed; it is essential.
On Tue, Jun 13, 2017 at 5:29 PM, Emil Dotchevski <emildotchevski@gmail.com> wrote:
    if( auto tr=try_(....) )
    {
        //ok good, do work then return a "good" value
    }
    else
    {
        log(BOOST_DIAGNOSTIC_INFORMATION(*tr.catch_<>()));
        return throw_();
    }
Except it won't work, you've uncovered an omission in the Noexcept API. The problem is that catch_<> will flag the error as handled, and in this case you don't want that.
This is now fixed: even if you handle an error with catch_<>(), throw_() will "unhandle" it.
2017-06-14 2:29 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org>:
I think it's not cumbersome at all. Just like when using exceptions, with Noexcept the ability to catch_, do_some_work then throw_ is an important feature. Consider that in general in this context you might not have the slightest idea what errors may pass through it; so you'd:
    if( auto tr=try_(....) )
    {
        //ok good, do work then return a "good" value
    }
    else
    {
        log(BOOST_DIAGNOSTIC_INFORMATION(*tr.catch_<>()));
        return throw_();
    }
Except it won't work, you've uncovered an omission in the Noexcept API. The problem is that catch_<> will flag the error as handled, and in this case you don't want that. I need to add another function similar to catch_<> which only gets the error without handling it. Not sure what to call it, or maybe instead I can add a member function tr.throw_() which flags the error as unhandled.
Yes, this would make Noexcept superior to C++ exceptions in this aspect: it would make it easy to identify when you are handling the exception and when you are just modifying it. Call it "augment"? Regards, &rzej;
On Wed, Jun 14, 2017 at 1:57 AM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
I think it's not cumbersome at all. Just like when using exceptions, with Noexcept the ability to catch_, do_some_work then throw_ is an important feature. Consider that in general in this context you might not have the slightest idea what errors may pass through it; so you'd:
    if( auto tr=try_(....) )
    {
        //ok good, do work then return a "good" value
    }
    else
    {
        log(BOOST_DIAGNOSTIC_INFORMATION(*tr.catch_<>()));
        return throw_();
    }
Except it won't work, you've uncovered an omission in the Noexcept API. The problem is that catch_<> will flag the error as handled, and in this case you don't want that. I need to add another function similar to catch_<> which only gets the error without handling it. Not sure what to call it, or maybe instead I can add a member function tr.throw_() which flags the error as unhandled.
Yes, this would make Noexcept superior to C++ exceptions in this aspect: it would make it easy to identify when you are handling the exception and when you are just modifying it. Call it "augment"?
Not superior, "closely mimicking" :) Using exception handling you can:

    catch(...)
    {
        do_something();
        throw; //the original object continues up the stack
    }

Boost Exception is all about augmenting exceptions with relevant data in error-neutral contexts. Also there is this: http://pdimov.com/cpp2/P0640R0.pdf
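The catch/augment/rethrow idiom referred to here can be shown with plain C++ exceptions (risky and with_context are invented example functions): an error-neutral layer adds context and rethrows without handling the failure.

```cpp
#include <stdexcept>
#include <string>

// Illustrative callee that may fail.
int risky(bool fail)
{
    if (fail)
        throw std::runtime_error("disk full");
    return 1;
}

// Error-neutral layer: augments the in-flight error, then rethrows.
int with_context(bool fail)
{
    try
    {
        return risky(fail);
    }
    catch (std::runtime_error const& e)
    {
        throw std::runtime_error(std::string("while saving: ") + e.what());
    }
}
```

With Boost Exception one would instead attach error_info via operator<< and rethrow with a plain `throw;`, which preserves the original exception object's type rather than replacing it.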
2017-06-13 7:54 GMT+02:00 Vicente J. Botet Escriba via Boost < boost@lists.boost.org>:
I agree with everything here. You have probably put it better than me. Regards, &rzej;
On 13/06/2017 06:54, Vicente J. Botet Escriba wrote:
On 13/06/2017 at 01:01, Niall Douglas via Boost wrote:
Finally, Rust and Swift have adopted a Result<T, E> model. It is generally viewed as a good design choice for its problem domain. Varying significantly from what the other system languages are doing needs to have very strong rationale.
AFAIK [1], the proposed library and Swift error handling mechanism are very close.
Syntax wise I can see what you mean. But in terms of ABI implementation, Swift implements under the bonnet direct returns of a Result<T, E> equivalent.
In Swift you signal that a function can throw by adding throws to the signature. Swift has builtin optionals, and adding throws is almost like declaring it to return T? (optional<T>).
That is one way of looking at it. I'd suggest a more accurate way is that all functions in Swift are noexcept. Adding "throws" is similar to adding "noexcept(false)". Furthermore, Swift doesn't actually implement exception throwing. Yet another interpretation of "throws" could be "use a hidden Result<T, E> to return from this function". This, in terms of ABI, is the most accurate description, much more accurate than it returning optionals.
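The "hidden Result<T, E>" reading of Swift's throws can be rendered in C++ as a sum-type return (a sketch using std::variant; parse_digit is an invented example): the caller must branch on which alternative came back before touching the value.

```cpp
#include <string>
#include <variant>

// In this view, "T f() throws E" lowers to a function returning
// either a T or an E.
template <class T, class E>
using result = std::variant<T, E>;

result<int, std::string> parse_digit(char const* s)
{
    if (!s || !*s || s[0] < '0' || s[0] > '9')
        return std::string("not a digit"); // the E alternative
    return s[0] - '0';                     // the T alternative
}
```

This is the ABI-level shape Rust's Result<T, E> and (per the description above) Swift's throws share: the error travels in the return channel, not in TLS.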
I will say that if Noexcept required this return_<T> type, it will be like outcome<T>, except that the error is transported using TLS instead of using the stack (please let me know if I'm wrong)
The only remaining difference is the fragmented API using that TLS.
However if Noexcept doesn't require a return_<T> then it is much difficult to force the use of the try functions. But it works yet.
I see advantages in this approach and I don't know which one is more efficient in the success and failure cases. Some measures will be more than welcome.
SG14 folk would reject any mandatory use of TLS as its performance is not bounded on some platforms (hidden malloc). Furthermore, at least on Windows both static and dynamic TLS is a limited resource, one can run out of TLS slots easily in large programs as there is a hard limit for the entire process. Library code should always avoid using TLS where possible, let the end user supply TLS to it instead.
do we want an error handling mechanism in C++ based on Swift error handling ;-) ? Do we want a library that emulates it as Boost.Noexcept in Boost?
I feel any design resembling C++ exceptions adds no value. A design *complementing* C++ exceptions with a significantly different design makes much more sense, especially as you can then use both C++ exceptions AND your design together.
do we want a monadic error handling in C++ as Result<T,E> in Boost?
I've been getting quite a bit of private mail from SG14 folk regarding the Outcome review, specifically its rejection. As I said just earlier today to one such:

"It may not have been obvious that the review arrived at three different designs, so a flexible variant kind, a super-simple hard coded kind, and a monadic kind. Peter Dimov is taking the variant kind to the Toronto meeting I believe. I'm currently refactoring Outcome to implement the super-simple kind which will be the most obviously suited for SG14 unless you like long build times. The monadic kind I suspect Vicente will end up driving forwards, he and Peter need to disentangle the variant kind from the monadic kind first."

All three kinds ought to be submitted to Boost in my opinion. They cover three separate, though overlapping, use cases.

Niall

--
ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 13/06/2017 at 14:43, Niall Douglas via Boost wrote:
On 13/06/2017 06:54, Vicente J. Botet Escriba wrote:
On 13/06/2017 at 01:01, Niall Douglas via Boost wrote:
Finally, Rust and Swift have adopted a Result<T, E> model. It is generally viewed as a good design choice for its problem domain. Varying significantly from what the other system languages are doing needs to have very strong rationale.
AFAIK [1], the proposed library and Swift error handling mechanism are very close.

Syntax wise I can see what you mean. But in terms of ABI implementation, Swift implements under the bonnet direct returns of a Result<T, E> equivalent.
Not exactly. I would say it returns T? implicitly. This forces you to use try.
In Swift you signal that a function can throw by adding throws to the signature. Swift has builtin optionals, and adding throws is almost like declaring it to return T? (optional<T>).

That is one way of looking at it. I'd suggest a more accurate way is that all functions in Swift are noexcept. Adding "throws" is similar to adding "noexcept(false)".
Furthermore, Swift doesn't actually implement exception throwing.
What do you mean?
Yet another interpretation of "throws" could be "use a hidden Result<T, E> to return from this function". This, in terms of ABI, is the most accurate description, much more accurate than it returning optionals.
The function return type should not store E, so optional<T> seems closer. But I'm not an expert in Swift; I have just discovered it very recently. I like the protocol (traits) and extension (trait extensions) mechanisms ;)
I would say that if Noexcept required this return_<T> type, it would be like outcome<T>, except that the error is transported using TLS instead of the stack (please let me know if I'm wrong).

The only remaining difference is the fragmented API using that TLS.
What do you mean?
However, if Noexcept doesn't require a return_<T>, then it is much more difficult to force the use of the try functions. But it still works.
I see advantages in this approach, and I don't know which one is more efficient in the success and failure cases. Some measurements would be more than welcome.

SG14 folk would reject any mandatory use of TLS as its performance is not bounded on some platforms (hidden malloc).
Furthermore, at least on Windows, both static and dynamic TLS are a limited resource; one can run out of TLS slots easily in large programs, as there is a hard limit for the entire process. Library code should always avoid using TLS where possible; let the end user supply TLS to it instead.
I'm not aware of the performance characteristics of TLS, but I would expect that if there is a hidden malloc, we would need it only once. If this were integrated into the language, I would expect the compiler could reserve some efficient storage for the possible error (SBO).
do we want an error handling mechanism in C++ based on Swift error handling ;-) ? Do we want a library that emulates it as Boost.Noexcept in Boost?
I feel any design resembling C++ exceptions adds no value.
It seems that it adds some value in Swift ;-)
A design *complementing* C++ exceptions with a significantly different design makes much more sense, especially as you can then use both C++ exceptions AND your design together.
The Swift exception model could complement the one in C++. It provides everything we are looking for when we don't want to use C++ exceptions.
do we want a monadic error handling in C++ as Result<T,E> in Boost?
I've been getting quite a bit of private mail from SG14 folk regarding the Outcome review, specifically its rejection. As I said just earlier today to one such:
"It may not have been obvious that the review arrived at three different designs: a flexible variant kind, a super-simple hard-coded kind, and a monadic kind. Peter Dimov is taking the variant kind to the Toronto meeting, I believe.
Great.
I'm currently refactoring Outcome to implement the super-simple kind which will be the most obviously suited for SG14 unless you like long build times.
Obviously.
The monadic kind I suspect Vicente will end up driving forwards, he and Peter need to disentangle the variant kind from the monadic kind first."
The two approaches are not incompatible. The variant is only the representation; the monadic interface is applicable to several representations.
All three kinds ought to be submitted to Boost in my opinion. They cover three separate, though overlapping, use cases.
I don't plan to submit a monadic interface to Boost, at least not yet. And if I do one day, it will be independent of expected. In addition, we already have a monadic interface with Boost.Hana; all we need is to adapt the concrete types. Just a last comment: Swift try is not the same as the Haskell do-notation nor the proposed *monadic* coroutine TS await. Swift try is applicable to PossiblyValued types, not to Monadic types. Vicente
On Tue, Jun 13, 2017 at 5:43 AM, Niall Douglas via Boost < boost@lists.boost.org> wrote:
I will say that if Noexcept required this return_<T> type, it would be like outcome<T>, except that the error is transported using TLS instead of the stack (please let me know if I'm wrong)
The only remaining difference is the fragmented API using that TLS.
And that, semantically, Outcome, like C APIs, lets users treat error codes as "the error", while Noexcept treats them as data, as in C++ error handling. If error codes are treated as "the error", then the error domain is limited to a single function. Consider these two functions:

int f1(....); //returns 0 on success, 1-f1_error1, 2-f1_error2
int f2(....); //returns 0 on success, 1-f2_error1, 2-f2_error2

If f2 calls f1 and the error is communicated by an error code, f2 _must_ translate the error condition from the domain of f1 errors to the domain of f2 errors. And this must be done at every level, which introduces many points in the code where subtle errors may occur, and that is in error-handling code, which is very difficult to test and debug.

If you use exceptions, when f1 detects an error and throws an exception, e.g. file_open_error (which may _contain_ an error code), that exception can pass through many levels in the code stack before it is caught, and there is no need for each level to take it, examine it, and translate it as in C. The ability to propagate errors without touching them is even supported explicitly, in that throw can be used without arguments. For that reason, returning error codes as if that is "what" went wrong should not be supported.
However, if Noexcept doesn't require a return_<T>, it is much more difficult to force the use of the try functions. But it works nonetheless.
I see advantages in this approach and I don't know which one is more efficient in the success and failure cases. Some measurements will be more than welcome.
SG14 folk would reject any mandatory use of TLS as its performance is not bounded on some platforms (hidden malloc).
Worst case, a single hidden malloc that occurs ONCE, when the thread starts. Consider the alternative: burdening the passing of values up the call stack, where performance may be critical, with having to transport anything and everything in case of an error.
do we want an error handling mechanism in C++ based on Swift error handling ;-) ? Do we want a library that emulates it as Boost.Noexcept in Boost?
I feel any design resembling C++ exceptions adds no value.
I agree, it only preserves as much of the value of C++ exceptions as possible, mainly the ability to propagate them from the point of the throw_ to the point of the catch_ without messing with them.

A design *complementing* C++ exceptions with a significantly different design makes much more sense, especially as you can then use both C++ exceptions AND your design together.
Except that the "different" design is a step back and leads to subtle bugs in error-handling code.
Emil Dotchevski wrote:
If error codes are treated as "the error", then the error domain is limited to a single function. Consider these two functions:
int f1(....); //returns 0 on success, 1-f1_error1, 2-f1_error2 int f2(....); //returns 0 on success, 1-f2_error1, 2-f2_error2
If f2 calls f1, if the error is communicated by an error code, f2 _must_ translate the error condition from the domain of f1 errors, to the domain of f2 errors. And this must be done at every level, which introduces many points in the code where subtle errors may occur, and that is in error handling code which is very difficult to test and debug.
That's exactly the problem std::error_code solves, as it's a (code, domain) pair, so there's no need to translate.
Does anyone actually have a measurable example of real code in which the unexceptional path induces any more execution overhead than an optional/variant/outcome return type?

Because when I look at the code generated by gcc et al., I am convinced that you're solving a non-existent problem when seeking to replace exceptions.

By all means have a partial return type such as outcome if a failure is to result in a useful execution path. But exceptions do not actually add overhead when used to signal actual exceptions. At least in the millions of lines of code I have written and read.

R

On 13 Jun 2017 19:01, "Peter Dimov via Boost" <boost@lists.boost.org> wrote:
Emil Dotchevski wrote:
If error codes are treated as "the error", then the error domain is limited to a single function. Consider these two functions:
int f1(....); //returns 0 on success, 1-f1_error1, 2-f1_error2 int f2(....); //returns 0 on success, 1-f2_error1, 2-f2_error2
If f2 calls f1, if the error is communicated by an error code, f2 _must_ translate the error condition from the domain of f1 errors, to the domain of f2 errors. And this must be done at every level, which introduces many points in the code where subtle errors may occur, and that is in error handling code which is very difficult to test and debug.
That's exactly the problem std::error_code solves, as it's a (code, domain) pair, so there's no need to translate.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman /listinfo.cgi/boost
On Tue, Jun 13, 2017 at 12:25 PM, Richard Hodges via Boost < boost@lists.boost.org> wrote:
Does anyone actually have a measurable example of real code in which the unexceptional path induces any more execution overhead than an optional/variant/outcome return type?
Because when I look at the code generated by gcc et al., I am convinced that you're solving a non-existent problem when seeking to replace exceptions.
By all means have a partial return type such as outcome if a failure is to result in a useful execution path.
But exceptions do not actually add overhead when used to signal actual exceptions.
At least in the millions of lines of code I have written and read.
All true. Noexcept is not better than using exception handling, it is better than *not* using exception handling without it. :)
So why do developers continue to perpetuate the idea that the use of exceptions impacts performance or deterministic timing of code? It does not.

It is no slower and no less deterministic than checking a return code or discriminated union. In fact, that is exactly what implementations boil down to in compiled code, with the added benefit of being able to signal the failure to create an object in a way that makes it impossible to use the failed object accidentally.

As I say, being able to return partial success is useful. Seeking to remove exceptions on performance grounds is nonsense.

On 13 Jun 2017 23:28, "Emil Dotchevski via Boost" <boost@lists.boost.org> wrote: On Tue, Jun 13, 2017 at 12:25 PM, Richard Hodges via Boost < boost@lists.boost.org> wrote:
Does anyone actually have a measurable example of real code in which the unexceptional path induces any more execution overhead than an optional/variant/outcome return type?
Because when I look at the code generated by gcc et al., I am convinced that you're solving a non-existent problem when seeking to replace exceptions.
By all means have a partial return type such as outcome if a failure is to result in a useful execution path.
But exceptions do not actually add overhead when used to signal actual exceptions.
At least in the millions of lines of code I have written and read.
All true. Noexcept is not better than using exception handling, it is better than *not* using exception handling without it. :)
On 13/06/2017 23:44, Richard Hodges via Boost wrote:
So why do developers continue to perpetuate the idea that the use of exceptions impacts performance or deterministic timing of code? It does not.
It is no slower and no less deterministic than checking a return code or discriminated union. In fact that is exactly what implementations boil down to in compiled code, with the added benefit of being able to signal the failure to create an object in a way that makes it impossible to use the failed object accidentally.
As I say, being able to return partial success is useful.
Seeking to remove exceptions on performance grounds is nonsense.
Maybe. If you are never going to catch an exception, just as many platforms don't need to catch bad_alloc, then log+terminate can be a more efficient approach. But in general, the performance benefits of disabling exceptions are confined to specific domains and conditions, especially today, as exception code has improved a lot in compilers. Program size benefits, at least when exceptional situations are handled through abort(), can be measurable, and additionally a polymorphic hierarchy of exception types adds non-optimizable (by the linker) code and RTTI to your executable.

It adds non-determinism because there is no upper bound on the time taken when an exception is thrown.

In my opinion, at least after writing mission-critical software, the main problem of exceptions is the non-explicit, potentially infinite exit paths (as each thrown type can require a different action when caught) they add to every function call, worsened by the non-static enforcement of what can be thrown. For critical software, every function that could throw should be tested against all possible exceptions thrown by that function or any of its dependencies, and knowing what types the dependencies of dependencies throw is an impossible mission.

A lot of programmers don't understand exception safety guarantees and how to write exception-safe code. It is not simple because it is not explicit.

IMHO return types, when handled, make error handling uglier, more explicit, maybe slower. You get many more paths and branches, because they really exist in your executable code; exception handling makes them invisible in your source code and, in consequence, dangerous. Just look at the additional branches gcov shows when you call a function that can possibly throw. It's very hard to test every error and throwable type, whereas an int return type is, for most programmers, easier to handle and understand.

Ion
Exception return paths are not infinite. There are a finite number of places in code that an exception can be thrown. The exception path is one path, the non-exception path is another. That's two in total. Exactly equivalent to an outcome<>. It is a fallacy to say that there are an indeterminate number of paths.

If developers do not understand RAII, then an afternoon of training can solve that. RAII is the foundation of correct C++. It is the fundamental guarantee of deterministic object state. A program without RAII is not worthy of consideration. The author may as well have used C.

Perhaps there is an argument that says that RAII adds overhead to a program's footprint. If things are that tight, fair enough. Otherwise there is no excuse to avoid exceptions. I've never seen a convincing argument.

The fact that people are taking time to re-implement exception functionality in outcome<> et al. demonstrates the necessity and correctness of exceptions.

I have yet to see an answer to my initial question: an example of code in which compiling without exceptions enabled and checking return types instead will add any performance benefit to the non-exceptional case at all.
On 14 Jun 2017, at 21:42, Ion Gaztañaga via Boost <boost@lists.boost.org> wrote:
On 13/06/2017 23:44, Richard Hodges via Boost wrote:
So why do developers continue to perpetuate the idea that the use of exceptions impacts performance or deterministic timing of code? It does not. It is no slower and no less deterministic than checking a return code or discriminated union. In fact, that is exactly what implementations boil down to in compiled code, with the added benefit of being able to signal the failure to create an object in a way that makes it impossible to use the failed object accidentally. As I say, being able to return partial success is useful. Seeking to remove exceptions on performance grounds is nonsense.
Maybe. If you are never going to catch an exception, just as many platforms don't need to catch bad_alloc, then log+terminate can be a more efficient approach. But in general, the performance benefits of disabling exceptions are confined to specific domains and conditions, especially today, as exception code has improved a lot in compilers. Program size benefits, at least when exceptional situations are handled through abort(), can be measurable, and additionally a polymorphic hierarchy of exception types adds non-optimizable (by the linker) code and RTTI to your executable.
It adds non-determinism because there is no upper bound on the time taken when an exception is thrown.
In my opinion, at least after writing mission-critical software, the main problem of exceptions is the non-explicit, potentially infinite exit paths (as each thrown type can require a different action when caught) they add to every function call, worsened by the non-static enforcement of what can be thrown. For critical software, every function that could throw should be tested against all possible exceptions thrown by that function or any of its dependencies, and knowing what types the dependencies of dependencies throw is an impossible mission.
A lot of programmers don't understand exception safety guarantees and how to write exception-safe code. It is not simple because it is not explicit.
IMHO return types, when handled, make error handling uglier, more explicit, maybe slower. You get much more paths and branches, because they really exist in your executable code, and exception handling makes them invisible in your source code, and in consequence, dangerous. Just look at the additional branches gcov shows when you call a function that can possibly throw. It's very hard to test every error and throwable type, whereas an int return type is for most programmers, easier to handle and understand.
Ion
On 14/06/2017 21:52, Richard Hodges wrote:
Exception return paths are not infinite. There are a finite number of places in code that an exception can be thrown.
Correct. There are infinite possible types that can be thrown.
The exception path is one path, the non-exception path is another. That’s two in total. Exactly equivalent to an outcome<>.
Right.
It is a fallacy to say that there are an indeterminate number of paths.
I stand corrected. Let me rephrase my assertion: with error codes you have a limited range of values in a single type that you must handle. With exceptions you have an indeterminate number of types, each of them with a limited number of states/values. If you want to "handle" all of them, you need an undetermined number of catch statements, because the type can be completely unknown to the handler. The programmer that must handle the exception can't be prepared for that. Instead, the programmer can be prepared to handle a limited number of exceptions that a function might throw, but that implies translating lower-layer exception types to a finite number of exception types, which is very similar to handling and translating error codes. Exceptions shine when the programmer refuses to handle errors.
If developers do not understand RAII, then an afternoon of training can solve that.
RAII is the foundation of correct c++. It is the fundamental guarantee of deterministic object state. A program without RAII is not worthy of consideration. The author may as well have used C.
You can't correctly write exception-safe code with simple RAII alone. There are chained operations that must be undone in the presence of an error. If the exceptional path is not explicit, it is very easy to handle it incorrectly or just ignore it. Just as strong types are safer, explicit paths are safer. Exceptions imply weakly defined control paths, and that's a real problem for safety. Exceptions are great when you don't want to handle the exception, so that the compiler automatically propagates control to the upper layers.
Perhaps there is an argument that says that RAII adds overhead to a program’s footprint. If things are that tight, fair enough.
Otherwise there is no excuse to avoid exceptions. I’ve never seen a convincing argument.
The fact that people are taking time to re-implement exception functionality in outcome<> et al demonstrate the necessity and correctness of exceptions.
If people reimplement exceptions, it's because many are not happy with them, even decades after exceptions were added to C++. Nobody rewrites "while" with "goto"s.
I have yet to see an answer to my initial question - an example of code in which compiling without exceptions enabled and checking return types instead will add any performance benefit to the non-exceptional case at all.

With table-based implementations, I don't think there is any.
Ion
On Wed, Jun 14, 2017 at 2:42 PM, Ion Gaztañaga via Boost < boost@lists.boost.org> wrote:
There are chained operations that must be undone in the presence of an error.
That is the strong guarantee. The basic guarantee is usually trivial...
If the exceptional path is not explicit, it is very easy to incorrectly handle it or just to ignore that path. Just like strong types are safer, explicit paths are safer.
...and when it is not, you can use an explicit path:

try { stuff }
catch(...) { cleanup(); throw; }

If you didn't use exceptions, the above would look like this:

if( stuff_failed ) { cleanup(); return E_STUFF_FAILED; }

which is semantically identical, except for the difficulty of having to communicate failures through the return value. Both C++ exception handling and Noexcept provide a more elegant path for error objects.
Exceptions imply weakly defined control paths. And that's a real problem for safety.
What safety do you have in mind?
Exceptions are great when you don't want to handle the exception, so the compiler automatically propagates control to the upper layers.
Which is 99% of the functions in your code, which means that if you use RAII, there is little chance of these functions having subtle bugs which, being in the error-handling code, are _very_ difficult to detect. And if you use exceptions, you get the equivalent of "if( stuff_failed ) return E_STUFF_FAILED" automatically.

If people reimplement exceptions, it's because many are not happy with them, even decades after exceptions were added to C++. Nobody rewrites "while" with "goto"s.
That many people aren't happy with something doesn't indicate a problem. The horse, the water, etc. :)

All that said, there are valid reasons why you may not want to use exception handling, but they have more to do with the team you have than with problems inherent in the exception-handling semantics or its abstraction penalty.
On 15/06/2017 0:54, Emil Dotchevski via Boost wrote:
If the exceptional path is not explicit, it is very easy to incorrectly handle it or just to ignore that path. Just like strong types are safer, explicit paths are safer.
...and when it is not, you can use explicit path:
try { stuff } catch(...) { cleanup(); throw; }
Then there is no advantage of using exceptions, so better stick to errors, which at least limit the types that must be handled, which usually reduces compiler-generated code (obviously the diagnostics are poorer, as you can't compare an int error return with a full class).
If you didn't use exceptions, the above would look like this:
if( stuff_failed ) { cleanup(); return E_STUFF_FAILED; }
If all you need is "stuff_failed" then exceptions are better, because you just want to clean up and return, and exceptions do this automatically. Problems arise when you want to handle all possible errors, do different things with each of them, and make sure you correctly handle all of them.

I find myself writing exception-safe code, but rarely using exceptions. And I rarely find programmers that properly understand how to use exceptions, whereas everybody understands a return type.

It's odd that we say that being strongly typed is good, that abstract base classes are great for enforcing the possible polymorphic operations at runtime, that concepts must be brought into the language to explicitly reduce the types that must be handled by an operation (at compile time), but on the other hand we permit any type (even ones unknown to the caller) to be returned via exceptions. That looks like a contradiction to me, as the exceptions to be thrown by an operation look like a contract to me.

I don't say errors are superior; I just say that exceptions have not succeeded. The STL has succeeded, templates have succeeded, move semantics have succeeded, lambdas have succeeded... exceptions are still controversial. Now I read that we should use errors for things that are not "exceptional". If "exceptional" means to just propagate the error so that main catches it and prints it to a log, then std::terminate can handle that more efficiently. What should be done with "truly exceptional" errors? std::filesystem needs to offer a dual interface for errors and exceptions. There is something missing here. And I don't have the answer.

Ion
On Thu, Jun 15, 2017 at 12:17 PM, Ion Gaztañaga via Boost < boost@lists.boost.org> wrote:
On 15/06/2017 0:54, Emil Dotchevski via Boost wrote:
If the exceptional path is not explicit, it is very easy to incorrectly
handle it or just to ignore that path. Just like strong types are safer, explicit paths are safer.
...and when it is not, you can use explicit path:
try { stuff } catch(...) { cleanup(); throw; }
Then there is no advantage of using exceptions.
In this case, the important thing is that there is no disadvantage to using exceptions when manual cleanup is needed. 99 times out of 100 it is not, and that's when exception handling shines.
, so better stick to errors, which at least limit the types that must be handled, which usually reduces compiler-generated code (obviously the diagnostics are poorer, as you can't compare an int error return with a full class).
Yes, exception handling will use more static types, but the upside is type safety: when you catch an error, you know that such an error exists, because otherwise you'd get a compile error. If instead you used an int, and you check for error 42 instead of 43, there will be no compile error, and 43 may be an entirely invalid code. Of course you can easily avoid the types by throwing and catching ints (or std::error_code) and checking their value dynamically, but then you're butchering type safety. I don't recommend it.

If you didn't use exceptions, the above would look like this:
if( stuff_failed ) { cleanup(); return E_STUFF_FAILED; }
If all you need is "stuff_failed" then exceptions are better because you just want to clean up and return. Exceptions do this automatically. Problems arise when you want to handle all possible errors, do different things with each of them, and make sure you correctly handle all errors.
Okay:

try { stuff }
catch( problem1 & ) { }
catch( problem2 & ) { }
catch( problem3 & ) { }
I find myself writing exception safe code, but rarely using them. And I rarely find programmers that properly understand how to use exceptions, whereas everybody understands a return type.
This is a fair point. If your team doesn't know how exception handling works you should not use it, use Noexcept instead. :)
std::filesystem needs to offer a dual interface for errors and exceptions. There is something missing here. And I don't have the answer.
The reason why filesystem needs a dual interface is that, for many operations, it is impossible to define universal postconditions to be enforced by throwing exceptions: what in one use case is considered an error is not an error at all in another.
On 15/06/2017 07:52, Richard Hodges wrote:
If developers do not understand RAII, then an afternoon of training can solve that.
RAII is the foundation of correct c++. It is the fundamental guarantee of deterministic object state. A program without RAII is not worthy of consideration. The author may as well have used C.
Perhaps there is an argument that says that RAII adds overhead to a program’s footprint. If things are that tight, fair enough.
Given that RAII is so fundamental to exception safety, it's surprising that there isn't a vocabulary execute-lambda-inside-destructor type (eg. "guard") in the STL, to replace cleanup/try-finally style code. Granted, it's simple to write one yourself, but that seems like a poor rationalisation to omit it, especially once lambdas became standard. I suspect that the lack of such a type is probably a significant reason exception-unsafe code ends up surviving -- it's more effort to write RAII wrappers for code that you're not really expecting to encounter exceptions in, even though you should.
Given that RAII is so fundamental to exception safety, it's surprising that there isn't a vocabulary execute-lambda-inside-destructor type (eg. "guard") in the STL, to replace cleanup/try-finally style code.
AFAIAA, this was the main reason for the introduction of std::uncaught_exceptions() in C++17. It facilitates writing scoped transactional programs in a standards-defined manner. The previous std::uncaught_exception() (note the singular) in C++11 wasn't sufficient and turned out to be a white elephant. So expect to see code with scoped commit/rollback semantics become common soon after the release of C++17; people do it in C++11, but it requires messiness (Boost's ScopeExit, for example):

On 15 June 2017 at 01:28, Gavin Lambert via Boost <boost@lists.boost.org> wrote:
On 15/06/2017 07:52, Richard Hodges wrote:
If developers do not understand RAII, then an afternoon of training can solve that.
RAII is the foundation of correct c++. It is the fundamental guarantee of deterministic object state. A program without RAII is not worthy of consideration. The author may as well have used C.
Perhaps there is an argument that says that RAII adds overhead to a program’s footprint. If things are that tight, fair enough.
Given that RAII is so fundamental to exception safety, it's surprising that there isn't a vocabulary execute-lambda-inside-destructor type (eg. "guard") in the STL, to replace cleanup/try-finally style code.
Granted, it's simple to write one yourself, but that seems like a poor rationalisation to omit it, especially once lambdas became standard.
I suspect that the lack of such a type is probably a significant reason exception-unsafe code ends up surviving -- it's more effort to write RAII wrappers for code that you're not really expecting to encounter exceptions in, even though you should.
2017-06-15 1:28 GMT+02:00 Gavin Lambert via Boost <boost@lists.boost.org>:
On 15/06/2017 07:52, Richard Hodges wrote:
If developers do not understand RAII, then an afternoon of training can solve that.
RAII is the foundation of correct c++. It is the fundamental guarantee of deterministic object state. A program without RAII is not worthy of consideration. The author may as well have used C.
Perhaps there is an argument that says that RAII adds overhead to a program’s footprint. If things are that tight, fair enough.
Given that RAII is so fundamental to exception safety, it's surprising that there isn't a vocabulary execute-lambda-inside-destructor type (eg. "guard") in the STL, to replace cleanup/try-finally style code.
Granted, it's simple to write one yourself, but that seems like a poor rationalisation to omit it, especially once lambdas became standard.
I suspect that the lack of such a type is probably a significant reason exception-unsafe code ends up surviving -- it's more effort to write RAII wrappers for code that you're not really expecting to encounter exceptions in, even though you should.
If you allow arbitrary lambdas to be called at the end of the scope, this itself causes many bugs:

1. They have access to scope variables that might already have been destroyed when the lambda is executed. It is easy to overlook this. (This is a non-problem for destructors, because they do not see the context in which they are called.)

2. People will start calling potentially throwing lambdas, which may result in the double-exception problem. (This is not a problem in Java-like languages, where you simply ignore some errors.)

Regards, &rzej;
2017-06-14 21:52 GMT+02:00 Richard Hodges via Boost <boost@lists.boost.org>:
Exception return paths are not infinite. There are a finite number of places in code that an exception can be thrown.
The exception path is one path, the non-exception path is another. That’s two in total. Exactly equivalent to an outcome<>.
It is a fallacy to say that there are an indeterminate number of paths.
If developers do not understand RAII, then an afternoon of training can solve that.
RAII is the foundation of correct c++. It is the fundamental guarantee of deterministic object state. A program without RAII is not worthy of consideration. The author may as well have used C.
Perhaps there is an argument that says that RAII adds overhead to a program’s footprint. If things are that tight, fair enough.
Otherwise there is no excuse to avoid exceptions. I’ve never seen a convincing argument.
The above statement almost treats RAII and exception handling as synonymous, but I believe this gives a false picture of the situation.

RAII is very useful also when you do not use exceptions but have multiple return paths: you want to acquire the resource in one place and schedule its future release in one place, not at every return statement. When using things like Outcome, you still want to follow RAII idioms. People who choose to use Outcome do understand RAII and will still use it.

But RAII does not handle all aspects of failure-safety, and it is about these other aspects that people may choose to go with Outcome rather than exceptions. One example: propagating information about failures across threads, or "tasks".
The fact that people are taking time to re-implement exception functionality in outcome<> et al demonstrate the necessity and correctness of exceptions.
I have yet to see an answer to my initial question - an example of code in which compiling without exceptions enabled and checking return types instead, will add any performance benefit to the non-exceptional case at all.
Looking for alternatives to exceptions is not driven (this is my understanding) by performance, but by other factors, like explicitness. The only performance-related objective is that you want a task where failures occur and a task with no failures to be performed in comparable times (rather than one being orders of magnitude slower). Regards, &rzej;
On Mon, Jun 19, 2017 at 2:58 AM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
The above statement almost treats RAII and exception handling as synonymous. But I believe this gives the false picture of the situation.
RAII is very useful, also if you do not use exceptions, but have multiple return paths. You want to acquire the resource in one place and schedule its future release in one place, not upon every return statement.
If you adopt this programming style as a rule, there is no downside to using exceptions.
In case of using things like Outcome, you still want to follow RAII idioms.
People who choose to use Outcome do understand RAII and will still use it. But RAII does not handle all aspects of failure-safety, and this is about these other aspects that people may choose to go with Outcome rather than exceptions. One example: propagating information about failures across threads, or "tasks".
There is exception_ptr for transporting exceptions between threads, which was the only primitive that was missing for being able to accumulate results from multiple workers. Outcome and Noexcept are simply better alternatives for users who write or maintain code that is not exception-safe -- better not compared to exception handling (which in this case can not be used), but compared to what they would likely do otherwise.
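The `exception_ptr` primitive Emil mentions can be sketched in a few lines; in real code the capture would happen on a worker thread and the rethrow on the joining thread (via `std::promise` or a queue), which this single-threaded illustration only simulates:

```cpp
#include <exception>
#include <stdexcept>
#include <string>

std::string transported_message() {
    std::exception_ptr ep;
    // Worker side: capture the in-flight exception without translating it.
    try {
        throw std::runtime_error("worker failed");
    } catch (...) {
        ep = std::current_exception();
    }
    // Consumer side: rethrow the original exception object, intact.
    try {
        std::rethrow_exception(ep);
    } catch (const std::runtime_error& e) {
        return e.what();
    }
    return "no error";
}
```

The key property is that the error object crosses the boundary without being flattened into a code or a string by intermediate layers.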
Looking for alternatives to exceptions is not driven (this is my understanding) by performance. But by other factors, like explicitness.
More precisely, people who don't use exceptions choose to lower the level of abstraction in error handling. Their motivation is similar to why C programmers avoid C++, but limited to the domain of error handling.
The only performance-related objective is that you want a task where failures occur and tasks with no failures to be performed in comparable times (rather than one being orders of magnitude slower).
I have yet to see hard data showing a real world example where error handling based on exceptions is an order of magnitude slower. Maybe I should start offering an award. :)
2017-06-19 20:33 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org> :
Outcome and Noexcept are simply better alternatives for users who write or maintain code that is not exception-safe -- better not compared to exception handling (which in this case can not be used), but compared to what they would likely do otherwise.
To somewhat challenge this statement, the following is an example of how I would use Boost.Outcome if I had it available at the time when I was solving this parsing problem: https://github.com/akrzemi1/__sandbox__/blob/master/outcome_practical_exampl...

This tries to parse (or match) the string input, where I expect certain syntax, into a data structure. It is not performance-critical, and I do not mind using exceptions, but I still prefer to handle situations where the input string does not conform to the expected syntax via explicit control paths. A number of reasons for that:

1. I want to separate resource acquisition errors (exceptions are still thrown upon memory exhaustion) from input validation.
2. Some debuggers/IDEs by default engage when any exception is thrown. I do not want this to happen when incorrect input from the user is obtained.
3. I want validation failures to be handled immediately: one level up the stack. I do not expect or intend to ever propagate them further.

And this code still throws exceptions.

Regards, &rzej;
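The shape of this approach can be sketched with a hand-rolled stand-in (the `BadInput` error and the minimal expected-like result below are hypothetical simplifications, not the actual code behind the link):

```cpp
#include <cctype>
#include <string>

struct BadInput { std::string what; };

// Minimal expected-like carrier: either a value or a BadInput.
struct ParseResult {
    bool ok;
    int value;       // meaningful when ok
    BadInput error;  // meaningful when !ok
};

// Validation failures travel through the explicit path; only resource
// exhaustion (e.g. from std::string) would still throw.
ParseResult parse_positive_int(const std::string& in) {
    if (in.empty()) return {false, 0, {"empty input"}};
    int v = 0;
    for (char c : in) {
        if (!std::isdigit(static_cast<unsigned char>(c)))
            return {false, 0, {"not a digit"}};  // short exit, no throw
        v = v * 10 + (c - '0');
    }
    return {true, v, {}};
}
```

A caller one level up inspects `.ok` and handles the failure immediately, which is reason 3 above: the error has nowhere further to propagate.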
To somewhat challenge this statement, the following is an example of how I would use Boost.Outcome if I had it available at the time when I was solving this parsing problem: https://github.com/akrzemi1/__sandbox__/blob/master/outcome_practical_example.md
In the above code, the return type of expected<std::vector<Distrib>, BadInput> is equivalent in all respects to a 2-state variant. If it were implemented as a boost/std::variant, then the two states could be handled with a static visitor - which would provide a compile-time guarantee that any future third state was not missed. I don't think we're gaining anything with outcome, other than perhaps a label which serves to express intent? R
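Richard's alternative, sketched (the `Distrib`/`BadInput` types are stand-ins for those in the linked example): the two states live in a `std::variant`, and `std::visit` with an exhaustive visitor refuses to compile if a third alternative is ever added without a matching handler.

```cpp
#include <string>
#include <variant>
#include <vector>

struct Distrib { int n; };
struct BadInput { std::string what; };
using ParseOutcome = std::variant<std::vector<Distrib>, BadInput>;

std::string describe(const ParseOutcome& r) {
    struct Visitor {
        // One overload per alternative; adding a third alternative to
        // ParseOutcome without extending this visitor is a compile error.
        std::string operator()(const std::vector<Distrib>& v) const {
            return "parsed " + std::to_string(v.size()) + " entries";
        }
        std::string operator()(const BadInput& e) const {
            return "bad input: " + e.what;
        }
    };
    return std::visit(Visitor{}, r);
}
```

This is the compile-time exhaustiveness guarantee being claimed; what `expected`/`outcome` add on top is discussed in the replies below.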
2017-06-20 0:42 GMT+02:00 Richard Hodges via Boost <boost@lists.boost.org>:
To somewhat challenge this statement, the following is an example of how I would use Boost.Outcome if I had it available at the time when I was solving this parsing problem: https://github.com/akrzemi1/__sandbox__/blob/master/outcome_practical_example.md
In the above code, the return type of expected<std::vector<Distrib>, BadInput> is equivalent in all respects to a 2-state variant.
Conceptually, yes.
If it were implemented as a boost/std::variant, then the two states could be handled with a static visitor - which would provide a compile-time guarantee that any future third state was not missed.
When I use `expected` I also get a guarantee that there will always be exactly two types to handle - no more. What I also gain is the ability to use the TRY operation, which makes the code half as long while not compromising clarity or type safety.
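The TRY idea can be illustrated with a hand-rolled macro (Outcome's real macro is spelled differently and is more careful; the `Result` type and `TRY` below are hypothetical sketches of the shape): on failure, return the error to the caller immediately; on success, bind the value and continue.

```cpp
#include <string>

// Hypothetical minimal expected<int, std::string>.
struct Result {
    bool ok;
    int value;
    std::string error;
};

// On error, short-circuit out of the enclosing function (which must also
// return Result); on success, introduce `var` bound to the value.
#define TRY(var, expr)                                   \
    auto var##_r = (expr);                               \
    if (!var##_r.ok) return {false, 0, var##_r.error};   \
    int var = var##_r.value;

Result parse_digit(char c) {
    if (c < '0' || c > '9') return {false, 0, "not a digit"};
    return {true, c - '0', ""};
}

Result add_two_digits(char a, char b) {
    TRY(x, parse_digit(a));  // a failure here returns before the next line
    TRY(y, parse_digit(b));
    return {true, x + y, ""};
}
```

Without the macro, each call site needs an explicit check-and-return block, which is where the roughly 2x code-size difference comes from.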
I don't think we're gaining anything with outcome, other than perhaps a label which serves to express intent?
Yes, it is mostly for: clarity of intent, terseness, static safety (invalid usages detected at compile-time). Regards, &rzej;
On Mon, Jun 19, 2017 at 2:41 PM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
To somewhat challenge this statement, the following is an example of how I would use Boost.Outcome if I had it available at the time when I was solving this parsing problem: https://github.com/akrzemi1/__sandbox__/blob/master/outcome_practical_example.md This tries to parse (or match) the string input, where I expect certain syntax, into a data structure. It is not performance-critical, I do not mind using exceptions, but I still prefer to handle situations where the input string does not conform to the expected syntax via explicit control paths. A number of reasons for that:
1. I want to separate resource acquisition errors (exceptions are still thrown upon memory exhaustion) from input validation.
Why?
2. Some debuggers/IDEs by default engage when any exception is thrown. I do not want this to happen when an incorrect input from the user is obtained.
"By default", so turn off that option.
3. I want validation failures to be handled immediately: one level up the stack. I do not expect or intend to ever propagate them further.
You can catch exceptions one level up if you want to. Right? :)

However, if you're only propagating errors one level up, it really doesn't matter how you're handling them. I mean, how much trouble can you get into in this case? It's trivial. Error handling libraries are needed in more complex use cases where errors must be propagated across multiple levels, across threads, across API boundaries. The important design goals are:

1) The error object created by reporting code should be able to propagate across (potentially many) error-neutral contexts, which should not be required to "translate" it (that is, turn it into a different error object). The idea of translating errors gave us exception specifications, notoriously one of the more embarrassing aspects of C++.

2) Error-neutral contexts should be able to ignore any errors reported by lower-level code, but also to intercept _any_ error, augment it with relevant information (which may not be available at the point the error is detected), and let it propagate up the call stack, intact.

3) Error-handling contexts should be able to recognize the errors they can deal with but remain neutral to others.

Your use of outcome is probably fine in this simple case, but

  out::expected<Range, BadInput> parse_range(const std::string& input)

looks much too close to an exception specification:

  Range parse_range(const std::string& input) throw(BadInput)

While this doesn't matter if your error handling is only one level up, it's going to create a lot of problems in more complex cases, because it makes error-neutral contexts impossible and, as the exception specifications fiasco showed, in general it is not possible for functions to even "know" what errors might propagate through them. Assuming we agree that it is not acceptable for error-neutral contexts to kill errors they don't recognize, this is a problem.
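Design goal (2) above can be sketched with a bare `throw;`: an error-neutral middle layer records context and lets any exception continue up the stack without translating it (the functions, the file name, and the global used to hold the attached context are all hypothetical; a real design would attach the context to the error object itself, e.g. Boost.Exception style):

```cpp
#include <stdexcept>
#include <string>

std::string last_context;  // stand-in for info attached to the error object

int low_level() { throw std::runtime_error("disk read failed"); }

// Error-neutral: knows nothing about which errors low_level can raise.
int middle_layer(const std::string& file) {
    try {
        return low_level();
    } catch (...) {
        last_context = "while loading " + file;  // augment with local info...
        throw;                                   // ...and propagate intact
    }
}

// Error-handling context: sees the original, untranslated error object.
std::string caught_what() {
    try {
        middle_layer("a.cfg");
    } catch (const std::runtime_error& e) {
        return e.what();
    }
    return "";
}
```

The middle layer never names the error's type, which is exactly what keeps it neutral to errors it cannot anticipate.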
2017-06-20 3:38 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org>:
1. I want to separate resource acquisition errors (exceptions are still thrown upon memory exhaustion) from input validation.
Why?
I think the reason is one of personal taste, or a personal sense of order. Failure to acquire resources is a situation where I will not be able to deliver what I have committed to: I will disappoint the user. In case of validation failure, this is exactly what users are calling my function for: sometimes their only goal is to get a true/false answer on whether their text file is valid. I do not even treat validation failure as an "error". But I still like to have the "short exit" behavior of errors.
2. Some debuggers/IDEs by default engage when any exception is thrown. I do not want this to happen when an incorrect input from the user is obtained.
"By default", so turn off that option.
But after a while I have concluded that it is a good default. Even if I am debugging something else, if I get a "resource failure" or a "logic error" (like a broken invariant) I want to be alerted, and possibly stop what I was debugging before. This default setting is my friend, provided I do not use exceptions for just any "irregularity".
3. I want validation failures to be handled immediately: one level up the stack. I do not expect or intend to ever propagate them further.
You can catch exceptions one level up if you want to. Right? :)
I can. And it would work. But it just doesn't feel like the right tool for the job. It would not reflect my intention as clearly as `outcome`.
However, if you're only propagating errors one level up, it really doesn't matter how you're handling them. I mean, how much trouble can you get into in this case? It's trivial.
But it reflects my intentions clearly, and gives me confidence that the error information will not escape the scope if I forget to put a try-block, or if I inadvertently add another return path from my validation routine.
Error handling libraries are needed in more complex use cases where errors must be propagated across multiple levels, across threads, across API boundaries. The important design goals are:
1) The error object created by reporting code should be able to be propagated across (potentially many) error-neutral contexts which should not be required to "translate" it (that is, turn it into a different error object.) The idea of translation of errors gave us exception specifications which are notoriously one of the more embarrassing aspects of C++.
2) Error-neutral contexts should be able to ignore any errors reported by lower level code but also intercept _any_ error, augment it with relevant information (which may not be available at the point the error is detected) and let it propagate up the call stack, intact.
3) Error-handling contexts should be able to recognize the errors they can deal with but remain neutral to others.
I recognize these needs. And in the contexts where you require the above characteristics (probably 97% of all code), exceptions are the tool for the job. For the rare situations where I need different characteristics of the error-reporting mechanism, I will need to resort to something else, like a dedicated library.
Your use of outcome is probably fine in this simple case but
out::expected<Range, BadInput> parse_range (const std::string& input)
looks much too close to exception specifications:
Range parse_range(const std::string& input) throw(BadInput)
In some other language, yes: in a language where such a throw specification is enforced statically, like Java.
While this doesn't matter if your error handling is only one level up, it's going to create a lot of problems in more complex cases because it makes error-neutral contexts impossible and,
Let me be clear: I do not claim that things like `outcome` are superior to exceptions. What I claim is that there happen to be rare ("rare" is very subjective here) situations where things like `outcome` fit better than exceptions.
as the exception specifications fiasco showed,
So yes, I would probably never agitate to put static exception checking into C++ and impose it on everybody. But for solving local problems, or for specific environments, I would like to have a library that offers different error handling trade-offs and is superior to error codes.
in general it is not possible for functions to even "know" what errors might propagate through them.
Agreed.
Assuming we agree that it is not acceptable for error-neutral contexts to kill errors they don't recognize, this is a problem.
Ok. It is just that I have parts of the program that I do not want to be exception neutral by accident. Regards, &rzej;
On Mon, Jun 19, 2017 at 11:58 PM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
1. I want to separate resource acquisition errors (exceptions are still thrown upon memory exhaustion) from input validation.
Why?
...
I do not even treat validation failure as
"error". But I still like to have the "short exit" behavior of errors.
If it's not an error then it is not an error -- and you should not treat it as such.
2. Some debuggers/IDEs by default engage when any exception is thrown. I do not want this to happen when an incorrect input from the user is obtained.
"By default", so turn off that option.
But after a while I have concluded that it is a good default. Even if I am debugging something else, if I get a "resource failure" or a "logic error" (like a broken invariant)
Yes, std::logic_error is another embarrassment for C++. Logic errors by definition leave the program in an undefined state, the last thing you want to do in this case is to start unwinding the stack. You should use an assert instead.
I want to be alerted, and possibly stop what I was debugging before. This default setting is my friend, provided I do not use exceptions for just any "irregularity".
Exceptions are not used in case of "irregularities" but to enforce postconditions. When the program throws, it is in well defined state, working correctly, as if the compiler automatically writes "if" statements to check for errors before it executes any code for which it would be a logic error if control reaches it. The _only_ cost of this goodness is that your code must be exception safe. Programmers who write debuggers that by default break when a C++ exception is thrown likely do not understand the semantic differences between OS exceptions (e.g. segfaults, which *do* indicate logic errors) and C++ exceptions. Semantically, that's like breaking, by default, every time a C function returns an error code.
3. I want validation failures to be handled immediately: one level up the stack. I do not expect or intend to ever propagate them further.
You can catch exceptions one level up if you want to. Right? :)
I can. And it would work. But it just doesn't feel like the right tool for the job. It would not reflect my intention as clearly as `outcome`.
That's because (in your mind, as you stated) you're not using Outcome to handle "real" errors.
However, if you're only propagating errors one level up, it really doesn't matter how you're handling them. I mean, how much trouble can you get into in this case? It's trivial.
But it reflects my intentions clearly and gives me confidence that the error information will not escape the scope if I forget to put a try-block
Not really, if you forget to check for errors and call .value() on the outcome object, it'll throw (if I understand the outcome semantics correctly). That exceptions are propagated if you forget to handle them when you should is a good thing. It means no error gets ignored.
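Emil's point can be illustrated with a minimal expected-like type (the `IntResult` type below is a hypothetical stand-in mirroring the stated `.value()` semantics of outcome/expected, not either library's actual class): unchecked access to an errored result throws instead of silently yielding garbage.

```cpp
#include <stdexcept>

struct IntResult {
    bool ok = false;
    int val = 0;
    int value() const {
        // The forgotten check surfaces as an exception rather than being
        // ignored -- the error cannot silently escape.
        if (!ok) throw std::logic_error("no value");
        return val;
    }
};

bool unchecked_access_throws() {
    IntResult r;  // errored result; the caller "forgot" to test r.ok
    try {
        r.value();
    } catch (const std::logic_error&) {
        return true;
    }
    return false;
}
```

So even in the value-returning style, the throw path remains the backstop against ignored errors.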
I recognize these needs. And in the contexts where you require the above characteristics (probably 97% of all code) exceptions are the tool for the job.
For the rare situations where I need different characteristics of the error-reporting mechanism, I will need to resort to something else, like a dedicated library.
I personally think that libraries are definitely needed when they can deal efficiently with 97% of all use cases, the remaining 3% being not nearly as important. Evidently we disagree.
Your use of outcome is probably fine in this simple case but
out::expected<Range, BadInput> parse_range (const std::string& input)
looks much too close to exception specifications:
Range parse_range(const std::string& input) throw(BadInput)
In some other language, yes: in a language where such a throw specification is enforced statically, like Java.
It's a bad idea. Again: generally, functions (especially library functions) can not know all the different kinds of errors they might need to forward (one way or another) up the call stack. From https://herbsutter.com/2007/01/24/questions-about-exception-specifications/: "When you go down the Java path, people love exception specifications until they find themselves all too often encouraged, or even forced, to add throws Exception, which immediately renders the exception specification entirely meaningless. (Example: Imagine writing a Java generic that manipulates an arbitrary type T…)"
Assuming we agree that it is not
acceptable for error-neutral contexts to kill errors they don't recognize, this is a problem.
Ok. It is just that I have parts of the program that I do not want to be exception neutral by accident.
You mean bugs where exceptions aren't handled when they should be, which in C++ may result in std::terminate, which seems harsh. But these bugs are also possible when errors aren't reported by throwing, the result being that they're neither handled nor propagated up the call stack -- in other words, they're being ignored. I submit that std::terminate is a much preferable outcome in this case (no pun intended).
2017-06-20 10:32 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org> :
On Mon, Jun 19, 2017 at 11:58 PM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
2017-06-20 3:38 GMT+02:00 Emil Dotchevski via Boost < boost@lists.boost.org
:
On Mon, Jun 19, 2017 at 2:41 PM, Andrzej Krzemienski via Boost <
1. I want to separate resource acquisition errors (exceptions are still thrown upon memory exhaustion) from input validation.
Why?
...
I do not even treat validation failure as
"error". But I still like to have the "short exit" behavior of errors.
If it's not an error then it is not an error -- and you should not treat it as such.
2. Some debuggers/IDEs by default engage when any exception is thrown. I do not want this to happen when an incorrect input from the user is obtained.
"By default", so turn off that option.
But after a while I have concluded that it is a good default. Even if I am debugging something else, if I get a "resource-failure", or "logic error" (like invariant broken)
Yes, std::logic_error is another embarrassment for C++. Logic errors by definition leave the program in an undefined state, the last thing you want to do in this case is to start unwinding the stack. You should use an assert instead.
I think I personally agree with you here. However, whenever I try to promote this philosophy, I encounter so much resistance that I am forced to drop it before I have a chance to present rational arguments.
I want to be alerted, and possibly stop what I was debugging before. This default setting is my friend, provided I do not use exceptions for just any "irregularity".
Exceptions are not used in case of "irregularities" but to enforce postconditions. When the program throws, it is in well defined state, working correctly, as if the compiler automatically writes "if" statements to check for errors before it executes any code for which it would be a logic error if control reaches it. The _only_ cost of this goodness is that your code must be exception safe.
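The "compiler automatically writes the if statements for you" point can be sketched concretely. This is an illustrative example (the parsing functions are made up for this sketch, not from any library discussed here): the exception-based version lets every statement after a call assume success, while the manual version spells out the checks the compiler would otherwise generate.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical parse step, used only for illustration.
int parse_digit(char c) {
    if (c < '0' || c > '9')
        throw std::invalid_argument("not a digit");  // enforce the postcondition
    return c - '0';
}

// With exceptions: if parse_digit throws, the following lines never run,
// so every statement may assume the previous ones succeeded.
int parse_two_digits(const std::string& s) {
    int hi = parse_digit(s.at(0));
    int lo = parse_digit(s.at(1));
    return hi * 10 + lo;
}

// The moral equivalent the compiler "writes for you": explicit ifs.
bool parse_two_digits_manual(const std::string& s, int& out) {
    if (s.size() < 2) return false;
    if (s[0] < '0' || s[0] > '9') return false;
    if (s[1] < '0' || s[1] > '9') return false;
    out = (s[0] - '0') * 10 + (s[1] - '0');
    return true;
}
```

Both versions have the same control-flow graph in the executable; the exception-based one just doesn't show it in the source.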
No disagreement here. It is just that my sense tells me preconditions should be defined so that breaking them is rare. So rare that I do not even mind my debugger interrupting me.
Programmers who write debuggers that by default break when a C++ exception is thrown likely do not understand the semantic differences between OS exceptions (e.g. segfaults, which *do* indicate logic errors) and C++ exceptions. Semantically, that's like breaking, by default, every time a C function returns an error code.
Same here. How often do C functions in your program return error codes? If it is "quite often", then this might be a problem of its own.
3. I want validation failures to be handled immediately: one level up the stack. I do not expect or intend to ever propagate them further.
You can catch exceptions one level up if you want to. Right? :)
I can. And it would work. But it just does not feel like the right tool for the job. It would not reflect my intention as clearly as `outcome`.
That's because (in your mind, as you stated) you're not using Outcome to handle "real" errors.
Maybe you are right. Of course "real" and "unreal" are very subjective, but maybe you have got a point. When writing a low-level asynchronous library like AFIO, situations like not being able to open a file or write to it at a given moment should not be treated as a "real error", because at this level, in this context, there is no corresponding postcondition. But still, a dedicated library for representing variant return values is needed, and `variant` is not good enough.
However, if you're only propagating errors one level up, it really doesn't matter how you're handling them. I mean, how much trouble can you get into in this case? It's trivial.
But it reflects my intentions clearly and gives me confidence that the error information will not escape the scope if I forget to put a try-block.
Not really, if you forget to check for errors and call .value() on the outcome object, it'll throw (if I understand the outcome semantics correctly).
I think this is where you do not appreciate the power of static checking. Yes, technically it is possible to just access `o.value()` manually and get a throw or some unintended behavior. But I would consider that an irresponsible use, and compare it to the situation where you use the type `unique_ptr` but call `get()` and `release()` in a way that compromises its safety:

```
unique_ptr<T> factory(X x, Y y)
{
    unique_ptr<T> ans = make_unique<T>(x);
    T* raw = ans.release();
    raw->m = compute_and_maybe_throw(y); // leaks if this throws
    return unique_ptr<T>(raw);
}
```

You might argue that `unique_ptr` is unsafe, or that using `unique_ptr` is dangerous. But that would be false: the mere fact that you can compromise a type's static safety does not mean that the type is not safe. The same goes for outcome:

```
outcome<T> append_Y(T t, Y y);

outcome<T> fun(X x, Y y)
{
    outcome<T> t = make_X(x);
    return append_Y(t, y);      // fails to compile
    return append_Y(TRY(t), y); // ok, and safe
}
```

If you forget to check for the error, the compiler will remind you. Not the runtime, not a call to std::terminate(), but the compiler!
Assuming we agree that it is not
acceptable for error-neutral contexts to kill errors they don't recognize, this is a problem.
Ok. It is just that I have parts of the program that I do not want to be exception neutral by accident.
You mean bugs where exceptions aren't handled when they should be, which in C++ may result in std::terminate, which seems harsh. But these bugs are also possible when errors aren't reported by throwing, the result being that they're neither handled nor propagated up the call stack -- in other words, they're being ignored. I submit that std::terminate is a much preferable outcome in this case (no pun intended).
std::terminate() is better than ignoring exceptions in this case, yes. But having the compiler tell you that you have this problem is even better. And this is what `outcome` offers.
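The TRY idea can be sketched with a toy result type. This is illustrative only -- it is not Boost.Outcome's actual API, and the macro uses a GCC/Clang statement-expression extension -- but it shows the mechanism: the macro unwraps the value on success and returns the error to the caller on failure, so forgetting TRY is a type error the compiler catches.

```cpp
#include <string>
#include <utility>

// Toy stand-in for an outcome-like type; not Boost.Outcome's real interface.
template <class T>
struct result {
    bool ok;
    T val;           // valid only when ok
    std::string err; // valid only when !ok
};

// Minimal TRY-style macro (GCC/Clang statement-expression sketch):
// yields the value on success, early-returns the error on failure.
#define TRY(expr)                                  \
    ({ auto r_ = (expr);                           \
       if (!r_.ok) return {false, {}, r_.err};     \
       std::move(r_.val); })

result<int> parse_int(const std::string& s) {
    if (s.empty() || s[0] < '0' || s[0] > '9')
        return {false, 0, "not a number"};
    return {true, std::stoi(s), ""};
}

result<int> twice_parsed(const std::string& s) {
    int n = TRY(parse_int(s));  // on failure, the error propagates untouched
    return {true, 2 * n, ""};
}
```

Passing the raw `result<int>` where an `int` is expected fails to compile, which is exactly the static reminder described above.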
For rare situations where I need different characteristics of the error-reporting mechanism, I will need to resort to something else, like a dedicated library.
I personally think that libraries are definitely needed when they can deal efficiently with 97% of all use cases, the remaining 3% being not nearly as important. Evidently we disagree.
If I measure how much of my program needs a `variant`, it might be about 5%. And in some programs I do not need it at all. But I appreciate that I have a standard library (well tested, well designed) for it. Some of the Standard Library components I have never used, but I still consider the decision to have them there to be correct.
Your use of outcome is probably fine in this simple case but
out::expected<Range, BadInput> parse_range (const std::string& input)
looks much too close to exception specifications:
Range parse_range(const std::string& input) throw(BadInput)
In some other language, yes: in a language where such a throw specification is enforced statically, like Java.
It's a bad idea. Again: generally, functions (especially library functions) can not know all the different kinds of errors they might need to forward (one way or another) up the call stack. From https://herbsutter.com/2007/01/24/questions-about-exception-specifications/:
"When you go down the Java path, people love exception specifications until they find themselves all too often encouraged, or even forced, to add throws Exception, which immediately renders the exception specification entirely meaningless. (Example: Imagine writing a Java generic that manipulates an arbitrary type T…)"
I agree with your diagnosis. I am not advocating for Java exceptions. My apologies if I have confused you. Regards, &rzej;
On Tue, Jun 20, 2017 at 2:16 AM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
3. I want validation failures to be handled immediately: one level up the stack. I do not expect or intend to ever propagate them further.
You can catch exceptions one level up if you want to. Right? :)
I can. And it would work. But it just does not feel like the right tool for the job. It would not reflect my intention as clearly as `outcome`.
That's because (in your mind, as you stated) you're not using Outcome to handle "real" errors.
Maybe you are right. Of course "real" and "unreal" are very subjective,
"Subjective", as in sometimes the correct design is not obvious or there are competing design goals which make it difficult or impossible for the library to define universal postconditions for a function, as it is often the case in boost::filesystem. In this case, something like Outcome/Noexcept helps, by shifting the responsibility to use the correct postconditions to the user. But this is not typical.
When writing a low-level asynchronous library like AFIO, situations like not being able to open a file or write to it at a given moment should not be treated as a "real error", because at this level, in this context, there is no corresponding postcondition.
Usually there is. The postcondition for a write function is that the data has been successfully submitted to the file system, because it is very rare that the user wouldn't care, in which case he can write a wrapper that ignores all errors.
But still, a dedicated library for representing variant return values is needed, and `variant` is not good enough.
The need is for an error handling library that doesn't use exceptions -- how exactly it works is a matter of design. In my view it is not a good idea to burden return values with having to transport error objects because, as the Outcome review showed, that creates a ton of competing goals, and it is very difficult (perhaps impossible) to address all or even most of them without stripping the outcome<> type of all meaningful error-handling semantics, effectively turning it into variant<>.
```
outcome<T> append_Y(T t, Y y);

outcome<T> fun(X x, Y y)
{
    outcome<T> t = make_X(x);
    return append_Y(t, y);      // fails to compile
    return append_Y(TRY(t), y); // ok, and safe
}
```
With exceptions you would do:

```
T append_Y(T t, Y y);

T fun(X x, Y y)
{
    return append_Y(make_X(x), y);
}
```

The "fails to compile" -- which in the case of Outcome serves the purpose of making sure that you don't forget to check for errors -- is gone, because the compiler checks for errors for you. Literally, in this case Outcome protects you from a logic error that is impossible to make if you use exceptions.
2017-06-20 22:44 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org> :
On Tue, Jun 20, 2017 at 2:16 AM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
3. I want validation failures to be handled immediately: one level up the stack. I do not expect or intend to ever propagate them further.
You can catch exceptions one level up if you want to. Right? :)
I can. And it would work. But it just does not feel like the right tool for the job. It would not reflect my intention as clearly as `outcome`.
That's because (in your mind, as you stated) you're not using Outcome to handle "real" errors.
Maybe you are right. Of course "real" and "unreal" are very subjective,
"Subjective", as in sometimes the correct design is not obvious or there are competing design goals which make it difficult or impossible for the library to define universal postconditions for a function, as it is often the case in boost::filesystem. In this case, something like Outcome/Noexcept helps, by shifting the responsibility to use the correct postconditions to the user. But this is not typical.
When writing a low-level asynchronous library like AFIO, situations like not being able to open a file or write to it at a given moment should not be treated as a "real error", because at this level, in this context, there is no corresponding postcondition.
Usually there is. The postcondition for a write function is that the data has been successfully submitted to the file system, because it is very rare that the user wouldn't care, in which case he can write a wrapper that ignores all errors.
Yes, users would care, and for them it makes sense to expose the interface with exceptions. But the implementer of the guts might consider it a "regular" situation and not engage postconditions.
But still, a dedicated library for representing variant return values is needed, and `variant` is not good enough.
The need is for an error handling library that doesn't use exceptions -- how exactly it works is a matter of design. In my view it is not a good idea to burden return values with having to transport error objects because, as the Outcome review showed, that creates a ton of competing goals, and it is very difficult (perhaps impossible) to address all or even most of them without stripping the outcome<> type of all meaningful error-handling semantics, effectively turning it into variant<>.
```
outcome<T> append_Y(T t, Y y);

outcome<T> fun(X x, Y y)
{
    outcome<T> t = make_X(x);
    return append_Y(t, y);      // fails to compile
    return append_Y(TRY(t), y); // ok, and safe
}
```
With exceptions you would do:

```
T append_Y(T t, Y y);

T fun(X x, Y y)
{
    return append_Y(make_X(x), y);
}
```

The "fails to compile" -- which in the case of Outcome serves the purpose of making sure that you don't forget to check for errors -- is gone, because the compiler checks for errors for you. Literally, in this case Outcome protects you from a logic error that is impossible to make if you use exceptions.
Yes: exceptions are well designed. And the decision to cancel, by default, any operation that depends on the currently failed one is a reasonable one, and in fact desired. Also, the decision made by `outcome`, when you are in the 3% area where explicit control paths are preferred, to fail to compile by default is acceptable, and in fact desired. In contrast, the behavior of Boost.Noexcept is just to let the dependent functions execute. This is my initial concern. In the case of exceptions you can have both neutrality and a good default action upon throw. In the case of Boost.Noexcept, in order to provide the neutrality, the default action is not that good anymore. Actually, how can you write exception-neutral code with Boost.Nowide? Regards, &rzej;
On Tue, Jun 20, 2017 at 11:31 PM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
Actually, how can you write exception-neutral code with Boost.Nowide?
Noexcept, right? :) In error-neutral contexts, you simply return throw_() to propagate any error from lower level functions.
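The pattern can be sketched with a self-contained stand-in. This is not the real Noexcept API -- the names and the thread-local error slot below are a made-up illustration of the idea: throw_() produces a "failure" return value without touching the stored error, so intermediate callers stay neutral.

```cpp
#include <string>

// Illustrative stand-in, NOT the Noexcept library: the error object lives
// in a thread-local slot, separate from the return value.
thread_local std::string current_error;

struct failure {};                       // what throw_() returns
inline failure throw_() { return {}; }   // does not touch current_error

template <class T>
struct ret {
    bool ok;
    T val;
    ret(T v) : ok(true), val(std::move(v)) {}
    ret(failure) : ok(false), val() {}
    explicit operator bool() const { return ok; }
};

ret<int> lowest(int x) {
    if (x < 0) { current_error = "negative input"; return throw_(); }
    return x * 2;
}

// Error-neutral middle layer: it neither inspects nor translates the error,
// it just forwards the failure upward with throw_().
ret<int> middle(int x) {
    if (auto r = lowest(x)) return r.val + 1;
    else return throw_();
}
```

The point is that middle() stays one line per propagation and never names, copies, or translates the error object.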
2017-06-21 8:54 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org>:
On Tue, Jun 20, 2017 at 11:31 PM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
Actually, how can you write exception-neutral code with Boost.Nowide?
Noexcept, right? :)
:) Sorry. So many things going on in Boost at the moment.
In error-neutral contexts, you simply return throw_() to propagate any error from lower level functions.
But does this not compromise exception neutrality? That you have to specify in each function that you want to just pass the exception up? Regards, &rzej;
On Wed, Jun 21, 2017 at 12:43 AM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
In error-neutral contexts, you simply return throw_() to propagate any error from lower level functions.
But does this not compromise exception neutrality? That you have to specify in each function that you want to just pass the exception up?
Compared to what? Is there a better option when you can't throw?
2017-06-29 8:14 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org>:
On Wed, Jun 21, 2017 at 12:43 AM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
In error-neutral contexts, you simply return throw_() to propagate any error from lower level functions.
But does this not compromise exception neutrality? That you have to specify in each function that you want to just pass the exception up?
Compared to what? Is there a better option when you can't throw?
My observation is, if you cannot throw then exception neutrality is not achievable. Here, by "exception neutrality" I mean what the function std::qsort does: it does not mention `try` or `catch` at all, it can be compiled with a C compiler, and yet it can propagate exceptions thrown by the callback. Regards, &rzej;
On 29/06/2017 18:34, Andrzej Krzemienski wrote:
Here, by "exception neutrality" I mean what function std::qsort is doing. It does not mention `try` or `catch` by any means, it can be compiled with C compiler, and yet it can propagate exceptions thrown by the callback.
That doesn't really follow. If code is C-compilable then it cannot have destructors, and thus can only be exception safe if it does not perform any memory allocation or any other resource acquisition (e.g. opening files).
If you imagine any other pure C algorithm that calls a callback, unless it explicitly states that it's C++-exception-safe you shouldn't assume that you can safely throw an exception from that callback. (In fact it's generally a bad idea to assume that of a C++ algorithm too, unless explicitly stated.)
If you happen to know that a given implementation has no side effects outside of the stack (and no destructors needed), then you could use setjmp/longjmp instead of exceptions, which at least conforms to a C contract. But that is not really any different from using exceptions in the first place. (Perhaps somewhat faster and less safe, since it skips the stack unwinding entirely.)
(In the case of std::qsort specifically, it explicitly provides one overload taking a C++ linkage callback and one taking a C-linkage callback. Either one might be exception-safe, with the C++ callback perhaps slightly more likely, but in the absence of explicit guarantees you should probably be hesitant about it in both cases.)
If you're referring only to exception propagation rather than safety (which kind of defeats the point of using exceptions in the first place), then no return-based solution could ever meet that; only exceptions themselves or a setjmp/longjmp alternative (which is obviously less safe, especially in C++ code), or some other compiler extension that would end up much like exceptions anyway.
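The setjmp/longjmp alternative mentioned above can be sketched as follows. This is illustrative only: longjmp-ing out of a qsort comparison function is formally unspecified by the standard, and it is tolerable here only because nothing on the skipped frames owns resources (no destructors, no allocations), which is exactly the constraint described in the text.

```cpp
#include <csetjmp>
#include <cstdlib>

// Jump target for bailing out of the C callback.
static std::jmp_buf bail;

// C-linkage comparator that "throws" across qsort's C frames with longjmp
// when it sees input it refuses to handle (a negative number, here).
extern "C" int compare_nonnegative(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    if (x < 0 || y < 0)
        std::longjmp(bail, 1);
    return (x > y) - (x < y);
}

// Returns true on success, false if the callback bailed out.
// Safe only because no skipped frame needs unwinding.
bool sort_nonnegative(int* data, std::size_t n) {
    if (setjmp(bail))
        return false;  // longjmp lands here; qsort's frames are simply gone
    std::qsort(data, n, sizeof(int), compare_nonnegative);
    return true;
}
```

Note how this reproduces exception propagation (non-local exit through frames that never mention the error) without any of exception safety.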
On Wed, Jun 28, 2017 at 11:34 PM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
2017-06-29 8:14 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org
:
On Wed, Jun 21, 2017 at 12:43 AM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
In error-neutral contexts, you simply return throw_() to propagate any error from lower level functions.
But does this not compromise exception neutrality? That you have to specify in each function that you want to just pass the exception up?
Compared to what? Is there a better option when you can't throw?
My observation is, if you cannot throw then exception neutrality is not achievable.
In functions that propagate errors in the return value (very common both in C and in -fno-exceptions C++), each caller has to take the returned error code, examine it, then possibly return a copy of it -- but it is also common to return another error instead, which is known as error translation (a bad idea). Functions which can't handle the error should instead just propagate it to the caller. That is what it means to be neutral: "I don't know what this is, someone else please deal with it". In Noexcept, neutrality can be expressed explicitly: you simply return throw_(), which doesn't touch the error object at all.
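The difference between neutral propagation and error translation can be sketched as follows (the error codes and function names are illustrative, not from any real library):

```cpp
#include <string>

// Illustrative error codes.
enum class errc { ok, not_found, io_error };

// Lowest layer: reports a specific failure.
errc read_config_line(const std::string& path, std::string& out) {
    if (path.empty()) return errc::not_found;
    out = "key=value";  // pretend we read this from the file
    return errc::ok;
}

// Neutral propagation: forward the code untouched, so the original
// cause survives all the way to whoever can handle it.
errc load_settings(const std::string& path, std::string& out) {
    errc e = read_config_line(path, out);
    if (e != errc::ok) return e;
    return errc::ok;
}

// Error translation (the bad idea mentioned above): the caller now sees
// a different, vaguer error, and the original cause is destroyed.
errc load_settings_translated(const std::string& path, std::string& out) {
    errc e = read_config_line(path, out);
    if (e != errc::ok) return errc::io_error;
    return errc::ok;
}
```

The manual `if (e != errc::ok) return e;` in every caller is exactly the boilerplate that throw_() (or exceptions) removes.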
Here, by "exception neutrality" I mean what function std::qsort is doing.
I would not bet that qsort is exception-safe on all platforms. Emil
2017-06-29 19:26 GMT+02:00 Emil Dotchevski via Boost <boost@lists.boost.org> :
On Wed, Jun 28, 2017 at 11:34 PM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
2017-06-29 8:14 GMT+02:00 Emil Dotchevski via Boost < boost@lists.boost.org
:
On Wed, Jun 21, 2017 at 12:43 AM, Andrzej Krzemienski via Boost < boost@lists.boost.org> wrote:
In error-neutral contexts, you simply return throw_() to propagate any error from lower level functions.
But does this not compromise exception neutrality? That you have to specify in each function that you want to just pass the exception up?
Compared to what? Is there a better option when you can't throw?
My observation is, if you cannot throw then exception neutrality is not achievable.
In functions that propagate errors in the return value (very common both in C and in -fno-exceptions C++), each caller has to take the returned error code, examine it, then possibly return a copy of it -- but it is also common to return another error instead, which is known as error translation (a bad idea).
Instead, functions which can't handle the error should just propagate it to the caller. That is what it means to be neutral: "I don't know what this is, someone else please deal with it". In Noexcept neutrality can be expressed explicitly: you simply return throw_(), which doesn't touch the error object at all.
Ok, I get it now. IOW, by "neutrality" you mean "I am not handling or changing the type of an exception unless I am sure I recognize this condition", right? Regards, &rzej;
On Tue, Jun 20, 2017 at 01:32:41AM -0700, Emil Dotchevski via Boost wrote:
Yes, std::logic_error is another embarrassment for C++. Logic errors by definition leave the program in an undefined state, the last thing you want to do in this case is to start unwinding the stack. You should use an assert instead.
That's true enough for applications, less so for libraries. A library may simply be re-initialized upon throwing a logic error (if it permits), and the using application may try to use the library's functionality again with modified input data. That won't, of course, fix the logic error, but it may circumvent it and produce useful application output. Python effectively achieves this by having assert raise a catchable AssertionError.
Exceptions are not used in case of "irregularities" but to enforce postconditions. When the program throws, it is in well defined state, working correctly, as if the compiler automatically writes "if" statements to check for errors before it executes any code for which it would be a logic error if control reaches it. The _only_ cost of this goodness is that your code must be exception safe.
Programmers who write debuggers that by default break when a C++ exception is thrown likely do not understand the semantic differences between OS exceptions (e.g. segfaults, which *do* indicate logic errors) and C++ exceptions. Semantically, that's like breaking, by default, every time a C function returns an error code.
Well, no. C functions return error codes as a way to describe exceptional conditions (e.g. ENOMEM) and user error (e.g. EINVAL), as well as to inform the user of what they should do (e.g. EAGAIN). (I prefer to treat them as status codes for this reason.) Exceptions can only cover the first two cases, and only the first is non-recoverable. Breaking in those cases makes more sense than breaking when C functions return error codes precisely because of the likes of EAGAIN. Effectively, C++ exceptions occupy the middle ground between unrecoverable error states and status codes, i.e. situations that users may wish to recover from, but that should lead to program termination if they choose not to. One *good* example is throwing a logic_error in the default case of a switch statement (specifically a domain_error, in this case). Not in every situation, of course. I hope that makes sense, Jens -- 1.21 Jiggabytes of memory ought to be enough for anybody.
On 20/06/2017 20:32, Emil Dotchevski wrote:
On Mon, Jun 19, 2017 at 11:58 PM, Andrzej Krzemienski wrote:
I want to be alerted, and possibly stop what I was debugging before. This default setting is my friend, provided I do not use exceptions for just any "irregularity".
Exceptions are not used in case of "irregularities" but to enforce postconditions. When the program throws, it is in well defined state, working correctly, as if the compiler automatically writes "if" statements to check for errors before it executes any code for which it would be a logic error if control reaches it. The _only_ cost of this goodness is that your code must be exception safe.
Programmers who write debuggers that by default break when a C++ exception is thrown likely do not understand the semantic differences between OS exceptions (e.g. segfaults, which *do* indicate logic errors) and C++ exceptions. Semantically, that's like breaking, by default, every time a C function returns an error code.
While I don't disagree with that, and I use a debugger which by default does not pause on caught exceptions, I tend to run it configured to pause on all thrown exceptions regardless.
This is because in the codebases I tend to work with, exceptions are rare and unusual and generally indicative of a serious problem in either the code or the input, and thus they ought to be investigated whenever they happen, because they're not supposed to ever happen. As a result, code that throws exceptions for other reasons irritates me, and I try to avoid using it.
Even where I might normally run without that enabled, at some point I find myself needing to track down a logged exception, so I turn it on, and then get irritated if exceptions other than the one I was looking for turn up. So again it's preferable to not have them happen unless they're serious.
And of course there's the old adage about not using exceptions for "normal" control flow (whatever normal means to your method).
On Tue, Jun 20, 2017 at 4:41 PM, Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 20/06/2017 20:32, Emil Dotchevski wrote:
On Mon, Jun 19, 2017 at 11:58 PM, Andrzej Krzemienski wrote:
I want to be alerted, and possibly stop what I was debugging before. This default setting is my friend, provided I do not use exceptions for just any "irregularity".
Exceptions are not used in case of "irregularities" but to enforce postconditions. When the program throws, it is in well defined state, working correctly, as if the compiler automatically writes "if" statements to check for errors before it executes any code for which it would be a logic error if control reaches it. The _only_ cost of this goodness is that your code must be exception safe.
Programmers who write debuggers that by default break when a C++ exception is thrown likely do not understand the semantic differences between OS exceptions (e.g. segfaults, which *do* indicate logic errors) and C++ exceptions. Semantically, that's like breaking, by default, every time a C function returns an error code.
While I don't disagree with that, and I use a debugger which by default does not pause on caught exceptions, I tend to run it configured to pause on all thrown exceptions regardless.
This is because in the codebases I tend to work with, exceptions are rare and unusual and generally indicative of a serious problem in either the code or the input, and thus they ought to be investigated whenever they happen, because they're not supposed to ever happen.
As a result, code that throws exceptions for other reasons irritates me, and I try to avoid using it.
The "serious problem" classification is not exactly tangible. :) And, presumably you need to handle other errors that aren't "serious problems", and you don't want to use exceptions in that case (why?), so what do you do? In my book, if returning from a function would leave the caller in a state that requires it to abandon what it was doing and in turn return to its caller, there is seldom a reason to insist on writing that if statement manually; and this rarely requires special attention in the debugger. Even where I might normally run without that enabled, at some point I find
myself needing to track down a logged exception, so I turn it on, and then get irritated if exceptions other than the one I was looking for turn up. So again it's preferable to not have them happen unless they're serious.
I'd say that it's preferable to use a debugger which can break on some exception types but not others. :)
On 6/14/2017 3:42 PM, Ion Gaztañaga via Boost wrote:
On 13/06/2017 23:44, Richard Hodges via Boost wrote:
So why do developers continue to perpetuate the idea that the use of exceptions impacts performance or deterministic timing of code? It does not.
It is no slower and no less deterministic than checking a return code or discriminated union. In fact that is exactly what implementations boil down to in compiled code, with the added benefit of being able to signal the failure to create an object in a way that makes it impossible to use the failed object accidentally.
As I say, being able to return partial success is useful.
Seeking to remove exceptions on performance grounds is a nonsense.
Maybe. If you are never going to catch an exception, just as many platforms never catch bad_alloc, then log+terminate can be a more efficient approach. But in general, the performance benefits of disabling exceptions are tied to specific domains and conditions, especially today, as exception code has improved a lot in compilers. Program size benefits, at least when exceptional situations are handled through abort(), can be measurable; additionally, a polymorphic hierarchy of exception types adds non-optimizable (by the linker) code and RTTI to your executable.
It adds non-determinism because there is no upper bound when an exception is thrown.
Please explain what this is supposed to mean, because it does not mean anything to me.
In my opinion, at least after writing mission critical software, the main problem with exceptions is the non-explicit, potentially infinite set of exit paths (as each thrown type can require a different action when caught) added to every function call, worsened by the lack of static enforcement of what can be thrown. For critical software, every function that could throw should be tested against all possible exceptions thrown by that function or any of its dependencies, and knowing what types the dependencies of dependencies throw is an impossible mission.
A lot of programmers don't understand exception safety guarantees and how to write exception-safe code. It is not simple because it is not explicit.
IMHO return types, when handled, make error handling uglier, more explicit, and maybe slower. You get many more paths and branches; they really exist in your executable code, but exception handling makes them invisible in your source code and, in consequence, dangerous. Just look at the additional branches gcov shows when you call a function that can possibly throw. It's very hard to test every error and throwable type, whereas an int return type is, for most programmers, easier to handle and understand.
Ion
On 14/06/2017 22:35, Edward Diener via Boost wrote:
It adds non-determinism because there is no upper bound when an exception is thrown.
Please explain what this is supposed to mean, because it does not mean anything to me.
It's not easy to know, and there is no way to guarantee, how much time the run-time plus your code needs to propagate the exception and execute the landing code. With return codes you can measure it. For critical systems this is very important. Ion
On Thu, Jun 15, 2017 at 11:45 AM, Ion Gaztañaga via Boost < boost@lists.boost.org> wrote:
On 14/06/2017 22:35, Edward Diener via Boost wrote:
It adds non-determinism because there is no upper bound when an exception is thrown.
Please explain what this is supposed to mean, because it does not mean anything to me.
It's not easy to know, and there is no way to guarantee, how much time the run-time plus your code needs to propagate the exception and execute the landing code. With return codes you can measure it.
You can *measure* it with exceptions too. The unpredictability is because you can't know *without* measuring how much time the C++ runtime needs to unwind the stack in each case and on each platform. However, this is true even if you don't use exceptions, because it depends heavily on function inlining: if functions are inlined there is nothing to unwind (with or without exceptions), and the difference in speed can be orders of magnitude. This means that even if you don't use exceptions, the only way to know how much time it'll take for an error code to bubble up the call stack is to measure it.
On 6/15/2017 2:45 PM, Ion Gaztañaga via Boost wrote:
On 14/06/2017 22:35, Edward Diener via Boost wrote:
It adds non-determinism because there is no upper bound when an exception is thrown.
Please explain what this is supposed to mean, because it does not mean anything to me.
It's not easy to know, and there is no way to guarantee, how much time the run-time plus your code needs to propagate the exception and execute the landing code. With return codes you can measure it. For critical systems this is very important.
Well that is some "translation" from your original remark above <g>. You can obviously measure such timings for a given exception just as you can measure such timings for return codes. There is nothing non-deterministic about either.
Ion
On Wed, Jun 14, 2017 at 12:42 PM, Ion Gaztañaga via Boost < boost@lists.boost.org> wrote:
My opinion, at least after writing mission-critical software, is that the main problem with exceptions is the non-explicit, potentially infinite set of exit paths (as each thrown type can require a different action when catching) they add to every function call, worsened by the lack of static enforcement of what can be thrown.
When an error occurs (assuming you can't just bail out), it must get passed up the call stack, together with relevant data that is captured at the time it's detected, but also present in functions up the call stack, all the way to a function that can actually deal with the problem. If you don't use exceptions, then you're writing a lot of if( !success ) return error, which is prone to errors. As well, in environments where error handling doesn't use exceptions, the code tends not to use RAII consistently, so in addition to returning errors you need to manually release resources, which is also prone to errors (say hello to leaks).
Semantically, however, exception handling only 1) writes the ifs for you, and 2) forces you to use RAII and to write exception-safe code (which is a good idea anyway). With or without exceptions, the error handling paths, as well as the work that must be done in case of errors (freeing resources, etc.), are identical.
The supposed "unpredictability" of exceptions is not semantic but syntactic. Depending on the compiler, and depending on whether a function got inlined or not (even functions that are exception-neutral, i.e. neither throw nor catch exceptions), there might be overhead which sometimes may be significant, not to mention I have never seen a compiler that optimizes out throw statements.
But even this is probably just theoretical. I've been asking for hard data that shows that in a given use case exception handling adds too much overhead. I keep hearing that such cases exist (and they probably do), but I'm yet to see a single one.
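To make the two points above concrete, here is a minimal sketch (hypothetical names, not from any library) contrasting manual error propagation plus manual cleanup with the same function written with RAII, where every early return releases the resource automatically:

```cpp
#include <cstdio>
#include <memory>
#include <string>

enum class err { ok, open_failed, read_failed };

// Without RAII: every failure path must release resources by hand,
// and forgetting one fclose() on one early return leaks the handle.
err load_manual( char const * path, std::string & out )
{
    std::FILE * f = std::fopen( path, "r" );
    if( !f )
        return err::open_failed; // nothing to clean up yet
    char buf[256];
    if( !std::fgets( buf, sizeof buf, f ) )
    {
        std::fclose( f ); // easy to forget on each early return
        return err::read_failed;
    }
    out = buf;
    std::fclose( f );
    return err::ok;
}

// With RAII: the unique_ptr closes the file on every exit path,
// whether we return an error code or an exception propagates through.
err load_raii( char const * path, std::string & out )
{
    std::unique_ptr<std::FILE, int (*)( std::FILE * )> f(
        std::fopen( path, "r" ), &std::fclose );
    if( !f )
        return err::open_failed;
    char buf[256];
    if( !std::fgets( buf, sizeof buf, f.get() ) )
        return err::read_failed;
    out = buf;
    return err::ok;
}
```

Either way the error handling paths are the same; RAII just removes the manual cleanup from each of them.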
On 14/06/2017 23:01, Emil Dotchevski via Boost wrote:
The supposed "unpredictability" of exceptions is not semantic but syntactic. Depending on the compiler, and depending on whether a function got inlined or not (even functions that are exception-neutral, i.e. neither throw nor catch exceptions), there might be overhead which sometimes may be significant, not to mention I have never seen a compiler that optimizes out throw statements.
But even this is probably just theoretical. I've been asking for hard data that shows that in a given use case exception handling adds too much overhead. I keep hearing that such cases exist (and they probably do) but I'm yet to see a single one.
I don't think there is a lot of overhead, although there is some. The C++ performance report discusses it:
http://www.open-std.org/jtc1/sc22/wg21/docs/TR18015.pdf
Then there are random posts about the issue, but I don't think there is a serious analysis:
http://www.gamearchitect.net/Articles/ExceptionsAndErrorCodes.html
Ion
2017-06-13 21:25 GMT+02:00 Richard Hodges via Boost <boost@lists.boost.org>:
Does anyone actually have a measurable example of real code in which the unexceptional path induces any more execution overhead than an optional/variant/outcome return type?
Because when I look at the code generated by gcc et al., I am convinced that you're solving a non-existent problem when seeking to replace exceptions.
I think the expectation that `outcome<>` tries to address is somewhat different: that entering the exceptional path should be no more expensive than entering the non-exceptional path. They do not even have to be fast. They just need to be guaranteed to be the same, so that you have a predictable (not necessarily super-small) latency, so that you can guarantee the worst-case performance. This is what I understood from the Boost.Outcome review. Regards, &rzej;
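To illustrate the point, a minimal sketch (hypothetical, not Outcome's actual API) of a result type where the error path and the success path execute the same kind of code, a plain tagged return, so neither involves stack unwinding and both are equally measurable:

```cpp
// A deliberately tiny value-or-error carrier: error == 0 means success.
struct result
{
    int value;
    int error;
};

result divide( int a, int b )
{
    if( b == 0 )
        return result{ 0, 1 }; // error path: just a return, no unwinding
    return result{ a / b, 0 }; // success path: same shape, same cost class
}
```

With this shape, worst-case latency of the failure path can be bounded the same way as the success path, which is the guarantee described above.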
On Tue, Jun 13, 2017 at 11:01 AM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Emil Dotchevski wrote:
If error codes are treated as "the error", then the error domain is limited to a single function. Consider these two functions:
int f1(....); //returns 0 on success, 1-f1_error1, 2-f1_error2
int f2(....); //returns 0 on success, 1-f2_error1, 2-f2_error2
If f2 calls f1 and the error is communicated by an error code, f2 _must_ translate the error condition from the domain of f1 errors to the domain of f2 errors. And this must be done at every level, which introduces many points in the code where subtle errors may occur, and that is in error handling code which is very difficult to test and debug.
That's exactly the problem std::error_code solves, as it's a (code, domain) pair, so there's no need to translate.
That presumes that ENOENT represents the same _error_ when returned from two different functions. Generally, it does not. The correct strategy in C++ is to throw different types to indicate different errors, even when both end up carrying the same ENOENT. So it is critical to decouple the error code (std or otherwise) from _what_ went wrong, and if you don't, you're butchering the ability to write error-neutral functions, which in practice means translating error codes from one domain to another, at every level, which is prone to errors.
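A minimal sketch of the strategy described above (hypothetical exception types, chosen only for illustration): two different types both carry ENOENT, and a handler distinguishes them by type rather than by the code value, so no per-level translation is needed.

```cpp
#include <cerrno>
#include <stdexcept>

// Two distinct errors that happen to share the same underlying code.
struct config_file_missing : std::runtime_error
{
    int code;
    explicit config_file_missing( int c ):
        std::runtime_error( "config file missing" ), code( c ) { }
};

struct plugin_not_found : std::runtime_error
{
    int code;
    explicit plugin_not_found( int c ):
        std::runtime_error( "plugin not found" ), code( c ) { }
};

int classify()
{
    try
    {
        throw plugin_not_found( ENOENT ); // same ENOENT, different error
    }
    catch( config_file_missing const & )
    {
        return 1; // would recover by generating a default config
    }
    catch( plugin_not_found const & )
    {
        return 2; // would recover by disabling the plugin
    }
}
```

Error-neutral functions between the throw and the catch need no knowledge of either type.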
Emil Dotchevski wrote:
If error codes are treated as "the error", then the error domain is limited to a single function. Consider these two functions:
int f1(....); //returns 0 on success, 1-f1_error1, 2-f1_error2
int f2(....); //returns 0 on success, 1-f2_error1, 2-f2_error2
If f2 calls f1 and the error is communicated by an error code, f2 _must_ translate the error condition from the domain of f1 errors to the domain of f2 errors. And this must be done at every level, which introduces many points in the code where subtle errors may occur, and that is in error handling code which is very difficult to test and debug.
Peter Dimov replied:
That's exactly the problem std::error_code solves, as it's a (code, domain) pair, so there's no need to translate.
Emil Dotchevski responded:
That presumes that ENOENT represents the same _error_ when returned from two different functions. Generally, it does not. The correct strategy in C++ is to throw different types to indicate different errors, even when both end up carrying the same ENOENT.
So it is critical to decouple the error code (std or otherwise) from _what_ went wrong, and if you don't, you're butchering the ability to write error-neutral functions, which in practice means translating error codes from one domain to another, at every level, which is prone to errors.
Agree with your concern that an 'ENOENT' value may semantically mean different things in different contexts (or not, depending on a specific cross-domain mapping). However, the 'std::error_code' implementation is intended to type-erase those semantically-different meanings into the same object type, while still permitting the "difference-checking" that would otherwise be performed with different types, such as your example of throwing two types to represent two different domains of errors.

class f1_domain : public std::error_category { ... };
class f2_domain : public std::error_category { ... };

std::error_code f1(...) { ...; return std::error_code( ENOENT, get_f1_domain() ); }
std::error_code f2(...) { ...; return std::error_code( ENOENT, get_f2_domain() ); }

{
  ...
  if( f1() == f2() ) // ...can be true-or-false...
  {
    ...
  }
}

(Note that 'std::error_code' is constructed directly from a value and a category reference; there is no 'make_error_code' overload taking a category.) The "type-erasure" within 'std::error_code' is provided by the cross-domain mapping implemented in 'f1_domain' and/or 'f2_domain' (by overriding virtual functions from 'std::error_category'). So, agree with your assertion that throwing different types will disambiguate; but 'std::error_code' attempts to similarly keep that different-domain distinction, through type-erasure (provided by 'std::error_category').
On 06/13/2017 12:38 AM, Emil Dotchevski via Boost wrote:
I'll spell it out: Noexcept + optional<> ≈ Outcome
That approximation only holds for function-calling scenarios. As I pointed out in my review, there are other use cases for Outcome. Noexcept is a bad match for these use cases, because it transports errors "out-of-band" like errno.
One use case is to pass a value-or-error between threads. We already have one outcome-like feature for this: promise-future. If we want to use a different mechanism to pass the value-or-error between threads, then Outcome offers a natural solution.
Another use case is to pass a value-or-error via a queue. The queue may contain several outstanding errors. In the case of Outcome, we simply push the value-or-error directly to the queue.
On Tue, Jun 13, 2017 at 1:01 PM, Bjorn Reese via Boost < boost@lists.boost.org> wrote:
On 06/13/2017 12:38 AM, Emil Dotchevski via Boost wrote:
I'll spell it out: Noexcept + optional<> ≈ Outcome
That approximation only holds for function-calling scenarios. As I pointed out in my review, there are other use cases for Outcome. Noexcept is a bad match for these use cases, because it transports errors "out-of-band" like errno.
One use case is to pass a value-or-error between threads. We already have one outcome-like feature for this: promise-future. If we want to use a different mechanism to pass the value-or-error between threads, then Outcome offers a natural solution.
Another use case is to pass a value-or-error via a queue. The queue may contain several outstanding errors. In the case of Outcome, we simply push the value-or-error directly to the queue.
Yes, though I consider this a separate issue. One is, what's the best way to transport errors across error-neutral functions within a single thread, the other is what to do when, somewhere way up the stack, we get a successful value or an error. At that point it's trivial to build a variant<T,std::exception_ptr> or equivalent, you don't need a lib for that.
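For reference, a minimal sketch (hypothetical helper, not from either library) of the promise-future mechanism mentioned above: the worker thread sets either a value or a std::exception_ptr, and the consumer's f.get() rethrows the error on its own thread.

```cpp
#include <future>
#include <stdexcept>
#include <string>
#include <thread>

// Runs a worker thread that either succeeds or throws; the result or the
// exception travels back through the promise/future pair.
std::string run_worker( bool fail )
{
    std::promise<std::string> p;
    std::future<std::string> f = p.get_future();
    std::thread t( [&p, fail]
    {
        try
        {
            if( fail )
                throw std::runtime_error( "worker failed" );
            p.set_value( "ok" );
        }
        catch( ... )
        {
            p.set_exception( std::current_exception() );
        }
    } );
    t.join();
    try
    {
        return f.get(); // rethrows the worker's exception, if any
    }
    catch( std::runtime_error const & e )
    {
        return e.what();
    }
}
```

The value-or-error pairing is built into the promise: set_value on success, set_exception on failure, with no library beyond the standard one.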
Le 14/06/2017 à 00:05, Emil Dotchevski via Boost a écrit :
On Tue, Jun 13, 2017 at 1:01 PM, Bjorn Reese via Boost < boost@lists.boost.org> wrote:
On 06/13/2017 12:38 AM, Emil Dotchevski via Boost wrote:
I'll spell it out: Noexcept + optional<> ≈ Outcome
That approximation only holds for function-calling scenarios. As I pointed out in my review, there are other use cases for Outcome. Noexcept is a bad match for these use cases, because it transports errors "out-of-band" like errno.
One use case is to pass a value-or-error between threads. We already have one outcome-like feature for this: promise-future. If we want to use a different mechanism to pass the value-or-error between threads, then Outcome offers a natural solution.
Another use case is to pass a value-or-error via a queue. The queue may contain several outstanding errors. In the case of Outcome, we simply push the value-or-error directly to the queue.
Yes, though I consider this a separate issue. One is, what's the best way to transport errors across error-neutral functions within a single thread, the other is what to do when, somewhere way up the stack, we get a successful value or an error. At that point it's trivial to build a variant<T,std::exception_ptr> or equivalent, you don't need a lib for that.
+1 Vicente
On 12/06/2017 20:22, Emil Dotchevski wrote:
Noexcept is a new C++11 library that implements a different approach to solving the same problem. Any feedback is welcome.
From the docs:
WARNING If unhandled errors remain at the time the current thread terminates, Noexcept calls abort(). Use catch_<> to handle any error regardless of its type.
Shouldn't it call std::terminate() instead?
On Mon, Jun 12, 2017 at 5:08 PM, Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 12/06/2017 20:22, Emil Dotchevski wrote:
Noexcept is a new C++11 library that implements a different approach to solving the same problem. Any feedback is welcome.
From the docs:
WARNING If unhandled errors remain at the time the current thread terminates, Noexcept calls abort(). Use catch_<> to handle any error regardless of its type.
Shouldn't it call std::terminate() instead?
I think abort() is better because it would be strange for a program that doesn't use exceptions to indicate an unhandled exception. :) But maybe it would be more practical to call terminate(), I'm not sure.
On 13/06/2017 12:42, Emil Dotchevski wrote:
WARNING If unhandled errors remain at the time the current thread terminates, Noexcept calls abort(). Use catch_<> to handle any error regardless of its type.
Shouldn't it call std::terminate() instead?
I think abort() is better because it would be strange for a program that doesn't use exceptions to indicate an unhandled exception. :) But maybe it would be more practical to call terminate(), I'm not sure.
std::terminate() isn't just for unhandled exceptions, though (eg. it's also called for unjoined threads). And this still kind of is an unhandled exception anyway. Using std::terminate() allows hooking std::set_terminate() to trigger logging or minidump/coredump, which can be useful. (abort() can be similarly intercepted via SIGABRT, but it's a more restricted context.) Or to put it another way: std::terminate() is C++; abort() is C. It seems wrong to use the C-based termination method in a C++ context.
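To illustrate the hook described above, a small sketch (hypothetical handler name; the log message is a placeholder for real logging or minidump code):

```cpp
#include <cstdio>
#include <cstdlib>
#include <exception>

// A terminate handler that could log or write a crash dump before the
// process dies; std::terminate() will invoke it, abort() will not.
void on_terminate()
{
    std::fputs( "fatal: terminating -- write crash log here\n", stderr );
    std::abort(); // a terminate handler must not return
}

// Installs the handler and returns the previous one.
std::terminate_handler install_handler()
{
    return std::set_terminate( &on_terminate );
}
```

Since C++11, std::get_terminate() can be used to inspect (or restore) the currently installed handler.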
On Mon, Jun 12, 2017 at 7:19 PM, Gavin Lambert via Boost < boost@lists.boost.org> wrote:
On 13/06/2017 12:42, Emil Dotchevski wrote:
WARNING
If unhandled errors remain at the time the current thread terminates, Noexcept calls abort(). Use catch_<> to handle any error regardless of its type.
Shouldn't it call std::terminate() instead?
I think abort() is better because it would be strange for a program that doesn't use exceptions to indicate an unhandled exception. :) But maybe it would be more practical to call terminate(), I'm not sure.
std::terminate() isn't just for unhandled exceptions, though (eg. it's also called for unjoined threads). And this still kind of is an unhandled exception anyway.
Using std::terminate() allows hooking std::set_terminate() to trigger logging or minidump/coredump, which can be useful. (abort() can be similarly intercepted via SIGABRT, but it's a more restricted context.)
Or to put it another way: std::terminate() is C++; abort() is C. It seems wrong to use the C-based termination method in a C++ context.
Okay, thanks. I'll change it.
participants (12)
- Andrzej Krzemienski
- Bjorn Reese
- charleyb123 .
- Edward Diener
- Emil Dotchevski
- Gavin Lambert
- Ion Gaztañaga
- Jens Finkhäuser
- Niall Douglas
- Peter Dimov
- Richard Hodges
- Vicente J. Botet Escriba