Sat, 2 Mar 2019 at 10:36 Andrey Semashev via Boost
On 3/2/19 11:56 AM, Andrzej Krzemienski via Boost wrote:
Sat, 2 Mar 2019 at 07:35 Emil Dotchevski via Boost <boost@lists.boost.org> wrote:
On Fri, Mar 1, 2019 at 9:37 PM Andrzej Krzemienski via Boost <boost@lists.boost.org> wrote:
My hypothesis is that reading valid-but-unspecified can only happen in a buggy program in an unintended path.
Running out of memory, or out of some other resource, does not indicate a bug. In response, under the basic exception guarantee, you may get a state which I'm saying shouldn't be merely "destructible" but also valid. For example, if this was a vector<T>, it shouldn't explode if you call .size(), or if you iterate over whatever elements it ended up with.
This is where my imagination fails me. I cannot imagine why, upon bad_alloc, I would stop the stack unwinding and determine the size of my vectors. This is why I ask about others' experience with real-world correct code.
That is not an unimaginable scenario. If you have two branches of code, one requiring more memory but better performance, and the other that is slower (or maybe lacking some other qualities but still acceptable) and less resource consuming, operating on the same vector, you will want the vector to stay valid if memory allocation fails. Although not specifically with vectors, I had cases like this in real world.
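The two-branch fallback described above can be sketched as follows. This is a minimal illustration, not code from the thread; the function names and the choice of workload are hypothetical. The key point is that the fast path only touches the shared vector through a const reference, so a failed allocation inside it leaves the vector valid and usable by the slow path.

```cpp
#include <new>
#include <numeric>
#include <vector>

// Hypothetical fast path: needs extra scratch memory, so it may throw
// std::bad_alloc. It does not modify the caller's vector.
long sum_fast(const std::vector<int>& data) {
    std::vector<int> scratch(data);  // the extra allocation may fail
    return std::accumulate(scratch.begin(), scratch.end(), 0L);
}

// Hypothetical slow path: allocates nothing, so it cannot fail this way.
long sum_slow(const std::vector<int>& data) {
    long s = 0;
    for (int v : data) s += v;
    return s;
}

long sum(const std::vector<int>& data) {
    try {
        return sum_fast(data);
    } catch (const std::bad_alloc&) {
        // Because the vector is still valid after the failure, we can
        // retry on the same data with the less memory-hungry branch.
        return sum_slow(data);
    }
}
```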
Thanks for sharing your experience. I am not sure we are on the same page here. Are you describing a case where an operation on a vector fails to allocate more storage and therefore throws, leaving the *value* of the vector unchanged (i.e., an operation with the strong exception safety guarantee)? Or are you describing an operation that reuses memory owned by a vector and discards the vector's value? Both of these cases can be described as "never observing the value after a failed operation with a basic exception safety guarantee". Or are you describing a different situation? Regards, Andrzej
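The first case the question above asks about, a failed operation that leaves the vector's value unchanged, can be observed directly, since the standard gives vector's capacity operations the strong guarantee. The sketch below (not from the thread) triggers a deterministic failure by requesting more than max_size(), which throws std::length_error, rather than actually exhausting memory:

```cpp
#include <stdexcept>
#include <vector>

// Demonstrates the strong guarantee: on failure, the vector's *value*
// is exactly what it was before the call.
std::vector<int> after_failed_reserve() {
    std::vector<int> v{1, 2, 3};
    try {
        // reserve() throws std::length_error when asked for more than
        // max_size(); nothing about v is changed before the throw.
        v.reserve(v.max_size() + 1);
    } catch (const std::length_error&) {
        // v still holds {1, 2, 3} here.
    }
    return v;
}
```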
However, in my experience, if I want to handle OOM condition gracefully, I tend to not trust any third party components except the lowest level ones, like C runtime, and write the relevant code myself. Especially, this concerns components that allocate memory, like containers. Unfortunately, it is often the case that either I don't trust implementations to take OOM into account and handle it well or I want some specific guarantees about how much memory is allocated and what the state of the program is when OOM happens.
And that making design compromises to address this path is not necessarily the best approach to take.
Consider that if you choose to allow, after an error, to have objects left in such a state that they may explode if you attempt to do anything but destroy them, there may not be any way to detect that state.
Yes, and I do not see how this is a problem in practice. In my experience, objects on which an operation with the basic guarantee failed can only be safely removed from the scope. (I do not even reset them.)
Removing the objects may be wasteful or require expensive operations. In the vector example, that vector may be initially large, or expensive or even impossible to reconstruct. If you commit to the "destroy upon failure" logic, you have to duplicate the vector before attempting the operation that may fail with an exception. Which is a point of failure on its own, BTW. Generally, you want to minimize the number of points of failure while also minimizing the amount of work needed to complete the program. There is also a third, subjective limit of code quality or simplicity, design quality, etc., but that is not relevant to my point.
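The duplicate-before-modify approach criticized above is essentially copy-and-swap. A minimal sketch (all names hypothetical, and the in-place operation is a stand-in for real work that may throw partway through):

```cpp
#include <vector>

// Hypothetical operation offering only the basic guarantee: if it throws
// midway, v remains valid but its value is unspecified.
void transform_in_place(std::vector<int>& v) {
    for (int& x : v) x *= 2;  // stand-in for fallible real work
}

// Copy-and-swap upgrades it to the strong guarantee, at exactly the cost
// described above: the duplicate may be expensive, and the copy itself is
// an additional point of failure (it can throw std::bad_alloc).
void transform_strong(std::vector<int>& v) {
    std::vector<int> tmp(v);   // may throw; v untouched
    transform_in_place(tmp);   // may throw; v untouched
    v.swap(tmp);               // noexcept commit
}
```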
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost