
on Mon Aug 22 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
> On 8/22/2011 1:36 AM, Nevin Liber wrote:
>> On 21 August 2011 21:15, Eric Niebler <eric@boostpro.com> wrote:
>>> This is the accepted dogma,
>> For good reason. These are places that end up getting aggressively optimized, so code that appears to work today is suddenly broken tomorrow when the compiler is revved.
> Stepanov argues that things like UNDERLYING_TYPE and move_raw should be part of the Standard Library and guaranteed to work.
But IIUC they can only be guaranteed to work in a world where there are no back-pointers or self-pointers. It's fine to postulate that the designers of C++ should have outlawed such types from the beginning, and I believe that's the world Stepanov is advocating, but it's not the world we live in today, and we couldn't make the desired guarantee without declaring working code to be broken.
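To make that concrete, here is a minimal sketch of the kind of self-pointer type I mean (the class and its names are my own illustration, not anything from the paper): a type that is perfectly legal today, whose user-written move constructor works, but which a bitwise "raw" move silently breaks.

    #include <cstdio>
    #include <cstring>
    #include <utility>

    // A small buffer whose end pointer points back into its own
    // storage: a self-pointer.
    struct small_buffer {
        char data[16];
        char* end;                          // always points into this->data

        small_buffer() : end(data) {}

        // A correct move has to fix up the self-pointer.
        small_buffer(small_buffer&& other)
          : end(data + (other.end - other.data)) {
            std::memcpy(data, other.data, sizeof data);
        }
    };

    int main() {
        small_buffer a;
        *a.end++ = 'x';                     // a.end == a.data + 1

        // A raw, bitwise move copies the representation verbatim, so the
        // destination's end pointer still points into a's storage and
        // dangles once a goes away.  (Nothing in the language blesses
        // this memcpy; that's rather the point.)
        small_buffer raw;
        std::memcpy(static_cast<void*>(&raw), &a, sizeof a);
        std::printf("raw.end still points into a? %d\n", raw.end == a.data + 1);

        // The user-written move constructor repairs the pointer instead.
        small_buffer b(std::move(a));
        std::printf("b.end points into b?         %d\n", b.end == b.data + 1);
    }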
>> How often is this thing needed? The only time you need it is when the compiler-generated move constructor/assignment operators are either deleted or doing the wrong thing.
> No. Nevin, at this point it's obvious to me you haven't read the paper. Please go read it and then we can talk about the pros and cons.
I haven't re-read that paper in years, and I don't have time for a complete re-reading now, but I did read it once and I believe I understood it. I also had extensive discussions with Sean about the world he and Alex were describing. It was then, and is still, my understanding that this can all work if the world is made up of what Stepanov calls "Regular Types," but that it will fail for types that are perfectly valid in C++ today.
It is needed when implementing an algorithm for which move is *inherently* an inadequate tool because it makes a guarantee that is not needed: namely, the destructibility of the moved-from object. Preserving that invariant is unnecessary if you will be moving a valid value back into the moved-from object before its lifetime ends.
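Here is a rough sketch of where that invariant restoration happens, using a toy vector of my own (the layout and names are simplifications I made up, not std::vector's actual representation).

    #include <cstddef>
    #include <utility>

    // A toy vector: just enough to show where move restores the
    // "destructible moved-from" invariant.
    struct toy_vector {
        int*        data_ = nullptr;
        std::size_t size_ = 0;

        toy_vector() = default;
        explicit toy_vector(std::size_t n) : data_(new int[n]), size_(n) {}

        // The move operations zero out the source so that it can still be
        // destroyed.  That zeroing is exactly the invariant restoration a
        // raw move would skip.
        toy_vector(toy_vector&& o) noexcept : data_(o.data_), size_(o.size_) {
            o.data_ = nullptr;
            o.size_ = 0;
        }
        toy_vector& operator=(toy_vector&& o) noexcept {  // self-move ignored for brevity
            delete[] data_;
            data_ = o.data_;
            size_ = o.size_;
            o.data_ = nullptr;
            o.size_ = 0;
            return *this;
        }
        ~toy_vector() { delete[] data_; }
    };

    // The generic swap is three moves.  The pointers zeroed in a and in b
    // are overwritten by the very next move-assignment, so those stores
    // are dead; only tmp's zeroed pointer is read again, and then only by
    // a delete[] of null in its destructor.
    void swap_vectors(toy_vector& a, toy_vector& b) {
        toy_vector tmp = std::move(a);
        a = std::move(b);
        b = std::move(tmp);
    }

    int main() {
        toy_vector a(3), b(5);
        swap_vectors(a, b);
    }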
But it remains to be seen whether compilers can optimize away the needless invariant restoration in practical cases. For example, when swapping vectors using the generic algorithm, it's entirely plausible that the compiler could observe that the internal pointers, after being set to zero, are immediately overwritten with a value copied from the other vector, and simply eliminate the zeroing.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com