
On 15/04/12 21:56, Daniel James wrote:
On 15 April 2012 13:53, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
Calling delete on a pointer to a polymorphic type with a non-virtual destructor might be a bug. It would be a bad idea for shared_ptr to disable this warning for any other type than error_info.
But shared_ptr is explicitly written to deal with this case, so the warning is almost always a false positive here (remember that if you delete the object by some other means, the warning will still appear at that other deletion site). You can see this tested in shared_ptr_test — look at the contents of n_spt_abstract (in a couple of places). Note also that gcc's std::shared_ptr does not warn for this use case.
I don't see what this has to do with abstract classes.

    struct A { virtual void foo() {} };
    struct B : A {};
    boost::shared_ptr<A> p(new B);

That last line should issue that warning.
The simplest solution is to make the destructor of error_info virtual, since it doesn't add any significant overhead, especially compared to how bloated Boost.Exception is as a whole...
This isn't just about Boost.Exception. I'm not arguing against it making its destructor virtual.
All the warnings being reported are about error_info not having a virtual destructor; I don't see anything specific to shared_ptr here.
It would also have the advantage of not relying on fragile compiler-specific warning suppression mechanisms.
Since this warning was added in gcc 4.7, the diagnostic pragma mechanism introduced in gcc 4.6 can be used to suppress it. It isn't fragile.
First, it's specific to that compiler. What if tomorrow a new compiler decides to add that kind of warning too? What if it fires in slightly different scenarios? It's also not clear how warnings are suppressed with those mechanisms, especially when templates are involved. See, we have something here that appears to depend on include order. That's the worst possible behaviour. It looks as though if I include boost/exception_ptr.hpp first, then all such warning messages (which *are* useful) would be suppressed for the boost/shared_ptr.hpp header.
Sometimes that's the case, but sometimes the opposite is true. Rewriting code can cause unnecessary complexity and introduce bugs, especially when you consider the multitude of warnings we have to deal with.
I've made codebases of several hundred thousand lines warning-free on a variety of different compilers, yet I cannot say I share your experience. On the contrary, it tends to remove bugs, because the compiler does stricter checking than a human. The only added complexity is when you need compiler-specific workarounds, which is only required for MSVC's deprecated/unsafe-function warnings.