From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Emil Dotchevski
Sent: March-17-11 5:27 PM

> On Thu, Mar 17, 2011 at 1:58 PM, Ted Byers wrote:
>> I do have a little sympathy with your, and his, position when dealing with extremely tight time constraints, but not a lot. If one of the design criteria is that the code being produced must be widely portable, then I pass it through the range of platforms and compilers that we have to support and, as far as possible, I try to treat all warnings as errors.
> The virtual destructor warning goes directly against a conscious design decision. In my opinion it also teaches programmers a bad habit. This doesn't make the warning any less annoying of course, so I'm doing my best to suppress it.
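For concreteness, here is a minimal sketch of what the warning guards against, and of one reason code that only ever deletes through boost::shared_ptr can do without virtual destructors; the class names are purely illustrative:

    #include <boost/shared_ptr.hpp>

    struct Base                       // virtual functions, but no virtual destructor
    {
        virtual void f() {}
    };

    struct Derived : Base
    {
        ~Derived() { /* releases resources */ }
    };

    int main()
    {
        // What the warning guards against: deleting a Derived through a raw
        // Base* is undefined behaviour when ~Base() is not virtual, so
        // ~Derived() need not run:
        //
        //     Base* raw = new Derived;
        //     delete raw;            // undefined behaviour
        //
        // boost::shared_ptr captures how to destroy the object at the point
        // of construction, so ~Derived() does run here even though ~Base()
        // is not virtual:
        boost::shared_ptr<Base> p(new Derived);
    }

If deletion only ever happens through a shared_ptr constructed from the most-derived type, the undefined behaviour the warning is aimed at never arises.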
I would like to understand why making destructors virtual when there are other virtual member functions might be seen as a bad habit, and I would appreciate it if you could educate me (while I am so old I can recall fighting off T.Rex so I could enjoy my bronto-burgers, I'm not too old to learn something new :-). What, precisely, was that conscious design decision, and what is the rationale for it? It is my experience that when two experienced developers disagree about a practice, the disagreement is born of differences in the nature of the problems they have faced in the past and the information they have at their disposal.

If you were to look at the applications I develop, you'd find very few objects created on the stack. Almost everything goes on the heap, managed by the most appropriate of the Boost smart pointers. In my environmental modelling software, for example, the application starts off with almost nothing in the heap, but as the user builds the model, he may end up producing hundreds or even thousands of instances of sometimes complex UDTs, and these UDTs are often drawn from complex inheritance trees (though almost never involving multiple inheritance ;-). Connections between these instances are often quite complex, so there is, in the base class, a function that breaks all connections among the objects before any attempt is made to delete anything. Because the number of UDTs is quite large, and there is a common modelling interface declared as pure virtual functions in the base class, all these objects are managed in a single std::vector of smart pointers to the base class. About 99% of the effort in these applications is focussed on these UDTs, on the containers managing them, and on managing the resources they require (which vary wildly among UDTs).
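A minimal sketch of that kind of arrangement, with purely illustrative names (ModelElement, Reservoir, disconnect and so on are made up for this example):

    #include <boost/shared_ptr.hpp>
    #include <vector>
    #include <cstddef>

    // Common modelling interface: pure virtual functions in the base class.
    class ModelElement
    {
    public:
        virtual ~ModelElement() {}        // the practice under discussion
        virtual void run_step() = 0;      // part of the common modelling interface
        virtual void disconnect() = 0;    // break all links to other elements
    };

    // One of many UDTs derived (directly or indirectly) from the base.
    class Reservoir : public ModelElement
    {
    public:
        void run_step() { /* advance this element's state */ }
        void disconnect() { inflows_.clear(); }   // drop links to other elements
    private:
        std::vector< boost::shared_ptr<ModelElement> > inflows_;
    };

    int main()
    {
        // Everything lives on the heap, held through base-class smart pointers.
        std::vector< boost::shared_ptr<ModelElement> > model;
        model.push_back(boost::shared_ptr<ModelElement>(new Reservoir));

        // Break every connection first, so no element is destroyed while
        // another still holds a reference to it, then release the lot.
        for( std::size_t i = 0; i < model.size(); ++i )
            model[i]->disconnect();
        model.clear();
    }

With everything held through boost::shared_ptr<ModelElement>, the question in this thread is whether ~ModelElement() strictly needs to be virtual at all; it is declared virtual here simply because that is the practice being discussed.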
> The fact that we're having this discussion is another upside: it shows the reader that at least some people think that it is an error to "fix" this particular warning. :)

Agreed.

Cheers

Ted