
Hi Bartlett,

Just for the record, and since your statements below are about general experience with large-scale C++ projects, let me put my own experience in context: I have been architecting, designing and implementing large-scale projects since the early '90s, the largest of which amounts to about 160K lines of C++ code, almost all of which I wrote myself. Again, this is just to put the comments based on my experience in perspective.
These types of experiences have led many C++ teams to code C++ with a (justified but unfortunate) paranoia about the use of memory in C++ that has all kinds of bad consequences.
I think those memory problems are fundamentally rooted in design issues. In the mid '90s I used to develop best practices and utilities for sane memory management (and other sanity requirements), in the same spirit as the paper you presented (I did read it, btw). I even implemented custom allocators based on class-specific memory pools, and mandated that every object obey a strict allocation-deallocation protocol. Simply defining a new class in my system required the use of a macro-based DSL, something like "DEFINE_DERIVED_OBJECT(Foo,Base)". Likewise, object graphs had to be very carefully spelled out with that DSL, as in "INCLUDE_SUBOBJECT(Bar)", and so on.

This forced everyone on the team, year after year, to learn a language on top of C++. In the end, however, I realized I was overengineering the problem, putting a big burden on the team and making it difficult for newcomers. As C++ evolved I found new, much simpler ways to solve the same problems, and memory problems in particular: smart pointers (even long before boost::shared_ptr came along). Once I started using smart pointers I never looked back, and I never again, ever, had to spend a single minute on a memory leak.
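To make the contrast concrete, here is a minimal sketch in today's C++ of the smart-pointer style that replaced that macro protocol. Car, Engine and Gearbox are made-up names; the point is only that ownership lives in the types, so there is no delete to forget:

#include <memory>
#include <utility>
#include <vector>

// A part owned by exactly one parent: unique_ptr expresses that directly.
struct Engine {
    explicit Engine(int power) : power(power) {}
    int power;
};

// A resource shared by several owners: shared_ptr keeps it alive until
// the last reference goes away, with no manual delete anywhere.
struct Gearbox { int gears = 6; };

struct Car {
    std::unique_ptr<Engine>  engine;   // sole ownership
    std::shared_ptr<Gearbox> gearbox;  // shared with other cars
};

int main() {
    auto gearbox = std::make_shared<Gearbox>();

    std::vector<Car> fleet;
    for (int i = 0; i < 3; ++i) {
        Car c;
        c.engine  = std::make_unique<Engine>(100 + i);
        c.gearbox = gearbox;           // ref-count goes up, nothing copied
        fleet.push_back(std::move(c));
    }
    // Every Engine and the shared Gearbox are destroyed automatically when
    // 'fleet' and 'gearbox' go out of scope: no leak to track down.
}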
Come on, at least some people on this mailing list must have had similar nightmare experiences in trying to track down and diagnose hard memory misuse errors in C++ that took days of effort to resolve (even with the help of tools like valgrind and purify).
Before I started using smart pointers, yes. After that, no. Never again.
And again, tools like valgrind and purify will *never* catch semantic misuse of memory (e.g. allocating a big chunk of memory and then breaking it up to construct different objects and arrays of objects). The Teuchos MM classes will catch most semantic misuse of memory in a way that no tool like valgrind or purify ever can (because they don't know the context of your program; they only know that any reads/writes within a big block of memory that you allocated look okay). I think this is a big deal in catching hard-to-find defects that are not (technically speaking) memory misuse defects but are program defects nonetheless.
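For concreteness, a minimal sketch of the carve-up pattern described above, with a deliberately planted bug; Header and Item are hypothetical types:

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <new>

struct Header { std::int64_t count; };  // 8 bytes, keeps the Items aligned
struct Item   { double value; };

int main() {
    // One big allocation, manually carved into a Header followed by Items.
    const std::size_t n = 10;
    char* block = static_cast<char*>(
        std::malloc(sizeof(Header) + n * sizeof(Item)));

    Header* h = new (block) Header{static_cast<std::int64_t>(n)};

    // BUG: the Item array's offset is computed wrong (sizeof(Header) was
    // forgotten), so items[0] overlaps and silently corrupts *h.
    Item* items = reinterpret_cast<Item*>(block /* + sizeof(Header) */);
    for (std::size_t i = 0; i < n; ++i)
        new (&items[i]) Item{1.0};

    // Every read and write above lands inside the one legally allocated
    // block, so valgrind/purify report nothing, yet the program is broken:
    std::printf("count = %lld\n", static_cast<long long>(h->count)); // garbage

    std::free(block);
}

A per-object container that knows the Items were meant to start past the Header (which is the kind of context the Teuchos MM classes carry, per the claim above) can flag this; a tool that only sees the byte range of the malloc'd block cannot.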
I totally fail to see why these design mistakes (wrong allocation patterns) should be detected by a framework. They are design issues and should be dealt with at that stage; surely any team can be trained not to make such mistakes.

Best,

--
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com