
Robert Fendt wrote:
Hi,
I have a somewhat peculiar problem concerning the shared_ptr template. I use an STL container to hold several shared_ptr values. When the container is destroyed, all the destructors are called correctly, but the memory is not freed. Furthermore, shared_ptr usage introduces a 100x memory overhead (or more) in my case. The compiler is GCC.
I am not sure this is entirely unexpected. A shared pointer contains two pointers: one points to the stored object, while the other points to a structure containing a variety of bookkeeping information about the object (called shared_count). The fields I can think of in this structure are:

- The number of shared pointers referencing the object.
- The number of weak pointers referencing the object.
- A pointer to the object, expressed as the original type used to create the shared pointer (to prevent deleting through a pointer to a base class that may not have a virtual destructor).
- A pointer to a deleter routine, used when the plain delete operator is not the right way to dispose of the object.

In addition, I remember reading that on several platforms the shared_count structures are actually allocated from a pool allocator, since malloc and/or new are not well suited to allocating numerous small structures. This would cause an even larger allocation, as the pool would grab enough memory for several dozen structures at once.

As for the fact that you didn't see the memory being released, that could also be explained by the pool. When the objects are destroyed, the shared_count structures are returned to the pool to be reused later, but as far as the operating system is concerned, they were never freed.

I am also curious what you mean by "program size in memory." Was this the amount of memory allocated to the program by the operating system? It has often been my experience that this number doesn't exactly mirror the individual allocations of objects: the operating system generally hands out a few relatively large pages, which the C runtime library then splits into individual objects. It has also been my experience that these pages are rarely reclaimed from a running program.
You might consider testing this by inserting a loop around the code that fills the deque, like so:

#include <iostream>
#include <string>
#include <deque>
#include <boost/shared_ptr.hpp>

class A {
public:
    int a;
    int b;
};

int main() {
    std::string tmp;
    std::cout << "Enter something to continue" << std::endl;
    std::cin >> tmp;
    for (int rep = 0; rep != 100; ++rep) {
        std::deque<boost::shared_ptr<A> > list;
        for (int i = 0; i < 10; i++) {
            boost::shared_ptr<A> ptr(new A());
            list.push_back(ptr);
        }
        std::cout << "Enter something to continue" << std::endl;
        std::cin >> tmp;
    }
    std::cout << "Enter something to continue" << std::endl;
    std::cin >> tmp;
}

If I'm right and you're just seeing some of the headache-inducing details of memory allocation, then the memory size should remain relatively static after the first pass through the loop. However, if it continues to grow after each pass, then this truly is a memory leak, and we have a problem.

As for the possibility of some error in your use of shared_ptr, your sample looks good to me. You may want to consider whether something as heavyweight as shared_ptr is truly necessary in your case (that shared_count structure is an awful lot of overhead for a pair of ints). However, the only alternative I know of is to store dumb pointers in the deque, which sends shivers down my spine.

Andrew Holden
Between the first and the second stop, the program's size in memory grows by about 150k... and that's for 10 objects of less than 20 bytes each. And to top it off, it leaks memory: between stops two and three the memory *should* be freed, since the scope is left. I can add a destructor with debug output to the class, which shows that the objects are indeed destroyed. But the memory stays allocated.
Thanks in advance, Robert