
Gennadiy Rozental wrote:
> Even though you did not bother to comment on my formal review of the first
> submission (http://article.gmane.org/gmane.comp.lib.boost.devel/124201), all
> the points there are still valid (IMO). And the most important is: should we
> do this at all? IMO it's a bad idea to promote smart singletons in a
> single-threaded environment (beyond what we could implement using trivial
> means like a "Meyers singleton"), and it's twice as bad in MT, where there is
> no reliable way to implement efficient access.
I'm sorry that I did not remember to respond to your review at the time. I was swamped with finals, but that's not an excuse. Let me try to address your concerns now.

A) class destructors preferably shouldn't be doing anything smart

I don't understand the reasoning behind this. Class destructors exist precisely so that resources can be disposed of intelligently. If this point were valid, we wouldn't have reference-counted smart pointers.

B) if teardown procedure is not trivial it's part of normal program flow and should reside within main() bounds

This forces the highest-level code to be aware of every detail that needs to be cleaned up. What if a library wants to use some sort of resource manager in the background? Should it force client code to be aware that said resource manager exists, and make client code responsible for cleaning it up? This is a C-oriented way of thinking, and it does not scale.

C) it doesn't seem to be ever portable to rely on order of global variables destruction in different libraries (dynamic or static)

The portable way to control this is with mini Meyers singletons. The standard guarantees that local static variables are destroyed in the reverse order of the completion of their constructors. Thus, with the following code:

    Foo & A() { static Foo foo; return foo; }
    Bar & B() { A(); static Bar bar; return bar; }

if B is ever called, A's foo is guaranteed to outlive B's bar, because the call to A() ensures that foo's constructor completes before bar's does. That is exactly what B needs if bar depends on foo during its own teardown.

D) it's unreasonable to expect all global variables to employ single manager facility within bound any reasonably big application

Hence the policy-based design. Related singletons can use related lifetime managers, and unrelated singletons need not be concerned with each other. The dependency_lifetime solution handles this concern especially well, since using reference-counted smart pointers to singleton instances does not require a master manager at all (a rough sketch of this idea appears below).

E) if any teardown procedure fails it's really non-trivial to ensure proper destruction for the rest of globals (if possible at all), while reporting an error may be impossible/difficult.

This is not a problem specific to singletons; it is a more general problem that client code must be concerned with. Singletons with a well-defined order of destruction can actually help client code organize a well-structured solution.

Many of your other points no longer apply: there now exists a scalable mechanism to control destruction order, and the new design does take multi-threading into consideration. Just because it is relatively expensive to access a shared singleton safely from multiple threads does not mean the approach is bad. Any method used to communicate between threads will be somewhat expensive. The goal is to minimize the amount of communication required, not to throw out communication mechanisms altogether.

With regard to your desire for a different interface, it is easy enough to create an instance method that wraps the smart pointer and just returns a raw reference (a sketch is below). However, choosing to do so eliminates all thread safety, destroys the possibility of delaying creation until a member function is accessed, and means your references will not stay up to date if the singleton instance is destroyed.
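To make that concrete, here is a minimal sketch of such a wrapper. The names (resource_manager, pointer(), instance()) are mine and hypothetical, not the proposed library's interface; it merely assumes a singleton whose accessor hands out a reference-counted smart pointer:

    #include <boost/shared_ptr.hpp>

    class resource_manager
    {
    public:
        // hypothetical smart pointer accessor: callers who hold the
        // shared_ptr share ownership of the single instance
        static boost::shared_ptr< resource_manager > pointer()
        {
            static boost::shared_ptr< resource_manager >
                instance_( new resource_manager );
            return instance_;
        }

        // convenience wrapper: returns a raw reference and gives up
        // lifetime tracking - only valid while the instance exists
        static resource_manager & instance() { return *pointer(); }

    private:
        resource_manager() {}
    };

Calling resource_manager::instance() gives you the familiar Meyers-style syntax, but as noted above it gives up every guarantee that the smart pointer interface provides.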
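Similarly, to illustrate the claim under D, here is a rough sketch of lifetime driven purely by reference counting, with no master manager anywhere. Again the names (logger, database, get()) are mine, and this is not the dependency_lifetime implementation itself, just the underlying idea; it is also not thread-safe as written:

    #include <boost/shared_ptr.hpp>
    #include <boost/weak_ptr.hpp>

    class logger
    {
    public:
        // shared ownership: the logger lives exactly as long as someone
        // still holds a shared_ptr to it, and is recreated on demand
        static boost::shared_ptr< logger > get()
        {
            static boost::weak_ptr< logger > weak;
            boost::shared_ptr< logger > strong = weak.lock();
            if ( !strong )
            {
                strong.reset( new logger );
                weak = strong;
            }
            return strong;
        }

    private:
        logger() {}
    };

    class database
    {
    public:
        database() : log_( logger::get() ) {}

    private:
        // holding the pointer guarantees the logger outlives this object;
        // no central manager has to coordinate destruction order
        boost::shared_ptr< logger > log_;
    };

Each client keeps its dependency alive simply by holding the pointer, and the instance is destroyed when the last holder lets go.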
I will concede that including all policies from the same header in the old design was a bad idea; it was expensive and unnecessary. The new design will require including such headers individually, and when only the simplest policies are used it should not be much more expensive than a Meyers singleton (assuming the compiler optimizes away functions that simply forward to other functions).

It feels like much of your argument is against object-oriented principles in general, and advocates a single flow of execution. While that may be reliable for smaller programs, I have no idea how you expect that solution to scale.

-Jason