
Gennadiy Rozental wrote:
A) class destructors preferably shouldn't be doing anything smart
I don't understand the reasoning behind this... class destructors exist so that resources can be disposed of intelligently. If this were valid then we wouldn't have reference counted smart pointers.
Disposing of the resources this instance owns is one (simple) thing. Meddling with other singletons is something completely different (complex), and in most cases it would involve function invocations which in turn may throw exceptions, and then all hell breaks loose. On the other hand, if destructors are simple, they are independent, and all this complex machinery is unnecessary.
None of the automated cleanup functionality in my original library did anything that could result in an exception being thrown. The dependency_lifetime used simple reference counting to release the instance, and the static_lifetime simply relied on the initialization order of static variables to guarantee an order of destruction. Neither of these was complex, nor did it couple to other singletons. I admit that the longevity_lifetime and lifo_lifetime did use a rather expensive internal registry, but quite honestly I don't think those lifetimes were as useful as the other two; they were merely provided for completeness. Regardless, their automated cleanup mechanisms could not throw either.
B) if teardown procedure is not trivial it's part of normal program flow and should reside within main() bounds
This forces the highest level code to be aware of every detail that needs to be cleaned up. What if a library wants to use some sort of resource manager in the background? Should it force client code to be aware that said resource manager exists, and make client code responsible for cleaning it up?
Client code should only be aware of a single teardown() function that the user should call somewhere during program exit. After that, any attempt to access library functionality is invalid. If you want, you could even automate it with an RAII-based manager that does lib_init() in its constructor and lib_teardown() in its destructor.
My singleton library gave the client code that option. Client code could explicitly destroy the instance early rather than relying on it being destroyed automatically. There was even a lifetime policy available which did not do any automated cleanup whatsoever, and a cleanup class designed specifically to lock singleton lifetimes to specific scopes (like the scope of main). I don't see why the client code should be forced to clean up global resources manually when reliable automated methods exist. I think it should be allowed to manage such things manually if it wants to, but the choice for automated resource management should be available.
This seems like a C-oriented way of thinking, and does not scale.
Both statements seems to be born out of thin air. Could you elaborate?
I was referring to managing all global resources and their dependencies at the bottom of main, or in cleanup functions that had to be called by main, rather than in destructors. This seems like a poor organizational method to use for large projects, especially if such projects have many shared resources.
C) it doesn't seem to be ever portable to rely on order of global variables destruction in different libraries (dynamic or static)
The portable means to control this is with mini Meyers singletons. It is guaranteed by the standard that local static variables will be destroyed in reverse order of the completion of their constructors. Thus with the following code:
Foo& A() { static Foo foo; return foo; }
Bar& B() { A(); static Bar bar; return bar; }
If B is called, then B's bar will definitely be destroyed before A's foo; that is, foo outlives bar.
That was my point: Meyers singletons are all we need. Anything smarter than that (IOW, all these games with enforcing an unnatural order of destruction) is just unnecessary "smartness".
I don't understand how my automated methods of destruction are unnatural. When you can express dependencies between singletons directly and cleanly, rather than having to hard code lifetime relationships into the bottom of main, it seems more natural to me.
D) it's unreasonable to expect all global variables to employ single manager facility within bound any reasonably big application
Hence the policy based design. Related singletons can use related lifetime managers, and unrelated singletons need not be concerned about each other. The dependency_lifetime solution handles this concern especially well, as using reference counted smart pointers to singleton instances does not require a master manager at all.
This still assumes that the whole world will be using your framework to implement singletons. Let's say you need to use, in your destructor, a third-party singleton implemented by some arrogant somebody with no regard for your order management?
Perhaps you were not aware when you wrote this that my library does offer the option (regardless of the automated cleanup policy used) to destroy any singleton manually at any time. As such, it is perfectly possible to set up lifetime relationships between singletons from my framework, singletons from other libraries, and even global variables.
E) if any teardown procedure fails, it's really non-trivial to ensure proper destruction of the rest of the globals (if possible at all), while reporting an error may be impossible/difficult.
This is not a problem with singletons, but a more general problem which client code must be concerned about. Singletons which have a well defined order of destruction can help client code to organize a well structured solution.
Singletons with nontrivial logic in destructors ... should not do this in the destructor. And this is not a general problem; it's a problem with all global variables that are destructed in the afterlife (past main()'s exit).
If code should not perform non-trivial cleanup in destructors, then when should it do it? Regardless of where the logic is put, if cleanup fails there isn't a whole lot you can do. You pretty much have to swallow exceptions or terminate the program anyhow, and with destructors you are at least guaranteed that cleanup is attempted.
Many of your other points no longer apply, as there does exist a scalable mechanism to control destruction order, and the new design does take multi-threading into consideration.
Even if it exists (which you did not prove, AFAICT), don't go there. "Don't go into the language's dark corners," as experts recently advised.
It isn't a dark corner; it is a well defined part of the standard. I am talking about using reference counting and the guaranteed order of destruction of Meyers singletons to build a scalable mechanism for managing dependencies between global resources. The interface that the library provides makes managing the order of destruction as simple as having one singleton own a pointer to another, which is about as clean and decoupled a solution as I can imagine:
class A { /* ... */ };
class B { A::pointer a_; /* ensures that A outlives B */ };
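The ownership idea can be sketched with std::shared_ptr standing in for the library's pointer type (A::pointer is the library's name; everything else here is illustrative). Because B's constructor obtains A's instance first, A's internal static completes construction before B's and is therefore destroyed after it, and the stored shared_ptr keeps A alive for as long as B exists:

```cpp
#include <memory>

class A {
public:
    // Meyers-style accessor handing out a reference-counted pointer.
    static std::shared_ptr<A> instance() {
        static std::shared_ptr<A> inst(new A);
        return inst;
    }
private:
    A() = default;
};

class B {
public:
    static std::shared_ptr<B> instance() {
        static std::shared_ptr<B> inst(new B);
        return inst;
    }
private:
    B() : a_(A::instance()) {}     // ensures that A outlives B
    std::shared_ptr<A> a_;         // dependency expressed as ownership
};
```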
Just because it is relatively inefficient to access a singleton shared between threads safely does not mean that it is a bad approach.
Just because it is relatively inefficient, it is completely unusable. Multithreaded code is in many cases very performance-aware, and a singleton becomes a performance bottleneck.
So what would you advise as a mechanism to communicate global information between threads efficiently?
Any method employed to communicate between threads will be somewhat expensive. The goal is to minimize the amount of communication required, not to throw out communication mechanisms all together.
Not every method. Just initialize your singleton in the main thread and destroy it in the main thread, and there is no need to care about synchronization at all.
This assumes that the state of the singleton is read-only for its entire lifetime. If the singleton has a state which can be modified, how can you avoid caring about synchronization? If the singleton is used to perform actual runtime communication between threads, its state cannot be read-only.
With regard to your desire for a different interface, it is easy enough to create an instance method which wraps the smart pointer and just returns a raw reference. However, choosing to do so does eliminate all thread safety,
No need to synch on instance access at all.
See above.
destroys the possibility of delaying creation until a member function is accessed,
I do not see how. The third recommended technique in the original post covers this.
I am talking about delaying creation until a member function is called, not until instance() is called. I suppose this difference may be trivial, and it is not the main point of my argument.
and ensures that your references will not stay up to date if the singleton instance is destroyed.
Don't keep references, and/or don't destroy your global variables in the middle of program execution.
That places unnecessary restrictions on client code. Code should be given many available options, not just one fixed way of thinking about and implementing things.
It feels like much of your argument is against object oriented principles in general, and advocates a single flow of execution.
Let's not make unfounded accusations. Where did you see me argue against OOD?
In promoting the idea that all global resource management be handled at the bottom of main, rather than expressing dependencies between global resources in a more abstract and scalable way. Also in advocating against the use of destructors to perform cleanup.
While this may be reliable for smaller programs, I have no idea how you can expect that solution to scale.
And I have no idea what you mean. It would be good to back your statements with at least some examples.
I hope that I have clarified this above. -Jason