
Dan Eloff wrote:
I would go ahead and use boost::signals, and if later you profile your code and find they produce a significant performance penalty, replace them. They may not be that fast, but if you're doing something 100,000 times slower elsewhere in your code, you really won't notice the 0.001% overhead they add. As always, premature optimization is a waste of time, effort, and money. Only optimize if (a) you've profiled and identified the code as a bottleneck, or (b) you're concerned it will be a bottleneck and the alternative is similarly easy to implement and use, in which case you could get away with using your alternative from the start.
On the other hand, as Alexandrescu quotes Len Lattanzi in _Modern C++ Design_: "Belated pessimization is the leaf of no good." Alexandrescu goes on to write, "A pessimization of one order of magnitude in the runtime of a core object like a functor, a smart pointer, or a string can easily make the difference between success and failure for a whole project" (Ch. 4, p. 77). Library writers often do not have the same liberty as application writers to go back later, after profiling, and fix their slow code.

I have been working on a design for a prototype policy-based universal demultiplexor. One of the policies is a 'dispatch policy' that provides the actual mechanism for notifying resource classes of events. In the initial implementation of this policy, I am using Boost.Signal. Preliminary analysis suggests that the performance of the demultiplexor when using this dispatcher is poor. While I cannot yet offer meaningful numbers, something quite slow seems to be happening under the covers in the Signals library. I have not examined the implementation of the Signals library at all, however, so the extent of this problem, and whether it is easily fixable, remains to be seen.

I'm going to get back to the list on this, probably about when I'm done with the demultiplexor prototype.

Aaron W. LaFramboise