Tobias Schwinger wrote:
> Joel de Guzman wrote:
>> Single function:
>> I'm a strong advocate of smaller is better. Modularity matters. Big
>> classes (or, in this case, function objects), metaprogram blobs
>> (i.e. traits), humongous enums, and all such sorts of dinosaurs :-)
>> are best avoided. They are convenient up to a certain extent and
>> become unmanageable beyond a certain limit.
>>
>> In all of my use cases, I have N functions that are provided
>> elsewhere and that I have no control over (e.g. parser functions).
>> I insist that this is the more common use case. Grouping them into
>> a single big struct is an unnecessary and cumbersome step.
>>
>> Still, if people insist, I outlined a way in another post to convert
>> the big function object into smaller 'bound' objects. Just bind 'em
>> into smaller function chunks.
>
> I think it's a misconception to assume that a single function object
> will be inherently big.
It's not about "big". It's about modular vs. monolithic. The best designs, IMO, are modular.
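
(For concreteness, a minimal sketch of the use case Joel describes
above: N pre-existing functions plus the dispatch a switch facility
would have to generate. The parse_* names are invented for
illustration; none of this is any library's actual interface.)

    #include <iostream>

    // Three actions that already exist elsewhere; there is no natural
    // way -- and no good reason -- to merge them into one type.
    void parse_int()    { std::cout << "int\n"; }
    void parse_float()  { std::cout << "float\n"; }
    void parse_string() { std::cout << "string\n"; }

    // What the user ultimately wants: a real switch over a runtime
    // index, each case calling one of the N functions.
    void dispatch(int which)
    {
        switch (which)
        {
            case 0: parse_int();    break;
            case 1: parse_float();  break;
            case 2: parse_string(); break;
        }
    }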
> It's just another degree of freedom (namely what to look up) exposed
> to the user. Whether one can appreciate it seems a matter of taste
> to me...
>
>    A ---> B:  function := L(I): functions[I]()
>    B ---> A:  transform(cases, L(I): make_pair<I>(bind(function, I())))
>
> ...and I happen to prefer the first transform.
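
(Reading L(I) as a lambda over the compile-time case index, the two
transforms come out roughly as below -- a sketch that assumes the N
functions share a signature. The second direction is the same binding
trick mentioned further up.)

    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <boost/mpl/int.hpp>

    // A ---> B: fold N separate functions into one function object by
    // indexing them (assumes a shared signature).
    struct folded
    {
        typedef void result_type;

        template <int I>
        void operator()(boost::mpl::int_<I>) const { fs[I](); }

        boost::function<void()> fs[3];
    };

    // B ---> A: slice a single 'monolithic' function object back into
    // per-case nullary chunks by binding each compile-time index.
    struct monolithic
    {
        typedef void result_type;

        template <int I>
        void operator()(boost::mpl::int_<I>) const { /* handle case I */ }
    };

    boost::function<void()> chunk0
        = boost::bind<void>(monolithic(), boost::mpl::int_<0>());
    boost::function<void()> chunk1
        = boost::bind<void>(monolithic(), boost::mpl::int_<1>());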
I guess that reasoning is due to your thinking that the one monolithic
automation functor is more common. That's where I disagree,
vehemently(!). I've got lots of use cases that show otherwise. I assert
that the more common case is to have as input N functions (member/free
function pointers, bound functions, boost.functions, etc.) and the
switch_ is used to dispatch. Let me emphasize this to the strongest
extent: a single monolithic function for the cases in a switch facility
is simply and utterly wrong, wrong and wrong! :-P

If the design stands as it is, I'd better write my own. But it won't
get my vote.

Regards,
--
Joel de Guzman
http://www.boost-consulting.com
http://spirit.sf.net
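
(As a closing illustration, a deliberately tiny, fixed-arity sketch of
the modular interface being argued for here -- case_ and switch3 are
hypothetical names, not the reviewed library's API.)

    #include <boost/function.hpp>

    void parse_int()    {}  // stand-ins for the N externally
    void parse_float()  {}  // provided functions
    void parse_string() {}

    // Each case carries its index at compile time and its action at
    // run time.
    template <int N>
    struct case_
    {
        explicit case_(boost::function<void()> f) : f(f) {}
        boost::function<void()> f;
    };

    // Fixed-arity stand-in for the variadic facility: dispatch the
    // runtime index to the matching case.
    void switch3(int i, case_<0> c0, case_<1> c1, case_<2> c2)
    {
        switch (i)
        {
            case 0: c0.f(); break;
            case 1: c1.f(); break;
            case 2: c2.f(); break;
        }
    }

    // Usage: the N functions go in directly; no monolithic wrapper.
    void example(int i)
    {
        switch3(i, case_<0>(&parse_int),
                   case_<1>(&parse_float),
                   case_<2>(&parse_string));
    }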