formal review of Switch library ends tomorrow (Wednesday, January 9th) - reviews needed

The formal review of the Switch library by Steven Watanabe is scheduled to end tomorrow, January 9th. I have only received one review (privately) so far, and I hope that those participating in the discussions about the library (as well as any other interested Boosters) will be willing to contribute a few more - a summary of your final thoughts, together with a vote, would be greatly appreciated. If you need a little extra time to finish the discussions or to write your review, please let me know, as we can extend the review period by a few days if necessary. Please feel free to submit your reviews to the list, or privately to me.

As a reminder, here are some questions you may want to answer in your review:
* What is your evaluation of the design?
* What is your evaluation of the implementation?
* What is your evaluation of the documentation?
* What is your evaluation of the potential usefulness of the library?
* Did you try to use the library? With what compiler? Did you have any problems?
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
* Are you knowledgeable about the problem domain?

And finally, every review should answer this question:
* Do you think the library should be accepted as a Boost library? Be sure to say this explicitly so that your other comments don't obscure your overall opinion.

Thanks again to those who have been participating in the discussions and/or have already submitted a review.

Regards, Stjepan

Here's my review:
* What is your evaluation of the design?
It's a very simple design. Very straightforward. I'm not quite sure about the interface though:

* The specification of cases is a bit cumbersome. In the common case, they are specified both in the switch_ call and in the individual function call overloads supplied by the client, e.g.:

1) switch_<mpl::vector_c<int, 1, 2, 3> >(n, f)

2) void F::operator()(mpl::int_<1>)
   void F::operator()(mpl::int_<2>)
   void F::operator()(mpl::int_<3>)

Redundant.

* I'd prefer the result type to be user-specified, as in Boost.Bind, instead of hard-wiring it to F::result_type.

* The client needs to provide a specialized function object for the case detection. There's no way to use plain functions or even Boost.Function.

I've implemented switch_ many times now. Here are some links: http://tinyurl.com/28e8y2 and http://tinyurl.com/ypmgob

Having said all that, my preferred syntax is:

switch_<return_type>(n,
    case_<1>(f1),
    case_<2>(f2),
    case_<3>(f3),
    default_(fd)
);

Key points:

* The return type is specified in the call, where it should be.
* You specify the cases only once, and not using a cumbersome MPL range or some such.
* You supply N functions, not one humongous function with all the cases in it. In many cases, you don't have control over the functions, especially if you are writing a library. Think modularity.
* Plain function pointers and Boost.Functions are fine.
* The syntax is very "idiomatic" and as close as possible to the native switch statement.
* Fall-through is a possibility, with additional syntax and thought. For example, by default we fall through, but with case_<3>(f3, break_) we don't.
* Multiple cases are possible, e.g. case_<3, 5>(f3).
* One disadvantage is that the type of the case is always an int. Perhaps we can specify the type a la MPL: case_<char, 'x'>(f3). Another solution is to detect the type of the supplied value (/n/ in the example) and cast the cases to its type; that way, it should be ok to have unsigned too. We would only deal with the largest static integer (long or long long) and cast to the supplied int type. ... I think that would work, but I'm just thinking out loud as I write this, so I'd like to hear other people's thoughts.
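The case_/default_/switch_ style sketched above can be approximated with variadic templates (which postdate this discussion). All names below are illustrative, not the reviewed library's API, and the result is an if/else chain rather than the jump table a PP-generated implementation would aim for:

```cpp
#include <cassert>

// Hypothetical sketch of the proposed case_/default_/switch_ interface.
// Not the reviewed library; just an illustration of the shape of the API.

template<int N, class F>
struct case_t { F f; };

template<int N, class F>
case_t<N, F> case_(F f) { return case_t<N, F>{f}; }

template<class F>
struct default_t { F f; };

template<class F>
default_t<F> default_(F f) { return default_t<F>{f}; }

// Base case: only the default handler remains.
template<class R, class F>
R switch_(int, default_t<F> d) { return d.f(); }

// Recursive case: compare against the first case_, else try the rest.
// Note this compiles to an if/else chain, not an actual switch; a real
// implementation would generate switch statements via the preprocessor.
template<class R, int N, class F, class... Rest>
R switch_(int n, case_t<N, F> c, Rest... rest) {
    if (n == N) return c.f();
    return switch_<R>(n, rest...);
}
```

With lambdas, switch_<int>(2, case_<1>(...), case_<2>(...), default_(...)) selects the matching handler; fall-through and the multi-case form would need further machinery.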
* What is your evaluation of the implementation?
Haven't had time to check. But coming from Steven, it should be A+.
* What is your evaluation of the documentation?
Too terse. More examples needed. I think other folks have commented about this, so, I'll stop. I'm more concerned about the design.
* What is your evaluation of the potential usefulness of the library?
Very useful!
* Did you try to use the library? With what compiler? Did you have any problems?
No and no.
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
Ehm, just read the docs and studied the design.
* Are you knowledgeable about the problem domain?
Very.
And finally, every review should answer this question:
* Do you think the library should be accepted as a Boost library? Be sure to say this explicitly so that your other comments don't obscure your overall opinion.
Not at the moment. I think we need a more thorough discussion on alternative interfaces. We also need to discuss the issues that were raised in the review. I'm eager to hear Steven's replies. He seems to be a bit too quiet?

I'm really tempted to say "yes" and let Steven address the concerns raised (including mine). I'm very confident in Steven's abilities. He's one of those who still give me the "oooh" feeling with their code. And, I really *NEED* such a switch utility now and not later.

So, please take this as a soft "no" vote for now. I encourage Steven to get more involved in the discussion and to consider all the issues raised. As soon as these matters are ironed out, fire up another review ASAP. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
And finally, every review should answer this question:
* Do you think the library should be accepted as a Boost library? Be sure to say this explicitly so that your other comments don't obscure your overall opinion.
Not at the moment. I think we need a more thorough discussion on alternative interfaces. We also need to discuss the issues that were raised in the review. I'm eager to hear Steven's replies. He seems to be a bit too quiet?
I'm really tempted to say "yes" and let Steven address the concerns raised (including mine). I'm very confident in Steven's abilities. He's one of those who still gives me the "oooh" feeling with his code. And, I really *NEED* such a switch utility now and not later.
So, please take this as a soft "no" vote for now. I encourage Steven to get more involved in the discussion and consider all the issues raised. As soon as these matters are ironed out, fire up another review ASAP.
Oh, perhaps I'd like to ask for a review extension. If Steven can reply to my concerns and is willing to address them, I might still change my vote to a yes. I really need this facility now! Bottom line: more discussion, please! This is a small library, yet it is (to me) very important. Let's get the design correct. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

AMDG Joel de Guzman <joel <at> boost-consulting.com> writes:
Here's my review:
* What is your evaluation of the design?
It's a very simple design. Very straightforward. I'm not quite sure about the interface though:
* The specification of cases is a bit cumbersome. In the common case, they are both specified in the switch_ call and in the individual function call overloads supplied by the client (e.g.):
1) switch_<mpl::vector_c<int, 1, 2, 3> >(n, f)

2) void F::operator()(mpl::int_<1>)
   void F::operator()(mpl::int_<2>)
   void F::operator()(mpl::int_<3>)
Redundant.
If that's how you're using it, yes, it's redundant. There are really two use cases. One is basically what you are asking for: separate functions for each case. The other is when you have a runtime integer and somehow want to get back to compile time. The first example I can think of off the top of my head is using switch_ to implement variant. It's pretty straightforward with the MPL sequence interface:

template<class Variant, class Visitor>
struct visitor_applier {
    typedef typename Visitor::result_type result_type;
    template<class N>
    result_type operator()(N) {
        return visitor(
            variant.template get<
                typename mpl::at<typename Variant::types, N>::type>());
    }
    Visitor& visitor;
    Variant& variant;
};

template<class Variant, class Visitor>
typename Visitor::result_type
apply_visitor(Visitor visitor, Variant& variant) {
    visitor_applier<Variant, Visitor> impl = { visitor, variant };
    return switch_<
        mpl::range_c<int, 0, mpl::size<typename Variant::types>::value>
    >(variant.which(), impl);
}

To implement this with separate functions for each case requires the additional complexity of somehow collecting all the cases together. For this even to be possible, I would have to use a fusion sequence. Something along the lines of:

switch_<result_type>(n, make_tuple(
    case_<1>(f1),
    case_<2>(f2),
    ...
))

It's possible to implement this in terms of the interface I chose and vice versa. Either one is inconvenient for some tasks. I optimized the interface for the uses that I understood best.
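The "runtime integer back to compile time" use case can be illustrated without Boost, with std::index_sequence standing in for the mpl::range_c above (a C++14 sketch with hypothetical names; the reviewed library is built on Boost.MPL and PP-generated switch statements):

```cpp
#include <cassert>
#include <cstddef>
#include <initializer_list>
#include <utility>

// Sketch: dispatch a runtime index n to a single function object f,
// which receives the index as a compile-time constant. This mirrors
// the MPL-range interface; names here are illustrative only.

template<class F, std::size_t... Is>
auto dispatch_impl(int n, F f, std::index_sequence<Is...>)
    -> decltype(f(std::integral_constant<std::size_t, 0>{})) {
    using R = decltype(f(std::integral_constant<std::size_t, 0>{}));
    R result{};
    // One comparison per case; each match instantiates f with a
    // distinct integral_constant, so every branch is compiled
    // separately (an if-chain here, where the real library emits
    // actual switch statements via the preprocessor).
    (void)std::initializer_list<int>{
        (n == static_cast<int>(Is)
             ? (result = f(std::integral_constant<std::size_t, Is>{}), 0)
             : 0)...};
    return result;
}

template<std::size_t N, class F>
auto dispatch(int n, F f)
    -> decltype(f(std::integral_constant<std::size_t, 0>{})) {
    return dispatch_impl(n, f, std::make_index_sequence<N>{});
}
```

An out-of-range n yields a default-constructed result in this sketch, where the library under review throws unless a default handler is supplied.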
* I'd prefer the result type to be user-specified like in Boost.Bind instead of hard-wiring it to F::result_type.
Ok.
* The client needs to provide a specialized function object for the case detection. There's no way to use plain functions or even Boost.Function.
I've implemented switch_ many times now. Here are some links: http://tinyurl.com/28e8y2 and http://tinyurl.com/ypmgob
Having said all that, my preferred syntax is:
switch_<return_type>(n,
    case_<1>(f1),
    case_<2>(f2),
    case_<3>(f3),
    default_(fd)
);
<snip>
The most important disadvantage which makes it a no-go for me is that the number of cases is hard-wired into the program structure. In short, this makes it so much like the native switch statement that there is little benefit to using it.
Not at the moment. I think we need a more thorough discussion on alternative interfaces. We also need to discuss the issues that were raised in the review. I'm eager to hear Steven's replies. He seems to be a bit too quiet?
Sorry. A lot of the messages haven't been appearing on GMane... Thanks a lot for your review, Joel. In Christ, Steven Watanabe

Stjepan Rajko wrote:
* What is your evaluation of the design?
I very much like the simplicity. I'm not sure it really suits the name 'switch_', as I'd expect something syntactically different (something like Joel sketched out in his review, and with some argument forwarding); however, I think it's an important building block that should be kept as simple as possible. I think it should probably be part of MPL. It would fit perfectly next to 'mpl::for_each' -- 'mpl::for_petrified_constant' or so :).

As pointed out in other posts, I think the default behavior for the default case should be changed not to throw but to use the default constructor if no default case function object is specified. A default case function object returning 'void' should be assumed (and 'assert'ed) not to return in a context where a (non-void) result is expected (implementation hint: the built-in comma operator allows void arguments and an overloaded comma operator doesn't).

As also pointed out and discussed in other posts, I'm very much for having the result type specified explicitly (as opposed to deducing it from the function object).
* What is your evaluation of the implementation?
Again, I like the simplicity. Keep it this way: If "fallthrough cases" are going to be implemented it should be done in another template (or should it even be a full-blown Duff loop?). Another variant of the template taking min/max instead of a sequence might be a good idea, as it can make things compile faster in many typical use cases (well, that would be half-way a design thing).
* What is your evaluation of the documentation?
Works for me. As mentioned before, the reference could be more detailed at places, regarding the equality with MPL constants and the exact types passed to the function objects (even if overloading operator() is uncommon - it might be occasionally useful to deduce a non-type constant from it).
* What is your evaluation of the potential usefulness of the library?
The implemented functionality is a must-have for metaprogram-driven code generation.
* Did you try to use the library?
Not yet.
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
Doc reading & discussion.
* Are you knowledgeable about the problem domain?
Yes.
* Do you think the library should be accepted as a Boost library? Be sure to say this explicitly so that your other comments don't obscure your overall opinion.
Yes, given that handling of the default and the result type is changed as discussed in the design section of this review (or someone makes me change my mind by bringing up even better approaches :) ). Regards, Tobias

Tobias Schwinger wrote:
Stjepan Rajko wrote:
* What is your evaluation of the design?
I very much like the simplicity.
I'm not sure it really suits the name 'switch_', as I'd expect something syntactically different (something like Joel sketched out in his review, and with some argument forwarding), however, I think it's an important building block that should be kept as simple as possible.
I can assure you that my suggestion is "as simple as possible, but not simpler" ;-) Simpler than that is simply not usable to me. I know. I've been there many times. I have real-world use cases for this thing. Switch is not simple. Let's not pretend it is. Here's an acid test for the API -- try to implement my suggested syntax on top of the "simple" API. You'll soon realize that you can't -- without having to write the same amount of PP expansions all over again. It's not a suitable building block like, say, mpl::for_each. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

AMDG Joel de Guzman <joel <at> boost-consulting.com> writes:
Here's an acid test for the API -- try to implement my suggested syntax on top of the "simple" API. You'll soon realize that you can't --without having to write the same amount of PP expansions all over again. It's not a suitable building block, like say, mpl::for_each.
Try implementing the "simple" API on top of your suggested syntax. It can't be done either. (Variadic templates would make both possible, I believe.) In Christ, Steven Watanabe

Joel de Guzman wrote:
Tobias Schwinger wrote:
Stjepan Rajko wrote:
* What is your evaluation of the design?
I very much like the simplicity.
I'm not sure it really suits the name 'switch_', as I'd expect something syntactically different (something like Joel sketched out in his review, and with some argument forwarding), however, I think it's an important building block that should be kept as simple as possible.
I can assure you that my suggestion is "as simple as possible, but not simpler" ;-) Simpler than that is simply not usable to me. I know. I've been there many times. I have real world use cases for this thing. Switch is not simple. Let's not pretend it is.
My point is that the utility we have here is in fact too different from phoenix::switch_ to compare the two (and that boost::switch_ is not the best name): What would be the point of phoenix::switch_ if it weren't lazy? As Steven pointed out, overloading operator() (which is roughly what you propose - just using an uglier syntax) is a rare use case. Usually we'll want to feed the index to some metaprogramming machinery and instantiate several high-speed dispatched control paths. This utility is about generating the machine code of 'switch'. It does not necessarily have to mimic its syntax!
Here's an acid test for the API -- try to implement my suggested syntax on top of the "simple" API. You'll soon realize that you can't
No?! Here is how it's done:

o Use fusion::unfused or wire up some operators to build a fusion::map of constant / function object pairs, and

o create a function object from that map that looks up the appropriate function object in the map and calls it (if we want it to be non-lazily evaluated) or returns it (otherwise) and is fed to 'switch_'.

Please note that the reverse indeed requires repeating the PP code. Also note that it would also work to implement the variant visitor. Also also note that the 'switch_' name is obviously confusing :-).
It's not a suitable building block, like say, mpl::for_each.
No, but it fits perfectly well into the same category. Regards, Tobias
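Tobias's two-step construction can be sketched in miniature with std::tuple standing in for fusion::map (illustrative C++14 names, not a real implementation of either interface):

```cpp
#include <cassert>
#include <cstddef>
#include <tuple>
#include <utility>

// Sketch of the construction described above: collect the per-case
// handlers into one container (std::tuple standing in for fusion::map)
// and expose a single function object that selects a handler by
// compile-time index -- the shape an index-dispatching switch_ expects.

template<class... Fs>
struct tuple_dispatcher {
    std::tuple<Fs...> handlers;

    // The compile-time index picks the handler; each instantiation is
    // a separate, statically resolved control path.
    template<std::size_t I>
    auto operator()(std::integral_constant<std::size_t, I>) {
        return std::get<I>(handlers)();
    }
};

template<class... Fs>
tuple_dispatcher<Fs...> make_dispatcher(Fs... fs) {
    return tuple_dispatcher<Fs...>{std::make_tuple(fs...)};
}
```

Feeding such a dispatcher to the sequence-based switch_ would recover the case_<K>(f) style without new PP expansions, which is the crux of Tobias's argument.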

Tobias Schwinger wrote:
Joel de Guzman wrote:
Tobias Schwinger wrote:
Stjepan Rajko wrote:
* What is your evaluation of the design?
I very much like the simplicity.
I'm not sure it really suits the name 'switch_', as I'd expect something syntactically different (something like Joel sketched out in his review, and with some argument forwarding), however, I think it's an important building block that should be kept as simple as possible. I can assure you that my suggestion is "as simple as possible, but not simpler" ;-) Simpler than that is simply not usable to me. I know. I've been there many times. I have real world use cases for this thing. Switch is not simple. Let's not pretend it is.
My point is that the utility we have here is in fact too different from phoenix::switch_ to compare the two (and that boost::switch_ is not the best name):
Maybe.
What would be the point in phoenix::switch_ if it wasn't lazy?
Good point! IIUC, what you mean is: what's the advantage of this:

switch_<return_type>(n,
    case_<1>(f1),
    case_<2>(f2),
    case_<3>(f3),
    default_(fd)
);

vs. using a plain switch:

switch (n) {
    case 1: f1(); break;
    case 2: f2(); break;
    case 3: f3(); break;
    default: fd();
};

? In a word: composability. You can compose the first, but not the second. Ok, that seems wrong in my syntax ('twas just a sketch :-). Perhaps:

switch_<return_type>(n)(
    case_<1>(f1),
    case_<2>(f2),
    case_<3>(f3),
    default_(fd)
);

Oughta do it. Did that clear things up?
As Steven pointed out, overloading operator() (which is roughly what you propose - just using an uglier syntax) is a rare use case. Usually we'll want to feed the index to some metaprogramming machinery and instantiate several high-speed dispatched control paths.
It can amount to the same thing if we allow the cases: case<1>(f1), case<2>(f2), ... case<N>(fN) to be a fusion sequence.
This utility is about generating the machine code of 'switch'. It does not necessarily have to mimic its syntax!
Why not? Syntax matters! And again, it's not just syntax. Why limit yourself to a not-really-a-switch-but-similar library when you can have a real-switch library with all the possibilities that switch can provide -- fallback, individual functions, defaults, etc.
Here's an acid test for the API -- try to implement my suggested syntax on top of the "simple" API. You'll soon realize that you can't
No?! Here is how it's done:
o Use fusion::unfused or wire up some operators to build a fusion::map of constant / function object pairs, and
o create a function object from that map that looks up the appropriate function object in the map and calls it (if we want it to be non-lazily evaluated) or returns it (otherwise) and is fed to 'switch_'.
Ok, indeed you can. Man, I sometimes forget about this thing called Fusion.
Please note that the reverse requires indeed to repeat the PP code.
What reverse?
Also note that it would also work to implement the variant visitor.
Indeed.
Also also note that the 'switch_' name is obviously confusing :-).
Because the API *is* confusing. Why invent another scheme when we can mimic a well-proven, time-tested mechanism, albeit with an FP flavor? To me, that's what matters most: the possibility of using higher-order functions in the cases. I'll leave this as-is for now. Let me ruminate on this some more. I'll try to come up with a better suggestion refining my first. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
Tobias Schwinger wrote:
Tobias Schwinger wrote:
Stjepan Rajko wrote:
* What is your evaluation of the design?
I very much like the simplicity.
I'm not sure it really suits the name 'switch_', as I'd expect something syntactically different (something like Joel sketched out in his review, and with some argument forwarding), however, I think it's an important building block that should be kept as simple as possible. I can assure you that my suggestion is "as simple as possible, but not simpler" ;-) Simpler than that is simply not usable to me. I know. I've been there many times. I have real world use cases for this thing. Switch is not simple. Let's not pretend it is. My point is that the utility we have here is in fact too different from
Joel de Guzman wrote: phoenix::switch_ to compare the two (and that boost::switch_ is not the best name):
Maybe.
Yes, maybe ;-). ...
Oughta do it. Did that clear things up?
Yep. FP switch has its place - but it's not what we've got here. This one's about automation. Granted, both could be combined - exposing some sort of "fused switch" (taking a sequence) and I'd be all for it if compile time was for free.
This utility is about generating the machine code of 'switch'. It does not necessarily have to mimic its syntax!
Why not? Syntax matters!
It sure does and elegance is formally defined as the least you can get away with ;-).
And again, it's not just syntax. Why limit yourself to a not-really-a-switch-but-similar library when you can have a real-switch library with all the possibilities that switch can provide -- fallback, individual functions, defaults, etc.
Because a not-really-a-switch-but-similar utility suffices in many use cases and provides a more lightweight building block which really-switch can be easily implemented on top of. The fact that we can't do it the other way around (at least not easily -- even given "fused switch") indicates that the current design is the better-suited one for automation purposes!
Also also note that the 'switch_' name is obviously confusing :-).
Because the API *is* confusing.
I had a look at this utility back when Steven first posted it and had similar doubts. I tried to write Steven about it, but the email never got finished and while trying I realized I got misled by the name.
Why invent another scheme when we can mimic a well-proven, time-tested mechanism,
It isn't about inventing -- it's about reducing switch to its essence! Regards, Tobias

Tobias Schwinger wrote:
FP switch has its place - but it's not what we've got here. This one's about automation.
Huh? The fact that you have a higher order function makes it essentially FP. Even the humblest for_each is FP.
Granted, both could be combined - exposing some sort of "fused switch" (taking a sequence) and I'd be all for it if compile time was for free.
* I'd be for the "simple" solution if the excess scaffolding you need to add to get to the truer switch were compile-time free. Transforming the dumb switch_ into a smarter alternative would require Fusion.

* The change in interface to what I am suggesting does not have to be heavy in terms of additional code. It's just the syntax, basically. It can be done without Fusion. I know; I did it before. The smarts are almost already in the PP code.
This utility is about generating the machine code of 'switch'. It does not necessarily have to mimic its syntax! Why not? Syntax matters!
It sure does and elegance is formally defined as the least you can get away with ;-).
And again, it's not just syntax. Why limit yourself to a not-really-a-switch-but-similar library when you can have a real-switch library with all the possibilities that switch can provide -- fallback, individual functions, defaults, etc.
Because a not-really-a-switch-but-similar utility suffices in many use cases and provides a more lightweight building block which really-switch can be easily implemented on top of.
That's also a point of disagreement. My solution does not have to be heavyweight. Implementing it on top of the dumb solution, OTOH, will definitely be heavyweight, requiring Fusion.
The fact that we can't do it the other way around (at least not easily -- even given "fused switch") indicates that the current design is the better-suited one for automation purposes!
I don't know what you mean by "other way around".
Also also note that the 'switch_' name is obviously confusing :-). Because the API *is* confusing.
I had a look at this utility back when Steven first posted it and had similar doubts. I tried to write Steven about it, but the email never got finished and while trying I realized I got misled by the name.
Why invent another scheme when we can mimic a well-proven, time-tested mechanism,
It isn't about inventing -- it's about reducing switch to its essence!
Again, why reduce it to its essence? To make it lightweight? Then I argue that you are wrong. My proposed interface does not have to be heavyweight. It can be lightweight if done correctly, without intervening libraries like Fusion. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
Tobias Schwinger wrote:
FP switch has its place - but it's not what we've got here. This one's about automation.
Huh? The fact that you have a higher order function makes it essentially FP. Even the humblest for_each is FP.
Yes, but that doesn't justify its existence -- the fact that you can easily automate it without using the PP lib does! Both the built-in switch statement and the interfaces you proposed restrict client code to a single number of cases which is known at preprocessing time. Steven's approach OTOH allows the number of cases to vary at compile time. Further, requiring a function object for every case is often not required and just overhead. Picking the simplest interface without these limitations will basically yield what we are currently reviewing.
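Tobias's point about the case count varying at compile time can be shown in miniature: below, the number of "cases" is sizeof...(Ts), a value computed from a type pack rather than written into the source (a plain-C++ stand-in for driving switch_ with an MPL range sized by mpl::size; names are illustrative):

```cpp
#include <cassert>
#include <cstddef>

// Miniature illustration of a case count that tracks a compile-time
// computation: adding a type to the pack adds a case, with no call
// site or PP code edited. The pack expansion plays the role of the
// generated cases in the sequence-based switch_ interface.

template<class... Ts>
std::size_t runtime_sizeof(std::size_t n) {
    // One entry per pack element, selected by the runtime index.
    const std::size_t sizes[] = { sizeof(Ts)... };
    return n < sizeof...(Ts) ? sizes[n] : 0;
}
```

A case_<K>(f)-style interface cannot express this directly, since the number of case_ terms is fixed when the call is written; that is the limitation Tobias is pointing at.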
This utility is about generating the machine code of 'switch'. It does not necessarily have to mimic its syntax! Why not? Syntax matters! It sure does, and elegance is formally defined as the least you can get away with ;-).
And again, it's not just syntax. Why limit yourself to a not-really-a-switch-but-similar library when you can have a real-switch library with all the possibilities that switch can provide -- fallback, individual functions, defaults, etc. Because a not-really-a-switch-but-similar utility suffices in many use cases and provides a more lightweight building block which really-switch can be easily implemented on top of.
That's also a point of disagreement. My solution does not have to be heavyweight. Implementing it on top of the dumb solution, OTOH, will definitely be heavyweight, requiring Fusion.
The fact that we can't do it the other way around (at least not easily -- even given "fused switch") indicates that the current design is the better-suited one for automation purposes!
I don't know what you mean by "other way around".
Implementing Steven's interface in terms of yours is not possible without repeating the PP code.
Why invent another scheme when we can mimic a well-proven, time-tested mechanism, It isn't about inventing -- it's about reducing switch to its essence!
Again, why reduce the essence! To make it lightweight?
Not only. More importantly to make it flexible. Regards, Tobias

Tobias Schwinger wrote:
Joel de Guzman wrote:
Tobias Schwinger wrote:
FP switch has its place - but it's not what we've got here. This one's about automation. Huh? The fact that you have a higher order function makes it essentially FP. Even the humblest for_each is FP.
Yes, but that doesn't justify its existence -- the fact that you can easily automate it without using the PP lib does!
I'm confused by that sentence. It doesn't make sense to me. Steven's original interface is already FP out of the box.
Both the built-in switch statement and the interfaces you proposed restrict client code to a single number of cases which is known at preprocessing time. Steven's approach OTOH allows the number of cases to vary at compile time.
Further, requiring a function object for every case is often not required and just overhead.
"Often"? On the contrary. It's the opposite. What are your use cases? I have lots, with composition of *many* function objects. We've discussed some in earlier posts. "Overhead"? There is no overhead. The compiler will optimize them away. Look at how Proto does it, for example. Zero overhead.
Picking the simplest interface without these limitations will basically yield what we are currently reviewing.
For the purpose of clarification, let me call the original interface A and my proposal B. Again: * Transforming A to B requires minimal amount of coding. The smarts is already in the PP code. The cost is cheap. * Building B on top of A requires Fusion or repeating the PP code all over again. The cost is high!
The fact that we can't do it the other way around (at least not easily -- even given "fused switch") indicates that the current design is the better-suited one for automation purposes! I don't know what you mean by "other way around".
Implementing Steven's interface in terms of yours is not possible without repeating the PP code.
But why do you want to do that? The interface I have encompasses Steven's.
Why invent another scheme when we can mimic a well-proven, time-tested mechanism, It isn't about inventing -- it's about reducing switch to its essence! Again, why reduce the essence! To make it lightweight?
Not only. More importantly to make it flexible.
And you are contradicting yourself if you say that. The interface is far from flexible. You only got from A to B with the help of another powerful library. It's not the switch_ library that's flexible. It is Fusion that's flexible. I could implement my desired interface with PP and Fusion more easily than going through the half-baked switch_ under review. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
Tobias Schwinger wrote:
Joel de Guzman wrote:
Tobias Schwinger wrote:
Both the built-in switch statement and the interfaces you proposed restrict client code to a single number of cases which is known at preprocessing time. Steven's approach OTOH allows the number of cases to vary at compile time.
Further, requiring a function object for every case is often not required and just overhead.
"Often"? On the contrary. It's the opposite. What are your use cases?
Ones where I don't know the number of cases at preprocessing time...
I have lots with composition of *many* function objects.
...so those ^^ will be in a Sequence. Therefore I'm happy with the pure dispatch and a compile-time index. Further, that index is more flexible, as I can look up whatever I like (not just function objects but arbitrary objects).
We've discussed some in earlier posts. "Overhead"? There is no overhead. The compiler will optimize them away. Look at how proto does it, for example.
I know. This process will cost precious compile time!
Zero overhead.
Depends on what you count...
Picking the simplest interface without these limitations will basically yield what we are currently reviewing.
For the purpose of clarification, let me call the original interface A and my proposal B.
Again:
* Transforming A to B requires minimal amount of coding. The smarts is already in the PP code. The cost is cheap.
The cost is cheap and the effect is destructive :-).
* Building B on top of A requires Fusion or repeating the PP code all over again. The cost is high!
It's not about *repeating* -- it takes 'unfused' or PP code to mess up the interface and remove all flexibility from it :-). Building A on top of B is much worse, and this time we *are* repeating the PP code! A brilliant software architect like you should find this sign alarming enough to take a deep breath and give things another thought.
The fact that we can't do it the other way around (at least not easily -- even given "fused switch") indicates that the current design is the better-suited one for automation purposes! I don't know what you mean by "other way around".
Implementing Steven's interface in terms of yours is not possible without repeating the PP code.
But why do you want to do that?
To allow the number of cases to vary at compile time (as opposed to preprocessing time). Regards, Tobias

Tobias Schwinger wrote:
Joel de Guzman wrote:
Tobias Schwinger wrote:
Joel de Guzman wrote:
Tobias Schwinger wrote:
Both the built-in switch statement and the interfaces you proposed restrict client code to a single number of cases which is known at preprocessing time. Steven's approach OTOH allows the number of cases to vary at compile time.
Further, requiring a function object for every case is often unnecessary and adds overhead. "Often"? On the contrary. It's the opposite. What are your use cases?
Ones where I don't know the number of cases at preprocessing time...
I think we are on a different page and it's perhaps a good idea to look back beyond our current mindset. AFAIK, my proposal *can* accommodate that.
I have lots with composition of *many* function objects.
...so those ^^ will be in a Sequence. Therefore I'm happy with the pure dispatch and a compile time index.
Further, that index is more flexible, as I can look up whatever I like (not just function objects but arbitrary objects).
We've discussed some in earlier posts. "Overhead"? There is no overhead. The compiler will optimize them away. Look at how proto does it, for example.
I know. This process will cost precious compile time!
Everything costs at compile time.
Zero overhead.
Depends on what you count...
If you count compile time, I'd say minimal overhead. But really, all this talk at this early stage is premature optimization if you jump to the conclusion that the added interface is not sound because of this potential overhead.
Picking the simplest interface without these limitations will basically yield what we are currently reviewing. For the purpose of clarification, let me call the original interface A and my proposal B.
Again:
* Transforming A to B requires a minimal amount of coding. The smarts are already in the PP code. The cost is cheap.
The cost is cheap and the effect is destructive :-).
Why is it destructive? You must be seeing a use-case that I don't.
* Building B on top of A requires Fusion or repeating the PP code all over again. The cost is high!
It's not about *repeating* -- it takes 'unfused' or PP code to mess up the interface and remove all flexibility from it :-).
I'm confused. Please be more straight to the point.
Building A on top of B is much worse, and this time we *are* repeating the PP code! A brilliant software architect like you should find this sign alarming enough to take a deep breath and give things another thought.
To be honest, I really don't see your point. I still don't see why you'd want to build A on top of B.
The fact that we can't do it the other way around (at least not easily -- even given "fused switch") indicates that the current design is the better-suited one for automation purposes! I don't know what you mean by "other way around".
Implementing Steven's interface in terms of yours is not possible without repeating the PP code. But why do you want to do that?
To allow the number of cases to vary at compile time (as opposed to preprocessing time).
You already said that. Here:

template <typename Cases>
void foo(Cases cases)
{
    switch_<void>(n)(cases);
}

The number of cases *can* vary at compile time. I have a guess that we're talking past each other. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
Tobias Schwinger wrote:
Joel de Guzman wrote:
Building A on top of B is much worse, and this time we *are* repeating the PP code! A brilliant software architect like you should find this sign alarming enough to take a deep breath and give things another thought.
To be honest, I really don't see your point. I still don't see why you'd want to build A on top of B.
(for context: A is the original Steven API, B is my proposed API)

I still don't see why you'd want to build A on top of B but... Actually... there is a way to do it, and it's very trivial. If you really want to insist on the big function object, BF, with all the cases, just do a bind to the same function object BF for all the cases. The bind is also trivial, it just curries the N-case argument to make it a nullary (no need for boost.bind). Simple.

But then again, why do you insist on a big humongous function object? Modular is better! Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

It makes sense to answer this one in LIFO order. Joel de Guzman wrote:
template <typename Cases>
void foo(Cases cases)
{
    switch_<void>(n)(cases);
}
Aha! Then 'switch_<void>(n)' is a fused function and 'cases' a sequence of cases (whether implemented with Fusion or not)...
The number of cases *can* vary at compile time. I have a guess that we're talking past each other.
Yes, obviously -- I think because of a pair of parentheses. This code

switch_<result>(n)
((
    // ... hand-written cases, note (( ))
));

would've told me more easily what you're up to :-).
For the purpose of clarification, let me call the original interface A and my proposal B.
Again:
* Transforming A to B requires a minimal amount of coding. The smarts are already in the PP code. The cost is cheap. The cost is cheap and the effect is destructive :-).
Why is it destructive?
We lose the index and the chance to easily use a single function for all cases.
You must be seeing a use-case that I don't.
Yes, see below. [...]
To be honest, I really don't see your point. I still don't see why you'd want to build A on top of B.
1. As a thought experiment to figure out which of the two variants is the more basic one, and

2. for parameterizing a single function with compile time information looked up by index (it's quite common and -as you can probably guess- the index can be very handy inside the metaprogram).

Now that I have figured out that your interface accepts sequences, we can actually express the transforms between the two:

A ---> B: function := L(I): functions[I]()
B ---> A: transform(cases, L(I): make_pair<I>(bind(function,I())))

// Notation:
// =========
// uppercase - types
// lowercase - objects
// L(args):  - lambda composition

I don't expect us to reach a consensus, but hopefully we do understand each other's points now. Regards, Tobias

AMDG Tobias Schwinger <tschwinger <at> isonews2.com> writes:
To be honest, I really don't see your point. I still don't see why you'd want to build A on top of B.
1. As a thought experiment to figure out which of the two variants is the more basic one, and
2. for parameterizing a single function with compile time information looked up by index (it's quite common and -as you can probably guess- the index can be very handy inside the metaprogram).
Now that I have figured out that your interface accepts sequences, we can actually express the transforms between the two:
A ---> B: function := L(I): functions[I]()
B ---> A: transform(cases, L(I): make_pair<I>(bind(function,I())))

// Notation:
// =========
// uppercase - types
// lowercase - objects
// L(args):  - lambda composition

I don't expect us to reach a consensus, but hopefully we do understand each other's points now.
How about:

switch<result_type>(n, Cases, f, default?)

Make Cases a fusion sequence the types of which must have a nested ::value. Then, make each case look like:

typedef fusion::result_of::value_at_c<Cases, n>::type case_n;
case case_n::value:
    if(returns<case_n>())
        return(f(fusion::at_c<n>(cases)));
    else
        f(fusion::at_c<n>(cases));

If Cases is an MPL sequence this becomes equivalent to A. By using

template<int N, class F, bool fallthrough = false>
struct case_t
{
    static const int value = N;
    F impl;
};

template<int N, class F>
struct returns<case_t<N, F, true> > : mpl::false_ {};

in Cases and

template<class R>
struct call_function
{
    template<class T>
    R operator()(T t) { return(t()); }
};

for f it becomes B. Just a thought... In Christ, Steven Watanabe

Tobias Schwinger wrote:
Then 'switch_<void>(n)' is a fused function and 'cases' a sequence of cases (whether implemented with Fusion or not)...
The number of cases *can* vary at compile time. I have a guess that we're talking past each other.
Yes, obviously -- I think because of a pair of parentheses. This code
switch_<result>(n)
((
    // ... hand-written cases, note (( ))
));
would've told me more easily what you're up to :-).
I often add the double parens. Someone asked why at BoostCon'07. I answered "because I like it" :-) Now, I have a more valid reason. Yeah... sorry about the confusion.
For the purpose of clarification, let me call the original interface A and my proposal B.
Again:
* Transforming A to B requires a minimal amount of coding. The smarts are already in the PP code. The cost is cheap. The cost is cheap and the effect is destructive :-).
Why is it destructive?
We lose the index and the chance to easily use a single function for all cases.
Index:

* I also recall some time ago when attributes for Spirit2 were being discussed. For alternates:

(a | b | c)[f]

the question was: does f receive the index or not. Many times it is useful.

* With for_each (and many of the algorithms), one disadvantage over the lowly for or while is access to the index of the iterator. Many times those are useful too.

My answer to these kinds of concerns now is: bind it when you need it.

Single function:

I'm a strong advocate of smaller is better. Modularity matters. Big classes (or in this case function objects), metaprogram blobs (i.e. traits), humongous enums, and all such sorts of dinosaurs :-) are best avoided. They are convenient up to a certain extent and become unmanageable beyond a certain limit.

In all of my use cases, I have N functions that are provided elsewhere and I have no control over (e.g. parser functions). I insist that this is the more common use case. Grouping them into a single big struct is an unnecessary and cumbersome step.

Still, if people insist, I outlined a way to convert the big function object to smaller 'bound' objects, in another post. Just bind 'em into smaller function chunks.
You must be seeing a use-case that I don't.
Yes, see below.
[...]
To be honest, I really don't see your point. I still don't see why you'd want to build A on top of B.
1. As a thought experiment to figure out which of the two variants is the more basic one, and
2. for parameterizing a single function with compile time information looked up by index (it's quite common and -as you can probably guess- the index can be very handy inside the metaprogram).
Now that I have figured out that your interface accepts sequences, we can actually express the transforms between the two:
A ---> B: function := L(I): functions[I]()
B ---> A: transform(cases, L(I): make_pair<I>(bind(function,I())))

// Notation:
// =========
// uppercase - types
// lowercase - objects
// L(args):  - lambda composition

I don't expect us to reach a consensus, but hopefully we do understand each other's points now.
That's a relief ;-) Now that that's cleared, let me /push/ the other benefits of my proposed interface:

* Ability to allow fall-through and break:

case_<1>(f1, break_), // no fall-through
case_<2>(f2), // fall-through (by default)

* Allow multiple case handling:

case_<'x', 'y'>(f2), // handle 'x' and 'y'

Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
Tobias Schwinger wrote:
Then 'switch_<void>(n)' is a fused function and 'cases' a sequence of cases (whether implemented with Fusion or not)...
The number of cases *can* vary at compile time. I have a guess that we're talking past each other.
Yes, obviously -- I think because of a pair of parentheses. This code
switch_<result>(n)
((
    // ... hand-written cases, note (( ))
));

would've told me more easily what you're up to :-). I often add the double parens. Someone asked why at BoostCon'07. I answered "because I like it" :-) Now, I have a more valid reason. Yeah... sorry about the confusion.
Never mind! This way we "emulated" some additional reviewers producing a lot of noise ;-).
For the purpose of clarification, let me call the original interface A and my proposal B.
Again:
* Transforming A to B requires a minimal amount of coding. The smarts are already in the PP code. The cost is cheap. The cost is cheap and the effect is destructive :-).
Why is it destructive?
We lose the index and the chance to easily use a single function for all cases.
Index:
* I also recall some time ago when attributes for Spirit2 were being discussed. For alternates:
(a | b | c)[f]
the question was: does f receive the index or not. Many times it is useful.
* With for_each (and many of the algorithms), one disadvantage over the lowly for or while is access to the index of the iterator. Many times those are useful too.
My answer to these kinds of concerns now is: bind it when you need it.
Single function:
I'm a strong advocate of smaller is better. Modularity matters. Big classes (or in this case function objects), metaprogram blobs (i.e. traits), humongous enums, and all such sorts of dinosaurs :-) are best avoided. They are convenient up to a certain extent and become unmanageable beyond a certain limit.
In all of my use cases, I have N functions that are provided elsewhere and I have no control over (e.g. parser functions). I insist that this is the more common use case. Grouping them into a single big struct is an unnecessary and cumbersome step.
Still, if people insist, I outlined a way to convert the big function object to smaller 'bound' objects, in another post. Just bind 'em into smaller function chunks.
I think it's a misconception to assume that a single function object will be inherently big. It's just another degree of freedom (namely what to look up) exposed to the user. Whether one can appreciate it seems a matter of taste to me...
A ---> B: function := L(I): functions[I]()
B ---> A: transform(cases, L(I): make_pair<I>(bind(function,I())))
...and I happen to prefer the first transform.
// Notation:
// =========
// uppercase - types
// lowercase - objects
// L(args):  - lambda composition
I don't expect us to reach a consensus, but hopefully we do understand each other's points now.
That's a relief ;-)
So it wasn't all for nothing :-P...
Now that that's cleared, let me /push/ the other benefits of my proposed interface:
* Ability to allow fall-through and break:
case_<1>(f1, break_), // no fall-through
case_<2>(f2), // fall-through (by default)
* Allow multiple case handling:
case_<'x', 'y'>(f2), // handle 'x' and 'y'
This stuff is pretty cool. But it -again- makes me think we are in fact talking about two different kinds of switch tools: One that can be used manually with lots of syntactic sugar - and another one that's mean and lean and doesn't have to be that pretty because it's intended to be fed its input in the form of an automatically computed sequence, anyway... Regards, Tobias

AMDG Tobias Schwinger <tschwinger <at> isonews2.com> writes:
Now that that's cleared, let me /push/ the other benefits of my proposed interface:
* Ability to allow fall-through and break:
case_<1>(f1, break_), // no fall-through
case_<2>(f2), // fall-through (by default)
* Allow multiple case handling:
case_<'x', 'y'>(f2), // handle 'x' and 'y'
This stuff is pretty cool. But it -again- makes me think we are in fact talking about two different kinds of switch tools: One that can be used manually with lots of syntactic sugar - and another one that's mean and lean and doesn't have to be that pretty because it's intended to be fed its input in the form of an automatically computed sequence, anyway...
I'm inclined to agree. Regarding multiple case handling, there have to be two layers. The outer layer exposed to the user has to separate everything into separate cases, possibly using a no-op function that falls through. Expanding the sequence seems a bit too heavyweight for me. I would at least want to have the internal dispatching function exposed directly so that those who don't need this kind of fanciness don't have to pay for it. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Tobias Schwinger <tschwinger <at> isonews2.com> writes:
Now that that's cleared, let me /push/ the other benefits of my proposed interface:
* Ability to allow fall-through and break:
case_<1>(f1, break_), // no fall-through
case_<2>(f2), // fall-through (by default)
* Allow multiple case handling:
case_<'x', 'y'>(f2), // handle 'x' and 'y'
This stuff is pretty cool. But it -again- makes me think we are in fact talking about two different kinds of switch tools: One that can be used manually with lots of syntactic sugar - and another one that's mean and lean and doesn't have to be that pretty because it's intended to be fed its input in the form of an automatically computed sequence, anyway...
I'm inclined to agree.
Regarding multiple case handling, there have to be two layers. The outer layer exposed to the user has to separate everything into separate cases, possibly using a no-op function that falls through. Expanding the sequence seems a bit too heavyweight for me. I would at least want to have the internal dispatching function exposed directly so that those who don't need this kind of fanciness don't have to pay for it.
Again, please post to both lists. I almost missed this post. Ok, if you are willing to incorporate these into the switch_ library as a separate layer, I'd be happy to change my vote. I'm sure we all agree that I've presented valid points that need to be addressed. I don't think we need another library to have these implemented. I wouldn't mind a dual-layer library. Layers are good. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Tobias Schwinger wrote:
Joel de Guzman wrote:
Single function:
I'm a strong advocate of smaller is better. Modularity matters. Big classes (or in this case function objects), metaprogram blobs (i.e. traits), humongous enums, and all such sorts of dinosaurs :-) are best avoided. They are convenient up to a certain extent and become unmanageable beyond a certain limit.
In all of my use cases, I have N functions that are provided elsewhere and I have no control over (e.g. parser functions). I insist that this is the more common use case. Grouping them into a single big struct is an unnecessary and cumbersome step.
Still, if people insist, I outlined a way to convert the big function object to smaller 'bound' objects, in another post. Just bind 'em into smaller function chunks.
I think it's a misconception to assume that a single function object will be inherently big.
It's not about "big". It's about modular vs. monolithic. The best designs, IMO, are modular.
It's just another degree of freedom (namely what to look up) exposed to the user. Whether one can appreciate it seems a matter of taste to me...
A ---> B: function := L(I): functions[I]() B ---> A: transform(cases, L(I): make_pair<I>(bind(function,I())))
...and I happen to prefer the first transform.
I guess that reasoning is due to your thinking that the one monolithic automation functor is more common. That's where I disagree, vehemently(!). I've got lots of use cases that show otherwise. I assert that the more common case is to have as input N functions (member/free function pointers, bound functions, boost.functions, etc) and the switch_ is used to dispatch.

Let me emphasize this to the strongest extent: A single monolithic function for the cases in a switch facility is simply and utterly wrong, wrong and wrong! :-P If the design stands as it is, I better write my own. But it won't get my vote. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

On Jan 10, 2008 7:58 PM, Joel de Guzman <joel@boost-consulting.com> wrote:
Tobias Schwinger wrote:
Joel de Guzman wrote:
Single function:
I'm a strong advocate of smaller is better. Modularity matters. Big classes (or in this case function objects), metaprogram blobs (i.e. traits), humongous enums, and all such sorts of dinosaurs :-) are best avoided. They are convenient up to a certain extent and become unmanageable beyond a certain limit.
In all of my use cases, I have N functions that are provided elsewhere and I have no control over (e.g. parser functions). I insist that this is the more common use case. Grouping them into a single big struct is an unnecessary and cumbersome step.
Still, if people insist, I outlined a way to convert the big function object to smaller 'bound' objects, in another post. Just bind 'em into smaller function chunks.
I think it's a misconception to assume that a single function object will be inherently big.
It's not about "big". It's about modular vs. monolithic. The best designs, IMO, are modular.
It's just another degree of freedom (namely what to look up) exposed to the user. Whether one can appreciate it seems a matter of taste to me...
A ---> B: function := L(I): functions[I]() B ---> A: transform(cases, L(I): make_pair<I>(bind(function,I())))
...and I happen to prefer the first transform.
I guess that reasoning is due to your thinking that the one monolithic automation functor is more common. That's where I disagree, vehemently(!). I've got lots of use cases that show otherwise. I assert that the more common case is to have as input N functions (member/free function pointers, bound functions, boost.functions, etc) and the switch_ is used to dispatch.
Let me emphasize this to the strongest extent: A single monolithic function for the cases in a switch facility is simply and utterly wrong, wrong and wrong! :-P If the design stands as it is, I better write my own. But it won't get my vote.
FWIW, I have a use case in which a single function is used for all the cases, but it doesn't seem monolithic. Structurally it is very similar to the example given in the proposed library's documentation - the case function looks something like this:

// get_port_c behaves similarly to fusion::at_c
// port_t<T> is a run-time polymorphic class derived from port
// the goal of this whole thing is to return a port * (port is the base
// class of a run-time polymorphic class hierarchy) from an object of
// some type T (T is required to model a certain concept - Port)
template<typename Component>
struct get_port_case
{
    typedef void result_type;

    template<class Case>
    void operator()(Case) const
    {
        ret.reset(new port_t<
            typename result_of::get_port_c<Component, Case::value>::type
        >(get_port_c<Case::value>(c)));
    }

    get_port_case(Component& c, std::auto_ptr<port> &ret)
        : c(c), ret(ret) {}

    Component &c;
    std::auto_ptr<port> &ret;
};

It is used like this (the number of ports is known at compile time but not at "coding time"):

template<typename Component>
std::auto_ptr<port> get_port(Component &c, int port_num)
{
    std::auto_ptr<port> ret;
    typedef mpl::range_c<
        int, 0,
        mpl::size<typename traits_of<Component>::type::ports>::value
    > range;
    boost::switch_<range>(port_num, detail::get_port_case<Component>(c, ret));
    return ret;
}

"A" seems ideal for this use case. I have trouble seeing how to use "B" for it (easily) without making it so that the index is passed to the function object and allowing something like case<range>(...). But I might not be seeing all the possibilities. How can I implement this using "B"?
Regardless of this example, if there isn't a "one size fits all" interface that can be implemented in a lightweight enough fashion to satisfy everyone:

* should the Switch library provide both interfaces in separate functions (even if they are both implemented with their own PP)? or
* maybe the submitted library should have a different name/more clearly stated scope, as Tobias has suggested?

Kudos to all of you for trying so hard at finding a solution for this! Stjepan

AMDG Joel de Guzman <joel <at> boost-consulting.com> writes:
Single function:
I'm a strong advocate of smaller is better. Modularity matters. Big classes (or in this case function objects), metaprogram blobs (i.e. traits), humongous enums, and all such sorts of dinosaurs are best avoided. They are convenient up to a certain extent and become unmanageable beyond a certain limit.
When it's being managed by humans, sure. When the creation of the whatever is automated and it's never used by humans directly, it doesn't matter much. The big function object you refer to has a single conceptual responsibility. It isn't a blob any more than a fusion sequence used for the same purpose.
In all of my use cases, I have N functions that are provided elsewhere and I have no control over (e.g. parser functions). I insist that this is the more common use case. Grouping them into a single big struct is an unnecessary and cumbersome step.
It may be more common. What I know for certain is that it's slightly easier to implement my interface in terms of yours than the other way around.
Still, if people insist, I outlined a way to convert the big function object to smaller 'bound' objects, in another post. Just bind 'em into smaller function chunks.
That's true for your use cases. For cases where you have a runtime integer which you need to convert into a compile time integer, which is then used to index into an mpl sequence, for instance (as in variant), separate function objects are pointless. I don't want to relegate this usage to the side just because you don't happen to need it. There are enough use cases that it needs to be considered, IMO. Another example is for finite state machines. Also, at one point I was considering a multiple dispatcher that involved a switch. In Christ, Steven Watanabe

I almost missed Steven's post. Please post to both lists. Steven Watanabe wrote:
AMDG
Joel de Guzman <joel <at> boost-consulting.com> writes:
Single function:
I'm a strong advocate of smaller is better. Modularity matters. Big classes (or in this case function objects), metaprogram blobs (i.e. traits), humongous enums, and all such sorts of dinosaurs are best avoided. They are convenient up to a certain extent and become unmanageable beyond a certain limit.
When it's being managed by humans, sure. When the creation of the whatever is automated and it's never used by humans directly, it doesn't matter much. The big function object you refer to has a single conceptual responsibility. It isn't a blob any more than a fusion sequence used for the same purpose.
Ok. Good point.
In all of my use cases, I have N functions that are provided elsewhere and I have no control over (e.g. parser functions). I insist that this is the more common use case. Grouping them into a single big struct is an unnecessary and cumbersome step.
It may be more common. What I know for certain is that it's slightly easier to implement my interface in terms of yours than the other way around.
Still, if people insist, I outlined a way to convert the big function object to smaller 'bound' objects, in another post. Just bind 'em into smaller function chunks.
That's true for your use cases. For cases where you have a runtime integer which you need to convert into a compile time integer, which is then used to index into an mpl sequence, for instance (as in variant), separate function objects are pointless. I don't want to relegate this usage to the side just because you don't happen to need it. There are enough use cases that it needs to be considered, IMO. Another example is for finite state machines. Also, at one point I was considering a multiple dispatcher that involved a switch.
Yes. I agree. The index is indeed needed. See my latest posts. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Joel de Guzman wrote:
Single function:
I'm a strong advocate of smaller is better. Modularity matters. Big classes (or in this case function objects), metaprogram blobs (i.e. traits), humongous enums, and all such sorts of dinosaurs are best avoided. They are convenient up to a certain extent and become unmanageable beyond a certain limit.
When it's being managed by humans, sure. When the creation of the whatever is automated and it's never used by humans directly, it doesn't matter much. The big function object you refer to has a single conceptual responsibility. It isn't a blob any more than a fusion sequence used for the same purpose.
Ok. Good point.
Ok, if this is the intent, then it should be clearly mentioned in the docs. But let me ask then, for the sake of discussion, how do you intend to build the struct of overloaded operator()s? If that is the intent, then why does the library not provide the necessary tools to do that? Now, you mentioned a fusion sequence. If the intent is for the big function object to be built, why is that scheme any better than simply using a fusion structure in the first place? All the mechanisms for building the container of cases are already in place. Why invent another scheme? Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Tobias Schwinger wrote:
My point is that the utility we have here is in fact too different from phoenix::switch_ to compare the two (and that boost::switch_ is not the best name):
What would be the point in phoenix::switch_ if it wasn't lazy?
OK, after some thinking I managed to look beyond my own nose and figured out that a non-lazy phoenix::switch_ would be an expression-level switch (like in VHDL or Ruby) and can indeed be a practical thing. However, I don't think that's what this library is intended for.
As Steven pointed out, overloading operator() (which is roughly what you propose - just using an uglier syntax) is a rare use case. Usually we'll want to feed the index to some metaprogramming machinery and instantiate several high-speed dispatched control paths. This utility is about generating the machine code of 'switch'. It does not necessarily have to mimic its syntax!
Here's an acid test for the API -- try to implement my suggested syntax on top of the "simple" API. You'll soon realize that you can't
No?! Here is how it's done:
o Use fusion::unfused or wire up some operators to build a fusion::map of constant / function object pairs, and
o create a function object from that map that looks up the appropriate function object in the map and calls it (if we want it to be non-lazily evaluated) or returns it (otherwise) and is fed to 'switch_'.
Just to say it explicitly: Without using *any* PP metacode!
Please note that the reverse does indeed require repeating the PP code. Also note that this approach would work to implement the variant visitor.
Here's another one: Say we have a bunch of (generic) Strategies and an Algorithm template. Now we want to choose an Algorithm instantiation (that is, an Algorithm template specialized with a Strategy) at some entry point. We don't want dynamic calls and we want everything after the initial dispatch to be inlined. Does this sound familiar? Right, we just have to substitute "Strategies" with "Scanners", Algorithm with "RD Parser", and "some entry point" with "Rule", and we're talking about Spirit's Multi Scanner Support. A scalable solution on top of Fusion and Switch will be almost trivial.
Also also note that the 'switch_' name is obviously confusing :-).
It's not a suitable building block, like say, mpl::for_each.
No, but it fits perfectly well into the same category.
... as both are code generation constructs, for_each implementing sequential and switch_ (which I think should be called differently) selective cascading. Regards, Tobias

Tobias Schwinger wrote:
Here's another one:
Say we have a bunch of (generic) Strategies and an Algorithm template.
Now we want to choose an Algorithm instantiation (that is, an Algorithm template specialized with a Strategy) at some entry point.
We don't want dynamic calls and we want everything after the initial dispatch to be inlined.
Does this sound familiar? Right, we just have to substitute "Strategies" with "Scanners" and Algorithm with "RD Parser" and "some entry point" with "Rule" and we're talking about Spirit's Multi Scanner Support.
A scalable solution on top of Fusion and Switch will be almost trivial.
That's why I'm very interested in switch_. I'm well aware of the possibilities :) I have a lot more use cases. How about static first-follow prediction :P I don't think we have a disagreement on usefulness.
Also also note that the 'switch_' name is obviously confusing :-).
... (which I think should be called differently) selective cascading.
I disagree. I don't see any reason why we have to invent another mechanism when we can borrow the ideas from a well-tested scheme: the native switch, albeit in a more FP flavor. It does not stop there. At the machine level, a switch is properly implemented as a perfect hash dispatching over its N functions. It is also very possible to have the N functions in a dynamic runtime container and dispatch with the speed of a switch. I did some experiments on this some time ago and posted them to this list along with some timings. We could actually get the speed of a switch at runtime by computing the perfect hashes beforehand. With some compilers, I recall even surpassing the switch speed (yes: http://lists.boost.org/Archives/boost/2004/08/69787.php). A switch_ library is crucial -- one that I would not take lightly with a half-baked solution. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

Tobias Schwinger wrote:
Tobias Schwinger wrote:
My point is that the utility we have here is in fact too different from phoenix::switch_ to compare the two (and that boost::switch_ is not the best name):
What would be the point in phoenix::switch_ if it wasn't lazy?
OK, after some thinking I managed to look beyond my own nose and figured out that a non-lazy phoenix::switch_ would be an expression-level switch (like in VHDL or Ruby) and can indeed be a practical thing.
Nice one! Yeah, that too.
However, I don't think that's what this library is intended for.
That's where our disagreement starts. I don't see any reason why not. Why settle for a half-baked solution when you can, with a bit more thought and a bit more code, have a truly elegant solution that covers a lot more of the problem space. Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

AMDG Tobias Schwinger <tschwinger <at> isonews2.com> writes:
As pointed out in other posts I think the default default case behavior should be changed not to throw but to use the default constructor if no default case function object is specified.
Will do.
A default case function object returning 'void' should be assumed (and 'assert'ed) not to return in a context where a (non-void) result is expected (implementation hint: the built-in comma operator allows void arguments and an overloaded comma operator doesn't).
Personally, I'd rather just forbid a default that returns void when the return type is non-void. I don't think the comma operator is a completely fool-proof way to detect void. It could cause an ambiguity if the other type also defines an overloaded comma operator.
As also pointed out and discussed in other posts I'm very much for having the result type specified explicitly (as opposed to deducing it from the function object).
* What is your evaluation of the implementation?
Again, I like the simplicity. Keep it this way:
If "fallthrough cases" are going to be implemented it should be done in another template (or should it even be a full-blown Duff loop?).
Fallthrough cases make a lot more sense with a fusion sequence based interface.
Another variant of the template taking min/max instead of a sequence might be a good idea, as it can make things compile faster in many typical use cases (well, that would be half-way a design thing).
I'll consider it. Provided I can measure a speedup, of course.
* What is your evaluation of the documentation?
Works for me.
As mentioned before, the reference could be more detailed at places, regarding the equality with MPL constants and the exact types passed to the function objects (even if overloading operator() is uncommon - it might be occasionally useful to deduce a non-type constant from it).
As Dan Marsden mentioned, I can easily imagine dealing with special cases such as zero by overloading. Thanks! In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Tobias Schwinger <tschwinger <at> isonews2.com> writes:
As pointed out in other posts I think the default default case behavior should be changed not to throw but to use the default constructor if no default case function object is specified.
Will do.
A default case function object returning 'void' should be assumed (and 'assert'ed) not to return in a context where a (non-void) result is expected (implementation hint: the built-in comma operator allows void arguments and an overloaded comma operator doesn't).
Personally, I'd rather just forbid a default that returns void when the return type is non-void.
The rationale was to allow non-returning default function objects that work regardless of the type (such as 'throw_'). It's probably not that important...
I don't think the comma operator is a completely fool-proof way to detect void.
It is.
It could cause an ambiguity if the other type also defines an overloaded comma operator.
No, because the operator is binary and one operand can have a reserved type which is only internally used by the test code. Regards, Tobias

AMDG Tobias Schwinger <tschwinger <at> isonews2.com> writes:
I don't think the comma operator is a completely fool-proof way to detect void.
It is.
It could cause an ambiguity if the other type also defines an overloaded comma operator.
No, because the operator is binary and one operand can have a reserved type which is only internally used by the test code.
Ok. Let's make this concrete. Here is what I imagine:

struct tester1 {};
struct tester2 {};

typedef char no;
struct yes { no dummy[2]; };

template<class T> tester2 operator,(T, tester1);

template<class T> no test(T);
template<class T> yes test(tester2);

template<bool is_void> struct call_impl;

template<> struct call_impl<true> {
    template<class R, class F, class T>
    static R apply(F f, T t) {
        f(t);
        BOOST_ASSERT(!"Void function cannot return to call "
                      "when the return type is not void");
    }
};

template<> struct call_impl<false> {
    template<class R, class F, class T>
    static R apply(F f, T t) {
        return f(t);
    }
};

template<class R, class F, class T>
R call(F f, T t) {
    return call_impl<
        sizeof(test((f(t), tester1()))) == sizeof(yes)
    >::template apply<R>(f, t);
}

Suppose that we create a type:

struct X {};
template<class T> T operator,(X, T);
X f(X);

And then try call<X>(f, X()). The expression

f(t), tester1()

is ambiguous. It matches both comma operators and neither operator is more specialized than the other. How do you get around this? Remember that f(t) only needs to be convertible to the result type, so we can't look for an exact match. Adding implicit conversions doesn't work because the other comma operator can mirror such conversions. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Tobias Schwinger <tschwinger <at> isonews2.com> writes:
I don't think the comma operator is a completely fool-proof way to detect void. It is.
It could cause an ambiguity if the other type also defines an overloaded comma operator. No, because the operator is binary and one operand can have a reserved type which is only internally used by the test code.
I hope I haven't promised too much :-)...
Ok. Let's make this concrete. Here is what I imagine:
<code>

Try the attached code - does it work for you?

Regards, Tobias

#include <iostream>
#include <cassert>

template< typename T >
struct reserved
{
    char i[2];

    template< typename U >
    friend char operator,(T const&, reserved<U> const&);
};

#define IS_VOID(expr,target_type) \
    (sizeof(( (expr) , reserved<target_type>() )) != 1)

//

void void_();
int nonvoid();

struct X {};
struct Y { Y(X); Y(); };

template<class T> T operator,(X, T);

//

int main()
{
    assert( IS_VOID(void_(),int) );
    assert( IS_VOID(void_(),X) );
    assert( IS_VOID(void_(),Y) );

    assert( !IS_VOID(nonvoid(),int) );
    assert( !IS_VOID(nonvoid(),X) );
    assert( !IS_VOID(nonvoid(),Y) );

    assert( !IS_VOID(X(),int) );
    assert( !IS_VOID(X(),X) );
    assert( !IS_VOID(X(),Y) );

    assert( !IS_VOID(Y(),int) );
    assert( !IS_VOID(Y(),X) );
    assert( !IS_VOID(Y(),Y) );

    return 0;
}

AMDG Tobias Schwinger <tschwinger <at> isonews2.com> writes:
Steven Watanabe wrote:
AMDG
Tobias Schwinger <tschwinger <at> isonews2.com> writes:
I don't think the comma operator is a completely fool-proof way to detect void. It is.
It could cause an ambiguity if the other type also defines an overloaded comma operator. No, because the operator is binary and one operand can have a reserved type which is only internally used by the test code.
I hope I haven't promised too much ...
Ok. Let's make this concrete. Here is what I imagine:
<code>
Try the attached code - does it work for you?
No it doesn't. None of the versions requiring implicit conversions worked. Did you by any chance compile with NDEBUG? In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Tobias Schwinger <tschwinger <at> isonews2.com> writes:
Steven Watanabe wrote:
AMDG
Tobias Schwinger <tschwinger <at> isonews2.com> writes:
I don't think the comma operator is a completely fool-proof way to detect void. It is.
It could cause an ambiguity if the other type also defines an overloaded comma operator. No, because the operator is binary and one operand can have a reserved type which is only internally used by the test code. I hope I haven't promised too much ...
Ok. Let's make this concrete. Here is what I imagine: <code>
Try the attached code - does it work for you?
No it doesn't. None of the versions requiring implicit conversions worked. Did you by any chance compile with NDEBUG?
It works with GCC4. And another overload would be good:

friend char operator,(T const&, reserved const&);

Regards, Tobias

AMDG Tobias Schwinger <tschwinger <at> isonews2.com> writes:
Try the attached code - does it work for you?
No it doesn't. None of the versions requiring implicit conversions worked. Did you by any chance compile with NDEBUG?
It works with GCC4. And another overload would be good:
friend char operator,(T const&, reserved const&);
gcc 3.4.4 and msvc 8.0 are both unhappy, still. What makes me wonder is that assert( !IS_VOID(nonvoid(),X) ); can't possibly work because there is no conversion from int to X. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Tobias Schwinger <tschwinger <at> isonews2.com> writes:
Try the attached code - does it work for you? No it doesn't. None of the versions requiring implicit conversions worked. Did you by any chance compile with NDEBUG? It works with GCC4. And another overload would be good:
friend char operator,(T const&, reserved const&);
gcc 3.4.4 and msvc 8.0 are both unhappy, still.
What makes me wonder is that assert( !IS_VOID(nonvoid(),X) ); can't possibly work because there is no conversion from int to X.
Yes, that's strange -- I'm certain 'assert' works correctly. Making the cases that shouldn't work fail is probably easier than making those that should work pass, however. Using Comeau I get all cases passing except this one:

STATIC_ASSERT( !IS_VOID(X(),Y) );

I'm not sure whether it should behave this way. Anyway, I wouldn't have expected it to be that difficult -- maybe we should use some "never-returns type" in place of 'void'... Regards, Tobias

AMDG Tobias Schwinger wrote:
Anyway, I wouldn't have expected it to be that difficult -- maybe we should use some "never-returns type" in place of 'void'...
Yeah. Fortunately, that's straightforward. In Christ, Steven Watanabe

On Jan 9, 2008 6:55 AM, Tobias Schwinger <tschwinger@isonews2.com> wrote:
Stjepan Rajko wrote:
* What is your evaluation of the design?
A default case function object returning 'void' should be assumed (and 'assert'ed) not to return in a context where a (non-void) result is expected (implementation hint: the built-in comma operator allows void arguments and an overloaded comma operator doesn't).
I can see why having a non-returning default case function object would be a common use case (e.g., throw on non-specified case), but technically, why should the default case be treated differently than the regular cases in this respect? E.g., what if I wanted to throw on case 3 but return something by default? It seems that the cleanest solutions here would be to either allow void-returning function objects (when a return type is expected) both for regular and default cases (and assert that the call does not return), or disallow the void-returning function objects all together (when a return type is expected). Just my 2 cents... Regards, Stjepan
participants (5)
- Joel de Guzman
- Steven Watanabe
- Steven Watanabe
- Stjepan Rajko
- Tobias Schwinger