
Here is my review of Phoenix v3.

First, let me say that I am a big user of Phoenix v2, Spirit and Proto, and that I therefore envision myself a big user of Phoenix v3 as well. It is a great improvement on top of Phoenix v2 (in particular, no strange bugs due to broken type deduction, and correct usage of result_of), and for that reason alone it warrants my approval. So here it is: I vote yes for inclusion.

I do, however, have concerns about its compatibility with Phoenix v2. I tried to see how Spirit fared if I symlinked its underlying phoenix directory to that of Phoenix v3, and it just doesn't work. Mixing Phoenix v3 with Spirit doesn't work either. I believe this is a very important issue. I'm not sure it needs to be fixed before the first release of Phoenix as a first-class Boost citizen, but it certainly needs to be fixed ASAP by porting Spirit to use the new version.

Apart from that, the library is pretty satisfying:

- As I said, the use of result_of is a great improvement, and bind, for example, is now polymorphic and fully compatible with boost/std::bind.
- It supports perfect forwarding, up to a limit.
- It uses Proto in a minimalistic and modular way.
- It is reasonably fast to compile, thanks to the use of Wave to preprocess headers. Compiling the STL module could be made faster by using free functions instead of global objects (which should be the recommended way to define lazy functions!).
- The docs haven't changed much: a few corrections here and there, plus the addition of the new internals. They still contain typos and some things that need to be updated.

Let's move on to more detailed remarks:

Misc
----

Phoenix v3 is missing a phoenix.hpp header file that includes all modules, which Phoenix v2 had (but in the parent directory).
Lazy functions
--------------

Using phoenix::function to turn a PFO into a lazy function is the most basic way to extend Phoenix with new functionality, and clearly the recommended way to do so according to the docs. [1] [2] It is directly inherited from Phoenix v2; only the result type deduction mechanism changed.

This approach has several problems, at least in the way it is presented in the documentation:

- global objects potentially increase binary size;
- those objects are not PODs and therefore require runtime initialization, adding some runtime overhead at application startup;
- instantiation happens regardless of whether the function is used or not, which affects compilation time negatively.

This method is massively used to define a whole lot of lazy functions that forward to standard algorithms and container functions [3], which suggests that this is indeed the recommended way to proceed. It should however be avoided; prefer defining a template function that constructs and applies the adapted function object. If the function object can stay a POD, that's better too.

I think it would also be valuable to add a "lazy" function (or some other name) that takes a PFO and returns its lazy version, which could be used inline in lambda expressions in a way similar to bind.

[1] <http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/starter_kit/lazy_functions.html>
[2] <http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/modules/bind.html>
[3] <http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/modules/stl/container.html>

Phoenix as a Proto-based library
--------------------------------

Extension of Phoenix is very nice: you can plug in custom subgrammars and subnodes non-intrusively, making it very modular. Phoenix expressions are also refinements of Proto expressions, so I can use Proto transforms to alter the tree.
It is also possible to specify custom evaluation strategies on a per-node basis, which paves the way for many exotic applications. Terminals can also be customized for evaluation within the default strategy. I'm not entirely sure this feature is not redundant, but I'm not familiar enough with the code to tell.

Phoenix as Proto with statements
--------------------------------

One thing I was hoping Phoenix v3 would be is Proto extended to support statements. While it comes pretty close for some uses, it still fails on one point: the ability to define custom languages that embed Phoenix statements. The missing piece is handling of domains (extends are missing too, but I don't think they make sense at the statement level). I think that area needs some research though, so it's not relevant to acceptance at all.

Documentation
-------------

<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/modules/bind.html> says that bind is monomorphic. I thought it was polymorphic now?

<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/actor.html> doesn't talk about perfect forwarding.

<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/inside/actor.html> Actor cannot be both a concept and a model; it's a concept and a refinement. It would also be nice to highlight the difference with the PFO concept.

"The problem is that given an arbitrary function F, using current C++ language rules, one cannot create a forwarding function FF that transparently assumes the arguments of F." That's wrong. It's not impossible; it just requires an exponential number of overloads. C++0x rvalue references allow reducing this to a linear amount.

<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/inside/expression.html> has a spurious ] at the end. The layout of this section is weird, by the way: the macros appear on top but are not explained until a later page.
I remember I saw a couple of other typos and similar issues, but I can't find them anymore. Anyway, it needs some proofreading.

Features I would like to see in a future version
-------------------------------------------------

OK, this has little place in a review, but here it is anyway.

I would like to have an adapter to turn a PFO into a monomorphic function object (i.e. a function object with a result_type) that can be passed to legacy algorithms. This can be done in two ways: either by giving the return type or the type of the arguments explicitly. This could also be used to define variant visitors with lambdas.

I would like it if Phoenix could detect function objects that are monomorphic and automatically propagate that monomorphism (but that's probably hard to do).

I would like it if the polymorphic function objects generated by Phoenix were masked out using SFINAE if their body expression would result in a hard error. Essentially, to do this, one needs to be able to test whether the default Proto transform of an expression leads to an error. While this should be possible with compilers supporting extended SFINAE, I haven't been able to grok the Proto internals well enough to do it myself yet.

On Thu, Feb 24, 2011 at 5:52 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
Here is my review of Phoenix v3.
First, let me say that I am a big user of Phoenix v2, Spirit and Proto, and that I therefore envision myself a big user of Phoenix v3 as well.
It is a great improvement on top of Phoenix v2 (in particular, no strange bugs due to broken type deduction and correct usage of result_of), and for that reason alone it warrants my approval. So here it is: I vote yes for inclusion.
Mathias, thanks for the review and the nice words.
I however have concerns about its compatibility with Phoenix v2. I tried to see how Spirit fared if I symlinked its underlying phoenix directory to that of Phoenix v3, and it just doesn't work. Mixing Phoenix v3 with Spirit doesn't work either.
Yes, this is a known issue.
I believe this is a very important issue. I'm not sure it needs to be fixed before the first release of Phoenix as a first-class Boost citizen, but it certainly needs to be fixed ASAP by porting Spirit to use the new version.
I am currently working on porting Spirit to V3. I agree on all points; a proper switching strategy has to be developed. <snipping overview>
Let's start with more detailed remarks: Misc ---- Phoenix v3 is missing a phoenix.hpp header file that includes all modules, which Phoenix v2 had (but in the parent directory).
Nope, it's not; there is the boost/phoenix/phoenix.hpp header, which includes all of Phoenix. IIUC it is generally frowned upon to put those headers directly in the boost directory.
Lazy functions -------------- Using phoenix::function to turn a PFO into a lazy function is the most basic way to extend Phoenix with new functionality, and clearly the recommended way to do so according to the docs. [1] [2] It is directly inherited from Phoenix v2, and only the result type deduction mechanism changed.
Correct. But please keep in mind: you as a user don't have to put your phoenix::function objects at namespace scope.
This approach has several problems, at least in the way it is presented in the documentation. - global objects potentially increase binary size - those objects are not PODs and therefore require runtime initialization, adding some runtime overhead at application startup - instantiation happens regardless of whether the function is used or not, which affects compilation time negatively.
Do you have numbers for that?
This method is massively used to define a whole lot of lazy functions that forward to standard algorithms and container functions [3], which suggests that this is indeed the recommended way to proceed. It should however be avoided; prefer defining a template function that constructs and applies the adapted function object. If the function object can stay a POD, that's better too.
Do you have a concrete proposal for how this could look? If there is no significant impact on (compile-time and runtime) performance from the points you mentioned above, it will stay as it is for now.
I think it would also be valuable to add a "lazy" function (or some other name) that takes a PFO and returns its lazy version, that could be used inline in lambda expressions in a way similar to bind.
I agree and think this would be a valuable addition. <snip>
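For what it's worth, a self-contained sketch of what such a "lazy" adapter could look like. All names here are invented; a real Phoenix implementation would return an actor built with proto::make_expr rather than a plain closure.

```cpp
#include <cassert>

// Hypothetical "lazy" adapter: wraps a plain polymorphic function object
// (PFO) and defers its invocation. Calling the wrapper captures the
// arguments and returns a nullary closure that applies the PFO later.
template <typename F>
struct lazy_wrapper
{
    F f;

    template <typename A0>
    auto operator()(A0 a0) const
    {
        F g = f;                        // copy so the closure owns its state
        return [g, a0] { return g(a0); };
    }
};

template <typename F>
lazy_wrapper<F> lazy(F f) { return lazy_wrapper<F>{f}; }

// an ordinary PFO, usable both directly and through lazy()
struct twice_impl
{
    template <typename T>
    T operator()(T x) const { return x + x; }
};
```

For example, lazy(twice_impl{})(21) builds a deferred call that is only evaluated when the resulting closure is invoked.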
Phoenix as a Proto-based library --------------------------------
Extension of Phoenix is very nice: you can plug custom subgrammars and subnodes non-intrusively, making it very modular.
Phoenix expressions are also refinements of Proto expressions, so I can use Proto transforms to alter the tree.
It is also possible to specify custom evaluation strategies on a per-node basis, which paves the way for many exotic applications.
Terminals can also be customized for evaluation within the default strategy. I'm not entirely sure this feature is not redundant, but I'm not familiar enough with the code to tell.
It is not redundant. The thing is that proto only recognizes one tag for a terminal expression. To distinguish between all those different terminal types, we needed the custom terminal customization point.
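A toy illustration of the idea behind that customization point (the trait name below is invented; real Phoenix has its own custom-terminal mechanism): since every terminal shares the same structural tag, evaluation has to dispatch on the stored value's type, via a trait that users can specialize.

```cpp
#include <cassert>

// Default terminal evaluation: return the stored value unchanged.
template <typename T>
struct custom_eval
{
    static T apply(T const& v) { return v; }
};

// A distinct terminal value type: a "reference" that must be dereferenced.
struct my_ref { int* p; };

// User specialization: evaluating a my_ref terminal yields the referee.
template <>
struct custom_eval<my_ref>
{
    static int apply(my_ref const& r) { return *r.p; }
};

// Dispatch happens purely on the value type, not on a distinct node tag.
template <typename T>
auto eval_terminal(T const& v) { return custom_eval<T>::apply(v); }
```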
Phoenix as Proto with statements --------------------------------
One thing I was hoping Phoenix v3 would be is Proto extended to support statements.
I don't understand, what do you mean by that?
While it comes pretty close for some uses, it still fails on one point: the ability to define custom languages that embed Phoenix statements.
The ability is there ...
The missing piece is handling of domains (extends are missing too, but I don't think they make sense at the statement level).
... and you don't necessarily need the proto domain feature for that.
I think that area needs some research though, so it's not relevant to acceptance at all.
It indeed needs a lot more thought.
Documentation -------------
<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/modules/bind.html> says that bind is monomorphic. I thought it was polymorphic now?
It is; good catch. It's a relic from old times.
<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/actor.html> doesn't talk about perfect forwarding.
It is mentioned here: http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/ht...
<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/inside/actor.html> Actor cannot be both a concept and a model, it's a concept and a refinement.
Thanks. This is corrected.
It would also be nice to highlight the difference with the PFO concept. The difference is that it is a proto expression template wrapper.
"The problem is that given an arbitrary function F, using current C++ language rules, one cannot create a forwarding function FF that transparently assumes the arguments of F. " That's wrong. It's not impossible, it just requires an exponential number of overloads. C++0x rvalue references allow to reduce this to a linear amount.
We can only *emulate* perfect forwarding. The sentence you claim is wrong is just a reformulation of the problem statement of the "Perfect Forwarding Problem" (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm).
<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/inside/expression.html> spurious ] at the end. The layout of this section is weird, by the way. The macros appear on top but are not explained until a later page.
This is because of QuickBook: the macros are subsections of this section, and the TOC is created for them ... I will try to find a way to make it less confusing.
I remember I saw a couple other typos and similar but I can't find them anymore. Anyway it needs some proofreading.
I agree :)
Features I would like to see in a future version -------------------------------------------------
Ok, this has little to do in a review, but here it is anyway.
I would like to have an adapter to turn a PFO into a monomorphic function object (i.e. a function object with a result_type) that can be passed to legacy algorithms. This can be done two ways: either by giving the return type or the type of the arguments explicitly. This could also be used to define variant visitors with lambdas.
I would like it if Phoenix could detect function objects that are monomorphic and automatically propagate that monomorphism (but that's probably hard to do).
We talked about that already and I didn't forget. I think this will be a good addition.
I would like it if the polymorphic function objects generated by Phoenix were masked out using SFINAE if their body-expression would result in a hard error. Essentially, to do this, one needs to be able to test whether the default Proto transform of an expression leads to an error. While this should be possible with compilers supporting extended SFINAE, I haven't been able to grok the Proto internals well enough to do it yet myself.
Yes, I was thinking about that lately too! Maybe we can work out an implementation of that feature together!
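A minimal sketch of the detection idea on a compiler with extended SFINAE (C++11 expression SFINAE here; in Phoenix one would probe the default Proto transform of the body expression instead of a plain call, and these names are invented):

```cpp
#include <cassert>
#include <type_traits>
#include <utility>

// is_callable<F, A>: true iff F can be invoked with an argument of type A.
// The partial specialization is only viable when the call expression is
// well-formed, so an ill-formed body becomes a soft SFINAE failure instead
// of a hard error.
template <typename F, typename A, typename = void>
struct is_callable : std::false_type {};

template <typename F, typename A>
struct is_callable<F, A,
    decltype(void(std::declval<F>()(std::declval<A>())))> : std::true_type {};

// A function object callable with int only.
struct negate_only_int
{
    int operator()(int x) const { return -x; }
};
```

The same probe could then gate operator() of the generated function object, masking it out of overload resolution.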

On 2/25/2011 2:07 AM, Thomas Heller wrote:
On Thu, Feb 24, 2011 at 5:52 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
Here is my review of Phoenix v3.
First, let me say that I am a big user of Phoenix v2, Spirit and Proto, and that I therefore envision myself a big user of Phoenix v3 as well.
It is a great improvement on top of Phoenix v2 (in particular, no strange bugs due to broken type deduction and correct usage of result_of), and for that reason alone it warrants my approval. So here it is: I vote yes for inclusion.
Mathias, thanks for the review and the nice words.
Thank you very much for your review, Mathias.
I however have concerns about its compatibility with Phoenix v2. I tried to see how Spirit fared if I symlinked its underlying phoenix directory to that of Phoenix v3, and it just doesn't work. Mixing Phoenix v3 with Spirit doesn't work either.
Yes, this is a known issue.
I believe this is a very important issue. I'm not sure it needs to be fixed before the first release of Phoenix as a first-class Boost citizen, but it certainly needs to be fixed ASAP by porting Spirit to use the new version.
I am currently working on porting Spirit to V3. I agree on all points; a proper switching strategy has to be developed.
Thomas, let's discuss this off-list. This is very important!
<snipping overview>
Let's start with more detailed remarks: Misc ---- Phoenix v3 is missing a phoenix.hpp header file that includes all modules, which Phoenix v2 had (but in the parent directory).
Nope, it's not; there is the boost/phoenix/phoenix.hpp header, which includes all of Phoenix. IIUC it is generally frowned upon to put those headers directly in the boost directory.
Sure, but for consistency, we should do it anyway. Just trust the user knows what he's doing.
Lazy functions -------------- Using phoenix::function to turn a PFO into a lazy function is the most basic way to extend Phoenix with new functionality, and clearly the recommended way to do so according to the docs. [1] [2] It is directly inherited from Phoenix v2, and only the result type deduction mechanism changed.
Correct. But please keep in mind: you as a user don't have to put your phoenix::function objects at namespace scope.
This approach has several problems, at least in the way it is presented in the documentation. - global objects potentially increase binary size - those objects are not PODs and therefore require runtime initialization, adding some runtime overhead at application startup - instantiation happens regardless of whether the function is used or not, which affects compilation time negatively.
Do you have numbers for that?
:-) Mathias has a point. Let's also discuss this off-list. With Spirit, I've already begun the migration towards an object-free environment, but that's for terminals -- proto is known to slow down compilation when there are lots of terminals. I am not quite sure about function objects.
This method is massively used to define a whole lot of lazy functions that forward to standard algorithms and container functions [3], which suggests that this is indeed the recommended way to proceed. It should however be avoided; prefer defining a template function that constructs and applies the adapted function object. If the function object can stay a POD, that's better too.
Do you have a concrete proposal for how this could look? If there is no significant impact on (compile-time and runtime) performance from the points you mentioned above, it will stay as it is for now.
I agree.
I think it would also be valuable to add a "lazy" function (or some other name) that takes a PFO and returns its lazy version, that could be used inline in lambda expressions in a way similar to bind.
I agree and think this would be a valuable addition.
Agreed.
Ok, this has little to do in a review, but here it is anyway.
I would like to have an adapter to turn a PFO into a monomorphic function object (i.e. a function object with a result_type) that can be passed to legacy algorithms. This can be done two ways: either by giving the return type or the type of the arguments explicitly. This could also be used to define variant visitors with lambdas.
I would like it if Phoenix could detect function objects that are monomorphic and automatically propagate that monomorphism (but that's probably hard to do).
We talked about that already and I didn't forget. I think this will be a good addition.
Any concrete suggestions on what it would look like? Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com
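A rough sketch of what the return-type-fixing variant could look like, just to make the suggestion concrete (all names here are invented):

```cpp
#include <cassert>

// monomorphic<R, F>: wraps a polymorphic function object F and pins its
// return type to R, exposing the result_type typedef that legacy
// (pre-result_of) algorithms and variant visitation expect.
template <typename R, typename F>
struct monomorphic
{
    typedef R result_type;

    F f;

    template <typename A>
    R operator()(A const& a) const { return f(a); }
};

template <typename R, typename F>
monomorphic<R, F> make_monomorphic(F f) { return monomorphic<R, F>{f}; }

// an ordinary polymorphic function object
struct negate_impl
{
    template <typename T>
    T operator()(T x) const { return -x; }
};
```

The other variant mentioned in the review would instead fix the argument types explicitly and let the result type be computed.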

On 25/02/11 03:21, Joel de Guzman wrote:
:-) Mathias has a point. Let's also discuss this off-list. With Spirit, I've already begun the migration towards an object-free environment, but that's for terminals -- proto is known to slow down compilation when there are lots of terminals. I am not quite sure about function objects.
I want to point you again to my boostcon 2k10 talk. We showed a figure where the old NT2, using instances of function objects, resulted in linear compile time, while having a template function using make_expr got us constant compile time.
Here is the post I made a while back on the same issue: http://lists.boost.org/boost-users/2009/02/45591.php
The relevant data:

Naked main          : 0.05s
Naked main w/ proto : 1.50s => overhead of proto include = 1.45s

Without a call to the actual function:

Main with   1 proto term : 1.52s => overhead = 0.02s = 0.020s/term
Main with  10 proto term : 1.55s => overhead = 0.05s = 0.005s/term
Main with 100 proto term : 1.99s => overhead = 0.49s = 0.005s/term
Main with 150 proto term : 2.55s => overhead = 1.05s = 0.007s/term
Main with 200 proto term : 3.48s => overhead = 1.98s = 0.009s/term
Main with 256 proto term : 4.80s => overhead = 3.30s = 0.013s/term

Main with   1 proto func : 1.52s => overhead = 0.02s = 0.0200s/func
Main with  10 proto func : 1.53s => overhead = 0.03s = 0.0030s/func
Main with 100 proto func : 1.53s => overhead = 0.03s = 0.00030s/func
Main with 150 proto func : 1.55s => overhead = 0.05s = 0.00033s/func
Main with 200 proto func : 1.57s => overhead = 0.05s = 0.00035s/func
Main with 256 proto func : 1.61s => overhead = 0.09s = 0.00043s/func

With calls to the defined functions in sequence:
- func behaves like term without call (i.e. compilation time between 1.5s and 4.8s)
- term behaves like term without call plus a linear overhead of 0.017s/call (i.e. compilation time between 4.5s and 9.1s)

Other measures: executable size skyrockets with term even without any functions being called. 256 functions instantiated yields a 7.3kb binary vs a 1.4kb binary for the make_expr func, so an overhead of 23.6 bytes/function in the case of terminal objects.

On Fri, Feb 25, 2011 at 7:08 AM, Joel Falcou <joel.falcou@lri.fr> wrote:
On 25/02/11 03:21, Joel de Guzman wrote:
:-) Mathias has a point. Let's also discuss this off-list. With Spirit, I've already begun the migration towards an object-free environment, but that's for terminals -- proto is known to slow down compilation when there are lots of terminals. I am not quite sure about function objects.
I want to point you again to my boostcon 2k10 talk. We showed a figure where the old NT2, using instances of function objects, resulted in linear compile time, while having a template function using make_expr got us constant compile time.
Here is the post I made a while back on the same issue: http://lists.boost.org/boost-users/2009/02/45591.php <snip>
You have a point for proto terminals. A phoenix::function is _not_ a proto terminal, though. Here is a sketch of the implementation:

template <typename Func>
struct function
{
    template <typename A0, ..., typename AN>
    typename proto::result_of::make_expr<tag::function, Func, A0, ..., AN>::type
    operator()(A0 const& a0, ..., AN const& an) const
    {
        return proto::make_expr<....>(....);
    }
};

It already behaves like the proto generator functions you described. Maybe the instantiation of a phoenix::function<F> also adds significantly to compile time and binary size. These were the numbers I was interested in.

On 2/25/11 2:24 PM, Thomas Heller wrote:
On Fri, Feb 25, 2011 at 7:08 AM, Joel Falcou<joel.falcou@lri.fr> wrote:
On 25/02/11 03:21, Joel de Guzman wrote:
:-) Mathias has a point. Let's also discuss this off-list. With Spirit, I've already begun the migration towards an object-free environment, but that's for terminals -- proto is known to slow down compilation when there are lots of terminals. I am not quite sure about function objects.
I want to point you again to my boostcon 2k10 talk. We showed a figure where the old NT2, using instances of function objects, resulted in linear compile time, while having a template function using make_expr got us constant compile time.
Here is the post I made a while back on the same issue: http://lists.boost.org/boost-users/2009/02/45591.php <snip>
You have a point for proto terminals. A phoenix::function is _not_ a proto terminal though. here is a sketch of the implementation:
template <typename Func>
struct function
{
    template <typename A0, ..., typename AN>
    typename proto::result_of::make_expr<tag::function, Func, A0, ..., AN>::type
    operator()(A0 const& a0, ..., AN const& an) const
    {
        return proto::make_expr<....>(....);
    }
};
It already behaves like the proto generator functions you described. Maybe the instantiation of a phoenix::function<F> also adds significantly to compile time and binary size. These were the numbers I was interested in.
Exactly. Phoenix functions are not proto terminals. I agree with Thomas, I'd like to see the numbers first before jumping to a conclusion. Regards, -- Joel de Guzman http://www.boostpro.com http://spirit.sf.net

On 25/02/11 13:35, Joel de Guzman wrote:
It already behaves like the proto generator functions you described. Maybe the instantiation of a phoenix::function<F> also adds significantly to compile time and binary size. These were the numbers I was interested in.
Exactly. Phoenix functions are not proto terminals. I agree with Thomas, I'd like to see the numbers first before jumping to a conclusion.
The problem comes from the inevitable code instantiation of this template class when you actually use a lot of function<stuff> everywhere. See your std bindings, for example.

On Fri, Feb 25, 2011 at 1:54 PM, Joel Falcou <joel.falcou@lri.fr> wrote:
On 25/02/11 13:35, Joel de Guzman wrote:
It already behaves like the proto generator functions you described. Maybe the instantiation of a phoenix::function<F> also adds significantly to compile time and binary size. These were the numbers I was interested in.
Exactly. Phoenix functions are not proto terminals. I agree with Thomas, I'd like to see the numbers first before jumping to a conclusion.
The problem comes from the inevitable code instantiation of this template class when you actually use a lot of function<stuff> everywhere.
See your std bindings, for example.
Which should still be lighter than proto terminals; let me remind you that "stuff" is just yet another PFO with templated operator() overloads. It really shouldn't matter a lot. I can't run the tests right now ... I will when I get back home.

On 25/02/2011 13:35, Joel de Guzman wrote:
Exactly. Phoenix functions are not proto terminals. I agree with Thomas, I'd like to see the numbers first before jumping to a conclusion.
The attached example tries to demonstrate the difference using a simplistic "function" definition that merely forwards to the wrapped PFO. I've only considered unary functions for simplicity. I ran the tests with GCC 4.5 with -O3. On my platform, for 4096 functions, using global objects compiles in 7 seconds, while using free functions compiles in 550 ms. The size overhead is fairly minimal, 992 bytes for the global version and 643 for the function version. Now, if I change function to be a POD, both tests end up with an object size of 644 bytes, and the compile time of the global version gets down to 650 ms, making it almost as good as the free function case.
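For reference, the two styles under discussion boil down to something like the following stripped-down sketch (the real attachment adapts many functions and uses Phoenix machinery; the names below are invented):

```cpp
#include <cassert>

// the underlying polymorphic function object being adapted
struct plus_impl
{
    template <typename T>
    T operator()(T a, T b) const { return a + b; }
};

// Style 1: a global adapter object. If the adapter is not a POD, every such
// object needs dynamic initialization at startup, and the template is
// instantiated whether or not the function is ever used.
template <typename F>
struct function
{
    template <typename T>
    T operator()(T a, T b) const { return F()(a, b); }
};

function<plus_impl> const plus_ = {};

// Style 2: a free function template -- nothing is instantiated until the
// first call, and there is no object requiring initialization.
template <typename T>
T plus_fn(T a, T b) { return plus_impl()(a, b); }
```

Making the adapter a true POD removes the startup cost of style 1, which matches the POD measurement above.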

On 2/25/2011 10:54 PM, Mathias Gaunard wrote:
On 25/02/2011 13:35, Joel de Guzman wrote:
Exactly. Phoenix functions are not proto terminals. I agree with Thomas, I'd like to see the numbers first before jumping to a conclusion.
The attached example tries to demonstrate the difference using a simplistic "function" definition that merely forwards to the wrapped PFO. I've only considered unary functions for simplicity.
I ran the tests with GCC 4.5 with -O3.
On my platform, for 4096 functions, using global objects compiles in 7 seconds, while using free functions compiles in 550 ms.
The size overhead is fairly minimal, 992 bytes for the global version and 643 for the function version.
Now, if I change function to be a POD, both tests end up with an object size of 644 bytes, and the compile time of the global version gets down to 650 ms, making it almost as good as the free function case.
That is very good information, Mathias. Thank you very much for doing this. This is indeed very enlightening. So, either make them global functions, or make them true PODs. I'll discuss this off list with Thomas. Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

On 24/02/2011 19:07, Thomas Heller wrote:
On Thu, Feb 24, 2011 at 5:52 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
---- Phoenix v3 is missing a phoenix.hpp header file that includes all modules, which Phoenix v2 had (but in the parent directory).
Nope, it's not; there is the boost/phoenix/phoenix.hpp header, which includes all of Phoenix. IIUC it is generally frowned upon to put those headers directly in the boost directory.
It must be recent then, because I didn't have it in my not entirely up to date checkout of the sandbox.
Lazy functions -------------- Using phoenix::function to turn a PFO into a lazy function is the most basic way to extend Phoenix with new functionality, and clearly the recommended way to do so according to the docs. [1] [2] It is directly inherited from Phoenix v2, and only the result type deduction mechanism changed.
Correct. But please keep in mind: you as a user don't have to put your phoenix::function objects at namespace scope.
This approach has several problems, at least in the way it is presented in the documentation. - global objects potentially increase binary size - those objects are not PODs and therefore require runtime initialization, adding some runtime overhead at application startup - instantiation happens regardless of whether the function is used or not, which affects compilation time negatively.
Do you have numbers for that?
This method is massively used to define a whole lot of lazy functions that forward to standard algorithms and container functions [3], which suggests that this is indeed the recommended way to proceed. It should however be avoided; prefer defining a template function that constructs and applies the adapted function object. If the function object can stay a POD, that's better too.
Do you have a concrete proposal for how this could look? If there is no significant impact on (compile-time and runtime) performance from the points you mentioned above, it will stay as it is for now.
I'll let Joel Falcou do that.
It is not redundant. The thing is that proto only recognizes one tag for a terminal expression. To distinguish between all those different terminal types, we needed the custom terminal customization point.
I was thinking phoenix::ref could be a proto node, but that doesn't quite work because Proto catches terminals by value (isn't there a way to customize that though? -- I'm not familiar with that kind of thing)
Phoenix as Proto with statements --------------------------------
One thing I was hoping Phoenix v3 would be is Proto extended to support statements.
I don't understand, what do you mean by that?
Proto is a tool to write domain-specific embedded languages based on C++ expressions. It's like a compiler framework, but the AST is limited to expressions and cannot contain function creation, local variable definition, or repetition. It would be nice to have a tool like Proto that was extended to deal with more C++ language constructs. Even if the syntax to call them requires transforming those constructs into expressions with a DSEL of its own, at least there could be a canonical way to do so.
While it comes pretty for some uses, it still fails one one point: the ability to define custom languages that embed Phoenix statements.
The ability is there ...
The missing piece is handling of domains (extends are missing too, but I don't think they make sense at the statement level).
... and you don't necessarily need the proto domain feature for that.
Phoenix doesn't really help with things like phoenix::if_(1)[some_expr_in_my_own_domain]. If things are necessarily in the phoenix domain, that's not really your custom language; that's like using Proto without domains. I think it needs to be a parametric domain: phoenix parameterized on the common domain of what's inside. Being able to tell how Proto expressions within Phoenix should be evaluated on a per-domain basis seems important too. I have no idea how to evaluate a Phoenix expression containing other Proto expressions with the current design.
Documentation ------------- <http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/actor.html> doesn't talk about perfect forwarding.
It is mentioned here: http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/ht...
Yes, but the page I referenced shouldn't only list non-const reference overloads: that's confusing.
We can only *emulate* perfect forwarding. THe sentence you claim is wrong is just a reformulation of the "Perfect Forwarding Problem" (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm) problem statement.
Then the original problem statement is misleading. It is later said in the paper that they don't consider solutions that scale worse than linearly to be good enough. Solution #3, that they do list, works fine with C++03 language rules, but is discarded due to its cost in number of overloads. I don't really see where the term "emulation" comes from. Forwarding arguments is forwarding arguments, you just have to make sure you can catch all arguments without loss of information, there is no hack or emulation involved. A better phrasing would involve saying that is is not impossible, but rather requires an impractical amount of work. I wonder how much that amount of work is really a problem with preprocessed Phoenix though.

On 25/02/11 13:31, Mathias Gaunard wrote:
This method is massively used to define a whole lot of lazy functions that forward to standard algorithms and container functions [3], which suggests that this is indeed the recommended way to proceed. It should however be avoided; prefer defining a template function that constructs and applies the adapted function object, if the function object can stay a POD that's better too.
Do you have concrete proposal how this could look like? If there is no significant impact on (compile and runtime) performance of the points you mentioned above, it will stay as it is for now.
I'll let Joel Falcou do that.
basically everything possible should go through make_expr internally.

On 25/02/2011 13:31, Mathias Gaunard wrote:
Phoenix doesn't really help with things like
phoenix::if_(1)[some_expr_in_my_own_domain]
To be more explicit,

    phoenix::if_(a) [ b = c + d ]();

fails to compile if a, b, c and d are all Proto terminals of a placeholder type (i.e. a type that doesn't itself have an operator+ and is not convertible to bool), because it tries to evaluate "a" and "b = c + d" with the default transform. It would be nice if it had the same effect as

    if (evaluate<domain_of<decltype(a)>::type>(a))
    {
        evaluate<domain_of<decltype(b = c + d)>::type>(b = c + d);
    }

i.e. whenever Phoenix encounters an expression not within the Phoenix domain, I would like it to call an evaluation function specific to that domain.

On Fri, Feb 25, 2011 at 1:31 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 24/02/2011 19:07, Thomas Heller wrote:
On Thu, Feb 24, 2011 at 5:52 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
<snip>
It is not redundant. The thing is that proto only recognizes one tag for a terminal expression. To distinguish between all those different terminal types, we needed the custom terminal customization point.
I was thinking phoenix::ref could be a proto node, but that doesn't quite work because Proto catches terminals by value (isn't there a way to customize that though? -- I'm not familiar with that kind of thing)
There is a way in proto to control the capturing behavior on a per-domain basis. The Phoenix AST nodes need to be captured by value by default; otherwise there would be dangling references all over your code. The only way to have both by-value and by-reference capture is to realize the latter with a terminal wrapper. Fortunately, there already exists something for exactly that purpose: boost::reference_wrapper. Now, to distinguish between "normal" terminals and this wrapper, we developed the custom terminal registering method. As a matter of fact, it came in handy in all kinds of different situations (it is used for the null expression, and it proved very useful in the spirit port).
Phoenix as Proto with statements
--------------------------------
One thing I was hoping Phoenix v3 would be is Proto extended to support statements.
I don't understand, what do you mean by that?
Proto is a tool to write domain-specific embedded languages based on C++ expressions. It's like a compiler framework, but the AST is limited to expressions and cannot contain function creation, local variable definition, or repetition.
It would be nice to have a tool like Proto but that was extended to deal with more C++ language constructs. Even if the syntax to call them requires transforming those constructs to expressions with a DSEL of its own, at least there can be a canonic way to do so.
This already exists inside proto, there are basic language constructs and it is easily extensible (through proto transforms).
While it comes in handy for some uses, it still fails on one point: the ability to define custom languages that embed Phoenix statements.
The ability is there ...
The missing piece is handling of domains (extends are missing too, but I don't think they make sense at the statement level).
... and you don't necessarily need the proto domain feature for that.
Phoenix doesn't really help with things like
phoenix::if_(1)[some_expr_in_my_own_domain]
It is not really the job of phoenix. For things like that you need to go down to the "low-level" proto layer. See below.
If things are necessarily in the phoenix domain, that's not really your custom language. That's like using Proto without domains. I think it needs to be a parametric domain: phoenix over the common domain of what's inside.
Being able to tell how Proto expressions within Phoenix should be evaluated on a per-domain basis seems important too.
Which can be done, see below.
I have no idea how to evaluate a Phoenix expression containing other Proto expressions with the current design.
You can hook up your DSEL into phoenix by extending meta_grammar and default_actions. Maybe a small example in the documentation would be helpful to explain how it really works ...
Documentation
-------------
<http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/html/phoenix/actor.html> doesn't talk about perfect forwarding.
It is mentioned here:
http://svn.boost.org/svn/boost/sandbox/SOC/2010/phoenix3/libs/phoenix/doc/ht...
Yes, but the page I referenced shouldn't only list non-const reference overloads: that's confusing.
We can only *emulate* perfect forwarding. The sentence you claim is wrong is just a reformulation of the problem statement of the "Perfect Forwarding Problem" (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1385.htm).
Then the original problem statement is misleading. It is later said in the paper that they don't consider solutions that scale worse than linearly to be good enough. Solution #3, which they do list, works fine with C++03 language rules, but is discarded due to its cost in number of overloads.
I don't really see where the term "emulation" comes from. Forwarding arguments is forwarding arguments: you just have to make sure you can catch all arguments without loss of information; there is no hack or emulation involved.
A better phrasing would say that it is not impossible, but rather requires an impractical amount of work. I wonder how much that amount of work is really a problem with preprocessed Phoenix, though.
Preprocessed phoenix still needs to resolve all the operator overloads. The other points you mentioned will be reworked.

On 25/02/2011 14:13, Thomas Heller wrote:
It would be nice to have a tool like Proto but that was extended to deal with more C++ language constructs. Even if the syntax to call them requires transforming those constructs to expressions with a DSEL of its own, at least there can be a canonic way to do so.
This already exists inside proto, there are basic language constructs and it is easily extensible (through proto transforms).
There is no proto::if_, proto::while_ etc. The way I see it, Phoenix could provide the canonical way to represent statements in Proto-based DSELs.
You can hook up your DSEL into phoenix by extending meta_grammar and default_actions. Maybe a small example in the documentation would be helpful to explain how it really works ...
Since I don't see how that is possible with the current design, I would appreciate an example indeed. default_actions can only be specialized per node tag, not per domain. And I don't want to have to duplicate all the Proto node tags for my own DSEL like Phoenix does. I also don't see why the grammar should be affected. It's purely an evaluation thing, the Phoenix language itself should not be affected.
Preprocessed phoenix still needs to resolve all the operator overloads. The other points you mentioned will be reworked.
AFAIK, overload resolution is constant-time on all compilers. The compile-time slowdown should only come from parsing a larger amount of code.

On 2/25/2011 9:38 PM, Mathias Gaunard wrote:
On 25/02/2011 14:13, Thomas Heller wrote:
It would be nice to have a tool like Proto but that was extended to deal with more C++ language constructs. Even if the syntax to call them requires transforming those constructs to expressions with a DSEL of its own, at least there can be a canonic way to do so.
This already exists inside proto, there are basic language constructs and it is easily extensible (through proto transforms).
There is no proto::if_, proto::while_ etc.
The way I see it, Phoenix could provide the canonical way to represent statements in Proto-based DSELs.
This is very interesting! I never envisioned this when I first wrote Phoenix. I'd love to see more of this.

Regards,
--
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com

On 2/25/2011 9:38 PM, Mathias Gaunard wrote:
On 25/02/2011 14:13, Thomas Heller wrote:
Preprocessed phoenix still needs to resolve all the operator overloads. The other points you mentioned will be reworked.
AFAIK, overload resolution is constant-time on all compilers. The compile-time slowdown should only come from parsing a larger amount of code.
I'd love to know if that is a fact. I often wondered about this myself.

Regards,
--
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com

On Saturday, February 26, 2011 03:02:22 AM Joel de Guzman wrote:
On 2/25/2011 9:38 PM, Mathias Gaunard wrote:
On 25/02/2011 14:13, Thomas Heller wrote:
Preprocessed phoenix still needs to resolve all the operator overloads. The other points you mentioned will be reworked.
AFAIK, overload resolution is constant-time on all compilers. The compile-time slowdown should only come from parsing a larger amount of code.
I'd love to know if that is a fact. I often wondered about this myself.
This is correct. Overload resolution isn't that bad. I did a quick test ... with "perfect forwarding" for 8 arguments, which didn't show a significant increase in compile time.
Regards,

On 26.02.2011, at 03:02, Joel de Guzman wrote:
On 2/25/2011 9:38 PM, Mathias Gaunard wrote:
On 25/02/2011 14:13, Thomas Heller wrote:
Preprocessed phoenix still needs to resolve all the operator overloads. The other points you mentioned will be reworked.
AFAIK, overload resolution is constant-time on all compilers. The compile-time slowdown should only come from parsing a larger amount of code.
I'd love to know if that is a fact. I often wondered about this myself.
Overload resolution is O(N + V*A), where N is the number of overloads, V is the number of viable overloads, and A is the number of arguments passed. Note that detecting non-viability of overloads with the wrong number of parameters is very cheap.

Sebastian

On 2/26/11 8:55 PM, Sebastian Redl wrote:
On 26.02.2011, at 03:02, Joel de Guzman wrote:
On 2/25/2011 9:38 PM, Mathias Gaunard wrote:
On 25/02/2011 14:13, Thomas Heller wrote:
Preprocessed phoenix still needs to resolve all the operator overloads. The other points you mentioned will be reworked.
AFAIK, overload resolution is constant-time on all compilers. The compile-time slowdown should only come from parsing a larger amount of code.
I'd love to know if that is a fact. I often wondered about this myself.
Overload resolution is O(N + V*A), where N is the number of overloads, V is the number of viable overloads, and A is the number of arguments passed. Note that detecting non-viability of overloads with the wrong number of parameters is very cheap.
Thanks, Sebastian. That makes sense.

Regards,
--
Joel de Guzman
http://www.boostpro.com
http://spirit.sf.net
participants (5)
- Joel de Guzman
- Joel Falcou
- Mathias Gaunard
- Sebastian Redl
- Thomas Heller