Yap's formal review is starting now!

Dear Boost community,

The formal review of Zach Laine's Yap library starts Monday, February 5th and ends on Wednesday, February 14th.

Yap is a C++14-and-later library to build expression templates. Yap uses new features introduced in C++14/17 to make the library simpler to use and compile faster than existing solutions, such as the well-known Boost.Proto library.

The documentation is available here: https://tzlaine.github.io/yap/doc/html/index.html

The GitHub repository is available here: https://github.com/tzlaine/yap

We encourage your participation in this review. At a minimum, please state:

- Whether you believe the library should be accepted into Boost
- Your name
- Your knowledge of the problem domain

You are strongly encouraged to also provide additional information:

- What is your evaluation of the library's:
  * Design
  * Implementation
  * Documentation
  * Tests
  * Usefulness
- Did you attempt to use the library? If so:
  * Which compiler(s)?
  * What was the experience? Any problems?
- How much effort did you put into your evaluation of the library?

All issues discussed during this review will be tracked on the library's GitHub issue tracker at https://github.com/tzlaine/yap/issues.

Regards,
Louis Dionne

P.S.: I've been having trouble posting to this list. If this ends up being a duplicate post, let's make this one the official one.

On Feb 5, 2018, at 8:53 AM, Louis Dionne via Boost <boost@lists.boost.org> wrote:
The formal review of Zach Laine's Yap library starts Monday, February 5th and ends on Wednesday, February 14th.
I have not heard any discussion of Yap yet, so I'll jump in and start it off. My review will come later, but for the moment I want to raise an issue for discussion.

First, a bit of context. I have been using Yap in production for the last year (thanks Zach for your help along the way). In my application, I am dealing with arbitrary, user-coded expression trees. Some of the terminals can be function objects that in turn evaluate other user-coded expression trees. Ideally, evaluation of these expressions would work for any types wrapped in the appropriate expression classes. Indeed, this is the case, which is great.

However, I also have a common use case that requires changing how the evaluation works depending on context. For example, it would be useful to write something analogous to evaluate(expr,context) with a stateful context object that would influence how certain terminals are evaluated.

The official stance on this [1] is that one should instead use transform(expr,xform), where xform could play the role of context above because transforms can "do anything", including evaluating the expression using contextual information contained within the xform object.

The problem I see with using transform() is that the entire recursion through the expression tree that is provided by the implementation of evaluate() must be duplicated in some way by the user in order to implement the necessary overloads in xform. This is because "if xform(N) is a well-formed C++ expression, that xform(N) is used, and no nodes under N in expr are visited" [2]. Therefore, the user has to provide whatever recursion is needed to mimic what evaluate() would otherwise do.

This situation raises several questions. It is my hope that this post will prompt a discussion about the appropriate design considerations at work.

- Is context-dependent evaluation a use case that is valuable for the library to support? Based on my experience, the answer is clearly yes, but perhaps others wish to weigh in.
- Is it appropriate for a library to require users to reimplement something as intricate as this recursion in order to support a use case like that?
- Is it appropriate for Yap to have something like evaluate_with_context(eval,context,…) that would implement a recursion similar to evaluate() but allow a context argument to control the evaluation at appropriate customization points? Note, the variadic part is for placeholders, which are supported by evaluate() and not really part of the issue. Again, from my experience it seems that implementing this once correctly in the library would save much pain on the users' part.

I hope this will stimulate some discussion and look forward to seeing where it goes.

Cheers, Brook

[1] "Often when you create an expression, you will want to evaluate it in different contexts, changing its evaluation -- or even entire meaning -- in each context. evaluate() Is wrong for this task, since it only takes values for substitution into placeholders. In these situations, you should instead use an explicit transform that evaluates your expression." https://tzlaine.github.io/yap/doc/html/boost_yap__proposed_/manual/tutorial/...

[2] https://tzlaine.github.io/yap/doc/html/boost_yap__proposed_/manual/tutorial/...

AMDG

On 02/07/2018 01:50 PM, Brook Milligan via Boost wrote:
<snip> - Is it appropriate for a library to require users to reimplement something as intricate as this recursion in order to support a use case like that?
- Is it appropriate for Yap to have something like evaluate_with_context(eval,context,…) that would implement a recursion similar to evaluate() but allow a context argument to control the evaluation at appropriate customization points. Note, the variadic part is for placeholders, which are supported by evaluate(), and not really part of the issue. Again, from my experience it seems that implementing this once correctly in the library would save much pain on the users’ part.
I haven't actually started looking at YAP yet, but I think the correct way to handle this is for YAP to expose a transform that does the same thing as evaluate, which can be selectively overridden by a derived transform.

In Christ,
Steven Watanabe

On Wed, Feb 7, 2018 at 3:50 PM, Brook Milligan via Boost < boost@lists.boost.org> wrote:
On Feb 5, 2018, at 8:53 AM, Louis Dionne via Boost < boost@lists.boost.org> wrote:
The formal review of Zach Laine's Yap library starts Monday, February 5th and ends on Wednesday, February 14th.
I have not heard any discussion of Yap yet, so I’ll jump in and start it off.
My review will come later, but for the moment I want to raise an issue for discussion.
First, a bit of context. I have been using Yap in production for the last year (thanks Zach for your help along the way). In my application, I am dealing with arbitrary, user-coded expression trees. Some of the terminals can be function objects that in turn evaluate other user-coded expression trees. Ideally, evaluation of these expressions would work for any types wrapped in the appropriate expression classes. Indeed, this is the case, which is great.
However, I also have a common use case that requires changing how the evaluation works depending on context. For example, it would be useful to write something analogous to evaluate(expr,context) with a stateful context object that would influence how certain terminals are evaluated.
The official stance on this [1] is that one should instead use transform(expr,xform), where xform could play the role of context above because transforms can “do anything”, including evaluate the expression using contextual information contained within the xform object.
The problem I see with using transform() is that the entire recursion through the expression tree that is provided by the implementation of evaluate() must be duplicated in some way by the user in order to implement the necessary overloads in xform. This is because “if xform(N) is a well-formed C++ expression, that xform(N) is used, and no nodes under N in expr are visited” [2]. Therefore, the user has to provide whatever recursion is needed to mimic what evaluate() would otherwise do.
This situation raises several questions. It is my hope that this post will raise a discussion about the appropriate design considerations at work.
- Is context-dependent evaluation a use-case that is valuable to support by the library? Based on my experience, the answer is clearly yes, but perhaps others wish to weigh in.
- Is it appropriate for a library to require users to reimplement something as intricate as this recursion in order to support a use case like that?
- Is it appropriate for Yap to have something like evaluate_with_context(eval,context,…) that would implement a recursion similar to evaluate() but allow a context argument to control the evaluation at appropriate customization points. Note, the variadic part is for placeholders, which are supported by evaluate(), and not really part of the issue. Again, from my experience it seems that implementing this once correctly in the library would save much pain on the users’ part.
I hope this will stimulate some discussion and look forward to seeing where it goes.
Ok, some things to note:

evaluate() currently has two modes, roughly speaking. The first mode is to evaluate an expression by doing whatever the built-in operators and existing function calls in the expression would do. This can be extremely useful in many situations when you write code using Yap, especially transforms where you want to at least partially default-evaluate some subexpression. The second is to evaluate the expression using custom code that the user has specified using customization points; there are customization points for every overloadable operator, among others.

This second mode is essentially a way of doing implicit transforms, and is really only there for Proto parity. I've never liked it, and am on the fence about cutting the customization points entirely. The implicit nature of the customization is at the heart of my problem with this feature. A good abstraction is used explicitly, but hides its implementation details. These customization points do the implementation-hiding bit just fine, but you can't even tell you're using them when looking at a particular line of code -- does yap::evaluate(a + b) yield a sum, or launch a missile? Who's to say? I have to go code spelunking to find out. This is at odds with good code practice emphasizing local reasoning. If I had not wanted Proto feature parity, I *never* would have implemented a library like this.

So, were I to add a new evaluate_with_context(), it would mean that we would perpetuate this customization-point mis-feature. This I do not like. Did I mention that the customization points are implemented via ADL trickery, requiring new types and/or namespaces to get slightly different behaviors? I also don't like this aspect.

So, I have been very resistant to adding another new evaluation mode.
Instead, I think something like this should be usable in most cases:

// Let's assume the existence of a my_make_term() function that takes
// a Yap terminal and a Context &, and returns a new Yap terminal.
template <typename Context>
struct transform_terminals_with_context
{
    // Base case.  Note that I'm ignoring placeholders entirely for this
    // example (they're easy to special-case if necessary).
    template <typename T>
    auto operator() (yap::terminal_tag, T && t)
    { return my_make_term(std::forward<T>(t), context_); }

    // Recursive case: Match any binary expression.
    template <typename Tag, typename LExpr, typename RExpr>
    auto operator() (Tag, LExpr const & left, RExpr const & right)
    {
        return yap::make_expression<yap::detail::kind_for<Tag>>(
            boost::yap::transform(left, *this),
            boost::yap::transform(right, *this));
    }

    // Recursive case: Match any unary expression.
    template <typename Tag, typename Expr>
    auto operator() (Tag, Expr const & expr)
    {
        return yap::make_expression<yap::detail::kind_for<Tag>>(
            boost::yap::transform(expr, *this));
    }

    // Ternary and call are added as necessary.

    Context & context_;
};

This transforms all your terminals to new terminals, using your custom code that applies your terminal + context operation. You can then just eval() the transformed expression using the default evaluate(), transform() it again with a different transform, etc. I had to make up the constexpr function kind_for<>() that maps expression-kind tag types to yap::expr_kind values (which I'll probably add now that I see it is needed *TODO*), but the rest is just typical Yapery.

Now, this has the downside that if you have a very large number of terminals, you may have some expensive copies going on, because you are copying the entire tree. This implies to me that the most important issue is whether the evaluate-as-you-transform-because-the-tree-is-huge use case is of primary or secondary concern. My expectation is that it is of secondary concern.
To expect otherwise is probably to optimize prematurely -- even if this issue is important, how often do users see real perf impact? I don't know the answer to that yet, but if it is a common enough perf problem, I'm quite happy to come up with a solution. If not, making your own transform that evaluates in place actually sounds like the right answer to me.

In most cases, users also don't care about every possible expression kind -- they are designing a mini-language that uses a subset. Using a transform like the one above, evaluate(), or the proposed evaluate_with_context() allows subexpressions that are not in the purview of your mini-language. A custom transform has better type safety.

Zach

On Feb 8, 2018, at 2:15 PM, Zach Laine via Boost <boost@lists.boost.org> wrote:
Ok, some things to note:
evaluate() currently has two modes, speaking roughly. The first mode is to evaluate an expression by doing whatever the built-in operators and existing function calls in the expression would do. This can be extremely useful in many situations when you write code using Yap, especially transforms where you want to at least partially default-evaluate some subexpression. The second is to evaluate the expression using custom code that the user has specified using customization points; there are customization points for every overloadable operator, among others.
This second mode is essentially a way of doing implicit transforms, and is really only there for Proto parity. I've never liked it, and am on the fence about cutting the customization points entirely. The implicit nature of the customization is at the heart of my problem with this feature. A good abstraction is used explicitly, but hides its implementation details. These customization points do the implementation hiding bit just fine, but you can't even tell you're using them when looking at a particular line of code -- does yap::evaluate(a + b) yield a sum, or launch a missile? Who's to say? I have to go code spelunking to find out. This is at odds with good code practice emphasizing local reasoning. If I had not wanted Proto feature parity, I *never* would have implemented a library like this.
Thank you, Zach. This is the first explanation that I have seen that clearly lays out your design philosophy, although even this is a bit implicit. To be more explicit, let me restate what I think you are saying.

You feel that expression templates should essentially provide lazy evaluation of expressions with the same semantics they would otherwise have; the semantics of evaluating an expression should not be changeable in the process of the evaluation. Changing semantics should only result from an explicit transformation of the expression into some new form corresponding to the new semantics.

Coming from a Proto world, this is quite a different view, and it should be clarified in the documentation. If this is the world view, then it seems that implicit transforms should be removed. Alternatively, they should be retained but with _much_ clearer documentation regarding that use case being for "Proto compatibility" but not really "approved".

For my own ongoing use of Yap, your comments above have been really helpful to clarify what you consider to be best practices. Having the idiom I described in mind (please clarify if I'm getting this wrong) is very helpful for rethinking my code base.

However, this points to a concern I have long had, which is that the documentation does not lay out philosophies, guidelines, best practices, etc. Even though I have worked with Yap for a year and with Proto before (maybe that poisoned me), I have apparently been thinking about this wrongly. The examples are fine for what they do, but they are not sufficient to explain _why_ a certain solution is appropriate. Thus, I strongly urge you to revisit the documentation with an eye toward addressing this gap.
This transforms all your terminals to new terminals, using your custom code that applies your terminal + context operation. You can then just eval() the transformed expression using the default evaluate(), transform() it again with a different transform, etc. I had to make up the constexpr function kind_for<>() that maps expression-kind tag types to yap::expr_kind values (which I'll probably add now that I see it is needed *TODO*), but the rest is just typical Yapery.
Thanks for this example. It clarifies a lot. And yes, please support this fully.
Now, this has the downside that if you have a very large number of terminals, you may have some expensive copies going on, because you are copying the entire tree. This implies to me that the most important issue is whether the evaluate-as-you-transform-because-the-tree-is-huge use case is of primary or secondary concern. My expectation is that it is of secondary concern. To expect otherwise is probably to optimize prematurely -- even if this issue is important, how often do users see real perf impact?
I, too, was concerned about copying, but now that I better understand the "proper" use case, I'll offer this. At least in my experience, I cannot copy some of the terminals, but I can of course copy the values that the terminals would evaluate to. It seems that in general, those values should always be cheaply copyable; otherwise the whole idea of evaluation of an expression tree is fraught. Thus, I feel that using a transform that converts terminals into cheap-to-copy values still embedded within an equivalent expression tree would not incur a major cost.

If I am getting this right, then I agree that this issue is unlikely to be a problem and the following principles make perfect sense:

- the semantics of expression tree evaluation are the same as native evaluation
- expression trees are to encode lazy evaluation without the option of new (or implicit) semantics
- a potentially common use case for complex terminals is to transform them into appropriate values while copying the expression, followed by evaluation of the expression newly populated with values

If I am getting this right, then this type of information is what I feel is missing from the documentation. Including it would go a long way toward making use of Yap clearer.
In most cases, users also don't care about every possible expression kind -- they are designing a mini-language that uses a subset.
Please do not impose this view on the design of the library. It is absolutely not the case for my use case, as I need to support virtually all the operators. Cheers, Brook

On Feb 8, 2018, at 2:15 PM, Zach Laine via Boost <boost@lists.boost.org> wrote:
So, I have been very resistant to adding another new evaluation mode. Instead, I think something like this should be usable in most cases:
// Let's assume the existence of a my_make_term() function that takes
// a Yap terminal and a Context &, and returns a new Yap terminal.
template <typename Context>
struct transform_terminals_with_context
{
    // Base case.  Note that I'm ignoring placeholders entirely for this
    // example (they're easy to special-case if necessary).
    template <typename T>
    auto operator() (yap::terminal_tag, T && t)
    { return my_make_term(std::forward<T>(t), context_); }
    // Recursive case: Match any binary expression.
    template <typename Tag, typename LExpr, typename RExpr>
    auto operator() (Tag, LExpr const & left, RExpr const & right)
    {
        return yap::make_expression<yap::detail::kind_for<Tag>>(
            boost::yap::transform(left, *this),
            boost::yap::transform(right, *this));
    }
    // Recursive case: Match any unary expression.
    template <typename Tag, typename Expr>
    auto operator() (Tag, Expr const & expr)
    {
        return yap::make_expression<yap::detail::kind_for<Tag>>(
            boost::yap::transform(expr, *this));
    }
// Ternary and call are added as necessary.
    Context & context_;
};
So just to be sure, I made up what I think is a complete example to illustrate your main Yap idiom. The features I wanted to capture are:

- the evaluate(transform(expr,xform)) idiom
- move-only terminals
- terminals with no particular expression semantics that are transformed into values that have expression semantics
- a customization point in the transform to support arbitrary user-defined types being added to the transform
- ignore context-dependent transforms for now; given this pattern those are easy to support since a transform object is available everywhere a terminal is transformed into a value

In what follows, I am thinking of the "user" namespace as user-defined code and the "N" namespace as library code.

Have I missed anything of note?

Please also note that yap::detail::kind_for<Tag> in your code above should not be in the "detail" namespace, as that is something that library writers building things upon Yap will be using; it is therefore not an implementation detail. I also think you didn't quite get the yap::make_expression<>() call correct in the above code.
Cheers, Brook

#include <boost/yap/algorithm.hpp>
#include <cassert>

namespace user {

template < typename T = void >
class UDT {};

template < typename T, typename Transform >
auto transform_terminal (UDT<T>, Transform)
{ return 1; }

} // namespace user

namespace N {

template <boost::yap::expr_kind Kind, typename Tuple>
struct minimal_expr
{
    static const boost::yap::expr_kind kind = Kind;
    Tuple elements;
    BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(plus, ::N::minimal_expr)
};

template < typename T = double >
class number
{
public:
    number () = default;
    explicit number (T value) : value_(value) {}
    number (number const&) = delete;
    number (number&&) = default;
    auto value () const { return value_; }
private:
    T value_ = 0;
};

template < typename Tag > class Kind;

template <>
struct Kind< boost::yap::plus_tag >
{ static constexpr boost::yap::expr_kind value = boost::yap::expr_kind::plus; };

struct xform
{
    // recursion: any binary operator
    template < typename Tag, typename Left, typename Right >
    decltype(auto) operator () (Tag, Left&& left, Right&& right) const
    {
        return boost::yap::make_expression<minimal_expr,Kind<Tag>::value>
            (boost::yap::transform(std::forward<Left>(left),*this),
             boost::yap::transform(std::forward<Right>(right),*this));
    }

    // base case: any terminal (requires transform_terminal() overload)
    template < typename Terminal >
    decltype(auto) operator () (boost::yap::terminal_tag, Terminal const& terminal) const
    { return transform_terminal(terminal,*this); }

    // base case: number<T>: this can be in the transform (as here) or
    // a transform_terminal overload (as for user::UDT)
    template < typename T >
    auto operator () (boost::yap::terminal_tag, number<T> const& n) const
    { return boost::yap::make_terminal<minimal_expr>(n.value()); }

    // ... more overloads for other terminals, unary operators, etc.
};

template < typename T >
auto lit (T&& t)
{ return boost::yap::make_terminal<minimal_expr>(std::forward<T>(t)); }

} // namespace N

int main ()
{
    using boost::yap::evaluate;
    using boost::yap::transform;
    using N::lit;

    auto plus_expr = lit(N::number<>{41}) + user::UDT<>{};
    auto plus_result = evaluate(transform(plus_expr,N::xform{}));
    assert(plus_result == 42);

    return 0;
}

On Mon, Feb 12, 2018 at 11:28 AM, Brook Milligan <brook@nmsu.edu> wrote:
On Feb 8, 2018, at 2:15 PM, Zach Laine via Boost <boost@lists.boost.org> wrote:
So, I have been very resistant to adding another new evaluation mode. Instead, I think something like this should be usable in most cases:
// Let's assume the existence of a my_make_term() function that takes
// a Yap terminal and a Context &, and returns a new Yap terminal.
template <typename Context>
struct transform_terminals_with_context
{
    // Base case.  Note that I'm ignoring placeholders entirely for this
    // example (they're easy to special-case if necessary).
    template <typename T>
    auto operator() (yap::terminal_tag, T && t)
    { return my_make_term(std::forward<T>(t), context_); }
    // Recursive case: Match any binary expression.
    template <typename Tag, typename LExpr, typename RExpr>
    auto operator() (Tag, LExpr const & left, RExpr const & right)
    {
        return yap::make_expression<yap::detail::kind_for<Tag>>(
            boost::yap::transform(left, *this),
            boost::yap::transform(right, *this));
    }
    // Recursive case: Match any unary expression.
    template <typename Tag, typename Expr>
    auto operator() (Tag, Expr const & expr)
    {
        return yap::make_expression<yap::detail::kind_for<Tag>>(
            boost::yap::transform(expr, *this));
    }
// Ternary and call are added as necessary.
    Context & context_;
};
So just to be sure, I made up what I think is a complete example to illustrate your main Yap idiom. The features I wanted to capture are:
- the evaluate(transform(expr,xform)) idiom
- move-only terminals
I'm not sure what you mean by this, because the terminal type in your example is copyable. Do you mean showing how to force copies with moves, or something else?

- terminals with no particular expression semantics that are transformed into values that have expression semantics
- a customization point in the transform to support arbitrary user-defined types being added to the transform
- ignore context-dependent transforms for now; given this pattern those are easy to support since a transform object is available everywhere a terminal is transformed into a value
In what follows, I am thinking of the "user" namespace as user-defined code and the "N" namespace as library code.
Have I missed anything of note?
Yes, that covers the main moving parts as far as I can see.
Please also note that yap::detail::kind_for<Tag> in your code above should not be in the "detail" namespace as that is something that library writers building things upon Yap will be using and therefore it is not an implementation detail. I also think you didn't quite get the yap::make_expression<>() call correct in the above code.
Agreed. Once added, kind_for<>() would go in yap::. Zach

On Feb 12, 2018, at 6:02 PM, Zach Laine <whatwasthataddress@gmail.com> wrote:
- move-only terminals
I'm not sure what you mean by this, because the terminal type in your example is copyable. Do you mean showing how to force copies with moves, or something else?
How is this copyable?

    number (number const&) = delete;
Have I missed anything of note?
Yes, that covers the main moving parts as far as I can see.
Good. Cheers, Brook

On Mon, Feb 12, 2018 at 11:38 PM, Brook Milligan <brook@nmsu.edu> wrote:
On Feb 12, 2018, at 6:02 PM, Zach Laine <whatwasthataddress@gmail.com> wrote:
- move-only terminals
I'm not sure what you mean by this, because the terminal type in your example is copyable. Do you mean showing how to force copies with moves, or something else?
How is this copyable?
Ah! I totally missed the deleted copy operations the first time. Thanks. Zach

On 2/5/2018 9:53 AM, Louis Dionne via Boost wrote:
Dear Boost community,
The formal review of Zach Laine's Yap library starts Monday, February 5th and ends on Wednesday, February 14th.
Yap is a C++14-and-later library to build expression templates. Yap uses new features introduced in C++14/17 to make the library simpler to use and compile faster than existing solutions, such as the well known Boost.Proto library. The documentation is available here:
Hello Everyone,

My question concerns compiler support rather than being a review. On https://tzlaine.github.io/yap/doc/html/boost_yap__proposed_/compiler_support..., it states that MSVC does not support YAP. With the ongoing improvements to support C++17 in the Visual Studio 2017 branch, have the weekly to bi-weekly updates positively affected using cl.exe 19.12.25834 and newer (e.g. Visual Studio 2017, 15.5.4)?

--Robert
The GitHub repository is available here:
https://github.com/tzlaine/yap
We encourage your participation in this review. At a minimum, please state:
- Whether you believe the library should be accepted into Boost
- Your name
- Your knowledge of the problem domain
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's:
  * Design
  * Implementation
  * Documentation
  * Tests
  * Usefulness
- Did you attempt to use the library? If so:
  * Which compiler(s)?
  * What was the experience? Any problems?
- How much effort did you put into your evaluation of the review?
All issues discussed during this review will be tracked on the library's GitHub issue tracker at https://github.com/tzlaine/yap/issues.
Regards, Louis Dionne
P.S.: I've been having trouble posting to this list. If this ends up being a duplicate post, let's make this one the official one.

On Wed, Feb 7, 2018 at 6:33 PM, Robert via Boost <boost@lists.boost.org> wrote:
On 2/5/2018 9:53 AM, Louis Dionne via Boost wrote:
Dear Boost community,
The formal review of Zach Laine's Yap library starts Monday, February 5th and ends on Wednesday, February 14th.
Yap is a C++14-and-later library to build expression templates. Yap uses new features introduced in C++14/17 to make the library simpler to use and compile faster than existing solutions, such as the well known Boost.Proto library. The documentation is available here:
Hello Everyone,
My question is on the compiler support instead of a review. On https://tzlaine.github.io/yap/doc/html/boost_yap__proposed_/compiler_support.html, it states that MSVC does not support YAP. With the ongoing improvements to support C++17 in the Visual Studio 2017 branch, have the weekly to bi-weekly updates positively affected using cl.exe 19.12.25834 and newer (e.g. Visual Studio 2017, 15.5.4)?
The compiler limitations for Yap are exactly those for Hana. Yap uses Hana extensively, and MSVC (last I heard) cannot handle Hana. I've also heard from Louis that the MSVC team is making great strides there. I don't have terribly up-to-date info I'm afraid. Zach

AMDG

I have not included the doxygen reference in this as I'll get to that along with the code.

General:
- missing .gitattributes

introduction.html:
- "...expression syntax that ... optimizes your expressions." The syntax doesn't optimize. It's the Eigen implementation that optimizes.
- Please use the correct syntax for bulleted lists. It's '*' not '-' in quickbook. http://www.boost.org/doc/html/quickbook/syntax/block.html#quickbook.syntax.b...

header_organization.html:
- I think it's better to put the actual name of the header as the primary text, since that's what I actually need to know to #include it.

tutorial/expressions.html:
- "...any of the functions in Boost.YAP that takes" s/takes/take/
- "And if we evaluate it using..." And don't begin a sentence with a conjunction.
- "An expression containing an expression<> subexpression and subexpressions instantiated from N other ExpressionTemplates can be passed as an argument to any of the Expression-accepting Boost.YAP functions." This sentence is hard to read because it uses "expression" too many times in too many different ways.
- "there are lots of examples of how to do this in the Examples section" A link on "Examples" would be nice.

tutorial/kinds_of_exceptions.html:
- "kinds of expression" s/expression/expressions/
- "represents the leaf-node in an expression tree" I think "a leaf-node" would be more correct, as an expression tree may have more than one leaf.
- "An if_else expression is analogous to the C++ ternary operator (?:), but without the common-type conversion logic." So does this mean that such conversions are represented explicitly, or are they simply not allowed?

tutorial/operators.html:
- BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(plus, ::lazy_vector_expr) I feel like it should be possible to avoid needing to specify the ExpressionTemplate. It should be possible to deduce the return type from `this`.
Also, for binary operators, I'd really prefer to have a single macro that defines both expr + x and x + expr, as defining both is usually what we want for symmetric operators. tutorial/explicit_transformations.html: - "result is created by visiting each node in expr, in top-down, breadth-first order" Is it really breadth-first? I would find that quite surprising as normal recursion creates a depth-first traversal. - "In other words, if returns any non-Boost.YAP argument..." s/if/it/ - "If situations when you want to..." s/If/In/ - "--" An mdash is \u2014 (or you can use UTF-8 for the qbk source, I think). - "evaluate() Is wrong for this task" s/Is/is/ tutorial/evaluating_expressions.html: - I don't think that BOOST_YAP_CONVERSION_OPERATOR_TEMPLATE is a good idea. Users can already define their own expression template types if the default behavior isn't good enough and your documentation encourages this anyway. A macro like this is just asking for accidental incompatibility. tutorial/operator_macros.html: - What about BOOST_YAP_USER_ANY_UDT_BINARY_OPERATOR? tutorial/how_expression_operands_are_treated.html: - "This implies that an rvalue remain treatable as a..." s/remain/remains/. Also this sentence is somewhat awkward. - T & - > PARTIAL_DECAY(T) & This can't possibly be right. If you decay a reference to a function or array into a pointer, then you need to treat the pointer as an rvalue. tutorial/customization_points.html: - what do you need evaluate_as for? I'm having a hard time thinking of uses that wouldn't be better handled as an explicit transform (Or perhaps just wrapping the expression in an implicit_cast_expr). The same goes for transform_expression. Having more knobs and dials just makes the API confusing. Also it looks to me like transform_expression will cause any placeholders to get dropped on the floor, as transform_expression doesn't take the extra arguments that would be used to replace placeholders. 
- transform_expression is the odd man out in terms of naming. I think it should be eval_something. You're already using the term transform as something else. tutorial/transform_matching.html: - The negate example: struct xform { // This transform negates terminals. auto operator() (boost::yap::terminal_tag, double x) { return -x; } // This transform removes negations. auto operator() (boost::yap::negate_tag, double x) { return x; } } seems somewhat confused. The behavior I would expect from this transform is that it can match expressions of the form `d` or `-d` and will return -d or d respectively. This implies that the correct behavior is not to apply the terminal_tag transform before accessing negate_tag. The later statement: // Only applies the negate_tag transform; prints -9. makes no sense as the negate_tag transform alone will give 9. - "The TagTransform is preferred, except in the case of a call expression, in which case the ExpressionTransform is preferred" Why is call treated specially? From an outsider's POV it just seems weird. At the very least it deserves an explanation in the rationale. - "The above only applies when you have a ExpressionTransform" s/a/an/ examples/hello_world.html: - I have no idea what you're talking about regarding eager evaluation in proto. The corresponding Proto example also uses an explicit evaluate. examples/hello_world_redux.html: - "That's better!" What's better than what? examples/calc1.html: - "Here we can first see how much C++14-and-later language features help the end user -- the Proto version is much, much longer." I disagree with this assertion. The real reason that this is shorter than the Proto version is that you've baked placeholder substitution into evaluate, which I think is a mistake--are you building a lambda library or are you building a generic ET library? examples/calc2.html: - This really misses the point of proto's calc2 example. 
An equivalent example with YAP would be to define a new ExpressionTemplate with an overloaded call operator. examples/calc3.html: - The way you're using get_arity is rather wacky as you know the arity statically anyway. The original Proto example makes more sense. examples/lazy_vector.html: - "This move is something of a hack." I'm not sure that I would consider it a hack. The general idea is that in evaluate(transform(expr)), any values held by expr should get moved into the result of transform so that they can be passed to evaluate. - "We're able to write this more simply than the equivalent Proto code; since YAP is always lazy, we can just use the expressions below directly." Actually the same code works for proto as well. I suspect that BOOST_PROTO_AUTO was just used for symmetry with expr1. examples/vector.html: - return boost::yap::make_terminal(std::move(vec[n])); Move is probably wrong as you're working with a reference to begin with. (Of course, it doesn't really matter for double, but imagine that you have a vector<unique_ptr> instead.) examples/mixed.html: - I think you'd be better off making sin_tag a regular function object like in the proto example. examples/map_list_of: - map.emplace( Key{std::forward<Key2 &&>(key)}, Value{std::forward<Value2 &&>(value)} ); Why create extra temporaries here? - return transform.map; This doesn't allow NRVO. examples/future_group.html: - "Assertion that left and right are compatible types" I don't think that is_same is the correct test. What you want to guarantee is that the return types are the same on both sides. manual/object_code.html: - "The object code generated by modern template metaprogramming can be quite good, in large part due to the drastic reduction in the amount of metaprogramming necessary to determine the result type" I'm not sure I see the connection here. Type-based metaprogramming generates no object code to begin with. In Christ, Steven Watanabe

On Sat, Feb 10, 2018 at 6:01 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
[snip]

All good notes; done. If you want to track the changes, I've created a boost_review branch where all the during-review changes will be done.

tutorial/expressions.html:
- "...any of the functions in Boost.YAP that takes" s/takes/take/
That's not wrong; "takes" needs to agree with "any", not "functions".
- "And if we evaluate it using..." And don't begin a sentence with a conjunction.
:) Done.
- "An expression containing an expression<> subexpression and subexpressions instantiated from N other ExpressionTemplates can be passed as an argument to any of the Expression-accepting Boost.YAP functions." This sentence is hard to read because it uses "expression" too many times in too many different ways.
I've re-written it to hopefully make it clearer. Please let me know if it's still too hard to parse.

[snip]

Done.

tutorial/operators.html:
- BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(plus, ::lazy_vector_expr) I feel like it should be possible to avoid needing to specify the ExpressionTemplate. It should be possible to deduce the return type from `this`. Also, for binary operators, I'd really prefer to have a single macro that defines both expr + x and x + expr, as defining both is usually what we want for symmetric operators.
Agreed. There is a bug for this on GitHub already; I plan to add this after the review.
tutorial/explicit_transformations.html:
- "result is created by visiting each node in expr, in top-down, breadth-first order" Is it really breadth-first? I would find that quite surprising as normal recursion creates a depth-first traversal.
No, you're right. That's wrong on its face, but more broadly it's badly described. What I was trying to describe was the short-circuiting nature of the matching. If a visited subexpression is handled by the transform, the transform does not recurse from that point, unless you write that recursion into the transform. But that's not depth-first, of course.

[snip]

Done.

- "--"
An mdash is \u2014 (or you can use UTF-8 for the qbk source, I think).
Thanks, I didn't know I could use Unicode characters in the qbk source in any form. "\u2014" works in the qbk source, btw.
- "evaluate() Is wrong for this task" s/Is/is/
Done.
tutorial/evaluating_expressions.html:
- I don't think that BOOST_YAP_CONVERSION_OPERATOR_TEMPLATE is a good idea. Users can already define their own expression template types if the default behavior isn't good enough and your documentation encourages this anyway. A macro like this is just asking for accidental incompatibility.
I agree. This and the implicit transforms-during-eval are on the chopping block as far as I'm concerned. The main reason they have not already been cut is that I was waiting to see if reviewers really needed them to stay.
tutorial/operator_macros.html:
- What about BOOST_YAP_USER_ANY_UDT_BINARY_OPERATOR?
Do you mean "Why doesn't it exist?" If so, I don't know what it would do. BOOST_YAP_USER_UDT_ANY_BINARY_OPERATOR already accepts a UDT on either the right or left side, and any non-Expression type on the other.

tutorial/how_expression_operands_are_treated.html:
- "This implies that an rvalue remain treatable as a..." s/remain/remains/. Also this sentence is somewhat awkward.
That's the right conjugation for what I was trying to say, though I agree about the awkwardness. I changed that fragment to "This implies that an rvalue be treated as if it were a temporary".
- T & - > PARTIAL_DECAY(T) & This can't possibly be right. If you decay a reference to a function or array into a pointer, then you need to treat the pointer as an rvalue.
Thanks! Fortunately, the code doesn't do what I wrote in that table.
tutorial/customization_points.html:
- what do you need evaluate_as for? I'm having a hard time thinking of uses that wouldn't be better handled as an explicit transform (Or perhaps just wrapping the expression in an implicit_cast_expr). The same goes for transform_expression. Having more knobs and dials just makes the API confusing. Also it looks to me like transform_expression will cause any placeholders to get dropped on the floor, as transform_expression doesn't take the extra arguments that would be used to replace placeholders.
- transform_expression is the odd man out in terms of naming. I think it should be eval_something. You're already using the term transform as something else.
Yes, I agree with all of this, for the reasons already stated.
tutorial/transform_matching.html:
- The negate example:

      struct xform {
          // This transform negates terminals.
          auto operator() (boost::yap::terminal_tag, double x)
          { return -x; }

          // This transform removes negations.
          auto operator() (boost::yap::negate_tag, double x)
          { return x; }
      };

  seems somewhat confused. The behavior I would expect from this transform is that it can match expressions of the form `d` or `-d` and will return -d or d respectively.
This gets right at the point of the example. It's maybe a counterintuitive way to evaluate expressions, but the alternative is worse. The alternative is that terminal transforms are not auto-applied before a terminal is unwrapped and passed as a value to a tag-transform call. If that were how the library worked, there would be no way to write a tag-transform function that *did* apply the transform, even explicitly, because at that point the terminal is no longer visible. You are just passed a value (like "double x") above. Perhaps I should add something like what I just wrote here to the docs as well.
This implies that the correct behavior is not to apply the terminal_tag transform before accessing negate_tag. The later statement: // Only applies the negate_tag transform; prints -9. makes no sense as the negate_tag transform alone will give 9.
Thanks. Done.
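For concreteness, the two candidate semantics being debated can be sketched in plain C++ (a hand-rolled illustration with made-up names, not Yap's actual machinery), evaluating the expression -9 with the xform from the example:

```cpp
#include <cassert>

struct terminal_tag {};
struct negate_tag {};

// The xform from the example: negates terminals, strips negations.
struct xform {
    double operator()(terminal_tag, double x) const { return -x; }
    double operator()(negate_tag, double x) const { return x; }
};

// Yap-style semantics: the terminal transform is applied to the wrapped
// value before it is handed to the enclosing node's tag-transform.
double negate_with_terminal_unwrap(xform f, double term) {
    double unwrapped = f(terminal_tag{}, term); // 9 -> -9
    return f(negate_tag{}, unwrapped);          // -9 -> -9
}

// The alternative Steven expects: the negate tag-transform sees the raw
// terminal value, and the terminal transform is never applied implicitly.
double negate_without_terminal_unwrap(xform f, double term) {
    return f(negate_tag{}, term);               // 9 -> 9
}
```

Under the first rule both overloads fire and -9 comes out; under the second rule only the negate_tag overload fires and 9 comes out, which is the discrepancy the corrected comment now reflects.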
- "The TagTransform is preferred, except in the case of a call expression, in which case the ExpressionTransform is preferred" Why is call treated specially? From an outsider's POV it just seems weird. At the very least it deserves an explanation in the rationale.
Frankly, because I'm not smart enough to fix this. I tried *a lot* to do so. When and if Paul ever puts Fit into a Boost release, I'm going to use that as the fix.
- "The above only applies when you have a ExpressionTransform" s/a/an/
Done.
examples/hello_world.html:
- I have no idea what you're talking about regarding eager evaluation in proto. The corresponding Proto example also uses an explicit evaluate.
I don't have any idea what I'm talking about there either. I went back and looked at all the Proto examples, and I can't figure out what it was I was looking at before that made me think that. I've removed the mention of Proto altogether.
examples/hello_world_redux.html:
- "That's better!" What's better than what?
This example is better than the previous example. The comment is tongue-in-cheek, though, as the rest indicates.

examples/calc1.html:
- "Here we can first see how much C++14-and-later language features help the end user -- the Proto version is much, much longer." I disagree with this assertion. The real reason that this is shorter than the Proto version is that you've baked placeholder substitution into evaluate, which I think is a mistake--are you building a lambda library or are you building a generic ET library?
evaluate() hardly qualifies as a lambda library! However, the point remains the same -- writing yap::evaluate() is trivial with variadic templates, and it would necessarily be limited in a C++98 library. Making such a trivial function part of the Yap library, instead of asking the user to reinvent that same wheel in every Yap terminal use case, seems like a feature to me, not a bug.
examples/calc2.html:
- This really misses the point of proto's calc2 example. An equivalent example with YAP would be to define a new ExpressionTemplate with an overloaded call operator.
I don't think this example missed the point. In my calc2a example, I show the same functionality, just using lambdas to give the expressions names. I like this better for stylistic reasons. The calc2b version is more directly like the Proto version, except that here again it is possible to provide a single *very simple* Yap function that replaces all the functionality in the Proto calc2 example, yap::make_expression_function(). This also seems to me like a good thing.
examples/calc3.html:
- The way you're using get_arity is rather wacky as you know the arity statically anyway. The original Proto example makes more sense.
Please indicate which of the statements in Proto calc3's main() does *not* contain a statically known arity. Again, I think the only transformation I made which led to your comment was to give the expressions names by lambda-wrapping them. I prefer this style.
examples/lazy_vector.html:
- "This move is something of a hack." I'm not sure that I would consider it a hack. The general idea is that in evaluate(transform(expr)), any values held by expr should get moved into the result of transform so that they can be passed to evaluate.
You're right, of course. I only meant that it's a little wonky to std::move() an int to force Yap to make a copy.
- "We're able to write this more simply than the equivalent Proto code; since YAP is always lazy, we can just use the expressions below directly." Actually the same code works for proto as well. I suspect that BOOST_PROTO_AUTO was just used for symmetry with expr1.
True enough. Comment removed.
examples/vector.html:
- return boost::yap::make_terminal(std::move(vec[n])); Move is probably wrong as you're working with a reference to begin with. (Of course, it doesn't really matter for double, but imagine that you have a vector<unique_ptr> instead.)
The move is required to force a copy; just because the reference is valid now doesn't mean it won't dangle later. But in the vector<unique_ptr> case, yes, I would probably have done something different.
examples/mixed.html:
- I think you'd be better off making sin_tag a regular function object like in the proto example.
Could you explain? It isn't immediately obvious to me what you mean.
examples/map_list_of:
- map.emplace( Key{std::forward<Key2 &&>(key)}, Value{std::forward<Value2 &&>(value)} ); Why create extra temporaries here?
Stupidity? Sleepiness? I just don't know. A very existential question. Regardless, this is now changed.
- return transform.map; This doesn't allow NRVO.
Thanks! I've changed the transform to operate on a reference to the return type, and changed its use to this:

    template <typename Key, typename Value, typename Allocator>
    operator std::map<Key, Value, Allocator> () const
    {
        std::map<Key, Value, Allocator> retval;
        map_list_of_transform<Key, Value, Allocator> transform{retval};
        boost::yap::transform(*this, transform);
        return retval;
    }

examples/future_group.html:
- "Assertion that left and right are compatible types" I don't think that is_same is the correct test. What you want to guarantee is that the return types are the same on both sides.
Yes, that's right. I've stricken the static_assert(), as it's not strictly necessary for the example anyway.
manual/object_code.html:
- "The object code generated by modern template metaprogramming can be quite good, in large part due to the drastic reduction in the amount of metaprogramming necessary to determine the result type" I'm not sure I see the connection here. Type-based metaprogramming generates no object code to begin with.
This is my intuition about part of what confuses the inliner -- I have seen the inliner "get confused" (my characterization) and produce worse code in these type-based metaprogramming-heavy situations, and then produce more inlined calls when the metaprogramming is removed from the same code. I could certainly be wrong! I'll strike that paragraph. Thanks for the very thorough review, Steven! You caught a lot. I'm looking forward to the second part, since I'm sure you'll catch a lot there too. Zach

AMDG On 02/11/2018 01:26 PM, Zach Laine via Boost wrote:
On Sat, Feb 10, 2018 at 6:01 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
[snip]
All good notes; done. If you want to track the changes, I've created a boost_review branch where all the during-review changes will be done.
tutorial/expressions.html:
- "...any of the functions in Boost.YAP that takes" s/takes/take/
That's not wrong; "takes" needs to agree with "any", not "functions".
I think "any" is also plural when used as an indefinite pronoun.
<snip>
tutorial/operator_macros.html:
- What about BOOST_YAP_USER_ANY_UDT_BINARY_OPERATOR?
Do you mean "Why doesn't it exist?" If so, I don't know what it would do. BOOST_YAP_USER_UDT_ANY_BINARY_OPERATOR already accepts a UDT on either the right or left side, and any non-Expression type on the other.
Oh, I see. I got confused by the fact that the table says "--" for the second operand type instead of saying the same as for the first operand type, so I jumped to the conclusion that the macro is asymmetric.
tutorial/transform_matching.html:
- The negate example:

      struct xform {
          // This transform negates terminals.
          auto operator() (boost::yap::terminal_tag, double x)
          { return -x; }

          // This transform removes negations.
          auto operator() (boost::yap::negate_tag, double x)
          { return x; }
      };

  seems somewhat confused. The behavior I would expect from this transform is that it can match expressions of the form `d` or `-d` and will return -d or d respectively.
This gets right at the point of the example. It's maybe a counterintuitive way to evaluate expressions, but the alternative is worse. The alternative is that terminal transforms are not auto-applied before a terminal is unwrapped and passed as a value to a tag-transform call. If that were how the library worked, there would be no way to write a tag-transform function that *did* apply the transform, even explicitly, because at that point the terminal is no longer visible.
I don't believe that that is worse than being unable to write a tag-transform that *doesn't* apply the transform. Nothing else in a transform implicitly recurses, so I think your choice here is inconsistent. In addition, it's not quite true that you can't apply it explicitly. I can think of at least three ways to do so:
- re-wrap the argument in a terminal
- call (*this)(terminal_tag(), d)
- implement terminal evaluation in a separate function which can be used as needed.
You are just passed a value (like "double x") above. Perhaps I should add something like what I just wrote here to the docs as well.
This implies that the correct behavior is not to apply the terminal_tag transform before accessing negate_tag. The later statement: // Only applies the negate_tag transform; prints -9. makes no sense as the negate_tag transform alone will give 9.
Thanks. Done.
- "The TagTransform is preferred, except in the case of a call expression, in which case the ExpressionTransform is preferred" Why is call treated specially? From an outsider's POV it just seems weird. At the very least it deserves an explanation in the rationale.
Frankly, because I'm not smart enough to fix this. I tried *a lot* to do so. When and if Paul ever puts Fit into a Boost release, I'm going to use that as the fix.
Okay, so it's an implementation problem rather than a deliberate interface choice made for some inscrutable reason. I can live with that.
examples/hello_world_redux.html:
- "That's better!" What's better than what?
This example is better that the previous example. The comment is tongue-in-cheek, though, as the rest indicates.
Ah. Having "That" appear a full paragraph before its antecedent makes it unintelligible.
examples/calc1.html:
- "Here we can first see how much C++14-and-later language features help the end user -- the Proto version is much, much longer." I disagree with this assertion. The real reason that this is shorter than the Proto version is that you've baked placeholder substitution into evaluate, which I think is a mistake--are you building a lambda library or are you building a generic ET library?
evaluate() hardly qualifies as a lambda library! However, the point remains the same -- making yap::evaluate() is trivial with variadic templates, and would be necessarily limited in a C++98 library. Making such a trivial function part of the Yap library instead of asking the user to reinvent that same wheel in every Yap terminal use case seems like a feature to me, not a bug.
Positional placeholders are really only needed if you're doing lambda-like things. I don't think that the primary library interface should directly support domain-specific uses. I also don't think requiring those who actually need this to implement it themselves is an excessive burden, as a transform that does it should be <10 lines of code. If you really feel that it's necessary to provide this, just write the transform yourself and provide it along with the placeholders.
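The kind of small substitution transform described above might look something like the following sketch, written against a deliberately minimal stand-in for expression nodes rather than Yap's real API (placeholder, replace_placeholders, and eval_plus are all invented for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <tuple>

// A stand-in for a placeholder terminal: placeholder<1> is _1, etc.
template <std::size_t I> struct placeholder {};

// The "<10 lines" transform: placeholder<I> becomes the I-th bound
// argument; plain values (our stand-in terminals) pass through unchanged.
template <typename... Args>
struct replace_placeholders {
    std::tuple<Args...> args;
    template <std::size_t I>
    auto operator()(placeholder<I>) const { return std::get<I - 1>(args); }
    double operator()(double d) const { return d; }
};

// A toy "plus node" evaluator that substitutes before adding.
template <typename Xform, typename L, typename R>
double eval_plus(Xform const & xf, L l, R r) { return xf(l) + xf(r); }
```

The point being debated is only where this logic lives: shipped alongside the placeholders as a ready-made transform, or baked into evaluate() itself.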
examples/calc2.html:
- This really misses the point of proto's calc2 example. An equivalent example with YAP would be to define a new ExpressionTemplate with an overloaded call operator.
I don't think this example missed the point.
The point of the example in Proto is how to add members to an expression type, not just how to make it callable. Wrapping it in another class is something quite different. Now, admittedly, Yap makes creating your own ExpressionTemplate types so easy that this example is basically redundant...
In my calc2a example, I show the same functionality, just using lambdas to give the expressions names.
You can give the expressions names in Proto too: BOOST_PROTO_AUTO(expr_1_fn, _1 + 2.0);
I like this better for stylistic reasons. The calc2b version is more directly like the Proto version, except that here again it is possible to provide a single *very simple* Yap function that replaces all the functionality in the Proto calc2 example, yap::make_expression_function(). This also seems to me like a good thing.
examples/calc3.html:
- The way you're using get_arity is rather wacky as you know the arity statically anyway. The original Proto example makes more sense.
Please indicate which of the statements in Proto calc3's main() does *not* contain a statically known arity. Again, I think the only transformation I made which led to your comment was to give the expressions names by lambda-wrapping them. I prefer this style.
The arity is known in main, but is *not* known at the point where get_arity is actually called. i.e. in the various overloads of calculator_expression::operator()
examples/vector.html:
- return boost::yap::make_terminal(std::move(vec[n])); Move is probably wrong as you're working with a reference to begin with. (Of course, it doesn't really matter for double, but imagine that you have a vector<unique_ptr> instead.)
The move is required to force a copy; just because the reference is valid now doesn't mean it won't dangle later.
Sure, but when using the evaluate(transform()) idiom, you're guaranteed that the references returned by transform will not be left dangling.
But in the case of a vector<unique_ptr> case, yes I would probably have done something different.
examples/mixed.html:
- I think you'd be better off making sin_tag a regular function object like in the proto example.
Could you explain? It isn't immediately obvious to me what you mean.
    struct sin_tag {
        template<class T>
        T operator()(const T& t) const {
            using std::sin;
            return sin(t);
        }
    };

Then there's no need to use a yap-specific customization point (eval_call).

In Christ,
Steven Watanabe

On Sun, Feb 11, 2018 at 5:54 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/11/2018 01:26 PM, Zach Laine via Boost wrote:
On Sat, Feb 10, 2018 at 6:01 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
[snip]
- "...any of the functions in Boost.YAP that takes" s/takes/take/
That's not wrong; "takes" needs to agree with "any", not "functions".
I think "any" is also plural when used as an indefinite pronoun.
It can go either way. I'm using it in the singular. The second definition in Google's online dictionary for "any" is: *2*. whichever of a specified class might be chosen.
tutorial/transform_matching.html:
- The negate example:

      struct xform {
          // This transform negates terminals.
          auto operator() (boost::yap::terminal_tag, double x)
          { return -x; }

          // This transform removes negations.
          auto operator() (boost::yap::negate_tag, double x)
          { return x; }
      };

  seems somewhat confused. The behavior I would expect from this transform is that it can match expressions of the form `d` or `-d` and will return -d or d respectively.
This gets right at the point of the example. It's maybe a counterintuitive way to evaluate expressions, but the alternative is worse. The alternative is that terminal transforms are not auto-applied before a terminal is unwrapped and passed as a value to a tag-transform call. If that were how the library worked, there would be no way to write a tag-transform function that *did* apply the transform, even explicitly, because at that point the terminal is no longer visible.
I don't believe that that is worse than being unable to write a tag-transform that *doesn't* apply the transform. Nothing else in a transform implicitly recurses, so I think your choice here is inconsistent. In addition, it's not quite true that you can't apply it explicitly. I can think of at least three ways to do so:
- re-wrap the argument in a terminal
- call (*this)(terminal_tag(), d)

In either of those cases, any information you had about the details of the terminal is now gone, and you can't get it back. The value that you re-wrap or call as above may have come from a terminal that was an lvalue, an rvalue, a reference-expression, const, mutable, etc.

- implement terminal evaluation in a separate function which can be used as needed.

True. I had to make a decision about which convention violated my sense of surprise the least. This is the one I chose. It seems less surprising to get stuck with transforms you did not yet get to in the terminals (mainly because they happen only where you wrote them into your transform explicitly) than it does to get stuck because you lost important information about the properties of the unwrapped terminal. It may be that the community consensus is that my sense of surprise is wrong in this case; I'm open to being persuaded.

[snip]
examples/calc1.html:
- "Here we can first see how much C++14-and-later language features help the end user -- the Proto version is much, much longer." I disagree with this assertion. The real reason that this is shorter than the Proto version is that you've baked placeholder substitution into evaluate, which I think is a mistake--are you building a lambda library or are you building a generic ET library?
evaluate() hardly qualifies as a lambda library! However, the point remains the same -- making yap::evaluate() is trivial with variadic templates, and would be necessarily limited in a C++98 library. Making such a trivial function part of the Yap library instead of asking the user to reinvent that same wheel in every Yap terminal use case seems like a feature to me, not a bug.
Positional placeholders are really only needed if you're doing lambda-like things. I don't think that the primary library interface should directly support domain-specific uses. I also don't think requiring those who actually need this to implement it themselves is an excessive burden, as a transform that does it should be <10 lines of code. If you really feel that it's necessary to provide this, just write the transform yourself and provide it along with the placeholders.
True enough. I hesitate to remove this feature because 1) it's already in there, and no one else has to reinvent it when it's needed in their code, and 2) I don't think it does any real harm. Contrast this with the implicit transforms associated with evaluate(), which I think lead to bad code.
examples/calc2.html:
- This really misses the point of proto's calc2 example. An equivalent example with YAP would be to define a new ExpressionTemplate with an overloaded call operator.
I don't think this example missed the point.
The point of the example in Proto is how to add members to an expression type, not just how to make it callable. Wrapping it in another class is something quite different. Now, admittedly, Yap makes creating your own ExpressionTemplate types so easy, that this example is basically redundant...
Yeah, this last part is why I even left out some of the Proto examples.
In my calc2a example, I show the same functionality, just using lambdas to give the expressions names.
You can give the expressions names in Proto too:
BOOST_PROTO_AUTO(expr_1_fn, _1 + 2.0);
True. [snip]
examples/vector.html:
- return boost::yap::make_terminal(std::move(vec[n])); Move is probably wrong as you're working with a reference to begin with. (Of course, it doesn't really matter for double, but imagine that you have a vector<unique_ptr> instead.)
The move is required to force a copy; just because the reference is valid now doesn't mean it won't dangle later.
Sure, but when using the evaluate(transform()) idiom, you're guaranteed that the references returned by transform will not be left dangling.
Sure. But that's just in this example, and using that (admittedly dominant) idiom. I still want to reinforce in the example code how you make copies of value types, especially built-in ones.
But in the case of a vector<unique_ptr> case, yes I would probably have done something different.
examples/mixed.html:
- I think you'd be better off making sin_tag a regular function object like in the proto example.
Could you explain? It isn't immediately obvious to me what you mean.
    struct sin_tag {
        template<class T>
        T operator()(const T& t) const {
            using std::sin;
            return sin(t);
        }
    };
Then there's no need to use a yap-specific customization point (eval_call).
Oh, I see. Yeah, that's true. I think I just wanted to show how to do a transformed call. I'll add this alternate way of accomplishing the same thing to the example.

Zach
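A self-contained sketch of why the function-object form pulls its weight: the `using std::sin;` two-step lets one call site resolve to std::sin for built-ins and to a user-defined overload via ADL for UDTs (sin_fn and user::degrees are invented for illustration; the Proto example's function object is analogous):

```cpp
#include <cassert>
#include <cmath>

namespace user {
    // A toy UDT with its own sin, found by argument-dependent lookup.
    struct degrees { double value; };
    degrees sin(degrees d) { return degrees{d.value}; } // identity, for demo
}

// An ordinary function object, needing no yap-specific customization
// point: unqualified sin(t) picks std::sin or user::sin as appropriate.
struct sin_fn {
    template <class T>
    T operator()(T const & t) const {
        using std::sin;
        return sin(t);
    }
};
```

Because sin_fn is just a callable, a call expression whose callee terminal holds it can be evaluated by plain invocation.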

AMDG On 02/12/2018 09:45 AM, Zach Laine wrote:
On Sun, Feb 11, 2018 at 5:54 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
tutorial/transform_matching.html:
- The negate example:

struct xform {
    // This transform negates terminals.
    auto operator() (boost::yap::terminal_tag, double x) { return -x; }

    // This transform removes negations.
    auto operator() (boost::yap::negate_tag, double x) { return x; }
};

seems somewhat confused. The behavior I would expect from this transform is that it can match expressions of the form `d` or `-d` and will return -d or d respectively.
This gets right at the point of the example. It's maybe a counterintuitive way to evaluate expressions, but the alternative is worse. The alternative is that terminal transforms are not auto-applied before a terminal is unwrapped and passed as a value to a tag-transform call. If that were how the library worked, there would be no way to write a tag-transform function that *did* apply the transform, even explicitly, because at that point the terminal is no longer visible.
<snip>
- re-wrap the argument in a terminal
- call (*this)(terminal_tag(), d)
In either of those cases, any information you had about the details of the terminal is now gone, and you can't get it back. The value that you re-wrap or call as above may have come from a terminal that was an lvalue, an rvalue, a reference-expression, const, mutable, etc.
- implement terminal evaluation in a separate
function which can be used as needed.
True. I had to make a decision about which convention violated my sense of surprise the least. This is the one I chose. It seems less surprising to get stuck with transforms you did not yet get to in the terminals (mainly because that only happens when you explicitly wrote those transforms into your transform object) than it does to get stuck because you lost important information about the properties of the unwrapped terminal.
It may be that the community consensus is that my sense of surprise is wrong in this case; I'm open to being persuaded.
[snip]
Maybe you should reconsider whether terminals should be unwrapped in the first place. If you don't unwrap terminals, then this isn't an issue. If I'm writing a generic function like: auto operator()(plus_tag, Expr&& lhs, Expr&& rhs) { transform(lhs, *this); transform(rhs, *this) } then the behavior that I want is for terminals to be processed by the nested call to transform. Unwrapping the terminals causes surprising behavior, which is only sort of fixed by automatically applying the terminal transform and making transform a no-op for non-Expressions. In particular, if the terminal transform returns an Expression, that Expression will get processed again.
examples/vector.html:
- return boost::yap::make_terminal(std::move(vec[n])); Move is probably wrong as you're working with a reference to begin with. (Of course, it doesn't really matter for double, but imagine that you have a vector<unique_ptr> instead.)
The move is required to force a copy; just because the reference is valid now doesn't mean it won't dangle later.
Sure, but when using the evaluate(transform()) idiom, you're guaranteed that the references returned by transform will not be left dangling.
Sure. But that's just in this example, and using that (admittedly dominant) idiom. I still want to reinforce in the example code how you make copies of value types, especially built-in ones.
Perfect-forwarding through a chain of transforms is also an important use case that needs to be explained. I don't mind having the examples demonstrate copying as long as they are written such in a way that copying is really the correct behavior. In this particular example, I don't think that it is. In Christ, Steven Watanabe

On Mon, Feb 12, 2018 at 12:08 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On Sun, Feb 11, 2018 at 5:54 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
tutorial/transform_matching.html:
- The negate example:

struct xform {
    // This transform negates terminals.
    auto operator() (boost::yap::terminal_tag, double x) { return -x; }

    // This transform removes negations.
    auto operator() (boost::yap::negate_tag, double x) { return x; }
};

seems somewhat confused. The behavior I would expect from this transform is that it can match expressions of the form `d` or `-d` and will return -d or d respectively.
This gets right at the point of the example. It's maybe a counterintuitive way to evaluate expressions, but the alternative is worse. The alternative is that terminal transforms are not auto-applied before a terminal is unwrapped and passed as a value to a tag-transform call. If that were how the library worked, there would be no way to write a tag-transform function that *did* apply the transform, even explicitly, because at that point the terminal is no longer visible.
<snip>
- re-wrap the argument in a terminal
- call (*this)(terminal_tag(), d)
On 02/12/2018 09:45 AM, Zach Laine wrote:
In either of those cases, any information you had about the details of the terminal is now gone, and you can't get it back. The value that you re-wrap or call as above may have come from a terminal that was an lvalue, an rvalue, a reference-expression, const, mutable, etc.
- implement terminal evaluation in a separate
function which can be used as needed.
True. I had to make a decision about which convention violated my sense of surprise the least. This is the one I chose. It seems less surprising to get stuck with transforms you did not yet get to in the terminals (mainly because that only happens when you explicitly wrote those transforms into your transform object) than it does to get stuck because you lost important information about the properties of the unwrapped terminal.
It may be that the community consensus is that my sense of surprise is wrong in this case; I'm open to being persuaded.
[snip]
Maybe you should reconsider whether terminals should be unwrapped in the first place. If you don't unwrap terminals, then this isn't an issue.
If I'm writing a generic function like:
auto operator()(plus_tag, Expr&& lhs, Expr&& rhs) { transform(lhs, *this); transform(rhs, *this) }
then the behavior that I want is for terminals to be processed by the nested call to transform. Unwrapping the terminals causes surprising behavior, which is only sort of fixed by automatically applying the terminal transform and making transform a no-op for non-Expressions. In particular, if the terminal transform returns an Expression, that Expression will get processed again.
Yes, I'm not entirely satisfied with this either, for the same reasons. FWIW, the design has settled here after previously not doing any automatic terminal unwrapping, but I found it substantially harder to use when that was the case. I find that code like what you wrote here infrequently trips over whether Expr is an Expression or a non-Expression type, but when it does you can write it like this: auto operator()(plus_tag, Expr&& lhs, Expr&& rhs) { transform(as_expr(lhs), *this); transform(as_expr(rhs), *this) } Not the greatest, but like I said, I don't think the full generality here is required all that often.
examples/vector.html:
- return boost::yap::make_terminal(std::move(vec[n])); Move is probably wrong as you're working with a reference to begin with. (Of course, it doesn't really matter for double, but imagine that you have a vector<unique_ptr> instead.)
The move is required to force a copy; just because the reference is valid now doesn't mean it won't dangle later.
Sure, but when using the evaluate(transform()) idiom, you're guaranteed that the references returned by transform will not be left dangling.
Sure. But that's just in this example, and using that (admittedly dominant) idiom. I still want to reinforce in the example code how you make copies of value types, especially built-in ones.
Perfect-forwarding through a chain of transforms is also an important use case that needs to be explained. I don't mind having the examples demonstrate copying as long as they are written such in a way that copying is really the correct behavior. In this particular example, I don't think that it is.
That's a really good point. I'll revisit that example. *TODO*. Zach

AMDG On 02/12/2018 06:09 PM, Zach Laine wrote:
On Mon, Feb 12, 2018 at 12:08 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
<snip> Unwrapping the terminals causes surprising behavior, which is only sort of fixed by automatically applying the terminal transform and making transform a no-op for non-Expressions. In particular, if the terminal transform returns an Expression, that Expression will get processed again.
Yes, I'm not entirely satisfied with this either, for the same reasons. FWIW, the design has settled here after previously not doing any automatic terminal unwrapping, but I found it substantially harder to use when that was the case. I find that code like what you wrote here infrequently trips over whether Expr is an Expression or a non-Expression type, but when it does you can write it like this:
auto operator()(plus_tag, Expr&& lhs, Expr&& rhs) { transform(as_expr(lhs), *this); transform(as_expr(rhs), *this) }
Not the greatest, but like I said, I don't think the full generality here is required all that often.
I don't think as_expr helps with the problems I'm thinking of. Unless I'm misunderstanding something, the following (seemingly reasonable) code will blow up the stack with infinite recursion:

int check_int(int i) { assert(i >= -42 && i < 42); return i; }

int checked_add(int i, int j) { return check_int(i + j); }

struct checked_call {
    template<class F>
    int operator()(F f, int i) { return check_int(f(i)); }
};

struct int_checker {
    template<long long I>
    auto operator()(terminal_tag, placeholder<I>) {
        return make_expression<expr_kind::call>(check_int, placeholder<I>{});
    }
    template<class L, class R>
    auto operator()(plus_tag, L && lhs, R && rhs) {
        return make_expression<expr_kind::call>(checked_add, transform(lhs, *this), transform(rhs, *this));
    }
    template<class F, class... E>
    auto operator()(call_tag, F && f, E &&... e) {
        return make_expression<expr_kind::call>(checked_call{}, f, transform(e, *this)...);
    }
};

abs_ = make_terminal((int(*)(int))&std::abs);
evaluate(transform(abs_(1_p + 2_p), int_checker{}), -5, 20);

In Christ, Steven Watanabe

On Mon, Feb 12, 2018 at 10:13 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/12/2018 06:09 PM, Zach Laine wrote:
On Mon, Feb 12, 2018 at 12:08 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
<snip> Unwrapping the terminals causes surprising behavior, which is only sort of fixed by automatically applying the terminal transform and making transform a no-op for non-Expressions.
I guess I was reading too fast. I missed or misread this last sentence entirely:
In particular, if the terminal
transform returns an Expression, that Expression will get processed again.
I get what you're concerned about now. That's not an issue. An expression is processed at most once by transform(). If, during a call to transform(), an expression E matches the transform-object TO, the result of TO(E) is used in the output of transform(). If, in the TO(E) call, its children are processed in some way, it is the user's responsibility not to infinitely recurse. transform() will copy or move the partial result TO(E), but will never revisit/re-transform it. [snip] Unless I'm misunderstanding something, the following
(seemingly reasonable) code will blow up the stack with infinite recursion:
int check_int(int i) { assert(i >= -42 && i < 42); return i; }
int checked_add(int i, int j) { return check_int(i + j); }
struct checked_call { template<class F> int operator()(F f, int i) { return check_int(f(i)); } };
struct int_checker { auto operator()(terminal_tag, placeholder<I>) { return make_expression<expr_kind::call>(check_int, placeholder<I>{}); } template<class L, class R> auto operator()(plus_tag, L && lhs, R && rhs) { return make_expression<expr_kind::call>(checked_add, transform(lhs, *this), transform(rhs, *this)); } template<class F, class E> auto operator()(call_tag, F&& f, E&&... e) { return make_expression<expr_kind::call>(checked_call{}, f, transform(e, *this)...); } };
abs_ = make_terminal((int(*)(int))&std::abs);
evaluate(transform(abs_(1_p + 2_p), int_checker{}), -5, 20);
Let's look at the call graph for this: First transform() will call operator()(call_tag, ...) on abs_(1_p + 2_p). The result is make_expression<expr_kind::call>(checked_call{}, abs_, transform(1_p + 2_p, *this)). To get that result, transform() will call (a *new* call to transform() at the user's request, not as part of the top-level call): operator()(plus_tag, ...) on 1_p and 2_p. The result is make_expression<expr_kind::call>(checked_add{}, transform(1_p, *this), transform(2_p, *this)). Just following the left side: to evaluate that, transform() (again, a *new* call) calls operator()(terminal_tag, 1_p), which returns make_expression<expr_kind::call>(check_int, 1_p). The right side does something very similar, just with 2_p. No infinite recursion. Zach

AMDG On 02/13/2018 09:44 AM, Zach Laine wrote:
On Mon, Feb 12, 2018 at 10:13 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
On 02/12/2018 06:09 PM, Zach Laine wrote:
On Mon, Feb 12, 2018 at 12:08 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
<snip> Unwrapping the terminals causes surprising behavior, which is only sort of fixed by automatically applying the terminal transform and making transform a no-op for non-Expressions.
I guess I was reading too fast. I missed or misread this last sentence entirely:
In particular, if the terminal
transform returns an Expression, that Expression will get processed again.
I get what you're concerned about now. That's not an issue. An expression is processed at most once by transform(). If, during a call to transform(), an expression E matches the transform-object TO, the result of TO(E) is used in the output of transform(). If, in the TO(E) call, its children are processed in some way, it is the user's responsibility not to infinitely recurse. transform() will copy or move the partial result TO(E), but will never revisit/re-transform it.
[snip]
Unless I'm misunderstanding something, the following
(seemingly reasonable) code will blow up the stack with infinite recursion:
int check_int(int i) { assert(i >= -42 && i < 42); return i; }
int checked_add(int i, int j) { return check_int(i + j); }
struct checked_call { template<class F> int operator()(F f, int i) { return check_int(f(i)); } };
struct int_checker { auto operator()(terminal_tag, placeholder<I>) { return make_expression<expr_kind::call>(check_int, placeholder<I>{}); } template<class L, class R> auto operator()(plus_tag, L && lhs, R && rhs) { return make_expression<expr_kind::call>(checked_add, transform(lhs, *this), transform(rhs, *this)); } template<class F, class E> auto operator()(call_tag, F&& f, E&&... e) { return make_expression<expr_kind::call>(checked_call{}, f, transform(e, *this)...); } };
abs_ = make_terminal((int(*)(int))&std::abs);
evaluate(transform(abs_(1_p + 2_p), int_checker{}), -5, 20);
Let's look at the call graph for this:
First transform() will call operator()(call_tag, ...) on abs_(1_p + 2_p). The result is make_expression<expr_kind::call>(checked_call{}, abs_, transform(1_p + 2_p, *this)).
To get that result, transform() will call (a *new* call to transform() at the user's request, not as part of the top-level call): operator()(plus_tag, ...) on 1_p and 2_p. The result is make_expression<expr_kind::call>(checked_add{}, transform(1_p, *this), transform(2_p, *this)).
I think you're forgetting that the terminal transform is applied before calling operator()(xxx_tag, ...). Thus, the result is actually: make_expression<expr_kind::call>(checked_add{}, transform(transform(1_p, *this), *this), transform(transform(2_p, *this), *this)). Since the extra transform tacks on another call expr, we end up coming right back around to operator()(call_tag). The bottom line is, any transform which is not idempotent for terminals is badly broken if you unwrap terminals automatically. If you don't apply the terminal transform, but still unwrap terminals, then it's *still* broken, because then the terminal transform gets skipped entirely. Actually, the code I posted fails to compile (even after fixing the obvious typos) because the recursion makes it impossible for auto to deduce the return type.
Just following the left side: to evaluate that, transform() (again, a *new* call): operator()(terminal_tag, 1_p), which returns make_expression<expr_kind::call>(check_int, 1_p).
The right does something very similar, just with 2_p.
No infinite recursion.
In Christ, Steven Watanabe

On Tue, Feb 13, 2018 at 1:30 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/13/2018 09:44 AM, Zach Laine wrote:
On Mon, Feb 12, 2018 at 10:13 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
On 02/12/2018 06:09 PM, Zach Laine wrote:
On Mon, Feb 12, 2018 at 12:08 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
[snip]
I think you're forgetting that the terminal transform is applied before calling operator()(xxx_tag, ...). Thus, the result is actually:
make_expression<expr_kind::call>(checked_add{}, transform(transform(1_p, *this), *this), transform(transform(2_p, *this), *this)).
Since the extra transform tacks on another call expr, we end up up coming right back around to operator()(call_tag).
I was forgetting that! Thanks.
The bottom line is, any transform which is not idempotent for terminals is badly broken if you unwrap terminals automatically.
If you don't apply the terminal transform, but still unwrap terminals, then it's *still* broken, because then the terminal transform gets skipped entirely.
Actually, the code I posted fails to compile (even after fixing the obvious typos) because the recursion makes it impossible for auto to deduce the return type.
Right, and although this saves the user from runtime infinite recursion, it is a very obscure failure mode. This more than outweighs the convenience of the current interface. I'm convinced the auto-evaluating, and in fact all terminal unwrapping in tag-transforms, should be removed. Zach

On Tue, Feb 13, 2018 at 2:10 PM, Zach Laine <whatwasthataddress@gmail.com> wrote:
On Tue, Feb 13, 2018 at 1:30 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
[snip]
Actually, the code I posted fails to compile (even after
fixing the obvious typos) because the recursion makes it impossible for auto to deduce the return type.
Right, and although this saves the user from runtime infinite recursion, it is a very obscure failure mode. This more than outweighs the convenience of the current interface. I'm convinced the auto-evaluating, and in fact all terminal unwrapping in tag-transforms, should be removed.
Actually, I take this back, at least partially -- I think auto-unwrapping in some form might still be useful without auto-applying the transform. I'll need to experiment with that a bit. *TODO* Zach

AMDG On 02/13/2018 01:29 PM, Zach Laine wrote:
On Tue, Feb 13, 2018 at 2:10 PM, Zach Laine <whatwasthataddress@gmail.com> wrote:
<snip> I'm convinced the auto-evaluating, and in fact all terminal unwrapping in tag-transforms, should be removed.
Actually, I take this back, at least partially -- I think auto-unwrapping in some form might still be useful without auto-applying the transform. I'll need to experiment with that a bit. *TODO*
If you can figure out a way to do it such that unwrapping is only applied for arguments that are *explicitly* intended to match terminals, then it would work. If you imagine the semantics being as if unwrapping the terminal were an implicit conversion, then it would probably work correctly in all cases. Unfortunately, I don't know how to implement this. In Christ, Steven Watanabe

On Tue, Feb 13, 2018 at 3:36 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/13/2018 01:29 PM, Zach Laine wrote:
On Tue, Feb 13, 2018 at 2:10 PM, Zach Laine < whatwasthataddress@gmail.com> wrote:
<snip> I'm convinced the auto-evaluating, and in fact all terminal unwrapping in tag-transforms, should be removed.
Actually, I take this back, at least partially -- I think auto-unwrapping in some form might still be useful without auto-applying the transform. I'll need to experiment with that a bit. *TODO*
If you can figure out a way to do it such that unwrapping is only applied for arguments that are *explicitly* intended to match terminals, then it would work. If you imagine the semantics being as if unwrapping the terminal were an implicit conversion, then it would probably work correctly in all cases. Unfortunately, I don't know how to implement this.
I don't know if it needs to be so fancy. I could just revert back to the old behavior, which always unwraps terminals, but does not auto-apply transforms. Zach

AMDG On 02/12/2018 09:45 AM, Zach Laine wrote:
On Sun, Feb 11, 2018 at 5:54 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
On 02/11/2018 01:26 PM, Zach Laine via Boost wrote:
evaluate() hardly qualifies as a lambda library! However, the point remains the same -- writing yap::evaluate() is trivial with variadic templates, but would necessarily be limited in a C++98 library. Making such a trivial function part of the Yap library, instead of asking the user to reinvent that same wheel in every Yap terminal use case, seems like a feature to me, not a bug.
Positional placeholders are really only needed if you're doing lambda-like things. I don't think that the primary library interface should directly support domain-specific uses. I also don't think requiring those who actually need this to implement it themselves is an excessive burden, as a transform that does it should be <10 lines of code. If you really feel that it's necessary to provide this, just write the transform yourself and provide it along with the placeholders.
True enough. I hesitate to remove this feature because 1) it's already in there, and no one else has to reinvent it when it's needed in their code, and 2) I don't think it does any real harm. Contrast this with the implicit transforms associated with evaluate(), which I think lead to bad code.
I would be more willing to accept this in C++03 or even C++11, but this kind of usage of expression templates is largely obsolete in C++14, because we have language-level lambda expressions. In Christ, Steven Watanabe

________________________________________ From: Boost [boost-bounces@lists.boost.org] on behalf of Steven Watanabe via Boost [boost@lists.boost.org] Sent: 14 February 2018 18:09 To: Boost Cc: Steven Watanabe Subject: Re: [boost] [yap] Review part 1: documentation <SNIP>
Positional placeholders are really only needed if you're doing lambda-like things.
I don't think that the primary library interface should directly support domain-specific uses. I also don't think requiring those who actually need this to implement it themselves is an excessive burden, as a transform that does it should be <10 lines of code. If you really feel that it's necessary to provide this, just write the transform yourself and provide it along with the placeholders.
True enough. I hesitate to remove this feature because 1) it's already in there, and no one else has to reinvent it when it's needed in their code, and 2) I don't think it does any real harm. Contrast this with the implicit transforms associated with evaluate(), which I think lead to bad code.
I would be more willing to accept this in C++03 or even C++11, but this kind of usage of expression templates is largely obsolete in C++14, because we have language-level lambda expressions.
Please do NOT remove the placeholders and other things which allow a functional use case. I do not see how they prevent other use cases and I don't see why users should not have access to them. I have used these features to build the equivalent of the proto mini lambda example. Best wishes John Fletcher

AMDG On 02/14/2018 11:25 AM, Fletcher, John P wrote:
<snip> Please do NOT remove the placeholders and other things which allow a functional use case. I do not see how they prevent other use cases and I don't see why users should not have access to them.
I object to them because they violate the single-responsibility principle, I don't believe that they are commonly needed, and they're simple to implement for the cases that actually do need them.
I have used these features to build the equivalent of the proto mini lambda example.
Why would anyone want to do this in the first place in C++14 (other than to show that you can)? In Christ, Steven Watanabe

________________________________________ From: Boost [boost-bounces@lists.boost.org] on behalf of Steven Watanabe via Boost [boost@lists.boost.org] Sent: 14 February 2018 18:33 To: Boost Cc: Steven Watanabe Subject: Re: [boost] [yap] Review part 1: documentation AMDG On 02/14/2018 11:25 AM, Fletcher, John P wrote:
<snip> Please do NOT remove the placeholders and other things which allow a functional use case. I do not see how they prevent other use cases and I don't see why users should not have access to them.
I object to them because they violate the single-responsibility principle, I don't believe that they are commonly needed, and they're simple to implement for the cases that actually do need them.
Please can you give me a reference for the single-responsibility principle. I believe that anyone doing a calculation on a modern system has to evaluate how far they can trust the components they use. I have been an academic engineer for more than 40 years and have taught students to take responsibility. There was a time when engineers used a slide rule and wrote down their answers. Then they could take responsibility completely. Now we have to work out what we trust. You are free to not use them. The balance of the uses made may well be different from what you expect.
I have used these features to build the equivalent of the proto mini lambda example.
Why would anyone want to do this in the first place in C++14 (other than to show that you can)?
Intellectual curiosity and to learn about the software. Also to compare YAP with other systems I have used. John

AMDG

- Please include the standard copyright/licence text in all files.
- Headers should be in include/boost.

algorithm.hpp:
80: in deref: "std::is_rvalue_reference<Expr>{} &&" This is always false, since Expr will be deduced as either E or E&. That's probably for the best, though, as I don't think moving is ever right here.
110: ADL
180: in value_impl: (ValueOfTerminalsOnly && kind == expr_kind::terminal) Testing ValueOfTerminalsOnly is redundant.
214: in value: "- Otherwise, \a x is forwarded to the caller. */" Does value actually make sense for non-unary expressions?
221: template <long long I, typename Expr> decltype(auto) get (Expr && expr, hana::llong<I> i); Does this need to be restricted to hana::llong, rather than, say, std::integral_constant?
300: get_c: hana uses at/at_c. If you're going to use get, I'd prefer get<0> rather than get_c<0>, to match the naming in the standard library.
317: in left: kind == expr_kind::expr_ref || detail::arity_of<kind>() == detail::expr_arity::two This assertion isn't quite strict enough, as it will accept an expr_ref to anything. I think you need a get_kind_after_unwrapping_expr_refs function, as this seems to be needed in quite a few places.
414: in callable: detail::arity_of<kind>() == detail::expr_arity::n Shouldn't this check whether kind is call_expr?
439: in argument: "In argument(expr, I), I must be nonnegative, and less " "than hana::size(expr.elements)." The assertion text doesn't match the code. I think it would be better to make the message more abstract, rather than trying to describe the exact conditions numerically.
667-669: "/** Returns the result of transforming (all or part of) \a expr using whatever overloads of <code>Transform::operator()</code> that match \a expr." Remove "that" before "match"; as written, "whatever" is left hanging.
698: inline char const * op_string (expr_kind kind) Does this belong here and not in print.hpp?
algorithm_fwd.hpp:
17: expr_ref, ///< An (possibly \c const) reference to another expression. s/An/A/

config.hpp:
#ifndef BOOST_NO_CONSTEXPR_IF
/** Indicates whether the compiler supports constexpr if. If the user does not define any value for this, we assume that the compiler does not have the necessary support. Note that this is a temporary hack; this should eventually be a Boost-wide macro. */
#define BOOST_NO_CONSTEXPR_IF 1
#endif
This means that BOOST_NO_CONSTEXPR_IF will always be defined (presumably to 0 or 1). Unfortunately, you will never use if constexpr, because you always test the macro with #ifdef instead of #if.

expression.hpp:
11-13: "\note Due to a limitation of Doxygen, each of the <code>value()</code>, <code>left()</code>, <code>right()</code>, and operator overloads listed here is a stand-in for three member" Did you try \overload? That sometimes helps.
49: eval_expression_as is particularly insidious here, as it uses static_cast, thus enabling even conversions that would normally be explicit.
266: template <typename T> struct expression<expr_kind::terminal, hana::tuple<T>> The only differences that I can see between this specialization and the primary template are:
- terminal has an extra constructor expression(T &&).
- terminal does not have left/right.
I don't think either of these is worth making a specialization. make_terminal is more convenient, and left and right will already fail in the static assert.

expression_free_operators.hpp/expression_if_else.hpp:
- Why is this not part of expression.hpp?

operators.hpp: No comments.

print.hpp:
60: inline bool is_const_expr_ref (...) { return false; } If the argument has a non-trivial copy/move/destructor, then this is conditionally supported with implementation-defined semantics. [expr.call] Don't use the ellipsis like this if you want to write portable code.
121,176: ADL
144,146,185,187: ADL (Is this intentional? It isn't a documented customization point.)

user_macros.hpp:
8: # define BOOST_PP_CAT(lhs, rhs) BOOST_PP_CAT_O((lhs, rhs)) This is obviously copy-pasted from Boost.Preprocessor.
89-90: using this_type = ::boost::yap::detail::remove_cv_ref_t<decltype(*this)>; \ using lhs_type = ::boost::yap::detail::operand_type_t<expr_template, this_type const &>; \ Is there a reason why you're stripping off qualifiers and then adding them back again?
315: in BOOST_YAP_USER_FREE_BINARY_OPERATOR: auto operator BOOST_YAP_INDIRECT_CALL(op_name)() (T && lhs, Expr & rhs) \ Do you really want this to match even non-expr lvalues in the rhs? Oh, I see, it's handled by SFINAE in free_binary_op_result_t. Still, you should probably handle the lvalue and rvalue cases in the same way.
467: in BOOST_YAP_UDT_UNARY_OPERATOR: auto operator BOOST_YAP_INDIRECT_CALL(op_name)((T && x)) This doesn't work for post-inc/decrement.
619: ::boost::hana::ic_detail::parse<sizeof...(c)>({c...}) Calling another library's details is a big red flag.

detail/algorithm.hpp:
20: struct static_const This dance is no longer necessary in C++17; variables can be declared `inline`.
44: struct partial_decay<R(A...)>; The documentation seems to imply that you also decay function references.
192-200: This specialization is unnecessary, as the primary template (undefined) works well enough.

detail/default_eval.hpp:
149,302: BOOST_YAP_BINARY_OPERATOR_CASE(logical_or) // ||
150,303: BOOST_YAP_BINARY_OPERATOR_CASE(logical_and) // &&
No short-circuit evaluation.
162,310: return eval_comma( Please tell me that I'm reading this wrong, and that (a) this can safely return void, and (b) the left argument is evaluated before the right argument.
192,332: return eval_if_else( It looks like this always evaluates both branches.

Additional notes that I missed previously:
concepts.html:
- There's an additional constraint on Expression that you do not state explicitly: E e{t}, where t is a Tuple, must store t in e.elements.

In Christ, Steven Watanabe

On Tue, Feb 13, 2018 at 4:51 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
I've snipped most of this email. The missing parts were either addressed last night in my local copy of the code, or are now GitHub tickets.
algorithm.hpp:
214: in value: - Otherwise, \a x is forwarded to the caller. */
Does value actually make sense for non-unary expressions?
I hope so. I added this because I found that, although the semantics of value() are looser once the non-unary case is included, it is still useful in this form in generic code. It saves you from having to handle unary vs. non-unary as separate cases in a lot of code that is essentially passing a parameter through and doesn't want to have to care about what it is. value() did not originally work like this, but I found it harder to use that way, so I added the non-unary pass-through.
221: template <long long I, typename Expr> decltype(auto) get (Expr && expr, hana::llong<I> i); Does this need to be restricted to hana::llong, rather than, say, std::integral_constant?
It's the lack of nice syntax for integral_constant literals that made me choose this. I write get(expr, N_c) a lot, and I expect users to as well.
300: get_c: hana uses at/at_c. If you're going to use get, I'd prefer to use get<0> rather than get_c<0> to match the naming in the standard library.
I prefer what Hana does to the get() from the standard. Also, the get() from the standard is heavily overloaded, and I'd like not to match its signature, if only for clarity. For instance, after a quick glance at the code, what does get<3>(x) do? I dislike that, in part due to ADL, I really have no idea without looking somewhere else.
317: in left: kind == expr_kind::expr_ref || detail::arity_of<kind>() == detail::expr_arity::two, This assertion isn't quite strict enough as it will accept an expr_ref to anything. I think you need a get_kind_after_unwrapping_expr_refs function, as this seems to be needed in quite a few places.
That would provide better locality of error messages. However, you'll still get an error, just a call or two down the call stack. That's a lot of compile-time mechanism for a modest improvement, and it slows down all the non-error cases. I don't think it's a favorable trade.
algorithm_fwd.hpp:
#ifndef BOOST_NO_CONSTEXPR_IF
/** Indicates whether the compiler supports constexpr if.

    If the user does not define any value for this, we assume that the compiler does not have the necessary support. Note that this is a temporary hack; this should eventually be a Boost-wide macro. */
#define BOOST_NO_CONSTEXPR_IF 1
#endif

This means that BOOST_NO_CONSTEXPR_IF will always be defined (presumably to 0 or 1). Unfortunately, you will never use if constexpr, because you always test the macro with #ifdef instead of #if.
So frustrating! I noticed this a long time ago, and then got distracted and forgot to go back and fix it. Thanks.
expression.hpp:
49: eval_expression_as is particularly insidious here as it uses static_cast, thus enabling even conversions that would normally be explicit.
Agreed. This is going to be cut though, based on earlier comments.
266: template <typename T> struct expression<expr_kind::terminal, hana::tuple<T>>
The only differences that I can see between this specialization and the primary template are:
- terminal has an extra constructor expression(T &&).
- terminal does not have left/right.
I don't think either of these is worth making a specialization. make_terminal is more convenient, and left and right will already fail in the static assert.
That ctor is pretty important. You could use make_terminal, as you said, but the main use case for expression<> is prototyping, in which case I'd like to have a template that as closely as possible matches the expression template you'll eventually write in your post-prototype implementation. Explicit terminal types or templates are almost always the right answer in such cases, and I think expression<> should reflect that.
expression_free_operators.hpp/expression_if_else.hpp:
- Why is this not part of expression.hpp?
For the same reason that expression_free_operators.hpp is not. Those three headers are separate pieces that you may want some or all of, so I kept them separate.
print.hpp:
60: inline bool is_const_expr_ref (...) { return false; }
If the argument has a non-trivial copy/move/destructor, then this is conditionally supported with implementation-defined semantics [expr.call]. Don't use the ellipsis like this if you want to write portable code.
I did not know that; thanks. Fixed.
121,176: ADL
144,146,185,187: ADL (Is this intentional? It isn't a documented customization point)
No, this and the other ADL holes were all unintentional. Fixed.
89-90: using this_type = ::boost::yap::detail::remove_cv_ref_t<decltype(*this)>; \
       using lhs_type = ::boost::yap::detail::operand_type_t<expr_template, this_type const &>; \
Is there a reason why you're stripping off qualifiers and then adding them back again?
It was simply for consistency among the three different overloads each macro provides. I think I did it when there was a lot more going on in each macro, to keep from getting confused. I've removed them now, though.
619: ::boost::hana::ic_detail::parse<sizeof...(c)>({c...}) Calling another library's details is a big red flag.
Well, it's my maintenance burden, no? I think it beats copy/pasting, since the implementation is quite stable, and since I already include the relevant header anyway.
detail/algorithm.hpp:
20: struct static_const This dance is no longer necessary in C++17. variables can be declared `inline`.
Yeah, but this library supports C++14 too. Anyway, the customization points are going away. Thanks again for the detailed review! Will there by chance be a part 3? Zach

AMDG On 02/14/2018 03:11 PM, Zach Laine via Boost wrote:
On Tue, Feb 13, 2018 at 4:51 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
I've snipped most of this email. The missing parts were either addressed last night in my local copy of the code, or are now GitHub tickets.
algorithm.hpp:
221: template <long long I, typename Expr> decltype(auto) get (Expr && expr, hana::llong<I> i); Does this need to be restricted to hana::llong, rather than, say, std::integral_constant?
It's the lack of nice syntax for integral_constant literals that made me choose this. I write get(expr, N_c) a lot, and I expect users to as well.
Supporting an IntegralConstant doesn't mean that you can't pass 0_c.
300: get_c: hana uses at/at_c. If you're going to use get, I'd prefer to use get<0> rather than get_c<0> to match the naming in the standard library.
I prefer what Hana does to the get() from the standard.
Then call it at rather than get. If you're using a well-known name, then you should follow the existing conventions for that function.
Also, the get() from the standard is heavily overloaded, and I'd like not to match its signature, if only for clarity. For instance, after a quick glance at the code, what does get<3>(x) do? I dislike that, in part due to ADL, I really have no idea without looking somewhere else.
<snip>
expression_free_operators.hpp/expression_if_else.hpp:
- Why is this not part of expression.hpp?
For the same reason that expression_free_operators.hpp is not. Those three headers are separate pieces that you may want some or all of, so I kept them separate.
expression_if_else makes a certain amount of sense as being separate. Making expression_free_operators separate comes with a high risk of ODR problems in any code that has different behavior depending on whether a given operator is defined. It's extra strange because expression_free_operators.hpp only has half the operator overloads, with the other half being defined as members.
print.hpp:
60: inline bool is_const_expr_ref (...) { return false; }
If the argument has a non-trivial copy/move/destructor, then this is conditionally supported with implementation-defined semantics [expr.call]. Don't use the ellipsis like this if you want to write portable code.
I did not know that; thanks. Fixed.
It used to be flat-out undefined behavior, but C++11 relaxed the rules somewhat. Note that it's safe in an unevaluated context, which (in my experience) is a more common use of the ellipsis in C++ code.
<snip> Thanks again for the detailed review!
Will there by chance be a part 3?
Yes. I still have the tests + a summary. You may have noticed that I haven't given a vote yet. I was planning to finish it quickly today, but since Louis extended the review, I can be a bit more thorough. In Christ, Steven Watanabe

On Wed, Feb 14, 2018 at 4:45 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/14/2018 03:11 PM, Zach Laine via Boost wrote:
On Tue, Feb 13, 2018 at 4:51 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
I've snipped most of this email. The missing parts were either addressed last night in my local copy of the code, or are now GitHub tickets.
algorithm.hpp:
221: template <long long I, typename Expr> decltype(auto) get (Expr && expr, hana::llong<I> i); Does this need to be restricted to hana::llong, rather than, say, std::integral_constant?
It's the lack of nice syntax for integral_constant literals that made me choose this. I write get(expr, N_c) a lot, and I expect users to as
well.
Supporting an IntegralConstant doesn't mean that you can't pass 0_c.
Is the change from std::integral_constant to IntegralConstant significant? That is, are you asking that I accept models of some concept, or just std::integral_constant?
300: get_c: hana uses at/at_c. If you're going to use get, I'd prefer to use get<0> rather than get_c<0> to match the naming in the standard library.
I prefer what Hana does to the get() from the standard.
Then call it at rather than get. If you're using a well-known name, then you should follow the existing conventions for that function.
I'm with you in the general case. I think get() is overloaded to the point of having lost any specific meaning though. There's the std::get() overloads that operate on tuple-like types, but there are also numerous other get()s in the wild, including the somewhat similar ones from Boost.Fusion. I don't know of any that take an integral constant as a second parameter. I don't think this is a significant source of confusion.
<snip>
expression_free_operators.hpp/expression_if_else.hpp:
- Why is this not part of expression.hpp?
For the same reason that expression_free_operators.hpp is not. Those three headers are separate pieces that you may want some or all of, so I kept them separate.
expression_if_else makes a certain amount of sense as being separate. Making expression_free_operators separate comes with a high risk of ODR problems in any code that has different behavior depending on whether a given operator is defined. It's extra strange because expression_free_operators.hpp only has half the operator overloads, with the other half being defined as members.
I've read this a few times and don't get it. Without including expression_free_operators.hpp, where might the other ODR-colliding operators come from? Do you mean if the user defines her own operators for expression<>, and then includes expression_free_operators.hpp in another TU, or something else?

[snip]

Let me bring operator||/operator&& short-circuiting back up. After implementing it, I don't think it's appropriate. Consider the current non-short-circuiting implementation on the boost_review branch:

https://github.com/tzlaine/yap/blob/boost_review/include/boost/yap/detail/de...

#define BOOST_YAP_BINARY_OPERATOR_CASE(op, op_name) \
template <> \
struct default_eval_expr_impl<expr_kind:: op_name> \
{ \
    template <typename Expr, typename ...T> \
    decltype(auto) operator() (Expr && expr, T && ... args) \
    { \
        using namespace hana::literals; \
        return \
            default_eval_expr(static_cast<Expr &&>(expr).elements[0_c], static_cast<T &&>(args)...) op \
            default_eval_expr(static_cast<Expr &&>(expr).elements[1_c], static_cast<T &&>(args)...); \
    } \
};

If the "&&" in "default_eval_expr(...) && default_eval_expr(...)" resolves to a built-in operator&&(), short-circuiting will happen as always, right? But if the "&&" resolves to a user-defined operator&&(), let's say this one:

struct my_type
{
    // ...
    std::shared_ptr<int> operator&&(my_type const & rhs) {/*...*/}
};

then I really don't want to artificially impose the short-circuiting. The whole point of evaluate() is that it evaluates to whatever your Yap expression would have if it were a plain ol' non-Yap expression.
Thanks again for the detailed review!
Will there by chance be a part 3?
Yes. I still have the tests + a summary. You may have noticed that I haven't given a vote yet. I was planning to finish it quickly today, but since Louis extended the review, I can be a bit more thorough.
Sweet. Looking forward to it. Zach

AMDG On 02/14/2018 10:29 PM, Zach Laine via Boost wrote:
On Wed, Feb 14, 2018 at 4:45 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
On 02/14/2018 03:11 PM, Zach Laine via Boost wrote:
On Tue, Feb 13, 2018 at 4:51 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
algorithm.hpp:
221: template <long long I, typename Expr> decltype(auto) get (Expr && expr, hana::llong<I> i); Does this need to be restricted to hana::llong, rather than, say, std::integral_constant?
It's the lack of nice syntax for integral_constant literals that made me choose this. I write get(expr, N_c) a lot, and I expect users to as well.
Supporting an IntegralConstant doesn't mean that you can't pass 0_c.
Is the change from std::integral_constant to IntegralConstant significant? That is, are you asking that I accept models of some concept, or just std::integral_constant?
I meant hana::IntegralConstant in the first place. std::integral_constant was just an example of another IntegralConstant, which hana::at can accept.
<snip>
expression_free_operators.hpp/expression_if_else.hpp:
- Why is this not part of expression.hpp?
For the same reason that expression_free_operators.hpp is not. Those three headers are separate pieces that you may want some or all of, so I kept them separate.
expression_if_else makes a certain amount of sense as being separate. Making expression_free_operators separate comes with a high risk of ODR problems in any code that has different behavior depending on whether a given operator is defined. It's extra strange because expression_free_operators.hpp only has half the operator overloads, with the other half being defined as members.
I've read this a few times and don't get it. Without including expression_free_operators.hpp, where might the other ODR-colliding operators come from? Do you mean if the user defines her own operators for expression<>, and then includes expression_free_operators.hpp in another TU, or something else?
I was talking about the one definition rule as applied to some function that uses the operators. As an example, consider something that you yourself have written:

boost/yap/print.hpp:

template <typename T, typename = void_t<>>
struct printer
{
    std::ostream & operator() (std::ostream & os, T const &)
    { return os << "<<unprintable-value>>"; }
};

template <typename T>
struct printer<
    T,
    void_t<decltype(
        std::declval<std::ostream &>() << std::declval<T const &>()
    )>
>
{
    std::ostream & operator() (std::ostream & os, T const & x)
    { return os << x; }
};

Now consider a class that defines a stream operator separately from the class:

// x.hpp
class X {};

// x_print.hpp
std::ostream& operator<<(std::ostream&, const X&);

Next, I have two separate translation units:

// a.cpp
#include <x.hpp>
#include <x_print.hpp>
yap::detail::print_value(cout, X{});

// b.cpp
#include <x.hpp>
yap::detail::print_value(cout, X{});

printer chooses a different specialization in a.cpp than it does in b.cpp, which results in undefined behavior. As a result of this problem, I believe that any free functions (including operators) that are intended to be found by ADL *must* be declared alongside the class definition.
[snip]
Let me bring operator||/operator&& short circuiting back up. After implementing it, I don't think it's appropriate. Consider the current non-short-circuiting implementation on the boost_review branch:
https://github.com/tzlaine/yap/blob/boost_review/include/boost/yap/detail/de...
#define BOOST_YAP_BINARY_OPERATOR_CASE(op, op_name) \
template <> \
struct default_eval_expr_impl<expr_kind:: op_name> \
{ \
    template <typename Expr, typename ...T> \
    decltype(auto) operator() (Expr && expr, T && ... args) \
    { \
        using namespace hana::literals; \
        return \
            default_eval_expr(static_cast<Expr &&>(expr).elements[0_c], static_cast<T &&>(args)...) op \
            default_eval_expr(static_cast<Expr &&>(expr).elements[1_c], static_cast<T &&>(args)...); \
    } \
};
This implementation is fine. The problem was the original implementation that called eval_ ## op_name, bypassing built-in short-circuiting.
If the "&&" in "default_eval_expr(...) && default_eval_expr(...)" resolves to a built-in operator&&(), short-circuiting will happen as always, right? But if the "&&" resolves to a user-defined operator&&(), let's say this one:
struct my_type
{
    // ...
    std::shared_ptr<int> operator&&(my_type const & rhs) {/*...*/}
};
then I really don't want to artificially impose the short-circuiting. The whole point of evaluate() is that it evaluates to whatever your Yap expression would have if it were a plain ol' non-Yap expression.
I agree with you. You don't need to add your own short-circuiting. You just need to not get in the way of the language's short-circuiting. In Christ, Steven Watanabe

On Thu, Feb 15, 2018 at 10:11 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/14/2018 10:29 PM, Zach Laine via Boost wrote:
On Wed, Feb 14, 2018 at 4:45 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
On 02/14/2018 03:11 PM, Zach Laine via Boost wrote:
On Tue, Feb 13, 2018 at 4:51 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
algorithm.hpp:
221: template <long long I, typename Expr> decltype(auto) get (Expr && expr, hana::llong<I> i); Does this need to be restricted to hana::llong, rather than, say, std::integral_constant?
It's the lack of nice syntax for integral_constant literals that made me choose this. I write get(expr, N_c) a lot, and I expect users to as well.
Supporting an IntegralConstant doesn't mean that you can't pass 0_c.
Is the change from std::integral_constant to IntegralConstant significant? That is, are you asking that I accept models of some concept, or just std::integral_constant?
I meant hana::IntegralConstant in the first place. std::integral_constant was just an example of another IntegralConstant, which hana::at can accept.
Great. Will do then.
<snip>
expression_free_operators.hpp/expression_if_else.hpp:
- Why is this not part of expression.hpp?
For the same reason that expression_free_operators.hpp is not. Those three headers are separate pieces that you may want some or all of, so I kept them separate.
expression_if_else makes a certain amount of sense as being separate. Making expression_free_operators separate comes with a high risk of ODR problems in any code that has different behavior depending on whether a given operator is defined. It's extra strange because expression_free_operators.hpp only has half the operator overloads, with the other half being defined as members.
I've read this a few times and don't get it. Without including expression_free_operators.hpp, where might the other ODR-colliding operators come from? Do you mean if the user defines her own operators
for
expression<>, and then includes expression_free_operators.hpp in another TU, or something else?
I was talking about the one definition rule as applied to some function that uses the operators. As an example, consider something that you yourself have written:
boost/yap/print.hpp:
template <typename T, typename = void_t<>>
struct printer
{
    std::ostream & operator() (std::ostream & os, T const &)
    { return os << "<<unprintable-value>>"; }
};
template <typename T>
struct printer<
    T,
    void_t<decltype(
        std::declval<std::ostream &>() << std::declval<T const &>()
    )>
>
{
    std::ostream & operator() (std::ostream & os, T const & x)
    { return os << x; }
};
Now consider a class that defines a stream operator separately from the class:
// x.hpp class X {};
// x_print.hpp std::ostream& operator<<(std::ostream&, const X&);
Next, I have two separate translation units:
//a.cpp #include <x.hpp> #include <x_print.hpp> yap::detail::print_value(cout, X{});
// b.cpp #include <x.hpp> yap::detail::print_value(cout, X{});
printer chooses a different specialization in a.cpp than it does in b.cpp, which results in undefined behavior.
As a result of this problem, I believe that any free functions (including operators) that are intended to be found by ADL *must* be declared alongside the class definition.
Ah, thanks. I'll probably just jam everything into expression.hpp then. Zach

AMDG On 02/14/2018 03:11 PM, Zach Laine via Boost wrote:
On Tue, Feb 13, 2018 at 4:51 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
317: in left: kind == expr_kind::expr_ref || detail::arity_of<kind>() == detail::expr_arity::two, This assertion isn't quite strict enough as it will accept an expr_ref to anything. I think you need a get_kind_after_unwrapping_expr_refs function, as this seems to be needed in quite a few places.
That would provide better locality of error messages. However, you'll still get an error, just a call or two down the call stack.
Nope. left does not unwrap references by calling itself recursively (which would handle the checking correctly). It calls get, which has much looser checking.
It's a lot of compile-time mechanism to get an improvement, and this slows down all the non-error cases. I don't think it's a favorable trade.
In Christ, Steven Watanabe

We encourage your participation in this review. At a minimum, please state:
- Whether you believe the library should be accepted into Boost
Not without major improvement of the documentation
- Your name
Alex Hagen-Zanker
- Your knowledge of the problem domain
I have used Expression Templates in my own work. Notably, I am not familiar with Proto, which is a hindrance in reading the documentation.
- What is your evaluation of the library's: * Documentation * Design
I have only looked at the documentation
- Did you attempt to use the library? If so:
No

This is a library with great potential. Expression Templates indeed are rad. However, if you want people to use the library, you need to take more care in documenting its purpose and functionality. I went through the documentation page-by-page and mainly ran into confusion. Please see my notes below:

* Boost YAP review notes

The documentation could be a lot more accessible. First of all, it would be good if you could limit the references to Proto; this is not helpful for users who don't know Proto. Perhaps one separate page where you list the differences between Proto and Yap would be useful.

Next, remove all the jokes. Some are funny and others not, but none are helpful. And when trying to get my head around how to use the library, it is annoying to read the same joke again and again. (Yes, it took me many reads to understand what the library is offering exactly.) The bloggy, casual style is annoying me when I am trying hard to make sense of the library.

Don't use examples without explaining what they are an example of.

It would be good if the documentation could include a definition / explanation of what an expression template is. Actually, I think you need to define what an expression is in this context. In my book, expressions are not just made of operators, but can also include function calls. However, the implicit impression I get from the YAP documentation is that you only consider expressions based on operators, and not functions. For instance, I consider std::sqrt(x) an expression. However, looking at expr_kind (https://tzlaine.github.io/yap/doc/html/boost/yap/expr_kind.html) it seems that this kind of expression is not supported (but it could be as a Transform).

*Section Compiler Support

I don't think it is acceptable to produce a library that cannot be compiled with MSVC's own front-end. I can see that Hana got away with it on the premise of being ground-breaking and pushing the boundaries.
However, YAP is not trying to be groundbreaking; it is offering stuff that Proto already offered and generalizing functionality of Eigen, UBLAS and others. Can you really not support MSVC? What is the bottleneck, and did you try to work around it?

*Section Dependencies

Why depend on Hana? My impression of Hana and the associated review / discussion is that this library is made for demonstration of next-generation C++ rather than actual use.

*Section Header Organisation

"If you want to ensure that you don't accidentally use expression<> (which I recommend),"

I suppose you mean to recommend not using expression<>. WHY do you offer it? At this stage it does not make sense to talk about a thing I could do, but rather shouldn't. You haven't told me anything yet and are confusing me with expression<>: it is there, but don't use it... How would I 'accidentally' use it?

*Section Expressions

https://tzlaine.github.io/yap/doc/html/boost_yap__proposed_/manual/tutorial/...

The language used in the documentation could be much more precise. Superfluous qualifications can be removed, and at times detail can be added.

"Boost.YAP consists of expressions and functions that operate on them. Any type that models the Expression concept will work with any of the functions in Boost.YAP that takes an expression."

Could be reduced to: "Many functions in YAP operate on types that model the Expression concept."

"For a type T to model the Expression concept, T must contain at least an expr_kind (terminal, plus-operation, etc.) and a boost::hana::tuple<> of values. That's it. This means that user-defined templates modeling ExpressionTemplate are very straightforward to make."

The mention of ExpressionTemplate comes out of nowhere. You were explaining Expressions, not ExpressionTemplates. Could be reduced to: "The Expression concept requires: 1. an expr_kind member that indicates the kind of expression (terminal, plus-operation, etc.) 2.
a boost::hana::tuple<> member which contains the values upon which the expression operates."

With a separate line: "The ExpressionTemplate concept requires a template class that models the Expression concept and takes both the expr_kind member value and the boost::hana::tuple<> type as template arguments."

"Ok, so it's not that interesting by itself -- it has no operators. But we can still use it with the Boost.YAP functions that take an Expression. Let's make a plus-expression manually:"

This sentence makes no sense to an uninformed reader. Why would an Expression need to have operators? You said all it needs is an expr_kind and a bh::tuple. What do you mean by make a "plus-expression manually"? Is there another way? It has not been introduced yet. You then use yap::make_terminal and yap::make_expression, functions that have not been introduced. Before, you said that functions take an Expression as input, but evidently these functions do not.

The note about expression<> does not make sense for an uninformed reader at this stage. The note about functions working on Expressions and not ExpressionTemplates seems obvious and does not address a real concern. The note about "one more thing", BOOST_YAP_CONVERSION_OPERATOR_TEMPLATE (a dubious solution), leaves me thoroughly confused. Why did you create this macro if it is a dubious solution? Please don't provide us with dubious solutions. Definitely don't do so at this stage (still having to explain what Yap does).

*Section Kind of Expression

When you write "A terminal contains a non-Expression value", I suppose you mean that "the bh::tuple of a terminal does not include Expression types", because an expression is a type, not a value, and a bh::tuple of Expressions is also a non-Expression (bh::tuple does not have an expr_kind member).

*Section Operators

Starts with an example, without clarifying what it is an example of. You write "As shown here, that sort of operator can be mixed with normal, non-lazy ones".
I don't see this at all. Nothing has been shown here. There is a macro, but it is not clear what it is doing. I don't know what this "sort of operator" is, and I don't know what a normal, non-lazy operator is. I don't understand what you mean by mixed: what is being mixed with what? You use this macro BOOST_YAP_USER_BINARY_OPERATOR_MEMBER, but don't bother explaining what it does. You then write "Use of the macros is not necessary (you can write your own operators that return Expressions if you like), but it is suitable 99% of the time." So when is it suitable and when not? If I run my program 1000 times, will it give the wrong answer 10 times?

*Section Explicit Transformation

You use graph terminology (breadth-first, node), but you have not yet explained what the graph is. What are the nodes and what are the edges? Is a node each member of the bh::tuple, and, when the member is an Expression, each member of its bh::tuple? You say that transform visits each node, but do not say what happens when a node is visited. The note on this page seems very informal. Can you explain in more precise terms? The note about generic code does not make sense to me. It needs more explanation.

"A Boost.YAP transformation is free-form; it must return a value, but may do just about anything else. It can transform an expression into anything -- a new expression of any kind, or even a non-expression value (effectively evaluating the expression). For example, here is the get_arity transform from the Calc3 example:"

I see that an xform can do anything to a type that models Expression. This open-endedness is confusing. What is it that xform does?

*Section Implicit Transformation

This section has no content and refers to other parts of the doc: "Implicit evaluation can apply when BOOST_YAP_CONVERSION_OPERATOR_TEMPLATE is defined. Note that this macro only changes the definition of expression<>, though. Your custom expression templates will not be affected.
See the assignment to d1 in Lazy Vector for an example."

OK, you already told me not to use expression<>, so BOOST_YAP_CONVERSION_OPERATOR_TEMPLATE is of no interest to me.

*Section Evaluating Expression

Ok, I understand evaluate and evaluate_as. The bit about BOOST_YAP_CONVERSION_OPERATOR_TEMPLATE is highly confusing. It only pertains to expression<>, which you already told me not to use.

*Section Printing

Good.

*Section Operator Macros

This section is cryptic. For instance, in the "Use for" column, BOOST_YAP_USER_UDT_UNARY_OPERATOR says: "Free operators defined over non-Expression types constrained by a type trait (e.g. all std::map<>s)." What is a type constrained by a type trait, and how is that related to std::map?

*Section How Expression Operands Are Treated

This starts by talking about expression<>, which you told me not to use.

"Due to a limitation of Doxygen, each of the value(), left(), right(), and operator overloads listed here is a stand-in for three member functions."

Doxygen is a tool; it should work for you. You should not let it dictate how you document the library.

Some parts of the documentation use boost::yap, others bost::yap.

Did you consider Robert Ramey's advice on how to document libraries? https://www.youtube.com/watch?v=YxmdCxX9dMk . I think it might be beneficial for Yap.

I hope this is helpful. I won't manage to do a full review, so this is just based on the manual part of the documentation.

Alex

On Tue, Feb 13, 2018 at 7:07 PM, a.hagen-zanker--- via Boost < boost@lists.boost.org> wrote:
We encourage your participation in this review. At a minimum, please state:
Thanks for reviewing, Alex.
* Boost YAP review notes
The documentation could be a lot more accessible. First of all it would be good if you could limit the references to Proto, this is not helpful for users who don't know Proto. Perhaps one separate page where you list the difference between Proto and Yap would be useful.
Yeah, I'm getting the impression you're not alone in not having used Proto. I'll do exactly as you suggest here. *TODO*
Next, remove all the jokes. Some are funny and others not, but none are helpful. And when trying to get my head around how to use the library, it is annoying to read the same joke again-and-again. (Yes, it took me many reads to understand what the library is offering exactly). The bloggy casual style is annoying me, when I am trying hard to make sense of the library.
Well, I'm a pretty casual person.
Don't use examples without explaining what they are an example of.
Could you give an example? No snark, I just don't know exactly what you're referring to here.
It would be good if the documentation page could include a definition / explanation of what an expression template is.
Agreed.
Actually, I think you need to define what an expression is in this context. In my book, expressions are not just made of operators, but can also include function calls. However, the implicit impression I get from the YAP documentation is that you only consider expressions based on operators, and not functions. For instance, I consider std::sqrt(x) an expression. However, looking at expr_kind (https://tzlaine.github.io/yap/doc/html/boost/yap/expr_kind.html) it seems that this kind of expression is not supported (but it could be as a Transform).
That's supported. std::sqrt(x) would need to be Yap-ified if x is not already a Yap expression, perhaps by making a sqrt Yap terminal, or by calling std::sqrt on a Yap expression. Assuming x is a Yap expression, the entire expression "std::sqrt(x)" is a Yap expression whose kind is expr_kind::call. This underscores the need to make this more explicit in the docs, though. *TODO*
*Section Compiler Support I don't think it is acceptable to produce a library that cannot be compiled with MSVC's own front-end. I can see that Hana got away with it on the premise of being ground-breaking and pushing the boundaries. However, YAP is not trying to be groundbreaking; it is offering stuff that Proto already offered and generalizing functionality of Eigen, UBLAS and others.
Yap's feature set is essentially a superset of Proto's, but with reasonable compile times. That is something that no library has done before. This is literally the definition of groundbreaking. But I take your point. You want MSVC support.
Can you really not support MSVC? What is the bottleneck, and did you try to work around it?
MSVC does not have a conforming C++14 compiler. Its constexpr support is quite bad. Louis Dionne has mentioned that the MSVC team is explicitly trying to get Hana working as part of its recent development efforts. Hopefully such changes land soon. I'm not terribly interested in undertaking herculean efforts to make a constexpr-friendly and metaprogramming-heavy library work on a constexpr- and metaprogramming-unfriendly compiler. Some Boost libraries do that and some do not, and this has been the case for years.
*Section Dependencies Why depend on Hana, my impression of Hana and the associated review / discussion is that this library is made for demonstration of next generation C++ rather than actual use.
Because it makes everything having to do with tuple use and manipulation easy. This is why we make and use libraries, right? So we don't each have to reinvent the wheel? Also, a 4-year-old standard is no longer appropriately called "next generation C++," especially when it's not even the most recent one.
*Section Header Organisation "If you want to ensure that you don't accidentally use expression<> (which I recommend),"
I suppose you mean to recommend not using expression<>. WHY do you offer it? At this stage it does not make sense to talk about a thing I could do, but rather shouldn't. You haven't told me anything yet and are confusing me with expression<>: it is there, but don't use it... How would I 'accidentally' use it?
I'll make this clearer in the docs. The idea here is that expression<> exists for limited uses of Yap where no code is required for your expression template itself -- you just use expression<>. Such cases are small ET systems local to a TU, which are essentially an implementation detail; quick sketches or prototypes to get the broad strokes of an ET solution before actually implementing your own template(s); and sloppy individuals who don't really care about type safety. *TODO*
*Section Expressions https://tzlaine.github.io/yap/doc/html/boost_yap__proposed_/manual/tutorial/expressions.html
The language used in the documentation could be much more precise. Superfluous qualifications can be removed and at times detail can be added.
[snip] All the remaining comments are an indication to me that the introduction of new ideas in the docs needs to be slower-paced and more explicit. And that bost:: is not a thing. I'll make those changes. *TODO*. Zach

On 2/14/2018 4:00 PM, Zach Laine via Boost wrote:
On Tue, Feb 13, 2018 at 7:07 PM, a.hagen-zanker--- via Boost < boost@lists.boost.org> wrote:
*Section Compiler Support I don't think it is acceptable to produce a library that cannot be compiled with MSVC's own front-end. I can see that Hana got away with it on the premise of being ground-breaking and pushing the boundaries. However, YAP is not trying to be groundbreaking; it is offering stuff that Proto already offered and generalizing functionality of Eigen, UBLAS and others.
Yap's feature set is essentially a superset of Proto's, but with reasonable compile times. That is something that no library has done before. This is literally the definition of groundbreaking. But I take your point. You want MSVC support.
Can you really not support MSVC? What is the bottleneck, and did you try to work around it?
MSVC does not have a conforming C++14 compiler. Its constexpr support is quite bad. Louis Dionne has mentioned that the MSVC team is explicitly trying to get Hana working as part of its recent development efforts. Hopefully such changes land soon. I'm not terribly interested in undertaking herculean efforts to make a constexpr-friendly and metaprogramming-heavy library work on a constexpr- and metaprogramming-unfriendly compiler. Some Boost libraries do that and some do not, and this has been the case for years.
MSVC is the big gorilla in the room. Because of that, many man-hours have been spent by many Boost developers getting code to work with MSVC. I can tell you with certainty that, because of its non-standard preprocessor, it took many times the amount of work getting variadic macros in Boost PP to work with MSVC and getting VMD to work with MSVC than all the other work put together in those areas. With that said, I want to support your decision not to have to make "herculean efforts", as you say, to get MSVC to work with your library. It's long past time for MSVC to support C++ correctly, and this goes not only for the preprocessor, but also for support of the latest C++ standards. Boost libraries, which should be targeting standards, are one of the best pieces of software to get companies like Microsoft (hardly the only perpetrator) to create a C++ compiler which follows the standard. snip...
Zach

Don't use examples without explaining what they are an example of.
Could you give an example? No snark, I just don't know exactly what you're referring to here.
https://tzlaine.github.io/yap/doc/html/boost_yap__proposed_/manual/tutorial/... "Let's see an expression template type with some operators:" OK I see it, but I don't understand what you want me to be looking at. There is only one operator (with some caveat) and two macros that could well be operators. But to be honest, I just looked and didn't find many more cases. I think you can chalk this up to the general request for the tutorial: slow down and introduce more.
*Section Dependencies Why depend on Hana, my impression of Hana and the associated review / discussion is that this library is made for demonstration of next generation C++ rather than actual use.
Because it makes everything having to do with tuple use and manipulation easy. This is why we make and use libraries, right? So we don't each have to reinvent the wheel?
That could be right. However, to me it seems that you chose quite an esoteric template class for the public interface.

On Thu, Feb 15, 2018 at 12:21 PM, a.hagen-zanker--- via Boost < boost@lists.boost.org> wrote:
Don't use examples without explaining what they are an example of.
Could you give an example? No snark, I just don't know exactly what you're referring to here.
https://tzlaine.github.io/yap/doc/html/boost_yap__proposed_/manual/tutorial/operators.html
"Let's see an expression template type with some operators:"
OK I see it, but I don't understand what you want me to be looking at. There is only one operator (with some caveat) and two macros that could well be operators.
But to be honest, I just looked and didn't find many more cases.
I think you can chalk this up to the general request for the tutorial: slow down and introduce more.
Right. That was my take-away from your review. I'm working on that, and I'm even using std::sqrt() in particular in the intro example explaining expression templates.
*Section Dependencies Why depend on Hana, my impression of Hana and the associated review / discussion is that this library is made for demonstration of next generation C++ rather than actual use.
Because it makes everything having to do with tuple use and manipulation easy. This is why we make and use libraries, right? So we don't each have to reinvent the wheel?
That could be right. However, to me it seems that you chose quite an esoteric template class for the public interface.
The alternative would be to have the user write std::tuple instead of boost::hana::tuple. That seems like a pretty modest ask, considering that std::tuple has no associated algorithms. As Paul Fultz pointed out previously though, Hana can operate on users' std::tuples. So the question comes down to how much compile time (std::tuple is a *lot* slower to compile than hana::tuple) and implementation cost do I want to give up to get MSVC support. My answer is "not much". Others may differ. Patches are most welcome. I don't mean that in a "go away" sense -- I really am willing to maintain an MSVC code path. I'm just not willing to write one from scratch. Zach

On Thu, Feb 15, 2018 at 4:05 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
(std::tuple is a *lot* slower to compile than hana::tuple)
Is it? Interesting.
Doesn't seem so according to metaben.ch.
None of those measures time to compile simple construction. Louis has such a benchmark lying around somewhere, and my claim is based on having seen that. I'm not doing much of what metabench measures in Yap itself.
Zach

Zach Laine wrote:
Doesn't seem so according to metaben.ch.
None of those measures time to compile simple construction.
There is such a benchmark, "make", and it does show std::tuple a lot slower (2-5x depending on number of elements.) YAP tuples aren't going to have 100 elements though, and for short tuples, there isn't that much of a difference.

On Thu, Feb 15, 2018 at 4:30 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
Doesn't seem so according to metaben.ch.
None of those measures time to compile simple construction.
There is such a benchmark, "make", and it does show std::tuple a lot slower (2-5x depending on number of elements.)
Ok, thanks.
YAP tuples aren't going to have 100 elements though, and for short tuples, there isn't that much of a difference.
True enough. Zach

On Thu, Feb 15, 2018 at 2:30 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
Doesn't seem so according to metaben.ch.
None of those measures time to compile simple construction.
There is such a benchmark, "make", and it does show std::tuple a lot slower (2-5x depending on number of elements.)
YAP tuples aren't going to have 100 elements though, and for short tuples, there isn't that much of a difference.
The problem is not that you're not creating long tuples, it's that you're creating many short ones. If you look at 10 elements, for example, it's still 10x faster to create a hana::tuple than a std::tuple (chart: https://imgur.com/a/To5JZ). If the cost of creating tuples dominates the compilation time for code that uses Yap, switching to std::tuple in the backend could incur a noticeable slowdown. Similarly, using something even more basic like hana::basic_tuple or rolling your own tuple (without EBO support for example) might yield additional speedups. Of course, it's hard to tell without doing the work and trying it out. Louis

On Sun, Feb 18, 2018 at 7:22 PM, Louis Dionne via Boost < boost@lists.boost.org> wrote:
On Thu, Feb 15, 2018 at 2:30 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
Doesn't seem so according to metaben.ch.
None of those measures time to compile simple construction.
There is such a benchmark, "make", and it does show std::tuple a lot slower (2-5x depending on number of elements.)
YAP tuples aren't going to have 100 elements though, and for short tuples, there isn't that much of a difference.
The problem is not that you're not creating long tuples, it's that you're creating many short ones. If you look at 10 elements, for example, it's still 10x faster to create a hana::tuple than a std::tuple (chart: https://imgur.com/a/To5JZ). If the cost of creating tuples dominates the compilation time for code that uses Yap, switching to std::tuple in the backend could incur a noticeable slowdown. Similarly, using something even more basic like hana::basic_tuple or rolling your own tuple (without EBO support for example) might yield additional speedups. Of course, it's hard to tell without doing the work and trying it out.
Thanks, Louis. This is more like the very large difference in compile times that I remembered when I last looked at this. Zach

On Thu, Feb 15, 2018 at 4:00 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
(std::tuple is a *lot* slower to compile than hana::tuple)
Is it? Interesting.
Either way, that's up to the user, isn't it? If he wants to use std::tuple and index with `get(expr, mp_int<0>())` or `get_c<0>(expr)` instead of `get(expr, 0_c)`, let him.
It could be, but that would be a much more complicated interface. As it is, the user need only specify an expression kind non-type template parameter and a Hana tuple to create a model of Expression. Adding another point of customization by allowing her to specify anything for the tuple that models some Tuple concept is a lot to add, considering how simple the status quo is. I think it should be hana::tuple or std::tuple rather than any Tuple. Zach

Zach Laine wrote:
Either way, that's up to the user, isn't it? If he wants to use std::tuple and index with `get(expr, mp_int<0>())` or `get_c<0>(expr)` instead of `get(expr, 0_c)`, let him.
It could be, but that would be a much more complicated interface. As it is, the user need only specify an expression kind non-type template parameter and a Hana tuple to create a model of Expression.
The interface is not more complicated at all. The user just puts an std::tuple where he'd have put the hana::tuple. The implementation is more complicated. :-)
Adding another point of customization by allowing her to specify anything for the tuple that models some Tuple concept is a lot to add, considering how simple the status quo is. I think it should be hana::tuple or std::tuple rather than any Tuple.
I doubt that anyone would object to that; outside of std::pair and std::array, arbitrary TupleLike(*) types are rare and nothing supports them properly. That said, supporting TupleLike instead of std::tuple doesn't seem much more of an effort to me, but that's only based on intuition and a cursory look, so I may well be wrong on that one.

I just tried, and Hana doesn't like any of the standard TupleLikes, not even std::tuple. This strikes me as odd, given that everything is generic and properly conceptified. Am I doing something wrong?

    std::tuple<int> v1;
    std::cout << boost::hana::length( v1 ) << std::endl;

    \boost-git\develop/boost/hana/length.hpp:32:9: error: static_assert failed
    "hana::length(xs) requires 'xs' to be Foldable"
        static_assert(hana::Foldable<S>::value,
        ^             ~~~~~~~~~~~~~~~~~~~~~~~~

(*) TupleLike: has std::tuple_size<T>, std::tuple_element<T>, get<I>(t)

AMDG On 02/15/2018 03:58 PM, Peter Dimov via Boost wrote:
I just tried and Hana doesn't like any of the standard TupleLikes, not even std::tuple. This strikes me as odd, given that everything is generic and properly conceptified. Am I doing something wrong?
Did you #include <boost/hana/ext/std/tuple.hpp>?
std::tuple<int> v1; std::cout << boost::hana::length( v1 ) << std::endl;
\boost-git\develop/boost/hana/length.hpp:32:9: error: static_assert failed "hana::length(xs) requires 'xs' to be Foldable" static_assert(hana::Foldable<S>::value, ^ ~~~~~~~~~~~~~~~~~~~~~~~~
In Christ, Steven Watanabe

Steven Watanabe wrote:
On 02/15/2018 03:58 PM, Peter Dimov via Boost wrote:
I just tried and Hana doesn't like any of the standard TupleLikes, not even std::tuple. This strikes me as odd, given that everything is generic and properly conceptified. Am I doing something wrong?
Did you #include <boost/hana/ext/std/tuple.hpp>?
You're right, I hadn't. I knew I was missing something like that.

On Thu, Feb 15, 2018 at 4:58 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
Either way, that's up to the user, isn't it? If he wants to use std::tuple and index with `get(expr, mp_int<0>())` or `get_c<0>(expr)` instead of `get(expr, 0_c)`, let him.
It could be, but that would be a much more complicated interface. As it is, the user need only specify an expression kind non-type template parameter and a Hana tuple to create a model of Expression.
The interface is not more complicated at all. The user just puts an std::tuple where he'd have put the hana::tuple. The implementation is more complicated. :-)
Well, mostly, but it's still user-visible. A transform might pattern match against expressions like this:

    struct my_xform {
        template<typename TerminalValue1, typename TerminalValue2>
        auto operator()(
            plus_tag,
            my_expr<expr_kind::terminal, TUPLE<TerminalValue1>> const & lhs,
            my_expr<expr_kind::terminal, TUPLE<TerminalValue2>> const & rhs)
        { return /*...*/; }
    };

If TUPLE can be only std::tuple or hana::tuple, that's a lot easier to write than if it can be any TupleLike. Zach

Zach Laine wrote:
The interface is not more complicated at all. The user just puts an std::tuple where he'd have put the hana::tuple. The implementation is more complicated. :-)
Well, mostly, but it's still user-visible. A transform might pattern match against expressions like this:
struct my_xform {
    template<typename TerminalValue1, typename TerminalValue2>
    auto operator()(
        plus_tag,
        my_expr<expr_kind::terminal, TUPLE<TerminalValue1>> const & lhs,
        my_expr<expr_kind::terminal, TUPLE<TerminalValue2>> const & rhs)
    { return /*...*/; }
};
If TUPLE can be only std::tuple or hana::tuple, that's a lot easier to write than if it can be any TupleLike.
That's true in the case where the authors of `my_xform` and `my_expr` are different. If the same person has written both, he would know what tuple to use. But yes. It's also true that for any nontrivial manipulation you'd need tuple algorithms, which std::tuple doesn't have, as your get_arity example demonstrates. And yet, tying to Hana doesn't feel right. I'm thinking about properly completing the "tuple" part of mp11 with all the algorithms that make sense, starting with `transform`, but I doubt I'll have time for that in any near future.

struct my_xform {
    template<typename TerminalValue1, typename TerminalValue2>
    auto operator()(
        plus_tag,
        my_expr<expr_kind::terminal, TUPLE<TerminalValue1>> const & lhs,
        my_expr<expr_kind::terminal, TUPLE<TerminalValue2>> const & rhs)
    { return /*...*/; }
};
It now occurs to me that if my_expr is not <Kind, Tuple>, but <Kind, T...>, a transform would be able to match it without knowing the tuple type.

Peter Dimov: It now occurs to me that if my_expr is not <Kind, Tuple>, but <Kind, T...>, a transform would be able to match it without knowing the tuple type.
Zach, are you giving this serious consideration? I think it sounds like an excellent suggestion; it would bring the interface closer to the conceptual model (i.e. an expression consists of a function and its arguments, which can also be expressions). The discussion of std::tuple / hana::tuple / TupleLike suggests to me that something ended up in the interface that really should be an implementation detail. I realise this would be a big refactor. One other advantage, beyond conceptual clarity, would be that the Transform Matching will be much more concise.

On Fri, Feb 16, 2018 at 4:41 AM, a.hagen-zanker--- via Boost < boost@lists.boost.org> wrote:
Peter Dimov: It now occurs to me that if my_expr is not <Kind, Tuple>, but <Kind, T...>, a transform would be able to match it without knowing the tuple type.
Zach, are you giving this serious consideration?
No. The library started out this way, actually. It makes everything a lot more verbose in the implementation, of course, but for the user too. If the user wants the interface locally, she can always add a template alias that papers over the existence of the tuple. Zach

AMDG On 02/15/2018 05:55 PM, Peter Dimov via Boost wrote:
<snip> It's also true that for any nontrivial manipulation you'd need tuple algorithms, which std::tuple doesn't have, as your get_arity example demonstrates. And yet, tying to Hana doesn't feel right.
What's the point of making libraries if you refuse to use them?
I'm thinking about properly completing the "tuple" part of mp11 with all the algorithms that make sense, starting with `transform`, but I doubt I'll have time for that in any near future.
So, you're going to re-implement those parts of hana, just so that everyone has two libraries to avoid (because dependencies are so terrible). In Christ, Steven Watanabe

Steven Watanabe wrote:
On 02/15/2018 05:55 PM, Peter Dimov via Boost wrote:
<snip> It's also true that for any nontrivial manipulation you'd need tuple algorithms, which std::tuple doesn't have, as your get_arity example demonstrates. And yet, tying to Hana doesn't feel right.
What's the point of making libraries if you refuse to use them?
Who is "you" in this sentence?

Steven Watanabe wrote:
On 02/15/2018 05:55 PM, Peter Dimov via Boost wrote:
<snip> It's also true that for any nontrivial manipulation you'd need tuple algorithms, which std::tuple doesn't have, as your get_arity example demonstrates. And yet, tying to Hana doesn't feel right.
What's the point of making libraries if you refuse to use them?
Who is "you" in this sentence?
This wasn't a rhetorical question. There are three levels at which Hana can be used: 1. hana::tuple in the interface of Yap, resp. in the specification and tests 2. In the implementation of Yap 3. In user code based on Yap (f.ex. implementing transforms) For which level do you object to a supposed refusal to use Hana? Because I was talking about #1.

AMDG On 02/16/2018 06:41 AM, Peter Dimov via Boost wrote:
Steven Watanabe wrote:
On 02/15/2018 05:55 PM, Peter Dimov via Boost wrote:
<snip> It's also true that for any nontrivial manipulation you'd need tuple algorithms, which std::tuple doesn't have, as your get_arity example demonstrates. And yet, tying to Hana doesn't feel right.
What's the point of making libraries if you refuse to use them?
Who is "you" in this sentence?
This wasn't a rhetorical question.
There are three levels at which Hana can be used:
1. hana::tuple in the interface of Yap, resp. in the specification and tests 2. In the implementation of Yap 3. In user code based on Yap (f.ex. implementing transforms)
For which level do you object to a supposed refusal to use Hana?
I'm going to split (1): 1a. Users passing tuples to Yap should not be restricted to hana::tuple. 1b. On the other hand, it's completely reasonable for Yap to give hana::tuple to users. (Yap has to choose something, after all) I think that Yap is closer to 1b than 1a. (I definitely object to (2) and as for (3), well, if users don't want to use hana, it's their loss.)
Because I was talking about #1.
The thing is, in most cases, the tuple type is chosen by Yap, not by the user. Most of the time, users don't actually need to care what the tuple type is, either.

    template<expr_kind K, class Tuple>
    struct my_expr {
        static const expr_kind kind = K;
        Tuple elements;
    };

    struct a_transform {
        template<class T>
        auto operator()(terminal_tag, T && t);
        // ...
    };

Consequently, I think it's fine for Yap to use whatever is most convenient. For the record, if Zach wants to use std::tuple everywhere instead, I think that's also fine. In Christ, Steven Watanabe

Steven Watanabe wrote:
The thing is, in most cases, the tuple type is chosen by Yap, not by the user. Most of the time, users don't actually need to care what the tuple type is, either.
...
struct a_transform { template<class T> auto operator()(terminal_tag, T&&t); // ... };
Right. I'd guess that tag matching is a later addition, because it only works for simple cases. You can't do

    auto operator()(any_expression_tag, T&&... t);

because there's no way to match a tag; they are just classes, unrelated to anything (and they do not even appear in the reference.) One would expect, instead of

    struct plus_tag {};

something rather like

    template<expr_kind kind> struct expression_tag {};
    using plus_tag = expression_tag<expr_kind::plus>;

and now, f.ex.,

    template<expr_kind kind, class... T>
    auto get_arity::operator()(expression_tag<kind>, T const&... t) const
    {
        return std::max( { yap::transform(t, get_arity{})... } );
    }

On Fri, Feb 16, 2018 at 10:22 AM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Steven Watanabe wrote:
The thing is, in most cases, the tuple type is chosen by Yap, not by the
user. Most of the time, users don't actually need to care what the tuple type is, either.
...
struct a_transform {
template<class T> auto operator()(terminal_tag, T&&t); // ... };
Right.
I'd guess that tag matching is a later addition, because it only works for simple cases. You can't do
auto operator()(any_expression_tag, T&&... t);
I don't understand what that would mean, exactly. What you can write, though, is:

    template <typename Tag, typename... T>
    auto operator()(Tag, T &&... t)

Is that different from the intended use case above? [snip]

    template<expr_kind kind, class... T>
    auto get_arity::operator()(expression_tag<kind>, T const&... t) const
    {
        return std::max( { yap::transform(t, get_arity{})... } );
    }
I'm pretty sure you actually can write almost exactly this using the template I showed above. I'll verify that this is the case. That might make the example a lot easier to understand. Zach

AMDG On 02/16/2018 01:09 PM, Zach Laine via Boost wrote:
I don't understand what that would mean, exactly. What you can write, though, is:
template <typename Tag, typename... T> auto operator()(Tag, T &&... t)
Is that different from the intended use case above?
There's one annoying problem with this, which is that it can also match an expression-transform call, with Tag deduced as the Expression and T as an empty parameter pack. In Christ, Steven Watanabe

On Fri, Feb 16, 2018 at 2:28 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/16/2018 01:09 PM, Zach Laine via Boost wrote:
I don't understand what that would mean, exactly. What you can write, though, is:
template <typename Tag, typename... T> auto operator()(Tag, T &&... t)
Is that different from the intended use case above?
There's one annoying problem with this, which is that it can match an expression transform, with Tag as the Expression and T as an empty parameter pack.
Very true, and very annoying. I tend to explicitly cover the possible arities. They are unary, binary, ternary for if_else, and N for call. I'm certainly open to something that makes transforms easier to write, if you or Peter have ideas. In this case, something like the expression_tag<> scheme Peter recommended helps with this particular ambiguity:

    template <typename Tag, typename... T>
    auto operator()(expression_tag<Tag>, T &&... t)
    {
        // and it also seems to help with writing targeted transforms in C++17:
        if (yap::to_kind<Tag>() == yap::expr_kind::terminal) {
            return yap::transform(yap::as_expr<my_expr>(std::forward<T>(t)));
        } else {
            return yap::make_expression<yap::to_kind<my_expr, Tag>()>(std::forward<T>(t)...);
        }
    }

Zach

Zach Laine wrote:
In this case, something like the expression_tag<> scheme Peter recommended helps with this particular ambiguity:
template <typename Tag, typename... T> auto operator()(expression_tag<Tag>, T &&... t)
What I suggested was rather template <expr_kind Kind, typename... T> auto operator()(expression_tag<Kind>, T &&... t) as this enforces the necessary 1:1 correspondence between kinds and tags.
{ // and it also seems to help with writing targeted transforms in C++17: if (yap::to_kind<Tag>() == yap::expr_kind::terminal) {
Resp. if (Kind == yap::expr_kind::terminal) { There's an argument to be made in favor of making Expression take a Tag first parameter instead of the non-type Kind (`template<class...> class` is more regular and easier to manipulate in a generic manner) but that might be too much of a change.

On Fri, Feb 16, 2018 at 4:26 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
In this case, something like the expression_tag<> scheme Peter recommended helps with this particular ambiguity:
template <typename Tag, typename... T> auto operator()(expression_tag<Tag>, T &&... t)
What I suggested was rather
template <expr_kind Kind, typename... T> auto operator()(expression_tag<Kind>, T &&... t)
as this enforces the necessary 1:1 correspondence between kinds and tags.
Right. I missed that the first time. Thanks. Zach

On Fri, Feb 16, 2018 at 4:26 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
In this case, something like the expression_tag<> scheme Peter recommended helps with this particular ambiguity:
template <typename Tag, typename... T> auto operator()(expression_tag<Tag>, T &&... t)
What I suggested was rather
template <expr_kind Kind, typename... T> auto operator()(expression_tag<Kind>, T &&... t)
as this enforces the necessary 1:1 correspondence between kinds and tags.
{
    // and it also seems to help with writing targeted transforms in C++17:
    if (yap::to_kind<Tag>() == yap::expr_kind::terminal) {
Resp.
if (Kind == yap::expr_kind::terminal) {
There's an argument to be made in favor of making Expression take a Tag first parameter instead of the non-type Kind (`template<class...> class` is more regular and easier to manipulate in a generic manner) but that might be too much of a change.
I'm vaguely embarrassed that it didn't already work this way. I've made this change on the boost_review branch, and the recursive case of the get_arity example transform is now:

    template <boost::yap::expr_kind Kind, typename... Arg>
    auto operator() (boost::yap::expr_tag<Kind>, Arg &&... arg)
    {
        return boost::hana::maximum(
            boost::hana::make_tuple(
                boost::yap::transform(
                    boost::yap::as_expr(std::forward<Arg>(arg)),
                    get_arity{}
                )...
            )
        );
    }

It's not quite as nice as using std::max() as you wrote previously, because of course std::max() is not yet constexpr.

Zach

On Sat, Feb 17, 2018 at 9:32 AM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
It's not quite as nice as using std::max() as you wrote previously,
because of course std::max() is not yet constexpr.
It is, but only under C++17.
I hadn't noticed. I sort of assumed that it was getting added along with all the algorithms constexpr-fied for C++20 in Albuquerque. According to cppreference, it's been constexpr since C++14. Anyway, I can't get the compiler to treat the result of get_arity as a constant expression using std::max, even in C++17 mode with a recent-ish Clang, so I'm leaving the example as-is for now. Zach

Zach Laine wrote:
Anyway, I can't get the compiler to treat the result of get_arity as a constant expression using std::max, even in C++17 mode with a recent-ish Clang, so I'm leaving the example as-is for now.
Yes, unfortunately, the straightforward way doesn't work. This does:

    template <boost::yap::expr_kind Kind, typename... Arg>
    auto operator() (boost::yap::expr_tag<Kind>, Arg &&... arg)
    {
        auto constexpr r = std::max( {
            decltype(
                boost::yap::transform(
                    boost::yap::as_expr(std::forward<Arg>(arg)),
                    get_arity{}
                )
            )::value...
        } );
        return boost::hana::llong<r>{};
    }

And you're right, it works even with -std=c++14. Whether this is really an improvement over what we started with is questionable. :-)

Zach Laine wrote:
I don't understand what that would mean, exactly. What you can write, though, is:

    template <typename Tag, typename... T>
    auto operator()(Tag, T &&... t)
I considered this as an option and rejected it because it also matches an expression. On second thought, however, if this form takes priority over the one taking an expression, this might not be an issue.

Steven Watanabe wrote:
(I definitely object to (2) and as for (3), well, if users don't want to use hana, it's their loss.)
You keep implying that not using Hana is a choice done for irrational reasons such as, for instance, "avoiding a dependency", rather than, for instance, being forced by the fact that Hana does not compile on MSVC. Why do you do this, and what do you expect to gain from that, I don't know.

AMDG On 02/16/2018 10:15 AM, Peter Dimov via Boost wrote:
Steven Watanabe wrote:
(I definitely object to (2) and as for (3), well, if users don't want to use hana, it's their loss.)
You keep implying that not using Hana is a choice done for irrational reasons such as, for instance, "avoiding a dependency", rather than, for instance, being forced by the fact that Hana does not compile on MSVC.
That might be a reasonable argument if Yap itself were not a C++14 library. According to the hana Jamfile, the reason hana is not supported for msvc is cxx14_constexpr, which Yap also uses.
Why do you do this, and what do you expect to gain from that, I don't know.
In Christ, Steven Watanabe

Steven Watanabe wrote:
That might be a reasonable argument if Yap itself were not a C++14 library. According to the hana Jamfile, the reason hana is not supported for msvc is cxx14_constexpr, which Yap also uses.
MSVC 2017 has more than enough C++14 (including constexpr) for many practical purposes. If Yap weren't tied to Hana in its interface, a port of the implementation allowing use under MSVC might have been possible. As is, it isn't, because MSVC 2017 can't even include hana.hpp at present, let alone make any use of it.

AMDG On 02/16/2018 10:55 AM, Peter Dimov via Boost wrote:
Steven Watanabe wrote:
That might be a reasonable argument if Yap itself were not a C++14 library. According to the hana Jamfile, the reason hana is not supported for msvc is cxx14_constexpr, which Yap also uses.
MSVC 2017 has more than enough C++14 (including constexpr) for many practical purposes. If Yap weren't tied to Hana in its interface, a port of the implementation allowing use under MSVC might have been possible. As is, it isn't, because MSVC 2017 can't even include hana.hpp at present, let alone make any use of it.
It took me about 5 minutes to fix that. The only problem is that msvc doesn't accept nested name specifiers off of decltype, like decltype(f(x))::value.

cl.exe: 19.12.25824

In Christ, Steven Watanabe

Steven Watanabe wrote:
As is, it isn't, because MSVC 2017 can't even include hana.hpp at present, let alone make any use of it.
It took me about 5 minutes to fix that.
The implied argument being that it's better to fix Hana, or at least those parts of it that Yap uses, to compile under MSVC instead? I still don't think it's better to have the user write

    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            boost::hana::tuple<
                boost::yap::expression<
                    boost::yap::expr_kind::multiplies,
                    boost::hana::tuple< Expr1, Expr2 >
                >,
                Expr3
            >
        > const & expr )

instead of

    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            std::tuple<
                boost::yap::expression<
                    boost::yap::expr_kind::multiplies,
                    std::tuple< Expr1, Expr2 >
                >,
                Expr3
            >
        > const & expr )

or

    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            boost::yap::expression<
                boost::yap::expr_kind::multiplies,
                Expr1, Expr2
            >,
            Expr3
        > const & expr )

but, whatever.

AMDG On 02/16/2018 11:29 AM, Peter Dimov via Boost wrote:
Steven Watanabe wrote:
As is, it isn't, because MSVC 2017 can't even include hana.hpp at > present, let alone make any use of it.
It took me about 5 minutes to fix that.
The implied argument being that it's better to fix Hana, or at least those parts of it that Yap uses, to compile under MSVC instead?
Yes. After working around the decltype()::value problem, I just had to add /permissive- to get (at least some of) Hana's tests to build with msvc. Yap itself has more problems on msvc than Hana does. (I gave up after seeing it ICE in the SFINAE for tag transforms.)
I still don't think it's better to have the user write
    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            boost::hana::tuple<
                boost::yap::expression<
                    boost::yap::expr_kind::multiplies,
                    boost::hana::tuple< Expr1, Expr2 >
                >,
                Expr3
            >
        > const & expr )
instead of
    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            std::tuple<
                boost::yap::expression<
                    boost::yap::expr_kind::multiplies,
                    std::tuple< Expr1, Expr2 >
                >,
                Expr3
            >
        > const & expr )
Between these first two, I don't actually care.
or
    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            boost::yap::expression<
                boost::yap::expr_kind::multiplies,
                Expr1, Expr2
            >,
            Expr3
        > const & expr )
This is a bit nicer in this case, but it's also a bit of a trade-off as the requirements are more difficult to state clearly. Saying it must be exactly a (hana|std)::tuple is completely unambiguous. Also, doing nested matching like this is somewhat imperfect as it doesn't handle expr_refs transparently.
but, whatever.
In Christ, Steven Watanabe

On Fri, Feb 16, 2018 at 2:05 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/16/2018 11:29 AM, Peter Dimov via Boost wrote:
Steven Watanabe wrote:
As is, it isn't, because MSVC 2017 can't even include hana.hpp at > present, let alone make any use of it.
It took me about 5 minutes to fix that.
The implied argument being that it's better to fix Hana, or at least those parts of it that Yap uses, to compile under MSVC instead?
Yes. After working around the decltype()::value problem, I just had to add /permissive- to get (at least some of) Hana's tests to build with msvc. Yap itself has more problems on msvc than Hana does. (I gave up after seeing it ICE in the SFINAE for tag transforms.)
I still don't think it's better to have the user write
    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            boost::hana::tuple<
                boost::yap::expression<
                    boost::yap::expr_kind::multiplies,
                    boost::hana::tuple< Expr1, Expr2 >
                >,
                Expr3
            >
        > const & expr )
instead of
    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            std::tuple<
                boost::yap::expression<
                    boost::yap::expr_kind::multiplies,
                    std::tuple< Expr1, Expr2 >
                >,
                Expr3
            >
        > const & expr )
Between these first two, I don't actually care.
Same here.
or
    template <typename Expr1, typename Expr2, typename Expr3>
    decltype(auto) transform_expression (
        boost::yap::expression<
            boost::yap::expr_kind::plus,
            boost::yap::expression<
                boost::yap::expr_kind::multiplies,
                Expr1, Expr2
            >,
            Expr3
        > const & expr )
This is a bit nicer in this case, but it's also a bit of a trade-off as the requirements are more difficult to state clearly. Saying it must be exactly a (hana|std)::tuple is completely unambiguous. Also, doing nested matching like this is somewhat imperfect as it doesn't handle expr_refs transparently.
Right. There's a larger point here, though. I'm preparing a library for submission to Boost. It works on at least one compiler, which is our (admittedly very loose) standard for submissions. It is also using other Boost libraries to avoid reproducing the work done in those. This is also fine for a Boost library to do, even expected. If you don't want new library authors to use existing Boost libraries, we need a clear rationale for when and why that's wrong.

All the concerns about MSVC support boil down to how many people could use the library on its first day as part of Boost. Even if I were to make it work for MSVC 2017, how many more users does that cover? It seems only to cover the live-at-head folks. If MSVC continues to get better language support, how long does that support cover those folks before the compiler is good enough anyway? My time for that support is not a good trade for me personally.

If someone else wants to navigate MSVC's limitations and submit patches, I'll switch over to std::tuple in the interface, and maintain that work. If that does not happen (and it probably won't), then we know for a fact that using Yap on MSVC does not matter enough for anyone to actually do the work to make that happen. If so, what exactly is this discussion about?

Zach

All the concerns about MSVC support boil down to how many people could use the library on its first day as part of Boost. Even if I were to make it work for MSVC 2017, how many more users does that cover? It seems only to cover the live-at-head folks. If MSVC continues to get better language support, how long does that support cover those folks before the compiler is good enough anyway? My time for that support is not a good trade for me personally. If someone else wants to navigate MSVC's limitations and submit patches, I'll switch over to std::tuple in the interface, and maintain that work.
Does it work on clang LLVM targeting the MSVC ABI? If it does, then lack of MSVC support is a non-issue. Just compile the stuff using Yap and Hana with clang and link it into your other MSVC-compiled stuff. Done.

(Incidentally, I have been told that getting unmodified Hana working in MSVC is intended for straight after getting unmodified Ranges working. The Visual C++ team are on it, and I don't doubt they'll succeed soon now, though it will surely take a while for Ranges- or Hana-based code to become entirely stable on MSVC. As, indeed, we saw with the empty main() function ICE with Outcome.)

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/

On Fri, Feb 16, 2018 at 4:27 PM, Niall Douglas via Boost < boost@lists.boost.org> wrote:
All the concerns about MSVC support boil down to how many people could use the library on its first day as part of Boost. Even if I were to make it work for MSVC 2017, how many more users does that cover? It seems only to cover the live-at-head folks. If MSVC continues to get better language support, how long does that support cover those folks before the compiler is good enough anyway? My time for that support is not a good trade for me personally. If someone else wants to navigate MSVC's limitations and submit patches, I'll switch over to std::tuple in the interface, and maintain that work.
Does it work on clang LLVM targeting the MSVC ABI?
If Hana does, and I believe that it does.
If it does, then lack of MSVC support is a non issue. Just compile the stuff using Yap and Hana with clang and link it into your other MSVC compiled stuff. Done.
People's heads seem to explode upon hearing this advice.
(Incidentally, I have been told that getting unmodified Hana working in MSVC is intended for straight after getting unmodified Ranges working. The Visual C++ team are on it, and I don't doubt they'll succeed soon now, though it will surely take a while for Ranges or Hana based code to become entirely stable on MSVC. As, indeed, we saw with the empty main() function ICE with Outcome)
This is my understanding too. Zach

Zach Laine wrote:
If someone else wants to navigate MSVC's limitations and submit patches, I'll switch over to std::tuple in the interface, and maintain that work.
There's the option of stating everything in terms of boost::yap::tuple instead, which today would be an alias to hana::tuple. It's questionable exactly how much this will help users in the event of moving to std::tuple, but it might be possible for at least some user code to not need changes. On a completely unrelated note, have you considered the use case in which an expression supports two separate modes of evaluation? yap::evaluate doesn't take a "mode" parameter, and the way it's specified makes it hard to extend it with one.

On Fri, Feb 16, 2018 at 6:27 PM, Peter Dimov via Boost < boost@lists.boost.org> wrote:
Zach Laine wrote:
If someone else wants to navigate MSVC's limitations and submit patches,
I'll switch over to std::tuple in the interface, and maintain that work.
There's the option of stating everything in terms of boost::yap::tuple instead, which today would be an alias to hana::tuple. It's questionable exactly how much this will help users in the event of moving to std::tuple, but it might be possible for at least some user code to not need changes.
That's reasonable, but I think in the current state of things I'd like the user to know as soon as possible that Hana is in play. If std::tuple is the tuple of choice, I'd probably want the user to know that up-front as well.
On a completely unrelated note, have you considered the use case in which an expression supports two separate modes of evaluation? yap::evaluate doesn't take a "mode" parameter, and the way it's specified makes it hard to extend it with one.
More than that, it's effectively impossible. The idiomatic way to accomplish the same end is to transform() your expression differently for each case, instead of evaluating it differently.

Zach

On Feb 16, 2018, at 2:28 PM, Zach Laine via Boost <boost@lists.boost.org> wrote:
If someone else wants to navigate MSVC's limitations and submit patches, I'll switch over to std::tuple in the interface, and maintain that work.
I would be interested in helping improve support for the library on other compilers (not just MSVC), after I finish getting my library integrated into Boost. So I don't think it's a never-will-happen situation.

On Fri, Feb 16, 2018 at 11:37 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/16/2018 10:15 AM, Peter Dimov via Boost wrote:
Steven Watanabe wrote:
(I definitely object to (2) and as for (3), well, if users don't want to use hana, it's their loss.)
You keep implying that not using Hana is a choice done for irrational reasons such as, for instance, "avoiding a dependency", rather than, for instance, being forced by the fact that Hana does not compile on MSVC.
That might be a reasonable argument if Yap itself were not a C++14 library. According to the hana Jamfile, the reason hana is not supported for msvc is cxx14_constexpr, which Yap also uses.
Yes. In fact, as part of an attempt to get MSVC support to work months ago, I tried transliterating the Hana code that choked MSVC with something equivalent and hand-written, and ran into the same problems. It's MSVC, not Hana, that doesn't work. Zach

On Fri, 2018-02-16 at 08:39 -0700, Steven Watanabe via Boost wrote:
AMDG
On 02/16/2018 06:41 AM, Peter Dimov via Boost wrote:
Steven Watanabe wrote:
On 02/15/2018 05:55 PM, Peter Dimov via Boost wrote:
<snip>

It's also true that for any nontrivial manipulation you'd need tuple algorithms, which std::tuple doesn't have, as your get_arity example demonstrates. And yet, tying to Hana doesn't feel right.
What's the point of making libraries if you refuse to use them?
Who is "you" in this sentence?
This wasn't a rhetorical question.
There are three levels at which Hana can be used:
1. hana::tuple in the interface of Yap, resp. in the specification and tests 2. In the implementation of Yap 3. In user code based on Yap (f.ex. implementing transforms)
For which level do you object to a supposed refusal to use Hana?
I'm going to split (1): 1a. Users passing tuples to Yap should not be restricted to hana::tuple. 1b. On the other hand, it's completely reasonable for Yap to give hana::tuple to users. (Yap has to choose something, after all)
I think that Yap is closer to 1b than 1a.
(I definitely object to (2) and as for (3), well, if users don't want to use hana, it's their loss.)
Users most likely want to use hana, but they can't, since they need to support MSVC or gcc 5. Zach has seemed to show an interest in maintaining an MSVC or C++11 version if it were contributed, but I don't see how that is possible with Hana in the interface. Any such contribution would break API compatibility or would require maintaining two APIs, which seems worse.

On Fri, Feb 16, 2018 at 11:41 AM, paul via Boost <boost@lists.boost.org> wrote:
Users most likely want to use hana, but they can't, since they need to support MSVC or gcc 5.
Zach has seemed to show an interest in maintaining an MSVC or C++11 version if it were contributed, but I don't see how that is possible with Hana in the interface. Any such contribution would break API compatibility or would require maintaining two APIs, which seems worse.
For that to work, I would expect to switch over to std::tuple in the interface. Zach

On Feb 15, 2018, at 6:55 PM, Peter Dimov via Boost <boost@lists.boost.org> wrote:
Zach Laine wrote:
The interface is not more complicated at all. The user just puts an std::tuple where he'd have put the hana::tuple. The implementation is more complicated. :-)
Well, mostly, but it's still user-visible. A transform might pattern match against expressions like this:
struct my_xform {
    template<typename TerminalValue1, typename TerminalValue2>
    auto operator()(
        plus_tag,
        my_expr<expr_kind::terminal, TUPLE<TerminalValue1>> const & lhs,
        my_expr<expr_kind::terminal, TUPLE<TerminalValue2>> const & rhs)
    { return /*...*/; }
};
If TUPLE can be only std::tuple or hana::tuple, that's a lot easier to write than if it can be any TupleLike.
That's true in the case where the authors of `my_xform` and `my_expr` are different. If the same person has written both, he would know what tuple to use.
But yes.
It's also true that for any nontrivial manipulation you'd need tuple algorithms, which std::tuple doesn't have, as your get_arity example demonstrates. And yet, tying to Hana doesn't feel right.
I'm thinking about properly completing the "tuple" part of mp11 with all the algorithms that make sense, starting with `transform`, but I doubt I'll have time for that in any near future.
You can use Boost.HOF (aka Fit) for that. It should be fine for small tuple sizes:

    for_each(tup, f)    => unpack(proj(f))(tup)
    transform(tup, f)   => unpack(proj(f, construct<std::tuple>()))(tup)
    fold(tup, f)        => unpack(fold(f))(tup)
    concat(tup1, tup2)  => unpack(construct<std::tuple>())(tup1, tup2)

It is a little more to write, but it is fairly trivial vs implementing it yourself.

AMDG On 02/15/2018 03:58 PM, Peter Dimov via Boost wrote:
Zach Laine wrote:
Either way, that's up to the user, isn't it? If he wants to use std::tuple and index with `get(expr, mp_int<0>())` or `get_c<0>(expr)` instead of `get(expr, 0_c)`, let him.
It could be, but that would be a much more complicated interface. As it is, the user need only specify an expression kind non-type template parameter and a Hana tuple to create a model of Expression.
The interface is not more complicated at all. The user just puts an std::tuple where he'd have put the hana::tuple. The implementation is more complicated. :-)
I know the reflex around here is to make everything as generic as possible, but does generalizing the tuple type actually buy you anything? The tuple type can't have different behavior in any meaningful way. The tuple is stored by value inside the Expression object, so even if you have an external tuple of the same type, you still need to make a copy. The definition of the Expression concept doesn't allow non-intrusive adaptation. In Christ, Steven Watanabe

Steven Watanabe wrote:
I know the reflex around here is to make everything as generic as possible, but does generalizing the tuple type actually buy you anything?
There's an argument to be made about hardcoding a specific tuple type, as long as everyone agrees on what this specific tuple type ought to be.

On Feb 14, 2018, at 2:00 PM, Zach Laine via Boost <boost@lists.boost.org> wrote:
Actually, I think you need to define what an expression is in this context. In my book, expressions are not just made of operators, but can also include function calls. However, the implicit impression I get from the YAP documentation is that you only consider expressions based on operators, and not functions. For instance, I consider std::sqrt(x) an expression. However, looking at expr_kind (https://tzlaine.github.io/ yap/doc/html/boost/yap/expr_kind.html) it seems that this kind of expression is not supported (but it could be as a Transform).
That's supported. std::sqrt(x) would need to be Yap-ified if x is not already a Yap expression, perhaps by making a sqrt Yap terminal, or by calling std::sqrt on a Yap expression. Assuming x is a Yap expression, the entire expression "std::sqrt(x)" is a Yap expression whose kind is expr_kind::call. This underscores the need to make this more explicit in the docs, though. *TODO*
I'm confused by this claim. You seem to be suggesting that STL math functions somehow can evaluate Yap expressions? How can that be? The following code, which I believe implements what you are suggesting, does not compile for me. Yes, there needs to be better documentation of the use of functions and the call operator kind of expression.

Cheers, Brook

    #include <cmath>
    #include <boost/yap/algorithm.hpp>

    template < boost::yap::expr_kind Kind, typename Tuple >
    struct minimal_expr
    {
        static const boost::yap::expr_kind kind = Kind;
        Tuple elements;
    };

    class number {};

    int main ()
    {
        using boost::yap::make_terminal;
        std::sqrt(make_terminal<minimal_expr>(number{}));
        return 0;
    }

On Thu, Feb 15, 2018 at 12:41 PM, Brook Milligan <brook@nmsu.edu> wrote:
On Feb 14, 2018, at 2:00 PM, Zach Laine via Boost <boost@lists.boost.org> wrote:
Actually, I think you need to define what an expression is in this context. In my book, expressions are not just made of operators, but can also include function calls. However, the implicit impression I get from the YAP documentation is that you only consider expressions based on operators, and not functions. For instance, I consider std::sqrt(x) an expression. However, looking at expr_kind (https://tzlaine.github.io/ yap/doc/html/boost/yap/expr_kind.html) it seems that this kind of expression is not supported (but it could be as a Transform).
That's supported. std::sqrt(x) would need to be Yap-ified if x is not already a Yap expression, perhaps by making a sqrt Yap terminal, or by calling std::sqrt on a Yap expression. Assuming x is a Yap expression, the entire expression "std::sqrt(x)" is a Yap expression whose kind is expr_kind::call. This underscores the need to make this more explicit in the docs, though. *TODO*
I'm confused by this claim. You seem to be suggesting that STL math functions somehow can evaluate Yap expressions? How can that be?
The following code, which I believe implements what you are suggesting, does not compile for me.
Yes, there needs to be better documentation of the use of functions and the call operator kind of expression.
Cheers, Brook
    #include <cmath>
    #include <boost/yap/algorithm.hpp>

    template < boost::yap::expr_kind Kind, typename Tuple >
    struct minimal_expr
    {
        static const boost::yap::expr_kind kind = Kind;
        Tuple elements;
    };

    class number {};

    int main ()
    {
        using boost::yap::make_terminal;
        std::sqrt(make_terminal<minimal_expr>(number{}));
        return 0;
    }
That's right. But try making `std::sqrt` the terminal that you chain everything off of. As long as the expression template you use has a call operator, it Just Works. For instance, our favorite whipping boy expression<> has one. This compiles, I promise:

    template <typename T>
    using term = boost::yap::terminal<boost::yap::expression, T>;

    boost::yap::expression<
        boost::yap::expr_kind::plus,
        boost::hana::tuple<
            boost::yap::expression<
                boost::yap::expr_kind::call,
                boost::hana::tuple<
                    boost::yap::expression<
                        boost::yap::expr_kind::terminal,
                        boost::hana::tuple<double (*)(double)>
                    >,
                    boost::yap::expression<
                        boost::yap::expr_kind::terminal,
                        boost::hana::tuple<double>
                    >
                >
            >,
            boost::yap::expression<
                boost::yap::expr_kind::terminal,
                boost::hana::tuple<float>
            >
        >
    >
    yap_expr = term<double (*)(double)>{{std::sqrt}}(3.0) + 8.0f;

Zach

On Mon, Feb 5, 2018 at 9:53 AM, Louis Dionne via Boost < boost@lists.boost.org> wrote:
Dear Boost community,
The formal review of Zach Laine's Yap library starts Monday, February 5th and ends on Wednesday, February 14th.
<snip>
We encourage your participation in this review. At a minimum, please state:
- Whether you believe the library should be accepted into Boost
I recommend Boost ACCEPT Yap as a new library.
- Your name
Barrett Adair
- Your knowledge of the problem domain
At best, average for this mailing list -- probably less. I am familiar and comfortable with hand-rolled expression templates, but I have not (successfully) used Proto.
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's: * Design
I agree that Boost needs a modern replacement for Proto, and Yap appears to do this with flying colors (as far as my untrained eye can tell).
* Implementation
Did not review. My only comment on the implementation is a minor usability note - a recurring point of frustration for me was that gcc 7.2 errors print expr_kind template value parameters as integer values instead of using enum member names, e.g. `meow_expr<(boost::yap::expr_kind)18, boost::hana::tuple<meow>>`. This adds a layer of mental indirection when debugging an expression tree. A very small nice-to-have, if nothing else, would be to assign explicit enum values in boost/yap/algorithm_fwd.hpp for faster cross-referencing.
* Documentation
...can never be good enough, but I'm happy with this. It's pleasantly concise, and it seems complete. I'm confident that I could become proficient with Yap given what is available here. I do think the docs would benefit from the following:

* less nesting of sections
* a link to Zach's C++Now talk (if the talk still reflects the code, that is)
* more examples, as always
* Some sections could be merged, such as "operators" and "operator macros", also the transform sections
* The "operator macros" section would benefit from having the expr_kind names listed nearby

This documentation appears to cater to Proto alumni, which is fine, but an "Intro to Expression Templates" section would be nice for newcomers. A "Suggested reading" section would also be a cheap way to improve accessibility. Smoothing out the cliff-shaped learning curve of this library is a daunting task, but it might be worthwhile in the long run.
* Tests
Built, ran, passed, did not review. Ubuntu 16, gcc 7.2, Boost 1.66
* Usefulness
I have not written an EDSL complex enough to need Proto or Yap, but I can see why this library would be useful.
- Did you attempt to use the library? If so: * Which compiler(s)?
gcc 7.2
* What was the experience? Any problems?
I played around with transforming variations of the following:

    template <boost::yap::expr_kind Kind, typename Tuple>
    struct letter_expr
    {
        static boost::yap::expr_kind const kind = Kind;
        Tuple elements;
        BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(minus, ::letter_expr)
    };

    #define DEFINE_LETTER(c) auto c = boost::yap::make_terminal<letter_expr>(std::string{#c})

    DEFINE_LETTER(A);
    DEFINE_LETTER(B);
    DEFINE_LETTER(C);
    DEFINE_LETTER(D);
    // ...

    auto foo = H-E-L-L-O-W-O-R-L-D;

I'd hoped to get further than I did. I tried and failed to implement a cloud-to-butt [0] transform in the time that I had. I was able to see the library in action and implement some very simple transforms. The documentation and examples were helping me through my issues.
- How much effort did you put into your evaluation of the review?
1 hour reading examples and documentation, 2 hours hacking around, 1 hour writing this review, 0 hours catching up on previous reviews of Yap. Thanks, Barrett [0] https://github.com/panicsteve/cloud-to-butt

On Tue, Feb 13, 2018 at 10:54 PM, Barrett Adair via Boost < boost@lists.boost.org> wrote:
On Mon, Feb 5, 2018 at 9:53 AM, Louis Dionne via Boost < boost@lists.boost.org> wrote:
[snip] Thanks for reviewing, Barrett!
My only comment on the implementation is a minor usability note - a recurring point of frustration for me was that gcc 7.2 errors print expr_kind template value parameters as integer values instead of using enum member names, e.g. `meow_expr<(boost::yap::expr_kind)18, boost::hana::tuple<meow>>`. This adds a layer of mental indirection when debugging an expression tree. A very small nice-to-have, if nothing else, would be to assign explicit enum values in boost/yap/algorithm_fwd.hpp for faster cross-referencing.
Simple enough. Done.
* Documentation
...can never be good enough, but I'm happy with this. It's pleasantly concise, and it seems complete. I'm confident that I could become proficient with Yap given what is available here.
I do think the docs would benefit from the following: * less nesting of sections * a link to Zach's C++Now talk (if the talk still reflects the code, that is)
I fear this will be a lot less relevant in a week or so.
* more examples, as always
Any suggestions?
* Some sections could be merged, such as "operators" and "operator macros", also the transform sections
I have them separate only because I don't want to overload the reader with too much detail (the macros part) too early. Would it be sufficient to add cross-links between the two sections? For the same reason, I'm moving "Printing" to the very end. Not sure why it was in the middle there.
* The "operator macros" section would benefit from having the expr_kind names listed nearby
This documentation appears to cater to Proto alumni, which is fine, but an "Intro to Expression Templates" section would be nice for newcomers. A "Suggested reading" section would also be a cheap way to improve accessibility. Smoothing out the cliff-shaped learning curve of this library is a daunting task, but it might be worthwhile in the long run.
I'll add this. I was a little surprised at the lack of familiarity with expression templates, even at C++Now. *TODO* Zach

On Feb 5, 2018, at 8:53 AM, Louis Dionne via Boost via Boost-users <boost-users@lists.boost.org> wrote:
We encourage your participation in this review. At a minimum, please state:
- Whether you believe the library should be accepted into Boost
See below.
- Your name
Brook Milligan
- Your knowledge of the problem domain
I have been working with Yap in production for the past year and with Proto before that. The transition has perhaps colored how I originally approached the use of Yap, which is evident in some of my earlier comments and questions. However, having implemented domain-specific languages in both Proto and Yap, I feel that I have gathered some expertise in the area of expression templates.
* Design
Overall, the design of Yap seems excellent, although I do agree with the comments regarding potential modifications to the expression concept. In particular, I like Peter's idea of making the tag a type and the idea of making the expression template variadic (instead of using a tuple):

    template <expr_kind kind> struct expression_tag {};
    using plus_tag = expression_tag<expr_kind::plus>;
    template <typename Tag, typename... T> class expression;

It seems to me that this would make matching expressions much more flexible, but perhaps I am missing something. It might also allow some dispatching in the implementation to make things simpler? It is clear that the key to flexibility is the ability to write transforms, which is something I have not done a lot of. Thus, it is crucial to get the expression concept right so that generic transforms are easy and matching specific expressions is also easy. My intuition suggests the definition above would be good, but this should be hashed out carefully, and the rationale documented.
* Implementation
I have looked through much of the implementation at various points, but feel that others are better positioned to comment more substantively than me. Mostly, I have looked through this trying to figure out how various parts are working, because I could not find the information in the documentation; see below.
* Documentation
The documentation is the weakest point of Yap. In particular, it does not establish (in my mind at least) a clear understanding of the rationale and design of the primary use case, which seems to be the evaluate(transform()) idiom discussed on the list.

Introduction: This needs to guide the reader to think about Yap idioms in the right way. The motivation section seems clear: we need Yap because expression templates are valuable but Proto is too complicated. Perhaps, however, this needs to be recast a bit to avoid bringing in Proto at first. Perhaps a better motivation is (i) what are expression templates and why are they useful (briefly; refer to a more detailed section that develops this more completely) and (ii) what are some quick examples (really short code snippets, not necessarily complete programs). The features are also clear, but only with a fair bit of background. It also seems that the features are not necessarily in an order that reflects either their importance or the way a user would encounter them. A better list might be:

- A simple Expression concept easily modeled by user code. Member and non-member functions on Expressions can be added with compact macros, and a reference implementation exists for prototyping or experimentation.

- Evaluation of expression trees matches the semantics of builtin C++ expressions as closely as possible. All operator expressions are evaluated using the corresponding operator definitions for the types involved. This leads to clearer understanding of the semantics of expression tree evaluation, because the definitions are local to the types involved. This is one point at which the design of Yap diverges significantly from that of Proto; in the latter, the semantics of expression tree evaluation is determined by context classes. Consequently, it is much more difficult to reason about the semantics in Proto than in Yap. [ This last bit could also be in a Proto v. Yap section. ]

- Expression trees may be transformed explicitly in a user-defined way. This is accomplished with overloaded member functions in a transform class, each one corresponding to subexpressions in the expression tree. While these member functions may transform a subexpression into anything, a common pattern is to transform terminals into either new terminals or appropriate values and to leave other subexpressions unchanged. This evaluate(transform()) idiom is expected to be one of the most common ways of using Yap to manipulate and evaluate expression trees. Even though this use case involves copying the expression tree, it is expected that the new terminals or values will be inexpensive to copy so that the cost of transformation is small.

- Functions that operate on expression trees or create expressions. A set of functions is provided (and used by the library) to manipulate expression trees or their components. These should simplify the process of, for example, writing user-defined transforms.

- Evaluation of expression trees is generally [ what does this mean? ] equivalent to evaluating the corresponding native code. Thus, there should be no [ true? ] performance cost to using expression trees.

- Expression trees may be encapsulated within function objects that evaluate the expression when called. This provides a nice bridge with other function-based libraries.

- ...

[ Maybe there are other (or fewer) features to emphasize. The point is to expand a bit on what the features mean to the user rather than just list them. It is also to organize them linearly from most to least important/relevant to the user. ]

Tutorial: if there are examples that illustrate points in the tutorial, there should be links from the tutorial page to the corresponding example.

Expressions: there is an example at the bottom of the page of a problem that recurs elsewhere. The phrase "there are lots of examples of how to do this in the Examples section" is not helpful without references to which particular examples illustrate this point. It should also be clear in the examples what points they are examples of.

Kinds of expressions: over the past year I have found myself consulting this page many times expecting a list of the kinds of expressions, i.e., all the tags, etc. Why is that not here? Why does this only refer to "special" kinds?

Operators: comments like "you can write your own operators that return Expressions if you like" would be much more useful with the following: (i) an indication of when that would be appropriate (is this just another joke, as in "be a masochist, write your own", or is this "sometimes you legitimately need to and so you can"?) and (ii) examples of how to do this. I believe there are similar points throughout the documentation to which analogous comments apply. It would also be useful to have a link to all the operator macros here.

Explicit transformations: the phrase "that will be placed into result at the corresponding place in result" seems confusing. Especially given the central role played by transforms, I feel that a much clearer description of their operation is needed. This could leverage a detailed section on expression trees generally and (another one?) on the representation of them in Yap. The latter is implicit in the expression concept, but clarity would be improved by making it explicit and by relating it to the more general concepts. The sentence "Otherwise, N is used, and visitation of nodes in expr continues below N." seems misleading if I am understanding transforms correctly. First, the "N is used" part is very vague; used for what specifically? Second, the "visitation continues" part seems wrong. Isn't the recursion entirely under the control of the transform? If so, then how can this assertion be made? Perhaps you mean something like "sane transforms should often be written to have this property". Again, this gets at the need to provide much clearer best practices information that is clearly distinct from tutorials and reference material. These all serve different needs and should not be interspersed in the documentation.

The tag and expression forms of matching in transforms are mentioned on this page, but documented elsewhere. This is difficult organization. Reorganize this to bring the conceptually related parts together.

Evaluating expressions: Given the direction of the discussion (i.e., dropping customization points), it seems that this will be reduced to "Boost.YAP expressions are explicitly evaluated by calling evaluate()". It seems, however, that this could be a good point to include information on (i) evaluation of operators follows native semantics for the types involved, (ii) the value of local reasoning about semantics, (iii) a brief discussion of equivalencies and differences between the underlying native code and what is evaluated by evaluate().

Operator macros: I think this makes more sense in "Operators".

How expression operands are treated: most of the information here (i.e., not the table and specifics relating to is) seems more appropriate in a section dealing with the design of Yap expressions and the implementation of them. This is trying to motivate the correspondence between C++ expressions and Yap expression trees; that needs unification elsewhere. The last bit is more about specifics and should be reference material.

Customization points: I assume this will disappear.

Transform matching: this information needs to be integrated into a unified description of transforms.

Reference: why is the reference organized around the header files? I feel it would make more sense to organize reference information around concepts, classes, functions, and the like. The headers are details that a reader generally does not care about as an index to finding information. They are clearly needed as part of the reference documentation for each class, etc., but that information is provided on those pages.

Index: the index should have links to the reference information for functions and classes. It seems that one must go through the header file links to get them (because not everything is linked here), but that does not match my expectation as a reader. Conversely, everything linked to by the header file pages should be linked in the index. Indeed, those pages are what appear to me to be the primary reference documentation, but they are needlessly difficult to navigate to.
* Tests
I have not compiled the tests. I have however compiled and run all the examples except the autodiff one. Additionally, I have written many of my own simple use cases / tests to ensure that I understand how Yap works.
* Usefulness
This library is extremely useful. It has enabled me to build practical applications in ways that are much easier than with Proto. I look forward to the outcome of this review, as I expect it will be even better designed and documented and therefore even more useful. If only viewed from this standpoint, the library should absolutely be accepted.
- Did you attempt to use the library? If so:
* Which compiler(s)?
Apple LLVM version 8.0.0 (clang-800.0.42.1); probably earlier versions as well.
* What was the experience? Any problems?
I have encountered no compilation problems with the library. The only problems I have encountered result from my use of the library, which underscores the need for better documentation that is more precise and provides best practice information.
- How much effort did you put into your evaluation of the review?
I don't suppose the year of using Yap counts exactly, but that has raised a lot of issues for me that I have discussed here. During the review I have regularly followed the comments on the list and have spent an hour or two on contributing additional comments or testing specific ideas that have come up. I have spent several hours going through the documentation and writing this review.

Verdict: Yap should be CONDITIONALLY ACCEPTED into Boost. The conditions I have in mind are the following:

- The documentation should be dramatically improved. My comments above should make it clear that this includes reorganization as well as clarification.

- Alternative expression concepts should be evaluated carefully. This adds to comments on the list regarding suitable tuples, etc. Perhaps tuples are not necessary. This condition is not that the design should be changed, but instead that alternatives should be investigated and the rationale for the chosen one documented clearly in contrast to others. Perhaps this will be evident if the documentation includes more specific information about the representation, the use of transforms, etc.

Finally, I want to thank Zach for constructing a really useful library. Despite everything said here, I have found it to be valuable in its current form and would continue to use it even if no changes were made. Thus, I am offering this review from the perspective of trying to make Yap more accessible to people who have not already been through the learning curve.

On Fri, Feb 16, 2018 at 2:25 PM, Brook Milligan via Boost < boost@lists.boost.org> wrote:
On Feb 5, 2018, at 8:53 AM, Louis Dionne via Boost via Boost-users < boost-users@lists.boost.org> wrote:
[snip] Thanks for reviewing, Brook!
* Design
Overall, the design of Yap seems excellent, although I do agree with the comments regarding potential modifications to the expression concept. In particular, I like Peter's idea of making the tag a type and the idea of making the expression template variadic (instead of using a tuple):
template<expr_kind kind> struct expression_tag {};
using plus_tag = expression_tag<expr_kind::plus>;
template < typename Tag, typename ... T > class expression;
It seems to me that this would make matching expressions much more flexible, but perhaps I am missing something. It might also allow some dispatching in the implementation to make things simpler?
It is clear that the key to flexibility is the ability to write transforms, which is something I have not done a lot of. Thus, it is crucial to get the expression concept right so that generic transforms are easy and matching specific expressions is also easy. My intuition suggests the definition above would be good, but this should be hashed out carefully, and the rationale documented.
I mentioned elsewhere that the library started out this way, with a top-level variadic interface. It made everything more complicated for the implementation and the interface, and as Steven pointed out elsewhere, it makes expr_ref very hard to get right. [snip]
* Documentation
The documentation is the weakest point of Yap. In particular, it does not establish (in my mind at least) a clear understanding of the rationale and design of the primary use case, which seems to be the evaluate(transform()) idiom discussed on the list.
Introduction: This needs to guide the reader to think about Yap idioms in the right way. The motivation sections seems clear: we need Yap because expression templates are valuable but Proto is too complicated. Perhaps, however, this needs to be recast a bit to avoid bringing in Proto at the first. Perhaps a better motivation is (i) what are expression templates and why are they useful (briefly; refer to a more detailed section that develops this more completely) and (ii) what are some quick examples (really short code snippets, not necessarily complete programs).
There is a much fuller intro to the idea of expression templates themselves, and how Yap represents them in particular, that I added to the boost_review branch a night or two ago. Based on what you wrote above, I'm going to add another section about the idiomatic use of Yap right after that, to lay a foundation for understanding the rest of the docs.
The features are also clear, but only with a fair bit of background. It also seems that the features are not necessarily in an order that reflects either their importance or the way a user would encounter them. A better list might be:
[snip] Yeah, I was looking at that list with disdain, but was unsure how exactly to fix it. I like what you wrote here; I've put your detailed notes here into a GitHub ticket.
Tutorial: if there are examples that illustrate points in the tutorial, there should be links from the tutorial page to the corresponding example.
Expressions: there is an example at the bottom of the page of a problem that recurs elsewhere. The phrase "there are lots of examples of how to do this in the Examples section" is not helpful without references to which particular examples illustrate this point. It should also be clear in the examples what points they are examples of.
Done.
Kinds of expressions: over the past year I have found myself consulting this page many times expecting a list of the kinds of expressions, i.e., all the tags, etc. Why is that not here? Why does this only refer to "special" kinds?
This part confuses me a bit. Right after the special ones, I had this: "The other expression kinds are all the overloadable operators; see <link> for the full list." I've put the non-special ones first, with a couple of explicit examples. I really don't want to hit readers with the full list, because it's too much info at that point, and I think people generally know what overloadable operators are. Does that work?
Operators: comments like "you can write your own operators that return Expressions if you like" would be much more useful with the following: (i) an indication of when that would be appropriate (is this just another joke, as in "be a masochist, write your own", or is this "sometimes you legitimately need to and so you can"?) and (ii) examples of how to do this. I believe there are similar points throughout the documentation to which analogous comments apply. It would also be useful to have a link to all the operator macros here.
I don't understand. The full sentence says "Use of the macros is not necessary (you can write your own operators that return Expressions if you like), but it is suitable 99% of the time." There's a link to all the user macros two sentences later.
Explicit transformations: the phrase "that will be placed into result at the corresponding place in result" seems confusing. Especially given the central role played by transforms, I feel that a much clearer description of their operation is needed. This could leverage a detailed section on expression trees generally and (another one?) on the representation of them in Yap. The latter is implicit in the expression concept, but clarity would be improved by making it explicit and by relating it to the more general concepts.
Yeah, I've never been super happy with that sentence. I think this calls for a picture, too.
The sentence "Otherwise, N is used, and visitation of nodes in expr continues below N." seems misleading if I am understanding transforms correctly. First, the "N is used" part is very vague; used for what specifically? Second, the "visitation continues" part seems wrong. Isn't the recursion entirely under the control of the transform? If so, then how can this assertion be made? Perhaps you mean something like "sane transforms should often be written to have this property". Again, this gets at the need to provide much clearer best practices information that is clearly distinct from tutorials and reference material. These all serve different needs and should not be interspersed in the documentation.
The "N is used" and subsequent traversal part is right, it's just badly explained. The idea is that if N is visited and no transform overload applies to it, it is copied into the result, and transform() goes into its children. Needs a rewrite, it seems.
The tag and expression forms of matching in transforms are mentioned on this page, but documented elsewhere. This is difficult organization. Reorganize this to bring the conceptually related parts together.
Agreed. This is already a lot better on the review branch.
Evaluating expressions: Given the direction of the discussion (i.e., dropping customization points), it seems that this will be reduced to "Boost.YAP expressions are explicitly evaluated by calling the evaluate()". It seems, however, that this could be a good point to include information on (i) evaluation of operators follows native semantics for the types involved, (ii) the value of local reasoning about semantics, (iii) a brief discussion of equivalencies and differences between the underlying native code and what is evaluated by evaluate().
I'll make this more explicit.
Operator macros: I think this makes more sense in "Operators".
Again, I don't want to blast the reader with all of this before they need it. I think the link in the earlier section is enough if they want to look at it at that point.
How expression operands are treated: most of the information here (i.e., not the table and specifics relating to is) seem more appropriate in a section dealing with the design of Yap expressions and the implementation of them. This is trying to motivate the correspondence between C++ expressions and Yap expression trees; that needs unification elsewhere. The last bit is more about specifics and should be reference material.
Agreed. Much of this will go into the new section I mentioned at the top.
Customization points: I assume this will disappear.
Yes.
Transform matching: this information needs to be integrated into a unified description of transforms.
Agreed. This is done on the review branch.

Reference: why is the reference organized around the header files? I feel it would make more sense to organize reference information around concepts, classes, functions, and the like. The headers are details that a reader generally does not care about as an index to finding information. They are clearly needed as part of the reference documentation for each class, etc., but that information is provided on those pages.
It isn't, or at least not intentionally so. The fact that the header organization is at the top is a mistake that I've fixed.
Index: the index should have links to the reference information for functions and classes. It seems that one must go through the header file links to get them (because not everything is linked here), but that does not match my expectation as a reader. Conversely, everything linked to by the header file pages should be linked in the index. Indeed, those pages are what appear to me to be the primary reference documentation, but they are needlessly difficult to navigate to.
That's just how the QuickBook indexing + Doxygen system works, as far as I know. I'm using the auto-indexing. [snip] Zach

AMDG

I vote to *ACCEPT* YAP into Boost.

Critical issues (I believe that Zach has already addressed most of these):
- The eval_xxx customization points.
- Handling of terminals for tag transforms.
- BOOST_YAP_CONVERSION_OPERATOR_TEMPLATE.
- Short-circuiting in evaluate.
- A single macro for symmetric binary operators.

TESTS:

- External dependencies are strongly discouraged (i.e. gtest). Especially do not include a copy of external components.
- Boost.Build is required.

compile_test_main.cpp:
- This is basically unnecessary, as the compile tests don't need to be linked.

copy_only_types.cpp:
- This test is not useful, which is why it's #if'ed out. There is never a reason to create a type that can be copied from an lvalue but not an rvalue. (unless a copy isn't really a copy, in which case YAP can't handle either one anyway.)

deref.cpp:
- the test cases in this file only show pass/fail, they don't indicate the actual type returned by deref, which will make debugging failures harder. I typically use BOOST_MPL_ASSERT((is_same<X, Y>)); specifically for this reason (even when static_assert is available).
- Actually, in general, these tests need to check both the type and the value. Checking just the type is insufficient.

user_expression_transform_3.cpp:
- It took me a while to find these tests for transform, because I (incorrectly) assumed that user_expression_transform* were all tests for the transform_expression customization point after I looked at the first two.

vector_alloc_test.cpp:
- Your replacement of operator new is technically illegal, because it can return a null pointer.
- I don't think the standard guarantees that vector makes any specific number of allocations. The correct way is to set allocations = 0 after initializing the vectors and then verify that no further allocations happen. Alternately, make operator new throw a bad_alloc during the region where you expect no allocations. As a bonus, it'll be easier to find the cause of unexpected allocations, when they do happen.

You need some compile-fail tests:
- invalid arguments to accessors, such as left, right, and get.
- possibly switching the arguments of transform. (I find it mildly disturbing that this currently compiles, as transform is one function for which I may not always remember the correct argument order.)

gcov says the following are not executed by the tests (line numbers as of master):

algorithm.hpp: 130 (value_expr_impl), 232, 246 (get_impl), 300 (get_c), 408 (callable), 427 (argument), 701 (most of op_string)
expression.hpp: 54, 62 (value), 69, 77 (left), 84, 92 (right), most unary operators (except negate), most binary operators (except multiplies and plus), 247 (call), 310 (value), 428-420 (pre_dec, post_inc, post_dec), most binary operators (except multiplies, plus, and minus)
operators.hpp: many unary and binary operators.
print.hpp: 42 (print_value) - Isn't this overload for placeholders, rather than hana::llong? 50,52,54 (print_type), 87,89 (print_impl)
detail/default_eval.hpp: 20 (eval_placeholder), many unary and binary operators.

EXAMPLES:

doc/user_macro_snippets.cpp:268
- This should be is_vector not is_string.

doc/other_snippets.cpp:90
    std::cout << evaluate(expr) << "\n"; // Prints "Yay." Wierd!
s/Wierd/Weird/

General Notes:

- There are three primary modes for processing an expression tree, which I will describe as:
  1. transform: Takes a tree and produces another tree.
  2. evaluate: Takes a tree and evaluates it in some way.
  3. for_each: Processes each node and returns nothing.
  yap::transform can handle all three as long as you explicitly handle all nodes. The default behavior is (1), which makes (2) and (3) somewhat inconvenient. Your solution in the examples seems to be transform/evaluate for (2) and returning a dummy result for (3). Unfortunately, transform/evaluate doesn't work well if evaluation involves any special constructs that affect control-flow. Just try to handle something like this:
      let(_a=2) [ _a + 1 ] // evaluates to 3
  Returning a dummy result is a bit annoying, but shouldn't cause any real problems, as long as terminals are captured by reference in the result. All in all, I'd like to have some way to change the default behavior of transform, or perhaps have separate functions with different default behavior. Just an idea: switch the default behavior based on the result of transforming the subexpressions: Expression/non-Expression/void. This has the side effect that you must explicitly wrap terminals when returning, but I think that's actually a good thing, as a transform that returns unwrapped terminals, expecting them to be wrapped by the caller, may have inconsistent behaviour. In addition, there is another possible mode that has better type safety which I will call:
  4. manual: No default behavior. If a node is not handled explicitly, it is a hard compile error.
- Combining transforms isn't exactly easy, because of the way transforms recurse. For example, if I have two transforms that process disjoint sets of nodes, I can't really turn them into a single transform.
- How useful is it to separate the Expression concept from the ExpressionTemplate concept? For example, transform requires an ExpressionTemplate for nodes that are not handled explicitly, but that isn't very clear from the documentation, which just says that it takes an Expression.
- You say that it's fine to mix and match expressions that are instantiated from different templates. Why would I want to do this? The first thing that comes to mind is combining two independent libraries that both use YAP, but I suspect that that won't work out very well. It seems a bit too easy for a transform to inadvertently recurse into an unrelated expression tree in this case.
- The value of a terminal cannot be another expression. Is this a necessary restriction? (This is related to mixing different libraries using YAP, as the most common form is for library A to treat the expressions from library B as terminals. Since a terminal can't hold an Expression, we would need to somehow un-YAP them first.)
- BOOST_YAP_BINARY_OPERATOR_MEMBER(assign) needs special handling because of the copy assignment operator.
- Unwrapping terminals redux: Unwrapping terminals is convenient when you have terminals of the form:
      struct xxx_tag {};
      auto xxx = make_terminal(xxx_tag{});
  AND you are explicitly matching this terminal with
      auto operator()(call_tag, xxx_tag, T...)
  or the like. I expect that matching other terminals like double or int is somewhat more rare in real code. If you are matching any subexpression, then unwrapping terminals is usually wrong. Even in the first case, where unwrapping terminals is sometimes convenient, it prevents usage like:
      auto operator()(call_tag, decay_t<decltype(xxx)> const &);
  (Which is how I might write a matcher if xxx is not defined by me and xxx_tag is not a documented public symbol.)

In Christ,
Steven Watanabe

On Mon, Feb 19, 2018 at 11:13 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
I vote to *ACCEPT* YAP into Boost.
Critical issues (I believe that Zach has already addressed most of these): - The eval_xxx customization points. - Handling of terminals for tag transforms. - BOOST_YAP_CONVERSION_OPERATOR_TEMPLATE. - Short-circuiting in evaluate. - A single macro for symmetric binary operators.
Yes, these are all done except the last one. It turns out there's a larger usability concern, which is that there simply aren't enough macros to cover different use cases. I think we need more that take constraining type traits, and a call operator that only takes N parameters, perhaps with constraints as well.
TESTS:
- External dependencies are strongly discouraged. (i.e. gtest). Especially do not include a copy of external components.
Right. I intended all along to transliterate those to Boost.Test. The Google benchmark stuff will probably just stay on a non-Boost-submodule branch. I never intended to check in either of those Google libraries to an eventual Boost submodule, just to be clear.
- Boost.Build is required.
Right. Will do.
compile_test_main.cpp:
- This is basically unnecessary, as the compile tests don't need to be linked.
That's just a CMake-ism. I don't know of an easier way to get CMake to compile compile-only tests.
copy_only_types.cpp:
- This test is not useful, which is why it's #if'ed out. There is never a reason to create a type that can be copied from an lvalue but not an rvalue. (unless a copy isn't really a copy, in which case YAP can't handle either one anyway.)
That's cruft. I'll remove it.
deref.cpp:
- the test cases in this file only show pass/fail, they don't indicate the actual type returned by deref, which will make debugging failures harder. I typically use BOOST_MPL_ASSERT((is_same<X, Y>)); specifically for this reason (even when static_assert is available).
Good idea. I've made this change throughout the test files.
- Actually, in general, these tests need to check both the type and the value. Checking just the type is insufficient.
Will do.
user_expression_transform_3.cpp:
- It took me a while to find these tests for transform, because I (incorrectly) assumed that user_expression_transform* were all tests for the transform_expression customization point after I looked at the first two.
The first two are gone; they went away with the customization points. It's a lot clearer now.
vector_alloc_test.cpp:
- Your replacement of operator new is technically illegal, because it can return a null pointer.
- I don't think the standard guarantees that vector makes any specific number of allocations. The correct way is to set allocations = 0 after initializing the vectors and then verify that no further allocations happen. Alternately, make operator new throw a bad_alloc during the region where you expect no allocations. As a bonus, it'll be easier to find the cause of unexpected allocations, when they do happen.
Will do.
You need some compile-fail tests:
- invalid arguments to accessors, such as left, right, and get.
- possibly switching the arguments of transform. (I find it mildly disturbing that this currently compiles, as transform is one function for which I may not always remember the correct argument order.)
Agreed. Will do.
gcov says the following are not executed by the tests (line numbers as of master):
algorithm.hpp: 130 (value_expr_impl), 232, 246 (get_impl), 300 (get_c), 408 (callable), 427 (argument) 701 (most of op_string)
expression.hpp: 54, 62 (value), 69, 77 (left), 84, 92 (right), most unary operators (except negate), most binary operators (except multiplies and plus), 247 (call), 310 (value), 428-420 (pre_dec, post_inc, post_dec), most binary operators (except multiplies, plus, and minus)
operators.hpp: many unary and binary operators.
print.hpp 42 (print_value) - Isn't this overload for placeholders, rather than hana::llong? 50,52,54 (print_type), 87,89 (print_impl)
detail/default_eval.hpp:
20 (eval_placeholder) many unary and binary operators.
I'll add better coverage for these.
EXAMPLES:
doc/user_macro_snippets.cpp:268 - This should be is_vector not is_string.
Thanks, fixed.
doc/other_snippets.cpp:90 std::cout << evaluate(expr) << "\n"; // Prints "Yay." Wierd! s/Wierd/Weird/
I think this already got fixed. A recursive, case-insensitive grep found no "wierd"s anywhere under yap/.
General Notes:
- There are three primary modes for processing an expression tree, which I will describe as:
1. transform: Takes a tree and produces another tree.
2. evaluate: Takes a tree and evaluates it in some way.
3. for_each: Processes each node and returns nothing.
yap::transform can handle all three as long as you explicitly handle all nodes. The default behavior is (1), which makes (2) and (3) somewhat inconvenient. Your solution in the examples seems to be transform/evaluate for (2) and returning a dummy result for (3). Unfortunately, transform/evaluate doesn't work well if evaluation involves any special constructs that affect control-flow. Just try to handle something like this:
let(_a=2) [ _a + 1 ] // evaluates to 3
That does not look problematic to me, though I don't know what the intended semantics are. If _a is a terminal and refers to something for which "= 2" and "+1" are well-formed, I would expect even evaluate() to do the right thing.
Returning a dummy result is a bit annoying, but shouldn't cause any real problems, as long as terminals are captured by reference in the result. All in all, I'd like to have some way to change the default behavior of transform, or perhaps have separate functions with different default behavior. Just an idea: switch the default behavior based on the result of transforming the subexpressions: Expression/non-Expression/void.
I don't yet get what you're suggesting here. Right now, transform() returns whatever you tell it to, except that it doesn't special-case void return. You can (and I have, extensively in the examples) return an Expression or non-Expression from a transform, or from any subset of overloads your transform object provides. What is the behavior that you're suggesting?
This has the side effect that you must explicitly wrap terminals when returning, but I think that's actually a good thing, as a transform that returns unwrapped terminals, expecting them to be wrapped by the caller, may have inconsistent behaviour. In addition, there is another possible mode that has better type safety which I will call: 4. manual: No default behavior. If a node is not handled explicitly, it is a hard compile error.
But this isn't transform() at all, because it doesn't recurse. It only matches the top-level expression, or you get a hard error. Why not just write my_func() that takes a certain expression and returns another? Calling it with the wrong type of expression will result in your desired hard error. Wait, is the use case that I think my transform matches all subexpressions within the top-level expression, and I want to verify that this is the case? I don't know how often that will come up. I can't think of a single time I've written a Yap transform expecting it to match all nodes, except to evaluate it. It could be useful in those cases now that I think of it.
- Combining transforms isn't exactly easy, because of the way transforms recurse. For example, if I have two transforms that process disjoint sets of nodes, I can't really turn them into a single transform.
I'm not necessarily opposed to the idea, but how would combining transforms work? Would each node be matched against multiple transform objects, using whichever one works, or something else?
- How useful is it to separate the Expression concept from the ExpressionTemplate concept?
Types and templates are not the same kind of entity. I don't know how to even go about combining these. Moreover, I think they should remain separated, because sometimes a function takes an Expression, sometimes not, sometimes it requires an ExpressionTemplate template parameter, and sometimes not. These are orthogonal requirements for the caller, no?
For example, transform requires an ExpressionTemplate for nodes that are not handled explicitly, but that isn't very clear from the documentation, which just says that it takes an Expression.
I don't understand what you mean. transform() does not require an extrinsic ExpressionTemplate template parameter, and does not use one. It just recycles the ExpressionTemplate that was originally used to instantiate whatever it needs to copy.
- You say that it's fine to mix and match expressions that are instantiated from different templates. Why would I want to do this? The first thing that comes to mind is combining two independent libraries that both use YAP, but I suspect that that won't work out very well. It seems a bit too easy for a transform to inadvertently recurse into an unrelated expression tree in this case.
I have two types, M and S (m and s are two objects of those respective types). M is matrix-like, and has m*m in its set of well-formed expressions; m[x] is ill-formed for any x. S is string-like, and has s[int(2)] in its set of well-formed expressions; s*s is ill-formed. M and S are in the same library, one that I maintain. If I want to use these in Yap expressions, I probably want to be able to write m*m and s[5], but not m[2] or s*s. So I write two expression templates with the right operations defined:

template <...>
struct m_expr
{
    // ...
    BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(times, ::m_expr)
};

template <...>
struct s_expr
{
    // ...
    BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(subscript, ::s_expr)
};

Now I can write a Yap expression like:

lookup_matrix(S("my_matrix")) * some_matrix

and transform() it however I like. Requiring the two *_expr templates to be unified would be weird.

- The value of a terminal cannot be another expression.
Is this a necessary restriction?
Not really; I probably put that restriction there due to lack of imagination. Terminals' values should be treated as simple types, but I'm pretty sure there's no real reason those types can't happen to model Expression.
(This is related to mixing different libraries using YAP, as the most common form is for library A to treat the expressions from library B as terminals. Since a terminal can't hold an Expression, we would need to somehow un-YAP them first.)
As I outlined above, I don't think that's going to be the most common case of mixing ExpressionTemplates in a single Yap expression.
- BOOST_YAP_BINARY_OPERATOR_MEMBER(assign) needs special handling because of the copy assignment operator.
Ah, thanks for pointing that out!
- Unwrapping terminals redux: Unwrapping terminals is convenient when you have terminals of the form: struct xxx_tag {}; auto xxx = make_terminal(xxx_tag{}); AND you are explicitly matching this terminal with auto operator()(call_tag, xxx_tag, T...) or the like. I expect that matching other terminals like double or int is somewhat more rare in real code. If you are matching any subexpression, then unwrapping terminals is usually wrong.
Why do you say that? The only reason it is the way it is now is convenience, as you might have guessed. When I know I want to match a particular terminal of a particular type, I'd rather write:

auto operator()(some_unary_tag, some_type && x);
auto operator()(some_unary_tag, some_type const & x);

versus:

auto operator()(some_unary_tag, my_expr<yap::expr_kind::terminal, some_type &&> const & x);
auto operator()(some_unary_tag, my_expr<yap::expr_kind::terminal, some_type const &> const & x);

This latter form is what I started with, and I very quickly grew tired of writing all that. If I match a generic expression that may or may not be a terminal, I do have to use yap::as_expr(), which is definitely an inconvenience:

template <typename Expr>
auto operator()(some_unary_tag, Expr const & x)
{
    return some_function_of(yap::as_expr(x));
}
Even in the first case, where unwrapping terminals is sometimes convenient, it prevents usage like: auto operator()(call_tag, decay_t<decltype(xxx)> const &); (Which is how I might write a matcher if xxx is not defined by me and xxx_tag is not a documented public symbol).
My intuition is that this is a far less common case than wanting to match a particular type of terminal. Zach

AMDG On 02/19/2018 08:38 PM, Zach Laine via Boost wrote:
On Mon, Feb 19, 2018 at 11:13 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
<snip> General Notes:
- There are three primary modes for processing an expression tree, which I will describe as:
1. transform: Takes a tree and produces another tree.
2. evaluate: Takes a tree and evaluates it in some way.
3. for_each: Processes each node and returns nothing.
yap::transform can handle all three as long as you explicitly handle all nodes. The default behavior is (1), which makes (2) and (3) somewhat inconvenient. Your solution in the examples seems to be transform/evaluate for (2) and returning a dummy result for (3). Unfortunately, transform/evaluate doesn't work well if evaluation involves any special constructs that affect control-flow. Just try to handle something like this:
let(_a=2) [ _a + 1 ] // evaluates to 3
That does not look problematic to me, though I don't know what the intended semantics are.
http://www.boost.org/libs/phoenix/doc/html/phoenix/modules/scope/let.html
If _a is a terminal and refers to something for which "= 2" and "+1" are well-formed, I would expect even evaluate() to do the right thing.
_a is a placeholder. _a = 2 is not an assignment to some external object. It is a variable definition that is scoped within the let expression. It's not really possible to transform this into something that can be evaluated in a single pass. I claim that it is impossible to implement `let` using Yap without duplicating all the work of evaluate. (Keep in mind that `let` can be nested and can be mixed with if_else.)
Returning a dummy result is a bit annoying, but shouldn't cause any real problems, as long as terminals are captured by reference in the result. All in all, I'd like to have some way to change the default behavior of transform, or perhaps have separate functions with different default behavior. Just an idea: switch the default behavior based on the result of transforming the subexpressions: Expression/non-Expression/void.
I don't yet get what you're suggesting here. Right now, transform() returns whatever you tell it to,
The issue is not what I'm telling it to do, but rather what it does on its own when I don't tell it anything.
except that it doesn't special-case void return. You can (and I have, extensively in the examples) return an Expression or non-Expression from a transform, or from any subset of overloads your transform object provides. What is the behavior that you're suggesting?
You've chosen to make it impossible to customize the behavior of evaluate. I believe that Brook brought up evaluate_with_context, which is basically what I want.
This has the side effect that you must explicitly wrap terminals when returning, but I think that's actually a good thing, as a transform that returns unwrapped terminals, expecting them to be wrapped by the caller, may have inconsistent behaviour. In addition, there is another possible mode that has better type safety which I will call: 4. manual: No default behavior. If a node is not handled explicitly, it is a hard compile error.
But this isn't transform() at all, because it doesn't recurse. It only matches the top-level expression, or you get a hard error. Why not just write my_func() that takes a certain expression and returns another?
You're right. The only benefit is that tag transforms are a bit more convenient. Also, it allows for a more consistent interface.
Calling it with the wrong type of expression will result in your desired hard error. Wait, is the use case that I think my transform matches all subexpressions within the top-level expression, and I want to verify that this is the case? I don't know how often that will come up. I can't think of a single time I've written a Yap transform expecting it to match all nodes, except to evaluate it. It could be useful in those cases now that I think of it.
If you're building a completely new grammar whose meaning has no relationship to the built in meaning of the operators, (e.g. Spirit), then you basically have to handle everything explicitly.
- Combining transforms isn't exactly easy, because of the way transforms recurse. For example, if I have two transforms that process disjoint sets of nodes, I can't really turn them into a single transform.
I'm not necessarily opposed to the idea, but how would combining transforms work? Would each node be matched against multiple transform objects, using whichever one works, or something else?
Probably the behavior is to choose the first one that matches. That would make it easy to write a transform that overrides some behavior of another transform. (Note that I really have no idea how to make something like this actually work.)
- How useful is it to separate the Expression concept from the ExpressionTemplate concept?
Types and templates are not the same kind of entity. I don't know how to even go about combining these. Moreover, I think they should remain separated, because sometimes a function takes an Expression, sometimes not, sometimes it requires an ExpressionTemplate template parameter, and sometimes not. These are orthogonal requirements for the caller, no?
For example, transform requires an ExpressionTemplate for nodes that are not handled explicitly, but that isn't very clear from the documentation, which just says that it takes an Expression.
I don't understand what you mean. transform() does not require an extrinsic ExpressionTemplate template parameter, and does not use one. It just recycles the ExpressionTemplate that was originally used to instantiate whatever it needs to copy.
Let me rephrase the question: Is it useful to allow Expressions that are not instantiations of an ExpressionTemplate, given that transform can choke on such classes.
- You say that it's fine to mix and match expressions that are instantiated from different templates. Why would I want to do this? The first thing that comes to mind is combining two independent libraries that both use YAP, but I suspect that that won't work out very well. It seems a bit too easy for a transform to inadvertently recurse into an unrelated expression tree in this case.
I have two types, M and S (m and s are two objects of those respective types). M is matrix-like, and has m*m in its set of well-formed expressions; m[x] is ill-formed for any x. S is string-like, and has s[int(2)] in its set of well-formed expressions; s*s is ill-formed. M and S are in the same library, one that I maintain.
If I want to use these in Yap expressions, I probably want to be able to write m*m and s[5], but not m[2] or s*s. So I write two expression templates with the right operations defined:
template <...>
struct m_expr
{
    // ...
    BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(times, ::m_expr)
};

template <...>
struct s_expr
{
    // ...
    BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(subscript, ::s_expr)
};
Now I can write a Yap expression like:
lookup_matrix(S("my_matrix")) * some_matrix
and transform() it however I like. Requiring the two *_expr templates to be unified would be weird.
It seems a bit odd to have matrices and strings in one library, rather than matrices and vectors, but I get your point. Please consider adding something like this to the documentation.
- The value of a terminal cannot be another expression.
Is this a necessary restriction?
Not really; I probably put that restriction there due to lack of imagination. Terminals' values should be treated as simple types, but I'm pretty sure there's no real reason those types can't happen to model Expression.
(This is related to mixing different libraries using YAP, as the most common form is for library A to treat the expressions from library B as terminals. Since a terminal can't hold an Expression, we would need to somehow un-YAP them first.)
As I outlined above, I don't think that's going to be the most common case of mixing ExpressionTemplates in a single Yap expression.
That's why I said mixing different *libraries*, not just mixing different templates from a single library (which are most likely intended to work together). The basic idea is that library A creates expressions that are models of some concept C, while library B expects a terminal that models the same concept C. Library A and library B know nothing about each other, and are only connected because they both know about C.
- Unwrapping terminals redux: Unwrapping terminals is convenient when you have terminals of the form: struct xxx_tag {}; auto xxx = make_terminal(xxx_tag{}); AND you are explicitly matching this terminal with auto operator()(call_tag, xxx_tag, T...) or the like. I expect that matching other terminals like double or int is somewhat more rare in real code. If you are matching any subexpression, then unwrapping terminals is usually wrong.
Why do you say that? The only reason it is the way it is now is convenience, as you might have guessed. When I know I want to match a particular terminal of a particular type, I'd rather write:
auto operator()(some_unary_tag, some_type && x);
auto operator()(some_unary_tag, some_type const & x);

versus:

auto operator()(some_unary_tag, my_expr<yap::expr_kind::terminal, some_type &&> const & x);
auto operator()(some_unary_tag, my_expr<yap::expr_kind::terminal, some_type const &> const & x);
This latter form is what I started with, and I very quickly grew tired of writing all that.
You can always use a typedef or alias. i.e. my_term<some_type&&>.
If I match a generic expression that may or may not be a terminal, I do have to use yap::as_expr(), which is definitely an inconvenience:
template <typename Expr>
auto operator()(some_unary_tag, Expr const & x)
{
    return some_function_of(yap::as_expr(x));
}
My intuition is that this is the most common pattern, and should be what we optimize for. Also, changing terminals to yap::expression may cause random weirdness if the transform returns an Expression. In addition, I feel that needing to call as_expr is more surprising than needing to write out the terminal as an expression. Actually... I have a much better idea. Why don't you allow transform for a non-expr to match a terminal tag transform? <snip> In Christ, Steven Watanabe

On Tue, Feb 20, 2018 at 9:29 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/19/2018 08:38 PM, Zach Laine via Boost wrote:
On Mon, Feb 19, 2018 at 11:13 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
[snip]
Unfortunately, transform/evaluate doesn't work well if evaluation involves any special constructs that affect control-flow. Just try to handle something like this: let(_a=2) [ _a + 1 ] // evaluates to 3
That does not look problematic to me, though I don't know what the intended semantics are.
http://www.boost.org/libs/phoenix/doc/html/phoenix/modules/scope/let.html
If _a is a terminal and refers to something for which "= 2" and "+1" are well-formed, I would expect even evaluate() to do the right thing.
_a is a placeholder. _a = 2 is not an assignment to some external object. It is a variable definition that is scoped within the let expression. It's not really possible to transform this into something that can be evaluated in a single pass. I claim that it is impossible to implement `let` using Yap without duplicating all the work of evaluate. (Keep in mind that `let` can be nested and can be mixed with if_else.)
That looks like a great candidate for an example, so I made one out of it: https://github.com/tzlaine/yap/commit/4b383f9343a2a8affaf132c5be1eeb99a56e58... It took most of the examples from the documentation page you posted above, and it works for them, including nesting. For instance:

boost::yap::evaluate(
    let(_a = 1_p, _b = 2_p)
    [
        // _a here is an int: 1
        let(_a = 3_p) // hides the outer _a
        [
            cout << _a << _b // prints "Hello, World"
        ]
    ],
    1, " World", "Hello,"
);

That's verbatim from the Phoenix docs (except for the yap::evaluate() call of course), with the same behavior. The entire example is only 158 lines, including empty lines and some comment lines. The trick is to make let() a regular eager function and leave everything else lazy Yap expression stuff. I don't know if this counts as evaluation in "a single pass" as you first mentioned, but I don't care, because the user won't care either -- she can't really tell. [snip]
I don't yet get what you're suggesting here. Right now, transform()
returns whatever you tell it to,
The issue is not what I'm telling it to do, but rather what it does on its own when I don't tell it anything.
I take it from this that you mean the copying of nodes unmatched by the transform object. If so, I think this is covered by transform_strict() (as I'm provisionally calling it), that hard-errors on unmatched nodes. Does that suffice, or are there other aspects you find problematic?
except that it doesn't special-case void return. You can (and I have, extensively in the examples) return an Expression or non-Expression from a transform, or from any subset of overloads your transform object provides. What is the behavior that you're suggesting?
You've chosen to make it impossible to customize the behavior of evaluate. I believe that Brook brought up evaluate_with_context, which is basically what I want.
Yes, I have. I don't believe you actually want that, though. Such a tailored evaluate() would look something like this:

evaluate_with_context(expr, context, placeholder_subs...); // the subs are of course optional

What does evaluate_with_context now do? Let's say expr is "a + b". Does the context only apply to the evaluation of a and b as terminals, or does it apply to the plus operation as well? Are such applications of the context conditional? How does the reader quickly grasp what the evaluate_with_context() call does? This seems like really muddy code to me. If you have something else in mind, please provide more detail -- I may of course be misunderstanding you.
This has the side effect that you must explicitly wrap terminals when returning, but I think that's actually a good thing, as a transform that returns unwrapped terminals, expecting them to be wrapped by the caller, may have inconsistent behaviour. In addition, there is another possible mode that has better type safety which I will call: 4. manual: No default behavior. If a node is not handled explicitly, it is a hard compile error.
But this isn't transform() at all, because it doesn't recurse. It only matches the top-level expression, or you get a hard error. Why not just write my_func() that takes a certain expression and returns another?
You're right. The only benefit is that tag transforms are a bit more convenient. Also, it allows for a more consistent interface.
Calling it with the wrong type of expression will result in your desired hard error. Wait, is the use case that I think my transform matches all subexpressions within the top-level expression, and I want to verify that this is the case? I don't know how often that will come up. I can't think of a single time I've written a Yap transform expecting it to match all nodes, except to evaluate it. It could be useful in those cases now that I think of it.
If you're building a completely new grammar whose meaning has no relationship to the built in meaning of the operators, (e.g. Spirit), then you basically have to handle everything explicitly.
Ok, I'm convinced this is a good idea. I've added a GitHub ticket about creating a transform_strict(), as I mentioned above.
- Combining transforms isn't exactly easy, because of the way transforms recurse. For example, if I have two transforms that process disjoint sets of nodes, I can't really turn them into a single transform.
I'm not necessarily opposed to the idea, but how would combining transforms work? Would each node be matched against multiple transform objects, using whichever one works, or something else?
Probably the behavior is to choose the first one that matches. That would make it easy to write a transform that overrides some behavior of another transform. (Note that I really have no idea how to make something like this actually work.)
Well if *you* don't... :) I'll wait until you get back to me with more details before I bite off such a feature.
- How useful is it to separate the Expression concept from the ExpressionTemplate concept?
Types and templates are not the same kind of entity. I don't know how to even go about combining these. Moreover, I think they should remain separated, because sometimes a function takes an Expression, sometimes not, sometimes it requires an ExpressionTemplate template parameter, and sometimes not. These are orthogonal requirements for the caller, no?
For example, transform requires an ExpressionTemplate for nodes that are not handled explicitly, but that isn't very clear from the documentation, which just says that it takes an Expression.
I don't understand what you mean. transform() does not require an extrinsic ExpressionTemplate template parameter, and does not use one. It just recycles the ExpressionTemplate that was originally used to instantiate whatever it needs to copy.
Let me rephrase the question: Is it useful to allow Expressions that are not instantiations of an ExpressionTemplate, given that transform can choke on such classes.
Hm. I think this is a documentation failure. I need to look and see what the actual requirements are. I think it should always be fine to have non-template-derived Expressions in terminals. This is where one usually writes them anyway.
- You say that it's fine to mix and match expressions that are instantiated from different templates. Why would I want to do this? The first thing that comes to mind is combining two independent libraries that both use YAP, but I suspect that that won't work out very well. It seems a bit too easy for a transform to inadvertently recurse into an unrelated expression tree in this case.
I have two types, M and S (m and s are two objects of those respective types). M is matrix-like, and has m*m in its set of well-formed expressions; m[x] is ill-formed for any x. S is string-like, and has s[int(2)] in its set of well-formed expressions; s*s is ill-formed. M and S are in the same library, one that I maintain.
If I want to use these in Yap expressions, I probably want to be able to write m*m and s[5], but not m[2] or s*s. So I write two expression templates with the right operations defined:
template <...>
struct m_expr
{
    // ...
    BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(times, ::m_expr)
};

template <...>
struct s_expr
{
    // ...
    BOOST_YAP_USER_BINARY_OPERATOR_MEMBER(subscript, ::s_expr)
};
Now I can write a Yap expression like:
lookup_matrix(S("my_matrix")) * some_matrix
and transform() it however I like. Requiring the two *_expr templates to be unified would be weird.
It seems a bit odd to have matrices and strings in one library, rather than matrices and vectors, but I get your point. Please consider adding something like this to the documentation.
Will do.
- The value of a terminal cannot be another expression.
Is this a necessary restriction?
Not really; I probably put that restriction there due to lack of imagination. Terminals' values should be treated as simple types, but I'm pretty sure there's no real reason those types can't happen to model Expression.
(This is related to mixing different libraries using YAP, as the most common form is for library A to treat the expressions from library B as terminals. Since a terminal can't hold an Expression, we would need to somehow un-YAP them first.)
As I outlined above, I don't think that's going to be the most common case of mixing ExpressionTemplates in a single Yap expression.
That's why I said mixing different *libraries*, not just mixing different templates from a single library (which are most likely intended to work together). The basic idea is that library A creates expressions that are models of some concept C, while library B expects a terminal that models the same concept C. Library A and library B know nothing about each other, and are only connected because they both know about C.
I want to see a compelling example of this causing a problem. My expectation is that my transform overloads will only match the types or patterns of types I specify. If I visit N nodes in a tree, and half of them are from library A and half from library B, my transform overloads are probably going to match only within the half that I anticipated. If this is not the case, it's probably because I've written the overloads loosely on purpose, or too loosely and need to fix my code.
- Unwrapping terminals redux: Unwrapping terminals is convenient when you have terminals of the form: struct xxx_tag {}; auto xxx = make_terminal(xxx_tag{}); AND you are explicitly matching this terminal with auto operator()(call_tag, xxx_tag, T...) or the like. I expect that matching other terminals like double or int is somewhat more rare in real code. If you are matching any subexpression, then unwrapping terminals is usually wrong.
Why do you say that? The only reason it is the way it is now is convenience, as you might have guessed. When I know I want to match a particular terminal of a particular type, I'd rather write:
auto operator()(some_unary_tag, some_type && x); auto operator()(some_unary_tag, some_type const & x);
versus:
auto operator()(some_unary_tag, my_expr<yap::expr_kind::terminal, some_type &&> const & x); auto operator()(some_unary_tag, my_expr<yap::expr_kind::terminal, some_type const &> const & x);
This latter form is what I started with, and I very quickly grew tired of writing all that.
You can always use a typedef or alias. i.e. my_term<some_type&&>.
I decided to conduct this experiment and see how it went. I removed the terminal_value() function and all its uses from default_eval.hpp; this is all that was required to disable terminal unwrapping. Almost immediately, I ran into something I did not want to deal with. From one of the tests: decltype(auto) operator()( yap::expr_tag<yap::expr_kind::call>, tag_type, double a, double b) { /* ... */ } Now, the tag type part is not so bad. As you mentioned, I can just write my_term<tag_type> or similar. Now, what about the doubles? In order to match one of those, I need to pick my_term<double>, my_term<double &>, my_term<double const &>, etc., because the normal reference binding and copying rules of parameter passing can no longer help me. It doesn't matter that I only want to use "b" as a double by-value, it matters what the user happened to write in the expression I'm transforming. The alternative is of course to write generic code to catch a plain ol' double, which I object to philosophically and pragmatically. We're back to writing a and b as AExpr and BExpr template-parameterized types, but because those might also match some ints or std::strings or whatever, we're also going to need to write some sfinae or concept constraints.
If I match a generic expression that may or may not be a terminal, I do have to use yap::as_expr(), which is definitely an inconvenience:
template <typename Expr> auto operator()(some_unary_tag, Expr const & x) { return some_function_of(yap::as_expr(x)); }
My intuition is that this is the most common pattern, and should be what we optimize for. Also, changing terminals to yap::expression may cause random weirdness if the transform returns an Expression. In addition, I feel that needing to call as_expr is more surprising than needing to write out the terminal as an expression.
I agree that this is most likely the more common pattern. It's just that when you *do* want to match terminals, especially more than one at the same time, the pain of doing it without terminal unwrapping is far greater than the pain of using as_expr() in code like the common case above.
Actually... I have a much better idea. Why don't you allow transform for a non-expr to match a terminal tag transform?
I've read this a few times now and cannot parse. Could you rephrase? Zach

AMDG On 02/20/2018 11:16 PM, Zach Laine via Boost wrote:
On Tue, Feb 20, 2018 at 9:29 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote: <snip> That looks like a great candidate for an example, so I made one out of it:
https://github.com/tzlaine/yap/commit/4b383f9343a2a8affaf132c5be1eeb99a56e58...
It took most of the examples from the documentation page you posted above, and it works for them, including nesting. For instance:
boost::yap::evaluate(
    let(_a = 1_p, _b = 2_p)
    [
        // _a here is an int: 1
        let(_a = 3_p) // hides the outer _a
        [
            cout << _a << _b // prints "Hello, World"
        ]
    ],
    1, " World", "Hello,"
);
That's verbatim from the Phoenix docs (except for the yap::evaluate() call of course), with the same behavior. The entire example is only 158 lines, including empty lines and some comment lines. The trick is to make let() a regular eager function and leave everything else lazy Yap expression stuff. I don't know if this counts as evaluation in "a single pass" as you first mentioned, but I don't care, because the user won't care either -- she can't really tell.
[snip]
evaluate(let(_a = 1_p << 3) [ _a << "1", _a << "2" ], std::cout); // prints 3132, but should print 312 Also, let(_a=_a+_a)[let(_a=_a+_a)[let(_a=_a+_a)[...]]] has exponential cost.
I don't yet get what you're suggesting here. Right now, transform()
returns whatever you tell it to,
The issue is not what I'm telling it to do, but rather what it does on its own when I don't tell it anything.
I take it from this that you mean the copying of nodes unmatched by the transform object. If so, I think this is covered by transform_strict() (as I'm provisionally calling it), which hard-errors on unmatched nodes. Does that suffice, or are there other aspects you find problematic?
except that it doesn't special-case void return. You can (and I have, extensively, in the examples) return an Expression or non-Expression from a transform, or from any subset of overloads your transform object provides. What is the behavior that you're suggesting?
You've chosen to make it impossible to customize the behavior of evaluate. I believe that Brook brought up evaluate_with_context, which is basically what I want.
Yes, I have. I don't believe you actually want it otherwise, though. Such a tailored evaluate() would look something like this:
evaluate_with_context(expr, context, placeholder_subs...); // the subs are of course optional
What does evaluate_with_context now do? Let's say expr is "a + b". Does the context only apply to the evaluation of a and b as terminals, or does it apply to the plus operation as well?
It applies to the plus operation first. If the context has a handler for plus, then it's up to the context to handle recursion. If it does not, then it becomes evaluate_with_context(a, ctx, x...) + evaluate_with_context(b, ctx, x...)
Are such applications of the context conditional? How does the reader quickly grasp what the evaluate_with_context() call does? This seems like really muddy code to me. If you have something else in mind, please provide more detail -- I may of course be misunderstanding you.
My idea is that it would behave exactly like transform, except that the default behavior for nodes that are not handled by the context is to evaluate the operators instead of building a new expression.
<snip>
- Unwrapping terminals redux: Unwrapping terminals is convenient when you have terminals of the form: struct xxx_tag {}; auto xxx = make_terminal(xxx_tag{}); AND you are explicitly matching this terminal with auto operator()(call_tag, xxx_tag, T...) or the like. I expect that matching other terminals like double or int is somewhat more rare in real code. If you are matching any subexpression, then unwrapping terminals is usually wrong.
Why do you say that? The only reason it is the way it is now is convenience, as you might have guessed. When I know I want to match a particular terminal of a particular type, I'd rather write:
auto operator()(some_unary_tag, some_type && x); auto operator()(some_unary_tag, some_type const & x);
versus:
auto operator()(some_unary_tag, my_expr<yap::expr_kind::terminal, some_type &&> const & x); auto operator()(some_unary_tag, my_expr<yap::expr_kind::terminal, some_type const &> const & x);
This latter form is what I started with, and I very quickly grew tired of writing all that.
You can always use a typedef or alias. i.e. my_term<some_type&&>.
I decided to conduct this experiment and see how it went. I removed the terminal_value() function and all its uses from default_eval.hpp; this is all that was required to disable terminal unwrapping. Almost immediately, I ran into something I did not want to deal with. From one of the tests:
decltype(auto) operator()( yap::expr_tag<yap::expr_kind::call>, tag_type, double a, double b) { /* ... */ }
Personally, I believe that this is probably a very rare situation outside of test cases and toy examples.
Now, the tag type part is not so bad. As you mentioned, I can just write my_term<tag_type> or similar. Now, what about the doubles? <snip>
If I match a generic expression that may or may not be a terminal, I do have to use yap::as_expr(), which is definitely an inconvenience:
template <typename Expr> auto operator()(some_unary_tag, Expr const & x) { return some_function_of(yap::as_expr(x)); }
My intuition is that this is the most common pattern, and should be what we optimize for. Also, changing terminals to yap::expression may cause random weirdness if the transform returns an Expression. In addition, I feel that needing to call as_expr is more surprising than needing to write out the terminal as an expression.
I agree that this is most likely the more common pattern. It's just that when you *do* want to match terminals, especially more than one at the same time, the pain of doing it without terminal unwrapping is far greater than the pain of using as_expr() in code like the common case above.
Actually... I have a much better idea. Why don't you allow transform for a non-expr to match a terminal tag transform?
I've read this a few times now and cannot parse. Could you rephrase?
struct F
{
    auto operator()(terminal_tag, int x) { return x * 2; }
};

transform(3, F{}); // returns 6

This allows you to avoid the need for as_expr. I think this behavior is consistent, when you also unwrap terminals, as you would essentially be treating terminals and the raw values as being interchangeable. In Christ, Steven Watanabe

On Wed, Feb 21, 2018 at 10:28 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On Tue, Feb 20, 2018 at 9:29 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote: <snip> That looks like a great candidate for an example, so I made one out of it:
https://github.com/tzlaine/yap/commit/4b383f9343a2a8affaf132c5be1eeb99a56e58df
It took most of the examples from the documentation page you posted above, and it works for them, including nesting. For instance:
boost::yap::evaluate(
    let(_a = 1_p, _b = 2_p)
    [
        // _a here is an int: 1
        let(_a = 3_p) // hides the outer _a
        [
            cout << _a << _b // prints "Hello, World"
        ]
    ],
    1, " World", "Hello,"
);
That's verbatim from the Phoenix docs (except for the yap::evaluate() call of course), with the same behavior. The entire example is only 158 lines, including empty lines and some comment lines. The trick is to make let() a regular eager function and leave everything else lazy Yap expression stuff. I don't know if this counts as evaluation in "a single pass" as you first mentioned, but I don't care, because the user won't care either -- she can't really tell.
[snip]
evaluate(let(_a = 1_p << 3) [ _a << "1", _a << "2" ], std::cout); // prints 3132, but should print 312
Why should that print out 312? Isn't it equivalent to: std::cout << 3 << "1", std::cout << 3 << "2"? If not, why not?
Also, let(_a=_a+_a)[let(_a=_a+_a)[let(_a=_a+_a)[...]]] has exponential cost.
Sure. It's also not allowed, though. From the let docs: The RHS (right hand side lambda-expression) of each local-declaration cannot refer to any LHS local-id. At this point, the local-ids are not in scope yet; they will only be in scope in the let-body. The code below is in error:

let( _a = 1
   , _b = _a // Error: _a is not in scope yet
)
[
    // _a and _b's scope starts here
    /*. body .*/
]

Checking this is an exercise left for the reader. :) [snip]
What does evaluate_with_context now do? Let's say expr is "a + b". Does
the context only apply to the evaluation of a and b as terminals, or does it apply to the plus operation as well?
It applies to the plus operation first. If the context has a handler for plus, then it's up to the context to handle recursion. If it does not, then it becomes evaluate_with_context(a, ctx, x...) + evaluate_with_context(b, ctx, x...)
Are such applications of the context conditional? How does the reader quickly grasp what the evaluate_with_context() call does? This seems like really muddy code to me. If you have something else in mind, please provide more detail -- I may of course be misunderstanding you.
My idea is that it would behave exactly like transform, except that the default behavior for nodes that are not handled by the context is to evaluate the operators instead of building a new expression.
Ah, I think I get it now. This is really a transform_evaluate() then? If so, that does sound useful. [snip]
I decided to conduct this experiment and see how it went. I removed the
terminal_value() function and all its uses from default_eval.hpp; this is all that was required to disable terminal unwrapping. Almost immediately, I ran into something I did not want to deal with. From one of the tests:
decltype(auto) operator()( yap::expr_tag<yap::expr_kind::call>, tag_type, double a, double b) { /* ... */ }
Personally, I believe that this is probably a very rare situation outside of test cases and toy examples.
This may prove to be true, though I'm not convinced now that it is (at least not the "very rare" part). If it is, I still think the current design is preferable to the alternative. This follows the "make simple things easy and complicated things possible" philosophy. Not unwrapping terminals is a lot more painful than I think you're allowing, in those cases where you *do* want to match terminals. [snip]
Actually... I have a much better idea. Why
don't you allow transform for a non-expr to match a terminal tag transform?
I've read this a few times now and cannot parse. Could you rephrase?
struct F { auto operator()(terminal_tag, int x) { return x * 2; } };
transform(3, F{}); // returns 6
This allows you to avoid the need for as_expr. I think this behavior is consistent, when you also unwrap terminals, as you would essentially be treating terminals and the raw values as being interchangeable.
I like this idea at first glance, but I'll need to look at the places where transform() applied to a non-Expression is acting as a no-op for a reason. Zach

AMDG On 02/21/2018 10:11 AM, Zach Laine via Boost wrote:
On Wed, Feb 21, 2018 at 10:28 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
On 02/20/2018 11:16 PM, Zach Laine via Boost wrote:
On Tue, Feb 20, 2018 at 9:29 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote: <snip> That looks like a great candidate for an example, so I made one out of it:
https://github.com/tzlaine/yap/commit/4b383f9343a2a8affaf132c5be1eeb99a56e58df
<snip> [snip]
evaluate(let(_a = 1_p << 3) [ _a << "1", _a << "2" ], std::cout); // prints 3132, but should print 312
Why should that print out 312? Isn't it equivalent to:
std::cout << 3 << "1", std::cout << 3 << "2"
? If not, why not?
The expected behavior is that _a is a local variable, not macro-like substitution.

let(_a=<expr>) [ <body> ]

should be equivalent to

auto _a = <expr>; { <body>; }

rather than:

#define _a <expr>
<body>;
Also, let(_a=_a+_a)[let(_a=_a+_a)[let(_a=_a+_a)[...]]] has exponential cost.
Sure. It's also not allowed, though. From the let docs:
Right. Assume the whole thing is wrapped in let(_a=1_p)[] and then it is legal. (each _a refers to the _a in the outer scope.) In Christ, Steven Watanabe

On Wed, Feb 21, 2018 at 11:31 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/21/2018 10:11 AM, Zach Laine via Boost wrote:
On Wed, Feb 21, 2018 at 10:28 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
On 02/20/2018 11:16 PM, Zach Laine via Boost wrote:
On Tue, Feb 20, 2018 at 9:29 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote: <snip> That looks like a great candidate for an example, so I made one out of it:
https://github.com/tzlaine/yap/commit/4b383f9343a2a8affaf132c5be1eeb99a56e58df
<snip> [snip]
evaluate(let(_a = 1_p << 3) [ _a << "1", _a << "2" ], std::cout); // prints 3132, but should print 312
Why should that print out 312? Isn't it equivalent to:
std::cout << 3 << "1", std::cout << 3 << "2"
? If not, why not?
The expected behavior is that _a is a local variable, not macro-like substitution.
let(_a=<expr>) [ <body> ]
should be equivalent to
auto _a = <expr>; { <body>; }
rather than:
#define _a <expr> <body>;
Also, let(_a=_a+_a)[let(_a=_a+_a)[let(_a=_a+_a)[...]]] has exponential cost.
Sure. It's also not allowed, though. From the let docs:
Right. Assume the whole thing is wrapped in let(_a=1_p)[] and then it is legal. (each _a refers to the _a in the outer scope.)
I think I get it now. It seems evaluation *must* happen outer-to-inner, not inner-to-outer, and that you need substitutions for things like placeholders so you can do partial evaluations in order to figure out what type a let-placeholder will be. This is the first really compelling use case I have seen for why transform_evaluate() is necessary. Zach

On Feb 21, 2018, at 9:28 AM, Steven Watanabe via Boost <boost@lists.boost.org> wrote:

What does evaluate_with_context now do? Let's say expr is "a + b". Does the context only apply to the evaluation of a and b as terminals, or does it apply to the plus operation as well?

It applies to the plus operation first. If the context has a handler for plus, then it's up to the context to handle recursion. If it does not, then it becomes evaluate_with_context(a, ctx, x...) + evaluate_with_context(b, ctx, x...)

Are such applications of the context conditional? How does the reader quickly grasp what the evaluate_with_context() call does? This seems like really muddy code to me. If you have something else in mind, please provide more detail -- I may of course be misunderstanding you.

My idea is that it would behave exactly like transform, except that the default behavior for nodes that are not handled by the context is to evaluate the operators instead of building a new expression.

For the record, this is exactly what I had in mind when I was originally suggesting evaluate_with_context(). It seems to me that this does not introduce new surprises or require "code spelunking". As it is, to understand the meaning of evaluate(), one might have to wander all over a code base to find all the relevant definitions. Perhaps this is fine for well-understood types, but it does not lend itself to types with no natural operator definitions, for which operators might be better _not_ defined. In such cases, it might actually be easier to just look at a single "context" class for overloads. It is certainly not a surprising place to look if the documentation for evaluate_with_context() specifies something like what Steven wrote above.
Furthermore, it would mean that certain types of grammars that are not tightly tied to naturally defined C++ operators (Steven's mention of Spirit is apt here) could be written much more easily (and with more type safety) by providing different context classes (perhaps Qi v. Karma is relevant) for different calls to evaluate_with_context(). I still feel that this is a valid use case that should be supported. Cheers, Brook

On Wed, Feb 21, 2018 at 11:15 AM, Brook Milligan via Boost < boost@lists.boost.org> wrote:
On Feb 21, 2018, at 9:28 AM, Steven Watanabe via Boost <boost@lists.boost.org> wrote:
What does evaluate_with_context now do? Let's say expr is "a + b". Does the context only apply to the evaluation of a and b as terminals, or does it apply to the plus operation as well?
It applies to the plus operation first. If the context has a handler for plus, then it's up to the context to handle recursion. If it does not, then it becomes evaluate_with_context(a, ctx, x...) + evaluate_with_context(b, ctx, x...)
Are such applications of the context conditional? How does the reader quickly grasp what the evaluate_with_context() call does? This seems like really muddy code to me. If you have something else in mind, please provide more detail -- I may of course be misunderstanding you.
My idea is that it would behave exactly like transform, except that the default behavior for nodes that are not handled by the context is to evaluate the operators instead of building a new expression.
For the record, this is exactly what I had in mind when I was originally suggesting evaluate_with_context(). It seems to me that this does not introduce new surprises or require "code spelunking". As it is, to understand the meaning of evaluate(), one might have to wander all over a code base to find all the relevant definitions. Perhaps this is fine for well-understood types, but it does not lend itself to types with no natural operator definitions, for which operators might be better _not_ defined. In such cases, it might actually be easier to just look at a single "context" class for overloads. It is certainly not a surprising place to look if the documentation for evaluate_with_context() specifies something like what Steven wrote above. Furthermore, it would mean that certain types of grammars that are not tightly tied to naturally defined C++ operators (Steven's mention of Spirit is apt here) could be written much more easily (and with more type safety) by providing different context classes (perhaps Qi v. Karma is relevant) for different calls to evaluate_with_context(). I still feel that this is a valid use case that should be supported.
I agree. I've added this to the GitHub issues list. I'm probably going to call it transform_evaluate(), which is closer to what it does IMO than evaluate_with_context(). I truly did not understand that this was what you meant. Zach

AMDG On 02/20/2018 11:16 PM, Zach Laine via Boost wrote:
On Tue, Feb 20, 2018 at 9:29 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote: <snip>
- Combining transforms isn't exactly easy, because of the way transforms recurse. For example, if I have two transforms that process disjoint sets of nodes, I can't really turn them into a single transform.
I'm not necessarily opposed to the idea, but how would combining transforms work? Would each node be matched against multiple transform objects, using whichever one works, or something else?
Probably the behavior is to choose the first one that matches. That would make it easy to write a transform that overrides some behavior of another transform. (Note that I really have no idea how to make something like this actually work.)
Well if *you* don't... :) I'll wait until you get back to me with more details before I bite off such a feature.
So, here's a concrete example of the problem I'm trying to solve: Suppose that Alice defines let(_a=1)[_a] as we talked about, along with a let_transform that can be used with transform_evaluate. Now, Bob also thinks that Yap is really cool and defines switch_(1_p)[case_<1>("one"), case_<2>("two")] with a corresponding switch_transform which, again, works with transform_evaluate. Finally, Carol comes along and wants to write: let(_a=5)[switch_(_a)[case_<5>("Five")]]. How do we combine let_transform and switch_transform? It may be that the best solution is for let_transform and switch_transform to use CRTP (which doesn't require anything from Yap). The original reason I was thinking this way is that if transforms were somehow composable, then we wouldn't need transform_evaluate, as it could be implemented like:

auto transform_evaluate(auto && expr, auto && t)
{ return transform(expr, combine_transforms(t, default_evaluate{})); }

In Christ, Steven Watanabe

On Wed, Feb 21, 2018 at 5:30 PM, Steven Watanabe via Boost < boost@lists.boost.org> wrote:
AMDG
On 02/20/2018 11:16 PM, Zach Laine via Boost wrote:
On Tue, Feb 20, 2018 at 9:29 AM, Steven Watanabe via Boost < boost@lists.boost.org> wrote: <snip>
- Combining transforms isn't exactly easy, because of the way transforms recurse. For example, if I have two transforms that process disjoint sets of nodes, I can't really turn them into a single transform.
I'm not necessarily opposed to the idea, but how would combining transforms work? Would each node be matched against multiple transform objects, using whichever one works, or something else?
Probably the behavior is to choose the first one that matches. That would make it easy to write a transform that overrides some behavior of another transform. (Note that I really have no idea how to make something like this actually work.)
Well if *you* don't... :) I'll wait until you get back to me with more details before I bite off such a feature.
So, here's a concrete example of the problem I'm trying to solve:
Suppose that Alice defines let(_a=1)[_a] as we talked about, along with a let_transform that can be used with transform_evaluate. Now, Bob also thinks that Yap is really cool and defines switch_(1_p)[case_<1>("one"), case_<2>("two")] with a corresponding switch_transform which, again, works with transform_evaluate. Finally, Carol comes along and wants to write: let(_a=5)[switch_(_a)[case_<5>("Five")]]. How do we combine let_transform and switch_transform?
It may be that the best solution is for let_transform and switch_transform to use CRTP (which doesn't require anything from Yap).
The original reason I was thinking this way is that if transforms were somehow composable, then we wouldn't need transform_evaluate, as it could be implemented like: auto transform_evaluate(auto&& expr, auto&& t) { return transform(expr, combine_transforms(t, default_evaluate{})); }
It took some doing, but I really liked this idea, so I've implemented this. These major changes have just gone in: 1) Add the ability to pass an arbitrary number of transform objects to transform. At each expression, an attempt is made to match the tag-transforms of the first transform object, then the expression-transforms of the first transform object, the tag-transforms for the second transform object, the expression-transforms of the second transform object, etc. 2) Add a new function evaluation(), that returns a transform object that can be used to evaluate an expression. 3) Add a new transform_strict() function that gives a hard error on any failure to match an expression, but that is otherwise just like transform(). It's very light on testing, but it's working for all the tests and examples that use evaluate(), which is now implemented in terms of evaluation(). And the chained transforms work at least for simple cases. Zach
participants (14)
- a.hagen-zanker@surrey.ac.uk
- Barrett Adair
- Brook Milligan
- Edward Diener
- Fletcher, John P
- Louis Dionne
- Louis Dionne
- Niall Douglas
- P F
- paul
- Peter Dimov
- Robert
- Steven Watanabe
- Zach Laine