Re: [boost] [GSoC, MPL11] Community probe

Gonzalo Brito Gadeschi <g.brito <at> aia.rwth-aachen.de> writes:
I have a question: the proposal and library are called MPL11.
That is true, but I'm starting to realize that it's a very bad name. The project started about two years ago, when C++14 was still out of my sight. Be reassured that I always consider the latest version of the language, not only C++11.
In my experience, relaxed constexpr allows for beautiful metaprogramming compared to C++11, in particular when combined with Boost.Fusion. I think it is really a game changer with respect to C++11 constexpr, since it allows not only "functional" metaprogramming but also "imperative" metaprogramming, significantly lowering the learning curve of metaprogramming for C++ programmers.
Could you please expand on the nature of metaprogramming you have been doing with constexpr? Are you talking about manipulating values of a literal type, or about "pure type" computations? See below for more on that.
Have you considered the influence of relaxed constexpr (C++14) on your library?
Could it simplify the design/implementation/usage of the MPL?
I did consider the impact of C++14 on the design of the library, and I still am. At this point, my conclusion is that we must define what we mean by a "template metaprogramming library". I differentiate between two main kinds of computations that can be done at compile-time. The first is manipulating "pure types" with metafunctions (e.g. type traits):

    using void_pointer = std::add_pointer_t<void>;

    using pointers = mpl::transform<
        mpl::list<int, char, void, my_own_type>,
        mpl::quote<std::add_pointer>
    >::type;

The above example is completely artificial, but you get the point. The second kind of computation is manipulating values of a literal type at compile-time:

    using three = mpl::plus<mpl::int_<1>, mpl::int_<2>>::type;

The MPL wraps these values into types so it can treat computations on them as computations of the first kind, but another library could perhaps handle them differently (probably using constexpr):

    constexpr int three = plus(1, 2);

It is easy to see how constexpr (and relaxed constexpr) can make the second kind of computation easier to express, since that is exactly its purpose. However, it is much less clear how constexpr helps us with computations of the first kind. And by that I really mean that using constexpr in some way to perform those computations might be more cumbersome and less efficient than good old metafunctions.

As for using constexpr to express computations on literal values, another question arises. Should there be a general library for handling those at compile-time, or should it be the responsibility of a well-written domain-specific library to provide means to perform domain-specific computations at compile-time whenever possible? For example, should the MPL11 provide constexpr math functions, or should the std:: math functions be constexpr whenever possible? Another example is std::array.
While the MPL11 could provide a constexpr container for homogeneous values of a literal type, simply changing the `begin` and `end` of `std::array` to be constexpr would make it possible to iterate over the array at compile-time:

    template <typename T, std::size_t n>
    constexpr T sum(std::array<T, n> array) {
        T s{0};
        for (T i : array) // awesome
            s += i;
        return s;
    }

    constexpr std::array<int, 5> array{{1, 2, 3, 4, 5}};
    static_assert(sum(array) == 15, "");

Unfortunately, I don't think this is happening in C++14, but I could be mistaken. Still, you certainly understand the implications.

So a valid question that must be answered before I/we can come up with a "final" version of the library that can be proposed to Boost (or for standardization) is: "What is the purpose of a TMP library?" Once that is well defined, we won't be shooting at a moving target anymore.

Right now, I have avoided these questions as much as possible by focusing on computations of the first kind. For those computations, my research so far shows that constexpr is unlikely to be of any help. If someone can come up with counter-examples or ideas that seem to refute this, _please_ let me know and I'll even buy you a beer in Aspen. This is _very_ important; it's central to my current work.
Would a rewrite of MPL11 be necessary afterwards (MPL14)?
If constexpr turns out to be a game changer for computations of the first kind, then I suspect a large part of the current MPL11 would have to be rewritten. Of course, I tried to prevent this from happening by going slowly and looking at all the possibilities, but these things happen and it's part of the game. As for computations of the second kind, not _much_ work (comparatively) has been done on them because I knew it was a can of worms. So even drastic changes would not kill the library as it stands.

Finally, if any of this does not make sense to someone, please let me know (on this list or privately) so I can explain further and correct any fallacious reasoning on my part. I hope this answers your questions; sorry for the long-winded answer, but I think it can be informative for others too.

Regards,
Louis

On 29 Apr 2014 at 21:40, Louis Dionne wrote:
I have a question: the proposal and library are called MPL11.
That is true, but I'm starting to realize that it's a very bad name. The project started about two years ago, when C++14 was still out of my sight. Be reassured that I always consider the latest version of the language, not only C++11.
What's wrong with MPL v2.0?

Niall
-- Currently unemployed and looking for work in Ireland. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/

On 2 May 2014 at 23:35, Louis Dionne wrote:
What's wrong with MPL v2.0?
In what namespace should the library live?
There is Boost precedent for this, e.g. boost::phoenix or boost::python. In short, your namespace would be boost::mpl. MPL98 might be rehoused into boost::mpl::v1 and yours might be mapped into boost::mpl::v2. A C macro define then maps the user-selected namespace version into boost::mpl, with the default usually being the most recent version.

For now you can use boost::mpl::v2 safely. If you just happened to invest the work in remapping the existing MPL into boost::mpl::v1 in preparation, while keeping v1 mapped into boost::mpl, I doubt anyone here would complain.

Niall

Niall Douglas <s_sourceforge <at> nedprod.com> writes:
In short, your namespace would be boost::mpl. MPL98 might be rehoused into boost::mpl::v1 and yours might be mapped into boost::mpl::v2. A C macro define then maps the user selected namespace version into boost::mpl, with the default usually being the most recent version.
That is good to know -- thanks. For now I will leave it as boost::mpl11 because I have more urgent things to do, but I'll eventually rename it. Louis

On Sat, May 3, 2014 at 4:05 PM, Niall Douglas <s_sourceforge@nedprod.com> wrote:
On 2 May 2014 at 23:35, Louis Dionne wrote:
What's wrong with MPL v2.0?
In what namespace should the library live?
There is Boost precedent for this e.g. boost::phoenix or boost::python.
In short, your namespace would be boost::mpl. MPL98 might be rehoused into boost::mpl::v1 and yours might be mapped into boost::mpl::v2. A C macro define then maps the user selected namespace version into boost::mpl, with the default usually being the most recent version.
For now you can use boost::mpl::v2 safely. If you just happened to invest the work in remapping existing MPL into boost::mpl::v1 in preparation but keeping v1 mapped into boost::mpl, I would doubt anyone here would complain.
The other precedent is boost::signals2
Niall

On 3 May 2014 at 19:00, Klaim - Joël Lamotte wrote:
There is Boost precedent for this e.g. boost::phoenix or boost::python.
The other precedent is boost::signals2
True, but he's planning for the old MPL and the new MPL to interoperate. If they were completely dimorphic, then sure, I'd go for a new namespace.

Niall

On Tue, Apr 29, 2014 at 4:40 PM, Louis Dionne <ldionne.2@gmail.com> wrote:
Gonzalo Brito Gadeschi <g.brito <at> aia.rwth-aachen.de> writes:
In my experience, relaxed constexpr allows for beautiful metaprogramming with respect to C++11, in particular when combined with Boost.Fusion, and I think that it is really a game changer with respect to C++11 constexpr, since it allows not only "functional" metaprogramming but also "imperative" metaprogramming, significantly lowering the learning curve of metaprogramming for C++ programmers.
Could you please expand on the nature of metaprogramming you have been doing with constexpr? Are you talking about manipulating values of a literal type, or about "pure type" computations? See below for more on that.
I agree, but even more so when it comes to C++14, due to the availability of generalized automatic return type deduction. This plus relaxed constexpr has *completely* changed the way I write metaprograms. We might be talking about something slightly different, though, as I've found that std::tuple used in conjunction with these two new language features means that I no longer need MPL or Fusion for most things. I still have need for MPL's sorted data structures, though.
Have you considered the influence of relaxed constexpr (C++14) on your library?
Could it simplify the design/implementation/usage of the MPL?
I did consider the impact of C++14 on the design of the library, and I still am. At this point, my conclusion is that we must define what we mean by a "template metaprogramming library".
I differentiate between two main kinds of computations that can be done at
compile-time. The first is manipulating "pure types" with metafunctions (e.g. type traits):
    using void_pointer = std::add_pointer_t<void>;

    using pointers = mpl::transform<
        mpl::list<int, char, void, my_own_type>,
        mpl::quote<std::add_pointer>
    >::type;
The above example is completely artificial but you get the point. The second kind of computation is manipulating values of a literal type at compile-time.
using three = mpl::plus<mpl::int_<1>, mpl::int_<2>>::type;
The MPL wraps these values into types so it can treat computations on those as computations of the first kind, but another library could perhaps handle them differently (probably using constexpr).
constexpr int three = plus(1, 2);
I recently decided to completely rewrite a library for linear algebra on heterogeneous types using Clang 3.4, which is C++14 feature-complete (modulo bugs). My library previously used lots of MPL and Boost.Fusion code, and was largely an unreadable mess. The new version only uses MPL's set, but no other MPL and no Fusion code, and is quite easy to understand (at least by comparison). The original version took me months of spare time to write, including lots of time trying to wrestle MPL and Fusion into doing what I needed them to do. The rewrite was embarrassingly easy; it took me about two weeks of spare time. I threw away entire files of return-type-computing metaprograms. The overall line count is probably 1/4 what it was before.

My library and its needs are probably atypical with respect to MPL usage overall, but are probably representative of much use of Fusion, so keep that in mind below. Here are the metaprogramming capabilities I needed for my Fusion-like data structures:

1) compile-time type traits, as above
2) simple compile-time computation, as above
3) purely compile-time iteration over every element of a single list of types
4) purely compile-time iteration over every pair of elements in two lists of types (for zip-like operations, e.g. elementwise matrix products)
5) runtime iteration over every element of a single tuple
6) runtime iteration over every pair of elements in two tuples (again, for zip-like operations)

For my purposes, operations performed at each iteration in 3 through 6 above may sometimes require the index of the iteration. Again, this is probably atypical.

1 is covered nicely by existing traits, and 2 is covered by ad hoc application-specific code (I don't see how a library helps here). There are several solutions that work for at least one of 3-6:

- Compile-time foldl(); I did mine as constexpr, simply for readability.
- Runtime foldl().
- Direct expansion of a template parameter pack; example:

    template <typename MatrixLHS, typename MatrixRHS, std::size_t ...I>
    auto element_prod_impl (
        MatrixLHS lhs,
        MatrixRHS rhs,
        std::index_sequence<I...>
    ) {
        return std::make_tuple(
            (tuple_access::get<I>(lhs) * tuple_access::get<I>(rhs))...
        );
    }

  (This produces the actual result of multiplying two matrices element-by-element (or at least the resulting matrix's internal tuple storage). I'm not really doing any metaprogramming here at all, and that's sort of the point. Any MPL successor should be as easy to use as the above was to write, or I'll always write the above instead. A library might help here, since I had to write similar functions to do elementwise division, addition, etc., but if a library solution has more syntactic weight than the function above, I won't be inclined to use it.)

- Ad hoc metafunctions and constexpr functions that iterate on type-lists.
- Ad hoc metafunctions and constexpr functions that iterate over the values [1..N).
- Ad hoc metafunctions and constexpr functions that iterate over [1..N) indices into a larger or smaller range of values.

I was unable to find much in common between my individual ad hoc implementations that I could lift up into library abstractions, or at least not without increasing the volume of code more than it was worth to me. I was going for simple and maintainable over abstract. Part of the lack of commonality was that in one case I needed indices for each iteration, in another I needed types, in another I needed to accumulate a result, in another I needed to return multiple values, etc. Finding an abstraction that buys you more than it costs you is difficult in such circumstances.

So, I'm full of requirements, and no answers. :) I hope this helps, if only with scoping. I'll be in Aspen if you want to discuss it there too.
It is easy to see how constexpr (and relaxed constexpr) can make the second kind of computation easier to express, since that is exactly its purpose. However, it is much less clear how constexpr helps us with computations of the first kind. And by that I really mean that using constexpr in some way to perform those computations might be more cumbersome and less efficient than good old metafunctions.
I've been using these to write less code, if only a bit less. Instead of:

    template <typename Tuple>
    struct meta;

    template <typename ...T>
    struct meta<std::tuple<T...>> {
        using type = /*...*/;
    };

I've been writing:

    template <typename ...T>
    constexpr auto meta (std::tuple<T...>) {
        return /*...*/;
    }

...and calling it as decltype(meta(std::tuple</*...*/>{})). This both eliminates the noise coming from having a base/specialization template pair instead of one template, and also removes the need for a *_t template alias and/or typename /*...*/::type.

[snip]
So a valid question that must be answered before I/we can come up with a "final" version of the library that can be proposed to Boost (or for standardization) is:
"What is the purpose of a TMP library?"
Once that is well defined, we won't be shooting at a moving target anymore. Right now, I have avoided these questions as much as possible by focusing on computations of the first kind. For those computations, my research so far shows that constexpr is unlikely to be of any help. If someone can come up with counter-examples or ideas that seem to refute this, _please_ let me know and I'll even buy you a beer in Aspen. This is _very_ important; it's central to my current work.
Zach

On 30 April 2014 15:03, Zach Laine wrote:
Here are the metaprogramming capabilities I needed for my Fusion-like data structures:
1) compile-time type traits, as above
2) simple compile-time computation, as above
3) purely compile-time iteration over every element of a single list of types
4) purely compile-time iteration over every pair of elements in two lists of types (for zip-like operations, e.g. elementwise matrix products)
5) runtime iteration over every element of a single tuple
6) runtime iteration over every pair of elements in two tuples (again, for zip-like operations)
For my purposes, operations performed at each iteration in 3 through 6 above may sometimes require the index of the iteration. Again, this is probably atypical.
1 is covered nicely by existing traits, and 2 is covered by ad hoc application-specific code (I don't see how a library helps here).
There are several solutions that work for at least one of 3-6:
- Compile-time foldl(); I did mine as constexpr, simply for readability.
- Runtime foldl().
- Direct expansion of a template parameter pack; example:
    template <typename MatrixLHS, typename MatrixRHS, std::size_t ...I>
    auto element_prod_impl (
        MatrixLHS lhs,
        MatrixRHS rhs,
        std::index_sequence<I...>
    ) {
        return std::make_tuple(
            (tuple_access::get<I>(lhs) * tuple_access::get<I>(rhs))...
        );
    }
(This produces the actual result of multiplying two matrices element-by-element (or at least the resulting matrix's internal tuple storage). I'm not really doing any metaprogramming here at all, and that's sort of the point.
I found your whole email very interesting, thanks for sharing your experience, but I wanted to comment on the point above. In some ways MPL is beautiful, for what it manages to do and the design and ideas present in it, but syntactically it is hideous and hairy. I'm extremely pleased that the addition of two "small" features (return type deduction and variadic templates, the latter enabling tuples and index sequences) to the core C++ language enables you to write code like that above, rather than jumping through complex hoops with MPL. That's a real success story in my opinion.
Any MPL successor should be as easy to use as the above was to write, or I'll always write the above instead.
I wholeheartedly agree. The MPL was necessary in the past because recreating even small parts of that framework was a massive undertaking. Now it's comparatively simple to do some things directly (or with a small ad-hoc utility) rather than needing to leverage chunks of the MPL. If MPL v2 isn't much easier to read and write than MPL then it will be a wasted opportunity.
I've been using these to write less code, if only a bit less.
Instead of:
    template <typename Tuple>
    struct meta;

    template <typename ...T>
    struct meta<std::tuple<T...>> {
        using type = /*...*/;
    };
I've been writing:
    template <typename ...T>
    constexpr auto meta (std::tuple<T...>) {
        return /*...*/;
    }
...and calling it as decltype(meta(std::tuple</*...*/>{})). This both eliminates the noise coming from having a base/specialization template pair instead of one template, and also removes the need for a *_t template alias and/or typename /*...*/::type.
Yes, I've found constexpr functions can greatly simplify some aspects of metaprogramming, although so far a lot of it has been seeing what other people are doing with them and I haven't quite got into the habit of using them fully myself.

Zach Laine <whatwasthataddress <at> gmail.com> writes: [...]
I recently decided to completely rewrite a library for linear algebra on heterogeneous types using Clang 3.4, which is c++14 feature-complete (modulo bugs). My library previously used lots of MPL and Boost.Fusion code, and was largely an unreadable mess. The new version only uses MPL's set, but no other MPL and no Fusion code, and is quite easy to understand (at least by comparison). The original version took me months of spare time to write, including lots of time trying to wrestle MPL and Fusion into doing what I needed them to do. The rewrite was embarrassingly easy; it took me about two weeks of spare time. I threw away entire files of return-type-computing metaprograms. The overall line count is probably 1/4 what it was before. My library and its needs are probably atypical with respect to MPL usage overall, but is probably representative of much use of Fusion, so keep that in mind below.
I looked at the Units-BLAS codebase (assuming that's what you were referring to) to get a better understanding of your use case. It was very helpful in understanding at least some of the requirements for a TMP library; thank you for that. In what follows, I sketch out possible solutions to some of your issues. I'm mostly thinking out loud.
Here are the metaprogramming capabilities I needed for my Fusion-like data structures:
1) compile-time type traits, as above
2) simple compile-time computation, as above
3) purely compile-time iteration over every element of a single list of types
4) purely compile-time iteration over every pair of elements in two lists of types (for zip-like operations, e.g. elementwise matrix products)
5) runtime iteration over every element of a single tuple
6) runtime iteration over every pair of elements in two tuples (again, for zip-like operations)
For my purposes, operations performed at each iteration in 3 through 6 above may sometimes require the index of the iteration. Again, this is probably atypical.
Some kind of counting range with a zip_with constexpr function should do the trick. Hence you could do (pseudocode):

    zip_with(your_constexpr_function, range_from(0), tuple1, ..., tupleN)

where range_from(n) produces a range from n to infinity. I have been able to implement zip_with, but I'm struggling to make it constexpr because I need a lambda somewhere in there. The range_from(n) should be quite feasible.
1 is covered nicely by existing traits, and 2 is covered by ad hoc application-specific code (I don't see how a library helps here).
I agree.
There are several solutions that work for at least one of 3-6:
- Compile-time foldl(); I did mine as constexpr, simply for readability.
- Runtime foldl().
- Direct expansion of a template parameter pack; example:
    template <typename MatrixLHS, typename MatrixRHS, std::size_t ...I>
    auto element_prod_impl (
        MatrixLHS lhs,
        MatrixRHS rhs,
        std::index_sequence<I...>
    ) {
        return std::make_tuple(
            (tuple_access::get<I>(lhs) * tuple_access::get<I>(rhs))...
        );
    }
(This produces the actual result of multiplying two matrices element-by-element (or at least the resulting matrix's internal tuple storage). I'm not really doing any metaprogramming here at all, and that's sort of the point. Any MPL successor should be as easy to use as the above was to write, or I'll always write the above instead. A library might help here, since I had to write similar functions to do elementwise division, addition, etc., but if a library solution has more syntactic weight than the function above, I won't be inclined to use it.)
I think direct expansion of parameter packs would not be required in this case if we had a zip_with operation:

    zip_with(std::multiplies<>{}, lhs, rhs)

However, there are other similar functions in your codebase that perform operations that are not covered by the standard function objects. For this, it would be _very_ useful to have constexpr lambdas.
- Ad hoc metafunctions and constexpr functions that iterate on type-lists.
- Ad hoc metafunctions and constexpr functions that iterate over the values [1..N).
- Ad hoc metafunctions and constexpr functions that iterate over [1..N) indices into a larger or smaller range of values.
I'm sorry, but I don't understand the last one. Do you mean iteration through a slice [1, N) of another sequence?
I was unable to find much in common between my individual ad hoc implementations that I could lift up into library abstractions, or at least not without increasing the volume of code more than it was worth to me. I was going for simple and maintainable over abstract. Part of the lack of commonality was that in one case, I needed indices for each iteration, in another one I needed types, in another case I needed to accumulate a result, in another I needed to return multiple values, etc. Finding an abstraction that buys you more than it costs you is difficult in such circumstances.
I do think it is possible to find nice abstractions, but they will be worth lifting into a separate library. I also think it will require departing from the usual STL concepts and going into FP-world.
So, I'm full of requirements, and no answers. :) I hope this helps, if only with scoping. I'll be in Aspen if you want to discuss it there too.
I'm looking forward to it.
It is easy to see how constexpr (and relaxed constexpr) can make the second kind of computation easier to express, since that is exactly its purpose. However, it is much less clear how constexpr helps us with computations of the first kind. And by that I really mean that using constexpr in some way to perform those computations might be more cumbersome and less efficient than good old metafunctions.
I've been using these to write less code, if only a bit less.
Instead of:
    template <typename Tuple>
    struct meta;

    template <typename ...T>
    struct meta<std::tuple<T...>> {
        using type = /*...*/;
    };
I've been writing:
    template <typename ...T>
    constexpr auto meta (std::tuple<T...>) {
        return /*...*/;
    }
...and calling it as decltype(meta(std::tuple</*...*/>{})). This both eliminates the noise coming from having a base/specialization template pair instead of one template, and also removes the need for a *_t template alias and/or typename /*...*/::type.
That's valid in most use cases, but this won't work if you want to manipulate incomplete types, void, or function types. Unless I'm mistaken, you can't instantiate a tuple holding any of those. Since a TMP library must clearly be able to handle the funkiest types, I don't think we can base a new TMP library on metafunctions with that style, unless a workaround is found. I also fear this might be slower because of possibly complex overload resolution, but without benchmarks that's just FUD.

Like you said initially, I think your use case is representative of a C++14 Fusion-like library more than an MPL-like one. I'll have to clearly define the boundary between those before I can claim to have explored the whole design space for a new TMP library.

Regards,
Louis

On Mon, May 5, 2014 at 9:22 AM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
I looked at the Units-BLAS codebase (assuming that's what you were referring to) to get a better understanding of your use case. It was very helpful in understanding at least some of the requirements for a TMP library; thank you for that. In what follows, I sketch out possible solutions to some of your issues. I'm mostly thinking out loud.
That's the one. I hope you looked at the C++14 branch though. It seems from the comments below that you did.
Here are the metaprogramming capabilities I needed for my Fusion-like data structures:
1) compile-time type traits, as above
2) simple compile-time computation, as above
3) purely compile-time iteration over every element of a single list of types
4) purely compile-time iteration over every pair of elements in two lists of types (for zip-like operations, e.g. elementwise matrix products)
5) runtime iteration over every element of a single tuple
6) runtime iteration over every pair of elements in two tuples (again, for zip-like operations)
For my purposes, operations performed at each iteration in 3 through 6 above may sometimes require the index of the iteration. Again, this is probably atypical.
Some kind of counting range with a zip_with constexpr function should do the trick. Hence you could do (pseudocode):
zip_with(your_constexpr_function, range_from(0), tuple1, ..., tupleN)
where range_from(n) produces a range from n to infinity. I have been able to implement zip_with, but I'm struggling to make it constexpr because I need a lambda somewhere in there. The range_from(n) should be quite feasible.
If your intent is that zip_with produces only a type, I don't actually have a use for it. I can directly do the numeric computation, and the type computation comes along for free, thanks to automatic return type deduction and a foldl()-type approach.
1 is covered nicely by existing traits, and 2 is covered by ad hoc application-specific code (I don't see how a library helps here).
I agree.
There are several solutions that work for at least one of 3-6:
- Compile-time foldl(); I did mine as constexpr, simply for readability.
- Runtime foldl().
- Direct expansion of a template parameter pack; example:
    template <typename MatrixLHS, typename MatrixRHS, std::size_t ...I>
    auto element_prod_impl (
        MatrixLHS lhs,
        MatrixRHS rhs,
        std::index_sequence<I...>
    ) {
        return std::make_tuple(
            (tuple_access::get<I>(lhs) * tuple_access::get<I>(rhs))...
        );
    }
(This produces the actual result of multiplying two matrices element-by-element (or at least the resulting matrix's internal tuple storage). I'm not really doing any metaprogramming here at all, and that's sort of the point. Any MPL successor should be as easy to use as the above was to write, or I'll always write the above instead. A library might help here, since I had to write similar functions to do elementwise division, addition, etc., but if a library solution has more syntactic weight than the function above, I won't be inclined to use it.)
I think direct expansion of parameter packs would not be required in this case if we had a zip_with operation:
zip_with(std::multiplies<>{}, lhs, rhs)
Except that, as I understand it, direct expansion is cheaper at compile time, and the make_tuple(...) expression above is arguably clearer to maintainers than zip_with(...).
However, there are other similar functions in your codebase that perform operations that are not covered by the standard function objects. For this, it would be _very_ useful to have constexpr lambdas.
- Ad hoc metafunctions and constexpr functions that iterate on type-lists.
- Ad hoc metafunctions and constexpr functions that iterate over the values [1..N).
- Ad hoc metafunctions and constexpr functions that iterate over [1..N) indices into a larger or smaller range of values.
I'm sorry, but I don't understand the last one. Do you mean iteration through a slice [1, N) of another sequence?
Yes.
I was unable to find much in common between my individual ad hoc implementations that I could lift up into library abstractions, or at least not without increasing the volume of code more than it was worth to me. I was going for simple and maintainable over abstract. Part of the lack of commonality was that in one case, I needed indices for each iteration, in another one I needed types, in another case I needed to accumulate a result, in another I needed to return multiple values, etc. Finding an abstraction that buys you more than it costs you is difficult in such circumstances.
I do think it is possible to find nice abstractions, but it will be worth lifting into a separate library. I also think it will require departing from the usual STL concepts and going into FP-world.
Sounds good, although I tried for quite a while to do just this (using an FP style), and failed to find anything that added any clarity.
It is easy to see how constexpr (and relaxed constexpr) can make the second kind of computation easier to express, since that is exactly its
However, it is much less clear how constexpr helps us with computations of the first kind. And by that I really mean that using constexpr in some way to perform those computations might be more cumbersome and less efficient than good old metafunctions.
I've been using these to write less code, if only a bit less.
Instead of:
template <typename Tuple> struct meta;
template <typename ...T> struct meta<std::tuple<T...>> { using type = /*...*/; };
I've been writing:
template <typename ...T> constexpr auto meta (std::tuple<T...>) { return /*...*/; }
...and calling it as decltype(meta(std::tuple</*...*/>{})). This both eliminates the noise coming from having a base/specialization template pair instead of one template, and also removes the need for a *_t template alias and/or typename /*...*/::type.
That's valid in most use cases, but this won't work if you want to manipulate incomplete types, void and function types. Unless I'm mistaken, you can't instantiate a tuple holding any of those. Since a TMP library must clearly be able to handle the funkiest types, I don't think we can base a new TMP library on metafunctions with that style, unless a workaround is found.
Right. I was using expansion into a tuple as an example, but this would work just as well:

template <typename ...T> constexpr auto meta (some_type_sequence_template<T...>) { return some_type_sequence_template</*...*/>{}; }

And this can handle whatever types you like.
I also fear this might be slower because of possibly complex overload resolution, but without benchmarks that's just FUD.
That's not FUD. I benchmarked Clang and GCC (albeit ~3 years ago), and found that consistently using function templates instead of struct templates increased compile times by about 20%. For me, the clarity and reduction in code-noise are worth the compile time hit. YMMV.
Like you said initially, I think your use case is representative of a C++14 Fusion-like library more than a MPL-like one. I'll have to clearly define the boundary between those before I can claim to have explored the whole design space for a new TMP library.
This is true. However, in my use case (again, a somewhat specific one), I have found TMP to be largely irrelevant in code that used to rely on it. That is, I was able to simply throw away so much TMP code that I assert that TMP in C++14 is actually relatively pedestrian stuff. The interesting bit to me is how to create a library that handles both MPL's old domain and Fusion's old domain as a single new library domain. I realize this may be a bit more than you intended to bite off in one summer, though.

Zach

Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Mon, May 5, 2014 at 9:22 AM, Louis Dionne <ldionne.2 <at> gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
I looked at the Units-BLAS codebase (assuming that's what you were referring to) to get a better understanding of your use case. It was very helpful in understanding at least some of the requirements for a TMP library; thank you for that. In what follows, I sketch out possible solutions to some of your issues. I'm mostly thinking out loud.
That's the one. I hope you looked at the C++14 branch though. It seems from the comments below that you did.
Yes, I looked at the C++14 branch. [...]
Some kind of counting range with a zip_with constexpr function should do the trick. Hence you could do (pseudocode):
zip_with(your_constexpr_function, range_from(0), tuple1, ..., tupleN)
where range_from(n) produces a range from n to infinity. I have been able to implement zip_with, but I'm struggling to make it constexpr because I need a lambda somewhere in there. The range_from(n) should be quite feasible.
If your intent is that zip_with produces only a type, I don't actually have a use for it. I can directly do the numeric computation, and the type computation comes along for free, thanks to automatic return type deduction and a foldl()-type approach.
No, zip_with should return a tuple. I should have shown the implementation: https://gist.github.com/ldionne/fd460b13ef26856b1f3b
I think direct expansion of parameter packs would not be required in this case if we had a zip_with operation:
zip_with(std::multiplies<>{}, lhs, rhs)
Except that, as I understand it, direct expansion is cheaper at compile time, and the make_tuple(...) expression above is arguably clearer to maintainers than zip_with(...).
It is possible to use direct expansion in the implementation of zip_with (see the Gist). This way, you get the same compile-time performance improvement over a naive recursive approach, but you abstract the details away from the user. However, I have not benchmarked the zip_with above. Regarding the clarity of the expression, I would on the contrary argue that zip_with is clearer. It is a well-known idiom in FP and it is more succinct and general than the hand-written solution. [...]
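The Gist's implementation is not reproduced here; as an illustration of the general idiom (not necessarily what the Gist actually does), a C++14 zip_with over tuples might look like this:

```cpp
#include <cstddef>
#include <functional>
#include <tuple>
#include <utility>

// Apply f to the I-th element of each tuple.
template <std::size_t I, typename F, typename ...Tuples>
constexpr auto zip_at(F f, Tuples const& ...ts) {
    return f(std::get<I>(ts)...);
}

template <typename F, std::size_t ...I, typename ...Tuples>
constexpr auto zip_with_impl(F f, std::index_sequence<I...>, Tuples const& ...ts) {
    return std::make_tuple(zip_at<I>(f, ts...)...);
}

// zip_with(f, t1, ..., tn): the tuple of f(t1[i], ..., tn[i]) for each index i.
template <typename F, typename Tuple, typename ...Tuples>
constexpr auto zip_with(F f, Tuple const& t, Tuples const& ...ts) {
    constexpr std::size_t n = std::tuple_size<Tuple>::value;
    return zip_with_impl(f, std::make_index_sequence<n>{}, t, ts...);
}
```

With this, zip_with(std::multiplies<>{}, lhs, rhs) produces the elementwise product tuple directly, since the transparent std::multiplies<> has a constexpr call operator in C++14.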
That's valid in most use cases, but this won't work if you want to manipulate incomplete types, void and function types. Unless I'm mistaken, you can't instantiate a tuple holding any of those. Since a TMP library must clearly be able to handle the funkiest types, I don't think we can base a new TMP library on metafunctions with that style, unless a workaround is found.
Right. I was using expansion into a tuple as an example, but this would work just as well:
template <typename ...T> constexpr auto meta (some_type_sequence_template<T...>) { return some_type_sequence_template</*...*/>{}; }
And this can handle whatever types you like.
Dumb me. But there's still a problem; how would you implement e.g. front()?

template <typename ...xs> struct list { };

template <typename x, typename ...xs>
constexpr auto front(list<x, xs...>) { return x{}; }

If the front type is not nice, this won't work, so we still need some kind of workaround. We could perhaps wrap those problematic types in the following way:

template <typename T> struct box { using type = T; };

and use them as

using void_ = decltype(front(list<box<void>>{}))::type;

However, another problem arises in this case. How would you map a metafunction (e.g. a type trait) over a sequence of such types?

struct add_pointer {
    template <typename T>
    constexpr std::add_pointer_t<T> operator()(T) const {
        return std::add_pointer_t<T>{};
    }
};

template <typename F, typename ...xs>
constexpr auto map(F f, list<xs...>) { return list<decltype(f(xs{}))...>{}; }

using pointers = decltype(map(add_pointer{}, list<box<void>, int, char>{}));

But then, we would have

pointers == list<box<void>*, int*, char*>

instead of

pointers == list<box<void*>, int*, char*>

Of course, we could specialize all the type traits for box<>, but I'm not sure that's the best option.
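Assembling those snippets into a self-contained translation unit makes the problem checkable; the static_asserts below confirm the box<void>* outcome described:

```cpp
#include <type_traits>

template <typename ...xs> struct list { };
template <typename T> struct box { using type = T; };

// front() returns a value, so the element type must be default-constructible:
// hence the box<> wrapper for void and other "not nice" types.
template <typename x, typename ...xs>
constexpr auto front(list<x, xs...>) { return x{}; }

using void_ = decltype(front(list<box<void>>{}))::type;
static_assert(std::is_same<void_, void>::value, "");

struct add_pointer {
    template <typename T>
    constexpr std::add_pointer_t<T> operator()(T) const {
        return std::add_pointer_t<T>{};
    }
};

template <typename F, typename ...xs>
constexpr auto map(F f, list<xs...>) { return list<decltype(f(xs{}))...>{}; }

using pointers = decltype(map(add_pointer{}, list<box<void>, int, char>{}));

// The box is not seen through: we get box<void>* where box<void*> was wanted.
static_assert(std::is_same<pointers, list<box<void>*, int*, char*>>::value, "");
```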
I also fear this might be slower because of possibly complex overload resolution, but without benchmarks that's just FUD.
That's not FUD. I benchmarked Clang and GCC (albeit ~3 years ago), and found that consistently using function templates instead of struct templates increased compile times by about 20%. For me, the clarity and reduction in code-noise are worth the compile time hit. YMMV.
Good to know. It would be possible to still use structs (and aliases) in the implementation of core operations like foldl and foldr. That could help mitigate the issue.
Like you said initially, I think your use case is representative of a C++14 Fusion-like library more than a MPL-like one. I'll have to clearly define the boundary between those before I can claim to have explored the whole design space for a new TMP library.
This is true. However, in my use case (again, a somewhat specific one), I have found TMP to be largely irrelevant in code that used to rely on it. That is, I was able to simply throw away so much TMP code that I assert that TMP in C++14 is actually relatively pedestrian stuff. The interesting bit to me is how to create a library that handles both MPL's old domain and Fusion's old domain as a single new library domain. I realize this may be a bit more than you intended to bite off in one summer, though.
That is a large bite for sure. I'm not sure yet that merging the MPL and Fusion in a single library is feasible, but I'm currently trying to figure this out. I think the Aspen meeting will be helpful. Regards, Louis

Zach Laine <whatwasthataddress <at> gmail.com> writes: [...]
I recently decided to completely rewrite a library for linear algebra on heterogeneous types using Clang 3.4, which is C++14 feature-complete (modulo bugs). My library previously used lots of MPL and Boost.Fusion code, and was largely an unreadable mess. The new version only uses MPL's set, but no other MPL and no Fusion code, and is quite easy to understand (at least by comparison). The original version took me months of spare time to write, including lots of time trying to wrestle MPL and Fusion into doing what I needed them to do. The rewrite was embarrassingly easy; it took me about two weeks of spare time. I threw away entire files of return-type-computing metaprograms. The overall line count is probably 1/4 what it was before. My library and its needs are probably atypical with respect to MPL usage overall, but are probably representative of much use of Fusion, so keep that in mind below.
I already posted this message to the list, but it seems like it was lost. Apologies in advance if I end up double-posting. I looked at the Units-BLAS codebase (assuming that's what you were referring to) to get a better understanding of your use case. It was very helpful in understanding at least some of the requirements for a TMP library; thank you for that. In what follows, I sketch out possible solutions to some of your issues. I'm mostly thinking out loud.
Here are the metaprogramming capabilities I needed for my Fusion-like data structures:
1) compile-time type traits, as above
2) simple compile-time computation, as above
3) purely compile-time iteration over every element of a single list of types
4) purely compile-time iteration over every pair of elements in two lists of types (for zip-like operations, e.g. elementwise matrix products)
5) runtime iteration over every element of a single tuple
6) runtime iteration over every pair of elements in two tuples (again, for zip-like operations)
For my purposes, operations performed at each iteration in 3 through 6 above may sometimes require the index of the iteration. Again, this is probably atypical.
Some kind of counting range with a zip_with constexpr function should do the trick. Hence you could do (pseudocode):

zip_with(your_constexpr_function, range_from(0), tuple1, ..., tupleN)

where range_from(n) produces a range from n to infinity. I have been able to implement zip_with, but I'm struggling to make it constexpr because I need a lambda somewhere in there. The range_from(n) should be quite feasible.
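A true range_from(0) (an infinite counting range) is hard to express directly in C++14; a common approximation is to zip with an index sequence instead. A minimal sketch, with hypothetical names:

```cpp
#include <cstddef>
#include <initializer_list>
#include <tuple>
#include <utility>

// Call f(i, element) for each tuple element, passing the iteration index.
template <typename F, typename Tuple, std::size_t ...I>
void for_each_indexed_impl(F f, Tuple const& t, std::index_sequence<I...>) {
    // Expand into an initializer list to sequence the calls left to right.
    (void)std::initializer_list<int>{ (f(I, std::get<I>(t)), 0)... };
}

template <typename F, typename ...T>
void for_each_indexed(F f, std::tuple<T...> const& t) {
    for_each_indexed_impl(f, t, std::index_sequence_for<T...>{});
}
```

Usage would be e.g. for_each_indexed([&](std::size_t i, auto const& x) { /* use i and x */ }, some_tuple); this covers the "operations that need the index of the iteration" requirement, though not as a reusable range.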
1 is covered nicely by existing traits, and 2 is covered by ad hoc application-specific code (I don't see how a library helps here).
I agree.
There are several solutions that work for at least one of 3-6:
- Compile-time foldl(); I did mine as constexpr, simply for readability.
- Runtime foldl().
- Direct expansion of a template parameter pack; example:
template <typename MatrixLHS, typename MatrixRHS, std::size_t ...I>
auto element_prod_impl (MatrixLHS lhs, MatrixRHS rhs, std::index_sequence<I...>)
{
    return std::make_tuple(
        (tuple_access::get<I>(lhs) * tuple_access::get<I>(rhs))...
    );
}
(This produces the actual result of multiplying two matrices element-by-element (or at least the resulting matrix's internal tuple storage). I'm not really doing any metaprogramming here at all, and that's sort of the point. Any MPL successor should be as easy to use as the above was to write, or I'll always write the above instead. A library might help here, since I had to write similar functions to do elementwise division, addition, etc., but if a library solution has more syntactic weight than the function above, I won't be inclined to use it.)
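For reference, the pattern above can be made self-contained by adding the dispatcher that manufactures the index pack; std::get stands in for the library's tuple_access here, which is an assumption on my part:

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

template <typename TupleLHS, typename TupleRHS, std::size_t ...I>
auto element_prod_impl(TupleLHS const& lhs, TupleRHS const& rhs,
                       std::index_sequence<I...>) {
    return std::make_tuple(
        (std::get<I>(lhs) * std::get<I>(rhs))...
    );
}

// The user-facing function only has to manufacture the index sequence.
template <typename ...T, typename ...U>
auto element_prod(std::tuple<T...> const& lhs, std::tuple<U...> const& rhs) {
    return element_prod_impl(lhs, rhs, std::index_sequence_for<T...>{});
}
```

For example, element_prod(std::make_tuple(1, 2, 3), std::make_tuple(4, 5, 6)) yields the tuple (4, 10, 18).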
I think direct expansion of parameter packs would not be required in this case if we had a zip_with operation:

zip_with(std::multiplies<>{}, lhs, rhs)

However, there are other similar functions in your codebase that perform operations that are not covered by the standard function objects. For this, it would be _very_ useful to have constexpr lambdas.
- Ad hoc metafunctions and constexpr functions that iterate on type-lists.
- Ad hoc metafunctions and constexpr functions that iterate over the values [1..N).
- Ad hoc metafunctions and constexpr functions that iterate over [1..N) indices into a larger or smaller range of values.
I'm sorry, but I don't understand the last one. Do you mean iteration through a slice [1, N) of another sequence?
I was unable to find much in common between my individual ad hoc implementations that I could lift up into library abstractions, or at least not without increasing the volume of code more than it was worth to me. I was going for simple and maintainable over abstract. Part of the lack of commonality was that in one case, I needed indices for each iteration, in another one I needed types, in another case I needed to accumulate a result, in another I needed to return multiple values, etc. Finding an abstraction that buys you more than it costs you is difficult in such circumstances.
I do think it is possible to find nice abstractions, but they will be worth lifting into a separate library. I also think it will require departing from the usual STL concepts and going into FP-world.
So, I'm full of requirements, and no answers. :) I hope this helps, if only with scoping. I'll be in Aspen if you want to discuss it there too.
I'm looking forward to it.
It is easy to see how constexpr (and relaxed constexpr) can make the second kind of computation easier to express, since that is exactly its purpose. However, it is much less clear how constexpr helps us with computations of the first kind. And by that I really mean that using constexpr in some way to perform those computations might be more cumbersome and less efficient than good old metafunctions.
I've been using these to write less code, if only a bit less.
Instead of:
template <typename Tuple> struct meta;
template <typename ...T> struct meta<std::tuple<T...>> { using type = /*...*/; };
I've been writing:
template <typename ...T> constexpr auto meta (std::tuple<T...>) { return /*...*/; }
...and calling it as decltype(meta(std::tuple</*...*/>{})). This both eliminates the noise coming from having a base/specialization template pair instead of one template, and also removes the need for a *_t template alias and/or typename /*...*/::type.
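As a concrete illustration of the two styles, here is a hypothetical metafunction, written both ways, that computes std::tuple<T*...> from std::tuple<T...>:

```cpp
#include <tuple>
#include <type_traits>

// Struct-template style: a primary template plus a specialization,
// consumed through ::type.
template <typename Tuple> struct pointers_of;
template <typename ...T>
struct pointers_of<std::tuple<T...>> { using type = std::tuple<T*...>; };

// Function-template style: one overload, consumed through decltype().
template <typename ...T>
constexpr auto pointers_of_fn(std::tuple<T...>) { return std::tuple<T*...>{}; }

using a = pointers_of<std::tuple<int, char>>::type;
using b = decltype(pointers_of_fn(std::tuple<int, char>{}));
static_assert(std::is_same<a, b>::value, "");
static_assert(std::is_same<a, std::tuple<int*, char*>>::value, "");
```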
That's valid in most use cases, but it won't work if you want to manipulate incomplete types, void and function types. Unless I'm mistaken, you can't instantiate a tuple holding any of those. Since a TMP library must clearly be able to handle the funkiest types, I don't think we can base a new TMP library on metafunctions with that style, unless a workaround is found.

I also fear this might be slower because of possibly complex overload resolution, but without benchmarks that's just FUD.

Like you said initially, I think your use case is representative of a C++14 Fusion-like library more than a MPL-like one. I'll have to clearly define the boundary between those before I can claim to have explored the whole design space for a new TMP library.

Regards,
Louis
participants (5)
- Jonathan Wakely
- Klaim - Joël Lamotte
- Louis Dionne
- Niall Douglas
- Zach Laine