
Zach Laine <whatwasthataddress <at> gmail.com> writes: [...]
I recently decided to completely rewrite a library for linear algebra on heterogeneous types using Clang 3.4, which is C++14 feature-complete (modulo bugs). My library previously used lots of MPL and Boost.Fusion code, and was largely an unreadable mess. The new version only uses MPL's set, but no other MPL and no Fusion code, and is quite easy to understand (at least by comparison).

The original version took me months of spare time to write, including lots of time trying to wrestle MPL and Fusion into doing what I needed them to do. The rewrite was embarrassingly easy; it took me about two weeks of spare time. I threw away entire files of return-type-computing metaprograms. The overall line count is probably 1/4 what it was before.

My library and its needs are probably atypical with respect to MPL usage overall, but are probably representative of much use of Fusion, so keep that in mind below.
I looked at the Units-BLAS codebase (assuming that's what you were referring to) to get a better understanding of your use case. It was very helpful in understanding at least some of the requirements for a TMP library; thank you for that. In what follows, I sketch out possible solutions to some of your issues. I'm mostly thinking out loud.
Here are the metaprogramming capabilities I needed for my Fusion-like data structures:
1) compile-time type traits, as above
2) simple compile-time computation, as above
3) purely compile-time iteration over every element of a single list of types
4) purely compile-time iteration over every pair of elements in two lists of types (for zip-like operations, e.g. elementwise matrix products)
5) runtime iteration over every element of a single tuple
6) runtime iteration over every pair of elements in two tuples (again, for zip-like operations)
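For what it's worth, requirement 5 (runtime iteration over a single tuple), including the per-iteration index mentioned below, can be sketched in plain C++14 with std::index_sequence. The helper names here are made up for illustration; this is not from the codebase under discussion:

```cpp
#include <cstddef>
#include <initializer_list>
#include <tuple>
#include <utility>

// Hypothetical helper: call f(element, index) for each element of a tuple.
// The index is passed as an integral_constant so it stays usable at
// compile time inside f.
template <typename Tuple, typename F, std::size_t ...I>
void for_each_indexed_impl(Tuple const& t, F f, std::index_sequence<I...>)
{
    // The initializer_list trick sequences the calls left to right in C++14.
    (void)std::initializer_list<int>{
        (f(std::get<I>(t), std::integral_constant<std::size_t, I>{}), 0)...
    };
}

template <typename Tuple, typename F>
void for_each_indexed(Tuple const& t, F f)
{
    for_each_indexed_impl(
        t, f, std::make_index_sequence<std::tuple_size<Tuple>::value>{});
}
```

A generic lambda such as `[&](int x, auto i) { /* use x and i */ }` then receives both the element and its index at each step.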
For my purposes, operations performed at each iteration in 3 through 6 above may sometimes require the index of the iteration. Again, this is probably atypical.
Some kind of counting range with a zip_with constexpr function should do the trick. Hence you could do (pseudocode):

    zip_with(your_constexpr_function, range_from(0), tuple1, ..., tupleN)

where range_from(n) produces a range from n to infinity. I have been able to implement zip_with, but I'm struggling to make it constexpr because I need a lambda somewhere in there. The range_from(n) should be quite feasible.
1 is covered nicely by existing traits, and 2 is covered by ad hoc application-specific code (I don't see how a library helps here).
I agree.
There are several solutions that work for at least one of 3-6:
- Compile-time foldl(); I did mine as constexpr, simply for readability.
- Runtime foldl().
- Direct expansion of a template parameter pack; example:
    template <typename MatrixLHS, typename MatrixRHS, std::size_t ...I>
    auto element_prod_impl (MatrixLHS lhs, MatrixRHS rhs,
                            std::index_sequence<I...>)
    {
        return std::make_tuple(
            (tuple_access::get<I>(lhs) * tuple_access::get<I>(rhs))...
        );
    }
(This produces the actual result of multiplying two matrices element-by-element (or at least the resulting matrix's internal tuple storage). I'm not really doing any metaprogramming here at all, and that's sort of the point. Any MPL successor should be as easy to use as the above was to write, or I'll always write the above instead. A library might help here, since I had to write similar functions to do elementwise division, addition, etc., but if a library solution has more syntactic weight than the function above, I won't be inclined to use it.)
I think direct expansion of parameter packs would not be required in this case if we had a zip_with operation:

    zip_with(std::multiplies<>{}, lhs, rhs)

However, there are other similar functions in your codebase that perform operations that are not covered by the standard function objects. For this, it would be _very_ useful to have constexpr lambdas.
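As a stopgap until constexpr lambdas exist, a hand-written function object with a constexpr call operator can play the same role in constant expressions (the name multiplies_ here is made up for illustration):

```cpp
// C++14 lambdas cannot appear in constant expressions; a hand-written
// function object with a constexpr call operator is the usual workaround.
struct multiplies_ {
    template <typename X, typename Y>
    constexpr auto operator()(X x, Y y) const { return x * y; }
};

// Usable at compile time, unlike a lambda:
static_assert(multiplies_{}(3, 4) == 12, "");
```

The cost is exactly the syntactic weight being complained about: a named struct at namespace scope instead of an inline lambda at the call site.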
- Ad hoc metafunctions and constexpr functions that iterate on type-lists.
- Ad hoc metafunctions and constexpr functions that iterate over the values [1..N).
- Ad hoc metafunctions and constexpr functions that iterate over [1..N) indices into a larger or smaller range of values.
I'm sorry, but I don't understand the last one. Do you mean iteration through a slice [1, N) of another sequence?
I was unable to find much in common between my individual ad hoc implementations that I could lift up into library abstractions, or at least not without increasing the volume of code more than it was worth to me. I was going for simple and maintainable over abstract. Part of the lack of commonality was that in one case, I needed indices for each iteration, in another one I needed types, in another case I needed to accumulate a result, in another I needed to return multiple values, etc. Finding an abstraction that buys you more than it costs you is difficult in such circumstances.
I do think it is possible to find nice abstractions, but they will be worth lifting into a separate library. I also think it will require departing from the usual STL concepts and going into FP-world.
So, I'm full of requirements, and no answers. :) I hope this helps, if only with scoping. I'll be in Aspen if you want to discuss it there too.
I'm looking forward to it.
It is easy to see how constexpr (and relaxed constexpr) can make the second kind of computation easier to express, since that is exactly its purpose. However, it is much less clear how constexpr helps us with computations of the first kind. And by that I really mean that using constexpr in some way to perform those computations might be more cumbersome and less efficient than good old metafunctions.
I've been using these to write less code, if only a bit less.
Instead of:
    template <typename Tuple>
    struct meta;

    template <typename ...T>
    struct meta<std::tuple<T...>> { using type = /*...*/; };
I've been writing:
    template <typename ...T>
    constexpr auto meta (std::tuple<T...>) { return /*...*/; }
...and calling it as decltype(meta(std::tuple</*...*/>{})). This both eliminates the noise coming from having a base/specialization template pair instead of one template, and also removes the need for a *_t template alias and/or typename /*...*/::type.
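A minimal, self-contained illustration of that style, assuming a hypothetical type_ wrapper and head_type metafunction (neither is from the thread's codebase):

```cpp
#include <tuple>
#include <type_traits>

// Wrapper so a constexpr function can "return" a type.
template <typename T>
struct type_ { using type = T; };

// Metafunction written as a function: pattern-match on the argument type
// instead of writing a base template plus a partial specialization.
template <typename Head, typename ...Tail>
constexpr auto head_type(std::tuple<Head, Tail...>) { return type_<Head>{}; }

// Invoked in an unevaluated context, as described above:
using First = decltype(head_type(std::tuple<int, char>{}))::type;
static_assert(std::is_same<First, int>::value, "");
```

Note that this relies on constructing a tuple value, which is exactly the limitation raised in the reply below for incomplete, void, and function types.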
That's valid in most use cases, but it won't work if you want to manipulate incomplete types, void, or function types; unless I'm mistaken, you can't instantiate a tuple holding any of those. Since a TMP library must clearly be able to handle the funkiest types, I don't think we can base a new TMP library on metafunctions in that style, unless a workaround is found. I also fear this might be slower because of possibly complex overload resolution, but without benchmarks that's just FUD.

Like you said initially, I think your use case is more representative of a C++14 Fusion-like library than of an MPL-like one. I'll have to clearly define the boundary between those before I can claim to have explored the whole design space for a new TMP library.

Regards,
Louis