"Simple C++11 metaprogramming"
I've recently made the mistake to reread Eric Niebler's excellent "Tiny Metaprogramming Library" article

http://ericniebler.com/2014/11/13/tiny-metaprogramming-library/

which of course prompted me to try to experiment with my own tiny metaprogramming library and to see how I'd go about implementing tuple_cat (a challenge Eric gives.)

Ordinarily, any such experiments of mine leave no trace once I abandon them and move on, but this time I decided to at least write an article about the result, so here it is, with the hope someone might find it useful. :-)

http://pdimov.com/cpp2/simple_cxx11_metaprogramming.html
Peter Dimov <lists@pdimov.com> sayeth:
I've recently made the mistake to reread Eric Niebler's excellent "Tiny Metaprogramming Library" article
http://ericniebler.com/2014/11/13/tiny-metaprogramming-library/
which of course prompted me to try to experiment with my own tiny metaprogramming library and to see how I'd go about implementing tuple_cat (a challenge Eric gives.)
Ordinarily, any such experiments of mine leave no trace once I abandon them and move on, but this time I decided to at least write an article about the result, so here it is, with the hope someone might find it useful. :-)
Very nice. Very clean and impressive post.

I particularly liked these two assertions:

(a) mp_size to compute type-list length is a truly generic primitive which is, "...so nice that [you'd] argue that all our metaprogramming primitives ought to have this property."

(b) "Lack-of-higher-order-metaprogramming", suggesting that in C++11 and beyond, we largely should not need metafunctions such as compose, bind, or a lambda library.

Very interesting. I find your argument compelling.

--charley
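The genericity charley praises in (a) can be shown with a minimal sketch, following the article's mp_size naming; `my_list` here is a made-up list type for the demonstration:

```cpp
#include <cstddef>
#include <tuple>
#include <type_traits>

// mp_size matches any class template instantiated over a pack of
// types, so it works on user-defined lists and std::tuple alike.
template<class L> struct mp_size_impl;

template<template<class...> class L, class... T>
struct mp_size_impl<L<T...>>
{
    using type = std::integral_constant<std::size_t, sizeof...(T)>;
};

template<class L> using mp_size = typename mp_size_impl<L>::type;

template<class... T> struct my_list {};   // a made-up list for the demo

constexpr bool size_ok =
    mp_size<std::tuple<int, float>>::value == 2 &&
    mp_size<my_list<>>::value == 0;
```

The same definition serves every conforming list type; no per-list specialization is needed.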
charleyb123 . wrote:
Very nice. Very clean and impressive post.
Thank you. The speed at which you were able to read all of it is certainly impressive in its own right.
I particularly liked these two assertions:
(a) mp_size to compute type-list length is a truly generic primitive which is, "...so nice that [you'd] argue that all our metaprogramming primitives ought to have this property."
(b) "Lack-of-higher-order-metaprogramming", suggesting that in C++11 and beyond, we largely should not need metafunctions such as compose, bind, or a lambda library.
The only problem with (b) is that you can't define a template alias inside a function, which I myself found more than a bit annoying. Lambda functions a-la Hana have a distinct advantage here - they can be defined inside functions. Still. :-)
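Peter's limitation can be illustrated with a short sketch; the `type_` wrapper and function names are made up for this demo, assuming C++14 generic lambdas:

```cpp
#include <type_traits>
#include <cassert>

// An alias template must appear at namespace or class scope, while a
// generic lambda doing the same type computation can be defined right
// inside a function - the Hana-style advantage mentioned above.
template<class T> struct type_ { using type = T; };

template<class T> using add_ptr = T*;   // OK at namespace scope

bool lambda_in_function()
{
    // template<class T> using local_ptr = T*;   // ill-formed here

    auto add_ptr_fn = [](auto t)
    {
        return type_<typename decltype(t)::type*>{};
    };

    auto r = add_ptr_fn(type_<int>{});
    return std::is_same<decltype(r), type_<int*>>::value;
}
```

The commented-out local alias is exactly what the language disallows; the lambda carries the computation in its deduced parameter and return types instead.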
charleyb123 . wrote:
http://pdimov.com/cpp2/simple_cxx11_metaprogramming.html
Very nice. Very clean and impressive post.
Peter respondeth:
Thank you. <snip, read-it-fast>
I happened to be looking at the screen when you posted, and opened the link. Still, I think it's greatly an attribute of the (very) high quality of your post.

This was my "clean" reference: Your post is very well written, very well organized, and you express very clear assertions with supporting rationale and example(s). (Works for me, anyway. ;-))

Bravo -- I'm sure it took you a LONG time to write. There's a lot there, and you clearly needed to experiment with the compiler quite-a-bit to feel-your-way-through. Greatly to your credit, I think this is an especially timely topic. I've been thinking about similar things for a while, but not with the clarity and understanding that you expressed.

Your "opening" grabbed my attention:

(a) <snip, historic reference to Boost-MPL and C++03 approaches>

(b) <snip, C++11 changed the playing field because of variadic templates and associated parameter packs>

IMHO, we're "re-learning" the TMP thing with C++11, C++14, and C++17. I've been watching Boost-Hana closely, and it's kind of blowing my mind.

I particularly liked these two assertions:
(a) mp_size to compute type-list length is a truly generic primitive which is, "...so nice that [you'd] argue that all our metaprogramming primitives ought to have this property."
(b) "Lack-of-higher-order-metaprogramming", suggesting that in C++11 and beyond, we largely should not need metafunctions such as compose, bind, or a lambda library.
The only problem with (b) is that you can't define a template alias inside a function, which I myself found more than a bit annoying. Lambda functions a-la Hana have a distinct advantage here - they can be defined inside functions.
Still. :-)
Good point, I'll have to think-on-that. It doesn't take away from your "main-thrust", though, which is that an extreme elegance is now possible *without* metafunctions. That's Really Nice, because they seem kind of "klunky" to me, and I like the idea where we can increasingly get-away from them.

The other thing that I wanted to mention (which I almost put in my first response, but then took-it-out) is that I really agree with your approach to use a prefix ("mp_" in your case) rather than relying on an enclosing namespace. I agree that it's nice to be able to exercise more control over exactly what the compiler is finding-and-expanding without worrying about weird namespace "discoveries". ;-))

Again, thank you very much for the article: It shouts loudly that we have new and more elegant options for template metaprogramming, and I really like the direction you suggest. I really think yours is going to be one of those "classic-posts" that we'll still be talking about for the next few years. (So, if I ever run into you, drinks are "on-me" -- you deserve it.)

--charley
-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of charleyb123 .
Sent: 30 May 2015 23:32
To: boost@lists.boost.org
Subject: Re: [boost] "Simple C++11 metaprogramming"
charleyb123 . wrote:
Very nice. Very clean and impressive post.
+1

But there is a BIG error in the title - "Simple" and "C++11 metaprogramming" are incompatible!

Paul

PS My brain is hurting - and I haven't got to the end yet :-(

---
Paul A. Bristow
Prizet Farmhouse, Kendal UK LA8 8AB
+44 (0) 1539 561830
On 05/30/2015 09:26 AM, Peter Dimov wrote:
I've recently made the mistake to reread Eric Niebler's excellent "Tiny Metaprogramming Library" article
http://ericniebler.com/2014/11/13/tiny-metaprogramming-library/
which of course prompted me to try to experiment with my own tiny metaprogramming library and to see how I'd go about implementing tuple_cat (a challenge Eric gives.)
Ordinarily, any such experiments of mine leave no trace once I abandon them and move on, but this time I decided to at least write an article about the result, so here it is, with the hope someone might find it useful. :-)
Excellent article Peter! It builds nicely from one concept to the next.

Within Spirit X3 we have been discussing lightweight metaprogramming in the world of C++14. You have articulated very well what we are beginning to discover - "I posit that such higher order metaprogramming is, in the majority of cases, not necessary in C++11." This thought intrigues me and I'm excited to see how far we can take it in libraries such as Spirit X3 and boostache.

Thank you for taking the time to put the article together and share it with the community.

Take care -
michael

--
Michael Caisse
ciere consulting
ciere.com
On 30/05/15 23:11, Michael Caisse wrote:
Within Spirit X3 we have been discussing lightweight metaprogramming in the world of C++14. You have articulated very well what we are beginning to discover - "I posit that such higher order metaprogramming is, in the majority of cases, not necessary in C++11." This thought intrigues me and I'm excited to see how far we can take it in libraries such as Spirit X3 and boostache.
I don't understand how that's a good thing. Higher order programming makes programs better.
Le 30/05/15 18:26, Peter Dimov a écrit :
I've recently made the mistake to reread Eric Niebler's excellent "Tiny Metaprogramming Library" article
http://ericniebler.com/2014/11/13/tiny-metaprogramming-library/
which of course prompted me to try to experiment with my own tiny metaprogramming library and to see how I'd go about implementing tuple_cat (a challenge Eric gives.)
Ordinarily, any such experiments of mine leave no trace once I abandon them and move on, but this time I decided to at least write an article about the result, so here it is, with the hope someone might find it useful. :-)
Hi and thanks for the article. I very much like the way you have managed to make generic metafunctions on type lists.

I'm curious which concept is behind all these erased type lists: any type defined as template <class... Ts> struct TL; is recognized, and TL<> is the neutral element with respect to mp_append, but the actual type is not as important as the type list it defines. That is, what matters are the Ts..., not the TL. All these type list classes are in some way isomorphic. The query metafunctions pose no issue; the single issue is when we need to construct a type list, as we then need a concrete type.

Your rationale for using a prefix mp_ leaves me perplexed. Does it mean that namespaces have not reached their goal?

IMO, both mp_append_impl and mp_append are useful. Do you think that we no longer need the functionality of mp_append_impl in C++11/C++14? If you consider it still useful, maybe you can follow the standard naming convention, mp_append and mp_append_t.

A minor remark: the definition

template<template<class...> class L1, class... T1,
         template<class...> class L2, class... T2, class... Lr>
struct mp_append_impl<L1<T1...>, L2<T2...>, Lr...>
{
    using type = mp_append<L1<T1..., T2...>, Lr...>; // ***
};

could be

template<template<class...> class L1, class... T1,
         template<class...> class L2, class... T2, class... Lr>
struct mp_append_impl<L1<T1...>, L2<T2...>, Lr...>
    : mp_append_impl<L1<T1..., T2...>, Lr...> // ****
{};

There is no need to forward-reference mp_append. Am I missing something trivial?

The use of mp_list<> in mp_append_impl when there are 0 arguments is weird. Just wondering if adding a template <class ...> class R argument would make everything clearer. An alternative is to not define it for 0 arguments.

I also find the use of mp_rename to apply a metafunction to a type list weird. It is annoying that we have different syntax in C++ for type construction and metafunction classes.
If the result of a template alias could directly be another class template, metafunction classes would be simple type constructors. But the result must be a type, and so we need to use a member class template (apply) to be able to return metafunctions. If something like this were possible:

template <class T> using x = template <class U> {};

metafunction classes could use this construction and the syntax would be the same: x<int> would itself be a class template, and so x<int><int> would be possible. E.g. the mp_constant

template<class V> struct mp_constant
{
    template<class...> using apply = V;
};

template<class L, class V>
using mp_fill = mp_transform<mp_constant<V>::template apply, L>;

could be

template<class V> using mp_constant = template<class...> V;

template<class L, class V> using mp_fill = mp_transform<mp_constant<V>, L>;

But we don't have this. (Note: I'm not saying that introducing something like that is an easy task; I have no idea of the consequences.)

BTW, we have a name for lowering a metafunction class to a class template:

template <class MFC> using unquote = MFC::template apply;

In the same way we have added the suffix _t, we could add a suffix _f for the corresponding metafunction class:

using add_pointer_f = quote<add_pointer_t>;

You are right that it is simpler to write

template <template <class> class F, class X> using twice = F<F<X>>;

template <class X> using two_pointers = twice<add_pointer_t, X>;

than

template <class F, class X> using twice = apply<F, apply<F, X>>;

template <class X> struct two_pointers : twice<lambda<add_pointer<_1>>, X> {};

The first works better with the type traits _t extensions:

is_same<twice<add_pointer_t, int>, int**>

If we had some metafunction class add_pointer_f (to take another example)

struct add_pointer_f
{
    template <class T> using apply = add_pointer_t<T>;
};

then we would need to unquote:

is_same<twice<unquote<add_pointer_f>, int>, int**>

Following your design there is no reason for tuple_cat_ to work only with mp_list.
template<class R, template<class...> class I, class... Is,
         template<class...> class K, class... Ks, class Tp>
R tuple_cat_( I<Is...>, K<Ks...>, Tp tp )
{
    return R{ std::get<Ks::value>(std::get<Is::value>(tp))... };
}

The names of the templates I and K are not important.

Higher-order metaprogramming appears as soon as you want a metafunction to return a metafunction. These kinds of metafunctions are the origin of metafunction classes.

I don't agree with you that it is preferable to define offline metafunctions, as in

template<class... T> using Fgh = F<G<H<T...>>>;

or

template<class L> using F = mp_iota<mp_size<L>>;

if I don't have a good name. But good designers always find good names ;-)

template<class T, class U> using sizeof_less = mp_bool<(sizeof(T) < sizeof(U))>;

If one day we have static constexpr lambdas, wouldn't we be able to use some kind of lambda meta-functions as well? Isn't that what the proposed Boost.Hana and Boost.Fit already do, using a specific trick?

STATIC_LAMBDA(T, U) { return sizeof(T) < sizeof(U); };

Thanks for sharing your ideas,
Vicente
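Vicente's inheritance-based variant of mp_append does compile; here is a self-contained sketch of it (mp_list is redeclared locally so the snippet stands alone), which also shows that the first argument fixes the result's list type:

```cpp
#include <tuple>
#include <type_traits>

// The recursive case derives from the next step instead of
// forward-referencing the mp_append alias.
template<class... T> struct mp_list {};

template<class... L> struct mp_append_impl;

template<class... L> using mp_append = typename mp_append_impl<L...>::type;

template<template<class...> class L, class... T>
struct mp_append_impl<L<T...>>
{
    using type = L<T...>;
};

template<template<class...> class L1, class... T1,
         template<class...> class L2, class... T2, class... Lr>
struct mp_append_impl<L1<T1...>, L2<T2...>, Lr...>
    : mp_append_impl<L1<T1..., T2...>, Lr...>   // recurse via inheritance
{
};

constexpr bool append_ok =
    std::is_same<mp_append<mp_list<int>, mp_list<float>, mp_list<char>>,
                 mp_list<int, float, char>>::value &&
    // the first argument fixes the result's list type:
    std::is_same<mp_append<std::tuple<>, mp_list<int, char>>,
                 std::tuple<int, char>>::value;
```

The zero-argument case is simply left undefined here, one of the alternatives Vicente mentions.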
Vicente J. Botet Escriba wrote:
Your rationale for using a prefix mp_ leaves me perplexed. Does it mean that namespaces have not reached their goal?
Namespaces work well most of the time, but metaprogramming is a bit of a special case, because many of the identifiers we'd like to use in a metaprogramming library are either keywords (if, int, false) or already exist in namespace std (list, fill). The former forces us to add the suffix _ at times, the latter has been known to create problems via argument-dependent lookup. It's of course possible to use a namespace, meta:: as in Eric's library, or perhaps mp::, but I don't think that it buys us much.
IMO, both mp_append_impl and mp_append are useful. Do you think that we no longer need the functionality of mp_append_impl in C++11/C++14? If you consider it still useful, maybe you can follow the standard naming convention, mp_append and mp_append_t.
The point I am making by not using the standard _t convention is that in C++11 aliases should be our primary primitive. The origin of the _t convention is that the ordinary non-_t traits were there first because there was no other way, and then the _t convenience aliases were added as an afterthought. This is a historical artifact. A C++11 design that does not have its origins in C++03 could and should choose the non-_t name for the alias.
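The alias-primary naming Peter describes can be sketched minimally; the mp_add_pointer* names here are hypothetical, chosen only to illustrate the convention:

```cpp
#include <type_traits>

// The class template carries the _impl suffix; the clean, unsuffixed
// name is the alias that users actually write - the reverse of the
// historical trait/_t arrangement.
template<class T> struct mp_add_pointer_impl { using type = T*; };

template<class T> using mp_add_pointer = typename mp_add_pointer_impl<T>::type;

constexpr bool naming_ok = std::is_same<mp_add_pointer<int>, int*>::value;
```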
The use of mp_list<> in mp_append_impl when there are 0 arguments is weird. Just wondering if adding a template <class ...> class R argument would make everything clearer.
The "idiomatic" way of making mp_append use a specific R is mp_append<R<>, ...>, which is not that much harder to use than the hypothetical mp_append<R, ...>. As a general rule, I prefer to keep functions homogeneous if possible.
Unfortunately, alias templates are not always the primary way of defining something (they are, however, the primary way of using something). The problem is that you cannot specialize an alias template, which is essential in much metaprogramming. I've seen the convention of _c for the class version, so the alias template that people actually use can be uncluttered of suffixes. _impl works just as well.
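A hedged sketch of the point about specialization: pattern matching must live in a class template (aliases cannot be specialized), with the alias as the user-facing name. mp_front is a made-up example primitive, and the primary template is left undefined, per the philosophy above:

```cpp
#include <type_traits>

template<class L> struct mp_front_impl;   // primary intentionally undefined

template<template<class...> class L, class T1, class... T>
struct mp_front_impl<L<T1, T...>>         // the specialization does the work
{
    using type = T1;
};

template<class L> using mp_front = typename mp_front_impl<L>::type;

template<class... T> struct mp_list {};

constexpr bool front_ok =
    std::is_same<mp_front<mp_list<char, int>>, char>::value;
```

Applying mp_front to an empty list hits the undefined primary and fails loudly, which is the intended behavior under this design.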
If one day we have static constexpr lambdas, wouldn't we be able to use some kind of lambda meta-functions
We don't need constexpr lambdas to achieve this. We can do higher-order type computations using dependent typing. I show here how to implement Boost.Fusion's filter so it will take a lambda rather than an MPL placeholder expression:

http://pfultz2.com/blog/2015/01/24/dependent-typing/

which is based on the ideas presented by Zach Laine and Matt Calabrese several years ago at BoostCon. So in essence you can write this in C++14:

auto numbers = simple_filter(some_fusion_sequence, [](auto x)
{
    return is_integral<decltype(x)>() or is_floating_point<decltype(x)>();
});

There is no `constexpr` required by the user. It all happens through dependent typing. This is how Boost.Hana works as well. Furthermore, Boost.Hana is not `constexpr` metaprogramming (although it makes use of `constexpr`); it is actually dependently-typed metaprogramming, which is much more powerful in current C++.
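The core of the dependent-typing idea can be shown in a few lines (names made up for this sketch, assuming C++14 generic lambdas): the *type* a generic lambda returns encodes the answer, so it can be branched on statically without any constexpr:

```cpp
#include <type_traits>
#include <utility>

// The lambda returns std::true_type or std::false_type; the answer
// lives in the return type, not in a runtime value.
auto is_integral_pred = [](auto x)
{
    return std::is_integral<decltype(x)>{};
};

// Recover the answer statically from the type of the call expression.
template<class Pred, class T>
using pred_result = decltype(std::declval<Pred>()(std::declval<T>()));

constexpr bool dep_ok =
    pred_result<decltype(is_integral_pred), int>::value &&
    !pred_result<decltype(is_integral_pred), double>::value;
```

A filter like the one in Paul's post dispatches on exactly this kind of return type to decide, per element, whether to keep it.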
Isn't that what the proposed Boost.Hana and Boost.Fit already do, using a specific trick?
That trick, in general, is only used to achieve `constexpr` initialization of lambdas, which is necessary when declaring global lambdas. It can be used in other ways to allow type deduction of lambdas in a `constexpr` context. However, in general, with dependently-typed metaprogramming this isn't necessary.

Paul

--
View this message in context: http://boost.2283326.n4.nabble.com/Simple-C-11-metaprogramming-tp4676344p467...
Sent from the Boost - Dev mailing list archive at Nabble.com.
2015-05-30 13:26 GMT-03:00 Peter Dimov <lists@pdimov.com>:
I've recently made the mistake to reread Eric Niebler's excellent "Tiny Metaprogramming Library" article
http://ericniebler.com/2014/11/13/tiny-metaprogramming-library/
which of course prompted me to try to experiment with my own tiny metaprogramming library and to see how I'd go about implementing tuple_cat (a challenge Eric gives.)
Ordinarily, any such experiments of mine leave no trace once I abandon them and move on, but this time I decided to at least write an article about the result, so here it is, with the hope someone might find it useful. :-)
Very didactic overview of C++11 metaprogramming; I find many of your points of view very much aligned with my own. I do believe, however, that some of your assumptions are a bit too hasty.

Higher-order constructs, such as argument binding, are necessary in order to enable a truly expressive functional idiom, and template aliases can't possibly make for a reasonable substitute for them. The need for so-called metafunction classes arises most naturally as soon as one decides to write a metafunction that returns another metafunction. As pointed out by Vicente Botet Escriba, template aliases can't allow for an indistinguishable handling of metafunctions and variables (that is, types), for the obvious reason that they can't match template type parameters.

Another thing that I believe has been overlooked is the fact that the property of lazy evaluation is entirely lost by directly typedef'ing metafunctions as their result and, along with it, goes SFINAE friendliness as well, or at least it becomes much trickier to pull off. The ability to use SFINAE as a means to select between alternative implementations is, I dare say, the main reason why one decides to go all the way through metaprogramming to begin with.

While I'm at it, I see that there's a widespread rejection of MPL's typename <...>::type idiom, but I fail to grasp the reason why. Sure, one might find it cumbersome to write it over and over again, but that's a solved problem and always has been. MPL provides lambda<> and the accompanying apply<>, which elegantly overcome this cumbersomeness by greatly reducing the necessity to sprinkle typename <...>::type all over the place.
Taken from your article,

typedef typename add_reference<
    typename add_const<
        typename add_pointer<X>::type
    >::type
>::type Xpcr;

becomes simply

using Xpcr = typename apply<add_reference<add_const<add_pointer<_1>>>, X>::type;

and even if that single typename <>::type is found too much to bear, one now has the option to adopt the C++14 convention and define apply_t accordingly. The most important thing to realize here, however, is the fact that one retains the option to lazily evaluate the expression if so desired, or rather required in a SFINAE context. That is to say, typename <>::type must be regarded as a feature, not as an awkward implementation detail.

I urge you to refrain from using C++11 so promptly as an excuse to deem MPL obsolete. It has been the de facto metaprogramming library for over a decade now for a good reason and, in my opinion, C++11 is not changing that any time soon. Sure, there are now other very elegant alternatives, Hana and many others are there to prove it, but I don't see why we should deprecate a library whose simplicity has allowed it to support a remarkable number of compilers still in production use today, many of which can't cope with the newest C++14 constructs yet.

*Bruno C. O. Dutra*
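For comparison, the eager alias spelling of the same composition, using the standard C++14 _t aliases (std::add_lvalue_reference_t being the standard counterpart of the add_reference used above):

```cpp
#include <type_traits>

// Inside-out composition with no typename/::type noise; the trade-off
// under discussion is that this form is evaluated eagerly.
template<class X>
using Xpcr =
    std::add_lvalue_reference_t<std::add_const_t<std::add_pointer_t<X>>>;

constexpr bool compose_ok = std::is_same<Xpcr<int>, int* const&>::value;
```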
Bruno Dutra wrote:
The need for so called metafunction-classes arises most naturally as soon as one decides to write a metafunction that returns another metafunction. As pointed out by Vincente Escriba, template aliases can't allow for an indistinguishable handling of metafunctions and variables (that is types), for the obvious reason they can't match template type parameters.
I actually know all that. That's kind of the point.

You can't just say "we need higher-order metaprogramming to return metafunctions from metafunctions" - this is a tautology. This is what "higher-order metaprogramming" means. You're basically saying that we need higher-order metaprogramming to do higher-order metaprogramming. True but trivial.

There obviously do exist occasions that call for higher-order metaprogramming. The question is whether we can get by in 97% of the cases without it. Not whether it's useful, but whether it's indispensable. Whether there's room for a "simple" metaprogramming library that doesn't provide higher-order constructs and is therefore based on template aliases and not on metafunction classes, and whether such a library can be adequately useful for real-world tasks. (I'm open to the possibility that the answer is "no", but I'd like it to be "yes".)
2015-06-01 20:55 GMT-03:00 Peter Dimov <lists@pdimov.com>:
You can't just say "we need higher-order metaprogramming to return metafunctions from metafunctions" - this is a tautology. This is what "higher-order metaprogramming" means. You're basically saying that we need higher-order metaprogramming to do higher-order metaprogramming. True but trivial.
My choice of words might have been unfortunate, I was just trying to draw attention to the fact that even if argument binding and all those bells and whistles aren't necessary, metafunctions returning metafunctions alone would call for metafunction-classes. Still, laziness and SFINAE friendliness are the properties I deem most fundamental on any metaprogramming library.
There obviously do exist occasions that call for higher-order metaprogramming. The question is whether we can get by in 97% of the cases without it. Not whether it's useful, but whether it's indispensable. Whether there's room for a "simple" metaprogramming library that doesn't provide higher-order constructs and is therefore based on template aliases and not on metafunction classes, and whether such a library can be adequately useful for real-world tasks. (I'm open to the possibility that the answer is "no", but I'd like it to be "yes".)
I get the point; the question I raise is: "is MPL all that complex in a C++11 world?"

After long discussions on this list a couple of months ago I decided to experiment with rewriting MPL from scratch using C++11 as a starting point. So far I have its "metafunctional" half, that is, the part that deals with higher-order metaprogramming, fairly mature, to the point that I'm preparing to document it. What I've found is that by using C++11 constructs it becomes much simpler and, I dare say, more intuitive than good old MPL. I argue that perhaps that's just simple enough.

As soon as I have it documented I'll share it on this list, but if you'd be interested in taking a look at it before I'm able to do so, you can find it here:

https://github.com/brunocodutra/mpl2/tree/master/include/boost/mpl2/metafunc...

Unit tests, that serve as examples, can be found here:

https://github.com/brunocodutra/mpl2/tree/master/test/metafunctional

*Bruno C. O. Dutra*
Bruno Dutra wrote:
Still, laziness and SFINAE friendliness are the properties I deem most fundamental on any metaprogramming library.
I meant to respond to these points too.

The eagerness of template aliases is an obvious problem in at least one place, and that place is mp_if, where the naive mp_if<P<T>, F<T>, G<T>> evaluates both F<T> and G<T>. But I'm not sure if there are others, under the assumption that we've no higher-order constructs. Eric Niebler's meta has an entire lazy:: namespace with deferred copies of all its constructs; continuing the general theme, I wonder whether all this is strictly needed once we jettison the lambda part.

SFINAE friendliness can be quite a curse. It often transforms reasonably comprehensible error messages into full-scale Agatha Christie mysteries (the compiler output even being of comparable length). So I'm not convinced (at the moment) that a metaprogramming library should be SFINAE-friendly. I presently tend to lean towards the philosophy of static_assert'ing as much as possible, and leaving the primary templates undefined instead of empty.
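The mp_if problem, and the deferred form that avoids it, can be sketched as follows (naming loosely follows the article's mp_ prefix; mp_eval_if_c is an assumed name for the deferred variant):

```cpp
#include <type_traits>

// mp_if needs a class template underneath so only the selected branch
// is ever named.
template<bool C, class T, class E> struct mp_if_c_impl;
template<class T, class E> struct mp_if_c_impl<true, T, E>  { using type = T; };
template<class T, class E> struct mp_if_c_impl<false, T, E> { using type = E; };

template<bool C, class T, class E>
using mp_if_c = typename mp_if_c_impl<C, T, E>::type;

// Deferred form: the branch is passed as F plus arguments, so F<U...>
// is only instantiated when the condition is false.
template<bool C, class T, template<class...> class F, class... U>
struct mp_eval_if_c_impl { using type = T; };

template<class T, template<class...> class F, class... U>
struct mp_eval_if_c_impl<false, T, F, U...> { using type = F<U...>; };

template<bool C, class T, template<class...> class F, class... U>
using mp_eval_if_c = typename mp_eval_if_c_impl<C, T, F, U...>::type;

// value_type_of<void> would be a hard error if evaluated eagerly, but
// the true branch shields it:
template<class T> using value_type_of = typename T::value_type;

constexpr bool lazy_ok =
    std::is_same<mp_eval_if_c<true, int, value_type_of, void>, int>::value;
```

With the naive eager spelling mp_if_c<true, int, value_type_of<void>> the losing branch is formed before mp_if_c is even reached, which is exactly the problem described above.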
SFINAE friendliness can be quite a curse. It often transforms reasonably comprehensible error messages into full-scale Agatha Christie mysteries (the compiler output even being of comparable length). So I'm not convinced (at the moment) that a metaprogramming library should be SFINAE-friendly.
Yep, this is something where compilers need to improve. They should treat the diagnostic for an undefined template instantiation the same way as an overload resolution failure. This way you can have a trace back to the original error, instead of a bunch of unrelated errors.

Paul
On Jun 2, 2015 7:43 AM, "Peter Dimov" <lists@pdimov.com> wrote:
Bruno Dutra wrote:
Still, laziness and SFINAE friendliness are the properties I deem most fundamental on any metaprogramming library.
I meant to respond to these points too.
The eagerness of template aliases is an obvious problem in at least one place and that place is mp_if, where the naive mp_if<P<T>, F<T>, G<T>> evaluates both F<T> and G<T>. But I'm not sure if there are others, under the assumption that we've no higher-order constructs. Eric Niebler's meta has an entire lazy:: namespace with deferred copies of all its constructs; continuing the general theme, I wonder whether all this is strictly needed once we jettison the lambda part.
SFINAE friendliness can be quite a curse. It often transforms reasonably comprehensible error messages into full-scale Agatha Christie mysteries (the compiler output even being of comparable length). So I'm not convinced (at the moment) that a metaprogramming library should be SFINAE-friendly. I presently tend to lean towards the philosophy of static_assert'ing as much as possible, and leaving the primary templates undefined instead of empty.
That was *exactly* my approach in the beginning of my endeavor to rewrite MPL, but only until I had to write the first trait introspection and realized static_assert'ions are not SFINAE friendly.

The need for laziness and SFINAE friendliness arises most clearly when one is faced with a decision problem, whereby one needs a predicate such as is_evaluable<>, which expects a metafunction (a variadic template template parameter in your case) and a variadic set of arguments, and has to decide whether evaluating that metafunction for the given set of arguments yields a valid result or errors out. Errors could be due to arity mismatch or to the incompleteness of the instantiated template, for example, but of course such predicates are expected to dodge any such traps and yield either true_type or false_type accordingly.

Perhaps one could get by without such predicates, but that would feel just too crippling; it would be like programming C without branching. Or maybe one should bite the bullet and rely on a lazy backend to allow for such predicates (see below), but then why not expose the lazy backend to the end user if it's already there? That's why I decided to follow a mixed approach, providing both lazy and eager versions of metafunctions via the typename $<>::type and $_t<> duality introduced by C++14.

// traits example using a lazy backend

#include <type_traits>

template<typename> struct test_c;
template<> struct test_c<void> { using type = void; };

template<typename _> using test = typename test_c<_>::type;

template<template<typename...> class f, typename... args>
struct is_evaluable_c
{
    template<template<typename...> class g, typename = g<args...>>
    static std::true_type check(int);

    template<template<typename...> class>
    static std::false_type check(...);

    using type = decltype(check<f>(0));
};

template<template<typename...> class f, typename... args>
using is_evaluable = typename is_evaluable_c<f, args...>::type;

static_assert(is_evaluable<test, void>::value, "");
static_assert(!is_evaluable<test, int>::value, "");
static_assert(!is_evaluable<test, void, int>::value, "");

Bruno Dutra
Bruno Dutra wrote:
//traits example using a lazy backend ... template<typename _> using test = /*...*/ ... template<template<typename...> class f, typename... args> using is_evaluable = /*...*/
static_assert(is_evaluable<test, void>::value, ""); static_assert(!is_evaluable<test, int>::value, ""); static_assert(!is_evaluable<test, void, int>::value, "");
Where is the laziness here? The user sees test<> (an alias), is_evaluable<> (an alias), and the static asserts don't refer to anything else.

If I add another metafunction (in my sense, i.e. a template alias)

template<class T> using test2 = test<std::decay_t<T>>;

that has no lazy test2_c backend, is_evaluable still works for it:

static_assert(is_evaluable<test2, void const volatile>::value, "");
static_assert(!is_evaluable<test2, int()>::value, "");
static_assert(!is_evaluable<test2, void, int>::value, "");
On Jun 2, 2015 10:03 AM, "Peter Dimov" <lists@pdimov.com> wrote:
Bruno Dutra wrote:
//traits example using a lazy backend
...
template<typename _> using test = /*...*/
...
template<template<typename...> class f, typename... args> using is_evaluable = /*...*/
static_assert(is_evaluable<test, void>::value, ""); static_assert(!is_evaluable<test, int>::value, ""); static_assert(!is_evaluable<test, void, int>::value, "");
Where is the laziness here? The user sees test<> (an alias),
is_evaluable<> (an alias), and the static asserts don't refer to anything else.

That's the point: one goes to great lengths to provide laziness under the hood and yet deprives the end user of direct use of it, despite the fact that it allows for useful features, such as lazy branching.
If I add another metafunction (in my sense, i.e. a template alias)
template<class T> using test2 = test<std::decay_t<T>>;
that has no lazy test2_c backend, is_evaluable still works for it:
static_assert(is_evaluable<test2, void const volatile>::value, ""); static_assert(!is_evaluable<test2, int()>::value, ""); static_assert(!is_evaluable<test2, void, int>::value, "");
What do you mean? std::decay<>::type is the lazy backend. That's actually an example of the point I'm trying to make here: std::decay_t<> is and should be provided only as a shorthand for its lazy counterpart, without, however, attempting to replace it, simply because it can't. That is, the lazy version remains available to end users if needed.

Bruno Dutra
Bruno Dutra wrote:
What do you mean? std::decay<>::type is the lazy backend.
Eh, no, it's just an implementation detail. Change your

template<> struct test_c<void> { using type = void; };

to

template<> struct test_c<int> { using type = int; };

and then look at the following decay<> of mine, which I'm sure you'll agree really has no "lazy backend".

template<class T> T make();
template<class T> T decay_impl(T);

template<class T> using decay = decltype(decay_impl(make<T&&>()));

template<class T> using test2 = test<decay<T>>;

static_assert(is_evaluable<test2, int const volatile>::value, "");
static_assert(!is_evaluable<test2, int()>::value, "");
static_assert(!is_evaluable<test2, void, int>::value, "");

is_evaluable still works as expected. It even works on decay<> directly:

static_assert(is_evaluable<decay, int const volatile>::value, "");
static_assert(is_evaluable<decay, int()>::value, "");
static_assert(!is_evaluable<decay, int, float>::value, "");
static_assert(!is_evaluable<decay, void>::value, "");
On Jun 2, 2015 11:09 AM, "Peter Dimov" <lists@pdimov.com> wrote:
Bruno Dutra wrote:
What do you mean? std::decay<>::type is the lazy backend.
Eh, no, it's just an implementation detail. Change your
template<> struct test_c<void>{using type = void;};
to
template<> struct test_c<int>{using type = int;};
I'm sorry, I don't see why this is relevant, could you elaborate?
and then look at the following decay<> of mine, which I'm sure you'll agree really has no "lazy backend".
template<class T> T make();
template<class T> T decay_impl(T);
template<class T> using decay = decltype(decay_impl(make<T&&>()));
template<class T> using test2 = test<decay<T>>;
I agree that it does not have a "lazy backend" in the sense of my previous replies, but I don't see your point. You've just switched to an equivalent type-dependent evaluation similar to what Hana does, but now at the expense of not being able even to provide a lazy version to the end user to allow for lazy branching; that is, you've just made it cryptic enough to call it an "implementation detail". All I'm trying to say is that laziness is required under the hood at some point anyway, and I don't see why it should be hidden from the end user if the effort to bring it about must be made regardless. Keep in mind it is useful and very much aligned with the current design of the standard C++ library, and please note I'm not even arguing in favor of lambdas and binding at this point. Bruno Dutra
Bruno Dutra wrote:
I agree that it does not have a "lazy backend" in the sense of my previous replies, but I don't see your point, ...
My point is that is_evaluable works on metafunctions regardless of whether they have lazy backends or not. You were citing is_evaluable as a motivating example that lazy backends are necessary, but it doesn't need them.

I do agree that for branching one does need laziness. So if you want, f.ex.

template<class Def, template<class...> class F, class... T> using eval_or_default =
    /**/ // if is_evaluable<F, T...> returns F<T...>, else returns Def

then I agree that you have to have a lazy if.

My basic question is "do you need laziness for something else apart from lazy if?"
...but now at the expense of not being able to even provide a lazy version to the end user...
The lazy version is not hard to recover, but I'm still wondering whether it's even necessary outside of if.

template<template<class...> class F, class... T> struct mp_defer
{
    using type = F<T...>;
};
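Peter only states the contract of eval_or_default here. Under the assumption of an is_evaluable-style detector (from Bruno Dutra's earlier message, not shown in this excerpt), one possible C++11 sketch is the following; the names evaluable_impl and eval_or_default_impl are mine, for illustration only:

```cpp
#include <type_traits>

// Detector: does substituting T... into the alias F succeed?
template<template<class...> class F, class... T>
struct evaluable_impl
{
    template<template<class...> class G, class = G<T...>>
    static std::true_type test(int);

    template<template<class...> class G>
    static std::false_type test(...);

    static bool const value = decltype(test<F>(0))::value;
};

// Lazy selection: only the chosen branch's ::type is ever formed,
// so F<T...> is not instantiated when the substitution would fail.
template<bool C, template<class...> class F, class Def, class... T>
struct eval_or_default_impl { using type = Def; };

template<template<class...> class F, class Def, class... T>
struct eval_or_default_impl<true, F, Def, T...> { using type = F<T...>; };

// if is_evaluable<F, T...>, returns F<T...>, else returns Def
template<class Def, template<class...> class F, class... T>
using eval_or_default =
    typename eval_or_default_impl<evaluable_impl<F, T...>::value, F, Def, T...>::type;

// Example alias that is only valid for types with a nested ::type
template<class T> using nested_type = typename T::type;

static_assert(std::is_same<eval_or_default<void, nested_type, std::decay<int&>>, int>::value, "");
static_assert(std::is_same<eval_or_default<void, nested_type, int>, void>::value, "");
```

Note that the lazy if is exactly what makes this well-formed: the false branch never spells out F<T...>, which is the point Peter and Bruno agree on below.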
2015-06-02 11:54 GMT-03:00, Peter Dimov <lists@pdimov.com>:
[...]
I do agree that for branching one does need laziness. So if you want, f.ex.
template<class Def, template<class...> class F, class... T> using eval_or_default =
    /**/ // if is_evaluable<F, T...> returns F<T...>, else returns Def
then I agree that you have to have a lazy if.
My basic question is "do you need laziness for something else apart from lazy if?"
The lazy version is not hard to recover, but I'm still wondering whether it's even necessary outside of if.
template<template<class...> class F, class... T> struct mp_defer
{
    using type = F<T...>;
};
mp_defer is a reasonable way of providing a lazy interface if it is only meant to be used in conjunction with your proposed eval_or_default. Apart from branching, laziness is also required for lambdas as presented by MPL, but if you are to argue that those are not necessary, then perhaps mp_defer might indeed be all you need. In any case I'd rather stick to the typename $<>::type vs. $_t<> duality to mimic the standard library, but that of course is just a matter of personal taste. At any rate, I think I agree with Louis that if one is restricted to such very simple cases, then perhaps <type_traits> is all one needs. That is, in my opinion any metaprogramming library should just go ahead and provide the more complex lambda facilities if it is to prove itself really useful to the end user. Bruno Dutra
On 6/2/2015 3:42 AM, Peter Dimov wrote:
Bruno Dutra wrote:
Still, laziness and SFINAE friendliness are the properties I deem most fundamental on any metaprogramming library.
I meant to respond to these points too.
The eagerness of template aliases is an obvious problem in at least one place and that place is mp_if, where the naive mp_if<P<T>, F<T>, G<T>> evaluates both F<T> and G<T>. But I'm not sure if there are others, under the assumption that we've no higher-order constructs. Eric Niebler's meta has an entire lazy:: namespace with deferred copies of all its constructs; continuing the general theme, I wonder whether all this is strictly needed once we jettison the lambda part.
In my experience, meta::defer (and all the stuff in meta's lazy:: namespace) is useful outside of lambdas. You want to defer computations for branches and short-circuit evaluation (if, and, or) where the branch(es) not taken would have hard errors; or for compile-time optimization, to avoid computing results you won't use. This is real, and has nothing to do with lambdas.
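The kind of deferred branch Eric describes, where the branch not taken would be a hard error, can be sketched as follows (a minimal illustration with names of my choosing; this is not meta's actual lazy:: machinery):

```cpp
#include <tuple>
#include <type_traits>

// Lazy branch: if C is true the result is T; otherwise F<U...> is formed.
// The branch not selected is never instantiated, so it may contain what
// would otherwise be a hard error.
template<bool C, class T, template<class...> class F, class... U>
struct mp_eval_if_c_impl { using type = T; };

template<class T, template<class...> class F, class... U>
struct mp_eval_if_c_impl<false, T, F, U...> { using type = F<U...>; };

template<bool C, class T, template<class...> class F, class... U>
using mp_eval_if_c = typename mp_eval_if_c_impl<C, T, F, U...>::type;

// Example: tuple_element<0, std::tuple<>> is a hard error, so an eager if
// could not name it unconditionally; the lazy branch sidesteps it.
template<class Tp> using front = typename std::tuple_element<0, Tp>::type;

template<class Tp>
using front_or_void =
    mp_eval_if_c<std::tuple_size<Tp>::value == 0, void, front, Tp>;

static_assert(std::is_same<front_or_void<std::tuple<int, char>>, int>::value, "");
static_assert(std::is_same<front_or_void<std::tuple<>>, void>::value, "");
```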
SFINAE friendliness can be quite a curse. It often transforms reasonably comprehensible error messages into full-scale Agatha Christie mysteries (the compiler output even being of comparable length). So I'm not convinced (at the moment) that a metaprogramming library should be SFINAE-friendly. I presently tend to lean towards the philosophy of static_assert'ing as much as possible, and leaving the primary templates undefined instead of empty.
I made Meta SFINAE-friendly on a lark and found it useful in practice. (See this thread[*] where Louis and I compared common_type implementations with Meta and Hana.) Of course you're right about mysterious errors, and I don't feel I have a good response yet. [*] http://lists.boost.org/Archives/boost/2015/03/220446.php -- Eric Niebler Boost.org http://www.boost.org
Eric Niebler wrote:
I made Meta SFINAE-friendly on a lark and found it useful in practice. (See this thread[*] where Louis and I compared common_type implementations with Meta and Hana.) ... [*] http://lists.boost.org/Archives/boost/2015/03/220446.php
Having a SFINAE-friendly fold helps, but here's my implementation of this challenge.

// common_type

template<class...> struct common_type
{
    using type_not_present = void***[];
};

template<class T> struct common_type<T>: std::decay<T>
{
};

template<class T1, class T2, class... T> using common_type_impl =
    common_type<decltype(declval<bool>()? declval<T1>(): declval<T2>()), T...>;

template<class T1, class T2, class... T> struct common_type<T1, T2, T...>:
    eval_or_default<common_type<>, common_type_impl, T1, T2, T...>
{
};

#include <iostream>
#include <typeinfo>

int main()
{
    std::cout << typeid( common_type<char volatile[], void*, int const[], void*>::type ).name() << std::endl;
    std::cout << typeid( common_type<char, float, int*, double>::type_not_present ).name() << std::endl;
}

where eval_or_default<Def, F, T...> checks is_evaluable<F, T...> (from Bruno Dutra's earlier message) and returns F<T...> if evaluable, Def if not. Getting it right was a bit tricky, but I find the final result quite readable. (This implements the specification in the latest C++17 draft.)
On 06/04/2015 04:30 PM, Peter Dimov wrote:
Eric Niebler wrote:
I made Meta SFINAE-friendly on a lark and found it useful in practice. (See this thread[*] where Louis and I compared common_type implementations with Meta and Hana.) ... [*] http://lists.boost.org/Archives/boost/2015/03/220446.php
Having a SFINAE-friendly fold helps, but here's my implementation of this challenge.
// common_type
template<class...> struct common_type { using type_not_present = void***[]; };
template<class T> struct common_type<T>: std::decay<T> { };
template<class T1, class T2, class... T> using common_type_impl =
    common_type<decltype(declval<bool>()? declval<T1>(): declval<T2>()), T...>;
template<class T1, class T2, class...T> struct common_type<T1, T2, T...>: eval_or_default<common_type<>, common_type_impl, T1, T2, T...> { };
#include <iostream>
#include <typeinfo>
int main()
{
    std::cout << typeid( common_type<char volatile[], void*, int const[], void*>::type ).name() << std::endl;
    std::cout << typeid( common_type<char, float, int*, double>::type_not_present ).name() << std::endl;
}
where eval_or_default<Def, F, T...> checks is_evaluable<F, T...> (from Bruno Dutra's earlier message) and returns F<T...> if evaluable, Def if not.
Getting it right was a bit tricky, but I find the final result quite readable. (This implements the specification in the latest C++17 draft.)
Wow. Very elegant. -- Michael Caisse ciere consulting ciere.com
On 6/4/2015 4:30 PM, Peter Dimov wrote:
Eric Niebler wrote:
I made Meta SFINAE-friendly on a lark and found it useful in practice. (See this thread[*] where Louis and I compared common_type implementations with Meta and Hana.) ... [*] http://lists.boost.org/Archives/boost/2015/03/220446.php
Having a SFINAE-friendly fold helps, but here's my implementation of this challenge.
// common_type
template<class...> struct common_type { using type_not_present = void***[]; };
template<class T> struct common_type<T>: std::decay<T> { };
template<class T1, class T2, class... T> using common_type_impl =
    common_type<decltype(declval<bool>()? declval<T1>(): declval<T2>()), T...>;
template<class T1, class T2, class...T> struct common_type<T1, T2, T...>: eval_or_default<common_type<>, common_type_impl, T1, T2, T...> { };
#include <iostream>
#include <typeinfo>
int main()
{
    std::cout << typeid( common_type<char volatile[], void*, int const[], void*>::type ).name() << std::endl;
    std::cout << typeid( common_type<char, float, int*, double>::type_not_present ).name() << std::endl;
}
where eval_or_default<Def, F, T...> checks is_evaluable<F, T...> (from Bruno Dutra's earlier message) and returns F<T...> if evaluable, Def if not.
Getting it right was a bit tricky, but I find the final result quite readable. (This implements the specification in the latest C++17 draft.)
I'm not so sure. If a user defines a specialization for common_type<MyInt, int>, it won't get used when computing common_type<MyInt, int, int>. I think that would complicate your implementation somewhat. I like the mutual recursion between common_type and common_type_impl, but really you've just implemented fold, and it would be better to have fold as a separate reusable algorithm. I see that you've isolated the SFINAE-friendliness in your library to the eval_or_default utility. It's an interesting choice, but I think it's why you can't use a fold here, am I right? -- Eric Niebler Boost.org http://www.boost.org
Eric Niebler wrote:
(This implements the specification in the latest C++17 draft.)
I'm not so sure. If a user defines a specialization for common_type<MyInt, int>, it won't get used when computing common_type<MyInt, int, int>.
Interesting question. This is the specification:

For the common_type trait applied to a parameter pack T of types, the member type shall be either defined or not present as follows:

(3.1) — If sizeof...(T) is zero, there shall be no member type.

(3.2) — If sizeof...(T) is one, let T0 denote the sole type in the pack T. The member typedef type shall denote the same type as decay_t<T0>.

(3.3) — If sizeof...(T) is greater than one, let T1, T2, and R, respectively, denote the first, second, and (pack of) remaining types comprising T. [ Note: sizeof...(R) may be zero. —end note ] Let C denote the type, if any, of an unevaluated conditional expression (5.16) whose first operand is an arbitrary value of type bool, whose second operand is an xvalue of type T1, and whose third operand is an xvalue of type T2. If there is such a type C, the member typedef type shall denote the same type, if any, as common_type_t<C, R...>. Otherwise, there shall be no member type.

I'd say that this is what I implemented. But you do have a point that it doesn't do what yours does.
Eric Niebler wrote:
(This implements the specification in the latest C++17 draft.)
I'm not so sure. If a user defines a specialization for common_type<MyInt, int>, it won't get used when computing common_type<MyInt, int, int>.
As it turns out, users are not allowed to specialize common_type, which is why the C++17 specification does not concern itself with such specializations.

As a curiosity, decaying the arguments before the conditional operator is also subtly wrong.

#include <iostream>
#include <typeinfo>

struct X { operator int (); operator long () const; };
struct Y { operator int (); operator long () const; };

int main()
{
    std::cout << typeid( true? declval<X const&>(): declval<Y const&>() ).name() << std::endl;
    std::cout << typeid( true? declval<X&>(): declval<Y&>() ).name() << std::endl;
    std::cout << typeid( true? declval<X>(): declval<Y>() ).name() << std::endl;
}
On 6/5/2015 9:42 AM, Peter Dimov wrote:
Eric Niebler wrote:
(This implements the specification in the latest C++17 draft.)
I'm not so sure. If a user defines a specialization for common_type<MyInt, int>, it won't get used when computing common_type<MyInt, int, int>.
As it turns out, users are not allowed to specialize common_type, which is why the C++17 specification does not concern itself with such specializations.
Users are allowed to specialize `common_type`. There is a standing issue against SFINAE-friendly `common_type` by Eric and yours truly: https://cplusplus.github.io/LWG/lwg-active.html#2465 Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 6/5/2015 5:42 AM, Peter Dimov wrote:
Eric Niebler wrote:
(This implements the specification in the latest C++17 draft.)
I'm not so sure. If a user defines a specialization for common_type<MyInt, int>, it won't get used when computing common_type<MyInt, int, int>.
As it turns out, users are not allowed to specialize common_type, which is why the C++17 specification does not concern itself with such specializations.
It's in Table 57 - Other transformations, where common_type is described:
template <class... T> struct common_type;
The member typedef type shall be defined as set out below. All types in the parameter pack T shall be complete or (possibly cv) void. A program may specialize this trait if at least one template parameter in the specialization is a user-defined type. [ Note: Such specializations are needed when only explicit conversions are desired among the template arguments. --end note ]
As a curiosity, decaying the arguments before the conditional operator is also subtly wrong.
Hrm, interesting. I think there's a library issue about that. -- Eric Niebler Boost.org http://www.boost.org
Eric Niebler wrote:
If a user defines a specialization for common_type<MyInt, int>, it won't get used when computing common_type<MyInt, int, int>. I think that would complicate your implementation somewhat.
It does. I now need a SFINAE-friendly defer in addition to eval_or_default.

// common_type

template<class...> struct common_type
{
    using type_not_present = void***[];
};

template<class T> struct common_type<T>: std::decay<T>
{
};

template<class T1, class T2> using builtin_common_type =
    decltype(declval<bool>()? declval<T1>(): declval<T2>());

template<class T1, class T2> struct common_type<T1, T2>: mp_if_c<
    std::is_same<T1, std::decay_t<T1>>::value && std::is_same<T2, std::decay_t<T2>>::value,
    mp_defer<builtin_common_type, T1, T2>,
    common_type<std::decay_t<T1>, std::decay_t<T2>>>
{
};

template<class T1, class T2, class... T> using common_type_impl =
    common_type<typename common_type<T1, T2>::type, T...>;

template<class T1, class T2, class... T> struct common_type<T1, T2, T...>:
    eval_or_default<common_type<>, common_type_impl, T1, T2, T...>
{
};

mp_defer was before just

template<template<class...> class F, class... T> struct mp_defer
{
    using type = F<T...>;
};

but now has to be

// mp_defer

struct empty
{
};

template<template<class...> class F, class... T> struct mp_defer_impl
{
    using type = F<T...>;
};

template<template<class...> class F, class... T> using mp_defer =
    mp_if<is_evaluable<F, T...>, mp_defer_impl<F, T...>, empty>;
I like the mutual recursion between common_type and common_type_impl, but really you've just implemented fold, and it would be better to have fold as a separate reusable algorithm. I see that you've isolated the SFINAE-friendliness in your library to the eval_or_default utility. It's an interesting choice, but I think it's why you can't use a fold here, am I right?
I can use a fold if the fold is SFINAE-friendly, yes. But I'm not yet convinced that it has to be. This would imply making all algorithms SFINAE-friendly, or making fold an exception, or adding both SFINAE/non-SFINAE variants of some. None of those sounds appealing.
On 6/5/2015 6:23 AM, Peter Dimov wrote:
Eric Niebler wrote:
If a user defines a specialization for common_type<MyInt, int>, it won't get used when computing common_type<MyInt, int, int>. I think that would complicate your implementation somewhat.
It does. I now need a SFINAE-friendly defer in addition to eval_or_default.
// common_type
template<class...> struct common_type { using type_not_present = void***[]; };
template<class T> struct common_type<T>: std::decay<T> { };
template<class T1, class T2> using builtin_common_type = decltype(declval<bool>()? declval<T1>(): declval<T2>());
template<class T1, class T2> struct common_type<T1, T2>: mp_if_c<
    std::is_same<T1, std::decay_t<T1>>::value && std::is_same<T2, std::decay_t<T2>>::value,
    mp_defer<builtin_common_type, T1, T2>,
    common_type<std::decay_t<T1>, std::decay_t<T2>>>
{
};
template<class T1, class T2, class... T> using common_type_impl = common_type<typename common_type<T1, T2>::type, T...>;
template<class T1, class T2, class...T> struct common_type<T1, T2, T...>: eval_or_default<common_type<>, common_type_impl, T1, T2, T...> { };
mp_defer was before just
template<template<class...> class F, class... T> struct mp_defer { using type = F<T...>; };
but now has to be
// mp_defer
struct empty { };
template<template<class...> class F, class... T> struct mp_defer_impl { using type = F<T...>; };
template<template<class...> class F, class... T> using mp_defer =
    mp_if<is_evaluable<F, T...>, mp_defer_impl<F, T...>, empty>;
Now it's basically the same as my implementation, but without the use of fold.
I like the mutual recursion between common_type and common_type_impl, but really you've just implemented fold, and it would be better to have fold as a separate reusable algorithm. I see that you've isolated the SFINAE-friendliness in your library to the eval_or_default utility. It's an interesting choice, but I think it's why you can't use a fold here, am I right?
I can use a fold if the fold is SFINAE-friendly, yes. But I'm not yet convinced that it has to be. This would imply making all algorithms SFINAE-friendly, or making fold an exception, or adding both SFINAE/non-SFINAE variants of some. None of those sounds appealing.
...unless there's some way to use the same fold for both, giving the user a way to choose. -- Eric Niebler Boost.org http://www.boost.org
On 02/06/15 11:42, Peter Dimov wrote:
The eagerness of template aliases is an obvious problem in at least one place and that place is mp_if, where the naive mp_if<P<T>, F<T>, G<T>> evaluates both F<T> and G<T>. But I'm not sure if there are others, under the assumption that we've no higher-order constructs. Eric Niebler's meta has an entire lazy:: namespace with deferred copies of all its constructs; continuing the general theme, I wonder whether all this is strictly needed once we jettison the lambda part.
A core foundational library should not instantiate templates that it doesn't strictly need to. Instantiating templates has a very high impact on compile-time memory usage, performance, and on the well-formed programs it allows. A good library is one that is designed to be the least intrusive on the applications that use it. For a library, efficiency is more important than simplicity.
Mathias Gaunard wrote:
A core foundational library should not instantiate templates that it doesn't strictly need to. Instantiating templates has a very high impact on compile-time memory usage, performance, ...
That's not always true in C++11/17. One example is the and<T...> primitive, for which the short-circuit version is much slower than the one that always expands T::value... . Another is find_if<L<T...>, P>, which is also much faster when it expands P<T>... for every element, even though it technically doesn't need to proceed beyond the first element for which P<T> is true.
and on the well-formed programs it allows.
On 16/07/15 12:12, Peter Dimov wrote:
Mathias Gaunard wrote:
A core foundational library should not instantiate templates that it doesn't strictly need to. Instantiating templates has a very high impact on compile-time memory usage, performance, ...
That's not always true in C++11/17. One example is the and<T...> primitive, for which the short-circuit version is much slower than the one that always expands T::value...
That would depend on the cost of instantiating T, which you cannot predict since it is a user-defined type.
On 7/16/2015 5:29 PM, Mathias Gaunard wrote:
On 16/07/15 12:12, Peter Dimov wrote:
Mathias Gaunard wrote:
A core foundational library should not instantiate templates that it doesn't strictly need to. Instantiating templates has a very high impact on compile-time memory usage, performance, ...
That's not always true in C++11/17. One example is the and<T...> primitive, for which the short-circuit version is much slower than the one that always expands T::value...
That would depend on the cost of instantiating T, which you cannot predict since it is a user-defined type.
There's merit in both approaches, as it depends entirely on the use case. For instance, when used in a `static_assert` the possible benefits of short-circuiting the `false` case are irrelevant, while a cheaper `true` case is beneficial. Given that the eager version is straightforward to write in C++17, it'd be preferable for `and_` to represent the traditional short-circuited approach its name implies. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
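The "straightforward to write in C++17" eager version Agustín refers to is presumably a fold expression; a sketch (the name eager_and is mine):

```cpp
#include <type_traits>

// C++17: eager conjunction as a one-line fold expression. All T::value are
// instantiated; an empty pack yields true.
template<class... T>
using eager_and = std::integral_constant<bool, (T::value && ...)>;

static_assert(eager_and<std::true_type, std::true_type>::value, "");
static_assert(!eager_and<std::true_type, std::false_type>::value, "");
static_assert(eager_and<>::value, "");
```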
On 6/2/15 7:55 AM, Peter Dimov wrote:
There obviously do exist occasions that call for higher-order metaprogramming. The question is can we get by in 97% of the cases without it. Not whether it's useful, but whether it's indispensable. Whether there's a room for a "simple" metaprogramming library that doesn't provide higher-order constructs and is therefore based on template aliases and not on metafunction classes, and whether such a library can be adequately useful for real world tasks. (I'm open to the possibility that the answer is "no", but I'd like it to be "yes".)
+10 This is exactly what I've been saying for the past year or so. This seems to be the consensus among people who have "been there and done that". I personally am avoiding fancy TMP libraries now in favor of simpler mechanisms. Regards, -- Joel de Guzman http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
On Mon, Jun 1, 2015 at 9:32 PM, Joel de Guzman <djowel@gmail.com> wrote:
On 6/2/15 7:55 AM, Peter Dimov wrote:
There obviously do exist occasions that call for higher-order metaprogramming. The question is can we get by in 97% of the cases without it. Not whether it's useful, but whether it's indispensable. Whether there's a room for a "simple" metaprogramming library that doesn't provide higher-order constructs and is therefore based on template aliases and not on metafunction classes, and whether such a library can be adequately useful for real world tasks. (I'm open to the possibility that the answer is "no", but I'd like it to be "yes".)
+10 This is exactly what I've been saying for the past year or so. This seems to be the consensus among people who have "been there and done that". I personally am avoiding fancy TMP libraries now in favor of simpler mechanisms.
Yes. A thousand times yes. TMP gets even easier with some of the C++14 features, so much so that I keep wondering if I'll ever use MPL (or the type-computation-only portion of Hana) again. Zach
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
TMP gets even easier with some of the C++14 features, so much so that I keep wondering if I'll ever use MPL (or the type-computation-only portion of Hana) again.
I agree that pure-type computations are much less useful than they used to be. In fact, I don't think I've used the type-level part of Hana for a lot more than proving that it really works to skeptics. For actual work, it turns out that auto-deduced return type and algorithms on tuples can get you a long way before you actually need to mess with the type-level. Also, for really really small type-level computations, <type_traits> are usually all that's needed, and _any_ metaprogramming library would just get in the way. Just my .2 Regards, Louis
On Tue, Jun 2, 2015 at 10:16 AM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
TMP gets even easier with some of the C++14 features, so much so that I keep wondering if I'll ever use MPL (or the type-computation-only portion of Hana) again.
I agree that pure-type computations are much less useful than they used to be. In fact, I don't think I've used the type-level part of Hana for a lot more than proving that it really works to skeptics. For actual work, it turns out that auto-deduced return type and algorithms on tuples can get you a long way before you actually need to mess with the type-level. Also, for really really small type-level computations, <type_traits> are usually all that's needed, and _any_ metaprogramming library would just get in the way.
Just my .2
This is my experience exactly. Except that I usually just give 0.02, but maybe I'm just cheap. ;) Zach
On 02/06/2015 04:32, Joel de Guzman wrote:
+10 This is exactly what I've been saying the past year or so. This seems the consensus among people who have "been there and done that". I personally am avoiding fancy TMP libraries now in favor of simpler mechanisms.
I've begun following this too. Edouard Alligand and I also started a shared project, largely inspired by Peter's work, for our very limited use cases we didn't want to duplicate: https://github.com/edouarda/brigand Not perfect in any sense, but it looks satisfying to use :)
Peter, I can confirm, peeking at the code, that at least clang ships a naive linear-recursion implementation of make_index_sequence, and I read somewhere that GCC doesn't do any better; however, it's well known <http://stackoverflow.com/a/17426611/801438> that it may be implemented in O(log(n)) while taking full advantage of memoization. Since Part 2 of your article is all about benchmarking, have you actually tried it? I see your "duplicate immune" version of mp_contains_impl, and also mp_map_find and hence mp_at, depend on it, so I would expect to see measurable improvements on their benchmarks. Following is an implementation if *Bruno C. O. Dutra*
*<it seems I accidentally hit send, sorry about that>* 2015-07-24 0:42 GMT-03:00 Bruno Dutra <brunocodutra@gmail.com>:
Peter,
I can confirm, peeking at the code, that at least clang ships a naive linear-recursion implementation of make_index_sequence, and I read somewhere that GCC doesn't do any better; however, it's well known <http://stackoverflow.com/a/17426611/801438> that it may be implemented in O(log(n)) while taking full advantage of memoization. Since Part 2 of your article is all about benchmarking, have you actually tried it? I see your "duplicate immune" version of mp_contains_impl, and also mp_map_find and hence mp_at, depend on it, so I would expect to see measurable improvements on their benchmarks. Following is an implementation if
Following is an implementation of it:

template<std::size_t...> struct indices
{
    using type = indices;
};

template<typename, typename> struct merge_indices;

template<std::size_t... l, std::size_t... u>
struct merge_indices<indices<l...>, indices<u...>>:
    indices<l..., sizeof...(l) + u...>
{
};

template<std::size_t n> struct make_indices;

template<std::size_t n> using make_indices_t = typename make_indices<n>::type;

template<std::size_t n> struct make_indices:
    merge_indices<make_indices_t<n/2>, make_indices_t<n - n/2>>
{
};

template<> struct make_indices<0U>: indices<> {};
template<> struct make_indices<1U>: indices<0U> {};

(Note: the original message repeated the make_indices_t alias after the specializations; an alias template cannot be redefined, so it is declared only once here, before its use in the recursive case.)

Regards, Bruno C. O. Dutra
Bruno Dutra wrote:
Peter,
I can confirm peeking at the code that at least clang ships a naive linear recursion implementation of make_index_sequence and I read somewhere that GCC doesn't do any better, however it's well known <http://stackoverflow.com/a/17426611/801438> it may be implemented in O(log(n)) plus taking full advantage of memoization. Since Part 2 of your article is all about benchmarking, have you actually tried it?
Yes, the timings in Part 2 use a log N integer_sequence.

// integer_sequence

template<class T, T... I> struct integer_sequence
{
};

template<class S1, class S2> struct append_integer_sequence;

template<class T, T... I, T... J>
struct append_integer_sequence<integer_sequence<T, I...>, integer_sequence<T, J...>>
{
    using type = integer_sequence< T, I..., ( J + sizeof...(I) )... >;
};

template<class T, T N> struct make_integer_sequence_impl;

template<class T, T N> using make_integer_sequence =
    typename make_integer_sequence_impl<T, N>::type;

template<class T, T N> struct make_integer_sequence_impl_
{
private:

    static T const M = N / 2;
    static T const R = N % 2;

    using S1 = make_integer_sequence<T, M>;
    using S2 = typename append_integer_sequence<S1, S1>::type;
    using S3 = make_integer_sequence<T, R>;
    using S4 = typename append_integer_sequence<S2, S3>::type;

public:

    using type = S4;
};

template<class T, T N> struct make_integer_sequence_impl: mp_if_c<
    N == 0,
    mp_identity<integer_sequence<T>>,
    mp_if_c<
        N == 1,
        mp_identity<integer_sequence<T, 0>>,
        make_integer_sequence_impl_<T, N>
    >
>
{
};

// index_sequence

template<std::size_t... I> using index_sequence = integer_sequence<std::size_t, I...>;

template<std::size_t N> using make_index_sequence = make_integer_sequence<std::size_t, N>;
On Jul 24, 2015 6:11 AM, "Peter Dimov" <lists@pdimov.com> wrote:
Bruno Dutra wrote:
Peter, since Part 2 of your article is all about benchmarking, have you actually tried it?
Yes, the timings in Part 2 use a log N integer_sequence.
Interesting. Since your code snippets refer to the standard library for the implementation of integer_sequence, at first I thought it could have been the culprit behind the performance degradation of mp_contains; however, if that were the case, such an impressive performance of mp_at wouldn't be expected. At any rate, I decided to mention it just in case you might have overlooked it. I wonder why mp_contains is so affected by the double inheritance depth, and whether there's any way around it. Especially on MSVC, that looks like a bug. Regards, Bruno
On 5/30/15 9:26 AM, Peter Dimov wrote:
I've recently made the mistake to reread Eric Niebler's excellent "Tiny Metaprogramming Library" article
http://ericniebler.com/2014/11/13/tiny-metaprogramming-library/
which of course prompted me to try to experiment with my own tiny metaprogramming library and to see how I'd go about implementing tuple_cat (a challenge Eric gives.)
Ordinarily, any such experiments of mine leave no trace once I abandon them and move on,
We need a permanent place for stuff like this.
but this time I decided to at least write an article about the result, so here it is, with the hope someone might find it useful. :-)
This to me is incredible. A worthy successor to Abrahams' and Gurtovoy's MPL book. Clearly C++11 is going to turn out to be a major game changer, with repercussions beyond what we can imagine now.

But I had a really dumb question. The examples use mp_if<>, but I couldn't find the definition of mp_if anywhere. The document looks self-contained - what am I missing here?

Robert Ramey
Robert Ramey wrote:
This to me is incredible. A worthy successor to Abrahams' and Gurtovoy's MPL book. Clearly C++11 is going to turn out to be a major game changer, with repercussions beyond what we can imagine now.
Thank you for the kind words.
But I had a really dumb question. The examples use mp_if<>, but I couldn't find the definition of mp_if anywhere. The document looks self-contained - what am I missing here?
It's not a dumb question at all - this is an oversight on my part. I updated the second article with the definition of mp_if. Thanks for catching it.
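For readers following along, a minimal sketch of mp_if in the style of the article is below. This is an illustrative reconstruction (the exact definition in the updated article may differ): mp_if_c dispatches on a bool via partial specialization, and mp_if adapts it to a type whose ::value is the condition.

```cpp
#include <type_traits>

// Primary template left without ::type; the two specializations cover
// both possible values of the bool parameter.
template<bool C, class T, class E> struct mp_if_c_impl {};
template<class T, class E> struct mp_if_c_impl<true, T, E>  { using type = T; };
template<class T, class E> struct mp_if_c_impl<false, T, E> { using type = E; };

template<bool C, class T, class E>
using mp_if_c = typename mp_if_c_impl<C, T, E>::type;

// Accept any type with a ::value member, e.g. std::true_type.
template<class C, class T, class E>
using mp_if = mp_if_c<static_cast<bool>(C::value), T, E>;

static_assert(std::is_same<mp_if<std::true_type, int, long>, int>::value, "");
```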
On 7/16/2015 10:35 AM, Robert Ramey wrote:
On 5/30/15 9:26 AM, Peter Dimov wrote:
I've recently made the mistake to reread Eric Niebler's excellent "Tiny Metaprogramming Library" article
http://ericniebler.com/2014/11/13/tiny-metaprogramming-library/
which of course prompted me to try to experiment with my own tiny metaprogramming library and to see how I'd go about implementing tuple_cat (a challenge Eric gives.)
Ordinarily, any such experiments of mine leave no trace once I abandon them and move on,
We need a permanent place for stuff like this.
but this time I decided to at least write an article about the result, so here it is, with the hope someone might find it useful. :-)
This to me is incredible. A worthy successor to Abrahams and Gurtovoy's MPL book. Clearly C++11 is going to turn out to be a major game changer with repercussions beyond what we can imagine now.
AFAICT, Peter's code is basically a work-alike of my Meta library [^1] with one feature added (the ability to treat any template instance as a typelist), and one feature removed (metafunction classes and their associated utilities).

I really believe losing metafunction classes is a significant loss -- not so much because template template parameters are so awful (but they kind of are), but because all the functional composition utilities (bind, compose, curry, uncurry, on, etc.) are so incredibly useful. And also because core issue #1430 [^2] makes it impossible to handle variadic and fixed-arity template aliases uniformly.

Meta's approach -- borrowed from the MPL -- of quoting template template parameters to turn them into metafunction classes (aka alias classes) puts the workaround for #1430 in one place so that it can be ignored and forgotten.

But in the end, Peter's and my thinking is pretty aligned wrt metaprogramming in C++11 and beyond.

[1]: https://github.com/ericniebler/meta
[2]: http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1430

-- Eric Niebler Boost.org http://www.boost.org
On Aug 13, 2015 3:45 PM, "Eric Niebler" <eniebler@boost.org> wrote:
AFAICT, Peter's code is basically a work-alike of my Meta library [^1] with one feature added (the ability to treat any template instance as a typelist), and one feature removed (metafunction classes and their associated utilities).
I really believe losing metafunction classes is a significant loss -- not so much because template template parameters are so awful (but they kind of are), but because all the functional composition utilities (bind, compose, curry, uncurry, on, etc.) are so incredibly useful. And also because core issue #1430 [^2] makes it impossible to handle variadic and fixed-arity template aliases uniformly.
Meta's approach -- borrowed from the MPL -- of quoting template template parameters to turn them into metafunction classes (aka alias classes) puts the workaround for #1430 in one place so that it can be ignored and forgotten.
But in the end, Peter's and my thinking is pretty aligned wrt metaprogramming in C++11 and beyond.
[1]: https://github.com/ericniebler/meta [2]: http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1430
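The "quoting" technique Eric describes can be sketched as follows. The names quote and invoke are illustrative (Meta's actual spellings differ), and this sketch deliberately omits the #1430 workaround itself; the point is only that wrapping a template template parameter in an ordinary class turns it into a plain type that composition utilities can pass around.

```cpp
#include <type_traits>

// A "metafunction class" (alias class): an ordinary type carrying a
// nested alias template. In a real library the nested alias is also
// where the workaround for core issue #1430 would live.
template<template<class...> class F>
struct quote {
    template<class... Ts>
    using invoke = F<Ts...>;
};

// Any variadic class template can be quoted.
template<class... Ts> struct list {};

using Q = quote<list>;                 // Q is an ordinary type now
using R = Q::invoke<int, char>;        // R is list<int, char>

static_assert(std::is_same<R, list<int, char>>::value, "");
```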
After maturing my understanding of metaprogramming a little further, I have come to the conclusion that eager metafunction evaluation using template aliases and lazy metafunction evaluation through explicit access of a nested ::type are two sides of the same coin, and choosing one over the other is mostly just a matter of taste. That said, I'd like to share two reasons why I prefer the lazy version:

1. Most tools for metaprogramming in the standard library are lazy, so following the principle of least surprise, lazy it shall remain.
2. Lazy evaluation of metafunctions allows for expressive construction of "lambda expressions" as formalized by MPL.

I concur with Eric that functional composition is a killer feature and I strongly believe it should constitute the very core of any metaprogramming library. I just go a step further and greatly simplify things by getting rid of "metafunction classes" altogether. I've managed to transfer the entire burden of abstraction to an analogue of MPL's apply, which expects a "lambda expression" and a set of arguments and, through a handful of partial specializations, directly evaluates it. Atop apply<>, bind<> becomes a one-liner, as does everything else, just as gracefully. Oh, and this way one also avoids dealing with core issue #1430, since MPL's quote<> is no more.

If one looks closer, by doing the actual recursive metafunction evaluation in a SFINAE context behind the scenes, apply<> becomes a monadic bind of metafunctions, which, from this perspective, are themselves nothing more than optionals. Such concepts bring great expressiveness to a functional world.

I've been exploring this idea with my library Metal [1] (yes, the name kinda sucks, I know, but it is what it is), which is an evolution of my earlier attempt at reimplementing MPL. Not that I gave up on the earlier idea, it has just grown far beyond that.

[1]: https://brunocodutra.github.io/metal/concepts.html
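The eager/lazy distinction being drawn here can be illustrated with a trivial metafunction (names are illustrative, not from any of the libraries discussed): the lazy form delivers its result through a nested ::type that is only computed when named, while the eager form is an alias template evaluated on the spot.

```cpp
#include <type_traits>

// Lazy style: the result is a nested ::type, evaluated only when named.
template<class T> struct add_pointer_lazy { using type = T*; };

// Eager style: an alias template yields the result immediately.
template<class T> using add_pointer_eager = T*;

// Both compute the same thing; they differ in when and how the
// computation is triggered.
static_assert(std::is_same<add_pointer_lazy<int>::type, int*>::value, "");
static_assert(std::is_same<add_pointer_eager<int>, int*>::value, "");
```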
On 8/14/2015 6:05 AM, Bruno Dutra wrote:
After maturing my understanding of metaprogramming a little further, I have come to the conclusion that eager metafunction evaluation using template aliases and lazy metafunction evaluation through explicit access of a nested ::type are two sides of the same coin, and choosing one over the other is mostly just a matter of taste.
That said, I'd like to share two reasons why I prefer the lazy version:
1. Most tools for metaprogramming in the standard library are lazy, so following the principle of least surprise, lazy it shall remain
I think that's primarily, or at least in part, because alias templates didn't exist before C++11. Just because we did things that way in the past doesn't necessarily mean it's the best way now.
2. Lazy evaluation of metafunctions allows for expressive construction of "lambda expressions" as formalized by MPL
True, but in my experience, lambdas are not often needed.
I concur with Eric that functional composition is a killer feature and I strongly believe it should constitute the very core of any metaprogramming library. I just go a step further and greatly simplify things by getting rid of "metafunction classes" altogether. I've managed to transfer the entire burden of abstraction to an analogue of MPL's apply, which expects a "lambda expression" and a set of arguments and, through a handful of partial specializations, directly evaluates it. Atop apply<>, bind<> becomes a one-liner, as does everything else, just as gracefully. Oh, and this way one also avoids dealing with core issue #1430, since MPL's quote<> is no more.
Although interesting from a design perspective, I suspect that if you benchmark you'll find this approach is too heavy. Compile-time lambdas are expensive. Turning *every* metafunction evaluation into a lambda evaluation is going to kill compile-times. Apologies if I've misunderstood.
If one looks closer, by doing the actual recursive metafunction evaluation in a SFINAE context behind the scenes, apply<> becomes a monadic bind of metafunctions, which, from this perspective, are themselves nothing more than optionals. Such concepts bring great expressiveness to a functional world.
You refer to your eval template that returns just<Ret> or nothing? It's interesting, but essentially the same as turning evaluation failures into a SFINAE-able condition. It has the same pros and cons, too. If you get back a "nothing" from a complicated computation, you're left wondering why. I don't have a good solution to that yet.
I've been exploring this idea with my library Metal [1] (yes, the name kinda sucks, I know, but it is what it is), which is an evolution of my earlier attempt at reimplementing MPL. Not that I gave up on the earlier idea, it has just grown far beyond that.
Thanks, -- Eric Niebler Boost.org http://www.boost.org
2015-08-19 20:31 GMT-03:00 Eric Niebler <eniebler@boost.org>:
<snip>
I concur with Eric that functional composition is a killer feature and I strongly believe it should constitute the very core of any metaprogramming library. I just go a step further and greatly simplify things by getting rid of "metafunction classes" altogether. I've managed to transfer the entire burden of abstraction to an analogue of MPL's apply, which expects a "lambda expression" and a set of arguments and, through a handful of partial specializations, directly evaluates it. Atop apply<>, bind<> becomes a one-liner, as does everything else, just as gracefully. Oh, and this way one also avoids dealing with core issue #1430, since MPL's quote<> is no more.
Although interesting from a design perspective, I suspect that if you benchmark you'll find this approach is too heavy. Compile-time lambdas are expensive. Turning *every* metafunction evaluation into a lambda evaluation is going to kill compile-times.
Apologies if I've misunderstood.
Not every metafunction evaluation is made into a lambda evaluation, just those that require higher-order composability, such as fold, transform, count_if, etc. In any case I'd like to discuss some numbers regarding the following three *naive* implementations of transform<>, where extract<> is an alias for "typename ::type", eval<> [1] is a SFINAE-friendly metafunction evaluator, and apply<> behaves like its homonym from MPL, but uses eval<> as its SFINAE backend.

    template<
        template<typename...> class expr,
        template<typename...> class list,
        typename... args
    >
    struct transform<expr, list<args...>> {
        // using type = list<extract<expr<args>>...>;            // A
        // using type = list<extract<eval<expr, args>>...>;      // B
        // using type = list<extract<apply<expr<_1>, args>>...>; // C
    };

And the numbers for GCC 5.2.0 are:

        1000   3000   5000
    A   0.64   1.65   2.62
    B   0.86   2.31   3.73
    C   2.01   5.51   9.36

Clearly you are right, lambdas are expensive, but interestingly eval<> is not that much. Congratulations, you have just convinced me to provide a further specialization for apply<>:

    template<
        template<template<typename...> class> class lbd,
        template<typename...> class expr,
        typename... args
    >
    struct apply<lbd<expr>, args...> : eval<expr, args...> {};

which allows the user not to depend on lambdas if she doesn't need to.

    template<template<typename...> class> struct lbd;

    template<
        template<typename...> class expr,
        template<typename...> class list,
        typename... args
    >
    struct transform<expr, list<args...>> {
        using type = list<extract<apply<lbd<expr>, args>>...>;
    };

And the numbers are:

        1000   3000   5000
    D   0.84   2.33   3.75

i.e. essentially the same as B.
If one looks closer, by doing the actual recursive metafunction
evaluation in a SFINAE context behind the scenes, apply<> becomes a monadic bind of metafunctions, which, from this perspective, are themselves nothing more than optionals. Such concepts bring great expressiveness to a functional world.
You refer to your eval template that returns just<Ret> or nothing? It's interesting, but essentially the same as turning evaluation failures into a SFINAE-able condition. It has the same pros and cons, too. If you get back a "nothing" from a complicated computation, you're left wondering why. I don't have a good solution to that yet.
What I mean is that by carefully using SFINAE to avoid internal errors, one can guarantee that instantiating any metafunction that is part of the library is "safe", provided of course that the user provides SFINAE-friendly arguments. Now allow me to elaborate on what I mean by "safe".

First let's consider the case where the user-provided arguments are SFINAE-friendly, i.e. a user-provided lambda, for instance, does not inherit from some undefined/final/fundamental type or anything of the sort. Then instantiating any metafunction<args...> that is part of the library is guaranteed to be strictly equivalent to either of the following:

    1. opt {};
    2. opt {using type = ret;};

Since, by definition, a lazy metafunction is only evaluated when its nested ::type is named, any metafunction is inherently a model of optional, that is, it either has a nested type (just something) or not (nothing). Such a guarantee allows one to provide traits such as is_just and is_nothing that can be used to safely test whether any metafunction call has succeeded. This way, has_key, for example, may be reduced to the following:

    template<typename seq, typename key>
    struct has_key : is_just<at_key<seq, key>> {};

and is guaranteed to safely inherit from either std::true_type or std::false_type for whatever arguments the user provides.

Now let's imagine the user nonetheless tries to evaluate a metafunction that instantiates to "nothing". The only error the compiler will raise will look similar to the following:

    error: no type named 'type' in 'struct metafunction<args...>'

That's it, no scary internals exposed.

Now let's assume the user provides arguments that are not SFINAE-friendly and that do induce some irrecoverable error. In this case the compiler will complain about an error in the user code, which is even less scary than some internal error inside a library.
I don't think there is any better approach regarding user friendliness, apart from static_assert'ing everything, which, on the other hand, has the downside of making it impossible to implement useful traits such as is_just and is_nothing. Thank you for such an insightful discussion, Bruno [1] https://brunocodutra.github.io/metal/structmetal_1_1eval.html
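A trait like the is_just being discussed can be sketched with a standard SFINAE detection idiom. This is an illustrative reconstruction, not Metal's actual implementation: the partial specialization is only viable when T has a nested ::type, so the trait reports whether a lazy metafunction "succeeded".

```cpp
#include <type_traits>

// Hypothetical helper: maps any well-formed list of types to void.
// (A C++11-safe stand-in for C++17's std::void_t.)
template<class...> struct voider { using type = void; };

// Primary template: assume "nothing" (no nested ::type).
template<class T, class = void>
struct is_just : std::false_type {};

// Specialization chosen only when T::type is well-formed: "just".
template<class T>
struct is_just<T, typename voider<typename T::type>::type> : std::true_type {};

// Toy metafunctions standing in for a failed and a successful evaluation.
struct nothing_result {};
struct just_int { using type = int; };

static_assert(!is_just<nothing_result>::value, "missing ::type detected");
static_assert(is_just<just_int>::value, "present ::type detected");
```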
On 2015-08-20 19:50, Bruno Dutra wrote: <snip>
Now let's imagine the user nonetheless tries to evaluate a metafunction that instantiates to "nothing". The only error the compiler will raise will look similar to the following:
error: no type named 'type' in 'struct metafunction<args...>'
That's it, no scary internals exposed.
I'm not so sure that's wise. These "scary internals" are often vital hints to debugging when code isn't doing what you expect. How confident are you that such is not needed here? John Bytheway
2015-08-22 8:43 GMT-03:00 John Bytheway <jbytheway+boost@gmail.com>:
On 2015-08-20 19:50, Bruno Dutra wrote: <snip>
Now let's imagine the user nonetheless tries to evaluate a metafunction that instantiates to "nothing". The only error the compiler will raise will look similar to the following:
error: no type named 'type' in 'struct metafunction<args...>'
That's it, no scary internals exposed.
I'm not so sure that's wise. These "scary internals" are often vital hints to debugging when code isn't doing what you expect. How confident are you that such is not needed here?
John Bytheway
How confident are you that "scary internals" provide any better hint than the actual list of arguments that were used to instantiate the affected metafunction?

Now seriously, allow me to elaborate. Suppose the user provides an invalid set of arguments to some metafunction, say a fundamental type where a list is expected, and this invalid set of arguments is allowed to be blindly forwarded ever deeper, up to a point where some error is triggered. Invariably, the first error reported by the compiler would be about some undocumented internal template which can't be instantiated for some set of arguments that, in turn, are often totally unrelated to what the user is trying to do. All the while, the actual metafunction instantiated by the user, together with the invalid set of arguments that are in fact behind everything, would be dozens of lines below, hidden among multiple other [probably unexpected] metafunction instantiations. I fail to imagine how an error message could be more misleading than this.

Assuming the library is free of bugs, the error must have its roots in the set of arguments provided by the user, so that's the first thing the user should check, and it is usually the user's only hope of figuring it out. That's precisely the very first thing I make sure the compiler outputs as part of its error message.

In my own experience, every time I abuse MPL, I find myself scrolling past dozens and dozens of incomprehensible, and worse yet, recursive template instantiations, looking for the actual metafunction instantiation that triggered the error. It's not fun at all.

Bruno
On 8/13/15 11:44 AM, Eric Niebler wrote:
On 7/16/2015 10:35 AM, Robert Ramey wrote:
On 5/30/15 9:26 AM, Peter Dimov wrote:
I've recently made the mistake to reread Eric Niebler's excellent "Tiny Metaprogramming Library" article
http://ericniebler.com/2014/11/13/tiny-metaprogramming-library/
which of course prompted me to try to experiment with my own tiny metaprogramming library and to see how I'd go about implementing tuple_cat (a challenge Eric gives.)
Ordinarily, any such experiments of mine leave no trace once I abandon them and move on,
We need a permanent place for stuff like this.
but this time I decided to at least write an article about the result, so here it is, with the hope someone might find it useful. :-)
This to me is incredible. A worthy successor to Abrahams and Gurtovoy's MPL book. Clearly C++11 is going to turn out to be a major game changer with repercussions beyond what we can imagine now.
AFAICT, Peter's code is basically a work-alike of my Meta library [^1] with one feature added (the ability to treat any template instance as a typelist), and one feature removed (metafunction classes and their associated utilities).
To me there is a fundamental difference between Peter's formulation and others such as yours, Hana, and the rest. I see Peter's document as a "cheat sheet" which describes useful TMP idioms that are supported directly by C++. It shows how far one can go with no library at all. This is useful in a way that a TMP library is not.

a) It is extremely helpful as a learning tool.
b) It is very useful to one who needs just a little bit of TMP to get over some programming hurdle in the most direct and expedient manner. If one includes one of these idioms in his code, he doesn't need to include a whole library with a lot of new conceptual overhead. He just has to refer to the Simple C++ TMP document.
c) Adding some other library is a bigger deal. It requires reference to more elaborate, complete documentation which someone has to maintain. Once someone creates a library, they naturally want to inform it with some conceptual integrity. This makes it better as a library, but then likely requires new concepts and more complete documentation and maintenance.

So I see Peter's cheat sheet as complementing a library solution. It helps users learn the concepts and provides solutions for many problems, but has no pretension of creating a "library". Of course, as people become more comfortable with TMP through the use of Peter's cheat sheet, some will eventually move on to creating more complex metafunctions on their own and/or move to a more complete library solution.

In short - I see Peter's work as complementing rather than competing with library solutions.
I really believe losing metafunction classes is a significant loss -- not so much because template template parameters are so awful (but they kind of are), but because all the functional composition utilities (bind, compose, curry, uncurry, on, etc.) are so incredibly useful. And also because core issue #1430 [^2] makes it impossible to handle variadic and fixed-arity template aliases uniformly.
Right - If you need this stuff - use a library.
Meta's approach -- borrowed from the MPL -- of quoting template template parameters to turn them into metafunction classes (aka alias classes) puts the workaround for #1430 in one place so that it can be ignored and forgotten.
Which of course is the essence and motivation for creating a library. But Peter's document doesn't purport to be a library. I realize that Peter got sucked into demonstrating how far one could take this - his ego got the best of him and this diluted his point. The power of his document is not that it can do everything - it's that one can do a lot with zero conceptual overhead and very little investment.
But in the end, Peter's and my thinking is pretty aligned wrt metaprogramming in C++11 and beyond.
I think we all agree that C++11 and beyond has opened up lots of unanticipated new territory in metaprogramming.
[1]: https://github.com/ericniebler/meta [2]: http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1430
Robert Ramey
Robert Ramey wrote:
So I see Peter's cheat sheet as complementing a library solution. It helps users learn the concepts, provides solutions for many problems, but has no pretension of creating a "library".
Eric may have been referring to https://github.com/pdimov/mp11 which is a library (not yet finished).
participants (17)
- Agustín K-ballo Bergé
- Bruno Dutra
- charleyb123 .
- David Stone
- Eric Niebler
- Joel de Guzman
- Joel FALCOU
- John Bytheway
- Louis Dionne
- Mathias Gaunard
- Michael Caisse
- Paul A. Bristow
- Peter Dimov
- pfultz2
- Robert Ramey
- Vicente J. Botet Escriba
- Zach Laine