
Dear Boost community,

The formal review of Louis Dionne's Hana library begins today, 10th June, and ends on 24th June. Hana is a header-only library for C++ metaprogramming that provides facilities for computations on both types and values. It provides a superset of the functionality provided by Boost.MPL and Boost.Fusion, but with more expressiveness, faster compilation times, and faster (or equal) run times.

To dive right into examples, please see the Quick start section of the library's documentation: http://ldionne.com/hana/index.html#tutorial-quickstart

Hana makes use of C++14 language features and thus requires a C++14-conforming compiler. It is recommended you evaluate it with Clang 3.5 or higher.

Hana's source code is available on GitHub: https://github.com/ldionne/hana
Full documentation is also viewable on GitHub: http://ldionne.github.io/hana
To read the documentation offline: git clone http://github.com/ldionne/hana --branch=gh-pages doc/gh-pages

For a gentle introduction to Hana, please see:
1. C++Now 2015: http://ldionne.github.io/hana-cppnow-2015 (slides)
2. CppCon 2014: https://youtu.be/L2SktfaJPuU (video), http://ldionne.github.io/hana-cppcon-2014 (slides)

We encourage your participation in this review. At a minimum, kindly state:
- Whether you believe the library should be accepted into Boost
  * Conditions for acceptance
- Your name
- Your knowledge of the problem domain

You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's:
  * Design
  * Implementation
  * Documentation
  * Tests
  * Usefulness
- Did you attempt to use the library? If so:
  * Which compiler(s)?
  * What was the experience? Any problems?
- How much effort did you put into your evaluation of the review?

We await your feedback!

Best,
Glen

Glen Fernandes <glen.fernandes <at> gmail.com> writes:
Dear Boost community,
The formal review of Louis Dionne's Hana library begins today, 10th June, and ends on 24th June.
[...]
Hana's source code is available on Github: https://github.com/ldionne/hana
Full documentation is also viewable on Github: http://ldionne.github.io/hana
To read the documentation offline: git clone http://github.com/ldionne/hana --branch=gh-pages doc/gh-pages
For a gentle introduction to Hana, please see:
1. C++Now 2015: http://ldionne.github.io/hana-cppnow-2015 (slides)
2. CppCon 2014: https://youtu.be/L2SktfaJPuU (video), http://ldionne.github.io/hana-cppcon-2014 (slides)
Dear Boost,

I have made a version of Hana available online through the Wandbox service: http://melpon.org/wandbox/permlink/MZqKhMF7tiaNZdJg

This will allow you to play around with Hana, and to compile and run your programs online. You don't have any reason not to try Hana out anymore :-).

However, please note that the version of Hana available online is slightly different from the one currently on master, which is frozen for the duration of the review. The two breaking changes are:
1. the removal of the Traversable concept
2. the merge of the <boost/hana/struct_macros.hpp> header into the <boost/hana/struct.hpp> header

You should probably not notice any difference.

Regards,
Louis

Here are some points I noted while reviewing Hana. I see you have made some changes on GitHub, so maybe some of this doesn't apply anymore.

- Dots shouldn't be used in names; underscores should be used instead.
- `fold` and `reverse_fold` should be preferred over `fold_right` and `fold_left`. This is more familiar to C++ programmers.
- Concepts are capitalized; however, models of a concept should not be capitalized (such as `IntegralConstant`, `Either`, `Lazy`, `Optional`, `Tuple`, etc.).
- `IntegralConstant` is very strange. In Hana, it's not a concept (even though it's capitalized), but rather a so-called "data type". Furthermore, because of this strangeness it doesn't interoperate with other IntegralConstants (such as those from Tick), even though all the operators are defined.
- The `.times` member seems strange and should be a free function so it works with any IntegralConstant.
- The problem in the section 'Taking control of SFINAE' seems like it could be solved a lot more simply using `overload_linear`.
- Concepts refer to 'superclasses'; these should be listed either as refinements or under a requirements section (which seems to be missing). It would be nicer if the concepts were documented the way they are at cppreference: http://en.cppreference.com/w/cpp/concept
- Concepts make no mention of minimum type requirements such as MoveConstructible.
- The organization of the documentation could be better. It's nice to show the algorithms when the user views a concept, but it would be better if all the algorithms could also be viewed together.
- For a compile-time `Iterable` sequence (which is all you support right now), `is_empty` can be inferred, and should be optional.
- Overall, I think the concepts could be simplified. They seem too complicated, and this leads to many surprises which seem not to make sense (such as using `Range` or `String` with `concat`, or using `tick::integral_constant`).
- Currently, none of the algorithms are constrained; instead the library uses `static_assert`, which I think is bad for a library that is targeting modern compilers.
- It would be nice if the use of variable templates were optional (and not used internally), since without inline variables they can lead to ODR violations and executable bloat.

Overall, I would like to see Hana compile on more compilers before it gets accepted into Boost (currently it doesn't even compile on my MacBook).

Thanks,
Paul

--
View this message in context: http://boost.2283326.n4.nabble.com/Boost-Hana-Formal-review-for-Hana-tp46769...
Sent from the Boost - Dev mailing list archive at Nabble.com.

Paul Fultz II <pfultz2 <at> yahoo.com> writes:
Here are some points I noted while reviewing Hana. I see you have made some changes on GitHub, so maybe some of this doesn't apply anymore.
The changes I made are only to develop, so whatever you say is still applicable to master.
- Dots shouldn't be used in names; underscores should be used instead.
This will be modified, since it has been asked for by three different people so far. For future reference, you can refer to [1] for the status of this issue.
- `fold` and `reverse_fold` should be preferred over `fold_right` and `fold_left`. This is more familiar to C++ programmers.
First, to my knowledge, the only libraries that even define fold and/or reverse_fold are Fusion and MPL, so it's not like there is an undeniable precedent for using those names instead of something else in C++. But even then, `fold` and `reverse_fold` functions are provided for consistency with those libraries, so I really don't see what the problem is. If you prefer those names, you can use them, and they have exactly the same semantics as their Fusion counterparts.
- Concepts are capitalized; however, models of a concept should not be capitalized (such as `IntegralConstant`, `Either`, `Lazy`, `Optional`, `Tuple`, etc.).
`IntegralConstant`, `Tuple`, etc. are tags used for tag dispatching, like Fusion's `vector_tag` & friends. I can't use a non-capitalized version as-is, because it's going to clash with `integral_constant`. Also, I find that using something like `tuple_tag` is uglier than using `Tuple`. Consider for example:

    make<tuple_tag>(xs...)    to<tuple_tag>(xs)
    make<Tuple>(xs...)        to<Tuple>(xs)

I find that using `Tuple` just leads to prettier code. Also, since Hana does not specify the type of all of its containers, we refer to them by their tag instead of their (unspecified) type. So for example, I talk of a Hana `Range`, not a Hana `range`, because the latter is not a type. If I were to use the name `range_tag` instead of `Range`, I couldn't do that as easily. Fusion does not have this problem because all of its containers have well-specified types, so you can always talk about a `fusion::vector` and it is always understood that this includes `fusion::vector3`, but Hana does not have that.
- `IntegralConstant` is very strange. In Hana, it's not a concept (even though it's capitalized), but rather a so-called "data type". Furthermore, because of this strangeness it doesn't interoperate with other IntegralConstants (such as those from Tick), even though all the operators are defined.
IntegralConstant is not a concept, that's true. The fact that it does not interoperate with Tick's IntegralConstants out-of-the-box has nothing to do with that, however. You must make your Tick integral constants a model of Hana's Constant concept for it to work. See the gist at [2] for how to do that.
- The `.times` member seems strange and should be a free function so it works with any IntegralConstant.
The idea of having a `.times` member function comes from Ruby [3]. I personally think it is expressive and a simple tool for a simple job. The syntax is also much cleaner than using a non-member function. Consider:

    int_<10>.times([]{ std::cout << "foo" << std::endl; });
    times(int_<10>, []{ std::cout << "foo" << std::endl; });

However, as we've been discussing in this issue [4], I might add an equivalent non-member function.
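For concreteness, the free-function form Paul asks for is easy to sketch in plain C++14. This hypothetical `times` (not Hana's API) only assumes a static integral `::value` member, so any IntegralConstant-like type works, including `std::integral_constant`:

```cpp
#include <iostream>
#include <type_traits>

// Hypothetical free function: call f IC::value times, for any type
// exposing a static integral ::value member.
template <typename IC, typename F>
void times(IC, F f) {
    for (auto i = 0; i < IC::value; ++i)
        f();
}

// Example: times(std::integral_constant<int, 10>{},
//                []{ std::cout << "foo" << std::endl; });
```

Because the count is carried in the type, the loop bound is a compile-time constant and the call site reads much like the member-function version.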
- The problem in the section 'Taking control of SFINAE' seems like it could be solved a lot more simply using `overload_linear`.
First, `overload_linearly` is an implementation detail, since I've moved the Functional module out of the way to give me the possibility of using Fit in the future. However, your point is valid; I could also write the following instead:

    auto optionalToString = hana::overload_linearly(
        [](auto&& x) -> decltype(x.toString()) { return x.toString(); },
        [](auto&&) -> std::string { return "toString not defined"; }
    );

This approach is fine for the optionalToString function, which is rather simple. I wanted to show how to use Optional to control compile-time emptiness in complex cases, so I'll just expand this section or change the example to something that is really better solved using Optional. Thanks for the heads up; you can refer to this issue [5] in the future.
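For readers who haven't seen linear overloading before, it is simple to sketch in isolation. The following is a minimal, hypothetical two-alternative stand-in (Hana's real `overload_linearly` is variadic), using the classic int/long ranking trick so the first callable alternative wins; `Widget` is an invented example type:

```cpp
#include <string>
#include <utility>

// Minimal two-alternative sketch of linear overloading. The int/long ranking
// makes the first alternative preferred; SFINAE drops it when not callable.
template <typename F, typename G>
struct linear_t {
    F f; G g;

    template <typename X>
    auto call(X&& x, int) const -> decltype(f(std::forward<X>(x))) {
        return f(std::forward<X>(x));
    }
    template <typename X>
    auto call(X&& x, long) const -> decltype(g(std::forward<X>(x))) {
        return g(std::forward<X>(x));
    }

    template <typename X>
    auto operator()(X&& x) const -> decltype(this->call(std::forward<X>(x), 0)) {
        return this->call(std::forward<X>(x), 0);
    }
};

template <typename F, typename G>
linear_t<F, G> overload_linearly(F f, G g) { return {std::move(f), std::move(g)}; }

// The tutorial's example, expressed with the sketch above:
struct Widget { std::string toString() const { return "Widget"; } };

auto const optionalToString = overload_linearly(
    [](auto&& x) -> decltype(x.toString()) { return x.toString(); },
    [](auto&&) -> std::string { return "toString not defined"; }
);
```

Here `optionalToString(Widget{})` takes the first branch, while `optionalToString(42)` fails substitution on `x.toString()` and falls through to the second.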
- Concepts refer to 'superclasses'; these should be listed either as refinements or under a requirements section (which seems to be missing). It would be nicer if the concepts were documented the way they are at cppreference: http://en.cppreference.com/w/cpp/concept
This was fixed on develop. I now use the term "Refined concept" instead of "Superclass". Regarding concept requirements, they are listed in the "minimal complete definition" section of each concept. Then, semantic properties that must be satisfied are explained in the "laws" section.
- Concepts make no mention of minimum type requirements such as MoveConstructible.
I believe the right place to put this would be in the documentation of concrete models like `Tuple`, but not in the concepts (like `Sequence`). Hana's concepts operate at a slightly higher level and they do not really have a notion of storage. But I agree that it is necessary to document these requirements. Please refer to this issue [6] for status.
- The organization of the documentation could be better. It's nice to show the algorithms when the user views a concept, but it would be better if all the algorithms could also be viewed together.
I assume you are talking about the reference documentation and not the tutorial. I agree that it could be easier to look for algorithms. There are also other quirks I'd like to see gone. The problem is that Doxygen is pretty inflexible, and I'm already tweaking it quite heavily. I'm considering either using a different tool completely or making some changes in the organization of the reference. Your more precise comment about viewing algorithms on their own page is already tracked by this issue [7].
- For a compile-time `Iterable` sequence (which is all you support right now), `is_empty` can be inferred, and should be optional.
How can it be inferred?
- Overall, I think the concepts could be simplified. They seem too complicated, and this leads to many surprises which seem not to make sense (such as using `Range` or `String` with `concat`, or using `tick::integral_constant`).
1. Concatenating ranges does not make sense. A Hana Range is a contiguous sequence of compile-time integers. What happens when you concatenate `make_range(0_c, 3_c)` with `make_range(6_c, 10_c)`? It's not contiguous anymore, so it's not a Range anymore.

2. Concatenating strings makes complete sense, indeed. This could be handled very naturally by defining a `Monoid` model, but it was not done because I did not like using `+` for concatenating strings :-). I opened this issue [8] to try and find a proper resolution.

3. Tick's integral_constants can be handled as I explained in the gist at [2].

As a fundamental library, Hana was designed to be very general and extensible in ways I couldn't possibly foresee. Hence, I could have stuck with Fusion's concept hierarchy (minus iterators), but that would have been less general than what I was aiming for. Also, Hana is slightly biased towards functional programming, and it reflects in the concepts. If that is what you mean by "complicated", then I'd say this generality and power is a feature rather than a bug. I would really like to know specifically which concepts you find too complicated or superfluous. There are definitely things that could be improved, but in general I am very content with the current hierarchy and I think this is one of Hana's strengths, to be frank.
- Currently, none of the algorithms are constrained; instead the library uses `static_assert`, which I think is bad for a library that is targeting modern compilers.
People have mixed opinions about this. I personally think the last thing you want is for an overload to SFINAE-out because of some failure deep down the call chain, considering we're working with heterogeneous objects. I think the best way to go is to fail very fast and very explicitly with a nice static_assert message, which is what Hana tries very hard to do. I also think the minority of people who would benefit from having SFINAE-friendly algorithms is largely outweighed by the majority of non-template-metaprogramming gurus (likely not reading this list) who would rather have a nice and helpful `static_assert` message explaining what they messed up.

Also, there's the problem that being SFINAE-friendly could hurt compile-time performance, because every time you call an algorithm we'd have to check whether it's going to compile. However, because we're working with dependent types, checking whether the algorithm compiles requires doing the algorithm itself, which is in general costly. We could however emulate this by using the tag system. For example, `fold_left` could be defined as:

    template <typename Xs, typename State, typename F,
              typename = std::enable_if_t<models<Foldable, Xs>()>>
    constexpr decltype(auto) fold_left(Xs&& xs, State&& state, F&& f) {
        // ...
    }

This would give us an approximate SFINAE-friendliness, but like I said above, I think the best approach is to fail loud and fast.
- It would be nice if the use of variable templates were optional (and not used internally), since without inline variables they can lead to ODR violations and executable bloat.
Without variable templates, we would have to write `type<T>{}`, `int_<1>{}`, etc. all of the time instead of `type<T>` and `int_<1>`. Sure, that's just two characters, but considering you couldn't even rely on what `type<T>` is (if it were a type, since `decltype(type<T>)` is currently unspecified), we're really not gaining much. In short: no variable templates means a less usable library, without much benefit (see next paragraph).

Regarding variable templates and ODR, I thought variable templates couldn't lead to ODR violations? I know global function objects (even constexpr) can lead to ODR violations, but I wasn't aware of the problem for variable templates. I would appreciate it if you could show me where the problem lies more specifically. Also, for reference, there's a defect report [9] related to global constexpr objects, and an issue tracking this problem here [10].

Finally, regarding executable bloat, we're talking about stateless constexpr objects here. At worst, we're talking 1 byte per object. At best (and most likely), we're talking about 0 bytes because of link-time optimizations. Otherwise, I could also give internal linkage to the global objects and they would probably be optimized away by the compiler itself, without even requiring LTO. Am I dreaming?
Overall, I would like to see Hana compile on more compilers before it gets accepted into Boost (currently it doesn't even compile on my MacBook).
What compiler did you try to compile it with? Also, an important GCC bug that was preventing Hana from working properly on GCC was fixed a couple of days ago, so I'm going to try to finish the port ASAP, and I'm fairly confident that it should work on GCC 5.2.

Thanks for all your comments.

Regards,
Louis

[1]: https://github.com/ldionne/hana/issues/114
[2]: https://gist.github.com/ldionne/32f61a7661d219ca834d#file-main-cpp
[3]: http://ruby-doc.org/core-1.9.3/Integer.html#method-i-times
[4]: https://github.com/ldionne/hana/issues/100
[5]: https://github.com/ldionne/hana/issues/115
[6]: https://github.com/ldionne/hana/issues/116
[7]: https://github.com/ldionne/hana/issues/82
[8]: https://github.com/ldionne/hana/issues/117
[9]: http://www.open-std.org/JTC1/SC22/WG21/docs/cwg_active.html#2104
[10]: https://github.com/ldionne/hana/issues/76

Here are some points I noted while reviewing Hana. I see you have made some changes on GitHub, so maybe some of this doesn't apply anymore.
The changes I made are only to develop, so whatever you say is still applicable to master.
- Dots shouldn't be used in names; underscores should be used instead.
This will be modified, since it has been asked for by three different people so far. For future reference, you can refer to [1] for the status of this issue.
Awesome.
- `fold` and `reverse_fold` should be preferred over `fold_right` and `fold_left`. This is more familiar to C++ programmers.
First, to my knowledge, the only libraries that even define fold and/or reverse_fold are Fusion and MPL, so it's not like there is an undeniable precedent for using those names instead of something else in C++. But even then, `fold` and `reverse_fold` functions are provided for consistency with those libraries, so I really don't see what the problem is. If you prefer those names, you can use them, and they have exactly the same semantics as their Fusion counterparts.
- Concepts are capitalized; however, models of a concept should not be capitalized (such as `IntegralConstant`, `Either`, `Lazy`, `Optional`, `Tuple`, etc.).
`IntegralConstant`, `Tuple`, etc. are tags used for tag dispatching, like Fusion's `vector_tag` & friends. I can't use a non-capitalized version as-is, because it's going to clash with `integral_constant`. Also, I find that using something like `tuple_tag` is uglier than using `Tuple`. Consider for example:
    make<tuple_tag>(xs...)    to<tuple_tag>(xs)
    make<Tuple>(xs...)        to<Tuple>(xs)
I find that using `Tuple` just leads to prettier code. Also, since Hana does not specify the type of all of its containers, we refer to them by their tag instead of their (unspecified) type. So for example, I talk of a Hana `Range`, not a Hana `range`, because the latter is not a type. If I were to use the name `range_tag` instead of `Range`, I couldn't do that as easily. Fusion does not have this problem because all of its containers have well-specified types, so you can always talk about a `fusion::vector` and it is always understood that this includes `fusion::vector3`, but Hana does not have that.
Actually, for me, seeing it written as `tuple_tag` makes so much more sense when I read the code, and could help a lot when I read the documentation. Perhaps that is just me, because other people don't seem to struggle with this.
- `IntegralConstant` is very strange. In Hana, it's not a concept (even though it's capitalized), but rather a so-called "data type". Furthermore, because of this strangeness it doesn't interoperate with other IntegralConstants (such as those from Tick), even though all the operators are defined.
IntegralConstant is not a concept, that's true. The fact that it does not interoperate with Tick's IntegralConstants out-of-the-box has nothing to do with that, however. You must make your Tick integral constants a model of Hana's Constant concept for it to work. See the gist at [2] for how to do that.
Awesome, thanks. So all the concepts are explicit.
- The `.times` member seems strange and should be a free function so it works with any IntegralConstant.
The idea of having a `.times` member function comes from Ruby [3]. I personally think it is expressive and a simple tool for a simple job. The syntax is also much cleaner than using a non-member function. Consider
    int_<10>.times([]{ std::cout << "foo" << std::endl; });
    times(int_<10>, []{ std::cout << "foo" << std::endl; });
However, as we've been discussing in this issue [4], I might add an equivalent non-member function.
If you find the chaining much cleaner, then perhaps you could make it a pipable function instead:

    int_<10> | times([]{ std::cout << "foo" << std::endl; });

This way it could still apply to any IntegralConstant.
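Paul's suggestion can be sketched generically in C++14. The `pipable` adaptor below is hypothetical (as Louis notes later, Hana has no such concept); it simply wraps a unary callable and invokes it through `operator|`:

```cpp
#include <iostream>
#include <type_traits>
#include <utility>

// Hypothetical pipable adaptor: wraps a unary callable so that
// x | p means p.f(x).
template <typename F>
struct pipable_t { F f; };

template <typename F>
pipable_t<F> pipable(F f) { return {std::move(f)}; }

template <typename X, typename F>
auto operator|(X&& x, pipable_t<F> const& p)
    -> decltype(p.f(std::forward<X>(x))) {
    return p.f(std::forward<X>(x));
}

// A pipable 'times' that works with any integral-constant-like type:
// the constant is the piped value, the callable is the argument.
template <typename F>
auto times(F f) {
    return pipable([f](auto ic) {
        for (auto i = 0; i < decltype(ic)::value; ++i)
            f();
    });
}

// Example: std::integral_constant<int, 10>{}
//              | times([]{ std::cout << "foo" << std::endl; });
```

The piped spelling keeps the left-to-right reading of `.times` while staying a free function, at the cost of introducing one more adaptor concept.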
- The problem in the section 'Taking control of SFINAE' seems like it could be solved a lot more simply using `overload_linear`.
First, `overload_linearly` is an implementation detail, since I've moved the Functional module out of the way to give me the possibility of using Fit in the future. However, your point is valid; I could also write the following instead:
    auto optionalToString = hana::overload_linearly(
        [](auto&& x) -> decltype(x.toString()) { return x.toString(); },
        [](auto&&) -> std::string { return "toString not defined"; }
    );
This approach is fine for the optionalToString function, which is rather simple. I wanted to show how to use Optional to control compile-time emptiness in complex cases, so I'll just expand this section or change the example to something that is really better solved using Optional.
Thanks for the heads up; you can refer to this issue [5] in the future.
OK, got it. Although, SFINAE is a pretty powerful compile-time Optional built into the language.
- Concepts refer to 'superclasses'; these should be listed either as refinements or under a requirements section (which seems to be missing). It would be nicer if the concepts were documented the way they are at cppreference: http://en.cppreference.com/w/cpp/concept
This was fixed on develop. I now use the term "Refined concept" instead of "Superclass". Regarding concept requirements, they are listed in the "minimal complete definition" section of each concept. Then, semantic properties that must be satisfied are explained in the "laws" section.
Great.
- Concepts make no mention of minimum type requirements such as MoveConstructible.
I believe the right place to put this would be in the documentation of concrete models like `Tuple`, but not in the concepts (like `Sequence`). Hana's concepts operate at a slightly higher level and they do not really have a notion of storage. But I agree that it is necessary to document these requirements. Please refer to this issue [6] for status.
I was thinking more of when using algorithms, since a tuple will take on the constructibility of its members.
- The organization of the documentation could be better. It's nice to show the algorithms when the user views a concept, but it would be better if all the algorithms could also be viewed together.
I assume you are talking about the reference documentation and not the tutorial. I agree that it could be easier to look for algorithms. There are also other quirks I'd like to see gone. The problem is that Doxygen is pretty inflexible, and I'm already tweaking it quite heavily. I'm considering either using a different tool completely or making some changes in the organization of the reference.
Your more precise comment about viewing algorithms on their own page is already tracked by this issue [7].
Awesome. Have you thought about using a different documentation tool instead, like mkdocs or sphinx?
- For a compile-time `Iterable` sequence (which is all you support right now), `is_empty` can be inferred, and should be optional.
How can it be inferred?
Well, there are several ways it could be formalised: either `head` and `tail` do not exist for an empty sequence, or, if `tail` of an empty sequence always returns an empty sequence, you just detect that `seq == tail(seq)`.
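Paul's second formulation — `tail` of an empty sequence is itself, so emptiness is just `seq == tail(seq)` — can be demonstrated on a toy type-level list. Everything here (`list`, `tail`, `is_empty`) is illustrative C++14, not Hana's API:

```cpp
#include <type_traits>

template <typename... Ts> struct list {};

// tail drops the first element; tail of an empty list is the empty list itself.
template <typename T, typename... Ts>
constexpr list<Ts...> tail(list<T, Ts...>) { return {}; }
constexpr list<> tail(list<>) { return {}; }

// Inferred emptiness: a sequence is empty iff taking its tail changes nothing.
template <typename Seq>
constexpr bool is_empty(Seq seq) {
    return std::is_same<Seq, decltype(tail(seq))>::value;
}

static_assert(is_empty(list<>{}), "empty list is empty");
static_assert(!is_empty(list<int, char>{}), "non-empty list is not empty");
```

Since the sequences are heterogeneous, the `seq == tail(seq)` comparison reduces to a type-identity check, which is why no `is_empty` primitive needs to be provided by the model.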
- Overall, I think the concepts could be simplified. They seem too complicated, and this leads to many surprises which seem not to make sense (such as using `Range` or `String` with `concat`, or using `tick::integral_constant`).
1. Concatenating ranges does not make sense. A Hana Range is a contiguous sequence of compile-time integers. What happens when you concatenate `make_range(0_c, 3_c)` with `make_range(6_c, 10_c)`? It's not contiguous anymore, so it's not a Range anymore.
Even though concat takes a Range, why can't it just return a tuple instead?
2. Concatenating strings makes complete sense, indeed. This could be handled very naturally by defining a `Monoid` model, but it was not done because I did not like using `+` for concatenating strings :-). I opened this issue [8] to try and find a proper resolution.
3. Tick's integral_constants can be handled as I explained in the gist at [2].
As a fundamental library, Hana was designed to be very general and extensible in ways I couldn't possibly foresee. Hence, I could have stuck with Fusion's concept hierarchy (minus iterators), but that would have been less general than what I was aiming for. Also, Hana is slightly biased towards functional programming, and it reflects in the concepts. If that is what you mean by "complicated", then I'd say this generality and power is a feature rather than a bug.
I would really like to know specifically which concepts you find too complicated or superfluous. There are definitely things that could be improved, but in general I am very content with the current hierarchy and I think this is one of Hana's strengths, to be frank.
I agree that using Fusion's style of concepts is a bad idea: the representation in the machine doesn't map to the representation in the compiler. I think part of it is my confusion between data types and concepts when reading the documentation. Also, it is more general, and the more I think about it, I don't think there is a simpler way that still supports compile-time lazy and infinite sequences while remaining efficient.
- Currently, none of the algorithms are constrained; instead the library uses `static_assert`, which I think is bad for a library that is targeting modern compilers.
People have mixed opinions about this. I personally think the last thing you want is for an overload to SFINAE-out because of some failure deep down the call chain, considering we're working with heterogeneous objects. I think the best way to go is to fail very fast and very explicitly with a nice static_assert message, which is what Hana tries very hard to do.
I also think the minority of people who would benefit from having SFINAE-friendly algorithms is largely outweighed by the majority of non-template-metaprogramming gurus (likely not reading this list) who would rather have a nice and helpful `static_assert` message explaining what they messed up.
Also, there's the problem that being SFINAE-friendly could hurt compile-time performance, because every time you call an algorithm we'd have to check whether it's going to compile. However, because we're working with dependent types, checking whether the algorithm compiles requires doing the algorithm itself, which is in general costly.
Templates are memoized by the compiler, so the algorithm isn't done twice.
We could however emulate this by using the Tag system. For example, `fold_left` could be defined as:
    template <typename Xs, typename State, typename F,
              typename = std::enable_if_t<models<Foldable, Xs>()>>
    constexpr decltype(auto) fold_left(Xs&& xs, State&& state, F&& f) {
        // ...
    }
This would give us an approximate SFINAE-friendliness, but like I said above, I think the best approach is to fail loud and fast.
It would fail loud, but not fast. Using substitution failure, the compiler stops substitution as soon as there is a failure, whereas with a static_assert it substitutes everything and then checks for failure. So using enable_if is always faster than static_assert. Also, as a side note, you should never use `enable_if_t`, as it makes it harder for the compiler to give a good diagnostic (a macro still works really well, though, if you don't mind macros).
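The practical difference between the two styles can be shown with a toy pair of functions (names here are illustrative). The constrained version drops out of overload resolution during substitution, so callers can even probe applicability with expression SFINAE; the asserted version always enters the overload set and fails loudly inside its body:

```cpp
#include <type_traits>

// Constrained: removed from the overload set for non-integral T.
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
constexpr T twice(T x) { return x + x; }

// Asserted: always a candidate; fails with a readable message in the body.
template <typename T>
constexpr T twice_asserted(T x) {
    static_assert(std::is_integral<T>::value,
                  "twice requires an integral type");
    return x + x;
}

// Because twice is constrained, callers can detect whether a call would
// compile, something the static_assert version cannot offer:
template <typename T>
constexpr auto can_twice(T x, int) -> decltype(twice(x), true) { return true; }
template <typename T>
constexpr bool can_twice(T, long) { return false; }
```

Here `can_twice(1, 0)` is `true` while `can_twice(1.5, 0)` is `false`; with `twice_asserted`, the only outcome for a `double` argument is a hard compilation error at the call site.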
- It would be nice if the use of variable templates were optional (and not used internally), since without inline variables they can lead to ODR violations and executable bloat.
Without variable templates, we would have to write `type<T>{}`, `int_<1>{}`, etc. all of the time instead of `type<T>` and `int_<1>`. Sure, that's just two characters, but considering you couldn't even rely on what `type<T>` is (if it were a type, since `decltype(type<T>)` is currently unspecified), we're really not gaining much. In short: no variable templates means a less usable library, without much benefit (see next paragraph).
How is it less usable? It seems like it would be more usable, since the library can now support compilers with no or flaky variable templates.
Regarding variable templates and ODR, I thought variable templates couldn't lead to ODR violations? I know global function objects (even constexpr) can lead to ODR violations, but I wasn't aware of the problem for variable templates. I would appreciate it if you could show me where the problem lies more specifically. Also, for reference, there's a defect report [9] related to global constexpr objects, and an issue tracking this problem here [10].
Finally, regarding executable bloat, we're talking about stateless constexpr objects here. At worst, we're talking 1 byte per object. At best (and most likely), we're talking about 0 bytes because of link time optimizations. Otherwise, I could also give internal linkage to the global objects and they would probably be optimized away by the compiler itself, without even requiring LTO. Am I dreaming?
The size of the symbol table is usually larger than 1 byte for binary formats.
Overall, I would like to see Hana compile on more compilers before it gets accepted into Boost (currently it doesn't even compile on my MacBook).
What compiler did you try to compile it with? Also, an important GCC bug that was preventing Hana from working properly on GCC was fixed a couple of days ago, so I'm going to try to finish the port ASAP and I'm fairly confident that it should work on GCC 5.2.
Using Apple's Clang 6, which corresponds to Clang 3.5 off of trunk. What is the bug preventing compilation on GCC 5.2?

Paul

Paul Fultz II <pfultz2 <at> yahoo.com> writes:
[...]
- `IntegralConstant` is very strange. In Hana, it's not a concept (even though it's capitalized), but rather a so-called "data type". Furthermore, because of this strangeness it doesn't interoperate with other IntegralConstants (such as those from Tick), even though all the operators are defined.
IntegralConstant is not a concept, that's true. The fact that it does not interoperate with Tick's IntegralConstants out-of-the-box has nothing to do with that, however. You must make your Tick integral constants a model of Hana's Constant concept for it to work. See the gist at [2] for how to do that.
Awesome, thanks. So all the concepts are explicit.
Actually, not __all__ of them. There are a few concepts that refine other concepts, and whose laws are strict enough to guarantee that only a single model of these refined concepts can exist. In these cases, it is safe to provide an automatic model of some concept. For example, this is the case of Sequence. It refines several concepts, like Functor for example. However, the laws are so strict that if you are a Sequence, there is a unique valid model of the Functor concept that you could (and must) define. In this case, the model of Functor is provided automatically. You can of course specialize the algorithm for performance purposes. If the model of Functor was not forced to be unique because of Sequence's laws, Hana would not make an arbitrary choice and you would have to define that model explicitly.
- The `.times` seems strange and should be a free function so it works with any IntegralConstant.
The idea of having a `.times` member function comes from Ruby [3]. I personally think it is expressive and a simple tool for a simple job. The syntax is also much cleaner than using a non-member function. Consider
    int_<10>.times([]{ std::cout << "foo" << std::endl; });
    times(int_<10>, []{ std::cout << "foo" << std::endl; });
However, as we've been discussing in this issue [4], I might add an equivalent non-member function.
If you find the chaining much cleaner, then perhaps you could make it a pipable function instead:
int_<10> | times([]{ std::cout << "foo" << std::endl; });
This way it could still apply to any IntegralConstant.
That's an interesting idea. However, Hana does not have a concept of pipable function, so that would need to be added somehow.
- In the section 'Taking control of SFINAE', it seems like the problem could be solved a lot more simply using `overload_linearly`.
First, `overload_linearly` is an implementation detail, since I've moved the Functional module out of the way to give me the possibility of using Fit in the future. However, your point is valid; I could also write the following instead:
    auto optionalToString = hana::overload_linearly(
        [](auto&& x) -> decltype(x.toString()) { return x.toString(); },
        [](auto&&) -> std::string { return "toString not defined"; }
    );
This approach is fine for the optionalToString function, which is rather simple. I wanted to show how to use Optional to control compile-time emptiness in complex cases, so I'll just expand this section or change the example to something that is really better solved using Optional.
Thanks for the heads up; you can refer to this issue [5] in the future.
Ok got it. Although, SFINAE is a pretty powerful compile-time Optional built into the language.
Sure, but a proper library-based Optional allows one to represent failures caused by more than invalid expressions, which I think is more generally useful. For example, you can define a safe division that returns `nothing` when you divide by zero. Using SFINAE for this seems like using a hammer to screw something. Also, Optional allows more sophisticated operations like

transform(opt, f) -> applies f to the optional value if it is there, and return just(the result) or nothing.
filter(opt, pred) -> keep the optional value if it is there and it satisfies the predicate

and many more. Composing optional values like this using SFINAE might be harder, IDK.
- Concepts refer to 'superclasses'; these should be listed either as refinements or under the requirements section (which seems to be missing). It would be nicer if the concepts were documented like they are at cppreference: http://en.cppreference.com/w/cpp/concept
This was fixed on develop. I now use the term "Refined concept" instead of "Superclass". Regarding concept requirements, they are listed in the "minimal complete definition" section of each concept. Then, semantic properties that must be satisfied are explained in the "laws" section.
Great.
- Concepts make no mention of minimum type requirements such as MoveConstructible.
I believe the right place to put this would be in the documentation of concrete models like `Tuple`, but not in the concepts (like `Sequence`). Hana's concepts operate at a slightly higher level and they do not really have a notion of storage. But I agree that it is necessary to document these requirements. Please refer to this issue [6] for status.
I was thinking more of when using algorithms, since tuple will take on the constructibility of its members.
So you mean like since `filter` does a copy of the sequence, the elements in the sequence should be copy/move-constructible, right? That makes sense. See this issue [1].
- Organization of the documentation could be better. It's nice showing the relevant algorithms when the user views a concept, but it would be better if all the algorithms could be viewed together.
I assume you are talking about the reference documentation and not the tutorial. I agree that it could be easier to look for algorithms. There are also other quirks I'd like to see gone. The problem is that Doxygen is pretty inflexible, and I'm already tweaking it quite heavily. I'm considering either using a different tool completely or making some changes in the organization of the reference.
Your more precise comment about viewing algorithms on their own page is already tracked by this issue [7].
Awesome. Have you thought about using a different documentation tool instead, like mkdocs or sphinx?
Yes, I'm currently considering switching away from Doxygen, though I don't know what I'd switch to. I have essentially only one requirement: that the documentation be written in the source code.
- For compile-time `Iterable` sequences (which are all you support right now), `is_empty` can be inferred, and should be optional.
How can it be inferred?
Well, there are several ways it could be formalised, but either `head` and `tail` do not exist for an empty sequence,
I don't want to use SFINAE to determine this. What if we want to support runtime Iterables like std::vector, that only know they're empty at runtime?
or if `tail` always returns an empty sequence even when empty, you just detect that `seq == tail(seq)`.
Well, `tail` will fail when called on an empty sequence. But let's assume this was not the case. You would then be required to implement `==` for your Iterable, which is much more complicated than implementing `is_empty`.
- Overall, I think the Concepts could be simplified. They seem too complicated, and this leads to many surprises which do not seem to make sense (such as using `Range` or `String` with `concat`, or using `tick::integral_constant`).
1. Concatenating ranges does not make sense. A Hana Range is a contiguous sequence of compile-time integers. What happens when you concatenate `make_range(0_c, 3_c)` with `make_range(6_c, 10_c)`? It's not contiguous anymore, so it's not a Range anymore.
Even though concat takes a Range, why can't it just return a tuple instead?
`concat`'s signature is M(T) x M(T) -> M(T), where M is any MonadPlus. Mathematically, this is similar to the operation of a Monoid, except it is universally quantified on T. It would really break the conceptual integrity (and your expectations about `concat`) for it to return a container of a different kind. It is much cleaner to explicitly perform the conversion by using

    concat(to<Tuple>(make_range(...)), to<Tuple>(make_range(...)))

and then there's no surprise that you get back a Tuple.
[...]
Also, there's the problem that being SFINAE-friendly could hurt compile-time performance, because every time you call an algorithm we'd have to check whether it's going to compile. However, because we're working with dependent types, checking whether the algorithm compiles requires doing the algorithm itself, which is in general costly.
Templates are memoized by the compiler, so the algorithm isn't done twice.
That's right. Screw my last argument, but I still think it would be more harmful than helpful. Also, if you need your algorithms to be constrained, you can simply use `is_a<...>` or `models<...>` from Hana to constrain it on your side.
We could however emulate this by using the Tag system. For example, `fold_left` could be defined as:
    template <typename Xs, typename State, typename F,
              typename = std::enable_if_t<models<Foldable, Xs>()>>
    constexpr decltype(auto) fold_left(Xs&& xs, State&& state, F&& f) {
        // ...
    }
This would give us an approximate SFINAE-friendliness, but like I said above I think the best approach is to fail loud and fast.
It would fail loud, but not fast. Using substitution failure, the compiler stops substitution as soon as there is a failure, whereas with a static_assert, it substitutes everything and then checks for failure. So using enable_if is always faster than static_assert.
What I meant by _fast_ is that Hana asserts in the interface methods, so that you get an error as soon as you call that method with wrong arguments. Also, the actual implementation is not called when the static_assert is triggered, to reduce the compiler spew.
Also, as a side note, you should never use `enable_if_t`, as it is harder for the compiler to give a good diagnostic (a macro still works really well, though, if you don't mind macros).
Good to know, thanks. Inside the implementation, I tend to use `std::enable_if<>` to avoid instantiating the `enable_if_t` template alias anyway.
- It would be nice if the use of variable templates were optional (and not used internally), since without inline variables, it can lead to ODR violations and executable bloat.
Without variable templates, we would have to write `type<T>{}`, `int_<1>{}`, etc.. all of the time instead of `type<T>` and `int_<1>`. Sure, that's just two characters, but considering you couldn't even rely on what is `type<T>` (if it were a type, since `decltype(type<T>)` is currently unspecified), we're really not gaining much. In short; no variable templates means a less usable library, without much benefits (see next paragraph).
How is it less usable? It seems like it would be more usable, since the library can now support compilers with no or flaky variable templates.
I simply mean that `int_<1>{}` is less pretty than `int_<1>`. It is longer by about 28% and most importantly it defeats the philosophy that we're manipulating objects, not types. At any rate, compilers other than Clang and GCC are probably missing far too many features (other than variable templates) for them to compile Hana. I don't think variable templates would change much to that.
Regarding variable templates and ODR, I thought variable templates couldn't lead to ODR violations? I know global function objects (even constexpr) can lead to ODR violations, but I wasn't aware about the problem for variable templates. I would appreciate if you could show me where the problem lies more specifically. Also, for reference, there's a defect report [9] related to global constexpr objects, and an issue tracking this problem here [10].
Finally, regarding executable bloat, we're talking about stateless constexpr objects here. At worst, we're talking 1 byte per object. At best (and most likely), we're talking about 0 bytes because of link time optimizations. Otherwise, I could also give internal linkage to the global objects and they would probably be optimized away by the compiler itself, without even requiring LTO. Am I dreaming?
The size of the symbol table is usually larger than 1 byte for binary formats.
But is it reasonable to think that they'll be optimized away?
Overall, I would like to see Hana compile on more compilers before it gets accepted into Boost (currently it doesn't even compile on my MacBook).
What compiler did you try to compile it with? Also, an important GCC bug that was preventing Hana from working properly on GCC was fixed a couple of days ago, so I'm going to try to finish the port ASAP and I'm fairly confident that it should work on GCC 5.2.
Using Apple's clang 6, which corresponds to clang 3.5 off of the trunk.
So you're with XCode < 3.6, right? This is not supported, unfortunately. I don't know when Apple branched off clang 3.5, or what they did to it, but it happily explodes on my computer too.
What is the bug preventing compilation on gcc 5.2?
There are several of them. An important one was [2], which was recently fixed. I'll continue working on the port as time permits.

Regards,
Louis

[1]: https://github.com/ldionne/hana/issues/125
[2]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65719

If you find the chaining much cleaner, then perhaps you could make it a pipable function instead:
int_<10> | times([]{ std::cout << "foo" << std::endl; });
This way it could still apply to any IntegralConstant.
That's an interesting idea. However, Hana does not have a concept of pipable function, so that would need to be added somehow.
Well, if you use Fit in the future, then Fit already provides the mechanism for that.
Sure, but a proper library-based Optional allows one to represent failures caused by more than invalid expressions, which I think is more generally useful. For example, you can define a safe division that returns `nothing` when you divide by zero. Using SFINAE for this seems like using a hammer to screw something. Also, Optional allows more sophisticated operations like
transform(opt, f) -> applies f to the optional value if it is there, and return just(the result) or nothing.
filter(opt, pred) -> keep the optional value if it is there and it satisfies the predicate
and many more. Composing optional values like this using SFINAE might be harder, IDK.
I don't see why it would be hard. Plus, there is a compile-time performance benefit, since the compiler will stop at the first substitution failure.
- Concepts make no mention of minimum type requirements such as MoveConstructible.
I believe the right place to put this would be in the documentation of concrete models like `Tuple`, but not in the concepts (like `Sequence`). Hana's concepts operate at a slightly higher level and they do not really have a notion of storage. But I agree that it is necessary to document these requirements. Please refer to this issue [6] for status.
I was thinking more of when using algorithms, since tuple will take on the constructibility of its members.
So you mean like since `filter` does a copy of the sequence, the elements in the sequence should be copy/move-constructible, right? That makes sense. See this issue [1].
Yes.
Awesome. Have you thought about using a different documentation tool instead, like mkdocs or sphinx?
Yes, I'm currently considering switching away from Doxygen, though I don't know what I'd switch to. I have essentially only one requirement: that the documentation be written in the source code.
Well, I use mkdocs for Fit, and I write my documentation in the source code, but I have a script that slurps them up. Perhaps mkdocs could be extended to do this automatically.
Well, there are several ways it could be formalised, but either `head` and `tail` do not exist for an empty sequence,
I don't want to use SFINAE to determine this. What if we want to support runtime Iterables like std::vector, that only know they're empty at runtime?
Then `is_empty` would be required for runtime sequences.
or if `tail` always returns an empty sequence even when empty, you just detect that `seq == tail(seq)`.
Well, `tail` will fail when called on an empty sequence. But let's assume this was not the case. You would then be required to implement `==` for your Iterable, which is much more complicated than implementing `is_empty`.
Well you could simply rely on `std::is_same`. At least, this only applies to compile-time sequences, not runtime sequences.
1. Concatenating ranges does not make sense. A Hana Range is a contiguous sequence of compile-time integers. What happens when you concatenate `make_range(0_c, 3_c)` with `make_range(6_c, 10_c)`? It's not contiguous anymore, so it's not a Range anymore.
Even though concat takes a Range, why can't it just return a tuple instead?
`concat`'s signature is
M(T) x M(T) -> M(T)
where M is any MonadPlus. Mathematically, this is similar to the operation of a Monoid, except it is universally quantified on T. It would really break the conceptual integrity
How does it break the conceptual integrity? Isn't a Tuple a MonadPlus as well?
(and your expectations abou `concat`)
Well, I find not being able to call `concat` on Range more surprising than it returning a Tuple instead of a Range.
How is it less usable? It seems like it would be more usable, since the library can now support compilers with no or flaky variable templates.
I simply mean that `int_<1>{}` is less pretty than `int_<1>`. It is longer by about 28% and most importantly it defeats the philosophy that we're manipulating objects, not types.
I don't see how it breaks the philosophy. `type<T>()` is obviously an object, and it looks even more so like an object than a variable template. It is only two more characters, and is simpler and cleaner than using the variable templates.
At any rate, compilers other than Clang and GCC are probably missing far too many features (other than variable templates) for them to compile Hana. I don't think variable templates would change much to that.
Well, it might be possible to support compilers such as gcc 4.9 or clang 3.4. Obviously, Visual Studio will be out of the question for at least another half a decade.
Using Apple's clang 6, which corresponds to clang 3.5 off of the trunk.
So you're with XCode < 3.6, right? This is not supported, unfortunately. I don't know when Apple branched off clang 3.5, or what they did to it, but it happily explodes on my computer too.
Well, it's Xcode 6.2.

Paul

Paul Fultz II <pfultz2 <at> yahoo.com> writes:
If you find the chaining much cleaner, then perhaps you could make it a pipable function instead:
int_<10> | times([]{ std::cout << "foo" << std::endl; });
This way it could still apply to any IntegralConstant.
That's an interesting idea. However, Hana does not have a concept of pipable function, so that would need to be added somehow.
Well, if you use Fit in the future, then Fit already provides the mechanism for that.
Sure, but a proper library-based Optional allows one to represent failures caused by more than invalid expressions, which I think is more generally useful. For example, you can define a safe division that returns `nothing` when you divide by zero. Using SFINAE for this seems like using a hammer to screw something. Also, Optional allows more sophisticated operations like
transform(opt, f) -> applies f to the optional value if it is there, and return just(the result) or nothing.
filter(opt, pred) -> keep the optional value if it is there and it satisfies the predicate
and many more. Composing optional values like this using SFINAE might be harder, IDK.
I don't see why it would be hard. Plus, there is a compile-time performance benefit, since the compiler will stop at the first substitution failure.
I don't know, I just couldn't think of a way to easily implement something like the compile-time calculator at [1] using raw SFINAE. I think it is easier to wrap SFINAE behind an object and to use actual functions to manipulate this object rather than operating at the SFINAE-level directly, even though this surely gives a compile-time benefit. I'm not saying it's impossible, I'm just saying it needs a lot more creativity than using an Optional object, especially since C++ programmers will be familiar with the runtime concept when std::optional gets in.
Awesome. Have you thought about using a different documentation tool instead, like mkdocs or sphinx?
Yes, I'm currently considering switching away from Doxygen, though I don't know what I'd switch to. I have essentially only one requirement: that the documentation be written in the source code.
Well I use mkdocs for Fit, and I write my documentation in the source code, but I have a script that slurps them up. Perhaps, mkdocs could be extended to do this automatically.
I looked at your setup and I find it quite nice. However, I'm worried about a couple of features I need. For example, is it possible to include code snippets taken from actual files for the examples? Also, is there some cross-referencing going on in the source code, so links are generated automatically? Overall, I think the idea of using a template to generate a static site like they do is the best idea and it's the future, but I'm just worried the project might not provide enough features at the moment?
Well, there are several ways it could be formalised, but either `head` and `tail` do not exist for an empty sequence,
I don't want to use SFINAE to determine this. What if we want to support runtime Iterables like std::vector, that only know they're empty at runtime?
Then `is_empty` would be required for runtime sequences.
or if `tail` always returns an empty sequence even when empty, you just detect that `seq == tail(seq)`.
Well, `tail` will fail when called on an empty sequence. But let's assume this was not the case. You would then be required to implement `==` for your Iterable, which is much more complicated than implementing `is_empty`.
Well you could simply rely on `std::is_same`. At least, this only applies to compile-time sequences, not runtime sequences.
make_tuple(int_<1>) compares equal to make_tuple(long_<1>), even though they have different types. So std::is_same wouldn't be general enough.

Overall, I think asking for is_empty to be specified explicitly is very natural, and it should also be very easy to do. Say you're writing a lazy stream or some other sequence generating stuff on the fly. At the most basic level, there are three things you need to provide to your users:
- A way to get the first element of the stream, i.e. the current element. That's the `head` function.
- A way to advance the stream by one position, i.e. to get to the next element. That's the `tail` function.
- A way to know when the stream is done producing values. That's the `is_empty` function.
1. Concatenating ranges does not make sense. A Hana Range is a contiguous sequence of compile-time integers. What happens when you concatenate `make_range(0_c, 3_c)` with `make_range(6_c, 10_c)`? It's not contiguous anymore, so it's not a Range anymore.
Even though concat takes a Range, why can't it just return a tuple instead?
`concat`'s signature is
M(T) x M(T) -> M(T)
where M is any MonadPlus. Mathematically, this is similar to the operation of a Monoid, except it is universally quantified on T. It would really break the conceptual integrity
How does it break the conceptual integrity? Isn't a Tuple a MonadPlus as well?
The M has to be a single thing. It can't be a Range and a Tuple at the same time. In other words,

    Range(T) x Range(T) -> Tuple(T)

does not match the signature

    M(T) x M(T) -> M(T)

but it does match

    A(T) x A(T) -> M(T)
How is it less usable? It seems like it would be more usable, since the library can now support compilers with no or flaky variable templates.
I simply mean that `int_<1>{}` is less pretty than `int_<1>`. It is longer by about 28% and most importantly it defeats the philosophy that we're manipulating objects, not types.
I don't see how it breaks the philosophy. `type<T>()` is obviously an object, and it looks even more so like an object than a variable template. It is only two more characters, and is simpler and cleaner than using the variable templates.
I really don't think it's cleaner, but that's subjective.
At any rate, compilers other than Clang and GCC are probably missing far too many features (other than variable templates) for them to compile Hana. I don't think variable templates would change much to that.
Well, it might be possible to support compilers such as gcc 4.9 or clang 3.4. Obviously, Visual Studio will be out of the question for at least another half a decade.
I have no interest in supporting non-C++14 compilers. Hana is a cutting edge library, and much of its purpose would be lost if it were to include workarounds for older compilers. We're in 2015, and compilers will catch up. Also, Hana proposes a new paradigm. We don't know how to fully take advantage of it, and we don't know what its limits are yet. A lot of time will pass before it replaces Fusion and MPL, and it might also never happen. I still think Hana will stay an "experimental" library used mostly by hardcore C++ programmers for at least a couple of months. These programmers are not stuck on Clang 3.4 anyway, so supporting it adds little value IMHO. Finally, there are usages of generic lambdas and generalized constexpr that would most likely make older compilers scream. I don't think variable templates are the biggest anti-portability factor in Hana.
Using Apple's clang 6, which corresponds to clang 3.5 off of the trunk.
So you're with XCode < 3.6, right? This is not supported, unfortunately. I don't know when Apple branched off clang 3.5, or what they did to it, but it happily explodes on my computer too.
Well, it's Xcode 6.2.
Sorry, I meant to ask if you were using Xcode < 6.3 (not 3.6), which you are. As documented in the README [2], it is unfortunately unsupported. However, you're on OS X and you have Homebrew (I __know__ you do :-):

    brew tap homebrew/versions
    brew install llvm36

It's very quick to install because it's a bottle, so you don't have to compile it yourself. And then Hana will be fully working.

Regards,
Louis

[1]: https://goo.gl/pm6fKb
[2]: https://github.com/ldionne/hana#prerequisites-and-installation

I don't know, I just couldn't think of a way to easily implement something like the compile-time calculator at [1] using raw SFINAE. I think it is easier to wrap SFINAE behind an object and to use actual functions to manipulate this object rather than operating at the SFINAE-level directly, even though this surely gives a compile-time benefit. I'm not saying it's impossible, I'm just saying it needs a lot more creativity than using an Optional object, especially since C++ programmers will be familiar with the runtime concept when std::optional gets in.
Like this here: https://gist.github.com/pfultz2/bfba2bfdca7dec26273c
I looked at your setup and I find it quite nice. However, I'm worried about a couple of features I need. For example, it is possible to include code snippets taken from actual files for the examples?
No, that would need to be an extra step, and it's something I want to add to my setup (it could be in the step that pulls from the source).
Also, is there some cross-referencing going on in the source code, so links are generated automatically?
I am not quite sure what you mean, but you can cross reference other pages.
Overall, I think the idea of using a template to generate a static site like they do is the best idea and it's the future, but I'm just worried the project might not provide enough features at the moment?
This is true, and maybe an area where sphinx or some other tool (like Jekyll) might be better.
make_tuple(int_<1>) compares equal to make_tuple(long_<1>), even though they have different types. So std::is_same wouldn't be general enough.
That doesn't apply here. You just define `tail` to return itself when empty.

Paul

Paul Fultz II <pfultz2 <at> yahoo.com> writes:
I don't know, I just couldn't think of a way to easily implement something like the compile-time calculator at [1] using raw SFINAE. I think it is easier to wrap SFINAE behind an object and to use actual functions to manipulate this object rather than operating at the SFINAE-level directly, even though this surely gives a compile-time benefit. I'm not saying it's impossible, I'm just saying it needs a lot more creativity than using an Optional object, especially since C++ programmers will be familiar with the runtime concept when std::optional gets in.
Like this here: https://gist.github.com/pfultz2/bfba2bfdca7dec26273c
What happens if you need multiple statements in your functions? You can't use auto-deduced return type with your approach, which is a serious limitation for doing type-level computations inside functions. I'm not saying your approach is wrong, and your implementation of the above is very clever, but clearly defining an Optional object which can, in some specific cases, represent SFINAE errors, is not wrong either.
[...]
Also, is there some cross-referencing going on in the source code, so links are generated automatically?
I am not quite sure what you mean, but you can cross reference other pages.
Sorry, I meant that names in the code should generate links to their documentation.
Overall, I think the idea of using a template to generate a static site like they do is the best idea and it's the future, but I'm just worried the project might not provide enough features at the moment?
This is true, and maybe an area where sphinx or some other tool (like Jekyll) might be better.
What I want is a way to parse C++ code and build some kind of object that I can access from a Liquid template (I think that's what they use for Jekyll). Anyway, it will be built one day.
make_tuple(int_<1>) compares equal to make_tuple(long_<1>), even though they have different types. So std::is_same wouldn't be general enough.
That doesn't apply here. You just define `tail` to return itself when empty.
I don't understand what you mean. My statement was simply that we couldn't implement `equal` as `std::is_same` for compile-time sequences, because sometimes two types are different but they should still compare `equal`. I used make_tuple(int_<1>) and make_tuple(long_<1>) as an example of two such sequences whose types are different, but that should still compare equal.

Regards,
Louis

I don't know, I just couldn't think of a way to easily implement something like the compile-time calculator at [1] using raw SFINAE. I think it is easier to wrap SFINAE behind an object and to use actual functions to manipulate this object rather than operating at the SFINAE-level directly, even though this surely gives a compile-time benefit. I'm not saying it's impossible, I'm just saying it needs a lot more creativity than using an Optional object, especially since C++ programmers will be familiar with the runtime concept when std::optional gets in.
Like this here: https://gist.github.com/pfultz2/bfba2bfdca7dec26273c
What happens if you need multiple statements in your functions? You can't use auto-deduced return type with your approach, which is a serious limitation for doing type-level computations inside functions. I'm not saying your approach is wrong, and your implementation of the above is very clever, but clearly defining an Optional object which can, in some specific cases, represent SFINAE errors, is not wrong either.
Yes, multiple statements are a limitation of using SFINAE everywhere. And, by the way, I am not trying to imply that Optional should be taken out of the library. It most definitely has its uses.
Overall, I think the idea of using a template to generate a static site like they do is the best idea and it's the future, but I'm just worried the project might not provide enough features at the moment?
This is true, and maybe an area where spinx or some other tool(like jekyll) might be better.
What I want is a way to parse C++ code and build some kind of object that I can access from a Liquid template (I think that's what they use for Jekyll). Anyway, it will be built one day.
That sounds like a pretty cool documentation tool; unfortunately, it doesn't exist today.
make_tuple(int_<1>) compares equal to make_tuple(long_<1>), even though they have different types. So std::is_same wouldn't be general enough.
That doesn't apply here. You just define `tail` to return itself when empty.
I don't understand what you mean. My statement was simply that we couldn't implement `equal` as `std::is_same` for compile-time sequences, because sometimes two types are different but they should still compare `equal`. I used make_tuple(int_<1>) and make_tuple(long_<1>) as an example of two such sequences whose types are different, but that should still compare equal.
Because `tail(make_tuple(int_<1>))` will never be `make_tuple(long_<1>)`. What matters is that `tail(make_tuple())` returns `make_tuple()` (that is, if you decide to formalize it this way).

Paul

Le 14/06/15 21:19, Louis Dionne a écrit : > Paul Fultz II <pfultz2 <at> yahoo.com> writes: > > >> - `fold` and `reverse_fold` should be preferred over `fold_right` and >> ``fold_left`. This is more familiar to C++ programmers. > First, to my knowledge, the only libraries that even define fold and/or > reverse_fold are Fusion and MPL, so it's not like there was an undeniable > precedent for using those names instead of something else in C++. But > even then, `fold` and `reverse_fold` functions are provided for consistency > with those libraries, so I really don't see what's the problem. If you > prefer those names, you can use them and they have exactly the same > semantics as their Fusion counterpart. Meta uses it also. I'm wondering if reverse_fold shouldn't accept the same function signature as fold. It shouldn't be the case for fold_left and fold_right, as the parameters of the function to apply are exchanged. fold,fold.left:F(T)×S×(S×T→S)→S fold.right:F(T)×S×(T×S→S)→S BTW, it would be nice if reverse_fold (and any function) had its own signature. reverse_fold:F(T)×S×(S×T→S)→S > >> - Concepts are capitalized, however, models of a concept should not be >> capitalized(such as `IntregralConstant`, `Either`, `Lazy`, `Optional`, >> `Tuple`, etc) > `IntegralConstant`, `Tuple`, etc... are tags used for tag dispatching, > like Fusion's `vector_tag` & friends. I can't use a non-capitalized > version as-is, because it's going to clash with `integral_constant`. > Also, I find that using something like `tuple_tag` is uglier than using > `Tuple`. Consider for example > > make<tuple_tag>(xs...) > to<tuple_tag>(xs) > > make<Tuple>(xs...) > to<Tuple>(xs) I would prefer of Hana uses only CamelCase for C++17/20 Concepts or for C++14 type requirements. This will be confusing for more than one. You have also the option to define make having a class template as parameter (See [A]). make<_tuple>(xs...) 
    to<_tuple>(xs)

On Expected, I defined it with a default parameter and specialized it:

    template <class T = holder, class E = exception_ptr> struct expected;

This allowed me to use make<expected<>>(xs). I don't know how this trick could be applied to variadic templates.

> I find that using `Tuple` just leads to prettier code. Also, since Hana
> does not specify the type of all of its containers, we refer to them by their
> tag instead of their (unspecified) type. So for example, I talk of a Hana
> `Range`, not a Hana `range`, because the latter is not a type. If I were to
> use the name `range_tag` instead of `Range`, I couldn't do that as easily.
> Fusion does not have this problem because all of its containers have well
> specified types, so you can always talk about a `fusion::vector` and it is
> always understood that this includes `fusion::vector3`, but Hana does not
> have that.

I would also prefer that Hana define its concrete types, but I think this battle is lost.

>> - IntegralConstant is very strange. In Hana, it's not a concept (even though
>> it's capitalized), but rather a so-called "data type". Furthermore, because
>> of this strangeness it doesn't interoperate with other
>> IntegralConstants (such as from Tick) even though all the operators are
>> defined.
> IntegralConstant is not a concept, that's true. The fact that it does not
> interoperate with Tick's IntegralConstants out-of-the-box has nothing to
> do with that, however. You must make your Tick integral constants a model
> of Hana's Constant concept for it to work. See the gist at [2] for how to
> do that.

This is central to understanding Hana: Hana has no automatic mapping. You must state explicitly that a type is a model of a Concept, and I like that. This doesn't follow the current trend of C++17/20 Concepts, however. A type is a model of a Concept if it has an explicit specialization of its associated mapping structure.
The Constant concept could have an mcd that defaults to the nested members, which would make your mapping easier.

I would prefer it if Hana used a different name for its Concepts. In addition, Hana Concepts have an associated class, whereas C++17/20 Concepts are predicates. I would reserve CamelCase for C++ Concepts and lowercase for the mapping struct:

    struct applicative {
        template <typename A> struct transform_impl;
    };

    template <typename F>
    concept bool Applicative = requires ...;

>> - Concepts refer to 'superclasses'; these should be listed either as
>> refinements or under a requirements section (which seems to be missing).
>> It would be nicer if the concepts were documented like they are at
>> cppreference: http://en.cppreference.com/w/cpp/concept
> This was fixed on develop. I now use the term "Refined concept" instead of
> "Superclass". Regarding concept requirements, they are listed in the
> "minimal complete definition" section of each concept. Then, semantic
> properties that must be satisfied are explained in the "laws" section.

I agree that following the same formalism as the C++ standard would be nice. An alternative would be to use C++ TS Concepts for describing them :)

>> - Concepts make no mention of minimum type requirements such as
>> MoveConstructible.
> I believe the right place to put this would be in the documentation of
> concrete models like `Tuple`, but not in the concepts (like `Sequence`).
> Hana's concepts operate at a slightly higher level and they do not really
> have a notion of storage. But I agree that it is necessary to document
> these requirements. Please refer to this issue [6] for status.

I don't know. The constraints must be stated where needed. There is no reason a concept shouldn't require that the underlying type is a model of some other Concept if this is needed. I believe that all the concepts make use of MoveConstructible types, or am I wrong?
I agree that the documentation is not precise enough with respect to this point.

>> - Organization of documentation could be better. It's nice showing the
>> relevant algorithms when the user views a concept, but it would be better
>> if all the algorithms could be viewed together.
> I assume you are talking about the reference documentation and not the
> tutorial. I agree that it could be easier to look for algorithms. There
> are also other quirks I'd like to see gone. The problem is that Doxygen
> is pretty inflexible, and I'm already tweaking it quite heavily. I'm
> considering either using a different tool completely or making some
> changes in the organization of the reference.
>
> Your more precise comment about viewing algorithms on their own page is
> already tracked by this issue [7].

I'm not a fan of having all the algorithms at the same level. This forces us to choose different names. Haskell has this constraint and uses prefixes (f, m, ...), but we have namespaces in C++. I would move all the algorithms into a namespace associated with the concept. This would give us much more flexibility with respect to the names. I know already that no one shares this design, but I wanted to share it again:

    namespace monoid {
        struct mapping // monoid::mapping plays the current role of Monoid
        {
            template <class T> struct instance {...}; // global mapping
            // mcd
            // as_additive       (T; +; 0)
            // as_multiplicative (T; *; 1)
            // as_sequence       (S(T); append; T{})
        };
        // operations
        // ...
    }

This doesn't mean that we cannot have an index of all the algorithms.

>> - Overall, I think the Concepts could be simplified. They seem to be too
>> complicated, and it leads to many surprises which seem to not make
>> sense (such as using `Range` or `String` with `concat` or using
>> `tick::integral_constant`).
> 1. Concatenating ranges does not make sense. A Hana Range is a contiguous
>    sequence of compile-time integers. What happens when you concatenate
>    `make_range(0_c, 3_c)` with `make_range(6_c, 10_c)`?
>    It's not contiguous anymore, so it's not a Range anymore.

Concat could be defined on a DiscontinuousRange, but Hana doesn't have this Concept.

> 2. Concatenating strings makes complete sense, indeed. This could be handled
>    very naturally by defining a `Monoid` model, but it was not done because
>    I did not like using `+` for concatenating strings :-). I opened this
>    issue [8] to try and find a proper resolution.

I missed the fact that Monoid introduces + for plus. The operator+ must be documented and must appear in the Monoid folder. It wouldn't be weird to use + for concatenating strings, as std::string already provides operator+(). It would be much weirder to use zero/plus/+ with a monoid (int, *, 1).

This is the problem of naming the functions associated with a Concept. Haskell uses mappend and mempty; these names come from the List Monoid, and I don't like them either. IMHO, the names of the Monoid operations must be independent of the domain. A monoid is a triplet (T, op, neutral), where op is a binary operation on T and neutral is the neutral element of T with respect to this operation. What is wrong then with monoid::op and monoid::neutral, instead of the Hana globals plus and zero? Too verbose? Having these names in a namespace lets the user make the choice. I would not provide any operators for the Monoid Concept.

> 3. Tick's integral_constants can be handled as I explained in the
>    gist at [2].
>
> As a fundamental library, Hana was designed to be very general and extensible
> in ways I couldn't possibly foresee. Hence, I could have stuck with Fusion's
> concept hierarchy (minus iterators), but that would have been less general
> than what I was aiming for. Also, Hana is slightly biased towards functional
> programming, and it reflects in the concepts. If that is what you mean by
> "complicated", then I'd say this generality and power is a feature rather
> than a bug.
>
> I would really like to know specifically which concepts you find too
> complicated or superfluous.
> There are definitely things that could be improved, but in general I am
> very content with the current hierarchy and I think this is one of Hana's
> strengths, to be frank.

With the previous Hana version we had structs defining the different Minimal Complete Definitions (mcd). One possible mcd would be one that just forwards using some syntactical convention. Doing the mapping for this kind of type would then consist only in making the mapping inherit from this mcd.

>> - It would be nice if the use of variable templates were optional (and
>> not used internally), since without inline variables, it can lead to ODR
>> violations and executable bloat.
> Without variable templates, we would have to write `type<T>{}`, `int_<1>{}`,
> etc... all of the time instead of `type<T>` and `int_<1>`. Sure, that's just
> two characters, but considering you couldn't even rely on what `type<T>` is
> (if it were a type, since `decltype(type<T>)` is currently unspecified),
> we're really not gaining much. In short: no variable templates means a less
> usable library, without much benefit (see next paragraph).

I'm not aware of ODR issues with variable templates. I would be interested in a pointer.

> Regarding variable templates and ODR, I thought variable templates couldn't
> lead to ODR violations? I know global function objects (even constexpr) can
> lead to ODR violations, but I wasn't aware of the problem for variable
> templates. I would appreciate it if you could show me where the problem lies
> more specifically. Also, for reference, there's a defect report [9] related
> to global constexpr objects, and an issue tracking this problem here [10].
>
> Finally, regarding executable bloat, we're talking about stateless constexpr
> objects here. At worst, we're talking 1 byte per object. At best (and most
> likely), we're talking about 0 bytes because of link-time optimizations.
> Otherwise, I could also give internal linkage to the global objects and they
> would probably be optimized away by the compiler itself, without even
> requiring LTO. Am I dreaming?

The best is to measure it ;-) Are there any measurements of the same program using Hana and Meta?

>> Overall, I would like to see Hana compile on more compilers before it gets
>> accepted into boost (currently it doesn't even compile on my macbook).

I would like more compilers to be able to run Hana programs without problems, but this is the compilers' problem. Please, let Hana make use of as many features as the C++14 language provides that are useful for the library. I would even accept some extensions that, while not already in the standard or in a TS, have an ongoing proposal. Of course, these should be provided conditionally. Compile-time string literals, for example.

> [1]: https://github.com/ldionne/hana/issues/114
> [2]: https://gist.github.com/ldionne/32f61a7661d219ca834d#file-main-cpp
> [3]: http://ruby-doc.org/core-1.9.3/Integer.html#method-i-times
> [4]: https://github.com/ldionne/hana/issues/100
> [5]: https://github.com/ldionne/hana/issues/115
> [6]: https://github.com/ldionne/hana/issues/116
> [7]: https://github.com/ldionne/hana/issues/82
> [8]: https://github.com/ldionne/hana/issues/117
> [9]: http://www.open-std.org/JTC1/SC22/WG21/docs/cwg_active.html#2104
> [10]: https://github.com/ldionne/hana/issues/76
>
> [A]: https://github.com/viboes/std-make/tree/master/doc/proposal/factories

Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
Le 14/06/15 21:19, Louis Dionne a écrit :
Paul Fultz II <pfultz2 <at> yahoo.com> writes:
- `fold` and `reverse_fold` should be preferred over `fold_right` and `fold_left`. This is more familiar to C++ programmers. First, to my knowledge, the only libraries that even define fold and/or reverse_fold are Fusion and MPL, so it's not like there was an undeniable precedent for using those names instead of something else in C++. But even then, `fold` and `reverse_fold` functions are provided for consistency with those libraries, so I really don't see what's the problem. If you prefer those names, you can use them and they have exactly the same semantics as their Fusion counterparts. Meta uses them also. I'm wondering whether reverse_fold shouldn't accept the same function signature as fold. That can't be the case for fold_left and fold_right, as the parameters of the function to apply are exchanged.
reverse_fold has the same signature as fold. It does exactly what Fusion's reverse_fold does.
fold, fold.left : F(T) × S × (S×T → S) → S
fold.right      : F(T) × S × (T×S → S) → S

BTW, it would be nice if reverse_fold (and any function) had its own documented signature:

    reverse_fold : F(T) × S × (S×T → S) → S
Good idea; I will document the signature of reverse_fold. See [1].
- Concepts are capitalized; however, models of a concept should not be capitalized (such as `IntegralConstant`, `Either`, `Lazy`, `Optional`, `Tuple`, etc.) `IntegralConstant`, `Tuple`, etc... are tags used for tag dispatching, like Fusion's `vector_tag` & friends. I can't use a non-capitalized version as-is, because it's going to clash with `integral_constant`. Also, I find that using something like `tuple_tag` is uglier than using `Tuple`. Consider for example
make<tuple_tag>(xs...) to<tuple_tag>(xs)
make<Tuple>(xs...) to<Tuple>(xs) I would prefer it if Hana used CamelCase only for C++17/20 Concepts or for C++14 type requirements. Anything else will be confusing for more than one reader.
You also have the option of defining make so that it takes a class template as a parameter (see [A]).
make<_tuple>(xs...) to<_tuple>(xs)
In Hana, `make` is a variable template and `make<...>` is a function object. Unfortunately, this means that we have to make a choice between accepting a type template parameter or a template template parameter, but not both.
[...]
I would also prefer that Hana define its concrete types, but I think this battle is lost.
It isn't a lost battle. I would also prefer to define my concrete types, believe me, but I'm not sure how to do it without screwing users up. There are also some compile-time issues with specifying concrete types in some cases. For example, say I specify the concrete type of a `Set` as `_set<...>`, and the concrete type of `int_<i>` as `_int<i>`. Then, you're allowed to write the following (or are you?): auto xs = hana::make_set(int_<1>, int_<2>, int_<3>); hana::_set<_int<3>, _int<2>, _int<1>> ys = xs; Should this work? Well, sort of, because the order of elements inside a Set is unspecified, so any permutation of hana::_set<int_<1>, int_<2>, int_<3>> should be a valid receiver type for `xs`. But this seemingly naive assignment bears a considerable compile-time cost, since you have to assign each corresponding element from `xs` into `ys`, element by element. It's just a can of worms, and I'm sure there's a right way to open it but for now I have decided to leave it closed.
- IntegralConstant is very strange. In Hana, it's not a concept (even though it's capitalized), but rather a so-called "data type". Furthermore, because of this strangeness it doesn't interoperate with other IntegralConstants (such as from Tick) even though all the operators are defined. IntegralConstant is not a concept, that's true. The fact that it does not interoperate with Tick's IntegralConstants out-of-the-box has nothing to do with that, however. You must make your Tick integral constants a model of Hana's Constant concept for it to work. See the gist at [2] for how to do that. This is central to understanding Hana: Hana has no automatic mapping. You must state explicitly that a type is a model of a Concept, and I like that. This doesn't follow the current trend of C++17/20 Concepts, however.
A type is a model of a Concept if it has an explicit specialization of its associated mapping structure.
The Constant concept could have an mcd that defaults to the nested members, which would make your mapping easier.
You're right, I've been thinking about that. This prompts me to open this issue [2].
I would prefer it if Hana used a different name for its Concepts. In addition, Hana Concepts have an associated class, whereas C++17/20 Concepts are predicates. I would reserve CamelCase for C++ Concepts and lowercase for the mapping struct,
struct applicative { template <typename A> struct transform_impl; };
template <typename F> concept bool Applicative = requires ....;
I think it would be better to wait until we _actually_ have Concepts in the language before trying to emulate them too hard. I think it is good that Hana does not pretend to have Concepts (in the Concepts-lite sense). Doing otherwise could be misleading.
[...]
- Concepts make no mention of minimum type requirements such as MoveConstructible. I believe the right place to put this would be in the documentation of concrete models like `Tuple`, but not in the concepts (like `Sequence`). Hana's concepts operate at a slightly higher level and they do not really have a notion of storage. But I agree that it is necessary to document these requirements. Please refer to this issue [6] for status.
I don't know. The constraints must be stated where needed. There is no reason a concept shouldn't require that the underlying type is a model of some other Concept if this is needed. I believe that all the concepts make use of MoveConstructible types, or am I wrong? I agree that the documentation is not precise enough with respect to this point.
I'm not saying that e.g. Iterable _should not_ document these constraints, I'm just saying it does not, at the moment, operate at such a low level. For example, an infinite stream generating the Fibonacci sequence could be a model of Iterable, but there is no notion of storage in this case. However, like I said, I agree that these requirements must be documented where they exist.
[...]
This doesn't mean that we cannot have an index of all the algorithms.
An index of all the algorithms will be done.
[...]
2. Concatenating strings makes complete sense, indeed. This could be handled very naturally by defining a `Monoid` model, but it was not done because I did not like using `+` for concatenating strings. I opened this issue [8] to try and find a proper resolution. I missed the fact that Monoid introduces + for plus. The operator+ must be documented and must appear in the Monoid folder.
It wouldn't be weird to use + for concatenating strings, as std::string already provides operator+(). It would be much weirder to use zero/plus/+ with a monoid (int, *, 1).
This is the problem of naming the functions associated with a Concept.
Haskell uses mappend and mempty; these names come from the List Monoid, and I don't like them either. IMHO, the names of the Monoid operations must be independent of the domain. A monoid is a triplet (T, op, neutral), where op is a binary operation on T and neutral is the neutral element of T with respect to this operation. What is wrong then with monoid::op and monoid::neutral, instead of the Hana globals plus and zero? Too verbose? Having these names in a namespace lets the user make the choice.
It becomes wrong when you have more complex type classes. What would you call the Ring's operation and identity? And what would you call the operations of a Monad? eta and mu, as the mathematicians do? No, I think we have to put actual names on things at some point, even if that means losing some generality.
I would not provide any operators for the Monoid Concept.
I am thinking about dissociating the operators from the concepts for technical reasons; see [3]. Instead, operators would be handled for each data type.
Regarding variable templates and ODR, I thought variable templates couldn't lead to ODR violations? I know global function objects (even constexpr) can lead to ODR violations, but I wasn't aware of the problem for variable templates. I would appreciate it if you could show me where the problem lies more specifically. Also, for reference, there's a defect report [9] related to global constexpr objects, and an issue tracking this problem here [10].
Finally, regarding executable bloat, we're talking about stateless constexpr objects here. At worst, we're talking 1 byte per object. At best (and most likely), we're talking about 0 bytes because of link-time optimizations. Otherwise, I could also give internal linkage to the global objects and they would probably be optimized away by the compiler itself, without even requiring LTO. Am I dreaming? The best is to measure it ;-) Are there any measurements of the same program using Hana and Meta?
It goes without saying that Meta will have 0 runtime overhead, since it's purely at the type level. The only question is whether Hana can do just as well. From my micro-benchmarks, I know Hana can do just as well in those cases. However, it is hard to predict exactly what will happen for non-trivial programs. I think compressing the storage of empty types should make it much, much easier for the compiler to optimize everything away. Also, like I said in another answer to Roland, it is always possible to enclose any value-level computation in `decltype` to ensure that no code is actually generated. But of course this is slightly annoying.

Regards,
Louis

[1]: https://github.com/ldionne/hana/issues/131
[2]: https://github.com/ldionne/hana/issues/132
[3]: https://github.com/ldionne/hana/issues/138

- Whether you believe the library should be accepted into Boost

Yes.

* Conditions for acceptance

I have no explicit conditions without satisfaction of which Hana should not proceed, but I have some recommendations for changes below.

- Your name

Zach Laine

- Your knowledge of the problem domain.

I'm knowledgeable about metaprogramming and value+type programming, a la Fusion.

- How much effort did you put into your evaluation of the review?

I partially converted an existing TMP-heavy library for doing linear algebra using heterogeneously-typed Boost.Units values in matrices. I spent about 8 hours on that. I also have been to all Louis' talks and have looked thoroughly at (though not used) previous versions of the library.

You are strongly encouraged to also provide additional information:

- What is your evaluation of the library's:

* Design

Overall, very good. In my partial conversion mentioned above, I was able to cut out lots of boilerplate code, that essentially just iterated over tuple elements, with Hana algorithms. I was able to use hana::tuple as a replacement for std::tuple relatively painlessly. The code looked smaller and just *better* using Hana than it did when I hand-rolled it. That is significant, IMO.

I have been saying since I first converted this same library over to C++14 code that I have no interest in MPL or any other such thing, since metaprogramming involving values is largely trivial in C++14, and all metaprogramming is much easier. See Peter Dimov's recent blog post for a more thorough analysis of this than I could ever write. With all of that, I still found I was able to reduce the code in my library by using Hana.

There are some things that I consider worth changing, however:

- The use of '.' in names instead of '_' (Louis has already said that he will change this).

- The inclusion of multiple functions that do the same thing. Synonyms such as size()/length() and fold()/fold.left() sow confusion.
As a maintainer of code that uses Hana, it means I have to intermittently stop and refer back to the docs. Hana is already quite large, and there's a lot there to try to understand!

- I would prefer that the names of things that are opposites be obviously so in their naming. For instance, filter()/remove_if() are opposites -- they do the same thing, just with the predicate inverted. Could these be called remove_if()/remove_if_not()? I (and I have heard this from others as well) have worked in a place that had multiple functions called filter() in the codebase, at least one of which meant filter-as-in-filter-out, and at least one of which meant filter-as-in-keep. filter() may be a more elegant spelling than the admittedly-clunky remove_if_not(), but the latter will not require me to refer to the docs when reviewing code.

- I'd like to see first() / last() / after_first() / before_last() instead of head() / last() / tail() / init() (this was suggested by Louis offline), as it reinforces the relationships in the names.

- I find boost::hana::make<boost::hana::Tuple>(...) to be clunkier than boost::hana::_tuple<>, in many places. Since I want to write this in some cases, it would be nice if it was called boost::hana::tuple<> instead, so it doesn't look like I'm using some implementation-detail type.

* Implementation

Very good; mind-bending in some of the implementation details. However, there are some choices that Louis made that I think should be changed:

- The inclusion of standard headers was forgone in an effort to optimize compile times further (Hana builds quickly, but including e.g. <type_traits> increases the include time of hana.hpp by factors). This is probably not significant in most users' use of Hana; most users will include Hana once and then instantiate, instantiate, instantiate. The header-inclusion time is just noise in most cases, and is quite fast enough in others.
However, rolling one's own std::foo, where std::foo is likely to evolve over time, is asking for subtle differences to creep in, in what users expect std::foo to do vs. what hana::detail::std::foo does. What happens if I use std::foo in some Hana-using code that uses hana::detail::std::foo internally? Probably, subtle problems. Moreover, this practice introduces an unnecessary maintenance burden on Hana to follow these (albeit slowly) moving targets.

- I ran into a significant problem in one case that suggests a general problem. In the linear algebra library I partially reimplemented, one declares matrices like this (here, I use the post-conversion Hana tuples):

    matrix<
        hana::tuple<a, b>,
        hana::tuple<c, d>
    > m;
Each row is a tuple. But the storage needs to be a single tuple, so matrix<> is actually a template alias for:

    template <typename Tuple, std::size_t Rows, std::size_t Columns>
    struct matrix_t { /* ... */ };

There's a metafunction that maps from the declaration syntax to the correct representational type:

    template <typename HeadRow, typename ...TailRows>
    struct matrix_type
    {
        static const std::size_t rows = sizeof...(TailRows) + 1;
        static const std::size_t columns = hana::size(HeadRow{});
        /* ... */
        using tuple = decltype(hana::flatten(
            hana::make<hana::Tuple>(HeadRow{}, TailRows{}...)
        ));
        using type = matrix_t<tuple, rows, columns>;
    };

The problem with the code above is that it only works when the HeadRow tuple type is a literal type (e.g. a tuple<float, double>). It does not work with a tuple containing non-literal types (e.g. a tuple<boost::units::quantity<boost::units::si::time>, /* ... */>). This is a problem for me. Why should the literalness of the types contained in the tuple determine whether I can know its size at compile time?

I'd like to be able to write this as a workaround:

    static const std::size_t columns = hana::size(hana::type<HeadRow>);

I'd also like Hana to know that when it sees a type<> object, it should handle it specially and do the right thing. There may be other compile-time-value-returning algorithms that would benefit from a similar treatment. I was unable to sketch in a solution to this, because hana::_type is implemented in such a way as to prevent deduction on its nested type (the nested type is actually hana::_type::_::type, not hana::_type::type -- surely to prevent problems elsewhere in the implementation). As an aside, note that I was forced to name this nested '_' type elsewhere in a predicate.
This is not something I want my junior coworkers to have to read:

    template <typename T>
    constexpr bool some_predicate (hana::_type<T> x)
    { return hana::_type<T>::_::type::size == some_constant; }

Anyway, after trying several different ways to overcome this, I punted on using hana::size() at all, and resorted to:

    static const std::size_t columns = HeadRow::size;

... which precludes the use of std::tuple or my_arbitrary_tuple_type with my matrix type in future.

- I got this error:

    error: object of type 'boost::hana::_tuple<float, float, float, float>'
    cannot be assigned because its copy assignment operator is implicitly deleted

This seems to result from the definitions of hana::_tuple and hana::detail::closure. Specifically, the use of the defaulted special member functions did not work, given that closure's ctor is declared like this:

    constexpr closure_impl(Ys&& ...y) : Xs{static_cast<Ys&&>(y)}... { }

The implication is that if I construct a tuple from rvalues, I get a tuple<T&&, U&&, ...>. The fact that the special members are defaulted is not helpful, since they are still implicitly deleted if they cannot be generated (e.g. the copy ctor, when the members are rvalue refs). I locally removed all the defaulted members from _tuple and closure, added to closure a defined default ctor, and changed its existing ctor to this:

    constexpr closure_impl(Ys&& ...y) : Xs{::std::forward<Ys>(y)}... { }

... and I stopped having problems. My change might not have been entirely necessary (I think that the static_cast<> might be ok to leave in there), but something needs to change, and I think tests need to be added to cover copies, assignment, moves, etc., of tuples with different types of values and r/lvalues.

* Documentation

Again, this is overall very good. Some suggestions:

- Make a cheatsheet for the data types, just like the ones for the algorithms.

- Do the same thing for the functions that are currently left out (Is this because they are not algorithms per se?). E.g.
I used hana::repeat<hana::Tuple>(), but it took some finding.

* Tests

- The tests seem quite extensive.

* Usefulness

Extremely useful. I find Hana to be a faster, *much* easier-to-use MPL, and an easier-to-use Fusion, if only because it shares the same interface as the MPL-equivalent part. There are other things in there I haven't used quite yet that are promising as well. A lot of the stuff in Hana feels like a solution in search of a problem -- but that may just be my ignorance of Haskell-style programming. I look forward to one day understanding what I'd use hana::duplicate() for, and using it for that. :)

That being said, I really, really want to do this:

    tuple_1 foo;
    tuple_2 bar;
    boost::hana::copy_if(foo, bar, [](auto && x) { return my_predicate(x); });

Currently, I cannot. IIUC, I must do:

    tuple_1 foo;
    auto bar = boost::hana::take_if(foo, [](auto && x) { return my_predicate(x); });

This does O(N^2) copies, where N = boost::hana::size(bar), as it is pure-functional. Unless the optimizer is brilliant (and it typically is not), I cannot use code like this in a hot section of code. Also, IIUC, take_if() cannot accommodate the case that decltype(foo) != decltype(bar). What I ended up doing was this:

    hana::for_each(
        hana::range_c<std::size_t, 0, hana::size(foo)>,
        [&](auto i) {
            hana::at(bar, i) =
                static_cast<std::remove_reference_t<decltype(hana::at(bar, i))>>(
                    hana::at(foo, i)
                );
        }
    );

... which is a little unsatisfying. I also want move_if(). And a pony.

- Did you attempt to use the library? If so:

* Which compiler(s)

Clang 3.6.1, on both Mac and Linux. My code did actually compile (though did not link due to the already-known bug) on GCC 5.1.

* What was the experience? Any problems?

I had to add "-lc++abi" to the CMake build to fix the link on Linux, but otherwise no problems.

Zach

On Wed, Jun 10, 2015 at 4:19 AM, Glen Fernandes <glen.fernandes@gmail.com> wrote:
[...]

Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
There are some things that I consider worth changing, however:
- The use of '.' in names instead of '_' (Louis has already said that he will change this).
Yes, I'm currently changing this. You can refer to [1].
- The inclusion of multiple functions that do the same thing. Synonyms such as size()/length() and fold()/fold.left() sow confusion. As a maintainer of code that uses Hana, it means I have to intermittently stop and refer back to the docs. Hana is already quite large, and there's a lot there to try to understand!
The synonyms were added as a balm in the few cases where there was a rather strong disagreement between what I wanted and what other people wanted. Perhaps I should have held my position harder, or perhaps I should have done the contrary. If the aliases annoy a significant number of people, I'll consider removing them (I don't yet know which one I'd keep).
[...]
- I'd like to see first() / last() / after_first() / before_last() instead of head() / last() / tail() / init() (this was suggested by Louis offline), as it reinforces the relationships in the names.
Actually, I thought we had agreed on front/back and drop_front/drop_back :-). Anyway, this is tracked by this issue [2]. It's not a 100% trivial change, because it means that `drop` is replaced by `drop_front`, which can also take an optional number of elements to drop. The same goes for `drop_back`, which will accept an optional number of elements to drop from the end of the sequence.
- I find boost::hana::make<boost::hana::Tuple>(...) to be clunkier than boost::hana::_tuple<>, in many places.
You can use boost::hana::make_tuple(...).
Since I want to write this in some cases, it would be nice if it was called boost::hana::tuple<> instead, so it doesn't look like I'm using some implementation-detail type.
Right; since `_tuple<...>` is a well specified type, I guess it could have a proper name without a leading underscore. More generally, I have to clean up some names I use. For example, you can create a Range with

- boost::hana::make_range(...)
- boost::hana::make<boost::hana::Range>(...)
- boost::hana::range(...)
- boost::hana::range_c<...>

And the type of any of these objects is called _range<...>, although it is implementation-defined. I opened this issue [3] to track this cleanup process.
* Implementation
Very good; mind-bending in some of the implementation details. However, there are some choices that Louis made that I think should be changed:
- The inclusion of standard headers was forgone in an effort to optimize compile times further (Hana builds quickly, but including e.g. <type_traits> increases the include time of hana.hpp by factors). This is probably not significant in most users' use of Hana; most users will include Hana once and then instantiate, instantiate, instantiate. The header-inclusion time is just noise in most cases, and is quite fast enough in others. However, rolling one's own std::foo, where std::foo is likely to evolve over time, is asking for subtle differences to creep in, in what users expect std::foo to do vs. what hana::detail::std::foo does. What happens if I use std::foo in some Hana-using code that uses hana::detail::std::foo internally? Probably, subtle problems. Moreover, this practice introduces an unnecessary maintenance burden on Hana to follow these (albeit slowly) moving targets.
The current develop branch now uses `<type_traits>` and other standard headers. I am glad I have finally made the move, since this was also preventing me from properly interoperating with `std::integral_constant` and other quirks. All for the better!
- I ran in to a significant problem in one case that suggests a general problem.
In the linear algebra library I partially reimplemented, one declares matrices like this (here, I use the post-conversion Hana tuples):
matrix<
    hana::tuple<a, b>,
    hana::tuple<c, d>
> m;
Each row is a tuple. But the storage needs to be a single tuple, so matrix<> is actually a template alias for:
template <typename Tuple, std::size_t Rows, std::size_t Columns>
struct matrix_t { /* ... */ };
There's a metafunction that maps from the declaration syntax to the correct representational type:
template <typename HeadRow, typename ...TailRows>
struct matrix_type
{
    static const std::size_t rows = sizeof...(TailRows) + 1;
    static const std::size_t columns = hana::size(HeadRow{});

    /* ... */

    using tuple = decltype(hana::flatten(
        hana::make<hana::Tuple>(HeadRow{}, TailRows{}...)
    ));

    using type = matrix_t<tuple, rows, columns>;
};
The problem with the code above, is that it works with a HeadRow tuple type that is a literal type (e.g. a tuple<float, double>). It does not work with a tuple containing non-literal types (e.g. a tuple<boost::units::quantity<boost::units::si::time>, /* ... */>).
This is a problem for me. Why should the literalness of the types contained in the tuple determine whether I can know its size at compile time? I'd like to be able to write this as a workaround:
static const std::size_t columns = hana::size(hana::type<HeadRow>);
That is the incorrect workaround. The proper, Hana-idiomatic way to write what you need is:

    namespace detail {
        auto make_matrix_type = [](auto rows) {
            auto nrows = hana::size(rows);
            static_assert(nrows >= 1u, "matrix_t<> requires at least one row");

            auto ncolumns = hana::size(rows[hana::size_t<0>]);
            auto uniform = hana::all_of(rows, [=](auto row) {
                return hana::size(row) == ncolumns;
            });
            static_assert(uniform, "matrix_t<> requires tuples of uniform length");

            using tuple_type = decltype(hana::flatten(rows));
            return hana::type<matrix_t<tuple_type, nrows, ncolumns>>;
        };
    }

    template <typename ...Rows>
    using matrix = typename decltype(
        detail::make_matrix_type(hana::make_tuple(std::declval<Rows>()...))
    )::type;

By the way, see my pull request at [4]. I'm now passing all the tests, and with a compilation time speedup!
I'd also like Hana to know that when it sees a type<> object, it should handle it specially and do the right thing. There may be other compile-time-value-returning algorithms that would benefit from a similar treatment. I was unable to sketch in a solution to this, because hana::_type is implemented in such a way as to prevent deduction on its nested type (the nested type is actually hana::_type::_::type, not hana::_type::type -- surely to prevent problems elsewhere in the implementation).
This is done to prevent ADL from causing the instantiation of `T` in `_type<T>`. For this, `decltype(type<T>)` has to be a dependent type in which `T` does not appear.
As an aside, note that I was forced to name this nested '_' type elsewhere in a predicate. This is not something I want my junior coworkers to have to read:
    template <typename T>
    constexpr bool some_predicate(hana::_type<T> x)
    { return hana::_type<T>::_::type::size == some_constant; }
You made your life harder than it needed to be. First, you should have written `decltype(hana::type<T>)::type::size` instead of `hana::_type<T>::_::type`; this `_` nested type is an implementation detail. But then, you realize that this is actually equivalent to `T::size`. Indeed, `decltype(type<T>)::type` is just `T`. So basically, what you wanted is

    template <typename T>
    constexpr bool some_predicate(hana::_type<T> x)
    { return T::size == some_constant; }

which looks digestible to junior coworkers :-).
Anyway, after trying several different ways to overcome this, I punted on using hana::size() at all, and resorted to:
static const std::size_t columns = HeadRow::size;
... which precludes the use of std::tuple or my_arbitrary_tuple_type with my matrix type in future.
I think my workaround should make this OK.
- I got this error:
error: object of type 'boost::hana::_tuple<float, float, float, float>' cannot be assigned because its copy assignment operator is implicitly deleted
This is a known problem, tracked by this issue [5].
This seems to result from the definitions of hana::_tuple and hana::detail::closure. Specifically, the use of the defaulted special member functions did not work, given that closure's ctor is declared like this:
constexpr closure_impl(Ys&& ...y) : Xs{static_cast<Ys&&>(y)}... { }
The implication is that if I construct a tuple from rvalues, I get a tuple<T&&, U&&, ...>. The fact that the special members are defaulted is not helpful, since they are still implicitly deleted if they cannot be generated (e.g. the copy ctor, when the members are rvalue refs). I locally removed all the defaulted members from _tuple and closure, added to closure a defined default ctor, and changed its existing ctor to this:
constexpr closure_impl(Ys&& ...y) : Xs{::std::forward<Ys>(y)}... { }
... and I stopped having problems. My change might not have been entirely necessary (I think that the static_cast<> might be ok to leave in there),
Actually, `std::forward<Y>(y)` is equivalent to `static_cast<Y&&>(y)`. I'm using the latter because I measured a compile-time speedup of about 13.9% when removing it. This is because Hana uses perfect forwarding quite a bit, and that was the cost of instantiating `std::forward` (lol). So the only thing you changed really is to remove all the defaulted members.
but something needs to change, and I think tests need to be added to cover copies, assignment, moves, etc., of tuples with different types of values and r/lvalues.
That is true. Added this issue [6].
* Documentation
Again, this is overall very good. Some suggestions:
- Make a cheatsheet for the data types, just like the ones for the algorithms.
Great idea. See this issue [7].
- Do the same thing for the functions that are currently left out (Is this because they are not algorithms per se?). E.g. I used hana::repeat<hana::Tuple>(), but it took some finding.
I only put the __most__ useful algorithms in the cheatsheet, to avoid crowding it too much. I can add some more.
* Tests
- The tests seem quite extensive.
* Usefulness
Extremely useful. I find Hana to be a faster, *much* easier-to-use MPL, and an easier-to-use Fusion, if only because it shares the same interface as the MPL-equivalent part. There are other things in there I haven't used quite yet that are promising as well. A lot of the stuff in Hana feels like a solution in search of a problem -- but that may just be my ignorance of Haskell-style programming. I look forward to one day understanding what I'd use hana::duplicate() for, and using it for that. :)
Lol. The Comonad concept (where duplicate() comes from) is more of an experiment to formalize laziness. See my blog post about this [8].
That being said, I really, really, want to do this:
tuple_1 foo; tuple_2 bar;
boost::hana::copy_if(foo, bar, [](auto && x) { return my_predicate(x); });
Currently, I cannot. IIUC, I must do:
tuple_1 foo; auto bar = boost::hana::take_if(foo, [](auto && x) { return my_predicate(x); });
Which does O(N^2) copies, where N = boost::hana::size(bar), as it is pure-functional. Unless the optimizer is brilliant (and it typically is not), I cannot use code like this in a hot section of code. Also, IIUC take_if() cannot accommodate the case that decltype(foo) != decltype(bar)
I don't understand; take_if() is not an algorithm provided by Hana.
What I ended up doing was this:
    hana::for_each(
        hana::range_c<std::size_t, 0, hana::size(foo)>,
        [&](auto i) {
            hana::at(bar, i) =
                static_cast<std::remove_reference_t<decltype(hana::at(bar, i))>>(
                    hana::at(foo, i)
                );
        }
    );
... which is a little unsatisfying.
I also want move_if(). And a pony.
Oh, so you're trying to do an element-wise assignment? Since Hana expects functions to be pure in general and assignment is a side effect, this cannot be achieved with a normal algorithm. However, Hana provides some algorithms that allow side effects. One of them is for_each, and the other is `.times` (from IntegralConstant). I suggest you could write the following:

    hana::size(foo).times.with_index([&](auto i) {
        using T = std::remove_reference_t<decltype(bar[i])>;
        bar[i] = static_cast<T>(foo[i]);
    });
- Did you attempt to use the library? If so: * Which compiler(s)
Clang 3.6.1, on both Mac and Linux. My code did actually compile (though did not link due to the already-known bug) on GCC 5.1.
Cool!
* What was the experience? Any problems?
I had to add "-lc++abi" to the CMake build to fix the link on Linux, but otherwise no problems.
That is curious, because the Travis build is on Linux and does not need that. Did you set the path to your libc++ installation on Linux? The README states that you should use -DLIBCXX_ROOT=/path/to/libc++ when generating the CMake build system. If it does not work as expected, I'll open an issue, because I really want to track down these problems, which make the library more painful to try out.

Thanks for the review, Zach. I also appreciate all the comments you provided privately; they were super useful. And you were right about the dots in the names; I should have changed them before the review. :-)

Regards,
Louis

[1]: https://github.com/ldionne/hana/issues/114
[2]: https://github.com/ldionne/hana/issues/66
[3]: https://github.com/ldionne/hana/issues/122
[4]: https://github.com/tzlaine/Units-BLAS/pull/1
[5]: https://github.com/ldionne/hana/issues/93
[6]: https://github.com/ldionne/hana/issues/123
[7]: https://github.com/ldionne/hana/issues/124
[8]: http://ldionne.com/2015/03/16/laziness-as-a-comonad/

On Tue, Jun 16, 2015 at 8:04 AM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
- I'd like to see first() / last() / after_first() / before_last() instead of head() / last() / tail() / init() (this was suggested by Louis offline), as it reinforces the relationships in the names.
Actually, I thought we had agreed on front/back and drop_front/drop_back :-). Anyway, this is tracked by this issue [2]. It's not a 100% trivial change, because it means that `drop` is replaced by `drop_front`, which can also take an optional number of elements to drop. The same goes for `drop_back`, which will accept an optional number of elements to drop from the end of the sequence.
That sounds good too.
- I find boost::hana::make<boost::hana::Tuple>(...) to be clunkier than boost::hana::_tuple<>, in many places.
You can use boost::hana::make_tuple(...).
Right. It's just that sometimes I just want to name the type.
- I ran in to a significant problem in one case that suggests a general problem.
In the linear algebra library I partially reimplemented, one declares matrices like this (here, I use the post-conversion Hana tuples):
matrix<
    hana::tuple<a, b>,
    hana::tuple<c, d>
> m;
Each row is a tuple. But the storage needs to be a single tuple, so matrix<> is actually a template alias for:
template <typename Tuple, std::size_t Rows, std::size_t Columns>
struct matrix_t { /* ... */ };
There's a metafunction that maps from the declaration syntax to the correct representational type:
template <typename HeadRow, typename ...TailRows>
struct matrix_type
{
    static const std::size_t rows = sizeof...(TailRows) + 1;
    static const std::size_t columns = hana::size(HeadRow{});

    /* ... */

    using tuple = decltype(hana::flatten(
        hana::make<hana::Tuple>(HeadRow{}, TailRows{}...)
    ));

    using type = matrix_t<tuple, rows, columns>;
};
The problem with the code above, is that it works with a HeadRow tuple type that is a literal type (e.g. a tuple<float, double>). It does not work with a tuple containing non-literal types (e.g. a tuple<boost::units::quantity<boost::units::si::time>, /* ... */>).
This is a problem for me. Why should the literalness of the types contained in the tuple determine whether I can know its size at compile time? I'd like to be able to write this as a workaround:
static const std::size_t columns = hana::size(hana::type<HeadRow>);
That is the incorrect workaround. The proper, Hana-idiomatic way to write what you need is:

    namespace detail {
        auto make_matrix_type = [](auto rows) {
            auto nrows = hana::size(rows);
            static_assert(nrows >= 1u, "matrix_t<> requires at least one row");

            auto ncolumns = hana::size(rows[hana::size_t<0>]);
            auto uniform = hana::all_of(rows, [=](auto row) {
                return hana::size(row) == ncolumns;
            });
            static_assert(uniform, "matrix_t<> requires tuples of uniform length");

            using tuple_type = decltype(hana::flatten(rows));
            return hana::type<matrix_t<tuple_type, nrows, ncolumns>>;
        };
    }

    template <typename ...Rows>
    using matrix = typename decltype(
        detail::make_matrix_type(hana::make_tuple(std::declval<Rows>()...))
    )::type;
While that definitely works, it also definitely leaves me scratching my head. Why doesn't

    static const std::size_t columns = hana::size(HeadRow{});

work in the code above, especially since

    auto ncolumns = hana::size(rows[hana::size_t<0>]);

does? That seems at least a little obscure as an interface. I also could not get hana::size(hana::type<HeadRow>) to work, btw.

By the way, see my pull request at [4]. I'm now passing all the tests, and with a compilation time speedup!
Very nice!
As an aside, note that I was forced to name this nested '_' type elsewhere in a predicate. This is not something I want my junior coworkers to have to read:
template <typename T> constexpr bool some_predicate (hana::_type<T> x) { return hana::_type<T>::_::type::size == some_constant;}
You made your life harder than it needed to be. First, you should have written
`decltype(hana::type<T>)::type::size`
instead of
`hana::_type<T>::_::type`
This `_` nested type is an implementation detail. But then, you realize that this is actually equivalent to `T::size`. Indeed, `decltype(type<T>)::type` is just `T`. So basically, what you wanted is
template <typename T> constexpr bool some_predicate (hana::_type<T> x) { return T::size == some_constant;}
which looks digestible to junior coworkers :-).
Ha! Fair enough.
That being said, I really, really, want to do this:
tuple_1 foo; tuple_2 bar;
boost::hana::copy_if(foo, bar, [](auto && x) { return my_predicate(x); });
Currently, I cannot. IIUC, I must do:
tuple_1 foo; auto bar = boost::hana::take_if(foo, [](auto && x) { return my_predicate(x); });
Which does O(N^2) copies, where N = boost::hana::size(bar), as it is pure-functional. Unless the optimizer is brilliant (and it typically is not), I cannot use code like this in a hot section of code. Also, IIUC take_if() cannot accommodate the case that decltype(foo) != decltype(bar)
I don't understand; take_if() is not an algorithm provided by Hana.
Right! I meant filter(). Sorry.
What I ended up doing was this:
    hana::for_each(
        hana::range_c<std::size_t, 0, hana::size(foo)>,
        [&](auto i) {
            hana::at(bar, i) =
                static_cast<std::remove_reference_t<decltype(hana::at(bar, i))>>(
                    hana::at(foo, i)
                );
        }
    );
... which is a little unsatisfying.
I also want move_if(). And a pony.
Oh, so you're trying to do an element-wise assignment?
Yes.
Since Hana expects functions to be pure in general and assignment is a side effect, this cannot be achieved with a normal algorithm. However, Hana provides some algorithms that allow side effects. One of them is for_each, and the other is `.times` (from IntegralConstant). I suggest you could write the following:

    hana::size(foo).times.with_index([&](auto i) {
        using T = std::remove_reference_t<decltype(bar[i])>;
        bar[i] = static_cast<T>(foo[i]);
    });
Ok. You also had this in the PR you made for Units-BLAS:

    hana::size(xs).times.with_index([&](auto i) { xs[i] = ys[i]; });

... which is even simpler. However, either is a pretty awkward way to express an operation that I think will be quite often desired. copy(), copy_if(), move(), and move_if() should be first-order operations. I also want the functional algorithms, but sometimes I simply cannot use them.... Now where's my pony?
- Did you attempt to use the library? If so: * Which compiler(s)
Clang 3.6.1, on both Mac and Linux. My code did actually compile (though did not link due to the already-known bug) on GCC 5.1.
Cool!
* What was the experience? Any problems?
I had to add "-lc++abi" to the CMake build to fix the link on Linux, but otherwise no problems.
That is curious, because the Travis build is on linux and does not need that. Did you set the path to your libc++ installation on Linux? The README states that you should use
-DLIBCXX_ROOT=/path/to/libc++
Ah, cool. I don't think I read the README.
Thanks for the review, Zach. I also appreciate all the comments you provided privately; they were super useful. And you were right about the dots in the names; I should have changed them before the review. :-)
Glad to help! Zach

Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Tue, Jun 16, 2015 at 8:04 AM, Louis Dionne <ldionne.2 <at> gmail.com> wrote:
[...]
That is the incorrect workaround. The proper, Hana-idiomatic way to write what you need is:
    namespace detail {
        auto make_matrix_type = [](auto rows) {
            auto nrows = hana::size(rows);
            static_assert(nrows >= 1u, "matrix_t<> requires at least one row");

            auto ncolumns = hana::size(rows[hana::size_t<0>]);
            auto uniform = hana::all_of(rows, [=](auto row) {
                return hana::size(row) == ncolumns;
            });
            static_assert(uniform, "matrix_t<> requires tuples of uniform length");

            using tuple_type = decltype(hana::flatten(rows));
            return hana::type<matrix_t<tuple_type, nrows, ncolumns>>;
        };
    }

    template <typename ...Rows>
    using matrix = typename decltype(
        detail::make_matrix_type(hana::make_tuple(std::declval<Rows>()...))
    )::type;
While that definitely works, it also definitely leaves me scratching my head. Why doesn't
static const std::size_t columns = hana::size(HeadRow{});
work in the code above, especially since
auto ncolumns = hana::size(rows[hana::size_t<0>]);
does? That seems at least a little obscure as an interface.
Notice that

    static const std::size_t columns = hana::size(HeadRow{});

requires the `hana::size(HeadRow{})` expression to be a constant expression. This requires HeadRow to be both default-constructible and a literal type. On the other hand,

    auto ncolumns = hana::size(rows[hana::size_t<0>]);

just defines a (non-constexpr) variable inside a lambda. It also does not require anything about the constructibility of `HeadRow`. Note that I could have written the above Hana-metafunction as follows:

    auto make_matrix_type = [](auto head_row, auto ...tail_rows) {
        auto nrows = hana::size_t<sizeof...(tail_rows) + 1>;
        static_assert(nrows >= 1u, "matrix_t<> requires at least one row");

        auto ncolumns = hana::size(head_row);
        auto rows = hana::make_tuple(head_row, tail_rows...);
        auto uniform = hana::all_of(rows, [=](auto row) {
            return hana::size(row) == ncolumns;
        });
        static_assert(uniform, "matrix_t<> requires tuples of uniform length");

        using tuple_type = decltype(hana::flatten(rows));
        return hana::type<matrix_t<tuple_type, nrows, ncolumns>>;
    };

which is closer to your initial implementation. Now, it is obvious that

    auto ncolumns = hana::size(head_row);

should work, right?
I also could not get hana::size(hana::type<HeadRow>) to work, btw.
`hana::size` is a function from the Foldable concept. `hana::type<...>` represents a C++ type. Since a C++ type is not Foldable (that wouldn't make any sense), you can't call `hana::size` on a `hana::type<...>`. To understand why what you're asking for can't be done without breaking the conceptual integrity of Hana, consider what would happen if I defined `std::vector<>` as a Foldable. Obviously, the following can't be made to work:

    hana::size(hana::type<std::vector<int>>)

Hana works with objects, and the `hana::type<>` wrapper is there to allow us to represent __actual C++ types__ as objects, essentially for the purpose of calling type traits on them. The idiomatic Hana way to do what you're trying to achieve is to consider your matrix rows as objects, which they are, instead of types.
[...]
That being said, I really, really, want to do this:
tuple_1 foo; tuple_2 bar;
boost::hana::copy_if(foo, bar, [](auto && x) { return my_predicate(x); });
Currently, I cannot. IIUC, I must do:
tuple_1 foo; auto bar = boost::hana::take_if(foo, [](auto && x) { return my_predicate(x); });
Which does O(N^2) copies, where N = boost::hana::size(bar), as it is pure-functional. Unless the optimizer is brilliant (and it typically is not), I cannot use code like this in a hot section of code. Also, IIUC take_if() cannot accommodate the case that decltype(foo) != decltype(bar)
I don't understand; take_if() is not an algorithm provided by Hana.
Right! I meant filter(). Sorry.
No problem. Now I understand better. So basically you want to write

    tuple_1 foo;
    auto bar = boost::hana::filter(foo, my_predicate);

I don't understand why that's O(N^2) copies. That should really be N copies, where `N = hana::size(bar)`. As a bonus, if you don't need `foo` around anymore, you can just write

    tuple_1 foo;
    auto bar = boost::hana::filter(std::move(foo), my_predicate);

and now you get N moves, not even N copies.
[...]
Since Hana expects functions to be pure in general and assignment is a side effect, this cannot be achieved with a normal algorithm. However, Hana provides some algorithms that allow side effects. One of them is for_each, and the other is `.times` (from IntegralConstant). I suggest you could write the following:

    hana::size(foo).times.with_index([&](auto i) {
        using T = std::remove_reference_t<decltype(bar[i])>;
        bar[i] = static_cast<T>(foo[i]);
    });
Ok. You also had this in the PR you made for Units-BLAS:
hana::size(xs).times.with_index([&](auto i) { xs[i] = ys[i]; });
... which is even simpler. However, either is a pretty awkward way to express an operation that I think will be quite often desired. copy(), copy_if(), move(), and move_if() should be first-order operations. I also want the functional algorithms, but sometimes I simply cannot use them....
First, assignment to tuples will be fixed and I consider it a bug right now. However,

copy:

    auto tuple1 = hana::make_tuple(...);
    auto tuple2 = tuple1;

copy_if:

    auto tuple1 = hana::make_tuple(...);
    auto tuple2 = hana::filter(tuple1, predicate);

move:

    auto tuple1 = hana::make_tuple(...);
    auto tuple2 = std::move(tuple1);

move_if:

    auto tuple1 = hana::make_tuple(...);
    auto tuple2 = hana::filter(std::move(tuple1), predicate);

Does that solve your problem, or am I misunderstanding it?
Now where's my pony?
Here :-) http://goo.gl/JNq0Ve
[...]
Regards, Louis

On Tue, Jun 16, 2015 at 9:35 AM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Tue, Jun 16, 2015 at 8:04 AM, Louis Dionne <ldionne.2 <at> gmail.com> wrote:
[...]
That is the incorrect workaround. The proper, Hana-idiomatic way to write what you need is:
    namespace detail {
        auto make_matrix_type = [](auto rows) {
            auto nrows = hana::size(rows);
            static_assert(nrows >= 1u, "matrix_t<> requires at least one row");

            auto ncolumns = hana::size(rows[hana::size_t<0>]);
            auto uniform = hana::all_of(rows, [=](auto row) {
                return hana::size(row) == ncolumns;
            });
            static_assert(uniform, "matrix_t<> requires tuples of uniform length");

            using tuple_type = decltype(hana::flatten(rows));
            return hana::type<matrix_t<tuple_type, nrows, ncolumns>>;
        };
    }

    template <typename ...Rows>
    using matrix = typename decltype(
        detail::make_matrix_type(hana::make_tuple(std::declval<Rows>()...))
    )::type;
While that definitely works, it also definitely leaves me scratching my head. Why doesn't
static const std::size_t columns = hana::size(HeadRow{});
work in the code above, especially since
auto ncolumns = hana::size(rows[hana::size_t<0>]);
does? That seems at least a little obscure as an interface.
Notice that
static const std::size_t columns = hana::size(HeadRow{});
requires the `hana::size(HeadRow{})` expression to be a constant expression. This requires HeadRow to be both default-constructible and a literal type. On the other hand,
Right. I do understand that; I noted this in the original comment in my review.

    auto ncolumns = hana::size(rows[hana::size_t<0>]);
just defines a (non-constexpr) variable inside a lambda. It also does not require anything about the constructibility of `HeadRow`. Note that I could have written the above Hana-metafunction as follows:
    auto make_matrix_type = [](auto head_row, auto ...tail_rows) {
        auto nrows = hana::size_t<sizeof...(tail_rows) + 1>;
        static_assert(nrows >= 1u, "matrix_t<> requires at least one row");

        auto ncolumns = hana::size(head_row);
        auto rows = hana::make_tuple(head_row, tail_rows...);
        auto uniform = hana::all_of(rows, [=](auto row) {
            return hana::size(row) == ncolumns;
        });
        static_assert(uniform, "matrix_t<> requires tuples of uniform length");

        using tuple_type = decltype(hana::flatten(rows));
        return hana::type<matrix_t<tuple_type, nrows, ncolumns>>;
    };
which is closer to your initial implementation. Now, it is obvious that
auto ncolumns = hana::size(head_row);
should work, right?
I also understand why this works. I do *not* understand why this did not work for me, since it seems to be exactly what the code in your PR does:

    static const std::size_t columns = hana::size(std::declval<HeadRow>());
I also could not get hana::size(hana::type<HeadRow>); to work, btw.
`hana::size` is a function from the Foldable concept. `hana::type<...>` represents a C++ type. Since a C++ type is not Foldable (that wouldn't make any sense), you can't call `hana::size` on a `hana::type<...>`.
To understand why what you're asking for can't be done without breaking the conceptual integrity of Hana, consider what would happen if I defined `std::vector<>` as a Foldable. Obviously, the following can't be made to work:
hana::size(hana::type<std::vector<int>>)
Hana works with objects, and the `hana::type<>` wrapper is there to allow us to represent __actual C++ types__ as objects, for the purpose of calling type traits on them, essentially. The idiomatic Hana-way to do what you're trying to achieve is to consider your matrix rows as objects, which they are, instead of types.
Right. I should have been more clear. The use of type<...> was a suggestion for a workaround interface for non-literal types. I shouldn't have conflated the two threads of discussion. I made that suggestion because I tried both of these:

    // Does not work; not a literal type
    static const std::size_t columns = hana::size(HeadRow{});

    // Does not work for as-yet-mysterious-to-me reasons
    static const std::size_t columns = hana::size(std::declval<HeadRow>());

... and was looking for a way to get a "fake" object of type HeadRow to pass to hana::size().
[...]
That being said, I really, really, want to do this:
tuple_1 foo; tuple_2 bar;
boost::hana::copy_if(foo, bar, [](auto && x) { return my_predicate(x); });
Currently, I cannot. IIUC, I must do:
tuple_1 foo; auto bar = boost::hana::take_if(foo, [](auto && x) { return my_predicate(x); });
Which does O(N^2) copies, where N = boost::hana::size(bar), as it is pure-functional. Unless the optimizer is brilliant (and it typically is not), I cannot use code like this in a hot section of code. Also, IIUC take_if() cannot accommodate the case that decltype(foo) != decltype(bar)
I don't understand; take_if() is not an algorithm provided by Hana.
Right! I meant filter(). Sorry.
No problem. Now I understand better. So basically you want to write
tuple_1 foo;
auto bar = boost::hana::filter(foo, my_predicate);
I don't understand why that's O(N^2) copies. That should really be N copies, where `N = hana::size(bar)`. As a bonus, if you don't need `foo` around anymore, you can just write
tuple_1 foo;
auto bar = boost::hana::filter(std::move(foo), my_predicate);
and now you get N moves, not even N copies.
That's good to know. I was concerned that the pure functional implementation would internally produce intermediate values of size 1, 2, 3, ... N. This is often the case in pure functional implementations. Even so, it returns a temporary that must then be copied/moved again into bar. That means I'm doing 2*N copies/moves instead of N. That implies that I still cannot use filter() in hot code. (I know that above bar is initialized with the result of filter(), but in many cases, the result will be assigned to an existing value, and the final copy is not guaranteed to be elided. In much of my code, I need that guarantee, or a way to fall back to direct assignment where the elision does not occur.)
[...]
Since Hana expects functions to be pure in general and assignment is a side effect, this cannot be achieved with a normal algorithm. However, Hana provides some algorithms that allow side effects. One of them is for_each, and the other is `.times` (from IntegralConstant). I suggest you could write the following:
hana::size(foo).times.with_index([&](auto i) {
    using T = std::remove_reference_t<decltype(bar[i])>;
    bar[i] = static_cast<T>(foo[i]);
});
Ok. You also had this in the PR you made for Units-BLAS:
hana::size(xs).times.with_index([&](auto i) {
    xs[i] = ys[i];
});
... which is even simpler. However, either is a pretty awkward way to express an operation that I think will be quite often desired. copy(), copy_if(), move(), and move_if() should be first-class operations. I also want the functional algorithms, but sometimes I simply cannot use them...
First, assignment to tuples will be fixed and I consider it a bug right now. However,
copy:
auto tuple1 = hana::make_tuple(...);
auto tuple2 = tuple1;

copy_if:
auto tuple1 = hana::make_tuple(...);
auto tuple2 = hana::filter(tuple1, predicate);

move:
auto tuple1 = hana::make_tuple(...);
auto tuple2 = std::move(tuple1);

move_if:
auto tuple1 = hana::make_tuple(...);
auto tuple2 = hana::filter(std::move(tuple1), predicate);
Does that solve your problem, or am I misunderstanding it?
That all works fine, but I actually need assignment across tuple types that are different, but have compatible elements:

hana::_tuple<A, B> x = ...;
hana::_tuple<C, D> y = ...;

// some stuff happens ...

// This should compile iff std::is_same<A, C>::value && std::is_same<B, D>::value
x = y;

// But this should work as long as a C is assignable to an A and a D is assignable to a B:
hana::copy(x, y);
Now where's my pony?
Here :-) http://goo.gl/JNq0Ve
That's a pretty pony. Zach

Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Tue, Jun 16, 2015 at 9:35 AM, Louis Dionne <ldionne.2 <at> gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
which is closer to your initial implementation. Now, it is obvious that
auto ncolumns = hana::size(head_row);
should work, right?
I also understand why this works.
I do *not* understand why this did not work for me, since it seems to be exactly what the code in your PR does:
static const std::size_t columns = hana::size(std::declval<HeadRow>());
You are using std::declval in an evaluated context, which is illegal. Remember that std::declval is declared (but never defined) as

template <class _Tp>
typename add_rvalue_reference<_Tp>::type declval() noexcept; // no definition

I'm using std::declval inside decltype(...), which is an unevaluated context.
[...]
Right. I should have been more clear. The use of type<...> was a suggestion for a workaround interface for non-literal types that I was suggesting. I shouldn't have conflated the two threads of discussion. I made that suggestion because I tried both of these:
// Does not work; not a literal type
static const std::size_t columns = hana::size(HeadRow{});

// Does not work for as-yet-mysterious-to-me reasons
static const std::size_t columns = hana::size(std::declval<HeadRow>());
... and was looking for a way to get a "fake" object of type HeadRow to pass to hana::size().
I understand. Like I said, from my point of view, the proper workaround is to use a lambda as I suggested.
[...]
No problem. Now I understand better. So basically you want to write
tuple_1 foo; auto bar = boost::hana::filter(foo, my_predicate);
I don't understand why that's O(N^2) copies. That should really be N copies, where `N = hana::size(bar)`. As a bonus, if you don't need `foo` around anymore, you can just write
tuple_1 foo; auto bar = boost::hana::filter(std::move(foo), my_predicate);
and now you get N moves, not even N copies.
That's good to know. I was concerned that the pure functional implementation would internally produce intermediate values of size 1, 2, 3, ... N. This is often the case in pure functional implementations. Even so, it returns a temporary that must then be copied/moved again into bar. That means I'm doing 2*N copies/moves instead of N. That implies that I still cannot use filter() in hot code.
The current implementation of filter for Tuple will be as good as I described. The generic implementation for other sequence types (say an adapted std::tuple) will be slower. So there's room for improvement, of course.
(I know that above bar is initialized with the result of filter(), but in many cases, the result will be assigned to an existing value, and the final copy is not guaranteed to be elided. In much of my code, I need that guarantee, or a way to fall back to direct assignment where the elision does not occur.)
The result of `filter` is an rvalue temporary tuple. If the input sequence to filter was a movable-from tuple, it turns out that this rvalue result will have been move-constructed. The rest is up to the thing that receives the result of filter(). If you assign the result of filter() to something that has a move-assignment operator, then no copy occurs. I might be misunderstanding your requirement.
[...]
First, assignment to tuples will be fixed and I consider it a bug right now. However,
[...]
Does that solve your problem, or am I misunderstanding it?
That all works fine, but I actually need assignment across tuple types that are different, but have compatible elements:
hana::_tuple<A, B> x = ...; hana::_tuple<C, D> y = ...;
// some stuff happens ...
// This should compile iff std::is_same<A, C>::value && std::is_same<B, D>::value x = y;
// But this should work as long as a C is assignable to an A and a D is assignable to a B: hana::copy(x, y);
I guess I will need to decide upon this when I resolve the issue about tuple assignment. It is not yet clear to me why `x = y` should not work when the tuple types are different but have compatible elements. I must think about it. Regards, Louis

On Tue, Jun 16, 2015 at 11:11 AM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Tue, Jun 16, 2015 at 9:35 AM, Louis Dionne <ldionne.2 <at> gmail.com>
wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
which is closer to your initial implementation. Now, it is obvious that
auto ncolumns = hana::size(head_row);
should work, right?
I also understand why this works.
I do *not* understand why this did not work for me, since it seems to be exactly what the code in your PR does:
static const std::size_t columns = hana::size(std::declval<HeadRow>());
You are using std::declval in an evaluated context, which is illegal. Remember that std::declval is declared (but never defined) as
template <class _Tp> typename add_rvalue_reference<_Tp>::type declval() noexcept; // no definition
I'm using std::declval inside decltype(...), which is an unevaluated context.
Gah! Thanks.
[...]
No problem. Now I understand better. So basically you want to write
tuple_1 foo; auto bar = boost::hana::filter(foo, my_predicate);
I don't understand why that's O(N^2) copies. That should really be N copies, where `N = hana::size(bar)`. As a bonus, if you don't need `foo` around anymore, you can just write
tuple_1 foo; auto bar = boost::hana::filter(std::move(foo), my_predicate);
and now you get N moves, not even N copies.
That's good to know. I was concerned that the pure functional implementation would internally produce intermediate values of size 1, 2, 3, ... N. This is often the case in pure functional implementations. Even so, it returns a temporary that must then be copied/moved again into bar. That means I'm doing 2*N copies/moves instead of N. That implies that I still cannot use filter() in hot code.
The current implementation of filter for Tuple will be as good as I described. The generic implementation for other sequence types (say an adapted std::tuple) will be slower. So there's room for improvement, of course.
When you say "slower" do you mean 2*N or N^2?
(I know that above bar is initialized with the result of filter(), but in many cases, the result will be assigned to an existing value, and the final copy is not guaranteed to be elided. In much of my code, I need that guarantee, or a way to fall back to direct assignment where the elision does not occur.)
The result of `filter` is an rvalue temporary tuple. If the input sequence to filter was a movable-from tuple, it turns out that this rvalue result will have been move-constructed. The rest is up to the thing that receives the result of filter(). If you assign the result of filter() to something that has a move-assignment operator, then no copy occurs. I might be misunderstanding your requirement.
Sometimes extraneous moves are not ok either. I really, really, need to use mutating operations and side effects at least some of the time.
[...]
First, assignment to tuples will be fixed and I consider it a bug right now. However,
[...]
Does that solve your problem, or am I misunderstanding it?
That all works fine, but I actually need assignment across tuple types that are different, but have compatible elements:
hana::_tuple<A, B> x = ...; hana::_tuple<C, D> y = ...;
// some stuff happens ...
// This should compile iff std::is_same<A, C>::value && std::is_same<B, D>::value x = y;
// But this should work as long as a C is assignable to an A and a D is assignable to a B: hana::copy(x, y);
I guess I will need to decide upon this when I resolve the issue about tuple assignment. It is not yet clear to me why `x = y` should not work when the tuple types are different but have compatible elements. I must think about it.
Well, sometimes C is only explicitly convertible to A. I perhaps overstated things above. What I should have said is, in the general case, "x = y" is not defined for some values, if the assignment relies on implicit conversion. Moreover, I still want to do other mutating operations from one tuple to another, aside from just assignment. Zach

Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
The current implementation of filter for Tuple will be as good as I described. The generic implementation for other sequence types (say an adapted std::tuple) will be slower. So there's room for improvement, of course.
When you say "slower" do you mean 2*N or N^2?
Definitely more like 2*N. But I'll let you count :-). For generic sequences, the implementation is roughly equivalent to (once simplified):

filter(xs, pred) == flatten(transform(xs, [](auto&& x) {
                        if (pred(x)) return make_tuple(x);
                        else return make_tuple();
                    }))

// Let's denote make_tuple(x1, ..., xn) by [x1, ..., xn]. Let's also
// assume the worst case, i.e. the predicate is always satisfied.
// Then, we have N moves or copies so far:
                 == flatten([[x1], [x2], ... [xn]])

                 == fold_right([[x1], [x2], ... [xn]], [], concat)

// This is about N more copies:
                 == concat([x1], concat([x2], ... concat([xn], [])))

So it's O(k*N) for some constant k (say k <= 10 to be safe), but not O(N^2).
(I know that above bar is initialized with the result of filter(), but in many cases, the result will be assigned to an existing value, and the final copy is not guaranteed to be elided. In much of my code, I need that guarantee, or a way to fall back to direct assignment where the elision does not occur.)
The result of `filter` is an rvalue temporary tuple. If the input sequence to filter was a movable-from tuple, it turns out that this rvalue result will have been move-constructed. The rest is up to the thing that receives the result of filter(). If you assign the result of filter() to something that has a move-assignment operator, then no copy occurs. I might be misunderstanding your requirement.
Sometimes extraneous moves are not ok either. I really, really, need to use mutating operations and side effects at least some of the time.
I understand that sometimes this is needed. For those times, you can use for_each or .times with an index and mutate stuff as you wish. It's very, very hard to write Hana algorithms if you must guarantee that things are called in a specific order, or even called at all, so we have to ensure pure functions are used in the general case. It's also sometimes a compile-time performance tradeoff.
[...]
First, assignment to tuples will be fixed and I consider it a bug right now. However,
[...]
Does that solve your problem, or am I misunderstanding it?
That all works fine, but I actually need assignment across tuple types that are different, but have compatible elements:
hana::_tuple<A, B> x = ...; hana::_tuple<C, D> y = ...;
// some stuff happens ...
// This should compile iff std::is_same<A, C>::value && std::is_same<B, D>::value x = y;
// But this should work as long as a C is assignable to an A and a D is assignable to a B: hana::copy(x, y);
I guess I will need to decide upon this when I resolve the issue about tuple assignment. It is not yet clear to me why `x = y` should not work when the tuple types are different but have compatible elements. I must think about it.
Well, sometimes C is only explicitly convertible to A. I perhaps overstated things above. What I should have said is, in the general case, "x = y" is not defined for some values, if the assignment relies on implicit conversion. Moreover, I still want to do other mutating operations from one tuple to another, aside from just assignment.
Without redesigning the whole library, and without relying on something similar to iterators, I can't currently see a way to provide generic algorithms that could output their result into an existing sequence. But regardless, I'd like to understand what semantics you would be expecting from something like

hana::copy(x, y)

More generally, would the following do what you need?

auto transform_mutate = [](auto& in, auto& out, auto f) {
    size(in).times.with_index([&](auto i) {
        out[i] = f(in[i]);
    });
};

Also, it just occurred to me that some algorithms simply can't write their result into an existing sequence, because the type of that resulting sequence isn't even known yet. Consider:

hana::tuple<A, B, C> xs;
hana::tuple<?, ?, ?> result;

// should write the resulting sequence into 'result'.
filter_mutate(xs, result, predicate);

You don't know what the type of the tuple is before you have performed the algorithm. The only way I see this can be done is by using some kind of result_of namespace like in Fusion, so you can write:

hana::tuple<A, B, C> xs;
result_of::filter<...>::type result;
filter_mutate(xs, result, predicate);

But that's not a path I want to take. I think it might be possible to address 80% of your problem by adding one or two simple functions to Hana for making mutation easier, without having to change everything. Regards, Louis

On Wed, Jun 17, 2015 at 9:58 AM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
The current implementation of filter for Tuple will be as good as I described. The generic implementation for other sequence types (say an adapted std::tuple) will be slower. So there's room for improvement, of course.
When you say "slower" do you mean 2*N or N^2?
Definitely more like 2*N. But I'll let you count :-). For generic sequences, the implementation is roughly equivalent to (once simplified):
filter(xs, pred) == flatten(transform(xs, [](auto&& x) { if (pred(x)) return make_tuple(x); else return make_tuple(); }))
// Let's denote make_tuple(x1, ..., xn) by [x1, ..., xn]. Let's also // assume the worst case, i.e. the predicate is always satisfied. // Then, we have N moves or copies so far: == flatten([[x1], [x2], ... [xn]])
== fold_right([[x1], [x2], ... [xn]], [], concat)
// This is about N more copies: == concat([x1], concat([x2], ... concat([xn], [])))
So it's O(k*N) for some constant k (say k <= 10 to be safe), but not O(N^2).
That will be unacceptably slow for many-to-most C++ users.
[...]
First, assignment to tuples will be fixed and I consider it a bug right now. However,
[...]
Does that solve your problem, or am I misunderstanding it?
That all works fine, but I actually need assignment across tuple types that are different, but have compatible elements:
hana::_tuple<A, B> x = ...; hana::_tuple<C, D> y = ...;
// some stuff happens ...
// This should compile iff std::is_same<A, C>::value && std::is_same<B, D>::value x = y;
// But this should work as long as a C is assignable to an A and a D is assignable to a B: hana::copy(x, y);
I guess I will need to decide upon this when I resolve the issue about tuple assignment. It is not yet clear to me why `x = y` should not work when the tuple types are different but have compatible elements. I must think about it.
Well, sometimes C is only explicitly convertible to A. I perhaps overstated things above. What I should have said is, in the general case, "x = y" is not defined for some values, if the assignment relies on implicit conversion. Moreover, I still want to do other mutating operations from one tuple to another, aside from just assignment.
Without redesigning the whole library, and without relying on something similar to iterators, I can't currently see a way to provide generic algorithms that could output their result into an existing sequence.
But regardless, I'd like to understand what semantics you would be expecting from something like
hana::copy(x, y)
I would expect the constraints on copy() to be that size(x) == size(y), and that

y[i] = static_cast<decltype(y[i])>(x[i])

is well-formed for all i < size(x) (note that it's not necessary for me that std::is_convertible<decltype(x[i]), decltype(y[i])>::value be true for all i). Perhaps there should be a difference between these two notions of compatibility, called convert() for the former and copy() for the latter.
More generally, would the following do what you need?
auto transform_mutate = [](auto& in, auto& out, auto f) {
    size(in).times.with_index([&](auto i) {
        out[i] = f(in[i]);
    });
};
Yes.
Also, it just occurred to me that some algorithms simply can't write their result into an existing sequence, because the type of that resulting sequence isn't even known yet. Consider:
hana::tuple<A, B, C> xs; hana::tuple<?, ?, ?> result;
// should write the resulting sequence into 'result'. filter_mutate(xs, result, predicate);
You don't know what's the type of the tuple before you have performed the algorithm.
Right. I can certainly live with only copy(), convert(), move(), and transform_mutate(). Zach

On Thu, Jun 18, 2015 at 9:07 AM, Zach Laine <whatwasthataddress@gmail.com> wrote:
On Wed, Jun 17, 2015 at 9:58 AM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
Right. I can certainly live with only copy(), convert(), move(), and transform_mutate().
To be clear, if I just had transform_mutate(), I could easily use it in place of the others (except possibly for move(), depending on its semantics). Zach

Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Wed, Jun 17, 2015 at 9:58 AM, Louis Dionne <ldionne.2 <at> gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
The current implementation of filter for Tuple will be as good as I described. The generic implementation for other sequence types (say an adapted std::tuple) will be slower. So there's room for improvement, of course.
When you say "slower" do you mean 2*N or N^2?
Definitely more like 2*N. But I'll let you count . For generic sequences, the implementation is roughly equivalent to (once simplified):
. [...]
So it's O(k*N) for some constant k (say k <= 10 to be safe), but not O(N^2).
That will be unacceptably slow for many-to-most C++ users.
Can't you use Hana's Tuple, then? It's very hard to give good compile-time performance and runtime performance without knowing the internal representation of the data structures. One way to get good runtime performance is to use iterators and views like Fusion, but then you get lifetime issues. Hana takes for granted that types should be cheap to move. If that's not the case, can't you use a reference_wrapper or something like that?
[...]
Also, it just occurred to me that some algorithms simply can't write their result into an existing sequence, because the type of that resulting sequence isn't even known yet. Consider:
hana::tuple<A, B, C> xs; hana::tuple<?, ?, ?> result;
// should write the resulting sequence into 'result'. filter_mutate(xs, result, predicate);
You don't know what's the type of the tuple before you have performed the algorithm.
Right. I can certainly live with only copy(), convert(), move(), and transform_mutate().
I don't easily see how these mutating algorithms would fit in Hana given its functional design, but I'm not giving up. However, I suspect there might be better (read: more functional :-) ways to achieve this optimal performance. To get there, I'd like to make sure I understand exactly what operation you're trying to avoid. Let's assume you wrote the following instead of a transform_mutate equivalent:

hana::tuple<T...> xs{...};
hana::tuple<U...> ys;
ys = hana::transform(xs, f);

This will first apply f() to each element of xs, and then store the temporary values in a (temporary) tuple by moving them into place. It will then move-assign each element from the temporary tuple into ys. __Is the first move what you are trying to avoid?__ Because in all cases, even with transform_mutate, at least one move is required, in order to assign the temporary return value of f() to the corresponding element of ys. Regards, Louis

On Thu, Jun 18, 2015 at 4:27 PM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Wed, Jun 17, 2015 at 9:58 AM, Louis Dionne <ldionne.2 <at> gmail.com>
wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
[...]
The current implementation of filter for Tuple will be as good as I described. The generic implementation for other sequence types (say an adapted std::tuple) will be slower. So there's room for improvement, of course.
When you say "slower" do you mean 2*N or N^2?
Definitely more like 2*N. But I'll let you count . For generic sequences, the implementation is roughly equivalent to (once simplified):
. [...]
So it's O(k*N) for some constant k (say k <= 10 to be safe), but not O(N^2).
That will be unacceptably slow for many-to-most C++ users.
Can't you use Hana's Tuple, then? It's very hard to give good compile-time performance and runtime performance without knowing the internal representation of the data structures. One way to get good runtime performance is to use iterators and views like Fusion, but then you get lifetime issues. Hana takes for granted that types should be cheap to move. If that's not the case, can't you use a reference_wrapper or something like that?
I was using Hana tuples everywhere. The issue is that even for those, there is a k>1 when copying. If that k is >2 for other types, it's an even larger problem there.
[...]
Also, it just occurred to me that some algorithms simply can't write their result into an existing sequence, because the type of that resulting sequence isn't even known yet. Consider:
hana::tuple<A, B, C> xs; hana::tuple<?, ?, ?> result;
// should write the resulting sequence into 'result'. filter_mutate(xs, result, predicate);
You don't know what's the type of the tuple before you have performed the algorithm.
Right. I can certainly live with only copy(), convert(), move(), and transform_mutate().
I don't easily see how these mutating algorithms would fit in Hana given its functional design, but I'm not giving up. However, I suspect there might be better (read more functional :-) ways to achieve this optimal performance.
To get there, I'd like to make sure I understand exactly what operation you're trying to avoid. Let's assume you wrote the following instead of a transform_mutate equivalent:
hana::tuple<T...> xs{...};
hana::tuple<U...> ys;
ys = hana::transform(xs, f);
This will first apply f() to each element of xs(), and then store the temporary values in a (temporary) tuple by moving them into place. This will then move-assign each element from the temporary tuple into ys.
__Is the first move what you are trying to avoid?__
No, I'm trying to get rid of the temporary altogether. We all know that copies of temporaries get RVO'd out of code like the above a lot of the time, *but not always*. I want a guarantee that I don't need to rely on RVO in a particular case, if efficiency is critical.
Because in all cases, even with transform_mutate, at least one move is required, in order to assign the temporary return value of f() to the corresponding element of ys.
This is only true if there are only functional versions of Hana algorithms. Mutating ones can write straight into the result object, a la the STL algorithms. Zach

Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Thu, Jun 18, 2015 at 4:27 PM, Louis Dionne <ldionne.2 <at> gmail.com> wrote:
[...]
To get there, I'd like to make sure I understand exactly what operation you're trying to avoid. Let's assume you wrote the following instead of a transform_mutate equivalent:
hana::tuple<T...> xs{...}; hana::tuple<U...> ys; ys = hana::transform(xs, f);
This will first apply f() to each element of xs(), and then store the temporary values in a (temporary) tuple by moving them into place. This will then move-assign each element from the temporary tuple into ys.
__Is the first move what you are trying to avoid?__
No, I'm trying to get rid of the temporary altogether. We all know that copies of temporaries get RVO'd out of code like the above a lot of the time, *but not always*. I want a guarantee that I don't need to rely on RVO in a particular case, if efficiency is critical.
Sorry I'm being so slow, but do you mean get rid of the temporary tuple or the temporary value? Regarding the temporary value, I think there just isn't a way to get rid of it. When you write

T y = f(x);

there is a temporary object created by f(x) and then moved into y, right? Similarly, if you have

T y;
y = f(x);

there's a temporary created by f(x) that gets move-assigned to y. In all cases, there's a temporary value created, and you're relying on the optimizer to elide it. Am I misunderstanding something fundamental about C++, or just being thick?

I'll take it that you want to get rid of the temporary tuple. In this case, it is true that using a mutating algorithm will avoid the creation of a temporary tuple. To achieve this, I see three main solutions.

The first one is to provide mutating algorithms. I don't like that, but it solves your problem.

The second one is to provide lazy views a la Fusion that would compute the results on the fly. When you assign a view to a sequence, each element would be computed and then assigned directly, without creating a temporary tuple. I like this better, but it might have a non-trivial impact on the design of the library and it also represents a lot of work.

The third one is to consider this as a corner case, pretend the optimizer does its job properly most of the time, and to let performance freaks write

for_each(range(int_<0>, int_<n>), [&](auto i) {
    output[i] = f(input[i]);
});

I'm not sure which one is the best resolution. Regards, Louis

On Sat, Jun 20, 2015 at 4:17 PM, Louis Dionne <ldionne.2@gmail.com> wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Thu, Jun 18, 2015 at 4:27 PM, Louis Dionne <ldionne.2 <at> gmail.com>
wrote:
[...]
To get there, I'd like to make sure I understand exactly what operation you're trying to avoid. Let's assume you wrote the following instead of a transform_mutate equivalent:
hana::tuple<T...> xs{...}; hana::tuple<U...> ys; ys = hana::transform(xs, f);
This will first apply f() to each element of xs(), and then store the temporary values in a (temporary) tuple by moving them into place. This will then move-assign each element from the temporary tuple into ys.
__Is the first move what you are trying to avoid?__
No, I'm trying to get rid of the temporary altogether. We all know that copies of temporaries get RVO'd out of code like the above a lot of the time, *but not always*. I want a guarantee that I don't need to rely on RVO in a particular case, if efficiency is critical.
Sorry I'm being so slow, but do you mean get rid of the temporary tuple or the temporary value? Regarding the temporary value,
Not this. [snip]
I'll take it that you want to get rid of the temporary tuple.
This. Zach

On Sat, Jun 20, 2015 at 4:17 PM, Louis Dionne <ldionne.2@>; wrote:
[...]
Sorry I'm being so slow, but do you mean get rid of the temporary tuple or the temporary value? Regarding the temporary value,
Not this.
[snip]
I'll take it that you want to get rid of the temporary tuple.
This.
Ok. I created this issue: https://github.com/ldionne/hana/issues/150 Regards, Louis

Hi Glen, Louis and all,
- Whether you believe the library should be accepted into Boost * Conditions for acceptance
Yes.
- Your name
Kohei Takahashi
- Your knowledge of the problem domain.
I'm working on maintaining Boost.Fusion.
You are strongly encouraged to also provide additional information:
- What is your evaluation of the library's:
* Design
Well organized, and it shows the great potential of variable templates. Also, purely functional programming excites me; I started to learn Haskell :)
* Implementation
Some identifiers defined under boost::hana begin with a single underscore (e.g. `_is_a' in core/is_a.hpp). I think (and am afraid) this violates [lex.name] p.3 (I referenced n4296) when a user writes a using-directive (i.e. `using namespace boost::hana;'). Renaming them or moving them to the detail namespace might be better, since they are documented as /unspecified-type/. (I found that the return type of `make<Tuple>' is not an unspecified type in the documentation. Is this intended?)
* Documentation
Well described, but...
* It should be clearer about which C++14 (or C++11) features are required. Most users assume a library works with C++98, since all currently released Boost libraries do. Also, some vendors (proprietary or not) might ship a customized compiler based on an OSS one. In such cases, users have difficulty determining whether the library is usable, since some features may be unsupported or dropped. Boost.Config would help here, and depending on it is better than self-organizing, IMO.
* It is important to stress that, in contrast to Fusion and std::tuple, Hana's data types cannot be modified, not even their elements. Users might try to use them like Fusion (assign, tie, ...), because most C++ programmers have no background in purely functional programming and immutability, and they will get compile errors. Also, C++14 constexpr allows assignment, so /constant-expression/ alone doesn't explain the purity requirement. It would be best to describe this in the Introduction section (and also to explain why a /tie/ cannot be provided).
* Where is the source of the documentation? How is it generated? It seems to be generated using Doxygen, but none of the headers contain documentation comments.
* Tests I didn't run them, but they seem to be sufficient.
* Usefulness Since the names are similar to Haskell's, some users might be confused about usage (e.g. `maybe' vs. `Optional'), but that's trivial: no need to worry about it.
- Did you attempt to use the library? If so: * Which compiler(s) I tried some tiny code with GCC 5.1.0.
* What was the experience? Any problems? Compiling is much faster than with Fusion. Wow!
- How much effort did you put into your evaluation of the review? It was a bit hard to understand some of the semantics, because I'm not a Haskell programmer and have no mathematical background.
Best, Kohei

Kohei Takahashi <flast <at> flast.jp> writes:
[...]
* Implementation Some identifiers, which are defined under boost::hana, begin with a single underscore (e.g. `_is_a' in core/is_a.hpp). I think (and fear) this violates [lex.name] p.3 (I referenced N4296) when a user uses a using-directive (i.e. `using namespace boost::hana;'). Renaming them or moving them to a detail namespace might be better, since those are documented as /unspecified-type/.
I think you're right. But then it means that the placeholders from Boost.Bind (or even Boost.Lambda, if you import them into the global namespace) will also violate [lex.name], right? Still, I'll find a different way to name those. The main interest was that it results in very clear error messages. For example,

    fold_left(1, 2, 3); // obviously an error

    error: static_assert failed "hana::fold_left(xs, state, f) requires xs to be Foldable"
        static_assert(_models<Foldable, S>{},
                      ^ ~~~~~~~~~~~~~~~~~~~~~~
    note: in instantiation of function template specialization 'boost::hana::_fold_left::operator()<int, int, int>' requested here
        fold_left(1, 2, 3);
        ^

Notice how the name of the function template we're instantiating looks almost the same as the algorithm we called? I'd like to keep something as close to this as possible. You can refer to this issue [1] in the future. Note: It just occurred to me that user-defined literals are required to start with an underscore. Does that mean that we can't possibly import custom UDLs into the global namespace without violating [lex.name]?
(I found that the return type of `make<Tuple>' is not an unspecified type in the documentation. Is this intended?)
Yes, this is intended. I will change the name of _tuple to avoid it looking like an implementation detail.
* Documentation Well described, but...
* It should be clearer about which C++14 (or C++11) features are required. Most users assume C++98 support, since all currently released Boost libraries work under C++98. Also, some vendors (proprietary or not) might ship customized compilers based on open-source ones. In such cases, users have difficulty determining whether the library is usable, since some features may be unsupported or dropped. Boost.Config would help here, and depending on it is better than rolling your own feature detection, IMO.
The compiler requirements are currently documented in the README, which is the first thing that pops up when you go to the project page on GitHub. However, I think I wrongly assumed that everybody would start by the GitHub page and reading the README. A lot of people seem to start with the documentation, which is legitimate. I'll add building instructions and compiler requirements to the tutorial documentation. That's a good suggestion, thanks. See this issue [2] for future reference.
* It is important that, in contrast to Fusion and std::tuple, Hana's data types cannot modify even their elements. Users might try to use them like Fusion (assign, tie, ...), because most C++ programmers have no background in purely functional programming and immutability, and would get a confusing compile error. Also, C++14 constexpr allows assignment, so /constant-expression/ alone doesn't explain the purity requirement. It would be best to describe this in the Introduction section (and also to describe that a /tie/ cannot be made).
Technically, Hana requires the functions that are used inside higher-order algorithms to be pure, but that's about it. In other words, there's nothing in Hana's philosophy that prevents you from modifying the elements of a tuple

    auto xs = make_tuple(...);
    xs[3_c] = "foobar";

However, there is quite a bit of imprecision in the documentation regarding which functions should return a reference and which ones should return copies, so right now it is unclear when the above is actually valid. That will be addressed by this issue [3].
* Where is the source of the documentation, and how is it generated? It seems to be generated using Doxygen, but none of the headers contain documentation comments.
The documentation is in the headers. I assume you looked at the online version, though. Comments were stripped from the online version because the library was otherwise too large to upload to Wandbox. If you go look at the source, for example [4], you'll see the Doxygen comments.
* Tests I didn't run them, but they seem to be sufficient.
* Usefulness Since the names are similar to Haskell's, some users might be confused about usage (e.g. `maybe' vs. `Optional'), but that's trivial: no need to worry about it.
- Did you attempt to use the library? If so: * Which compiler(s) I tried some tiny code with GCC 5.1.0.
* What was the experience? Any problems? Compiling is much faster than with Fusion. Wow!
- How much effort did you put into your evaluation of the review? It was a bit hard to understand some of the semantics, because I'm not a Haskell programmer and have no mathematical background.
I feel like only the most advanced/abstract parts of Hana like Applicative and Monad require some functional programming background, but the rest of it should ideally be understandable by any C++ programmer. I'm thinking about the Iterable, Foldable, Searchable and Sequence concepts, which contain 90% of the useful stuff anyway. Thanks a lot for your review, Kohei! Regards, Louis [1]: https://github.com/ldionne/hana/issues/130 [2]: https://github.com/ldionne/hana/issues/128 [3]: https://github.com/ldionne/hana/issues/90 [4]: https://goo.gl/3u122a

Hi Louis,
Some identifiers, which are defined under boost::hana, begin with a single underscore (e.g. `_is_a' in core/is_a.hpp). I think (and fear) this violates [lex.name] p.3 (I referenced N4296) when a user uses a using-directive (i.e. `using namespace boost::hana;'). Renaming them or moving them to a detail namespace might be better, since those are documented as /unspecified-type/. I think you're right. But then it means that the placeholders from Boost.Bind (or even Boost.Lambda, if you import them into the global namespace) will also violate [lex.name], right? Still, I'll find a different way to name those. The main interest was that it results in very clear error messages. For example,
fold_left(1, 2, 3); // obviously an error
    error: static_assert failed "hana::fold_left(xs, state, f) requires xs to be Foldable"
        static_assert(_models<Foldable, S>{},
                      ^ ~~~~~~~~~~~~~~~~~~~~~~
    note: in instantiation of function template specialization 'boost::hana::_fold_left::operator()<int, int, int>' requested here
        fold_left(1, 2, 3);
        ^

Ah yes, it is clearer and better, but I think those symbol names are too generic. So how about using another namespace, like `::boost_hana::tuple` (it seems redundant, though)? E.g., Boost.MPL uses `::mpl_` to implement some details.
The compiler requirements are currently documented in the README, which is the first thing that pops up when you go to the project page on GitHub. You mean [1]? If so, it is not quite what I want. I want something like the following:
-- Hana requires generic lambdas, variable templates, and ... but not generalized lambda capture, ... (and so on). It might be too much information for most people, but it might help some users and provide a guarantee across future releases for those who cannot upgrade their compiler easily. (Does Hana require `shared_timed_mutex`? I believe not.)
Technically, Hana requires the functions that are used inside higher-order algorithms to be pure, but that's about it. In other words, there's nothing in Hana's philosophy that prevents you from modifying the elements of a tuple
    auto xs = make_tuple(...);
    xs[3_c] = "foobar";
However, there is quite a bit of imprecision in the documentation regarding which functions should return a reference and which ones should return copies, so right now it is unclear when the above is actually valid. That will be addressed by this issue [3]. OK, I understand. If the issue affects Hana's design (such as purity, or which algorithms can (or cannot) take a reference and/or modify elements while iterating), opening a mini-review might be better.
* Where is the source of the documentation, and how is it generated? It seems to be generated using Doxygen, but none of the headers contain documentation comments. The documentation is in the headers. I assume you looked at the online version, though. Comments were stripped from the online version because the library was otherwise too large to upload to Wandbox. If you go look at the source, for example [4], you'll see the Doxygen comments. Ah, that.
[1] https://github.com/ldionne/hana/blob/master/README.md#prerequisites-and-inst... Best, Kohei

Kohei Takahashi <flast <at> flast.jp> writes:
[...] The main interest was that it results in very clear error messages. For example, [...] Ah yes, it is clearer and better, but I think those symbol names are too generic. So how about using another namespace, like `::boost_hana::tuple` (it seems redundant, though)? E.g., Boost.MPL uses `::mpl_` to implement some details.
We could use `::hana_`, or simply use `boost::hana::fold_left_t`. I'll think of something.
The compiler requirements are currently documented in the README, which is the first thing that pops up when you go to the project page on GitHub. You mean [1]? If so, it is not quite what I want. I want something like the following:
-- Hana requires generic lambdas, variable templates, and ... but not generalized lambda capture, ... (and so on). [...]
I understand what you want. I added more specific requirements.
[...] However, there is quite a bit of imprecision in the documentation regarding which functions should return a reference and which ones should return copies, so right now it is unclear when the above is actually valid. That will be addressed by this issue [3]. OK, I understand. If the issue affects Hana's design (such as purity, or which algorithms can (or cannot) take a reference and/or modify elements while iterating), opening a mini-review might be better.
No, the issue is not Hana's design, but rather the current implementation. [snip] Regards, Louis

The formal review of Louis Dionne's Hana library begins today,10th June and ends on 24th June.
Hana is a header-only library for C++ metaprogramming that provides facilities for computations on both types and values. It provides a superset of the functionality provided by Boost.MPL and Boost.Fusion but with more expressiveness, faster compilation times, and faster (or equal) run times.
<snip>,
- Whether you believe the library should be accepted into Boost * Conditions for acceptance
Yes, I vote accept. No conditions, although Hana's demand for the latest C++14 features likely implies a need for timely documentation updates as compilers improve.
- Your name
Charley Bay
- Your knowledge of the problem domain.
Very familiar with template metaprogramming, including active development/support of large production code bases using templates and metaprogramming language features. - What is your evaluation of the library's:
* Design
Clean design, with high-value "wrappers/syntactic-sugar".
* Implementation
Very elegant implementation, including novel template implementation approaches.
* Documentation
Very good, although volatile because:
(1) High demand on the latest C++ language features (compiler support is evolving)
(2) New usage patterns are likely to evolve (due to the nature of what the library provides)
* Tests
Header-only library, and compile-time tests are great. More are always good. Perhaps excessively expensive compile-time tests could be added or removed with an #ifdef...#endif.
* Usefulness
Very useful, as a unifying library solving problems previously addressed through multiple libraries and similar-but-not-the-same APIs and usage patterns. Unifying metaprogramming for both types and values is quite novel, and will likely lead to new use patterns not-yet appreciated. IMHO, this is likely the most important reason for addition to Boost. The second reason would be its elegance in using new C++14 patterns and conventions (as TMP has evolved).
- Did you attempt to use the library? If so: * Which compiler(s) * What was the experience? Any problems? - How much effort did you put into your evaluation of the review?
Extensive study of the documentation, and attended or watched all talks on this library over the past couple years. Some light application-use of the library (specific to the examples in the documentation). I am planning to experiment with specific library use cases, but am hampered by spotty compiler support for C++14 language features. --charley

charleyb123 . <charleyb123 <at> gmail.com> writes:
[...]
* Documentation
Very good, although volatile because:
(1) High demand on latest C++ language features (compiler support is evolving)
(2) New usage patterns are likely to evolve (due to the nature of what the library provides)
This is definitely true. Since this is a new paradigm, I'll have to update the documentation to reflect the best ways to use the library as we discover them. I'm sure there are a lot of things we can do with it that we don't know about yet. I'm also sure there are some ways in which the library should not be used, and we'll discover those too. The documentation will have to be kept up to date.
* Tests
Header-only library, and compile-time tests are great. More are always good. Perhaps excessively expensive compile-time tests could be added or removed with an #ifdef...#endif.
There's a target named `tests.quick` (which is still not that quick). It only runs the most important and least time-consuming tests. It's also a good idea to run `make examples -j4` before even trying to run the tests, since obvious failures are likely to pop up when compiling just the examples, which are much faster.
* Usefulness
Very useful, as a unifying library solving problems previously addressed through multiple libraries and similar-but-not-the-same APIs and usage patterns.
Unifying metaprogramming for both types and values is quite novel, and will likely lead to new use patterns not-yet appreciated. IMHO, this is likely the most important reason for addition to Boost. The second reason would be its elegance in using new C++14 patterns and conventions (as TMP has evolved).
- Did you attempt to use the library? If so: * Which compiler(s) * What was the experience? Any problems? - How much effort did you put into your evaluation of the review?
Extensive study of the documentation, and attended or watched all talks on this library over the past couple years. Some light application-use of the library (specific to the examples in the documentation).
I am planning to experiment with specific library use cases, but am hampered by spotty compiler support for C++14 language features.
Thanks a lot for your review, Charley. I'd also like to thank you for your comments and all the discussions we've had at C++Now, which contributed to making the library what it is. Regards, Louis
participants (7)
-
charleyb123 .
-
Glen Fernandes
-
Kohei Takahashi
-
Louis Dionne
-
Paul Fultz II
-
Vicente J. Botet Escriba
-
Zach Laine