Hi folks, does anyone know what's up with this so-called "multiset" I just found while fiddling with the source code? The poor thing seems to have been left unfinished :( Is there any reason I shouldn't just go ahead and try bringing it to life? I would definitely be up to take the first step, I just don't know whether or not I should carry on with it; I mean, isn't the whole purpose of all this to encourage people like myself to take the step ahead and just do it? Anyway, MPL seems to have seen better days of activity back in the day; there are still lots of opportunities for improvement, and I for one would be glad to help develop some fancier features. I'd be glad to hear what you guys have to say. Best regards, *Bruno C. O. Dutra*
On 11/02/2015 01:25, Bruno Dutra wrote:
Anyway, MPL seems to have seen better days of activity back in the day; there are still lots of opportunities for improvement, and I for one would be glad to help develop some fancier features.
MPL is not undergoing active development anymore and is just being maintained. I believe however that some people were interested in doing a new C++11 version of MPL. I think the problem is that every year or so someone finds a new fancy way to do meta-programming with the latest C++ features, with noble goals of unifying MPL and Fusion, so most of these rewrites end up as experiments rather than stable libraries.
So what you are trying to say is that MPL is frozen for new features, or just that so far no attempt at improvement has proven stable enough? I mean, is there interest in new features, provided of course their merits are proven, or does the community currently understand that MPL should not be changed beyond necessary fixes? At least a backward-compatible port of MPL taking advantage of C++11 syntax should be fairly easy to achieve, with the benefits of increasing arity limits up to the compiler's variadic limits and even overcoming performance shortcomings for setups which have variadic templates enabled. All of this with no need for refactoring on the end-user side, naturally. On Feb 13, 2015 9:19 AM, "Mathias Gaunard" <mathias.gaunard@ens-lyon.org> wrote:
On 11/02/2015 01:25, Bruno Dutra wrote:
Anyways, MPL seems to have seen better days of activity back in the day,
there's lots of opportunities for improvement still and I for one would be glad to help developing some fancier features.
MPL is not undergoing active development anymore and is just being maintained.
I believe however that some people were interested in doing a new C++11 version of MPL. I think the problem is that every year or so someone finds a new fancy way to do meta-programming with the latest C++ features, with noble goals of unifying MPL and Fusion, so most of these rewrites end up as experiments rather than stable libraries.
Please don't top-post on this list. Rearranging. On Feb 13, 2015, at 8:51 AM, Bruno Dutra <brunocodutra@gmail.com> wrote:
So what you are trying to say is that MPL is frozen for new features, or just that so far no attempt at improvement has proven stable enough? I mean, is there interest in new features, provided of course their merits are proven, or does the community currently understand that MPL should not be changed beyond necessary fixes?
At least a backward-compatible port of MPL taking advantage of C++11 syntax should be fairly easy to achieve, with the benefits of increasing arity limits up to the compiler's variadic limits and even overcoming performance shortcomings for setups which have variadic templates enabled. All of this with no need for refactoring on the end-user side, naturally.
I'm sure many of us would welcome that. For comparison, you should take a look at Louis Dionne's MPL11, which did not end up backward-compatible (but is pretty amazing), and Hana, which strays even further (and combines Fusion and MPL). https://github.com/ldionne/mpl11 https://github.com/ldionne/hana
I knew there had to be someone out there addressing MPL's shortcomings in a world where C++03 is increasingly deemed old-fashioned. Hana sounds promising, I'll check it out. *Bruno C. O. Dutra*
I believe however that some people were interested in doing a new C++11 version of MPL. I think the problem is that every year or so someone finds a new fancy way to do meta-programming with the latest C++ features, with noble goals of unifying MPL and Fusion, so most of these rewrites end up as experiments rather than stable libraries.
I think Eric Niebler's meta library is a good start for a modern MPL library, and it doesn't try to unify MPL and Fusion. Paul -- View this message in context: http://boost.2283326.n4.nabble.com/mpl-multiset-tp4672187p4672238.html Sent from the Boost - Dev mailing list archive at Nabble.com.
I think Eric Niebler's meta library is a good start for a modern MPL library, and it doesn't try to unify MPL and Fusion.
Also, you can find it here: https://github.com/ericniebler/meta
2015-02-13 15:05 GMT-02:00 pfultz2 <pfultz2@yahoo.com>:
I think Eric Niebler's meta library is a good start for a modern MPL library, and it doesn't try to unify MPL and Fusion.
Also, you can find it here:
I see that there seems to be a tendency nowadays for metaprogramming libraries to inhabit the gray zone between compile-time and runtime computations, especially since the advent of generalized constexpr functions. It seems to me, though, that embracing constexpr functions within metaprogramming libraries cripples a pure metaprogramming approach in at least two respects: 1 - rather hackish syntax; 2 - an intrinsic dependency on bleeding-edge compilers and language standards.

Take Hana for example. It is an undeniably impressive piece of work for what it was designed to do, but that, in fact, is constexpr computation, which happens to depend on, and thus also provide, type computation mechanisms. Hence it suffers from the two shortcomings I mentioned. MPL on the other hand, being an [almost] pure metaprogramming library, manages to avoid them. It is compatible even with C++98; no wonder so many other Boost libraries depend on it. MPL is, however, over 10 years old and could definitely use some polishing, but I see no reason backward compatibility should be dropped, as it has been for the three examples you mentioned.

Anyway, I don't have the necessary background to make such assertions, but I believe MPL still has a lot of room to expand within its niche, and it baffles me that there has long been no group actively improving it. Perhaps I'm just too stubborn and should move on to Hana. Well, I'd definitely appreciate some thoughts. Kind regards, *Bruno C. O. Dutra*
On 19 Feb 2015 at 11:59, Bruno Dutra wrote:
Take hana for example. It is an undeniably impressive piece of work for what's been designed, but that, in fact, is constexpr computations, which happen to depend on and thus provide type computations mechanisms. Hence it suffers of the two shortcomings I mentioned.
The list may not be aware that Boost is funding a GSoC extension for Hana, which is currently about half way through. It is currently expected that Hana will reapply for an additional GSoC this summer as well. Quality software takes time. Remember Louis has a full coursework load.

Regarding compiler compatibility, Hana almost certainly should work on GCC 5.0, and Louis nearly has it working on GCC 4.9. I'd imagine the compiler which will take the longest is, as usual, MSVC, but there is a reasonable chance that a VS2017 might just provide enough for Hana, albeit with many workarounds for the lack of two-phase lookup.

The "hackish" syntax you mention is partially unavoidable with C++, and partially because constexpr was very poorly thought through as a language feature, in that it is much too hard to get constexpr to be exact. This is regrettable, but the ship has sailed now, just as we currently overload template syntax with functional compile-time programming, which is definitely hackish. If we were sensible, we would deprecate that in favour of something like D's compile-time programming syntax, but progress on introducing D-style syntax has been oddly partial (functions mostly).

Niall --- Boost C++ Libraries Google Summer of Code 2015 admin https://svn.boost.org/trac/boost/wiki/SoC2015
Regarding compiler compatibility, Hana almost certainly should work on GCC 5.0 and Louis nearly has it working on GCC 4.9. I'd imagine the compiler which will take the longest is as usual MSVC, but there is a reasonable chance that a VS2017 might just provide enough for Hana, albeit with many workarounds for the lack of two phase lookup.
That is what I mean by dependencies on bleeding-edge compilers. Sure, soon enough these currently experimental compiler versions will become mainstream across most popular setups, but what about legacy applications or embedded systems for which, as my very limited experience has taught me, fully standard-compliant compilers are rarely provided?
The "hackish" syntax you mention is partially unavoidable with C++, and partially because constexpr was very poorly thought through as a language feature in that it is much too hard to get constexpr to be exact. This is regrettable, but the ship has sailed now, just as we currently overload template syntax with functional compile time programming which is definitely hackish. If we were sensible, we would deprecate that in favour of something like D's compile time programming syntax, but progress on introducing D style syntax has been oddly partial (functions mostly).
By "hackish" I just meant that decltype-ing constexpr functions to perform type computations feels rather odd. I'm aware this is mostly a matter of personal taste, but I'm still skeptical that this idiom allows type computations as straightforwardly as traditional template overloading does. As soon as I have time I will try to implement some fancier logic predicates using Hana, like unification and SLD resolution, and compare the verbosity with a traditional approach. Hopefully it proves me wrong.

Please bear in mind, however, that I don't mean to diminish Hana by any means; on the contrary, as I said, I'm very much impressed by its elegance in accomplishing the task for which it was built. I just advocate that it fills a very different niche than its predecessor MPL, which, in my opinion, should not be left aside, especially considering its ability to be compiled by virtually any compiler version targeting any architecture of the last decade.

I'm just wondering whether the lack of development of MPL in the past years is merely incidental, as developers moved on to new frontiers, or if it indeed reflects a consensus that MPL is obsolete nowadays, which I find hard to believe. *Bruno C. O. Dutra*
On 2/19/2015 12:34 PM, Bruno Dutra wrote:
I'm just wondering whether the lack of development of MPL in the past years is merely incidental, as developers moved on to new frontiers, or if it indeed reflects a consensus that MPL is obsolete nowadays, which I find hard to believe.
I believe it reflects the fact that the two developers mostly responsible for creating MPL are not very active in Boost anymore.

A while back Stephen Kelly specified changes to MPL to get rid of hacks for old compilers that nobody should be using anymore, which would also simplify the MPL code a bit so that new development could more easily be done with it. But these changes were never made, and the opinion was voiced that Hana would supersede MPL, so why make any changes to MPL itself.

I am with you in believing that until a new metaprogramming idiom becomes more popular, such as Hana or maybe Eric's blog contribution, it is worthwhile looking at possible improvements to MPL, and it is certainly worthwhile fixing any bug reports which may have been filed against it. I can understand the nervousness with which others view any changes to MPL, since it is so heavily used by other libraries and such a core library, but I do not understand the point of view that MPL should remain as frozen as possible so as not to cause problems with the libraries which use it.
Edward Diener-3 wrote
I'm just wondering whether the lack of development of MPL in the past years is merely incidental, as developers moved on to new frontiers, or if it indeed reflects a consensus that MPL is obsolete nowadays, which I find hard to believe.
I believe it reflects the fact that the two developers mostly responsible for creating MPL are not very active in Boost anymore.
The MPL library provides functionality which is nowhere else available and which many Boost libraries depend on. I believe that the fact is that no one is responsible for it anymore. Just eliminating a few hacks in it is not the same as taking responsibility for it.

A while back Stephen Kelly specified changes to MPL to get rid of hacks for old compilers that nobody should be using anymore, and which would simplify the MPL code a bit so that new development could more easily be done with the code.

This effort had a couple of issues. First, it changed the scope of MPL from a library which would support older compilers to one which would limit support to a more modern subset. This basically amounts to a change of interface. Another problem is that making such changes is much, much trickier than first meets the eye. It would be a huge effort to do it right. The right way would be to: define MPL2, a C++11+ version of MPL; re-implement MPL in terms of C++11. This could skip any part of MPL already in the standard. Presumably it would be a much smaller effort than the original, but it would still be a very large one. Remember that MPL is the result of the efforts of two of the best C++ programmers anywhere; it would be hard to meet that standard. Of course, if anyone is willing to try, feel free.
But these changes were never made and the opinion was voiced that Hana would supersede MPL so why make any changes to MPL itself. I am with you in believing that until a new metaprogramming idiom becomes more popular, such as Hana or maybe Eric's blog contribution [...]
The jury is still out on these efforts. My concern about Hana is that it seems to have evolved way beyond what MPL does and so wouldn't serve as a "drop-in" improvement. I don't know about Eric's effort, but I believe he's said he can't bring it to the level of a Boost library, and no one has expressed any interest.
it is worthwhile looking at possible improvements in MPL and it is certainly worthwhile fixing any bug reports which may have been made against MPL.
I can understand the nervousness by which others view any changes to MPL, since it is so heavily used by other libraries and such a core library, but I do not understand the point of view that MPL should remain as frozen as possible so as not to cause problems with other libraries which use it.
I think the main concern is that someone makes a fix - which fixes his problem - but then walks away from it. Then something breaks and the next person does the same thing. That's not the same as having a maintainer take responsibility for it. This is an instance of our larger problem of finding a way to keep maintainers for libraries. It's still not even close to being solved. Robert Ramey
On 2/19/2015 5:19 PM, Robert Ramey wrote:
Edward Diener-3 wrote
I'm just wondering whether the lack of development of MPL in the past years is merely incidental, as developers moved on to new frontiers, or if it indeed reflects a consensus that MPL is obsolete nowadays, which I find hard to believe.
I believe it reflects the fact that the two developers mostly responsible for creating MPL are not very active in Boost anymore.
The MPL library provides functionality which is nowhere else available and which many Boost libraries depend on.
I believe that the fact is that no one is responsible for it anymore.
Just eliminating a few hacks in it is not the same as taking responsibility for it.
You are right about this. But MPL is a large library, and a Boost developer should not need total comprehension of it to make changes to a small part, provided those changes improve MPL in some way.
A while back Stephen Kelly specified changes to MPL to get rid of hacks for old compilers that nobody should be using anymore, and which would simplify the MPL code a bit so that new development could more easily be done with the code.
This effort had a couple of issues. First it changed the scope of mpl from a library which would support older compilers to one which would limit support to a more modern subset.
Is Boost seriously supposed to support largely obsolete compilers into perpetuity with each new release? I do not think this is viable in all cases and cannot see it as a goal of Boost libraries. At some point, with some new release, I think it is possible to say for a given library that it no longer supports a largely obsolete compiler.
This basically amounts to a change of interface.
In the case of Stephen Kelly's removal of old compiler workarounds, it does not. The interfaces would have remained the same, but hacks to support VC6 and VC7, as well as some other compilers that no one should seriously be using anymore, would have been eliminated. My own feeling is that making MPL code more understandable would easily have been worth removing these ancient hacks and dropping the compilers they supported.
Another problem is that making such changes is much, much trickier than first meets the eye. It would be a huge effort to do it right.
For the removal of MPL hacks that Stephen Kelly did I believe he got everything right. But since we never got to try it, even in 'develop', we will never know.
The right way would be to:
define MPL2 - a C++11+ version of MPL. Re-implement MPL in terms of C++11. This could skip any part of MPL already in the standard. Presumably it would be a much smaller effort than the original, but it would still be a very large effort. Remember that MPL is the result of the efforts of two of the best C++ programmers anywhere. It would be hard to meet that standard. Of course, if anyone is willing to try, feel free.
That is a much bigger change than what Stephen Kelly had tried to do. That does not mean I do not think it might be worthwhile if someone talented enough and with time enough were to do it.
But these changes were never made and the opinion was voiced that Hana would supersede MPL so why make any changes to MPL itself. I am with you in believing that until a new metaprogramming idiom becomes more popular, such as Hana or maybe Eric's blog contribution [...]
The jury is still out on these efforts. My concern about Hana is that it seems to have evolved way beyond what MPL does and so wouldn't serve as a "drop-in" improvement. I don't know about Eric's effort, but I believe he's said he can't bring it to the level of a Boost library, and no one has expressed any interest.
it is worthwhile looking at possible improvements in MPL and it is certainly worthwhile fixing any bug reports which may have been made against MPL.
I can understand the nervousness by which others view any changes to MPL, since it is so heavily used by other libraries and such a core library, but I do not understand the point of view that MPL should remain as frozen as possible so as not to cause problems with other libraries which use it.
I think the main concern is that someone makes a fix - which fixes his problem - but then walks away from it. Then something breaks and the next person does the same thing. That's not the same as having a maintainer take responsibility for it.
If anyone changes anything in a library and it breaks something else, it is that person's responsibility to fix things. This is true whether it is MPL or anything else. I agree it would be good to have a maintainer for MPL, but for a single person trying to take over from Dave and Alexey it would be a very difficult responsibility. Still, I see nothing wrong with others who have maintainer status making changes to MPL, as long as they take responsibility for what they do and, of course, realize how important and central MPL is to so many other Boost libraries, so they must be very careful.
This is an instance of our larger problem of finding a way to keep maintainers for libraries. It's still not even close to being solved.
Robert Ramey
Edward Diener-3 wrote
If anyone changes anything in a library which breaks anything else it is that person's responsibility to fix things. This is true whether it is MPL or anything else.
Agreed. The problem is that people who just need a small fix sometimes fail to appreciate the subtle repercussions of that fix. They go ahead and do it and then figure they're done - sometimes without even running the whole test suite. I don't have an alternative idea. One thing that might be useful would be to set an explicit policy like: "MPL is only guaranteed to work with MSVC 8.0 and above, GCC ?? and above, etc." I wouldn't suggest anyone go in and start ripping out support for all the old stuff. But if someone is in there in the course of fixing something else, it would be fine - especially since it can't be tested anymore anyway.
I'm just wondering whether the lack of development of MPL in the past years is merely incidental, as developers moved on to new frontiers, or if it indeed reflects a consensus that MPL is obsolete nowadays, which I find hard to believe.
I believe it reflects the fact that the two developers mostly responsible for creating MPL are not very active in Boost anymore.
The mpl library provides functionality which is no where else available and which many boost libraries depend on.
I believe that the fact is that no one is responsible for it anymore. [...] My concern about hana is that it seems to have evolved way beyond what MPL does and so wouldn't serve as a "drop-in" improvement. I don't know about Eric's effort but I believe he's said he can't bring to the level of a boost library and no one has expressed any interest.
That's exactly the point I was trying to make. Hana is awesome, but it doesn't fit as a replacement for MPL, simply because it was meant for a different purpose. I see no reason why they shouldn't live side by side, each filling its specific niche.
The right way would be to: define MPL2, a C++11+ version of MPL; re-implement MPL in terms of C++11. This could skip any part of MPL already in the standard. Presumably it would be a much smaller effort than the original, but it would still be a very large effort. Remember that MPL is the result of the efforts of two of the best C++ programmers anywhere. It would be hard to meet that standard. Of course, if anyone is willing to try, feel free. [...] One thing that might be useful would be to set an explicit policy like: "MPL is only guaranteed to work with MSVC 8.0 and above, GCC ?? and above, etc."
That is certainly an audacious idea, but I don't see a better way of seriously revamping MPL either. I think that was precisely the original intention of Louis Dionne's MPL11, which ended up deviating from its course and dropping backward API compatibility. Quoting from the MPL11 GitHub page: "This was initially supposed to be a simple C++11 reimplementation of the Boost.MPL. However, for reasons documented in the rationales, the library was redesigned and the name does not fit so well anymore." Perhaps it would be easier to just start off from MPL11 and reintroduce backward compatibility than to start anew. I wouldn't be so bold as to assert I'm up to the task, but I take it as a great opportunity for learning and amusing myself, so I'll look into it at the slow pace my busy life allows me.

That said, I'm with Edward Diener in believing there's lots of room for improvement in the library as it is, and that one should not be afraid of pushing it forward bit by bit. I understand the concerns regarding breaking compatibility for older setups, but aren't there volunteers who run test suites daily, precisely to ensure everything is working as expected across the various setups?

One thing that I noticed, though, is that the MPL test suite right now does not cover anywhere near even the documented API, let alone corner cases which should definitely be tested (running algorithms on empty sequences comes to mind). A bug fix on which I'm working right now, for instance, would have been caught a decade ago had tests been written to cover the fact, explicitly stated in the documentation, that insert_range should work for "Extensible Associative Sequences". As soon as I get this bug fixed (which is taking way longer than it should, I must admit), I'll work on expanding the current test cases based on the documentation and file a pull request for that. I just count on you Boost maintainers to help me out by reviewing and accepting my pull requests as appropriate :) -- *Bruno C. O. Dutra*
Bruno Dutra <brunocodutra <at> gmail.com> writes:
My concern about hana is that it seems to have evolved way beyond what MPL does and so wouldn't serve as a "drop-in" improvement. I don't know about Eric's effort but I believe he's said he can't bring to the level of a boost library and no one has expressed any interest.
That's exactly the point I was trying to make. Hana is awesome, but it doesn't fit as a replacement for MPL, simply because it was meant for a different purpose. I see no reason why they shouldn't live side by side, each filling its specific niche.
I'm a bit late to the game because this thread went under my radar; please excuse that, I was busy working on Hana :-).

Hana was designed as a replacement for both the MPL and Fusion. Technically, everything you can express with the MPL can be expressed with Hana; I'm working on a mathematical proof of it. The question really is whether it is _convenient_ to do so, and my current opinion is yes. However, metaprogramming with Hana requires you to stop thinking in an MPL way, i.e. with traditional metafunctions. If you keep your old MPL habits, you will end up having to decltype function call expressions all the time, which I agree is cumbersome. If you embrace this new way of metaprogramming, you will only have to use decltype at some precise boundaries: when you actually _need_ a type at the end of the whole computation. My assumption in designing Hana was that while all of our current type-level computations are implemented at the type level, this is just a side effect of using the MPL, and we actually need the types only at some thin boundaries.

I suspect that (1) the lack of guidelines for doing type-level metaprogramming in Hana and (2) the lack of a serious case study (e.g. implementing Phoenix with Hana) could be why people don't see Hana as fit for an MPL replacement; they don't see how we can achieve type-level metaprogramming easily, which is legitimate given (1) and (2). I clarified the tutorial[1] to improve the situation of (1), but there is still room for improvement. As for (2), I guess I am the one who should take the lead; if someone has a suggestion for a good guinea pig, please let me know. Otherwise, I started implementing the core of Accumulators with Hana; we'll see how that goes.

FWIW, I am personally not in favor of having two metaprogramming libraries living side by side. This causes code duplication and interoperation issues, and it also increases the learning curve.
Also, you won't be able to backport MPL11 to MPL because I diverged from the MPL in a rather major way by using lazy metafunctions in MPL11. They are more composable and you also don't have to write "typename" everywhere, but it breaks backward compatibility pretty severely.

As a final note: if you have comments, doubts or other thoughts about Hana, please express them by either posting on this list or (even better) opening a GitHub[2] issue. If I know what people need, I can make sure that Hana satisfies them properly. I will be asking for an informal review shortly, so stay tuned.

Regards, Louis

[1]: http://ldionne.github.io/hana
[2]: https://github.com/ldionne/hana
On 2/22/2015 5:36 PM, Louis Dionne wrote:
I'm a bit late in the game because this thread has gone under my radar; please excuse that, I was busy working on Hana :-).
Hana was designed as a replacement for both the MPL and Fusion. Technically, everything you can express with the MPL can be expressed with Hana; I'm working on a mathematical proof for it. The question really is whether it is _convenient_ to do so, and my current opinion is yes. However, metaprogramming with Hana requires you to stop thinking in a MPL way, i.e. with traditional metafunctions. If you keep your old MPL habits, you will end up having to decltype function call expressions all the time, which I agree is cumbersome. If you embrace this new way of metaprogramming, you will only have to use decltype at some precise boundaries; when you actually _need_ a type at the end of the whole computation. My assumption in designing Hana was that while all of our current type-level computations are implemented at the type-level, this is just a side effect of using the MPL and we actually need the types only at some thin boundaries.
I suspect (1) the lack of guidelines for doing type-level metaprogramming in Hana (2) the lack of a serious case study (e.g. implementing Phoenix with Hana) could be why people don't see Hana as being fit for a MPL replacement; they don't see how we can achieve type-level metaprogramming easily, which is legitimate given (1) and (2).
I clarified the tutorial[1] to improve the situation of (1), but there is still room for improvement. As for (2), I guess I am the one who should take the lead; if someone has a suggestion for a good guinea pig, please let me know. Otherwise, I started implementing the core of Accumulators with Hana; we'll see how that goes.
Hana can be a completely differently designed library from MPL, as you have expressed it, without negating its value in any way.
FWIW, I am personally not in favor of having two metaprogramming libraries living side by side. This causes code duplication and interoperation issues, and it also increases the learning curve. Also, you won't be able to backport MPL11 to MPL because I diverged from the MPL in a rather major way by using lazy metafunctions in MPL11. They are more composable and you also don't have to write "typename" everywhere, but it breaks backward compatibility pretty severely.
Numerous Boost libraries currently use MPL. If you want to have Hana as a Boost library, if it is accepted after review, I think you will have to get used to the idea that Boost will have more than one metaprogramming library. If Hana proves popular enough with metaprogrammers they will switch away from MPL to Hana. I do not know what you mean by "two metaprogramming libraries living side by side" but I believe that two libraries whose purposes are similar but whose programming interfaces are different is never a detriment to Boost as long as both are quality libraries which other programmers find useful.
As a final note: if you have comments, doubts or other thoughts about Hana, please express them by either posting on this list or (even better) opening a GitHub[2] issue. If I know what people need, I can make sure that Hana satisfies them properly.
I will be asking for an informal review shortly, so stay tuned.
Regards, Louis
[1]: http://ldionne.github.io/hana [2]: https://github.com/ldionne/hana
Edward Diener <eldiener <at> tropicsoft.com> writes:
[...]
Numerous Boost libraries currently use MPL. If you want to have Hana as a Boost library, if it is accepted after review, I think you will have to get used to the idea that Boost will have more than one metaprogramming library. If Hana proves popular enough with metaprogrammers they will switch away from MPL to Hana.
I do not know what you mean by "two metaprogramming libraries living side by side" but I believe that two libraries whose purposes are similar but whose programming interfaces are different is never a detriment to Boost as long as both are quality libraries which other programmers find useful.
I apologize; what I said was unclear. I did not mean that the MPL (or Fusion for that matter) should go away. I also do not expect people to port old code from MPL/Fusion to Hana, except in some rare cases. Actually, Hana even provides interoperation with MPL and Fusion, so I do recognize the importance of these libraries. What I meant is that I think we should strive for a unified treatment of metaprogramming in the long term. Hence, I would hope that _new_ libraries are written against Hana instead of MPL/Fusion, provided of course those libraries are meant to be used on modern compilers. As for "two metaprogramming libraries living side by side", I meant that working on a _new_ C++11/14 type-only MPL is a bad idea IMO, because Hana is a strict superset of such a type-level-only library. Of course, this is my biased opinion, and it is why I switched from MPL11 to Hana; I thought it was more promising. However, I have absolutely nothing bad to say about a revamping of the current MPL in a backward compatible way, except that it might be hard (but not impossible) to achieve without breaking some code. Actually, if someone is interested in attempting a backward compatible MPL, I could probably provide a good part of the code by just looking at an older snapshot of the MPL11, before backward compatibility was broken. Louis
Hana was designed as a replacement for both the MPL and Fusion. Technically, everything you can express with the MPL can be expressed with Hana; I'm working on a mathematical proof for it. The question really is whether it is _convenient_ to do so, and my current opinion is yes. However, metaprogramming with Hana requires you to stop thinking in a MPL way, i.e. with traditional metafunctions. If you keep your old MPL habits, you will end up having to decltype function call expressions all the time, which I agree is cumbersome. If you embrace this new way of metaprogramming, you will only have to use decltype at some precise boundaries; when you actually _need_ a type at the end of the whole computation. My assumption in designing Hana was that while all of our current type-level computations are implemented at the type-level, this is just a side effect of using the MPL and we actually need the types only at some thin boundaries.
Exactly, the question is all about convenience.
I suspect (1) the lack of guidelines for doing type-level metaprogramming in Hana (2) the lack of a serious case study (e.g. implementing Phoenix with Hana) could be why people don't see Hana as being fit for a MPL replacement; they don't see how we can achieve type-level metaprogramming easily, which is legitimate given (1) and (2).
I clarified the tutorial[1] to improve the situation of (1), but there is still room for improvement. As for (2), I guess I am the one who should take the lead; if someone has a suggestion for a good guinea pig, please let me know. Otherwise, I started implementing the core of Accumulators with Hana; we'll see how that goes.
I too believe that once people start seeing some nice examples comparing both approaches, and if Hana does present a neater syntax, people will become less resistant to switching. Since Hana takes advantage of the newest additions to C++, it should indeed be more concise than MPL, I believe. Btw, the tutorial is indeed much clearer now, thank you for improving it.
FWIW, I am personally not in favor of having two metaprogramming libraries living side by side. This causes code duplication and interoperation issues, and it also increases the learning curve. Also, you won't be able to backport MPL11 to MPL because I diverged from the MPL in a rather major way by using lazy metafunctions in MPL11. They are more composable and you also don't have to write "typename" everywhere, but it breaks backward compatibility pretty severely.
That's where our opinions diverge a bit. Could Hana be compiled anywhere MPL is, then sure, why not just switch to something with a neater syntax? It would be a natural process for developers to start porting their code. However, I'm afraid that's not the case; in fact it never could be, for by simply depending on C++14 it denies support to a whole range of platforms, and will continue to for many, many years. In my opinion, one of the greatest feats of Boost is that its libraries mostly support a very wide range of setups, and it is no wonder so many Boost libraries make use of MPL for that matter. The way I foresee it, applications will be using Hana, while libraries will stick to MPL. *Bruno C. O. Dutra*
Edward Diener wrote:
A while back Stephen Kelly specified changes to MPL to get rid of hacks for old compilers that nobody should be using anymore, and which would simplify the MPL code a bit so that new development could more easily be done with the code. But these changes were never made
The removal of the old code was committed (see 'gitk steveire' in mpl.git). Daniel James reverted it. Feel free to revert the revert. Thanks, Steve.
On 2/27/2015 3:54 PM, Stephen Kelly wrote:
Edward Diener wrote:
A while back Stephen Kelly specified changes to MPL to get rid of hacks for old compilers that nobody should be using anymore, and which would simplify the MPL code a bit so that new development could more easily be done with the code. But these changes were never made
The removal of the old code was committed (see 'gitk steveire' in mpl.git).
Daniel James reverted it.
Feel free to revert the revert.
I am not the maintainer of MPL, although I have been granted rights to change it. I don't think MPL has a primary maintainer any more so any large change needs a consensus AFAICS. The consensus appeared to be that MPL should keep the old code. I agreed with you instead. But I am not going to go against the consensus of others unless there is an updated general consensus that agrees that MPL need no longer support the old compilers that it does.
On Feb 27, 2015 6:23 PM, "Edward Diener" <eldiener@tropicsoft.com> wrote:
On 2/27/2015 3:54 PM, Stephen Kelly wrote:
[...] The removal of the old code was committed (see 'gitk steveire' in
mpl.git).
Daniel James reverted it.
Feel free to revert the revert.
I am not the maintainer of MPL, although I have been granted rights to change it. I don't think MPL has a primary maintainer any more so any large change needs a consensus AFAICS.
The consensus appeared to be that MPL should keep the old code. I agreed with you instead. But I am not going to go against the consensus of others unless there is an updated general consensus that agrees that MPL need no longer support the old compilers that it does.
I'm not familiar with the exact changes to really know which compilers would no longer be supported, but as a general rule, any compiler which is not regularly tested is, by definition, not strictly supported. Perhaps it would be a good idea to leave changes merged on the develop branch for a while so that any testers of affected compilers could report failures. This could serve as a good indication of which compilers are regarded as obsolete. Meanwhile, how should one proceed to assess the current consensus on this topic? Would it suffice to just open a discussion on this mailing list? Cheers, Bruno
On 2/27/2015 5:01 PM, Bruno Dutra wrote:
On Feb 27, 2015 6:23 PM, "Edward Diener" <eldiener@tropicsoft.com> wrote:
On 2/27/2015 3:54 PM, Stephen Kelly wrote:
[...] The removal of the old code was committed (see 'gitk steveire' in
mpl.git).
Daniel James reverted it.
Feel free to revert the revert.
I am not the maintainer of MPL, although I have been granted rights to change it. I don't think MPL has a primary maintainer any more so any large change needs a consensus AFAICS.
The consensus appeared to be that MPL should keep the old code. I agreed with you instead. But I am not going to go against the consensus of others unless there is an updated general consensus that agrees that MPL need no longer support the old compilers that it does.
I'm not familiar with the exact changes to really know which compilers would not be supported anymore, but as a general rule, any compiler which is not regularly tested is, by definition, not strictly supported.
I do not believe that any of the compilers whose support Stephen Kelly removed from MPL are currently tested in the regression test matrix. These were really old compiler releases whose support in MPL he intended to remove. The hacks to support some of those really old compilers were brilliant, but they tend to noticeably complicate understanding MPL code in a number of situations.
Perhaps it would be a good idea to leave changes merged on the develop branch for a while so that any testers of affected compilers could report failures. This could serve as a good indication of which compilers are regarded as obsolete.
That could certainly be done.
Meanwhile, how should one proceed to assess current consensus regarding this topic? Would it suffice to just open a discussion on this mailing list?
That would suffice IMO. Should be its own topic however.
Le 28/02/15 02:32, Edward Diener a écrit :
On 2/27/2015 5:01 PM, Bruno Dutra wrote:
Perhaps it would be a good idea to leave changes merged on the develop branch for a while so that any testers of affected compilers could report failures. This could serve as a good indication of which compilers are regarded as obsolete.
That could certainly be done.
This is what Stephen did, with no success. I suggest you make a new MPL V2 that has its own folder (module).
Meanwhile, how should one proceed to assess current consensus regarding this topic? Would it suffice to just open a discussion on this mailing list?
I'm not sure what you want to do. Do you want to modify the MPL library or just do a GSoC project?
That would suffice IMO. Should be its own topic however.
Yes, please start a new thread independently of the concrete extensions. Vicente
On 2/28/2015 11:03 AM, Vicente J. Botet Escriba wrote:
Le 28/02/15 02:32, Edward Diener a écrit :
On 2/27/2015 5:01 PM, Bruno Dutra wrote:
Perhaps it would be a good idea to leave changes merged on the develop branch for a while so that any testers of affected compilers could report failures. This could serve as a good indication of which compilers are regarded as obsolete.
That could certainly be done.
This is what Stephen did with no success.
No. It never was tested. It was reverted before any testing was done on the 'develop' branch with those changes.
I suggest you make a new MPL V2 that has its own folder (module).
Meanwhile, how should one proceed to assess current consensus regarding this topic? Would it suffice to just open a discussion on this mailing list?
I'm not sure what you want to do. Do you want to modify the MPL library or just do a GSoC project?
That would suffice IMO. Should be its own topic however.
Yes, please start a new thread independently of the concrete extensions.
Le 28/02/15 18:13, Edward Diener a écrit :
On 2/28/2015 11:03 AM, Vicente J. Botet Escriba wrote:
Le 28/02/15 02:32, Edward Diener a écrit :
On 2/27/2015 5:01 PM, Bruno Dutra wrote:
Perhaps it would be a good idea to leave changes merged on the develop branch for a while so that any testers of affected compilers could report failures. This could serve as a good indication of which compilers are regarded as obsolete.
That could certainly be done.
This is what Stephen did with no success.
No. It never was tested. It was reverted before any testing was done on the 'develop' branch with those changes.
How can you be sure that there was no tester? When I said that there was no success, I meant that the commit had to be reverted because there was no consensus. Vicente
Edward Diener wrote:
No. It never was tested. It was reverted before any testing was done on the 'develop' branch with those changes.
I'm not sure that lack of testing on current compilers is the main issue. The main issue is "lack of consensus" for dropping support for old compilers, which aren't tested. I'll say again what I always say when this issue comes up: these compilers are old. They are unlikely to be able to compile new Boost libraries. People who use these compilers can just use older Boost releases (and are probably forced to anyway). We should drop VC++6/7, bcc32, dmc, old sun support from MPL to make it more maintainable - provided that it is going to be maintained at all, of course.
On 28/02/2015 18:43, Peter Dimov wrote:
Edward Diener wrote:
No. It never was tested. It was reverted before any testing was done on the 'develop' branch with those changes.
I'm not sure that lack of testing on current compilers is the main issue. The main issue is "lack of consensus" for dropping support for old compilers, which aren't tested.
I'll say again what I always say when this issue comes up: these compilers are old. They are unlikely to be able to compile new Boost libraries. People who use these compilers can just use older Boost releases (and are probably forced to anyway).
We should drop VC++6/7, bcc32, dmc, old sun support from MPL to make it more maintainable - provided that it is going to be maintained at all, of course.
+1 Requiring a compiler under 10 years old isn't such a stretch, and as you say new code is written for new compilers anyway... plus the old code is never tested on old compilers, so it is likely to be broken in strange and surprising ways (by dependencies breaking, if not patches to the library in question). Aside: I believe the original changes were reverted for a number of reasons, but a lack of consensus was certainly one ("don't rip apart someone else's library without due process" etc). They possibly did both too little and too much as well - dropping support for old compilers without really properly cleaning up and modernising the code (which would be a lot of work). Or to put it another way - if you're going to change such a core library, then the gains had better be big ones; otherwise best leave alone. John.
On 2/28/2015 1:58 PM, John Maddock wrote:
On 28/02/2015 18:43, Peter Dimov wrote:
Edward Diener wrote:
No. It never was tested. It was reverted before any testing was done on the 'develop' branch with those changes.
I'm not sure that lack of testing on current compilers is the main issue. The main issue is "lack of consensus" for dropping support for old compilers, which aren't tested.
I'll say again what I always say when this issue comes up: these compilers are old. They are unlikely to be able to compile new Boost libraries. People who use these compilers can just use older Boost releases (and are probably forced to anyway).
We should drop VC++6/7, bcc32, dmc, old sun support from MPL to make it more maintainable - provided that it is going to be maintained at all, of course.
+1
Requiring a compiler under 10 years old isn't such a stretch, and as you say new code is written for new compilers anyway... plus the old code is never tested on old compilers, so it is likely to be broken in strange and surprising ways (by dependencies breaking, if not patches to the library in question).
Aside: I believe the original changes were reverted for a number of reasons, but a lack of consensus was certainly one ("don't rip apart someone else's library without due process" etc). They possibly did both too little and too much as well - dropping support for old compilers without really properly cleaning up and modernising the code (which would be a lot of work). Or to put it another way - if you're going to change such a core library, then the gains had better be big ones; otherwise best leave alone.
I don't believe Stephen Kelly's changes attempted to do anything but clean out support for compilers which are already largely obsolete. To me this seemed worthwhile as a starting point to make MPL more understandable, so that when others want to fix bugs in the code, or make changes or additions which improve the library, it becomes much easier to understand the code without all the old hacks in place. Also, when such support is removed, the programmer does not have to worry about some change fouling up an old, obsolete compiler which does not handle C++ very well to begin with. To me such a cleanup is worth it even if no big change is otherwise being made. I am a proponent of incremental changes, to make sure everything works as one goes along. That makes testing much easier, and makes it far easier to see where one went wrong than attempting too big a change at once.
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of John Maddock Sent: 28 February 2015 18:58 To: boost@lists.boost.org Subject: Re: [boost] [mpl] Abandoning old compilers
Or to put it another way - if you're going to change such a core library, then the gains had better be big ones, otherwise best leave alone.
+1 FWIW Paul PS Better start again with Son of MPL (but perhaps we have that in gestation in Hana?). MPL was always forcing something to do things it was not meant to do. It looks grotesque compared to a language feature doing it properly.
On 02/28/2015 10:43 AM, Peter Dimov wrote:
Edward Diener wrote:
No. It never was tested. It was reverted before any testing was done on the 'develop' branch with those changes.
I'm not sure that lack of testing on current compilers is the main issue. The main issue is "lack of consensus" for dropping support for old compilers, which aren't tested.
I'll say again what I always say when this issue comes up: these compilers are old. They are unlikely to be able to compile new Boost libraries. People who use these compilers can just use older Boost releases (and are probably forced to anyway).
+1 -- Michael Caisse ciere consulting ciere.com
Peter Dimov-2 wrote
Edward Diener wrote: We should drop VC++6/7, bcc32, dmc, old sun support from MPL to make it more maintainable - provided that it is going to be maintained at all, of course.
Wouldn't it be best just to leave this question to the person who actually takes responsibility for doing the maintenance? In practical terms, the best usage of one's time is to address the stuff that's broken and leave the other stuff alone. Supporting compilers over a certain vintage shouldn't be a requirement, but neither should it be a requirement that a library NOT support an older compiler or that it be (re)implemented using some newer version. The operative principle is a) that it respect the current/traditional interface unless there is a good reason to break it, and b) that it work with some defined set of recent compilers. The rest is up to the person responsible for the library. What I don't want to see implemented is the concept of "drive-by" maintenance, whereby someone makes a "fix" and moves on. There has to be some particular person who takes responsibility for consistency and continuity. Robert Ramey -- View this message in context: http://boost.2283326.n4.nabble.com/mpl-multiset-tp4672187p4672592.html Sent from the Boost - Dev mailing list archive at Nabble.com.
On 2/28/2015 5:50 PM, Robert Ramey wrote:
Peter Dimov-2 wrote
Edward Diener wrote: We should drop VC++6/7, bcc32, dmc, old sun support from MPL to make it more maintainable - provided that it is going to be maintained at all, of course.
Wouldn't it be best just to leave this question to the person who actually takes responsibility for doing the maintenance? In practical terms, the best usage of one's time is to address the stuff that's broken and leave the other stuff alone. Supporting compilers over a certain vintage shouldn't be a requirement, but neither should it be a requirement that a library NOT support an older compiler or that it be (re)implemented using some newer version. The operative principle is
a) that it respect the current/traditional interface unless there is a good reason to break it. b) that it work with some defined set of recent compilers.
The rest is up to the person responsible for the library.
There is no one person responsible for MPL anymore AFAIK. In fact I think the Boost maintenance team also has access to change it.
What I don't want to see implemented is the concept of "drive-by" maintenance whereby someone makes a "fix" and moves on. There has to be some particular person who takes responsibility for consistency and continuity.
Robert Ramey
AMDG On 02/28/2015 04:28 PM, Edward Diener wrote:
On 2/28/2015 5:50 PM, Robert Ramey wrote:
The rest is up to the person responsible for the library.
There is no one person responsible for MPL anymore AFAIK. In fact I think the Boost maintenance team also has access to change it.
This is exactly why I'm against applying Stephen's patches. The community maintenance team cannot substitute for a real maintainer. It works okay as long as the libraries in question are mostly stable, but IMHO, we don't have the thorough understanding of the libraries needed to make any major changes safely. In Christ, Steven Watanabe
On 2/28/2015 6:48 PM, Steven Watanabe wrote:
AMDG
On 02/28/2015 04:28 PM, Edward Diener wrote:
On 2/28/2015 5:50 PM, Robert Ramey wrote:
The rest is up to the person responsible for the library.
There is no one person responsible for MPL anymore AFAIK. In fact I think the Boost maintenance team also has access to change it.
This is exactly why I'm against applying Stephen's patches. The community maintenance team cannot substitute for a real maintainer. It works okay as long as the libraries in question are mostly stable, but IMHO, we don't have the thorough understanding of the libraries needed to make any major changes safely.
Then who gets to make a change to a library? Only a person who is willing to be the sole maintainer of that library? That seems very limiting to me. If there is a group of maintainers for a library, whether they are members of the community maintenance team or just those who have been granted write access to that library, and one of those maintainers is willing to make changes and follow them through after getting a consensus of the others involved, why should that not be allowed in principle? MPL may be too big or too difficult to make major changes to without a thorough knowledge of the library, but in the case of Stephen Kelly's patches to remove outdated compiler support, I think that was doable without having to understand all the code in the library.
AMDG On 02/28/2015 08:37 PM, Edward Diener wrote:
Then who gets to make a change to a library? Only a person who is willing to be the sole maintainer of that library? That seems very limiting to me. If there is a group of maintainers for a library, whether they are members of the community maintenance team or just those who have been granted write access to that library, and one of those maintainers is willing to make changes and follow them through after getting a consensus of the others involved, why should that not be allowed in principle?
I don't object to a library having more than one maintainer, in principle. The trouble is that MPL has none. Like I said, I don't think that the CMT really counts.
MPL may be too big or too difficult to make major changes without a thorough knowledge of the library but in the case of Stephen Kelly's patches to it to remove outdated compiler support I think that was doable without having to understand all the code in the library.
A very dangerous notion. Anyway, making any changes is a risk, and I don't see how there's any real benefit if no one is planning to make any significant changes (which would, of course, require a real maintainer). In Christ, Steven Watanabe
On 2/28/2015 1:43 PM, Peter Dimov wrote:
Edward Diener wrote:
No. It never was tested. It was reverted before any testing was done on the 'develop' branch with those changes.
I'm not sure that lack of testing on current compilers is the main issue. The main issue is "lack of consensus" for dropping support for old compilers, which aren't tested.
I'll say again what I always say when this issue comes up: these compilers are old. They are unlikely to be able to compile new Boost libraries. People who use these compilers can just use older Boost releases (and are probably forced to anyway).
We should drop VC++6/7, bcc32, dmc, old sun support from MPL to make it more maintainable - provided that it is going to be maintained at all, of course.
I believe it can be maintained by people who are interested in improving it if necessary. I doubt any one person can do it by himself. Certainly the person changing MPL has to be responsible for his changes and make sure any change is checked against the regression tests of not only MPL but all the other libraries that use MPL. My reason for supporting the changes which Stephen Kelly wanted to make is that hackery, no matter how brilliant, for compilers whose support for the C++ standard is poor tends to obfuscate the understanding of how code works. If said compilers are outdated and have been superseded by newer versions which the vast majority of programmers now use, or if the compiler is no longer supported or marketed, I can understand dropping such workarounds rather than carrying them around forever. Code then becomes easier to understand and change. One does not have to worry about some old compiler, which no one rationally should be using, when making additions or changes to a library. The versions you mention above nobody is using seriously anymore unless they are forced to in their job, in which case most of current Boost is unusable anyway for such old compiler versions.
Edward Diener-3 wrote
My reason for supporting the changes for MPL which Stephen Kelly wanted to make is that hackery, no matter how brilliant, for compilers whose support for the C++ standard is poor tends to obfuscate the understanding of how code works. If said compilers are outdated and have been superseded by newer versions which the vast majority of programmers now use, or if the compiler is no longer supported or marketed, I can understand dropping such workarounds rather than carrying them around forever. Code then becomes easier to understand and change. One does not have to worry about some old compiler which no one rationally should be using when making additions or changes to a library.
I think we're mixing a couple of things here. Stephen's changes were not just to one library he was willing to maintain and take responsibility for, but to the whole of Boost. No one could be responsible for that. I think he underestimated the subtle repercussions of changes which seemed at first glance to be innocuous. In his defense, I think that is very easy to do, since a lot of the hackery is as non-obvious as it is necessary. So let's set Stephen's changes apart from this discussion. The maintainer has to have the option of doing things in the way he thinks is most expedient. In some cases, that will mean just leaving things as they are; in other cases it will mean throwing out an implementation of some feature and replacing it with a simpler one which is easier to maintain and verify. No one can make an a priori policy for that. Robert Ramey -- View this message in context: http://boost.2283326.n4.nabble.com/mpl-multiset-tp4672187p4672596.html Sent from the Boost - Dev mailing list archive at Nabble.com.
On 2/28/2015 6:28 PM, Robert Ramey wrote:
Edward Diener-3 wrote
My reason for supporting the changes for MPL which Stephen Kelly wanted to make is that hackery, no matter how brilliant, for compilers whose support for the C++ standard is poor tends to obfuscate the understanding of how code works. If said compilers are outdated and have been superseded by newer versions which the vast majority of programmers now use, or if the compiler is no longer supported or marketed, I can understand dropping such workarounds rather than carrying them around forever. Code then becomes easier to understand and change. One does not have to worry about some old compiler which no one rationally should be using when making additions or changes to a library.
I think we're mixing a couple of things here. Stephen's changes were not just one library he was willing to maintain and take responsibility for, but the whole of boost.
When I spoke of Stephen Kelly's change I meant only those for MPL.
No one could be responsible for that. I think he underestimated the subtle repercussions of changes which seemed at first glance to be innocuous. In his defense, I think that is very easy to do since a lot of the hackery is as non-obvious as it is necessary. So let's set Stephen's changes apart from this discussion.
The maintainer has to have the option of doing things in the way he thinks is most expedient. In some cases, that will mean just leaving things as they are, and in other cases that will mean throwing out an implementation of some feature and replacing it with a simpler one which is easier to maintain and verify. No one can make an a-priori policy for that.
I was not promoting some a-priori policy, only arguing that the removal of support in MPL for compilers which are ancient, non-conforming, and barely usable in creating modern C++ code is a viable goal in order to make MPL more easily understandable. I agree with you that whoever makes changes to any library is responsible for seeing through those changes. I do not agree with you that this can only be done if a library has a single maintainer in charge of everything. The latter may be a laudable goal, and I do think that the creator of a library, if he is still active, makes the final call on changes, but the creator(s) of MPL are no longer active on Boost or maintaining the library. Do we then say that because a library no longer has a primary maintainer who created the library that no changes to the library except for bug fixes should ever be made? That is very unrealistic considering that very few creators of a library are willing to maintain that library perpetually, which is only natural.
AMDG On 02/28/2015 08:20 PM, Edward Diener wrote:
Do we then say that because a library no longer has a primary maintainer who created the library that no changes to the library except for bug fixes should ever be made?
Absolutely, unless someone new is willing to take responsibility for the library. I would be much more conservative about this for MPL than for most other libraries, given how fundamental it is.
That is very unrealistic considering that very few creators of a library are willing to maintain that library perpetually, which is only natural.
In Christ, Steven Watanabe
On 1 March 2015 at 03:40, Steven Watanabe <watanabesj@gmail.com> wrote:
AMDG
On 02/28/2015 08:20 PM, Edward Diener wrote:
Do we then say that because a library no longer has a primary maintainer who created the library that no changes to the library except for bug fixes should ever be made?
Absolutely, unless someone new is willing to take responsibility for the library. I would be much more conservative about this for MPL than for most other libraries, given how fundamental it is.
That is very unrealistic considering that very few creators of a library are willing to maintain that library perpetually, which is only natural.
I'm not really following this thread, but FWIW I reverted the changes for 2 reasons: 1) they are large and non-trivial, and no one seems to have a particularly good understanding of the library, 2) Many of the corresponding changes in dependent libraries haven't been merged. I was especially worried about dependants because I had just discovered (by coincidence) that a change in type traits had broken several libraries on master and no one had noticed. These things really aren't adequately monitored. There's also the possibility that these changes can break third party code - the issue of what is and isn't private to boost has never been clear. I really think people underestimate the value of a stable code base. I imagine all of this would be a lot easier if boost had separate stable and unstable releases. But we don't, so IMO the best way forward for MPL is to have a new, unstable version, probably concentrating on recent versions of the standard. Alternatively the plan to factor out the "core" of the library might result in a simpler, easier to maintain core containing things people care about, and a less stable repo for the rest of it.
Edward Diener wrote:
On 2/27/2015 3:54 PM, Stephen Kelly wrote:
Edward Diener wrote:
A while back Stephen Kelly specified changes to MPL to get rid of hacks for old compilers that nobody should be using anymore, and which would simplify the MPL code a bit so that new development could more easily be done with the code. But these changes were never made.
The removal of the old code was committed (see 'gitk steveire' in mpl.git).
Daniel James reverted it.
Feel free to revert the revert.
I am not the maintainer of MPL, although I have been granted rights to change it. I don't think MPL has a primary maintainer any more so any large change needs a consensus AFAICS.
The consensus appeared to be that MPL should keep the old code. I agreed with you instead. But I am not going to go against the consensus of others unless there is an updated general consensus that agrees that MPL need no longer support the old compilers that it does.
You can consider what I wrote to be an invitation to anyone/everyone on this list to decide to coordinate. Good luck! Steve.
On 13/02/15 18:03, pfultz2 wrote:
I believe however that some people were interested in doing a new C++11 version of MPL. I think the problem is that every year or so someone finds a new fancy way to do meta-programming with the latest C++ features, with noble goals of unifying MPL and Fusion, so most of these rewrites end up as experiments rather than stable libraries. I think Eric Niebler's meta library is a good start for a modern MPL library, and it doesn't try to unify MPL and Fusion.
+1 While I appreciate the work on Hana, I believe a pure meta-programming (pure functional) C++11/C++14 library would make things easier. What about enriching Eric's library on a GSoC project? Vicente
On 19 Feb 2015 at 15:39, Vicente J. Botet Escriba wrote:
On 13/02/15 18:03, pfultz2 wrote:
I believe however that some people were interested in doing a new C++11 version of MPL. I think the problem is that every year or so someone finds a new fancy way to do meta-programming with the latest C++ features, with noble goals of unifying MPL and Fusion, so most of these rewrites end up as experiments rather than stable libraries. I think Eric Niebler's meta library is a good start for a modern MPL library, and it doesn't try to unify MPL and Fusion.
+1
While I appreciate the work on Hana, I believe a pure meta-programming (pure functional) C++11/C++14 library would make things easier.
What about enriching Eric's library on a GSoC project?
Sufficiently able students are extremely tough to find. Also, Eric's Meta is very new, is C++14 only, and I assume would internal compiler error any MSVC :) That said if you're willing to mentor such a GSoC Vicente ... Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
On 19/02/15 16:02, Niall Douglas wrote:
On 19 Feb 2015 at 15:39, Vicente J. Botet Escriba wrote:
On 13/02/15 18:03, pfultz2 wrote:
I believe however that some people were interested in doing a new C++11 version of MPL. I think the problem is that every year or so someone finds a new fancy way to do meta-programming with the latest C++ features, with noble goals of unifying MPL and Fusion, so most of these rewrites end up as experiments rather than stable libraries. I think Eric Niebler's meta library is a good start for a modern MPL library, and it doesn't try to unify MPL and Fusion.
+1
While I appreciate the work on Hana, I believe a pure meta-programming (pure functional) C++11/C++14 library would make things easier.
What about enriching Eric's library on a GSoC project? Sufficiently able students are extremely tough to find. Also, Eric's Meta is very new, is C++14 only, and I assume would internal compiler error any MSVC :)
That said if you're willing to mentor such a GSoC Vicente ...
I would accept mentoring it if the library is restricted to only C++14 compilers (with C++14 libraries). IMHO, new Boost libraries (and in particular GSoC projects) should jump on the last train. Vicente
On 24 Feb 2015 at 8:28, Vicente J. Botet Escriba wrote:
While I appreciate the work on Hana, I believe a pure meta-programming (pure functional) C++11/C++14 library would make things easier.
What about enriching Eric's library on a GSoC project? Sufficiently able students are extremely tough to find. Also, Eric's Meta is very new, is C++14 only, and I assume would internal compiler error any MSVC :)
That said if you're willing to mentor such a GSoC Vicente ...
I would accept mentoring it if the library is restricted to only C++14 compilers (with C++14 libraries). IMHO, new Boost libraries (and in particular GSoC projects) should jump on the last train.
Then it's especially hard to find sufficiently able students! Also, I thought you wanted to spend this summer working on Thread v5 instead of mentoring GSoC? Niall --- Boost C++ Libraries Google Summer of Code 2015 admin https://svn.boost.org/trac/boost/wiki/SoC2015
On 24/02/15 15:57, Niall Douglas wrote:
On 24 Feb 2015 at 8:28, Vicente J. Botet Escriba wrote:
While I appreciate the work on Hana, I believe a pure meta-programming (pure functional) C++11/C++14 library would make things easier.
What about enriching Eric's library on a GSoC project? Sufficiently able students are extremely tough to find. Also, Eric's Meta is very new, is C++14 only, and I assume would internal compiler error any MSVC :)
That said if you're willing to mentor such a GSoC Vicente ...
I would accept mentoring it if the library is restricted to only C++14 compilers (with C++14 libraries). IMHO, new Boost libraries (and in particular GSoC projects) should jump on the last train. Then it's especially hard to find sufficiently able students! You are surely right. Also, I thought you wanted to spend this summer working on Thread v5 instead of mentoring GSoC?
Thanks for reminding me ;-) Vicente
On 2/19/2015 7:02 AM, Niall Douglas wrote:
On 19 Feb 2015 at 15:39, Vicente J. Botet Escriba wrote:
On 13/02/15 18:03, pfultz2 wrote:
I believe however that some people were interested in doing a new C++11 version of MPL. I think the problem is that every year or so someone finds a new fancy way to do meta-programming with the latest C++ features, with noble goals of unifying MPL and Fusion, so most of these rewrites end up as experiments rather than stable libraries. I think Eric Niebler's meta library is a good start for a modern MPL library, and it doesn't try to unify MPL and Fusion.
+1
While I appreciate the work on Hana, I believe a pure meta-programming (pure functional) C++11/C++14 library would make things easier.
What about enriching Eric's library on a GSoC project?
Sufficiently able students are extremely tough to find. Also, Eric's Meta is very new, is C++14 only, and I assume would internal compiler error any MSVC :)
It's C++11 actually, and of course MSVC can't compile it. :-)
That said if you're willing to mentor such a GSoC Vicente ...
Not sure about the GSoC project idea. I'm pretty difficult to work with because I have strong ideas about how things should be done. HOWEVER, there is one interesting research direction that I don't have time to explore, and that could be an interesting project. (See below.) I should say briefly why I felt the need to write my own metaprogramming library. It's *not* (as some here seem to think) because I have no faith in Hana. Briefly:

- I needed some metaprogramming utilities for my range library, and I didn't want to take a dependency on an external lib.
- I wanted something small, lightweight and simple.
- I think the C++ standard library needs some utilities for manipulating variadic parameter packs. After all, they're a core language feature, and the library should support them. My little lib could easily be turned into a proposal.

I know only a little about Hana, enough to be excited, impressed, and a little intimidated by its scope. I admit to being surprised at the suggestion to use constexpr functions to do pure type-level computation, but I haven't tried it so I can't speak from experience. Hana might very well be a good addition to Boost. I think it's an experiment that should be tried, at least. Given the size and scope of Hana, I suspect the standardization committee would run screaming. (Functional programming! Run away!) Standards work has taken up more of my attention, so tiny, self-contained libraries that accentuate the features of the language are more interesting to me these days. But boost:: and std:: are very different beasts, and I don't use the same measuring stick for both. MPL has its place. It's a legacy library. Its design -- STL-like containers, iterators and algorithms -- is strange for a metaprogramming library, and it probably wouldn't be designed like that today, but it's what we've got, and too many things depend on it for us to consider replacing it now.
I think the future of metaprogramming in C++ is very much an open question at this point. I see no problem with having 2, 3 or even 4 different metaprogramming libraries of different philosophies and scope in Boost. Hana should almost certainly be one. My $0.02. Eric [*] My GSoC idea: my Meta library is built around variadic parameter packs, aka lists, and I think I'm happy with that. But it has been suggested that it could be built instead around the Foldable concept. The project would be to redesign Meta around Foldable types, and add other "containers" to Meta besides lists, like sets, and have the algorithms work with anything that is Foldable. -- Eric Niebler Boost.org http://www.boost.org
Eric Niebler <eniebler <at> boost.org> writes:
[...]
[*] My GSoC idea: my Meta library is built around variadic parameter packs, aka lists, and I think I'm happy with that. But it has been suggested that it could be built instead around the Foldable concept. The project would be to redesign Meta around Foldable types, and add other "containers" to Meta besides lists, like sets, and have the algorithms work with anything that is Foldable.
Thanks for the feedback, Eric. However, there's something that has been tickling me for a while with Meta. I am seeing some kind of convergence in Meta towards the MPL11, especially with the addition of the lazy operations. If you go down the Foldable road, then you would end up with something incredibly similar. The MPL11 does exactly that work of splitting sequences into Foldable, Iterable and other similar concepts. Frankly, my impression is that Meta is reinventing MPL11's wheel, and I would like to know whether (1) I am wrong (2) this is done unconsciously (3) this is done consciously because you think the MPL11 doesn't cut it. Of course, it is the possibility of (3) that has been tickling me :-). Louis
On 3/5/2015 4:52 AM, Louis Dionne wrote:
Eric Niebler <eniebler <at> boost.org> writes:
[...]
[*] My GSoC idea: my Meta library is built around variadic parameter packs, aka lists, and I think I'm happy with that. But it has been suggested that it could be built instead around the Foldable concept. The project would be to redesign Meta around Foldable types, and add other "containers" to Meta besides lists, like sets, and have the algorithms work with anything that is Foldable.
Thanks for the feedback, Eric. However, there's something that has been tickling me for a while with Meta. I am seeing some kind of convergence in Meta towards the MPL11, especially with the addition of the lazy operations. If you go down the Foldable road, then you would end up with something incredibly similar. The MPL11 does exactly that work of splitting sequences into Foldable, Iterable and other similar concepts. Frankly, my impression is that Meta is reinventing MPL11's wheel, and I would like to know whether (1) I am wrong (2) this is done unconsciously (3) this is done consciously because you think the MPL11 doesn't cut it.
Of course, it is the possibility of (3) that has been tickling me :-).
I didn't know you were tracking the progress of Meta. Good, I've been meaning to talk with you about it. I haven't looked at MPL11; any similarity is purely coincidental, or more likely, a case of convergent evolution. I find the idea of Foldable interesting, but I also see great appeal in keeping the library small and strictly about manipulating variadic parameter packs (type lists). My feeling is that a set of utilities for manipulating parameter packs would be an easier sell to the committee, and that's really where my thrust is. I'm curious to hear your thoughts about that. If I find time, I'll take a look at MPL11. Where can I find it? -- Eric Niebler Boost.org http://www.boost.org
On 3/5/2015 4:52 AM, Louis Dionne wrote:
Eric Niebler <eniebler <at> boost.org> writes:
[...]
[*] My GSoC idea: my Meta library is built around variadic parameter packs, aka lists, and I think I'm happy with that. But it has been suggested that it could be built instead around the Foldable concept. The project would be to redesign Meta around Foldable types, and add other "containers" to Meta besides lists, like sets, and have the algorithms work with anything that is Foldable.
Thanks for the feedback, Eric. However, there's something that has been tickling me for a while with Meta. I am seeing some kind of convergence in Meta towards the MPL11, especially with the addition of the lazy operations. If you go down the Foldable road, then you would end up with something incredibly similar. The MPL11 does exactly that work of splitting sequences into Foldable, Iterable and other similar concepts. Frankly, my impression is that Meta is reinventing MPL11's wheel, and I would like to know whether (1) I am wrong (2) this is done unconsciously (3) this is done consciously because you think the MPL11 doesn't cut it.
Of course, it is the possibility of (3) that has been tickling me :-).
I found MPL11 and read some of its docs. Although there are many similarities (naturally, we're both influenced by the MPL), I think there is a fundamental difference. Meta isn't built around metafunctions as folks here know them. The Meta library is built around template aliases. The difference is illustrated in the way the quote feature is implemented in the two libraries. In MPL11, it's called "lift" and it looks like:

    template <template <typename ...> class f>
    struct lift {
        template <typename ...xs>
        struct apply : f<xs...> { };
    };

Here, the assumption seems to be that "f" is what you're calling a thunk or a boxed value; aka a metafunction. Something with a nested ::type. In contrast, here is (in essence) how Meta defines quote:

    template <template <typename ...> class f>
    struct quote {
        template <typename ...xs>
        using apply = f<xs...>;
    };

In Meta, template aliases are used extensively, and types evaluate directly to their results. Things are done eagerly. There are no "thunks". Of course, when writing a lambda or a let expression, evaluation needs to be deferred until the substitutions are made. I use a template called "defer" for that. It's only intended for use by let and lambda. Although it does give things a nested "::type", it doesn't strictly need to; indeed when I first added it, it didn't. Anyway, that may seem like a subtle difference, but it feels like a sea change to me. I find it much nicer working this way. -- Eric Niebler Boost.org http://www.boost.org
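For readers following along, the practical difference between the two styles can be sketched in a few lines. The `quote` and `lift` definitions below are illustrative reimplementations of the idea as described in this thread, not the libraries' actual code:

```cpp
#include <tuple>
#include <type_traits>

// Alias-based quote (Meta's style, in essence): applying it evaluates
// f<xs...> directly, so the application *is* the result type.
template <template <typename...> class F>
struct quote {
    template <typename... Xs>
    using apply = F<Xs...>;
};

// Metafunction-style lift (MPL11's style, in essence): apply is itself a
// class inheriting from f<xs...>, so the application is a distinct type
// *derived from* the result, reached via ::type when f is a metafunction.
template <template <typename...> class F>
struct lift {
    template <typename... Xs>
    struct apply : F<Xs...> { };
};

// quote: the application is exactly std::tuple<int, char>.
using T1 = quote<std::tuple>::apply<int, char>;
static_assert(std::is_same<T1, std::tuple<int, char>>::value, "");

// lift: the application is a new type derived from std::tuple<int, char>.
using T2 = lift<std::tuple>::apply<int, char>;
static_assert(!std::is_same<T2, std::tuple<int, char>>::value, "");
static_assert(std::is_base_of<std::tuple<int, char>, T2>::value, "");
```

The static_asserts make the "subtle difference" concrete: with the alias form there is no wrapper type at all, which is what lets Meta's types "evaluate directly to their results".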
Eric Niebler <eniebler <at> boost.org> writes:
On 3/5/2015 4:52 AM, Louis Dionne wrote:
Eric Niebler <eniebler <at> boost.org> writes:
[...]
[*] My GSoC idea: my Meta library is built around variadic parameter packs, aka lists, and I think I'm happy with that. But it has been suggested that it could be built instead around the Foldable concept. The project would be to redesign Meta around Foldable types, and add other "containers" to Meta besides lists, like sets, and have the algorithms work with anything that is Foldable.
Thanks for the feedback, Eric. However, there's something that has been tickling me for a while with Meta. I am seeing some kind of convergence in Meta towards the MPL11, especially with the addition of the lazy operations. If you go down the Foldable road, then you would end up with something incredibly similar. The MPL11 does exactly that work of splitting sequences into Foldable, Iterable and other similar concepts. Frankly, my impression is that Meta is reinventing MPL11's wheel, and I would like to know whether (1) I am wrong (2) this is done unconsciously (3) this is done consciously because you think the MPL11 doesn't cut it.
Of course, it is the possibility of (3) that has been tickling me.
I found MPL11 and read some of its docs. Although there are many similarities (naturally, we're both influenced by the MPL), I think there is a fundamental difference. Meta isn't built around metafunctions as folks here know them. The Meta library is built around template aliases. The difference is illustrated in the way the quote feature is implemented in the two libraries.
In MPL11, it's called "lift" and it looks like:
    template <template <typename ...> class f>
    struct lift {
        template <typename ...xs>
        struct apply : f<xs...> { };
    };
Here, the assumption seems to be that "f" is what you're calling a thunk or a boxed value; aka a metafunction. Something with a nested ::type.
Strictly speaking, the definition of `lift` in the MPL11 is as above to work around a GCC bug. Otherwise, it is exactly the same as in Meta:

    template <template <typename ...> class f>
    struct lift {
        using type = lift;

    #if defined(BOOST_MPL11_GCC_PACK_EXPANSION_BUG)
        template <typename ...x>
        struct apply : f<x...> { };
    #else
        template <typename ...x>
        using apply = f<x...>;
    #endif
    };

That being said, it is true that MPL11 uses "thunks", i.e. nullary metafunctions. The reason for that is exactly what I have been advocating during the whole construction of the MPL11; it is easier to build up more complex metafunctions when they are lazy, because you don't risk instantiating metafunctions that would fail whenever you branch.
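The branching point can be illustrated with a small, self-contained sketch. All names here (`lazy_if`, `unwrap_pointer`, `identity`) are hypothetical, not MPL11 identifiers: an eager branch would have to spell out `typename unwrap_pointer<T>::type` in both arms and hard-error for non-pointers, while selecting the *unevaluated* metafunction first and forcing `::type` afterwards stays well-formed:

```cpp
#include <type_traits>

// Lazy branch: pick the unevaluated metafunction, then inherit its ::type.
// The dead branch is named as a template argument but never asked for ::type,
// so it is never required to be well-formed as a metafunction.
template <bool C, typename Then, typename Else>
struct lazy_if : Then { };

template <typename Then, typename Else>
struct lazy_if<false, Then, Else> : Else { };

// A metafunction that only has a ::type for pointer types:
template <typename T>
struct unwrap_pointer { /* no ::type unless T is a pointer */ };

template <typename T>
struct unwrap_pointer<T*> { using type = T; };

template <typename T>
struct identity { using type = T; };

// Fine even when T is not a pointer, because the unwrap_pointer<T> branch
// is never forced in that case:
template <typename T>
using safe_unwrap = typename lazy_if<std::is_pointer<T>::value,
                                     unwrap_pointer<T>,
                                     identity<T>>::type;

static_assert(std::is_same<safe_unwrap<int*>, int>::value, "");
static_assert(std::is_same<safe_unwrap<int>, int>::value, "");
```

An eager `std::conditional<C, typename unwrap_pointer<T>::type, T>` would instantiate `unwrap_pointer<T>::type` regardless of `C`, which is exactly the failure mode lazy metafunctions avoid.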
In contrast, here is (in essence) how Meta defines quote:
    template <template <typename ...> class f>
    struct quote {
        template <typename ...xs>
        using apply = f<xs...>;
    };
In Meta, the template aliases are used extensively, and types evaluate directly to their results. Things are done eagerly. There are no "thunks".
Here's my understanding of how Meta works: Meta still uses the classical concept of a metafunction with a nested type, but it is hidden behind `meta::eval`. Basically, the main interface of the library is the `*_t` version of the actual metafunctions. Then, Meta uses `defer` to systematically provide a lazy version of each eager metafunction in the `lazy` namespace, because lazy metafunctions are often useful as you rightfully noted. In contrast, MPL11 just uses lazy metafunctions all the time, and you only need to use `eval` (or actually `typename ::type`) at the very end of a computation. It would thus be equivalent to provide `*_t` aliases for all MPL11 metafunctions.
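The two interface styles described above can be sketched side by side. The names below (`box`, `add_ptr`, `eval`, `add_ptr_t`) are made up for illustration, not the libraries' actual identifiers:

```cpp
#include <type_traits>

// Lazy style (MPL11-ish): metafunctions take and return "thunks" (types
// with a nested ::type), so composition needs no typename ...::type at
// each step -- a single eval forces the result at the very end.
template <typename T> struct box     { using type = T; };
template <typename X> struct add_ptr { using type = typename X::type*; };
template <typename X> using  eval    = typename X::type;

using Lazy = eval<add_ptr<add_ptr<box<int>>>>;   // forced once, at the end
static_assert(std::is_same<Lazy, int**>::value, "");

// Eager style (Meta's *_t interface, as described): aliases evaluate
// directly, so types simply are their results.
template <typename T> using add_ptr_t = T*;

using Eager = add_ptr_t<add_ptr_t<int>>;
static_assert(std::is_same<Eager, int**>::value, "");
```

Both spellings end up with the same number of angle brackets in simple chains, which is why the surface syntax of the two libraries looks so similar even though the evaluation models differ.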
Of course, when writing a lambda or a let expression, evaluation needs to be deferred until the substitutions are made. I use a template called "defer" for that. It's only intended for use by let and lambda. Although it does give things a nested "::type", it doesn't strictly need to; indeed when I first added it, it didn't.
Anyway, that may seem like a subtle difference, but it feels like sea change to me. I find it much nicer working this way.
I don't find that defaulting to eager metafunctions is nicer to work with. It has been, at least for me, the source of a lot of pain because I was required to write helper metafunctions to branch lazily. Plus, when you use lazy metafunctions all the time, you almost never have to type `typename f<x>::type` (instead you just use `f<x>`), and so the syntax looks pretty much the same as when you use template aliases. Regarding the usage of other concepts like Foldable, you say it was suggested that everything could be implemented around folds. This is false in general, because stuff like `transform` requires a different structure than what is needed to be Foldable. However, something can be said about the link between Foldable and a hypothetical Iterable concept (which would allow iteration over a data structure one element at a time): In an eager context, the two do not coincide and you will need to introduce other concepts like Hana does (e.g. Iterable, Searchable) to be able to express most operations on type lists. In a non-strict context, e.g. where you can right-fold infinite data structures without crashing the compiler, I think both coincide, but frankly one step is missing from my proof. Bottom line: while Foldable is a nice abstraction, it does not encompass everything (far from it) and you would need to introduce a bunch of other concepts to "conceptualize" everything. If you go down that road, just add `*_t` aliases to the MPL11 and you're done. Otherwise, keep it as simple as possible and just manipulate dumb type lists, which is what 90% of the people need anyway. That's my .02. Louis
Hello guys. I have been reading the whole discussion for a while, and I think I should put my little humble opinion here. I have been working on my own C++11 metaprogramming library for two years; it's called Turbo (https://github.com/Manu343726/Turbo). I know Eric is aware of my work, due to some spamming on twitter :) Similar to Meta, all the Turbo machinery is based on tml::eval, a template alias encapsulating an expression evaluator. The point of eval is not to do typename ::type only, but to evaluate an expression and its arguments first (recursively). This week I have been working on adding metafunction class support to Turbo, that is, now tml::eval is able to evaluate expressions like std::remove_cv<std::remove_pointer<const int*>> in an eager way like Meta, and at the same time metafunction classes like the lift example you provided above:

    using Int   = tml::eval<std::remove_cv<std::remove_pointer<const int*>>>;
    using Tuple = tml::eval<lift<std::tuple>, int, char, bool, double>;

I understand the eager Meta approach; that's what I picked at the beginning of Turbo, but metafunction classes make customization and composition far easier in some scenarios. I think the right thing is to take the best of both worlds, as Turbo currently does. As an example, the main.cpp file contains several examples of what I'm currently working on and the power of the eager + metafunction classes way: there are foldable typelists, mappable typelists, a functor to translate between typelist categories, a shorthand for Haskell-like do notation, etc.: https://github.com/Manu343726/Turbo/blob/master/blocks/manu343726/turbo_main... An eager approach by default makes evaluation of simple expressions easier, but I agree with Louis: it's a pain when dealing with lazy and/or conditional expressions. Try to support both approaches, and take the best tool for each scenario. PS: The main issue with my tml::eval approach is the complexity of its implementation.
But I'm sure the concept holds, and there should be a way to write this more simply from scratch. The current eval.hpp file is the result of two years of meta-nightmares, workarounds for GCC bugs, etc. Please let me know if there's any issue with format, correctness, rules, etc. in this message. It's my first time on the mailing list. Manu Sánchez manu343726.github.io
I don't find that defaulting to eager metafunctions is nicer working. It has been, at least for me, the source of a lot of pain because I was required to write helper metafunctions to branch lazily. Plus, when you use lazy metafunctions all the time, you almost never have to type `typename f<x>::type` (instead you just use `f<x>`), and so the syntax looks pretty much the same as when you use template aliases.
I agree. The big reason for having lazy evaluation is conditionals, not necessarily lambdas and such. The problem is that Boost.MPL is half-lazy evaluation, which is why things like `typename f<x>::type` need to be sprinkled everywhere.
Regarding the usage of other concepts like Foldable, you say it was suggested that everything could be implemented around folds.
I am the one who made the suggestion; however, I didn't say just `Foldable` alone. I said they could be implemented using the `Foldable` and `Insertable` concepts, as these are essentially dual categories. Looking at this closer, I don't think there is an efficient way (or at least I have yet to find a way) to implement some algorithms (such as `drop`) using just these concepts. I think a better approach would be how Paul Mensonides defines generic data structures in his Chaos library. It's pretty simple and lightweight.
This is false in general, because stuff like `transform`
Well, `transform` could be implemented using `Foldable`, but it may not be the semantics the user expects. Paul
pfultz2 <pfultz2 <at> yahoo.com> writes:
[...]
Regarding the usage of other concepts like Foldable, you say it was suggested that everything could be implemented around folds.
I am the one who made the suggestion, however, I didn't say just `Foldable` alone. I said they could be implemented using `Foldable` and `Insertable` concepts, as these are essentially dual categories.
That's an interesting line of thought. More generally, I think it is true that any structure that can be folded can be reconstructed using the dual fold. In category theory these are called catamorphisms and anamorphisms, but I'm not there yet :D.
Looking at this closer, I don't think there is an efficient way (or at least I have yet to find a way) to implement some algorithms (such as `drop`) using just these concepts.
I have tried hard to do that in the MPL11, and my conclusion was that those concepts provide very nice and general abstractions, but the implementation has to be dirty and low-level internally in order to be efficient. That's because the execution model of template metaprograms is weird. However, I think this is going to be true almost regardless of which abstraction you choose.
I think a better approach would be how Paul Mensonides defines generic data structures in his Chaos library. Its pretty simple and lightweight.
I'll definitely take a look at this.
This is false in general, because stuff like `transform`
Well, `transform` could be implemented using `Foldable`, but it may not be the semantics the user expects.
Are you referring to `fmap f xs = foldr ((:) . f) [] xs`? If so, Foldable is missing `(:)` and `[]`. If that is not what you meant, I must admit that I had no idea `transform` could be implemented with Foldable alone. Regards, Louis
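The Haskell one-liner can be transliterated into template metaprogramming to make Louis's point concrete: a fold-based `transform` needs `cons` ((:)) and `nil` ([]) on top of the fold itself, which is exactly what Foldable alone does not supply. A minimal sketch, with all names invented for illustration:

```cpp
#include <type_traits>

template <typename... Ts> struct list { };

// cons ((:)) and nil ([]) -- the operations missing from Foldable alone.
template <typename T, typename L> struct cons_impl;
template <typename T, typename... Ts>
struct cons_impl<T, list<Ts...>> { using type = list<T, Ts...>; };
template <typename T, typename L>
using cons = typename cons_impl<T, L>::type;
using nil = list<>;

// A right fold: foldr F Init [x1, x2, ...] = F x1 (F x2 (... Init)).
template <template <typename, typename> class F, typename Init, typename L>
struct foldr { using type = Init; };            // empty-list case
template <template <typename, typename> class F, typename Init,
          typename T, typename... Ts>
struct foldr<F, Init, list<T, Ts...>> {
    using type = F<T, typename foldr<F, Init, list<Ts...>>::type>;
};

// The combiner (:) . f, i.e. \x acc -> cons (f x) acc:
template <template <typename> class F>
struct composed {
    template <typename X, typename Acc>
    using apply = cons<F<X>, Acc>;
};

// fmap f xs = foldr ((:) . f) [] xs:
template <template <typename> class F, typename L>
using transform = typename foldr<composed<F>::template apply, nil, L>::type;

template <typename T> using ptr = T*;
static_assert(std::is_same<transform<ptr, list<int, char>>,
                           list<int*, char*>>::value, "");
```

Note that `cons` and `nil` are list-specific here, which illustrates why rebuilding the output structure takes you outside the Foldable concept proper.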
That's an interesting line of thought. More generally, I think it is true that any structure that can be folded can be reconstructed using the dual fold. In category theory these are called catamorphisms and anamorphisms, but I'm not there yet :D.
Yes. Catamorphism is used to "destroy" the list, and anamorphism is used to build the list. Now, `Iterable` and `Foldable` can both be considered catamorphic, since folding can be built on top of `Iterable`. This is the same way unfolding can be built on top of `Insertable`. So using the lower level concepts such as `Iterable` and `Insertable`, we can do some algorithms more efficiently.
I think a better approach would be how Paul Mensonides defines generic data structures in his Chaos library. Its pretty simple and lightweight.
I'll definitely take a look at this.
Now in the Chaos library, it has these functions:

* CONS - push_front
* HEAD - front
* TAIL - pop_front
* NIL - an empty data structure (perhaps called remove_all)
* IS_NIL - is_empty

However, all these functions could be written using `Iterable` and `Insertable` as well:

Iterable:
* front
* pop_front

Insertable:
* push_front

So the other two methods (`remove_all` and `is_empty`) can be derived from the above three methods:

    template<class Sequence>
    using is_empty = is_same<Sequence, pop_front<Sequence>>;

    // a class template rather than an alias, since an alias template
    // cannot refer to itself recursively
    template<class Sequence>
    struct remove_all
        : if_<is_empty<Sequence>, Sequence, remove_all<pop_front<Sequence>>>
    { };
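As a sanity check of the derivation above, here is a minimal cons-style type list modeling the three primitives and the derived `is_empty`. All names are illustrative (this is not Chaos or MPL code), and the derivation relies on the convention that `pop_front` of an empty sequence yields the empty sequence:

```cpp
#include <type_traits>

template <typename... Ts> struct list { };

// Insertable: push_front
template <typename T, typename L> struct push_front_impl;
template <typename T, typename... Ts>
struct push_front_impl<T, list<Ts...>> { using type = list<T, Ts...>; };
template <typename T, typename L>
using push_front = typename push_front_impl<T, L>::type;

// Iterable: front and pop_front. pop_front of an empty list is the empty
// list here; that convention is what makes the is_empty derivation work.
template <typename L> struct front_impl;                 // no front on empty
template <typename T, typename... Ts>
struct front_impl<list<T, Ts...>> { using type = T; };
template <typename L> using front = typename front_impl<L>::type;

template <typename L> struct pop_front_impl { using type = L; };  // empty case
template <typename T, typename... Ts>
struct pop_front_impl<list<T, Ts...>> { using type = list<Ts...>; };
template <typename L> using pop_front = typename pop_front_impl<L>::type;

// Derived, exactly as in the post: a sequence equals its own pop_front
// only when it is empty.
template <typename L>
using is_empty = std::is_same<L, pop_front<L>>;

static_assert(is_empty<list<>>::value, "");
static_assert(!is_empty<list<int>>::value, "");
static_assert(std::is_same<front<push_front<int, list<>>>, int>::value, "");
```

With a different convention (e.g. `pop_front` of an empty list being ill-formed), `is_empty` would need its own primitive, which is part of why Chaos carries IS_NIL explicitly.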
Are you referring to `fmap f xs = foldr ((:) . f) [] xs`? If so, Foldable is missing (:) and []. If that is not what you meant, I must admit that I had no idea `transform` could be implemented with Foldable alone.
No, I am referring to the fact that you would return a new Foldable sequence that would do the transform while it folds. Paul -- View this message in context: http://boost.2283326.n4.nabble.com/mpl-multiset-tp4672187p4672849.html Sent from the Boost - Dev mailing list archive at Nabble.com.
On 3/7/2015 6:41 AM, Louis Dionne wrote:
Strictly speaking, the definition of `lift` in the MPL11 is as above to work around a GCC bug. Otherwise, it is exactly the same as in Meta:
    template <template <typename ...> class f>
    struct lift {
        using type = lift;

    #if defined(BOOST_MPL11_GCC_PACK_EXPANSION_BUG)
        template <typename ...x>
        struct apply : f<x...> { };
    #else
        template <typename ...x>
        using apply = f<x...>;
    #endif
    };
FWIW, this implementation will quickly run afoul of core issue 1430[*], which I know you're aware of because you commented on a gcc bug about it. :-) You should have a look at how I avoided the problem in Meta. meta::quote is somewhat subtle. [*] http://open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1430
That being said, it is true that MPL11 uses "thunks", i.e. nullary metafunctions. The reason for that is exactly what I have been advocating during the whole construction of the MPL11; it is easier to build up more complex metafunctions when they are lazy, because you don't risk instantiating metafunctions that would fail whenever you branch.
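The branching hazard described here can be shown with a tiny self-contained sketch (names invented; this is not MPL11's actual code). Because a lazy `eval_if` never requests the losing branch's `::type`, a metafunction that would be a hard error is simply never instantiated:

```cpp
#include <type_traits>

// A lazy branch: only the selected branch's ::type is ever requested.
template <bool C, typename Then, typename Else>
struct eval_if { using type = typename Then::type; };

template <typename Then, typename Else>
struct eval_if<false, Then, Else> { using type = typename Else::type; };

template <typename T> struct identity { using type = T; };

// Would be a hard error if instantiated with int (int has no value_type).
template <typename C> struct value_type_of { using type = typename C::value_type; };

// Compiles fine: value_type_of<int> is named but never instantiated,
// because eval_if never asks it for ::type.
using R = eval_if<true, identity<int>, value_type_of<int>>::type;
static_assert(std::is_same<R, int>::value, "");
```

With eager metafunctions, both branches would already be evaluated by the time the condition is inspected, which is exactly the pain point being discussed.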
I don't disagree about the importance of laziness. The question is how to deal with it, and whether casual users need to be aware of it.
In contrast, here is (in essence) how Meta defines quote:
    template <template <typename ...> class f>
    struct quote {
        template <typename ...xs>
        using apply = f<xs...>;
    };
In Meta, the template aliases are used extensively, and types evaluate directly to their results. Things are done eagerly. There are no "thunks".
Here's my understanding of how Meta works:
Meta still uses the classical concept of a metafunction with a nested type, but it is hidden behind `meta::eval`.
No. Metafunctions in Meta are incidental, not fundamental. If it were possible to specialize template aliases, there would be no nested ::type anywhere in Meta -- it's used as an implementation detail only for some of the algorithms. I use some tricks to *implement* Meta that I don't use when I'm *using* Meta. When using Meta, laziness is best (IMO) achieved with defer, lambda, and let (although nothing is stopping someone from creating metafunctions and using the meta::lazy namespace in a more "traditional" way).
Basically, the main interface of the library is the `*_t` version of the actual metafunctions. Then, Meta uses `defer` to systematically provide a lazy version of each eager metafunction in the `lazy` namespace, because lazy metafunctions are often useful as you rightfully noted.
Defer isn't used with metafunctions. There would be no point. Defer is used with aliases (which are not metafunctions, but some are implemented that way under the covers; see above). Defer turns eager computations into lazy ones. How the eager computations are implemented is beside the point.
In contrast, MPL11 just uses lazy metafunctions all the time, and you only need to use `eval` (or actually `typename ::type`) at the very end of a computation. It would thus be equivalent to provide `*_t` aliases for all MPL11 metafunctions.
In meta, a lazy computation is evaluated by wrapping it in let<>. Without local variables, a let<> is nothing more than a nullary lambda invocation.
Of course, when writing a lambda or a let expression, evaluation needs to be deferred until the substitutions are made. I use a template called "defer" for that. It's only intended for use by let and lambda. Although it does give things a nested "::type", it doesn't strictly need to; indeed when I first added it, it didn't.
Anyway, that may seem like a subtle difference, but it feels like a sea change to me. I find it much nicer working this way.
I don't find that defaulting to eager metafunctions is nicer to work with. It has been, at least for me, the source of a lot of pain, because I was required to write helper metafunctions to branch lazily.
I never need to use helper metafunctions. Meta's expression evaluator handles laziness.
Plus, when you use lazy metafunctions all the time, you almost never have to type `typename f<x>::type` (instead you just use `f<x>`), and so the syntax looks pretty much the same as when you use template aliases.
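The style Louis describes, nesting lazy metafunctions freely and evaluating once at the end, can be sketched with a few hypothetical thunks (invented for illustration, not MPL11's real metafunctions):

```cpp
#include <type_traits>

// Each lazy metafunction takes other lazy metafunctions as arguments and
// only forces them inside its own ::type.
template <typename T> struct just { using type = T; };

template <typename F> struct add_pointer_ {
    using type = typename std::add_pointer<typename F::type>::type;
};

template <typename F> struct add_const_ {
    using type = typename std::add_const<typename F::type>::type;
};

// Nest freely, with no typename/::type in sight...
using computation = add_pointer_<add_const_<just<int>>>;

// ...and evaluate exactly once at the very end.
static_assert(std::is_same<computation::type, int const*>::value, "");
```

This is the sense in which the syntax of lazy composition looks the same as composing template aliases, with the `::type` paid only once.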
<snip discussion of Foldable>
If you go down that road, just add `*_t` aliases to the MPL11 and you're done. Otherwise, keep it as simple as possible and just manipulate dumb type lists, which is what 90% of the people need anyway. That's my .02.
This, IMO. And I suspect 90% is a low estimate. I feel like Meta's approach to laziness hasn't been understood. Here, for instance, is a SFINAE-friendly implementation of std::common_type; it has a ::type when a common type exists, but otherwise it doesn't. When it was implemented with metafunctions it was a huge mess. With meta::defer and meta::let, it's simple and straightforward. (NOTE: No metafunctions, no eval except to define common_type_t.)

    namespace m = ranges::meta;
    namespace ml = ranges::meta::lazy;

    template<typename T, typename U>
    using builtin_common_t =
        decltype(true ? std::declval<T>() : std::declval<U>());

    template<typename T, typename U>
    using lazy_builtin_common_t = m::defer<builtin_common_t, T, U>;

    template<typename... Ts>
    struct common_type {};

    template<typename... Ts>
    using common_type_t = m::eval<common_type<Ts...>>;

    template<typename T>
    struct common_type<T> : std::decay<T> {};

    template<typename T, typename U>
    struct common_type<T, U>
      : m::if_c<
            (std::is_same<decay_t<T>, T>::value &&
             std::is_same<decay_t<U>, U>::value),
            ml::let<lazy_builtin_common_t<T, U>>,
            common_type<decay_t<T>, decay_t<U>>>
    {};

    template<typename T, typename U, typename... Vs>
    struct common_type<T, U, Vs...>
      : ml::let<ml::fold<m::list<U, Vs...>, T, m::quote<common_type_t>>>
    {};

    // TESTS
    static_assert(std::is_same<common_type_t<char, short, char, short>, int>::value, "");
    static_assert(std::is_same<common_type_t<char, short, float, short>, float>::value, "");

    // HAS NO COMMON TYPE:
    static_assert(!m::has_type<common_type<int, int, int*>>::value, "");

What's interesting here is that you get a SFINAE-friendly common_type for free. Since Meta's expression evaluator is handling laziness, it can be SFINAE-friendly itself. Nowhere do you need to test whether a computation has succeeded or failed. If any substitution failure occurs in an immediate context, the whole computation is aborted. It just falls out of the lambda/defer interaction.
(You can get backtraces by moving computations into non-immediate contexts.)

I would be curious to see this implemented in Hana and in Turbo.

As for lazy branches, that can also be handled simply by let/defer:

    // Test that the unselected branch does not get evaluated:
    template<typename T>
    using test_lazy_if_ =
        let<lazy::if_<std::is_void<T>, T, defer<std::pair, T>>>;

    static_assert(std::is_same<test_lazy_if_<void>, void>::value, "");

Obviously, std::pair can't be instantiated with only one argument. It is never tried, since the condition is true and the branch is never taken. The code compiles. (Short-circuiting in lambdas is a recent addition.)

FWIW, I find the MPL's lambda evaluator *extremely* confusing and frustrating. That business of "after substitutions, test to see if ::type exists; if so, that's the result ... but only if parameter substitutions were made!" is just awful.

-- Eric Niebler Boost.org http://www.boost.org
I feel like Meta's approach to laziness hasn't been understood.
So in Meta, instead of sprinkling my code with `typename` and `::type`, I need to sprinkle it with `let` and `defer`. With Louis' approach to laziness, you wouldn't need either of these. Paul
On 3/9/2015 9:38 AM, pfultz2 wrote:
I feel like Meta's approach to laziness hasn't been understood.
So in Meta, instead of sprinkling my code with `typename` and `::type`, I need to sprinkle it with `let` and `defer`. With Louis' approach to laziness, you wouldn't need either of these.
You use the algorithms in the meta::lazy namespace that are already deferred, and only in the places where you need laziness. I find I only need laziness in a few small places in my computation. The rest of the time it obscures semantics. IMO.

Did you find the use of let/defer in my common_type example off-putting? I find the solution elegant.

There's probably some bias toward laziness in metaprogramming here in Boost due to the influence of the MPL. It's helpful to remember that in the wider C++ community, building a computation and /then/ evaluating it is probably not the most intuitive thing. From my years on the Boost list, I know that new users of the MPL are terribly confused about if/when to access the nested ::type during a computation. (Arguably, they would be similarly confused about when to use let/defer, but they won't be confronted with that problem until they need laziness.)

Meta is aimed at people with light metaprogramming needs. It scales well IMO, but the choice to put eager computation first is to make simple things as simple as possible. It comes from the observation that, when using Meta on real-world problems, I was almost exclusively using the *_t aliases. So I flipped the interface around and found a different way to express laziness. Also, the nested ::type idiom feels like a hack around the lack of template aliases in the language. Now that we have them, I thought it would be interesting to try to build a MP library that assumes them.

There's room here for different metaprogramming libraries with different approaches. I'm explaining Meta's approach because I felt it wasn't understood, not criticizing anybody else's approach. I like Hana. I am curious how it handles lambdas (does it have them?), and whether it manages eagerness/laziness in lambdas in a more sane way than MPL.

P.S. I still would find it informative to see how my common_type example looks in Hana and Turbo.

P.P.S.
"l-e-t" is shorter than "t-y-p-e-n-a-m-e- -:-:-t-y-p-e" :-) -- Eric Niebler Boost.org http://www.boost.org
On Sun, Mar 8, 2015 at 2:56 PM, Eric Niebler <eniebler@boost.org> wrote: [snip] I feel like Meta's approach to laziness hasn't been understood. Here,
for instance, is a SFINAE-friendly implementation of std::common_type; it has a ::type when a common type exists, but otherwise it doesn't. When it was implemented with metafunctions it was a huge mess. With meta::defer and meta::let, it's simple and straightforward.
(NOTE: No metafunctions, no eval except to define common_type_t.)
    namespace m = ranges::meta;
    namespace ml = ranges::meta::lazy;

    template<typename T, typename U>
    using builtin_common_t =
        decltype(true ? std::declval<T>() : std::declval<U>());

    template<typename T, typename U>
    using lazy_builtin_common_t = m::defer<builtin_common_t, T, U>;

    template<typename... Ts>
    struct common_type {};

    template<typename... Ts>
    using common_type_t = m::eval<common_type<Ts...>>;

    template<typename T>
    struct common_type<T> : std::decay<T> {};

    template<typename T, typename U>
    struct common_type<T, U>
      : m::if_c<
            (std::is_same<decay_t<T>, T>::value &&
             std::is_same<decay_t<U>, U>::value),
            ml::let<lazy_builtin_common_t<T, U>>,
            common_type<decay_t<T>, decay_t<U>>>
    {};

    template<typename T, typename U, typename... Vs>
    struct common_type<T, U, Vs...>
      : ml::let<ml::fold<m::list<U, Vs...>, T, m::quote<common_type_t>>>
    {};

    // TESTS
    static_assert(std::is_same<common_type_t<char, short, char, short>, int>::value, "");
    static_assert(std::is_same<common_type_t<char, short, float, short>, float>::value, "");

    // HAS NO COMMON TYPE:
    static_assert(!m::has_type<common_type<int, int, int*>>::value, "");
What's interesting here is that you get a SFINAE-friendly common_type for free. Since Meta's expression evaluator is handling laziness, it can be SFINAE-friendly itself. Nowhere do you need to test whether a computation has succeeded or failed. If any substitution failure occurs in an immediate context, the whole computation is aborted. It just falls out of the lambda/defer interaction.
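The "has a ::type or doesn't" property that makes this SFINAE-friendly can be probed with the standard detection idiom. The sketch below is self-contained and hypothetical (it is not Meta's actual `has_type` implementation):

```cpp
#include <type_traits>

// make_void/void_t in the pre-C++17 form that sidesteps CWG 1558 issues.
template <typename...> struct make_void { using type = void; };
template <typename... Ts> using void_t = typename make_void<Ts...>::type;

// has_type<T> is true exactly when T::type is well-formed.
template <typename T, typename = void>
struct has_type : std::false_type {};

template <typename T>
struct has_type<T, void_t<typename T::type>> : std::true_type {};

struct with_type { using type = int; };
struct without_type {};

static_assert(has_type<with_type>::value, "");
static_assert(!has_type<without_type>::value, "");
```

This is what lets a caller ask "did the computation succeed?" without triggering a hard error when it didn't.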
(You can get backtraces by moving computations into non-immediate contexts.)
I would be curious to see this implemented in Hana and in Turbo.
Yes! We need to see apples-to-apples comparisons of nontrivial, useful examples like this under the various libraries' approaches.
As for lazy branches, that can also be handled simply by let/defer:
    // Test that the unselected branch does not get evaluated:
    template<typename T>
    using test_lazy_if_ =
        let<lazy::if_<std::is_void<T>, T, defer<std::pair, T>>>;

    static_assert(std::is_same<test_lazy_if_<void>, void>::value, "");
And don't forget this part! :) Zach
On 3/9/2015 10:40 AM, Zach Laine wrote:
On Sun, Mar 8, 2015 at 2:56 PM, Eric Niebler <eniebler@boost.org> wrote:
[snip]
Here, for instance, is a SFINAE-friendly implementation of std::common_type <snip>
template<typename T, typename U> struct common_type<T, U> : m::if_c<
<snip>
I would be curious to see this implemented in Hana and in Turbo.
Yes! We need to see apples-to-apples comparisons of nontrivial, useful examples like this under the various libraries' approaches.
I should add that common_type also needs to be implemented such that people can provide user-specializations of common_type<T, U>. That makes this an interesting trait to implement from a metaprogramming perspective. (Huh, this would make a good blog post. :-) If there are other real-world problems that show the strengths of Hana/Turbo, let's see those, too. I'll play along as time allows. -- Eric Niebler Boost.org http://www.boost.org
Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Sun, Mar 8, 2015 at 2:56 PM, Eric Niebler <eniebler <at> boost.org> wrote:
[...]
As for lazy branches, that can also be handled simply by let/defer:
    // Test that the unselected branch does not get evaluated:
    template<typename T>
    using test_lazy_if_ =
        let<lazy::if_<std::is_void<T>, T, defer<std::pair, T>>>;

    static_assert(std::is_same<test_lazy_if_<void>, void>::value, "");
And don't forget this part! :)
Oops, I forgot. Ok, so lazy branches in Hana work as follows. First, you use the `eval_if` function, which takes a condition and two branches in the form of lambdas. But that's not all; the lambdas must accept a parameter (usually called _), which can be used to defer the compile-time evaluation of expressions as required. An example:

    template <typename N>
    auto fact(N n) {
        return hana::eval_if(n == hana::int_<0>,
            [](auto _) { return hana::int_<1>; },
            [=](auto _) { return n * fact(_(n) - hana::int_<1>); }
        );
    }

What happens here is that `eval_if` will pass an identity function to the selected branch. Hence, `_(x)` is always the same as `x`, but the compiler can't tell until the lambda has been called! Hence, the compiler has to wait before it instantiates the body of the lambda, and no infinite recursion happens. Also note that `always` can be used to make `eval_if` easier to work with:

    template <typename N>
    auto fact(N n) {
        return hana::eval_if(n == hana::int_<0>,
            always(hana::int_<1>),
            [=](auto _) { return n * fact(_(n) - hana::int_<1>); }
        );
    }

A lot of sugar could be added to `eval_if`, like a better syntax (similar to Phoenix's). Another obvious extension would be to support nullary lambdas too. None of this has been done yet because I lack time and wanted to concentrate on the "essence", not the sugar.

Also, there are several caveats. First, because we're using lambdas, it means that the function's result can't be used in a constant expression. This is IMO a stupid limitation of the language and one that should be lifted. If someone is willing to team up with me for writing a proposal, I would be interested. The second caveat is that compilers currently have several bugs regarding deeply nested lambdas with captures. So you always risk crashing the compiler, but it is just a matter of time before this is no longer a problem. Finally, it means that conditionals can't be written directly inside unevaluated contexts.
The reason is that a lambda can't appear in an unevaluated context, for example in `decltype`. There are good reasons for this restriction in the language, but the current wording is way too strict IMO. I would be willing to team up for a paper regarding this too. One way to work around this is to completely lift your type computations into variable templates instead. So instead of writing e.g. (stupid example, just to show)

    template <typename T>
    struct f
        : decltype(eval_if(true_,
              [](auto _) { return type<T>; },
              [](auto _) { return type<T>; }
          ))
    { };

you could instead write

    template <typename T>
    auto f_impl(_type<T> t) {
        return eval_if(true_,
            [](auto) { return type<T>; },
            [](auto) { return type<T>; }
        );
    }

    template <typename T>
    using f = decltype(f_impl(type<T>));

Now, this hoop-jumping only has to be done in one place, because you use (or should be using) normal function notation everywhere else in your metaprogram to perform type computations. So the syntactic cost is amortized. Also, it should be the case that

    template <typename T>
    auto f_impl = eval_if(true_,
        [](auto) { return type<T>; },
        [](auto) { return type<T>; }
    );

is valid. Hence, you could use variable templates to easily bridge between Hana and any type-only (weaker!) interface. However, the above currently fails to compile on Clang (it used to work; I'll investigate and file a bug report). Regards, Louis
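The identity-function trick itself does not depend on Hana; the following stripped-down model (every name below is invented for this sketch, and the integral-constant helpers stand in for Hana's operators) shows why wrapping the recursive operand in `_(...)` keeps it dependent and stops the infinite recursion:

```cpp
#include <type_traits>

// The "_" parameter: an identity function passed only to the chosen branch.
struct identity_fn {
    template <typename X>
    constexpr X operator()(X x) const { return x; }
};

// eval_if over compile-time bools: only the selected lambda is ever called,
// so only its body is ever instantiated.
template <typename Then, typename Else>
auto eval_if(std::true_type, Then t, Else) { return t(identity_fn{}); }

template <typename Then, typename Else>
auto eval_if(std::false_type, Then, Else e) { return e(identity_fn{}); }

template <int N> using int_ = std::integral_constant<int, N>;

// Arithmetic on integral constants (helpers invented for this sketch).
template <int A, int B> constexpr int_<A * B> mult(int_<A>, int_<B>) { return {}; }
template <int A> constexpr int_<A - 1> dec(int_<A>) { return {}; }

template <typename N>
auto fact(N n) {
    return eval_if(std::integral_constant<bool, N::value == 0>{},
        [](auto) { return int_<1>{}; },
        // _(n) makes the recursion dependent on the lambda's parameter,
        // so it is only instantiated if this branch is actually taken.
        [=](auto _) { return mult(n, fact(dec(_(n)))); });
}

static_assert(decltype(fact(int_<5>{}))::value == 120, "");
```

Without the `_(n)` wrapper, the recursive call would not be dependent and the compiler could try to instantiate it even for the base case.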
On Tue, Mar 10, 2015 at 4:31 PM, Louis Dionne <ldionne.2@gmail.com> wrote:
Also, there are several caveats. First, because we're using lambdas, it means that the function's result can't be used in a constant expression. This is IMO a stupid limitation of the language and one that should be lifted. If someone is willing to team up with me for writing a proposal, I would be interested.
I, too, really want this both for Hana and for much more general uses. IMO, it's a really unfortunate limitation. I investigated this not too long ago to see just what the difficulties would be if one were to try to get constexpr lambdas, and there is some hairiness that wasn't immediately apparent to me at the time:

If you expect the results to be usable in all places that a constexpr could be used, such as in an argument to a template or anywhere in a function's declaration other than a function's default argument (which is pretty important in the world of metaprogramming), then standardization of constexpr lambdas is not quite as easy as it may initially seem. For a related issue, see: https://isocpp.org/files/papers/n4019.html#1607

In short, the standard currently bans the situations mentioned here due to implementation complexity. The resolution of this in C++14 explicitly made it clear that lambdas should never appear in a signature, which was actually the original intent, though it was not properly specified in C++11. I contacted Daniel Krügler about this and he said there was consensus that the issue could/should be addressed in a future standard, though that is still something that needs to be addressed.

Assuming these restrictions are lifted or relaxed, SFINAE would then also need to be addressed, as lambdas would be able to appear in template declarations. My assumption there is that SFINAE just wouldn't apply to the body of the lambda (i.e. it would cause a hard error during substitution rather than just a substitution failure). -- -Matt Calabrese
On 3/10/2015 4:31 PM, Louis Dionne wrote:
Zach Laine <whatwasthataddress <at> gmail.com> writes:
On Sun, Mar 8, 2015 at 2:56 PM, Eric Niebler <eniebler <at> boost.org> wrote:
[...]
As for lazy branches, that can also be handled simply by let/defer:
    // Test that the unselected branch does not get evaluated:
    template<typename T>
    using test_lazy_if_ =
        let<lazy::if_<std::is_void<T>, T, defer<std::pair, T>>>;

    static_assert(std::is_same<test_lazy_if_<void>, void>::value, "");
And don't forget this part! :)
Oops, I forgot. Ok, so lazy branches in Hana work as follows. First, you use the `eval_if` function, which takes a condition and two branches in the form of lambdas. But that's not all; the lambdas must accept a parameter (usually called _), which can be used to defer the compile-time evaluation of expressions as required. An example:
    template <typename N>
    auto fact(N n) {
        return hana::eval_if(n == hana::int_<0>,
            [](auto _) { return hana::int_<1>; },
            [=](auto _) { return n * fact(_(n) - hana::int_<1>); }
        );
    }
What happens here is that `eval_if` will pass an identity function to the selected branch. Hence, `_(x)` is always the same as `x`, but the compiler can't tell until the lambda has been called! Hence, the compiler has to wait before it instantiates the body of the lambda and no infinite recursion happens.
The identity function hack is a little unfortunate, but I understand the need for it. What if the two branches return different types? It seems like it /should/ work when passed a runtime int like 11, as well as when passed a compile-time integral constant wrapper like std::integral_constant<int, 11>. Is that how it works? That would be nifty. For reference, with Meta:

    template<typename N>
    struct fact
      : let<lazy::if_c<(N::value > 0),
            lazy::multiplies<N, defer<fact, dec<N>>>,
            meta::size_t<1>>>
    {};

Obviously only a compile-time computation. <snip>
Also, there are several caveats. First, because we're using lambdas, it means that the function's result can't be used in a constant expression.
:-(
The second caveat is that compilers currently have several bugs regarding deeply nested lambdas with captures.
Meh. Bugs can be fixed.
Finally, it means that conditionals can't be written directly inside unevaluated contexts.
:-( <snip> Using lambdas for lazy conditionals brings limitations and pitfalls. Would you consider adding a pure type-level alternative for people doing straight metaprogramming? -- Eric Niebler Boost.org http://www.boost.org
Eric Niebler <eniebler <at> boost.org> writes:
[...]
The identity function hack is a little unfortunate, but I understand the need for it.
It is, but at the same time, when you start mastering it, it becomes a powerful way of controlling exactly what should be deferred and what shouldn't. I agree that for 80% of the use cases, it would be better if the compiler could just "figure it out" by itself.
What if the two branches return different types? It seems like it /should/ work when passed a runtime int like 11, as well as when passed a compile-time integral constant wrapper like std::integral_constant<int,11>. Is that how it works? That would be nifty.
Having both branches return objects of different types is definitely possible. Actually, the whole thing was designed to make this possible. However, in order for the branches to be able to return unrelated objects, i.e. ones without a common_type, the condition has to be known at compile-time (e.g. an IntegralConstant). If the eval_if is passed a runtime Logical (say a bool or an int), then both branches must return objects with a common_type. For example:

Works:

    std::string s = eval_if(true_, // IntegralConstant condition
        [](auto _) { return std::string{"abcd"}; },
        [](auto _) { return 1; }
    );

    double d = eval_if(true, // bool condition
        [](auto _) { return 3.4; },
        [](auto _) { return 1; }
    );

Fails (std::string and int don't share a common type):

    eval_if(true,
        [](auto _) { return std::string{"abcd"}; },
        [](auto _) { return 1; }
    );
[...]
Using lambdas for lazy conditionals brings limitations and pitfalls. Would you consider adding a pure type-level alternative for people doing straight metaprogramming?
I think that might be a good idea. More generally, it might be possible to provide a mini EDSL for lambda expressions that could look like Proto's transforms? I'm really just thinking out loud, but what do you think? For simple laziness, Hana also has a tool called `lazy`, which allows creating function calls that can be evaluated later. For example:

    // nothing is evaluated nor instantiated at this point
    auto lazy_result = lazy(f)(x1, x2, x3);

    // everything happens now
    auto result = eval(lazy_result);

Lazy is a Monad, which allows chaining lazy computations as shown in [1]. However, Lazy is not integrated with eval_if right now, which makes it pretty much useless. The reason it is not integrated is that I think it can also be made a Comonad, and I wanted to see how lazy branching might be generalizable to arbitrary Comonads. I haven't had the time to explore this further for now though. Regards, Louis

[1]: http://ldionne.github.io/hana/structboost_1_1hana_1_1_lazy.html
On 3/11/2015 8:00 AM, Louis Dionne wrote:
Eric Niebler <eniebler <at> boost.org> writes:
[...]
The identity function hack is a little unfortunate, but I understand the need for it.
It is, but at the same time, when you start mastering it, it becomes a powerful way of controlling exactly what should be deferred and what shouldn't. <snip>
Yes. Likewise for meta::defer.
What if the two branches return different types? [...]
Having both branches return objects of different types is definitely possible. Actually, the whole thing was designed to make this possible. However, in order for the branches to be able to return unrelated objects, i.e. ones without a common_type, the condition has to be known at compile-time (e.g. an IntegralConstant). <snip>
Naturally.
[...]
Using lambdas for lazy conditionals brings limitations and pitfalls. Would you consider adding a pure type-level alternative for people doing straight metaprogramming?
I think that might be a good idea. More generally, It might be possible to provide a mini EDSL for lambda expressions that could look like Proto's transforms? I'm really just thinking out loud, but what do you think?
I wasn't thinking of a (runtime) EDSL like Proto. I was thinking of a compile-time one like mpl::lambda. I know that's not the Hana way, but Hana is bumping up against the limits of the language.
For simple laziness, Hana also has a tool called `lazy`, which allows creating function calls that can be evaluated later. For example:
// nothing is evaluated nor instantiated at this point auto lazy_result = lazy(f)(x1, x2, x3);
// everything happens now auto result = eval(lazy_result);
This is *exactly* the runtime equivalent of meta::defer. And the example at the bottom of [1] is exactly like my "test_lazy_if_" example a few posts back. Needless to say, I like this approach to laziness. It is simple and intuitive (to me). :-)
Lazy is a Monad, which allows chaining lazy computations as shown in [1]. However, Lazy is not integrated with eval_if right now, which makes it pretty much useless. The reason why it is not integrated is because I think it can also be made a Comonad, and I wanted to see how lazy branching might be generalizable to arbitrary Comonads. I haven't had the time to explore this further for now though.
FWIW, I think this is important. What are the benefits of eval_if over the if_ with lazy branches that you show at the bottom of [1]?
[1]: http://ldionne.github.io/hana/structboost_1_1hana_1_1_lazy.html
-- Eric Niebler Boost.org http://www.boost.org
Eric Niebler <eniebler <at> boost.org> writes:
[...]
I wasn't thinking of a (runtime) EDSL like Proto. I was thinking of a compile-time one like mpl::lambda. I know that's not the Hana way, but Hana is bumping up against the limits of the language.
I think it might be nice. Something like

    auto f = hana::lambda<
        // similar to mpl lambdas, but better
    >;

where `lambda` is a variable template creating a Hana Metafunction inline. I do think it could simplify some type-level programming; it's worth exploring. Thanks for the suggestion.
[...]
Lazy is a Monad, which allows chaining lazy computations as shown in [1]. However, Lazy is not integrated with eval_if right now, which makes it pretty much useless. The reason why it is not integrated is because I think it can also be made a Comonad, and I wanted to see how lazy branching might be generalizable to arbitrary Comonads. I haven't had the time to explore this further for now though.
FWIW, I think this is important.
I think so too, and it is on my (super long) todo list. Unfortunately, I'm only a man not an army. I'll try to have this in time for the formal review, which is aimed for April.
What are the benefits of eval_if over the if_ with lazy branches that you show at the bottom of [1]?
The if_ with lazy branches won't work if the condition is known at runtime and the branches have incompatible types. For example:

    eval(if_(true,
        lazy([x{1}] { return 0; }),
        lazy([x{"abcd"}] { return 0; })
    ))

is actually somewhat equivalent to

    auto branch = if_(true,
        lazy([x{1}] { return 0; }),
        lazy([x{"abcd"}] { return 0; })
    );
    eval(branch);

However, since both lambdas have captures, their types are not compatible. Since the condition is not an IntegralConstant, it will fail inside the if_. Regards, Louis
Louis Dionne <ldionne.2 <at> gmail.com> writes:
Eric Niebler <eniebler <at> boost.org> writes:
[...]
Lazy is a Monad, which allows chaining lazy computations as shown in [1]. However, Lazy is not integrated with eval_if right now, which makes it pretty much useless. The reason why it is not integrated is because I think it can also be made a Comonad, and I wanted to see how lazy branching might be generalizable to arbitrary Comonads. I haven't had the time to explore this further for now though.
FWIW, I think this is important.
I think so too, and it is on my (super long) todo list. Unfortunately, I'm only a man not an army. I'll try to have this in time for the formal review, which is aimed for April.
I'm now fairly sure laziness can be expressed as a Comonad. I explained it on my (new!) blog [1]. In summary, it would be correct to have

    eval_if :: Comonad w => Bool -> w a -> w a -> a

In other words, if_(cond, then, else) would take a Boolean condition, a then branch which is actually an arbitrary Comonad, and an else branch which is also an arbitrary Comonad. It would then call `extract` on the Comonad that has been selected by the condition. One instance of this would be with the Lazy Comonad, which would give rise to the well-known eval_if, since `extract`ing a Lazy value is just evaluating it:

    eval_if :: Bool -> Lazy a -> Lazy a -> a

Whether it's useful for something other than lazy computations is still unclear to me, but it's worth exploring. Louis

[1]: http://ldionne.com
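The "branching is extracting from the selected Comonad" idea can be modelled at runtime with a toy lazy thunk (all names below are invented for this sketch; this is not Hana's design, which the message above leaves as an open question):

```cpp
// A toy Comonad: a lazy thunk whose extract() forces the deferred call.
template <typename F>
struct lazy_thunk {
    F f;
    auto extract() const -> decltype(f()) { return f(); }  // Comonad's extract
};

template <typename F>
lazy_thunk<F> make_lazy(F f) { return {f}; }

// eval_if extracts from whichever wrapped value the condition selects;
// only that thunk's computation actually runs.
template <typename W1, typename W2>
auto eval_if(bool c, W1 t, W2 e) -> decltype(t.extract()) {
    return c ? t.extract() : e.extract();
}

// Usage sketch: the false thunk is never evaluated.
inline int selected_branch() {
    return eval_if(true,
        make_lazy([] { return 1; }),
        make_lazy([] { return 2; }));
}
```

Here the runtime `bool` forces both branches to share a result type, which mirrors the compile-time/runtime distinction discussed earlier in the thread.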
Eric Niebler <eniebler <at> boost.org> writes:
On 3/7/2015 6:41 AM, Louis Dionne wrote: [...]
FWIW, this implementation will quickly run afoul of core issue 1430[*], which I know you're aware of because you commented on a gcc bug about it. You should have a look at how I avoided the problem in Meta. meta::quote is somewhat subtle.
[*] http://open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1430
Lol, thanks. I've no such problems now with Hana (or I'm not aware of it).
In contrast, here is (in essence) how Meta defines quote:
    template <template <typename ...> class f>
    struct quote {
        template <typename ...xs>
        using apply = f<xs...>;
    };
In Meta, the template aliases are used extensively, and types evaluate directly to their results. Things are done eagerly. There are no "thunks".
Here's my understanding of how Meta works:
Meta still uses the classical concept of a metafunction with a nested type, but it is hidden behind `meta::eval`.
No. Metafunctions in Meta are incidental, not fundamental. If it were possible to specialize template aliases, there would be no nested ::type anywhere in Meta -- it's used as an implementation detail only for some of the algorithms. I use some tricks to *implement* Meta that I don't use when I'm *using* Meta. When using Meta, laziness is best (IMO) achieved with defer, lambda, and let (although nothing is stopping someone from creating metafunctions and using the meta::lazy namespace in a more "traditional" way).
Ok, I understand. It's mostly a philosophical difference, but one that's worth noting.
Of course, when writing a lambda or a let expression, evaluation needs to be deferred until the substitutions are made. I use a template called "defer" for that. It's only intended for use by let and lambda. Although it does give things a nested "::type", it doesn't strictly need to; indeed when I first added it, it didn't.
Anyway, that may seem like a subtle difference, but it feels like a sea change to me. I find it much nicer to work this way.
I don't find that defaulting to eager metafunctions makes for nicer work. It has been, at least for me, the source of a lot of pain, because I was required to write helper metafunctions to branch lazily.
I never need to use helper metafunctions. Meta's expression evaluator handles laziness.
Quick question; is the expression evaluator compile-time efficient? In my experience, such expression evaluators end up being expensive at compile-time, especially if you sprinkle all your code with it. Of course, this _might_ only be relevant to hardcore computations, and in that case Hana will be slower anyway (with current compiler technology) so I'm just saying you might want to benchmark it for your own curiosity.
[...]
I feel like Meta's approach to laziness hasn't been understood. Here, for instance, is a SFINAE-friendly implementation of std::common_type; it has a ::type when a common type exists, but otherwise it doesn't. When it was implemented with metafunctions it was a huge mess. With meta::defer and meta::let, it's simple and straightforward.
(NOTE: No metafunctions, no eval except to define common_type_t.)
[...]
What's interesting here is that you get a SFINAE-friendly common_type for free. Since Meta's expression evaluator is handling laziness, it can be SFINAE-friendly itself. Nowhere do you need to test whether a computation has succeeded or failed. If any substitution failure occurs in an immediate context, the whole computation is aborted. It just falls out of the lambda/defer interaction.
It's an interesting feature, but I think even better would be to give people the choice of whether they want SFINAE-friendliness or not. Anyway, implementing common_type turned out to be quite challenging until I realized I could use the Maybe Monad to encode SFINAE; common_type is then just a monadic fold with the Maybe Monad. Here you go:

    using namespace boost::hana;

    auto builtin_common_t = sfinae([](auto t, auto u) -> decltype(type<
        std::decay_t<decltype(true ? traits::declval(t) : traits::declval(u))>
    >) { return {}; });

    template <typename ...T>
    struct common_type { };

    template <typename T, typename U>
    struct common_type<T, U>
        : std::conditional_t<
            std::is_same<std::decay_t<T>, T>{} && std::is_same<std::decay_t<U>, U>{},
            decltype(builtin_common_t(type<T>, type<U>)),
            common_type<std::decay_t<T>, std::decay_t<U>>
        >
    { };

    template <typename T1, typename ...Tn>
    struct common_type<T1, Tn...>
        : decltype(foldlM<Maybe>(tuple_t<Tn...>, type<std::decay_t<T1>>,
                                 sfinae(metafunction<common_type>)))
    { };

    template <typename ...Ts>
    using common_type_t = typename common_type<Ts...>::type;

    // tests
    static_assert(std::is_same<
        common_type_t<char, short, char, short>, int
    >{}, "");

    static_assert(std::is_same<
        common_type_t<char, double, short, char, short, double>, double
    >{}, "");

    static_assert(std::is_same<
        common_type_t<char, short, float, short>, float
    >{}, "");

    static_assert(
        sfinae(metafunction<common_type>)(type<int>, type<int>, type<int*>) == nothing
    , "");

Of course, you can enable/disable SFINAE-friendliness just by using a non-monadic fold and removing the usages of `sfinae`:

    auto builtin_common_t = [](auto t, auto u) -> decltype(type<
        std::decay_t<decltype(true ? traits::declval(t) : traits::declval(u))>
    >) { return {}; };

    template <typename ...T>
    struct common_type { };

    template <typename T, typename U>
    struct common_type<T, U>
        : std::conditional_t<
            std::is_same<std::decay_t<T>, T>{} && std::is_same<std::decay_t<U>, U>{},
            decltype(builtin_common_t(type<T>, type<U>)),
            common_type<std::decay_t<T>, std::decay_t<U>>
        >
    { };

    template <typename T1, typename ...Tn>
    struct common_type<T1, Tn...>
        : decltype(foldl(tuple_t<Tn...>, type<std::decay_t<T1>>, metafunction<common_type>))
    { };

I'll try to prepare another challenge soon, but I'm short on time today.

Regards,
Louis
On 3/10/2015 6:46 AM, Louis Dionne wrote:
Eric Niebler <eniebler <at> boost.org> writes:
On 3/7/2015 6:41 AM, Louis Dionne wrote: [...]
FWIW, this implementation will quickly run afoul of core issue 1430[*], which I know you're aware of because you commented on a gcc bug about it. You should have a look at how I avoided the problem in Meta. meta::quote is somewhat subtle.
[*] http://open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1430
Lol, thanks. I've no such problems now with Hana (or I'm not aware of it).
It's possible that your "metafunction" variable template somehow doesn't hit this problem. I confess I don't really understand core's problem with template aliases here. <snip>
No. Metafunctions in Meta are incidental, not fundamental. If it were possible to specialize template aliases, there would be no nested ::type anywhere in Meta -- it's used as an implementation detail only for some of the algorithms. I use some tricks to *implement* Meta that I don't use when I'm *using* Meta. When using Meta, laziness is best (IMO) achieved with defer, lambda, and let (although nothing is stopping someone from creating metafunctions and using the meta::lazy namespace in a more "traditional" way).
Ok, I understand. It's mostly a philosophical difference, but one that's worth noting.
Not philosophical, since it changes how the library is used idiomatically. <snip>
Quick question; is the expression evaluator compile-time efficient? In my experience, such expression evaluators end up being expensive at compile-time, especially if you sprinkle all your code with it. Of course, this _might_ only be relevant to hardcore computations, and in that case Hana will be slower anyway (with current compiler technology) so I'm just saying you might want to benchmark it for your own curiosity.
I haven't benchmarked it. It isn't free certainly, but I would expect it to be cheaper than an MPL lambda invocation since it doesn't need to keep track of whether/where substitutions were made or check for nested ::type aliases. And it is a cost that is paid only where laziness is needed. If laziness is needed in a "hardcore computation", nothing is preventing users from using the meta::lazy namespace as if it were boost::mpl and dealing with nested ::type aliases. I could make the expression evaluator faster by making placeholder types special, but I've avoided doing that for now. I like the ability to create named placeholders in-situ: lambda<class XYZ, lazy::eval<std::add_pointer<XYZ>>>; <snip>
What's interesting here is that you get a SFINAE-friendly common_type for free. Since Meta's expression evaluator is handling laziness, it can be SFINAE-friendly itself. Nowhere do you need to test whether a computation has succeeded or failed. If any substitution failure occurs in an immediate context, the whole computation is aborted. It just falls out of the lambda/defer interaction.
It's an interesting feature, but I think even better would be to give people the choice of whether they want SFINAE-friendliness or not.
They have the choice. They can move can-fail computations into non-immediate contexts. In the common_type example, that would mean changing:

    template<typename T, typename U>
    using builtin_common_t =
        decltype(true? std::declval<T>() : std::declval<U>());

to something like:

    template<typename T, typename U>
    struct builtin_common
    {
        using type = decltype(true? std::declval<T>() : std::declval<U>());
    };

    template<typename T, typename U>
    using builtin_common_t = eval<builtin_common<T, U>>;

This technique can be made into a general utility like defer that turns SFINAEs into hard errors. Might be useful for debugging.
Anyway, implementing common_type turned out to be quite challenging until I realized I could use the Maybe Monad to encode SFINAE, and then common_type was just a monadic fold with the Maybe Monad. Here you go:
using namespace boost::hana;
auto builtin_common_t = sfinae([](auto t, auto u) -> decltype(type< std::decay_t<decltype(true ? traits::declval(t) : traits::declval(u))> >) { return {}; });
This is the price of unifying MPL and Fusion.
template <typename ...T> struct common_type { };
template <typename T, typename U> struct common_type<T, U> : std::conditional_t<std::is_same<std::decay_t<T>, T>{} && std::is_same<std::decay_t<U>, U>{}, decltype(builtin_common_t(type<T>, type<U>)), common_type<std::decay_t<T>, std::decay_t<U>> > { };
template <typename T1, typename ...Tn> struct common_type<T1, Tn...> : decltype(foldlM<Maybe>(tuple_t<Tn...>, type<std::decay_t<T1>>, sfinae(metafunction<common_type>))) { };
template <typename ...Ts> using common_type_t = typename common_type<Ts...>::type;
// tests static_assert(std::is_same< common_type_t<char, short, char, short>, int>{} , "");
static_assert(std::is_same< common_type_t<char, double, short, char, short, double>, double>{} , "");
static_assert(std::is_same< common_type_t<char, short, float, short>, float >{}, "");
static_assert( sfinae(metafunction<common_type>)(type<int>, type<int>, type<int*>) == nothing , "");
<snip> More or less the same as the Meta implementation, aside from the boxing/unboxing, which is distracting when doing a pure compile-time computation, IMO. (I've also never been clear on monadic folds, but that's my thing.) The trade-off is that Hana is usable at runtime and Meta is not. Fair 'nuf. At least for light-to-medium metaprogramming, either Meta or Hana serves.

Thanks,

--
Eric Niebler
Boost.org
http://www.boost.org
I never need to use helper metafunctions. Meta's expression evaluator handles laziness.
Yes you do use helper metafunctions. Look at your implementation of fold. With a library built with full laziness, you should be able to build a SFINAE-friendly fold like this:

    template<class Iterable, class State, class Fun>
    using fold = eval_if<
        empty<Iterable>,
        State,
        fold<pop_front<Iterable>, lazy_apply<Fun, State, front<Iterable>>, Fun>
    >;
Is something like this possible with the Meta library?

Paul

--
View this message in context: http://boost.2283326.n4.nabble.com/mpl-multiset-tp4672187p4673091.html
Sent from the Boost - Dev mailing list archive at Nabble.com.
On 3/12/2015 1:47 PM, pfultz2 wrote:
I never need to use helper metafunctions. Meta's expression evaluator handles laziness.
Yes you do use helper metafunctions. Look at your implementation of fold.
You have a point. There are lots of dirty tricks in the implementation of Meta to work around language shortcomings and improve compile-times. That's not generally how Meta is *used*, but when I said "never" I was being too strong.
With a library built with full laziness, you should be able to build a SFINAE-friendly fold like this:
    template<class Iterable, class State, class Fun>
    using fold = eval_if<
        empty<Iterable>,
        State,
        fold<pop_front<Iterable>, lazy_apply<Fun, State, front<Iterable>>, Fun>
    >;
Template aliases cannot be self-referential. The alias isn't in scope until the semicolon. It's a bit of a bummer.
Is something like this possible with Meta library?
Sadly not, at least not right now. For some things, like recursion and specialization, there's no escaping class templates, and hence metafunctions. That's OK; Meta works fine with metafunctions: just use meta::lazy as if it were the MPL. The inconsistency bugs me, though.

I'd very much like to add support for recursive lambdas. I can imagine the factorial example looking like this:

    using factorial_ =
        lambda_rec<_a,
            lazy::if_c<lazy::greater<_a, meta::size_t<0>>,
                lazy::multiplies<N, lazy::apply<_self, lazy::dec<_a>>>,
                meta::size_t<1>>>;

    template<std::size_t N>
    using factorial = apply<factorial_, meta::size_t<N>>;

The _self placeholder recurses to the nearest enclosing lambda_rec. It shouldn't be too hard to implement, actually. A recursive fold could be implemented this way (not that it necessarily would be). And some day I'd like to implement let_rec, which would let you perform evaluations in terms of two or more mutually recursive lambdas. But that's crazy-town, and I don't see a huge need for it.

--
Eric Niebler
Boost.org
http://www.boost.org
On 3/12/2015 3:55 PM, Eric Niebler wrote:
I'd very much like to add support for recursive lambdas. I can imagine the factorial example looking like this:
    using factorial_ =
        lambda_rec<_a,
            lazy::if_c<lazy::greater<_a, meta::size_t<0>>,
                lazy::multiplies<N, lazy::apply<_self, lazy::dec<_a>>>,
                meta::size_t<1>>>;

    template<std::size_t N>
    using factorial = apply<factorial_, meta::size_t<N>>;
Well, that was easy:

    template<typename ...Ts>
    struct lambda_rec
    {
        template<typename ...Us>
        using apply = meta::apply<
            let<var<_self, lambda_rec>, lambda<Ts...>>, Us...>;
    };

    using factorial_ =
        lambda_rec<_a,
            lazy::if_<lazy::greater<_a, meta::size_t<0>>,
                lazy::multiplies<_a, lazy::apply<_self, lazy::dec<_a>>>,
                meta::size_t<1>>>;

    template<std::size_t N>
    using factorial = apply<factorial_, meta::size_t<N>>;

    static_assert(factorial<0>::value == 1, "");
    static_assert(factorial<1>::value == 1, "");
    static_assert(factorial<2>::value == 2, "");
    static_assert(factorial<3>::value == 6, "");
    static_assert(factorial<4>::value == 24, "");

Metafunctions? We don't need no stinkin' metafunctions! :-)

--
Eric Niebler
Boost.org
http://www.boost.org
Eric Niebler <eniebler <at> boost.org> writes:
[...]
    using factorial_ =
        lambda_rec<_a,
            lazy::if_<lazy::greater<_a, meta::size_t<0>>,
                lazy::multiplies<_a, lazy::apply<_self, lazy::dec<_a>>>,
                meta::size_t<1>>>;

    template<std::size_t N>
    using factorial = apply<factorial_, meta::size_t<N>>;
Metafunctions? We don't need no stinkin' metafunctions!
FWIW, here's how you can implement a factorial with lazy metafunctions, here done with MPL11:

    template <typename N>
    struct fact
        : if_c<N::value == 0,
            ullong<1>,
            mult<N, fact<pred<N>>>
        >
    { };

Lambda expressions are a nice hammer, but a factorial metafunction is seemingly not a nail.

Regards,
Louis
On 3/12/2015 5:05 PM, Louis Dionne wrote:
Eric Niebler <eniebler <at> boost.org> writes:
[...]
    using factorial_ =
        lambda_rec<_a,
            lazy::if_<lazy::greater<_a, meta::size_t<0>>,
                lazy::multiplies<_a, lazy::apply<_self, lazy::dec<_a>>>,
                meta::size_t<1>>>;

    template<std::size_t N>
    using factorial = apply<factorial_, meta::size_t<N>>;
Metafunctions? We don't need no stinkin' metafunctions!
FWIW, here's how you can implement a factorial with lazy metafunctions, here done with MPL11:
    template <typename N>
    struct fact
        : if_c<N::value == 0,
            ullong<1>,
            mult<N, fact<pred<N>>>
        >
    { };
Lambda expressions are a nice hammer, but a factorial metafunction is seemingly not a nail.
Ha! That works in Meta too:

    using namespace meta;

    template<typename N>
    struct factorial
      : eval<
            if_c<N::value == 0,
                meta::size_t<1>,
                lazy::multiplies<N, factorial<dec<N>>>>>
    {};

But now that I think about it, there *is* a difference between MPL-style lazy metafunctions and meta::lazy that could complicate these uses of Meta. The metafunctions in meta::lazy don't assume their arguments are themselves lazy metafunctions, which would make lazy metafunction composition difficult. It's really just not the Meta way.

--
Eric Niebler
Boost.org
http://www.boost.org
On 3/12/2015 7:48 PM, Eric Niebler wrote:
The metafunctions in meta::lazy don't assume their arguments are themselves lazy metafunctions, which would make lazy metafunction composition difficult. It's really just not the Meta way.
FWIW, I decided this was a legitimate shortcoming of Meta's approach, so I changed it. Now when you access the nested ::type of a lazy:: computation, it evaluates all nested lazy:: computations[*]. That makes composition much nicer. For instance:

    template<typename N>
    struct factorial
      : eval<
            if_c<N::value == 0,
                meta::size_t<1>,
                lazy::multiplies<N, factorial<lazy::dec<N>>>>>
    {};

This behaves differently than MPL[11]. Rather than passing a thunk representing dec<N> to factorial, lazy::dec<N> gets fully evaluated when the ::type of the outer lazy::multiplies is accessed. That way, factorial never gets instantiated with anything that isn't an Integral Constant. This is much more The Meta Way: types, not thunks.

--
Eric Niebler
Boost.org
http://www.boost.org
Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
[...] +1
While I appreciate the work on Hana, I believe a pure meta-programming (pure functional) C++11/C++14 library would make things easier.
What about enriching Eric's library in a GSoC project?
Could you please elaborate on why you think it is better to have a pure type-only metaprogramming library (if I understood you correctly)? What you think is important, as it could lead to improvements in Hana. Louis
On 24/02/15 16:23, Louis Dionne wrote:
Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:
[...] +1
While I appreciate the work on Hana, I believe a pure meta-programming (pure functional) C++11/C++14 library would make things easier.
What about enriching Eric's library in a GSoC project?
Could you please elaborate on why you think it is better to have a pure type-only metaprogramming library (if I understood you correctly)? What you think is important, as it could lead to improvements in Hana.
Either I expressed myself incorrectly or you misunderstood me. When I said easier, I was thinking of a new library compared to a library that has to preserve MPL compatibility. BTW, I like your library a lot. However, I don't think it would be easy to introduce its design into the C++ standard library. IMHO, pure metaprogramming additions should be easier to introduce. Of course, I can be wrong. Vicente
2015-02-24 18:46 GMT-03:00 Vicente J. Botet Escriba < vicente.botet@wanadoo.fr>:
Either I expressed myself incorrectly or you misunderstood me. When I said easier, I was thinking of a new library compared to a library that has to preserve MPL compatibility.

BTW, I like your library a lot. However, I don't think it would be easy to introduce its design into the C++ standard library. IMHO, pure metaprogramming additions should be easier to introduce. Of course, I can be wrong.
Do you see anything so broken within MPL's design that justifies dropping backwards compatibility and redesigning a new library from scratch? Is it really necessary, considering the existence of other modern options which already focus on a strict C++14 approach, such as Hana?

IMO the fact that it mimics the standard library is very positive, for it makes the transition much easier for someone used to the STL who is first experimenting with metaprogramming. It is surely not the best design for an essentially functional library, but I don't see why it could not provide various interfaces, one of them being iterators, i.e. preserving its current interface.

Taking a closer look at the code, the bulk of it really is just dealing with broken compilers. On one hand, I totally agree that getting rid of (some of) them would simplify things a lot, counting on the fact that anyone who can put up with a compiler as outdated as MSVC 6 could just as well tolerate using a not-so-outdated version of Boost. On the other hand, I think this is one of the greatest achievements of MPL, and I would resist the idea at first, until it proves really necessary.

In any case, I fail to see a very good reason for deprecating MPL in favor of a completely new design, assuming the desire to also maintain a pure metaprogramming library, that is.

*Bruno C. O. Dutra*
On 24 Feb 2015 at 20:06, Bruno Dutra wrote:
Do you see anything so broken within MPL's design that justifies dropping backwards compatibility and redesigning a new library from scratch? Is it really necessary, considering the existence of other modern options which already focus on a strict C++14 approach, such as Hana?
I'm curious: could Hana be wrapped with a MPL compatible shim API? As in, would that be a sane or insane idea? Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
Niall Douglas <s_sourceforge <at> nedprod.com> writes:
On 24 Feb 2015 at 20:06, Bruno Dutra wrote:
Do you see anything so broken within MPL's design that justifies dropping backwards compatibility and redesigning a new library from scratch? Is it really necessary, considering the existence of other modern options which already focus on a strict C++14 approach, such as Hana?
I'm curious: could Hana be wrapped with a MPL compatible shim API? As in, would that be a sane or insane idea?
I think it could, but some parts of it would be a bit hard to implement. Specifically, the fact that Hana does not use the concept of an iterator means that we would have to emulate iterators in some other way. Also, the lambda expression part would have to be the same as in MPL. That being said, I'm not sure what the gain would be, except perhaps compile-time performance. Louis
On 25 Feb 2015 at 15:11, Louis Dionne wrote:
That being said, I'm not sure what the gain would be, except perhaps compile-time performance.
1. Stops someone else reinventing the wheel. 2. Provides users of MPL a bridge into Hana. 3. A useful functional unit testing of Hana. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
Niall Douglas <s_sourceforge <at> nedprod.com> writes:
On 25 Feb 2015 at 15:11, Louis Dionne wrote:
That being said, I'm not sure what the gain would be, except perhaps compile-time performance.
1. Stops someone else reinventing the wheel.
2. Provides users of MPL a bridge into Hana.
3. A useful functional unit testing of Hana.
Fine, I did it: http://goo.gl/rRijB1 It's incomplete and it does not support iterators, but it still supports all the algorithms I have ever used in the MPL (but not the intrinsics). Most intrinsic sequence operations could also be added. Louis
Louis, have you considered providing this MPL bridge within Hana so as to ease the transition for new users coming from MPL? Of course using it would be suboptimal and should be discouraged, but I think it could make it easier to port MPL code to Hana, for it could be done gradually. Bruno
Bruno Dutra <brunocodutra <at> gmail.com> writes:
Louis,
have you considered providing this MPL bridge within Hana so as to ease the transition for new users coming from MPL? Of course using it would be suboptimal and should be discouraged, but I think it could make it easier to port MPL code to Hana, for it could be done gradually.
We're thinking about it. I'm not exactly sure how it would be made available yet, but it probably will. Louis
I am still a bit confused about the logic library idea that I am pursuing. I would like to build it from scratch all over again so that it can have good performance and minimal dependencies, but would it be better to develop it only for C++14 and the future, or to keep it backward compatible as well? Also, are there any requirements here in Boost for a logic metaprogramming library, and if there are, what should my design goals be?

Best Wishes
Ganesh Prasad

On 27 February 2015 at 20:46, Louis Dionne <ldionne.2@gmail.com> wrote:
Bruno Dutra <brunocodutra <at> gmail.com> writes:
Louis,
have you considered providing this MPL bridge within Hana so as to ease the transition for new users coming from MPL? Of course using it would be suboptimal and should be discouraged, but I think it could make it easier to port MPL code to Hana, for it could be done gradually.
We're thinking about it. I'm not exactly sure how it would be made available yet, but it probably will.
Louis
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
On 25/02/15 00:06, Bruno Dutra wrote:

2015-02-24 18:46 GMT-03:00 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>:

Either I expressed myself incorrectly or you misunderstood me. When I said easier, I was thinking of a new library compared to a library that has to preserve MPL compatibility.

BTW, I like your library a lot. However, I don't think it would be easy to introduce its design into the C++ standard library. IMHO, pure metaprogramming additions should be easier to introduce. Of course, I can be wrong.

Do you see anything so broken within MPL's design that justifies dropping backwards compatibility and redesigning a new library from scratch?

Don't misunderstand me. I'm not against a GSoC project that works on extensions of the MPL library. It's just that I don't want to mentor it. I prefer to spend my time on a meta library that can take advantage of the new C++11/C++14 features without the constraint of backward compatibility.
Is it really necessary, considering the existence of other modern options which already focus on a strict C++14 approach, such as Hana?

I find Eric's Meta library just as modern as Louis's Hana library. The scope is just different.
IMO the fact it mimics the standard library is very positive, for it makes the transition much easier for someone used to the STL who is first experimenting with metaprogramming. It sure is not the best design for an essentially functional library, but I don't see why it could not provide various interfaces, one of them being iterators, i.e. preserving its current interface.

I believe that a metaprogramming library should be based more on a pure functional programming design than on the STL. This doesn't mean that the names shouldn't be adapted to the C++ world.

Taking a closer look at the code, the bulk of it really is just dealing with broken compilers. On one hand, I totally agree that getting rid of (some of) them would simplify things a lot, counting on the fact that anyone who can put up with a compiler as outdated as MSVC 6 could just as well tolerate using a not-so-outdated version of Boost. On the other hand, I think this is one of the greatest achievements of MPL and I would resist the idea at first, until it proves really necessary.
In any case, I fail to see a very good reason for deprecating MPL in favor of a completely new design, assuming the desire to also maintain a pure metaprogramming library that is.
It is not my intention to deprecate MPL. I'm just looking for some meta-programming utilities that can be proposed to the C++ standard. Vicente
IMO the fact it mimics the standard library is very positive, for it makes transition much easier for someone used to the STL first experimenting with metaprogramming.
Actually, that is what helped me when I started using MPL. However, the iterator-based approach to metaprogramming has a huge impact on performance. There is no such thing as random access in TMP. Since MPL is a core component of many libraries, it is important to have a library that is fast. A redesigned MPL doesn't need to be completely functional, as that's foreign to many C++ programmers. However, it should be lightweight and familiar. And being familiar doesn't necessarily require an iterator-based approach.

Paul
2015-02-25 13:42 GMT-03:00, Ganesh Prasad <sir.gnsp@gmail.com>:
Porting metalog to Hana as an add-on might be a good idea, but I quite disagree from the user's point of view. Hana is a beautifully designed library and it does precisely what it's designed for. In short, it has good modularity. And, indeed, libraries are supposed to be highly modular.
So, it might be better to keep metalog as a separate entity and provide good interoperability between Hana and metalog.
Exactly. I don't have any experience with Hana yet and, hence, can't know whether logic programming would suit it well, so my initial intention is just to provide a use case for it and learn some C++14 in the process. As I said, right now I'm convinced metalog could be a nice concept for an enhancement of MPL instead.

2015-02-25 14:02 GMT-03:00, pfultz2 <pfultz2@yahoo.com>:
IMO the fact it mimics the standard library is very positive, for it makes transition much easier for someone used to the STL first experimenting with metaprogramming.
Actually, that is what helped me when I started using MPL. However, the iterator-based approach to metaprogramming has a huge impact on performance. There is no such thing as random access in TMP.

Since MPL is a core component of many libraries, it is important to have a library that is fast. A redesigned MPL doesn't need to be completely functional, as that's foreign to many C++ programmers. However, it should be lightweight and familiar. And being familiar doesn't necessarily require an iterator-based approach.
By "complete backwards compatibility" I don't mean a redesigned MPL should rely on iterators as a core functionality as it does today. Rather, it could profit much more from an efficient design better suited to its functional nature, on top of which iterators and such could be emulated. The iterator-based interface in this context could, and certainly would, be much less efficient than the core interface, but not necessarily less efficient than it is today, for it could be written to benefit from C++11/14 variadics, for example. I believe that was the original motivation behind Louis' MPL11, which ended up breaking compatibility for a good reason, I'm sure. Perhaps it's just not practically feasible, but I intend to look into it regardless.

--
*Bruno C. O. Dutra*
On 02/25/2015 11:02 AM, pfultz2 wrote:
IMO the fact it mimics the standard library is very positive, for it makes transition much easier for someone used to the STL first experimenting with metaprogramming.
Actually, that is what has helped me when I started using MPL. However, iterator-based approach for metaprogramming has a huge impact on performance. There is no such thing as random access in TMP.
Not now, but maybe in the future: http://atpp.irrequietus.eu/atpp-c79f4b7.pdf -regards, Larry
On 02/25/2015 11:02 AM, pfultz2 wrote:
There is no such thing as random access in TMP.
But we can surely imitate a Random Access Container in TMP. Just this morning it came to my mind that we can make a compile-time random access container using templates and preprocessor macros. Here is the code, pretty roughly and quickly written:

    #define STATIC_ASSERT(expr) { char assertion_stage[((expr) ? 1 : -1)]; }

    #define RANDOM_ACCESS_CONTAINER_META(KEY_TYPE, val_type, name) \
        template<KEY_TYPE KEY> \
        struct meta_container_##name##_{ \
            typedef val_type value_type; \
            typedef KEY_TYPE key_type; \
            static const value_type value = static_cast<value_type>(0); \
        }; \
        typedef KEY_TYPE meta_container_##name##_key_type_; \
        typedef val_type meta_container_##name##_val_type_;

    #define RANDOM_ACCESS_INSERT(name, key, val) \
        template<> struct meta_container_##name##_<key>{ \
            typedef meta_container_##name##_val_type_ value_type; \
            typedef meta_container_##name##_key_type_ key_type; \
            static const value_type value = val; \
        };

    #define RANDOM_ACCESS_GET(name, key) meta_container_##name##_<key>::value

    RANDOM_ACCESS_CONTAINER_META(int, int, TMP_INT_MAP);

    RANDOM_ACCESS_INSERT(TMP_INT_MAP, 5, 15);
    RANDOM_ACCESS_INSERT(TMP_INT_MAP, 6, 18);
    RANDOM_ACCESS_INSERT(TMP_INT_MAP, 7, 21);

    int main(){
        STATIC_ASSERT( RANDOM_ACCESS_GET(TMP_INT_MAP, 5) == 15 );
        STATIC_ASSERT( RANDOM_ACCESS_GET(TMP_INT_MAP, 6) == 18 );
        STATIC_ASSERT( RANDOM_ACCESS_GET(TMP_INT_MAP, 7) == 21 );
        STATIC_ASSERT( RANDOM_ACCESS_GET(TMP_INT_MAP, 1) == 0 );
        return 0;
    }

I might be wrong and this might not be the general case, but it's definitely possible to imitate. After all, TMP is Turing-complete; hence, random access containers should not be non-existent.

Best wishes
Ganesh Prasad

On 26 February 2015 at 05:20, Larry Evans <cppljevans@suddenlink.net> wrote:
On 02/25/2015 11:02 AM, pfultz2 wrote:
IMO the fact it mimics the standard library is very positive, for it makes transition much easier for someone used to the STL first experimenting with metaprogramming.
Actually, that is what helped me when I started using MPL. However, the iterator-based approach to metaprogramming has a huge impact on performance. There is no such thing as random access in TMP.
Not now, but maybe in the future:
http://atpp.irrequietus.eu/atpp-c79f4b7.pdf
-regards, Larry
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Well, on reflection, I think I need a better solution for this random access thing. It looks like I am polluting the global namespace in this pursuit, and the macros can break easily when extended. Best wishes Ganesh Prasad On 26 February 2015 at 17:55, Ganesh Prasad <sir.gnsp@gmail.com> wrote:
On 02/25/2015 11:02 AM, pfultz2 wrote:
There is no such thing as random access in TMP.
But we can surely imitate a Random Access Container in TMP. Just this morning it came to my mind that we can make a Compile time random access container using templates and preprocessor macros. Here is the code, pretty roughly and quickly written.
AMDG On 02/26/2015 05:25 AM, Ganesh Prasad wrote:
<snip> After all TMP is turing complete, hence, random access containers should not be non-existent.
Turing completeness does not guarantee the existence of random access. Complexity classes (e.g. L, P, NP) are invariant across computation models, but random access is not a distinct complexity class. That being said, mpl emulates random access like this:

    template<class Base, int N, class T> struct v_item;

    template<class Base, class T>
    struct v_item<Base, 0, T> : Base { typedef T item0; };

    template<class Base, class T>
    struct v_item<Base, 1, T> : Base { typedef T item1; };

    ...

This can also be achieved using overload resolution and decltype:

    template<class Base, int N, class T>
    struct v_item : Base {
        using Base::item;
        wrap<T> item(int_<N>);
    };

This isn't actually constant time, however, because of the way most compilers implement overload resolution.

In Christ,
Steven Watanabe
participants (23)

- Bruno Dutra
- Daniel James
- Edward Diener
- Eric Niebler
- Ganesh Prasad
- Gordon Woodhull
- John Maddock
- Larry Evans
- Louis Dionne
- Manu Sánchez
- Mathias Gaunard
- Matt Calabrese
- Michael Caisse
- Niall Douglas
- Paul A. Bristow
- Peter Dimov
- pfultz2
- Robert Ramey
- Stephen Kelly
- Steve M. Robbins
- Steven Watanabe
- Vicente J. Botet Escriba
- Zach Laine