
In order to measure the compile-time efficiency of various C++11 metaprogramming techniques, I'd like to put together a set of benchmarks, and I wanted to discuss here what kinds of tests might be appropriate. Aleksey and I came up with some benchmarks for an appendix to http://boostpro.com/mplbook/, but some of those are lost and they're all getting a bit crusty. Certainly, I have no confidence that they are realistic or useful. If anyone has ideas about this, I'd be very glad to hear them.

Thanks in advance,

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Some compile-time computations can be better (faster) implemented as recursive constexpr functions. I would like to see some performance comparisons between constexpr and vanilla metaprogramming.

Sumant

On 3 April 2012 10:56, Dave Abrahams <dave@boostpro.com> wrote:
In order to measure the compile-time efficiency of various C++11 metaprogramming techniques, I'd like to put together a set of benchmarks, and I wanted to discuss here what kinds of tests might be appropriate. Aleksey and I came up with some benchmarks for an appendix to http://boostpro.com/mplbook/, but some of those are lost and they're all getting a bit crusty. Certainly, I have no confidence that they are realistic or useful. If anyone has ideas about this, I'd be very glad to hear them.
Thanks in advance,
-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
-- 
int main(void) { while(1) KeepWorking(); }
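
A minimal sketch of the comparison Sumant suggests, using factorial as a stand-in computation (the names and the choice of example here are illustrative, not from the thread):

// Classic template metaprogram: one class template instantiation per step.
template <unsigned N>
struct factorial_tmp { static const unsigned long value = N * factorial_tmp<N - 1>::value; };
template <>
struct factorial_tmp<0> { static const unsigned long value = 1; };

// C++11 recursive constexpr: a single function, evaluated by the compiler.
constexpr unsigned long factorial_cx(unsigned n)
{
    return n == 0 ? 1 : n * factorial_cx(n - 1);
}

// Both results are forced at compile time.
static_assert(factorial_tmp<10>::value == 3628800, "template version");
static_assert(factorial_cx(10) == 3628800, "constexpr version");

A benchmark along these lines would scale the input up and record the compiler's wall-clock time and memory for each version in a separate translation unit.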

on Tue Apr 03 2012, Sumant Tambe <sutambe-AT-gmail.com> wrote:
Some compile-time computations can be better (faster) implemented as recursive constexpr functions. I would like to see some performance comparisons between constexpr and vanilla metaprogramming.
Well, yes, that's the point. The question is, what kinds of computations should we try to perform in order to discover where to use the new techniques? Do we have libraries in Boost with slow-to-compile examples that would make good representative test cases?

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

On 04/04/2012 06:08 PM, Dave Abrahams wrote:
on Tue Apr 03 2012, Sumant Tambe <sutambe-AT-gmail.com> wrote:

Some compile-time computations can be better (faster) implemented as recursive constexpr functions. I would like to see some performance comparisons between constexpr and vanilla metaprogramming.

Well, yes, that's the point. The question is, what kinds of computations should we try to perform in order to discover where to use the new techniques?
Do we have libraries in Boost with slow-to-compile examples that would make good representative test cases?
Certainly, yes: essentially every library that uses proto/fusion/mpl heavily is a compile-time hog. I, personally, haven't come around to actually extracting meaningful benchmarks from, for example, Phoenix.

One thing that immediately comes to mind is whether to defer the instantiation of a template as long as possible, or to have it instantiated immediately. Example:

// immediate: a full specialization for int
template <typename T> struct meta_function1 { typedef T type; };
template <> struct meta_function1<int> { typedef int type; };

// deferred: the defaulted dummy parameter turns it into a partial specialization
template <typename T, typename Dummy = void> struct meta_function2 { typedef T type; };
template <typename Dummy> struct meta_function2<int, Dummy> { typedef int type; };

The question now is: which is faster, meta_function1 or meta_function2? Of course, in this case we won't get meaningful numbers. However, some claim that meta_function2 is generally faster, especially in the context of many full specializations. One example where the meta_function1 style is used almost all over the place is Fusion (in the traits, for example). This is not something C++11-related, though. However, there is still much uncertainty about which metaprogramming techniques are costly in C++03.
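
One way to turn that question into numbers, sketched under assumptions (the driver and all names below are made up, not taken from Phoenix or Fusion): generate a translation unit that forces many distinct instantiations of one style, build it once per style, and compare compiler time and memory.

// Hypothetical stress driver: forces many distinct instantiations of the
// metafunction under test. Build once per style and compare compile times.
template <int N> struct tag {};

template <typename T, typename Dummy = void>
struct meta_function { typedef T type; };            // swap in either style here
template <typename Dummy>
struct meta_function<int, Dummy> { typedef int type; };

template <int N>
struct driver
{
    typedef typename meta_function<tag<N> >::type type;
    typedef typename driver<N - 1>::type prev;
};
template <>
struct driver<0> { typedef meta_function<int>::type type; };

typedef driver<500>::type force_it;  // ~500 instantiations of each template

Compiling the same driver against the meta_function1 variant (no Dummy parameter, a full specialization for int) in a second translation unit gives the other data point.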

on Thu Apr 05 2012, Thomas Heller <thom.heller-AT-googlemail.com> wrote:
On 04/04/2012 06:08 PM, Dave Abrahams wrote:
on Tue Apr 03 2012, Sumant Tambe <sutambe-AT-gmail.com> wrote:

Some compile-time computations can be better (faster) implemented as recursive constexpr functions. I would like to see some performance comparisons between constexpr and vanilla metaprogramming.

Well, yes, that's the point. The question is, what kinds of computations should we try to perform in order to discover where to use the new techniques?
Do we have libraries in Boost with slow-to-compile examples that would make good representative test cases?
Certainly, yes: essentially every library that uses proto/fusion/mpl heavily is a compile-time hog.
Well, I'm asking for specifics, please. Like, a path in the Boost SVN repository.
I, personally, haven't come around to actually extracting meaningful benchmarks from, for example, Phoenix. One thing that immediately comes to mind is whether to defer the instantiation of a template as long as possible, or to have it instantiated immediately. Example:
template <typename T> struct meta_function1 { typedef T type; };
template <> struct meta_function1<int> { typedef int type; };
template <typename T, typename Dummy = void> struct meta_function2 { typedef T type; };
template <typename Dummy> struct meta_function2<int, Dummy> { typedef int type; };
The question now is: which is faster, meta_function1 or meta_function2? Of course, in this case we won't get meaningful numbers. However, some claim that meta_function2 is generally faster, especially in the context of many full specializations.
That's a bit surprising. In C++03 compilers, fully lazy evaluation seems to become expensive in surprising ways that aren't yet well understood:

http://lists.boost.org/Archives/boost/2010/07/169562.php
http://lists.boost.org/Archives/boost/att-169562/Lazy_MPL__strictness_analys...
One example where the meta_function1-style is used almost all over the place is fusion (in the traits for example). This is not something C++11 related though. However, there is still much uncertainty about what metaprogramming techniques are costly in C++03.
Yep.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

On 4/5/2012 11:55 PM, Dave Abrahams wrote:
on Thu Apr 05 2012, Thomas Heller <thom.heller-AT-googlemail.com> wrote:
I, personally, haven't come around to actually extracting meaningful benchmarks from, for example, Phoenix. One thing that immediately comes to mind is whether to defer the instantiation of a template as long as possible, or to have it instantiated immediately. Example:
template <typename T> struct meta_function1 { typedef T type; };
template <> struct meta_function1<int> { typedef int type; };
template <typename T, typename Dummy = void> struct meta_function2 { typedef T type; };
template <typename Dummy> struct meta_function2<int, Dummy> { typedef int type; };
The question now is: which is faster, meta_function1 or meta_function2? Of course, in this case we won't get meaningful numbers. However, some claim that meta_function2 is generally faster, especially in the context of many full specializations.
That's a bit surprising.
In C++03 compilers, fully lazy evaluation seems to become expensive in surprising ways that aren't yet well understood.
Thomas,

Fusion is an easy target. If anyone would like to try this out, i.e. grep or write a script that converts Fusion's meta_function1 style to the meta_function2 (lazy) style, please do so and show us the benchmark results (e.g. by running the Fusion test suite over both versions). Without such *real* numbers, it will remain a myth and we will never really know. I'd love to tweak Fusion if such a trick is found to really improve compile times.

Regards,

-- 
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com
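
For anyone picking this up, the mechanical rewrite Joel describes would look roughly like this; the trait below is a made-up stand-in, not Fusion's actual code:

#include <type_traits>

struct some_view {};  // stand-in for a Fusion sequence type

// Before: meta_function1 style, full specialization.
template <typename Sequence>
struct is_view1 { typedef std::false_type type; };
template <>
struct is_view1<some_view> { typedef std::true_type type; };

// After: meta_function2 style; the full specialization becomes a partial
// specialization over a defaulted dummy parameter.
template <typename Sequence, typename Dummy = void>
struct is_view2 { typedef std::false_type type; };
template <typename Dummy>
struct is_view2<some_view, Dummy> { typedef std::true_type type; };

static_assert(is_view2<some_view>::type::value,
              "same answer either way; only the compile-time cost may differ");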

On 04/03/2012 07:56 PM, Dave Abrahams wrote:
In order to measure the compile-time efficiency of various C++11 metaprogramming techniques, I'd like to put together a set of benchmarks, and I wanted to discuss here what kinds of tests might be appropriate. Aleksey and I came up with some benchmarks for an appendix to http://boostpro.com/mplbook/, but some of those are lost and they're all getting a bit crusty. Certainly, I have no confidence that they are realistic or useful. If anyone has ideas about this, I'd be very glad to hear them.
Thanks in advance,
What about some programs and examples from this thread?

http://thread.gmane.org/gmane.comp.lib.boost.devel/211140

I remember that compiling a simple sine took about 10 minutes. But I am not sure whether new techniques from C++11 will help to improve the performance here.
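
For what it's worth, such a sine can at least be expressed as a recursive C++11 constexpr function; whether that compiles faster than the version from that thread is exactly what a benchmark would need to show. A minimal sketch, with made-up names:

// Truncated Taylor series: sin(x) = x - x^3/3! + x^5/5! - ...
// C++11 constexpr allows only a single return expression, hence the
// accumulator-style recursion.
constexpr double sine_impl(double x, double term, int k, int n)
{
    return k == n ? 0.0
                  : term + sine_impl(x, -term * x * x / ((2.0 * k + 2.0) * (2.0 * k + 3.0)),
                                     k + 1, n);
}

constexpr double sine(double x) { return sine_impl(x, x, 0, 10); }  // 10 terms

// Evaluated entirely at compile time; sin(0.5) ~ 0.47943.
static_assert(sine(0.5) > 0.4794 && sine(0.5) < 0.4795, "compile-time sine");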

on Wed Apr 04 2012, Karsten Ahnert <karsten.ahnert-AT-ambrosys.de> wrote:
On 04/03/2012 07:56 PM, Dave Abrahams wrote:
In order to measure the compile-time efficiency of various C++11 metaprogramming techniques, I'd like to put together a set of benchmarks, and I wanted to discuss here what kinds of tests might be appropriate. Aleksey and I came up with some benchmarks for an appendix to http://boostpro.com/mplbook/, but some of those are lost and they're all getting a bit crusty. Certainly, I have no confidence that they are realistic or useful. If anyone has ideas about this, I'd be very glad to hear them.
Thanks in advance,
What about some programs and examples from this thread?
http://thread.gmane.org/gmane.comp.lib.boost.devel/211140
I remember that compiling a simple sine took about 10 minutes. But I am not sure whether new techniques from C++11 will help to improve the performance here.
Well, I happen to know that such numeric computations can be done comparatively quickly using constexpr; e.g., see the computation of pi in

https://github.com/dabrahams/mpl11/blob/master/standalone/constexpr_demo.cpp

I don't know enough about how the Van Wijngaarden method works to identify a converged result, but IMO the techniques used there prove the concept. Note, for example, the creation and traversal of compile-time linked data structures using constexpr with regular pointers :-)

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
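
To illustrate that last point, a stripped-down version of the pointer technique (a minimal sketch, not the code from the linked demo):

// A compile-time linked list built from ordinary pointers: the nodes have
// static storage duration, so their addresses are constant expressions.
struct node
{
    int value;
    const node* next;
};

constexpr node n3 = {3, nullptr};
constexpr node n2 = {2, &n3};
constexpr node n1 = {1, &n2};

// Traversal is plain recursion over pointers, evaluated by the compiler.
constexpr int sum(const node* p)
{
    return p ? p->value + sum(p->next) : 0;
}

static_assert(sum(&n1) == 6, "traversed at compile time");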

On 03/04/12 18:56, Dave Abrahams wrote:
In order to measure the compile-time efficiency of various C++11 metaprogramming techniques, I'd like to put together a set of benchmarks, and I wanted to discuss here what kinds of tests might be appropriate. Aleksey and I came up with some benchmarks for an appendix to http://boostpro.com/mplbook/, but some of those are lost and they're all getting a bit crusty. Certainly, I have no confidence that they are realistic or useful. If anyone has ideas about this, I'd be very glad to hear them.
I can't suggest a particular benchmark, but I will say that these days I find compile-time memory usage a bigger problem than compilation time when it comes to complex metaprograms. So, whatever benchmarks you do, please include memory usage statistics too.

In particular, I was hoping that constexpr would help in this case because it wouldn't be memoized the way templates are, but I believe GCC at least has chosen to implement constexpr with memoization, so I guess it won't be more than a constant factor better.

John Bytheway

On Apr 3, 2012, at 1:56 PM, Dave Abrahams wrote:
In order to measure the compile-time efficiency of various C++11 metaprogramming techniques, I'd like to put together a set of benchmarks, and I wanted to discuss here what kinds of tests might be appropriate. Aleksey and I came up with some benchmarks for an appendix to http://boostpro.com/mplbook/, but some of those are lost and they're all getting a bit crusty. Certainly, I have no confidence that they are realistic or useful. If anyone has ideas about this, I'd be very glad to hear them.
If you have different versions of MPL utilizing different techniques, running the MPL.Graph examples in libs/msm/example/mpl_graph could be a good benchmark. It's also easy to construct bigger examples if you need them. I know that Christophe would appreciate finding out how to make the MSM validators run faster!

As John Bytheway pointed out, many of the problems we've run into result from an explosion in memory usage.

Cheers,
Gordon
participants (7)

- Dave Abrahams
- Gordon Woodhull
- Joel de Guzman
- John Bytheway
- Karsten Ahnert
- Sumant Tambe
- Thomas Heller