On 03/08/2010 15:35, Wolfram Brenig wrote:
Hi,
As a casual/newbie Boost user, I'm trying to come to grips with why/when nested functional composition with boost::bind might be of help, following Karlsson's book in its fifth printing.
----------------------
1) It seems that he suggests that if one is doing something like
T faa(int);
T foo(const T&);

int i;
vector<T> v(10);          // container
for (i = 0; i < 10; i++) {
    v[i] = faa(i);        // set it
    v[i] = foo(v[i]);     // operate on it
}
then instead of the 'old style' for-loops it would be more desirable to use 'STL style' algorithms with the functors created from boost::bind, say like
transform(v.begin(), v.end(), v.begin(), boost::bind(&foo, _1));
You might as well write
    boost::transform(v, v.begin(), foo);
But personally, since that's a fairly trivial algorithm, I don't find it much more interesting than
    foreach(T& x, v) x = foo(x);
There are other situations, though, where using higher-order programming really shines.
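For reference, a minimal sketch of those two alternatives, assuming T = double, a doubling foo(), and that "foreach" stands for BOOST_FOREACH (the names here are mine, not from the original post):

#include <vector>
#include <boost/range/algorithm/transform.hpp>   // boost::transform
#include <boost/foreach.hpp>                     // BOOST_FOREACH

double foo(double x) { return 2 * x; }

int main()
{
    std::vector<double> v(10, 1.0);

    // Boost.Range algorithm: applies foo to every element, writing back into v
    boost::transform(v, v.begin(), foo);

    // the equivalent BOOST_FOREACH loop
    BOOST_FOREACH(double& x, v)
        x = foo(x);
}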
2) Then it is furthermore suggested that if foo() is a (simple?) function, say T = double and
double foo(double x) { return 2*x; }
then one should use 'functional composition', i.e. create the functor for point 1) with
boost::bind(std::multiplies<double>(),2,_1)
Replace all that with just 2*_1 (a Boost.Lambda placeholder expression).
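A minimal sketch of what that looks like in context, assuming Boost.Lambda (Boost.Phoenix placeholders read much the same way); the expression 2 * _1 builds the same doubling functor as the bind/std::multiplies composition above:

#include <algorithm>
#include <vector>
#include <boost/lambda/lambda.hpp>

int main()
{
    using namespace boost::lambda;

    std::vector<double> v(10, 1.0);

    // doubles every element in place; 2 * _1 is the whole functor
    std::transform(v.begin(), v.end(), v.begin(), 2 * _1);
}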
Now, my question is whether such nested construction of functors at the call site, which is also suggested in the 'Bind Summary' of Karlsson's book, actually leads to code with no relevant performance loss compared to 'old-style' for-loops.
Chances are it will actually be *more* efficient. Algorithms typically do their task in the best and fastest way (e.g. the way you wrote the iteration in your original post isn't really the best way to iterate, albeit most compilers will optimize it), and for simple algorithms they will be fully inlined. With some compilers, function objects actually get inlined better than function pointers. Another advantage is that algorithms are generic, idiomatic and more high-level; they better reflect what the code does and reduce potential bugs.
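To illustrate the function-pointer vs. function-object point (the names below are mine, just for illustration): a pointer usually keeps the call indirect, while a function object's operator() is a statically known target that compilers tend to inline away entirely.

#include <algorithm>
#include <vector>

double twice_fn(double x) { return 2 * x; }      // called through a pointer below

struct twice_ft {                                // function object
    double operator()(double x) const { return 2 * x; }
};

void demo(std::vector<double>& v)
{
    // function pointer: the call may stay an indirect call
    std::transform(v.begin(), v.end(), v.begin(), &twice_fn);

    // function object: operator() is a known target, typically inlined
    std::transform(v.begin(), v.end(), v.begin(), twice_ft());
}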
To check this, I do something rather simple (simplistic?). Namely, I take a vector of some size, fill it with random doubles between 0 and M_PI, and then (stupid me) test whether sin(v[i])^2 + cos(v[i])^2 == 1 for each element of the vector. The code is attached.
I do this many many times and measure the runtime.
I do this for two cases:
case 1: with old-style for loops
case 3: with STL algorithms and boost::bind
At least for this little program, the old-style code is faster by roughly a factor of 2 compared to the boost::bind approach. (Compiled with GCC 4.3.2, with optimization.)
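The attachment isn't reproduced in this thread; just to make the comparison concrete, a simplified reconstruction of the two cases (my own sketch, not the original attachment) could look like:

#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>
#include <boost/bind.hpp>

// does sin(x)^2 + cos(x)^2 come out as 1 (up to rounding)?
bool check(double x)
{
    return std::fabs(std::sin(x) * std::sin(x)
                   + std::cos(x) * std::cos(x) - 1.0) < 1e-12;
}

int main()
{
    std::vector<double> v(1000);
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = M_PI * std::rand() / RAND_MAX;   // random doubles in [0, M_PI]

    // case 1: old-style for loop
    std::size_t good1 = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        if (check(v[i]))
            ++good1;

    // case 3: STL algorithm with a boost::bind'ed predicate
    std::size_t good3 = std::count_if(v.begin(), v.end(),
                                      boost::bind(&check, _1));

    return (good1 == v.size() && good3 == v.size()) ? 0 : 1;
}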
I ran your test. Case 3 runs in 0.65s, cases 1 and 2 in 0.82s. That's with GCC 4.4.3. Comparing 3 and 1 is not even fair, since 3 does the unnecessary sin and cos calculations like 2, but the compiler seems to optimize these away. Your code, however, is needlessly complicated (std::multiplies etc.). By making it use Boost.Lambda, however, the time goes up to 0.78s. I suppose it would fare better with Phoenix. I admit I have no idea whatsoever why all these cases don't perform the same. Still, none of these is as optimized as it could be.
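For what it's worth, a guess at what the Boost.Lambda version of the predicate could look like (my own sketch, not the code that was actually timed); the whole test expression is built from placeholders instead of boost::bind plus std::multiplies:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/bind.hpp>

// counts the elements for which sin^2 + cos^2 compares exactly equal to 1
std::size_t count_good(const std::vector<double>& v)
{
    using namespace boost::lambda;

    // pick the double overloads so bind() gets plain function pointers
    double (*dsin)(double) = std::sin;
    double (*dcos)(double) = std::cos;

    return std::count_if(v.begin(), v.end(),
          bind(dsin, _1) * bind(dsin, _1)
        + bind(dcos, _1) * bind(dcos, _1) == 1.0);
}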