
On Jan 9, 2006, at 11:55 AM, Peter Dimov wrote:
> Yes. It would be interesting to measure the performance for larger buffer sizes, though. The break-even point should occur somewhere around 32 or 48 bytes, maybe even 64 if the allocator is bad enough.
Yeah, that's possible. I don't think it's just a matter of finding the break-even point for performance, though. function<> is supposed to replace function pointers, closures, etc. If it's significantly larger than those entities, it becomes harder to justify the use of function<>. We have two things to optimize here :(
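[Editorial aside: the tradeoff under discussion can be sketched roughly like this. All names here are made up for illustration; this is not the actual Boost.Function code.]

```cpp
#include <cstddef>
#include <new>
#include <utility>

// Small-buffer optimization sketch: callables no larger than BufSize
// bytes are constructed in place; anything bigger falls back to the
// heap. Copying is disabled to keep the sketch short.
template <std::size_t BufSize>
class small_fn {
    alignas(std::max_align_t) unsigned char buf_[BufSize];
    void* heap_ = nullptr;                 // non-null when the target lives on the heap
    void (*invoke_)(void*) = nullptr;
    void (*destroy_)(void*) = nullptr;

    void* target() { return heap_ ? heap_ : static_cast<void*>(buf_); }

public:
    template <class F>
    explicit small_fn(F f) {
        void* where = (sizeof(F) <= BufSize)
            ? static_cast<void*>(buf_)
            : (heap_ = ::operator new(sizeof(F)));
        ::new (where) F(std::move(f));
        invoke_  = [](void* p) { (*static_cast<F*>(p))(); };
        destroy_ = [](void* p) { static_cast<F*>(p)->~F(); };
    }
    ~small_fn() {
        destroy_(target());
        ::operator delete(heap_);          // no-op when heap_ is null
    }
    small_fn(const small_fn&) = delete;
    small_fn& operator=(const small_fn&) = delete;

    void operator()() { invoke_(target()); }
    bool uses_heap() const { return heap_ != nullptr; }
};
```

The break-even question above is exactly the choice of BufSize: a bigger buffer avoids more allocations but makes every small_fn that much larger than a plain function pointer.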
>> If we go to 16 bytes, then bind(&X::f, &x, _1, true) will fit but bind(&X::f, &x, _1, true, true) won't.
> It probably will unless 'true' is 4 bytes.
Some ABIs are actually that strange :)
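[Editorial aside: the arithmetic can be checked directly. A hypothetical layout of what bind(&X::f, &x, _1, true) must store; this is not boost::bind's actual internals.]

```cpp
#include <cstdio>

class X;  // deliberately left incomplete: some ABIs (notably MSVC) then
          // fall back to their largest member-pointer representation

// Roughly what bind(&X::f, &x, _1, true) has to carry around: the
// member pointer, the bound object pointer, and the bound bool. The
// placeholder _1 is stateless and costs nothing.
struct bound_call {
    void (X::*pmf)();
    X* obj;
    bool flag;
};

inline void report_sizes() {
    std::printf("member ptr %zu + object ptr %zu + bool %zu -> struct %zu bytes\n",
                sizeof(void (X::*)()), sizeof(X*), sizeof(bool),
                sizeof(bound_call));
}
```

On the 32-bit g++ of the time that was 8 + 4 + 1, which pads out to 16; a second bool still fits there unless bool really is 4 bytes on the ABI in question.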
> In my experience, the closure case is indeed very common in code written by people who don't take advantage of the full expressive power of boost::bind, probably because they have a Borland/delegate background. [snip] and synthesize show/hide with boost::bind. My code tends towards the latter variety, so I won't be seeing much of the SBO with a &X::f+&x cutoff.
Me too :)
> BTW, this talk of a 12-byte buffer assumes g++. A member pointer is 4-16 bytes on MSVC, 8 on g++, 12 on Borland, and 4 on Digital Mars (!). There's a nice table in
Yeah, I know. The actual code for function<> has a union containing a struct with a member pointer of an unknown (incomplete) class and a void pointer. I guess we could pad that with an integer or two if we want to expand the buffer...
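[Editorial aside: a hedged sketch of the union being described; the names are illustrative, not the real Boost.Function internals. The padding member is the "integer or two" idea for widening the buffer.]

```cpp
class unknown_class;  // incomplete on purpose, so the member pointer gets
                      // the ABI's largest ("unknown") representation

union function_buffer {
    void* obj_ptr;                              // plain object/function pointer case
    struct bound_memfunc {
        void (unknown_class::*memfunc_ptr)();   // member pointer of unknown class
        void* obj_ptr;                          // the bound object
    } bound;
    long padding[4];                            // extra integers to grow the buffer
};
```

The union's size is the maximum of its members, so growing the padding array is a cheap way to widen the small buffer without touching the member-pointer case.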
> Anyway, I committed a storage optimization to the CVS. <crosses fingers>
Very cool. Works like a charm on GCC, at least. Once I get a chance to write up documentation for the changes to Function, I'll commit everything to CVS and we'll see who screams :)

Doug