
Mine. Profiled it with Quantify, and although it was our own policy-based smart pointer, the mutex locking was the highest peak on the graph. I halved my app's response latency by switching to a single-threaded smart pointer in a multi-threaded app.
Which of course is precisely the right time to start making optimisations - after you've profiled it and found it to be a problem. Still, I'm curious: why was your code copying so many smart pointers?
This was a policy-based smart pointer that performed method-level locking before forwarding into the pointee, so there was no copying going on. It wasn't a boost smart pointer. In fact, I'm definitely not qualified to comment on the boost smart pointers per se, since I've never used them, but I wanted to put in my two cents' worth about a single smart pointer class that uses orthogonal policies. That's what we're using, with policies for copying (ctor/clone), method locking, object locking (intrusive/non-intrusive) and ownership (MT/ST, intrusive/non-intrusive); a rough sketch of the shape is below.
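To make the orthogonal-policy idea concrete, here's a minimal sketch of just the method-locking side. The names (SmartPtr, Proxy, MutexLockPolicy, NullLockPolicy) are made up for illustration and are not taken from the ace_additions code; the copying, object-locking and ownership policies are left out.

    #include <mutex>

    struct MutexLockPolicy {            // MT: a real lock around each forwarded call
        void lock()   { m_.lock(); }
        void unlock() { m_.unlock(); }
    private:
        std::mutex m_;
    };

    struct NullLockPolicy {             // ST: locking compiles away to nothing
        void lock()   {}
        void unlock() {}
    };

    template <typename T, typename LockPolicy = NullLockPolicy>
    class SmartPtr {
    public:
        explicit SmartPtr(T* p) : p_(p) {}
        ~SmartPtr() { delete p_; }

        // Method-level locking: the temporary proxy returned by operator->
        // holds the lock for the duration of the forwarded call, then releases it.
        class Proxy {
        public:
            Proxy(T* p, LockPolicy& l) : p_(p), l_(&l) { l_->lock(); }
            Proxy(Proxy&& other) : p_(other.p_), l_(other.l_) { other.l_ = nullptr; }
            ~Proxy() { if (l_) l_->unlock(); }
            T* operator->() const { return p_; }
        private:
            T* p_;
            LockPolicy* l_;
        };

        Proxy operator->() { return Proxy(p_, lock_); }

    private:
        SmartPtr(const SmartPtr&) = delete;            // copying policy omitted from this sketch
        SmartPtr& operator=(const SmartPtr&) = delete;
        T* p_;
        LockPolicy lock_;
    };

The code that uses the pointer never changes: choosing SmartPtr<Widget, MutexLockPolicy> versus the NullLockPolicy default is the whole difference between the MT and ST variants, which is the kind of switch that produced the latency win described above.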
Also, I wonder whether, if you had tried shared_ptr's lock-free locking (huh?!?), you'd have noticed similar (but obviously not *as* good) performance gains?
Not using the boost smart pointers.
In general I agree - I'm dead against macros to enable/disable features. But I find complex policy classes equally annoying. Default template parameters only get you so far, and have problems of their own (sketched below).
I'd prefer a really explicit separate class called shared_ptr_no_threads<> or something. They have very different behaviour, so they should be called different things.
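Here's a tiny sketch of the annoyance I mean with default template parameters; all the policy names are hypothetical placeholders, not anyone's actual library:

    struct RefCounted {};        // stand-in policies, for illustration only
    struct NullLockPolicy {};
    struct MutexLockPolicy {};
    struct NoCheck {};
    struct Widget {};

    template <typename T,
              typename Ownership = RefCounted,
              typename Locking   = NullLockPolicy,
              typename Checking  = NoCheck>
    class SmartPtr {
    public:
        explicit SmartPtr(T* p) : p_(p) {}
        ~SmartPtr() { delete p_; }
    private:
        T* p_;
    };

    int main() {
        SmartPtr<Widget> a(new Widget);   // fine: everything defaulted

        // Want to change only the Locking policy? You still have to restate
        // Ownership, because defaults can only be omitted from the right.
        SmartPtr<Widget, RefCounted, MutexLockPolicy> b(new Widget);
    }

Once there's more than one policy you rarely get to rely on the defaults, which is a big part of why these signatures feel heavyweight.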
Yeah, they have different behavior, but that's the point, really. You have one code base that invokes policies, without really knowing what the policies are specifically implementing. Here's a link to the one we're using: http://www.weirdsolutions.com/developersCentral/tools/ace_additions.tar.bz2 - Bud