
"David Abrahams" <dave@boost-consulting.com> wrote in message news:<ufyn5wggo.fsf@boost-consulting.com>...
"David Maisonave" <dmaisonave@commvault.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:<874q3ld907.fsf@boost-consulting.com>...
"David Maisonave" <boost@axter.com> writes:
"Daniel Wallin" <dalwan01@student.umu.se> wrote in message news:<drkmls$kb5$1@sea.gmane.org>...
It's in the FAQ:
http://www.boost.org/libs/smart_ptr/shared_ptr.htm#FAQ
Q. Why doesn't shared_ptr use a linked list implementation?
A. A linked list implementation does not offer enough advantages to offset the added cost of an extra pointer. See timings page. In addition, it is expensive to make a linked list implementation thread safe.
You can avoid having to make the implementation thread safe by making the pointee thread safe.
One of us here understands nothing about the problem. I don't know that much about threading, but I think I have a grip on this issue at least. As I understand the problem, if two neighboring shared_ptr's in a reference-linked chain are destroyed at the same time, they will be modifying the same pointer values simultaneously -- without a lock in your case. I don't see how making the pointee threadsafe is going to help one bit.
IMHO, you don't fully understand my proposed solution. Please look at
the current smart pointer locking method: http://code.axter.com/smart_ptr.h
By using intrusive lock logic, you can lock the pointee, thereby locking all
the shared_ptr objects that refer to it. Here's the smart pointer destructor:

    inline ~smart_ptr() throw()
    {
        m_ownership_policy.destructor_lock_policy(m_type);
        CHECKING_POLICY::before_release(m_type);
        m_ownership_policy.release(m_type, m_clone_fct, m_ownership_policy);
        CHECKING_POLICY::after_release(m_type);
    }

There's similar logic in the constructor and assignment operator. This should
work with all three main types of reference policies, including
reference-linking. Do you understand intrusive logic? You need to fully
understand how intrusive logic works in order to understand the method.
It sounds like you're saying that, essentially, all the shared_ptrs that point to the same object share a single mutex, and in your case, that mutex happens to be embedded in the pointee, which is why you call it "an intrusive lock." Have I got that right?
IIUC, the thread-safety problem with a reference-linked implementation isn't so much that it's hard to achieve -- anyone can use a shared mutex -- it's that it's hard to make a thread-safe implementation efficient. That is to say, you pay the cost of locking and unlocking a mutex, and there's no way around it (**). Locking and unlocking mutexes is way more expensive than performing the lock-free operations used by boost::shared_ptr.
That's true. But it's been my experience that the majority of development doesn't involve objects accessed via multiple threads, or doesn't run in a multithreaded environment at all. In that environment, you're paying an additional price for boost::shared_ptr's reference-count logic without getting any benefit from it. With a policy-based smart pointer, you can pick and choose what's best for a particular requirement, instead of being stuck with one less-than-optimal method.
(**) Or so I thought: http://www.cs.chalmers.se/~dcs/ConcurrentDataStructures/phd_chap7.pdf seems to contradict that.
This can be done by using an intrusive lock.
In my tests, reference-link logic is over 25% faster than reference-count logic for initialization. With BOOST_SP_USE_QUICK_ALLOCATOR defined,
reference-link logic is over 30% faster than reference-count logic for initialization. I get the above results using VC++ 7.1, and if I
use the GNU 3.x compiler, the difference is even greater in favor of reference-link logic. With the Borland compiler, the difference is about 22%.
Are you claiming that using BOOST_SP_USE_QUICK_ALLOCATOR actually slows boost::shared_ptr down on all these compilers?
Of course not.... When you define BOOST_SP_USE_QUICK_ALLOCATOR, it doesn't just increase the performance of boost objects. It increases the performance of all objects within the translation unit that has the define and that use allocators. So even though shared_ptr gets
a performance boost, my smart_ptr gets an even greater one, which increases the performance ratio.
Oh, I had no idea you were using the allocator for your reference-linked smart pointers. I see no mention of that macro in your header. Where do you use it?
Please check out the test code. If you're testing code within the same translation unit, and you declare BOOST_SP_USE_QUICK_ALLOCATOR at the top of the translation unit, then it's going to affect all the code in that translation unit. I don't use BOOST_SP_USE_QUICK_ALLOCATOR in my smart_ptr any more than boost::shared_ptr uses it in its header. Not only would it be more difficult for me to compile the code so as to apply BOOST_SP_USE_QUICK_ALLOCATOR only to boost::shared_ptr, but it also wouldn't make much sense to give boost::shared_ptr that advantage and not the other smart pointers when trying to make a level comparison test.