RE: [boost] Re: Boost.Threads: Do we need all those mutexes?

From: Howard Hinnant [mailto:hinnant@twcny.rr.com]
On Jul 9, 2004, at 7:10 PM, Glen Knowles wrote:
Without try locks, lock_both can't be made to work without risking deadlock.
Lock_both can be made to work with try locks? Either you have a way to deterministically set the order in which you lock the mutexes, or you have problems. You can avoid deadlocking with try lock, but you will fail to get the pair of locks in cases where you would have gotten it by waiting. I suppose you could just keep try-locking both ways until you get it, but that's not good for performance. :)
Well actually, keeping try-locking both ways is what I've been doing. I'd agree with you that in a heavily contended context, performance could theoretically suffer arbitrarily badly.
One solution to this problem would be mutex::operator<
<nod> I just implemented lock_both::lock two ways: once by just try-locking both ways in a loop, and once by locking in the order of the addresses of the mutexes (with only two locks in the test, this was a safe operator<). My test looks like:
The problem case for spinning is when one thread gets the locks and then holds them for a non-trivial length of time. During that time the other, spinning, thread just wastes cpu as fast as it can. This just gets magnified as more threads are involved. Glen

On Jul 9, 2004, at 11:00 PM, Glen Knowles wrote:
<nod> I just implemented lock_both::lock two ways: once by just try-locking both ways in a loop, and once by locking in the order of the addresses of the mutexes (with only two locks in the test, this was a safe operator<). My test looks like:
The problem case for spinning is when one thread gets the locks and then holds them for a non-trivial length of time. During that time the other, spinning, thread just wastes cpu as fast as it can. This just gets magnified as more threads are involved.
You are describing a spin-lock, but not lock_both (except for a theoretical and unlikely scenario).

On Jul 9, 2004, at 9:35 AM, Peter Dimov wrote:
I don't have much experience with try locks, though. Pretty much the only example that comes to mind is Howard's lock_both:
    for( ;; )
    {
        l1.lock();
        if( l2.try_lock() ) return;
        l1.unlock();

        l2.lock();
        if( l1.try_lock() ) return;
        l2.unlock();
    }
(I hope I got that right) and I don't see how it can be improved by the if() idiom.
Yes Peter, you got it right. If thread A already holds l1, thread B blocks, not spins, until thread A unlocks l1. If l1 is initially unlocked, but thread A holds l2, thread B locks l1, tries and fails to lock l2, unlocks l1, then blocks (not spins) on l2 until thread A unlocks l2. It is possible for thread A to continually cause thread B to spin by locking and unlocking l1 and l2 at just the right (or wrong!) times, but in practice this dynamically unstable state does not appear to persist. Specifically, in the case you cite (thread A holding l1 and l2 for a long time), thread B will simply block on l1 and not consume cpu except the system time required for maintaining the block.

Implementing lock_both::lock with an ordering algorithm could, on the other hand, more likely reduce throughput in comparison. Consider three ordered locks: l1, l2, l3. Thread A needs to lock l1 and l3, thread B needs to lock l2 and l3, and thread C needs to lock l1:

    Thread A          Thread B          Thread C
    lock l1           lock l2
                      lock l3
    block on l3                         block on l1
                      processing...

At this point thread B is causing both thread A and thread C to block. However, in the try-and-back-off formulation of lock_both the scenario would be different:

    Thread A          Thread B          Thread C
    lock l1           lock l2
                      lock l3
    try but fail l3                     block on l1
    unlock l1                           lock l1
    block on l3
                      processing...     processing...

Similar timing, but now both thread B and thread C are able to process concurrently; only thread A is blocked.

In a nutshell, if thread A can't get both locks, it tries to put itself to sleep on whichever lock it can't get in order to /not/ waste cpu. Additionally, it holds no locks while it is sleeping, so as to increase the potential throughput of other threads.

-Howard
participants (2)
- Glen Knowles
- Howard Hinnant