
Hey all,

What happened to the multi-lock semantics? I had expected by now to see something like:

    // Takes two mutexes as arguments, locking them in a pre-defined order
    // (ie. lowest ptr first). Blocks until both are locked.
    boost::mutex::dual_scoped_lock

    // Same as above, but recursive.
    boost::recursive_mutex::dual_scoped_lock

    // Same as boost::mutex::dual_scoped_lock; however, if one cannot be
    // acquired, both are released (ie. the first is unlocked again).
    boost::try_mutex::dual_scoped_lock

    // Same as above, but recursive.
    boost::recursive_try_mutex::dual_scoped_lock

    // Same as boost::mutex::dual_scoped_lock; however, if both can't be
    // acquired within the specified time limit, both are released.
    boost::timed_mutex::dual_scoped_lock

    // Same as above, but recursive.
    boost::recursive_timed_mutex::dual_scoped_lock

I assume a dual_scoped_lock would then have first() and second() members that yield the underlying scoped_lock for the first-locked and second-locked mutex (from which you could compare the mutex it locks against one of your own mutexes and figure out which was locked first).

Similarly, it would be nice to have a 'multi' version of the above for an arbitrary number of locks (with an appropriate operator[] rather than first() and second()).

We all know the old advice that the way to avoid deadlocks is to acquire locks in the same order, and the threads FAQ implies there should be some way to do this without the ugly syntax shown there being necessary (see http://www.boost.org/doc/html/threads/faq.html#id2786718, whose code segment implements exactly this advice).

I'm sure I could knock up something that implements the above (at least the dual case) in not too long; implementing something for multi would be harder (though I imagine someone who is familiar with how boost::function works might not have a tough time making an equivalent for mutex locking).

Anyway, anyone want to get on to this?
-- PreZ :) Death is life's way of telling you you've been fired. -- R. Geis
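A minimal sketch of the proposed dual_scoped_lock might look like the following. This is an illustration only: it uses std::mutex rather than Boost's types, the class name comes from the proposal above, and the comparison/locking details are assumptions, not an existing Boost API.

```cpp
#include <functional>
#include <mutex>

// Hypothetical dual_scoped_lock: acquires two (distinct) mutexes in a
// pre-defined order -- lowest address first -- so that every thread that
// locks the same pair agrees on the order, avoiding deadlock.
class dual_scoped_lock {
public:
    dual_scoped_lock(std::mutex& a, std::mutex& b)
        : first_(std::less<std::mutex*>()(&a, &b) ? a : b),
          second_(std::less<std::mutex*>()(&a, &b) ? b : a)
    {
        first_.lock();   // always lock the lower-address mutex first
        second_.lock();
    }

    ~dual_scoped_lock()
    {
        second_.unlock();  // release in reverse order of acquisition
        first_.unlock();
    }

    // As suggested above: expose which mutex was locked first/second so
    // the caller can compare against its own mutexes.
    std::mutex& first()  { return first_; }
    std::mutex& second() { return second_; }

private:
    std::mutex& first_;
    std::mutex& second_;
};
```

A caller would construct it with the two mutexes in any order and compare `&l.first()` against its own mutex addresses to learn which was acquired first.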

On Jan 1, 2006, at 5:38 AM, Preston A. Elder wrote:
Hey all,
What happened to the multi-lock semantics?
I had expected by now to see something like:
    // Takes two mutexes as arguments, locking them in a pre-defined order
    // (ie. lowest ptr first). Blocks until both locked.
    boost::mutex::dual_scoped_lock
If this is to be done, I recommend a series of templated algorithms:

    template <class TryLock1, class TryLock2>
    void lock(TryLock1& l1, TryLock2& l2);

    template <class TryLock1, class TryLock2, class TryLock3>
    void lock(TryLock1& l1, TryLock2& l2, TryLock3& l3);

    ...

As the template parameter names suggest, the locks must model try-lock.

I once did a performance study on the best algorithm to use for implementing the two- and three-lock versions of these. On single-processor machines it didn't seem to make a lot of difference. On dual-processor machines, the "try and back off" algorithm was significantly faster than the "lock in order" algorithm, especially if a thread::yield() was strategically placed just before waiting on a lock that had just failed a try_lock.

The lock-in-order algorithm tended to rapidly put the machine into a "partial deadlock", which had the effect of temporarily idling a processor that was otherwise available for work. A brief analysis reveals why: Thread 1 locks mutex A and then waits for mutex B to be unlocked. Thread 2 is busy with mutex B (but not A). Thread 3 blocks on mutex A, even though it doesn't need B, only because thread 1 is sitting on it doing nothing. If thread 1 had released A before waiting on B, then both thread 2 and thread 3 could have gotten work done. In a "dining philosophers" simulation, try-and-back-off greatly increased system performance while virtually eliminating starvation (on dual-processor machines).

I also found these algorithms handy:

    template <class TryLock1, class TryLock2>
    unsigned try_lock(TryLock1& l1, TryLock2& l2);

    template <class TryLock1, class TryLock2, class TryLock3>
    unsigned try_lock(TryLock1& l1, TryLock2& l2, TryLock3& l3);

    ...

These return 0 if all locks were locked. Otherwise they return 1 if the first lock failed to lock, 2 if the second failed to lock, and so on. On return, all TryLocks are either locked (if the return is 0) or unlocked (if the return is non-zero).
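The all-or-nothing two-lock try_lock described above might be sketched as follows. The helper name try_lock_both is an assumption (chosen to avoid clashing with any real library symbol), and plain try_lock()/unlock() members are used as the try-lock model:

```cpp
#include <mutex>

// Tries to lock both arguments without blocking. Returns 0 on success;
// otherwise the 1-based index of the lock that failed. On a non-zero
// return, any lock acquired along the way has been released again, so
// the caller always sees all-locked or all-unlocked.
template <class TryLock1, class TryLock2>
unsigned try_lock_both(TryLock1& l1, TryLock2& l2)
{
    if (!l1.try_lock())
        return 1;        // first failed; nothing is held
    if (!l2.try_lock())
    {
        l1.unlock();     // back out so everything ends up unlocked
        return 2;
    }
    return 0;            // both locked
}
```

The non-zero return value tells the caller which lock is contended, which is exactly the information the blocking lock algorithm needs to pick its next "lock first" candidate.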
It turns out that try_lock(lock, lock) is very handy in implementing lock(lock, lock, lock) (and so on). The general pattern is: pick a lock to lock first; call try_lock on the remaining locks; if that fails, record which lock failed, unlock the first lock, yield, and repeat with the failed lock first; otherwise the try_lock succeeded, so return.

I know the use of the yield is counter-intuitive. I tested it on two different OSs and it increased performance. At least try it and stress-test it (on a multiprocessor platform).

A use case for the 2-lock lock algorithm might look like:

    struct A
    {
    public:
        A& operator=(const A&);
    private:
        int data_;
        mutable rw_mutex mut_;
        typedef rw_mutex::sharable_lock read_lock;
        typedef rw_mutex::scoped_lock   write_lock;
    };

    A& A::operator=(const A& a)
    {
        if (this != &a)
        {
            write_lock w(  mut_, defer_lock);
            read_lock  r(a.mut_, defer_lock);
            lock(w, r);  // Use generic 2-lock algorithm
            // *this now write-locked, a now read-locked
            data_ = a.data_;
        }
        return *this;
    }

-Howard
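The try-and-back-off pattern described above might be sketched like this for the two-lock case. The helper name lock_both and the use of std::mutex and std::this_thread::yield are illustrative assumptions, not the Boost interface under discussion:

```cpp
#include <mutex>
#include <thread>

// Blocks until both locks are held, without ever sitting on one lock
// while blocked on the other: if the second can't be taken immediately,
// the first is released, the thread yields, and the roles swap so the
// previously contended lock is waited on first.
template <class Lock1, class Lock2>
void lock_both(Lock1& l1, Lock2& l2)
{
    for (;;)
    {
        l1.lock();                   // wait for the first lock
        if (l2.try_lock())
            return;                  // got both
        l1.unlock();                 // back off before blocking on l2
        std::this_thread::yield();   // counter-intuitive, but measured faster

        l2.lock();                   // next round: wait on the one that failed
        if (l1.try_lock())
            return;
        l2.unlock();
        std::this_thread::yield();
    }
}
```

Note how this avoids the "partial deadlock" from the analysis above: a thread never holds one mutex while blocked on another, so an unrelated thread that only needs the first mutex is not idled.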

Howard Hinnant wrote:
Thread 1 locks mutex A and then waits for mutex B to be unlocked. Thread 2 is busy with mutex B (but not A). Thread 3 blocks on mutex A, even though it doesn't need B, only because thread 1 is sitting on it doing nothing.
If Thread 1 had released A before waiting on B, then both thread 2 and thread 3 could get work done.

While I agree this is a good way to increase efficiency, there is one problem with it: it is not FIFO.

Traditionally, mutexes are locked in the order of request, making the order of locking like a queue: when you request, you go to the back of the queue. The method you describe makes it more of a 'best available' model, which is nice, but it means that thread 1 (in your example above) could potentially NEVER get its lock. If thread 2 continually requests lock B and thread 3 continually requests lock A, and there is some overlap but never a point where both are free, thread 1 will wait forever.

A policy model that supports either FIFO or your 'best available' model would be the best route to go, I think, as both models have their own advantages. -- PreZ :) Death is life's way of telling you you've been fired. -- R. Geis
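The policy idea above could be sketched as tag dispatch over the two strategies. Everything here is an assumption for illustration (names, tags, the use of std::mutex); note also that lock-in-order only approximates FIFO behaviour to the extent the underlying mutexes queue their waiters fairly:

```cpp
#include <functional>
#include <mutex>
#include <thread>

struct lock_in_order {};     // queue on each mutex in a fixed order
struct try_and_back_off {};  // 'best available': release and retry

// Fixed (address) order: each waiter simply queues on the mutexes in
// turn, so no back-off and no chance of the retry loop starving anyone.
template <class M1, class M2>
void lock_pair(M1& a, M2& b, lock_in_order)
{
    if (std::less<const void*>()(&a, &b)) { a.lock(); b.lock(); }
    else                                  { b.lock(); a.lock(); }
}

// Try-and-back-off, as in Howard's description: never hold one mutex
// while blocked on the other.
template <class M1, class M2>
void lock_pair(M1& a, M2& b, try_and_back_off)
{
    for (;;)
    {
        a.lock();
        if (b.try_lock()) return;
        a.unlock();
        std::this_thread::yield();
        b.lock();
        if (a.try_lock()) return;
        b.unlock();
        std::this_thread::yield();
    }
}
```

The caller picks the trade-off at the call site, e.g. `lock_pair(m1, m2, try_and_back_off());`, which is one way a library could offer both models without two separately named facilities.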
participants (2)
-
Howard Hinnant
-
Preston A. Elder