
Howard Hinnant wrote:
Thread 1 locks mutex A and then waits for mutex B to be unlocked. Thread 2 is busy with mutex B (but not A). Thread 3 blocks on mutex A, even though it doesn't need B, only because thread 1 is sitting on it doing nothing.
If Thread 1 had released A before waiting on B, then both thread 2 and thread 3 could get work done.

While I agree this is a good way to increase efficiency, there is one problem with it: it is not FIFO.
Traditionally, mutexes are granted in the order of request, making the order of locking like a queue: when you request the lock, you go to the back of the queue. The method you describe makes it more of a 'best available' model, which is nice, but it means that thread 1 (in your example above) could potentially NEVER get its locks. If thread 2 continually requests lock B and thread 3 continually requests lock A, and their hold times overlap so that there is never a point where both are free, thread 1 will wait forever. A policy model that supports either FIFO or your 'best available' behaviour would be the best route to go, I think, as both models have their own advantages.

-- 
PreZ :)

Death is life's way of telling you you've been fired.
        -- R. Geis