
David Abrahams wrote:
"Peter Dimov" <pdimov@mmltd.net> writes:
Anthony Williams wrote:
The case given in Alexander's email above is:
A: lock   [lockcount == 0]
B: lock   [lockcount == 1, slow path]
A: unlock [lockcount == 0, slow path/post sema]
A: lock   [lockcount == 1, slow path]
He claims that the second A: lock takes the slow path, when it could have taken the fast path because the lock is free. I disagree. The lock is NOT free, since B is waiting on it. If A can take the fast path here, that's opening the door for B to be starved.
This is correct. The mutex does not sacrifice performance to guarantee fairness (in the above sense of fairness). It is indeed true that in heavily contended scenarios thread A can finish well ahead of thread B; however, the total time taken by threads A+B can be (and is) significantly less than with the "fairer" critical-section (CS) design.
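
[For illustration only: a minimal sketch of the kind of lock-count-plus-semaphore mutex being discussed. The class and member names (lightweight_mutex, lock_count_, sema_) and the use of C++20 std::counting_semaphore in place of a Win32 semaphore are assumptions made for this sketch, not the actual Boost implementation.]

#include <atomic>
#include <semaphore>

// Fast path: a single atomic increment/decrement when there is no contention.
// Slow path: block on / post a semaphore when another thread holds the mutex.
class lightweight_mutex {
public:
    void lock() {
        // Previous count 0 means the mutex was free: fast path, no kernel call.
        if (lock_count_.fetch_add(1, std::memory_order_acquire) > 0) {
            // Contended: wait until the current owner posts the semaphore.
            sema_.acquire();
        }
    }

    void unlock() {
        // If the count is still positive after our decrement, at least one
        // thread is blocked in lock(); wake exactly one of them.
        if (lock_count_.fetch_sub(1, std::memory_order_release) > 1) {
            sema_.release();
        }
    }

private:
    std::atomic<int> lock_count_{0};     // owner plus blocked waiters
    std::counting_semaphore<> sema_{0};  // stands in for a Win32 semaphore
};

[Under this scheme, the second A: lock in the trace above sees a nonzero lock count, because B is still counted as a waiter, and so it goes through the semaphore rather than the free fast path, which is the behaviour Anthony describes.]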
Probably dumb question: does this reasoning apply when A and B need to run "continuously" (e.g. in a server)?
I think that it does, because A and B are roles, not permanent labels. At the last lock above, A is the currently running thread; B is a thread that is currently blocked on the mutex. Since the scheduler distributes the "currently running" role among the threads based on its notion of fairness, a synchronization primitive that favors the currently running thread is as fair as the scheduler in the long run. I don't claim to be an expert on this stuff, though. :-)