
Howard Hinnant wrote:
On Jul 12, 2004, at 4:51 PM, Peter Dimov wrote:
That was the intent. But I stand corrected. I evidently got it wrong everywhere except on Mac OS X, where I do explicitly decrement the pthread_mutex's count to 0 during a wait (recursive mutex case only). Looks like I need to dig back into this on other platforms...
Doesn't this impose some overhead on your recursive_mutex, even if the user never takes advantage of this feature? (I have to admit that I don't have the slightest idea how this could be implemented correctly.)
I'm not familiar with how a native pthread_mutex is made recursive.
This kind of answers my question. ;-) See pthread_mutexattr_settype and PTHREAD_MUTEX_RECURSIVE. Note also that an implementation is allowed to make the default pthread_mutex recursive; in this case your users pay for the recursive overhead twice. Not that they don't deserve it for using a recursive_mutex. ;-)
But with a native non-recursive mutex, the space overhead added to handle recursive locking turned out to be sufficient to support condition variables as well, with no further space overhead needed for the condition variables themselves. Supporting them takes a little more code (maybe a dozen lines of C++) executed from within the condition's wait function, and maybe a dozen or so bytes of stack space there. Essentially, the wait function saves the state of the mutex before the wait, then frees the mutex for the wait, then restores the state of the mutex after the wait.
That's how Boost.Threads behaves, but (AFAICS) it doesn't protect itself against a thread switch and lock immediately after freeing the mutex for the wait, so it doesn't meet the "correctly" requirement. ;-)