
Peter Dimov wrote:
What are the design goals of read_write_mutex? (Lock proliferation aside for a moment.)
My understanding has always been that read/write locks are an optimization; that is, a read/write lock-based algorithm should deliver better performance than the same algorithm with "ordinary" locks. Right?
Wrong.
The program at the end of this message demonstrates that in this specific (contrived?) case ordinary mutexes outperform a read/write mutex by a factor of 2.5 or more in almost every scenario, even when no writers are active!
In some scenarios (16 readers, 4 writers, 10,000,000 iterations) the ordinary-mutex case completed in under 8 seconds on my machine, while the read/write case exceeded my patience threshold.
Am I missing something?
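
[The benchmark program referred to above is not reproduced in this excerpt. As a stand-in, here is a minimal sketch of the kind of comparison described, using std::mutex versus std::shared_mutex rather than the Boost primitives from the post, with readers only (the "no writers active" case) and a trivial critical section so that lock acquisition cost dominates. The thread and iteration counts are placeholders, not the original parameters.]

```cpp
// Hypothetical benchmark sketch (not the program from the original post):
// each reader thread repeatedly acquires a lock around a trivial read,
// so nearly all of the time measured is lock acquisition/release overhead.
#include <chrono>
#include <iostream>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

template <typename Mutex, typename ReadLock>
double timed_run(int readers, long iterations)
{
    Mutex m;
    long shared_value = 0;
    auto start = std::chrono::steady_clock::now();

    std::vector<std::thread> threads;
    for (int i = 0; i < readers; ++i)
        threads.emplace_back([&] {
            volatile long sink = 0;
            for (long n = 0; n < iterations; ++n) {
                ReadLock lock(m);      // acquisition/release cost dominates
                sink += shared_value;  // trivial "read" of the shared data
            }
        });
    for (auto& t : threads) t.join();

    return std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
}

int main()
{
    const int readers = 16;           // placeholder thread count
    const long iterations = 1000000;  // placeholder iteration count

    std::cout << "plain mutex:      "
              << timed_run<std::mutex, std::lock_guard<std::mutex>>(readers, iterations)
              << " s\n";
    std::cout << "read/write mutex: "
              << timed_run<std::shared_mutex, std::shared_lock<std::shared_mutex>>(readers, iterations)
              << " s\n";
}
```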
It doesn't help that the Boost read/write mutex, as I've mentioned before, is pretty inefficient; I'm not very happy with it. There are at least three reasons for the inefficiency:

1) Supporting four scheduling policies in one code base makes the code too complex.
2) Adhering too rigidly to the scheduling policies adds further complexity and inefficiency.
3) The complexity makes the code harder to optimize, and not much time has been spent optimizing it in the first place.

I mentioned elsewhere that I have some thoughts about ways to improve Boost.Threads; some of these ideas address these three points and would get a much better read/write mutex in place.

Mike
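
[To illustrate point 1) above — how much fixing a single scheduling policy can simplify things — here is a minimal writer-preference read/write mutex sketch built on std::mutex and std::condition_variable. This is neither the Boost.Threads implementation nor Mike's proposed redesign; it is only a sketch of the core logic when exactly one policy is supported.]

```cpp
// Minimal single-policy (writer-preference) read/write mutex sketch.
// Not the Boost implementation; illustrative only.
#include <condition_variable>
#include <mutex>

class rw_mutex_sketch {
public:
    void read_lock() {
        std::unique_lock<std::mutex> lk(m_);
        // Readers wait while a writer holds the lock or is waiting (writer preference).
        cv_.wait(lk, [&] { return !writer_ && waiting_writers_ == 0; });
        ++readers_;
    }
    void read_unlock() {
        std::lock_guard<std::mutex> lk(m_);
        if (--readers_ == 0) cv_.notify_all();  // last reader out wakes waiting writers
    }
    void write_lock() {
        std::unique_lock<std::mutex> lk(m_);
        ++waiting_writers_;
        cv_.wait(lk, [&] { return !writer_ && readers_ == 0; });
        --waiting_writers_;
        writer_ = true;
    }
    void write_unlock() {
        std::lock_guard<std::mutex> lk(m_);
        writer_ = false;
        cv_.notify_all();  // wake both waiting readers and writers to re-check
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    int readers_ = 0;
    int waiting_writers_ = 0;
    bool writer_ = false;
};
```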