Lock unification (shared vs. upgradable)

Is there any reason for the distinction between a shared lock and an upgradable shared lock? I mean - obviously an upgradable shared lock can be upgraded to an exclusive one, but do we need a separate type for this? What is the rationale behind this distinction? I can easily imagine a design (and implementation) where every shared lock is also an upgradable one, provided its mutex type supports the upgrade operation. Under such a design one can upgrade a shared lock only if it is the only lock currently held on the synchronization object (supporting such a transition). The implementation of such a mutex could be very similar to a semaphore (or just use a semaphore, if present on the platform). Here are some simple use cases:

    some_mutex_type_not_supporting_shared_locks m1;
    shared_lock<some...> l1(m1);                        // compilation error

    some_mutex_type_fully_supporting_shared_locks m2;
    shared_lock<some...> l2(m2);                        // ok
    exclusive_lock<some...> l3 = l2.try_upgrade();      // ok, non-blocking

    some_mutex_type_with_limited_support_for_shared_locks m3;
    shared_lock<some...> l4(m3);                        // ok
    exclusive_lock<some...> l5 = l4.try_upgrade();      // compilation error
    exclusive_lock<some...> l6 = l4.upgrade();          // ok, blocking

    some_mutex_type_with_minimal_support_for_shared_locks m4;
    {
        shared_lock<some...> l7(m4);                    // ok
        exclusive_lock<some...> l8 = l7.try_upgrade();  // compilation error
        exclusive_lock<some...> l9 = l7.upgrade();      // compilation error
    }
    exclusive_lock<some...> l10(m4);                    // ok

B.

On Wed, 04 Aug 2004 22:45:26 +0200, Bronek Kozicki <brok@rubikon.pl> wrote:
Is there any reason for the distinction between a shared lock and an upgradable shared lock? I mean - obviously an upgradable shared lock can be upgraded to an exclusive one, but do we need a separate type for this? What is the rationale behind this distinction? I can easily imagine a design (and implementation) where every shared lock is also an upgradable one, provided its mutex type supports the upgrade operation. Under such a design one can upgrade a shared lock only if it is the only lock currently held on the synchronization object (supporting such a transition). The implementation of such a mutex could be very similar to a semaphore (or just use a semaphore, if present on the platform).
<snip> An upgradeable lock that shares reading still needs to be exclusive with respect to other upgradeable locks. There are use cases for it, but they are rare due to the conditions required to make taking such a lock worthwhile, especially considering its exclusiveness property and prioritization requirements. The implementation of the mutex would likely need additional state as well, but that is fairly moot. Another rationale for keeping it separate is that shared/exclusive (r/w) locking and its interface should be minimal, to support existing OSes, standards, libraries and patterns of use. It would be nice to keep this mechanism simple enough to map to alternative implementations where possible. So I see two issues: risk of use/overuse, and ease of mapping to existing practice. Therefore I think it makes sense to keep the upgradeable feature separate. $0.005 Matt Hurd.

On Aug 4, 2004, at 4:45 PM, Bronek Kozicki wrote:
Is there any reason for the distinction between a shared lock and an upgradable shared lock? I mean - obviously an upgradable shared lock can be upgraded to an exclusive one, but do we need a separate type for this? What is the rationale behind this distinction? I can easily imagine a design (and implementation) where every shared lock is also an upgradable one, provided its mutex type supports the upgrade operation. Under such a design one can upgrade a shared lock only if it is the only lock currently held on the synchronization object (supporting such a transition). The implementation of such a mutex could be very similar to a semaphore (or just use a semaphore, if present on the platform).
The advantage of the separate upgradable_lock type is that when an upgradable_lock transfers ownership to a scoped_lock, the thread can assume that what was read while holding upgradable ownership is still valid (does not need to be re-read or re-computed) while holding the scoped_lock. This is true even if the thread had to block while transferring mutex ownership from the upgradable_lock to the scoped_lock. If sharable_lock can transfer mutex ownership to scoped_lock, then the thread can no longer make this assumption. If the thread blocks for this ownership transfer, it is possible that another thread will interrupt, upgrade a sharable_lock, and write. The original thread, upon obtaining the scoped_lock, is forced to re-read or re-compute its data because of this possibility. Thus it might as well just have relinquished shared ownership and blocked for exclusive ownership. Having said that, it is true that we could add:

    bool try_unlock_sharable_and_lock();

to the mutex functionality. This would mean adding the following to scoped_lock:

    scoped_lock(rvalue_ref<sharable_lock<mutex_type> > r, detail::try_lock_type);
    scoped_lock& operator<<=(rvalue_ref<sharable_lock<mutex_type> > r);

Meaning, you could try to upgrade a sharable_lock to a scoped_lock, but there would be no way to force the upgrade. It adds an asymmetric aspect to the interface: every other try-function has a corresponding blocking function. Aside from the asymmetry, I also wonder about the practicality of such an operation. A shared_lock context is usually set up because one anticipates common simultaneous non-destructive locking. If simultaneous non-destructive locking is rare, there is no advantage over exclusive locking. So if simultaneous non-destructive locking is common, that means a successful call to try_unlock_sharable_and_lock() will be rare (and if it isn't, then a simple exclusive mutex would probably have been a better design).
It occurs to me that offering such an interface might encourage code that blocks a thread indefinitely (just keeps trying). Whereas if the code had just relinquished shared ownership and blocked for exclusive ownership, the mutex design is likely already set up to not starve writers. -Howard

Howard Hinnant wrote:
The advantage of the separate upgradable_lock type is that when an upgradable_lock transfers ownership to a scoped_lock, then the thread can assume that what was read while holding upgradable ownership is still valid (does not need to be re-read, re-computed) while holding the scoped_lock. This is true even if the thread had to block while transferring mutex ownership from the upgradable_lock to the scoped_lock.
If we give this ability to shared_lock, would there still be a need for a separate upgradable_lock class? In other words: does upgradable_lock have any ability that conflicts with the abilities we need in shared_lock, such that we could not put all of these abilities in one class and drop the other? I cannot see such a conflict right now, which makes me wonder why we need two classes (shared_lock and upgradable_lock) at all?
If sharable_lock can transfer mutex ownership to scoped_lock, then the thread can no longer make this assumption. If the thread blocks for this ownership transfer, it is possible that another thread will interrupt, upgrade a sharable_lock, and write. The original thread, upon obtaining the scoped_lock, is forced to re-read or re-compute its data because of this possibility. Thus it might as well just have relinquished shared ownership and blocked for exclusive ownership.
I do not see a conflict here, i.e. if we give shared_lock the ability to upgrade with a guarantee that the lock has not been released during the operation (an atomic operation, possibly blocking, and/or a non-blocking "try" operation), the operation you mention above would still be doable as a simple release of the shared lock followed by acquisition of the exclusive lock (non-atomic) - as you noticed in the last sentence cited above. B.

On Aug 5, 2004, at 9:17 AM, Bronek Kozicki wrote:
Howard Hinnant wrote:
The advantage of the separate upgradable_lock type is that when an upgradable_lock transfers ownership to a scoped_lock, then the thread can assume that what was read while holding upgradable ownership is still valid (does not need to be re-read, re-computed) while holding the scoped_lock. This is true even if the thread had to block while transferring mutex ownership from the upgradable_lock to the scoped_lock.
If we give this ability to shared_lock, would there still be a need for a separate upgradable_lock class? In other words: does upgradable_lock have any ability that conflicts with the abilities we need in shared_lock, such that we could not put all of these abilities in one class and drop the other? I cannot see such a conflict right now, which makes me wonder why we need two classes (shared_lock and upgradable_lock) at all?
Consider this code:

    void read_write(rw_mutex& m)
    {
        upgradable_lock<rw_mutex> read_lock(m);
        bool b = compute_expensive_result();
        if (b) {
            scoped_lock<rw_mutex> write_lock(move(read_lock));
            modify_state(b);
        }
    }

This code works (under the design at http://home.twcny.rr.com/hinnant/cpp_extensions/threads.html ), allowing other threads to simultaneously hold a sharable_lock on the same mutex during this thread's execution of compute_expensive_result(). Under the write_lock in this example, the code assumes that the results of compute_expensive_result() are still valid (no other thread has obtained write access). Now consider a variation:

    void read_write(rw_mutex& m)
    {
        sharable_lock<rw_mutex> read_lock(m);
        bool b = compute_expensive_result();
        if (b) {
            scoped_lock<rw_mutex> write_lock(move(read_lock));
            modify_state(b);
        }
    }

Assuming this compiles and attempts to work the same way the upgradable version did, there is either a deadlock or a logic problem, depending upon how rw_mutex is implemented. If rw_mutex is implemented such that it will successfully negotiate multiple threads requesting an upgrade from sharable_lock to scoped_lock, then the mutex must block all but one thread requesting such an upgrade. The consequence is that "b" may no longer be valid under the write_lock. Some other thread may have simultaneously upgraded m from sharable to exclusive, and changed the protected data. In order to protect against this possibility, the author of read_write must now code:

    void read_write(rw_mutex& m)
    {
        sharable_lock<rw_mutex> read_lock(m);
        bool b = compute_expensive_result();
        if (b) {
            scoped_lock<rw_mutex> write_lock(move(read_lock));
            modify_state(compute_expensive_result());
        }
    }

I.e. compute_expensive_result() has been defensively executed twice.
If on the other hand rw_mutex guarantees that upgrading a sharable_lock to a scoped_lock will not invalidate previously read information, then the rw_mutex has no choice but to deadlock if two threads semi-simultaneously request that upgrade. Consider: threads A and B both hold a sharable_lock on the same mutex. Thread A requests an upgrade. It must block until all other threads release their sharable_lock (i.e. thread B). But instead of releasing its sharable_lock, thread B decides to upgrade too. It must block until all other threads release their sharable_lock (i.e. thread A). The mutex can give exclusive access atomically to thread A or thread B, but not to both. upgradable_lock doesn't suffer this defect of sharable_lock, but it pays a price for this ability: only one thread at a time can hold an upgradable_lock. But other threads can simultaneously hold sharable_locks alongside the unique upgradable_lock, so an upgradable_lock is "more friendly" than an exclusive lock. -Howard

Howard Hinnant wrote:
Now consider a variation:
    void read_write(rw_mutex& m)
    {
        sharable_lock<rw_mutex> read_lock(m);
        bool b = compute_expensive_result();
        if (b) {
            scoped_lock<rw_mutex> write_lock(move(read_lock));
            modify_state(b);
        }
    }
Assuming this compiles and attempts to work the same way upgradable did, there is either a deadlock, or a logic problem, depending upon how rw_mutex is implemented. If rw_mutex is implemented such that it will
Thank you for the detailed explanation. Indeed, there is a difference in the semantics of the lock operations. However, this difference could be expressed by different means. I can imagine two different designs:

* a single template class:

    template <typename Mutex, bool Upgradable = false>
    class shared_lock { /* ... */ };

This design does not change the current design much, but allows for easier changes in code when a shared lock needs to be updated to an upgradable one.

* an extended interface of shared_lock:

    template <typename Mutex>
    class shared_lock
    {
    public:
        // new members only
        shared_lock(Mutex&, const upgradable_t&);
        void lock(const upgradable_t&);
        bool try_lock(const upgradable_t&);
        bool try_lock(const timespan&, const upgradable_t&);
        bool upgradable() const;
        scoped_lock<Mutex> upgrade() throw (thread::non_upgradable);
        scoped_lock<Mutex> try_upgrade() throw (thread::non_upgradable);
        scoped_lock<Mutex> try_upgrade(const timespan&) throw (thread::non_upgradable);
    };

Here the decision to lock with the ability to upgrade may be deferred to the point where the mutex is actually locked, which does not have to be the place where the lock object is created (assuming you created a deferred lock). It also gives more flexibility at runtime. You may even create a shared lock (non-deferred), then release it and lock again, this time with the ability to upgrade - all in one shared_lock variable. The proposed interface has no atomic function to transform a shared non-upgradable lock into an upgradable one, in order to avoid deadlocks. I'm not saying that these designs are superior to the current one, but maybe they are worth some consideration? B.

Thanks for the thought and alternative suggestions. On Aug 6, 2004, at 3:15 PM, Bronek Kozicki wrote:
Indeed, there is a difference in the semantics of the lock operations. However, this difference could be expressed by different means. I can imagine two different designs:
* a single template class:

    template <typename Mutex, bool Upgradable = false>
    class shared_lock { /* ... */ };
This design does not change the current design much, but allows for easier changes in code when a shared lock needs to be updated to an upgradable one.
<nod> This isn't the first time I've been faced with the question: do you name it X and Y, or do you name it X<T> and X<U>? And I've come down in favor of both answers at various times. Exploring...

A:

    void read_write(rw_mutex& m)
    {
        upgradable_lock<rw_mutex> read_lock(m);
        bool b = compute_expensive_result();
        if (b) {
            scoped_lock<rw_mutex> write_lock(move(read_lock));
            modify_state(b);
        }
    }

or B:

    void read_write(rw_mutex& m)
    {
        sharable_lock<rw_mutex, true> read_lock(m);
        bool b = compute_expensive_result();
        if (b) {
            scoped_lock<rw_mutex> write_lock(move(read_lock));
            modify_state(b);
        }
    }

And of course if the EWG gives us template aliasing, then we could have it both ways. :-) I have a preference for A because it more explicitly says what is going on. It is easier to search for A than for B in a large code base. And the semantics of the upgradable functionality and the sharable functionality are subtly different enough that I think code should distinguish the two fairly clearly.
* an extended interface of shared_lock:

    template <typename Mutex>
    class shared_lock
    {
    public:
        // new members only
        shared_lock(Mutex&, const upgradable_t&);
        void lock(const upgradable_t&);
        bool try_lock(const upgradable_t&);
        bool try_lock(const timespan&, const upgradable_t&);
        bool upgradable() const;
        scoped_lock<Mutex> upgrade() throw (thread::non_upgradable);
        scoped_lock<Mutex> try_upgrade() throw (thread::non_upgradable);
        scoped_lock<Mutex> try_upgrade(const timespan&) throw (thread::non_upgradable);
    };
Here the decision to lock with the ability to upgrade may be deferred to the point where the mutex is actually locked, which does not have to be the place where the lock object is created (assuming you created a deferred lock). It also gives more flexibility at runtime. You may even create a shared lock (non-deferred), then release it and lock again, this time with the ability to upgrade - all in one shared_lock variable. The proposed interface has no atomic function to transform a shared non-upgradable lock into an upgradable one, in order to avoid deadlocks.
Consider the following scenario: a function takes two objects, needs read access to both, and then might need to atomically write to only the first:

    void foo(T& t, U& u)
    {
        // lock t and u for reading
        T::mutex::upgradable_lock t_read_lock(t.mutex(), defer_lock);
        U::mutex::sharable_lock   u_read_lock(u.mutex(), defer_lock);
        lock(t_read_lock, u_read_lock);  // generic lock algorithm
        // read from t and u
        bool b = t.expensive_compute(u);
        if (b) {
            // unlock u and lock t for writing
            u_read_lock.unlock();
            T::mutex::scoped_lock t_write_lock(move(t_read_lock));
            // write to t with previously read state in b
            t.update(b);
        }
    }

Now if upgradable_lock and sharable_lock are merged, then the merged lock needs different syntax for lock-sharable and lock-upgradable, as you show in your second proposal. But when that happens, the generic lock(lock1, lock2) function no longer works:

    template <class TryLock1, class TryLock2>
    void lock(TryLock1& l1, TryLock2& l2)
    {
        while (true) {
            l1.lock();
            if (l2.try_lock())
                break;
            l1.unlock();
            l2.lock();
            if (l1.try_lock())
                break;
            l2.unlock();
        }
    }

or if you prefer:

    template <class TryLock1, class TryLock2>
    void lock(TryLock1& l1, TryLock2& l2)
    {
        if (l1.mutex() < l2.mutex()) {
            l1.lock();
            l2.lock();
        }
        else {
            l2.lock();
            l1.lock();
        }
    }

Instead you would need a different lock(l1, l2) function for every combination of sharable/upgradable (and scoped too, if that were also merged). Or worse yet, you just replicate one of the above algorithms on the spot every time you need it. Another advantage of the separate locks: consider the following potential mistake, which could have been made in coding the above example:

    T::mutex::scoped_lock t_write_lock(move(u_read_lock));  // oops, should be t_read_lock!

If sharable and upgradable are merged, then the above mistake could transform itself from a compile time error into a run time error.
Depending on how things are implemented, it might throw an exception, or it might deadlock, or it might corrupt memory, or even worse, it might work most of the time. -Howard

Howard Hinnant wrote: ...

    void read_write(rw_mutex& m, which_t w)
    {
        sharable_lock<rw_mutex, true> read_lock(m);
        // May need access to many related which_t's
        if (compute_expensive_result(w)) {
            // Upgrade (blocks upcoming readers while pending)
            scoped_lock<rw_mutex> write_lock(upgrade(read_lock));
            if (read_lock.atomic_upgrade()) {
                modify_state(w);
                if (write_lock.upgrade_pending())
                    register_change(w);
            }
            else if (!computation_invalidated(w) ||  // check registry
                     compute_expensive_result(w)) {
                modify_state(w);
                write_lock.upgrade_pending() ? register_change(w)
                                             : clear_registry();
            }
            else if (!write_lock.upgrade_pending()) {
                clear_registry();
            }
        }
    }

Oder (verbosity aside for a moment)? regards, alexander.

On Aug 7, 2004, at 2:41 PM, Alexander Terekhov wrote:
Howard Hinnant wrote: ...
    void read_write(rw_mutex& m, which_t w)
    {
        sharable_lock<rw_mutex, true> read_lock(m);
        // May need access to many related which_t's
        if (compute_expensive_result(w)) {
            // Upgrade (blocks upcoming readers while pending)
            scoped_lock<rw_mutex> write_lock(upgrade(read_lock));
            if (read_lock.atomic_upgrade()) {
                modify_state(w);
                if (write_lock.upgrade_pending())
                    register_change(w);
            }
            else if (!computation_invalidated(w) ||  // check registry
                     compute_expensive_result(w)) {
                modify_state(w);
                write_lock.upgrade_pending() ? register_change(w)
                                             : clear_registry();
            }
            else if (!write_lock.upgrade_pending()) {
                clear_registry();
            }
        }
    }
Oder (verbosity aside for a moment)?
My current proposal ( http://home.twcny.rr.com/hinnant/cpp_extensions/threads.html ) already has the functionality of "fallible upgrade", but with slightly different semantics and syntax:

    scoped_lock<rw_mutex> write_lock(upgrade(read_lock));
    if (read_lock.atomic_upgrade()) {
        ...
    }
    else {
        ...
    }

vs:

    scoped_lock<rw_mutex> write_lock(move(read_lock), try_lock);
    if (write_lock.locked()) {
        ...
    }
    else {
        read_lock.unlock();
        write_lock.lock();
        ...
    }

Hmm... ok, the functionality isn't exactly the same as you proposed. But it's pretty close. The difference is that your upgrade might block for a while during the try-upgrade operation in the hope of making it atomic, whereas mine will give up immediately and then block for a non-atomic upgrade. See further down on how I see your block-try semantics as counterproductive in another scenario. So I believe you could design code at least partly the way you've proposed with my current proposal. Although I'm sure I'm not fully understanding the write_lock.upgrade_pending() functionality. The way I currently have my rw_mutex implemented, this information is not known. Actually my current implementation is based on your suggested "single entry gate" design. So when the mutex is write locked, it has no idea who, or whether anyone, is waiting outside the gate. I liked the idea that the scheduler, not the mutex, decides whether a reader or writer gets priority. Anyway, I could see a "write lock pending" state in the case that the mutex is currently in a read state. But only a sharable (non-upgradable) lock could ever see that state. Once an upgradable lock is active, even though it hasn't requested an upgrade, writers are locked behind the entry gate and not detectable. And once a writer gets past the entry gate and is "pending", upgradable locks are stuck behind the entry gate. But I believe that in addition to the fallible upgrade there needs to be a "for-sure upgrade".
You don't always want to have to check whether you got your upgrade (atomically or otherwise). There is a code size hit in having to write the "else" branch. And it would also become problematic to atomically pair an upgrade with an independent lock. For example, consider a function that takes two objects T and U, needs to read T, and if certain conditions are met then atomically upgrades T access from read to write and locks U:

    void foo(T& t, U& u)
    {
        typedef T::mutex_t::upgradable_lock T_ReadLock;
        typedef T::mutex_t::scoped_lock     T_WriteLock;
        typedef transfer_lock<T_WriteLock, T_ReadLock> T_Upgrade;
        typedef U::mutex_t::scoped_lock     U_Lock;
        // read lock t
        T_ReadLock t_read_lock(t.mutex());
        ...
        if (...) {
            // upgrade t from read to write, and lock u
            T_WriteLock t_write_lock(t.mutex(), defer_lock);
            T_Upgrade   t_upgrade(t_write_lock, t_read_lock, defer_lock);
            U_Lock      u_lock(u.mutex(), defer_lock);
            lock(t_upgrade, u_lock);
            // ok, t upgraded and u locked, all atomically
        }
    }

The transfer_lock<T_WriteLock, T_ReadLock> generic utility can make an upgrade look like a lock, but it really needs the "for-sure atomic upgrade" semantics in order to pull it off. And it also needs "try, but retain read access on failure" semantics. This is in contrast to your "try, but do a non-atomic upgrade on failure" semantics. And then the lock(lock1, lock2) generic algorithm can atomically do the upgrade on t and the lock on u. Having said all that, a word of caution on the above example: if U::mutex is a simple exclusive mutex, things are ok. However, if U::mutex is a read/write style mutex, then the above algorithm is in danger of deadlock. If foo2(U& u, T& t) does the same thing as foo(T& t, U& u), but with the roles reversed - read-lock u, then atomically (upgrade u, lock t) - then we're firmly in deadlock territory. Threads simultaneously executing foo and foo2 would deadlock for sure.
The above paragraph is an argument against all-in-one mutex functionality that I hadn't thought of before (at least if you consider "all-in-one" to include read/write capability). If a mutex existed, and did not include read/write capability, I suspect that the above foo could check that fact for U::mutex_t at compile time, and thus ensure that there was no danger of deadlock. -Howard

Howard Hinnant wrote: [...]
So I believe you could design code at least partly the way you've proposed with my current proposal. Although I'm sure I'm not fully understanding the write_lock.upgrade_pending() functionality. The way I currently have my rw_mutex implemented, this information is not known.
I know that.
Actually my current implementation is based on your suggested "single entry gate" design. So when the mutex is write locked, it has no idea who or if anyone is waiting outside the gate.
You'd need a semaphore, not a mutex, for the "entry gate" (the first upgrader/writer would lock it and the last one would unlock it).
I liked the idea that the scheduler, not the mutex, decides if a reader or writer gets priority.
Yes, but this scheme has a drawback: "serial wakes across context switching" (in addition to locking across context switching). For an alternative, see (for example) http://groups.google.com/groups?selm=3D9196B2.9BC29299%40web.de (Subject: Re: rwlock using pthread_cond) regards, alexander.

On Aug 9, 2004, at 10:26 AM, Alexander Terekhov wrote:
Yes, but this scheme has a drawback. "Serial wakes across context switching" (in addition to locking across context switching). For alternative, see (for example)
http://groups.google.com/groups?selm=3D9196B2.9BC29299%40web.de (Subject: Re: rwlock using pthread_cond)
The alternative in this link looks very familiar. I'm currently using two condition variables, an unsigned, and two bools (oh, and of course the mutex). Although after looking at your example I think I could probably merge my unsigned and two bools into an int and one bool (because of padding, it may not be worth the effort though). My two condition variables are used slightly differently than yours because of the additional requirements of start_upgradable() / end_upgradable() (and of course there are the conversion functions: http://home.twcny.rr.com/hinnant/cpp_extensions/threads.html#unlock_and_lock_sharable ). Sorry I can't just show the code. -Howard

Howard Hinnant wrote: [...]
The alternative in this link looks very familiar. I'm currently using two condition variables, ...
POSIX condition variables strive for "wait-morphing". So you'll end up with "serial wakes" (quite a waste on, say, a 32-way box). The problem is that waiters must reacquire the mutex. The tail_* stuff is supposed to NOT have this requirement (it's meant to use the associated mutex for waiter-queue "protection"... a la Java monitors, in a way). Well, take also a look at: http://www.cs.rochester.edu/u/scott/synchronization/pseudocode/rw.html regards, alexander.

On Aug 9, 2004, at 5:30 PM, Alexander Terekhov wrote:
Howard Hinnant wrote: [...]
The alternative in this link looks very familiar. I'm currently using two condition variables, ...
POSIX condition variables strive for "wait-morphing". So you'll end up with "serial wakes" (quite a waste on, say, 32-way box). The problem is that waiters must reacquire the mutex. tail_* stuff is supposed to NOT have this requirement (it's meant to use the associated mutex for waiters queue "protection"... a la Java monitors, in a way). Well, take also a look at:
http://www.cs.rochester.edu/u/scott/synchronization/pseudocode/rw.html
Ok, I think I understand now. You are expediting the "thundering herd" of waiting readers when a writer unlocks on a multi-processor. Thanks, Howard

From: Alexander Terekhov <terekhov@web.de>
Howard Hinnant wrote: ...
void read_write(rw_mutex& m, which_t w) { [snip] }
Oder (verbosity aside for a moment)?
Call me ignorant, but what does "Oder" mean? www.m-w.com has an entry indicating that "Oder" is a central European river, but that hardly seems relevant, at least not without some implicit knowledge of relevant characteristics of that river. -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

Rob Stewart wrote: [...]
Call me ignorant, but what does "Oder" mean? www.m-w.com has an
Try http://dict.leo.org ;-) regards, alexander.

From: Alexander Terekhov <terekhov@web.de>
Rob Stewart wrote: [...]
Call me ignorant, but what does "Oder" mean? www.m-w.com has an
Try http://dict.leo.org ;-)
No help. Apparently, "oder" is German for "or." Your original message was:
Howard Hinnant wrote: ...
    void read_write(rw_mutex& m, which_t w)
    {
        sharable_lock<rw_mutex, true> read_lock(m);
        // May need access to many related which_t's
        if (compute_expensive_result(w)) {
            // Upgrade (blocks upcoming readers while pending)
            scoped_lock<rw_mutex> write_lock(upgrade(read_lock));
            if (read_lock.atomic_upgrade()) {
                modify_state(w);
                if (write_lock.upgrade_pending())
                    register_change(w);
            }
            else if (!computation_invalidated(w) ||  // check registry
                     compute_expensive_result(w)) {
                modify_state(w);
                write_lock.upgrade_pending() ? register_change(w)
                                             : clear_registry();
            }
            else if (!write_lock.upgrade_pending()) {
                clear_registry();
            }
        }
    }
Oder (verbosity aside for a moment)?
So, you quoted what Howard wrote and then wrote: "Or (verbosity aside for a moment)?" Sorry, but it still makes no sense. -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

Rob Stewart wrote:
From: Alexander Terekhov <terekhov@web.de>
Rob Stewart wrote: [...]
Call me ignorant, but what does "Oder" mean? www.m-w.com has an
Try http://dict.leo.org ;-)
No help. Apparently, "oder" is German for "or." Your original message was:
Howard Hinnant wrote: ...
    void read_write(rw_mutex& m, which_t w)
    {
        sharable_lock<rw_mutex, true> read_lock(m);
        // May need access to many related which_t's
        if (compute_expensive_result(w)) {
            // Upgrade (blocks upcoming readers while pending)
            scoped_lock<rw_mutex> write_lock(upgrade(read_lock));
            if (read_lock.atomic_upgrade()) {
                modify_state(w);
                if (write_lock.upgrade_pending())
                    register_change(w);
            }
            else if (!computation_invalidated(w) ||  // check registry
                     compute_expensive_result(w)) {
                modify_state(w);
                write_lock.upgrade_pending() ? register_change(w)
                                             : clear_registry();
            }
            else if (!write_lock.upgrade_pending()) {
                clear_registry();
            }
        }
    }
Oder (verbosity aside for a moment)?
So, you quoted what Howard wrote and ...
Uhmm. Howard wrote "sharable_lock<rw_mutex, true> read_lock(m)" (a few other bits aside for a moment). Howard didn't write the rest. The poem is mine! Oder? ;-) regards, alexander.

Rob Stewart wrote:
From: Alexander Terekhov <terekhov@web.de>
Rob Stewart wrote: [...]
Call me ignorant, but what does "Oder" mean? www.m-w.com has an
Try http://dict.leo.org ;-)
No help. Apparently, "oder" is German for "or." Your original message was:
Howard Hinnant wrote: ...
void read_write(rw_mutex& m, which_t w) { sharable_lock<rw_mutex, true> read_lock(m);
[ snip code ]
Oder (verbosity aside for a moment)?
So, you quoted what Howard wrote and then wrote:
"Or (verbosity aside for a moment)?"
Sorry, but it still makes no sense.
I believe that your confusion stems from the fact that the code is actually Alexander's, not Howard's. He's proposing an alternative implementation to Howard's suggestions for the read_write function and asking whether Howard has any problems with it. At least, that's how I read it. Regards, Angus

From: Howard Hinnant <hinnant@twcny.rr.com>
On Aug 9, 2004, at 12:29 PM, Rob Stewart wrote:
So, you quoted what Howard wrote and then wrote:
For the record, I didn't write that code, Alexander did.
I now understand that Alexander claimed to have quoted something from you ("Howard Hinnant wrote: ...") but didn't, so I took what he wrote as what you wrote. Factor in unknown German and I managed to thoroughly confuse things. -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

Howard Hinnant wrote:
But when that happens, the generic lock(lock1,lock2) function no longer works:
I'm a little scared of such a lock function, where the order of the lock operations is disconnected from the order of the unlock operations. However, this is something that could possibly be fixed - I'm thinking about building chains of locks.
If sharable and upgradable are merged, then the above mistake could transform itself from a compile time error into a run time error.
Right, this is the weak point of moving the difference in lock semantics from compile time to runtime. Now I agree with you that we should take advantage of the type system to avoid such problems. B.

On Aug 8, 2004, at 6:00 AM, Bronek Kozicki wrote:
But when that happens, the generic lock(lock1,lock2) function no longer works:
I'm a little scared of such a lock function, where the order of lock operations is disconnected from the order of unlock operations. However, this is something that could possibly be fixed - I'm thinking about building chains of locks.
I can't think of any reason for concern about unlock order. However, you're right that it can be fixed: just make it an object, lock_both<Lock1, Lock2>, and record the order in which you lock within the lock_both so that you can use that information to unlock. -Howard
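A minimal sketch of the lock_both idea Howard describes (the class name comes from his message; the address-ordering policy and member names are my assumptions): acquire the two locks in a canonical order to avoid deadlock, remember which went first, and release in the reverse of the recorded order.

```cpp
#include <cassert>
#include <mutex>

// Sketch of lock_both<Lock1, Lock2>: records acquisition order so the
// destructor can unlock in reverse order. A production version would
// use a try-and-back-off scheme; ordering by address is just a simple
// deadlock-avoidance policy for illustration.
template <class Lock1, class Lock2>
class lock_both {
    Lock1& l1_;
    Lock2& l2_;
    bool l1_first_;  // true if l1_ was acquired before l2_
public:
    lock_both(Lock1& l1, Lock2& l2) : l1_(l1), l2_(l2) {
        if (static_cast<void*>(&l1_) < static_cast<void*>(&l2_)) {
            l1_.lock(); l2_.lock(); l1_first_ = true;
        } else {
            l2_.lock(); l1_.lock(); l1_first_ = false;
        }
    }
    ~lock_both() {
        // Unlock in the reverse of the recorded acquisition order.
        if (l1_first_) { l2_.unlock(); l1_.unlock(); }
        else           { l1_.unlock(); l2_.unlock(); }
    }
};
```

Usage would mirror a scoped lock: construct it from two deferred locks and let the destructor release both.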

On Aug 9, 2004, at 3:20 PM, Bronek Kozicki wrote:
Howard Hinnant wrote:
I can't think of any reason for concern about unlock order.
<blush> Ehm, I mixed other things with multithreaded programming
<chuckle> I've been thinking that maybe you're right! :-) Though I haven't thought it all the way through yet. Maybe by the time I write this, it will come together... I've been thinking about:

    template <class TryLock1, class TryLock2>
    class transfer_lock
    {
        ...
    };

where transfer_lock holds references to two other locks. The lock() function transfers ownership from lock2 to lock1, and the unlock() function transfers ownership from lock1 to lock2. Might be useful for transferring ownership from an upgradable_lock to a scoped_lock, and then back again, within a scope. So far so good. Ok, but what if you do the opposite:

    typedef scoped_lock<rw_mutex>     WriteLock;
    typedef upgradable_lock<rw_mutex> ReadLock;

    WriteLock write_lock(m);
    ...
    if (...)
    {
        ReadLock read_lock(m, defer_lock);
        transfer_lock<ReadLock, WriteLock> lock(read_lock, write_lock);
        // here m is downgraded from write to read
        ...
        // m is upgraded from read to write before exiting scope
    }
    // here m is write locked, whether or not if-branch was taken
    ...

I.e. in this (possibly perverse) example, "lock" doesn't block and "unlock" does. And if you start combining transfer_lock<L1, L2> with a lock_both<L1, L2>, I'll bet you could come up with an example where both "lock" and "unlock" block. And you might get shot for even suggesting such evilness! :-) But at any rate, when "unlock" can block, the rules are turned on their head. And if you start tossing a lock like that into a generic lock-2 algorithm, things start looking very evil. Maybe the best advice is just: don't do that! I.e. a precondition of using a generic lock-2 algorithm (or lock_both<L1,L2>) would be that the unlock() on each of the locks must be non-blocking. Still pondering this one (no, it didn't come together) ... -Howard

On Aug 9, 2004, at 7:08 PM, Howard Hinnant wrote:
Ok, but what if you do the opposite:
typedef scoped_lock<rw_mutex> WriteLock; typedef upgradable_lock<rw_mutex> ReadLock;
    WriteLock write_lock(m);
    ...
    if (...)
    {
        ReadLock read_lock(m, defer_lock);
        transfer_lock<ReadLock, WriteLock> lock(read_lock, write_lock);
        // here m is downgraded from write to read
        ...
        // m is upgraded from read to write before exiting scope
    }
    // here m is write locked, whether or not if-branch was taken
    ...
After playing with this some more I've realized that transfer_lock simply doesn't work like this in the context of the proposed |= and <<= operators. When transferring ownership from upgradable to scoped, a generic utility such as the above will use (pseudo code):

    For "lock":     scoped_lock |= upgradable_lock
    For "try lock": scoped_lock <<= upgradable_lock
    For "unlock":   upgradable_lock = scoped_lock

If you reverse the roles, it simply will not compile:

    For "lock":     upgradable_lock |= scoped_lock   // no such operation
    For "try lock": upgradable_lock <<= scoped_lock  // no such operation
    For "unlock":   scoped_lock = upgradable_lock    // no such operation

Therefore transfer_lock is really better named promote_lock. And if you wanted a generic demote_lock then that would probably look like:

    For "lock":     upgradable_lock = scoped_lock
    For "try lock": upgradable_lock = scoped_lock
    For "unlock":   scoped_lock |= upgradable_lock

And it would be ill-advised to try to couple a demote_lock with another lock via a generic "lock 2" algorithm. Such a generic algorithm could probably be set up to detect demote_lock at compile time and refuse to compile it. -Howard
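The promote_lock direction can be sketched without the proposed |= and <<= operators. Below is a toy upgradable mutex and a promote_lock whose lock() upgrades (read to write, possibly blocking) and whose destructor demotes (write to read, non-blocking) - the direction that compiles in Howard's scheme. All names here are illustrative, not the proposed interface.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Toy upgradable mutex (illustrative): supports upgrade (read -> write,
// blocks until the caller is the only holder) and downgrade (write ->
// read, never blocks).
class toy_rw_mutex {
    std::mutex mtx_;
    std::condition_variable cv_;
    int readers_ = 0;
    bool writer_ = false;
public:
    void lock_upgradable() {
        std::unique_lock<std::mutex> lk(mtx_);
        cv_.wait(lk, [this]{ return !writer_; });
        ++readers_;
    }
    void unlock_upgradable() {
        std::lock_guard<std::mutex> lk(mtx_);
        if (--readers_ == 0) cv_.notify_all();
    }
    void upgrade() {
        std::unique_lock<std::mutex> lk(mtx_);
        --readers_;  // give up the read slot, then wait to be alone
        cv_.wait(lk, [this]{ return readers_ == 0 && !writer_; });
        writer_ = true;
    }
    void downgrade() {  // non-blocking: write -> read
        std::lock_guard<std::mutex> lk(mtx_);
        writer_ = false;
        ++readers_;
        cv_.notify_all();
    }
    bool is_write_locked() {
        std::lock_guard<std::mutex> lk(mtx_);
        return writer_;
    }
};

// promote_lock: upgrade on construction, demote on destruction. Its
// "unlock" (the destructor) never blocks, so it is safe to use with a
// generic lock-2 algorithm -- unlike a demote_lock, whose "unlock"
// would have to re-acquire write access and could block.
class promote_lock {
    toy_rw_mutex& m_;
public:
    explicit promote_lock(toy_rw_mutex& m) : m_(m) { m_.upgrade(); }
    ~promote_lock() { m_.downgrade(); }
};
```

The asymmetry in this sketch is exactly Howard's point: the promote direction unlocks without blocking, while the reverse would turn the rules on their head.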
participants (6)
- Alexander Terekhov
- Angus Leeming
- Bronek Kozicki
- Howard Hinnant
- Matt Hurd
- Rob Stewart