boost::shared_locks and boost::upgrade_locks
I'm trying to implement a read-write lock, and I was told to use boost::upgrade_lock. However, only one thread can hold a boost::upgrade_lock while everyone else gets blocked, so it's not much use on the read end. (Correct me if I am wrong, but a boost::upgrade_lock only waits if someone has exclusive ownership, yet someone holding a boost::upgrade_lock will cause everyone else to block.)

But reads are cheap in my case - I just need locks for a cache. So, suppose I have the following:

    class Cache {
        std::map<Input, Output> cache;
        boost::shared_mutex mutex;
    public:
        Output query(const Input& in);
    };

It would appear that the proper way to implement Cache::query() is as follows:

    Output query(const Input& in) {
        boost::shared_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
            readLock.unlock();
            boost::unique_lock<boost::shared_mutex> writeLock(mutex);
            // Check to see if the result was added while waiting for the lock
            if (cache.count(in) != 0)
                return cache[in];
            cache[in] = compute_output(in);
            return cache[in];
        } else {
            return cache[in];
        }
    }

Is this correct? Or maybe it should be this:

    Output query(const Input& in) {
        boost::shared_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
            readLock.unlock();
            boost::upgrade_lock<boost::shared_mutex> rereadLock(mutex);
            // Check to see if the result was added while waiting for the lock
            if (cache.count(in) != 0) {
                return cache[in];
            } else {
                boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(rereadLock);
                cache[in] = compute_output(in);
                return cache[in];
            }
        } else {
            return cache[in];
        }
    }

A side question: if compute_output() is an expensive operation, should it be moved into the window between releasing the read lock and acquiring the write/reread lock?
On Wed, Dec 7, 2011 at 4:05 PM, Kelvin Chung
I'm trying to implement a read-write lock, and I was told to use boost::upgrade_lock. However, only one thread can have a boost::upgrade_lock, and everyone else gets blocked, so it's not much on the read end. (Correct me if I am wrong, but boost::upgrade_lock only waits if someone has exclusive, but someone with boost::upgrade_lock will cause everyone else to block)
But reads are cheap in my case - I just need to use locks for a cache. So, suppose I have the following:
You don't necessarily want to release your lock before upgrading. This is more or less how I'd think about it (through example pseudocode):

    struct Example {
        boost::shared_mutex m_mtx;
        ...

        Data doRead() {
            boost::shared_lock<boost::shared_mutex> rlock(m_mtx);
            // ...read and return
        }

        void doWrite(OtherData d) {
            boost::unique_lock<boost::shared_mutex> wlock(m_mtx);
            // ...write d somewhere protected by m_mtx
        }

        void getCachedOrComputed(Data &d) {
            boost::upgrade_lock<boost::shared_mutex> uplock(m_mtx);
            if (found in cache) {
                // ...read data from the cache into d
                return;
            }
            d = compute_output();
            // upgrade the lock to unique for the write
            boost::upgrade_to_unique_lock<boost::shared_mutex> wlock(uplock);
            // ...write d into the cache
        }
    };

Obviously getCachedOrComputed() is most akin to what you want to do (including where I would place compute_output()), but I included the other functions to illustrate the other cases (no need for upgrade). I had a difficult time figuring this out myself a while back, so I hope this helps.

Brian
On 2011-12-08 03:17:27 +0000, Brian Budge said:
On Wed, Dec 7, 2011 at 4:05 PM, Kelvin Chung
wrote: I'm trying to implement a read-write lock, and I was told to use boost::upgrade_lock. However, only one thread can have a boost::upgrade_lock, and everyone else gets blocked, so it's not much on the read end. (Correct me if I am wrong, but boost::upgrade_lock only waits if someone has exclusive, but someone with boost::upgrade_lock will cause everyone else to block)
But reads are cheap in my case - I just need to use locks for a cache. So, suppose I have the following:
You don't necessarily want to release your lock before upgrading. This is more or less how I'd think about it (through example pseudocode):
    struct Example {
        boost::shared_mutex m_mtx;
        ...

        Data doRead() {
            boost::shared_lock<boost::shared_mutex> rlock(m_mtx);
            // ...read and return
        }

        void doWrite(OtherData d) {
            boost::unique_lock<boost::shared_mutex> wlock(m_mtx);
            // ...write d somewhere protected by m_mtx
        }

        void getCachedOrComputed(Data &d) {
            boost::upgrade_lock<boost::shared_mutex> uplock(m_mtx);
            if (found in cache) {
                // ...read data from the cache into d
                return;
            }
            d = compute_output();
            // upgrade the lock to unique for the write
            boost::upgrade_to_unique_lock<boost::shared_mutex> wlock(uplock);
            // ...write d into the cache
        }
    };
Obviously "getCachedOrComputed" is most akin to what you want to do (including where I would place compute_output()), but I included the other functions to illustrate other cases (no need for upgrade). I had a difficult time figuring this out myself a while back, so I hope this helps.
The problem is that only one thread can have the upgrade lock, but any number of threads can have shared locks. Going back to my example, if I had

    Output query(const Input& in) {
        boost::upgrade_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
            boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(readLock);
            cache[in] = compute_output(in);
        }
        return cache[in];
    }

then only one thread could access query() at a time - this is no better than just doing things serially (i.e. I could have just used boost::unique_lock and assumed all writes are unconditional). My impression is that boost::upgrade_lock basically says "I need write access, but I don't need to write now". So I'm thinking that either of the two solutions I proposed is the "right" way to do query(), where I can let through other threads calling query() on Inputs already in the cache, say, rather than locking them out.
On Wed, Dec 7, 2011 at 7:51 PM, Kelvin Chung
wrote:
The problem is that only one thread can have the upgrade lock, but any number of threads can have shared locks. Going back to my example, if I had
    Output query(const Input& in) {
        boost::upgrade_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
            boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(readLock);
            cache[in] = compute_output(in);
        }
        return cache[in];
    }

Then only one thread could access query() at a time - this is no better than just doing things serially (i.e. I could have just used boost::unique_lock and assumed all writes are unconditional). My impression is that boost::upgrade_lock basically says "I need write access, but I don't need to write now". So I'm thinking that either of the two solutions I proposed is the "right" way to do query(), where I can let through other threads calling query() on Inputs already in the cache, say, rather than locking them out.
_______________________________________________ Boost-users mailing list Boost-users@lists.boost.org http://lists.boost.org/mailman/listinfo.cgi/boost-users
Upgrade ownership is just shared ownership that can be upgraded to exclusive ownership.
On 2011-12-08 04:27:51 +0000, Brian Budge said:
Upgrade ownership is just shared ownership that can be upgraded to exclusive ownership.
My understanding of the documentation of the UpgradeLockable concept seems to suggest otherwise. And I quote: "a single thread may have upgradeable ownership at the same time as others have shared ownership". This seems to imply that shared and upgrade are very different levels, especially when it later says "upgradeable ownership can be downgraded to plain shared ownership". If upgrade is just shared with a license to upgrade, why would you ever need to downgrade? Why would "downgrade to shared" even exist in the first place?

Here's a scenario that proves my point: suppose Cache::query() is implemented with upgrade locks alone. Suppose you have two Inputs, foo and bar, and two threads, one calling Cache::query(foo) and the other Cache::query(bar). Both will try to get the upgrade_lock, but according to the UpgradeLockable concept only one thread gets it, and the other is blocked. So if foo is not in the cache and bar is in the cache, and the bar thread is the one that gets blocked, then it has to wait for foo to finish (which, since compute_output() could be expensive, could take a while) - this is no better than doing things serially, when you could just let the bar thread go through (it only needs to read from the cache, which is cheap) while foo waits for the exclusive upgrade.

Thus, my conception is this: you take a shared lock when you don't yet know that you need to write to the cache. When it turns out that you do, you unlock and take an upgrade lock, which expresses the intention of writing to the cache while still letting other threads take shared locks for their cache lookups. After checking whether your input is in the cache again (another thread may have written what you needed into the cache while you waited for the upgrade lock), you upgrade to exclusive (you hold the upgrade lock, so no one else could have written to the cache while the remaining shared-lock holders drain out), and there you actually write to the cache. Then you downgrade from exclusive to upgrade to shared when you are done, and return the cached value.
On Wed, Dec 7, 2011 at 9:00 PM, Kelvin Chung
Hmmm, I'm not so sure. I wonder if Anthony or one of the other boost thread gurus might chime in?
Kelvin Chung wrote
On 2011-12-08 04:27:51 +0000, Brian Budge said:
Upgrade ownership is just shared ownership that can be upgraded to exclusive ownership.
My understanding of the documentation of the UpgradeLockable concept seems to suggest otherwise. And I quote:
"a single thread may have upgradeable ownership at the same time as others have shared ownership"
This seems to imply that shared and upgrade are very different levels. Especially when it later says "upgradeable ownership can be downgraded to plain shared ownership". If upgrade is just shared with a license to upgrade, why would you ever need to downgrade? Why would "downgrade to shared" even exist in the first place?
I guess that it is there to free the single slot for upgrade ownership, so that another thread can take an upgrade_lock.
Here's a scenario that proves my point: Suppose cache::query() is just implemented with upgrade locks. Suppose you have two Inputs, foo and bar. Suppose you also have two threads, one calling Cache::query(foo) and the other Cache::query(bar). Both will be trying to get the upgrade_lock, but according to the UpgradeLockable concept, only one thread gets it, and the other one will be blocked. So, if foo is not in the cache and bar is in the cache, and bar is the one that gets blocked, then bar has to wait for foo to finish (which, as compute_output() could be expensive, could take a while) - this is no better than doing things serially, when you could just let the bar go through (since it only needs to read from the cache, which is cheap) as foo is waiting for the exclusive lock upgrade.
You are right, since both calls go through the same query() function. Now suppose that query() spends some time in compute_output() but then needs to do some more work: there you could downgrade the lock so that bar is unblocked.

Thus, my conception is that you get a shared lock when you don't know that you need to write to cache. When it turns out that you do, you unlock and get an upgrade lock, which expresses the intention of writing to the cache, while still letting other threads get shared locks for their cache lookups. After checking whether your input is in the cache again (since another thread may have written what you needed into the cache while waiting for the upgrade lock), you upgrade to exclusive, where you actually write to cache. Then downgrade from exclusive to upgrade to shared when you are done, and return the cached value.

In general, unlocking a shared lock and then locking an upgrade lock is not a good idea: the data read under the shared lock can be changed by another thread as soon as you release the lock, so your computation is no longer coherent. But maybe in your particular case it could work.

Just my 2cts,
Vicente
On 2011-12-08 16:58:00 +0000, Vicente Botet said:
I guess that it is there to free the single slot for upgrade ownership, so that another thread can take an upgrade_lock.
Here's a scenario that proves my point: Suppose cache::query() is just implemented with upgrade locks. Suppose you have two Inputs, foo and bar. Suppose you also have two threads, one calling Cache::query(foo) and the other Cache::query(bar). Both will be trying to get the upgrade_lock, but according to the UpgradeLockable concept, only one thread gets it, and the other one will be blocked. So, if foo is not in the cache and bar is in the cache, and bar is the one that gets blocked, then bar has to wait for foo to finish (which, as compute_output() could be expensive, could take a while) - this is no better than doing things serially, when you could just let the bar go through (since it only needs to read from the cache, which is cheap) as foo is waiting for the exclusive lock upgrade.
You are right, since both calls go through the same query() function. Now suppose that query() spends some time in compute_output() but then needs to do some more work: there you could downgrade the lock so that bar is unblocked.
Would either try-locking the upgrade lock, or doing compute_output() before upgrading, work? Something like

    Output Cache::query(const Input& in) {
        boost::shared_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
            readLock.unlock();
            // Option 1 - try locking rereadLock...
            boost::upgrade_lock<boost::shared_mutex> rereadLock(mutex, boost::try_to_lock);
            Output out = compute_output(in);  // Option 1 - ...and while waiting for it, do the calculation
            if (!rereadLock.owns_lock())
                rereadLock.lock();  // Nothing left to do, forced to wait for the lock
            if (cache.count(in) == 0) {
                Output out = compute_output(in);  // Option 2 - do the calculation before upgrading
                boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(rereadLock);
                cache[in] = out;
                return cache[in];
            } else {
                // Another thread wrote to the cache while waiting, so out has been wasted :-(
                return cache[in];
            }
        } else {
            return cache[in];
        }
    }

(There appears to be no way to "try-upgrade", which I'd imagine would be the best place to do compute_output(). Why is that?)

Speaking of downgrading: to downgrade, you just move-assign to a "lower-class" lock, right? So it's alright to do something like

    Output Cache::query(const Input& in) {
        boost::shared_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
            readLock.unlock();
            boost::upgrade_lock<boost::shared_mutex> rereadLock(mutex);
            if (cache.count(in) == 0) {
                Output out = compute_output(in);
                boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(rereadLock);
                cache[in] = out;
            }
            // Downgrade the upgrade lock to a shared lock here
            readLock = boost::move(rereadLock);
        }
        return cache[in];
    }
Thus, my conception is that you get a shared lock when you don't know that you need to write to cache. When it turns out that you do, you unlock and get an upgrade lock, which expresses the intention of writing to the cache, while still letting other threads get shared locks for their cache lookups. After checking whether your input is in the cache again (since another thread may have written what you needed into the cache while waiting for the upgrade lock), you upgrade to exclusive (you have the upgrade lock and so no one else could have written to the cache while all the other threads with shared locks leave), where you actually write to cache. Then downgrade from exclusive to upgrade to shared when you are done, and return the cached value.
In general, unlocking a shared lock and then locking an upgrade lock is not a good idea: the data read under the shared lock can be changed by another thread as soon as you release the lock, so your computation is no longer coherent.
My data is "single-assignment" style: once an Input/Output pair is written into the Cache it is never modified, so that isn't an issue for me as long as I re-read after getting the upgrade lock. However, wouldn't re-reading after getting the upgrade lock address this in general?
On 08/12/11 18:30, Kelvin Chung wrote:
On 2011-12-08 16:58:00 +0000, Vicente Botet said:
In general, unlocking a shared lock and then locking an upgrade lock is not a good idea: the data read under the shared lock can be changed by another thread as soon as you release the lock, so your computation is no longer coherent.
My data is "single-assignment" style: once an Input/Output pair is written into the Cache it is never modified, so that isn't an issue for me as long as I re-read after getting the upgrade lock. However, wouldn't re-reading after getting the upgrade lock address this in general?
Yes, this is equivalent to re-trying the whole 'transaction'.

Vicente
On Dec 7, 2011, at 7:05 PM, Kelvin Chung wrote:
It would appear that the proper way to implement Cache::query() is as follows:
    Output query(const Input& in) {
        boost::shared_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
            readLock.unlock();
            boost::unique_lock<boost::shared_mutex> writeLock(mutex);
            // Check to see if the result was added while waiting for the lock
            if (cache.count(in) != 0)
                return cache[in];
            cache[in] = compute_output(in);
            return cache[in];
        } else {
            return cache[in];
        }
    }
Is this correct? Or maybe it should be this:
Yes, this is correct. Your code properly checks for the case that someone else got the write lock before this thread did. You have no need for upgrade functionality here. If the check you do inside the writeLock for another writer were expensive or very inconvenient, then you might consider upgrade functionality. But you are correct that only one thread can hold an upgrade lock at a time, so it does you no good unless you have readers that you *know* will never want upgrade/write access.

Howard
participants (5)
- Brian Budge
- Howard Hinnant
- Kelvin Chung
- Vicente Botet
- Vicente J. Botet Escriba