[threads] Improvement idea #1: Lock constructors [long]

I've been mentioning that I have some ideas about improvements for Boost.Threads. I intend to bring up these ideas one at a time for discussion. I'm particularly interested in finding out:

1) Whether people think that the idea would be an improvement.
2) If so, suggestions for improving the idea even more.

The first idea has to do with the long discussion that occurred some time back about the lock class--in particular, its constructors. There were several ideas about whether lock constructors should make the full range of lock, try_lock, and timed_lock functionality available and, if so, how; I don't recall any consensus being reached. I experimented for a while with a series of template classes that would allow a user to choose whatever interface they wanted, but another idea has occurred to me since then that I don't remember being mentioned anywhere (if I'm wrong, my apologies to whoever mentioned it): to eliminate the locking constructors altogether.

In this scheme, a lock object would no longer be responsible for locking a mutex, only for unlocking it. Instead, a mutex object would lock itself and transfer ownership of the lock to a lock object by way of a lock_transfer object. A sample of what the classes would look like and their usage is given at the end of this message.

Some advantages I see to this approach:

#1: It simplifies the lock class and allows it to be used to lock any mutex, no matter what its type (it can own a lock to an exclusive mutex, an exclusive lock to a shared_mutex, or a shared lock to a shared_mutex).

#2: Instead of the following, which in my mind leaves an ambiguity about whether the mutex should be unlocked, shared-locked, or exclusive-locked at the end:

    shared_lock l1(shared_mutex, SHARED_LOCK);
    ...
    {
        exclusive_lock l2 = l1.promote();
        ...
    }

it is possible to write this, which removes the ambiguity:

    lock l = shared_mutex.shared_lock();
    ...
    l = shared_mutex.promote(l.transfer());
    ...

Comments?
Mike

//------------------------------
//Exclusive mutex usage examples
//------------------------------

mutex_type m;

lock_type l;                      //Create lock object that doesn't lock anything
lock_type l = m.lock();           //Create lock object and lock mutex m
lock_type l = m.try_lock();       //Create lock object and conditionally lock mutex m
lock_type l = m.timed_lock(xxx);  //Create lock object and conditionally lock mutex m

l = m.lock();           //Lock mutex m (first unlocking whatever mutex l was previously locking, if any)
l = m.try_lock();       //Conditionally lock mutex m (first unlocking whatever mutex l was previously locking, if any)
l = m.timed_lock(xxx);  //Conditionally lock mutex m (first unlocking whatever mutex l was previously locking, if any)

l = other_lock.transfer();  //Transfer the lock from one lock object to another

l.unlock();     //Unlock whatever mutex is locked (throw exception if nothing is locked)
l.is_locked();  //Is l locking a mutex?

//NOT ALLOWED: l.lock(), l.try_lock(), etc.

//------------------------------
//Shared mutex usage examples
//------------------------------

lock_type l;                             //Create unlocked lock object
lock_type l = m.lock();                  //Create lock object and lock mutex m
lock_type l = m.try_lock();              //Create lock object and conditionally lock mutex m
lock_type l = m.timed_lock(xxx);         //Create lock object and conditionally lock mutex m
lock_type l = m.shared_lock();           //Create lock object and shared-lock mutex m
lock_type l = m.try_shared_lock();       //Create lock object and conditionally shared-lock mutex m
lock_type l = m.timed_shared_lock(xxx);  //Create lock object and conditionally shared-lock mutex m

l = m.lock();                  //Lock mutex m (first unlocking whatever mutex l was previously locking, if any)
l = m.try_lock();              //Conditionally lock mutex m (first unlocking whatever mutex l was previously locking, if any)
l = m.timed_lock(xxx);         //Conditionally lock mutex m (first unlocking whatever mutex l was previously locking, if any)
l = m.shared_lock();           //Shared-lock mutex m (first unlocking whatever mutex l was previously locking, if any)
l = m.try_shared_lock();       //Conditionally shared-lock mutex m (first unlocking whatever mutex l was previously locking, if any)
l = m.timed_shared_lock(xxx);  //Conditionally shared-lock mutex m (first unlocking whatever mutex l was previously locking, if any)

lock_type l = other_lock.transfer();
l = other_lock.transfer();

l = m.promote(l.transfer());
l = m.try_promote(l.transfer());
l = m.timed_promote(l.transfer(), xxx);
l = m.demote(l.transfer());
l = m.try_demote(l.transfer());
l = m.timed_demote(l.transfer(), xxx);

l.unlock();     //Unlock whatever mutex is locked (throw exception if nothing is locked)
l.is_locked();  //Is l locking a mutex?

//NOT ALLOWED: l.lock(), l.try_lock(), etc.

//------------------------------
//Class definitions
//------------------------------

class lock_transfer
{
public:
    //Used to return ownership of a lock from a mutex or
    //to transfer ownership of a lock from one lock object to another
    lock_transfer(...) {...}
    ~lock_transfer()
    {
        //unlock the mutex if ownership of the lock
        //hasn't been taken by a lock object
    }
    ...
};

class lock
{
public:
    lock(void)
    {
        //create lock object that doesn't lock any mutex
        ...
    }
    lock(lock_transfer& transfer)
    {
        //copy necessary information out of transfer
        ...
    }
    ~lock() { if (is_locked()) unlock(); }

    lock_transfer transfer(void) {...}
    void unlock(void)
    {
        //unlock the mutex
        ...
    }
    bool is_locked(void) {...}
};

class mutex
{
public:
    mutex(void);
    ~mutex();

    lock_transfer lock(void);  //lock the mutex and return a lock_transfer
    lock_transfer try_lock(void);
    lock_transfer timed_lock(...);
    ...
};

class shared_mutex
{
public:
    shared_mutex(void);
    ~shared_mutex();

    lock_transfer lock(void);
    lock_transfer try_lock(void);
    lock_transfer timed_lock(time t);

    lock_transfer shared_lock(void);
    lock_transfer try_shared_lock(void);
    lock_transfer timed_shared_lock(time t);

    lock_transfer demote(lock_transfer& exclusive_lock);
    lock_transfer try_demote(lock_transfer& exclusive_lock);
    lock_transfer timed_demote(lock_transfer& exclusive_lock, time t);

    lock_transfer promote(lock_transfer& shared_lock);
    lock_transfer try_promote(lock_transfer& shared_lock);
    lock_transfer timed_promote(lock_transfer& shared_lock, time t);
    ...
};

Michael Glassford wrote:
[...]
I'll think about your suggestion a bit more before I'll be able to comment, but just a quick note:
l = m.lock(); //Lock mutex m (first unlocking whatever mutex l was previously locking, if any)
This is not what will happen. m.lock() is executed first, then operator= is called and l is given the opportunity to release its lock. So if l happens to already hold m.lock(), the thread will deadlock. (And a deadlock can also occur if another thread holds a lock on m and is blocked on the mutex currently locked by l.)

Peter Dimov wrote:
Michael Glassford wrote:
[...]
I'll think about your suggestion a bit more before I'll be able to comment, but just a quick note:
l = m.lock(); //Lock mutex m (first unlocking whatever mutex l was previously locking, if any)
This is not what will happen. m.lock() is executed first, then operator= is called and l is given the opportunity to release its lock. So if l happens to already hold m.lock(), the thread will deadlock. (And a deadlock can also occur if another thread holds a lock on m and is blocked on the mutex currently locked by l.)
Actually, the comment in my pseudo-code above is an oversimplification of what I was actually thinking would happen. The actual transfer mechanism (when transferring from a mutex, at least) would be that the mutex would not actually be locked until the information was extracted from the lock_transfer object; the steps would be:

1) The mutex would build a lock_transfer object containing enough information to lock the mutex (this could be as simple as a this pointer and a member function pointer, either "raw" or using boost::bind or boost::function).
2) The lock_transfer object would be passed to the lock object's constructor or operator=.
3) The lock object would unlock itself if it's locked.
4) The lock object would extract the information from the lock_transfer object, which would lock the mutex in the process.

This would also have the advantage that, if the information is never extracted from the lock_transfer object (for example, if mutex.lock() were called but the result were not assigned to a lock object), the mutex would never be locked.

Mike

Michael Glassford <glassfordm@hotmail.com> wrote:
Peter Dimov wrote:
Michael Glassford wrote:
[...]
I'll think about your suggestion a bit more before I'll be able to comment, but just a quick note:
l = m.lock(); //Lock mutex m (first unlocking whatever mutex l was previously locking, if any)
This is not what will happen. m.lock() is executed first, then operator= is called and l is given the opportunity to release its lock. So if l happens to already hold m.lock(), the thread will deadlock. (And a deadlock can also occur if another thread holds a lock on m and is blocked on the mutex currently locked by l.)
Actually, the comment in my pseudo-code above is an oversimplification of what I was actually thinking would happen. The actual transfer mechanism (when transferring from a mutex, at least) would be that the mutex would not actually be locked until the information was extracted from the lock_transfer object; the steps would be:
1) The mutex would build a lock_transfer object containing enough information to lock the mutex (this could be as simple as a this pointer and a member function pointer, either "raw" or using boost::bind or boost::function).
2) The lock_transfer object would be passed to the lock object's constructor or operator=.
3) The lock object would unlock itself if it's locked.
4) The lock object would extract the information from the lock_transfer object, which would lock the mutex in the process.
This would also have the advantage that, if the information is never extracted from the lock_transfer object (for example, if mutex.lock() were called but the result were not assigned to a lock object), the mutex would never be locked.
What happens on concurrent m.lock() invocations with the lock transfer objects and their subsequent simultaneous use?

Matt.

On 4/29/05, Michael Glassford <glassfordm@hotmail.com> wrote:
I've been mentioning that I have some ideas about improvements for Boost.Threads. I intend to bring up these ideas one at a time for discussion. I'm particularly interested in finding out:
Kewl.
1) Whether people think that the idea would be an improvement. 2) If so, suggestions for improving the idea even more.
The first idea has to do with the long discussion that occurred some time back about the lock class--in particular, its constructors. There were several ideas about whether lock constructors should make the full range of lock, try_lock, and timed_lock functionality available and, if so, how; I don't recall any consensus being reached. I experimented for a while with a series of template classes that would allow a user to choose whatever interface they wanted, but another idea has occurred to me since then that I don't remember being mentioned anywhere (if I'm wrong, my apologies to whoever mentioned it): to eliminate the locking constructors altogether. In this scheme, a lock object would no longer be responsible for locking a mutex, only for unlocking it. Instead, a mutex object would lock itself and transfer ownership of the lock to a lock object by way of a lock_transfer object.
A sample of what the classes would look like and their usage is given at the end of this message.
Some advantages I see to this approach:
#1: It simplifies the lock class and allows it to be used to lock any mutex, no matter what its type (it can own a lock to an exclusive mutex, an exclusive lock to a shared_mutex, or a shared lock to a shared_mutex).
#2: instead of the following, which in my mind leaves an ambiguity of whether the mutex should be unlocked, shared-locked, or exclusive-locked at the end:
shared_lock l1(shared_mutex, SHARED_LOCK);
...
{
    exclusive_lock l2 = l1.promote();
    ...
}
It is possible to write this, which removes the ambiguity:
lock l = shared_mutex.shared_lock();
...
l = shared_mutex.promote(l.transfer());
...
Comments?
Mike
Bad example I think. Personally, the risk of deadlock from trying to promote a shared to an exclusive lock strikes me as goto-esque. I really struggle to find a compelling use case that makes the danger involved worthwhile. I recall suggesting that if the dangerous efficiency was really required it should be through a special promotable shareable mutex rather than the normal shareable one.

I agree with your push to clean up the lock constructor interface. I currently wrap them all and use my own interface so I can have generic code for null, exclusive and shared ops.

Not sure about the lock transfer object, as it is really just another name for a lock. After all, it is the token representing the locked state, or a lock by another name... Perhaps just being able to return a lock from a mutex and being able to transfer ownership by copying a lock like an auto_ptr would give you the same thing. I need to think about this some more.

Importantly, I believe you need to be able to request a shared lock even of an exclusive mutex or null mutex so that you can write generic concurrent code. You write to a shared model and the other models work. In the code below, you can see I simply added an extra parameter to all the constructors with the lock status request to make it orthogonal. With your approach you could have a lock transfer object constructible from a shared_lock or exclusive_lock method. I'd want to see all mutexes with shared and exclusive lock methods, including the null mutex that boost doesn't have, but I'm not sure your suggested approach has an advantage.

matt.
_________________

I use this lock redirection:

template<class Mutex>
struct null_lock
{
    null_lock( Mutex& m, lock_status::type ls = lock_status::exclusive ) {}
    bool try_lock() { return true; }
    bool timed_lock(const boost::xtime &) { return true; }
private:
    null_lock (const null_lock&);
    null_lock& operator= (const null_lock&);
};

template<class Mutex>
struct simple_lock
{
    simple_lock( Mutex& m, lock_status::type ls = lock_status::exclusive )
        : lock_(m) {}
    bool try_lock() { return lock_.try_lock(); }
    bool timed_lock(const boost::xtime &xt) { return lock_.timed_lock(xt); }
private:
    simple_lock (const simple_lock&);
    simple_lock& operator= (const simple_lock&);
private:
    typename Mutex::scoped_lock lock_;
};

template<class Mutex>
struct shareable_lock
{
    shareable_lock( Mutex& m, lock_status::type ls = lock_status::exclusive )
        : lock_(m, convert(ls)) {}
    bool try_lock() { return lock_.try_lock(); }
    bool timed_lock(const boost::xtime &xt) { return lock_.timed_lock(xt); }
private:
    shareable_lock (const shareable_lock&);
    shareable_lock& operator= (const shareable_lock&);
private:
    typename Mutex::scoped_read_write_lock lock_;
};

With these kind of type wrappers:

struct no_synch
{
    struct nil_mutex
    {
        nil_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~nil_mutex() {}
    };

    typedef nil_mutex mutex;
    typedef nil_mutex try_mutex;
    typedef nil_mutex timed_mutex;

    typedef null_lock<mutex> lock;
    typedef null_lock<try_mutex> try_lock;
    typedef null_lock<timed_mutex> timed_lock;

    typedef atomic_op_simple atomic_op;
};

struct simple
{
protected:
    struct adapted_mutex : public boost::mutex
    {
        adapted_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_mutex() {}
    };
    struct adapted_try_mutex : public boost::try_mutex
    {
        adapted_try_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_try_mutex() {}
    };
    struct adapted_timed_mutex : public boost::timed_mutex
    {
        adapted_timed_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_timed_mutex() {}
    };
public:
    typedef adapted_mutex mutex;
    typedef adapted_try_mutex try_mutex;
    typedef adapted_timed_mutex timed_mutex;

    typedef simple_lock<mutex> lock;
    typedef simple_lock<try_mutex> try_lock;
    typedef simple_lock<timed_mutex> timed_lock;

    typedef atomic_op_interlocked atomic_op;
};

struct recursive
{
protected:
    struct adapted_mutex : public boost::recursive_mutex
    {
        adapted_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_mutex() {}
    };
    struct adapted_try_mutex : public boost::recursive_try_mutex
    {
        adapted_try_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_try_mutex() {}
    };
    struct adapted_timed_mutex : public boost::recursive_timed_mutex
    {
        adapted_timed_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_timed_mutex() {}
    };
public:
    typedef adapted_mutex mutex;
    typedef adapted_try_mutex try_mutex;
    typedef adapted_timed_mutex timed_mutex;

    typedef simple_lock<mutex> lock;
    typedef simple_lock<try_mutex> try_lock;
    typedef simple_lock<timed_mutex> timed_lock;

    typedef atomic_op_interlocked atomic_op;
};

struct shareable
{
protected:
    struct adapted_mutex : public boost::read_write_mutex
    {
        adapted_mutex( lock_priority::type lp = lock_priority::exclusive )
            : read_write_mutex( convert( lp ) ) {}
        ~adapted_mutex() {}
    };
    struct adapted_try_mutex : public boost::try_read_write_mutex
    {
        adapted_try_mutex( lock_priority::type lp = lock_priority::exclusive )
            : try_read_write_mutex( convert( lp ) ) {}
        ~adapted_try_mutex() {}
    };
    struct adapted_timed_mutex : public boost::timed_read_write_mutex
    {
        adapted_timed_mutex( lock_priority::type lp = lock_priority::exclusive )
            : timed_read_write_mutex( convert( lp ) ) {}
        ~adapted_timed_mutex() {}
    };
public:
    typedef adapted_mutex mutex;
    typedef adapted_try_mutex try_mutex;
    typedef adapted_timed_mutex timed_mutex;

    typedef shareable_lock<mutex> lock;
    typedef shareable_lock<try_mutex> try_lock;
    typedef shareable_lock<timed_mutex> timed_lock;

    typedef atomic_op_interlocked atomic_op;
};

Matt Hurd wrote:
[snip other stuff]
#2: instead of the following, which in my mind leaves an ambiguity of whether the mutex should be unlocked, shared-locked, or exclusive-locked at the end:
shared_lock l1(shared_mutex, SHARED_LOCK);
...
{
    exclusive_lock l2 = l1.promote();
    ...
}
It is possible to write this, which removes the ambiguity:
lock l = shared_mutex.shared_lock();
...
l = shared_mutex.promote(l.transfer());
...
Comments?
Mike
Bad example I think. Personally, the risk of deadlock from trying to promote a shared to an exclusive lock strikes me as goto-esque. I really struggle to find a compelling use case that makes the danger involved worthwhile. I recall suggesting that if the dangerous efficiency was really required it should be through a special promotable shareable mutex rather than the normal shareable one.
If I read this correctly, it's an objection to the idea of promotion in general (except for the special case of a promotable shared lock), in which case I tend to agree. However, I want to specify the syntax that promotion would use even if the Boost.Threads shared_mutex ends up not using it, because I want to allow custom shared_mutex classes that do use it to be used with Boost.Threads lock classes (better support for custom classes being used with Boost.Threads classes is something I plan to cover in a later posting).
I agree with your push to clean up the lock constructor interface. I currently wrap them all and use my own interface so I can have generic code for null, exclusive and shared ops.
Not sure about the lock transfer object as it is really just another name for a lock. After all it is the token representing the locked state, or a lock by another name... Perhaps just being able to return a lock from a mutex and being able to transfer ownership by copying a lock like an auto ptr would give you the same thing. I need to think about this some more.
Except that lock objects can only be transferred explicitly by calling their transfer() method and have public constructors, while lock_transfer objects transfer implicitly and can only be constructed by mutex and lock objects. You can't tell this from what I posted, however.
Importantly, I believe you need to be able to request a share lock even of an exclusive mutex or null mutex so that you can write generic concurrent code. You write to a shared model and the other models work.
I was intending to have better support for generic code, though I planned to cover that in a later posting.

What happens when you request a shared lock from a non-shared mutex? It just gives you an exclusive lock?

[snip code]

Mike

Michael Glassford
Matt Hurd wrote:
If I read this correctly, it's an objection to the idea of promotion in general (except for the special case of a promotable shared lock),
Yep, promotion is generally poor practice in my book as you have to try hard if you don't want to deadlock or end up with a false economy. Having a special mutex/lock that supports it is my preferred option for keeping it away from normal behaviour. "It's over there in the closet if you really insist..." ;-)
in which case I tend to agree; however, I want to specify the syntax that promotion would use even if the Boost.Threads shared_mutex ends up not using it, because I want to allow custom shared_mutex classes that do use it to be used with Boost.Threads lock classes (better support for custom classes being used with Boost.Threads classes is something I plan to cover in a later posting).
It's still not clear to me how the lock transfers operate under simultaneous requests.

What does the transfer object buy you? That is, why

    l = m.promote(l.transfer());

rather than

    l = m.promote(l);

By the way, I think l is a bad name to use in examples, as it is hard to tell 1, l, and I apart. lk might be better shorthand.
I agree with your push to clean up the lock constructor interface. I currently wrap them all and use my own interface so I can have generic code for null, exclusive and shared ops.
Not sure about the lock transfer object as it is really just another name for a lock. After all it is the token representing the locked state, or a lock by another name... Perhaps just being able to return a lock from a mutex and being able to transfer ownership by copying a lock like an auto ptr would give you the same thing. I need to think about this some more.
Except that lock objects can only be transferred explicitly by calling their transfer() method and have public constructors, while lock_transfer objects transfer implicitly and can only be constructed by mutex and lock objects. You can't tell this from what I posted, however.
Should locks really be transferable or should shared_ptr< lock > be the idiom if you want to go beyond scoping?
Importantly, I believe you need to be able to request a share lock even of an exclusive mutex or null mutex so that you can write generic concurrent code. You write to a shared model and the other models work.
I was intending to have better support for generic code, though I planned to cover that in a later posting.
What happens when you request a shared lock from a non-shared mutex? It just gives you an exclusive lock?
Yes. It works just dandy as the precondition is weaker and the postcondition is stronger.

In a similar vein, shared and exclusive do nothing on the null mutex. Write to a shared model and get non-concurrent and exclusive model behaviour also. This is only possible if you write to a shared model, as there is no real way to write from a simpler model and have sharing introduced automagically.

As an aside, I guess writing to a promotable shareable model would likewise be substitutable back if the promotion always fails (shareable model) or always succeeds (exclusive model). This gives you a substitutable taxonomy of null->exclusive->shareable->promotable. Write the appropriate model and the "lesser" models work, making architectural components much more flexible. Similar thing to what ACE has been doing for over a decade...

Regards,

Matt.

Matt Hurd wrote:
Michael Glassford
Matt Hurd wrote:
If I read this correctly, it's an objection to the idea of promotion in general (except for the special case of a promotable shared lock),
Yep, promotion is generally poor practice in my book as you have to try hard if you don't want to deadlock or end up with a false economy. Having a special mutex/lock that supports it is my preferred option for keeping it away from normal behaviour. "It's over there in the closet if you really insist..." ;-)
in which case I tend to agree; however, I want to specify the syntax that promotion would use even if the Boost.Threads shared_mutex ends up not using it, because I want to allow custom shared_mutex classes that do use it to be used with Boost.Threads lock classes (better support for custom classes being used with Boost.Threads classes is something I plan to cover in a later posting).
It's still not clear to me how the lock transfers operate under simultaneous requests.
"Transfer" from a mutex is really just a delayed call to a lock method, so I don't see a problem: if there are simultaneous requests, one blocks and the other doesn't. Transfer from a lock shouldn't happen simultaneously (locks, by design, are not thread-safe), so that shouldn't be a problem either.
What does the transfer object buy you? That is, why

    l = m.promote(l.transfer());

rather than

    l = m.promote(l);
In this case, the .transfer() isn't really necessary; if people prefer, it could be omitted. In the case of actually transferring from one lock to another, I think it's best to require the .transfer() in order to make the transfer explicit.
By the way, I think l is a bad name to use in examples, as it is hard to tell 1, l, and I apart. lk might be better shorthand.
You're right; sorry.
I agree with your push to clean up the lock constructor interface. I currently wrap them all and use my own interface so I can have generic code for null, exclusive and shared ops.
Not sure about the lock transfer object as it is really just another name for a lock. After all it is the token representing the locked state, or a lock by another name... Perhaps just being able to return a lock from a mutex and being able to transfer ownership by copying a lock like an auto ptr would give you the same thing. I need to think about this some more.
Except that lock objects can only be transferred explicitly by calling their transfer() method and have public constructors, while lock_transfer objects transfer implicitly and can only be constructed by mutex and lock objects. You can't tell this from what I posted, however.
Should locks really be transferable or should shared_ptr< lock > be the idiom if you want to go beyond scoping?
Importantly, I believe you need to be able to request a share lock even of an exclusive mutex or null mutex so that you can write generic concurrent code. You write to a shared model and the other models work.
I was intending to have better support for generic code, though I planned to cover that in a later posting.
What happens when you request a shared lock from a non-shared mutex? It just gives you an exclusive lock?
Yes. It works just dandy as the precondition is weaker and the post condition is stronger.
OK. Just wanted to be sure I understood you.
In a similar vein, shared and exclusive do nothing on the null mutex. Write to a shared model and get non-concurrent and exclusive model behaviour also. This is only possible if you write to a shared model as there is no real way to write from a simpler model and have sharing introduced automagically.
I understand.
As an aside, I guess writing to a promotable shareable model would likewise be substitutable back if the promotion always fails (shareable model) or always succeeds (exclusive model).
This gives you a substitutable taxonomy of null->exclusive->shareable->promotable.
Write the appropriate model and the "lesser" models work making architectural components much more flexible.
Similar thing to what ACE has been doing for over a decade...
Mike

Michael Glassford wrote:
Matt Hurd wrote:
<snip>
It still not clear to me how the lock transfers operate under simultaneous requests.
"Transfer" from a mutex is really just a delayed call to a lock method, so I don't see a problem: if there are simultaneous requests, one blocks and the other doesn't.
Transfer from a lock shouldn't happen simultaneously (locks, by design, are not thread-safe), so that shouldn't be a problem either.
I was thinking of the initialisation of the lock from the mutex by transfer, but I fail to see any problem today.

I do like the simplicity of the concept in that the destructor just calls the bound method to unlock. However, do you think the overhead of such an approach for all locks would be worthwhile? I'm not sure current compilers will be able to inline and aggressively optimise such an approach as well as the current approach. It will have to be benchmarked carefully I guess.

Regards,

Matt.

On Apr 28, 2005, at 9:31 PM, Michael Glassford wrote:
What happens when you request a shared lock from a non-shared mutex? It just gives you an exclusive lock?
Imho, a compile time error.

A mutex has certain capabilities. Some mutexes may support shared access, some not. If a shared lock tries to "lock" a mutex, that means that it is trying to tell the mutex to "lock_sharable". If the mutex doesn't support that function, it should complain and loudly. Failure to do so may cause generic code to compile and cause silent (and subtle) run time errors. The generic code may be expecting shared locking, and may even work correctly if mistakenly given an exclusive lock, but perform orders of magnitude slower. When the performance hit is this big, a compile time error is preferable. (E.g. this is the only reason we don't give std::list an operator[](size_t).)

-Howard

PS: Here is my wishlist of mutex capabilities. But this list isn't meant to imply that all mutexes must support all of these capabilities. Locks should be able to pick and choose what they need out of a mutex, using only a subset of this functionality when appropriate. A lock templated on a mutex will only require those mutex capabilities actually instantiated by the lock client.

http://home.twcny.rr.com/hinnant/cpp_extensions/threads_move.html#Summary%20of%20mutex%20operations

Howard Hinnant <hinnant@twcny.rr.com> wrote:
On Apr 28, 2005, at 9:31 PM, Michael Glassford wrote:
What happens when you request a shared lock from a non-shared mutex? It just gives you an exclusive lock?
Imho, a compile time error.
I disagree, which suggests perhaps it should be configurable to make us both happy.
A mutex has certain capabilities. Some mutexes may support shared access, some not. If a shared lock tries to "lock" a mutex, that means that it is trying to tell the mutex to "lock_sharable". If the mutex doesn't support that function, it should complain and loudly.
I can see performance issues, but I fail to see a correctness issue. Do you have an example? I can imagine a failure on code that requires concurrent shared access from two or more threads, but I've never seen such code.
Failure to do so may cause generic code to compile and cause silent (and subtle) run time errors. The generic code may be expecting shared locking, and may even work correctly if mistakenly given an exclusive lock, but perform orders of magnitude slower. When the performance hit is this big, a compile time error is preferable. (e.g. this is the only reason we don't give std::list an operator[](size_t) ).
I already extensively use such a generic capability with success. I find writing a component to a shareable model and being able to use it with exclusive or null mutexes very useful, and an advance over model-specific structures.

$0.02

Matt.
-Howard
PS: Here is my wishlist of mutex capabilities. But this list isn't meant to imply that all mutexes must support all of these capabilities. Locks should be able to pick and choose what they need out of a mutex, using only a subset of this functionality when appropriate. A lock templated on a mutex will only require those mutex capabilities actually instantiated by the lock client.
http://home.twcny.rr.com/hinnant/cpp_extensions/threads_move.html#Summary%20of%20mutex%20operations
I'll have a look. It is certainly impressive looking. There is a lot of capability I've never needed there...

On Apr 28, 2005, at 11:58 PM, Matt Hurd wrote:
Howard Hinnant <hinnant@twcny.rr.com> wrote:
On Apr 28, 2005, at 9:31 PM, Michael Glassford wrote:
What happens when you request a shared lock from a non-shared mutex? It just gives you an exclusive lock?
Imho, a compile time error.
I disagree, which suggests perhaps it should be configurable to make us both happy.
A mutex has certain capabilities. Some mutexes may support shared access, some not. If a shared lock tries to "lock" a mutex, that means that it is trying to tell the mutex to "lock_sharable". If the mutex doesn't support that function, it should complain and loudly.
I can see performance issues, but I fail to see a correctness issue. Do you have an example?
I can imagine a failure on code that requires concurrent shared access from two or more threads, but I've never seen such code.
The best I can do for now is rather vague:

Imagine a system where there is nothing but readers, but one specific reader needs to occasionally promote itself to exclusive access for a write. This specific reader is always either in write mode or read mode, never completely releasing the lock. All other readers are sometimes in read mode, and sometimes not holding a lock.

The above system will work fine as long as all of the sharable locks really are sharable. Specifically, if the one promotable lock is really holding an exclusive lock in read mode (by mistake), then all other reader threads are permanently locked out of the resource.

It could be that I misunderstood the original question. In an attempt to clarify...

A homogenous (generic) interface is achieved at the lock level. That is, sharable_lock has a member function called lock() which calls lock_sharable() on the underlying mutex. A scoped (or exclusive) lock has a member function called lock() which calls lock() (or lock_exclusive()) on the underlying mutex. Code that is generic in locks, for example:

    // atomically lock two locks (without fear of deadlock)
    template <class TryLock1, class TryLock2>
    void lock(TryLock1& l1, TryLock2& l2);

will work whether the two locks are exclusive, sharable, promotable, or some mix, as they all share the same generic interface (lock(), try_lock(), etc.). So in a sense, if mutexes retain a heterogeneous interface, implementing only what they can truly deliver, and locks wrap the mutex with a homogenous interface, then we really do have the choice you speak of.
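[Editor's note: the generic two-lock function declared above can be implemented with a try-and-back-off loop. This is a sketch under assumed names (lock_both), not Boost's or the standard library's implementation; it is shown against C++11 std::unique_lock for concreteness, but works with any lock type exposing lock()/try_lock()/unlock().]

```cpp
#include <cassert>
#include <mutex>

// Atomically acquire two locks without fear of deadlock: if the second lock
// cannot be taken immediately, release the first and retry in the other
// order, so two threads naming the locks in opposite orders cannot deadlock.
template <class TryLock1, class TryLock2>
void lock_both(TryLock1& l1, TryLock2& l2) {
    for (;;) {
        l1.lock();
        if (l2.try_lock())
            return;          // acquired both
        l1.unlock();         // back off
        l2.lock();           // retry, acquiring in the opposite order
        if (l1.try_lock())
            return;
        l2.unlock();
    }
}

bool demo_lock_both() {
    std::mutex m1, m2;
    std::unique_lock<std::mutex> a(m1, std::defer_lock);
    std::unique_lock<std::mutex> b(m2, std::defer_lock);
    lock_both(a, b);         // generic: exclusive, sharable, or mixed locks
    return a.owns_lock() && b.owns_lock();
}
```

Because the algorithm only touches the homogeneous lock interface, the same code serves exclusive, sharable, and promotable locks alike.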
PS: Here is my wishlist of mutex capabilities. But this list isn't meant to imply that all mutexes must support all of these capabilities. Locks should be able to pick and choose what they need out of a mutex, using only a subset of this functionality when appropriate. A lock templated on a mutex will only require those mutex capabilities actually instantiated by the lock client.
http://home.twcny.rr.com/hinnant/cpp_extensions/threads_move.html#Summary%20of%20mutex%20operations
I'll have a look. It is certainly impressive looking. There is a lot of capability I've never needed there...
It is really nothing more than:
This gives you a substitutable taxonomy of null->exclusive->shareable->promotable.
except null is missing. I've never used null, except when I throw a master switch that says: Ok, everybody out of the pool except one guy! (single thread mode and everything becomes a null lock) :-)

I'm also suggesting lock transfers therein, but using the syntax of move semantics suggested in the committee papers. Boost could use that syntax, but it would be much more of a pain to support in the current language, and a few minor things still wouldn't work, for example:

    template <template <class> class Lock, class Mutex>
    Lock<Mutex> source_lock(Mutex& m)
    {
        Lock<Mutex> lock(m);
        // do something with lock here
        // ...
        return lock;
    }

That is, the above code is illegal today assuming locks aren't copyable, but legal tomorrow assuming that locks are movable. However, boost could make syntax like this work:

    upgradable_lock read_lock(mut);           // mut read locked
    ...
    scoped_lock write_lock(move(read_lock));  // mut promoted to write lock

It would involve auto_ptr_ref-like tricks in the locks. For example (of such tricks applied to smart pointers):

http://www.kangaroologic.com/move_ptr/

-Howard
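[Editor's note: with C++11 move semantics, the promotion syntax above can be sketched directly; in 2005 the auto_ptr_ref-like tricks Howard mentions would have been needed instead. Every name here (upgr_mutex, lock_upgradable, promote, release) is invented for illustration, not the proposed Boost interface.]

```cpp
#include <cassert>
#include <utility>

// A toy mutex supporting promotion from an upgradable (read) lock to an
// exclusive (write) lock.
struct upgr_mutex {
    int state = 0; // 0 = unlocked, 1 = read (upgradable), 2 = write
    void lock_upgradable()   { state = 1; }
    void unlock_upgradable() { state = 0; }
    void promote()           { state = 2; }
    void unlock()            { state = 0; }
};

struct upgradable_lock {
    upgr_mutex* m;
    explicit upgradable_lock(upgr_mutex& mx) : m(&mx) { m->lock_upgradable(); }
    upgradable_lock(upgradable_lock&& o) : m(o.m) { o.m = nullptr; }
    ~upgradable_lock() { if (m) m->unlock_upgradable(); }
    upgr_mutex* release() { upgr_mutex* r = m; m = nullptr; return r; }
};

struct scoped_lock {
    upgr_mutex* m;
    // Constructing from a moved-from upgradable_lock promotes read -> write;
    // ownership transfers, so the source lock will not double-unlock.
    explicit scoped_lock(upgradable_lock&& src) : m(src.release()) { m->promote(); }
    ~scoped_lock() { if (m) m->unlock(); }
};

int promote_demo() {
    upgr_mutex mut;
    {
        upgradable_lock read_lock(mut);               // mut read locked
        scoped_lock write_lock(std::move(read_lock)); // promoted to write lock
        assert(mut.state == 2);
    }
    return mut.state; // both locks released
}
```

The move removes the ambiguity of what the source lock still owns: after the transfer, it owns nothing, and only the destination lock unlocks.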

Howard Hinnant wrote:
Matt Hurd wrote:
Howard Hinnant <hinnant@twcny.rr.com> wrote:
<snip>
I can see performance issues, but I fail to see a correctness issue. Do you have an example?
I can imagine a failure on code that requires concurrent shared access from two or more threads, but I've never seen such code.
The best I can do for now is rather vague:
Imagine a system where there is nothing but readers, but one specific reader needs to occasionally promote itself to exclusive access for a write. This specific reader is always either in write mode or read mode, never completely releasing the lock. All other readers are sometimes in read mode, and sometimes not holding a lock.
The above system will work fine as long as all of the sharable locks really are sharable. Specifically, if the one promotable lock is really holding an exclusive lock in read mode (by mistake), then all other reader threads are permanently locked out of the resource.
Yes, this meets my failure mode of requiring two or more reads to succeed. The example frightens me in a goto-esque way, if you know what I mean ;-)
It could be that I misunderstood the original question. In an attempt to clarify...
A homogenous (generic) interface is achieved at the lock level. That is, sharable_lock has a member function called lock() which calls lock_sharable() on the underlying mutex. A scoped (or exclusive) lock has a member function called lock() which calls lock() (or lock_exclusive()) on the underlying mutex.
Code that is generic in locks, for example:
    // atomically lock two locks (without fear of deadlock)
    template <class TryLock1, class TryLock2>
    void lock(TryLock1& l1, TryLock2& l2);
will work whether the two locks are exclusive, sharable, promotable, or some mix, as they all share the same generic interface (lock(), try_lock(), etc.).
So in a sense, if mutexes retain a heterogeneous interface, implementing only what they can truly deliver, and locks wrap the mutex with a homogenous interface, then we really do have the choice you speak of.
Not sure I get it. Hasn't that just moved the cheese? I'm missing the part where you can write, say for example, a sharable namespace::vector<T, Synch> where Synch is your model that can be sharable, exclusive or null. I can see that your approach will work by redirecting the lock type being used for sharable and exclusive equivalents for each model, but it maintains similar issues. Is that right?
PS: Here is my wishlist of mutex capabilities. But this list isn't meant to imply that all mutexes must support all of these capabilities. Locks should be able to pick and choose what they need out of a mutex, using only a subset of this functionality when appropriate. A lock templated on a mutex will only require those mutex capabilities actually instantiated by the lock client.
http://home.twcny.rr.com/hinnant/cpp_extensions/threads_move.html#Summary%20of%20mutex%20operations
I'll have a look. It is certainly impressive looking. There is a lot of capability I've never needed there...
It is really nothing more than:
This gives you a substitutable taxonomy of null->exclusive->shareable->promotable.
except null is missing. I've never used null, except when I throw a master switch that says: Ok, everybody out of the pool except one guy! (single thread mode and everything becomes a null lock) :-)
I currently have a system in production that uses single-threaded and multithreaded approaches in the same process space for performance reasons. This is not the normal assumption of most concurrency libs. For example, I had to give up on boost::shared_ptr as I needed single-threaded performance in a multithreaded space. Some containers have dual-ported interfaces, where they have a safe multithreaded interface and a single-threaded interface. This posed some interesting challenges and is something to keep in mind. Not using a master null switch was important to me.
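[Editor's note: mixing single-threaded and multithreaded code in one process is the classic use case for a null mutex policy. The sketch below shows the idea with invented names (null_mutex, counter); it is not the interface either side of the thread is proposing.]

```cpp
#include <cassert>
#include <mutex>

// A mutex that does nothing: satisfies the locking interface so the same
// component code compiles in single-threaded mode with zero locking cost.
struct null_mutex {
    void lock() {}
    bool try_lock() { return true; }
    void unlock() {}
};

// A component written once against a synchronization policy.
template <class Mutex>
class counter {
    Mutex m_;
    int n_ = 0;
public:
    int increment() {
        std::lock_guard<Mutex> g(m_); // compiles away when Mutex = null_mutex
        return ++n_;
    }
};

int counter_demo() {
    counter<null_mutex> fast;  // single-threaded mode: zero-cost "locking"
    counter<std::mutex> safe;  // multithreaded mode: real mutex
    fast.increment();
    return fast.increment() + safe.increment();
}
```

Selecting the policy per object, rather than via a process-wide master switch, lets both modes coexist in the same process space.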
I'm also suggesting lock transfers therein, but using the syntax of move semantics suggested in the committee papers. Boost could use that syntax, but it would be much more of a pain to support in the current language, and a few minor things still wouldn't work, for example:
    template <template <class> class Lock, class Mutex>
    Lock<Mutex> source_lock(Mutex& m)
    {
        Lock<Mutex> lock(m);
        // do something with lock here
        // ...
        return lock;
    }
That is, the above code is illegal today assuming locks aren't copyable. But legal tomorrow assuming that locks are movable.
However, boost could make syntax like this work:
    upgradable_lock read_lock(mut);           // mut read locked
    ...
    scoped_lock write_lock(move(read_lock));  // mut promoted to write lock
It would involve auto_ptr_ref-like tricks in the locks. For example (of such tricks applied to smart pointers):
Neat. Move will change the way we work one day. Meanwhile, if I can conditionally compile out promotable locks to keep them away from myself, I'd be happy. They remain in my "very dangerous and not worth the grief" category, but I agree that corner cases may be made for them. I do wonder about the potential inefficiency of a lock transfer object. I'll raise that in a reply to Mike's mail.

Regards,

Matt.

Matt Hurd wrote:
Howard Hinnant <hinnant@twcny.rr.com> wrote:
On Apr 28, 2005, at 9:31 PM, Michael Glassford wrote:
What happens when you request a shared lock from a non-shared mutex? It just gives you an exclusive lock?
Imho, a compile time error.
I disagree, which suggests perhaps it should be configurable to make us both happy.
I can see both sides of the argument being supported by numbers of people, so perhaps this would be a good solution. It would fit with my strategy of trying to add more configurability. Mike

Howard Hinnant wrote:
On Apr 28, 2005, at 9:31 PM, Michael Glassford wrote:
What happens when you request a shared lock from a non-shared mutex? It just gives you an exclusive lock?
Imho, a compile time error.
A mutex has certain capabilities. Some mutexes may support shared access, some not. If a shared lock tries to "lock" a mutex, that means that it is trying to tell the mutex to "lock_sharable". If the mutex doesn't support that function, it should complain and loudly.
Failure to do so may cause generic code to compile and cause silent (and subtle) run time errors. The generic code may be expecting shared locking, and may even work correctly if mistakenly given an exclusive lock, but perform orders of magnitude slower. When the performance hit is this big, a compile time error is preferable. (e.g. this is the only reason we don't give std::list an operator[](size_t) ).
-Howard
PS: Here is my wishlist of mutex capabilities. But this list isn't meant to imply that all mutexes must support all of these capabilities. Locks should be able to pick and choose what they need out of a mutex, using only a subset of this functionality when appropriate. A lock templated on a mutex will only require those mutex capabilities actually instantiated by the lock client.
This is exactly what the design I suggested is intended to accomplish, except that the lock class no longer needs to be templated on the mutex because the locking interface is actually in the mutex, not in the lock class at all. A single, non-templated lock class works with any mutex. In effect, a lock becomes a specialized ScopeGuard: all it needs to know is whether it is locked and what function to call to unlock. Mike
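[Editor's note: Mike's "lock as a specialized ScopeGuard" might look roughly like the sketch below: the mutex locks itself and hands the lock object only a type-erased unlock callback. The names and the type-erasure mechanism are assumptions for illustration, not the actual proposed design.]

```cpp
#include <cassert>

// A single, non-templated lock class: all it knows is whether it is locked
// and what function to call to unlock.
class lock {
    void (*unlock_fn_)(void*); // type-erased "how to unlock"
    void* mutex_;              // which mutex to unlock (null = not locked)
public:
    lock(void* m, void (*fn)(void*)) : unlock_fn_(fn), mutex_(m) {}
    lock(lock&& o) : unlock_fn_(o.unlock_fn_), mutex_(o.mutex_) { o.mutex_ = nullptr; }
    lock(const lock&) = delete;
    lock& operator=(const lock&) = delete;
    ~lock() { if (mutex_) unlock_fn_(mutex_); }
};

// Any mutex type can hand out locks, whatever locking modes it supports.
struct my_shared_mutex {
    int readers = 0;
    static void unlock_shared_cb(void* self) {
        --static_cast<my_shared_mutex*>(self)->readers;
    }
    lock shared_lock() {                      // the mutex locks itself...
        ++readers;
        return lock(this, &unlock_shared_cb); // ...and transfers ownership
    }
};

int guard_demo() {
    my_shared_mutex sm;
    {
        lock l = sm.shared_lock();
        assert(sm.readers == 1);
    } // ~lock() unlocks, with no knowledge of the mutex type
    return sm.readers;
}
```

Because the locking interface lives in the mutex, one lock class serves exclusive locks, exclusive locks on a shared mutex, and shared locks alike.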
participants (4)
- Howard Hinnant
- Matt Hurd
- Michael Glassford
- Peter Dimov