
Hi Phil,

----- Original Message -----
From: "Phil Endecott" <spam_from_boost_dev@chezphil.org>
To: <boost@lists.boost.org>
Sent: Friday, May 09, 2008 7:06 PM
Subject: Re: [boost] [thread] is there an interest onthis unique_to_shared_mutex_adapter class?
> vicente.botet wrote:
>> template <typename Lockable>
>> class shared_lockable_adapter {
>> public:
>>     shared_lockable_adapter(Lockable& mtx) : mtx_(mtx) {}
>>     ~shared_lockable_adapter() {}
>>     void lock_shared()     { mtx_.lock(); }
>>     void unlock_shared()   { mtx_.unlock(); }
>>     bool try_lock_shared() { return mtx_.try_lock(); }
>>     void lock()            { mtx_.lock(); }
>>     void unlock()          { mtx_.unlock(); }
>>     bool try_lock()        { return mtx_.try_lock(); }
>>     // other functions ...
>> private:
>>     Lockable& mtx_;
>> };
> So this is a trivial adapter that makes a shared mutex that can't actually be shared.
Yes, you are right: it does not allow shared ownership. But it does allow generic functions that work for any ExclusiveLockable and that can perform better with a SharedLockable mutex, and it is up to the user to choose between an adapted exclusive mutex and a shared_mutex. If the performance gain comes from the SharedLockable extensions, the function must be defined in terms of that concept.

Consider an application that uses many classes and algorithms and must preserve transactional behaviour, so some common mutex is used to synchronize all of these classes and algorithms externally. Suppose that some of those classes or algorithms are templated on a SharedLockable. Even if the SharedLockable classes and algorithms perform better with a shared_mutex in isolation, that does not imply the whole application will perform better. This is where shared_lockable_adapter can be used. All this depends on the user's application.
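To make this concrete, here is a rough sketch of the kind of generic code I have in mind (process() and example() are only illustrative names, not part of any proposal); the caller decides whether to pass a real shared_mutex or an adapted exclusive mutex:

#include <boost/thread/mutex.hpp>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

// Generic function written once against the SharedLockable concept.
// With a real shared_mutex readers can run concurrently; with the
// adapter they simply serialize, but the same code works for both.
template <typename SharedLockable>
void process(SharedLockable& mtx) {
    boost::shared_lock<SharedLockable> lk(mtx);   // read-side lock
    // ... read the shared data ...
}

void example() {
    boost::shared_mutex sm;
    process(sm);                                  // concurrent readers possible

    boost::mutex m;                               // the application's existing
    shared_lockable_adapter<boost::mutex> am(m);  // exclusive mutex, adapted
    process(am);                                  // same algorithm, no changes
}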
> It would also be possible, I think, to make an adapter that actually creates a shared mutex from a normal one. I've never had to write a read/write mutex but I guess that this is how they are implemented internally.
The better adapter is to take a shared_mutex directly, since it can already be used in contexts expecting an ExclusiveLockable.
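That said, the construction you describe is certainly possible. A very rough sketch, only for illustration (writer starvation and other refinements are ignored, and this is not how Boost implements shared_mutex), built from a plain mutex and a condition variable:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>

// Naive reader/writer mutex: readers count themselves in, a writer
// waits until there are no readers and no other writer.
class naive_shared_mutex {
public:
    naive_shared_mutex() : readers_(0), writer_(false) {}

    void lock_shared() {
        boost::unique_lock<boost::mutex> lk(m_);
        while (writer_) cv_.wait(lk);
        ++readers_;
    }
    void unlock_shared() {
        boost::unique_lock<boost::mutex> lk(m_);
        if (--readers_ == 0) cv_.notify_all();
    }
    void lock() {
        boost::unique_lock<boost::mutex> lk(m_);
        while (writer_ || readers_ != 0) cv_.wait(lk);
        writer_ = true;
    }
    void unlock() {
        boost::unique_lock<boost::mutex> lk(m_);
        writer_ = false;
        cv_.notify_all();
    }
private:
    boost::mutex m_;
    boost::condition_variable cv_;
    unsigned readers_;
    bool writer_;
};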
> There are various usage scenarios. For example, I may have a generic algorithm that really needs a shared lock that can be shared, i.e. it creates threads that must run concurrently. If I concept-check that the passed mutex is a SharedLockable, but it's actually your adaptor, I'll get a runtime deadlock. Maybe we need some more fine-grained concepts to express these requirements.
You are certainly right: the adaptor limits concurrency. We only get a deadlock when the thread holding the shared lock (an exclusive lock with the adaptor) needs some kind of active cooperation from another thread while the mutex is locked. In the other cases, the adapter works as expected. I don't know whether your example is a particular case or a common one. I have used this approach and never hit a deadlock, to the point that I thought a sharable lock was substitutable by an exclusive lock. Evidently I was wrong: the substitution is not valid in every semantic context. But it works in a lot of situations. This does not mean that every algorithm expecting a SharedLockable will deadlock with an adapted ExclusiveLockable mutex; very often the choice between SharedLockable and ExclusiveLockable is only a matter of efficiency.

How to express that is another story. Maybe we need to state that the parameter is either a model of SharedLockable or a shared_lockable_adapter of an ExclusiveLockable. But that seems too specific: what if the function also works as expected for another adaptation of another concept? Being forced to define two functions with the same algorithm (copy/paste + rename), differing only in the names of the functions called, does not seem better. This is pure theory, since we are not talking about a particular function, but IMHO we need to define the function for models of the SharedLockable concept and add some textual semantic constraints. shared_lockable_adapter<model of ExclusiveLockable> is a model of SharedLockable that may or may not satisfy those semantic constraints; it is up to the developer to use it or not. I'm sure this is not the first time we have something like this. Some thoughts?

If the requirements you are talking about concern the semantics of the operations, you are right: the current C++ concepts wouldn't be of any help, since concepts do not capture semantics. Which kind of more fine-grained concepts do you have in mind?
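To illustrate the case where the substitution breaks, here is a hypothetical sketch of your scenario (cooperating_reader and demo are only illustrative names): two readers must hold the shared lock at the same time and meet at a barrier.

#include <boost/thread/thread.hpp>
#include <boost/thread/barrier.hpp>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <boost/ref.hpp>

// Each reader takes shared ownership and then waits for the other
// reader at a barrier, so both must hold the lock at the same time.
template <typename SharedLockable>
void cooperating_reader(SharedLockable& mtx, boost::barrier& rendezvous) {
    boost::shared_lock<SharedLockable> lk(mtx);  // shared (read) ownership
    rendezvous.wait();                           // needs the other reader too
}

void demo() {
    boost::shared_mutex sm;
    boost::barrier rendezvous(2);
    boost::thread t1(&cooperating_reader<boost::shared_mutex>,
                     boost::ref(sm), boost::ref(rendezvous));
    boost::thread t2(&cooperating_reader<boost::shared_mutex>,
                     boost::ref(sm), boost::ref(rendezvous));
    t1.join();
    t2.join();
    // Replacing boost::shared_mutex by shared_lockable_adapter<boost::mutex>
    // here makes the two lock_shared() calls mutually exclusive, so the
    // thread waiting inside the barrier never gets company: a runtime deadlock.
}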
> Certainly it would help if algorithms could work with both shared and non-shared mutexes without extra effort.
Note that all the algorithms that work today with exclusive mutexes can be refactored to use shared locks instead of exclusive locks in their reading blocks. All these algorithms will then work with both shared and exclusive mutexes without any extra effort.
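A hypothetical before/after of such a refactoring (read_counter and increment_counter are only illustrative names): only the reading block changes from unique_lock to shared_lock, the writing block stays exclusive.

#include <boost/thread/locks.hpp>

// Reading block refactored to take a shared lock; writing block unchanged.
template <typename SharedLockable>
int read_counter(SharedLockable& mtx, const int& counter) {
    // before: boost::unique_lock<SharedLockable> lk(mtx);
    boost::shared_lock<SharedLockable> lk(mtx);   // after: shared (read) lock
    return counter;
}

template <typename SharedLockable>
void increment_counter(SharedLockable& mtx, int& counter) {
    boost::unique_lock<SharedLockable> lk(mtx);   // writes stay exclusive
    ++counter;
}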
> I'm not yet aware of much generic code that uses mutexes and lock concepts. No doubt we will learn more as that code appears.
You are right, there is not yet much generic code that leaves the synchronization context to the user; most code is not thread safe and delegates that responsibility to the user. I suppose this kind of code will become more frequent than we are used to. We can already see the work on lock-free algorithms and containers. Thread safety is starting to become a must, as exception safety did some years ago. Multi-core, multi-processor and grid architectures push software in this direction: parallelism, parallelism... and even if this will be more complex without good abstractions, and there is a long way to go to get them, the result will be closer to our thinking.

Sorry for my English, and thanks for your pertinent remarks,

Vicente