
Hi,

I have noticed that in version 1.45 Ion Gaztanaga added detail/robust_emulation.hpp, which contains code that seems to emulate pthread_mutexattr_setrobust_np(). I understand this may be needed for platforms that don't support robust mutexes. My version of Linux does seem to support robust mutexes; however, I could not find a compile flag or any other way to use this on POSIX mutexes in Boost 1.46.1. The article here (http://stackoverflow.com/questions/1179685/how-do-i-take-ownership-of-an-aba...) indicates a way to do this, but now that emulation for robust mutexes has been added, I wonder if I am missing something.

Thanks,
ddrum

On 29/03/2011 19:34, Rusk . wrote:
Hi,
I have noticed that in version 1.45 Ion Gaztanaga added detail/robust_emulation.hpp, which contains code that seems to emulate pthread_mutexattr_setrobust_np(). I understand this may be needed for platforms that don't support robust mutexes. My version of Linux does seem to support robust mutexes; however, I could not find a compile flag or any other way to use this on POSIX mutexes in Boost 1.46.1.
It's an experiment to emulate robust mutexes on platforms without them; using it on platforms with POSIX process-shared mutexes would have a big performance impact. I haven't tested this much, and I have too many fronts and pending updates open right now to work on this, which is why it is in the detail directory. It's experimental and it won't be stable, I'm afraid. I plan to continue working on this after Boost.Move is stabilized and the Boost.Container review passes.

The idea is also to experiment a bit with different C++ implementations to find a correct C++ interface for robust mutexes, since POSIX semantics are not easily mapped onto current lock semantics.

Meanwhile you can modify

boost/interprocess/sync/posix/pthread_helpers.hpp

and set the robust mutex attribute in mutexattr_wrapper in your code.

Ion
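For readers following along, a minimal sketch of the kind of change Ion describes. The wrapper below is paraphrased from memory, not copied from pthread_helpers.hpp, so adapt it to the actual code in your Boost version; pthread_mutexattr_setrobust_np() and PTHREAD_MUTEX_ROBUST_NP are the glibc names, which POSIX.1-2008 standardized as pthread_mutexattr_setrobust() and PTHREAD_MUTEX_ROBUST.

// Sketch only: wrapper shape paraphrased, not verbatim from Boost.
// <pthread.h> and interprocess_exception are already available in
// pthread_helpers.hpp, where this change would live.
class mutexattr_wrapper
{
 public:
   mutexattr_wrapper()
   {
      if (pthread_mutexattr_init(&m_attr) != 0 ||
          pthread_mutexattr_setpshared(&m_attr, PTHREAD_PROCESS_SHARED) != 0 ||
          // added: mark the mutex robust so lock() can report EOWNERDEAD
          pthread_mutexattr_setrobust_np(&m_attr, PTHREAD_MUTEX_ROBUST_NP) != 0)
         throw interprocess_exception("pthread_mutexattr_xxx failed");
   }

   ~mutexattr_wrapper() { pthread_mutexattr_destroy(&m_attr); }

   operator pthread_mutexattr_t&() { return m_attr; }

 private:
   pthread_mutexattr_t m_attr;
};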

Hi Ion,

Thanks very much for your reply. In my application I wanted to use the interprocess_condition class as well; what would I need to modify there to cope with EOWNERDEAD errors?

Cheers,
ddrum

2011/3/29 Ion Gaztañaga <igaztanaga@gmail.com>
On 29/03/2011 19:34, Rusk . wrote:
Hi,
I have noticed that in version 1.45 Ion Gaztanaga added detail/robust_emulation.hpp, which contains code that seems to emulate pthread_mutexattr_setrobust_np(). I understand this may be needed for platforms that don't support robust mutexes. My version of Linux does seem to support robust mutexes; however, I could not find a compile flag or any other way to use this on POSIX mutexes in Boost 1.46.1.
It's an experiment to emulate robust mutexes on platforms without them; using it on platforms with POSIX process-shared mutexes would have a big performance impact. I haven't tested this much, and I have too many fronts and pending updates open right now to work on this, which is why it is in the detail directory. It's experimental and it won't be stable, I'm afraid. I plan to continue working on this after Boost.Move is stabilized and the Boost.Container review passes.
The idea is also to experiment a bit with different C++ implementations to find a correct C++ interface for robust mutexes, since POSIX semantics are not easily mapped onto current lock semantics.
Meanwhile you can modify
boost/interprocess/sync/posix/pthread_helpers.hpp
and set the robust mutex attribute in mutexattr_wrapper in your code.
Ion

On 31/03/2011 17:09, Rusk . wrote:
Hi Ion,
Thanks very much for your reply.
In my application I wanted to use the interprocess_condition class as well; what would I need to modify there to cope with EOWNERDEAD errors?
I'm afraid I haven't experimented with condition variables and EOWNERDEAD. Currently there is no interface to know whether EOWNERDEAD was returned from the internal condition variable wait. I don't know which solution will be best; you will need to experiment a bit...

Best,
Ion

On Mar 31, 2011, at 1:09 PM, Ion Gaztañaga wrote:
On 31/03/2011 17:09, Rusk . wrote:
Thanks very much for your reply.
In my application I wanted to use the interprocess_condition class as well; what would I need to modify there to cope with EOWNERDEAD errors?
I'm afraid I haven't experimented with condition variables and EOWNERDEAD. Currently there is no interface to know whether EOWNERDEAD was returned from the internal condition variable wait. I don't know which solution will be best; you will need to experiment a bit...
I developed an API for robust mutexes and associated condition variables that has been in use for a while by my employer.

The key to the API is that all locking operations, including condition wait operations (since they implicitly unlock and then relock), take a handler function that is called within the lock context if the lock operation indicated EOWNERDEAD. This handler function should do one of the following:

- die in turn, leaving the mutex "inconsistent" for the next locker,
- determine it can proceed, possibly after performing some fixups, and reset the mutex to "consistent", or
- determine that recovery is not possible and set the mutex to "unrecoverable", which cannot later be changed back to "consistent".

In this API robust mutexes don't try to model the Boost.Thread Lockable concept. That concept's lock() and try_lock() operations don't provide any direct mechanism for indicating EOWNERDEAD to the caller. While robust mutexes could additionally provide such functions, which would fail in that situation, I don't think there's any real use case for such behavior, because the primary reason for using a robust mutex is to have access to the EOWNERDEAD information.

I've been given permission by my employer to share this code if it will help get support for robust mutexes into Boost.Interprocess. I might need to sanitize it a bit first, though; the above discussion is really the important bit, the rest is just straightforward code.

On Mar 31, 2011, at 2:41 PM, Kim Barrett wrote:
On Mar 31, 2011, at 1:09 PM, Ion Gaztañaga wrote:
On 31/03/2011 17:09, Rusk . wrote:
Thanks very much for your reply.
In my application I wanted to use the interprocess_condition class as well; what would I need to modify there to cope with EOWNERDEAD errors?
I'm afraid I haven't experimented with condition variables and EOWNERDEAD. Currently there is no interface to know whether EOWNERDEAD was returned from the internal condition variable wait. I don't know which solution will be best; you will need to experiment a bit...
I developed an API for robust mutexes and associated condition variables that has been in use for a while by my employer.
The key to the API is that all locking operations, including condition wait operations (since they implicitly unlock and then relock), take a handler function that is called within the lock context if the lock operation indicated EOWNERDEAD. This handler function should do one of the following:
- die in turn, leaving the mutex "inconsistent" for the next locker,
- determine it can proceed, possibly after performing some fixups, and reset the mutex to "consistent", or
- determine that recovery is not possible and set the mutex to "unrecoverable", which cannot later be changed back to "consistent".
Forgot one thing: unlocking an "inconsistent" mutex (i.e. one whose lock returned EOWNERDEAD and which has not been set consistent) makes the mutex unrecoverable.
In this API robust mutexes don't try to model the Boost.Thread Lockable concept. That concept's lock() and try_lock() operations don't provide any direct mechanism for indicating EOWNERDEAD to the caller. While robust mutexes could additionally provide such functions, which would fail in that situation, I don't think there's any real use case for such behavior, because the primary reason for using a robust mutex is to have access to the EOWNERDEAD information.
I've been given permission by my employer to share this code if it will help get support for robust mutexes into Boost.Interprocess. I might need to sanitize it a bit first, though; the above discussion is really the important bit, the rest is just straightforward code.
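For readers unfamiliar with the underlying POSIX behavior, a minimal raw-pthread sketch of the three outcomes Kim describes, and of the unlock rule in his follow-up note, could look like the following. shared_state_ok() is a hypothetical check/repair of the shared data; on older glibc the calls carry an _np suffix (pthread_mutex_consistent_np()).

#include <errno.h>
#include <pthread.h>

bool shared_state_ok();  // hypothetical: inspect/repair the shared data

// Lock a robust mutex; returns true iff it is locked and the state is usable.
bool lock_robust(pthread_mutex_t *m)
{
   int rc = pthread_mutex_lock(m);
   if (rc == EOWNERDEAD) {             // previous owner died holding m
      if (shared_state_ok()) {
         pthread_mutex_consistent(m);  // reset to "consistent", proceed
         return true;
      }
      // Unlocking without pthread_mutex_consistent() is exactly the rule in
      // Kim's follow-up: the mutex becomes permanently "unrecoverable" and
      // later lock attempts fail with ENOTRECOVERABLE.
      pthread_mutex_unlock(m);
      return false;
   }
   return rc == 0;                     // rc may be ENOTRECOVERABLE here
}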

On 31/03/2011 21:04, Kim Barrett wrote:
On Mar 31, 2011, at 2:41 PM, Kim Barrett wrote:
I developed an API for robust mutexes and associated condition variables that has been in use for a while by my employer.
Thanks for the ideas. That would require adding a handler function to lock types, right? How does this work with a typical condition variable wait loop? Shouldn't we notify the caller of the result?

Best,
Ion

On Mar 31, 2011, at 4:30 PM, Ion Gaztañaga wrote:
On 31/03/2011 21:04, Kim Barrett wrote:
On Mar 31, 2011, at 2:41 PM, Kim Barrett wrote:
I developed an API for robust mutexes and associated condition variables that has been in use for a while by my employer.
Thanks for the ideas. That would require adding a handler function to lock types, right? How does this work with a typical condition variable wait loop? Shouldn't we notify the caller of the result?
Yes, the lock type associated with robust mutexes also needs to have its locking and constructor functions take a handler function. The caller provides the handler function. So one has something like:

class robust_mutex {
public:
    ... constructors & etc ...
    template<class Handler> void lock(Handler handler);
    void unlock();
    void set_consistent();
    void set_unrecoverable();
};

class robust_lock {
public:
    // lock mutex
    template<class Handler> robust_lock(robust_mutex& mutex, Handler handler);
    // unlock mutex
    ~robust_lock();
    template<class Handler> void lock(Handler handler);
    void unlock();
    robust_mutex& get_mutex();
};

class robust_condition {
public:
    ... constructors & etc ...
    template<class Lock, class Handler>
    void wait(Lock& lock, Handler handler);
    template<class Lock, class Handler>
    bool timed_wait(Lock& lock, <timetype> timeout, Handler handler);
    void notify_one();
    void notify_all();
};

where in all cases the handler is called, with a reference to the mutex as its argument, when the locking operation on the mutex returned EOWNERDEAD. The handler is called with the mutex locked.
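To make the wait-loop question concrete, here is a hypothetical usage sketch against the interface above; it is an illustration, not Kim's code, and Queue, repair(), empty() and pop() are invented for the example.

// Hypothetical shared data structure living in shared memory.
struct Queue
{
   bool repair();       // try to fix state left behind by a dead owner
   bool empty() const;
   void pop();
};

// Handler: called with the mutex locked whenever the explicit lock, or the
// implicit relock inside a condition wait, reported EOWNERDEAD.
struct recover_queue
{
   explicit recover_queue(Queue &q) : queue(q) {}
   void operator()(robust_mutex &m) const
   {
      if (queue.repair())
         m.set_consistent();      // fixed up; safe to continue
      else
         m.set_unrecoverable();   // give up; later lockers will fail
   }
   Queue &queue;
};

void consume_one(robust_mutex &mtx, robust_condition &cond, Queue &queue)
{
   recover_queue handler(queue);
   robust_lock lock(mtx, handler);   // constructor may invoke the handler
   while (queue.empty())
      cond.wait(lock, handler);      // so may the relock after each wait
   queue.pop();
}                                    // ~robust_lock unlocks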
participants (3)
- Ion Gaztañaga
- Kim Barrett
- Rusk .