[thread] synchronized_value: value and move semantics

Hi,

boost::synchronized_value (not released yet [1][2][3]) is based on [4]. See below part of this paper adapted to the Boost.Thread interface.

Currently boost::synchronized_value is, in addition, Copyable and Swappable. I was wondering if it is worth adding value and move semantics to synchronized_value. Making it EqualityComparable, LessThanComparable and Movable when the underlying type satisfies these requirements would allow storing them in a Boost.Container/C++11 container.

Do you see something completely wrong with this addition? Have some of you had a need for this? Could you share the context?

Best,
Vicente

[1] https://svn.boost.org/svn/boost/trunk/boost/thread/synchronized_value.hpp
[2] https://svn.boost.org/svn/boost/trunk/libs/thread/example/synchronized_value...
[3] https://svn.boost.org/svn/boost/trunk/libs/thread/example/synchronized_perso...
[4] "Enforcing Correct Mutex Usage with Synchronized Values" by Anthony Williams
    http://www.drdobbs.com/cpp/enforcing-correct-mutex-usage-with-synch/22520026...

-------------

[section The Problem with Mutexes]

The key problem with protecting shared data with a mutex is that there is no easy way to associate the mutex with the data. It is thus relatively easy to accidentally write code that fails to lock the right mutex - or even locks the wrong mutex - and the compiler will not help you.

    boost::mutex m1;
    int value1;
    boost::mutex m2;
    int value2;

    int readValue1()
    {
        boost::lock_guard<boost::mutex> lk(m1);
        return value1;
    }
    int readValue2()
    {
        boost::lock_guard<boost::mutex> lk(m1); // oops: wrong mutex
        return value2;
    }

Moreover, managing the mutex lock also clutters the source code, making it harder to see what is really going on.

The use of synchronized_value solves both these problems - the mutex is intimately tied to the value, so you cannot access it without a lock, and yet access semantics are still straightforward. For simple accesses, synchronized_value behaves like a pointer-to-T; for example:

    boost::synchronized_value<std::string> value3;

    std::string readValue3()
    {
        return *value3;
    }
    void setValue3(std::string const& newVal)
    {
        *value3=newVal;
    }
    void appendToValue3(std::string const& extra)
    {
        value3->append(extra);
    }

Both forms of pointer dereference return a proxy object rather than a real reference, to ensure that the lock on the mutex is held across the assignment or method call, but this is transparent to the user.

[endsect] [/The Problem with Mutexes]

[section Beyond Simple Accesses]

The pointer-like semantics work very well for simple accesses such as assignment and calls to member functions. However, sometimes you need to perform an operation that requires multiple accesses under protection of the same lock, and that's what the synchronize() method provides.

By calling synchronize() you obtain a strict_lock_ptr object that holds a lock on the mutex protecting the data, and which can be used to access the protected data. The lock is held until the strict_lock_ptr object is destroyed, so you can safely perform multi-part operations. The strict_lock_ptr object also acts as a pointer-to-T, just like synchronized_value does, but this time the lock is already held. For example, the following function adds a trailing slash to a path held in a synchronized_value. The use of the strict_lock_ptr object ensures that the string hasn't changed in between the query and the update.
    void addTrailingSlashIfMissing(boost::synchronized_value<std::string> & path)
    {
        boost::strict_lock_ptr<std::string> u=path.synchronize();
        if(u->empty() || (*u->rbegin()!='/'))
        {
            *u+='/';
        }
    }

[endsect] [/Beyond Simple Accesses]

[section Operations Across Multiple Objects]

Though synchronized_value works very well for protecting a single object of type T, nothing that we've seen so far solves the problem of operations that require atomic access to multiple objects unless those objects can be combined within a single structure protected by a single mutex.

One way to protect access to two synchronized_value objects is to construct a strict_lock_ptr for each object and use those to access the respective protected values; for instance:

    synchronized_value<std::queue<MessageType> > q1,q2;

    void transferMessage()
    {
        strict_lock_ptr<std::queue<MessageType> > u1 = q1.synchronize();
        strict_lock_ptr<std::queue<MessageType> > u2 = q2.synchronize();
        if(!u1->empty())
        {
            u2->push(u1->front());
            u1->pop();
        }
    }

This works well in some scenarios, but not all - if the same two objects are updated together in different sections of code then you need to take care to ensure that the strict_lock_ptr objects are constructed in the same sequence in all cases, otherwise you have the potential for deadlock. This is just the same as when acquiring any two mutexes.

In order to be able to use the deadlock-free lock algorithms we need instead to use unique_lock_ptr, which is Lockable.

    synchronized_value<std::queue<MessageType> > q1,q2;

    void transferMessage()
    {
        unique_lock_ptr<std::queue<MessageType> > u1 = q1.unique_synchronize(boost::defer_lock);
        unique_lock_ptr<std::queue<MessageType> > u2 = q2.unique_synchronize(boost::defer_lock);
        boost::lock(u1,u2); // deadlock-free locking algorithm
        if(!u1->empty())
        {
            u2->push(u1->front());
            u1->pop();
        }
    }

[endsect] [/Operations Across Multiple Objects]
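To make the "proxy object" remark above concrete, here is a minimal sketch of how such a locking proxy could look. The names basic_synchronized and locking_proxy are illustrative only and do not reflect the actual boost::synchronized_value internals; the point is just that the temporary returned by operator-> holds the lock for the duration of the full expression.

    #include <boost/thread/locks.hpp>
    #include <boost/thread/mutex.hpp>

    // Sketch only: a value paired with its mutex, accessed through a
    // temporary proxy that locks on construction and unlocks on destruction.
    template <typename T, typename Mutex = boost::mutex>
    class basic_synchronized
    {
        T value_;
        Mutex mtx_;

    public:
        class locking_proxy
        {
            T* value_;
            boost::unique_lock<Mutex> lk_;
        public:
            locking_proxy(T& v, Mutex& m) : value_(&v), lk_(m) {}
            T* operator->() const { return value_; }
            T& operator*() const { return *value_; }
        };

        // The proxy lives until the end of the full expression, so the lock
        // spans the whole member call made through operator->.
        locking_proxy operator->() { return locking_proxy(value_, mtx_); }
    };

    // Usage: the mutex is locked and unlocked around this single call.
    //   basic_synchronized<std::string> s;
    //   s->append("/");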

I just discovered synchronized_value from reading the 1.54.0 beta documentation. On Wed, Feb 20, 2013 at 7:02 PM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
Do you see something completely wrong with this addition?
No idea yet but I'm willing to test it in production code. See below for minor details.
Have some of you had a need for this? Could you share the context?
Yes, I think so. I have some cases (which I'm still working on but will be published as OSS soon) where I do something like this:

    struct ThingInfo
    {
        Id<ObjectInfo> id;
        std::string name;
        URI location;
    };

    struct ThingContent
    {
        std::vector< Id<Foo> > foo_id_list;
        std::vector< Id<Bar> > bar_id_list;
    };

    // this type MUST have a thread-safe interface
    class Thing
    {
    public:
        explicit Thing( ThingInfo info );

        ThingInfo info() const
        {
            boost::lock_guard<boost::mutex> lg( m_info_mutex );
            return m_info;
        }

        ThingContent content() const
        {
            boost::lock_guard<boost::mutex> lg( m_content_mutex );
            return m_content;
        }

        // + several modifying functions that use the work queue like this:

        void add( std::shared_ptr<Bar> bar )
        {
            m_work_queue.push( [this, bar]{
                m_bars.emplace_back( bar );
                {
                    boost::lock_guard<boost::mutex> lg( m_content_mutex );
                    m_content.bar_id_list.emplace_back( bar->id() );
                }
            });
        }

        void rename( std::string new_name )
        {
            m_work_queue.push( [this, new_name]{
                boost::lock_guard<boost::mutex> lg( m_content_mutex ); // SILENT ERROR!!! wrong mutex
                m_info.name = new_name;
            });
        }

    private:
        WorkQueue m_work_queue;                       // tasks executed later
        std::vector< std::shared_ptr<Foo> > m_foos;   // manipulated only by code in the work queue
        std::vector< std::shared_ptr<Bar> > m_bars;   // idem
        ThingInfo m_info;                             // should be manipulated only with m_info_mutex locked
        ThingContent m_content;                       // should be manipulated only with m_content_mutex locked
        mutable boost::mutex m_info_mutex;            // protects m_info
        mutable boost::mutex m_content_mutex;         // protects m_content
    };

This is not real code, just how it looks in my work-in-progress classes in my OSS project.
From reading the documentation, using synchronized_value I would then change the Thing implementation to:
    // this type MUST have a thread-safe interface
    class Thing
    {
    public:
        explicit Thing( ThingInfo info );

        ThingInfo info() const
        {
            return *m_info;
        }

        ThingContent content() const
        {
            return *m_content;
        }

        // + several modifying functions that use the work queue like this:

        void add( std::shared_ptr<Bar> bar )
        {
            m_work_queue.push( [this, bar]{
                m_bars.emplace_back( bar );
                m_content.synchronize()->bar_id_list.emplace_back( bar->id() );
            });
        }

        void rename( std::string new_name )
        {
            m_work_queue.push( [this, new_name]{
                m_info.synchronize()->name = new_name; // OK: CAN'T USE THE WRONG MUTEX!!
            });
        }

    private:
        WorkQueue m_work_queue;                       // tasks executed later
        std::vector< std::shared_ptr<Foo> > m_foos;   // manipulated only by code in the work queue
        std::vector< std::shared_ptr<Bar> > m_bars;   // idem
        boost::synchronized_value<ThingInfo> m_info;
        boost::synchronized_value<ThingContent> m_content;
    };

This version is, to me:
- more explicit to read;
- shorter;
- less prone to mistakes like the one pointed out in the comments - which might be hard to spot.

The only thing that bothers me is that synchronized_value is a long name and the "_value" part is, in my opinion, too much. Also, maybe using the pointer-semantics operators is not the best idea; I guess that if std::optional does it, then it's ok.

Once Boost 1.54.0 is released I will have the opportunity to try it by refactoring my code. I'll report if I find issues.

Joel Lamotte

Looks like I could have written:

    void add( std::shared_ptr<Bar> bar )
    {
        m_work_queue.push( [this, bar]{
            m_bars.emplace_back( bar );
            m_content->bar_id_list.emplace_back( bar->id() ); // simplified
        });
    }

    void rename( std::string new_name )
    {
        m_work_queue.push( [this, new_name]{
            m_info->name = new_name; // simplified
        });
    }

Which is even better.

Now I have a question: should the current (1.54.0 beta) version of synchronized_value work with multiple-readers-single-writer mutexes?

Joel Lamotte

On 21/02/13 02:02, Vicente J. Botet Escriba wrote:
Hi,
boost::synchronized_value (not released yet [1][2][3]) is based on [4]. See below part of this paper adapted to the Boost.Thread interface.
Currently boost::synchronized_value is, in addition, Copyable and Swappable. I was wondering if it is worth adding value and move semantics to synchronized_value. Making it EqualityComparable, LessThanComparable and Movable when the underlying type satisfies these requirements would allow storing them in a Boost.Container/C++11 container.
Do you see something completely wrong with this addition? Have some of you had a need for this? Could you share the context?
Sorry, I'm a bit late to the party and not really answering the question.

I can see the usefulness of synchronized_value for C++03, but not in C++11.

It's just too easy to forget to call the synchronize() member:

    boost::synchronized_value<std::queue<int>> synch_queue;
    if(!synch_queue->empty())
        synch_queue->pop();

when what was meant was (excuse the use of auto, I've become lazy):

    boost::synchronized_value<std::queue<int>> synch_queue;
    {
        auto lock = synch_queue.synchronize();
        if(!synch_queue->empty())
            synch_queue->pop();
    }

This is neither safe nor efficient (2 lock/unlocks).

I think this should just not exist in C++11 and instead be replaced by something like monitor<T> described by Herb Sutter [1]:

    monitor<std::queue<int>> synch_queue;
    queue([](std::queue& q)
    {
        if(!q.empty())
            q.pop();
    }

Now we're safe and efficient (one lock/unlock per block).

Since movability is a C++11 thing, can we have something harder to use incorrectly than synchronized_value for C++11?

I guess my point is: rather than C++11ifying synchronized_value, can we have monitor instead?

Thoughts?

Ben

[1] http://channel9.msdn.com/Shows/Going+Deep/C-and-Beyond-2012-Herb-Sutter-Conc... From 0:40:00
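For readers who have not watched the talk, a minimal sketch of the monitor<T> / execute-around idea might look like the following. This is an illustration based on the description above, not Herb Sutter's actual code, and the member names are assumptions.

    #include <boost/thread/locks.hpp>
    #include <boost/thread/mutex.hpp>
    #include <utility>

    template <typename T>
    class monitor
    {
        T data_;
        boost::mutex mtx_;
    public:
        monitor() {}
        explicit monitor(T d) : data_(std::move(d)) {}

        // Run f with exclusive access to the data: one lock/unlock per call.
        template <typename F>
        auto operator()(F f) -> decltype(f(std::declval<T&>()))
        {
            boost::lock_guard<boost::mutex> lk(mtx_);
            return f(data_);
        }
    };

    // Usage (with the lambda call completed):
    //   monitor<std::queue<int>> synch_queue;
    //   synch_queue([](std::queue<int>& q)
    //   {
    //       if(!q.empty())
    //           q.pop();
    //   });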

Ben Pope wrote
On 21/02/13 02:02, Vicente J. Botet Escriba wrote:
Hi,
boost::synchronized_value (not released yet [1][2][3]) is based on [4]. See below part of this paper adapted to the Boost.Thread interface.
Currently boost::synchronized_value is, in addition, Copyable and Swappable. I was wondering if it is worth adding value and move semantics to synchronized_value. Making it EqualityComparable, LessThanComparable and Movable when the underlying type satisfies these requirements would allow storing them in a Boost.Container/C++11 container.
Do you see something completely wrong with this addition? Have some of you had a need for this? Could you share the context?
Sorry, I'm a bit late to the party and not really answering the question.
I can see the usefulness of synchronized_value for C++03, but not in C++11.
Why?
It's just too easy to forget to call the synchronize() member:

    boost::synchronized_value<std::queue<int>> synch_queue;
    if(!synch_queue->empty())
        synch_queue->pop();

when what was meant was (excuse the use of auto, I've become lazy):

    boost::synchronized_value<std::queue<int>> synch_queue;
    {
        auto lock = synch_queue.synchronize();
        if(!synch_queue->empty())
            synch_queue->pop();
    }

This is neither safe nor efficient (2 lock/unlocks).
This should be:

    boost::synchronized_value<std::queue<int>> synch_queue;
    {
        auto lock = synch_queue.synchronize();
        if(!lock->empty())
            lock->pop();
    }
I think this should just not exist in C++11 and instead be replaced by something like monitor <T> described by Herb Sutter [1]:
    monitor<std::queue<int>> synch_queue;
    queue([](std::queue& q)
    {
        if(!q.empty())
            q.pop();
    }
Sorry I don't understand the syntax.
Now we're safe and efficient (one lock/unlock per block).
This was just the intention of synchronize ;-)
Since movability is a C++11 thing, can we have something harder to use incorrectly than synchronized_value for C++11?
Could you elaborate?
I guess my point is rather than C++11ifying synchronized_value, can we have monitor instead?
Thoughts?
Could you summarize here the advantages of the monitor you are referring to?

Best,
Vicente

On 26/06/13 18:03, Vicente Botet wrote:
Ben Pope wrote
On 21/02/13 02:02, Vicente J. Botet Escriba wrote:
Hi,
boost::synchronized_value (not released yet [1][2][3]) is based on [4]. See below part of this paper adapted to the Boost.Thread interface.
Currently boost::synchronized_value is, in addition, Copyable and Swappable. I was wondering if it is worth adding value and move semantics to synchronized_value. Making it EqualityComparable, LessThanComparable and Movable when the underlying type satisfies these requirements would allow storing them in a Boost.Container/C++11 container.
Do you see something completely wrong with this addition? Have some of you had a need for this? Could you share the context?
Sorry, I'm a bit late to the party and not really answering the question.
I can see the usefulness of synchronized_value for C++03, but not in C++11.
Why?
It's just too easy to forget to call the synchronize() member:

    boost::synchronized_value<std::queue<int>> synch_queue;
    if(!synch_queue->empty())
        synch_queue->pop();

when what was meant was (excuse the use of auto, I've become lazy):

    boost::synchronized_value<std::queue<int>> synch_queue;
    {
        auto lock = synch_queue.synchronize();
        if(!synch_queue->empty())
            synch_queue->pop();
    }

This is neither safe nor efficient (2 lock/unlocks).
This should be:

    boost::synchronized_value<std::queue<int>> synch_queue;
    {
        auto lock = synch_queue.synchronize();
        if(!lock->empty())
            lock->pop();
    }
I think this should just not exist in C++11 and instead be replaced by something like monitor <T> described by Herb Sutter [1]:
    monitor<std::queue<int>> synch_queue;
    queue([](std::queue& q)
    {
        if(!q.empty())
            q.pop();
    }

Sorry I don't understand the syntax.
After looking at the slides I understand now what it means. I guess you meant:

    sync_queue([](std::queue& q)
    {
        if(!q.empty())
            q.pop();
    } );

I like the execute-around-function locking pattern, and yes, without lambdas this pattern is not practical, so not portable to C++03. Adding the function to synchronized_value would not be too hard.

Best,
Vicente
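A rough sketch of what such an addition could look like, built purely on the synchronize() call shown earlier; the helper name with_locked and the free-function form are illustrative assumptions, not part of the current Boost.Thread interface:

    #include <boost/thread/synchronized_value.hpp>

    // Hypothetical helper: applies f to the protected value while the
    // lock obtained from synchronize() is held.
    template <typename SV, typename F>
    auto with_locked(SV& sv, F f) -> decltype(f(*sv.synchronize()))
    {
        auto locked = sv.synchronize(); // strict_lock_ptr, held until return
        return f(*locked);
    }

    // Usage:
    //   boost::synchronized_value<std::queue<int>> synch_queue;
    //   with_locked(synch_queue, [](std::queue<int>& q)
    //   {
    //       if(!q.empty())
    //           q.pop();
    //   });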

On 27/06/13 01:05, Vicente J. Botet Escriba wrote:
On 26/06/13 18:03, Vicente Botet wrote:
Ben Pope wrote
I can see the usefulness of synchronized_value for C++03, but not in C++11.
Why?
It's just too easy to forget to call the synchronize() member:

    boost::synchronized_value<std::queue<int>> synch_queue;
    if(!synch_queue->empty())
        synch_queue->pop();

when what was meant was (excuse the use of auto, I've become lazy):

    boost::synchronized_value<std::queue<int>> synch_queue;
    {
        auto lock = synch_queue.synchronize();
        if(!synch_queue->empty())
            synch_queue->pop();
    }

This is neither safe nor efficient (2 lock/unlocks).
This should be:

    boost::synchronized_value<std::queue<int>> synch_queue;
    {
        auto lock = synch_queue.synchronize();
        if(!lock->empty())
            lock->pop();
    }
OK, I should have paid more attention to detail, my apologies. I think it's just a little bit too easy to forget to use the stack based lock, especially during code maintenance and moving from one call to more than one call.
I think this should just not exist in C++11 and instead be replaced by something like monitor <T> described by Herb Sutter [1]:
    monitor<std::queue<int>> synch_queue;
    queue([](std::queue& q)
    {
        if(!q.empty())
            q.pop();
    }

Sorry I don't understand the syntax.
After looking at the slides I understand now what it means. I guess you meant
    sync_queue([](std::queue& q)
    {
        if(!q.empty())
            q.pop();
    } );
D'oh! Always compile example code. My bad.
I like the execute-around-function locking pattern, and yes, without lambdas this pattern is not practical, so not portable to C++03. Adding the function to synchronized_value would not be too hard.
I'd settle for that, with a note that it's preferable in C++11 :)

Thanks,
Ben

On Wed, Jun 26, 2013 at 12:37 PM, Ben Pope <benpope81@gmail.com> wrote:
I think this should just not exist in C++11 and instead be replaced by something like monitor<T> described by Herb Sutter [1]:
I'm using several variants of Monitor in my projects. It doesn't work well in cases where what you really want is synchronization, which happens sometimes when the design suggests it. In that case you need a mutex, so synchronized_value helps with that. In my opinion, if the multiple accesses are a bottleneck in a real application, it is very easy to spot with a profiler and modify the code as you pointed out.

Joel Lamotte

On Wed, Feb 20, 2013 at 10:02 PM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
Hi,
boost::synchronized_value (not released yet [1][2][3]) is based on [4]. See below part of this paper adapted to the Boost.Thread interface.
Currently boost::synchronized_value is, in addition, Copyable and Swappable. I was wondering if it is worth adding value and move semantics to synchronized_value. Making it EqualityComparable, LessThanComparable and Movable when the underlying type satisfies these requirements would allow storing them in a Boost.Container/C++11 container.
Do you see something completely wrong with this addition? Have some of you had a need for this? Could you share the context?
Sorry, I missed this discussion somehow. I've taken a quick look at the interface and have a few questions:

1. Why are there strict_lock_ptr and unique_lock_ptr? What are the differences, and why can't we have one such ptr (presumably, unique_lock_ptr)?

2. I find value() and get() a bit confusing, since it is not apparent what is the difference between them. Maybe value() could be renamed to get_ref() or unsafe_get()?

3. Am I correct that having strict_lock_ptr/unique_lock_ptr acquired by calling synchronize() will not deadlock with operator-> when a non-recursive mutex is used?

Andrey Semashev-2 wrote
On Wed, Feb 20, 2013 at 10:02 PM, Vicente J. Botet Escriba <
vicente.botet@
wrote:
Hi,
boost::synchronized_value (not released yet [1][2][3]) is based on [4]. See below part of this paper adapted to the Boost.Thread interface.
Currently boost::synchronized_value is, in addition, Copyable and Swappable. I was wondering if it is worth adding value and move semantics to synchronized_value. Making it EqualityComparable, LessThanComparable and Movable when the underlying type satisfies these requirements would allow storing them in a Boost.Container/C++11 container.
Do you see something completely wrong with this addition? Have some of you had a need for this? Could you share the context?
Sorry, I missed this discussion somehow. I've taken a quick look at the interface and have a few questions:
1. Why are there strict_lock_ptr and unique_lock_ptr?
These are locking pointers that lock at construction and unlock at destruction. Both classes have pointer semantics.
What are the differences
strict_lock_ptr cannot be unlocked (something like lock_guard), while unique_lock_ptr is a model of Lockable, so it provides the lock/unlock functions, as unique_lock does.
and why can't we have one such ptr (presumably, unique_lock_ptr)?
Sorry I don't understand.
2. I find value() and get() a bit confusing, since it is not apparent what is the difference between them. Maybe value() could be renamed to get_ref() or unsafe_get()?
You are right, the names are not clear. get() is an explicit conversion; value() returns a const reference.
3. Am I correct that having strict_lock_ptr/unique_lock_ptr acquired by calling synchronize() will not deadlock with operator-> when a non-recursive mutex is used?
To which operator-> are you referring? The one from strict_lock_ptr/unique_lock_ptr? synchronize() is used to lock at block scope. The user must use the obtained locking pointer to access the functions of the synchronized value through its operator->. I'm not sure I understand what the issue could be.

Best,
Vicente
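To make the strict_lock_ptr / unique_lock_ptr distinction above concrete, here is a small usage sketch; the namespace qualification and template parameters are simplified, and the deferred unique_synchronize(boost::defer_lock) call is the one shown in the excerpt at the top of the thread:

    #include <boost/thread/locks.hpp>
    #include <boost/thread/synchronized_value.hpp>
    #include <string>

    boost::synchronized_value<std::string> sv;

    void example()
    {
        {
            // strict_lock_ptr: locked on construction, unlocked only at
            // destruction; no lock()/unlock() members.
            boost::strict_lock_ptr<std::string> p = sv.synchronize();
            p->append("/");
        }
        {
            // unique_lock_ptr: a Lockable, so it can be deferred, locked
            // explicitly, unlocked early, or passed to boost::lock().
            boost::unique_lock_ptr<std::string> q = sv.unique_synchronize(boost::defer_lock);
            q.lock();
            q->append("/");
            q.unlock();
        }
    }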

On Wednesday 26 June 2013 08:55:33 Vicente Botet wrote:
Andrey Semashev-2 wrote
1. Why are there strict_lock_ptr and unique_lock_ptr?
These are locking pointers that lock at construction and unlock at destruction. Both classes have pointer semantics.
What are the differences
strict_lock_ptr cannot be unlocked (something like lock_guard), while unique_lock_ptr is a model of Lockable, so it provides the lock/unlock functions, as unique_lock does.
and why can't we have one such ptr (presumably, unique_lock_ptr)?
Sorry I don't understand.
What I mean is that there is no apparent advantage of using strict_lock_ptr instead of unique_lock_ptr. strict_lock_ptr is a bit more limited than unique_lock_ptr but it doesn't provide anything in return. Yet it complicates the synchronized_value interface and adds confusion (when should I call synchronize() and when unique_synchronize()?). So I don't see the point in having strict_lock_ptr.

Note that although there exist both lock_guard and unique_lock, the former is more efficient when you don't need the movability and optional locking of the latter. This is not the case with strict_lock_ptr and unique_lock_ptr since both use unique_lock internally.
3. Am I correct that having strict_lock_ptr/unique_lock_ptr acquired by calling synchronize() will not deadlock with operator-> when a non-recursive mutex is used?
To which operator-> are you referring? The one from strict_lock_ptr/unique_lock_ptr? synchronize() is used to lock at block scope. The user must use the obtained locking pointer to access the functions of the synchronized value through its operator->. I'm not sure I understand what the issue could be.
I was referring to synchronized_value<>::operator->. What I mean is this:

    synchronized_value< foo, mutex > sync_foo;
    auto locked = sync_foo.synchronize();
    sync_foo->bar(); // deadlock??

If I use a non-recursive mutex, like the above, and store the strict_lock_ptr or unique_lock_ptr in locked, will I be able to call bar()? The operator-> is supposed to create a new strict_lock_ptr, which is supposed to attempt to lock the mutex again, isn't it?

Perhaps it would be better to restrict the number of lock_ptrs for a single synchronized_value that can exist concurrently to just one? Then the above code would be invalid and should be written as follows:

    synchronized_value< foo, mutex > sync_foo;
    auto locked = sync_foo.synchronize();
    // sync_foo->bar(); // <-- assertion failure or exception
    locked->bar(); // ok

On 26/06/13 19:55, Andrey Semashev wrote:
On Wednesday 26 June 2013 08:55:33 Vicente Botet wrote:
Andrey Semashev-2 wrote
1. Why are there strict_lock_ptr and unique_lock_ptr?
These are locking pointers that lock at construction and unlock at destruction. Both classes have pointer semantics.
What are the differences
strict_lock_ptr cannot be unlocked (something like lock_guard), while unique_lock_ptr is a model of Lockable, so it provides the lock/unlock functions, as unique_lock does.
and why can't we have one such ptr (presumably, unique_lock_ptr)?
Sorry I don't understand.
What I mean is that there is no apparent advantage of using strict_lock_ptr instead of unique_lock_ptr. strict_lock_ptr is a bit more limited than unique_lock_ptr but it doesn't provide anything in return. Yet it complicates the synchronized_value interface and adds confusion (when should I call synchronize() and when unique_synchronize()?). So I don't see the point in having strict_lock_ptr.
Note that although there exist both lock_guard and unique_lock, the former is more efficient when you don't need the movability and optional locking of the latter. This is not the case with strict_lock_ptr and unique_lock_ptr since both use unique_lock internally.
The current implementation uses unique_lock, but I plan to use a specific implementation that is more efficient.
3. Am I correct that having strict_lock_ptr/unique_lock_ptr acquired by calling synchronize() will not deadlock with operator-> when a non-recursive mutex is used?
To which operator-> are you referring? The one from strict_lock_ptr/unique_lock_ptr? synchronize() is used to lock at block scope. The user must use the obtained locking pointer to access the functions of the synchronized value through its operator->. I'm not sure I understand what the issue could be.
I was referring to synchronized_value<>::operator->. What I mean is this:

    synchronized_value< foo, mutex > sync_foo;
    auto locked = sync_foo.synchronize();
    sync_foo->bar(); // deadlock??

If I use a non-recursive mutex, like the above, and store the strict_lock_ptr or unique_lock_ptr in locked, will I be able to call bar()? The operator-> is supposed to create a new strict_lock_ptr, which is supposed to attempt to lock the mutex again, isn't it?
Right.
Perhaps it would be better to restrict the number of lock_ptrs for a single synchronized_value that can exist concurrently to just one? Then the above code would be invalid and should be written as follows:

    synchronized_value< foo, mutex > sync_foo;
    auto locked = sync_foo.synchronize();
    // sync_foo->bar(); // <-- assertion failure or exception
    locked->bar(); // ok

You could get the same behavior with a mutex that checks whether the current thread is already locking the mutex. Boost.Thread contains a boost::testable_mutex (which I have not documented yet) that maybe could be updated to assert in this case. So the following would behave as you are requesting:

    synchronized_value< foo, testable_mutex > sync_foo;
    auto locked = sync_foo.synchronize();
    // sync_foo->bar(); // <-- assertion failure or exception
    locked->bar(); // ok

Best,
Vicente
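A hypothetical sketch of such an owner-checking mutex, in the spirit of what is described above; this is not the actual boost::testable_mutex interface, just an illustration of the idea (and the unsynchronized read of owner_ is a simplification a real implementation would avoid):

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/thread.hpp>
    #include <cassert>

    class owner_checked_mutex
    {
        boost::mutex mtx_;
        boost::thread::id owner_; // simplified: not atomic in this sketch
    public:
        void lock()
        {
            // Recursive locking by the same thread is a bug here:
            // fail loudly instead of deadlocking silently.
            assert(owner_ != boost::this_thread::get_id());
            mtx_.lock();
            owner_ = boost::this_thread::get_id();
        }
        void unlock()
        {
            owner_ = boost::thread::id(); // reset to "not owned by any thread"
            mtx_.unlock();
        }
        bool try_lock()
        {
            assert(owner_ != boost::this_thread::get_id());
            if (!mtx_.try_lock())
                return false;
            owner_ = boost::this_thread::get_id();
            return true;
        }
    };

Used as the Lockable of a synchronized_value, the commented-out sync_foo->bar() line above would then trip the assertion instead of deadlocking.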

On Wednesday 26 June 2013 15:39:27 you wrote:
On Wed, Feb 20, 2013 at 10:02 PM, Vicente J. Botet Escriba <
vicente.botet@wanadoo.fr> wrote:
Hi,
boost::synchronized_value (not released yet [1][2][3]) is based on [4]. See below part of this paper adapted to the Boost.Thread interface.
Currently boost::synchronized_value is, in addition, Copyable and Swappable. I was wondering if it is worth adding value and move semantics to synchronized_value. Making it EqualityComparable, LessThanComparable and Movable when the underlying type satisfies these requirements would allow storing them in a Boost.Container/C++11 container.
Do you see something completely wrong with this addition? Have some of you had a need for this? Could you share the context?
Sorry, I missed this discussion somehow. I've taken a quick look at the interface and have a few questions:
1. Why are there strict_lock_ptr and unique_lock_ptr? What are the differences, and why can't we have one such ptr (presumably, unique_lock_ptr)?

2. I find value() and get() a bit confusing, since it is not apparent what is the difference between them. Maybe value() could be renamed to get_ref() or unsafe_get()?

3. Am I correct that having strict_lock_ptr/unique_lock_ptr acquired by calling synchronize() will not deadlock with operator-> when a non-recursive mutex is used?
Also, if it's not too late yet:

4. Could synchronized_value be renamed to just synchronized? Besides being shorter, this naming seems to be aligned with optional and reads more naturally. Consider:

    optional< int > oi;
    synchronized< queue< int > > sqi;

Just a thought.

On Wednesday 26 June 2013 23:56:27 you wrote:
On Wednesday 26 June 2013 15:39:27 you wrote:
On Wed, Feb 20, 2013 at 10:02 PM, Vicente J. Botet Escriba <
vicente.botet@wanadoo.fr> wrote:
Hi,
boost::synchronized_value (not released yet [1][2][3]) is based on [4]. See below part of this paper adapted to the Boost.Thread interface.
Currently boost::synchronized_value is, in addition, Copyable and Swappable. I was wondering if it is worth adding value and move semantics to synchronized_value. Making it EqualityComparable, LessThanComparable and Movable when the underlying type satisfies these requirements would allow storing them in a Boost.Container/C++11 container.
Do you see something completely wrong with this addition? Have some of you had a need for this? Could you share the context?
Sorry, I missed this discussion somehow. I've taken a quick look at the interface and have a few questions:
1. Why are there strict_lock_ptr and unique_lock_ptr? What are the differences, and why can't we have one such ptr (presumably, unique_lock_ptr)?

2. I find value() and get() a bit confusing, since it is not apparent what is the difference between them. Maybe value() could be renamed to get_ref() or unsafe_get()?

3. Am I correct that having strict_lock_ptr/unique_lock_ptr acquired by calling synchronize() will not deadlock with operator-> when a non-recursive mutex is used?
Also, if it's not too late yet:
4. Could synchronized_value be renamed to just synchronized? Besides being shorter, this naming seems to be aligned with optional and reads more naturally. Consider:
optional< int > oi; synchronized< queue< int > > sqi;
...and also atomic: atomic< int > ai;

On 06/26/2013 09:56 PM, Andrey Semashev wrote:
4. Could synchronized_value be renamed to just synchronized? Besides being shorter, this naming seems to be aligned with optional and reads more naturally. Consider:
optional< int > oi; synchronized< queue< int > > sqi;
And the synchronize() function could be renamed to hold() to make the names more discernible.

On 27/06/13 09:49, Bjorn Reese wrote:
On 06/26/2013 09:56 PM, Andrey Semashev wrote:
4. Could synchronized_value be renamed to just synchronized? Besides being shorter, this naming seems to be aligned with optional and reads more naturally. Consider:
optional< int > oi; synchronized< queue< int > > sqi;
And the synchronize() function could be renamed to hold() to make the names more discernible.
I could change it if there is an agreement of the Boost community. Vicente

On Thu, Jun 27, 2013 at 8:48 PM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
I could change it if there is an agreement of the Boost community.
(Sorry for the delay, I missed this one.)

I agree that it should be changed; in synchronized_value the "value" part is not necessary and very verbose.

I have another question: Why is there no move operator? I see a move constructor but no move operator, but I suppose it could be implemented if T is movable? Or is it not possible?

On Tuesday 24 September 2013 18:12:31 Klaim - Joël Lamotte wrote:
On Thu, Jun 27, 2013 at 8:48 PM, Vicente J. Botet Escriba <
vicente.botet@wanadoo.fr> wrote:
I could change it if there is an agreement of the Boost community.
(sorry for the delay, I missed this one)
I agree that it should be changed; in synchronized_value the "value" part is not necessary and very verbose.
Do we plan to extract it from Boost.Thread to Boost.Sync? If so, the rename could be done in the process.
I have another question: Why is there no move operator? I see a move constructor but no move operator, but I suppose it could be implemented if T is movable? Or is it not possible?
synchronized_value has an internal mutex, which is not movable. But I think you should be able to move the contained value using the synchronize() functions.
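For instance, a sketch of moving the contained value out through synchronize(), assuming C++11 and the interface shown earlier in the thread:

    #include <boost/thread/synchronized_value.hpp>
    #include <string>
    #include <utility>

    boost::synchronized_value<std::string> source;

    std::string take()
    {
        auto locked = source.synchronize();   // lock held for the whole operation
        std::string out = std::move(*locked); // move the contained value out
        return out;                           // source now holds a moved-from string
    }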

On Tue, Sep 24, 2013 at 7:19 PM, Andrey Semashev <andrey.semashev@gmail.com>wrote:
synchronized_value has an internal mutex, which is not movable. But I think you should be able to move the contained value using the synchronize() functions.
Yes, that's what I was thinking about; the copy operator already copies the value but not the mutex (according to the documentation).

On 26/06/13 21:56, Andrey Semashev wrote:
On Wednesday 26 June 2013 15:39:27 you wrote: Also, if it's not too late yet:
4. Could synchronized_value be renamed to just synchronized? Besides being shorter, this naming seems to be aligned with optional and reads more naturally. Consider:
optional< int > oi; synchronized< queue< int > > sqi;
I could change it if there is an agreement of the Boost community. Vicente
participants (6)
- Andrey Semashev
- Ben Pope
- Bjorn Reese
- Klaim - Joël Lamotte
- Vicente Botet
- Vicente J. Botet Escriba