
In the "Future of Boost.Thread" thread, Jeff Garland said:
I believe exception propagation can be a useful model at times, <snip> But as I believe I said earlier I hardly believe it is the most important capability that should be added to Boost threads.
In another post he said:
There are lots of other things that boost.threads lacks when stacked up against more comprehensive solutions which are probably more important than cross-thread exceptions...
In yet another post, Matt Hurd said:
I do need to set thread priorities however. This is more important to me than cancellation.
This leads me to ask, while the interest sparked by the "Future of Boost.Thread" thread is still hot: what are the most important things, in your opinion, that Boost.Thread lacks?

Mike

Michael Glassford wrote:
I do need to set thread priorities however. This is more important to me than cancellation.
This leads me to ask, while the interest sparked by the "Future of Boost.Thread" thread is still hot: what are the most important things, in your opinion, that Boost.Thread lacks?
With the caveat that I do not use threads all that much, I can ask for:
- barrier (needed it recently, and besides, it's already on the thread_dev branch; a minimal sketch follows this message)
- thread-safe queue (no immediate need, but is likely to arise in future)

- Volodya
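A barrier of the kind Volodya asks for is a small amount of code on top of a mutex and a condition variable. The following is only an illustrative sketch in later standard C++ (not the thread_dev code he refers to); the class name and interface are assumptions, though Boost's eventual boost::barrier has essentially the same wait()-returns-bool shape.

#include <condition_variable>
#include <mutex>

class barrier
{
public:
    explicit barrier(unsigned count) : threshold_(count), count_(count) {}

    // Blocks until `threshold_` threads have called wait(); exactly one of
    // them (the last to arrive) gets `true` back, the rest get `false`.
    bool wait()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        unsigned const generation = generation_;
        if (--count_ == 0)
        {
            ++generation_;              // start a new cycle
            count_ = threshold_;        // reset so the barrier is reusable
            cond_.notify_all();
            return true;
        }
        cond_.wait(lock, [&] { return generation != generation_; });
        return false;
    }

private:
    std::mutex mutex_;
    std::condition_variable cond_;
    unsigned const threshold_;
    unsigned count_;
    unsigned generation_ = 0;
};

Typical use is to have N worker threads call wait() at the end of each phase of a computation, so that none of them starts the next phase before every other thread has finished the current one.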

Vladimir Prus <ghost@cs.msu.su> writes:
Michael Glassford wrote:
I do need to set thread priorities however. This is more important to me than cancellation.
This leads me to ask, while the interest sparked by the "Future of Boost.Thread" thread is still hot: what are the most important things, in your opinion, that Boost.Thread lacks?
With the caveat that I do not use threads all that much, I can ask for:
- barrier (needed it recently, and besides, it's already on the thread_dev branch)
- thread-safe queue (no immediate need, but is likely to arise in future)
I implemented a thread-safe queue on top of Boost.Threads about 18 months ago. It has a locking proxy that provides access to the raw container, as well as providing block-on-empty and block-on-full operations. E.g.:

// typedef mt::safe_queue<std::deque<Element> > queue_type;
// queue_type *g_queue_ptr;

{
  queue_type::proxy_type const &proxy (g_queue_ptr->get_proxy());
      // Locks mutex. Noncompliant because scoped_lock is noncopyable
      // but works on gcc at least. This could be made compliant e.g.
      // using shared_ptr<proxy_type>

  proxy.block_while_empty (timeout);  // Uses timed_wait (temporarily releases lock)

  while (!proxy->empty())             // Accesses raw container
  {
    // ...
    proxy->pop_front();               // Accesses raw container
  }
}                                     // Proxy destructor uses notify_one or notify_all

The proxy destructor detects changes in the size of the container and notifies the queue's "full" or "empty" condition as appropriate. I'm sure there are other and probably better implementations out there, but I guess blocking read/write would be a pretty common requirement...

-- Raoul Gough. export LESS='-X'

Raoul Gough wrote:
- thread-safe queue (no immediate need, but is likely to arise in future)
I implemented a thread-safe queue on top of Boost.Threads about 18 months ago. It has a locking proxy that provides access to the raw container, as well as providing block-on-empty and block-on-full operations. e.g.
// typedef mt::safe_queue<std::deque<Element> > queue_type;
// queue_type *g_queue_ptr;

{
  queue_type::proxy_type const &proxy (g_queue_ptr->get_proxy());
      // Locks mutex. Noncompliant because scoped_lock is noncopyable
      // but works on gcc at least. This could be made compliant e.g.
      // using shared_ptr<proxy_type>
Hmm.. I don't see the scoped_lock anywhere in the example?
proxy.block_while_empty (timeout); // Uses timed_wait (temporarily releases lock)
while (!proxy->empty())   // Accesses raw container
{
  // ...
  proxy->pop_front();     // Accesses raw container
}
I guess proxy's operator-> locks the mutex. Hmm... what happens if:

1. proxy->empty() returns false.
2. Another thread, using the fact that the mutex is not locked now, extracts all the elements from the queue.
3. The first thread executes proxy->pop_front().

I might be missing something, but does wrapping each operation in a mutex (i.e. making them "synchronized", in Java-speak) really make the resulting container sufficiently safe?

- Volodya

Vladimir Prus <ghost@cs.msu.su> writes:
Raoul Gough wrote:
- thread-safe queue (no immediate need, but is likely to arise in future)
I implemented a thread-safe queue on top of Boost.Threads about 18 months ago. It has a locking proxy that provides access to the raw container, as well as providing block-on-empty and block-on-full operations. e.g.
// typedef mt::safe_queue<std::deque<Element> > queue_type;
// queue_type *g_queue_ptr;

{
  queue_type::proxy_type const &proxy (g_queue_ptr->get_proxy());
      // Locks mutex. Noncompliant because scoped_lock is noncopyable
      // but works on gcc at least. This could be made compliant e.g.
      // using shared_ptr<proxy_type>
Hmm.. I don't see the scoped_lock anywhere in the example?
I forgot to say: proxy_type "has-a" scoped_lock, which of course makes the proxy_type objects also noncopyable.
proxy.block_while_empty (timeout); // Uses timed_wait (temporarily releases lock)
while (!proxy->empty())   // Accesses raw container
{
  // ...
  proxy->pop_front();     // Accesses raw container
}
I guess proxy's operator-> locks the mutex. Hmm... what happens if
Actually, the proxy object holds the lock all the time it exists, except during the calls to timed_wait within block_while_xxx.
1. proxy->empty() returns false.
2. Another thread, using the fact that the mutex is not locked now, extracts all the elements from the queue.
3. The first thread executes proxy->pop_front().

I might be missing something, but does wrapping each operation in a mutex (i.e. making them "synchronized", in Java-speak) really make the resulting container sufficiently safe?
You're absolutely right, of course. Locking within each operation wouldn't be enough, because you almost always need multiple operations to achieve anything useful. That's why the timed_wait facility is so good - it automatically releases the lock and reacquires it before returning. Other than those wait operations, the proxy object completely blocks access to the container, so you have to make sure it gets destroyed at the right time. E.g. here's the implementation of the somewhat high-level function safe_queue<>::push_front:

template<typename Container>
class safe_queue {
  // ...
  void push_front (value_type const &x, int tmout_msec = s_unlimited_time)
  {
    proxy_type proxy (*this);
    proxy.block_on_full (tmout_msec);
    proxy->push_front (x);
  }
  // ...
};

If there's any interest, I could fix up the naming style in the code to match Boost conventions and post it somewhere. I don't know how portable it would be, because of the non-copyable proxy type, but that could be fixed as I mentioned earlier. I developed it on my own time, so there wouldn't be any copyright issues.

-- Raoul Gough. export LESS='-X'
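Raoul's actual source isn't posted in the thread, so here is only a sketch of the proxy idea he describes, written against the modern standard library (std::mutex, std::unique_lock, std::condition_variable) rather than the 2004-era Boost.Threads API. The public names (safe_queue, proxy_type, block_while_empty) follow his example; the internals are assumptions, and this version notifies a single condition unconditionally on destruction, whereas his tracked size changes to choose between the "full" and "empty" conditions and between notify_one and notify_all.

#include <chrono>
#include <condition_variable>
#include <deque>
#include <mutex>

template <typename Container>
class safe_queue
{
public:
    class proxy_type
    {
    public:
        explicit proxy_type(safe_queue &q)
          : queue_(q), lock_(q.mutex_)      // lock held for the proxy's whole lifetime
        {}

        // Wait (releasing the lock only inside wait_for) until the container
        // is non-empty or the timeout expires; returns true if non-empty.
        bool block_while_empty(std::chrono::milliseconds timeout)
        {
            return queue_.changed_.wait_for(lock_, timeout,
                [this] { return !queue_.container_.empty(); });
        }

        // Raw container access; safe because the mutex is locked.
        Container *operator->() { return &queue_.container_; }
        Container &operator*()  { return queue_.container_; }

        ~proxy_type()
        {
            queue_.changed_.notify_all();   // simplification: notify unconditionally
        }

    private:
        safe_queue &queue_;
        std::unique_lock<std::mutex> lock_; // noncopyable, like Boost's scoped_lock
    };

private:
    Container container_;
    std::mutex mutex_;
    std::condition_variable changed_;
};

Constructing the proxy as a named local, e.g. safe_queue<std::deque<int> >::proxy_type proxy(q);, sidesteps the copyability question Volodya raised about binding the result of get_proxy() to a reference; wrapping the proxy in a shared_ptr, as Raoul suggests, would restore his original call syntax.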

Both Ted Yuan's producer-consumer templates (CUJ, Jan 2004) and the synchronized template at http://libcalc.sourceforge.net/synchronized.hpp are nice thread-safe container implementations...

On Feb 11, 2004, at 10:33 AM, Vladimir Prus wrote:
Michael Glassford wrote:
I do need to set thread priorities however. This is more important to me than cancellation.
This leads me to ask, while the interest sparked by the "Future of Boost.Thread" thread is still hot: what are the most important things, in your opinion, that Boost.Thread lacks?
With the caveat that I do not use threads all that much, I can ask for:
- barrier (needed it recently, and besides, it's already on the thread_dev branch)
- thread-safe queue (no immediate need, but is likely to arise in future)
- Volodya

John Fuller wrote:
Both Ted Yuan's producer-consumer templates (CUJ, Jan 2004) and the synchronized template at http://libcalc.sourceforge.net/synchronized.hpp are nice thread-safe container implementations...
I could not find the first reference. Looking at the second, it seems to implement the same idea as Raoul Gough presented. And of course, I'll ask the same question: if I have code like

if (!proxy->empty())
{
  int value = proxy->front();
  //....
}

what prevents another thread from extracting all elements between the two calls?

- Volodya
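To make the problem concrete: below is a small standard-C++ illustration of why locking inside each member function is not enough once a caller needs two calls in a row. The locked_queue class and its members are invented for the example; they are not taken from either library under discussion.

#include <deque>
#include <mutex>

// A queue that is only "synchronized" per operation: every member function
// takes the lock, but the lock is released again between calls.
template <typename T>
class locked_queue
{
public:
    bool empty() const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return queue_.empty();
    }

    T pop_front()                       // undefined behaviour on an empty queue
    {
        std::lock_guard<std::mutex> lock(mutex_);
        T value = queue_.front();
        queue_.pop_front();
        return value;
    }

    void push_back(T value)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(value);
    }

    // The usual fix in this style: do the test and the extraction under one lock.
    bool try_pop_front(T &out)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return false;
        out = queue_.front();
        queue_.pop_front();
        return true;
    }

private:
    mutable std::mutex mutex_;
    std::deque<T> queue_;
};

// Racy, even though every individual call is locked: another consumer can
// drain the queue between empty() and pop_front():
//
//     if (!q.empty())
//         int value = q.pop_front();
//
// Raoul's proxy reaches the same end differently: the lock is held for the
// proxy's whole lifetime, so the two calls cannot be interleaved.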

You're right about synchronized<> as far as containers are concerned. Here's Ted Yuan's ProducerConsumer code from CUJ Jan '04, which does explicitly lock around these cases (e.g. in the offer and poll calls for a channel, a template wrapper for a container type).

On Feb 13, 2004, at 2:23 AM, Vladimir Prus wrote:
John Fuller wrote:
Both Ted Yuan's producer-consumer templates (CUJ, Jan 2004) and the synchronized template at http://libcalc.sourceforge.net/synchronized.hpp are nice thread-safe container implementations...
I could not find the first reference. Looking at the second, it seems to implement the same idea as Raoul Gough presented. And of course, I'll ask the same question: if I have code like

if (!proxy->empty())
{
  int value = proxy->front();
  //....
}

what prevents another thread from extracting all elements between the two calls?
- Volodya

John Fuller <jfuller@wernervas.com> writes:
On Feb 13, 2004, at 2:23 AM, Vladimir Prus wrote:
John Fuller wrote:
Both Ted Yuan's producer-consumer templates (CUJ, Jan 2004) and the synchronized template at http://libcalc.sourceforge.net/synchronized.hpp are nice thread-safe container implementations...
I could not find the first reference. Looking at the second, it seems to implement the same idea as Raoul Gough presented. And of course, I'll ask the same question: if I have code like

if (!proxy->empty())
{
  int value = proxy->front();
  //....
}

what prevents another thread from extracting all elements between the two calls?

You're right about synchronized<> as far as containers are concerned. Here's Ted Yuan's ProducerConsumer code from CUJ Jan '04, which does explicitly lock around these cases (e.g. in the offer and poll calls for a channel, a template wrapper for a container type).
This code uses a lot of reserved names (leading underscore followed by an upper-case letter), which is interesting. I can only assume CUJ isn't too bothered by that kind of stuff.
#ifndef _PRO_CON_H
#define _PRO_CON_H

//
// Copyright (c) 2002 by Ted T. Yuan.
//
// Permission is granted to use this code without restriction as
// long as this copyright notice appears in all source files.
//
[snip]
// for producer thread...
bool offer(_Tp item, long msecs = -1) // ignore msecs for now
{
  Lock lk(monitor_);
  while (maxSize_ == ((_queueTp *)this)->size())
  {
    bufferNotFull_.wait(lk);
  }

  // push front
  push(item);
  bufferNotEmpty_.notify_one();
  return true;
}
The only problem with this approach is that you have to acquire and release the lock for every operation (e.g. each call to "offer" only inserts one object). By using an external proxy object, it is possible to grab the lock, check how much space is available and shotgun a bunch of stuff into the queue in one go:

{
  Queue::proxy_type const &proxy (gQueuePtr->getProxy());
  proxy.blockOnFull (timeout);   // or Queue::sUnlimitedTime
  while (!proxy.full())
  {
    proxy->push_back (something_or_other);
    // ...
  }
}

I guess it opens up the possibilities for misuse as well. BTW, the proxy destructor does a notify_one or notify_all by checking for changes in the size of the container during its existence. This adds some complexity, of course, and might be a problem with std::list.

-- Raoul Gough. export LESS='-X'
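The size-change detection Raoul mentions might look something like the fragment below; this extends the hypothetical proxy_type sketched above (assuming an entry_size_ member recorded in the constructor and separate not_empty_/not_full_ conditions) and is not his actual code.

~proxy_type()
{
    std::size_t const now = queue_.container_.size();
    if (now > entry_size_)
        queue_.not_empty_.notify_all();   // items were added: wake consumers
    else if (now < entry_size_)
        queue_.not_full_.notify_all();    // space was freed: wake producers
}

The std::list problem he alludes to is presumably that std::list::size() was allowed to be (and in common implementations was) linear-time before C++11, so taking the size on every proxy construction and destruction could get expensive.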

At Wednesday 2004-02-11 08:49, you wrote:
In the "Future of Boost.Thread" thread, Jeff Garland said:
I believe exception propagation can be a useful model at times, <snip> But as I believe I said earlier I hardly believe it is the most important capability that should be added to Boost threads.
In another post he said:
There are lots of other things that boost.threads lacks when stacked up against more comprehensive solutions which are probably more important than cross-thread exceptions...
In yet another post, Matt Hurd said:
I do need to set thread priorities however. This is more important to me than cancellation.
This leads me to ask, while the interest sparked by the "Future of Boost.Thread" thread is still hot: what are the most important things, in your opinion, that Boost.Thread lacks?
For me?
1) priority modification (both absolute and relative)
2) return value | exception propagation back to the "joiner" (see the sketch after this message)
Mike
Victor A. Wagner Jr.
http://rudbek.com
The five most dangerous words in the English language: "There oughta be a law"
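Boost.Thread at the time had neither facility. For item 2, the shape of "return value or exception propagated back to the joiner" is easiest to show with the later standard-library packaged_task/future pair; this is a sketch of the idea, not of any Boost.Thread API of that era.

#include <future>
#include <iostream>
#include <stdexcept>
#include <thread>

int work()
{
    // Either return a value or throw; both travel back to the joining thread.
    throw std::runtime_error("failed in worker thread");
}

int main()
{
    std::packaged_task<int()> task(work);
    std::future<int> result = task.get_future();
    std::thread worker(std::move(task));

    try
    {
        int value = result.get();   // returns work()'s value, or rethrows its exception here
        std::cout << "worker returned " << value << '\n';
    }
    catch (std::exception const &e)
    {
        std::cout << "joiner caught: " << e.what() << '\n';
    }

    worker.join();
    return 0;
}

Item 1 is a different story: even today there is no portable priority interface, so setting priorities means going through the native handle (pthread_setschedparam on POSIX, SetThreadPriority on Win32).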

For me, cancellation and suspension would be nice. In a recent project I ended up using ACE tasks for the activities that required this functionality. Additionally, I ended up implementing a descendant of thread_group that mapped names to threads, so you could identify threads by name (so they could be addressed by a URI, for example).

On Feb 11, 2004, at 12:45 PM, Victor A. Wagner, Jr. wrote:
At Wednesday 2004-02-11 08:49, you wrote:
In the "Future of Boost.Thread" thread, Jeff Garland said:
I believe exception propagation can be a useful model at times, <snip> But as I believe I said earlier I hardly believe it is the most important capability that should be added to Boost threads.
In another post he said:
There are lots of other things that boost.threads lacks when stacked up against more comprehensive solutions which are probably more important than cross-thread exceptions...
In yet another post, Matt Hurd said:
I do need to set thread priorities however. This is more important to me than cancellation.
This leads me to ask, while the interest sparked by the "Future of Boost.Thread" thread is still hot: what are the most important things, in your opinion, that Boost.Thread lacks?
For me?
1) priority modification (both absolute and relative)
2) return value | exception propagation back to the "joiner"
Mike
Victor A. Wagner Jr.
http://rudbek.com
The five most dangerous words in the English language: "There oughta be a law"

On Behalf Of Alexander Nasonov
Subject: [boost] Re: Future of Boost.Thread part II
Victor A. Wagner, Jr. wrote:
For me?
1) priority modification (both absolute and relative)
2) return value | exception propagation back to the "joiner"

Thread pool + function execution in any available thread from the pool
-- Alexander Nasonov Independent Developer and Consultant
The thread_group is a thread pool as you describe. Does this meet your requirements?

Matt Hurd.

Matthew Hurd wrote:
The thread_group is a thread pool as you describe. Does this meet your requirements?
No. I meant a special group of threads, all ready to execute a user function on demand:

thread_pool th_pool(5); // 5 threads inside

// Execute f and g in a thread from th_pool
asynchronous_result<int> i = th_pool(f(1, 2, 3));
asynchronous_result<double> d = th_pool(g());

-- Alexander Nasonov Independent Developer and Consultant
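thread_pool and asynchronous_result above are pseudocode; a minimal sketch of the same shape in standard C++, with std::future standing in for asynchronous_result and a named submit() member instead of operator(), might look like the following. Every name here is illustrative rather than an existing Boost facility.

#include <condition_variable>
#include <deque>
#include <functional>
#include <future>
#include <mutex>
#include <thread>
#include <vector>

class thread_pool
{
public:
    explicit thread_pool(std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~thread_pool()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        condition_.notify_all();
        for (std::thread &worker : workers_)
            worker.join();
    }

    // Submit any nullary callable; its result (or exception) comes back
    // through the returned future -- which also gives the "return value |
    // exception propagation back to the joiner" asked for earlier.
    template <typename F>
    auto submit(F f) -> std::future<decltype(f())>
    {
        auto task = std::make_shared<std::packaged_task<decltype(f())()> >(std::move(f));
        std::future<decltype(f())> result = task->get_future();
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push_back([task] { (*task)(); });
        }
        condition_.notify_one();
        return result;
    }

private:
    void run()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                condition_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty())
                    return;
                job = std::move(jobs_.front());
                jobs_.pop_front();
            }
            job();                      // run the task outside the lock
        }
    }

    std::vector<std::thread> workers_;
    std::deque<std::function<void()> > jobs_;
    std::mutex mutex_;
    std::condition_variable condition_;
    bool done_ = false;
};

Usage mirrors Alexander's sketch:

    thread_pool pool(5);                                      // 5 threads inside
    std::future<int>    i = pool.submit([] { return 1 + 2; });
    std::future<double> d = pool.submit([] { return 3.14; });
    int value = i.get();   // blocks until the worker finishes; rethrows if it threw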

Alexander Nasonov wrote:
Matthew Hurd wrote:
The thread_group is a thread pool as you describe. Does this meet your requirements?
No. I meant special group of threads all ready to execute user function on demand.
Here is a scheduler based on Boost.Threads which does it...

URL: www.rhapsodia.org

Slawomir Lisznianski
participants (8)
- Alexander Nasonov
- John Fuller
- Matthew Hurd
- Michael Glassford
- Raoul Gough
- Slawomir Lisznianski
- Victor A. Wagner, Jr.
- Vladimir Prus