
Hello, the new version of Boost.Threadpool depends on Boost.Fiber. regards, Oliver

k-oli@gmx.de wrote:
Hello,
the new version of Boost.Threadpool depends on Boost.Fiber.
Forgive my ignorance but http://tinyurl.com/6r3l6u claims that "fibers do not provide advantages over a well-designed multithreaded application". What are the benefits of using them in a threadpool? Thanks, -- Michael Marcin

On Saturday, 1 November 2008 00:53:11, Michael Marcin wrote:
k-oli@gmx.de wrote:
Hello,
the new version of Boost.Threadpool depends on Boost.Fiber.
Forgive my ignorance but http://tinyurl.com/6r3l6u claims that "fibers do not provide advantages over a well-designed multithreaded application".
What are the benefits of using them in a threadpool?
Thanks,
Hello Michael, using fibers in a threadpool enables fork/join semantics (algorithms which recursively split an action into smaller sub-actions, forking the sub-actions into separate worker-threads so that they run in parallel on multiple cores). For instance, the recursive calculation of Fibonacci numbers (I know it is not the best algorithm for Fibonacci calculation):

class fibo
{
private:
    pool_type & pool_;
    int offset_;

    int seq_( int n)
    {
        if ( n <= 1) return n;
        else return seq_( n - 2) + seq_( n - 1);
    }

    int par_( int n)
    {
        if ( n <= offset_) return seq_( n);
        else
        {
            tp::task< int > t1( pool_.submit( boost::bind( & fibo::par_, boost::ref( * this), n - 1) ) );
            tp::task< int > t2( pool_.submit( boost::bind( & fibo::par_, boost::ref( * this), n - 2) ) );
            // without fibers this line would block until t1 and t2 are executed;
            // if all worker-threads of the pool are waiting in this line the app
            // blocks forever
            return t1.get() + t2.get();
        }
    }

public:
    fibo( pool_type & pool, int offset)
    : pool_( pool), offset_( offset)
    {}

    int execute( int n)
    { return par_( n); }
};

int main( int argc, char *argv[])
{
    try
    {
        pool_type pool( tp::poolsize( 2) );
        fibo fib( pool, 1);
        // calculates the Fibonacci numbers from 0 - 5
        for ( int i = 0; i <= 5; ++i)
        {
            tp::task< int > t( pool.submit( boost::bind( & fibo::execute, boost::ref( fib), i) ) );
            std::cout << "fibonacci of " << i << " == " << t.get() << std::endl;
        }
        pool.shutdown();
    }
    catch ( boost::thread_interrupted const& )
    { std::cerr << "thread_interrupted: thread was interrupted" << std::endl; }
    catch ( std::exception const& e)
    { std::cerr << "exception: " << e.what() << std::endl; }
    catch ( ... )
    { std::cerr << "unhandled" << std::endl; }
    return EXIT_SUCCESS;
}

regards, Oliver

k-oli wrote:
On Saturday, 1 November 2008 00:53:11, Michael Marcin wrote:
k-oli@gmx.de wrote:
Hello,
the new version of Boost.Threadpool depends on Boost.Fiber.
What are the benefits of using them in a threadpool?
Hello Michael,
using fibers in a threadpool enables fork/join semantics (algorithms which recursively split an action into smaller sub-actions, forking the sub-actions into separate worker threads so that they run in parallel on multiple cores)
Hi, IMO, the implementation of fork/join semantics does not need fibers. The wait/get functions can call into the thread_pool scheduler without a context switch. What are the advantages of using fibers over calling recursively into the scheduler? Vicente

On Saturday, 1 November 2008 19:35:23, Vicente Botet Escriba wrote:
IMO, the implementation of fork/join semantics does not need fibers. The wait/get functions can call into the thread_pool scheduler without a context switch. What are the advantages of using fibers over calling recursively into the scheduler?
Fork/join semantics means: a task creates a tree of several sub-tasks (forking) and uses the results of these sub-tasks for its computations (joining). You know the recursive algorithm for calculating Fibonacci numbers - each task would insert two new sub-tasks into the scheduler until (n==0 or n==1) - it creates a tree of sub-tasks, each calculating a Fibonacci number. Because you have to wait for the result of fibonacci(n)=fibonacci(n-1)+fibonacci(n-2), you would block all worker-threads if the tree of sub-tasks becomes too large (original n). Please take a look into the example folder of threadpool. You will find two examples for recursively calculating Fibonacci numbers. Configure the pool with tp::fibers_disabled and try to calculate fibonacci(3) with two worker-threads. Your application will block forever. Use the option tp::fiber_enabled and you can calculate any Fibonacci number without blocking. Oliver

k-oli@gmx.de writes:
On Saturday, 1 November 2008 19:35:23, Vicente Botet Escriba wrote:
IMO, the implementation of fork/join semantics does not need fibers. The wait/get functions can call into the thread_pool scheduler without a context switch. What are the advantages of using fibers over calling recursively into the scheduler?
Please take a look into the example folder of threadpool. You will find two examples for recursively calculating Fibonacci numbers. Configure the pool with tp::fibers_disabled and try to calculate fibonacci(3) with two worker-threads. Your application will block forever. Use the option tp::fiber_enabled and you can calculate any Fibonacci number without blocking
I haven't looked at Oliver's use of Fibers, but you don't need to use fibers to do this. Whenever you would switch fibers to a new task, just call the task recursively on the current stack instead. The problem here is that you may run out of stack space if the recursion is too deep: by creating a Fiber for the new task you can control the stack space. The problem with doing this (whether you use Fibers or just recurse on the same stack) is that the nested task inherits context from its parent: locked mutexes, thread-local data, etc. If the tasks are not prepared for this the results may not be as expected (e.g. thread waits on a task, resumes after waiting and finds all its thread-local variables have been munged). Anthony -- Anthony Williams Author of C++ Concurrency in Action | http://www.manning.com/williams Custom Software Development | http://www.justsoftwaresolutions.co.uk Just Software Solutions Ltd, Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK
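In code, the recurse-on-the-current-stack approach might look roughly like this. This is only a sketch: try_take_task(), task_base and the other names are hypothetical stand-ins, not the actual Boost.Threadpool or Boost.Thread API.

template< typename T >
T future< T >::get()
{
    // instead of blocking the worker thread, drain pending tasks on the
    // current stack until this future becomes ready
    while ( ! is_ready() )
    {
        task_base::ptr pending( pool_.try_take_task() );  // hypothetical pool call
        if ( pending)
            pending->run();   // nested task runs on this thread's stack
        else
            wait();           // nothing runnable: block for real
    }
    return value();
}

Each nested call deepens the stack, which is exactly the stack-exhaustion risk mentioned above.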

On Saturday, 1 November 2008 23:03:07, Anthony Williams wrote:
k-oli@gmx.de writes:
On Saturday, 1 November 2008 19:35:23, Vicente Botet Escriba wrote:
IMO, the implementation of fork/join semantics does not need fibers. The wait/get functions can call into the thread_pool scheduler without a context switch. What are the advantages of using fibers over calling recursively into the scheduler?
Please take a look into the example folder of threadpool. You will find two examples for recursively calculating Fibonacci numbers. Configure the pool with tp::fibers_disabled and try to calculate fibonacci(3) with two worker-threads. Your application will block forever. Use the option tp::fiber_enabled and you can calculate any Fibonacci number without blocking
I haven't looked at Oliver's use of Fibers, but you don't need to use fibers to do this.
The idea behind using fibers inside a threadpool is that each worker-thread executes multiple fibers - a fiber yields its execution if future::get() would block (because the future is not ready). Oliver

k-oli@gmx.de writes:
On Saturday, 1 November 2008 23:03:07, Anthony Williams wrote:
k-oli@gmx.de writes:
On Saturday, 1 November 2008 19:35:23, Vicente Botet Escriba wrote:
IMO, the implementation of fork/join semantics does not need fibers. The wait/get functions can call into the thread_pool scheduler without a context switch. What are the advantages of using fibers over calling recursively into the scheduler?
Please take a look into the example folder of threadpool. You will find two examples for recursively calculating Fibonacci numbers. Configure the pool with tp::fibers_disabled and try to calculate fibonacci(3) with two worker-threads. Your application will block forever. Use the option tp::fiber_enabled and you can calculate any Fibonacci number without blocking
I haven't looked at Oliver's use of Fibers, but you don't need to use fibers to do this.
The idea behind using fiber inside a threadpool is, that each worker-thread executes multiple fibers - fiber would yield its execution if future::get() would block (because future is not ready)
Exactly. That's what my prototype does too. My point is that you can do this without fibers too, but you might run out of stack. One thing you can do with fibers that you can't easily do with a single stack is switch back to the parent task when the nested task blocks. Doing so allows you to run *other* tasks from the pool if a thread blocks and the task it is waiting for is already running elsewhere. You can also migrate tasks between threads. Doing either of these requires that the task is prepared for it. Anthony

On Sunday, 2 November 2008 00:03:44, Anthony Williams wrote:
The idea behind using fiber inside a threadpool is, that each worker-thread executes multiple fibers - fiber would yield its execution if future::get() would block (because future is not ready)
Exactly. That's what my prototype does too. My point is that you can do this without fibers too, but you might run out of stack.
Interesting - I'd like to see how the code works. Is it possible to access your prototype?
One thing you can do with fibers that you can't easily do with a single stack is switch back to the parent task when the nested task blocks. Doing so allows you to run *other* tasks from the pool if a thread blocks and the task it is waiting for is already running elsewhere. You can also migrate tasks between threads.
That's the way boost.threadpool uses boost.fiber (but fibers are not exchanged between worker-threads -> work-stealing is done only on un-fibered tasks waiting in the local queue of a worker-thread).
Doing either of these requires that the task is prepared for it.
In which sense? (Sorry, I'm not aware of this.)
Anthony
regards, Oliver

The idea behind using fiber inside a threadpool is, that each worker-thread executes multiple fibers - fiber would yield its execution if future::get() would block (because future is not ready)
Exactly. That's what my prototype does too. My point is that you can do this without fibers too, but you might run out of stack.
Interesting - I'd like to see how the code works. Is it possible to access your prototype?
One thing you can do with fibers that you can't easily do with a single stack is switch back to the parent task when the nested task blocks. Doing so allows you to run *other* tasks from the pool if a thread blocks and the task it is waiting for is already running elsewhere. You can also migrate tasks between threads.
That's the way boost.threadpool uses boost.fiber (but fibers are not exchanged between worker-threads -> work-stealing is done only on un-fibered tasks waiting in the local queue of a worker-thread).
IMHO, work stealing is independent of moving tasks between threads. Work stealing is a specific way of scheduling, while moving tasks between threads is a matter of resource allocation. For instance, we have a simple one-queue round-robin fiber scheduler with several threads picking up the next available task. It's built on top of a lock-free queue and doesn't involve any explicit locking at all. If yielded for some reason, tasks get re-scheduled (reinserted into the queue). That's very fast and powerful, and I don't know if it's easy to implement without fibers. Regards Hartmut
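A rough sketch of such a single-queue scheduler, just to make the idea concrete - the lockfree_queue and fiber types and their members are assumed names, not the implementation Hartmut refers to:

struct fiber_scheduler
{
    lockfree_queue< fiber::ptr > queue_;   // one queue shared by all worker threads

    void worker_loop()
    {
        fiber::ptr f;
        while ( ! shutdown() )
        {
            if ( ! queue_.pop( f) )        // lock-free dequeue, no explicit locking
                continue;                  // (or back off briefly)
            f->resume();                   // run until the task finishes or yields
            if ( ! f->finished() )
                queue_.push( f);           // yielded: reinsert for the next round
        }
    }
};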

On Sunday, 2 November 2008 17:17:10, Hartmut Kaiser wrote:
The idea behind using fiber inside a threadpool is, that each worker-thread executes multiple fibers - fiber would yield its execution if future::get() would block (because future is not ready)
Exactly. That's what my prototype does too. My point is that you can do this without fibers too, but you might run out of stack.
Interesting - I'd like to see how the code works. Is it possible to access your prototype?
One thing you can do with fibers that you can't easily do with a single stack is switch back to the parent task when the nested task blocks. Doing so allows you to run *other* tasks from the pool if a thread blocks and the task it is waiting for is already running elsewhere. You can also migrate tasks between threads.
That's the way boost.threadpool uses boost.fiber (but fibers are not exchanged between worker-threads -> work-stealing is done only on un-fibered tasks waiting in the local queue of a worker-thread).
IMHO, work stealing is independent from moving tasks between threads. Work stealing is a specific way of scheduling, while moving tasks between threads is a matter of resource allocation.
yes
For instance we have a simple one queue round robin fiber scheduler with several threads picking up the next available task. It's built on top of a lock-free queue and doesn't involve any explicit locking at all. If yielded for some reason tasks get re-scheduled (reinserted into the queue). That's very fast and powerful, and I don't know if it's easy to implement without fibers.
Boost.Threadpool already implements work-stealing on the basis of stealing tasks but not fibers (which could also be done, as Anthony proposed). I did not implement work-stealing for fibers because I've read somewhere that exchanging fibers between threads isn't a good idea on POSIX platforms. But I didn't find any documentation on this topic on the Internet - so I decided to let each worker-thread execute its own fibers. BTW Boost.Threadpool allows you to disable fibers - you get a simple threadpool with work-stealing. regards, Oliver

k-oli@gmx.de writes:
On Sunday, 2 November 2008 00:03:44, Anthony Williams wrote:
The idea behind using fiber inside a threadpool is, that each worker-thread executes multiple fibers - fiber would yield its execution if future::get() would block (because future is not ready)
Exactly. That's what my prototype does too. My point is that you can do this without fibers too, but you might run out of stack.
Interesting - I'd like to see how the code works. Is it possible to access your prototype?
No, sorry. In future::get(), on a fiber-based thread pool you find a new task/fiber and switch to that if the get() would block. On a simple stack-based thread pool, you just look in the task queue for a new task and run it.
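In code, the fiber-based variant might look roughly like this - a sketch only, where this_fiber::yield_to_scheduler() stands in for whatever mechanism the pool actually uses to switch back to its scheduling context:

template< typename T >
T future< T >::get()
{
    while ( ! is_ready() )
        this_fiber::yield_to_scheduler();  // scheduler picks another task/fiber;
                                           // this fiber is resumed once ready
    return value();
}

The stack-based variant replaces the yield with fetching a task from the queue and running it inline, as sketched earlier in this thread.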
One thing you can do with fibers that you can't easily do with a single stack is switch back to the parent task when the nested task blocks. Doing so allows you to run *other* tasks from the pool if a thread blocks and the task it is waiting for is already running elsewhere. You can also migrate tasks between threads.
That's the way boost.threadpool uses boost.fiber (but fibers are not exchanged between worker-threads -> work-stealing is done only on un-fibered tasks waiting in the local queue of a worker-thread).
Doing either of these requires that the task is prepared for it.
in which sense (sorry I'm not aware of)
Fibers are still tied to a particular thread, so thread-local variables and boost::this_thread::get_id() still return the same value for a nested task. This means that a task that calls future::get() might find that its thread-local variables have been overwritten by a nested task when it resumes. It also means that any data structures keyed by the thread::id may have been altered. Finally, the nested task inherits all the locks of the parent, so it may deadlock if it tries to lock the same mutexes (rather than just block if it is running on a separate thread). Anthony
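To make the thread-local hazard concrete, a small hypothetical example (the only assumption is a pool whose task get() may run another fiber on the calling thread while it waits):

boost::thread_specific_ptr< int > tls_value;

int other_task()                   // also uses tls_value
{
    tls_value.reset( new int( 7) );
    return 0;
}

void parent_task( pool_type & pool)
{
    tls_value.reset( new int( 42) );
    tp::task< int > t( pool.submit( other_task) );
    t.get();                       // another fiber may run on *this* thread here
    assert( * tls_value == 42);    // may fire: the nested task overwrote it
}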

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 9:05 AM Subject: Re: [boost] [threadpool] new version v12
One thing you can do with fibers that you can't easily do with a single stack is switch back to the parent task when the nested task blocks. Doing so allows you to run *other* tasks from the pool if a thread blocks and the task it is waiting for is already running elsewhere. You can also migrate tasks between threads.
Doing either of these requires that the task is prepared for it.
in which sense (sorry I'm not aware of)
Fibers are still tied to a particular thread, so thread-local variables and boost::this_thread::get_id() still return the same value for a nested task. This means that a task that calls future::get() might find that its thread-local variables have been overwritten by a nested task when it resumes. It also means that any data structures keyed by the thread::id may have been altered. Finally, the nested task inherits all the locks of the parent, so it may deadlock if it tries to lock the same mutexes (rather than just block if it is running on a separate thread).
You are right, Anthony: task behavior shouldn't depend on thread specifics - thread id, locks or thread-local data - because other sub-tasks can migrate to this thread while it waits for a sub-task to complete. The same occurs for programs that run well sequentially and crash when multi-threading takes place. This does not mean that we cannot use threads or global variables, but that we need to implement thread-safe functions, sometimes using some kind of thread synchronization, or use thread-specific variables instead of global variables. The same applies to tasks; there are some entities (functions, classes, ...) that are task-safe and others that need some task-specific synchronization tools to ensure task safety. I think the fork/join framework works well for task-safe entities but will have unexpected behavior otherwise. * The first question is whether we can use such a framework knowing that we need to take care of task safety, or discard it because it is dangerous when the entities are not task-safe. * The second question, if we use this kind of framework, is how we can make task-unsafe entities task-safe using some kind of synchronization or a specific context at the task level. I'm really interested in exploring this task space. Vicente

You are right, Anthony: task behavior shouldn't depend on thread specifics - thread id, locks or thread-local data - because other sub-tasks can migrate to this thread while it waits for a sub-task to complete. The same occurs for programs that run well sequentially and crash when multi-threading takes place. This does not mean that we cannot use threads or global variables, but that we need to implement thread-safe functions, sometimes using some kind of thread synchronization, or use thread-specific variables instead of global variables.
For task the same applies; there are some entities(functions, classes, ..) that are task-safe and others that need some task specific synchronization tools to ensure task safety.
If the fibers are not stolen by other worker-threads it should not be an issue. Only tasks/actions which are stored in the worker-queue can be stolen and will be fiberized by the stealing worker-thread. regards, Oliver

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 9:05 AM Subject: Re: [boost] [threadpool] new version v12
One thing you can do with fibers that you can't easily do with a single stack is switch back to the parent task when the nested task blocks. Doing so allows you to run *other* tasks from the pool if a thread blocks and the task it is waiting for is already running elsewhere. You can also migrate tasks between threads.
Doing either of these requires that the task is prepared for it.
in which sense (sorry I'm not aware of)
Fibers are still tied to a particular thread, so thread-local variables and boost::this_thread::get_id() still return the same value for a nested task. This means that a task that calls future::get() might find that its thread-local variables have been overwritten by a nested task when it resumes. It also means that any data structures keyed by the thread::id may have been altered. Finally, the nested task inherits all the locks of the parent, so it may deadlock if it tries to lock the same mutexes (rather than just block if it is running on a separate thread).
You are right, Anthony: task behavior shouldn't depend on thread specifics - thread id, locks or thread-local data - because other sub-tasks can migrate to this thread while it waits for a sub-task to complete. The same occurs for programs that run well sequentially and crash when multi-threading takes place. This does not mean that we cannot use threads or global variables, but that we need to implement thread-safe functions, sometimes using some kind of thread synchronization, or use thread-specific variables instead of global variables.
For task the same applies; there are some entities(functions, classes, ..) that are task-safe and others that need some task specific synchronization tools to ensure task safety.
Yes. Unfortunately in some cases the set of functions that are not "task safe" includes particular uses of the standard C library and use of particular C++ constructs in particular ways on some platforms. For example, with MSVC the CRT makes use of thread-local storage for many things, from errno to the buffer for gmtime to the current exception during exception handling. If fiber-local storage is available (Vista), it uses that if you have a sufficiently recent version of MSVC (I forget whether you need VS2005 or VS2008), but on XP or older versions of MSVC it uses plain TLS. You need to be aware that these things will be replaced if the task is suspended and a new fiber scheduled. Doubly so if the task is migrated to another physical thread. This makes it a really bad idea to wait for a future in a catch block, for example.
I think the fork/join framework works well for task-safe entities but will have unexpected behavior otherwise. * The first question is whether we can use such a framework knowing that we need to take care of task safety, or discard it because it is dangerous when the entities are not task-safe.
Of course we can. However, we need to be sure to publicise this fact.
* The second question if we use this kind of framework is how can we make task-unsafe entities task-safe using some kind of synchronization or specific context at the task level.
In the general case, I don't think we can. However, we can do things to mitigate the problem. For example if the thread pool collaborated with boost::thread, it could switch the thread data so you had a different thread::id value, and thread_specific_ptr values were local to the task. This doesn't help with locks or thread-specific data taken from outside of boost (e.g. in the CRT), but it does help somewhat. Also, it might be useful to have a "don't nest tasks" flag, like boost::disable_interruption:

void foo()
{
    boost::tp::disable_task_nesting dtn;
    some_future.get(); // won't schedule another task on this thread
} // task nesting enabled again

Anthony

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 10:25 AM Subject: Re: [boost] [threadpool] new version v12
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
From: "Anthony Williams" <anthony.ajw@gmail.com>
For task the same applies; there are some entities(functions, classes, ..) that are task-safe and others that need some task specific synchronization tools to ensure task safety.
Yes. Unfortunately in some cases the set of functions that are not "task safe" includes particular uses of the standard C library and use of particular C++ constructs in particular ways on some platforms.
You need to be aware that these things will be replaced if the task is suspended and a new fiber scheduled.
Yes as this is already the case for TSS, we need to recover the TSS data on the stack before calling other functions that could modify it. We need to take care of this.
Doubly so if the task is migrated to another physical thread.
I don't think we need to migrate an already started task.
This makes it a really bad idea to wait for a future in a catch block, for example.
Sorry I don't see why. Could you describe the issue?
I think the fork/join framework works well for task-safe entities but will have unexpected behavior otherwise. * The first question is whether we can use such a framework knowing that we need to take care of task safety, or discard it because it is dangerous when the entities are not task-safe.
Of course we can. However, we need to be sure to publicise this fact.
Of course.
* The second question if we use this kind of framework is how can we make task-unsafe entities task-safe using some kind of synchronization or specific context at the task level.
In the general case, I don't think we can. However, we can do things to mitigate the problem. For example if the thread pool collaborated with boost::thread, it could switch the thread data so you had a different thread::id value, and thread_specific_ptr values were local to the task.
Yes, this could be a possibility. The other would be to state that thread::id and thread-specific data must be used carefully in the context of task scheduling, just as global data is not thread-safe.
This doesn't help with locks or thread-specific data taken from outside of boost (e.g. in the CRT), but it does help somewhat.
Are you saying that if we switch the thread data, locks will work when we use boost::thread? I thought that mutexes are associated with the underlying OS thread.
Also, it might be useful to have a "don't nest tasks" flag, like boost::disable_interruption:
void foo()
{
    boost::tp::disable_task_nesting dtn;
    some_future.get(); // won't schedule another task on this thread
} // task nesting enabled again
Yes, I was also thinking of this kind of construction to enable/disable task stealing. It could make it possible to have a single implementation of the thread_pool. This kind of scoped construction would be used to change how the thread_pool schedules tasks. Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 10:25 AM Subject: Re: [boost] [threadpool] new version v12
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
From: "Anthony Williams" <anthony.ajw@gmail.com>
For task the same applies; there are some entities(functions, classes, ..) that are task-safe and others that need some task specific synchronization tools to ensure task safety.
Yes. Unfortunately in some cases the set of functions that are not "task safe" includes particular uses of the standard C library and use of particular C++ constructs in particular ways on some platforms.
You need to be aware that these things will be replaced if the task is suspended and a new fiber scheduled.
Yes as this is already the case for TSS, we need to recover the TSS data on the stack before calling other functions that could modify it. We need to take care of this.
Doubly so if the task is migrated to another physical thread.
I don't think we need to migrate an already started task.
There are scenarios where it's useful. See my other recent posts.
This makes it a really bad idea to wait for a future in a catch block, for example.
Sorry I don't see why. Could you describe the issue?
future<T> some_future;

try
{
    throw my_exception();
}
catch(...)
{
    some_future.wait(); // may invoke task from pool if some_future not ready
    throw;              // oops, where's my exception state gone?
}
* The second question if we use this kind of framework is how can we make task-unsafe entities task-safe using some kind of synchronization or specific context at the task level.
In the general case, I don't think we can. However, we can do things to mitigate the problem. For example if the thread pool collaborated with boost::thread, it could switch the thread data so you had a different thread::id value, and thread_specific_ptr values were local to the task.
This doesn't help with locks or thread-specific data taken from outside of boost (e.g. in the CRT), but it does help somewhat.
Are you saying that if we switch the thread data locks will work when we use boost::thread? I though that mutex are associated to the underlying OS thread.
No. I was saying that switching boost TSS data doesn't help with locks. It also doesn't help with TSS data from the CRT. Anthony

Anthony Williams:
future<T> some_future;
try
{
    throw my_exception();
}
catch(...)
{
    some_future.wait(); // may invoke task from pool if some_future not ready
    throw;              // oops, where's my exception state gone?
}
Are you sure that the above doesn't work?

try
{
    throw my_exception();
}
catch(...)
{
    try
    {
        call_function_that_throws();
    }
    catch( ... )
    {
    }
    throw; // works
}

-- Peter Dimov http://www.pdplayer.com

"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
future<T> some_future;
try { throw my_exception(); }catch(...) { some_future.wait(); // may invoke task from pool if some_future // not ready throw; // oops, where's my exception state gone? }
Are you sure that the above doesn't work?
try { throw my_exception(); } catch(...) { try { call_function_that_throws(); } catch( ... ) { }
throw; // works }
No, I'm not sure, but if some_future.wait() switches to a new fiber I am concerned that it won't, because the exception state is per-thread, not per-fiber. Anthony

On Mon, Nov 3, 2008 at 3:17 PM, Anthony Williams <anthony.ajw@gmail.com> wrote:
"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
future<T> some_future;
try { throw my_exception(); }catch(...) { some_future.wait(); // may invoke task from pool if some_future // not ready throw; // oops, where's my exception state gone? }
Are you sure that the above doesn't work?
try { throw my_exception(); } catch(...) { try { call_function_that_throws(); } catch( ... ) { }
throw; // works }
No, I'm not sure, but if some_future.wait() switches to a new fiber I am concerned that it won't, because the exception state is per-thread, not per-fiber.
This can be dealt with on all ABIs I know of (which admittedly isn't many: gcc with the Itanium ABI has no problem, while Win32 fibers take care of it; older gcc ABIs may need help, but it is doable). -- gpd

"Giovanni Piero Deretta" <gpderetta@gmail.com> writes:
On Mon, Nov 3, 2008 at 3:17 PM, Anthony Williams <anthony.ajw@gmail.com> wrote:
"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
future<T> some_future;
try { throw my_exception(); }catch(...) { some_future.wait(); // may invoke task from pool if some_future // not ready throw; // oops, where's my exception state gone? }
Are you sure that the above doesn't work?
try { throw my_exception(); } catch(...) { try { call_function_that_throws(); } catch( ... ) { }
throw; // works }
No, I'm not sure, but if some_future.wait() switches to a new fiber I am concerned that it won't, because the exception state is per-thread, not per-fiber.
Win32 fibers take care of it;
AFAIK, only on Vista. I'd be glad to know I was mistaken. Anthony

On Mon, Nov 3, 2008 at 3:37 PM, Anthony Williams <anthony.ajw@gmail.com> wrote:
"Giovanni Piero Deretta" <gpderetta@gmail.com> writes:
On Mon, Nov 3, 2008 at 3:17 PM, Anthony Williams <anthony.ajw@gmail.com> wrote:
"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
future<T> some_future;
try { throw my_exception(); }catch(...) { some_future.wait(); // may invoke task from pool if some_future // not ready throw; // oops, where's my exception state gone? }
Are you sure that the above doesn't work?
try { throw my_exception(); } catch(...) { try { call_function_that_throws(); } catch( ... ) { }
throw; // works }
No, I'm not sure, but if some_future.wait() switches to a new fiber I am concerned that it won't, because the exception state is per-thread, not per-fiber.
Win32 fibers take care of it;
AFAIK, only on Vista. I'd be glad to know I was mistaken.
I know that Vista changed the exception handling behavior, but I'm fairly sure that fibers should switch context state correctly. Oddly, I can't find any detailed description on the 'net (it has been a long time...). I'll have to do a more accurate search, but I do not have time right now. Do you have any proof to the contrary? What I'm sure you cannot do is catch exceptions *across* fibers. Everything else should work fine. -- gpd

Anthony Williams <anthony.ajw@gmail.com> writes:
"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
future<T> some_future;
try { throw my_exception(); }catch(...) { some_future.wait(); // may invoke task from pool if some_future // not ready throw; // oops, where's my exception state gone? }
Are you sure that the above doesn't work?
try { throw my_exception(); } catch(...) { try { call_function_that_throws(); } catch( ... ) { }
throw; // works }
No, I'm not sure, but if some_future.wait() switches to a new fiber I am concerned that it won't, because the exception state is per-thread, not per-fiber.
And of course if the thread pool migrates *this* task to another *thread* inside the wait then I'm fairly sure it won't work with MSVC unless you're on Vista. Anthony

Anthony Williams:
No, I'm not sure, but if some_future.wait() switches to a new fiber I am concerned that it won't, because the exception state is per-thread, not per-fiber.
On second thought, yes, it might be possible to construct an example that breaks. You need to switch to another fiber in a catch clause, which then needs to throw _and_ switch back to you in a catch clause. Another problematic case is switching to another fiber during stack unwinding. I'm not sure if this can happen in practice in a straightforward pool implementation that only yields on wait. A waits for B waits for A is a deadlock anyway. -- Peter Dimov http://www.pdplayer.com

On Monday, 3 November 2008 16:42:47, Peter Dimov wrote:
Anthony Williams:
No, I'm not sure, but if some_future.wait() switches to a new fiber I am concerned that it won't, because the exception state is per-thread, not per-fiber.
On second thought, yes, it might be possible to construct an example that breaks. You need to switch to another fiber in a catch clause, which then needs to throw _and_ switch back to you in a catch clause. Another problematic case is switching to another fiber during stack unwinding.
I'm not sure if this can happen in practice in a straightforward pool implementation that only yields on wait. A waits for B waits for A is a deadlock anyway.
-- Peter Dimov
We have a long thread of posts - but what is the final conclusion? Using fibers to implement fork/join semantics seems to introduce some other pitfalls (at least migration of fibers to other worker-threads). Recursive scheduling of sub-tasks can create deadlocks too. ??? Oliver

Fibers are still tied to a particular thread, so thread-local variables and boost::this_thread::get_id() still return the same value for a nested task. This means that a task that calls future::get() might find that its thread-local variables have been overwritten by a nested task when it resumes. It also means that any data structures keyed by the thread::id may have been altered. Finally, the nested task inherits all the locks of the parent, so it may deadlock if it tries to lock the same mutexes (rather than just block if it is running on a separate thread).
OK - yes. If the fibers are not moved to other worker-threads this shouldn't be an issue - right? I read in some news-groups that at least on POSIX platforms the interaction between fibers and threads is not defined - maybe the post was related to the thread-specific stuff. regards, Oliver

"Oliver Kowalke" <k-oli@gmx.de> writes:
Fibers are still tied to a particular thread, so thread-local variables and boost::this_thread::get_id() still return the same value for a nested task. This means that a task that calls future::get() might find that its thread-local variables have been overwritten by a nested task when it resumes. It also means that any data structures keyed by the thread::id may have been altered. Finally, the nested task inherits all the locks of the parent, so it may deadlock if it tries to lock the same mutexes (rather than just block if it is running on a separate thread).
OK - yes.
If the fibers are not moved to other worker-threads this shouldn't be an issue - right?
No. The problem is that if I am a task running on a worker thread and I call future::get(), and another task/fiber gets scheduled on my thread, then it inherits my locks, thread-local variables and thread ID. Also, I just remembered that on Windows prior to Vista, SwitchToFiber will not preserve floating-point registers and state. On Vista, you have to specify FIBER_FLAG_FLOAT_SWITCH in the CreateFiberEx call in order to enable this. I don't know how much of a problem that is in practice.
I read in some news-groups that at least on POSIX platforms the interaction between fibers and threads is not defined - maybe the post was related to the thread-specific stuff.
I don't know much about fibers on POSIX. Anthony
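For reference, the flag is passed when the fiber is created. A minimal sketch of the Win32 call (error handling omitted; as discussed, the flag only exists on Vista and later):

#include <windows.h>

VOID CALLBACK fiber_fn( LPVOID param)
{
    // task body; floating-point state is only preserved across
    // SwitchToFiber if FIBER_FLAG_FLOAT_SWITCH was requested
}

LPVOID make_fiber( LPVOID param)
{
    return ::CreateFiberEx(
        0,                        // stack commit size (default)
        0,                        // stack reserve size (default)
        FIBER_FLAG_FLOAT_SWITCH,  // preserve floating-point state on switches
        fiber_fn,
        param);
}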

If the fibers are not moved to other worker-threads this shouldn't be an issue - right?
No. The problem is if I am a task running on a worker thread, and I call future::get(), if another task/fiber gets scheduled on my thread then it inherits my locks, thread-local variables and thread ID.
But this seems to me consistent. Because both fibers run on the same thread they should see the same thread ID and the same thread-specific storage. Only inheriting locks seems odd at first glance - but this should also be ok because the locks prevent access by other threads. Only if the lock is released twice by the fiber code - I need to think about this issue.
Also, I just remembered that on Windows prior to Vista, SwitchToFiber will not preserve floating-point registers and state. On Vista, you have to specify FIBER_FLAG_FLOAT_SWITCH in the CreateFiberEx call in order to enable this. I don't know how much of a problem that is in practice.
I didn't read this in the MSDN :( I'll search for it. regards, Oliver

"Oliver Kowalke" <k-oli@gmx.de> writes:
If the fibers are not moved to other worker-threads this shouldn't be an issue - right?
No. The problem is if I am a task running on a worker thread, and I call future::get(), if another task/fiber gets scheduled on my thread then it inherits my locks, thread-local variables and thread ID.
But this seems to me consistent. Because both fibers run on the same thread they should see the same thread ID and the same thread-specific storage. Only inheriting locks seems odd at first glance - but this should also be ok because the locks prevent access by other threads. Only if the lock is released twice by the fiber code - I need to think about this issue.
It's OK if the nested task was spawned by this task. If the nested task was spawned by an unrelated task, then these things become real issues. Also, if your thread runs an unrelated task, the condition it was waiting for might become ready, but the task is unable to resume because the new unrelated task is still running. This is why you might choose to migrate fibers between threads:
1. task A spawns task B
2. task B gets picked up by another worker thread
3. task N is submitted to the queue, but there are no free workers
4. task A blocks on task B
5. the thread running task A picks up task N
6. task B completes.
* task A is now ready to run, but its thread is running task N.
* Migrate task A to the thread that just completed task B, and resume task A (easily done with SwitchToFiber). Everything is fine and dandy.... except the thread-local variables and thread ID for task A just changed :-(
Also, I just remembered that on Windows prior to Vista, SwitchToFiber will not preserve floating-point registers and state. On Vista, you have to specify FIBER_FLAG_FLOAT_SWITCH in the CreateFiberEx call in order to enable this. I don't know how much of a problem that is in practice.
I didn't read this in the MSDN :( I'll search for it.
See CreateFiberEx and ConvertThreadToFiberEx. Anthony

It's OK if the nested task was spawned by this task. If the nested task was spawned by an unrelated task, then these things become real issues.
Sorry, I don't know the semantics of an 'unrelated task'. Do you mean a task created by another worker-thread?
Also, if your thread runs an unrelated task the condition it was waiting for might become ready, but the task is unable to resume because the new unrelated task is still running. This is why you might choose to migrate fibers between threads:
1. task A spawns tasks B 2. task B gets picked up by other worker threads 3. task N is submitted to the queue, but there are no free workers 4. task A blocks on task B 5. the thread running task A picks up task N 6. task B completes. * task A is now ready to run, but its thread is running task N. * Migrate task A to the thread that just completed task B, and resume task A (easily done with SwitchToFiber). Everything is fine and dandy.... except the thread-local variables and thread ID for task A just changed :-(
See CreateFiberEx and ConvertThreadToFiberEx
I got it. This happens if migration of fibered tasks between worker-threads happens. But what about executing fibered tasks only in the worker-thread which has created the fiber? In Boost.Threadpool a new fiber is created for each task which has been dequeued from the global queue, the local worker-queue, or stolen from the worker-queue of another worker-thread. If a fibered task has yielded its execution, it is stored in a list in thread-specific storage (the list of unfinished fibers). In the worker-thread loop, first all fibers of the unfinished list are executed. If a fiber has finished, it is removed from the list; otherwise it is stored again in this list. The iteration continues until the size of this list doesn't change. Then a new task is dequeued from the local worker-queue, then from the global queue, and then from the worker-queues of other threads. thx!
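Spelled out as code, that worker loop might look roughly like this - a sketch with made-up names (unfinished_, local_queue_, try_steal, new_fiber_for), not the actual Boost.Threadpool identifiers:

void worker_thread::run()
{
    std::list< fiber::ptr > unfinished_;   // kept in thread-specific storage

    while ( ! shutdown_requested() )
    {
        // 1) re-run yielded fibers until the list stops shrinking
        std::size_t n = 0;
        do
        {
            n = unfinished_.size();
            for ( std::list< fiber::ptr >::iterator i = unfinished_.begin();
                  i != unfinished_.end(); )
            {
                ( * i)->resume();
                if ( ( * i)->finished() )
                    i = unfinished_.erase( i);
                else
                    ++i;
            }
        }
        while ( unfinished_.size() != n);

        // 2) fetch new work: local queue, then global queue, then steal
        task_base::ptr t;
        if ( local_queue_.pop( t) || global_queue_.pop( t) || try_steal( t) )
        {
            fiber::ptr f( new_fiber_for( t) );   // each dequeued task is fiberized
            f->resume();
            if ( ! f->finished() )
                unfinished_.push_back( f);       // yielded: keep for the next pass
        }
    }
}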

"Oliver Kowalke" <k-oli@gmx.de> writes:
It's OK if the nested task was spawned by this task. If the nested task was spawned by an unrelated task, then these things become real issues.
Sorry, I don't know the semantics of an 'unrelated task'. Do you mean a task created by another worker-thread?
Some (user-level) thread does pool.submit(task1), and another (user-level) thread does pool.submit(task2). These tasks are unrelated: one was not spawned by the other. They may also not share any data: one may be spawned by the GUI thread, and another by a network thread.
Also, if your thread runs an unrelated task the condition it was waiting for might become ready, but the task is unable to resume because the new unrelated task is still running. This is why you might choose to migrate fibers between threads:
1. task A spawns tasks B 2. task B gets picked up by other worker threads 3. task N is submitted to the queue, but there are no free workers 4. task A blocks on task B 5. the thread running task A picks up task N 6. task B completes. * task A is now ready to run, but its thread is running task N. * Migrate task A to the thread that just completed task B, and resume task A (easily done with SwitchToFiber). Everything is fine and dandy.... except the thread-local variables and thread ID for task A just changed :-(
I got it. This happens if migration of fibered tasks between worker-threads happens.
The problem (changed thread ID) happens if you migrate fibered tasks between threads once they've started, yes. That problem was an aside: I was trying to explain why you might do that in the first place.
But what about executing fibered tasks only in the worker-thread which has created the fiber?
There is still a problem (just different).
Then a new task is dequeued from the local worker-queue, then from the global -queue and then from worker-queues of other threads.
Here's the problem: if you take a task from the global queue or the worker queue of another thread and *don't* do thread migration, then you risk delaying a task and potentially deadlocking the pool. Consider the sequence above: at step 5, task A is blocked, and the only task it is waiting for (B) has been picked up by an idle worker thread. The thread running task A now takes a new task from the global queue (task N). Suppose this is a really long-running task (calculate PI to 3 million digits, defrag a hard disk, etc.). Without thread migration, task A cannot resume until task N is complete. Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*. Anthony

Here's the problem: if you take a task from the global queue or the worker queue of another thread and *don't* do thread migration, then you risk delaying a task and potentially deadlocking the pool.
Consider the sequence above: at step 5, task A is blocked, and the only task it is waiting for (B) has been picked up by an idle worker thread. The thread running task A now takes a new task from the global queue (task N). Suppose this is a really long-running task (calculate PI to 3 million digits, defrag a hard disk, etc.). Without thread migration, task A cannot resume until task N is complete.
Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*.
Wouldn't this also happen in the recursive execution of tasks (without fibers)? Oliver

"Oliver Kowalke" <k-oli@gmx.de> writes:
Here's the problem: if you take a task from the global queue or the worker queue of another thread and *don't* do thread migration, then you risk delaying a task and potentially deadlocking the pool.
Consider the sequence above: at step 5, task A is blocked, and the only task it is waiting for (B) has been picked up by an idle worker thread. The thread running task A now takes a new task from the global queue (task N). Suppose this is a really long-running task (calculate PI to 3 million digits, defrag a hard disk, etc.). Without thread migration, task A cannot resume until task N is complete.
Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*.
Wouldn't this also happen in the recursive execution of tasks (without fibers)?
Yes. I found that the safest thing to do is spawn another thread rather than recursively executing tasks, unless I could execute the task being waited for (in which case you only get deadlock if you would with separate threads too). You need to pay attention to how many threads are running, but it seems to work better. At least with fibers you *can* migrate the task to another thread, if your task is able to handle it. Anthony

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 1:19 PM Subject: Re: [boost] [threadpool] new version v12
"Oliver Kowalke" <k-oli@gmx.de> writes:
Here's the problem: if you take a task from the global queue or the worker queue of another thread and *don't* do thread migration, then you risk delaying a task and potentially deadlocking the pool.
Consider the sequence above: at step 5, task A is blocked, and the only task it is waiting for (B) has been picked up by an idle worker thread. The thread running task A now takes a new task from the global queue (task N). Suppose this is a really long-running task (calculate PI to 3 million digits, defrag a hard disk, etc.). Without thread migration, task A cannot resume until task N is complete.
Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*.
Wouldn't this also happen in the recursive execution of tasks (without fibers)?
Yes. I found that the safest thing to do is spawn another thread rather than recursively executing tasks, unless I could execute the task being waited for (in which case you only get deadlock if you would with separate threads too). You need to pay attention to how many threads are running, but it seems to work better.
This is an interesting feature to combine with disabling work stealing: the thread_pool will have no more than n running worker threads; others can be blocked, and others are frozen. The worker thread scheduler will spawn another thread or unfreeze a frozen thread before blocking. These threads would be added to the thread pool only when some thread is blocked and no frozen thread exists, so in the end there are never more than the initial number of threads running. The worker thread scheduler can put itself on the frozen list when the initial number of threads are already running.
At least with fibers you *can* migrate the task to another thread, if your task is able to handle it.
I don't see any major issue with migrating tasks when the blocking function get() calls recursively into the worker thread scheduler. Is there one? Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 1:19 PM Subject: Re: [boost] [threadpool] new version v12
"Oliver Kowalke" <k-oli@gmx.de> writes:
Here's the problem: if you take a task from the global queue or the worker queue of another thread and *don't* do thread migration, then you risk delaying a task and potentially deadlocking the pool.
Consider the sequence above: at step 5, task A is blocked, and the only task it is waiting for (B) has been picked up by an idle worker thread. The thread running task A now takes a new task from the global queue (task N). Suppose this is a really long-running task (calculate PI to 3 million digits, defrag a hard disk, etc.). Without thread migration, task A cannot resume until task N is complete.
Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*.
Wouldn't this also happen in the recursive execution of tasks (without fibers)?
Yes. I found that the safest thing to do is spawn another thread rather than recursively executing tasks, unless I could execute the task being waited for (in which case you only get deadlock if you would with separate threads too). You need to pay attention to how many threads are running, but it seems to work better.
This is an interesting feature to combine with disabling work stealing: the thread_pool will have no more than n running worker threads; others can be blocked, and others are frozen. The worker thread scheduler will spawn another thread or unfreeze a frozen thread before blocking. These threads would be added to the thread pool only when some thread is blocked and no frozen thread exists, so in the end there are never more than the initial number of threads running. The worker thread scheduler can put itself on the frozen list when the initial number of threads are already running.
Yes.
At least with fibers you *can* migrate the task to another thread, if your task is able to handle it.
I don't see any major issue with migrating tasks when the blocking function get() calls recursively into the worker thread scheduler. Is there one?
If the task migrates across threads its ID changes, and thread locals change. Anthony

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 3:19 PM Subject: Re: [boost] [threadpool] new version v12
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 1:19 PM Subject: Re: [boost] [threadpool] new version v12
"Oliver Kowalke" <k-oli@gmx.de> writes:
Here's the problem: if you take a task from the global queue or the worker queue of another thread and *don't* do thread migration, then you risk delaying a task and potentially deadlocking the pool.
Consider the sequence above: at step 5, task A is blocked, and the only task it is waiting for (B) has been picked up by an idle worker thread. The thread running task A now takes a new task from the global queue (task N). Suppose this is a really long-running task (calculate PI to 3 million digits, defrag a hard disk, etc.). Without thread migration, task A cannot resume until task N is complete.
Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*.
Wouldn't this also happen in the recursive execution of tasks (without fibers)?
Yes. I found that the safest thing to do is spawn another thread rather than recursively executing tasks, unless I could execute the task being waited for (in which case you only get deadlock if you would with separate threads too). You need to pay attention to how many threads are running, but it seems to work better.
This is an interesting feature to combine with disabled work stealing: the thread pool will have no more than n running worker threads; others can be blocked, and others frozen. The worker-thread scheduler will spawn another thread or unfreeze a frozen thread before blocking. These threads would be added to the pool only when some thread is blocked and no frozen thread exists, so in the end there are never more than the initial number of threads actually running. The worker-thread scheduler can put itself on the frozen list when the initial number of threads are already running.
Yes.
At least with fibers you *can* migrate the task to another thread, if your task is able to handle it.
I don't see any major issue with migrating tasks when the blocking get() function calls recursively into the worker-thread scheduler. Is there one?
If the task migrates across threads its ID changes, and thread locals change.
So if the task does not depend on anything thread-specific, this is safe. Vicente

On Mon, Nov 3, 2008 at 3:26 PM, vicente.botet <vicente.botet@wanadoo.fr> wrote:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 3:19 PM Subject: Re: [boost] [threadpool] new version v12
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com>
At least with fibers you *can* migrate the task to another thread, if your task is able to handle it.
I don't see any major issue with migrating tasks when the blocking get() function calls recursively into the worker-thread scheduler. Is there one?
If the task migrates across threads its ID changes, and thread locals change.
So if the task does not depend on anything thread-specific, this is safe.
Is there any that doesn't? Even errno is usually thread specific, and most allocators have thread specific paths. -- gpd

----- Original Message ----- From: "Giovanni Piero Deretta" <gpderetta@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 3:29 PM Subject: Re: [boost] [threadpool] new version v12
On Mon, Nov 3, 2008 at 3:26 PM, vicente.botet <vicente.botet@wanadoo.fr> wrote:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 3:19 PM Subject: Re: [boost] [threadpool] new version v12
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com>
At least with fibers you *can* migrate the task to another thread, if your task is able to handle it.
I don't see any major issue with migrating tasks when the blocking get() function calls recursively into the worker-thread scheduler. Is there one?
If the task migrates across threads its ID changes, and thread locals change.
So if the task does not depend on anything thread-specific, this is safe.
Is there any that doesn't? Even errno is usually thread specific, and most allocators have thread specific paths.
Sorry, I was not precise enough. If the task handles the thread-specific state safely, this is safe; i.e. if the task saves errno itself, there is no issue when migrating to another thread. Vicente

On Mon, Nov 3, 2008 at 4:16 PM, vicente.botet <vicente.botet@wanadoo.fr> wrote:
----- Original Message ----- From: "Giovanni Piero Deretta" <gpderetta@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 3:29 PM Subject: Re: [boost] [threadpool] new version v12
On Mon, Nov 3, 2008 at 3:26 PM, vicente.botet <vicente.botet@wanadoo.fr> wrote:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 3:19 PM Subject: Re: [boost] [threadpool] new version v12
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com>
At least with fibers you *can* migrate the task to another thread, if your task is able to handle it.
I don't see any major issue with migrating tasks when the blocking get() function calls recursively into the worker-thread scheduler. Is there one?
If the task migrates across threads its ID changes, and thread locals change.
So if the task does not depend on anything thread-specific, this is safe.
Is there any that doesn't? Even errno is usually thread specific, and most allocators have thread specific paths.
Sorry, I was not precise enough. If the task handles the thread-specific state safely, this is safe; i.e. if the task saves errno itself, there is no issue when migrating to another thread.
Not really. Without compiler help (available on VC++ but not on GCC) there is no way out. See: http://www.crystalclearsoftware.com/soc/coroutine/coroutine/coroutine_thread... for an explanation of the problem. -- gpd

----- Original Message ----- From: "Giovanni Piero Deretta" <gpderetta@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 4:21 PM Subject: Re: [boost] [threadpool] new version v12
Not really. Without compiler help (available on VC++ but not on GCC) there is no way out.
See: http://www.crystalclearsoftware.com/soc/coroutine/coroutine/coroutine_thread...
for an explanation of the problem.
The __thread bug on gcc with -O1 optimization is not convincing enough to me. Some TSS usage could be more dangerous when the task migrates, but not all of it. It is for this reason that the user must be able to forbid this migration, and otherwise use fiber/continuation-specific data instead of thread-specific data. Of course the locking issue is a real problem. It is for this reason that I think that, for either coroutines or fibers, we need to explore different synchronization mechanisms and discourage the use of synchronization at the thread level. _____________________ Vicente Juan Botet Escribá

On Tue, Nov 4, 2008 at 4:33 PM, vicente.botet <vicente.botet@wanadoo.fr> wrote:
----- Original Message ----- From: "Giovanni Piero Deretta" <gpderetta@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 4:21 PM Subject: Re: [boost] [threadpool] new version v12
Not really. Without compiler help (available on VC++ but not on GCC) there is no way out.
See: http://www.crystalclearsoftware.com/soc/coroutine/coroutine/coroutine_thread...
for an explanation of the problem.
The __thread bug on gcc with -O1 optimization is not convincing enough to me.
There is little to be convinced about. The reality is that, at least on gcc, you can't reliably use TLS if you migrate tasks across threads. Technically it is not a bug, because context switching is outside the scope of C++; POSIX does not define the interaction between swapcontext and threads either (nor will it in the future, as swapcontext has been removed from the standard). I do think that preventing that hoisting would have practically zero impact on performance on x86 platforms (using a segment register has negligible performance impact), but it might make some difference on platforms with more limited addressing modes, so I do not think gcc will ever change it (and yes, the problem is known).
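To make the hoisting problem concrete, here is a small sketch (not code from this thread; yield_to_other_thread() is a hypothetical call that suspends the fiber and resumes it on a different OS thread):

__thread int tls_counter = 0;   // gcc thread-local variable

void yield_to_other_thread();   // hypothetical: suspend here, resume on another thread

void task_body()
{
    ++tls_counter;              // the compiler may cache the address of the
                                // current thread's tls_counter at this point
    yield_to_other_thread();    // fiber resumes on another OS thread
    ++tls_counter;              // may still write through the cached address,
                                // i.e. into the *previous* thread's variable
}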
Some TSS usage could be more dangerous when the task migrate but not all.
Unfortunately, in general the user has no control over it. And compilers are becoming smarter and smarter at optimizing (gcc might do whole-program optimization soon), so just ignoring the problem and pretending it never happens in real programs is not a solution (better to have restrictive rules than random undebuggable crashes).
It is for this reason that the user must be able to forbid this migration, and otherwise use fiber/continuation-specific data instead of thread-specific data.
I see little value in fiber-specific data. Microsoft had to add it for very practical reasons (i.e. avoiding rewriting hundreds of thousands of lines of code). As long as you do not expect thread-local data to persist across a context switch (or any function whose behavior you do not know), which is always a good thing to do, fiber-local data adds nothing to it (except wasting lots of memory). The biggest problem, of course, is that FLS is Windows-specific (i.e. even if we implemented it in Boost, it would be useless if the CRT didn't make use of it).
Of course the locking issue is a real problem. It is for this reason that I think that, for either coroutines or fibers, we need to explore different synchronization mechanisms and discourage the use of synchronization at the thread level.
For cooperative multitasking you never need synchronization between fibers on the same thread (A scoped guard that asserts that the fiber is never scheduled away in a critical region would be useful though). But hoping to get away from mutexes in general is IMVHO wishful thinking. -- gpd
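One possible shape for the scoped guard mentioned above, purely as a sketch: it assumes the fiber exposes a yield counter (yield_count() is a hypothetical accessor, not part of the proposed Boost.Fiber), and it only detects an unwanted yield at scope exit rather than preventing it (needs <boost/assert.hpp> and <cstddef>):

class no_yield_guard
{
private:
    fiber &      f_;
    std::size_t  yields_on_entry_;

public:
    explicit no_yield_guard( fiber & f)
        : f_( f), yields_on_entry_( f.yield_count() )
    {}

    ~no_yield_guard()
    {
        // the fiber must not have been scheduled away inside this scope
        BOOST_ASSERT( f_.yield_count() == yields_on_entry_);
    }
};

// usage:  { no_yield_guard guard( current_fiber); /* critical region */ }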

----- Original Message ----- From: "Giovanni Piero Deretta" <gpderetta@gmail.com> To: <boost@lists.boost.org> Sent: Tuesday, November 04, 2008 4:58 PM Subject: Re: [boost] [threadpool] new version v12
The __thread bug on gcc with -O1 optimization is not convincing enough to me.
There is little to be convinced about.
OK, I get it. Sorry for the noise. Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
I don't see any major issue with migrating tasks when the blocking get() function calls recursively into the worker-thread scheduler. Is there one?
If the task migrates across threads its ID changes, and thread locals change.
So if the task does not depend on anything thread-specific, this is safe.
In principle, yes. It depends on to what extent the CRT uses thread-specific stuff. Anthony -- Anthony Williams Author of C++ Concurrency in Action | http://www.manning.com/williams Custom Software Development | http://www.justsoftwaresolutions.co.uk Just Software Solutions Ltd, Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK

Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*.
What about this idea: instead of spawning a new thread, still use fibers, which will not be migrated to other worker-threads. The non-blocking functionality of future::get() is moved into a publicly available function:

template< typename Condition >
void threadpool::yield( Condition cond)
{
    BOOST_ASSERT( tss_worker_.get() );  // only worker-threads allowed
    while ( ! cond() )
        current_fiber_->yield();
}

Because Boost.Thread's conditions don't provide a try_wait(), a semaphore should be used (bool semaphore::try_wait() ).

...
semaphore sem ....;
...
pool.yield( bind( & semaphore::try_wait, ref( sem) ) );
...

regards, Oliver

"Oliver Kowalke" <k-oli@gmx.de> writes:
Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*.
What about this idea: instead of spawning a new thread, still use fibers, which will not be migrated to other worker-threads. The non-blocking functionality of future::get() is moved into a publicly available function:
template< typename Condition >
void threadpool::yield( Condition cond)
{
    BOOST_ASSERT( tss_worker_.get() );  // only worker-threads allowed
    while ( ! cond() )
        current_fiber_->yield();
}
I really don't see what that gets you, except another potential scheduling point: user code still has to actually use this in order to allow rescheduling. There will always be threads that wait for synchronization from elsewhere that don't use any mechanism you can hijack to reschedule the pool (e.g. atomic ops on a boolean flag). Anthony -- Anthony Williams Author of C++ Concurrency in Action | http://www.manning.com/williams Custom Software Development | http://www.justsoftwaresolutions.co.uk Just Software Solutions Ltd, Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Monday, November 03, 2008 12:09 PM Subject: Re: [boost] [threadpool] new version v12
"Oliver Kowalke" <k-oli@gmx.de> writes:
But what about executing fibered tasks only in the worker-thread which has created the fiber?
There is still a problem (just different).
Then a new task is dequeued from the local worker-queue, then from the global queue, and then from the worker-queues of other threads.
Here's the problem: if you take a task from the global queue or the worker queue of another thread and *don't* do thread migration, then you risk delaying a task and potentially deadlocking the pool.
Consider the sequence above: at step 5, task A is blocked, and the only task it is waiting for (B) has been picked up by an idle worker thread. The thread running task A now takes a new task from the global queue (task N). Suppose this is a really long-running task (calculate PI to 3 million digits, defrag a hard disk, etc.). Without thread migration, task A cannot resume until task N is complete.
Obviously, if task N blocks on a future you can then reschedule task A, but if it doesn't then you don't get that opportunity. If task N waits for another event to be triggered from task A (e.g. a notify on a condition variable) then it will never get it because task A is suspended, and so the pool thread will deadlock *even when task A is ready to run*.
Obviously this sequence justifies that a task could need to migrate to another thread. It seems that task scheduling is not as easy as I thought. Thanks for drawing out this issue. Vicente

Anthony Williams:
Fibers are still tied to a particular thread, so thread-local variables and boost::this_thread::get_id() still return the same value for a nested task. This means that a task that calls future::get() might find that its thread-local variables have been overwritten by a nested task when it resumes. It also means that any data structures keyed by the thread::id may have been altered. Finally, the nested task inherits all the locks of the parent, so it may deadlock if it tries to lock the same mutexes (rather than just block if it is running on a separate thread).
No. If task A holds a mutex M and waits for task B, and task B tries to lock M, you have deadlock no matter which thread executes B. For the thread locals, they usually are altered in the manner they should've been altered, unless a task uses thread locals instead of ordinary locals and calls itself. Expecting errno to persist across a future wait (or across an arbitrary function call) is not a reasonable assumption. -- Peter Dimov http://www.pdplayer.com
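A minimal sketch of the deadlock Peter describes, assuming the tp::task / pool.submit interface used elsewhere in this thread (task_a and task_b are illustrative names, not library code):

boost::mutex m;

int task_b()
{
    boost::mutex::scoped_lock lk( m);   // blocks forever: task A already holds m
    return 42;
}

int task_a( pool_type & pool)
{
    boost::mutex::scoped_lock lk( m);   // A acquires m ...
    tp::task< int > t( pool.submit( task_b) );
    return t.get();                     // ... and waits for B, which waits for m:
                                        // deadlock, no matter which thread runs B
}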

"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
Fibers are still tied to a particular thread, so thread-local variables and boost::this_thread::get_id() still return the same value for a nested task. This means that a task that calls future::get() might find that its thread-local variables have been overwritten by a nested task when it resumes. It also means that any data structures keyed by the thread::id may have been altered. Finally, the nested task inherits all the locks of the parent, so it may deadlock if it tries to lock the same mutexes (rather than just block if it is running on a separate thread).
No. If task A holds a mutex M and waits for task B, and task B tries to lock M, you have deadlock no matter which thread executes B.
That's true if the thread runs task B whilst it is waiting. If it runs task C which is completely unrelated and task C tries to acquire the lock then it will deadlock even if task B then completes, which would allow task A to resume and release the lock (since task A is suspended whilst its thread runs task C).
For the thread locals, they usually are altered in the manner they should've been altered, unless a task uses thread locals instead of ordinary locals and calls itself. Expecting errno to persist across a future wait (or across an arbitrary function call) is not a reasonable assumption.
That depends on the thread-local variable. errno is used everywhere, so it is not safe to assume it is unchanged. You might expect a library specific thread-local to persist across a future wait: the futures library knows nothing of your code, so has no reason to write to your thread locals. Unless of course it runs another task on your thread.... Anthony -- Anthony Williams Author of C++ Concurrency in Action | http://www.manning.com/williams Custom Software Development | http://www.justsoftwaresolutions.co.uk Just Software Solutions Ltd, Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK

Anthony Williams:
"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
Fibers are still tied to a particular thread, so thread-local variables and boost::this_thread::get_id() still return the same value for a nested task. This means that a task that calls future::get() might find that its thread-local variables have been overwritten by a nested task when it resumes. It also means that any data structures keyed by the thread::id may have been altered. Finally, the nested task inherits all the locks of the parent, so it may deadlock if it tries to lock the same mutexes (rather than just block if it is running on a separate thread).
No. If task A holds a mutex M and waits for task B, and task B tries to lock M, you have deadlock no matter which thread executes B.
That's true if the thread runs task B whilst it is waiting. If it runs task C which is completely unrelated and task C tries to acquire the lock then it will deadlock even if task B then completes, which would allow task A to resume and release the lock (since task A is suspended whilst its thread runs task C).
Yes, of course. If A waits for B, the pool should try to execute B in A's thread, not C. -- Peter Dimov http://www.pdplayer.com

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Saturday, November 01, 2008 11:03 PM Subject: Re: [boost] [threadpool] new version v12
k-oli@gmx.de writes:
On Saturday, 1 November 2008 19:35:23, Vicente Botet Escriba wrote:
IMO, the implementation of the fork/join semantics do not need fibers. The wait/get functions can call to the thread_pool scheduler without context-switch. Which are the advantages of using fibers over calling recursively to the scheduler?
Please take a look into the example folder of threadpool. You will find two examples for recursively calculating fibonacci. Configure the pool with tp::fibers_disabled and try to calculate fibonacci(3) with two worker-threads. Your application will block forever. Use the option tp::fiber_enabled and you can calculate any fibonacci number without blocking.
I haven't looked at Oliver's use of Fibers, but you don't need to use fibers to do this. Whenever you would switch fibers to a new task, just call the task recursively on the current stack instead. The problem here is that you may run out of stack space if the recursion is too deep: by creating a Fiber for the new task you can control the stack space.
Thanks, Anthony, for answering my question. The 'run out of stack space' problem was already present in the sequential recursive algorithm. Of course with fibers you can avoid the recursion problem, but now you need to reserve a specific stack space for each fiber. IMO tasks should be more lightweight than fibers. I'm interested in knowing the overhead of using fibers compared to the recursive call. I'm not saying that fibers are not useful; the fiber library should be useful on its own in a lot of contexts, and I'm also waiting for the review of the coroutine library. Maybe it should be up to the end user to choose between a fiber implementation and a recursive one.
The problem with doing this (whether you use Fibers or just recurse on the same stack) is that the nested task inherits context from its parent: locked mutexes, thread-local data, etc. If the tasks are not prepared for this the results may not be as expected (e.g. thread waits on a task, resumes after waiting and finds all its thread-local variables have been munged).
You are right; this reinforces my initial thinking. We need to separate tasks (root tasks) from sub_tasks (which will inherit the context of their parent task).

* In addition we could have a default thread pool (the_thread_pool) and have access to the current task (this_task). How this default thread pool is built must be defined. This could let us write things like

int seq_fibo( int n)
{
    if ( n <= 1) return n;
    else return seq_fibo( n - 2) + seq_fibo( n - 1);
}

int par_fibo( int n)
{
    using namespace boost::tp;
    if ( n <= 3) return seq_fibo( n);
    else
    {
        sub_task< int > t1( this_task::submit( par_fibo, n - 1) );
        sub_task< int > t2( this_task::submit( par_fibo, n - 2) );
        return t1.get() + t2.get();
    }
}

int fibo( int n)
{
    using namespace boost::tp;
    task< int > t( the_thread_pool::submit( fibo, n) );
    return t.get_future().get();
}

* Independently of whether the implementation uses fibers or recursive calls into the worker-thread scheduler, there are other blocking functions that could be wrapped to use the current worker thread to schedule other tasks/sub_tasks, doing a busy wait instead of a blocking wait. For example, a task can wait on a condition that can be provided by other tasks/sub_tasks, so the end user could be able to write something like this_working_thread::wait(condition). So I think the library must provide a mechanism allowing this kind of wrapper to be written, by exposing a one_step_schedule function.

void this_working_thread::wait( boost::condition cnd)
{
    while ( ! cnd.try_wait() )
        this_working_thread::one_step_schedule();
}

* Just one additional feature I would like to see in Boost, which could be included in the thread pool library or in a separate library: I would like to be able to submit/schedule tasks at a given time (callouts), which could be one-shot or periodic. Something like

timeout_task to = the_thread_pool::submit_at( absolute_time, funct);
timeout_task to = the_thread_pool::submit_at_periodically( absolute_time, period, funct);

Of course the scope of the library would then be wider, and TaskScheduler could be a more adequate name for it. The implementation of the timeout scheduler could be based on "Redesigning the BSD Callout and Timer Facilities" by Adam M. Costello and George Varghese: http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=B5202FC949FF3EDB0E789F68F509950C?doi=10.1.1.54.6466&rep=rep1&type=pdf

BTW Oliver,
* could the interrupt function extract the task from the queue if the task is not running already?
* as a task can only be on one queue, maybe the use of intrusive containers could improve performance
* the fiber queue is a std::list and you use the size() function, which can have O(n) complexity. This should be improved in some way (Boost.Intrusive already provides a list with constant-time size()).

Best,
Vicente

On Sunday, 2 November 2008 19:49:47, vicente.botet wrote:
Thanks, Anthony, for answering my question. The 'run out of stack space' problem was already present in the sequential recursive algorithm. Of course with fibers you can avoid the recursion problem, but now you need to reserve a specific stack space for each fiber. IMO tasks should be more lightweight than fibers. I'm interested in knowing the overhead of using fibers compared to the recursive call. I'm not saying that fibers are not useful; the fiber library should be useful on its own in a lot of contexts, and I'm also waiting for the review of the coroutine library. Maybe it should be up to the end user to choose between a fiber implementation and a recursive one.
Using fibers doesn't prevent you from calling functions recursively inside the task object. The point of using fibers in Boost.Threadpool is that the parent task is not blocked until the sub-task becomes ready/fulfilled: the parent task gets suspended and the worker-thread takes another task from the pool queue; later the suspended task is resumed. Boost.Threadpool allows disabling fibers -> tp::fibers_disabled.
The problem with doing this (whether you use Fibers or just recurse on the same stack) is that the nested task inherits context from its parent: locked mutexes, thread-local data, etc. If the tasks are not prepared for this the results may not be as expected (e.g. thread waits on a task, resumes after waiting and finds all its thread-local variables have been munged).
You are right; this reinforces my initial thinking. We need to separate tasks (root tasks) from sub_tasks (which will inherit the context of their parent task).
I believe this separation is not necessary. If all fibers are processed by the same worker-thread, we don't have to worry.
int par_fibo( int n)
{
    using namespace boost::tp;
    if ( n <= 3) return seq_fibo( n);
    else
    {
        sub_task< int > t1( this_task::submit( par_fibo, n - 1) );
        sub_task< int > t2( this_task::submit( par_fibo, n - 2) );
        return t1.get() + t2.get();
    }
}
What does this_task::submit do? Does it create a new task in the threadpool?
* Independently of whether the implementation uses fibers or recursive calls into the worker-thread scheduler, there are other blocking functions that could be wrapped to use the current worker thread to schedule other tasks/sub_tasks, doing a busy wait instead of a blocking wait.
As I wrote above, Boost.Threadpool already does this (with the support of fibers). Currently it is encapsulated in task< R >::get() - but the interface can be extended to provide a this_working_thread::wait() function.
BTW Oliver, could the interrupt function extract the task from the queue if the task is not running already?
This would be complicated because we have different queues: one global queue and local worker-queues. The task would have to maintain an iterator after insertion into one of the queues, etc. The current implementation stores an interrupt flag so that the task is interrupted immediately after dequeuing.
* as a task can only be on one queue, maybe the use of intrusive containers could improve performance
* the fiber queue is a std::list and you use the size() function, which can have O(n) complexity. This should be improved in some way (Boost.Intrusive already provides a list with constant-time size()).
I chose std::list for its fast insertions/deletions - I'll take a look at intrusive containers. regards, Oliver

----- Original Message ----- From: <k-oli@gmx.de> To: <boost@lists.boost.org> Sent: Sunday, November 02, 2008 8:59 PM Subject: Re: [boost] [threadpool] new version v12
Using fibers doesn't prevent you from calling functions recursively inside the task object.
You must have misunderstood my concern. What I mean is that the future's get function can *recursively* call into the worker-thread scheduler to schedule one sub_task, or steal one from other worker threads, instead of using fibers. This is another possibility for the thread_pool library, independent of whether the user has a recursive function.
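A minimal sketch of this recursive-scheduling idea (illustrative only; future_, worker_queue_, callable and try_pop are hypothetical internals, not the proposed library): while the future is not ready, the blocked worker pops another pending task and runs it inline on the current stack.

template< typename R >
R task< R >::get()
{
    while ( ! future_.is_ready() )
    {
        callable next;
        if ( worker_queue_.try_pop( next) )   // or steal from another worker
            next();                           // nested task runs on this stack
        else
            future_.wait();                   // nothing runnable: really block
    }
    return future_.get();
}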
The point of using fibers in Boost.Threadpool is that the parent task is not blocked until the sub-task becomes ready/fulfilled: the parent task gets suspended and the worker-thread takes another task from the pool queue; later the suspended task is resumed. Boost.Threadpool allows disabling fibers -> tp::fibers_disabled.
Yes, I know all that. As you know, I'm interested in the fork/join-enabled thread pool variant, which doesn't mean I'm for a fiber implementation. The two features are orthogonal.
The problem with doing this (whether you use Fibers or just recurse on the same stack) is that the nested task inherits context from its parent: locked mutexes, thread-local data, etc. If the tasks are not prepared for this the results may not be as expected (e.g. thread waits on a task, resumes after waiting and finds all its thread-local variables have been munged).
You are right; this reinforces my initial thinking. We need to separate tasks (root tasks) from sub_tasks (which will inherit the context of their parent task).
I believe this separation is not necessary. If all fibers are processed by the same worker-thread, we don't have to worry. <snip> What does this_task::submit do? Does it create a new task in the threadpool?
It creates a sub_task that inherits the context of its parent task.
* Independently of whether the implementation uses fibers or recursive calls into the worker-thread scheduler, there are other blocking functions that could be wrapped to use the current worker thread to schedule other tasks/sub_tasks, doing a busy wait instead of a blocking wait.
As I wrote above, Boost.Threadpool already does this (with the support of fibers). Currently it is encapsulated in task< R >::get() - but the interface can be extended to provide a this_working_thread::wait() function.
Yes, I know; it does this only for the future's get() function. What I'm asking is to explore the ability to do that for other blocking functions, even blocking functions not yet defined, i.e. to open the interface. Do you plan to open the interface with something like one_step_schedule()?
BTW Oliver, could the interrupt function extract the task from the queue if the task is not running already?
This would be complicated because we have different queues: one global queue and local worker-queues. The task would have to maintain an iterator after insertion into one of the queues, etc. The current implementation stores an interrupt flag so that the task is interrupted immediately after dequeuing.
This would be better than nothing. Could you tell me where the code doing this is? Anyway, with the separation between task and sub_task we can avoid the problem: a task would only ever be on the pool queue and a sub_task only on the internal worker-thread queue.
* as a task can only be on one queue, maybe the use of intrusive containers could improve performance
* the fiber queue is a std::list and you use the size() function, which can have O(n) complexity. This should be improved in some way (Boost.Intrusive already provides a list with constant-time size()).
I chose std::list for its fast insertions/deletions - I'll take a look at intrusive containers.
I'm sure you will be convinced by intrusive containers. Thanks, Vicente
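For reference, a sketch of what the fiber queue could look like with Boost.Intrusive (illustrative only, not the current Boost.Threadpool code; the fiber struct here is a stand-in). constant_time_size<true> makes size() O(1), and elements are unlinked without any search or allocation:

#include <boost/intrusive/list.hpp>
#include <cstddef>

namespace bi = boost::intrusive;

struct fiber
{
    bi::list_member_hook<>  queue_hook;   // hook embedded in the fiber itself
    // ... rest of the fiber state ...
};

typedef bi::list<
    fiber,
    bi::member_hook< fiber, bi::list_member_hook<>, & fiber::queue_hook >,
    bi::constant_time_size< true >
> fiber_queue;

void example( fiber & f)
{
    fiber_queue q;
    q.push_back( f);                  // just links the hook, no allocation
    std::size_t n = q.size();         // O(1) with constant_time_size< true >
    (void) n;
    q.erase( q.iterator_to( f) );     // O(1) removal given the element
}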

BTW Oliver, could the interrupt function extract the task from the queue if the task is not running already?
This would be complicated because we have different queues: one global queue and local worker-queues. The task would have to maintain an iterator after insertion into one of the queues, etc. The current implementation stores an interrupt flag so that the task is interrupted immediately after dequeuing.
This would be better than nothing. Could you tell me where the code doing this is?
interrupter class
Anyway, with the separation between task and sub_task we can avoid the problem: a task would only ever be on the pool queue and a sub_task only on the internal worker-thread queue.
You still have to store an iterator inside the task, and if you call container::erase( iterator) you get no information about whether the operation succeeded. I think it is not worth the trouble - the interrupt flag works. regards, Oliver
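A rough sketch of the interrupt-flag approach described in this exchange (illustrative names only, not the actual interrupter class in the library):

class interrupter
{
private:
    boost::mutex  mtx_;
    bool          interruption_requested_;

public:
    interrupter() : interruption_requested_( false) {}

    void interrupt()
    {
        boost::mutex::scoped_lock lk( mtx_);
        interruption_requested_ = true;
    }

    bool interruption_requested()
    {
        boost::mutex::scoped_lock lk( mtx_);
        return interruption_requested_;
    }
};

// Worker loop: no need to locate the task inside any queue; right after
// dequeuing, check the flag and skip (or mark as interrupted) the task if set.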

k-oli@gmx.de wrote:
On Saturday, 1 November 2008 00:53:11, Michael Marcin wrote:
k-oli@gmx.de wrote:
Hello,
the new version of Boost.Threadpool depends on Boost.Fiber. Forgive my ignorance but http://tinyurl.com/6r3l6u claims that "fibers do not provide advantages over a well-designed multithreaded application".
What are the benefits of using them in a threadpool?
Thanks,
Hello Michael,
using fibers in a threadpool enables fork/join semantics (algorithms which recursively splitt an action into smaller sub-actions; forking the sub-actions into separate worker threads, so that they run in parallel on multiple cores)
For instance the recursive calculation of fibonacci numbers (I know it is not the best algorithm for fibonacci calculation):
<snip example code>
Since the parent task can't complete any work until all of the child tasks complete it seems to me that one of the children should be directly run in the same context as the parent without needing the overhead of the scheduler. It's convenient that you chose this example. TBB uses this same example in their tutorial pdf ( http://tinyurl.com/64ejvp ) in chapter 10.2. Perhaps you could compare and contrast your solution which uses fibers with theirs. Thanks, -- Michael Marcin
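A sketch of the "run one child directly" variant Michael suggests, assuming the pool/task interface used earlier in the thread (fib_task is an illustrative free function, not library code): only one child goes through the scheduler, the other runs inline in the parent's context.

int fib_task( pool_type & pool, int n)
{
    if ( n <= 1)
        return n;
    // one child is submitted to the pool ...
    tp::task< int > t(
        pool.submit( boost::bind( & fib_task, boost::ref( pool), n - 1) ) );
    // ... the other runs directly on the parent's stack, no scheduler round-trip
    int r = fib_task( pool, n - 2);
    return t.get() + r;
}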

on Fri Oct 31 2008, k-oli-AT-gmx.de wrote:
Hello,
the new version of Boost.Threadpool depends on Boost.Fiber.
These two and Boost.Coroutine are all proposed but as-yet-unreviewed libraries? Can we get the authors of Fiber and Coroutine together to take the best of each? -- Dave Abrahams BoostPro Computing http://www.boostpro.com
participants (10)
- Anthony Williams
- David Abrahams
- Giovanni Piero Deretta
- Hartmut Kaiser
- k-oli@gmx.de
- Michael Marcin
- Oliver Kowalke
- Peter Dimov
- Vicente Botet Escriba
- vicente.botet