boost::defer - generalised execution deferral

"Any interest in...?" This post asks the above question in relation to a possible library boost::defer which would live half-way between the Herb Sutter active/future thread model and the semi-raw facilities of boost::thread. Deferral here means 1. Do something later 2. Let something else decide which thread will execute the work Facilities like boost::function offer a way to wrap up a unit of work that can be deferred. (UOW) boost::defer would offer a way of posting the work unit to a Defer Point object that would call the unit of work in whatever is the appropriate thread. The Defer Point acts as a queue for scheduled work items Different polymorphic styles of Defer Point would implement different strategies for which kind of thread does the callback 1. single thread - owned by the Defer Point 2. Pool - staic or dynamic pool of threads owned by the Defer Point 3. win32 hidden window (dispatches callback on a gui thread) 4. NULL - Calling thread callback (i.e. do it now in the caller's thread) 5. Gating - Calling thread with mutex protection (serializes multiple threads thus protecting the called code) Any given defer point object can either be dedicated to a particular task, or can be shared throughout a program giving rise to a variety of ways of using system threads 1. One single thread defer point, reused throughout a program -> cooperative multi-tasking 2. win32 hidden window reused throughout -> cooperative multi-tasking on the gui thread 3. as above, but with another dedicated thread Defer Point (or pool) used for slow background tasks I've found these threading concepts to be useful in large-scale projects where multiple libraries want to do callbacks into some application code. Defer Points allow you to parameterise those libraries in such a way as to play nicely with eachother - deferring their callback strategy to a style defined by the programmer integrating the libraries into the final app. Its also handy where multiple entities in a program need to call into eachother, potentially recursively (A calls B calls A calls B) Better to break this recursion with queues of work items. So - any interest in more concrete code? More detailed explanations and scenarios? Cheers Simon

Pedro, [SNIP]
I'd happily test such a library. I do need something less "fire and forget" than boost::thread_group...
Thanks for your interest. I'll look at getting something into a suitable format (i.e. reimplemented on a pure boost platform) for you to try. Looking at Christopher Kohlhoff's lib though, perhaps this job is already done.

Cheers
Simon

Hi Simon, --- simon meiklejohn <simon@simonmeiklejohn.com> wrote: <snip>
boost::defer would offer a way of posting the work unit to a Defer Point object that would call the unit of work in whatever is the appropriate thread. The Defer Point acts as a queue for scheduled work items.
What you describe is similar to functionality provided by my proposed asio library. In particular see the asio::demuxer and asio::locking_dispatcher classes, which implement the Dispatcher concept (see http://asio.sourceforge.net/asio-0.3.5/doc/reference/index.html). For example, given an object d of a class that implements the Dispatcher concept, you can:

1) Request immediate execution of a function object. The function object will be called immediately provided the dispatcher's preconditions(**) for immediate execution are met, otherwise execution is deferred:

  d.dispatch(some_function_object);

2) Request deferred execution of a function object:

  d.post(some_function_object);

3) Create a new function object that will automatically dispatch the original function object:

  new_function_object = d.wrap(some_function_object);

** The preconditions are up to the specific Dispatcher implementation. In the case of the asio::demuxer class, it's that the current thread is currently invoking demuxer::run().
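Putting two of those calls into a complete program gives a minimal sketch like the following; only dispatch(), post() and run() are taken from the description above, and the single <asio.hpp> header is assumed to be the 0.3.5 layout.

#include <asio.hpp>
#include <iostream>

void hello() { std::cout << "hello from a handler" << std::endl; }

int main()
{
    asio::demuxer d;

    d.post(hello);       // always deferred: queued until a thread calls d.run()
    d.dispatch(hello);   // the precondition (being inside d.run()) is not met
                         // here, so this is also queued rather than run now

    d.run();             // the calling thread now invokes both queued handlers
}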
1. One single thread defer point, reused throughout a program -> cooperative multi-tasking
This sort of "cooperative multi-tasking" design is common to apps that use asio to perform asynchronous I/O and other tasks. With careful design (and judicious use of asio::locking_dispatcher), it also works when the demuxer uses a pool of threads.
I've found these threading concepts to be useful in large-scale projects where multiple libraries want to do callbacks into some application code. Defer Points allow you to parameterise those libraries in such a way as to play nicely with each other - deferring their callback strategy to a style defined by the programmer integrating the libraries into the final app.
It's also handy where multiple entities in a program need to call into each other, potentially recursively (A calls B calls A calls B). Better to break this recursion with queues of work items.
Yep, I also find this approach particularly useful for the same reasons as you. Cheers, Chris

Chris, SNIP
What you describe is similar to functionality provided by my proposed asio library. In particular see the asio::demuxer and asio::locking_dispatcher classes, which implement the Dispatcher concept (see http://asio.sourceforge.net/asio-0.3.5/doc/reference/index.html).
Very nice looking library. Your points regarding how it does the things I have discussed are well taken. Questions:

1. Is this in the pipeline for review in boost?

2. Given the contentious nature of I/O discussions in boost, would it be possible to unbundle the threading aspect of your lib and get that part into boost sooner? <grin>

3. What's your approach to objects mentioned in the callback handler being deleted before the callback is made? Is it the responsibility of the Handler to manage this? Any wrappers for this functionality (e.g. using weak_ptr)?

4. Is a Win32 GUI thread based asio::demuxer available? Couldn't quite get my head around getting windows events into such a thing.

Cheers
Simon

Hi Simon, --- simon meiklejohn <simon@simonmeiklejohn.com> wrote:
1. Is this in the pipeline for review in boost?
Not yet, but should be real soon now.
2. Given the contentious nature of I/O discussions in boost would it be possible to unbundle the threading aspect of your lib and get that part into boost sooner? <grin>
Hopefully it won't come to that :)
3. What's your approach to objects mentioned in the callback handler being deleted before the callback is made? Is it the responsibility of the Handler to manage this? Any wrappers for this functionality (e.g. using weak_ptr)?
It is the responsibility of the initiator of the post(), dispatch() or asynchronous operation to ensure that anything referenced by the handler is still alive when the handler is invoked. One way of achieving this is to use shared_ptr data members in the handler, e.g. the http/server example, where the connection object passes shared_from_this() to all asynchronous operations. Another thing is that the library ensures that the handler is invoked exactly once (except in the case of demuxer::run() being interrupt()ed and not resumed). So an alternative approach might be to clean up the objects when the handler itself is invoked.
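A minimal sketch of that first approach, using demuxer::post() rather than a socket operation so it stays self-contained; the session class and its members are invented for the example, not taken from the http/server sample.

#include <asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>

// a handler that holds a shared_ptr back to its object keeps the object
// alive until the handler has been invoked
class session : public boost::enable_shared_from_this<session>
{
public:
    explicit session(asio::demuxer& d) : demuxer_(d) {}

    void start()
    {
        // bind shared_from_this() into the handler instead of a raw 'this'
        demuxer_.post(boost::bind(&session::handle_work, shared_from_this()));
    }

private:
    void handle_work()
    {
        std::cout << "session is still alive inside the handler" << std::endl;
    }

    asio::demuxer& demuxer_;
};

int main()
{
    asio::demuxer d;
    boost::shared_ptr<session> s(new session(d));
    s->start();
    s.reset(); // our reference goes away; the queued handler still holds one
    d.run();   // the handler runs, and only then is the session destroyed
}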
4. Is a Win32 GUI thread based asio::demuxer available?
No.
Couldn't quite get my head around getting windows events into such a thing.
Yeah, sounds like it could be tricky to implement. It might be easier to just implement the Dispatcher concept for windows events and call demuxer::run() in a background thread. Then to have completion handlers called in the windows-event-handling thread you would do something like:

  my_socket.async_receive(buffers, my_windows_event_dispatcher.wrap(my_handler));

where the wrap() function causes the handler invocation to be forwarded to the windows-event-based dispatcher.

Cheers,
Chris

On 11/25/2005 05:05 PM, simon meiklejohn wrote: [snip]
Deferral here means 1. Do something later 2. Let something else decide which thread will execute the work
[snip]
It's also handy where multiple entities in a program need to call into each other, potentially recursively (A calls B calls A calls B). Better to break this recursion with queues of work items.
I'm guessing it could be used to demonstrate that a multi-threaded program could deadlock. IOW, the different threads in a program could be "divided" into work_items which execute atomically. Then by precisely specifying the order in which the work_items in different threads are executed, you demonstrate that the original program could deadlock. I remember doing something like this with signals, where each work_item, at the end of its code, set a signal which the handler caught and then dispatched to the next work_item, or something like that.
So - any interest in more concrete code? More detailed explanations and scenarios?
Yes.

A bit of a look at the callback facilities of the asio library leads me to think there are some differences between that scheme and the one I'm proposing. These differences make it worth expanding on my proposed Defer model, at least for the purposes of comparison.

Defer has the following characteristics:

1. Polymorphic defer strategies - a base class DeferPoint defines a simple protocol for requesting and receiving callbacks.

2. Concrete defer agents inherit from DeferPoint, and implement its protocol in custom ways (dedicated thread, thread pool, callback in the caller's thread, win32 gui event notification etc).

3. Deferral Styles derived from DeferPoint are abstract classes from which to derive concrete deferrers. Client code can use the DeferStyle names in their interfaces to specify their requirements/compatibilities (things like thread affinities, thread multiplicity, re-entrancy policy).

This is a fairly old-school way to structure things, but I believe it leads to greater deployability than doing a generics/templates approach at this base level. It's still possible to build generic code on top - such as using boost::function to define work items like boost::thread does.

Benefits of this approach compared to asio:

1. Large existing non-template code-bases can be made Defer-friendly comparatively easily - a new constructor parameter here, a deferral request there.

2. The DeferPoint base class allows client code to benefit from different deferral styles without needing to be templated.

3. Strong types for Defer Style concepts allow for compiler-enforced reasoning about how to hook different components together.

4. More easily able to hook in to the windows event model and other ad-hoc schemes.

Cheers
Simon
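To make point 3 above concrete, here is a hedged sketch of the kind of hierarchy being described. Only DeferPoint, the defer namespace and the Defer() call appear elsewhere in this thread; the style class names and the example component are invented for illustration.

#include <boost/function.hpp>

namespace defer {

// root of the hierarchy: a protocol for requesting callbacks
class DeferPoint
{
public:
    virtual ~DeferPoint() {}
    virtual void Defer(const boost::function<void()>& work) = 0;
};

// abstract Defer Styles: strong types that name a threading guarantee,
// from which the concrete deferrers would derive
class SingleThreadStyle : public DeferPoint {}; // all callbacks on one thread
class GatingStyle : public DeferPoint {};       // caller's thread, serialised

} // namespace defer

// a library component states its threading requirement in its interface,
// without being templated on the deferral mechanism
class network_component
{
public:
    explicit network_component(defer::SingleThreadStyle& callbacks)
        : callbacks_(callbacks) {}

    void set_handler(const boost::function<void()>& handler)
    { handler_ = handler; }

    void on_data_arrived()
    {
        // hand the application callback to whatever thread the integrator chose
        if (handler_)
            callbacks_.Defer(handler_);
    }

private:
    defer::SingleThreadStyle& callbacks_;
    boost::function<void()> handler_;
};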

A simple example - the following program directs the functions f(), g() and h() to be called on the background thread.

// kublaikhan.cpp : basic test of DeferThread.
//
#include "deferthread.h"

#include <boost/function.hpp>
#include <boost/thread.hpp>
#include <iostream>

// tasks for the worker thread
///////////////////////////////////////////////////////////////
void f() { std::cout << "In Amsterdam" << std::endl; }
void g() { std::cout << "Did Kublai Khan" << std::endl; }
void h() { std::cout << "His stately pleasure tram decree" << std::endl; }

///////////////////////////////////////////////////////////////
void thread_sleep(int seconds)
{
    boost::xtime delay;
    boost::xtime_get(&delay, boost::TIME_UTC);
    delay.sec += seconds;
    boost::thread::sleep(delay);
}

///////////////////////////////////////////////////////////////
void test_worker(defer::DeferThread& worker,
                 const boost::function<void (void)>& func)
{
    // get the worker to call the function in its thread
    worker.Defer(func);
    thread_sleep(3); // arbitrary foreground work
}

///////////////////////////////////////////////////////////////
int main(int argc, char** argv)
{
    // DeferThread extends DeferPoint
    // it executes requests on a single background thread
    defer::DeferThread worker;

    // sleep instead of meaningful foreground work
    thread_sleep(3);

    // do a variety of tasks in the background
    test_worker(worker, f);
    test_worker(worker, g);
    test_worker(worker, h);

    thread_sleep(3);
    worker.Stop(true); // close down
}

I've placed a trial implementation of the Defer library in the sandbox under Concurrent Programming. A quick look at the code will show it's pretty trivial, and at this point it lacks the windows-message-based callback signalling. That will follow soon. It's also not optimised in any way. A simple example is included (kublakhan.cpp) which exercises a variety of defer styles. Any and all comments welcome.

Many thanks
Simon

A second version of library Defer is now in the sandbox under Concurrent Programming. (This version primarily adds the promised win32 message pump version.)

To recap - this library defines a uniform means to send work items (boost::function objects) to thread pools, single threads and the windows gui thread via a runtime polymorphic interface. Other implementations of the defer concept included in the library are defer_null, which makes no attempt to defer but invokes the supplied boost::function object immediately, and defer_mutex, which also invokes the object on the calling thread, but only after first acquiring a mutex (thus serialising requests from different threads).

The benefits of such a library are:

- It allows an efficient reuse of threads.
- It provides an abstract way to parameterise threading architecture, i.e. it allows the programmer to define which thread s/he wishes to receive callbacks in from cooperating components.
- It can be used to eliminate thread deadlocking issues, and remove mutex use from much application code.
- It can help break recursive calling cycles in event driven code.

The scheme is wide open for the addition of more models of the defer concept. The base defer class provides the queue facilities, and simply requires the deriving classes to implement a 'signal' function, in which to perform an implementation-specific wakeup or notification to the appropriate thread. Three examples of possible other styles are:

- demuxer-style thread parking - create the object and then call a run() function in an appropriate thread
- recursion breaking - the first defer in a given thread executes immediately; subsequent recursive defers on the same object are queued to be serviced in turn before the thread finally leaves the object
- thread creating - creates an entirely new thread per work item

All comments again welcome, though I know it's a busy time for boosters with the huge effort going into the next release.

Cheers
Simon
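A hedged sketch of what deriving a new model might look like. Apart from the 'signal' hook and the use of boost::function work items, the base-class name, the queue details and the example style are assumptions made for illustration, not the code in the sandbox.

#include <cstddef>
#include <deque>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/thread.hpp>

namespace defer {

// assumed shape of the queueing base class: Defer() enqueues the work item
// and asks the derived class, via signal(), to arrange for drain() to be
// called in whichever thread it owns
class defer_base
{
public:
    virtual ~defer_base() {}

    void Defer(const boost::function<void()>& work)
    {
        {
            boost::mutex::scoped_lock lock(mutex_);
            queue_.push_back(work);
        }
        signal(); // implementation-specific wakeup
    }

protected:
    virtual void signal() = 0;

    // run all pending work items; called from the implementation's thread
    void drain()
    {
        std::deque<boost::function<void()> > pending;
        {
            boost::mutex::scoped_lock lock(mutex_);
            pending.swap(queue_);
        }
        for (std::size_t i = 0; i < pending.size(); ++i)
            pending[i]();
    }

private:
    boost::mutex mutex_;
    std::deque<boost::function<void()> > queue_;
};

// the "thread creating" style from the list above: each signal spawns a new
// thread to service the queue (wasteful, but shows how little a new model
// has to implement)
class defer_new_thread : public defer_base
{
protected:
    virtual void signal()
    {
        // the thread object goes out of scope immediately, leaving the
        // newly created thread running detached
        boost::thread runner(boost::bind(&defer_new_thread::drain, this));
    }
};

} // namespace defer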
participants (4)
- Christopher Kohlhoff
- Larry Evans
- Pedro Lamarão
- simon meiklejohn