
Firstly, I think the library should be accepted into Boost. I can't see any particular blocking issues that would prevent it from being accepted, and it is high time that some network library is available in Boost, to serve as a basis for more abstract facilities.

- What is your evaluation of the design?

Fine, based on well understood models.

- What is your evaluation of the implementation?

I haven't looked.

- What is your evaluation of the documentation?

The documentation is quite good. I would prefer a slightly more discursive style, but the reference information is all there, and the tutorials are a good introduction. I think the examples each need a description explaining what they exemplify. Also, many of the examples contain some code repetition, where a handler repeats task initiation code also found outside the handler; I think these should be refactored.

- What is your evaluation of the potential usefulness of the library?

Enormous.

- Did you try to use the library? With what compiler? Did you have any problems?

No.

- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?

Based on reading the documentation only.

- Are you knowledgeable about the problem domain?

Sockets programming, yes; asynch frameworks, no.

I have a number of questions regarding the current proposal (I don't think any of these require changes before acceptance, but I am curious). Please excuse the lack of structure to these questions :)

I don't really follow the intent of the locking dispatcher. It appears to me that it simply achieves the same effect as adding a scoped lock to each dispatched method (each using the same mutex). This gives a small reduction in code, but doesn't immediately strike me as giving the code greater clarity.
Also, the shared mutex that would otherwise be used in each method would be available for protecting other related code, while the locking_dispatcher hides the mutex, so that all protected code must be invoked via the demultiplexer. Perhaps a better example than the one in the tutorial would make the benefit of the dispatcher clearer?

The locking_dispatcher's 'wrap' method seems to be its primary purpose. Since the terminology is a little unnatural, perhaps it could be operator()?

I would like to see a user-supplied handler option for handling genuine errors, not per-call but at a higher level (maybe per-demuxer?). Typically, the response to an error is to close the socket, once open; it would be handy to supply a callback which received enough context to close down a socket, and in all other code to ignore error-handling completely. I do not think EOF should constitute an error, and I would expect that try-again and would-block errors could be safely ignored by the user. Perhaps EOF should be a testable state of the socket, rather than a potential error resulting from 'receive'.

Can the proactor be made to work with multiple event sources, with the current interface? For example, wrapping simultaneous network IO and aio-based file IO on POSIX?

I wonder how the demuxer would be incorporated into a GUI event-loop-based program, particularly in toolkits which mandate that GUI updates can occur from a single thread only? Perhaps the 'run' member function could be called in a non-blocking fashion? Or perhaps the demuxer could queue ready handler callbacks for later execution, rather than invoking them directly? Then the GUI thread could execute all pending handler functions on idle.

I believe there should be an automatically managed buffer-list class, to provide simple heap-based, garbage-collected buffer management, excluding the possibility of buffer overruns and memory leaks.
Perhaps the one in the Giallo library could be used without significant modification, and supported by the asio buffer primitives. Also, this abstraction should be used in the tutorials; although it is useful and practical to support automatically allocated buffer storage, it shouldn't be needlessly encouraged.

I also would like to request some commentary (in the library documentation) on future developments that are out of scope for the current proposal. Given the late juncture at which we're looking at bringing network programming into Boost, I think it's important to consider desirable extensions, and how the current proposal will support them. Much of this will be discussed in the review, of course.

One thing I would like to see (even in Asio itself) is a simplest-possible layer over the bare sockets layer, which took care of resource management and the intricacies of sockets programming. Ideally, this would leave an interface comparable to asynch IO programming in Python and the like, suitable for the smallest networking tasks. Other, more complicated layers, such as socketstreams and compile-time filter-chaining as in Giallo, could be provided as libraries built on Asio.

Matt

On Tue, 13 Dec 2005 20:34:45 +1000, Matthew Vogt <mattvogt@warpmail.net> wrote:
I wonder how the demuxer would be incorporated into a GUI event-loop-based program, particularly in toolkits which mandate that GUI updates can occur from a single thread only? Perhaps the 'run' member function could be called in a non-blocking fashion? Or perhaps the demuxer could queue ready handler callbacks for later execution, rather than invoking them directly? Then the GUI thread could execute all pending handler functions on idle.
I was wondering the same thing. For Windows clients I like WSAAsyncSelect() and just driving the sockets through the GUI message loop, since I don't have to worry about threading issues.

Another reason I like it is because I can have several dozen windows open, each one a separate component. I really can't afford to have several threads (at least one to drive the demuxing and one for the async resolves) created for each window, and I would prefer not to require each window to have to register with a shared demuxing thread.

Any ideas?

--
Be seeing you.

Chris and Thore,

----- Original Message -----
From: "Thore Karlsen" <sid@6581.com>
Newsgroups: gmane.comp.lib.boost.devel
Sent: Wednesday, December 14, 2005 4:53 AM
Subject: Re: Asio formal review
On Tue, 13 Dec 2005 20:34:45 +1000, Matthew Vogt <mattvogt@warpmail.net> wrote:
I wonder how the demuxer would be incorporated into a GUI event-loop-based program, particularly in toolkits which mandate that GUI updates can occur from a single thread only?

I was wondering the same thing. For Windows clients I like WSAAsyncSelect() and just driving the sockets through the GUI message loop, since I don't have to worry about threading issues. Another reason I like it is because I can have several dozen windows open, each one a separate component. I really can't afford to have several threads (at least one to drive the demuxing and one for the async resolves) created for each window, and I would prefer not to require each window to have to register with a shared demuxing thread.
Hope Chris doesn't mind me jumping in here.

Background: in Win32 you can send things to the GUI thread from any other thread by posting a message to _any_ window. The window procedure associated with that window will be called with that message in the GUI thread (or more specifically, in the thread on which the window was created), provided that thread has a message pump running.

Use with asio: it's feasible to use this to create a dispatcher class, similar to asio::locking_dispatcher, which can be used to send function objects to be executed in the GUI thread (call this asio::gui_dispatcher). This component internally creates a hidden window. A call to gui_dispatcher.post() or dispatch() queues the function object and posts a message to the hidden window. The window proc associated with the window then dequeues the function object and calls it.

I don't think you need to register every window with asio. Just create one gui_dispatcher and use it to wrap your Handlers before passing them to asio socket functions.

(There's a zip in the vault, Defer.zip, under Concurrent Programming, that does something like this, but not as an asio::dispatcher model.)

Cheers
Simon

On Wed, 14 Dec 2005 07:50:38 +1300, "simon meiklejohn" <simon@simonmeiklejohn.com> wrote: [using asio with Windows message loops]
Use with asio: it's feasible to use this to create a dispatcher class, similar to asio::locking_dispatcher, which can be used to send function objects to be executed in the GUI thread (call this asio::gui_dispatcher). This component internally creates a hidden window. A call to gui_dispatcher.post() or dispatch() queues the function object and posts a message to the hidden window. The window proc associated with the window then dequeues the function object and calls it.
I don't think you need to register every window with asio. Just create one gui_dispatcher and use it to wrap your Handlers before passing them to asio socket functions.
(There's a zip in the vault, Defer.zip, under Concurrent Programming, that does something like this, but not as an asio::dispatcher model.)
It still seems to me that you would have to register the sockets with a shared demuxer. That wouldn't work for me, because in addition to using my components internally in my application (where I could do it), I also wrap them up in ActiveX controls. Other applications use several of these controls, and a shared demuxer isn't going to be possible. 2 more threads per control (plus one more if I want timers) is going to push the thread count too high. I don't know if it's feasible to use asio for something like this, but it would be very nice if I could. -- Be seeing you.

Hi Thore, --- Thore Karlsen <sid@6581.com> wrote:
I was wondering the same thing. For Windows clients I like WSAAsyncSelect() and just driving the sockets through the GUI message loop, since I don't have to worry about threading issues. Another reason I like it is because I can have several dozen windows open, each one a separate component. I really can't afford to have several threads (at least one to drive the demuxing and one for the async resolves) created for each window, and I would prefer not to require each window to have to register with a shared demuxing thread.
Any ideas?
I have pondered this from time to time, but haven't been able to decide on the ideal interface. Here are some of the candidates:

- A demuxer::run() overload that takes a timeout (possibly two overloads, one taking an absolute time and the other taking a duration), which means it will run for that long regardless of how much work it performs.
- A demuxer::perform_work() function that runs until it has performed exactly one item of work.
- A demuxer::perform_work() function that takes a timeout and runs until the time has elapsed or it performs one item of work.

The problems I ran into when thinking about this include:

- How to tell that the demuxer has finished all its work? A boolean return value from the above functions? Is it better for true to mean it's done, or that it has more to do?
- What happens with multiple threads calling these functions? What is the correct interaction between these functions and demuxer::interrupt() and demuxer::reset()? Do we need an additional return value from run()/perform_work() that means the demuxer was interrupted?

I see all these options as additions to the current interface, so I chose to leave it as a task for future thought. I'd welcome any input on it you might have.

Cheers,
Chris

Christopher Kohlhoff <chris <at> kohlhoff.com> writes:
Hi Chris, sorry if this reply turns up twice; my internet access is borked and I'm having fun with GMANE...
I have pondered this from time to time, but haven't been able to decide on the ideal interface. Here are some of the candidates:
- demuxer::run() overload that takes a timeout (possibly two overloads, one taking an absolute time and the other taking a duration) which means it will run for that long regardless of how much work it performs.
I think this is sensible.
- demuxer::perform_work() function that runs until it has performed exactly one item of work.
- demuxer::perform_work() function that takes a timeout and runs until the time has elapsed or it performs one item of work.
I don't see the relevance of performing one unit of work. I think my preferred model might be to queue up events that need to be handled, and to process all previously queued events during a call to perform_work. New items should not be added to the queue that is being processed during the perform_work call, or the call may never return...

With this function, my theoretical GUI app which must process the handlers in the main thread would use a timer to periodically process IO events. I think this would give a more deterministic result than relying on idle processing.
The problems I ran into when thinking about this include:
- How to tell that the demuxer has finished all its work? Boolean return value from the above functions? Is it better for true to mean its done or that it has more to do?
Use an enum for the return code?
- What happens with multiple thread calling these functions? What is the correct interaction between these functions and demuxer::interrupt() and demuxer::reset()? Do we need an additional return value from run()/perform_work() that means the demuxer was interrupted?
Sorry, I don't understand the situations in which one would use interrupt and reset, so I can't comment on how they're affected.

Also, multi-threaded apps would be a different kettle of fish. Why would multiple threads each want to process an arbitrary subset of pending events? A possible scenario is one using the same GUI system, where a pool of threads process IO events. If only one thread can update the GUI, then operations which result in an update of the GUI would have to be dispatched out of the thread pool, so that the main thread will handle them. Handlers not changing the GUI could be dispatched within the thread pool. Would this design call for two demuxers? Or is it not workable in asio?

Matt

Hi Matt, --- Matt Vogt <mattvogt@warpmail.net> wrote:
- demuxer::run() overload that takes a timeout (possibly two overloads, one taking an absolute time and the other taking a duration) which means it will run for that long regardless of how much work it performs.
I think this is sensible.
For now, I am just going to look at doing the above, plus having an enum return value from run() indicating "interrupted", "out_of_work", "timed_out", etc. I think these changes have obvious utility, provided I can get the semantics correct and unsurprising.

If you pass a timeout of zero, I'm going to make it only extract and execute handlers from the queue while the system clock remains unchanged. This will prevent the problem you described, where additional handlers being queued might make the run function never return.

Also, to clarify my original statement about running for that long regardless of how much work it performs: I think it should not wait until the timeout if there is no work to be performed. If no work is pending (i.e. there are no async operations in progress), it should return immediately. This is how the current run() function works, where it has an implicit infinite timeout. <snip>
A possible scenario is one using the same GUI system, where a pool of threads process IO events. If only one thread can update the GUI, then operations which result in an update of the GUI would have to be dispatched out of the thread pool, so that the main thread will handle them. Handlers not changing the GUI could be dispatched within the thread pool.
Would this design call for two demuxers?
You could do it with two demuxers, yes (with the changes proposed above). Basically, whenever you're ready to have a handler dispatched to the main thread's demuxer, you can either post it directly to that demuxer, or, when you initiate the operation, wrap the handler so that it goes to the main thread demuxer, e.g.:

  async_read(s, bufs, main_thread_demuxer.wrap(handler));

An alternative could be some implementation of the Dispatcher concept for dispatching into a Windows message loop. This has been discussed in some other posts.

Cheers,
Chris

----- Original Message -----
From: "Christopher Kohlhoff" <chris@kohlhoff.com>
To: <boost@lists.boost.org>
Sent: Tuesday, December 20, 2005 4:15 PM
Subject: Re: [boost] Asio formal review
Hi Matt,
--- Matt Vogt <mattvogt@warpmail.net> wrote:
- demuxer::run() overload that takes a timeout (possibly two overloads, one taking an absolute time and the other taking a duration) which means it will run for that long regardless of how much work it performs.
I think this is sensible.
For now, I am just going to look at doing the above plus having an enum return value from run() indicating "interrupted", "out_of_work", "timed_out", etc. I think these changes have obvious utility, provided I can get the semantics correct and unsurprising.
[SNIP] I promised myself I'd shut up about these issues. Oh well...

Wouldn't this problem be solved more naturally if an interface were exposed out of the demuxer for signalling the availability of more work as it arrives? The GUI thread could then be made to call back into demuxer::run only when appropriate, rather than busily during idle time.

I'm no expert on GUI frameworks, but if they expect to be called only in some designated thread, then I'd be surprised if they didn't provide some channel for reflecting a minimal set of events into that thread. As previously discussed, the win32 message pump can easily be hijacked in this way.

Cheers
Simon

Hi Simon, --- simon meiklejohn <simon@simonmeiklejohn.com> wrote:
Wouldn't this problem be solved more naturally if an interface were exposed out of the demuxer for signalling the availability of more work as it arrives? The GUI thread could then be made to call back into demuxer::run only when appropriate, rather than busily during idle time.
Bit of a chicken-and-egg issue there. You don't want to call demuxer::run unless there's work to do, but you can't know if there's work to do unless you call demuxer::run to dequeue events from the OS's event demultiplexer :) This sort of use case is probably better solved using your proposed windows message pump dispatcher, but I can also see a use for short, timed invocations of demuxer::run. Cheers, Chris

----- Original Message -----
From: "Christopher Kohlhoff" <chris@kohlhoff.com>
To: <boost@lists.boost.org>
Sent: Wednesday, December 21, 2005 11:59 AM
Subject: Re: [boost] Asio formal review
Hi Simon,
--- simon meiklejohn <simon@simonmeiklejohn.com> wrote:
Wouldn't this problem be solved more naturally if an interface were exposed out of the demuxer for signalling the availability of more work as it arrives? The GUI thread could then be made to call back into demuxer::run only when appropriate, rather than busily during idle time.
Bit of a chicken-and-egg issue there. You don't want to call demuxer::run unless there's work to do, but you can't know if there's work to do unless you call demuxer::run to dequeue events from the OS's event demultiplexer :)
This sort of use case is probably better solved using your proposed windows message pump dispatcher, but I can also see a use for short, timed invocations of demuxer::run.
Thinking a little bit more on this. Isn't it true that the asio library is essentially looking at what platform it's compiled for and self-configuring for the optimal needs of that platform? That's great for the main use cases - but platforms like windows are essentially a hybrid of gui architecture + 'console' architecture, and you have chosen the 'console' side as giving higher throughput.

I'd heartily agree that the gui environment on win32 is far from optimal from a performance perspective (i.e. a windows message per io operation sucks). However, we're not always interested in high performance - sometimes what we'd prefer is a better mating to the gui platform, i.e. a demuxer implementation that can trade performance for architectural simplicity, i.e. inject the message pump as a delivery mechanism into a specialisation of demuxer.

A naive approach (i.e. all that comes to mind for me) would be to add a virtual method to demuxer. Where currently demuxer is calling the Handler directly, instead pass it to the virtual function 'deliver_handler( Handler& handler )'. Then someone who wants gui delivery can create a class derived from the asio::demuxer typedef, override deliver_handler, manage their own internal queue of Handlers, and create a cross-thread notification mechanism appropriate to the platform.

The virtual function would impose a cost on all platforms, so perhaps a special-purpose demuxer base class could be created (a special policy, perhaps) intended for derivation, e.g. asio::forwarding_demuxer.

Cheers
Simon

Hi Simon, Even though you've already left for the holidays, just for the record... --- simon meiklejohn <simon@simonmeiklejohn.com> wrote: <snip>
A naive approach (i.e. all that comes to mind for me) would be to add a virtual method to demuxer. Where currently demuxer is calling the Handler directly, instead pass it to the virtual function 'deliver_handler( Handler& handler )'.
Then someone who wants gui delivery can create a class derived from the asio::demuxer typedef, override deliver_handler, manage their own internal queue of Handlers, and create a cross-thread notification mechanism appropriate to the platform.
The virtual function would impose a cost on all platforms, so perhaps a special-purpose demuxer base class could be created (a special policy, perhaps) intended for derivation, e.g. asio::forwarding_demuxer.
I'll investigate using something similar to my proposed custom allocation hook for this. E.g.:

template <typename Handler>
class handler_dispatch_hook
{
public:
  template <typename Demuxer, typename Function_Object>
  static void dispatch(Demuxer& d, Handler& h, Function_Object& f)
  {
    f();
  }
};

Or something like that anyway. Hope you enjoy your break.

Cheers,
Chris

From: "Christopher Kohlhoff" <chris@kohlhoff.com>
I'll investigate using something similar to my proposed custom allocation hook for this. E.g.:
template <typename Handler>
class handler_dispatch_hook
{
public:
  template <typename Demuxer, typename Function_Object>
  static void dispatch(Demuxer& d, Handler& h, Function_Object& f)
  {
    f();
  }
};

Or something like that anyway.
That looks fine. I suppose it would be possible for a given program to mix custom and uncustomised demuxers using this technique. Whereas I can't see much of a use case for doing different kinds of poll within a program, a mix of callback strategies does seem genuinely useful.
Hope you enjoy your break.
You too. You probably need one more than the rest of us. Best of luck with the review. Simon

Hi Matt, --- Matthew Vogt <mattvogt@warpmail.net> wrote: <snip>
I don't really follow the intent of the locking dispatcher. It appears to me that it simply achieves the same effect as adding a scoped lock to each dispatched method (each using the same mutex).
It's not quite the same thing. A scoped lock in each dispatched handler means that the method may be blocked while waiting to acquire the mutex. A blocked handler means that other handlers may also be prevented from executing, even though they are ready to go. The locking_dispatcher ensures that the handler won't even be dispatched until the lock is acquired. This means that the execution of other handlers can continue unimpeded.

I'd also like to note here that I intend asio to allow and encourage (although not enforce) the development of applications without any explicit locking of mutexes. <snip>
The locking_dispatcher's 'wrap' method seems to be its primary purpose. Since the terminology is a little unnatural, perhaps it could be operator() ?
The wrap() function returns a new function object that wraps the provided one. It doesn't actually execute code at that point, so I'm not sure operator() would convey that meaning.
I would like to see a user-supplied handler option for handling genuine errors, not per-call but at a higher level (maybe per-demuxer?) Typically, the response to an error is to close the socket, once open; it would be handy to supply a callback which received enough context to close down a socket, and in all other code to ignore error-handling completely.
I'm not convinced that this belongs as part of asio's interface, since there are a multitude of ways to handle errors. For example, there's the issue of what happens in async operations, such as asio::async_read, that are composed of other async operations. You wouldn't want the application-specific error handling to be invoked until the composed operation had finished what it was doing.

I think the way to go is to use function object composition at the point where you start the asynchronous operation, e.g.:

  async_recv(s, bufs, add_my_error_handling(my_handler));

or perhaps:

  async_recv(s, bufs, combine_handlers(error_handler, ok_handler));

Also, if the default behaviour you want is to close the socket (although often there's an associated application object to be cleaned up too), you can simply bind a shared_ptr to the object as a parameter to your handlers:

  async_recv(s, bufs, boost::bind(handler, connection_ptr, ...));

Essentially, the object is owned by the chain of operations. When the chain of async operations and their handlers is terminated due to an error (or any other condition), the object is cleaned up automatically.
I do not think EOF should constitute an error, and I would expect that try-again and would-block errors could be safely ignored by the user. Perhaps EOF should be a testable state of the socket, rather than a potential error resulting from 'receive'.
In older versions of asio, EOF was indicated by having a read return 0, the same as the BSD sockets interface. However, this increased complexity in composed synchronous and asynchronous operations, such as the now-defunct async_read_n(), which had to return both the total number of bytes transferred *and* the last bytes transferred to allow checking for EOF.

Now, with EOF as an error, the problem can be addressed far more clearly and elegantly. For example, consider:

  asio::read(socket, buffers, asio::transfer_all());

Its "contract" is to fill the buffers, using multiple underlying read_some calls as necessary. If it is unable to fulfill the contract, it must return with an error indicating why. If the stream closed early, then that error is EOF.
Can the proactor be made to work with multiple event sources, with the current interface? For example, wrapping simultaneous network IO and aio-based file IO on POSIX?
Yes, although in the current implementation one of them must be relegated to a background thread. <snip>
I believe there should be an automatically managed buffer-list class, to provide simple heap-based, garbage-collected buffer management, excluding the possibility of buffer overruns and memory leaks. Perhaps the one in the Giallo library could be used without significant modification, and supported by the asio buffer primitives. Also, this abstraction should be used in tutorials; although it is useful and practical to support automatically allocated buffer storage, it shouldn't be needlessly encouraged.
I would rather see such a buffer-list utility developed as a separate Boost library. As you say, it can integrate easily with asio provided it supports the asio buffer primitives, or if it implements the Mutable_Buffers and Const_Buffers concepts.

However, I don't agree that automatically allocated buffer storage shouldn't be encouraged. No allocation means no leaks, predictable memory usage, less fragmentation, etc. I.e., it's one of the strengths of C++. Furthermore, it may be more appropriate in many applications to use a buffer that is a data member of some connection class, where the connection class as a whole is garbage collected, not the buffer.
I also would like to request some commentary (in the library documentation) on future developments that are out of scope for the current proposal. Given the late juncture at which we're looking at bringing network programming into boost, I think it's important to consider desirable extensions, and how the current proposal will support them. Much of this will be discussed in the review, of course.
What sort of things did you have in mind?
One thing I would like to see (even in Asio itself) is a simplest-possible layer over the bare sockets layer, which took care of resource management and the intricacies of sockets programming. Ideally, this would leave an interface comparable to asynch IO programming in python and the like, suitable for the smallest networking tasks.
I'm not sure what you mean here. Did you mean to say that such a layer would leave out asynch I/O? Otherwise if it includes asynch I/O then that's the role that asio is already trying to fill - i.e. be a thin yet portable asynchronous I/O abstraction. <snip> Cheers, Chris

Hi Chris, thanks for your responses.
I don't really follow the intent of the locking dispatcher. It appears to me that it simply achieves the same effect as adding a scoped lock to each dispatched method (each using the same mutex).
It's not quite the same thing. A scoped lock in each dispatched handler means that the method may be blocked while waiting to acquire the mutex. A blocked handler means that other handlers may also be prevented from executing even though they are ready to go.
The locking_dispatcher ensures that the handler won't even be dispatched until the lock is acquired. This means that the execution of other handlers can continue unimpeded.
Ok, I see now. In retrospect, this is adequately described in the tutorial, but the tutorial itself doesn't take advantage of the improvement. Perhaps a more complicated example demonstrating this advantage would be useful?
I'd also like to note here that I intend asio to allow and encourage (although not enforce) the development of applications without any explicit locking of mutexes.
That's an excellent goal.
The locking_dispatcher's 'wrap' method seems to be its primary purpose. Since the terminology is a little unnatural, perhaps it could be operator() ?
The wrap() function returns a new function object that wraps the provided one. It doesn't actually execute code at that point, so I'm not sure operator() would convey that meaning.
Yes, I see your objection, although in terms of user code, it seems a little artificial. I would suggest 'serialise' as another name for 'wrap' - but not unless people other than me find it obtuse.
I would like to see a user-supplied handler option for handling genuine errors, not per-call but at a higher level (maybe per-demuxer?) Typically, the response to an error is to close the socket, once open; it would be handy to supply a callback which received enough context to close down a socket, and in all other code to ignore error-handling completely.
I'm not convinced that this belongs as part of asio's interface, since there are a multitude of ways to handle errors. For example, there's the issue of what happens in async operations, such as asio::async_read, that are composed of other async operations. You wouldn't want the application-specific error handling to be invoked until the composed operation had finished what it was doing.
Sorry, can you give a code example of what you're describing here?
I think the way to go is to use function object composition at the point where you start the asynchronous operation, e.g.:
async_recv(s, bufs, add_my_error_handling(my_handler));
or perhaps:
async_recv(s, bufs, combine_handlers(error_handler, ok_handler));
Yes, but you don't want to obscure every asynch operation by repeating the error code handling directions at the call site, if your error handling will be very similar. Looking at the examples, I expect that existing code using asio must contain large portions of replicated error-handling code, even if the repetition is merely in passing the same handler over and over. Perhaps someone who has a significant body of code using asio could comment?
Also if the default behaviour you want is to close the socket (although often there's an associated application object to be cleaned up too) you can simply bind a shared_ptr to the object as a parameter to your handlers:
async_recv(s, bufs, boost::bind(handler, connection_ptr, ...));
Yes. Although, if you had a defined error hook receiving a reference to the socket in error, it would also be easy to map any other dependent variables or actions to the socket, outside the asio library's functions.
Essentially the object is owned by the chain of operations. When the chain of async operations and their handlers is terminated due to an error (or any other condition) the object is cleaned up automatically.
Sorry, I don't follow this remark. The tutorial and example code seems to show explicit socket close in error conditions...
I do not think EOF should constitute an error, and I would expect that try-again and would-block errors could be safely ignored by the user. Perhaps EOF should be a testable state of the socket, rather than a potential error resulting from 'receive'.
In older versions of asio EOF was indicated by having a read return 0, same as the BSD sockets interface. However this increased complexity in composed synchronous and asynchronous operations, such as the now defunct async_read_n(), which had to return both the total number of bytes transferred *and* the last bytes transferred to allow checking for EOF.
Now with EOF as an error the problem can be addressed far more clearly and elegantly. For example, consider:
asio::read(socket, buffers, asio::transfer_all());
Its "contract" is to fill the buffers, using multiple underlying read_some calls as necessary. If it is unable to fulfill the contract it must return with an error indicating why. If the stream closed early then that error is EOF.
Yes, I see your point. However, I'm looking at this from the point of view that I would prefer errors to be handled out of the normal code path, and for the handling to be as global as possible. With error-handling set up this way, you don't want EOF to be an error, since it is part of the normal sequence of events rather than an exceptional event.

Reading an unknown amount of data is a tricky interface question; I think that I would prefer to test that the socket is still not-at-EOF before continuing to read, as opposed to checking for an EOF return code after each read. This all presumes that the error-handling is configured so that checking the return code for each operation is not required to deal with exceptional events.

The other non-exceptional errors are try-again and would-block, right? I think these can be ignored entirely, since bytes_transferred will give the correct information to enable another read/write operation to be initiated.
Can the proactor be made to work with multiple event sources, with the current interface? For example, wrapping simultaneous network IO and aio-based file IO on POSIX?
Yes, although in the current implementation one of them must be relegated to a background thread.
Good.
I believe there should be an automatically managed buffer-list class, to provide simple heap-based, garbage-collected buffer management, excluding the possibility of buffer overruns and memory leaks. Perhaps the one in the Giallo library could be used without significant modification, and supported by the asio buffer primitives. Also, this abstraction should be used in tutorials; although it is useful and practical to support automatically allocated buffer storage, it shouldn't be needlessly encouraged.
I would rather see such a buffer-list utility developed as a separate boost library. As you say, it can integrate easily with asio provided it supports the asio buffer primitives, or if it implements the Mutable_Buffers and Const_Buffers concepts.
Ok. I don't think it is a significant undertaking, but it might require changes to the existing buffer concepts. I should look into it :)
However, I don't agree that automatically allocated buffer storage shouldn't be encouraged. No allocation means no leaks, predictable memory usage, less fragmentation etc. I.e. it's one of the strengths of C++. Furthermore, it may be more appropriate in many applications to use a buffer that is a data member of some connection class, where the connection class as a whole is garbage collected, not the buffer.
Sorry, by 'not encouraging', I mean that they shouldn't be seen as the first-choice way to use the library. People developing high performance, multi-connection servers will not need to be shown that there are more efficient methods to provide access to data buffers. The people who need the best example are those who are using asio as their first exposure to networking, because it's in that boost thing. These people should see safe ways to use the library before they encounter fast ways to use the library. The current audience for your tutorials is boost developers, but if accepted, the tutorials will be read by many more people with less experience, and I think the tutorials should be targeted at them.
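The managed buffer-list idea under discussion could be as simple as the following sketch. This is not the Giallo class and not an asio facility, just a hypothetical illustration of heap-based, reference-counted buffer management that rules out leaks and fixed-array overruns:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical sketch of a managed buffer list: each buffer is
// heap-allocated and reference-counted, so there is no manual delete
// and no overrun of a fixed-size stack array.
class buffer_list {
public:
    // Allocate a new zero-initialised buffer and keep it alive.
    std::shared_ptr<std::vector<char>> add(std::size_t size) {
        auto buf = std::make_shared<std::vector<char>>(size);
        buffers_.push_back(buf);
        return buf;
    }

    std::size_t count() const { return buffers_.size(); }

    // Total bytes across all managed buffers.
    std::size_t total_size() const {
        std::size_t n = 0;
        for (const auto& b : buffers_) n += b->size();
        return n;
    }

private:
    std::vector<std::shared_ptr<std::vector<char>>> buffers_;
};
```

A real version would also need to model the Mutable_Buffers/Const_Buffers concepts so asio's operations could consume it directly.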
I also would like to request some commentary (in the library documentation) on future developments that are out of scope for the current proposal. Given the late juncture at which we're looking at bringing network programming into boost, I think it's important to consider desirable extensions, and how the current proposal will support them. Much of this will be discussed in the review, of course.
What sort of things did you have in mind?
Basically, a 'Rationale' section and a 'Future Development' section similar to that in the (to pick the first that come to mind) Serialization library and the Iostreams library. The content could all be cropped from review discussions, since the questions raised now will be questions for later users as well.
One thing I would like to see (even in Asio itself) is a simplest-possible layer over the bare sockets layer, which took care of resource management and the intricacies of sockets programming. Ideally, this would leave an interface comparable to asynch IO programming in python and the like, suitable for the smallest networking tasks.
I'm not sure what you mean here. Did you mean to say that such a layer would leave out asynch I/O? Otherwise if it includes asynch I/O then that's the role that asio is already trying to fill - i.e. be a thin yet portable asynchronous I/O abstraction.
Well, I shouldn't have asked this, really. It's hard to look at Asio on its own merits, without factoring in that it will be the first networking library in boost, and likely the only one for some time. New users, and people needing portable, supported networking functionality will have Asio as an obvious option, whether or not their code calls for a 'thin yet portable asynchronous I/O abstraction'. Whether or not concessions should be made for this, I don't know :)

Matt

Hi Matt, --- Matthew Vogt <mattvogt@warpmail.net> wrote:
Ok, I see now. In retrospect, this is adequately described in the tutorial, but the tutorial itself doesn't take advantage of the improvement. Perhaps a more complicated example demonstrating this advantage would be useful?
Yep, fair enough. <snip>
Yes, I see your objection, although in terms of user code, it seems a little artificial. I would suggest 'serialise' as another name for 'wrap' - but not unless people other than me find it obtuse.
This name is part of the wider Dispatcher concept and so can be implemented by things other than locking_dispatcher -- e.g. it is also on the demuxer class. The name 'wrap' didn't feel perfect when I first chose it either, but it has grown on me. <snip>
I'm not convinced that this belongs as part of asio's interface, since there are a multitude of ways to handle errors. For example, there's the issue of what happens in async operations, such as asio::async_read, that are composed of other async operations. You wouldn't want the application-specific error handling to be invoked until the composed operation had finished what it was doing.
Sorry, can you give a code example of what you're describing here?
One possible example: consider a high level async function that receives a complex data structure using multiple underlying asynchronous operations:

  async_read_my_data(Stream& s, my_data& d, Handler h)
  {
    // Start read of first bit of message.
    asio::async_read(s, ...buffers...,
        boost::bind(my_data_handler_1, _1, s, d, h));
  }

  void my_data_handler_1(error& e, Stream& s, my_data& d, Handler h)
  {
    if (e)
    {
      // Need to clean up stuff added to my_data here.
      // Call application handler.
      h(e);
    }
    else
    {
      // Start read of second bit of message.
      asio::async_read(s, ...buffers...,
          boost::bind(my_data_handler_2, _1, s, d, h));
    }
  }

  ...

  void my_data_handler_N(error& e, Stream& s, my_data& d, Handler h)
  {
    if (e)
    {
      // Need to clean up stuff added to my_data here.
    }
    // Call application handler.
    h(e);
  }

Let's say that this function guarantees that, on failure, any intermediate data is removed from the my_data structure. On the flip side, the caller is required to guarantee that the my_data structure is valid until the handler is called. Therefore this composed async operation must be able to ensure that the application handler is not called until it has finished with the my_data structure. <snip>
Essentially the object is owned by the chain of operations. When the chain of async operations and their handlers is terminated due to an error (or any other condition) the object is cleaned up automatically.
Sorry, I don't follow this remark. The tutorial and example code seems to show explicit socket close in error conditions...
For example:

  class connection
    : public enable_shared_from_this<connection>
  {
  private:
    stream_socket socket_;

  public:
    ...

    void start()
    {
      // Start reading some data.
      socket_.async_read_some(buffers,
          boost::bind(&connection::handler, shared_from_this(), _1));
    }

    void handler(error& e)
    {
      if (!e)
      {
        // Process data, then read some more.
        socket_.async_read_some(buffers,
            boost::bind(&connection::handler, shared_from_this(), _1));
      }
    }
  };

The connection object is automatically destroyed, and the socket closed, when there are no more operations associated with it. <snip>
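The ownership idiom described here -- the object kept alive by its own pending handlers and destroyed when the last one completes -- can be demonstrated with plain shared_ptr and no asio at all. The connection class, op_queue (standing in for the demuxer's handler queue), and run() below are all illustrative stand-ins:

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <queue>
#include <utility>

// Stand-in for the demuxer's pending-operation queue.
using op_queue = std::queue<std::function<void()>>;

struct connection : std::enable_shared_from_this<connection> {
    static int live;  // count of live connection objects, for the demo
    connection() { ++live; }
    ~connection() { --live; }

    // Each queued handler holds a shared_ptr back to the connection,
    // as boost::bind(&connection::handler, shared_from_this(), _1) would.
    void start(op_queue& q, int reads_left) {
        if (reads_left > 0) {
            auto self = shared_from_this();
            q.push([self, &q, reads_left] { self->start(q, reads_left - 1); });
        }
        // When no further operation is queued, the last shared_ptr
        // disappears and the connection destroys itself.
    }
};
int connection::live = 0;

// Drain the queue, as demuxer::run() would.
inline void run(op_queue& q) {
    while (!q.empty()) {
        auto op = std::move(q.front());
        q.pop();
        op();
    }
}
```

The point of the sketch is that no explicit close or delete appears anywhere: the chain of operations owns the object, exactly as described above.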
Yes, I see your point. However, I'm looking at this from the point of view that I would prefer errors to be handled out of the normal code path, and for the handling to be as global as possible. With error-handling set up this way, you don't want EOF to be an error, since it is part of the normal sequence of events rather than an exceptional event.
EOF is only in the normal sequence of events for some protocols, and even then often only at certain points in the protocol. E.g. with HTTP you can have responses terminated by EOF, but an early EOF is an error if a content length is specified. Therefore I prefer it to be explicitly handled in those cases, rather than the other way around. <snip>
The other non-exceptional errors are try-again and would-block, right? I think these can be ignored entirely, since bytes_transferred will give the correct information to enable another read/write operation to be initiated.
At some point I need to document all the possible errors for each operation, since I don't believe these examples are widely applicable, especially not with asynchronous operations. In general no errors should be ignored. <snip>
What sort of things did you have in mind?
Basically, a 'Rationale' section and a 'Future Development' section similar to that in the (to pick the first that come to mind) Serialization library and the Iostreams library. The content could all be cropped from review discussions, since the questions raised now will be questions for later users as well.
Actually I meant what sort of ideas did you have for future development, so I can add them to the list? :) <snip> Cheers, Chris

Christopher Kohlhoff <chris <at> kohlhoff.com> writes:
Yes, I see your objection, although in terms of user code, it seems a little artificial. I would suggest 'serialise' as another name for 'wrap' - but not unless people other than me find it obtuse.
This name is part of the wider Dispatcher concept and so can be implemented by things other than locking_dispatcher -- e.g. it is also on the demuxer class. The name 'wrap' didn't feel perfect when I first chose it either, but it has grown on me.
Ok, I withdraw my objections :)
I'm not convinced that this belongs as part of asio's interface, since there are a multitude of ways to handle errors. For example, there's the issue of what happens in async operations, such as asio::async_read, that are composed of other async operations. You wouldn't want the application-specific error handling to be invoked until the composed operation had finished what it was doing.
Sorry, can you give a code example of what you're describing here?
<snip example, showing error handling varying slightly during a linked series of dependent IO operations>
Let's say that this function guarantees that, on failure, any intermediate data is removed from the my_data structure. On the flip side, the caller is required to guarantee that the my_data structure is valid until the handler is called. Therefore this composed async operation must be able to ensure that the application handler is not called until it has finished with the my_data structure.
Ok, here's what I would like to see. First, I want to supply a handler for errors, to the demuxer. I want this handler to be called back when required, with the error code that occurred and the socket to which the error pertains. Something like:

  void my_error_handler(asio::error e, shared_ptr<socket> s)
  {
    // Clean up resources associated with this socket.
  }

Then I want to ignore error handling completely in mainline code. If I have a chain of interconnected operations, I can have a map<socket_id, my_operation_data> to which I add the context required to revert an operation which fails after partial completion. I can use this map to free any resources or to revert any state changes in my_error_handler, or I can remove the context when the entire operation completes. Of course, not all errors pertain to sockets, but the idea can be generalised.
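A toy version of the per-socket context map described above. Everything here is hypothetical (my_operation_data, the int socket id, the handler names); it only illustrates the shape of centralised cleanup:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical per-operation context, e.g. partially received data
// that must be reverted if the chain of operations fails.
struct my_operation_data {
    std::string partial;
};

using context_map = std::map<int, my_operation_data>;  // keyed by socket id

// The centralised error hook: discard everything associated with the
// failed socket, so mainline code never handles the error itself.
inline void my_error_handler(int socket_id, context_map& contexts) {
    contexts.erase(socket_id);
}

// Called when the whole composed operation completes successfully.
inline void on_complete(int socket_id, context_map& contexts) {
    contexts.erase(socket_id);
}
```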
Essentially the object is owned by the chain of operations. When the chain of async operations and their handlers is terminated due to an error (or any other condition) the object is cleaned up automatically.
Sorry, I don't follow this remark. The tutorial and example code seems to show explicit socket close in error conditions...
For example:
class connection
  : public enable_shared_from_this<connection>
{
private:
  stream_socket socket_;

public:
  ...

  void start()
  {
    // Start reading some data.
    socket_.async_read_some(buffers,
        boost::bind(&connection::handler, shared_from_this(), _1));
  }

  void handler(error& e)
  {
    if (!e)
    {
      // Process data, then read some more.
      socket_.async_read_some(buffers,
          boost::bind(&connection::handler, shared_from_this(), _1));
    }
  }
};
The connection object is automatically destroyed, and the socket closed, when there are no more operations associated with it.
Well, that's subtle. I'm not sure if it's a good idea or not.
EOF is only in the normal sequence of events for some protocols, and even then often only at certain points in the protocol. E.g. with HTTP you can have responses terminated by EOF, but an early EOF is an error if a content length is specified. Therefore I prefer it to be explicitly handled in those cases, rather than the other way around.
In this case, the situation can still be handled explicitly by the caller, but they would make their decision based on socket.eof(), rather than inspection of a resulting error-code.
The other non-exceptional errors are try-again and would-block, right? I think these can be ignored entirely, since the bytes_transferred will give the correct information to enable another read/write operation to initiated.
At some point I need to document all the possible errors for each operation, since I don't believe these examples are widely applicable, especially not with asynchronous operations. In general no errors should be ignored.
Ok, that's preferable. I think both of the currently available error-handling options are inferior. C-style error return codes have long been shown to be inadequate, and the exception-throwing option is rendered almost useless because the thrown object is an error code. This means you essentially need a catch for every operation in order to know the context, and it can't be used for async operations. If it were a pair of error-code and socket, and could be used asynchronously, it would be useful, but propagating the exception outside demuxer::run seems messy, since multiple threads could be calling 'run'.
What sort of things did you have in mind?
Basically, a 'Rationale' section and a 'Future Development' section similar to that in the (to pick the first that come to mind) Serialization library and the Iostreams library. The content could all be cropped from review discussions, since the questions raised now will be questions for later users as well.
Actually I meant what sort of ideas did you have for future development, so I can add them to the list? :)
Well, socketstreams are what people obviously want. I've never used one, so I don't know how they play out in the real world, unfortunately. One idea that seems useful is the compile-time stream composition Hugo Duncan implemented in his giallo library. It allows composing sequences of data handlers-and-possibly-transformers, similar to the Boost Iostreams Library. You can see an example of it here: http://tinyurl.com/cmfby

[Split for gmane: <http://cvs.sourceforge.net/viewcvs.py/giallo/giallo/libs/net/example/http_server.cpp?rev=1.10&view=auto>]

Of course, these things need to be tested with real code to show their actual value.

Matt

Hi Matt, --- Matt Vogt <mattvogt@warpmail.net> wrote:
Ok, here's what I would like to see.
First, I want to supply a handler for errors, to the demuxer. I want this handler to be called back when required, with error code that occurred, and the socket to which error pertains.
Something like:

  void my_error_handler(asio::error e, shared_ptr<socket> s)
  {
    // Clean up resources associated with this socket.
  }
Then I want to ignore error handling completely in mainline code.
But we've still got the problem of composed operations. Perhaps what you want could be achieved by wrapping the socket in an application class that automatically created a new handler function object with the appropriate error handling. E.g.:

  template <typename Stream>
  class stream_wrapper
  {
  public:
    stream_wrapper(..., function<void(const error&)> f);
    ...
    template <typename Const_Buffers, typename Handler>
    void async_read_some(const Const_Buffers& bufs, Handler h)
    {
      stream_.async_read_some(bufs, split_error(f_, h));
    }
    ...
  };

<snip>
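A guess at what the split_error helper used above could look like. It is not an asio facility; this is just a sketch of the idea: a function object that forwards an error to the wrapper's error callback f and a successful completion to the normal handler h:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>

// Hypothetical error type standing in for asio::error.
struct error {
    int code = 0;
    explicit operator bool() const { return code != 0; }
};

// Sketch of split_error: on error, call the wrapper's error function;
// otherwise pass through to the real completion handler.
inline std::function<void(const error&, std::size_t)>
split_error(std::function<void(const error&)> f,
            std::function<void(const error&, std::size_t)> h) {
    return [f, h](const error& e, std::size_t bytes_transferred) {
        if (e) f(e);
        else h(e, bytes_transferred);
    };
}
```

One consequence of this shape is that the per-operation handler h never sees the error path at all, which is exactly the separation being asked for.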
The connection object is automatically destroyed, and the socket closed, when there are no more operations associated with it.
Well, that's subtle. I'm not sure if it's a good idea or not.
This approach removes the need for explicit resource management (i.e. closing the socket) which I think is, on balance, a good thing.
EOF is only in the normal sequence of events for some protocols, and even then often only at certain points in the protocol. E.g. with HTTP you can have responses terminated by EOF, but an early EOF is an error if a content length is specified. Therefore I prefer it to be explicitly handled in those cases, rather than the other way around.
In this case, the situation can still be handled explicitly by the caller, but they would make their decision based on socket.eof(), rather than inspection of a resulting error-code.
An implementation of what you propose would, I think, require too great a coupling between the socket and the completion handler for it to be useful generally (since I believe handling EOF as a non-error is the exception rather than the rule). Let's assume that the socket implementation contained a boolean data member indicating whether EOF had been reached. Firstly, this member would have to be updated upon completion of an operation. There's no guarantee that this will occur in any particular thread if multiple threads call demuxer::run, so some synchronisation (and associated cost) would be required. Secondly there is no guarantee that the original socket object still exists at the time the completion handler is delivered. <snip>
I think both of the currently available error-handling options are inferior. C-style error return codes have long been shown to be inadequate, and the exception-throwing option is rendered almost useless because the thrown object is an error code. This means you essentially need a catch for every operation in order to know the context, and it can't be used for async operations.
For the synchronous operations you can create your own error handler object to add the required context, e.g.:

  socket.connect(endpoint, throw_my_error("connect"));
If it were a pair of error-code and socket, and could be used asynchronously, it would be useful, but propagating the exception outside demuxer::run seems messy, since multiple threads could be calling 'run'.
In respect of exceptions and demuxer::run, I hope I have defined the behaviour clearly. Specifically, the demuxer cannot know about all exception types, so exceptions are allowed to safely propagate to outside the run call where they can be handled. This only affects the thread where the exception is raised. After handling the exception, that thread may immediately call demuxer::run again to return to the pool. <snip>
One idea that seems useful is the compile-time stream composition Hugo Duncan implemented in his giallo library. It allows composing sequences of data handlers-and-possibly-transformers, similar to the Boost Iostreams Library. You can see an example of it here: http://tinyurl.com/cmfby
[Split for gmane: <http://cvs.sourceforge.net/viewcvs.py/giallo/giallo/ libs/net/example/http_server.cpp?rev=1.10&view=auto> ]
Asio has something vaguely similar in its stream layers. For example the ssl::stream template can be layered over a stream socket like so:

  typedef ssl::stream<stream_socket> ssl_stream_socket;

Maybe this can be extended to support more complex composition scenarios.

Cheers, Chris

Christopher Kohlhoff <chris <at> kohlhoff.com> writes:
But we've still got the problem of composed operations. Perhaps what you want could be achieved by wrapping the socket in an application class that automatically created a new handler function object with the appropriate error handling. E.g.:
template <typename Stream>
class stream_wrapper
{
public:
  stream_wrapper(..., function<void(const error&)> f);
  ...
  template <typename Const_Buffers, typename Handler>
  void async_read_some(const Const_Buffers& bufs, Handler h)
  {
    stream_.async_read_some(bufs, split_error(f_, h));
  }
  ...
};
Yes, that's not a bad idea. Of course, you don't want to rewrite all the forwarding functions with any regularity, so it's probably better to do this with a Handler object, constructed with an error-handling function and then supplied with the success-handling function for each new operation. Is split_error documented somewhere?
Well, that's subtle. I'm not sure if it's a good idea or not.
This approach removes the need for explicit resource management (i.e. closing the socket) which I think is, on balance, a good thing.
Yes, I guess it is, once enable_shared_from_this is in your working vocabulary.
Let's assume that the socket implementation contained a boolean data member indicating whether EOF had been reached. Firstly, this member would have to be updated upon completion of an operation. There's no guarantee that this will occur in any particular thread if multiple threads call demuxer::run, so some synchronisation (and associated cost) would be required. Secondly there is no guarantee that the original socket object still exists at the time the completion handler is delivered.
Well, that sounds like an insurmountable problem :)
For the synchronous operations you can create your own error handler object to add the required context, e.g.:
socket.connect(endpoint, throw_my_error("connect"));
Perhaps it's been discussed, but is there any particular reason you wouldn't want to define an 'asio::socket_error' subclass of asio::error? Attaching the relevant shared_ptr<socket> to the error code would make some error-handling strategies simpler.
If it were a pair of error-code and socket, and could be used asynchronously, it would be useful, but propagating the exception outside demuxer::run seems messy, since multiple threads could be calling 'run'.
In respect of exceptions and demuxer::run, I hope I have defined the behaviour clearly. Specifically, the demuxer cannot know about all exception types, so exceptions are allowed to safely propagate to outside the run call where they can be handled. This only affects the thread where the exception is raised. After handling the exception, that thread may immediately call demuxer::run again to return to the pool.
Sorry if I'm missing the point, but I don't see what you're saying here. If there are some exceptions that you cannot deal with, why should that prevent you from handling the ones you do know about? If users supply handlers that throw, they will need to deal with the resulting exceptions manually. I don't see how anything is simplified by requiring the user to deal with library exceptions in the same place (the call to demuxer::run)...
Asio has something vaguely similar in its stream layers. For example the ssl::stream template can be layered over a stream socket like so:
typedef ssl::stream<stream_socket> ssl_stream_socket;
Maybe this can be extended to support more complex composition scenarios.
Yes, that seems plausible. Matt

Hi Matt, --- Matt Vogt <mattvogt@warpmail.net> wrote:
Yes, that's not a bad idea. Of course, you don't want to rewrite all the forwarding functions with any regularity, so it's probably better to do this with a Handler object, constructed with an error-handling function and then supplied with the success-handling function for each new operation.
It'd be great if you could have a think about the interface for this object.
Is split_error documented somewhere?
It doesn't exist yet :) <snip>
Perhaps it's been discussed, but is there any particular reason you wouldn't want to define an 'asio::socket_error' subclass of asio::error? Attaching the relevant shared_ptr<socket> to the error code would make some error-handling strategies simpler.
I'm not sure that would be useful enough in general, but perhaps a function object that automatically created a std::pair (or maybe boost::tuple) from the underlying asio::error and whatever you want to attach to it, e.g.:

  sock->read(bufs, throw_error_pair(sock));
If it were a pair of error-code and socket, and could be used asynchronously, it would be useful, but propagating the exception outside demuxer::run seems messy, since multiple threads could be calling 'run'.
In respect of exceptions and demuxer::run, I hope I have defined the behaviour clearly. Specifically, the demuxer cannot know about all exception types, so exceptions are allowed to safely propagate to outside the run call where they can be handled. This only affects the thread where the exception is raised. After handling the exception, that thread may immediately call demuxer::run again to return to the pool.
Sorry if I'm missing the point, but I don't see what you're saying here. If there are some exceptions that you cannot deal with, why should that prevent you from handling the ones you do know about? If users supply handlers that throw, they will need to deal with the resulting exceptions manually. I don't see how anything is simplified by requiring the user to deal with library exceptions in the same place (the call to demuxer::run)...
I think we've lost track of the original point we were discussing here, whatever it was :) Basically asynchronous operations do not throw unless:

- Something really serious has happened, like running out of a critical OS resource.
- The user code throws an exception, or calls some other code that throws an exception.

In both cases I think the appropriate place to handle these exceptions is outside of demuxer::run().

Cheers, Chris

Christopher Kohlhoff <chris <at> kohlhoff.com> writes:
Yes, that's not a bad idea. Of course, you don't want to rewrite all the forwarding functions with any regularity, so it's probably better to do this with a Handler object, constructed with an error-handling function and then supplied with the success-handling function for each new operation.
It'd be great if you could have a think about the interface for this object.
Sure, although probably not before the end of the review period.
Perhaps it's been discussed, but is there any particular reason you wouldn't want to define an 'asio::socket_error' subclass of asio::error? Attaching the relevant shared_ptr<socket> to the error code would make some error-handling strategies simpler.
I'm not sure that would be useful enough in general, but perhaps a function object that automatically created a std::pair (or maybe boost::tuple) from the underlying asio::error and whatever you want to attach to it, e.g.:
sock->read(bufs, throw_error_pair(sock));
That sounds very reasonable. I wish I had some code to test it on... <snip>
I think we've lost track of the original point we were discussing here, whatever it was :) Basically asynchronous operations do not throw unless:
- Something really serious has happened, like running out of a critical OS resource.
- The user code throws an exception, or calls some other code that throws an exception.
In both cases I think the appropriate place to handle these exceptions is outside of demuxer::run().
Yes, I was talking more about the case of a generic handler-wrapper that responded to error conditions by throwing, or a variant of the existing library that would throw exceptions. But you're right, the point is lost. I guess my question boils down to, 'is there an error-handling strategy that could be applied to asio, that frees sockets programming from (tedious, error-prone) manual inspection of return codes?'. You seem to be accustomed to the current situation, while I'm still groping around and hoping you'll hit on a solution for me :) Matt

Hi Chris and Matthew,
I don't really follow the intent of the locking dispatcher. It appear to me that it simply achieves the same effect as adding a scoped lock to each dispatched method (each using the same mutex).
It's not quite the same thing. A scoped lock in each dispatched handler means that the method may be blocked while waiting to acquire the mutex. A blocked handler means that other handlers may also be prevented from executing even though they are ready to go.
The locking_dispatcher ensures that the handler won't even be dispatched until the lock is acquired. This means that the execution of other handlers can continue unimpeded.
From a closer reading of the documents I gather that a call to locking_dispatcher.post( func ) is required to NOT execute func() immediately (i.e. it puts the request in the underlying demuxer queue).
Is that the case? If it's not so, then what occurs when you call locking_dispatcher.post( func ) from a thread that has not called demuxer::run()?

Perhaps you've given one elsewhere, but a description of exactly when func() gets called in a range of scenarios might be nice. e.g. for method() in [post, dispatch], if locking_dispatcher.method( func ) is called from:

1. a foreground thread (no demuxer::run() called)
2. a thread in demuxer::run()
3. ditto, that is currently executing another handler requested through a different locking_dispatcher
4. ditto, executing another handler requested through the same locking_dispatcher.

Is this behaviour something that varies between platforms (demuxer service implementations)?

Thanks again
Simon

Hi Simon, --- simon meiklejohn <simon@simonmeiklejohn.com> wrote:
From a closer reading of the documents I gather that a call to locking_dispatcher.post( func ) is required NOT to execute func() immediately (i.e. it puts the request in the underlying demuxer queue).
Yes, although I would word it slightly differently: a call to post() is required to *guarantee* that func() is not executed immediately. <snip>
Perhaps you've given one elsewhere, but a description of exactly when func() gets called in a range of scenarios might be nice.
E.g. for method() in [post, dispatch], if locking_dispatcher.method( func ) is called from:
1. a foreground thread (no demuxer::run() called)
2. a thread in demuxer::run()
3. ditto, that is currently executing another handler requested through a different locking_dispatcher
4. ditto, executing another handler requested through the same locking_dispatcher.
First, let's review the guarantees made by the demuxer and locking dispatcher in respect of when handlers are executed:
- The demuxer will only execute a handler from within a call to demuxer::run().
- The locking dispatcher will only execute a handler when no other handler for the same locking dispatcher is executing, in addition to the associated demuxer's guarantee (i.e. to only execute the handler within a call to demuxer::run()).
I'll add these sorts of examples to the locking_dispatcher doc, but here they are now:
1. Foreground thread (no demuxer::run() called):
1.a. locking_dispatcher::post() always puts the handler on the queue, since post() will never execute the handler immediately.
1.b. locking_dispatcher::dispatch() puts the handler in the queue, since the locking_dispatcher's guarantee cannot be met.
2. A thread in demuxer::run():
2.a. locking_dispatcher::post() always puts the handler on the queue, since post() will never execute the handler immediately.
2.b. locking_dispatcher::dispatch() will execute the handler immediately if no other handler for the same locking dispatcher is currently executing. The demuxer's guarantee is already met.
3. A handler requested through a different locking dispatcher:
3.a. locking_dispatcher::post() always puts the handler on the queue, since post() will never execute the handler immediately.
3.b. locking_dispatcher::dispatch() will execute the handler immediately if no other handler for the same locking dispatcher is currently executing. The demuxer's guarantee is already met. Handlers dispatched through other locking dispatchers have no effect.
4. A handler requested through the same locking dispatcher:
4.a. locking_dispatcher::post() always puts the handler on the queue, since post() will never execute the handler immediately.
4.b. locking_dispatcher::dispatch() will not execute the handler immediately, since the locking_dispatcher's guarantee cannot be met. That is, a handler for the same locking dispatcher is already executing (i.e. the one we're in).
Is this behaviour something that varies between platforms (demuxer service implemenations)?
No. It's the same everywhere. Cheers, Chris

Chris wrote
Yes, although I would word it slightly differently: a call to post() is required to *guarantee* that func() is not executed immediately. [SNIP] First, let's review the guarantees made by the demuxer and locking dispatcher in respect of when handlers are executed:
- The demuxer will only execute a handler from within a call to demuxer::run().
- The locking dispatcher will only execute a handler when no other handler for the same locking dispatcher is executing, in addition to the associated demuxer's guarantee (i.e. to only execute the handler within a call to demuxer::run()).
I'll add these sort of examples to the locking_dispatcher doc, but here it is now: [SNIP]
That's great, thanks. I appreciate that the behaviour described is sufficiently specified by the guarantees given for the dispatcher and locking_dispatcher, but for us slow ones it helps to be led through it. Incorporating your comments into the docs would be very useful. The one other degenerate case, I guess, is if post or dispatch is called from a foreground thread and no thread ever calls demuxer::run(); then the handler simply won't be called. One final concern I had is about use of asio in multiple static libs linked into a final application. (Examples that would come up in my work in VoIP would be separate libs for media streaming using UDP/RTP, SIP protocol using TCP or UDP, application networking on TCP, logging to file, etc.) Are there issues with this setup? Would there end up being some per-lib static objects/threads, or could I call demuxer::run() from a single point in the app and have it service I/O throughout the app? I probably should be able to figure this out from the source, but when I drill down through the code the answer eludes me. Cheers Simon

Hi Simon, --- simon meiklejohn <simon@simonmeiklejohn.com> wrote: <snip>
The one other degenerate case, I guess, is if post or dispatch is called from a foreground thread and no thread ever calls demuxer::run(); then the handler simply won't be called.
Yep, it will also leak objects on some platforms.
One final concern i had is about use of asio in multiple static libs linked into a final application. (An example that would come up in my work in VOIP would be separate libs for media streaming using udp/rtp, SIP protocol using tcp or udp, application networking on tcp, logging to file etc)
Are there issues with this setup? Would there end up being some per-lib static objects/threads, or could I call demuxer::run() from a single point in the app and have it service I/O throughout the app? I probably should be able to figure this out from the source, but when I drill down through the code the answer eludes me.
If these are static libraries then I believe the linker will coalesce the symbols so that you only end up with one copy of each. I.e. linking static libs isn't really any different from linking object files, as I understand it. In general, most resources in asio, like threads, hang off a demuxer in some way, rather than being static variables. However, sharing a single demuxer across the application also sounds like a viable design to me. Although this may not apply in your case, it's a design approach that lets you combine independent modules into a program that has only one thread. Cheers, Chris

From: "Christopher Kohlhoff" <chris@kohlhoff.com>
Hi Matt,
--- Matthew Vogt <mattvogt@warpmail.net> wrote: <snip>
I would like to see a user-supplied handler option for handling genuine errors, not per-call but at a higher level (maybe per-demuxer?) Typically, the response to an error is to close the socket, once open; it would be handy to supply a callback which received enough context to close down a socket, and in all other code to ignore error-handling completely.
I'm not convinced that this belongs as part of asio's interface, since there are a multitude of ways to handle errors. For example, there's the issue of what happens in async operations, such as asio::async_read, that are composed of other async operations. You wouldn't want the application-specific error handling to be invoked until the composed operation had finished what it was doing.
I think the way to go is to use function object composition at the point where you start the asynchronous operation, e.g.:
async_recv(s, bufs, add_my_error_handling(my_handler));
or perhaps:
async_recv(s, bufs, combine_handlers(error_handler, ok_handler));
I agree with Matt that a clearer separate error path is worthwhile, with Chris that functional composition is the way to go, and also with Matt that it should be part of the library. I'd prefer an interface like the latter option above, but named something more appropriate to its error-filtering approach than combine_handlers: async_recv(s, bufs, asio::split_errors(error_handler, ok_handler)); I've used a similar approach in writing asynch voice applications and found that it scales up well, i.e. that higher- and higher-level composed objects can be created, right up into the application-level code (e.g. in voice apps, implement a menu with complex internal logic that chooses an exit path based on what key the user hit, with error paths if they pressed nothing or hung up). In networking this can correspond to implementing branching in the reception of messages (e.g. based on the contents of a header portion, branch down a suitable path to receive/parse different variable portions). Do you think there's scope for a toolkit to support this style of coding in asio, or would that be a higher-level thing? Cheers Simon
participants (5)
- Christopher Kohlhoff
- Matt Vogt
- Matthew Vogt
- simon meiklejohn
- Thore Karlsen