
Let me start off by saying I'm primarily a Windows developer and don't have any experience with the high-performance I/O APIs available on *nix, so some of my issues may be related to that. I have used I/O Completion Ports extensively in C/C++, have used the async methods in .NET, and have used asio on and off for the last few months.

My immediate thoughts on asio: it generally provides a very good interface for scalable I/O, but I think there is some room for improvement, both in the API and in the implementation. I would *not* recommend asio for inclusion in Boost until at least the IPv6 issue is taken care of.

For the API, one thing that struck me is the almost C-style error handling. I believe the .NET way is better. It provides a point that the developer must call to get the result of the operation, and the sockets can throw any exceptions that would normally be thrown in sync operations. I.e.:

    void MyHandler(IAsyncResult res)
    {
        int len;
        try { len = sock.EndRecv(res); }
        catch (SocketException ex) { /* handle the error */ }
    }

I was disappointed to not see IPv6 there. Yes, it hasn't reached critical mass yet, but it is coming fast and should be mandatory for any socket library. With Windows Vista having a dual stack, I believe IPv6 is something everyone should be preparing for. This is the only showstopper for me. I don't want to see a ton of IPv4-centric applications made, as they can be a pain to make version agnostic.

Talking about IPv6, I would love to see some utility functions for resolving+opening+connecting via host/port and host/service in a single operation. This would encourage a future-proof coding style and is something almost all client applications would use.

I dislike how it forces manual thread management; I would be much happier if the demuxer held threads internally. Because threading is important for maximum performance and scalability on many platforms, the user is now forced to handle pooling if he wants it. On Windows 2000+ it is common to use the built-in thread pool, which will increase or decrease the number of threads being used to get maximum CPU usage with I/O Completion Ports.

Which brings me to the next thing: the timers should be using the lightweight timer queues which come with Win2k. These timer queues also have the advantage of using the built-in thread pool.

Async connecting does not use ConnectEx on platforms which support it. This isn't such a big issue, but it's still a disappointment.

ASIO lacks one important feature: async disconnects. I don't have experience on *nix, but on Windows creating sockets is expensive and a high-performance app can get a significant benefit from recycling them.

A minor issue, but I'm not liking the names of the _some methods. It would be better to just document the fact that a call might not read/write everything you give it instead of forcing it down the user's throat every time they want to use it.

As for documentation, it is fair. It is a bit loopy at times because of the basic_* templates. I don't expect normal usage will care very much about implementation, so perhaps these could be separated from the main documentation. I see no problems beyond that.

-- Cory Nelson http://www.int64.org

On 12/12/05, Cory Nelson <phrosty@gmail.com> wrote: [snip]
For the API, one thing that struck me is the almost C-style error handling. I believe the .NET way is better. It provides a point that the developer must call to get the result of the operation, and that the sockets can throw any exceptions that would normally be thrown in sync operations.
I.e.:

    void MyHandler(IAsyncResult res)
    {
        int len;
        try { len = sock.EndRecv(res); }
        catch (SocketException ex) { /* handle the error */ }
    }
Could you elaborate a little more on what you mean by C-style error handling and the error handling you prefer (.NET)? [snip]
Async connecting does not use ConnectEx on platforms which support it. This isn't such a big issue, but it's still a disappointment.
I agree, ConnectEx is very good. I wouldn't say a disappointment, though; it is very easy to integrate this into the library. [snip]
-- Cory Nelson http://www.int64.org
best regards, -- Felipe Magno de Almeida

On 12/12/05, Felipe Magno de Almeida <felipe.m.almeida@gmail.com> wrote:
On 12/12/05, Cory Nelson <phrosty@gmail.com> wrote:
[snip]
For the API, one thing that struck me is the almost C-style error handling. I believe the .NET way is better. It provides a point that the developer must call to get the result of the operation, and that the sockets can throw any exceptions that would normally be thrown in sync operations.
[snip]
Could you elaborate a little more on what you mean by C-style error handling and the error handling you prefer (.NET)?
In the current asio implementation, a handler looks like this:

    void on_recv(const asio::error& err, size_t len)
    {
        if (!err) { ... } else { ... }
    }

In .NET, a handler looks like this:

    void on_recv(IAsyncResult res)
    {
        int len;
        try { len = sock.EndRecv(res); }
        catch (SocketException ex) { ...; return; }
        ...
    }

The key difference is that asio allows you to check the error but doesn't force it. In .NET, EndRecv *must* be called to get the result of the operation, the result being either a valid return value or an exception. By forcing accountability it plugs a reliability hole. An added benefit is that calling an End* method in .NET will block if the operation isn't finished, so that option is there if you need it.

Nearly all of the STL (the only exception I can immediately think of being streams) throws exceptions on error; why should asio break from the norm?
[snip]
Async connecting does not use ConnectEx on platforms which support it. This isn't such a big issue, but it's still a disappointment.
I agree, ConnectEx is very good. I wouldn't say a disappointment, though; it is very easy to integrate this into the library.
[snip]
-- Cory Nelson http://www.int64.org
best regards, -- Felipe Magno de Almeida
-- Cory Nelson http://www.int64.org

On 12/12/05, Cory Nelson <phrosty@gmail.com> wrote: [snip]
In the current asio implementation, a handler looks like this:

    void on_recv(const asio::error& err, size_t len)
    {
        if (!err) { ... } else { ... }
    }

In .NET, a handler looks like this:

    void on_recv(IAsyncResult res)
    {
        int len;
        try { len = sock.EndRecv(res); }
        catch (SocketException ex) { ...; return; }
        ...
    }
The key difference is that asio allows you to check the error but doesn't force it.
In .NET, EndRecv *must* be called to get the result of the operation, the result being either a valid return value or an exception. By forcing accountability it plugs a reliability hole.
An added benefit of this is that calling an End* method in .NET will block if the operation isn't finished, so that option is there if you need it.
Nearly all of the STL (the only exception I can immediately think of being streams) throws exceptions on error; why should asio break from the norm?
Though I agree with you that the asio error handling doesn't enforce the error check, IMO the way .NET does it isn't correct either. Not all errors in the STL are reported by exceptions, and they really shouldn't be (some are handled by assertions, which means that error handling should be done by the application alone, like validations). Exceptions (as the name says) are for exceptional cases; I think network errors aren't that exceptional, and as such the use of exceptions could really slow down your program in certain patterns of use.

I think one possible solution would be for the error check to be done by asio itself, redirecting to another function if an error is found. That way, two functions could be bound. It even seems possible to create auxiliary classes that do this on top of the current asio library. It would even make the code a lot clearer compared to both .NET and current asio usage. What do you think, Cory? And you, Christopher?
-- Cory Nelson http://www.int64.org
-- Felipe Magno de Almeida

Though I agree with you that the asio error handling doesn't enforce the error check, IMO the way .NET does it isn't correct either. Not all errors in the STL are reported by exceptions, and they really shouldn't be (some are handled by assertions, which means that error handling should be done by the application alone, like validations). Exceptions (as the name says) are for exceptional cases; I think network errors aren't that exceptional, and as such the use of exceptions could really slow down your program in certain patterns of use.
I agree completely. Exception generation should be optional. Exceptions can easily be layered on top of more conventional error handling without performance penalties, but the other way around is much trickier.
I think one possible solution would be for the error check to be done by asio itself, redirecting to another function if an error is found. That way, two functions could be bound. It even seems possible to create auxiliary classes that do this on top of the current asio library. It would even make the code a lot clearer compared to both .NET and current asio usage.
I've done exactly that in my network library (although it supports only synchronous behaviour). An optional error handler functor is passed to every function that can generate an error. If no handler is passed, a default handler is used that converts errors into exceptions. It works quite well: the user needs to explicitly check for errors only if they want to; otherwise exceptions are used.
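A minimal sketch of the pattern, with illustrative names (this is not my library's actual interface):

    // Sketch only: an optional error-handler functor, defaulting to throwing.
    #include <cerrno>
    #include <cstddef>
    #include <stdexcept>

    struct error
    {
        int code;
        explicit error(int c = 0) : code(c) {}
        operator bool() const { return code != 0; } // true when an error occurred
    };

    // Default handler: turn any error into an exception.
    struct throw_error_t
    {
        void operator()(const error& e) const
        {
            if (e) throw std::runtime_error("network error");
        }
    };

    class connection
    {
    public:
        // The caller-supplied handler decides what to do with the error.
        template <typename ErrorHandler>
        std::size_t read_some(char* buf, std::size_t len, ErrorHandler handle_error)
        {
            int n = do_read(buf, len); // stand-in for the real system call
            if (n < 0)
            {
                handle_error(error(errno));
                return 0;
            }
            return static_cast<std::size_t>(n);
        }

        // No handler given: fall back to throwing.
        std::size_t read_some(char* buf, std::size_t len)
        {
            return read_some(buf, len, throw_error_t());
        }

    private:
        int do_read(char*, std::size_t) { errno = ECONNRESET; return -1; } // placeholder
    };

A user who wants plain error codes just passes a functor that stores the error somewhere; everyone else gets exceptions for free.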

On 12/12/05, Giovanni P. Deretta <gpderetta@gmail.com> wrote: [snip]
I've done exactly that in my network library (although it supports only synchronous behaviour). An optional error handler functor is passed to every function that can generate an error. If no handler is passed, a default handler is used that converts errors into exceptions. It works quite well: the user needs to explicitly check for errors only if they want to; otherwise exceptions are used.
IMO, exceptions should be explicitly enabled if someone wants them, because exceptions are very unlikely to be what should conceptually be used in this case. I think it may even be possible to have all three ways:

- pass two functors, one being for error handling;
- pass one functor taking the asio::error parameter (the way it is done now in asio);
- pass one functor, but with exceptions enabled (though I don't know how the functor would catch the exception that way...).

As the last item seems to show, using exceptions not only isn't efficient, it makes the code uglier. Using try/catch around a call to receive the results of each different I/O operation is too much code. And if someone doesn't try/catch? The exception will be caught by the asio demuxer; what should the demuxer do with it?

I'm against introducing exceptions. But I agree it may be possible to work a little more on the error handling. IMO, though, it should be by adding helper classes for this, not by throwing exceptions.

Thanks, -- Felipe Magno de Almeida

IMO, exceptions should be explicitly enabled if someone wants them, because exceptions are very unlikely to be what should conceptually be used in this case. I think it may even be possible to have all three ways: pass two functors, one being for error handling; pass one functor taking the asio::error parameter (the way it is done now in asio); or pass one functor, but with exceptions enabled (though I don't know how the functor would catch the exception that way...).
As the last item seems to show, using exceptions not only isn't efficient, it makes the code uglier. Using try/catch around a call to receive the results of each different I/O operation is too much code. And if someone doesn't try/catch? The exception will be caught by the asio demuxer; what should the demuxer do with it?
If a user is not interested in exceptions, they can simply ignore them; the asio documentation explicitly states that the demuxer is transparent to exceptions thrown from handlers. The function calling demuxer::run will catch the exception. A user interested in catching errors will provide the error handler.

I did some experimenting with using a different *type* for each error, and a variant-like wrapper to allow easy handling of them. A visitor object can then be applied to retrieve the error type. If the user is interested in all errors, they write a templated handler that sinks all errors; otherwise they write operator() overloads only for those errors they are interested in, and a default strategy is used for all the others. BTW, while this works quite well in practice, the little extra safety is probably not worth the effort.
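Roughly, the experiment looked like this (a sketch using Boost.Variant; the error types here are just examples):

    #include <iostream>
    #include <boost/variant.hpp>

    struct connection_reset {};
    struct would_block {};
    struct host_unreachable {};

    typedef boost::variant<connection_reset, would_block, host_unreachable> net_error;

    // A handler that cares about one specific error and sinks all the others.
    struct my_error_handler : boost::static_visitor<>
    {
        void operator()(const connection_reset&) const
        {
            std::cout << "peer went away; recycle the socket\n";
        }

        template <typename OtherError> // catch-all for errors we don't care about
        void operator()(const OtherError&) const
        {
            std::cout << "default error strategy\n";
        }
    };

    int main()
    {
        net_error e = connection_reset();
        boost::apply_visitor(my_error_handler(), e); // selects the reset overload
    }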
I'm against introducing exceptions. But I agree it may be possible to work a little more on the error handling. IMO, though, it should be by adding helper classes for this, not by throwing exceptions.
I too think that the current asio error handling interface is fine, and it is not worth changing it at this time.

(Excuse the crude example.) I'm trying to create a simple publish/subscribe service app using signals. The issue I'm trying to resolve is how to register the callbacks correctly to each signal, when the callbacks have different parameter types. When attempting to typecast the subscriber, it does not register correctly due to the different parameters. Is there a better way?

    typedef boost::signal<void ()> simple_t;

    boost::signal<int (int, int, int)> signal_a;
    boost::signal<void ()> signal_b;

    typedef void (*func_t)();

    int callback1(int, int, int) { /* ... */ return 0; }
    void callback2() {}

    void setup()
    {
        publish("signal/a", signal_a); // signal_a is not a simple_t -- the problem
        publish("signal/b", signal_b);
    }

    void app()
    {
        subscribe("signal/a", &callback1);
        subscribe("signal/b", &callback2);
    }

    // API:
    void publish(std::string uri, simple_t&)
    {
        // add uri; add a generic boost::signal??
    }

    void subscribe(std::string uri, func_t func)
    {
        simple_t& t = find(uri); // find entry
        t.connect(func);         // this does not register correctly
    }

Hi Giovanni, --- "Giovanni P. Deretta" <gpderetta@gmail.com> wrote: <snip>
I've done exactly that in my network library (although it supports only synchronous behaviour). An optional error handler functor is passed to every function that can generate an error. If no handler is passed, a default handler is used that converts errors into exceptions. It works quite well: the user needs to explicitly check for errors only if they want to; otherwise exceptions are used.
In fact asio uses what sounds like exactly the same approach for the synchronous functions. E.g.:

    sock.write_some(bufs, asio::throw_error());
    sock.write_some(bufs, asio::ignore_error());

    asio::error error;
    sock.write_some(bufs, asio::assign_error(error));

    sock.write_some(bufs, custom_error_handler);

The default is also to convert the error into an exception.

Cheers, Chris

Hi Felipe, --- Felipe Magno de Almeida <felipe.m.almeida@gmail.com> wrote: <snip>
I think one possible solution would be for the error check to be done by asio itself, redirecting to another function if an error is found. That way, two functions could be bound. It even seems possible to create auxiliary classes that do this on top of the current asio library. It would even make the code a lot clearer compared to both .NET and current asio usage.
This could be done by composing a new function object to be passed as the handler, e.g.:

    async_read(s, buffers, combine_handlers(handler1, handler2));

Boost.Lambda may also be of some use here. However, I am not convinced that this approach would be widely applicable. There are many cases in networking where an error for an individual operation is not an error for the application. I also see the current "exactly-once" semantics of invoking handlers as important in aiding correct program design, so I'd be wary of using anything that changed that.
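One possible shape for combine_handlers, just as a sketch (it is not part of asio):

    #include <cstddef>

    // Wraps a success handler and an error handler into a single function
    // object matching asio's (error, bytes_transferred) handler signature.
    template <typename SuccessHandler, typename ErrorHandler>
    struct combined_handler
    {
        combined_handler(SuccessHandler s, ErrorHandler e)
            : on_success(s), on_error(e) {}

        template <typename Error>
        void operator()(const Error& err, std::size_t bytes_transferred)
        {
            if (err)
                on_error(err);                 // failures go one way...
            else
                on_success(bytes_transferred); // ...successes the other
        }

        SuccessHandler on_success;
        ErrorHandler on_error;
    };

    template <typename SuccessHandler, typename ErrorHandler>
    combined_handler<SuccessHandler, ErrorHandler>
    combine_handlers(SuccessHandler s, ErrorHandler e)
    {
        return combined_handler<SuccessHandler, ErrorHandler>(s, e);
    }

Cheers, Chris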

On 12/12/05, Christopher Kohlhoff <chris@kohlhoff.com> wrote:
Hi Felipe,
--- Felipe Magno de Almeida <felipe.m.almeida@gmail.com> wrote: <snip>
[snip]
This could be done by composing a new function object to be passed as the handler, e.g.:
async_read(s, buffers, combine_handlers(handler1, handler2));
Boost.Lambda may also be of some use here.
However I am not convinced that this approach would be widely applicable. There are many cases in networking where an error for an individual operation is not an error for the application.
What I mean is to be able to have, say, two member functions in a class: one for handling an error in the read operation, and the other for handling the success case.
I also see the current "exactly-once" semantics of invoking handlers as important in aiding correct program design, so I'd be wary of using anything that changed that.
Didn't understand what you meant... You think every functor should be executed exactly once for correct program design? Wouldn't this still be safe?

    class handler
    {
        void fail(const asio::error& err)
        {
            // handle the error case, and probably delete this handler,
            // or reuse it for retrying
        }

        void success(size_t size)
        {
            // do something, or continue reading, or delete this
        }

        // member variables

    public:
        handler()
        {
            async_read(s, buffers, combine_handlers(
                boost::bind(&handler::success, this, _1),
                boost::bind(&handler::fail, this, _1)));
        }
    };
Cheers, Chris
Thanks, -- Felipe Magno de Almeida

--- Felipe Magno de Almeida <felipe.m.almeida@gmail.com> wrote: <snip>
You think every functor should be executed exactly once for correct program design? Wouldn't this still be safe?

    class handler
    {
        void fail(const asio::error& err)
        {
            // handle the error case, and probably delete this handler,
            // or reuse it for retrying
        }

        void success(size_t size)
        {
            // do something, or continue reading, or delete this
        }

        // member variables

    public:
        handler()
        {
            async_read(s, buffers, combine_handlers(
                boost::bind(&handler::success, this, _1),
                boost::bind(&handler::fail, this, _1)));
        }
    };
Of course, you're right, and there's nothing unsafe with the above. Somehow I had the idea that you wanted to share the same error handler between multiple asynchronous operations as the default behaviour. Cheers, Chris

On 12/12/05, Cory Nelson <phrosty@gmail.com> wrote:
For the API, one thing that struck me is the almost C-style error handling. I believe the .NET way is better. It provides a point that the developer must call to get the result of the operation, and that the sockets can throw any exceptions that would normally be thrown in sync operations.
On 12/12/05, Felipe Magno de Almeida <felipe.m.almeida@gmail.com> wrote:
[snip]
Could you elaborate a little more on what you mean by C-style error handling and the error handling you prefer (.NET)?
In the current asio implementation, a handler looks like this:

    void on_recv(const asio::error& err, size_t len)
    {
        if (!err) { ... } else { ... }
    }

In .NET, a handler looks like this:

    void on_recv(IAsyncResult res)
    {
        int len;
        try { len = sock.EndRecv(res); }
        catch (SocketException ex) { ...; return; }
        ...
    }
IMHO, I would not accept an implementation that forces exception handling onto the user. Consider a common use case for this library: high-performance, internet-facing servers. In this .NET-style example, an exception would be thrown when the peer drops the connection while the application is waiting for data, increasing the overhead of the error case. An attacker attempting a DoS on the server looks for cases which increase the response time; in this example the attacker could flood the server with broken connections, forcing a significant amount of CPU time to be spent in exception handling. Exceptions might be appropriate for higher-level services, but they should not be a requirement for the core dispatcher.

Hi Cory, --- Cory Nelson <phrosty@gmail.com> wrote: <snip>
For the API, one thing that struck me is the almost C-style error handling. I believe the .NET way is better. It provides a point that the developer must call to get the result of the operation, and that the sockets can throw any exceptions that would normally be thrown in sync operations. I.e.:

    void MyHandler(IAsyncResult res)
    {
        int len;
        try { len = sock.EndRecv(res); }
        catch (SocketException ex) { /* handle the error */ }
    }
My first reaction when I studied the .NET interface was that it was more cumbersome than it needed to be. I put it down to a lack of boost::bind ;)

I see a general problem in using exceptions with asynchronous applications, since an exception that escapes from a completion handler breaks the "chain" of async handlers. Therefore I consider it too dangerous to use exceptions for async-related functions except in truly exceptional situations.

There's also a question that springs to mind when looking at the .NET code: what happens if I don't call EndRecv? In the asio model, once the handler is called the async operation is already over. You can handle or ignore the errors as you see fit.
I was disappointed to not see IPv6 there. Yes, it hasn't reached critical mass yet, but it is coming fast and should be mandatory for any socket library. With Windows Vista having a dual stack, I believe IPv6 is something everyone should be preparing for. This is the only showstopper for me. I don't want to see a ton of IPv4-centric applications made, as they can be a pain to make version agnostic.
IPv6 is on my to-do list. However I lack first-hand experience with it, don't have ready access to an IPv6 network, and don't see it as impacting the rest of the API (and so my focus has been on getting the rest of the API right instead).
Talking about IPv6, I would love to see some utility functions for resolving+opening+connecting via host/port and host/service in a single operation. This would encourage a future-proof coding style and is something almost all client applications would use.
I see this as a layer of abstraction that can be added on top of asio. For example, one could write a free function async_connect that used a URL-style encoding of the target endpoint. I don't see it as in scope for asio _for_now_.
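For illustration, such a helper might look something like this sketch over plain getaddrinfo (BSD sockets rather than asio's interface; error reporting omitted):

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Returns a connected socket, or -1 on failure. Tries every address the
    // resolver returns (IPv6 and IPv4 alike), so callers stay version agnostic.
    int connect_to(const char* host, const char* service)
    {
        addrinfo hints;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      // let the resolver pick v4 or v6
        hints.ai_socktype = SOCK_STREAM;

        addrinfo* results = 0;
        if (getaddrinfo(host, service, &hints, &results) != 0)
            return -1;

        int fd = -1;
        for (addrinfo* ai = results; ai != 0; ai = ai->ai_next)
        {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                break;                    // connected
            close(fd);
            fd = -1;
        }

        freeaddrinfo(results);
        return fd;
    }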
I dislike how it forces manual thread management - I would be much happier if the demuxer held threads internally. Because threading is important to get maximum performance and scalability on many platforms, the user is now forced to handle pooling if he wants that.
It was a deliberate design decision to make asio independent of all thread management. Things like thread pooling are better addressed by a separate library, like Boost.Thread. (I'd also note it's not *that* hard to start multiple threads that call demuxer::run; see the sketch below.) My reasoning includes:

- By default most applications should probably use just one thread to call demuxer::run(). This simplifies development enormously since you no longer need to worry about synchronisation issues.

- What happens if you need to perform per-thread initialisation before any other code runs in the thread? For example, on Windows you might be using COM, and so need to call CoInitializeEx in each thread.

- By not imposing threads in the interface, asio can potentially run on a platform that has no thread support at all.
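For example, a user-managed pool is just a matter of (a sketch assuming Boost.Thread, and that demuxer::run can be bound this way):

    #include <boost/bind.hpp>
    #include <boost/thread.hpp>
    #include <asio.hpp>

    // Donate num_threads threads to a single demuxer; handlers may then be
    // invoked concurrently, so any shared data needs synchronisation.
    void run_pool(asio::demuxer& d, int num_threads)
    {
        boost::thread_group pool;
        for (int i = 0; i < num_threads; ++i)
            pool.create_thread(boost::bind(&asio::demuxer::run, &d));
        pool.join_all(); // returns when the demuxer runs out of work
    }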
On Windows 2000+ it is common to use the built-in thread pool, which will increase or decrease the number of threads being used to get maximum CPU usage with I/O Completion Ports.
It is my understanding (and experience) that having multiple threads waiting on GetQueuedCompletionStatus has the same effect. That is, threads will be returned from GetQueuedCompletionStatus to maximise CPU usage.
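The classic worker loop, for reference (a minimal sketch, error handling elided):

    #include <windows.h>

    // Park N of these threads on one completion port; the kernel wakes only
    // as many of them as needed to keep the CPUs busy.
    DWORD WINAPI worker(LPVOID port)
    {
        for (;;)
        {
            DWORD bytes = 0;
            ULONG_PTR key = 0;
            OVERLAPPED* overlapped = 0;
            BOOL ok = GetQueuedCompletionStatus((HANDLE)port, &bytes, &key,
                                                &overlapped, INFINITE);
            if (!ok && overlapped == 0)
                break; // the port was closed
            // ... dispatch the completed operation using key/overlapped ...
        }
        return 0;
    }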
Which brings me to the next thing: the timers should be using the lightweight timer queues which come with win2k. These timer queues also have the advantage of using the built-in thread pool.
These timers are not portable to NT4. I also found some other fundamental issues when I studied the timer queue API, but unfortunately I can't recall them right now :(

If I'm thinking of the same thing as you, the built-in thread pool support is the one where you must provide threads that are in an alertable state (e.g. SleepEx)? If so, I have found this model to be a flawed design, since it is an application-wide thread pool. This is particularly a problem if it is used from within a library such as asio, since an application or another library may also perform an alertable wait, thus preventing guarantees about how many threads may call back into application code.

The asio model allows scalable lock-free designs where there is, say, one demuxer per CPU, with sockets assigned to demuxers using some sort of load-balancing scheme. Each demuxer has only one thread calling demuxer::run(), and so there is no need for any synchronisation on the objects associated with that demuxer.
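In outline, that design looks like this (a sketch; socket assignment elided, and the demuxers are leaked for brevity):

    #include <vector>
    #include <boost/bind.hpp>
    #include <boost/thread.hpp>
    #include <asio.hpp>

    int main()
    {
        const int num_cpus = 2; // would normally be detected at runtime
        std::vector<asio::demuxer*> demuxers;
        for (int i = 0; i < num_cpus; ++i)
            demuxers.push_back(new asio::demuxer);

        // ... create sockets, assigning connection n to *demuxers[n % num_cpus],
        // so each socket's handlers only ever run on that demuxer's one thread ...

        boost::thread_group threads;
        for (int i = 0; i < num_cpus; ++i)
            threads.create_thread(boost::bind(&asio::demuxer::run, demuxers[i]));
        threads.join_all();
        return 0;
    }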
ASIO lacks one important feature: async disconnects. I don't have experience on *nix, but on Windows creating sockets is expensive and a high-performance app can get a significant benefit from recycling them.
I haven't implemented these since I was unable to find a way to do it portably without creating a thread per close operation. It is simple enough to write a Windows-specific extension that calls DisconnectEx however.
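Such an extension might be shaped like this (a Windows-only sketch, error handling mostly omitted; link against ws2_32):

    #include <winsock2.h>
    #include <mswsock.h>

    // Return a socket to a reusable state via DisconnectEx.
    BOOL disconnect_for_reuse(SOCKET s, OVERLAPPED* overlapped)
    {
        GUID guid = WSAID_DISCONNECTEX;
        LPFN_DISCONNECTEX disconnect_ex = 0;
        DWORD bytes = 0;

        // DisconnectEx is an extension function, fetched through WSAIoctl.
        if (WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER,
                     &guid, sizeof(guid),
                     &disconnect_ex, sizeof(disconnect_ex),
                     &bytes, 0, 0) != 0)
            return FALSE;

        // TF_REUSE_SOCKET lets the socket be handed to AcceptEx/ConnectEx
        // again. With a non-null overlapped this completes asynchronously
        // (FALSE with WSA_IO_PENDING means the operation is in progress).
        return disconnect_ex(s, overlapped, TF_REUSE_SOCKET, 0);
    }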
A minor issue, but I'm not liking the names of the _some methods. It would be better to just document the fact that it might not read/write everything you give it instead of forcing it down the user's throat every time they want to use it.
I used to just document that it would return fewer bytes than requested, but I and others found the usage to be error prone. It is a particular problem for writes, since the write will usually transfer all of the bytes, and only transfer less very occasionally. This can lead to hard to find bugs, so I concluded that an extra 5 characters per call was a reasonable way to make it clearer. Cheers, Chris

On 12/12/05, Christopher Kohlhoff <chris@kohlhoff.com> wrote:
Hi Cory,
--- Cory Nelson <phrosty@gmail.com> wrote: <snip>
For the API, one thing that struck me is the almost C-style error handling. I believe the .NET way is better. It provides a point that the developer must call to get the result of the operation, and that the sockets can throw any exceptions that would normally be thrown in sync operations. I.e.:

    void MyHandler(IAsyncResult res)
    {
        int len;
        try { len = sock.EndRecv(res); }
        catch (SocketException ex) { /* handle the error */ }
    }
My first reaction when I studied the .NET interface was that it was more cumbersome than it needed to be. I put it down to a lack of boost::bind ;)
I see a general problem in using exceptions with asynchronous applications, since an exception that escapes from a completion handler breaks the "chain" of async handlers. Therefore I consider it too dangerous to use exceptions for async-related functions except in truly exceptional situations.
Perhaps asio::error could be changed to assert() that it is checked before it goes out of scope. It is important to make sure the user checks it, as inconsistent application states can be a big problem to track down.
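A rough sketch of what I mean (illustrative only, and it ignores the question of what copying should do):

    #include <cassert>

    // Asserts in debug builds if an error value is destroyed unexamined.
    class checked_error
    {
    public:
        explicit checked_error(int code) : code_(code), checked_(false) {}
        ~checked_error() { assert(checked_ && "error discarded without checking"); }

        // Any inspection counts as a check.
        operator bool() const { checked_ = true; return code_ != 0; }
        int code() const { checked_ = true; return code_; }

    private:
        int code_;
        mutable bool checked_;
    };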
There's also a question that springs to mind when looking at the .NET code: what happens if I don't call EndRecv? In the asio model, once the handler is called the async operation is already over. You can handle or ignore the errors as you see fit.
I'm uncertain, I have yet to try that :)
I was disappointed to not see IPv6 there. Yes, it hasn't reached critical mass yet, but it is coming fast and should be mandatory for any socket library. With Windows Vista having a dual stack, I believe IPv6 is something everyone should be preparing for. This is the only showstopper for me. I don't want to see a ton of IPv4-centric applications made, as they can be a pain to make version agnostic.
IPv6 is on my to-do list. However I lack first-hand experience with it, don't have ready access to an IPv6 network, and don't see it as impacting the rest of the API (and so my focus has been on getting the rest of the API right instead).
Good to know that it's on your mind, at least. I'll be looking at the implementation more in depth in the next few days, so you might see a patch for it.
Talking about IPv6, I would love to see some utility functions for resolving+opening+connecting via host/port and host/service in a single operation. This would encourage a future-proof coding style and is something almost all client applications would use.
I see this as a layer of abstraction that can be added on top of asio. For example, one could write a free function async_connect that used a URL-style encoding of the target endpoint. I don't see it as in scope for asio _for_now_.
It would definitely be an abstraction, and it could certainly be written by the user. But what I was getting at is that nearly all client apps would make good use of a function like this, so why not prevent such reinventing of the wheel and put it right in asio?
I dislike how it forces manual thread management - I would be much happier if the demuxer held threads internally. Because threading is important to get maximum performance and scalability on many platforms, the user is now forced to handle pooling if he wants that.
It was a deliberate design decision to make asio independent of all thread management. Things like thread pooling are better addressed by a separate library, like Boost.Thread. (I'd also note it's not *that* hard to start multiple threads that call demuxer::run.)
My reasoning includes:
- By default most applications should probably use just one thread to call demuxer::run(). This simplifies development enormously since you no longer need to worry about synchronisation issues.
- What happens if you need to perform per-thread initialisation before any other code runs in the thread? For example on Windows you might be using COM, and so need to call CoInitializeEx in each thread.
I don't use COM, so I'm not sure what it would entail.
- By not imposing threads in the interface, asio can potentially run on a platform that has no thread support at all.
I don't believe software should be crippled for the many so that a few can still use it. Having heard your reasons, how feasible would it be to include a threaded_demuxer type, built with high-performance scalability in mind?
On Windows 2000+ it is common to use the built-in thread pool, which will increase or decrease the number of threads being used to get maximum CPU usage with I/O Completion Ports.
It is my understanding (and experience) that having multiple threads waiting on GetQueuedCompletionStatus has the same effect. That is, threads will be returned from GetQueuedCompletionStatus to maximise CPU usage.
This is true: it will wake another thread if an active one blocks. However, if all of your threads decide to block you will have a dead CPU. From what I understand, the built-in thread pool is able to see this usage and create/destroy threads to make sure you always have maximum CPU usage regardless of blocking. Does anyone have ideas on how you could do similar intelligent pooling without such OS support?
Which brings me to the next thing: the timers should be using the lightweight timer queues which come with win2k. These timer queues also have the advantage of using the built-in thread pool.
These timers are not portable to NT4. I also found some other fundamental issues when I studied the timer queue API, but unfortunately I can't recall them right now :(
They aren't available on NT4, but again I don't think modern operating systems should be crippled because a select few choose to develop for such an antiquated platform. It would be even better if there were a fallback mechanism.
If I'm thinking of the same thing as you, the built-in thread pool support is the one where you must provide threads that are in an alertable state (e.g. SleepEx)? If so I have found this
Nope. I'm talking about QueueUserWorkItem and related functions. You don't need to touch the threads; all you need to make sure of is launching your operation in an I/O or non-I/O thread as appropriate (non-I/O threads may be destroyed by the thread pool if it thinks nothing will be using them, and async ops are cancelled if the thread which launched them exits).
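Usage is about this simple (a sketch, no error checking):

    #include <windows.h>

    // Runs on a pool thread the OS manages; WT_EXECUTEINIOTHREAD keeps the
    // thread alive for any overlapped I/O this work item starts.
    DWORD WINAPI handle_work(LPVOID context)
    {
        // ... issue async ops, process a request, etc ...
        return 0;
    }

    void submit(void* context)
    {
        QueueUserWorkItem(handle_work, context, WT_EXECUTEINIOTHREAD);
    }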
model to be a flawed design, since it is an application-wide thread pool. This is particularly a problem if it is used from
It is an application-wide thread pool but it manages itself to make that a good thing. If your entire application uses it, you will always be using as much CPU as possible.
within a library such as asio, since an application or another library may also perform an alertable wait, thus preventing guarantees about how many threads may call back into application code.
The thread pool handles the alertable waiting - there is no reason for the application itself to do so. So that's not a problem.
The asio model allows scalable lock-free designs where there is say one demuxer per CPU, with sockets assigned to demuxers using some sort of load-balancing scheme. Each demuxer has only one thread calling demuxer::run(), and so there is no need for any synchronisation on the objects associated with that demuxer.
You have two threads running separate demuxers. Suppose the sockets in one of them get a heavy workload and the other has nothing to do? This might be an unlikely scenario, but it's also an unacceptable and easily avoidable one. Spreading the workload between threads (without forcing a socket to always be on one thread) will always be the optimal way to go, and the only way to do this sanely on multiple platforms is to have the demuxer do the threading.
ASIO lacks one important feature, async disconnects. I don't have experience in *nix, but in Windows creating sockets is expensive and a high perf app can get a significant benefit by recycling them.
I haven't implemented these since I was unable to find a way to do it portably without creating a thread per close operation. It is simple enough to write a Windows-specific extension that calls DisconnectEx however.
It must not be very expensive on other platforms, I guess. Not a major issue :)
A minor issue, but I'm not liking the names of the _some methods. It would be better to just document the fact that it might not read/write everything you give it instead of forcing it down the user's throat every time they want to use it.
I used to just document that it would return fewer bytes than requested, but I and others found the usage to be error prone. It is a particular problem for writes, since the write will usually transfer all of the bytes, and only transfer less very occasionally. This can lead to hard to find bugs, so I concluded that an extra 5 characters per call was a reasonable way to make it clearer.
Cheers, Chris
-- Cory Nelson http://www.int64.org

On 12/12/05, Cory Nelson <phrosty@gmail.com> wrote:
This is true: it will wake another thread if an active one blocks. However, if all of your threads decide to block you will have a dead CPU. From what I understand, the built-in thread pool is able to see this usage and create/destroy threads to make sure you always have maximum CPU usage regardless of blocking.
Does anyone have ideas on how you could do similar intelligent pooling without such OS support?
I think that's one of the aims of the "SEDA" architecture: http://www.eecs.harvard.edu/~mdw/proj/seda/ This is a much more heavyweight idea than a simple thread pool, however. -- Caleb Epstein caleb dot epstein at gmail dot com

Hi Cory, --- Cory Nelson <phrosty@gmail.com> wrote:
Perhaps asio::error could be changed to assert() that it is checked before it goes out of scope. It is important to make sure the user checks it, as inconsistent application states can be a big problem to track down.
Hmmm, I dunno. What constitutes checking? What if the application simply wants to post (i.e. via a demuxer::post() call) a copy of the error to another function object where the real work is to be performed? This is likely to occur when you start composing asynchronous operations to create higher levels of abstraction. The mechanism to then keep track of whether the error has been checked could start to be quite heavy. <snip>
I see this as a layer of abstraction that can be added on top of asio. For example, one could write a free function async_connect that used a URL-style encoding of the target endpoint. I don't see it as in scope for asio _for_now_.
It would definitely be an abstraction, and it could certainly be written by the user. But what I was getting at is that nearly all client apps would make good use of a function like this, so why not prevent such reinventing of the wheel and put it right in asio?
I think at this point I'd rather give it time to allow the interface to grow out of common use cases. I.e. see where more experience in the field takes it and then put it into a library.
- What happens if you need to perform per-thread initialisation before any other code runs in the thread? For example on Windows you might be using COM, and so need to call CoInitializeEx in each thread.
I don't use COM. I'm not sure of what it would entail.
The issue of exceptions that escape from handlers also just occurred to me. The current design in asio lets these exceptions propagate through demuxer::run() so that the application can handle them. Correctly handling exceptions presents a problem for an internally managed thread pool. <snip>
Having heard your reasons, how feasible would it be to include a threaded_demuxer type, built with high-performance scalability in mind?
Well, I don't see it as impossible, but I also don't see it as necessary for writing scalable, high-performance apps. Today I was thinking that it should be possible to develop a (possibly portable) thread-pooling solution to this problem external to asio. The hard part is having a way of detecting that an additional thread is required. Once you've decided you need an additional thread, you just spawn it and have it call demuxer::run() to donate itself to the pool. <snip>
This is true: it will wake another thread if an active one blocks. However, if all of your threads decide to block you will have a dead CPU.
My advice is to avoid blocking ;) Seriously, my experience to date leads me to believe that long running blocking operations should be abstracted behind an asynchronous interface, so that the "main" thread of an application does not block. <snip>
You have two threads running separate demuxers. Suppose the sockets in one of them get a heavy workload and the other has nothing to do?
This might be an unlikely scenario, but it's also an unacceptable and easily avoidable one.
Spreading the workload between threads (without forcing a socket to always be on one thread) will always be the optimal way to go, and the only way to do this sanely on multiple platforms is to have the demuxer do the threading.
I just present the demuxer-per-CPU idea as a possible design alternative that asio permits. It may be suitable for some applications but not others. My point is that it is not necessarily unacceptable; I think it is entirely reasonable to choose to trade off some potential performance in favour of avoiding the development complexity introduced by synchronisation.

Oh, BTW, I forgot to mention ConnectEx. I was planning to do it before the review, but as time was running short I didn't want to risk introducing instability. I intend to enable it for builds that target Windows XP or later. I'm still of two minds as to whether Windows 2000 targeted builds should try to dynamically load it and use it, since that increases the amount of code generated and the complexity of the implementation. Is it worth it?

Cheers, Chris
participants (8)
- Caleb Epstein
- christopher baus
- Christopher Kohlhoff
- Cory Nelson
- Felipe Magno de Almeida
- Giovanni P. Deretta
- Peter Dimov
- Tim michals