
Hi Cory,

--- Cory Nelson <phrosty@gmail.com> wrote:

<snip>
> For the API, one thing that struck me is the almost C-style error handling. I believe the .NET way is better. It provides a single point that the developer must call to get the result of the operation, and the sockets can throw any exceptions that would normally be thrown in sync operations, i.e.,
>
>   void MyHandler(IAsyncResult res)
>   {
>     int len;
>     try { len = sock.EndRecv(res); }
>     catch (SocketException) { /* handle the error */ }
>   }
My first reaction when I studied the .NET interface was that it was more cumbersome than it needed to be. I put it down to a lack of boost::bind ;) I see a general problem in using exceptions with asynchronous applications, since an exception that escapes from a completion handler breaks the "chain" of async handlers. Therefore I consider it too dangerous to use exceptions for async-related functions except in truly exceptional situations. There's also a question that springs to mind when looking at the .NET code: what happens if I don't call EndRecv? In the asio model, once the handler is called the async operation is already over. You can handle or ignore the errors as you see fit.
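For illustration, a receive handler in the asio model looks roughly like this (a sketch only; the exact error type and the way it is tested are approximate):

  #include <asio.hpp>
  #include <cstddef>

  void handle_recv(const asio::error& err, std::size_t bytes_transferred)
  {
    if (err)
    {
      // Handle or ignore the error as you see fit; the operation is
      // already over, so nothing needs to be "ended" and nothing is
      // thrown across the chain of handlers.
      return;
    }
    // ... use bytes_transferred bytes of the buffer ...
  }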
> I was disappointed not to see IPv6 there. Yes, it hasn't reached critical mass yet, but it is coming fast and should be mandatory for any socket library. With Windows Vista having a dual stack, I believe IPv6 is something everyone should be preparing for. This is the only showstopper for me. I don't want to see a ton of IPv4-centric applications made, as they can be a pain to make version-agnostic.
IPv6 is on my to-do list. However I lack first-hand experience with it, don't have ready access to an IPv6 network, and don't see it as impacting the rest of the API (and so my focus has been on getting the rest of the API right instead).
> Talking about IPv6, I would love to see some utility functions for resolving+opening+connecting via host/port and host/service in a single operation. This would encourage a future-proof coding style and is something almost all client applications would use.
I see this as a layer of abstraction that can be added on top of asio. For example, one could write a free function async_connect that used a URL-style encoding of the target endpoint. I don't see it as in scope for asio _for now_.
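A minimal sketch of the blocking flavour of such a helper, assuming a resolver in the style of Boost.Asio's ip::tcp::resolver (these names are assumptions, not part of the interface under review; an async variant would chain the resolve and connect steps in the same way):

  #include <boost/asio.hpp>
  #include <string>

  // Sketch only: resolve a host/service pair and connect, trying each
  // returned endpoint in turn. Because the resolver can return both IPv4
  // and IPv6 endpoints, callers of a helper like this stay
  // version-agnostic for free.
  void connect_to(boost::asio::ip::tcp::socket& sock,
                  const std::string& host, const std::string& service)
  {
    boost::asio::ip::tcp::resolver resolver(sock.get_executor());
    boost::asio::connect(sock, resolver.resolve(host, service));
  }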
> I dislike how it forces manual thread management; I would be much happier if the demuxer held threads internally. Because threading is important to get maximum performance and scalability on many platforms, the user is now forced to handle pooling if he wants that.
It was a deliberate design decision to make asio independent of all thread management. Things like thread pooling are better addressed by a separate library, like Boost.Thread. (I'd also note it's not *that* hard to start multiple threads that call demuxer::run; see the sketch after this list.) My reasoning includes:

- By default most applications should probably use just one thread to call demuxer::run(). This enormously reduces development cost, since you no longer need to worry about synchronisation issues.

- What happens if you need to perform per-thread initialisation before any other code runs in the thread? For example, on Windows you might be using COM, and so need to call CoInitializeEx in each thread.

- By not imposing threads in the interface, asio can potentially run on a platform that has no thread support at all.
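For example, a minimal user-managed pool (assuming the asio.hpp header and Boost.Thread; the only asio call involved is demuxer::run()):

  #include <asio.hpp>
  #include <boost/bind.hpp>
  #include <boost/thread/thread.hpp>

  // Run one demuxer across several threads: every thread simply calls
  // demuxer::run(), and completion handlers are dispatched to whichever
  // thread becomes available.
  void run_pool(asio::demuxer& d, std::size_t num_threads)
  {
    boost::thread_group pool;
    for (std::size_t i = 0; i < num_threads; ++i)
      pool.create_thread(boost::bind(&asio::demuxer::run, &d));
    pool.join_all(); // returns once all outstanding work has completed
  }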
> On Windows 2000+ it is common to use the built-in thread pool, which will increase/decrease the number of threads being used to get maximum CPU usage with I/O Completion Ports.
It is my understanding (and experience) that having multiple threads waiting on GetQueuedCompletionStatus has the same effect. That is, threads will be returned from GetQueuedCompletionStatus to maximise CPU usage.
> Which brings me to the next thing: the timers should be using the lightweight timer queues which come with win2k. These timer queues also have the advantage of using the built-in thread pool.
These timers are not portable to NT4. I also found some other fundamental issues when I studied the timer queue API, but unfortunately I can't recall them right now :(

If I'm thinking of the same thing as you, the built-in thread pool support is the one where you must provide threads that are in an alertable state (e.g. via SleepEx)? If so, I have found this model to be a flawed design, since it is an application-wide thread pool. This is a particular problem if it is used from within a library such as asio, since the application or another library may also perform an alertable wait, preventing any guarantee about how many threads may call back into application code.

The asio model allows scalable lock-free designs where there is, say, one demuxer per CPU, with sockets assigned to demuxers using some sort of load-balancing scheme. Each demuxer has only one thread calling demuxer::run(), and so there is no need for any synchronisation on the objects associated with that demuxer.
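A bare-bones sketch of that layout (the pool container and round-robin assignment below are my own illustration, not something asio provides; the one thread per demuxer calling run() is omitted):

  #include <asio.hpp>
  #include <boost/shared_ptr.hpp>
  #include <vector>
  #include <cstddef>

  // One demuxer per CPU; each demuxer is driven by exactly one thread
  // calling run(), so handlers for the sockets owned by a given demuxer
  // need no locking.
  class demuxer_per_cpu
  {
  public:
    explicit demuxer_per_cpu(std::size_t num_cpus) : next_(0)
    {
      for (std::size_t i = 0; i < num_cpus; ++i)
        demuxers_.push_back(
            boost::shared_ptr<asio::demuxer>(new asio::demuxer));
    }

    // Trivial load balancing: hand out demuxers round-robin as new
    // sockets are created.
    asio::demuxer& pick() { return *demuxers_[next_++ % demuxers_.size()]; }

  private:
    std::vector<boost::shared_ptr<asio::demuxer> > demuxers_;
    std::size_t next_;
  };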
> ASIO lacks one important feature: async disconnects. I don't have experience on *nix, but on Windows creating sockets is expensive and a high-perf app can get a significant benefit by recycling them.
I haven't implemented these since I was unable to find a way to do it portably without creating a thread per close operation. It is simple enough, however, to write a Windows-specific extension that calls DisconnectEx.
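Something along these lines, as a Windows-only sketch (error handling abbreviated; link with ws2_32.lib):

  #include <winsock2.h>
  #include <mswsock.h>

  // Issue an overlapped DisconnectEx with TF_REUSE_SOCKET so the socket
  // can later be reused by AcceptEx/ConnectEx. Completion is reported via
  // the OVERLAPPED (e.g. through an I/O completion port the socket is
  // already associated with).
  bool async_disconnect_for_reuse(SOCKET s, OVERLAPPED* ov)
  {
    // DisconnectEx must be loaded at run time.
    LPFN_DISCONNECTEX disconnect_ex = 0;
    GUID guid = WSAID_DISCONNECTEX;
    DWORD bytes = 0;
    if (::WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER,
                   &guid, sizeof(guid), &disconnect_ex, sizeof(disconnect_ex),
                   &bytes, 0, 0) != 0)
      return false;

    if (!disconnect_ex(s, ov, TF_REUSE_SOCKET, 0))
      return ::WSAGetLastError() == ERROR_IO_PENDING; // still in progress
    return true; // completed immediately
  }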
> A minor issue, but I'm not liking the names of the _some methods. It would be better to just document the fact that it might not read/write everything you give it instead of forcing it down the user's throat every time they want to use it.
I used to just document that it could return fewer bytes than requested, but I and others found that usage to be error-prone. It is a particular problem for writes, since a write will usually transfer all of the bytes and only occasionally transfer fewer. This can lead to hard-to-find bugs, so I concluded that an extra 5 characters per call was a reasonable price for making the behaviour clearer.
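For instance, a caller who really does want "write everything" via the _some form has to loop, along these lines (a sketch only; the buffer handling shown may differ from the current interface):

  #include <asio.hpp>
  #include <cstddef>

  // write_some may transfer fewer bytes than asked for, so writing a
  // whole buffer means looping until nothing is left. SyncWriteStream is
  // any object with a write_some(buffer) member.
  template <typename SyncWriteStream>
  std::size_t write_all(SyncWriteStream& sock,
                        const char* data, std::size_t len)
  {
    std::size_t total = 0;
    while (total < len)
      total += sock.write_some(asio::buffer(data + total, len - total));
    return total;
  }

Cheers,
Chris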