On Tue, 22 May 2007 15:52:58 +0200
"Radu-Adrian Popescu" wrote:
> My 2c: I've found that the easiest way to do this is to use blocking
> reads. I'm not sure whether you can get any performance benefit by not
> using blocking reads; I kind of doubt it. Also, using blocking reads
> simplifies things _a lot_, since all you have to do is:
> 1) read a known number of bytes that tells you how many more you have
>    to read
> 2) read as many bytes as resulted at step 1)
With Asio (I'm looking at 0.3.8rc3), async reads for this (common) form of "message protocol" are just as easy as blocking reads. Specifically, the "asio::async_read" functions accept a "completion_condition", which defaults to handling the protocol mentioned above. (I like the "completion_condition" design, since it allows an easy override of the message-boundary logic while defaulting to the common cases.)

Even if Asio didn't provide nice default handling of "message boundaries", it's pretty easy to write the logic yourself in a non-blocking design, whether using an async (proactive) or reactive model. And I think you *would* get performance benefits from the async_read capabilities, or at the very least concurrency benefits, especially with large messages.

As a real-world use case, I wrote a networking infrastructure library for a project that typically sent messages between 1K and 8K bytes in length, but at certain times would send messages dozens (or hundreds) of megabytes long. Having the application (or a thread within the app) block while collecting all of the data for a single message was unacceptable. The library multiplexed all network reads and writes transparently and concurrently for the application (across multiple connections). Note that the "concurrency" I'm mentioning here is not threading concurrency, but IO-multiplexing concurrency (between multiple sockets / connections); it had the same general event-multiplexing model as Asio does.

Cliff