Re: [boost] Re: [network] An RFC - updated

On Fri, 2005-04-22 at 14:23 +0300, Peter Dimov wrote:
Iain Hanson wrote:
On Fri, 2005-04-22 at 12:42 +0300, Peter Dimov wrote:
[ snip ]
This would give a significant performance hit, as there would now be two copies of the data: the first from kernel space to the library, and the second in the callback from the library to the user.
Sometimes, yes, but not always. You don't have to make a copy in the callback, and for small packets and low bandwidth, the extra copy may not be significant. Also note the "by default" in the above. I am not against manual buffer management, just against the absence of automatic buffer management.
I really don't see this as workable in the general case. The library has to guess the size of the read buffer. It would also prevent reading a complete record on a stream by reading a header up to the length field and then making a second read call for that length with a correctly sized buffer. It would also add dynamic memory allocation to the library, and be a source of runtime errors as a result of a user not copying the buffer and trying to use it after its lifetime expired. I know we can't always protect users from themselves, but we do try not to make it easy for them to make mistakes. /ikh
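For concreteness, a minimal sketch of that header-then-body pattern over a plain blocking POSIX socket; read_exact and read_record are hypothetical helper names for illustration, not part of any proposed interface. The second recv uses a buffer of exactly the right size:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <vector>

    // Hypothetical helper: loop until exactly 'len' bytes have arrived.
    bool read_exact(int fd, char* buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = recv(fd, buf + got, len - got, 0);
            if (n <= 0) return false;      // error or peer closed
            got += n;
        }
        return true;
    }

    // Read one length-prefixed record: a 4-byte network-order length
    // field, then a second read for exactly that many payload bytes.
    bool read_record(int fd, std::vector<char>& payload)
    {
        uint32_t len_be;
        if (!read_exact(fd, reinterpret_cast<char*>(&len_be), sizeof len_be))
            return false;
        uint32_t len = ntohl(len_be);
        payload.resize(len);
        if (len == 0) return true;
        return read_exact(fd, &payload[0], len);
    }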

Iain Hanson wrote:
I really don't see this as workable in the general case. The library has to guess the size of the read buffer. It would also prevent reading a complete record on a stream by reading a header up to the length field and then making a second read call for that length with a correctly sized buffer.
This is a good point, but async_read can take a size parameter, even when the client does not supply a buffer.
It would also add dynamic memory allocation to the library
I'm not sure that you can beat the library from the client side with respect to the number of memory allocations. With N asynchronous reads active you need to keep N buffers alive. The library can manage with just one in the select case.
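A rough sketch of what that select case could look like, with a single library-owned buffer shared across all pending reads; read_handler and reactor_loop are hypothetical names, for illustration only:

    #include <sys/select.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <vector>

    // Hypothetical callback: 'data' is only valid for the duration of
    // the call; a client that needs the bytes later must copy them.
    typedef void (*read_handler)(int fd, const char* data, size_t n);

    // One shared buffer serves every readable socket, because the
    // callbacks run sequentially after each select() wakeup.
    void reactor_loop(const std::vector<int>& fds, read_handler on_read)
    {
        std::vector<char> buf(4096);       // the single library buffer
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            int maxfd = -1;
            for (size_t i = 0; i < fds.size(); ++i) {
                FD_SET(fds[i], &readable);
                if (fds[i] > maxfd) maxfd = fds[i];
            }
            if (select(maxfd + 1, &readable, 0, 0, 0) <= 0)
                break;
            for (size_t i = 0; i < fds.size(); ++i) {
                if (!FD_ISSET(fds[i], &readable)) continue;
                ssize_t n = recv(fds[i], &buf[0], buf.size(), 0);
                if (n > 0)
                    on_read(fds[i], &buf[0], static_cast<size_t>(n));
            }
        }
    }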
and be a source of runtime errors as a result of a user not copying the buffer and trying to use it after its lifetime expired.
I'm fairly confident that manual buffer management will introduce even more runtime errors. ;-) (That's been my experience with async reads/writes, at least.)

Peter Dimov wrote:
I'm not sure that you can beat the library from the client side with respect to the number of memory allocations. With N asynchronous reads active you need to keep N buffers alive. The library can manage with just one in the select case.
Should several pending reads be allowed? I have leaned back and forth on this issue in a previous platform-specific async com library I wrote. Actually I ended up supporting several pending reads with user-supplied buffers, but the actual clients never had, or had reason to have, more than one pending read.
and be a source of runtime errors as a result of a user not copying the buffer and trying to use it after its lifetime expired.
I'm fairly confident that manual buffer management will introduce even more runtime errors. ;-) (That's been my experience with async reads/writes, at least.)
Do you have any preferences as to whether to use basic_streambuf or not as the buffer interface? /Michel

Michel André wrote:
Peter Dimov wrote:
I'm not sure that you can beat the library from the client side with respect to the number of memory allocations. With N asynchronous reads active you need to keep N buffers alive. The library can manage with just one in the select case.
Should several pending reads be allowed? I have leaned back and forth on this issue in a previous platform-specific async com library I wrote.
Definitely; a server that has N active connections might have up to N reads active at any time. Not on the same socket, of course. :-)
I'm fairly confident that manual buffer management will introduce even more runtime errors. ;-) (That's been my experience with async reads/writes, at least.)
Do you have any preferences as to whether to use basic_streambuf or not as the buffer interface?
I'm not sure what basic_streambuf would buy me over char[] or vector<char>.

Peter Dimov wrote:
Michel André wrote:
Peter Dimov wrote:
I'm not sure that you can beat the library from the client side with respect to the number of memory allocations. With N asynchronous reads active you need to keep N buffers alive. The library can manage with just one in the select case.
Should several pending reads be allowed? I have leaned back and forth on this issue in a previous platform-specific async com library I wrote.
Definitely; a server that has N active connections might have up to N reads active at any time. Not on the same socket, of course. :-)
Of course, I was referring to the same socket/stream object, not overall ;). My implementation supported several pending receives on the same socket, but none of the clients that I know of or wrote ever used it. And it would require further synchronisation at the session level if ordering between messages is needed, since partial messages could arrive out of order if one thread gets descheduled between dequeuing recv1 and dispatching it to the callback, while the next thread completes and dispatches recv2.
I'm fairly confident that manual buffer management will introduce even more runtime errors. ;-) (That's been my experience with async reads/writes, at least.)
Do you have any preferences as to whether to use basic_streambuf or not as the buffer interface?
I'm not sure what basic_streambuf would buy me over char[] or vector<char>.
The same goes for me, but Jeff Garland proposed looking into basic_streambuf. It would give the possibility to supply your own implementation, and maybe an easier way to integrate with iostreams. I think vector<char> would be good enough to use as a send and receive buffer. /Michel
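For illustration, a minimal sketch of vector<char> in the send-buffer role; queue_bytes and flush_some are made-up names, not a proposed interface:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <vector>

    // Append raw bytes to the pending-output buffer.
    void queue_bytes(std::vector<char>& out, const char* p, size_t n)
    {
        out.insert(out.end(), p, p + n);
    }

    // Try to flush; erase whatever the kernel accepted and keep the
    // rest queued. Returns false on a send error.
    bool flush_some(int fd, std::vector<char>& out)
    {
        if (out.empty()) return true;
        ssize_t n = send(fd, &out[0], out.size(), 0);
        if (n < 0) return false;
        out.erase(out.begin(), out.begin() + n);
        return true;
    }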

Michel André wrote:
Should several pending reads be allowed? I have leaned back and forth on this issue in a previous platform-specific async com library I wrote. Actually I ended up supporting several pending reads with user-supplied buffers, but the actual clients never had, or had reason to have, more than one pending read.
I'll offer one possibly related reflection, drawn from my experience working with an X.25 API. That API doesn't provide an equivalent of the tcp listen(2) backlog parameter, so to make an X.25 acceptor capable of accepting incoming calls at a high rate, one has to implement the backlog oneself. This basically amounts to having multiple async accepts (on the same "port") pending at the same time. Depending on your perspective, an 'accept' can be treated much like a 'read', hence my comment. Mats
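A sketch of that self-managed backlog, against a purely hypothetical async API; async_accept, on_call and start_acceptor are stand-ins, not the real X.25 calls:

    // Hypothetical async accept: posts one pending accept on 'port'
    // and invokes 'cb' when an incoming call arrives.
    void async_accept(int port, void (*cb)(int session, int port));

    // On each incoming call, immediately re-post an accept so the
    // backlog depth stays constant, then service the new session.
    void on_call(int session, int port)
    {
        async_accept(port, on_call);
        // ... hand 'session' off to a worker ...
    }

    // Prime the acceptor: N pending accepts play the role of the
    // listen(2) backlog parameter.
    void start_acceptor(int port, int backlog)
    {
        for (int i = 0; i < backlog; ++i)
            async_accept(port, on_call);
    }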
participants (4):
- Iain Hanson
- Mats Nilsson
- Michel André
- Peter Dimov