
On Tue, 28 Dec 2004 21:02:37 -0800 "Robert Ramey" <ramey@rrsd.com> wrote:
> However, I don't think the concept of transmission protocol should be mixed into the library - which is already very complex.
I do not think Scott is talking about transmission protocol specifically.
> I believe you could easily achieve what you want to accomplish by serializing to a memory buffer (e.g. a stringstream) and transmitting that. On the other end, the inverse of this process would occur.
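Robert's suggestion can be sketched with the standard library alone. Note this is just an illustration: the `Point`, `to_wire`, and `from_wire` names are made up for this example, and plain stream insertion stands in for what would really be a Boost archive (e.g. boost::archive::text_oarchive) writing into the stringstream.

```cpp
#include <sstream>
#include <string>

struct Point { int x; int y; };

// Serialize into an in-memory buffer; in real code a
// boost::archive::text_oarchive would write into the stream instead.
std::string to_wire(const Point& p) {
    std::ostringstream os;
    os << p.x << ' ' << p.y;
    return os.str();  // this string is what gets transmitted
}

// The inverse of the process, on the receiving end.
Point from_wire(const std::string& wire) {
    std::istringstream is(wire);
    Point p{};
    is >> p.x >> p.y;
    return p;
}
```

The point is that the archive only ever sees a complete, in-memory buffer; the transmission mechanism is entirely outside the library.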
If I understand Scott correctly, the problem still exists if you want to use the lib in that way. Assume an object whose serialization is something like 4K. If you are reading from a file, or even a TCP stream with the socket in blocking mode, you just keep reading until you get all the data. However, with a socket in non-blocking mode, you will typically use select or poll or some other notification mechanism to be told when data is available. You then read as much as is currently available, and return to other tasks until more data is ready.

Let's say the data is slow, and reading the entire 4K takes 10 different "notifications" and 10 different read operations. I think Scott is saying that operator>> is insufficient because it cannot do a partial read of what is there... it wants to snarf all 4K. I could be missing the boat, but this is the usual problem with serialization methods when using them with sockets. For this to work, operator>>() has to know when there is no more data (i.e., correctly interpret the return code of read when the fd is in non-blocking mode) and keep its current state, so that the next call to operator>>() continues where the last call left off.

I do not see this as a protocol issue, but as supporting non-blocking reads where you can get the data in many small chunks. Then again, it is possible that the serialization library already supports this in some way...
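A common workaround, and again only a sketch rather than anything the serialization library provides, is to buffer the partial reads yourself and invoke deserialization only once the whole message has arrived. Assuming a 4-byte length prefix on the wire, a hypothetical helper (the name `MessageAssembler` is mine) could look like:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical helper: accumulate bytes that arrive in arbitrary-sized
// chunks (one chunk per select/poll notification) until a complete
// length-prefixed message is present, then hand the payload off to the
// deserializer in one shot.
class MessageAssembler {
public:
    // Feed whatever read() returned this time; any size is fine.
    void feed(const char* data, std::size_t len) {
        buf_.append(data, len);
    }

    // True once the 4-byte length prefix and the full payload are here.
    bool complete() const {
        if (buf_.size() < 4) return false;
        return buf_.size() >= 4 + payload_size();
    }

    // Extract the payload; only meaningful when complete() is true.
    std::string payload() const {
        return buf_.substr(4, payload_size());
    }

private:
    std::uint32_t payload_size() const {
        std::uint32_t n = 0;
        std::memcpy(&n, buf_.data(), 4);  // host byte order, for brevity
        return n;
    }
    std::string buf_;
};
```

On each notification you read the available bytes into the assembler; once complete() returns true, you wrap the payload in an istringstream and let operator>> consume it all at once, so the archive itself never has to understand partial reads.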