
On Sun, 12 Jun 2005 01:29:29 -0500, Aaron W. LaFramboise wrote: Let me throw a couple of wrenches into the discussion and then I'll go back to lurking...
As a simple example, >> cannot distinguish between different forms of whitespace.
Actually, I think the whitespace handling can be defined in facets, but if not then you would need a new stream type for the built-in types. For custom types, well, they can do whatever they want.
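Something like this minimal sketch is what I have in mind (just the standard std::locale/ctype machinery, nothing socket-specific, and not code from any existing library): imbuing a stream with a custom ctype<char> facet changes which characters operator>> skips as whitespace.

#include <iostream>
#include <locale>
#include <sstream>
#include <string>
#include <vector>

// Facet that treats only '\n' as whitespace, so ' ' and '\t' are ordinary
// characters as far as operator>> is concerned.
struct newline_only_ws : std::ctype<char> {
    static const mask* make_table() {
        static std::vector<mask> table(classic_table(),
                                       classic_table() + table_size);
        table[' ']  &= ~space;
        table['\t'] &= ~space;
        return table.data();
    }
    newline_only_ws() : std::ctype<char>(make_table()) {}
};

int main() {
    std::istringstream in("one two\nthree");
    in.imbue(std::locale(in.getloc(), new newline_only_ws));
    std::string token;
    while (in >> token)
        std::cout << '[' << token << "]\n";  // prints [one two] then [three]
}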
...
<< and >> are great for things related to human-readable formatting, where a human's eyes are the primary discriminator, but I am unconvinced they are useful for reading and writing text to be manipulated by machines. I do not think << and >> are even workable, in the general case, for a protocol whose whitespace and formatting rules do not exactly match those of C and C++.
When implementing the >> operator for a custom class, how do you handle the case where you need to read two primitive types, but the second read fails, leaving the operator holding on to data that it has no way of 'putting back'? As near as I can tell, this ends up leaving the stream in a consistent but indeterminate state; something that might be OK for files, but is entirely not OK for a medium that is not rewindable, such as sockets.
Implementing parsers using operator>> is tough because you only have input iterators - that makes backtracking tough. That said, I don't think this issue is even remotely related to your statement that << and >> are only good for 'human' output. Plenty of computer-only i/o goes through these operators.
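To make the 'putting back' problem concrete, here is a hedged sketch (my own, not from any library): the usual defensive idiom reads into temporaries and only commits on success, but the characters consumed by the first extraction are still gone, which is exactly what hurts on a non-rewindable source like a socket.

#include <iostream>
#include <sstream>

struct point { int x = 0, y = 0; };

std::istream& operator>>(std::istream& is, point& p) {
    int x, y;
    if (is >> x >> y) {  // second extraction may fail...
        p.x = x;         // ...so commit only when both succeed;
        p.y = y;         // the digits of x are consumed either way
    }
    return is;
}

int main() {
    std::istringstream good("3 4"), bad("3 oops");
    point p;
    good >> p;   // p becomes {3, 4}
    bad  >> p;   // fails: p is untouched, but "3 " has been consumed --
                 // unrecoverable if the source were a socket
    std::cout << p.x << ' ' << p.y << '\n';   // prints: 3 4
}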
An improved streambuf could help cope with this, but this is tangential to the unsuitability of iostreams.
Yes, but actually I don't think it is tangential overall. I believe a socket library that doesn't work with standard i/o is unacceptable for Boost. Anyone who wants to get a socket library into the standard will have to clearly demonstrate why the current standard i/o model doesn't work. I've seen nothing so far that convinces me that standard streambufs can't be used in the core of a socket library for managing the opaque or 'char' level data. If you accept this, then the iostream level is almost an incidental benefit...
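As a rough sketch of the kind of thing I mean (my own illustration; recv_fn here is a hypothetical stand-in for whatever read primitive a socket core would expose), a std::streambuf can refill itself from socket reads in underflow():

#include <cstddef>
#include <cstring>
#include <functional>
#include <iostream>
#include <streambuf>
#include <string>

class socket_streambuf : public std::streambuf {
public:
    using recv_fn = std::function<std::size_t(char*, std::size_t)>;
    explicit socket_streambuf(recv_fn recv) : recv_(std::move(recv)) {}

protected:
    // Called when the get area is exhausted; refill it from the 'socket'.
    int_type underflow() override {
        if (gptr() < egptr())
            return traits_type::to_int_type(*gptr());
        std::size_t n = recv_(buf_, sizeof buf_);
        if (n == 0)
            return traits_type::eof();
        setg(buf_, buf_, buf_ + n);
        return traits_type::to_int_type(*gptr());
    }

private:
    recv_fn recv_;
    char buf_[4096];
};

int main() {
    // Toy 'socket': delivers one packet, then EOF.
    bool sent = false;
    socket_streambuf sb([&](char* dst, std::size_t) -> std::size_t {
        if (sent) return 0;
        sent = true;
        std::memcpy(dst, "hello 42", 8);
        return 8;
    });
    std::istream in(&sb);   // the incidental iostream benefit
    std::string word; int n;
    in >> word >> n;
    std::cout << word << ' ' << n << '\n';   // prints: hello 42
}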
Every "socket library" or socket application without such library support has reinvented a buffer layer; and none of them have considered the usefulness of a formatting layer, leaving "sends" and "receives" as high up as main().
I am not saying, "<iostream> is never useful for sockets." I am only saying that it is not a good primitive for general work, done in real programs with real protocols, and hence is somewhat tangential to the path of seeking a general-purpose sockets library.
Yes, I've seen it work quite well in real programs. It works something like this:
1) Protocol header had message type/size at the top of the packet
2) Socket core ensured a full read of the message into a std::streambuf
3) Application layer received the streambuf with a message-type callback
4) Application would create a 'message object' based on type
5) Used i/o streaming/serialization to read the message object from the streambuf
Simple and clean. The socket core doesn't really care about message content -- as it should be. The application layer does that -- it has the option of using iostreams or parsing from the buffer directly. BTW, some of the message formats are binary, using a different serialization format adapter against the streambuf.
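Here is a bare-bones sketch of that flow (my own illustration with made-up message types, not production code): the socket core hands the application a filled streambuf plus a type tag, and the application streams a message object out of it.

#include <iostream>
#include <sstream>
#include <string>

// A made-up message type with its own streaming format (step 5).
struct login_msg {
    std::string user;
    friend std::istream& operator>>(std::istream& is, login_msg& m) {
        return is >> m.user;
    }
};

// Steps 3-5: callback invoked by the socket core once a whole message
// has been buffered.
void on_message(int type, std::streambuf& buf) {
    std::istream in(&buf);
    if (type == 1) {            // step 4: pick the object by type
        login_msg m;
        in >> m;                // step 5: deserialize from the streambuf
        std::cout << "login from " << m.user << '\n';
    }
    // other message types would be dispatched the same way
}

int main() {
    // Stand-in for steps 1-2: pretend the socket core already read one
    // complete type-1 message into this buffer.
    std::stringbuf buf("alice");
    on_message(1, buf);
}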
With the Boost Iostreams library, it is extremely easy to form a streambuf from any particular data source. Given this, I completely disagree with your earlier statement, and I'd say: an implementation of a socket streambuf for iostreams is the only thing that a socket library *doesn't* need to provide.
If it's easy, then just provide it now.
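For reference, something like this is the level of effort we're talking about (a sketch only; array_source stands in for whatever socket device the library would supply): Boost.Iostreams wraps a simple device as a std::streambuf via stream_buffer, and a socket device would plug in the same way.

#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream_buffer.hpp>
#include <iostream>
#include <string>

namespace io = boost::iostreams;

int main() {
    const char data[] = "42 hello";  // pretend this arrived on a socket
    io::stream_buffer<io::array_source> buf(data, sizeof data - 1);
    std::istream in(&buf);           // ordinary iostream over the device
    int n;
    std::string word;
    in >> n >> word;
    std::cout << n << ' ' << word << '\n';   // prints: 42 hello
}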
By the way, take this in no way as criticism of your library, which I have not formed an opinion on yet. I am only stating my belief that iostream implementations are tangential to the primary work of creating a Boost socket stream library.
Ah, obviously I totally disagree. Think about where it fits in now -- before you get called out in the review.

Jeff