
On 6/9/05, Simon Richter <Simon.Richter@hogyros.de> wrote:
Caleb Epstein wrote:
Please do! Does either of the implementations offer an interface to the sockets at a lower level than iostreams though? Mine doesn't, so far, as I haven't seen a need for it.
I believe it is an absolute requirement for a C++ Sockets library.
Hrm, I have never missed being able to access files from a lower level than iostreams so far. :-) I never do binary I/O directly in my applications but always implement an inserter/extractor pair that uses streambuf iterators, though.
Sockets are not files, and I think to treat them identically, and to build a library assuming that iostreams is the lowest-level interface anyone could want to use, is folly. There are large, measurable performance trade-offs associated with iostreams compared to C stdio operations on every platform I have encountered, and similarly when compared to the low-level system read and write (or send/recv) calls.

I write high speed, network-centric, message-driven applications for a living. I would not be able to write applications that scale properly using a purely iostreams-based interface to the network. The high-level abstractions are nice for simpler applications, but they simply don't work well when you need to scale to managing many hundreds of connections and guaranteeing a certain quality of service to each. A blocking I/O model is not acceptable for my uses.

That said, I think a socket-iostreams library is a GREAT idea. I would use it to write simpler applications that don't require the type of scalability or complexity I mentioned above. I just don't think it should be the ONLY interface.
iostreams' read()/write() should be enough for stream-based I/O, and for datagrams I'd propose going through another step anyway (i.e. have a separate stream class that does not derive from the standard iostreams but rather allows inserting and extracting packets only).
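[To make that proposal concrete, here is one possible sketch of a packet-only class that does not derive from std::iostream. The class name, the operators, and the loopback() test hook are all invented for illustration and are not part of any proposed interface:]

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Sketch: a datagram "stream" that only moves whole packets, never
// individual bytes, and deliberately does not derive from std::iostream.
class packet_stream {
public:
    packet_stream& operator<<(const std::vector<char>& pkt) {
        out_.push_back(pkt);           // one insertion == one datagram
        return *this;
    }
    packet_stream& operator>>(std::vector<char>& pkt) {
        if (!in_.empty()) { pkt = in_.front(); in_.pop_front(); }
        return *this;
    }
    // Test hook standing in for an actual socket transport.
    void loopback() { in_ = out_; out_.clear(); }
private:
    std::deque<std::vector<char> > in_, out_;
};
```

[The point of the shape above is that packet boundaries are preserved by construction, which byte-oriented iostreams cannot guarantee.]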
But why not just build the iostreams interface on top of a lower-level interface? That's all I'm looking for: the lower-level ("layer 1" and "layer 2" in Iain's terms) interfaces that are necessary to build highly scalable, complex network services.
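[Such a layering could look something like the following sketch: a thin "layer 1" wrapper that owns a descriptor and exposes raw send/recv, saying nothing about iostreams. A streambuf could then be built on top of it. The name raw_socket and its exact interface are assumptions for illustration, not anyone's proposed API:]

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cassert>
#include <string>

// Hypothetical "layer 1" wrapper: owns a file descriptor and exposes
// the raw POSIX send/recv calls directly, with no buffering on top.
class raw_socket {
public:
    explicit raw_socket(int fd) : fd_(fd) {}
    ~raw_socket() { if (fd_ >= 0) ::close(fd_); }
    raw_socket(const raw_socket&) = delete;
    raw_socket& operator=(const raw_socket&) = delete;

    ssize_t send(const void* buf, size_t len) { return ::send(fd_, buf, len, 0); }
    ssize_t recv(void* buf, size_t len)       { return ::recv(fd_, buf, len, 0); }
    int native_handle() const { return fd_; }  // escape hatch for select()/poll()
private:
    int fd_;
};
```

[Exposing native_handle() is what lets a server multiplex hundreds of these with select() or poll(), which a pure iostreams interface hides.]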
What would be really needed in iostreams would be some sort of transaction interface that would allow me to abort insertion and extraction mid-way. It may be possible to emulate that using putback, and I think this will be the way to go here.
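[One way to emulate such a transaction is sketched below. For a seekable stream the rollback can simply use tellg()/seekg(); on a real socket streambuf there is no seeking, and putback()/sungetc() only guarantee a depth of one character, which is exactly why proper transactions would need streambuf support. The helper name try_extract is invented for illustration:]

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Transactional extraction on a seekable stream: remember the get
// position, attempt the extraction, and roll back on failure so the
// caller can retry with a different type or wait for more data.
template <typename T>
bool try_extract(std::istream& in, T& out) {
    std::istream::pos_type mark = in.tellg();
    if (in >> out) return true;
    in.clear();        // drop failbit so the stream is usable again
    in.seekg(mark);    // abort the transaction: restore the read position
    return false;
}
```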
Now the iostreams approach is starting to sound pretty complex, isn't it?
Well, I don't think it makes sense to implement iostreams on top of a non-blocking socket interface. If a user wants to use "socketstreams" they can reasonably be forced to use a blocking I/O model.
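[A blocking socketstream really is a small amount of code, which is part of its appeal. A minimal sketch, assuming a POSIX descriptor and ignoring output buffering and error detail (the name fdbuf is invented):]

```cpp
#include <cassert>
#include <istream>
#include <streambuf>
#include <string>
#include <unistd.h>

// Minimal input-only streambuf whose underflow() does a blocking
// read(2) on a file descriptor; std::istream then layers formatted
// extraction on top for free.
class fdbuf : public std::streambuf {
public:
    explicit fdbuf(int fd) : fd_(fd) { setg(buf_, buf_, buf_); }
protected:
    int_type underflow() override {
        ssize_t n = ::read(fd_, buf_, sizeof buf_);  // blocks until data arrives
        if (n <= 0) return traits_type::eof();
        setg(buf_, buf_, buf_ + n);
        return traits_type::to_int_type(*gptr());
    }
private:
    int fd_;
    char buf_[4096];
};
```

[With a blocking model the user code stays idiomatic: `in >> x >> s;` with a single state check afterwards, which is exactly what falls apart under non-blocking I/O.]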
This would be acceptable for the average client, but it would inhibit writing server code without resorting to C function calls, parsing messages into stringstreams and going from there; at that point you already have two parsers: one to determine whether the message is complete and can be extracted, and one to actually extract.
I'm not sure I understand your point here. Are you saying you can implement non-blocking I/O with a C++ iostreams interface? Perhaps it's doable, but your code would end up not really looking like "normal" iostreams any more. You'd have to insert checks between each << or >> operation, and figure out where you left off if you got a short read/write. This isn't terribly developer-friendly.
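[To illustrate the awkwardness: under non-blocking I/O every extraction can fail with "no data yet", so the caller has to check after each >> and record how far it got. The sketch below (invented names, a stringstream standing in for a socket that has delivered only part of a message, and simplified in that it ignores a numeric field split across reads) shows the state machine this forces on you:]

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Per-connection parse state: which field we are on, plus what we have
// extracted so far.  Every extraction point becomes an explicit step.
struct parse_state { int field = 0; int a = 0; std::string b; };

// Returns true when the whole message has been parsed; false means
// "short read, come back when more data arrives" -- the caller must
// not treat that as a hard error.
bool try_parse(std::istream& in, parse_state& st) {
    if (st.field == 0) { if (!(in >> st.a)) return false; st.field = 1; }
    if (st.field == 1) { if (!(in >> st.b)) return false; st.field = 2; }
    return true;
}
```

[Compare this to the two lines the blocking version needs; that contrast is the point being made above.]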
Although the Boost.Iostreams library may make non-blocking doable.
With a little care, it can be done with the current iostreams library; however, the {i,o}stream_iterator classes would have to be replaced with transaction-capable ones, and there needs to be a way to distinguish between end-of-stream and end of available data on a stream. While iterators would go past-the-end in either case, an application needs to know whether to restart afterwards. Fortunately, this can be added as a stream-specific function.
Again, this sounds quite complex to me. Why not live with an iostreams interface that is blocking-only?
What I currently cannot think of is how to make nonblocking streams go bad() if an extraction fails because no more data is available and no one took care to put back the already extracted characters.
This should not preclude a different user, or even another part of the same application, from using a non-blocking socket interface at "layer 1". IMHO of course.
Whether the sockets you have are blocking or nonblocking is determined by whether there is a manager attached to them. If you use a manager, you are expected to handle end-of-file conditions that aren't, by asking the stream whether this is really EOF and resetting the stream state accordingly. I can see no problem here.
Why should EWOULDBLOCK be treated as an EOF? I really think this is trying to fit a square peg - non-blocking sockets - into a round hole - iostreams.
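[At the system-call level the two conditions are genuinely distinct: recv() returns 0 on orderly shutdown (true EOF), but -1 with errno set to EWOULDBLOCK/EAGAIN when a non-blocking socket simply has no data yet. A small sketch of the distinction (the enum and helper name are invented for illustration):]

```cpp
#include <cassert>
#include <cerrno>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

enum class read_result { data, would_block, eof, error };

// Classify a recv() result so that "no data yet" is never conflated
// with "the peer closed the connection".
read_result classify_recv(int fd, void* buf, size_t len, ssize_t& n) {
    n = ::recv(fd, buf, len, 0);
    if (n > 0)  return read_result::data;
    if (n == 0) return read_result::eof;                      // orderly shutdown
    if (errno == EWOULDBLOCK || errno == EAGAIN)
        return read_result::would_block;                      // try again later
    return read_result::error;
}
```

[An iostreams mapping that folds would_block into eofbit erases exactly this distinction, which is the square-peg problem being described.]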
Proposed Socket Library Layers: http://thread.gmane.org/gmane.comp.lib.boost.devel/122484
This is more about the big picture, stacking more complex interfaces on top of it. I think we should implement iostreams for sockets first, then we can go on to implement the mighty httpwistream that will give you "wchar_t"s, whatever the document encoding was. :-)
I think we should implement C++ sockets first, and then iostreams on top of those. I know there have been others that have agreed with this approach before. Are any of them following this thread? -- Caleb Epstein caleb dot epstein at gmail dot com