
Hi,

Caleb Epstein wrote:
> Sockets are not files, and I think that treating them identically, and building a library on the assumption that iostreams is the lowest-level interface anyone could want to use, is folly. There are large, measurable performance trade-offs for iostreams compared to C stdio operations on every platform I have encountered, and similarly when compared to the low-level system read and write (or send/recv) calls.
[...] I can see your point now. Probably something like this would work (still using the class names I've used so far): there is a base class, impl, that provides an abstract interface with lots of socket API calls, all of which take strings as arguments. This class is subclassed for each address family; each subclass implements the string -> AF-specific argument conversion and provides AF-specific calls. This class also owns the file handle. A wrapper class around a pointer to impl allows construction of local impl variables with cleanup. A socket_streambuf wraps an impl*, providing buffering, and the socketstream class uses socket_streambuf. That gives three application layers. I think the manager can be taught to deal with any of them, so you get a select()/WFMO() wrapper as well. I wonder whether it would make sense to provide generic (non-socket) non-blocking I/O features here as well, for example terminal or GUI I/O.
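The layering might look roughly like this. The class names (impl, socket_streambuf, socketstream) are taken from the description above; loopback_impl is a made-up stand-in for a real address-family subclass, used only so the sketch is self-contained:

```cpp
#include <cassert>
#include <cstdio>
#include <deque>
#include <iostream>
#include <streambuf>
#include <string>

// Layer 1: the abstract interface.  All calls take strings; one
// subclass per address family does the string -> AF-specific
// argument conversion and owns the handle.
class impl {
public:
    virtual ~impl() {}
    virtual void connect(const std::string& address) = 0;
    virtual std::streamsize write(const char* p, std::streamsize n) = 0;
    virtual std::streamsize read(char* p, std::streamsize n) = 0;
};

// Stand-in "address family": echoes written bytes back on read.
class loopback_impl : public impl {
    std::deque<char> q_;
public:
    void connect(const std::string&) {}
    std::streamsize write(const char* p, std::streamsize n) {
        q_.insert(q_.end(), p, p + n);
        return n;
    }
    std::streamsize read(char* p, std::streamsize n) {
        std::streamsize i = 0;
        for (; i < n && !q_.empty(); ++i) { p[i] = q_.front(); q_.pop_front(); }
        return i;   // 0 means nothing is available right now
    }
};

// Layer 2: a streambuf providing buffering around an impl*.
class socket_streambuf : public std::streambuf {
    impl* s_;
    char in_[256];
protected:
    int overflow(int c) {
        if (c != EOF) {
            char ch = static_cast<char>(c);
            if (s_->write(&ch, 1) != 1) return EOF;
        }
        return traits_type::not_eof(c);
    }
    int underflow() {
        std::streamsize n = s_->read(in_, sizeof in_);
        if (n <= 0) return EOF;
        setg(in_, in_, in_ + n);
        return traits_type::to_int_type(*gptr());
    }
public:
    explicit socket_streambuf(impl* s) : s_(s) { setg(in_, in_, in_); }
};

// Layer 3: the iostream the application sees.
class socketstream : public std::iostream {
    socket_streambuf buf_;
public:
    explicit socketstream(impl* s) : std::iostream(&buf_), buf_(s) {}
};
```

An application would then pick whichever layer it needs: raw impl calls, the streambuf, or the full socketstream.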
What would really be needed in iostreams is some sort of transaction interface that would allow me to abort insertion and extraction midway. It may be possible to emulate that using putback, and I think that will be the way to go here.
> Now the iostreams approach is starting to sound pretty complex, isn't it?
Yes, but still needed IMO. I expect that the majority of applications will require formatting/parsing and non-blocking I/O for two or three streams.
> I'm not sure I understand your point here. Are you saying you can implement non-blocking I/O with a C++ iostreams interface? Perhaps it's doable, but your code would end up not really looking like "normal" iostreams any more. You'd have to insert checks between each << or >> operation, and figure out where you left off if you got a short read/write. This isn't terribly developer-friendly.
For <<, it is pretty easy: if the write buffer is filled beyond a certain level, a callback is invoked whose task it is to throttle the application, for example by telling the socket layer that the next write will fail() before the first byte is written, so you won't lose sync. Then you are back to normal iostreams error handling, with the added twist that your error handler can find out that someone has pulled the emergency brake and attempt to recover. This is not mandatory; the app may just give up the stream, which is the default behaviour.

For >>, it is nearly the same, except that you are supposed to instantiate a "txn" object before you start to extract. When your parser changes state, has consumed the characters, and will not go back, you call the txn object's commit function to tell the streambuf that it can discard those characters. If the txn object is destroyed without either an explicit commit() or rollback(), the stream goes bad(). A special istream_iterator could wrap this.
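A minimal sketch of the extraction-side txn guard described above. A real socket_streambuf would retain extracted characters itself until commit() lets it discard them; here a seekable istringstream stands in, so rollback() is just a seekg() back to the starting position:

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Hypothetical "txn" guard: commit() lets the buffer discard the
// consumed characters, rollback() restores the read position, and
// destruction without either one sends the stream bad().
class txn {
    std::istream& is_;
    std::istream::pos_type pos_;
    bool done_;
public:
    explicit txn(std::istream& is) : is_(is), pos_(is.tellg()), done_(false) {}
    void commit()   { done_ = true; }                  // buffer may discard
    void rollback() { is_.clear(); is_.seekg(pos_); done_ = true; }
    ~txn() { if (!done_) is_.setstate(std::ios::badbit); }  // neither called
};
```

A parser would open a txn, attempt an extraction, and roll back on a mismatch so another grammar rule can retry from the same position.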
> Again, this sounds quite complex to me. Why not live with an iostreams interface that is blocking-only?
Because that is severely limited. It will not even allow me to write a simple interactive application that uses the network.
> Why should EWOULDBLOCK be treated as an EOF? I really think this is trying to fit a square peg (non-blocking sockets) into a round hole (iostreams).
From an iterator standpoint, it is the end of the sequence. The fact that you may be able to pick up again later does not matter to the algorithms that take iterators.
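As a sketch of that view, here is a hypothetical streambuf whose underflow() reports eof whenever a read would block (a deque stands in for the socket's receive queue). Iterator-driven code simply sees the sequence end, and the stream can be clear()ed and read again once more data arrives:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdio>
#include <deque>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

class nonblocking_buf : public std::streambuf {
    std::deque<char> q_;   // stands in for the socket's receive queue
    char ch_;
protected:
    int underflow() {
        if (q_.empty()) return EOF;   // would block -> end of sequence
        ch_ = q_.front(); q_.pop_front();
        setg(&ch_, &ch_, &ch_ + 1);
        return traits_type::to_int_type(ch_);
    }
public:
    // Called by the I/O layer when the socket becomes readable.
    void arrive(const std::string& s) { q_.insert(q_.end(), s.begin(), s.end()); }
};
```

An istream_iterator range over such a stream covers exactly the data available now; the application decides whether "eof" was final or merely "try again later".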
It might be interesting to see whether coroutines could be of any use here later on.
> I think we should implement C++ sockets first, and then iostreams on top of those.
Yes, I see what you mean. They are basically implemented; all that needs to be done is to make the API public.

Simon