
I'm not clear on what you were asking in the second one. I'll try the shotgun approach and hope I hit something :-)
I'll try again, too. :-) What I wanted to say is that the blocking I/O methods of your socket stream always return immediately. They block but don't need to wait because they just copy data into another application buffer. When the library user calls your operator<< he knows the call will return immediately and not after, e.g., 10 seconds. Is this correct?
Yes.
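To pin that down, here is a rough sketch of the behaviour you describe (my illustration only; buffered_stream and its members are made-up names, not the real classes): operator<< just copies into an application-side buffer and returns at once, and the actual network write happens later, asynchronously.

    // Illustrative sketch only, not the library's actual code.
    #include <string>
    #include <vector>

    class buffered_stream
    {
        std::vector<char> pending;   // bytes queued for the network
    public:
        buffered_stream & operator<<( const std::string &s )
        {
            // No socket call here, so nothing to wait for: copy and return.
            pending.insert( pending.end(), s.begin(), s.end() );
            return *this;
        }

        // The async machinery drains 'pending' later, when the socket is
        // writable; the caller of operator<< never blocked on it.
    };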
My approach to input is (inevitably? ;-) completely different. There are no "operator>>( stream &, application_object & )"'s to match the output operators; the design is asymmetric. This is for the simple reason that a function with such a signature implies blocking; it must wait for potentially multiple network reads to complete the application_object. The design is fully async.
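To make the blocking implication concrete, here is a hypothetical sketch (none of these names are from my library) of what an input operator with that signature would be forced to do:

    // Hypothetical sketch: why "stream >> object" implies blocking.
    struct application_object
    {
        bool complete() const;           // true once all fields have arrived
    };

    struct stream
    {
        // Blocking call: waits on the socket for the next chunk.
        void read_block( application_object &obj );
    };

    stream & operator>>( stream &s, application_object &obj )
    {
        // One application object may span several network reads, so the
        // caller is stuck here until the last of them has arrived.
        while( !obj.complete() )
            s.read_block( obj );
        return s;
    }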
If there is no operator>>, don't you think you make socket streams less useful for library users? Isn't the idea of socket streams that library users who are familiar with the interface of iostreams can start sending and receiving data on the network without knowing too much about the details? If you write "std::cin >> a;" you know it will stop your program until something is entered. I'd think it makes sense if socket streams behaved similarly?
Tricky to answer (i.e. "less useful"). There are probably several "right" answers. Hopefully I have one of them ;-)

In certain scenarios it would be reasonable to provide "stream >> a"'s that block as necessary. I would like to say that those scenarios exist only in "simple" applications, but that would be too easy. And there is certainly value in being able to write such code in small test programs.

The difficulties begin when you _truly_ need async input, e.g. for reading application objects off an async socket. What mechanism can we use? I think it's significant that there is no standard answer to this. The argument over whether the blocking version should exist is possibly separate from the fact that the non-blocking one doesn't?

The essence of my solution (which is only one of the "right" ones ;-) starts with a class, say "SMTP_server_session" (the following is a major simplification);

    class SMTP_server_session
    {
        stream remote;
        ..
        int some_method()
        {
            remote << SMTP_reject();
        }
    };

This includes a method that calls the stream output operator (<<). The _output_ is shown to originate from a class because _input_ is always directed at that same class (well, instance of course). So the originating object always has the following method;

    class SMTP_server_session
    {
        ..
        void operator()( variant & );
    };

If I've judged it right then you can imagine the data flow. All receiving is achieved generically, i.e. the low-level input code deals in variants. On completion of a variant, which may take one or more network blocks, it is presented to a "session owner" via the operator() shown above. The session performs the conversion to application types.

So to achieve a solution to async I/O I've had to develop a minimal async framework. For me you can't have one without the other. I suspect that this is the root of the difficulty with async I/O. Boost discussions seem to bounce away from the framework issue because it appears adjunct to some, irrelevant to others. Understandable reactions.

BTW, conversion to application types looks like this;

    application_type & operator>>( variant &v, application_type &t )
    {
        variant_array &a = v;
        a[ 0 ] >> t.member_1;
        a[ 1 ] >> t.member_2;
        a[ 2 ] >> t.member_3;
        return t;
    }

Templates are pre-defined for all the standard types and containers. All variant-to-application-type operators are defined to implement "move" semantics, e.g. the string that is allocated by the low-level socket reading code is the same string that is eventually used at application level; it effectively "moves up" the software stack thanks to "swap". <sigh> Well, it was nice when I got it all working, but probably meaningless in this discussion. Or maybe not...

The significant thing about the above "input" operator is that it is non-blocking; a complete variant has already been recognised by the low-level code.

Cheers.
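P.S. In case the "swap" remark was too terse, here is roughly the shape of it (a simplified illustration; this toy 'variant' is a stand-in, not the real class and not Boost.Variant):

    #include <string>

    struct variant
    {
        std::string text;   // filled in once by the low-level socket reader
    };

    // Non-blocking by construction: the variant is already complete when
    // this runs, so all it does is hand the buffer over.
    std::string & operator>>( variant &v, std::string &t )
    {
        // No copy: the application string takes over the buffer that the
        // socket-reading code allocated.
        t.swap( v.text );
        return t;
    }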