
Hi Giovanni,

--- "Giovanni P. Deretta" <gpderetta@gmail.com> wrote: <snip>
Promising. I think that the most important innovation of asio is the dispatcher concept and the async call pattern. Everything else is just *extra*. It should probably be split out of asio in some way and made a separate Boost library (or maybe just rename asio to something like "asynch lib" or whatever).
The dispatcher concept could certainly be reimplemented in a separate library without being coupled to I/O. <snip>
First of all, the name "socket" is too closely tied to the BSD socket API. I think that asio should go well beyond it and not limit itself to the system socket types (not that it does, but the names might seem to imply that). This is just a personal preference though.
I have deliberately retained the feel of the BSD socket API because it is so widely covered in the literature, even to the extent that it is followed in other programming languages' networking libraries. I'm going to reorder the next bits because I think there is a valuable idea here...
Also I think there should be *no* asio::stream_socket. This is my major complaint. The current stream socket should be in the namespace asio::ipv4, and should be a different type from, for example, an eventual asio::posix_pipe::stream_socket or asio::unix::stream_socket. Especially the last can currently be trivially implemented by defining the appropriate protocol. As it stands, a stream_socket initialized with an ipv4::tcp protocol will interoperate with a stream_socket initialized with a unix::stream protocol. For example, currently I can use my hypothetical unix::stream with ipv4::resolver. Asio should not be type unsafe only because the BSD API is. Of course both should share the same code, but that should be an implementation detail. ... This also means that the binding with the demuxer should *not* be done at creation time, but at open time.
I don't know why I didn't think of this before! It's actually a small change to the interface overall, but I do believe it gives a net gain in usability.

Basically the protocol class can become a template parameter of basic_stream_socket. Then, for example, the asio::ipv4::tcp class would be changed to include a socket typedef:

  class tcp
  {
  public:
    ...
    class endpoint;
    typedef basic_stream_socket<tcp> socket;
  };

Then in user code you would write:

  asio::ipv4::tcp::socket sock;

Now any constructor that takes an io_service (the new name for demuxer) is an opening constructor. So in basic_stream_socket you would have:

  template <typename Protocol, ...>
  class basic_stream_socket
  {
    ...
    // Non-opening constructor.
    basic_stream_socket();

    // Opening constructor.
    explicit basic_stream_socket(io_service_type& io,
        const Protocol& protocol = Protocol());

    // Explicit open.
    void open(io_service_type& io,
        const Protocol& protocol = Protocol());
    ...
  };

This basic_stream_socket template would in fact be an implementation of a Socket concept. Why is this important? Because it improves portability by not assuming that a Protocol::socket type will actually be implemented using the platform's sockets API. Take the Bluetooth RFCOMM protocol for example:

  namespace bluetooth {
    class rfcomm
    {
      ...
      class endpoint;
      typedef implementation_defined socket;
    };
  } // namespace bluetooth

Here the socket type is implementation defined, because although on some platforms it can be implemented using BSD sockets (e.g. Windows XP SP2), on others it requires a different API or a third party stack.

The only wrinkle is in something like accepting a socket. At the moment the socket being accepted must not be open. That's fine, except that I think it is important to allow a different io_service (demuxer) to be specified for the new socket than the one used by the acceptor, to allow partitioning of work across io_service objects. I suspect the best way to do this is to have overloads of the accept function:

  // Use same io_service as acceptor.
  acceptor.accept(new_socket);

  // Use separate io_service for new socket.
  acceptor.accept(new_socket, other_io_service);

An alternative is to require the socket to be opened before calling accept (this is what Windows does with AcceptEx), but I think that makes the common case less convenient. <snip>
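To illustrate, a user-defined protocol of the kind you mention (unix::stream) might be written something like this. It's only a sketch: I'm assuming the Protocol concept requires family(), type() and protocol() members to feed the underlying socket() call, and the endpoint class (which would wrap sockaddr_un) is omitted:

  #include <sys/socket.h> // AF_UNIX, SOCK_STREAM

  namespace unix {

    class stream
    {
    public:
      // Values handed to the underlying socket() call.
      int family() const { return AF_UNIX; }
      int type() const { return SOCK_STREAM; }
      int protocol() const { return 0; }

      // Protocol-specific endpoint type (would wrap sockaddr_un).
      class endpoint;

      // A socket type distinct from asio::ipv4::tcp::socket.
      typedef asio::basic_stream_socket<stream> socket;
    };

  } // namespace unix

Since unix::stream::socket and ipv4::tcp::socket are then distinct types, the accidental mixing you describe (say, passing a unix socket to ipv4::resolver) becomes a compile error.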
By the way, the implementation of socket functions (accept, connect, read, write, etc) should not be members of the demuxer service but free functions or, better, static members of a policy class.
I do think these functions at the lowest level should be member functions rather than free functions, particularly because in the case of async functions it makes it clearer that the result will be delivered through the associated io_service (demuxer).
The buffer concept does not really add much; simply using void pointers and size_t length parameters might require less conceptual overhead. The buffers don't add much safety (that should be the responsibility of higher layers, along with buffer management), and in practice don't help a lot with scatter/gather I/O: often you can't reuse an existing vector of iovecs, because you need to do an operation from/to the middle of it and thus you need to build a temporary one. As the iovector is usually small (16 elements maybe?) and can be stack allocated, I think that having a global buffer type is a premature optimization.
I'm not sure I understand you here. Since these operations use the Mutable_Buffers and Const_Buffers concepts, you don't have to use std::vector, but can use boost::array instead to build the list of temporary buffers on the stack.
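For example, to read into two stack-allocated regions in one scatter operation (a sketch, with error handling omitted):

  char header[8];
  char body[256];

  // A fixed-size, stack-allocated sequence that satisfies the
  // Mutable_Buffers concept.
  boost::array<asio::mutable_buffer, 2> bufs = {{
    asio::buffer(header, sizeof(header)),
    asio::buffer(body, sizeof(body))
  }};

  std::size_t n = sock.read(bufs);

No heap allocation is involved; the array of buffers lives entirely on the stack, just as with a hand-built iovec.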
Instead there should be a *per stream type* buffer list, because a custom stream might use a different iovector implementation than the system one (io_vec or whatever). It should have at least push_back(void*, size_t) and size() members.
I don't see that this buys anything. You still want to be able to specify the memory regions to be operated on in terms of void*/size_t. The buffer classes are equivalent to this, but with the sharp edges removed.
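To illustrate what I mean by sharp edges: the two calls below describe the same memory, but in the second the size is deduced from the array type (assuming the array overload of the buffer function), so it can never disagree with the real size:

  char data[128];

  // Raw form: the length is repeated by hand and can go out of
  // sync with the array.
  sock.write(asio::buffer(data, 128));

  // Deduced form: the size comes from the type itself.
  sock.write(asio::buffer(data));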
Scatter/gather operations should accept an iterator to the first element of the operation (usually the first element of the vector).
The Mutable_Buffers and Const_Buffers concepts are already a pair of iterators for this reason. I'll explain below, but in fact I think the buffers as used in asio already allow an implementation of lazy buffer allocation as you request.
The current interface lets the user put a socket into non-blocking mode, but there is not much that can be done with that, because no reactor is exported.
I think non-blocking mode can still be useful in asynchronous and synchronous designs, since it allows you to issue an operation opportunistically.
The various reactor/proactor implementations should be removed from the detail namespace and promoted to public interfaces, albeit in their own namespaces (e.g. win32::iocp_proactor, posix::select_reactor, linux::epoll_reactor, etc.). This change would make the library a lot more useful.
Over time perhaps, but these are already undergoing changes as part of performance changes (without affecting the public interface). They are also secondary to the portable interface, so there are many costs associated with exposing them.
The buffered stream is almost useless. Any operation requires two copies, one from kernel to user space, and one from the internal buffer to the user buffer.
Well yes, but you're trading off the cost of extra copies for fewer system calls.
The internal buffer should be unlimited in length (using some kind of deque)
I don't think it is helpful to allow unlimited growth of buffers, especially with the possibility of denial of service attacks.
and accessible to eliminate copies. An interface for I/O that does not require copying would be generally useful and not limited to buffered streams.
In the past I exposed the internal buffer, but removed it as I was unhappy with the way it was presented in the interface. I'm willing to put it back if there is a clean, safe way of exposing it. <snip>
I think that the buffer function does not support vectors with custom allocators, but I might be missing something.
Good point, I'll fix that. <snip>
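The change should just be a matter of templating the vector overloads on the allocator rather than assuming std::allocator, along these lines (a sketch only; the real overloads also have to handle const vectors):

  // Accept any allocator; elements are still assumed to be POD,
  // as with the existing overloads.
  template <typename T, typename Allocator>
  mutable_buffer buffer(std::vector<T, Allocator>& data)
  {
    return buffer(data.size() ? &data[0] : 0,
        data.size() * sizeof(T));
  }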
While I didn't use all of it (no timers nor SSL), as an experiment I did write a simple continuation library using asio::demuxer as a scheduler and the asio callback pattern to restart coroutines waiting for I/O. The cleanness of asio's demuxer and its callback guarantees made the implementation very straightforward. If someone is interested I may upload the code somewhere. Currently it is POSIX only (it uses the makecontext family of system calls), but it should be fairly easy to add Win32 fiber support.
This sounds very interesting, especially if it was integrated with socket functions somehow so that it automatically yielded to another coroutine when an asynchronous operation was started. <snip>
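To sketch the sort of integration I have in mind (the coroutine class with its yield() and resume() operations is hypothetical, error handling is omitted, and I'm assuming the usual (error, bytes_transferred) handler signature), a read that looks blocking to the caller could be built on the async call pattern like so:

  // Completion handler that records the byte count and resumes
  // the coroutine that is waiting for the operation.
  struct resume_handler
  {
    resume_handler(coroutine& c, std::size_t& n)
      : coro(c), bytes(n) {}

    void operator()(const asio::error& /*e*/, std::size_t n) const
    {
      bytes = n;
      coro.resume();
    }

    coroutine& coro;
    std::size_t& bytes;
  };

  // Looks like a blocking read to the caller, but yields back to
  // the demuxer loop while the operation is in flight.
  template <typename Stream, typename Buffers>
  std::size_t coro_read(coroutine& coro, Stream& s, Buffers bufs)
  {
    std::size_t bytes = 0;
    s.async_read(bufs, resume_handler(coro, bytes));
    coro.yield();
    return bytes;
  }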
- Lazy allocation of buffers. Instead of passing a buffer or a list of buffers to an async I/O function, a special allocator is passed. When the buffer is needed (before the async syscall on real proactors, and before the non-blocking call but after polling on emulated proactors), the allocator is called and returns a buffer to be used for the operation. The buffer is then passed to the handler. This eliminates the need to commit buffer memory for inactive connections. It mostly makes sense for reading, but it might be extended to writing too. This optimization makes it possible to add...
With a slight tightening of the use of the Mutable_Buffers concept, I think this is already possible :) The Mutable_Buffers concept is an iterator range, where the value_type is required to "be a mutable_buffer or be convertible to an instance of mutable_buffer". The key word here is "convertible". As far as I can see, all you need to do is write an implementation of the Mutable_Buffers concept using a container of some hypothetical lazy_mutable_buffer class. It only needs to provide a real buffer at the time when the value_type (lazy_mutable_buffer) is converted to mutable_buffer.
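For instance, something like this (lazy_mutable_buffer and the pool type are hypothetical):

  class lazy_mutable_buffer
  {
  public:
    lazy_mutable_buffer(buffer_pool& pool, std::size_t size)
      : pool_(pool), size_(size) {}

    // The implementation converts the value_type to mutable_buffer
    // only when it is about to perform the transfer, so the
    // allocation is deferred until the memory is actually needed.
    operator asio::mutable_buffer() const
    {
      return asio::buffer(pool_.allocate(size_), size_);
    }

  private:
    buffer_pool& pool_;
    std::size_t size_;
  };

A boost::array<lazy_mutable_buffer, 1> (or any other container of them) would then satisfy the Mutable_Buffers concept.

Thanks very much for your comments!

Cheers,
Chris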