
Don G wrote:
The reason, I suppose, that I preferred not-yet-connected as a state of the stream (vs. a separate class) is that it is logically a 1-to-1 relationship (and even the same socket), whereas the acceptor is a 1-to-N relationship (the acceptor socket never becomes something else).
I don't see the connector as a 1-to-1 concept. You can use a connector to establish several connections to the same address or endpoint if that is needed. The connector pattern can also model failover between different endpoints providing the same service, by having a service_connector implementation that wraps and coordinates several alternate connectors to different endpoints, potentially over different transports. This also hides/abstracts the failover strategy and the selection of a new endpoint in a nice way from the protocol handler and the stream.
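To make that concrete, here is a minimal sketch of the idea (the names service_connector, connector and stream are hypothetical, just to show the shape, not from any actual library):

    #include <memory>
    #include <vector>

    struct stream { /* a connected byte stream */ };

    struct connector {
        virtual ~connector() {}
        virtual std::unique_ptr<stream> connect() = 0;  // null on failure
    };

    class service_connector : public connector {
        std::vector<std::unique_ptr<connector>> alternates_;
    public:
        explicit service_connector(std::vector<std::unique_ptr<connector>> alts)
            : alternates_(std::move(alts)) {}

        // The failover strategy (here simply first-that-works) is
        // hidden from the protocol handler and the stream.
        std::unique_ptr<stream> connect() override {
            for (auto& c : alternates_)
                if (auto s = c->connect())
                    return s;
            return nullptr;
        }
    };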
The other reason I went with the approach I have is that there is only one object to deal with for cancel. I don't need to worry about the transition from connecting to connected when I want to abort.
The exact ownership of the socket/handle is always clear: the connector owns it during the connection phase, and once connected, ownership is transferred to the stream. So I don't see this as a potential problem.
I would find this more difficult. With an acceptor, I am doing one thing (accepting connections), but with _each_ stream connection, I am doing one thing. In this respect, the socket approach feels right: an acceptor is a thing, and a not-yet-connected stream is also its own thing (which may become connected eventually).
Separating out connection establishment would make the stream more stateless and give it fewer concerns, and it separates responsibilities, keeping the interfaces simpler. It also doesn't imply inheritance, as is the case with a connectable stream. And you don't have to handle questions such as whether a stream can be connected again after it is closed, and so forth (and I guess this could differ between implementations).
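A minimal sketch of the ownership handoff, assuming plain POSIX sockets (tcp_connector and tcp_stream are made-up names for illustration):

    #include <sys/socket.h>   // socket(), connect()
    #include <netinet/in.h>   // sockaddr_in
    #include <unistd.h>       // close()
    #include <memory>

    class tcp_stream {
        int fd_;
    public:
        explicit tcp_stream(int fd) : fd_(fd) {}    // takes ownership
        tcp_stream(const tcp_stream&) = delete;     // sole owner
        ~tcp_stream() { ::close(fd_); }
        // read()/write() go here; no connecting/connected state to track.
    };

    class tcp_connector {
    public:
        // The connector owns the descriptor only while connecting; on
        // success it hands it to the stream, on failure it closes it.
        std::unique_ptr<tcp_stream> connect(const sockaddr_in& addr) {
            int fd = ::socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) return nullptr;
            if (::connect(fd, reinterpret_cast<const sockaddr*>(&addr),
                          sizeof addr) != 0) {
                ::close(fd);                        // still ours
                return nullptr;
            }
            return std::unique_ptr<tcp_stream>(new tcp_stream(fd));
        }
    };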
Some platforms have socket features that aren't available on other platforms. Unix has signal-driven I/O, while Windows does not. Windows has event-object-based I/O, completion ports, HWND messages, etc., which are unique to it. The common intersection is blocking, non-blocking and select. One can write 99% portable sockets code based on that subset.
So basically layer 0 should support this portable subset.
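To illustrate how thin that layer could be, here is roughly what the portable subset looks like for a single operation, switching a socket to non-blocking mode (same concept, different per-platform spelling):

    #ifdef _WIN32
    #include <winsock2.h>
    int set_nonblocking(SOCKET s) {
        u_long mode = 1;                     // 1 = non-blocking
        return ::ioctlsocket(s, FIONBIO, &mode);
    }
    #else
    #include <fcntl.h>
    int set_nonblocking(int fd) {
        int flags = ::fcntl(fd, F_GETFL, 0);
        if (flags < 0) return -1;
        return ::fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }
    #endif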
Except that Windows has restrictions on select(). A single fd_set is limited to 64 sockets by default. You cannot mix sockets from different providers (which I understand to mean IPv4 vs. v6 vs. IPX vs. whatever). Also, select() is for sockets only: not files, not pipes, not fill_in_the_blank.
You can define FD_SETSIZE to some arbitrary number. On my FC1 box, FD_SETSIZE is 1024 after including select.hpp, so we will have to handle arbitrary limits in the interface. On Windows, select could be implemented using WSAEventSelect and WaitForMultipleObjects, but the wait would have to be cascaded to other threads if the set is too big (WaitForMultipleObjects takes at most 64 handles), and in that case handles of different kinds can be mixed. So either we cater for arbitrary limits, letting the user build on top of them, or we remove the arbitrary limits at the cost of complexity and probably increased thread switching.
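For what it's worth, the reason the define works on Windows but is dubious on Unix is the layout of fd_set itself:

    // Windows declares fd_set roughly as
    //   struct fd_set { u_int fd_count; SOCKET fd_array[FD_SETSIZE]; };
    // so defining FD_SETSIZE before <winsock2.h> enlarges the array:
    #define FD_SETSIZE 1024
    #include <winsock2.h>
    // On Unix, fd_set is usually a fixed-size bitmap baked into the
    // libc (1024 on my FC1 box), so FD_SETSIZE is effectively a hard
    // per-set limit there.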
Which brings me to notification via callback. Whenever user code is called by a library, there must be clear rules and expectations. One good way to make things difficult would be to make callbacks to user code from a signal handler. Of course, threads can also present a similar problem.
Yes, the rules must be really clear, and the library should probably never hold any kind of lock when calling a user-defined callback, since that is generally very error-prone and a recipe for deadlocks. But this also makes it really hard to implement ;) but also interesting.
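A minimal sketch of how that rule can be kept, assuming standard threads (notifier and the handler signature are made up for illustration):

    #include <functional>
    #include <mutex>
    #include <vector>

    class notifier {
        std::mutex mtx_;
        std::vector<std::function<void(int)>> handlers_;
    public:
        void subscribe(std::function<void(int)> h) {
            std::lock_guard<std::mutex> lock(mtx_);
            handlers_.push_back(std::move(h));
        }

        // Snapshot the handlers under the lock, then call them with
        // the lock released, so a callback that re-enters subscribe()
        // (or simply blocks) cannot deadlock the library.
        void notify(int event) {
            std::vector<std::function<void(int)>> snapshot;
            {
                std::lock_guard<std::mutex> lock(mtx_);
                snapshot = handlers_;       // copy while locked
            }
            for (auto& h : snapshot)
                h(event);                   // user code runs unlocked
        }
    };

/Michel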