
Hi Peter,
When an ActiveX object's dtor is called, I want _all_ activity related to that instance to stop. Other instances may be held by other threads, and I don't want all interfaces to be thread-safe. Anyway, that is a big part of why I avoid designs in which significant apparatus is globally managed.
This is a good point. I'll describe how I think your model is supposed to operate, because I can't reconcile it with some of your later statements, though.
The network object owns the pending callback queue and the worker threads. When this object is destroyed, all activity is cancelled, pending callbacks are lost or delivered, and the worker threads are stopped.
Very close. The network owns the worker threads, which own an object per socket, which in turn owns the callback boost::function<> (in the original it was not boost::function<> but its moral equivalent<g>). When the network is destroyed, it stops all worker threads, and that also waits out any callbacks in flight (they are made by the worker threads). The queue of functions is separate, on a per-thread basis. A very common thing to do was to pass an auto-enqueue wrapper function containing the real function and a ref to the destination queue:

    // called directly by worker thread (be careful!):
    strm->async_read(&this_type::method, ...);

    // called in the thread that opened the channel (easy):
    strm->async_read(channel.bind_async_call(&this_type::method), ...);
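To make that concrete, here is a minimal sketch of such an auto-enqueue wrapper (not the actual code: channel, bind_async_call and run_pending are just stand-ins, and the sketch takes an already-bound boost::function<> rather than a member function pointer):

    #include <boost/function.hpp>
    #include <boost/bind.hpp>
    #include <boost/thread/mutex.hpp>
    #include <deque>

    class channel
    {
    public:
        typedef boost::function<void ()> callback;

        // Wrap 'real' so that invoking the result merely enqueues it
        // instead of running it in the worker thread.
        callback bind_async_call(callback real)
        {
            return boost::bind(&channel::enqueue, this, real);
        }

        // Called by the thread that opened the channel to run whatever
        // the worker threads have queued up since the last call.
        void run_pending()
        {
            for (;;)
            {
                callback cb;
                {
                    boost::mutex::scoped_lock lock(mutex_);
                    if (queue_.empty()) return;
                    cb = queue_.front();
                    queue_.pop_front();
                }
                cb();   // runs in the caller's thread, not the worker's
            }
        }

    private:
        void enqueue(callback cb)
        {
            boost::mutex::scoped_lock lock(mutex_);
            queue_.push_back(cb);
        }

        boost::mutex mutex_;
        std::deque<callback> queue_;
    };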
So, at one level, the serial line is just a stream. Over that stream, one can layer an entire network model, including "me" and "thee" addressing. :)
Yes. You do know that this applies to every stream, of course. You can have a network_over_stream adaptor and layer a network over a TCP connection, a named pipe, or over stdin/stdout.
Of course.
This doesn't make any of these streams networks, and neither is a communication port a network.
I won't debate semantics here (too much<g>): Once the line is up, communications between A and B occur in exactly the same way as they would with TCP/IP. Both ends "listen" for incoming stream connections on multiple ports (each for a different "service"), each can accept one or more datagrams on different ports, again for different services. In a nutshell, they are abstractly identical. The big difference is that the entire network consists of two hosts.
The network-over-stream is a good example that demonstrates the strength of your network-centric design. Addresses are naturally network-dependent and do not have a meaning outside of the context of a particular network.
This of course leads to the obvious question: which network gives me a stream over COM1?
network_ptr com1 = new net_over_stream( new stream_over_serial("com1"));
LPT1?
network_ptr lpt1 = new net_over_stream( new stream_over_lpt("lpt1"));
A named pipe?
network_ptr pnet = new net_over_stream( new pipe_stream("\\\\server\\pipe\\foo"));
An arbitrary UNIX file descriptor?
Something like the above, but this only works for bidirectional streams.
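For illustration only, here is roughly how such an adaptor could multiplex several "ports" over one bidirectional stream. The frame format and the names (stream, send_datagram, recv_datagram) are invented for the sketch, and it ignores endianness and struct padding:

    #include <cstddef>
    #include <vector>

    struct stream            // minimal abstraction over COM1, a pipe, ...
    {
        virtual ~stream() {}
        virtual void read (void * p, std::size_t n) = 0;   // read exactly n bytes
        virtual void write(void const * p, std::size_t n) = 0;
    };

    struct frame_header
    {
        unsigned short port;     // which "service" on the peer
        unsigned long  length;   // payload size in bytes
    };

    void send_datagram(stream & s, unsigned short port,
                       void const * data, std::size_t n)
    {
        frame_header h = { port, static_cast<unsigned long>(n) };
        s.write(&h, sizeof h);
        s.write(data, n);
    }

    unsigned short recv_datagram(stream & s, std::vector<char> & payload)
    {
        frame_header h;
        s.read(&h, sizeof h);
        payload.resize(h.length);
        if (h.length) s.read(&payload[0], h.length);
        return h.port;           // caller dispatches to the right "service"
    }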
No, the problem is that you are abusing the URI syntax. :-)
Perhaps, but see next point. ;)
A Universal Resource Identifier is universal. It is not context-dependent and completely identifies a resource.
The resource is located under the primary key of <scheme>, as in the syntax:

    <scheme>:<scheme-specific-part>

One must know the scheme in order to understand the rest. If not, please enlighten me on the part of the RFC to which you are referring (I want to fix my understanding if it is misled).
Your addresses are network-dependent and cannot be used outside of the context of their network. They may look like URIs, but they are not.
Please see the previous point. Why is this universal: "http://www.boost.com"? It is only universal in the sense that it represents a host on the Internet. That fact must be inferred from "http", as in "oh, http is an Internet thing, and the rest is 'user:pswd@host:port/path?query' because that's what we all agree on". That simply means a common lexicon of schemes and their interpretation. Again, unless I am missing something.
async_poll is basically what your library does at the moment. It acts as if there were an implicit async_poll call after a network is created.
I understand your meaning now. It isn't "exactly" what my network object does now (it has no queue).
In a network-centric model, BTW, poll and async_poll would be member functions of the network class.
Certainly true of poll(). The async_poll() is a bit different than what I had considered, so I will try to ponder that further.
I can't fit this into my understanding of how your library is organized. Callback queues and worker threads are network-specific. You can't layer a higher-level poll facility on top of these. I must be missing something.
I did post a couple messages describing what I am proposing with regards to the higher-level library, so I won't repeat much here. Internally, I have no queue. It is roughly like this:

    void worker_thread::main()
    {
        fd_set rs, ws, es;
        while (!stop)
        {
            net->load_balance();
            for (sock in my collection)
                add to appropriate set;
            select();
            for (sock in my collection)
                make progress on I/O & do callbacks;
        }
    }

Give or take some locks (almost never contended) and paranoia. It has exactly the same efficiency as doing this manually because no queue is used internally. The next iteration of the loop will know exactly what I/O is needed because the callback will have issued those requests. That is, unless the callback is queued for later or elsewhere.
My interpretation is somewhat different than yours. I believe that some folks want to use select/epoll whenever this model is more efficient, not just because it's uber or single-threaded.
Given the example above, I don't see how it could be much more efficient, beyond the fact that the callbacks are using boost::function<> and copies of those objects might add up. I won't try to put words into others' mouths, so I will let them speak up as to whether your take or mine is accurate (we'll probably hear that both are, from different people<g>).
It can't invalidate the contract. Your current contract is that callbacks are run in an unspecified thread. With poll, the application layer can specify the thread. I don't see how this can possibly contradict the assumptions of the protocol library.
Currently, the contract is that _another_ thread is used; one managed by the network. Meaning that I could spin on a condvar while waiting if I were presenting a blocking interface (and started outside the network worker thread context). This would break with the pure single-threaded network. Of course, the right answer might be to define the semantics as you stated and allow room for this choice in the network impl.
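To illustrate the contract I'm leaning on, the blocking-interface idiom looks roughly like this (a sketch with invented names, not the real code). It relies on the callback arriving from another thread; with a pure single-threaded, poll-driven network the wait below deadlocks, because the thread that has to run the callbacks is the one stuck in wait():

    #include <boost/thread/condition.hpp>
    #include <boost/thread/mutex.hpp>

    struct blocking_read_helper
    {
        boost::mutex     mutex;
        boost::condition cond;
        bool             done;

        blocking_read_helper() : done(false) {}

        // registered as the async_read completion callback,
        // invoked by a network worker thread
        void on_read_complete(/* error, bytes transferred, ... */)
        {
            boost::mutex::scoped_lock lock(mutex);
            done = true;
            cond.notify_one();
        }

        // called by the application thread that wants blocking semantics
        void wait()
        {
            boost::mutex::scoped_lock lock(mutex);
            while (!done)
                cond.wait(lock);
        }
    };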
- At the app layer, the developer sometimes wants to choose single-threaded(!) vs. don't care but deliver it on this thread please vs. use whatever thread context is best + use my quad CPUs please + minimize context switches (aka "give me your best shot, I can take it").
Can you give an example of a developer that absolutely insists on a single-threaded implementation (not interface)?
I think Iain Hanson is in that camp. Could be wrong.
I think that the distinction is: invoke callbacks in whichever thread suits you best vs invoke callbacks in _this_ thread please, my application is not reentrant. And regardless of which I've chosen, give me your best shot under these requirements. No need to confine yourself to a single thread/CPU _unless this will be faster_.
That would be my preference. Except that I see it on a per-operation basis, so that libraries can make their own choice. I have a protocol at work that uses sync writes and async reads and would be hard-pressed to go fully async.
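Something like the following is the shape I have in mind (names invented, not a proposal for actual signatures): each async operation says where its completion should be delivered, so a protocol library keeps the behaviour it was written against no matter what the application chose globally.

    // per-operation delivery hint, purely illustrative
    enum delivery_hint
    {
        deliver_any_thread,      // whatever worker thread is handy
        deliver_caller_thread    // queue it; the opening thread will run it
    };

    // e.g. an overload next to the existing call:
    //   strm->async_read(buffer, handler, deliver_caller_thread);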
- Middle-level protocol libraries (like SSL or HTTP) must not be presented with different behaviors from the abstract interfaces. If they follow the rules (TBD), they should continue to work regardless of the choice made by the application developer.
Do you have a specific behavior in mind?
Yes, see above. :)

Best regards,
Don