
-----Original Message----- From: Matthew Vogt [mailto:mvogt@juptech.com] Sent: Wednesday, 3 March 2004 8:44 AM To: boost@lists.boost.org Subject: [boost] Re: [Threads] Simple active object wrapper, take 2
scott <scottw <at> qbik.com> writes:
Yep. All straight now. I now understand that your breakdown of calls to callbacks is very similar to that inherent in SDL. And yes, you/we/it are fully asynchronous.
I'm assuming you prefer this to the reactive object version? Which does a pretty good job of hiding the comms. Oh yeah, and it doesn't require selection of the method to call in either the recipient or the client (on response), i.e. at the point of making a method call.
Maybe we're down to a matter of taste?
Yes, I guess it depends on whether you're familiar/comfortable with the message passing paradigm, or whether you're more comfortable with C++ shenanigans to obscure the messy details.
It may be the sign of a warped mind, but queueing some form of bound functor-like object has almost always seemed to be (broadly) what I wanted to do in any message passing scheme, whether or not fancy machinery existed to do it. However, there are times when it is useful to have the messaging defined independently of either the sender or the receiver, with both being written to conform to the messaging interface. I've tended to think of the tightly/early bound "just queue a functor" approach and the message passing + switch case as two completely independent patterns, but maybe they should be built on a common framework (at a level above the detail that they both involve queueing some sort of object).

The AO pattern doesn't require that you identify the object you want to invoke the method on in the proxy, because you send the proxy to that object, and only that object. So one could drop the "this" parameter from the proxy. That would potentially allow the proxy to be routed/forwarded in a more general reactive object model (I think this was one of Scott's requirements)? I'm not 100% sure how useful this is, but it might be nice for the class of problem where you do want concurrent execution - of as many completely independent instances of the object as is "useful". The client/master doesn't care which instance - the next available one will do nicely.

Further, it would seem possible to exclude any binding to a particular member function from the proxy itself and instead allow "free proxies" to be defined. To avoid confusion, I'm going to drop the word "proxy" for these "free proxies" and call them "messages". These would just define a function signature to match, not a particular (member) function. It is not clear to me that the type of the "this" parameter need be defined by the message - excluding it would offer better separation between message sender and receiver. The sender need only know that it wants to notify or invoke something by sending the message. Such message types (just a tuple) could be defined with no reference to the wrapper or receiving object type at all.

As you have said, the downside is that there is some form of type-switch implied in the receiver (e.g. the receiver wrapper would accept a boost::variant<message_type_1, message_type_2...> from the activation list and apply a generic visitor to do the dispatch). Maybe this can be avoided if there is a separate queue per message type? The additional queue/scheduler complexity would probably wipe out any potential advantage, though. So a receiver wrapper generator would need to, given the set of message types and the corresponding member function pointers to handle them, generate the variant and corresponding visitor - something along the lines of the sketch below. I certainly like this better than a hand-coded switch, and it is something like a simplified/limited form of the compile-time (actually code generation time) generation of message demux code that is part of Scott's existing framework (I think - I only skimmed Scott's descriptions and possibly guessed too much from vaguely similar, but more primitive, things I've used before).
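To make that concrete, here is a rough sketch of the variant + visitor dispatch I have in mind. The message types, the receiver class and its handle() members are all invented for illustration, and the visitor is hand-written here although the wrapper generator would be expected to produce it:

    #include <boost/variant.hpp>
    #include <iostream>
    #include <queue>
    #include <string>

    // Invented message types - plain data, no reference to the receiver type.
    struct message_type_1 { int value; };
    struct message_type_2 { std::string text; };

    // An invented receiving object with one handler per message type.
    class receiver
    {
    public:
        void handle(const message_type_1& m) { std::cout << "int: " << m.value << '\n'; }
        void handle(const message_type_2& m) { std::cout << "text: " << m.text << '\n'; }
    };

    // The generic visitor the wrapper generator would produce; it replaces the
    // hand-coded switch by forwarding each message type to the matching handler.
    class dispatcher : public boost::static_visitor<void>
    {
    public:
        explicit dispatcher(receiver& r) : r_(r) {}
        template <typename Message>
        void operator()(const Message& m) const { r_.handle(m); }
    private:
        receiver& r_;
    };

    int main()
    {
        typedef boost::variant<message_type_1, message_type_2> any_message;
        std::queue<any_message> activation_list;

        activation_list.push(message_type_1{42});
        activation_list.push(message_type_2{"hello"});

        receiver r;
        dispatcher d(r);
        while (!activation_list.empty())
        {
            boost::apply_visitor(d, activation_list.front());
            activation_list.pop();
        }
    }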
I would postulate, however, that wrapping the details to allow binding at the call site (or message post) is slightly superior, due to the removal of the dispatch. Of course, this minor gain may easily be outweighed by the contortions required to make it work, and the reduction in the transparency of the code...
The tightly/call-site bound case can be written as a simple layer on top of the loose/late bound system. The message type becomes variant<proxy>, and the visitor for the one and only type dispatches to a wrapper function that just does the stuff that was done in the thread_function loop in your AO code. I'm not sure that for a reasonably small (any sane?) interface the performance difference due to the demux will be terribly large anyway, so maybe the specialisation isn't needed for that alone.

However, I've so far ignored the return value in this - the return type could obviously be defined in the message, but what should define how/where it should be delivered? It seems that the caller should define how/where it wants results delivered at the time of invoking the request - which it can do by passing some sort of result handler object in the message instance, roughly as in the sketch below. Of course this is what the synch object in your proxy is. Perhaps its interface can be made more generic to allow its operation to be policy based. Broadening the definition from "return value" to "message delivery notification", with the sender-supplied message delivery notifier object passed the return value from the handler, doesn't feel too weird to include at the low level, and shouldn't cost anything for a void return without notification. Interestingly, the option to generate a notification that a message with a handler returning void has been processed doesn't seem like a strange feature when looked at this way (a void future is a bit odd though).
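A minimal sketch of what I mean by a sender-supplied notifier. The message type, handler and notifier names are all made up for illustration; the point is only that the delivery policy travels with the message and costs nothing when it is absent:

    #include <boost/function.hpp>
    #include <iostream>

    // Invented message type: the sender supplies the notifier when posting the
    // request; the receiver's wrapper calls it with the handler's return value.
    struct compute_request
    {
        int operand;
        boost::function<void(int)> deliver_result;  // sender-supplied notifier
    };

    // An invented receiver-side handler.
    int handle(const compute_request& m)
    {
        return m.operand * 2;
    }

    // What the receiver wrapper would do after dequeuing the message.
    void process(const compute_request& m)
    {
        int result = handle(m);
        if (m.deliver_result)           // no notifier supplied: nothing to pay for
            m.deliver_result(result);
    }

    // A caller-chosen delivery policy - here just printing the result.
    void print_result(int v) { std::cout << "result delivered: " << v << '\n'; }

    int main()
    {
        compute_request r;
        r.operand = 21;
        r.deliver_result = &print_result;
        process(r);
    }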
I hadn't realised how integral the state machine was to your design until I read your other post. Still, if you are doing dispatch on both message code and object state, it can be simplified to bind the message to a per-message code dispatch function which then dispatches on object state.
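For example (purely illustrative names, and the state switch would live in whatever member the per-message dispatch is bound to):

    #include <iostream>

    // Invented example: the queued message is bound to a per-message handler
    // (on_open), and that handler then dispatches on the object's current state.
    class connection
    {
    public:
        enum state { idle, open };

        connection() : state_(idle) {}

        void on_open()                      // per-message dispatch target
        {
            switch (state_)                 // second-level dispatch on state
            {
            case idle: std::cout << "opening\n"; state_ = open; break;
            case open: std::cout << "already open, ignoring\n"; break;
            }
        }

    private:
        state state_;
    };

    int main()
    {
        connection c;
        c.on_open();   // the queued proxy/message would invoke this member
        c.on_open();
    }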
Characterizing reactive objects as a framework for state machines would be a little bit sad. I'm not sure it was your intention to say that, so just in case others are listening, I will elaborate from my POV.
Sorry, I read too much into the example code in the other message. I assumed the messaging would be an independent layer, but that the design you were using must have the communicating objects modelled as state machines. Thanks for clearing that up.
This observer didn't see the state machine aspect as the exclusive use, more an important use case reflected in the design. Would that be fair to say?
<snip>
The messaging facility has _no understanding_ of what the recipient does with a message. Proxies and state machines were intended to highlight the benefits of that design.
To summarize: the Reactive Objects that I currently consider our target would communicate via a similar mechanism, and there would be no assumption that a Reactive Object was a state machine, or a proxy, or anything except an object that accepts messages.
My questions that I've been thinking out loud about (always dangerous in public) were: What is a message? What is acceptance? Sorry this was so long - am I barking up the wrong tree here or not?
Yes, that all sounds good.
Cheers, Scott
Matt