RE: [boost] [Threads] Simple active object wrapper, take 2

[mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt
nice! only one item of feedback really. think that i follow your machinery and can envisage the "model of execution" - it's pretty slick. it's also somewhat different to the pattern :-| so i would wonder at the use of "active object"? is your model of execution the one that everyone is expecting of the previously unimplemented class? this does seem to be a class/framework for (very convenient) arrangement of concurrent method execution. which piece of the ActiveObject pattern does this map to? other than the "claiming of the name", it looks like it's got legs! ps: i s'pose anything declared in the private scope of "object" is thread safe?
// test_active_object.cpp --------------------------------------------------
#include "active_object.hpp"
using std::cin; using std::endl;
<big snip>

scott <scottw <at> qbik.com> writes:
only one item of feedback really. think that i follow your machinery and can envisage the "model of execution" - it's pretty slick. it's also somewhat different to the pattern
Really? I think it corresponds fairly closely to the pattern described in the paper by Lavender and Schmidt; it's a limited form of the *potential* pattern, in that it enforces a single execution thread for the active object. More complicated schemes could certainly be created, but not without creating constraints on the type to be wrapped. The reference paper describes the use of a single thread...
this does seem to be a class/framework for (very convenient) arrangement of concurrent method execution. which piece of the ActiveObject pattern does this map to?
Like this:

- the proxy functors map to the 'Proxy' objects
- the detail::task_descriptor objects map to the 'Method Request' objects
- the boost::thread member of the active<> wrapper maps to the 'Scheduler' object
- the detail::task_queue of the active<> wrapper maps to the 'Activation Queue' object
- the wrapped object maps to the 'Servant' object
- the future objects map to the 'Future' objects

Assuming I've followed the paper correctly, of course.
i s'pose anything declared in the private scope of "object" is thread safe?
No, only methods which are explicitly wrapped in 'proxy' functors are safe, but I suppose any access to the object's private details from the active<> wrapper's internal thread would be safe; also, the wrapped object is private to the wrapper and not accessible to clients. Matt

Matthew Vogt <mvogt <at> juptech.com> writes:
Really? I think it corresponds fairly closely to the pattern described in the paper by Lavender and Schmidt; it's a limited form of the *potential* pattern, in that it enforces a single execution thread for the active object. More complicated schemes could certainly be created, but not without creating constraints on the type to be wrapped. The reference paper describes the use of a single thread...
and replies to self: Actually, the paper does note that for performance reasons, the thread could be replaced with a thread pool. This requires support, however, in that the Method Request needs to be able to lock the resources needed to perform the task prior to doing so. In terms of my wrapper, the object wrapped needs to have 'hooks' for locking its resources, and the active<> wrapper class would need to have a policy class to perform the 'Scheduler' role, which knew enough to use the resource locking hooks. I considered adding a policy for this, then I would have to change the subject line to read 'Complex active object wrapper' :)
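The 'locking hooks' idea can be sketched roughly as follows. All names here are hypothetical and this is a minimal modern-C++ sketch, not the posted wrapper: the wrapped object exposes lock()/unlock() hooks, and a pool-based Scheduler policy brackets each Method Request with them so that several pool threads can safely share one servant.

```cpp
#include <mutex>

// Hypothetical servant exposing the resource-locking hooks a thread-pool
// scheduler policy would need.
struct lockable_servant {
    // Hooks the pool scheduler uses to serialise access to shared state.
    void lock()   { state_mutex_.lock(); }
    void unlock() { state_mutex_.unlock(); }

    void do_work(int n) { total_ += n; }   // must be called under lock()
    int total() const { return total_; }

private:
    std::mutex state_mutex_;
    int total_ = 0;
};

// What each pool worker would do for a dequeued Method Request: acquire the
// servant's resources first, perform the request, then release.
template <typename Servant, typename Task>
void run_task(Servant& servant, Task task) {
    servant.lock();
    task(servant);
    servant.unlock();
}
```

With hooks like these, the 'Scheduler' role becomes a policy choice: a single thread needs no locking at all, while a pool simply wraps every dequeued task in run_task.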

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt Sent: Friday, February 20, 2004 4:46 PM To: boost@lists.boost.org Subject: [boost] Re: [Threads] Simple active object wrapper, take 2
scott <scottw <at> qbik.com> writes:
only one item of feedback really. think that i follow your machinery and can envisage the "model of execution" - it's pretty slick. it's also somewhat different to the pattern
Really? I think it corresponds fairly closely to the pattern described in the paper by Lavender and Schmidt; it's a limited form of the *potential* pattern, in that it enforces a single execution thread for the active
yes. quite true. apologies for the previous bluntness. my only justification is that i can see a very useful chunk missing and my inability to articulate this is making even me laugh. and i am also having difficulty with my "shift" key...

This message is a response to all relevant messages through to your response of Saturday 21st.

In my opinion, the ActiveObject pattern can be divided into two operational aspects. One is where a "main thread" has one or more ActiveObjects available to it and the other is where ActiveObjects interact with each other. If these operational architectures were referred to as asymmetric and symmetric activation, respectively, then my goals with respect to implementation of the pattern are focused on "symmetric activation". Distinguishing these two architectures, I feel, is crucial.

Your latest implementation code includes "futures" _and_ "method requests". I acknowledge this (again :-). But there is no mechanism for the results of a "method request" to be returned _asynchronously_. For this reason I (and I am open to explanations of why I am wrong) view your implementation to be in the direction of asymmetric activation (AA).

To highlight the significance of this target (i.e. AA) I will return to the database server example that several contributors have made reference to. Let's say that the database server is most effective as a threaded ActiveObject. Let's also say that due to the architecture of the underlying interface to an RDBMS, there are significant throughput gains to be made with 4 slave threads. If a GUI app is the "main thread", the database server is a threaded ActiveObject and the slaves are threaded ActiveObjects (all examples of AA) then the result is non-optimal. It would not realise the advertised benefits of 4 slave threads as the database server must block waiting for completion of a slave method (e.g. evaluation of a future).
A cruder example of what I am trying to highlight is where a group of interacting ActiveObjects manage to create a "circular activation", i.e. A calls B, B calls C and C calls A. This is a fatal symptom of the underlying problem associated with AA.

I immediately concede that you may implement some sophisticated "event notification" between the database server and its slaves to solve this. You may even choose to not implement the slaves as ActiveObjects to give yourself the necessary "freedom". My response to this would be that the custom event notification is completely unnecessary. Application of an SA-focused ActiveObject removes the need for any such one-off mechanisms (and who wants to write those again, and again...).

Finally, having deployed (SA) ActiveObjects I would be disappointed to see any new mechanism for thread communication. Proof of a successful implementation of SA (IMHO) would be that it became the _only_ mechanism for inter-thread communication. Of course, in the real world, this is not going to happen but I would offer it as a noble intent :-)

I tentatively suggest that the ActiveObject that most of us want is the AA variety. We can see the objects exchanging the Method Requests in a symphony of optimal operation - in our heads. But between our heads and the "tools at hand" I believe the symphony becomes something else, primarily due to AA-based environments and associated culture.

Having made my case (as best I can) things now get muddy. Firstly I don't see any mechanism for the delivery of asynchronous results in the pattern! Please note that I wanted to verify this claim but have failed to connect to citeseer for the last two hours. In my ActiveWorld the same mechanism that is used to queue Method Requests (in your implementation - "tasks") is also used to queue results. While I can see the pragmatic value in including both AA and SA in all ActiveObject initiatives, I also wonder about the psychology of developers.
While implementation of SA ActiveObjects is a little bit more difficult I suspect there are other reasons that it doesn't "catch on". Firstly, we tend to shy away from new, foreign models of execution and secondly AA is always there to "fall back" on.

It might be interesting to note that my ActiveWorld implementation is _completely_ SA. The very pleasant surprise for me has been the successful manner in which it has been deployed in existing codebases. It may appear foreign but our best coding is naturally trying to emulate the SA ActiveObject.

Cheers, Scott

ps: I mentioned the "anything declared in private scope is thread-safe" to see if we were "on the same wavelength". After reading your response(s) I think the answer to that is "yes". Or am I deluding myself ;-) Any _data_ declared with private scope in the "struct object" would only be accessible to the methods of that same struct and those methods are only ever called by "boost::thread thread" in "class active". Voila! We don't need any mutexes around that data!

Sorry, but there was a fairly awful typo in the previous message. See below;
-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of scott Sent: Monday, February 23, 2004 10:44 AM To: 'Boost mailing list' Subject: RE: [boost] Re: [Threads] Simple active object
<snip>
Finally, having deployed (SA) ActiveObjects I would be disappointed to see any new mechanism for thread communication. Proof of a successful implementation of SA (IMHO) would be that it became the _only_ mechanism for inter-thread communication. Of course, in the real world, this is not going to happen but I would offer it as a noble intent :-)
I tentatively suggest that the ActiveObject that most of us want is the AA variety.
should have been SA (symmetric activation) not AA
We can see the objects exchanging the Method Requests in a symphony of optimal operation - in our heads. But between our heads and the "tools at hand" I believe the symphony becomes something else, primarily due to AA-based environments and associated culture.
Cheers, Scott

scott <scottw <at> qbik.com> writes:
In my opinion, the ActiveObject pattern can be divided into two operational aspects. One is where a "main thread" has one or more ActiveObjects available to it and the other is where ActiveObjects interact with each other.
If these operational architectures were referred to as asymmetric and symmetric activation, respectively, then my goals with respect to implementation of the pattern are focused on "symmetric activation".
Distinguishing these two architectures, I feel, is crucial. Your latest implementation code includes "futures" _and_ "method requests". I acknowledge this (again :-). But there is no mechanism for the results of a "method request" to be returned _asynchronously_. For this reason I (and I am open to explanations of why I am wrong) view your implementation to be in the direction of asymmetric activation (AA).
Well, ok. Futures (in my code, and in the pattern) do allow a caller to access a result asynchronously, if you define asynchronously to mean that the caller is free to pursue other interests before acquiring the result. What is lacking is the ability to multiplex over events relating to the delivery of future values. You can't use select, poll or WFMO to determine whether any of an outstanding set of futures have been delivered. I think that this probably makes active objects an inappropriate choice for designs that are inherently reactive.
To highlight the significance of this target (i.e. AA) I will return to the database server example that several contributors have made reference to.
Let's say that the database server is most effective as a threaded ActiveObject. Let's also say that due to the architecture of the underlying interface to an RDBMS, there are significant throughput gains to be made with 4 slave threads. If a GUI app is the "main thread", the database server is a threaded ActiveObject and the slaves are threaded ActiveObjects (all examples of AA) then the result is non-optimal. It would not realise the advertised benefits of 4 slave threads as the database server must block waiting for completion of a slave method (e.g. evaluation of a future).
You could apply the active object pattern to this design, but the simplest way to do so would be to constrain the slave threads to have void-returning methods, and to deliver the results of methods asynchronously, like this:

// Pseudocode
struct slave {
    void queueRequest(RequestData data, future<something>& futureResult)
    {
        // perform request
        server::deliverResult(data, futureResult);
    }
};

struct server {
    future<something> doSomething(void)
    {
        future<something> futureResult;
        slave::queueRequest(someRequest, futureResult);
        return futureResult;
    }

    void deliverResult(RequestData data, future<something>& futureResult)
    {
        futureResult.finalise(data);
    }
};

struct client {
    something someMethod(void)
    {
        return server::doSomething();
    }
};

(I think this is a variant on the 'half-sync, half-async' pattern?)

Ideally, though, the server is a reactive design and not terribly suited to the active object pattern.
A cruder example of what I am trying to highlight is where a group of interacting ActiveObjects manage to create a "circular activation", i.e. A calls B, B calls C and C calls A. This is a fatal symptom of the underlying problem associated with AA.
I immediately concede that you may implement some sophisticated "event notification" between the database server and its slaves to solve this. You may even choose to not implement the slaves as ActiveObjects to give yourself the necessary "freedom".
Well, yes, you could (and it could certainly be generic, I'm sure). But it doesn't seem to be in the spirit of the design. Active object is a trade-off between simplicity at the client, and suboptimal performance at the server.
My response to this would be that the custom event notification is completely unnecessary. Application of an SA-focused ActiveObject removes the need for any such one-off mechanisms (and who wants to write those again, and again...). Finally, having deployed (SA) ActiveObjects I would be disappointed to see any new mechanism for thread communication. Proof of a successful implementation of SA (IMHO) would be that it became the _only_ mechanism for inter-thread communication. Of course, in the real world, this is not going to happen but I would offer it as a noble intent
This is certainly achievable, but is it using the active object design? You can implement an exactly equivalent design using the platform AIO primitives, but this would yield code which was much harder to follow and implement than active objects communicating through function calls. If you want optimum throughput throughout the interacting components, I don't think active object is the pattern you're looking for.
I tentatively suggest that the ActiveObject that most of us want is the AA ............................<Mentally changed to 'SA', as per your followup> variety. We can see the objects exchanging the Method Requests in a symphony of optimal operation - in our heads. But between our heads and the "tools at hand" I believe the symphony becomes something else, primarily due to AA-based environments and associated culture.
Having made my case (as best I can) things now get muddy. Firstly I don't see any mechanism for the delivery of asynchronous results in the pattern! Please note that I wanted to verify this claim but have failed to connect to citeseer for the last two hours. In my ActiveWorld the same mechanism that is used to queue Method Requests (in your implementation - "tasks") is also used to queue results.
The asynchronous delivery mechanism is the conversion of the 'future' template object to its parameterised type. For example, the following blocks:

future<int> f = someActiveObject.someMethod(); // Non-blocking
int i = f; // Blocks until result delivered

Of course, the caller may not need the result yet, and can postpone the wait indefinitely:

std::vector<future<int> > results;
results.push_back(someActiveObject.someMethod()); // Non-blocking
... // Whatever
While I can see the pragmatic value in including both AA and SA in all ActiveObject initiatives, I also wonder about the psychology of developers. While implementation of SA ActiveObjects is a little bit more difficult I suspect there are other reasons that it doesn't "catch on". Firstly, we tend to shy away from new, foreign models of execution and secondly AA is always there to "fall back" on.
It might be interesting to note that my ActiveWorld implementation is _completely_ SA. The very pleasant surprise for me has been the successful manner in which it has been deployed in existing codebases. It may appear foreign but our best coding is naturally trying to emulate the SA ActiveObject.
What interface are you using to allow active objects to access results generated by other active objects?
Cheers, Scott
ps: I mentioned the "anything declared in private scope is thread-safe" to see if we were "on the same wavelength". After reading your response(s) I think the answer to that is "yes". Or am I deluding myself ;-) Any _data_ declared with private scope in the "struct object" would only be accessible to the methods of that same struct and those methods are only ever called by "boost::thread thread" in "class active". Voila! We don't need any mutexes around that data!
Yes, but that only applies to a single-threaded active object. Matt

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt Sent: Tuesday, February 24, 2004 12:06 PM To: boost@lists.boost.org Subject: [boost] Re: [Threads] Simple active object wrapper, take 2
scott <scottw <at> qbik.com> writes:
Well, ok. Futures (in my code, and in the pattern) do allow a caller to access a result asynchronously, if you define asynchronously to mean that the caller is free to pursue other interests before acquiring the result.
What is lacking is the ability to multiplex over events relating to the delivery of future values. You can't use select, poll or WFMO to determine whether any of an outstanding set of futures have been delivered. I think that this probably makes active objects an inappropriate choice for designs that are inherently reactive.
Yes, exactly. Is there no potential to leverage the mechanism you already have, i.e. "task"? It seems almost tragic that you have such a slick method of delivering "Method Requests" and that it can't be used to deliver "Method Results" asynchronously.

In another implementation this just involved storing some kind of "return address" and adding that to the queued object (task). On actual execution the thread (belonging to the active object) need only turn around and write the results back into the caller's queue.

This all assumes that we are discussing symmetric activation. And that we even want to go there :-)
You could apply the active object pattern to this design, but the simplest way to do so would be to constrain the slave threads to have void-returning
<snip>
(I think this is a variant on the 'half-sync, half-asynch' pattern?)
Ideally, though, the server is a reactive design and not terribly suited to the active object pattern.
<snip>
Well, yes, you could (and it could certainly be generic, I'm sure). But it doesn't seem to be in the spirit of the design. Active object is a trade-off between simplicity at the client, and suboptimal performance at the server.
<snip>
This is certainly achievable, but is it using the active object design? You can implement an exactly equivalent design using the platform AIO primitives, but this would yield code which was much harder to follow and implement than active objects communicating through function calls. If you want optimum throughput throughout the interacting components, I don't think active object is the pattern you're looking for.
<snip> I included the last few fragments of your message to give myself an opportunity to review them as a group. Yes, I understood most of your points.

Some of the confusion or gray areas in our exchange have been due to the mis-application (mine) of the ActiveObject pattern. The ActiveObject pattern was introduced early on and became a convenient vocabulary and point of reference. In truth it was never my target (I hadn't seen it previous to our discussion). It is totally impressive work but does duck a major nasty.

So, to hopefully clarify; I still feel there is a software abstraction that we (as developers) intuitively want. It is not completely delivered by the ActiveObject pattern (though I thank JG again for the reference). As mentioned in my previous message the pattern has no mechanism for asynchronous return of results. This is a source of wonderment (tending towards frustration) as (IMHO) this is the Missing Link. The frustration stems from the fact that all the machinery is inherently available having previously implemented the delivery of "Method Requests". Just "turn it around" and let the completing method deliver the results to the original caller :-)
The asynchronous delivery mechanism is the conversion of the 'future' template object to its parameterised type. For example, the following blocks:
future<int> f = someActiveObject.someMethod(); // Non-blocking
int i = f; // Blocks until result delivered
Of course, the caller may not need the result yet, and can postpone the wait indefinitely:
As I see it, futures is the closest that the asymmetric activation can get to "async-res". But it isn't close enough. When I refer to async-res I am typically referring to the delivery of results from one active object to another, where the receiving object is not required to enter into a (potentially) blocking call (e.g. evaluation of a future). I have tried to label this circumstance as "symmetric activation". This can only occur between two active objects <cringe>.

Perhaps we could leave behind the "active object" term and go for "reactive object"? Perhaps you now own the "active object" name and the abstraction that I continue to rave about lays claim to "reactive object"?
It might be interesting to note that my ActiveWorld implementation is _completely_ SA. The very pleasant surprise for me has been the successful manner in which it has been deployed in existing codebases. It may appear foreign but our best coding is naturally trying to emulate the SA ActiveObject.
What interface are you using to allow active objects to access results generated by other active objects?
The short answer is; the same interface used to deliver Method Requests. The long answer would definitely be long. I can post some code fragments if we are still swimming after the same stick?
methods
of that same struct and those methods are only ever called by "boost::thread thread" in "class active". Voila! We don't need any mutexes around that data!
Yes, but that only applies to a single-threaded active object.
<sigh, lowers shoulders> ;-)

In my ActiveWorld, no active object has more than a single thread. They are either no thread (i.e. servants) or single-thread (a scheduler deriving from the servant interface). Where an active object deploys "slave threads", these are also zero or one thread, i.e. they are also active objects. This may sound unnecessarily convoluted but this small object hierarchy provides the basis for building arbitrary hierarchies of threads. If that is what you want :-) The big plus being that there is only the one mechanism being used for inter-thread communication.

Within this object hierarchy and threading model (that I am making such a meal of) any data declared with private scope in "struct object" is thread-safe. Yes, because there is only ever the one thread :-)

It's been fun up to this point but perhaps your "code with legs" is viable as "active object" and that is a valid first phase? I could wheel my barrow off to the side for a while :-)

Cheers, Scott

scott <scottw <at> qbik.com> writes:
What is lacking is the ability to multiplex over events relating to the delivery of future values. You can't use select, poll or WFMO to determine whether any of an outstanding set of futures have been delivered. I think that this probably makes active objects an inappropriate choice for designs that are inherently reactive.
Yes, exactly. Is there no potential to leverage the mechanism you already have, i.e. "task"?
Yes, you can (if I now understand what you're asking correctly) - in the parlance of the code we've been discussing, you can achieve this by having all objects 'active' (with a thread), and having all functions as void functions that produce flow-on effects to other objects. Essentially, I send a message from my object to another object, and I receive the result not as a return value, but as a new message from the other object. But, as you point out later, we've now changed patterns. The limitation resulting from the unequal status of different event mechanisms is a fairly fundamental one. Is anyone working on this in a boost threads context?
It seems almost tragic that you have such a slick method of delivering "Method Requests" and that it cant be used to deliver "Method Results" asynchronously.
There isn't any intention to have a slick method of result propagation; the intention is to allow code which is not specialised for multi-threading to seamlessly invoke code which will be performed in another thread. Although I understand what you're looking for.
In another implementation this just involved storing some kind of "return address" and adding that to the queued object (task). On actual execution the thread (belonging to the active object) need only turn around and write the results back into the callers queue.
This implementation requires that all objects have a queue, which is another property of the 'reactive object' system you're describing, but can't work with the approach I've taken.
The ActiveObject pattern was introduced early on and became a convenient vocabulary and point of reference. In truth it was never my target (I hadn't seen it previous to our discussion). It is totally impressive work but does duck a major nasty.
So, to hopefully clarify; I still feel there is a software abstraction that we (as developers) intuitively want. It is not completely delivered by the ActiveObject pattern (though I thank JG again for the reference).
<snip>
When I refer to async-res I am typically referring to the delivery of results from one active object to another, where the receiving object is not required to enter into a (potentially) blocking call (e.g. evaluation of a future). I have tried to label this circumstance as "symmetric activation". This can only occur between two active objects <cringe>.
Ok. Having established that this is a different pattern of multithreading, what are the costs/benefits of symmetric activation?
What interface are you using to allow active objects to access results generated by other active objects?
The short answer is; the same interface used to deliver Method Requests. The long answer would definitely be long. I can post some code fragments if we are still swimming after the same stick?
Right; so all objects (even non-threaded ones) contain a task queue? (Or, perhaps 'interaction queue'?) When a request is queued to a non-threaded object, how does that request get processed?
It's been fun up to this point but perhaps your "code with legs" is viable as "active object" and that is a valid first phase? I could wheel my barrow off to the side for a while
Cheers, Scott
Yes, I don't think my code can be modified to incorporate your requirements. I am interested in your pattern, though. Perhaps a change of subject is in order? Matt

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt
Yes, exactly. Is there no potential to leverage the mechanism you already have, i.e. "task"?
Yes, you can (if I now understand what you're asking correctly) - in the parlance of the code we've been discussing, you can achieve this by having all objects 'active' (with a thread), and having all functions as void functions that produce flow-on effects to other objects.
Essentially, I send a message from my object to another object, and I receive the result not as a return value, but as a new message from the other object.
Yes! This is that recess that I couldn't quite scratch. What you describe is the essence of my "reactive objects". Still a bit more stretching required but that is "the guts of it". A crude application of this technique might have proxy methods called things such as;

<paste>
// Proxy declarations for all methods of the active object
proxy<void (int)> non_void_with_param;
proxy<void (void)> non_void_without_param;
..
proxy<void (int)> non_void_with_param_returned;
proxy<void (const char *)> non_void_without_param_returned;
..
<paste/>

An immediate reaction might go something like "but look at the overheads!". The plain truth is that for successful interaction between threads, something of this nature is a pre-requisite. It may as well be the mechanism that has already been crafted for the job.

If my scrappy example is taken "as is", the implied active objects would effectively be "hard-coded" to interact with each other. This is a non-viable design constraint that is relaxed by adding the "caller's address" to the queued "tasks". With this additional info the thread "returns" (i.e. "a new message from the other object") may be directed to any instance.

Please note: I'm not proposing the above "pairing of call and return proxies" as the path forward. It's only intended to further expose the essential technique.
The limitation resulting from the unequal status of different event mechanisms is a fairly fundamental one. Is anyone working on this in a boost threads context?
Well, hopefully for reactive objects we have reduced it to 1? :-) But to answer your question, no.
In another implementation this just involved storing some kind of "return address" and adding that to the queued object (task). On actual execution the thread (belonging to the active object) need only turn around and write the results back into the caller's queue.
This implementation requires that all objects have a queue, which is another property of the 'reactive object' system you're describing, but can't work with the approach I've taken.
On first appearance this looks to be the case. A bit more sleight of hand and there can be any number of reactive objects "serviced" by a single thread. Previous explanations of this have failed woefully so all I can do is direct you to the ActiveObject pattern, entities "Scheduler" and "Servant". The natural perception that reactive objects are "heavy on threads" is addressed beautifully by this section of the pattern. It appears that Schmidt et al. resolved many of our concerns long before we knew enough to start waving our arms (well, I can speak for myself at least). The pattern doesn't address asynchronous return of results and also doesn't quite give the entities a "final polish", i.e. the Scheduler entity needs to inherit the Servant interface.
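The Scheduler/Servant split being appealed to here can be sketched like so (hypothetical names; a minimal single-threaded sketch of the dispatch logic, with the scheduler's actual thread omitted): servants own no thread of their own, and one scheduler drains a single activation queue on behalf of any number of them.

```cpp
#include <deque>
#include <mutex>

// A reactive object: behaviour only, no thread of its own.
struct servant {
    virtual ~servant() {}
    virtual void handle(int msg) = 0;
};

// Owns the single activation queue; in a full implementation one thread
// would loop on dispatch_one() for all registered servants.
class scheduler {
public:
    void post(servant& to, int msg) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back({&to, msg});
    }
    bool dispatch_one() {     // body of the scheduler thread's loop
        pending p;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            if (queue_.empty()) return false;
            p = queue_.front();
            queue_.pop_front();
        }
        p.target->handle(p.msg);   // any number of servants, one thread
        return true;
    }
private:
    struct pending { servant* target; int msg; };
    std::mutex mutex_;
    std::deque<pending> queue_;
};
```

Since every servant's handle() runs in the one scheduler thread, the servants need no internal locking, which is exactly the "private scope is thread-safe" property discussed earlier in the thread.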
Ok. Having established that this is a different pattern of multithreading, what are the costs/benefits of symmetric activation?
Operationally the costs of SA tend towards zero, if you first accept that some form of inter-thread communication was required anyhow; i.e. SA doesn't itself _add_ this requirement. I think costs do exist in terms of the development cycle, e.g. changing of culture. The type of programming actually required is best defined IMHO as "signal processing" (e.g. SDL). It may take some effort to convince developers of desktop apps that they need to take a "signal processing" approach to the next version of their 80Mb CAD package.

The benefits? System responsiveness, maintainability of code, low software defect counts. The results are more likely to run correctly for long periods. There are difficulties when debugging: the traditional flow of control is lost.
Its been fun up to this point but perhaps your "code with legs" is viable as "active object" and that is a valid first phase? I could wheel my barrow off to the side for a while
Cheers, Scott
Yes, I don't think my code can be modified to incorporate your requirements. I am interested in your pattern, though.
Perhaps a change of subject is in order?
Will go that way if you really didnt want to do anything more with your active<object>. To me it seemed viable both as a phase and as the basis for reactive objects. But think thats up to you? Cheers, Scott

scott <scottw <at> qbik.com> writes:
Essentially, I send a message from my object to another object, and I receive the result not as a return value, but as a new message from the other object.
Yes! This is that recess that I couldn't quite scratch. What you describe is the essence of my "reactive objects". Still a bit more stretching required, but that is "the guts of it".
A crude application of this technique might have proxy methods named things such as:
<paste>
// Proxy declarations for all methods of the active object
proxy<void (int)> non_void_with_param;
proxy<void (void)> non_void_without_param;
..

proxy<void (int)> non_void_with_param_returned;
proxy<void (const char *)> non_void_without_param_returned;
..
<paste/>
An immediate reaction might go something like "but look at the overheads!". The plain truth is that for successful interaction between threads, something of this nature is a prerequisite. It may as well be the mechanism that has already been crafted for the job.
Yep, there's gotta be mutexes somewhere. The mechanism you're referring to is the task_queue of fully-parameterised method invocations? So, how precisely does this work? Say I have a scheduler that has performed some work via a method in a servant S, and that method produced a result of type int. I want to return that result to the caller C, but the caller may not be threaded (may not be associated with any scheduler). Does that mean that instead of queueing the response to the object, I will perform some type of registered action in C, in the same thread context as the method invocation of S? If not, and I place the result into a task_queue of some sort in C, how does another scheduler object become aware that C has a result in its queue, and that something should be done with it?
If my scrappy example is taken "as is", the implied active objects would effectively be "hard-coded" to interact with each other. This is a non-viable design constraint that is relaxed by adding the "caller's address" to the queued "tasks". With this additional info the thread "returns" (i.e. "a new message from the other object") may be directed to any instance.
The proxy objects I was using before assumed that the return would be passed via a future reference. Perhaps they could be adjusted for your pattern so that they invoked a method (or proxy) on the caller, of type 'boost::function<void (result_type)>', which is registered when the original proxy is invoked? E.g.

struct Callee : public SomeActiveObjectBase
{
    ...
    Proxy<int, void> accessValue;
};

struct Caller : public SomeActiveObjectBase
{
    void reportValue(int value)
    {
        cout << "Got " << value << " back from callee" << endl;
    }

    void someMethod(Callee* other)
    {
        other->accessValue(boost::bind(&Caller::reportValue, this));
    }
};
Please note: I'm not proposing the above "pairing of call and return proxies" as the path forward. It's only intended to further expose the essential technique.
The limitation resulting from the unequal status of different event mechanisms is a fairly fundamental one. Is anyone working on this in a boost threads context?
Well, hopefully for reactive objects we have reduced it to 1? But to answer your question, no.
Sorry, I meant unequal in that you can ::select on FDs, but not on boost:: mutexes, or any underlying implementation detail they might expose.
In another implementation this just involved storing some kind of "return address" and adding that to the queued object (task). On actual execution the thread (belonging to the active object) need only turn around and write the results back into the caller's queue.
This implementation requires that all objects have a queue, which is another property of the 'reactive object' system you're describing, but can't work with the approach I've taken.
On first appearance this looks to be the case. A bit more sleight of hand and there can be any number of reactive objects "serviced" by a single thread. Previous explanations of this have failed woefully so all I can do is direct you to the ActiveObject pattern, entities "Scheduler" and "Servant".
So you use callbacks, rather than queues?
The natural perception that reactive objects are "heavy on threads" is addressed beautifully by this section of the pattern. It appears that Schmidt et al. resolved many of our concerns long before we knew enough to start waving our arms (well, I can speak for myself at least).
I don't think threads are necessarily that heavy. Certainly, for long-lived apps that create/destroy threads mostly at setup and tear-down, thread numbers are hardly a major concern. In any case, lowered throughput or higher latency resulting from denying opportunities for concurrency will often be more noticeable than the overhead from the management of the concurrency.
The pattern doesn't address asynchronous return of results, and also doesn't quite give the entities a "final polish", i.e. the Scheduler entity needs to inherit the Servant interface.
Yeah, this would be nice, if you could make it stick.
Ok. Having established that this is a different pattern of multithreading, what are the costs/benefits of symmetric activation?
Operationally the costs of SA tend towards zero, if you first accept that some form of inter-thread communication was required anyhow; i.e. SA doesn't itself _add_ this requirement.
I think costs do exist in terms of the development cycle, e.g. a change of culture. The type of programming actually required is best described IMHO as "signal processing" (e.g. SDL). It may take some effort to convince developers of desktop apps that they need to take a "signal processing" approach to the next version of their 80Mb CAD package.
Perhaps I've been heading off on a tangent, but it actually sounds like what you want is a variant of boost::signal that could communicate between threads. In this design, the callback would have to seamlessly mutate into a decoupled method invocation like that in my Proxy objects. Actually, this sounds like an obvious thing for someone to have tried before...
The benefits? System responsiveness, maintainability of code, low software defect counts. The results are more likely to run correctly for long periods.
Can't complain about that.
There are difficulties when debugging. The traditional flow of control is lost.
Similarly to signal-based designs.
It's been fun up to this point, but perhaps your "code with legs" is viable as "active object", and that is a valid first phase? I could wheel my barrow off to the side for a while.
Cheers, Scott
Yes, I don't think my code can be modified to incorporate your requirements. I am interested in your pattern, though.
Perhaps a change of subject is in order?
Will go that way if you really didn't want to do anything more with your active<object>. To me it seemed viable both as a phase and as the basis for reactive objects. But I think that's up to you?
I'm open. Code is just code; anywhere it goes is fine with me. I just thought maybe the post's subject line should be changed :) I don't have a suggestion for a better description, though.
Cheers, Scott
Matt

Matthew Vogt <mvogt <at> juptech.com> writes:
Perhaps I've been heading off on a tangent, but it actually sounds like what you want is a variant of boost::signal that could communicate between threads. In this design, the callback would have to seamlessly mutate into a decoupled method invocation like that in my Proxy objects. Actually, this sounds like an obvious thing for someone to have tried before...
In fact, from the Boost.Signals FAQ: "2. Is Boost.Signals thread-safe? No. Using Boost.Signals in a multithreaded environment is very dangerous, and it is very likely that the results will be less than satisfying. Boost.Signals will support thread safety in the future." Has there been any activity on this front?

At 02:41 26/02/2004, you wrote:
scott <scottw <at> qbik.com> writes:
Essentially, I send a message from my object to another object, and I receive the result not as a return value, but as a new message from the other object.
Yes! This is that recess that I couldn't quite scratch. What you describe is the essence of my "reactive objects". Still a bit more stretching required, but that is "the guts of it".
A crude application of this technique might have proxy methods named things such as:
<paste>
// Proxy declarations for all methods of the active object
proxy<void (int)> non_void_with_param;
proxy<void (void)> non_void_without_param;
..

proxy<void (int)> non_void_with_param_returned;
proxy<void (const char *)> non_void_without_param_returned;
..
<paste/>
An immediate reaction might go something like "but look at the overheads!". The plain truth is that for successful interaction between threads, something of this nature is a prerequisite. It may as well be the mechanism that has already been crafted for the job.
Yep, there's gotta be mutexes somewhere.
The mechanism you're referring to is the task_queue of fully-parameterised method invocations? So, how precisely does this work? Say I have a scheduler that has performed some work via a method in a servant S, and that method produced a result of type int. I want to return that result to the caller C, but the caller may not be threaded (may not be associated with any scheduler). Does that mean that instead of queueing the response to the object, I will perform some type of registered action in C, in the same thread context as the method invocation of S?
If not, and I place the result into a task_queue of some sort in C, how does another scheduler object become aware that C has a result in its queue, and that something should be done with it?
Sorry I haven't been following this thread too closely (not enough time in the day, what with work as well). I thought it may be useful to post some pseudo code for a "pattern" that I'm using more and more, in case it's of any use or gives someone a new idea. [Sorry about the roughness and simplified code!]

class servant
{
public:
    servant(scheduler* s) : m_scheduler(s), m_queue_is_busy(false) {}

    void post_message(message* m)
    {
        criticalsection::lock lock(m_cs);
        m_queue.push(m);
        if (!m_queue_is_busy)
        {
            m_queue_is_busy = true;
            m_scheduler->post_activation_request(
                new callback<servant>(this, &servant::dispatch));
        }
    }

    void dispatch()
    {
        message* m = 0;
        {
            criticalsection::lock lock(m_cs);
            m = m_queue.pop();
        }

        // Do something with m
        delete m;

        {
            criticalsection::lock lock(m_cs);
            if (m_queue.empty())
                m_queue_is_busy = false;
            else
                m_scheduler->post_activation_request(
                    new callback<servant>(this, &servant::dispatch));
        }
    }

private:
    scheduler* m_scheduler;
    criticalsection m_cs;
    std::queue<message*> m_queue;
    bool m_queue_is_busy;
};

class scheduler
{
public:
    void post_activation_request(callbackbase* c)
    {
        criticalsection::lock lock(m_cs);
        m_queue.push(c);
    }

    // This function is run by a pool of threads (or, on win32, an iocp)
    void run()
    {
        while (!terminate)
        {
            callbackbase* c = 0;
            // wait for an entry on m_queue
            {
                criticalsection::lock lock(m_cs);
                c = m_queue.pop();
            }
            (*c)();  // calls the dispatch method of the relevant servant instance
            delete c;
        }
    }

private:
    criticalsection m_cs;
    std::queue<callbackbase*> m_queue;
    bool terminate;  // set at shutdown
};

For me this has a number of advantages:
- separation of threads and objects, i.e. no 1-thread-per-object
- a guarantee of only 1 scheduler thread inside a servant at any one time, so little effort is required by the servant to be thread-safe (unless communicating between servants or other threads, of course)
- I normally implement scheduler::run using a win32 io completion port, so there is minimal overhead from context switching etc., and it is highly scalable (my current project handles 2000 sockets with just a handful of threads)
- though I've shown messages being posted to a servant, they could be callback objects which call a servant function with parameters (in fact, in my current project I use both messages and callbacks)

I'm currently digesting the rest of this post; I haven't needed to implement any return values... yet!

Regards
Mark

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt
Yep, there's gotta be mutexes somewhere.
The mechanism you're referring to is the task_queue of fully-parameterised method invocations? So, how precisely does this work? Say I have a scheduler that has performed some work via a method in a servant S, and that method produced a result of type int. I want to return that result to the caller C, but the caller may not be threaded (may not be associated with any scheduler). Does that mean that instead of queueing the response to the object, I will perform some type of registered action in C, in the same thread context as the method invocation of S?
Damn. I can see why you ask the questions that you are asking, but the answers are very messy. I suspect it may be best to half-answer within the scope of your active<object> and then try to explain the "signal processing" equivalent.

At the point where the scheduler is holding the result of type int, it needs to be able to "deliver" it to the caller. The only way it can do this is to call a method on the appropriate object (instance), passing the results. Which method and which object? The scheduler can only get these from the queued task, i.e. the task must hold the method-to-run and the pointers necessary for the return delivery. This method will be another proxy (e.g. non_void_with_param_returned) that, in turn, queues the result. This decouples the scheduler from the recipient of the results; eventually the thread working on that task list will reach the "results task" and will run it.

How does the object and method address get into the task object? You may well ask, and this was (some of) the reason for my initial response. A by-product of working on "active objects" is a set of classes/templates that constitute a little thread-toolkit. This toolkit should facilitate the design and implementation of pretty much any snarling thread-beast, if that is what you need.

The "damn" part of my response stems from the fact that I believe the "Method Request" model is not viable as a model of execution for the thread toolkit. It is simply wrong, and bending it to this task is a dead end. In signal processing the equivalent to "Method Request" is "send". This primitive takes an object to send and the address of the destination (e.g. an active<object>):

    send( int_results, calling_object );

These calls are made from within the methods that you already have working, i.e. the non-proxy _real_ methods. The trick with "send" is that it can be coded to populate the "task" with all the appropriate object and member pointers, so that the receiving thread knows where it came from.
Pretty typical SDL classes will look like:

struct servant
{
    void send( signal_base &, servant_address );
};

struct db_slave : public servant
{
    // Example only!!!
    void operator()( signal_base &s )
    {
        switch( s.code() )
        {
        case int_results:
            send( other_thing, third_party );
            break;
        ..
        }
    }
};

void servant::send( signal_base &b, servant_address a )
{
    task t;           // Pseudo only, Matt!
    t.message = b;    // Payload
    t.source = this;  // Return address
    ..
    a.queue( t );     // Again - just pseudo
}

I really hope this helps, because I can see the inductive leap required to get over the chasm of despair.

I only have one more esoteric observation to make. The "Method Request" model of execution is nice and familiar to us. It works brilliantly - mostly. Return addresses are managed automagically for us. But return addresses on a CPU stack are pretty meaningless when it comes to inter-thread communication. With the "Method Request" model we are trying to "call" across thread boundaries - IMHO that's just wrong. The signal processing model (once implemented) auto-manages a different type of address; a type of address that can be meaningful with reference to multi-threading.

With a bunch more scaffolding, this is what my reactive_object code looks like:

int db_interface::transition( machine_state<READY>, typed_bind<interface_open> & )
{
    client.insert( sender() );
    send( status, sender() );
    return READY;
}

There is a whole lot of background missing here, but hopefully there are some key elements that reinforce this alternate "model of execution". Some notes:

db_interface: the class derived from scheduler (i.e. it's a thread) and servant
transition: overloaded name that handles different state+message combos
interface_open: the closest thing to a "method"
client: a set<> of servant_addresses
send: the signal processing primitive
status: an object suitable for sending, like "interface_open"
sender(): the address of the object that sent the "interface_open" and caused this transition to run
READY: one of several defined machine states (enum)

<sigh> Apologies if this is all badly presented. I have a huge body of code from which the ActiveObject portion would be impossible to remove. A major reason for starting or contributing to this thread is that I think my existing code desperately needs boostifying, and the best way (or even a good way) is beyond me. Hence our messages.

Hope this was useful. I suspect you're right and I should start with another subject, after thinking long and hard about the most constructive approach. Hmmmmmmmm.

Cheers, Scott

ps: the sample code from Mark Blewett looks very promising?

scott <scottw <at> qbik.com> writes:
At the point where the scheduler is holding the result of type int, it needs to be able to "deliver" it to the caller. The only way it can do this is call a method on the appropriate object (instance) passing the results. Which method and which object? The scheduler can only get these from the queued task, i.e. the task must hold the method-to-run and pointers necessary for the return delivery.
This method will be another proxy (e.g. non_void_with_param_returned) that in turn, queues the result. This decouples the scheduler from the recipient of the results; eventually the thread working on that task list will reach the "results task" and will run that.
So you have a callback, the invocation of which queues a message?
How does the object and method address get in the task object? You may well ask and this was (some of) the reason for my initial response.
A by-product of working on "active objects" is a set of classes/templates that constitute a little thread-toolkit. This toolkit should facilitate the design and implementation of pretty much any snarling thread-beast, if that is what you need.
The "damn" part of my response stems from the fact that I believe the "Method Request" model is not viable as a model of execution for the thread toolkit. It is simply wrong, and bending it to this task is a dead end.
I don't understand why you think this. I think the active object wrapper indicates that you can use the method request model to encapsulate the interactions between the threads. There's no reason why it can't be the interface layer on top of a message passing system, provided that the result does yield simpler interfaces.
In signal processing the equivalent to "Method Request" is "send". This primitive takes an object to send and the address of the destination (e.g. an active<object>);
send( int_results, calling_object )
These calls are made from within the methods that you already have working, i.e. the non-proxy _real_ methods.
The trick with "send" is that it can be coded to populate the "task" with all the appropriate object and member pointers so that the receiving thread knows where it came from. Pretty typical SDL
What is SDL? I parse it as the Simple DirectMedia Layer...
classes will look like;
<snipped>
I really hope this helps because I can see the inductive leap required to get over the chasm of despair.
Yes, this demonstrates what you're doing quite clearly. But it also demonstrates the weakness that the logic is switch-based, which is kind of what C++ was invented to circumvent... If you can bind the code-to-be-executed to the content of the message, then the need to inspect the message and make a decision is removed, and this is an advantage you could derive from a method request model. I do now understand your separation of threads from objects.
I only have one more esoteric observation to make. The "Method Request" model of execution is nice and familiar to us. It works brilliantly - mostly. Return addresses are managed automagically for us. But return addresses on a CPU stack are pretty meaningless when it comes to inter-thread communication. With the "Method Request" model we are trying to "call" across thread boundaries - IMHO that's just wrong.
I can't agree with this. The proxy objects in the active object wrapper demonstrate one way to pass call information between threads - I don't see why the return information cannot be managed in the same way... Of course, you can't return to an arbitrary place in the execution - method addresses would have to be used.
With a bunch more scaffolding this is what my reactive_object code looks like;
int db_interface::transition( machine_state<READY>, typed_bind<interface_open> & )
{
    client.insert( sender() );
    send( status, sender() );
    return READY;
}
This seems very nice, although it seems to be more about the state machine code than the communicating objects.
Apologies if this is all badly presented. I have a huge body of code from which the ActiveObject portion would be impossible to remove. A major reason for starting or contributing to this thread is that I think my existing code desperately needs boostifying, and the best way (or even a good way) is beyond me. Hence our messages.
I understand what you're doing now. What are the drawbacks you want to address in your code?
Cheers, Scott
ps: the sample code from mark blewett looks very promising?
Yes. Although from my point of view, it has the same drawback with switching on messages that you have. I wonder if the queue of messages in the servant class could be a queue of fully-bound function calls, generated by proxy objects which describe interfaces? (Yes, I am quite enamoured of that idea :)) Matt.

[mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt Sent: Saturday, February 28, 2004 2:03 PM
This method will be another proxy (e.g. non_void_with_param_returned) that in turn, queues the result. This decouples the scheduler from the recipient of the results; eventually the thread working on that task list will reach the "results task" and will run that.
So you have a callback, the invocation of which queues a message?
Sorry - yes. "Callback" means something specific and different in my world, but I can see that your use is correct.
The "damn" part of my response stems from the fact that I believe the "Method Request" model is not viable as a model of execution for the thread toolkit. It is simply wrong and bending it to this task is a deadend.
I don't understand why you think this. I think the active object wrapper indicates that you can use the method request model to encapsulate the interactions between the threads. There's no reason why it can't be the interface layer on top of a message passing system, provided that the result does yield simpler interfaces.
I should be careful here that I'm not just adding confusion. In this context "Method Request" and "synchronous method call" are used to refer to the same thing: a model of execution involving (typically) use of a machine stack and pointer to implement automatic "jump and return" behaviour.

Servant code (i.e. the callbacks :-) is executed in response to the de-queueing of messages. If such a callback were to make a "Method Request" to another active object, how does your model of execution cope with this? In the SDL world the callback typically terminates and you enter a "waiting for the next message" state (just like a GUI message pump). That next message is hopefully the result of your previous request, but could be many things, including error messages. The closest thing to a "return address" in this model is the active object that sends a message, i.e. every message sent is augmented with who the sender was.

In the ActiveObject (pattern) model the return address is the address of the next machine instruction after the Method Request. What were you going to do with that? Continue execution? But you would be forced to wait for the async return of results. Do you have some way of suspending the current callback and allowing others to run? Then, when the response arrives, resuming execution at that machine code address? Of that particular execution frame (there may be multiple pending instances)? That would be something :-)

I do remember some code that used setjmp/longjmp in the old days, to implement LWT'ing. It was actually good and did achieve something like what I describe above. But I mention that as one of those things we used to do when dinosaurs roamed. And we didn't know any better.
What is SDL? I parse it as the Simple DirectMedia Layer...
Yes, this demonstrates what you're doing quite clearly. But, it also demonstrates the weakness that the logic is switch-based, which is kind of what C++ was invented to circumvent...
Erm, yes. Admirable, but for some strange reason they haven't removed the keyword yet. Until they do I will exploit the efficiencies therein. Have I shown you my collection of "goto"s yet? ;-) In my particular circumstance there is _no way_ to implement this switch as a compile-time activity (even though I so wanted to). The major reason is that I (well, actually a scheduler) must dispatch the message to a callback based on a runtime integer. Integer? Small set of known possible values? Well, switch will do for the moment :-)
If you can bind the code-to-be-excuted to the content of the message, then the need to inspect the message and make a decision is removed, and this is an advantage you could derive from a method request model.
Yes, I see where you are coming from. I suspect that our directions and understandings separate around "model of execution" (machine call vs SDL send) and that this is another symptom. If you can make it work then I think I can see that the result would be "more intuitive".
With a bunch more scaffolding, this is what my reactive_object code looks like;

int db_interface::transition( machine_state<READY>, typed_bind<interface_open> & )
{
    client.insert( sender() );
    send( status, sender() );
    return READY;
}
This seems very nice, although it seems to be more about the state machine code than the communicating objects.
Thank you. That could be the best (unintentional) compliment I could hope for. If you have other code that accepts an inbound object, remembers who sent it and responds with an object containing the latest status - and all across thread boundaries - and a reviewer's response is essentially "where has the comms gone", then my work here is done :-)
I understand what you're doing now. What are the drawbacks you want to address in your code?
Thanks for the question. I will roll the response into a new subject. I think.

scott <scottw <at> qbik.com> writes:
I should be careful here that I'm not just adding confusion. In this context "Method Request" and "synchronous method call" are used to refer to the same thing. A model of execution involving (typically) use of a machine stack and pointer to implement auto "jump and return" behaviour.
Yes, I realise I've also been confusing things here. There are two different aspects to a 'method request': the C++ syntax for requesting the act, and the underlying machine code generated to perform the act. What I want to see here is the former retained through overloading of operator(), while transparently transforming the code generated into something that works across thread boundaries.
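A minimal, single-threaded sketch of that idea might look like this (the names `servant`, `method_proxy` and `run_pending` are invented for illustration, and the scheduler thread is replaced by an explicit drain call):

```cpp
#include <cassert>
#include <functional>
#include <queue>

// Single-threaded stand-in for the activation queue; in the real wrapper
// the active object's own thread would drain it.
std::queue<std::function<void()>> activation_queue;

struct servant {
    int value = 0;
    void add(int n) { value += n; }   // the ordinary synchronous method
};

// The proxy keeps the method-call syntax, but operator() enqueues the
// work instead of executing it in the caller's thread.
template <typename Object, typename Arg>
struct method_proxy {
    Object* target;
    void (Object::*method)(Arg);
    void operator()(Arg a) {
        Object* t = target;
        void (Object::*m)(Arg) = method;
        activation_queue.push([t, m, a] { (t->*m)(a); });
    }
};

// Drain the queue; stands in for the scheduler thread's loop.
void run_pending() {
    while (!activation_queue.empty()) {
        activation_queue.front()();
        activation_queue.pop();
    }
}
```

The caller still writes what looks like a method call, but nothing executes until the queue is drained in the object's own thread.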
Servant code (i.e. the callbacks :-) is executed in response to the de-queueing of messages. If such a callback were to make a "Method Request" to another active object, how does your model of execution cope with this? In the SDL world the callback typically terminates and you enter a "waiting for the next message" state (just like a GUI message pump). That next message is hopefully the result of your previous request, but could be many things, including error messages.
Yes, I envisage things in the same way, except that I want the messages to be automatically routed to a function that can deal with them, rather than having to go through a message-inspection switch. In this way there is no 'next message' in a sequential sense - the next message that arrives in the object's queue will go to its pre-defined destination, and when the response you're referring to arrives, it too will go to its own destination.
The closest thing to a "return address" in this model is the active object that sends a message, i.e. every message sent is augmented with who the sender was. In the ActiveObject (pattern) model the return address is the address of the next machine instruction after the Method Request.
Yes. Although, the sender could bind a return address into the call in this model also - it would have to be the address of a method in the sender. This would allow the automatic routing that I think is preferable to messages. This model is still working by passing messages, mind, but the messages have extra data with them to skip the dispatch processing.
What were you going to do with that? Continue execution? But you would be forced to wait for the async return of results. Do you have some way of suspending the current callback and allowing others to run? Then when the current response arrives, resuming execution at that machine code address? Of that particular execution frame (there may be multiple pending instances)?
I'm not suggesting anything other than a message queue wait loop.
Yes, this demonstrates what you're doing quite clearly. But, it also demonstrates the weakness that the logic is switch-based, which is kind of what C++ was invented to circumvent...
Erm, yes. Admirable, but for some strange reason they haven't removed the keyword yet. Until they do I will exploit the efficiencies therein. Have I shown you my collection of "goto"s yet?
I'm not referring to inherent harmful-ness, just that switching to dispatch messages is more cumbersome and error-prone than binding the dispatch address into the message. Provided your syntax makes the binding clean and easy, of course.
In my particular circumstance there is _no way_ to implement this switch as a compile-time activity (even though I so wanted to). The major reason is that I (well, actually a scheduler) must dispatch the message to a callback based on a runtime integer.
Integer? Small set of known possible values? Well, switch will do for the moment
I must be missing something here. I can't see why you must have:

client: send message 5 to server
server: if (message == 5) perform function

instead of:

client: send message 5 to server.handler_for(5)
server: perform function

which can be made equivalent to the semantically easier:

client: invoke "service" on server
server: perform function

Obviously this is a trivialisation, but I don't see what I'm missing.
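The contrast can be sketched concretely (a hypothetical example; the names `server`, `handle_message` and `deliver` are invented, and the queueing is elided):

```cpp
#include <cassert>
#include <functional>
#include <string>

struct server {
    std::string log;
    void service() { log += "service;"; }
    void other()   { log += "other;"; }

    // Style 1: the receiver inspects a message code and switches on it.
    void handle_message(int code) {
        switch (code) {
            case 5:  service(); break;
            default: other();   break;
        }
    }
};

// Style 2: the sender binds the code-to-be-executed into the message,
// so the receiver just invokes it, with no inspection step.
using message = std::function<void()>;

void deliver(message m) { m(); }   // stands in for the receiver's queue loop
```

Both styles end up running the same function; the difference is where the decision is made.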
If you can bind the code-to-be-executed to the content of the message, then the need to inspect the message and make a decision is removed, and this is an advantage you could derive from a method request model.
Yes, I see where you are coming from. I suspect that our directions and understandings separate around "model of execution" (machine call vs SDL send) and that this is another symptom. If you can make it work then I think I can see that the result would be "more intuitive".
Yes. I hope I've cleared up my understanding of the model of execution?
Thank you. That could be the best (unintentional) compliment I could hope for. If you have other code that accepts an inbound object, remembers who sent it and responds with an object containing the latest status - and all across thread boundaries - and a reviewer's response is essentially "where has the comms gone", then my work here is done
Not unintentional :) But, as I see it, you have removed the comms *mechanism* from visibility. To go one step further would be to model the comms purely as method requests (in the sense of operator() ), and remove the comms existence from visibility...
I understand what you're doing now. What are the drawbacks you want to address in your code?
Thanks for the question. I will roll the response into a new subject. I think.
I look forward to seeing your next message. Matt

Hi Matthew, Hopefully my recent message (new subject "Reactive Objects") clears the air. Some specific responses;
[mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt Sent: Tuesday, March 02, 2004 12:01 PM
Yes, I envisage things in the same way. Although with the exception that I want the messages to be automatically routed to a function that can deal with them, rather than having to go through a message inspection switch. In this way, there is no 'next message' in a sequential sense - the next message that arrives in the object's queue will go to its pre-defined destination, and when the response you're referring to arrives, it too will go its own destination.
The closest thing to a "return address" in this model is the active object that sends a message, i.e. every message sent is augmented with who the sender was. In the ActiveObject (pattern) model the return address is the address of the next machine instruction after the Method Request.
Yes. Although, the sender could bind a return address into the call in this model also - it would have to be the address of a method in the sender. This would allow the automatic routing that I think is preferable to messages. This model is still working by passing messages, mind, but the messages have extra data with them to skip the dispatch processing.
Hmmmmm. We are either still confused or we have different targets. Since I exist in the former continuum, the following is mostly for me.

If there were a callback (called by thread after de-queueing a message) that went something like;

db_client::on_next( ... )   // User pressed the "next" button
{
    if (db_server.next( current_record ) == END)
    {
        // Wrap to the beginning
        db_server.first( current_record );
    }
}

How do you "bind a return address" to the point just after the "call" to "db_server.next"? Even if you do manage this syntactically, is the call to "on_next" suspended somehow? Obviously the same "suspension" would have to occur for the call to "db_server.first".

I suspect that you are thinking (and have been doing your best to tell me) that the "asynchronous calls" will be effected by the threads in each active object. They will simply "chain"? If that is the case then I see some difficulties. The least would be the loss of throughput relative to "true" async calls. More importantly, I think you might be calling back into an object (e.g. the db_server calling a response method in a client) that otherwise has no idea that the original request has been completed. As a "model of operation" this seems at least as foreign as the "fully async" model that SDL pushes?
I'm not referring to inherent harmful-ness, just that switching to dispatch messages is more cumbersome and error-prone that binding the dispatch address into the message. Provided your syntax makes the binding clean and easy, of course.
Understood. Recent message hopefully explains my use of "switch", i.e. it is always auto-generated code.
client: send message 5 to server
server: if (message == 5) perform function

instead of:

client: send message 5 to server.handler_for(5)
server: perform function

which can be made equivalent to the semantically easier:

client: invoke "service" on server
server: perform function

Obviously this is a trivialisation, but I don't see what I'm missing.
In your version of things the sender is selecting the code to be executed in the recipient. This is the fundamental difference. In my version the client cannot assume anything about the receiver.

There are perfectly valid examples of "active objects" in my ActiveWorld that _must_ be able to receive _anything_. One example is a proxy object that accepts any message and forwards it across a network connection to an active object in a remote process.

Also, the active object may (or may not :-) be a state machine. The sender (i.e. a client) cannot truly know which method to call, as the correct method is state-dependent. The runtime switch issue that exists for this circumstance is similar to the issue that exists for selection of method call vs switching on message. One strategy involves knowledge of the recipient, the other doesn't. Well, other than that all active objects are capable of receiving messages :-)
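The "must receive anything" case can be sketched with a type-erased payload (a hypothetical illustration; `opaque_message` and `forwarding_proxy` are invented names, and the network send is replaced by recording the message):

```cpp
#include <any>
#include <cassert>
#include <string>
#include <vector>

// The payload is opaque to the proxy; it needs no knowledge of the
// recipient's interface, only the ability to pass the message on.
struct opaque_message {
    std::any payload;
};

struct forwarding_proxy {
    // Stands in for "forward across a network connection"; here the
    // messages are just recorded in order.
    std::vector<opaque_message> wire;
    void receive(opaque_message m) { wire.push_back(std::move(m)); }
};
```

Because the proxy never inspects the payload, any message type at all can flow through it unchanged.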
Yes. I hope I've cleared up my understanding of the model of execution?
For sure. Cheers, Scott

scott <scottw <at> qbik.com> writes:
Hi Matthew,
Hopefully my recent message (new subject "Reactive Objects") clears the air.
Yup. I'll reply to that when I've read a couple more times... :)
Yes. Although, the sender could bind a return address into the call in this model also - it would have to be the address of a method in the sender. This would allow the automatic routing that I think is preferable to messages. This model is still working by passing messages, mind, but the messages have extra data with them to skip the dispatch processing.
Hmmmmm. We are either still confused or we have different targets. Since I exist in the former continuum, the following is mostly for me.
If there were a callback (called by thread after de-queueing a message) that went something like;
db_client::on_next( ... )   // User pressed the "next" button
{
    if (db_server.next( current_record ) == END)
    {
        // Wrap to the beginning
        db_server.first( current_record );
    }
}
How do you "bind a return address" to the point just after the "call" to "db_server.next"? Even if you do manage this syntactically, is the call to "on_next" suspended somehow? Obviously the same "suspension" would have to occur for the call to "db_server.first".
You can't (as far as I know, anyway). So the return address has to be a method in the object that invoked the call. The client has to be refactored to (pseudo-code):

db_client::process_next( ... next_record )
{
    if ( next_record == END )
    {
        // Wrap to the beginning
        return_to( process_next ) = db_server.first( current_record );
    }
    else
    {
        ... // do something
    }
}

db_client::on_next( ... )
{
    return_to( process_next ) = db_server.next( current_record );
}

This assumes that the return_to<> proxy can be created to update the other proxy (the method request emulator) to bind the return address...
I suspect that you are thinking (and have been doing your best to tell me) that the "asynchronous calls" will be effected by the threads in each active object. They will simply "chain"?
Not sure what you mean by 'chain'...
If that is the case then I see some difficulties. The least would be the loss of throughput relative to "true" async calls. More importantly I think you might be calling back into an object (e.g. the db_server calling a response method in a client) that otherwise has no idea that the original request has been completed. As a "model of operation" this seems at least as foreign as the "fully async" model that SDL pushes?
I don't think I mean what I think you think I mean. Taking away the proxies, the above pseudo-code would become:

db_client::on_next( ... )
{
    //return_to( process_next ) = db_server.next( current_record );
    message m;
    m.sender = this;
    m.address = bind(&db_server_class::get_next_record, db_server);
    m.arg1 = current_record;
    m.return_address = bind(&db_client::process_next, this);
    db_server.enqueue_message(m);
}

and the underlying message_handler code (from which both the db_client and db_server_class classes derive) looks like:

message_handler::process_message_queue()
{
    message& m = dequeue_message();

    // client request
    if ( m.return_address )
    {
        message result;
        result.sender = this;
        result.address = m.return_address;
        result.arg1 = m.invoke();
        m.sender.enqueue_message(result);
    }
    else
    {
        m.invoke();
    }
}

In both cases, the objects are merely dequeueing messages and doing whatever processing the message requires, possibly generating other messages as a result (directly, as a return message, or indirectly as a side effect). Therefore, it is fully async. The 'method request' interface is only being used in the creation and binding of messages.
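A compilable, single-threaded rendering of that round trip might look like the following (a sketch only: the threading is elided, `std::function` stands in for the bind machinery, and the class names simply mirror the pseudo-code):

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

struct message_handler;

// The message carries both the work to perform and a bound return
// address (a method in the sender), so the reply needs no dispatch.
struct message {
    message_handler* sender = nullptr;
    std::function<int()> invoke;               // work at the recipient
    std::function<void(int)> return_address;   // bound method in the sender
};

struct message_handler {
    std::queue<message> q;
    void enqueue_message(message m) { q.push(std::move(m)); }
    void process_message_queue() {
        while (!q.empty()) {
            message m = std::move(q.front());
            q.pop();
            if (m.return_address) {
                int value = m.invoke();
                // Route the result back as another queued message; it
                // goes straight to the bound method, fully async.
                message result;
                result.sender = this;
                result.invoke = [cb = m.return_address, value] {
                    cb(value);
                    return 0;
                };
                m.sender->enqueue_message(std::move(result));
            } else {
                m.invoke();
            }
        }
    }
};

// Hypothetical client/server pair built on the handler above.
struct db_server_class : message_handler {
    int current = 0;
    int get_next_record() { return ++current; }
};

struct db_client : message_handler {
    std::vector<int> received;
    void process_next(int record) { received.push_back(record); }
};
```

Each object only ever drains its own queue; the request and the reply are both just messages.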
In your version of things the sender is selecting the code to be executed in the recipient. This is the fundamental difference. In my version the client cannot assume anything about the receiver. There are perfectly valid examples of "active objects" in my ActiveWorld that _must_ be able to receive _anything_. One example is a proxy object that accepts any message and forwards it across a network connection to an active object in a remote process.
Yes - I was assuming a published interface from the (re)active objects, although you do present a good example of somewhere this can't be done. For this individual case, you can ditch the method-emulating proxies and deal with the underlying message queues directly.
Also, the active object may (or may not) be a state machine. The sender (i.e. a client) cannot truly know which method to call, as the correct method is state-dependent. The runtime switch issue that exists for this circumstance is similar to the issue that exists for selection of method call vs switching on message. One strategy involves knowledge of the recipient, the other doesn't. Well, other than that all active objects are capable of receiving messages
I hadn't realised how integral the state machine was to your design until I read your other post. Still, if you are doing dispatch on both message code and object state, it can be simplified to bind the message to a per-message code dispatch function which then dispatches on object state. Matt
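That simplification might be sketched like this (a hypothetical illustration; `reactive_object`, `on_open` and `deliver` are invented names): the message is bound to one dispatch function per message kind, and only that function switches on the object's state.

```cpp
#include <cassert>
#include <functional>
#include <string>

enum state { READY, BUSY };

struct reactive_object {
    state current = READY;
    std::string log;

    // One dispatch function per message kind; the per-message-code
    // switch is gone, only the per-state decision remains.
    void on_open() {
        switch (current) {
            case READY: log += "opened;";   current = BUSY; break;
            case BUSY:  log += "rejected;"; break;
        }
    }
};

// The message carries which dispatch function to run on arrival.
using bound_message = std::function<void(reactive_object&)>;

void deliver(reactive_object& obj, bound_message m) { m(obj); }
```

The sender still knows nothing about the object's state; it only names the message, and the state-dependent behaviour stays inside the object.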

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt
<snip>
How do you "bind a return address" to the point just after the "call" to "db_server.next"? Even if you do manage this
<snip>
You can't (as far as I know, anyway). So the return address has to be a method in the object that invoked the call. The client has to be refactored to (pseudo-code):
db_client::process_next( ... next_record )
{
    if ( next_record == END )
    {
        // Wrap to the beginning
        return_to( process_next ) = db_server.first( current_record );
    }
    else
    {
        ... // do something
    }
}

db_client::on_next( ... )
{
    return_to( process_next ) = db_server.next( current_record );
}
This assumes that the return_to<> proxy can be created to update the other proxy (the method request emulator) to bind the return address...
<snip> Yep. All straight now. I now understand your breakdown of calls to callbacks is very similar to that inherent in SDL. And yes, you/we/it are fully asynchronous.
the above pseudo-code would become:
db_client::on_next( ... )
{
    //return_to( process_next ) = db_server.next( current_record );
    message m;
    m.sender = this;
    m.address = bind(&db_server_class::get_next_record, db_server);
    m.arg1 = current_record;
    m.return_address = bind(&db_client::process_next, this);
    db_server.enqueue_message(m);
}
I'm assuming you prefer this to the reactive object version? Which does a pretty good job of hiding the comms :-) Oh yeah, and doesn't require selection of the method to call in either the recipient or the client (on response), i.e. at the point of making a method call. Maybe we're down to a matter of taste? <snip>
In your version of things the sender is selecting the code to be executed in the recipient. This is the fundamental difference. In my version the client cannot assume anything about the receiver. There are perfectly valid examples of "active objects" in my ActiveWorld that _must_ be able to receive _anything_. One example is a proxy object that accepts any message and forwards it across a network connection to an active object in a remote process.
Yes - I was assuming a published interface from the (re)active objects, although you do present a good example of somewhere this can't be done. For this individual case, you can ditch the method-emulating proxies and deal with the underlying message queues directly.
Also, the active object may (or may not) be a state machine. The sender (i.e. a client) cannot truly know which method to call, as the correct method is state-dependent. The runtime switch issue that exists for this circumstance is similar to the issue that exists for selection of method call vs switching on message. One strategy involves knowledge of the recipient, the other doesn't. Well, other than that all active objects are capable of receiving messages
I hadn't realised how integral the state machine was to your design until I read your other post. Still, if you are doing dispatch on both message code and object state, it can be simplified to bind the message to a per-message code dispatch function which then dispatches on object state.
Characterizing reactive objects as a framework for state machines would be a little bit sad. I'm not sure it was your intention to say that, so just in case others are listening, I will elaborate from my POV.

The messaging facility that I believe is a fundamental requirement of the alternate threading model (i.e. Reactive Objects?) has no understanding of state machines. It was a requirement of the design that it did not. Proof of success has been the subsequent ability to implement a wide variety of active objects, e.g.;

* windows, dialogs and controls,
* worker threads on a server,
* TCP connections (connected and accepted),
* proxy objects,
* database server state machines,
* virtual machine threads.

The messaging facility has _no understanding_ of what the recipient does with a message. Proxies and state machines were intended to highlight the benefits of that design.

To summarize: the Reactive Objects that I currently consider our target would communicate via a similar mechanism, and there would be no assumption that a Reactive Object was a state machine, or a proxy, or anything except an object that accepts messages.

Cheers, Scott

scott <scottw <at> qbik.com> writes:
Yep. All straight now. I now understand your breakdown of calls to callbacks is very similar to that inherent in SDL. And yes, you/we/it are fully asynchronous.
I'm assuming you prefer this to the reactive object version? Which does a pretty good job of hiding the comms. Oh yeah, and doesn't require selection of the method to call in either the recipient or the client (on response), i.e. at the point of making a method call.
Maybe we're down to a matter of taste?
Yes, I guess it depends on whether you're familiar/comfortable with the message passing paradigm, or whether you're more comfortable with C++ shenanigans to obscure the messy details. I would postulate, however, that wrapping the details to allow binding at the call site (or message post) is slightly superior, due to the removal of the dispatch. Of course, this minor gain may easily be outweighed by the contortions required to make it work, and the reduction in the transparency of the code...
I hadn't realised how integral the state machine was to your design until I read your other post. Still, if you are doing dispatch on both message code and object state, it can be simplified to bind the message to a per-message code dispatch function which then dispatches on object state.
Characterizing reactive objects as a framework for state machines would be a little bit sad. I'm not sure it was your intention to say that, so just in case others are listening, I will elaborate from my POV.
Sorry, I read too much into the example code in the other message. I assumed the messaging would be an independent layer, but that the design you were using must have the communicating objects modelled as state machines. Thanks for clearing that up. <snip>
The messaging facility has _no understanding_ of what the recipient does with a message. Proxies and state machines were intended to highlight the benefits of that design.
To summarize: the Reactive Objects that I currently consider our target would communicate via a similar mechanism, and there would be no assumption that a Reactive Object was a state machine, or a proxy, or anything except an object that accepts messages.
Yes, that all sounds good.
Cheers, Scott
Matt

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt
I felt more of your points needed further acknowledgement;
This is certainly achievable, but is it using the active object design? You can implement an exactly equivalent design using the platform AIO primitives, but this would yield code which was much harder to follow and implement than active objects communicating through function calls. If you want optimum throughput throughout the interacting components, I don't think active object is the pattern you're looking for.
I agree with (most of) what you say. As mentioned recently, however, I started this thread before I knew of active objects. My goal is and has been the same from the beginning.

Introduction of the ActiveObject pattern into this thread has been something of a mixed blessing. It's a thoroughly considered piece of work and well presented. But (IMHO) it is also a perfect reflection of The ACE Orb (and CORBA); it persists with the "method call" model of execution. Schmidt wrote ActiveObject and Schmidt wrote TAO. Of course OMG wrote CORBA, so maybe this model of execution truly started back with them? But that is all another line of research.

You are right; I am outside the scope of the ActiveObject pattern. Another phrasing of that might be: the ActiveObject pattern runs out of steam in a crucial aspect of the work being considered (?).

You lost me a little with the option of "using the platform AIO primitives". I can imagine something vaguely similar is possible, but the interaction of the objects in my head is not bound to IO. A majority of interactions are simply "go ahead" signals from one object to another - where is the IO in that?

In some senses, I can concede the "harder to follow" comment. Some of the truth in that is due to the shift in execution model, i.e. did you find it easy adding the "++" to your "C"? My functional programming is non-existent; I'm still wallowing in the wake of Alexandrescu! OTOH, I have recently converted several sub-systems in a large, distributed, client/server product to "the other side". Code complexity has plummeted. I can't give you formal metrics, but an informal one might be "average maximum depth of nesting" in functions/members - halved. Or better. Is shorter, less tortured code "harder to follow"? :-)

You are right; the ActiveObject pattern is not the pattern I am looking for. It is very close, however, including some entities that I had thought were original (ha!). That same closeness has created some confusion.
The subject on recent messages has actually been yours. This being the case I feel that the fate of this thread (and active<object>) is yours. Kindly, Scott
participants (3)
-
Mark Blewett
-
Matthew Vogt
-
scott