
scott <scottw <at> qbik.com> writes:
Essentially, I send a message from my object to another object, and I receive the result not as a return value, but as a new message from the other object.
Yes! This is the itch that I couldn't quite scratch. What you describe is the essence of my "reactive objects". Still a bit more stretching required, but that is "the guts of it".
A crude application of this technique might have proxy methods with names such as:
<paste>
// Proxy declarations for all methods of the active object
proxy<void (int)>          non_void_with_param;
proxy<void (void)>         non_void_without_param;
..
// Proxies on which the results arrive back at the caller
proxy<void (int)>          non_void_with_param_returned;
proxy<void (const char *)> non_void_without_param_returned;
..
</paste>
An immediate reaction might go something like "but look at the overheads!". The plain truth is that for successful interaction between threads, something of this nature is a prerequisite. It may as well be the mechanism that has already been crafted for the job.
Yep, there's gotta be mutexes somewhere. The mechanism you're referring to is the task_queue of fully-parameterised method invocations? So, how precisely does this work? Say I have a scheduler that has performed some work via a method in a servant S, and that method produced a result of type int. I want to return that result to the caller C, but the caller may not be threaded (may not be associated with any scheduler). Does that mean that instead of queueing the response to the object, I will perform some type of registered action in C, in the same thread context as the method invocation of S? If not, and I place the result into a task_queue of some sort in C, how does another scheduler object become aware that C has a result in its queue, and that something should be done with it?
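(To make sure we're picturing the same thing, here is roughly what I mean by a task_queue of fully-parameterised method invocations. This is only a sketch, and the names -- task_queue, Servant, queue_call -- are mine, not anything from your code:)

#include <deque>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/thread/mutex.hpp>

// Sketch only: a mutex-guarded queue of ready-to-run invocations.
class task_queue
{
public:
    void push(const boost::function<void ()>& task)
    {
        boost::mutex::scoped_lock lock(mutex_);
        tasks_.push_back(task);
    }

    // Called by the thread that services the servant; runs one task if any.
    bool run_one()
    {
        boost::function<void ()> task;
        {
            boost::mutex::scoped_lock lock(mutex_);
            if (tasks_.empty())
                return false;
            task = tasks_.front();
            tasks_.pop_front();
        }
        task();   // executes in the servant's thread context
        return true;
    }

private:
    boost::mutex mutex_;
    std::deque<boost::function<void ()> > tasks_;
};

struct Servant
{
    int produceInt() { return 42; }   // the method that produces the result
};

// Caller side: the target and all arguments are bound into one callable.
// The open question is how the int finds its way back to the caller.
void queue_call(task_queue& q, Servant& s)
{
    q.push(boost::bind(&Servant::produceInt, &s));
}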
If my scrappy example is taken "as is", the implied active objects would effectively be "hard-coded" to interact with each other. This is a non-viable design constraint, and it is relaxed by adding the "caller's address" to the queued "tasks". With this additional info, the thread's "returns" (i.e. "a new message from the other object") may be directed to any instance.
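In rough code (purely a sketch -- reactive_object and queued_task are made-up names, and it reuses the task_queue sketched above) the addition looks something like this:

#include <boost/bind.hpp>
#include <boost/function.hpp>

// Sketch only: every participating object owns a queue of "messages".
struct reactive_object
{
    task_queue inbox;
};

// The queued task now carries the caller's address alongside the call.
struct queued_task
{
    boost::function<int ()>     work;        // fully-parameterised invocation
    reactive_object*            return_to;   // the caller's address
    boost::function<void (int)> on_result;   // what to run back at the caller
};

// Executed by the servant's thread. Having produced the result it simply
// turns around and writes it into the caller's queue as a new message,
// so the "return" can be directed at any instance rather than a
// hard-coded one.
void execute(const queued_task& t)
{
    int result = t.work();
    t.return_to->inbox.push(boost::bind(t.on_result, result));
}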
The proxy objects I was using before assumed that the return would be passed via a future reference. Perhaps they could be adjusted for your pattern so that they invoked a method (or proxy) on the caller, of type 'boost::function<void (result_type)>', which is registered when the original proxy is invoked? E.g.

struct Callee : public SomeActiveObjectBase
{
    ...
    Proxy<int, void> accessValue;
};

struct Caller : public SomeActiveObjectBase
{
    void reportValue(int value)
    {
        cout << "Got " << value << " back from callee" << endl;
    }
    void someMethod(Callee* other)
    {
        other->accessValue(boost::bind(&Caller::reportValue, this, _1));
    }
};
Please note: I'm not proposing the above "pairing of call and return proxies" as the path forward. It's only intended to further expose the essential technique.
The limitation resulting from the unequal status of different event mechanisms is a fairly fundamental one. Is anyone working on this in a Boost.Threads context?
Well, hopefully for reactive objects we have reduced it to 1? But to answer your question, no.
Sorry, I meant unequal in that you can ::select on FDs, but not on boost::mutex objects, or on any underlying implementation detail they might expose.
In another implementation this just involved storing some kind of "return address" and adding that to the queued object (task). On actual execution, the thread (belonging to the active object) need only turn around and write the results back into the caller's queue.
This implementation requires that all objects have a queue, which is another property of the 'reactive object' system you're describing, but it can't work with the approach I've taken.
On first appearance this looks to be the case. With a bit more sleight of hand, any number of reactive objects can be "serviced" by a single thread. Previous explanations of this have failed woefully, so all I can do is direct you to the Active Object pattern, specifically the "Scheduler" and "Servant" entities.
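Very roughly (sketch only, reusing the queue from earlier; a real Scheduler needs shutdown handling and a condition variable rather than a busy yield):

#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>

// Sketch only: one Scheduler thread servicing any number of reactive
// objects. Because each queued task already names its servant and its
// return address, a single queue and a single thread are enough.
class Scheduler
{
public:
    Scheduler() : thread_(boost::bind(&Scheduler::run, this)) {}

    task_queue& queue() { return queue_; }    // shared by all its servants

private:
    void run()                                // the one and only thread
    {
        for (;;)                              // shutdown handling elided
        {
            if (!queue_.run_one())
                boost::this_thread::yield();  // naive idle strategy
        }
    }

    task_queue    queue_;
    boost::thread thread_;
};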
So you use callbacks, rather than queues?
The natural perception that reactive objects are "heavy on threads" is addressed beautifully by this section of the pattern. It appears that Schmidt et al. resolved many of our concerns long before we knew enough to start waving our arms (well, I can speak for myself at least).
I don't think threads are necessarily that heavy. Certainly, for long-lived apps that create and destroy threads mostly at setup and tear-down, the number of threads is a concern. In any case, the lowered throughput or higher latency that results from denying opportunities for concurrency will often be more noticeable than the overhead of managing the concurrency.
The pattern doesn't address asynchronous return of results, and it also doesn't quite give the entities a "final polish", i.e. the Scheduler entity needs to inherit the Servant interface.
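i.e. roughly this shape (sketch only; "ServantInterface" is my name for the pattern's abstract Servant, not anything from existing code):

// Sketch of the "final polish": the Scheduler inherits the Servant
// interface, so schedulers can be addressed (sent messages, returned
// results) exactly like any other servant.
struct ServantInterface
{
    virtual ~ServantInterface() {}
    virtual task_queue& inbox() = 0;   // where messages for it are posted
};

class Scheduler : public ServantInterface   // the Scheduler *is* a Servant
{
public:
    task_queue& inbox() { return inbox_; }
    // ... plus the thread that drains inbox_, as sketched earlier ...
private:
    task_queue inbox_;
};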
Yeah, this would be nice, if you could make it stick.
Ok. Having established that this is a different pattern of multithreading, what are the costs/benefits of symmetric activation?
Operationally, the costs of SA tend towards zero, if you first accept that some form of inter-thread communication was required anyhow, i.e. SA doesn't itself _add_ this requirement.
I think costs do exist in terms of the development cycle, e.g. a change of culture. The type of programming actually required is best described, IMHO, as "signal processing" (e.g. SDL). It may take some effort to convince developers of desktop apps that they need to take a "signal processing" approach to the next version of their 80Mb CAD package.
Perhaps I've been heading off on a tangent, but it actually sounds like what you want is a variant of boost::signal that could communicate between threads. In this design, the callback would have to seamlessly mutate into a decoupled method invocation like that in my Proxy objects. Actually, this sounds like an obvious thing for someone to have tried before...
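Something along these lines is what I'm imagining -- purely a sketch; cross_thread_signal is an invented name, boost::signal itself is not thread-aware, and connection management / locking of the slot list is elided:

#include <utility>
#include <vector>
#include <boost/bind.hpp>
#include <boost/function.hpp>

// Sketch only: a signal whose slots are never called directly. Emitting
// it "mutates" each callback into a decoupled invocation that is queued
// to the receiving object's thread (its task_queue), as with the proxies.
template <class Arg>
class cross_thread_signal
{
public:
    // Connect a slot together with the queue of the thread that must run it.
    void connect(task_queue& q, const boost::function<void (Arg)>& slot)
    {
        slots_.push_back(std::make_pair(&q, slot));
    }

    // Emission does no slot work in the caller's thread; each call is
    // packaged with its argument and posted to the receiver's queue.
    void operator()(Arg value) const
    {
        for (std::size_t i = 0; i < slots_.size(); ++i)
            slots_[i].first->push(boost::bind(slots_[i].second, value));
    }

private:
    std::vector<std::pair<task_queue*, boost::function<void (Arg)> > > slots_;
};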
The benefits? System responsiveness, maintainability of code, low software defect counts. The results are more likely to run correctly for long periods.
Can't complain about that.
There are difficulties when debugging. The traditional flow of control is lost.
Similarly to signal-based designs.
It's been fun up to this point, but perhaps your "code with legs" is viable as "active object" and that is a valid first phase? I could wheel my barrow off to the side for a while.
Cheers, Scott
Yes, I don't think my code can be modified to incorporate your requirements. I am interested in your pattern, though.
Perhaps a change of subject is in order?
Will go that way if you really didn't want to do anything more with your active<object>. To me it seemed viable both as a phase and as the basis for reactive objects. But I think that's up to you?
I'm open. Code is just code; anywhere it goes is fine with me. I just thought maybe the post's subject line should be changed :) I don't have a suggestion for a better description, though.
Cheers, Scott
Matt