
Hi everyone,

I'm working on a project right now which uses both Boost.Asio and Boost.Thread along with a lot of other Boost libraries. We've pretty much had to roll our own Active Object implementation every time we need asynchronous method invocation, and I understand that Futures are being proposed for C++0x.

The question is whether there will be (or whether there already is) a "generic" active object implementation. We've found that Boost.Asio's io_service works well as the scheduler/queue, with a futures wrapper around the result types. Perhaps something that uses some preprocessor magic or some TMP?

Insights and pointers will be most appreciated.

-- Dean Michael C. Berris
C++ Software Architect
Orange and Bronze Software Labs, Ltd. Co.
web: http://software.orangeandbronze.com/
email: dean@orangeandbronze.com
mobile: +63 928 7291459
phone: +63 2 8943415
other: +1 408 4049532
blogs: http://mikhailberis.blogspot.com http://3w-agility.blogspot.com http://cplusplus-soup.blogspot.com
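A minimal sketch of the io_service-as-scheduler idea described above: the active object owns an io_service and a single worker thread, method calls post work to the queue and return a future for the result. std::future/std::promise and the active_calculator name are stand-ins chosen for illustration, not part of any proposal or of the poster's code.

    // Sketch only: io_service as the active object's scheduler/queue,
    // a promise/future pair as the result wrapper.
    #include <boost/asio.hpp>
    #include <future>
    #include <thread>
    #include <iostream>

    class active_calculator {
    public:
        active_calculator()
            : work_(io_), worker_([this] { io_.run(); }) {}

        ~active_calculator() {
            io_.stop();
            worker_.join();
        }

        // Invocation returns immediately; execution happens on the worker thread.
        std::future<int> add(int a, int b) {
            auto p = std::make_shared<std::promise<int>>();
            std::future<int> f = p->get_future();
            io_.post([p, a, b] { p->set_value(a + b); });
            return f;
        }

    private:
        boost::asio::io_service io_;
        boost::asio::io_service::work work_;  // keeps run() from returning while idle
        std::thread worker_;
    };

    int main() {
        active_calculator calc;
        std::future<int> r = calc.add(2, 3);  // returns immediately
        std::cout << r.get() << '\n';         // blocks until the worker has run the call
    }

Because there is a single worker thread per object, calls on one instance execute sequentially in FIFO order, which is the usual Active Object guarantee.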

Dean Michael Berris wrote:
need asynchronous method invocation, and I understand that Futures are being proposed for C++0x.
Is there a paper on this somewhere? It's the first I've heard of it. I'm very interested in futures and some of the proposed DARPA HPCS languages have them. It would be nice if these things worked similarly. -Dave

David Greene wrote:
Dean Michael Berris wrote:
need asynchronous method invocation, and I understand that Futures are being proposed for C++0x.
Is there a paper on this somewhere? It's the first I've heard of it. I'm very interested in futures and some of the proposed DARPA HPCS languages have them. It would be nice if these things worked similarly.
Things are still being researched, but so far we have http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2096.html and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2094.html#futures

Hi Dave, On 10/18/06, David Greene <greened@obbligato.org> wrote:
Dean Michael Berris wrote:
need asynchronous method invocation, and I understand that Futures are being proposed for C++0x.
Is there a paper on this somewhere? It's the first I've heard of it. I'm very interested in futures and some of the proposed DARPA HPCS languages have them. It would be nice if these things worked similarly.
I think Peter's links show a concrete implementation of futures. I got a lot of the ideas, though, from Herb Sutter's recorded presentation on the Concur project (available on Google Video) here: http://tinyurl.com/j68m8 . There's also a paper by Doug Schmidt and Greg Lavender [ http://tinyurl.com/yhrw9a ] on the Active Object pattern -- which IIRC is already implemented in the ACE framework.

HTH

-- Dean

Dean Michael Berris wrote:
Hi everyone,
I'm working on a project right now which uses both Boost.Asio and Boost.Thread along with a lot of other Boost libraries. We've pretty much had to roll our own Active Object implementation every time we need asynchronous method invocation, and I understand that Futures are being proposed for C++0x.
The question is whether there will be (or whether there already is) a "generic" active object implementation. We've found that Boost.Asio's io_service works well as the scheduler/queue, with a futures wrapper around the result types.
Perhaps something that uses some preprocessor magic or some TMP?
Insights and pointers will be most appreciated.
In the coroutine library I've developed as part of the Summer of Code, I've used futures to represent asynchronous function invocations, that is, the results of asynchronous computations, not necessarily concurrent ones. Every computation whose result can be delivered using an asio io_service can be represented by a future.

This model can very well be used to represent active objects; in fact asio sort of already does that: the resolver service can be considered an active object (it uses a worker thread to run the actual resolver), and a future can be used to wait for the result of an async_resolve call. These futures are tightly bound to coroutines (i.e. you can use them only from inside a coroutine), and they might or might not fit in your existing design. When a coroutine waits for a future, control is relinquished to the io_service, which can do other work.

About preprocessor and TMP magic: I'm using some of it so that futures can handle any number and type of result values (for example, an async_write returns a future<error_type, std::size_t>).

While the road to a full active object is long, maybe my library is a good start. BTW, coroutines might be a much better way, performance-wise, to implement Active Objects with one thread per object or a thread pool.

You can look at the code and see if it fits your needs here: https://www.boost-consulting.com:8443/trac/soc/browser/boost/soc/2006/corout... and the docs here: http://www.crystalclearsoftware.com/soc/coroutine/index.html

Currently real life prevents me from cleaning up the library for a Boost review request, but I plan to do it in the short term.

HTH,

-- Giovanni P. Deretta
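A rough illustration (not Giovanni's coroutine library, whose names are not shown here) of the point that any asio async operation can have its result delivered through a future: below, async_resolve is wrapped so the caller gets a future for the resolved endpoints. The coroutine futures described above go further by yielding back to io_service::run() while waiting, which a plain std::future::get() does not do; future_resolve is an invented name.

    // Sketch only: delivering an async_resolve result through a std::future.
    #include <boost/asio.hpp>
    #include <future>
    #include <iostream>
    #include <memory>
    #include <string>

    namespace asio = boost::asio;
    using tcp = asio::ip::tcp;

    std::future<tcp::resolver::results_type>
    future_resolve(tcp::resolver& resolver, const std::string& host,
                   const std::string& service) {
        auto p = std::make_shared<std::promise<tcp::resolver::results_type>>();
        std::future<tcp::resolver::results_type> f = p->get_future();
        resolver.async_resolve(host, service,
            [p](const boost::system::error_code& ec,
                tcp::resolver::results_type results) {
                if (ec)
                    p->set_exception(std::make_exception_ptr(
                        boost::system::system_error(ec)));
                else
                    p->set_value(results);
            });
        return f;
    }

    int main() {
        asio::io_context io;
        tcp::resolver resolver(io);
        auto f = future_resolve(resolver, "www.boost.org", "http");
        io.run();  // drives the asynchronous work; the handler fulfils the promise
        for (const auto& entry : f.get())
            std::cout << entry.endpoint() << '\n';
    }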

Hi Giovanni, On 10/18/06, Giovanni P. Deretta <gpderetta@gmail.com> wrote:
Dean Michael Berris wrote:
Insights and pointers will be most appreciated.
[snipped]
While the road to a full active object is long, maybe my library is a good start. BTW, coroutines might be a much better way, performance-wise, to implement Active Objects with one thread per object or a thread pool.
Coroutines (if I understand correctly) solve a completely different set of design challenges, mostly related to maintaining state across reentrant calls to the coroutine -- while Active Objects [see http://www.cs.wustl.edu/~schmidt/PDF/Act-Obj.pdf] just aim to decouple method execution (of member methods) from the invocation, with return values wrapped in Futures. I had been more or less influenced by Herb Sutter's presentation on the Concur project and the idea of:

    active class my_class {
        void operation() {
            // do some work here
        }

        int another_operation() {
            int a_value = 0;
            // do some work here on a_value
            return a_value;
        }
    };

    // somewhere in client code...
    my_class instance;
    instance.operation();  // immediately returns
    future<int> a_value = instance.another_operation();  // immediately returns
    // do some other work
    cout << a_value.get() << std::endl;  // will wait for a_value to "have a value"
You can look at the code and see if it fits your needs here:
https://www.boost-consulting.com:8443/trac/soc/browser/boost/soc/2006/corout...
docs here:
http://www.crystalclearsoftware.com/soc/coroutine/index.html
I'm doing that at the moment and so far it's been very interesting. I might not be using this library for our current project (or anytime soon), though, because we already have a lot of code written to implement the Active Object pattern. It will be worth a second look, I bet, when it gets reviewed and maybe eventually included in Boost.
Currently real life prevents me from cleaning up the library for a Boost review request, but I plan to do it in the short term.
Good luck! :-)
HTH,
Definitely does. Thanks! :-)

-- Dean

Dean Michael Berris wrote:
I had been more or less influenced by Herb Sutter's presentation on the Concur project and the idea of:
    active class my_class {
        void operation() {
            // do some work here
        }

        int another_operation() {
            int a_value = 0;
            // do some work here on a_value
            return a_value;
        }
    };
It isn't clear to me how this simple model can handle synchronization constraints; do all "active calls" run in parallel? Sequentially? What if I want some of them to run in parallel and some of them to obey sequential consistency?

Given an appropriate Executor (as in N2096), it's easy to emulate asynchronous calls as

    future<int> r = ex.execute( bind( &my_class::f, &instance ) );

or maybe even

    future<int> r = active( &my_class::f, &instance );
    // future<int> r = active { instance.f() } in Concur?

if we trust the implementation to provide an optimal default executor (an adaptive thread pool, most likely). But the real world is usually not as simple.
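A minimal sketch of the emulation Peter describes, using std::packaged_task and a single worker thread as a stand-in for an N2096-style executor (a real implementation would more likely use an adaptive thread pool). The simple_executor name and its execute() signature are invented here purely for illustration and are not part of N2096.

    // Sketch only: a toy single-threaded executor returning futures for queued calls.
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <future>
    #include <iostream>
    #include <memory>
    #include <mutex>
    #include <thread>

    class simple_executor {
    public:
        simple_executor() : done_(false), worker_([this] { run(); }) {}

        ~simple_executor() {
            {
                std::lock_guard<std::mutex> lock(m_);
                done_ = true;
            }
            cv_.notify_one();
            worker_.join();
        }

        // Queue the callable and hand back a future for its result.
        template <class F>
        auto execute(F f) -> std::future<decltype(f())> {
            auto task = std::make_shared<std::packaged_task<decltype(f())()>>(std::move(f));
            auto result = task->get_future();
            {
                std::lock_guard<std::mutex> lock(m_);
                queue_.push_back([task] { (*task)(); });
            }
            cv_.notify_one();
            return result;
        }

    private:
        void run() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
                    if (queue_.empty())
                        return;  // done_ was set and nothing is left to run
                    job = std::move(queue_.front());
                    queue_.pop_front();
                }
                job();  // run outside the lock
            }
        }

        std::mutex m_;
        std::condition_variable cv_;
        std::deque<std::function<void()>> queue_;
        bool done_;
        std::thread worker_;
    };

    struct my_class { int f() { return 42; } };

    int main() {
        simple_executor ex;
        my_class instance;
        std::future<int> r = ex.execute(std::bind(&my_class::f, &instance));
        std::cout << r.get() << '\n';
    }

Because this toy executor has exactly one worker, queued calls obey sequential consistency; swapping in a pool of workers gives the parallel behaviour, which is exactly the policy question raised above.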

Peter Dimov wrote:
Dean Michael Berris wrote:
<snip>
<snip>
Speaking of all this, I was looking forward to the results of the SoC Boost.Act project and was somewhat disappointed that it was never brought up on the list (or I missed it). I wonder what its status is. ( svn: https://www.boost-consulting.com:8443/svn/main/boost/soc/2006/concurrency ) - Michael Marcin

Hi Michael, On 10/19/06, Michael Marcin <mmarcin@method-solutions.com> wrote:
Speaking of all this, I was looking forward to the results of the SoC Boost.Act project and was somewhat disappointed that it was never brought up on the list (or I missed it). I wonder what its status is.
I read the docs, and it seems like what I've been looking for is here! An update would be very much appreciated at this point. Thanks for the link!

-- Dean

Hi Peter, On 10/19/06, Peter Dimov <pdimov@mmltd.net> wrote:
Dean Michael Berris wrote:
I had been more or less influenced by Herb Sutter's presentation on the Concur project and the idea of:
[snipped code example]
It isn't clear to me how this simple model can handle synchronization constraints; do all "active calls" run in parallel? Sequentially? What if I want some of them to run in parallel and some of them to obey sequential consistency?
It seems that the simplistic approach leaves a lot to the (compiler) implementation to decide (barring additional keywords beyond `active' added just to modify the concurrency behaviour). However, for all intents and purposes, it seems to me that behind the scenes (and adhering to the Active Object pattern) there would be a single scheduler per active object (perhaps using something similar to io_service).
Given an appropriate Executor (as in N2096), it's easy to emulate asynchronous calls as
future<int> r = ex.execute( bind( &my_class::f, &instance ) );
or maybe even
future<int> r = active( &my_class::f, &instance );
// future<int> r = active { instance.f() } in Concur?
if we trust the implementation to provide an optimal default executor (an adaptive thread pool, most likely.)
I like the above examples a lot, and would like to be able to do something even like:

    future<int> r = active<new_thread>(&my_class::f, &instance);
    future<int> r = active<queued>(&my_class::f_other, &instance);
But the real world is usually not as simple.
Indeed. That's why we had to hand-roll our own solution and profile-optimize for our specific situation, though it would be nice to have something flexible and generic enough for a good subset of the very many cases in which you'd want concurrency support in C++ applications.

-- Dean
participants (5)
- David Greene
- Dean Michael Berris
- Giovanni P. Deretta
- Michael Marcin
- Peter Dimov