
I watched the talk Herb Sutter gave recently regarding concurrency, and I have to say I find the ideas of active objects and future variables to be very exciting. (A link to the talk is provided below, which must be viewed with Internet Explorer). I noticed that Boost already has a futures library in the sandbox under development, which from what I can tell looks well designed. Are there any plans to add the functionality of active objects to boost as well? I would be interested in hearing how you think the problem would be best approached.

From what I can tell, it should ideally be possible to use 'active' as a modifier like 'const' or 'volatile' in front of a specific object instance, and have all member function calls automatically add themselves to a queue when called, instead of actually being called. Because this syntax is probably impossible to pull off, I think the next best thing would look something like the following:

struct MyClass
{
    int foo ( int arg1, const char * arg2 );
};

void test ( )
{
    active < MyClass > object;
    future < int > i = object.request ( MyClass::foo, 42, "blah" );
}

'request' could also be spelled 'enqueue', 'invoke', or 'operator ()', although I'm not sure if the terseness of the latter would be sensible. 'active' would probably use CRTP to derive privately from 'MyClass', enforcing that member functions of MyClass are not called directly. Thoughts?

Herb Sutter's video: http://microsoft.sitestream.com/PDC05/TLN/TLN309_files/Default.htm#nopreload=1&autostart=1

-Jason

"Jason Hise" wrote
Herb Sutter's video: http://microsoft.sitestream.com/PDC05/TLN/TLN309_files/Default.htm#nopreload=1&autostart=1
Thanks for the link. The lecture has helped me understand futures and has given me hope that there is a good possibility of a sensible high level approach to threads too. Cheers, Andy Little

Herb Sutter's video: http://microsoft.sitestream.com/PDC05/TLN/TLN309_files/Default.htm#nopreload=1&autostart=1
So: "The presentation requires Internet Explorer 5.0 or later, Netscape Navigator 7.0 or later, or Internet Explorer 5.2.2 or later for Mac. To download the latest version of Internet Explorer from the Microsoft Web site, click OK." No joy in Firefox, Safari, or Camino. Nice. -- -- Marshall Marshall Clow Idio Software <mailto:marshall@idio.com> It is by caffeine alone I set my mind in motion. It is by the beans of Java that thoughts acquire speed, the hands acquire shaking, the shaking becomes a warning. It is by caffeine alone I set my mind in motion.

Marshall Clow wrote:
Herb Sutter's video: http://microsoft.sitestream.com/PDC05/TLN/TLN309_files/Default.htm#nopreload=1&autostart=1
So: "The presentation requires Internet Explorer 5.0 or later, Netscape Navigator 7.0 or later, or Internet Explorer 5.2.2 or later for Mac. To download the latest version of Internet Explorer from the Microsoft Web site, click OK."
No joy in Firefox, Safari, or Camino.
Nice.
Yea, nice. If you don't feel like climbing the Microsoft tree you can download the complete presentation files archive: http://microsoft.sitestream.com/PDC05/TLN/TLN309.zip And look at the Windows Media video. Or hack the javascript browser checks out. WARNING: It's a 140Meg file. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - Grafik/jabber.org

Rene Rivera wrote:
Marshall Clow wrote:
No joy in Firefox, Safari, or Camino.
Nice.
Yea, nice. If you don't feel like climbing the Microsoft tree you can download the complete presentation files archive:
http://microsoft.sitestream.com/PDC05/TLN/TLN309.zip
And look at the Windows Media video.
If using IE is impractical, using Windows Media Player is even more so. Regards, Stefan

Stefan Seefeld wrote:
Rene Rivera wrote:
Marshall Clow wrote:
No joy in Firefox, Safari, or Camino.
Nice.
Yea, nice. If you don't feel like climbing the Microsoft tree you can download the complete presentation files archive:
http://microsoft.sitestream.com/PDC05/TLN/TLN309.zip
And look at the Windows Media video.
If using IE is impractical, using Windows Media Player is even more so.
There are video players that can play WMV files that aren't written by Microsoft, both free and commercial. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim - Grafik/jabber.org

I saw that presentation too, it is very impressive and I can't wait to see what comes of it. I would love to see something like this go into boost but somehow I don't think it could be done elegantly without language and compiler support. Definitely wouldn't complain if someone proved me wrong :) On 10/30/05, Jason Hise <chaos@ezequal.com> wrote:
I watched the talk Herb Sutter gave recently regarding concurrency, and I have to say I find the ideas of active objects and future variables to be very exciting. (A link to the talk is provided below, which must be viewed with Internet Explorer). I noticed that Boost already has a futures library in the sandbox under development, which from what I can tell looks well designed. Are there any plans to add the functionality of active objects to boost as well? I would be interested in hearing how you think the problem would be best approached.
From what I can tell, it should ideally be possible to use 'active' as a modifier like 'const' or 'volatile' in front of a specific object instance, and have all member function calls automatically add themselves to a queue when called, instead of actually being called. Because this syntax is probably impossible to pull off, I think the next best thing would look something like the following:
struct MyClass { int foo ( int arg1, const char * arg2 ); };
void test ( ) { active < MyClass > object; future < int > i = object.request ( MyClass::foo, 42, "blah" ); }
'request' could also be spelled 'enqueue', 'invoke', or 'operator ()', although I'm not sure if the terseness of the latter would be sensible. 'active' would probably use CRTP to derive privately from 'MyClass', enforcing that member functions of MyClass are not called directly. Thoughts?
Herb Sutter's video: http://microsoft.sitestream.com/PDC05/TLN/TLN309_files/Default.htm#nopreload=1&autostart=1
-Jason
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
-- Cory Nelson http://www.int64.org

Cory Nelson wrote:
I saw that presentation too, it is very impressive and I can't wait to see what comes of it.
Yeah, having active and futures support would make writing multithreaded apps a dream.
I would love to see something like this go into boost but somehow I don't think it could be done elegantly without language and compiler support. Definitely wouldn't complain if someone proved me wrong :)
Active objects would be very hard to do *automatically* since the object is a thread (very neat concept), with the constructor, methods and destructor being processed as messages to the object's thread. However, the active lambda stuff could possibly be done as an extension of Boost.Lambda using Spirit/Phoenix sugar:

future< int > = active<>[ _1 = _1 + 2 ];

I'd have to think about the other things and how to make the above (or something like it) work. Maybe we could create virtual classes in a similar way that Boost.Python works that wraps the method calls into messages. Sort of a C++ to threaded C++ language binding. - Reece

Reece Dunn wrote:
Active objects would be very hard to do *automatically* since the object is a thread (very neat concept), with the constructor, methods and destructor being processed as messages to the object's thread.
The main obstacle in making them work automatically is the desire to give them familiar syntax. If the desire to call methods directly can be overlooked, it seems perfectly possible to offer a generic solution. A simplified declaration for active <> might look something like this:

template < typename Type >
struct active : private Type
{
    active( [...] );

    template < typename ReturnType [, ...] >
    future < ReturnType > enqueue( ReturnType (Type::*func) ( [...] ) [, ...] );
};

The [...]s represent where BOOST_PP adds parameters as it generates versions of the functions taking any number of arguments (I have used this technique before when working on singleton, so I know it is viable). To use this, you would just create an active < MyType > object and enqueue requests by passing member functions to the enqueue method. Wouldn't this design work reasonably cleanly? -Jason

Jason Hise wrote:
Reece Dunn wrote:
Active objects would be very hard to do *automatically* since the object is a thread (very neat concept), with the constructor, methods and destructor being processed as messages to the object's thread.
The main obstacle in making them work automatically is the desire to give them familiar syntax. If the desire to call methods directly can be overlooked, it seems perfectly possible to offer a generic solution. A simplified declaration for active <> might look something like this:
template < typename Type > struct active : private Type { active( [...] );
template < typename ReturnType [, ...] > future < ReturnType > enqueue( ReturnType (Type::*func) ( [...] ) [, ...] ); };
It would be nicer to have the more natural, intuitive syntax for invoking methods which you could do by making use of boost::function like the interface library does. Active lambdas could be done by having: future< int > f = active< function< int ( int ) > >( _1 = _1 * 2 )( 2 ); or something similar. This will probably not work as is, but that would be how I see active lambdas (at the moment). - Reece

Reece Dunn wrote:
Jason Hise wrote:
template < typename Type > struct active : private Type { active( [...] );
template < typename ReturnType [, ...] > future < ReturnType > enqueue( ReturnType (Type::*func) ( [...] ) [, ...] ); };
It would be nicer to have the more natural, intuitive syntax for invoking methods which you could do by making use of boost::function like the interface library does.
What kind of syntax precisely are you referring to? Also, where is the interface library located? -Jason

Jason Hise wrote:
Reece Dunn wrote:
Active objects would be very hard to do *automatically* since the object is a thread (very neat concept), with the constructor, methods and destructor being processed as messages to the object's thread.
The main obstacle in making them work automatically is the desire to give them familiar syntax. If the desire to call methods directly can be overlooked, it seems perfectly possible to offer a generic solution. A simplified declaration for active <> might look something like this:
template < typename Type > struct active : private Type { active( [...] );
template < typename ReturnType [, ...] > future < ReturnType > enqueue( ReturnType (Type::*func) ( [...] ) [, ...] ); };
Another thought: to make the constructor and destructor queueable actions, instead of deriving from Type, active < Type > could contain an aligned_storage of sizeof ( Type ) and construct/destruct Type in place. -Jason

On 11/1/05, Jason Hise <chaos@ezequal.com> wrote:
template < typename ReturnType [, ...] > future < ReturnType > enqueue( ReturnType (Type::*func) ( [...] ) [, ...] );
I'd personally prefer using the member function pointer as an explicit template argument, allowing for the calling syntax: enqueue< &your_type::member_function >( your, arguments, go, here ); Again, with proper forms having different amounts of parameters generated by Boost.Preprocessor. -- -Matt Calabrese

On 11/2/05, Matt Calabrese <rivorus@gmail.com> wrote:
I'd personally prefer using the member function pointer as an explicit template argument, allowing for the calling syntax:
enqueue< &your_type::member_function >( your, arguments, go, here );
Again, with proper forms having different amounts of parameters generated by Boost.Preprocessor.
Heh, scratch that, unless someone can actually come up with a way to provide that syntax in a manner which works for all types. -- -Matt Calabrese

Matt Calabrese wrote:
On 11/2/05, Matt Calabrese <rivorus@gmail.com> wrote:
I'd personally prefer using the member function pointer as an explicit template argument, allowing for the calling syntax:
enqueue< &your_type::member_function >( your, arguments, go, here );
Again, with proper forms having different amounts of parameters generated by Boost.Preprocessor.
Heh, scratch that, unless someone can actually come up with a way to provide that syntax in a manner which works for all types.
It *might* be possible if 'enqueue' were a static member instance of some internal type and one were to severely abuse the '<' and '>' operators. Of course, if we are going in the direction of abusing operators we could do other things like: active < MyType > object; ( object | MyType::Func ) ( /*args here*/ ); ( object << MyType::Func ) ( /*args here*/ ); etc... -Jason

Reece Dunn wrote:
Cory Nelson wrote:
I saw that presentation too, it is very impressive and I can't wait to see what comes of it.
Yeah, having active and futures support would make writing multithreaded apps a dream.
It _could_ make _some_ multithreaded apps easier to write. :-)
I would love to see something like this go into boost but somehow I don't think it could be done elegantly without language and compiler support. Definitely wouldn't complain if someone proved me wrong :)
Active objects would be very hard to do *automatically* since the object is a thread (very neat concept), with the constructor, methods and destructor being processed as messages to the object's thread.
An active object can be implemented as a logical thread, although there should be no need to serialize read-only operations. But a physical thread per object would be impractical.
However, the active lambda stuff could possibly be done as an extension of Boost.Lambda using Spirit/Phoenix sugar:
future< int > = active<>[ _1 = _1 + 2 ];
This doesn't work; your active<> doesn't take any arguments, so you can't use _1. It is possible to make this: future<int> f1 = active( bind( f, 1, 2 ) ); future<int> f2 = active( bind( &X::f, &x, 1, 2 ) ); work. But the problem is that you must have a way to specify which functions (or closures, or even combinations of functions and arguments) can be performed in parallel and which need to be serialized by the "active runtime engine". (In the example above, f(a,b) may call x.f(a,b) "inactively".) This active business looks very similar to apartment-threaded COM, where the COM support framework serialized the calls to the component. It probably could be made to work on a language level, given a sufficiently smart compiler and a heavily optimized runtime.

// lifted from the MPL reference manual
typedef mpl::vector<char,int,unsigned,long,unsigned long> s1;
typedef mpl::list<char,int,unsigned,long,unsigned long> s2;
// new list - nothing but floats
typedef mpl::list<float,float,float,float,float> s3;

void shouldnotwork()
{
    // this one is lifted RIGHT from the example in the MPL ref manual
    // compiles without error (as expected)
    BOOST_MPL_ASSERT(( mpl::equal< s1, s2 > ));

    // now use s3 - which is VERY different from s1
    // this compiles without error on GCC4 (which is NOT expected)
    BOOST_MPL_ASSERT(( mpl::equal< s1, s3 > ));
}

Sorry, I have not tested any other compilers.


(sorry for the duplicate - mail problem (i.e. user error :) ) on my end) please disregard the duplicate message. Duplicate mail notwithstanding, mpl::equal<> does seem to not work (or is it more "user error"? :) ) I am resorting to writing my own for the moment, but I wanted to make the interested parties aware of the issue. Thanks, Brian

On Tuesday 01 November 2005 20:11, Peter Dimov wrote: [snip]
However, the active lambda stuff could possibly be done as an extension of Boost.Lambda using Spirit/Phoenix sugar:
future< int > = active<>[ _1 = _1 + 2 ];
This doesn't work; your active<> doesn't take any arguments, so you can't use _1. It is possible to make this:
future<int> f1 = active( bind( f, 1, 2 ) ); future<int> f2 = active( bind( &X::f, &x, 1, 2 ) );
What is the point of adding the active-Wrapper? The bind already returns a nullary function. future<int> f1 = bind( f, 1, 2 ); should be sufficient. Just wondering. Thorsten

Thorsten Schuett wrote:
On Tuesday 01 November 2005 20:11, Peter Dimov wrote: [snip]
However, the active lambda stuff could possibly be done as an extension of Boost.Lambda using Spirit/Phoenix sugar:
future< int > = active<>[ _1 = _1 + 2 ];
This doesn't work; your active<> doesn't take any arguments, so you can't use _1. It is possible to make this:
future<int> f1 = active( bind( f, 1, 2 ) ); future<int> f2 = active( bind( &X::f, &x, 1, 2 ) );
What is the point of adding the active-Wrapper? The bind already returns a nullary function.
future<int> f1 = bind( f, 1, 2 );
should be sufficient.
Just wondering.
I suppose you could make future take a bind/function and then make that run on a thread. One of Herb's points was that you could grep for *active* to find the places where you gain concurrency and for *wait* where you lose it. For example:

future< int > val = active { ... }  // gain concurrency
// ...
std::cout << "value = " << val.wait();  // lose concurrency

- Reece

On Wednesday 02 November 2005 09:36, Reece Dunn wrote:
Thorsten Schuett wrote:
On Tuesday 01 November 2005 20:11, Peter Dimov wrote: [snip]
However, the active lambda stuff could possibly be done as an extension of Boost.Lambda using Spirit/Phoenix sugar:
future< int > = active<>[ _1 = _1 + 2 ];
This doesn't work; your active<> doesn't take any arguments, so you can't use _1. It is possible to make this:
future<int> f1 = active( bind( f, 1, 2 ) ); future<int> f2 = active( bind( &X::f, &x, 1, 2 ) );
What is the point of adding the active-Wrapper? The bind already returns a nullary function.
future<int> f1 = bind( f, 1, 2 );
should be sufficient.
Just wondering.
I suppose you could make future take a bind/function and then make that run on a thread. One of Herb's points was that you could grep for *active* to find the places where you gain concurrency and for *wait* where you lose it. For example:
future< int > val = active { ... }  // gain concurrency
// ...
std::cout << "value = " << val.wait();  // lose concurrency

I watched his presentation yesterday and I don't really buy this argument. MS is selling IDEs. It shouldn't be that hard to extend the search dialog to allow the user to express "complex queries" on the source code.
- Find all const functions in template classes
- Find all "gain concurrency" points
- Find all "lose concurrency" points
IMHO, the "active" keyword is redundant. Thorsten

Thorsten Schuett wrote:
On Tuesday 01 November 2005 20:11, Peter Dimov wrote: [snip]
However, the active lambda stuff could possibly be done as an extension of Boost.Lambda using Spirit/Phoenix sugar:
future< int > = active<>[ _1 = _1 + 2 ];
This doesn't work; your active<> doesn't take any arguments, so you can't use _1. It is possible to make this:
future<int> f1 = active( bind( f, 1, 2 ) ); future<int> f2 = active( bind( &X::f, &x, 1, 2 ) );
What is the point of adding the active-Wrapper? The bind already returns a nullary function.
future<int> f1 = bind( f, 1, 2 );
should be sufficient.
'active' is not a wrapper, it's a function taking a function object and returning a future. I haven't looked at your 'future' design too closely (if at all) but I think that what you call 'future' is a future and an executor rolled into one. In the "classic" design a future is a passive result holder and the actual computation of the result is done by the executor. A future by itself does not spawn any threads and doesn't even know about threads. It's a "ticket" for a future value that you can "redeem" at some point.

"Reece Dunn" <msclrhd@hotmail.com> writes:
Cory Nelson wrote:
I saw that presentation too, it is very impressive and I can't wait to see what comes of it.
Yeah, having active and futures support would make writing multithreaded apps a dream.
I would love to see something like this go into boost but somehow I don't think it could be done elegantly without language and compiler support. Definitely wouldn't complain if someone proved me wrong :)
Active objects would be very hard to do *automatically* since the object is a thread (very neat concept)
Also a very old concept. See Simula. Kristen Nygaard would be pleased. However, it sounds a lot like the COM "apartment model," which I've never had the pleasure to use, but whose usability I recall my colleagues complaining bitterly about. -- Dave Abrahams Boost Consulting www.boost-consulting.com

----- Original Message ----- From: "David Abrahams" <dave@boost-consulting.com> To: <boost@lists.boost.org> Sent: Wednesday, November 02, 2005 10:05 AM Subject: Re: [boost] Active objects?
"Reece Dunn" <msclrhd@hotmail.com> writes:
Cory Nelson wrote:
I saw that presentation too, it is very impressive and I can't wait to see what comes of it.
Yeah, having active and futures support would make writing multithreaded apps a dream.
I would love to see something like this go into boost but somehow I don't think it could be done elegantly without language and compiler support. Definitely wouldn't complain if someone proved me wrong :)
Active objects would be very hard to do *automatically* since the object is a thread (very neat concept)
Also a very old concept. See Simula. Kristen Nygaard would be pleased.
However, it sounds a lot like the COM "apartment model," which I've never had the pleasure to use, but whose usability I recall my colleagues complaining bitterly about.
Huh. Had thought the "active object" term had originated from work by D. Schmidt. Not too many complaints about his version of things, even if working with some implementations (CORBA) can be unwieldy. H. Sutter's version looks nice. The idea that calls on an object can be queued for execution at some later time is useful. Successfully hiding the details would be cute. I hope it's fair to also say that it's not a solution to the nasty async problem that I typically run into. Active objects (a la Sutter) appear to be reinforcing the standard "call" model over async activity. This is good because it is familiar to us. Symmetric interaction between groups of peer objects is a different model of operation. Any object can "call" any other object and there is no "return" (i.e. future). Most of my async-programming problems seem to fall into the latter category. In summary, Mr Sutter's AOs would deliver a slick solution to a certain class of application. I'm guessing it's not intended as the "final" solution to all async development? Cheers.

On 11/1/05, Scott Woods <scottw@qbik.com> wrote:
----- Original Message ----- From: "David Abrahams" <dave@boost-consulting.com> To: <boost@lists.boost.org> Sent: Wednesday, November 02, 2005 10:05 AM Subject: Re: [boost] Active objects?
"Reece Dunn" <msclrhd@hotmail.com> writes:
Cory Nelson wrote:
I saw that presentation too, it is very impressive and I can't wait to see what comes of it.
Yeah, having active and futures support would make writing multithreaded apps a dream.
I would love to see something like this go into boost but somehow I don't think it could be done elegantly without language and compiler support. Definitely wouldn't complain if someone proved me wrong :)
Active objects would be very hard to do *automatically* since the object is a thread (very neat concept)
Also a very old concept. See Simula. Kristen Nygaard would be pleased.
However, it sounds a lot like the COM "apartment model," which I've never had the pleasure to use, but whose usability I recall my colleagues complaining bitterly about.
Huh. Had thought the "active object" term had originated from work by D. Schmidt. Not too many complaints about his version of things, even if working with some implementations (CORBA) can be unwieldy.
H. Sutter's version looks nice. The idea that calls on an object can be queued for execution at some later time is useful. Successfully hiding the details would be cute.
I hope it's fair to also say that it's not a solution to the nasty async problem that I typically run into.
Active objects (a la Sutter) appear to be reinforcing the standard "call" model over async activity. This is good because it is familiar to us. Symmetric interaction between groups of peer objects is a different model of operation. Any object can "call" any other object and there is no "return" (i.e. future). Most of my async-programming problems seem to fall into the latter category.
In summary, Mr Sutter's AOs would deliver a slick solution to a certain class of application. I'm guessing it's not intended as the "final" solution to all async development?
In the presentation he gave reasons for needing to have the future<> and call wait() etc. In the end it seems to boil down to having a point to reliably catch exceptions and other errors. But he also mentioned clearly that it's just a concept they've been toying with and that they have no strong commitment to it.
Cheers.
-- Cory Nelson http://www.int64.org

"Scott Woods" wrote
Symmetric interaction between groups of peer objects is a different model of operation. Any object can "call" any other object and there is no "return" (i.e. future). Most of my async-programming problems seem to fall into the latter category.
I'm certainly no expert in threads, but I'm fascinated by the idea of 'symmetric interaction'. Do you have a short example of symmetric interaction ***Only***? (without any other structure to the system) I ask because it sounds like a recipe for chaos. Surely in any practical system, there must be a higher level of organisation than this? regards Andy Little

"Scott Woods" wrote
Symmetric interaction between groups of peer objects is a different model of operation. Any object can "call" any other object and there is no "return" (i.e. future). Most of my async-programming problems seem to fall into the latter category.
I'm certainly no expert in threads, but I'm fascinated by the idea of 'symmetric interaction'. Do you have a short example of symmetric interaction ***Only*** ?( without any other structure to the system). I ask because it sounds
----- Original Message ----- From: "Andy Little" <andy@servocomm.freeserve.co.uk> To: <boost@lists.boost.org> Sent: Wednesday, November 02, 2005 10:14 PM Subject: Re: [boost] Active objects? like a
recipe for chaos.
Ha. Reading my message after your feedback, it does. Surely in any practical system, there must be a
higher level of organisation than this?
I used the word symmetric to try and distinguish the nature of objects in the world of D. Schmidt, from those of H. Sutter. It refers to the equivalent potentials that each object has to send messages (to HS; call a queuable method) to another object. There is an asymmetry on the HS model where calls are made to an active object and information is returned through the future concept. In a world such as that of DS (also refer to SDL) all exchange of information is through the same mechanism. Of course, they do not send the same messages (was this your concern? :-). The "higher level of organisation" that you expect would manifest itself in the different messages being sent, e.g. CHALLENGE, WELCOME and the different roles that each object has in an exchange.

"Scott Woods" wrote
I used the word symmetric to try and distinguish the nature of objects in the world of D. Schmidt, from those of H. Sutter. It refers to the equivalent potentials that each object has to send messages (to HS; call a queuable method) to another object. There is an asymmetry on the HS model where calls are made to an active object and information is returned through the future concept. In a world such as that of DS (also refer to SDL) all exchange of information is through the same mechanism.
It's difficult to discuss this without a concrete example. A system may involve only a collection of similar processes, but I conjecture they can be modelled equivalently as workers for one manager process. An analogy may be the difference between a star and a mesh network topology. The star topology is theoretically less efficient, but it is possible to provide a strategy for management which is not as simple with a mesh, because the manager does not have the same simple birds-eye view of the system. The hub can monitor the system and therefore optimise busy connections (one point brought up by Herb Sutter is that it is important to be able to check that a system is using concurrency efficiently or even at all), can overrule and shut down particular threads, or even shut down the whole system in an orderly fashion.
Of course, they do not send the same messages (was this your concern? :-).
I guess I was trying to point out that systems must have a higher level of organisation, and once this is realised a peer to peer network becomes less attractive because of its fragility and lack of controllability compared to a centrally managed system. A peer to peer network may promise higher performance from an eyeball analysis, but bottlenecks are difficult to find and so difficult to remove.
The "higher level of organisation" that you expect would manifest itself in the different messages being sent, e.g. CHALLENGE, WELCOME and the different roles that each object has in an exchange.
I think that this throws up the problems of explicit locking and unlocking, which the HS model was trying to hide, but I take the point that the HS model won't cover all situations. Nevertheless it would be interesting to try to remodel existing systems using it to find out what it can and can't model (and how that affects performance). In the simple situations I can think of using it for, i.e. saving files and math calculations, it seems to fit very well though. regards Andy Little

----- Original Message ----- From: "Andy Little" <andy@servocomm.freeserve.co.uk> To: <boost@lists.boost.org> Sent: Thursday, November 03, 2005 10:49 PM Subject: Re: [boost] Active objects?
[snip]
It's difficult to discuss this without a concrete example. A system may involve only a collection of similar processes, but I conjecture they can be modelled equivalently as workers for one manager process. An analogy may be the difference between a star and a mesh network topology. The star topology is theoretically less efficient, but it is possible to provide a strategy for management which is not as simple with a mesh, because the manager does not have the same simple birds-eye view of the system. The hub can monitor the system and therefore optimise busy connections (one point brought up by Herb Sutter is that it is important to be able to check that a system is using concurrency efficiently or even at all), can overrule and shut down particular threads, or even shut down the whole system in an orderly fashion.
Of course, they do not send the same messages (was this your concern? :-).
I guess I was trying to point out that systems must have a higher level of organisation, and once this is realised a peer to peer network becomes less attractive because of its fragility and lack of controllability compared to a centrally managed system. A peer to peer network may promise higher performance from an eyeball analysis, but bottlenecks are difficult to find and so difficult to remove.
The "higher level of organisation" that you expect would manifest itself in the different messages being sent, e.g. CHALLENGE, WELCOME and the different roles that each object has in an exchange.
I think that this throws up the problems of explicit locking and unlocking,
Or indeed, implicit locking.
which the HS model was trying to hide, but I take the point that the HS model won't cover all situations. Nevertheless it would be interesting to try to remodel existing systems using it to find out what it can and can't model (and how that affects performance). In the simple situations I can think of using it for, i.e. saving files and math calculations, it seems to fit very well though.
Agreed. It feels as though we come from different areas of software development but have drawn similar conclusions. The issues you highlight earlier (e.g. peer-to-peer vs centrally managed) are *big*. While I typically punt for peer-to-peer designs I will readily concede that certain systems have requirements that cannot be met by such an approach. It seems to be a case-by-case thing to me as neither is the silver bullet. These issues are also almost going off-topic. Except that "peer-to-peer" and "centrally managed" are both examples of software systems that I believe would be difficult to develop using (HS's) active objects and futures :-) Cheers.
participants (13)
-
Andy Little
-
Brian Braatz
-
Cory Nelson
-
David Abrahams
-
Jason Hise
-
Marshall Clow
-
Matt Calabrese
-
Peter Dimov
-
Reece Dunn
-
Rene Rivera
-
Scott Woods
-
Stefan Seefeld
-
Thorsten Schuett