Updated version of futures library

Hi all,

I've updated my futures library to include wait_for_any() and wait_for_all(). The provided overloads allow either waiting on up to five futures known at compile time:

    unique_future<int> f1;
    shared_future<std::string> f2;
    unique_future<double> f3;
    unsigned const ready_index=wait_for_any(future1,future2,future3);

or on a container of futures:

    std::vector<jss::shared_future<int> > futures;
    std::vector<jss::shared_future<int> >::iterator const future=
        jss::wait_for_any(futures.begin(),futures.end());

In the first instance, the futures can be of distinct types, but in the latter, all the futures in the container must be of the same type.

The documentation is available at http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html and the files and tests are available at http://www.justsoftwaresolutions.co.uk/files/n2561_futures_revised_20080530....

Anthony

--
Anthony Williams | Just Software Solutions Ltd
Custom Software Development | http://www.justsoftwaresolutions.co.uk
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

Am Freitag 30 Mai 2008 11:54:59 schrieb Anthony Williams:
unique_future<int> f1; shared_future<std::string> f2; unique_future<double> f3; unsigned const ready_index=wait_for_any(future1,future2,future3);
should this have been: {{{ unique_future<int> f1; shared_future<std::string> f2; unique_future<double> f3; unsigned const ready_index=wait_for_any(f1, f2, f3); }}} ?? -- Maik

Maik Beckmann <beckmann.maik@googlemail.com> writes:
Am Freitag 30 Mai 2008 11:54:59 schrieb Anthony Williams:
unique_future<int> f1; shared_future<std::string> f2; unique_future<double> f3; unsigned const ready_index=wait_for_any(future1,future2,future3);
should this have been: {{{ unique_future<int> f1; shared_future<std::string> f2; unique_future<double> f3; unsigned const ready_index=wait_for_any(f1, f2, f3); }}} ??
Yes. That's what I meant. Thanks. Anthony

Anthony Williams-4 wrote:
The documentation is available at http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html
Try clicking on the links in the documentation. They all point to:

    http://www.justsoftwaresolutions.co.uk/files/index.html#xxx

rather than:

    http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html#xxx

Johan
--
View this message in context: http://www.nabble.com/Updated-version-of-futures-library-tp17555389p17556682... Sent from the Boost - Dev mailing list archive at Nabble.com.

Johan Torp <johan.torp@gmail.com> writes:
Anthony Williams-4 wrote:
The documentation is available at http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html
Try clicking on the links in the documentation. They all point to: http://www.justsoftwaresolutions.co.uk/files/index.html#xxx rather than: http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html#xxx
Oops. That'll teach me to just rename the file. It should be fixed now. Thanks. Anthony

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Friday, May 30, 2008 11:54 AM Subject: [boost] Updated version of futures library
Hi all,
I've updated my futures library to include wait_for_any() and wait_for_all().
The provided overloads allow either waiting on up to five futures known at compile time:
unique_future<int> f1; shared_future<std::string> f2; unique_future<double> f3; unsigned const ready_index=wait_for_any(future1,future2,future3);
or on a container of futures:
std::vector<jss::shared_future<int> > futures; std::vector<jss::shared_future<int> >::iterator const future= jss::wait_for_any(futures.begin(),futures.end());
In the first instance, they can be of distinct types, but in the latter, all the futures in the container must be of the same type.
The documentation is available at http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html and the files and tests are available at http://www.justsoftwaresolutions.co.uk/files/n2561_futures_revised_20080530....
Hello,

In the implementation of wait_for_any you use future_object_base, which is in the detail namespace, and you have added an external_waiters member to this class. Does this mean that the future interface from 20080515 does not allow this function to be implemented efficiently?

This was also the case for packaged_task, which depends on detail::future_object<T>. Does this mean that the future interface from 20080515 (extracting the packaged_task specifics) does not allow the packaged_task interface to be implemented efficiently? Should a separate packaged_task library be accepted into Boost with an implementation that uses the internals of another library?

Does the current packaged_task interface allow implementing a task scheduler with features we have not determined yet, without modifying the internals of packaged_task and futures?

Does all this mean that in order to implement higher-level abstractions on top of the future library we need to use the internals and/or change the future library implementation? IMO this is a symptom that the future library interface is not open enough, and that some internal abstractions should be made public.

In addition you have added a lot of friend declarations; in particular unique_future has 4 friend classes and shared_future has 3:

    friend class shared_future<R>;
    friend class promise<R>;
    friend class packaged_task<R>;
    friend class detail::future_waiter;

When we have such a number of friend classes, the non-public interface is in effect partly public. In this case it would be better to identify these interfaces and group them into a backdoor. This would separate the interface used by applications from the one used by libraries.

Best,
Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
From: "Anthony Williams" <anthony.ajw@gmail.com>
I've updated my futures library to include wait_for_any() and wait_for_all().
The documentation is available at http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html and the files and tests are available at http://www.justsoftwaresolutions.co.uk/files/n2561_futures_revised_20080530....
in the implementation of wait_for_any you use the future_object_base which is in the detail namespace and you have added an external_waiters member to this class.
Does this mean that the future interface from 20080515 does not allow this function to be implemented efficiently?
That is correct.
This was also the case for packaged_task, which depends on detail::future_object<T>. Does this mean that the future interface from 20080515 (extracting the packaged_task specifics) does not allow the packaged_task interface to be implemented efficiently?
I think that it can be implemented most efficiently with access to the internals of the futures. However, it is possible to implement a packaged_task on top of a promise.
Should a separate packaged_task library be accepted into Boost with an implementation that uses the internals of another library?
I think that packaged_task is an important class from the user perspective, as it simplifies task launching, which is one of the common use cases I expect from a futures library. I therefore think it should be provided, whatever underlying implementation we use.
Does the current packaged_task interface allow implementing a task scheduler with features we have not determined yet, without modifying the internals of packaged_task and futures?
That is the intention. The scheduler can decide which thread to run the task on, and when, without having to know the details of the task or how it connects to its futures. The set_wait_callback feature allows the scheduler to know when a thread blocks on the future associated with a task, and therefore to be able to reschedule the task associated with that future if appropriate.
Does all this mean that in order to implement higher-level abstractions on top of the future library we need to use the internals and/or change the future library implementation?
IMO this is a symptom that the future library interface is not open enough, and that some internal abstractions should be made public.
If there is something that cannot be done without access to the internals, then it might mean that there is an abstraction missing. My intent is to provide a minimal set of features that allow higher level abstractions to be built. This is why I keep evolving the interface, e.g. to add wait_for_any.
In addition you have added a lot of friend declarations; in particular unique_future has 4 friend classes and shared_future has 3.
    friend class shared_future<R>;
    friend class promise<R>;
    friend class packaged_task<R>;
    friend class detail::future_waiter;
When we have such a number of friend classes, the non-public interface is in effect partly public. In this case it would be better to identify these interfaces and group them into a backdoor. This would separate the interface used by applications from the one used by libraries.
The friend declarations mean that these classes are closely related and need to access each other's internals. This is safe because they are provided together as a coherent whole. I do not agree with the idea of having a public backdoor. Anthony

On Friday 30 May 2008 05:54 am, Anthony Williams wrote:
Hi all,
I've updated my futures library to include wait_for_any() and wait_for_all().
It seems like you might want to have timed_wait_for_any() and timed_wait_for_all() too?

Frank Mori Hess <frank.hess@nist.gov> writes:
On Friday 30 May 2008 05:54 am, Anthony Williams wrote:
Hi all,
I've updated my futures library to include wait_for_any() and wait_for_all().
It seems like you might want to have timed_wait_for_any() and timed_wait_for_all() too?
Yes, that might be sensible. Anthony

I've updated my futures library to include wait_for_any() and wait_for_all().
The provided overloads allow either waiting on up to five futures known at compile time:
unique_future<int> f1; shared_future<std::string> f2; unique_future<double> f3; unsigned const ready_index=wait_for_any(future1,future2,future3);
or on a container of futures:
std::vector<jss::shared_future<int> > futures; std::vector<jss::shared_future<int> >::iterator const future= jss::wait_for_any(futures.begin(),futures.end());
In the first instance, they can be of distinct types, but in the latter, all the futures in the container must be of the same type.
The documentation is available at http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html and the files and tests are available at http://www.justsoftwaresolutions.co.uk/files/n2561_futures_revised_20080530.zip
This reminds me of a discussion we had some time ago, namely overloading operator&&() for futures, allowing to achieve the same thing: waiting for all of the futures:

    future<int> f1;
    future<double> f2;
    future<fusion::vector<int, double> > f3 (f1 && f2);
    fusion::vector<int, double> result = f3.get(); // waits for f1 and f2

Similarly, overloading operator|| would allow waiting only for the first future to finish:

    future<int> f1;
    future<double> f2;
    future<variant<int, double> > f3 (f1 || f2);
    variant<int, double> result = f3.get(); // waits for first, f1 or f2

Regards Hartmut

"Hartmut Kaiser" <hartmut.kaiser@gmail.com> writes:
I've updated my futures library to include wait_for_any() and wait_for_all().
The provided overloads allow either waiting on up to five futures known at compile time:
unique_future<int> f1; shared_future<std::string> f2; unique_future<double> f3; unsigned const ready_index=wait_for_any(future1,future2,future3);
or on a container of futures:
std::vector<jss::shared_future<int> > futures; std::vector<jss::shared_future<int> >::iterator const future= jss::wait_for_any(futures.begin(),futures.end());
In the first instance, they can be of distinct types, but in the latter, all the futures in the container must be of the same type.
The documentation is available at http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html and the files and tests are available at http://www.justsoftwaresolutions.co.uk/files/n2561_futures_revised_20080530.zip
This reminds me of a discussion we had some time ago, namely overloading operator&&() for futures allowing to achieve the same thing: waiting for all of the futures:
Yes. I was trying to stay away from the overloaded operators, but this is certainly the same issue. Anthony

This reminds me of a discussion we had some time ago, namely overloading operator&&() for futures allowing to achieve the same thing: waiting for all of the futures:
Yes. I was trying to stay away from the overloaded operators, but this is certainly the same issue.
Are you interested in something like this? What's your reason for 'staying away' from the operators so far? Regards Hartmut

Hartmut Kaiser wrote:
Yes. I was trying to stay away from the overloaded operators, but this is certainly the same issue.
Are you interested in something like this? What's your reason for 'staying away' from the operators so far?
There has been some confusion as to what the operator semantics should be. Instead of returning the first ready future, one could return a future holding the value of lhs.get() && rhs.get() as soon as this is computable. I.e. future<bool> operator&&(future<bool>& lhs, future<bool>& rhs) would return a new future which is true iff lhs.get()==true and rhs.get()==true. If either one returned false it would be false, and otherwise the composed future would not be ready yet. I think these are the more natural semantics, but most people seem to think otherwise. Because of this confusion, I think we should leave the operators be. I believe this is what Anthony is referring to.

Note that if we let the operators map onto the templated types' operators as soon as they are computable, we could implement arithmetic operators too:

    future<double> a,b,c;
    future<double> d = a*(b+c*a); // Will become ready when a, b and c are ready

This would allow "lifting" of arbitrary arithmetic expressions to futures.

Johan

Johan Torp wrote:
Yes. I was trying to stay away from the overloaded operators, but this is certainly the same issue.
Are you interested in something like this? What's your reason for 'staying away' from the operators so far?
There has been some confusion as to what the operator semantics should be. Instead of returning the first ready future, one could return a future holding the value of lhs.get() && rhs.get() as soon as this is computable.
I.e. future<bool> operator&&(future<bool>& lhs, future<bool>& rhs) would return a new future which is equal to true, iff lhs.get()==true and rhs.get()==true. If either one returned false it would be false and otherwise the composed future would not be ready yet.
Doing it the way you're proposing implies shortcutting operator&&(), which can't be implemented. OTOH, writing fusion::vector<bool, bool> doesn't imply this and is no deviation from the generic case.
I think this is more natural semantics, but most people seem to think otherwise. Because of this confusion, I think we should leave the operators be. I believe this is what Anthony is referring to.
Note that if we let the operators map onto the templated types' operators as soon as they are computable, we could implement arithmetic operators too: future<double> a,b,c; future<double> d = a*(b+c*a); // Will become ready when a, b and c are ready This would allow "lifting" of arbitrary arithmetic expressions to futures.
This isn't possible in the generic case, because not all types provide the corresponding operators. Additionally, this applies the operators to the results and not the futures itself, where my proposal adds semantics for the operators in terms of the futures and not their results. Regards Hartmut

On Saturday 31 May 2008 14:30, Hartmut Kaiser wrote:
I.e. future<bool> operator&&(future<bool>& lhs, future<bool>& rhs) would return a new future which is equal to true, iff lhs.get()==true and rhs.get()==true. If either one returned false it would be false and otherwise the composed future would not be ready yet.
Doing it the way you're proposing implies shortcutting operator&&(), which can't be implemented.
Left-to-right evaluation can't be short-circuited, but he's talking about short-circuiting in time, as either the lhs or rhs futures complete.
Note that if we let the operators map onto the templated types' operators as soon as they are computable, we could implement arithmetic operators too: future<double> a,b,c; future<double> d = a*(b+c*a); // Will become ready when a, b and c are ready This would allow "lifting" of arbitrary arithmetic expressions to futures.
This isn't possible in the generic case, because not all types provide the corresponding operators.
So? That just means something like future<A> fa; future<B> fb; future<C> fc; fc = fa + fb; will compile if and only if A a; B b; C c; c = a + b; compiles. What's the problem with that?

On Saturday 31 May 2008 14:30, Hartmut Kaiser wrote:
I.e. future<bool> operator&&(future<bool>& lhs, future<bool>& rhs) would return a new future which is equal to true, iff lhs.get()==true and rhs.get()==true. If either one returned false it would be false and otherwise the composed future would not be ready yet.
Doing it the way you're proposing implies shortcutting operator&&(), which can't be implemented.
Left-to-right evaluation can't be short-circuited, but he's talking
Right, that's exactly my point.
about short circuiting in time, as either the lhs or rhs futures complete.
Didn't we talk about operator&&()? No 'either/or' here, only both futures. Additionally, the same comments apply as outlined below.
Note that if we let the operators map onto the templated types' operators as soon as they are computable, we could implement arithmetic operators too: future<double> a,b,c; future<double> d = a*(b+c*a); // Will become ready when a, b and c are ready This would allow "lifting" of arbitrary arithmetic expressions to futures.
This isn't possible in the generic case, because not all types provide the corresponding operators.
So? That just means something like
future<A> fa; future<B> fb; future<C> fc; fc = fa + fb;
will compile if and only if
A a; B b; C c; c = a + b;
compiles. What's the problem with that?
No problem here. My main point was that such operator overloads apply to the result types and not the futures themselves, which adds unnecessary semantics and defeats separation of concerns. And, BTW, this is very much like the default conversion operator to the result type for the futures which - as most have agreed - is a bad idea. Regards Hartmut

On Saturday 31 May 2008 17:14, Hartmut Kaiser wrote:
On Saturday 31 May 2008 14:30, Hartmut Kaiser wrote:
I.e. future<bool> operator&&(future<bool>& lhs, future<bool>& rhs) would return a new future which is equal to true, iff lhs.get()==true and rhs.get()==true. If either one returned false it would be false and otherwise the composed future would not be ready yet.
Doing it the way you're proposing implies shortcutting operator&&(), which can't be implemented.
Left-to-right evaluation can't be short-circuited, but he's talking
Right, that's exactly my point.
about short circuiting in time, as either the lhs or rhs futures complete.
Didn't we talk about operator&&()? No 'either/or' here, only both futures. Additionally, the same comments apply as outlined below.
I'm pretty sure he's not talking about your "wait_for_all" operator&&() at all, he's talking about one with semantics closer to the "real" operator&& defined by the language.
My main point was that such operator overloads apply to the result types and not the futures themselves, which adds unnecessary semantics and defeats separation of concerns. And, BTW, this is very much like the default conversion operator to the result type for the futures which - as most have agreed - is a bad idea.
It only appears to defeat "separation of concerns" because you're not separating your definition of the future logical operator overloads from his. They really are two different ideas and are not useful for the same things. Personally, I don't particularly want to see a future library overload any logical operators at all, or at least have the overloads sequestered in separate headers that aren't included by any of the other future library headers.

Frank Mori Hess wrote:
would be false and otherwise the composed future would not be ready yet.
Doing it the way you're proposing implies shortcutting operator&&(), which can't be implemented.
Left-to-right evaluation can't be short-circuited, but he's talking
Right, that's exactly my point.
about short circuiting in time, as either the lhs or rhs futures complete.
Didn't we talk about operator&&()? No 'either/or' here, only both futures. Additionally, the same comments apply as outlined below.
I'm pretty sure he's not talking about your "wait_for_all" operator&&() at all, he's talking about one with semantics closer to the "real" operator&& defined by the language.
Sure, I understand. But the issue is that these are very similar to the proposed math operators in the sense that they add semantics in terms of the future results not the futures themselves.
My main point was that such operator overloads apply to the result types and not the futures themselves, which adds unnecessary semantics and defeats separation of concerns. And, BTW, this is very much like the default conversion operator to the result type for the futures which - as most have agreed - is a bad idea.
It only appears to defeat "separation of concerns" because you're not separating your definition of the future logical operator overloads from his.
This is just wrong and I don't think I gave you any reason to assert this. Please refrain from imposing something I didn't say just for the sake of making your point.
They really are two different ideas and are not useful for the same things.
I'm perfectly aware of this. And actually I expressed exactly the same thing by pointing out the main difference between the two sets of operators: the logical operators I proposed are for the futures themselves, while the others are related to the future results.

But you're diligently ignoring my point: overloading the operators for the future results is conceptually equivalent to having a default conversion operator to the result type for the future itself! And for that reason I would not like to see something like that in the future library. OTOH, IMHO, having the logical operators alone improves code readability, simplifies the implementation (by avoiding multiple overloads for different parameter counts), and allows building more complex logical execution restrictions such as (f1 || f2) && f3 (which is not possible using the proposed wait_for_all and wait_for_any), all without having to sacrifice any existing functionality.
Personally, I don't particularly want to see a future library overload any logical operators at all, or at least have the overloads sequestered in separate headers that aren't included by any of the other future library headers.
That's a matter of taste, for sure. But would you mind telling us why you don't want to have these? Your opinion sounds to be quite strong, so you must have good reasons! Regards Hartmut

On Saturday 31 May 2008 21:19, Hartmut Kaiser wrote:
I'm perfectly aware of this. And actually I expressed exactly the same by pointing out the main difference between the two sets of operators: the logical operators I proposed are for the futures themselves while the others are related to the future results.
But you're diligently ignoring my point: overloading the operators for the future results is conceptually equivalent to having a default conversion operator to the result type for the future itself! And for that reason I would not like to see something like that in the future library.
Ah, I think I'm understanding you better now. You are appalled by the idea of treating a future<T> like it was a T, is that correct? You want a future<T> to be treated only as an abstract handle to a value, not the value itself? I wasn't ignoring your point; it's just that your view is extremely foreign to how I view futures, so your words were almost meaningless to me. I see being able to treat a future<T> as a placeholder for a T object as a large part of what makes futures useful. I implemented the future class in libpoet to support that, with implicit conversions from future<T> to future<U>, from T to future<T>, etc. And being able to treat a future<T> as a placeholder for a T is essential to implementing things like poet::active_function.

I don't particularly like the implicit conversion from T to future<T>, but that has absolutely nothing to do with not wanting to treat a future<T> as a placeholder. It's simply because the conversion can block, and thus produce unexpected behavior (an object appearing on the right-hand side of an assignment stalling your program). The explicit future::get() at least gives a hint that something more than a quick conversion might be happening.

I guess the features that allow a future to act as a placeholder could be split out into a separate higher-level class, call it "future_value", which would internally contain a plain old future. The idea doesn't really do anything for me, but maybe you would find it preferable?
OTOH, IMHO, having the logical operators alone improves code readability, simplifies the implementation (by avoiding multiple overloads for different parameter counts), and allows building more complex logical execution restrictions such as (f1 || f2) && f3 (which is not possible using the proposed wait_for_all and wait_for_any), all without having to sacrifice any existing functionality.
The poet::future_barrier/future_composing_barrier/future_select/future_selector stuff I'm currently working on allows for composition. Did you see the earlier discussion on this a couple weeks ago? For example this thread: http://lists.boost.org/Archives/boost/2008/05/137417.php
Personally, I don't particularly want to see a future library overload any logical operators at all, or at least have the overloads sequestered in separate headers that aren't included by any of the other future library headers.
That's a matter of taste, for sure. But would you mind telling us why you don't want to have these? Your opinion sounds to be quite strong, so you must have good reasons!
It's the obvious reason. I don't think taking a function with a 2 letter name, which is already overloaded, and adding a new set of overloads to it which have semantics completely unrelated to the existing overloads is a desirable or aesthetically pleasing interface. No one would even consider doing such a thing if the function wasn't an operator. I guess I just don't find operator syntax as compelling a feature as others.

On Saturday 31 May 2008 23:21, Frank Mori Hess wrote:
I don't particularly like the implicit conversion from T to future<T>,
Err, I meant future<T> to T.
but that has absolutely nothing to do with not wanting to treat a future<T> as a placeholder. It's simply because the conversion can block, and thus produce unexpected behavior (an object appearing on the right hand side of an assignment stalling your program). The explicit future::get() at least gives a hint that something more than a quick conversion might be happening.

On Sun, Jun 1, 2008 at 12:21 AM, Frank Mori Hess <fmhess@speakeasy.net> wrote:
On Saturday 31 May 2008 21:19, Hartmut Kaiser wrote:
[snip]
Personally, I don't particularly want to see a future library overload any logical operators at all, or at least have the overloads sequestered in separate headers that aren't included by any of the other future library headers.
That's a matter of taste, for sure. But would you mind telling us why you don't want to have these? Your opinion sounds to be quite strong, so you must have good reasons!
It's the obvious reason. I don't think taking a function with a 2 letter name, which is already overloaded, and adding a new set of overloads to it which have semantics completely unrelated to the existing overloads is a desirable or aesthetically pleasing interface. No one would even consider doing such a thing if the function wasn't an operator. I guess I just don't find operator syntax as compelling a feature as others.
I tend to agree with Frank's view of overloading && and || for futures. I'm surely not a never-overload-operators guy; I, for instance, love Spirit. But in Spirit, && and || have a specific meaning inside the EBNF domain. In futures, we're just inventing that meaning. Couldn't lambdas help us more than overloading here? -- Felipe Magno de Almeida

On Sunday 01 June 2008 06:34, Felipe Magno de Almeida wrote:
That's a matter of taste, for sure. But would you mind telling us why you don't want to have these? Your opinion sounds to be quite strong, so you must have good reasons!
It's the obvious reason. I don't think taking a function with a 2 letter name, which is already overloaded, and adding a new set of overloads to it which have semantics completely unrelated to the existing overloads is a desirable or aesthetically pleasing interface. No one would even consider doing such a thing if the function wasn't an operator. I guess I just don't find operator syntax as compelling a feature as others.
I tend to agree with Frank's view of overloading && and || for futures. I'm surely not a never-overload-operators guy; I, for instance, love Spirit. But in Spirit, && and || have a specific meaning inside the EBNF domain. In futures, we're just inventing that meaning. Couldn't lambdas help us more than overloading here?
Oh, I came up with another reason. A future can be empty (default-constructed, or a moved-from unique_future). Such classes often support conversion to bool (or conversion to "unspecified boolean type" when they want to be safer) to test whether they have been initialized. So seeing a future in a logical operator, one might mistakenly assume the operator is a test of whether either or both futures have been initialized. Although, to nit-pick myself, I don't think the statement I made about the logical operators already being overloaded was technically correct. I suppose it's really the conversion-to-bool operator that is overloaded, and there is just the bool prototype for the logical operators.

Frank,
I'm perfectly aware of this. And actually I expressed exactly the same by pointing out the main difference between the two sets of operators: the logical operators I proposed are for the futures themselves while the others are related to the future results.
But you're diligently ignoring my point: overloading the operators for the future results is conceptually equivalent to having a default conversion operator to the result type for the future itself! And for that reason I would not like to see something like that in the future library.
Ah, I think I'm understanding you better now. You are appalled by the idea of treating a future<T> like it was a T, is that correct?
Yep. That's the separation of concerns I was alluding to. Futures should do one thing, and that is encapsulating deferred or eager execution of a (remote) task.
You want a future<T> to only be treated like an abstract handle to a value, not the value itself? I wasn't ignoring your point; it's just that your view is extremely foreign to how I view futures, so your words were almost meaningless to me. I see being able to treat a future<T> as a placeholder for a T object as a large part of what makes futures useful. I implemented the future class in libpoet to support that, with implicit conversions from future<T> to future<U>, from T to future<T>, etc. And being able to treat a future<T> as a placeholder for a T is essential to implementing things like poet::active_function.
I don't particularly like the implicit conversion from T to future<T>, but that has absolutely nothing to do with not wanting to treat a future<T> as a placeholder.
What's the difference between treating a future as a placeholder and having a future implicitly convertible to its result type? Doesn't a placeholder imply having this implicit conversion?
It's simply because the conversion can block, and thus produce unexpected behavior (an object appearing on the right hand side of an assignment stalling your program). The explicit future::get() at least gives a hint that something more than a quick conversion might be happening.
I guess the features that allow a future to act as a placeholder could be split out into a separate higher level class, call it "future_value", which would internally contain a plain old future. The idea doesn't really do anything for me, but maybe you would find it preferable?
I could live with that. The future_value would be implemented on top of anything encapsulating deferred/eager remote execution and its sole purpose is to provide implicit conversion, function argument lifting, etc. No mixing of concerns here, fine.
OTOH, IMHO, having the logical operators alone improves code readability, simplifies the implementation (by avoiding multiple overloads for different parameter counts), allows building more complex logical execution restrictions such as (f1 || f2) && f3 (which is not possible using the proposed wait_for_all and wait_for_any), and all that without having to sacrifice any of the existing functionality.
The poet::future_barrier/future_composing_barrier/future_select/future_selector stuff I'm currently working on allows for composition. Did you see the earlier discussion on this a couple weeks ago? For example this thread:
I didn't follow this discussion, thanks for the link.
Personally, I don't particularly want to see a future library overload any logical operators at all, or at least have the overloads sequestered in separate headers that aren't included by any of the other future library headers.
That's a matter of taste, for sure. But would you mind telling us why you don't want to have these? Your opinion sounds quite strong, so you must have good reasons!
It's the obvious reason. I don't think taking a function with a 2 letter name, which is already overloaded, and adding a new set of overloads to it which have semantics completely unrelated to the existing overloads is a desirable or aesthetically pleasing interface. No one would even consider doing such a thing if the function wasn't an operator. I guess I just don't find operator syntax as compelling a feature as others.
Come on. You and I _know_ that an operator isn't just a function with a two-letter name (and, as you admitted in a later mail, the logical operators aren't yet overloaded, so your reasoning isn't really convincing). An operator is certainly more than that: it provides infix notation, well-defined associativity and precedence rules, terse expression syntax, etc.

The semantics might be unusual, I agree. But the same was true when Bjarne introduced the shift operators for input/output. Nowadays nobody even discusses this anymore; it's just 'natural'. Once you think about it for a while, it's very 'natural' as well to use || and && to express logical execution restrictions for futures - not their values - giving you a very powerful means of controlling fine-grained parallelization.

BTW, we could go even further by overloading '->', '>>', or '>>=' to express dependencies between futures (except that >>= probably has the wrong precedence for that - just a hunch, I didn't check). This would be just another means of controlling parallelization while using an easily understandable syntax. But I'm not going to propose _that_ :-P

And just compare the amount of code needed to write something equivalent to (f1 || f2) && f3 using the proposed API from the link you gave me above. That's really appalling - certainly only IMHO!

Regards Hartmut

On Sunday 01 June 2008 14:43, Hartmut Kaiser wrote:
What's the difference between treating a future as a placeholder and having a future implicitly convertible to its result type? Doesn't a placeholder imply having this implicit conversion?
Yes, that's why I added it. I just think it's a little dangerous (see following paragraph). For example, boost::optional is a stand-in for its templated type, but it only supports implicit conversion from T to optional<T>, and not from optional<T> to T. Maybe a future_value should do the same.
It's simply because the conversion can block, and thus produce unexpected behavior (an object appearing on the right hand side of an assignment stalling your program). The explicit future::get() at least gives a hint that something more than a quick conversion might be happening.
And just compare the amount of code needed to write something equivalent to (f1 || f2) && f3 using the proposed API from the link you gave me above. That's really appalling - certainly only IMHO!
I guess you're talking about Johan's initial proposal? In my current code, that is roughly future_select(future_barrier(f1, f2), f3), although that particular expression would produce a future<void> instead of a future<tuple<variant<T1, T2>, T3> >. If the user wanted composing functions or operators that returned tuples/variants, they could implement them without much trouble using future_combining_barrier. Unfortunately, I don't have any documentation or the latest version of my code online yet, but hopefully next week.

What's the difference between treating a future as a placeholder and having a future implicitly convertible to its result type? Doesn't a placeholder imply having this implicit conversion?
Yes, that's why I added it. I just think it's a little dangerous (see following paragraph). For example, boost::optional is a stand-in for its templated type, but it only supports implicit conversion from T to optional<T>, and not from optional<T> to T. Maybe a future_value should do the same.
That's exactly my point! For this reason I don't think having the default implicit conversion operator for futures is a good idea. But I'm starting to repeat myself, so I won't comment any further on this thread.
It's simply because the conversion can block, and thus produce unexpected behavior (an object appearing on the right hand side of an assignment stalling your program). The explicit future::get() at least gives a hint that something more than a quick conversion might be happening.
And just compare the amount of code needed to write something equivalent to (f1 || f2) && f3 using the proposed API from the link you gave me above. That's really appalling - certainly only IMHO!
I guess you're talking about Johan's initial proposal? In my current code, that is roughly
future_select(future_barrier(f1, f2), f3)
although that particular expression would produce a future<void> instead of a
future<tuple<variant<T1, T2>, T3> >
If the user wanted composing functions or operators that returned tuples/variants, they could implement them without much trouble using future_combining_barrier. Unfortunately, I don't have any documentation or the latest version of my code online yet, but hopefully next week.
That's better than the initial proposal, but still much more complicated than it could be: (f1 || f2) && f3. Just think about more complex logical execution constraints; these won't be readable anymore when using the function-based syntax. Why do you object to using such a simple syntax, even if it's now obvious that it has additional advantages over your function-based proposal (coherent return value treatment)?

But again: I'm repeating myself, so there won't be any further comments from my side. If the authors of the future Boost futures library object to providing the logical operators, it will be easy enough for me to implement those solely on top of the library... This is no reason for me to vote against it. But I will vote against a library implementing implicit conversion to the result type of the future: future<T> to T.

Thanks for the insightful discussion. Regards Hartmut

On Sunday 01 June 2008 20:41 pm, Hartmut Kaiser wrote:
What's the difference between treating a future as a placeholder and having a future implicitly convertible to its result type? Doesn't a placeholder imply having this implicit conversion?
Yes, that's why I added it. I just think it's a little dangerous (see following paragraph). For example, boost::optional is a stand-in for its templated type, but it only supports implicit conversion from T to optional<T>, and not from optional<T> to T. Maybe a future_value should do the same.
That's exactly my point! For this reason I don't think having the default implicit conversion operator for futures is a good idea. But I'm starting to repeat myself, so I won't comment any further on this thread.
I think we're having some misunderstanding: when you say "future", I hear "future_value" instead of what you actually mean when you say "future". I'll call that "future_handle" here for clarity.
That's better than the initial proposal but still much more complicated than it could be: (f1 || f2) && f3. Just think about more complex logical execution constraints, these won't be readable anymore when using the function based syntax. Why do you object against using such a simple syntax, even if it's now obvious that it has additional advantages over your function based proposal (coherent return value treatment)?
But again: I'm repeating myself, so there won't be any further comments from my side. If the authors of the future Boost futures library object to providing the logical operators, it will be easy enough for me to implement those solely on top of the library... This is no reason for me to vote against it.
Yes, I've attached an example of how a user could implement your operator&& and operator|| in terms of future_select and future_combining_barrier. Of course, they could also choose to implement a lot of other things too (including operators with slightly different semantics) with future_combining_barrier.
But I will vote against a library implementing implicit conversion to the result type of the future: future<T> to T.
I'd like to try to summarize and comment a bit on what we've discussed. We have two ideas that have both been referred to as a future:

1) future_handle: A minimal, focused class. This is essentially Williams' proposal. One feature I do believe belongs in future_handle is conversion from any future_handle<T> to future_handle<void>. Williams' proposal does not support this, but Gaskill's does.

2) future_value: A slightly more complex and featureful class, whose objects can be used as stand-ins for objects of its template type. This is useful for things like lifting ordinary functions into asynchronous ones that use future_value for their return values and input parameters. future_value might _not_ want to support conversion of arbitrary future_value<T> to future_value<void>. Otherwise, it is a future_handle that adds support for:

2a) construction/assignment of future<T> from future<U>, if it is supported by T and U. I believe this is supported by Gaskill's.

2b) construction/assignment of future<T> from T, and maybe (or maybe not) the opposite direction: implicit conversion from future<T> to T. Gaskill's only supports the (more dangerous) conversion from T to future<T>.

My use case requires future_value. Furthermore, from my experience implementing the future_value features in poet::future, I doubt it will be possible to implement future_value relying only on the public interface of future_handle. Thus, a class like future_value would have to be part of the futures library, so it can access and extend the future_handle internals. If the futures library doesn't provide a future_value, it will be of no use to me, since I'll have to continue providing my own independent futures implementation.

On Monday 02 June 2008 11:45 am, Frank Mori Hess wrote:
2b) construction/assignment of future<T> from T, and maybe (or maybe not) the opposite direction: implicit conversion from future<T> to T. Gaskill's only supports the (more dangerous) conversion from T to future<T>.
Err, I seem to keep getting that backwards. I meant Gaskill's only supports future<T> to T.

On Monday 02 June 2008 11:55 am, Peter Dimov wrote:
Frank Mori Hess:
My use case requires future_value.
I generally try to follow the future-related discussions, but do not recall seeing an explanation of the specifics of your use case. Can you please point me to one?
Oh, my use case is the "lifting of an ordinary function to an asynchronous one that takes/returns future_value objects" I mentioned earlier in the post. To be more specific: http://www.comedi.org/projects/libpoet/boostbook/doc/boostbook/html/poet/act...

Frank Mori Hess wrote:
My use case requires future_value.
I generally try to follow the future-related discussions, but do not recall seeing an explanation of the specifics of your use case. Can you please point me to one?
Oh, my use case is the "lifting of an ordinary function to an asynchronous one that takes/returns future_value objects" I mentioned earlier in the post. To be more specific:
http://www.comedi.org/projects/libpoet/boostbook/doc/boostbook/html/poet/act...
To clarify some of my posts, and another use case: I have been thinking a lot about a lazy "passive function", which is rather analogous to an active function. The difference is that the evaluation work is not done by an associated thread, but by a future-listening thread in its wait, get or is_ready calls. In general, any function

    R foo(A1 a1, A2 a2)

can be "lifted" to a passive or active function if any of its arguments needs to be supplied asynchronously. This can be done either totally:

    future<R> foo(future<A1> a1, future<A2> a2)

or partially:

    future<R> foo(future<A1> a1, A2 a2)

I hope that futures will be computationally powerful enough to implement "passive lifting" and that it won't be too cumbersome for users to express this. Johan

Frank Mori Hess:
Oh, my use case is the "lifting of an ordinary function to an asynchronous one that takes/returns future_value objects" I mentioned earlier in the post. To be more specific:
http://www.comedi.org/projects/libpoet/boostbook/doc/boostbook/html/poet/act...
Hm. It would be nice if you could provide a better motivating example. :-)

That aside, I'm not sure why you need future_value semantics. Let's assume that one has a primitive

    future<R> async( F f, A1 a1, ..., An an );

that schedules bind(f,a1,a2,...,an) for execution. It wouldn't be hard to make it recognize when Ai is future<Bi> and do the following instead:

    async_with_guard(
        bind( A1::ready, a1 ) && ...,
        f,
        bind( A1::get, a1 ), ... );

The 'guard' predicate is just symbolic, since the scheduler would in reality store the futures as dependencies and use the appropriate "multi-multiwait". But the point is that I don't see the above requiring any future_value-specific operations. Or stated differently:

    future<double> fd1 = async( f1, 0 );
    future<double> fd2 = async( f1, 1 );
    future<double> fd = async( f2, fd1, 0.3, fd2, 0.7 );

    // actually does:
    future<double> fd = async_with_dependencies(
        bind( f2,
            bind( future<double>::get, fd1 ), 0.3,
            bind( future<double>::get, fd2 ), 0.7 ), // function object
        fd1, fd2 ); // dependency list

In this case one would need to use something like protect(fd1) if the function really takes a future.

It's not clear whether the goal of libpoet is to provide fine-grained parallelism or lazy evaluation; if the latter, one could expect all functions to take futures, since an active_function provides no fine-grained laziness. In this case the above "activize by default" approach won't be very convenient.

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Monday 02 June 2008 13:02 pm, Peter Dimov wrote:
Frank Mori Hess:
Oh, my use case is the "lifting of an ordinary function to an asynchronous one that takes/returns future_value objects" I mentioned earlier in the post. To be more specific:
http://www.comedi.org/projects/libpoet/boostbook/doc/boostbook/html/poet/active_function.html
Hm.
It would be nice if you could provide a better motivating example. :-)
Well, I want the "lifted" versions of functions to be as usable as ordinary functions. So, you can compose functions whose types don't match exactly:

    float f();
    void g(double x);

    g(f()); // fine

But not the versions lifted to future_handles:

    future_handle<float> lifted_f();
    future_handle<void> lifted_g(future_handle<double> x);

    lifted_g(lifted_f()); // error, no implicit conversions for future_handle
That aside, I'm not sure why you need future_value semantics. Let's assume that one has a primitive
future<R> async( F f, A1 a1, ..., An an );
that schedules bind(f,a1,a2,...,an) for execution. It wouldn't be hard to make it recognize when Ai is future<Bi> and do the following instead:
async_with_guard(
bind( A1::ready, a1 ) && ...,
f,
bind( A1::get, a1 ), ... );
The 'guard' predicate is just symbolic since the scheduler would in reality store the futures as dependencies and use the appropriate "multi-multiwait". But the point is that I don't see the above requiring any future_value-specific operations.
That's interesting. The binds you describe in async_with_guard play a role similar to my future_combining_barrier(), except they need a scheduler thread to evaluate them, whereas the user functor passed to future_combining_barrier will be evaluated by whatever thread tries to use the returned future once its future dependencies are all ready. And as I realized and posted a couple of hours ago, future_combining_barrier plus a future_handle would be sufficient for me to implement a future_value on top of. However, Williams' future does not currently support any multi-multiwait. Its wait_for_all and wait_for_any do not return futures but block when called. The Gaskill futures do have operator|| and operator&&, but they are not efficient enough to build a scheduler on. So, I'd like to see something like the future_select, future_barrier, future_selector, future_combining_barrier stuff I'm working on in libpoet in the Boost futures library. None of them require a future_value.
It's not clear whether the goal of libpoet is to provide fine-grained parallelism or lazy evaluation;
active_function provides fine-grained parallelism. Or, maybe adjustable-grained, since multiple active_functions can share the same scheduler thread. Lazy evaluation can be achieved with future_combining_barrier (which isn't in the online docs or any release yet).
if the latter, one could expect all functions to take futures, since an active_function provides no fine-grained laziness. In this case the above "activize by default" approach won't be very convenient.
You can get fine-grained laziness by using future_select in conjunction with future_combining_barrier. For instance, if you wait on a future returned by future_select, and the inputs to the future_select were created by future_combining_barrier, then it will only run the combiners of the input futures until the first one of them completes.

Frank Mori Hess:
However, Williams' future does not currently support any multi-multiwait. Its wait_for_all and wait_for_any do not return futures but block when called. The Gaskill futures do have operator|| and operator&&, but they are not efficient enough to build a scheduler on.
I think that wait_for_any would be enough for such a scheduler. In pseudocode, the dependency-resolving thread would do something like the following, in a loop:

    set< future<void> > waiting_, ready_;

    wait_for_any( waiting_ );

    // for each in waiting_
    //   if ready(), move to ready_

    // for each pending target
    //   if dependencies are a subset of ready_, move to exec queue

    // garbage-collect ready_ in some way :-)

On Monday 02 June 2008 15:58 pm, Peter Dimov wrote:
I think that wait_for_any would be enough for such a scheduler. In pseudocode, the dependency-resolving thread
I didn't mention it, but I was thinking of a scheduler picking method requests and dispatching them using a single thread. At one extreme, you could create a thread that returns a future and then blocks on wait_for_any/wait_for_all in order to implement something that acts like future_select or future_combining_barrier, but it's less than ideal.
would do something like the following, in a loop:
set< future<void> > waiting_, ready_;
wait_for_any( waiting_ );
A scheduler which takes O(N) to dispatch a method request is considered inefficient, N being the number of pending method requests in the scheduler. The call to wait_for_any alone is O(N), thus my motivation for using future_selector for a scheduler. I'll let you in on a dirty secret: the scheduler I'm using now is O(N), but I'd hate not to be able to improve it to something respectable when I need to.
// for_each in waiting_ // if ready() move to ready_
// for each pending target // if dependencies are a subset of ready_, move to exec queue
// garbage-collect ready_ in some way :-)

Frank Mori Hess:
I didn't mention it, but I was thinking of a scheduler picking method requests and dispatching them using a single thread.
Hm. Are you envisaging a concurrency model in which every active_function runs its own scheduler thread? I probably need to take a look at the source of libpoet.
wait_for_any( waiting_ );
A scheduler which takes O(N) to dispatch a method request is considered inefficient. N being the number of pending method requests in the scheduler.
The pseudocode isn't even O(N); it's O(N ln N) at best, maybe worse. :-) Do note that it doesn't necessarily dispatch a single request per wait_for_any call, though.

On Monday 02 June 2008 18:12 pm, Peter Dimov wrote:
The pseudocode isn't even O(N); it's O(N ln N) at best, maybe worse. :-) Do note that it doesn't necessarily dispatch a single request per wait_for_any call though.
Your pseudocode putting the futures in a set does bring up a missing feature in both of the proposals. They don't provide an operator<() that is a strict weak ordering, so they can't be used as keys. A future_handle should have no problem providing an ordering where two future_handles are equivalent if and only if they both reference the same promise.

On Monday 02 June 2008 18:12 pm, Peter Dimov wrote:
Frank Mori Hess:
I didn't mention it, but I was thinking of a scheduler picking method requests and dispatching them using a single thread.
Hm. Are you envisaging a concurrency model in which every active_function runs its own scheduler thread? I probably need to take a look at the source of libpoet.
By default they do. They can also share a scheduler, so you can have an active object consisting of a bunch of active_functions all sharing one scheduler. That gives some automatic thread safety, since all the user code run by the active_functions sharing the scheduler is run in the same thread. If each active_function corresponds to a method on an underlying ordinary object, then methods of the underlying object can all access and modify the object's internal data without worrying about concurrent access, since they will all be running in the same thread.

Frank Mori Hess:
wait_for_any( waiting_ );
A scheduler which takes O(N) to dispatch a method request is considered inefficient. N being the number of pending method requests in the scheduler. The call to wait_for_any alone is O(N), thus my motivation for using future_selector for a scheduler.
Using algorithmic complexity to evaluate the performance of a call that blocks because you have nothing to do is not quite correct. Let's tidy up the pseudocode a bit:

    list< future<void> > deps_;

    wait_for_any( deps_ );

    for each in deps_:
        if ready(), erase

    for each pending task:
        if all dependencies are ready(), execute task

I believe that the "if all dependencies are ready" part corresponds to your method_request::ready. One can reasonably assume that wait_for_any will return early if it finds a ready() future, but even if it doesn't, we can optimize the loop to only call wait_for_any when there was no activity on the last iteration.

So we only enter wait_for_any when none of deps_ are ready() and it blocks. It doesn't really matter what the overhead of wait_for_any is (as long as it isn't pathological), since it's usually going to block anyway. The worse the overhead of wait_for_any, the less often we'll enter it. What matters, in the end, is whether the rest of the scheduler can keep up with the outside world submitting requests; if wait_for_any turns into a bottleneck, it simply won't be called at all.

On Monday 02 June 2008 19:17 pm, Peter Dimov wrote:
Frank Mori Hess:
wait_for_any( waiting_ );
A scheduler which takes O(N) to dispatch a method request is considered inefficient. N being the number of pending method requests in the scheduler. The call to wait_for_any alone is O(N), thus my motivation for using future_selector for a scheduler.
Using algorithmic complexity to evaluate the performance of a call that blocks because you have nothing to do is not quite correct. Let's tidy up the pseudocode a bit:
list< future<void> > deps_;
wait_for_any( deps_ );
for each in deps_: if ready() erase;
for each pending task: if all dependencies are ready() execute task
I believe that the "if all dependencies are ready" part corresponds to your method_request::ready.
One can reasonably assume that wait_for_any will return early if it finds a ready() future, but even if it doesn't, we can optimize the loop to only call wait_for_any when there was no activity on the last iteration.
So we only enter wait_for_any when none of deps_ are ready() and it blocks. It doesn't really matter what the overhead of wait_for_any is (as long as it isn't pathological) since it's usually going to block anyway. The worse the overhead of wait_for_any, the less often we'll enter it. What matters, in the end, is whether the rest of the scheduler can keep up with the outside world submitting requests; if wait_for_any turns into a bottleneck, it simply won't be called at all.
Okay, let's try ignoring the wait_for_any call. Say you have on average N method requests just sitting in your scheduler that are not ready, on average M method requests that are ready to execute, and each method request has on average P dependencies (I'd imagine P would probably be small). That means each pass through your pseudocode will take O(N*P) to iterate through all the unready dependencies of the unready method requests. Then it will take O(M+N) to go through all the method requests and execute the M ready tasks. That results in O(N*P/M + 1) per method request executed. Having M in the denominator is nice, as you would expect it to scale with N, and it means the scheduler probably won't break down under heavy load. On the downside, in a common case I'd expect M to be small (between 0 and 1), with performance concerns about the scheduler starting to come into play as M approaches 1. In my case, where the scheduler thread is executing the tasks itself (as opposed to sending them to a thread pool, for instance), if I were in optimization mode I would start looking at breaking things up more finely once M hit 1, to maximize use of available processors.

Frank Mori Hess: ...
for each in deps_: if ready() erase;
for each pending task: if all dependencies are ready() execute task ...
Okay, lets try ignoring the wait_for_any call. Say you have on average N method requests just sitting in your scheduler that are not ready, and have on average M method requests that are ready to execute, and each method request has on average P dependencies (I'd imagine P would probably be small). That means each pass through your pseudocode will take O(N*P) to iterate through all the unready dependencies of the unready method requests.
The number of unready dependencies can be lower than N*P, but let's ignore that for now.
Then it will take O(M+N) to go through all the method requests and execute the M ready tasks. That results in O(N*P/M+1) per method request executed.
I think that going through the method requests takes (N+M)*P, because of the "all dependencies are ready" part. (I think that this is also true for your scheduler.) So we have (2*N+M)*P total, or (2*N/M+1)*P per executed request.
Having M in the denominator is nice, as you would expect it to scale with N, and it means the scheduler probably won't break down under heavy load.
On the downside, in a common case I'd expect M to be small (between 0 and 1), with performance concerns about the scheduler starting to come into play as M approaches 1.
If the scheduler performance is insufficient, it will not be able to maintain a low M. If it does manage to maintain a low M, then its performance is sufficient. Stated differently, in the additional N*P time taken by the first loop, some of the futures will become ready that wouldn't otherwise, and the second loop will execute more tasks, automatically diluting the overhead.
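For concreteness, the polling pass being analyzed can be sketched in C++ along these lines (`method_request`, `scheduler_pass`, and the `std::function`-based dependency checks are illustrative names for this sketch, not any actual library's API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// A method request: a task plus the dependencies it is still waiting on.
struct method_request {
    std::vector<std::function<bool()>> deps;  // each returns true once ready
    std::function<void()> task;
};

// One pass of the polling scheduler under discussion.
std::size_t scheduler_pass(std::vector<method_request>& requests) {
    // First loop: drop every dependency that has become ready.
    // This is the O(N*P) sweep through unready dependencies.
    for (auto& r : requests)
        r.deps.erase(std::remove_if(r.deps.begin(), r.deps.end(),
                                    [](std::function<bool()>& d) { return d(); }),
                     r.deps.end());

    // Second loop: execute and remove every request with no unready
    // dependencies left -- the O(N+M) sweep that runs the M ready tasks.
    std::size_t executed = 0;
    std::vector<method_request> still_pending;
    for (auto& r : requests) {
        if (r.deps.empty()) { r.task(); ++executed; }
        else still_pending.push_back(std::move(r));
    }
    requests = std::move(still_pending);
    return executed;
}
```

Each call to `scheduler_pass` does one full polling cycle, so the per-request cost claimed above corresponds to the total work of one call divided by its return value.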

I wrote that
If the scheduler performance is insufficient, it will not be able to maintain a low M. If it does manage to maintain a low M, then its performance is sufficient.
Now that I think about it, this isn't true for typical fork-join parallelism. http://gee.cs.oswego.edu/dl/papers/fj.pdf If we have, for example, an active function fib(n) that returns fib(n-1)+fib(n-2), it will quickly generate a large N. In this case, however, if the task queue is actually a stack, it can be reduced in a single pass, as the older tasks will become ready as soon as the newer tasks are done. (But it will take log N passes to fill it. On second thought, with a FIFO policy it will be generated in a single pass and then reduced in log N passes. So it's basically the same.) Not that fork-join parallelism makes much sense for a single-threaded active object. :-)

On Monday 02 June 2008 21:26, Peter Dimov wrote:
The number of unready dependencies can be lower than N*P, but let's ignore that for now.
The big O means I don't care about constant factors :)
Then it will take O(M+N) to go through all the method requests and execute the M ready tasks. That results in O(N*P/M+1) per method request executed.
I think that going through the method requests takes (N+M)*P, because of the "all dependencies are ready" part.
I think some kind of optimization could prevent having to check all P of each method request's dependencies at that stage: for example, if the earlier pass through the dependencies incremented a count in a ready dependency's associated method request, then only the count's value would need to be checked.
(I think that this is also true for your scheduler.)
Oh, the scheduler I implemented in libpoet is not optimal. But with, for example, a completion callback on futures, you could in principle implement an O(P) scheduler. Once a method request has received completion signals from all P of its futures, it generates its ready signal; the slot connected to that signal (which might have had an iterator to the method request's location in a pending-requests list bound to it when it was connected) then moves the request from the pending-requests list into a ready-requests list. So my goal is to make waiting with future_selector in this scenario be O(P).
So we have (2*N+M)*P total, or (2*N/M+1)*P per executed request.
Having M in the denominator is nice, as you would expect it to scale with N, and it means the scheduler probably won't break down under heavy load.
On the downside, in a common case I'd expect M to be small (between 0 and 1), with performance concerns about the scheduler starting to come into play as M approaches 1.
If the scheduler performance is insufficient, it will not be able to maintain a low M. If it does manage to maintain a low M, then its performance is sufficient.
Stated differently, in the additional N*P time taken by the first loop, some of the futures will become ready that wouldn't otherwise, and the second loop will execute more tasks, automatically diluting the overhead.
I'm not quite sure what you're driving at with the above 2 paragraphs. Are you satisfied with your pseudocode scheduler's performance, or no?
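The callback-driven O(P) scheme described above can be sketched single-threaded like this (`toy_future`, `request`, and `scheduler` are made-up names for illustration; a real implementation, e.g. with libpoet's signals, would also need thread safety):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <iterator>
#include <list>
#include <utility>
#include <vector>

// A future that fires completion callbacks instead of being polled.
struct toy_future {
    std::vector<std::function<void()>> on_ready;
    void complete() { for (auto& cb : on_ready) cb(); }
};

struct request {
    std::function<void()> task;
    std::size_t remaining;  // dependencies not yet complete
};

struct scheduler {
    std::list<request> pending, ready;

    // Register P callbacks, one per dependency.  The P-th completion
    // moves the request from pending to ready in O(1) via splice, so
    // total bookkeeping per request is O(P) -- no repeated polling.
    void submit(std::function<void()> task, std::vector<toy_future*> deps) {
        pending.push_back({std::move(task), deps.size()});
        auto it = std::prev(pending.end());  // list iterators stay valid
        for (toy_future* f : deps)
            f->on_ready.push_back([this, it] {
                if (--it->remaining == 0)
                    ready.splice(ready.end(), pending, it);
            });
        if (deps.empty())
            ready.splice(ready.end(), pending, it);
    }

    std::size_t run_ready() {
        std::size_t n = ready.size();
        for (auto& r : ready) r.task();
        ready.clear();
        return n;
    }
};
```

The point of the splice-on-last-signal trick is that the scheduler never scans pending requests at all; work is driven entirely by completion events.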

Frank Mori Hess:
I'm not quite sure what you're driving at with the above 2 paragraphs. Are you satisfied with your pseudocode scheduler's performance, or no?
I'm not sure, as it's hard to measure the performance of pseudocode. :-) To see how it fares, we need a benchmark based on a workload that is representative of your intended use cases... one that cannot be trivially rewritten to perform better using an alternative concurrency model. ;-)

On Monday 02 June 2008 20:56, Frank Mori Hess wrote:
Okay, let's try ignoring the wait_for_any call. Say you have on average N method requests just sitting in your scheduler that are not ready, and have on average M method requests that are ready to execute, and each method request has on average P dependencies (I'd imagine P would probably be small). That means each pass through your pseudocode will take O(N*P) to iterate through all the unready dependencies of the unready method requests.
Ah, I knew there was some mistake. At the beginning of the loop, all N+M method requests are "not ready" as far as the scheduler knows. So that adds another O(M*P) dependencies, giving O((N+M)*P) to iterate through dependencies.
Then it will take O(M+N) to go through all the method requests and execute the M ready tasks. That results in O(N*P/M+1) per method request executed. Having M in the denominator is nice, as you would expect it to scale with N, and it means the scheduler probably won't break down under heavy load.
So we end up with O((N/M + 1)*P).

On Monday 02 June 2008 13:02 pm, Peter Dimov wrote:
That aside, I'm not sure why you need future_value semantics. Let's assume that one has a primitive
future<R> async( F f, A1 a1, ..., An an );
that schedules bind(f,a1,a2,...,an) for execution. It wouldn't be hard to make it recognize when Ai is future<Bi>
Oh, I came up with a reason that having future_value objects as inputs can be preferable to the solution you describe. Your solution forces the "lifted" functions to always take template arguments. But template methods can't be made virtual. Also, you can't pass them to bind() without manually specifying all the template parameters.

Frank Mori Hess:
On Monday 02 June 2008 13:02 pm, Peter Dimov wrote:
That aside, I'm not sure why you need future_value semantics. Let's assume that one has a primitive
future<R> async( F f, A1 a1, ..., An an );
that schedules bind(f,a1,a2,...,an) for execution. It wouldn't be hard to make it recognize when Ai is future<Bi>
Oh, I came up with a reason that having future_value objects as inputs can be preferable to the solution you describe. Your solution forces the "lifted" functions to always take template arguments. But template methods can't be made virtual. Also, you can't pass them to bind() without manually specifying all the template parameters.
Hm. I'm not sure I follow. Let's say that I have:

struct A { virtual int f( long x ) = 0; };
int g();
future<int> h( A* pa ) { return async( &A::f, pa, async( g ) ); }

What's the problem with the outer async recognizing that its third argument is a future and treating it as a dependency?

On Monday 02 June 2008 15:42 pm, Peter Dimov wrote:
Oh, I came up with a reason that having future_value objects as inputs can be preferable to the solution you describe. Your solution forces the "lifted" functions to always take templates arguments. But template methods can't be made virtual. Also, you can't pass them to bind() without manually specifying all the template parameters.
Hm. I'm not sure I follow. Let's say that I have:
struct A { virtual int f( long x ) = 0; };
int g();
future<int> h( A* pa ) { return async( &A::f, pa, async( g ) ); }
What's the problem with the outer async recognizing that its third argument is a future and treating it as a dependency?
That's interesting; is this async stuff from an existing library somewhere, or is it a hypothetical interface? I don't see any problem with it. I was thinking of a virtual active object class, like:

struct ActiveA { virtual future_value<int> f( future_value<long> x ) = 0; };

I believe your solution could be used to achieve similar effects, but it is a very different interface. I'm not saying my interface is the only way to skin the cat, just that I want to be able to implement the interface I prefer and make it work as well as possible.
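The hypothetical unwrapping async discussed here can be approximated on top of std::future (the names `async_unwrapping` and `unwrap` are invented for this sketch, and a deferred std::async stands in for a real scheduler; `A`, `AImpl`, and `g` mirror Peter's example):

```cpp
#include <cassert>
#include <functional>
#include <future>
#include <utility>

// Pass plain arguments through unchanged...
template <class T> T unwrap(T v) { return v; }
// ...but recognize a future argument and wait for its value.
template <class T> T unwrap(std::future<T> f) { return f.get(); }

// An async() that treats future arguments as dependencies: the callee
// receives plain values, so it can be a virtual (non-template) function.
template <class F, class... Args>
auto async_unwrapping(F f, Args... args) {
    return std::async(std::launch::deferred,
                      [](F f, Args... args) {
                          return std::invoke(std::move(f),
                                             unwrap(std::move(args))...);
                      },
                      std::move(f), std::move(args)...);
}

// Peter's example: a virtual member function used through the wrapper.
struct A { virtual int f(long x) = 0; virtual ~A() = default; };
struct AImpl : A { int f(long x) override { return static_cast<int>(x + 1); } };
int g() { return 41; }
```

Because the dependency handling lives in `async_unwrapping` rather than in the callee's signature, `A::f` stays an ordinary virtual function, which is the property being debated.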

On Monday 02 June 2008 11:45 am, Frank Mori Hess wrote:
My use case requires future_value. Furthermore, from my experience implementing the future_value features in poet::future, I doubt it will be possible to implement future_value relying only on the public interface of future_handle.
Actually, let me back down from that a little. I think if the futures library provided a future_combining_barrier (which I was grouping with the future_value functionality in my head before), I could implement a future_value on top of a future_handle. To elaborate, the future_combining_barrier would let you produce a future_handle<R> from a heterogeneous group of input future_handle objects, plus a user-supplied combiner function. The combiner is called to produce the value for the future_handle<R> from the values of the input future_handles when they are ready.
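The future_combining_barrier described above could be sketched as follows (the name and shape follow this description, not poet's real API; a dedicated waiting thread via std::async stands in for a proper callback-based implementation):

```cpp
#include <cassert>
#include <future>
#include <utility>

// Produce a future<R> from a heterogeneous group of input futures plus a
// user-supplied combiner.  The combiner runs once all inputs are ready
// and its return value becomes the value of the resulting future.
template <class Combiner, class... Futures>
auto future_combining_barrier(Combiner c, Futures... in) {
    return std::async(std::launch::async,
                      [](Combiner c, Futures... in) {
                          return c(in.get()...);  // get() waits on each input
                      },
                      std::move(c), std::move(in)...);
}
```

The inputs can be futures of entirely different types; only the combiner needs to know them, which is what makes building future_value-style composition on top of a plain future_handle plausible.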

----- Original Message ----- From: "Hartmut Kaiser" <hartmut.kaiser@gmail.com> To: <boost@lists.boost.org> Sent: Friday, May 30, 2008 9:10 PM Subject: Re: [boost] Updated version of futures library
I've updated my futures library to include wait_for_any() and wait_for_all().
The provided overloads allow either waiting on up to five futures known at compile time:
unique_future<int> f1; shared_future<std::string> f2; unique_future<double> f3; unsigned const ready_index=wait_for_any(future1,future2,future3);
or on a container of futures:
std::vector<jss::shared_future<int> > futures; std::vector<jss::shared_future<int> >::iterator const future= jss::wait_for_any(futures.begin(),futures.end());
In the first instance, they can be of distinct types, but in the latter, all the futures in the container must be of the same type.
The documentation is available at http://www.justsoftwaresolutions.co.uk/files/futures_documentation.html and the files and tests are available at http://www.justsoftwaresolutions.co.uk/files/n2561_futures_revised_20080530.zip
This reminds me of a discussion we had some time ago, namely overloading operator&&() for futures allowing to achieve the same thing: waiting for all of the futures:
future<int> f1; future<double> f2; future<fusion::vector<int, double> > f3 (f1 && f2);
fusion::vector<int, double> result = f3.get(); // waits for f1 and f2
Similarly, overloading operator|| would allow to wait for the first future to finish only:
future<int> f1; future<double> f2; future<variant<int, double> > f3 (f1 || f2);
variant<int, double> result = f3.get(); // waits for first, f1 or f2
Hello,

I think that the problem with the operators is not their implementation, but their definition. Your proposal seems natural for && and for || when the types are different. But when we have common types, things start to be more complicated for operator ||. When all the types are different, we can tell via the type which future finished first; this information is lost otherwise.

Which type will have (f1 || f2) if

future<string> f1; future<string> f2;

If we don't mind which future has finished, future<string> should be good (does variant<string, string> work in this case?). If we do mind, a future<pair<unsigned, string> >: the first component gives the index, and the second the result. The problem is that this does not scale to more than 2 futures. Which type will have (f1 || f2 || f3) if

future<string> f1; future<string> f2; future<int> f3;

future<variant<string, int> > f3 (f1 || f2 || f3); or future<pair<unsigned, variant<string, int> > > f3 (f1 || f2 || f3);

Vicente

vicente.botet wrote:
This reminds me of a discussion we had some time ago, namely overloading operator&&() for futures allowing to achieve the same thing: waiting for all of the futures:
future<int> f1; future<double> f2; future<fusion::vector<int, double> > f3 (f1 && f2);
fusion::vector<int, double> result = f3.get(); // waits for f1 and f2
Similarly, overloading operator|| would allow to wait for the first future to finish only:
future<int> f1; future<double> f2; future<variant<int, double> > f3 (f1 || f2);
variant<int, double> result = f3.get(); // waits for first, f1 or f2
I think that the problem with the operators is not their implementation, but their definition. Your proposal seems natural for && and for || when the types are different. But when we have common types, things start to be more complicated for operator ||. When all the types are different, we can tell via the type which future finished first. This information is lost otherwise.
Valid point. But this information can be retained easily by returning a pair from the composed future (see below). The types in the list have to be collapsed, for sure.
Which type will have (f1 || f2) if future<string> f1; future<string> f2;
If we don't mind which future has finished, future<string> should be good (does variant<string, string> work in this case?).
variant<string, string> will compile, but obviously doesn't make any sense. If all future return types are identical the variant can be dropped completely.
If we do mind, a future<pair<unsigned, string> >: the first component gives the index, and the second the result.
The problem is that this does not scale to more than 2 futures. Which type will have (f1 || f2 || f3) if future<string> f1; future<string> f2; future<int> f3;
future<variant<string, int> > f3 (f1 || f2 || f3); or future<pair<unsigned, variant<string, int> > > f3 (f1 || f2 || f3);
I'd suggest using the latter one: the unsigned for the index of the finished future and the variant holding all the different possible return types (or just the return type, if all types are identical):

future<string> f1, f2; future<int> f3;
future<pair<unsigned, variant<string, int> > > f4 (f1 || f2 || f3);

and

future<string> f1, f2, f3;
future<pair<unsigned, string> > f4 (f1 || f2 || f3);

which scales well for arbitrary numbers of parameters. This shows another advantage of the syntax: you don't have to provide N overloads of wait_all/wait_any for N possible parameters, but only 2 or 3. Regards Hartmut
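A two-argument version of the suggested pair<unsigned, variant> composition can be sketched against std::future and std::variant (not the Boost proposal's types). Lacking a true wait_for_any primitive, the helper thread here simply polls, so it is illustrative only; note also that an overloaded operator|| does not short-circuit like the built-in one, and that in_place_index sidesteps Vicente's concern about identical types, since variant<string, string> is addressed by index rather than by type:

```cpp
#include <cassert>
#include <chrono>
#include <future>
#include <string>
#include <utility>
#include <variant>

// (f1 || f2) yields a future carrying the index of the first future to
// finish plus its value, per the composition suggested in this thread.
template <class T, class U>
std::future<std::pair<unsigned, std::variant<T, U>>>
operator||(std::future<T> a, std::future<U> b) {
    return std::async(std::launch::async,
        [](std::future<T> a, std::future<U> b) {
            using V = std::variant<T, U>;
            for (;;) {  // polling stand-in for a real wait_for_any
                if (a.wait_for(std::chrono::milliseconds(1)) ==
                        std::future_status::ready)
                    return std::pair<unsigned, V>(
                        0u, V(std::in_place_index<0>, a.get()));
                if (b.wait_for(std::chrono::milliseconds(1)) ==
                        std::future_status::ready)
                    return std::pair<unsigned, V>(
                        1u, V(std::in_place_index<1>, b.get()));
            }
        },
        std::move(a), std::move(b));
}
```

As noted above, when all the composed futures have the same type the variant could be collapsed to the plain type; the index in the pair still records which future finished first.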
participants (9)
-
Anthony Williams
-
Felipe Magno de Almeida
-
Frank Mori Hess
-
Frank Mori Hess
-
Hartmut Kaiser
-
Johan Torp
-
Maik Beckmann
-
Peter Dimov
-
vicente.botet