[future] Early draft of wait for multiple futures interface

We've been discussing the need to implement a mechanism to wait for multiple futures in a separate thread: www.nabble.com/Review-Request%3A-future-library-%28Gaskill-version%29-to16600402.html

I've come up with some interfaces which I find intuitive and useful. I've never implemented thread waiting, so please let me know if you think something like this is viable.

------------------------------ Proposal ------------------------------

A future can be constructed from, and thereafter wait on, four different "future conditions":

* a promise
* a future_switch
* a future_barrier
* another future

Each condition has a specified return type, and the futures you create from it must match this type. They all have the same possible states - not ready, has a value, or has an exception. I have left these and a lot of other details out of the interface below for conciseness.

------------------------------ Interface drafts ------------------------------

/// Future condition which is ready when any of the added conditions is ready. Has an arbitrary return type.
template<class ReturnType>
class future_switch
{
public:
    /// Direct return
    template<class FutureConditionWithMatchingReturnType>
    void add_case(FutureConditionWithMatchingReturnType f);

    /// Possibility to run a custom functor - bind state, perform some logic, etc. - before fulfilling its condition
    template<class FutureCondition, class FutureTypeToReturnTypeFunctor>
    void add_case(FutureCondition f, FutureTypeToReturnTypeFunctor case_);

    /// Only valid in the is_ready state. Removes the last fulfilled future condition and enables new futures
    /// to be constructed. Existing futures will have copied the value or exception already and will still work.
    void remove_ready_case();

    bool empty() const;
};

/// Future condition which is ready when all added futures are.
/// This interface is less promising and has some flaws. Maybe you can get inspired by it though.
template<class ReturnType>
class future_barrier
{
public:
    future_barrier(const function<ReturnType()>& onReady);

    template<class FutureType>
    void add_condition(const future<FutureType>& f);
};

------------------------------ Example usage ------------------------------

template<class T>
future_switch<T> operator||(const future<T>& lhs, const future<T>& rhs)
{
    future_switch<T> sw;
    sw.add_case(lhs);
    sw.add_case(rhs);
    return sw;
}

template<class T>
T and_impl(const future<T>& lhs, const future<T>& rhs)
{
    return lhs.get() && rhs.get();
}

template<class T>
future_barrier<T> operator&&(const future<T>& lhs, const future<T>& rhs)
{
    // Rather large runtime insecurity - we might forget to add lhs and rhs as conditions to the barrier
    future_barrier<T> b(bind(&and_impl<T>, lhs, rhs));
    b.add_condition(lhs);
    b.add_condition(rhs);
    return b;
}

// This is an experimental use case for using switch as a dynamic set of future conditions.
// It handles three future responses and calls the three functions on the next lines.
bool handle_response_a(ResponseA r);
bool handle_response_b(ResponseB r);
bool handle_response_c(ResponseC r);

bool handle_pending_requests(const future<ResponseA>& a, const future<ResponseB>& b, const future<ResponseC>& c)
{
    future_switch<bool> sw;
    sw.add_case(a, &handle_response_a);
    sw.add_case(b, &handle_response_b);
    sw.add_case(c, &handle_response_c);
    bool all_ok = true;
    while (!sw.empty())
    {
        future<bool> next_response_was_ok(sw);
        all_ok = all_ok && next_response_was_ok.get();
        // Removes the ok:ed future from sw
        sw.remove_ready_case();
    }
    return all_ok;
}

------------------------------ Some comments ------------------------------

There is a lot to improve here. I'd appreciate it if we could focus on determining whether this is a viable solution and the way we want to go before diving into details.

These abstractions have the following properties:

A. Users can compose conditions and poll them without the need of waiting.
B. We do not need to spawn extra threads for waiting - we only wait on the "outermost" future.
C. Threads needn't wake up and poll readiness - assuming the and/or logic is handled by the waiting code.
D. No "user logic" from the future-blocking threads propagates to the promise-fulfilling thread.
E. It's possible to implement efficient laziness by keeping readiness/waiting logic internal in the future library.

"Wait for all" can be implemented without adding future_barrier, by simply calling wait() on all futures. We lose properties A and E for "wait for all" if we do it this way, and the blocking thread will wake up unnecessarily many times.

Property C means promise-fulfilling threads need to do some basic and/or logic. I think this can be implemented in a safe and efficient manner, O(log(N)) for barrier/all and O(1) for switch/any. I also think it is much preferable to waking and switching to future-blocking threads which re-check readiness conditions. Note property D.

Whatever mechanism we choose, I think it's desirable to be able to easily "lift" existing functions to futures;

R foo(A1 a1, A2 a2, A3 a3)

should be easily rewritable as;

future<R> foo(future<A1> a1, A2 a2, future<A3> a3) // Some parameters need not be lifted

Johan
--
View this message in context: http://www.nabble.com/-future--Early-draft-of-wait-for-multiple-futures-inte...
Sent from the Boost - Dev mailing list archive at Nabble.com.
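The "lifting" idea above can be sketched with modern std:: facilities (the discussion predates std::future). This is a minimal illustration, not the proposed library interface; foo and foo_lifted are hypothetical names, and std::launch::deferred stands in for the lazy evaluation the proposal wants - the future arguments are only get()-ed when the returned future is itself waited on, so no extra thread is spawned (property B):

```cpp
#include <future>
#include <string>
#include <utility>

// Hypothetical synchronous function we want to "lift".
int foo(int a1, const std::string& a2, int a3)
{
    return a1 + static_cast<int>(a2.size()) + a3;
}

// Lifted version: the future parameters are awaited only when the
// returned future is waited on, in the waiting thread itself.
std::future<int> foo_lifted(std::future<int> a1, std::string a2, std::future<int> a3)
{
    return std::async(std::launch::deferred,
        [](std::future<int> f1, std::string s, std::future<int> f3) {
            return foo(f1.get(), s, f3.get());
        },
        std::move(a1), std::move(a2), std::move(a3));
}
```

Note that the middle parameter stays unlifted, matching the "some parameters need not be lifted" remark.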

After thinking more carefully about the problem, I've realized it is implementable using condition variables. Here is a revised proposal, which - for now - excludes barriers.

---------------------------- Proposal ----------------------------

Add a class future_expression that basically is a function which depends on future values.

- A typed evaluation function (some work) is associated with each future dependency
--- This evaluation can result in the future_expression becoming ready
- A future dependency is a parent-child relationship. This forms a graph where "promise-futures" are leaves.
--- You can wait() on any future in the graph
- When a future is completed it tells its parents they need to re-evaluate via a future-complete callback
--- The future_expression that needs re-evaluation will:
--- 1. Schedule the associated evaluation to be run on the client thread the next time wait() or is_ready() is called
--- 2. Notify its condition in case the client thread was waiting on this particular node
--- 3. Signal its parents that they too need to re-evaluate
------ This way, the notifications propagate upwards to the root future
------ Some future_expressions might be scheduled for re-evaluation even though the re-evaluation turns out to be unnecessary
- When a future expression becomes ready it drops its shared ownership of all depending children
--- They are no longer needed and can die
--- This is done as a part of the pending evaluations on the client thread
- For now this is not a concurrent object, and neither are the futures formed from it
--- All evaluation code can thus be single-threaded
--- We can change this so that the work of evaluating a future is done by the first thread waiting on it

Note:
- The callback is ONLY usable by future_expressions.
- No user code can be executed by promise-fulfilling code
- Other abstractions build on future_expressions rather than the dangerous complete-callback

---------------------------- Part of interface ----------------------------

template<class T>
struct future_result
{
    T running_result_;
    bool value_is_final_;
    optional<exception> exception_;
    ...
};

/// A representation of a function which depends on futures.
template<class T>
class future_expression
{
public:
    /// Default value if no dependencies are added
    future_expression(T starting_value);

    /// Add a future alongside an evaluation function which will be called lazily by waiting threads
    template<class U>
    void add_dependency(shared_future<U> f,
                        function<void(U future_value, future_result<T>& result_setter)> on_ready);

    /// Perform pending evaluations. If none sets an exception or result, wait on the internal condition
    void wait();

    /// If not ready, perform pending evaluations. If none sets an exception or result, return false
    bool is_ready();
};

---------------------------- Example code ----------------------------

void set_if_true(bool b, future_result<bool>& result_setter)
{
    if (b)
    {
        result_setter.running_result_ = true;
        result_setter.value_is_final_ = true;
    }
}

future<bool> operator||(future<bool> lhs, future<bool> rhs)
{
    future_expression<bool> exp(false);
    exp.add_dependency(lhs, &set_if_true);
    exp.add_dependency(rhs, &set_if_true);
    return future<bool>(exp);
}

template<class T>
void running_add(T value, future_result<T>& result_setter)
{
    result_setter.running_result_ += value;
}

template<class T>
future<T> sum(const vector<future<T> >& futures)
{
    future_expression<T> exp(value_initialized<T>());
    for (typename vector<future<T> >::const_iterator it = futures.begin(); it != futures.end(); ++it)
        exp.add_dependency(*it, &running_add<T>);
    return future<T>(exp);
}

---------------------------- Conclusions ----------------------------

This mechanism would solve all the issues we've found. Most importantly, it allows making composite futures while prohibiting execution of user code while fulfilling promises.
Unfortunately, I feel it might be a little cumbersome to use. Also, since we need wait-for-many support, it needs to get accepted alongside the first version of the future library. I was hoping this version could be very lightweight.

What do you think? Any quick thoughts?

Johan

Johan Torp <johan.torp@gmail.com> writes:
After thinking more carefully on the problem I've realized it is implementable using condition variables. Here is a revised proposal, which - for now - excludes barriers.
Given wait_for_any and wait_for_all (see my latest revision), I don't think we need these more complex composite classes in many cases, and where we do they might be better implemented specifically for the case at hand.

Anthony
--
Anthony Williams | Just Software Solutions Ltd
Custom Software Development | http://www.justsoftwaresolutions.co.uk
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL
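For readers unfamiliar with the wait_for_any interface being referenced: Anthony's implementation wakes waiters through internal condition variables, which std::future does not expose, so the following sketch only emulates the two-future interface by polling with wait_for. It is illustrative of the call shape (return the index of the first ready future), not of how his library works:

```cpp
#include <chrono>
#include <future>

// Naive polling emulation of a two-future wait_for_any.
// Returns 0 if f0 becomes ready first, 1 if f1 does.
template<typename T>
unsigned wait_for_any(const std::shared_future<T>& f0,
                      const std::shared_future<T>& f1)
{
    for (;;)
    {
        if (f0.wait_for(std::chrono::milliseconds(0)) == std::future_status::ready)
            return 0;
        if (f1.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready)
            return 1;
    }
}
```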

Anthony Williams-4 wrote:
After thinking more carefully on the problem I've realized it is implementable using condition variables. Here is a revised proposal, which - for now - excludes barriers.
Given wait_for_any and wait_for_all (see my latest revision), I don't think we need these more complex composite classes in many cases, and where we do they might be better implemented specifically for the case at hand.
I agree we do not need this complex functionality in most cases. I too want to find a minimal interface which we can implement and get accepted into Boost first; then we can add rich functionality - such as Gaskill's proposal - on top of it. I hope we can find something much simpler than what I proposed without exposing the "completed callback". I fear that it might be difficult though.

I don't think your proposed interface is powerful enough to implement composed futures, but I might be wrong. For instance, how would you implement future operators with it?

future<bool> operator||(future<bool> lhs, future<bool> rhs);

Johan

Johan Torp <johan.torp@gmail.com> writes:
Anthony Williams-4 wrote:
After thinking more carefully on the problem I've realized it is implementable using condition variables. Here is a revised proposal, which - for now - excludes barriers.
Given wait_for_any and wait_for_all (see my latest revision), I don't think we need these more complex composite classes in many cases, and where we do they might be better implemented specifically for the case at hand.
I agree we do not need this complex functionality in most cases. I too want to find a minimal interface which we can implement and get accepted into boost first, then we can add rich functionality - as Gaskill's proposal - on top of it. I hope we can find something much simpler than what I proposed without exposing the "completed callback". I fear that it might be difficult though.
I don't think your proposed interface is powerful enough to implement composed futures, but I might be wrong. For instance, how would you implement future operators with it?
future<bool> operator||(future<bool> lhs, future<bool> rhs);
Using wait callbacks:

namespace detail
{
    template<typename R>
    class future_or
    {
        boost::shared_ptr<packaged_task<R> > task;
        shared_future<R> lhs;
        shared_future<R> rhs;
    public:
        future_or(boost::shared_ptr<packaged_task<R> > const& task_,
                  shared_future<R> const& lhs_,
                  shared_future<R> const& rhs_):
            task(task_), lhs(lhs_), rhs(rhs_)
        {}

        R operator()()
        {
            unsigned const index = wait_for_any(lhs, rhs);
            return index ? rhs.get() : lhs.get();
        }

        static void run(packaged_task<R>& pt)
        {
            try
            {
                pt();
            }
            catch(task_already_started&)
            {}
        }
    };
}

template<typename R>
unique_future<R> operator||(shared_future<R> const& lhs, shared_future<R> const& rhs)
{
    boost::shared_ptr<packaged_task<R> > task(new packaged_task<R>);
    *task = packaged_task<R>(detail::future_or<R>(task, lhs, rhs));
    task->set_wait_callback(detail::future_or<R>::run);
    return task->get_future();
}

Alternatively, you could have future_or spawn a thread to run the task rather than do it lazily as a wait callback.

Anthony

Anthony Williams-4 wrote:
Alternatively, you could have future_or spawn a thread to run the task rather than do it lazily as a wait callback.
I don't think spawning a thread is acceptable.

1. Starting a new thread is expensive. Thread pools might help here but they don't exist yet.
2. Context switching is expensive. Typically, the evaluation work is really small. This approach requires an additional context switch compared to the future-listener performing the evaluation itself. It might also reduce the performance of a thread pool if composite futures are frequently evaluated.
3. Composed futures might need to access data belonging to the future-listener in their evaluation. If the evaluation is performed by a single future-listener, it can use const references. Otherwise such data needs to be copied or protected by mutexes.
4. Nested composed futures might get a context switch at every depth level.

I wouldn't be surprised if the context switching would rule out futures from libraries such as asio or poet. That would be a real shame. Frank, or somebody with asio insight, what do you think?

I'm also not convinced that we want a wait callback. I'm not sure "thread stack re-use" is such a good idea. As I pointed out before, it can spread deadlocks to client threads in very unexpected ways, for instance by violating lock acquisition orderings. Library support for fibers or even language support for coroutines are cleaner alternatives; I suppose they're quite difficult to add though.

Johan

On Friday 30 May 2008 09:25 am, Johan Torp wrote:
Anthony Williams-4 wrote:
Alternatively, you could have future_or spawn a thread to run the task rather than do it lazily as a wait callback.
I don't think spawning a thread is acceptable.
1. Starting a new thread is expensive. Thread pools might help here but they don't exist yet.
2. Context switching is expensive. Typically, the evaluation work is really small. This approach requires an additional context switch compared to the future-listener performing the evaluation itself. It might also reduce the performance of a thread pool if composite futures are frequently evaluated.
3. Composed futures might need to access data belonging to the future-listener in their evaluation. If the evaluation is performed by a single future-listener, it can use const references. Otherwise such data needs to be copied or protected by mutexes.
4. Nested composed futures might get a context switch at every depth level.
I wouldn't be surprised if the context switching would rule out futures from libraries such as asio or poet. That would be a real shame. Frank or somebody with asio insight, what do you think?
I've been working on the "wait for any/all" issue with libpoet's futures, and for me it has boiled down to the question of how this function should behave:

template<typename R, typename Combiner, typename T1, typename T2, ..., typename TN>
future<R> future_combining_barrier(Combiner combiner, future<T1> f1, future<T2> f2, ..., future<TN> fN);

The idea of future_combining_barrier is that the returned future<R> gets its value by calling

combiner(f1.get(), f2.get(), ..., fN.get())

once all the input futures are ready. I see 3 options:

1) The combiner is run in a future-waiting thread.
2) The combiner is run in a promise-fulfilling thread.
3) The combiner is run in its own thread.

I initially tried to implement option (1) but could not make it work satisfactorily with my implementation (which relies on completion callbacks). Basically, I couldn't get the returned future<R> to work well enough. I wanted it to run the combiner without holding any internal library locks, and also to run its own completion callback without undue goading from external code. Also, it is easy to accidentally run the combiner in a promise-fulfilling thread if the combiner gets run by the future::ready() query as well as when waiting. I haven't fully explored the uses of wait callbacks though; maybe something can be made to work.

Currently, I have implemented solution (2). It seems to work well enough. Exceptions thrown in the combiner are transported to the returned future<R>. However, it does mean that a combiner which blocks or takes a long time can stall the promise-fulfilling thread that runs it.

Solution (3) has the advantage that it insulates the promise-fulfilling thread from the possibility of being stalled by a future-waiting thread which inserts a slow-running combiner. However, it is a relatively heavyweight solution, since I'd expect the typical combiner to be a fairly trivial and quick function to execute.
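Option (1) - the combiner running in a future-waiting thread - can be approximated today with std::launch::deferred, which defers the combiner to whichever thread first waits on (or gets) the result. The following is a two-argument sketch of that idea, not libpoet's actual future_combining_barrier; the variadic form and exception plumbing are elided:

```cpp
#include <future>

// Sketch of option (1): the deferred task runs in the waiting thread,
// so the promise-fulfilling threads never execute the combiner.
template<typename Combiner, typename T1, typename T2>
auto future_combining_barrier(Combiner combiner,
                              std::shared_future<T1> f1,
                              std::shared_future<T2> f2)
    -> std::future<decltype(combiner(f1.get(), f2.get()))>
{
    return std::async(std::launch::deferred,
        [combiner, f1, f2]() { return combiner(f1.get(), f2.get()); });
}
```

Note how this also exhibits the scheduling drawback discussed later in the thread: the result never becomes ready until some thread actually waits on it.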

Frank Mori Hess wrote:
Alternatively, you could have future_or spawn a thread to run the task rather than do it lazily as a wait callback.
I don't think spawning a thread is acceptable.
1. Starting a new thread is expensive. Thread pools might help here but they don't exist yet.
2. Context switching is expensive. Typically, the evaluation work is really small. This approach requires an additional context switch compared to the future-listener performing the evaluation itself. It might also reduce the performance of a thread pool if composite futures are frequently evaluated.
3. Composed futures might need to access data belonging to the future-listener in their evaluation. If the evaluation is performed by a single future-listener, it can use const references. Otherwise such data needs to be copied or protected by mutexes.
4. Nested composed futures might get a context switch at every depth level.
I wouldn't be surprised if the context switching would rule out futures from libraries such as asio or poet. That would be a real shame. Frank or somebody with asio insight, what do you think?
I've been working on the "wait for any/all" issue with libpoet's futures, and for me it has boiled down to the question of how this function should behave:
template<typename R, typename Combiner, typename T1, typename T2, ..., typename TN> future<R> future_combining_barrier(Combiner combiner, future<T1> f1, future<T2> f2, ..., future<TN> fN);
The idea of future_combining_barrier is the returned future<R> gets its value by calling
combiner(f1.get(), f2.get(), ..., fN.get())
once all the input futures are ready. I see 3 options:
1) The combiner is run in a future-waiting thread.
2) The combiner is run in a promise-fulfilling thread.
3) The combiner is run in its own thread.
I think this is a very important and difficult design decision. As I explained above (with the points numbered 1-4), I don't think option 3 is viable. An additional argument against option 3 is that you can't use futures in single-threaded programs.

My arguments against option 2 are:

A. If the combiner deadlocks, this would spread to the promise-fulfiller. This would make a bug harder to pinpoint.
B. Argument 3 from the 1-4 arguments above.
C. You can have exceptions (such as Windows SE exceptions - divide by zero, null pointer use, etc.) which you don't want to catch and set in the future, but rather would like to catch at the "stack top" of each started thread (at least the threads you explicitly start yourself). If you catch and set these exceptions and later re-throw them, you lose the stack trace inside the combiner.
D. A log typically records which thread does what. If the promise-fulfiller executes lots of different foreign threads' code, it can become more difficult to follow program execution.
E. All of the arguments above stem from executing code which is set by a foreign thread. I believe there are lots of other effects that can happen which I haven't thought of.

OTOH, the future-listener which creates the composite future has full control. If the future is shared between more threads after creation, it can be accompanied by documentation (i.e. "this future will acquire this lock"). We do not want this sort of coupling between the promise-listener and promise-fulfillers.

Frank Mori Hess wrote:
I initially tried to implement option (1) but could not make it work satisfactorily with my implementation (which relies on completion callbacks). Basically, I couldn't get the returned future<R> to work well enough. I wanted it to run the combiner without holding any internal library locks, and also to run its own completion callback without undue goading from external code.
I really hope option 1 is implementable without holding any locks. Could you explain in more detail what problems you ran into?

Frank Mori Hess wrote:
Also, it is easy to accidentally run the combiner in a promise-fulfilling thread if the combiner gets run by the future::ready() query as well as when waiting.
I'm not following you, could you elaborate?

Johan

On Friday 30 May 2008 18:10, Johan Torp wrote:
Frank Mori Hess wrote:
once all the input futures are ready. I see 3 options:
1) The combiner is run in a future-waiting thread.
2) The combiner is run in a promise-fulfilling thread.
3) The combiner is run in its own thread.
I really hope option 1 is implementable without holding any locks. Could explain in more detail what problems you ran into?
I've got it working locally now. The combiner and any implicit type conversions now only happen in future-waiting threads, and they are run without holding locks. I haven't committed it yet though, since it breaks my scheduler (due to some futures not completing until a wait or a future::ready/has_exception query is performed). This makes my scheduler stall, so I'll need to implement a reasonably efficient reusable "wait for any", like future_selector, to fix it. The changes also add some overhead. I haven't done any measurements, but the amount of locking/unlocking of mutexes has definitely increased.

Frank Mori Hess-2 wrote:
On Friday 30 May 2008 18:10, Johan Torp wrote:
Frank Mori Hess wrote:
once all the input futures are ready. I see 3 options:
1) The combiner is run in a future-waiting thread.
2) The combiner is run in a promise-fulfilling thread.
3) The combiner is run in its own thread.
I really hope option 1 is implementable without holding any locks. Could explain in more detail what problems you ran into?
I've got it working locally now. The combiner and any implicit type conversions now only happen in future-waiting threads, and they are run without holding locks. I haven't committed it yet though, since it breaks my scheduler (due to some futures not completing until a wait or a future::ready/has_exception query is performed). This makes my scheduler stall, so I'll need to implement a reasonably efficient reusable "wait for any", like future_selector, to fix it. The changes also add some overhead. I haven't done any measurements, but the amount of locking/unlocking of mutexes has definitely increased.
Great, thanks for the effort. Could you post the code somewhere or mail it to me at johan [.] torp [at] gmail [.] com? I've been worried that an "ideal" future will need to hold a lot of state and become somewhat complicated.

I've been thinking about another problem which seems related to your scheduling problem. I'm not sure if this is due to the way I program or if I'm trying to use futures in ways they're not supposed to be used, so please bear with me.

I often have some kind of top flow control mechanism for some threads which waits - without any time limit/poll - when there is nothing to do. It might be:
- A message pump
- An asio::io_service (calling run)
- A job queue
- A logical clock in reactive programming

Within this thread, code is executed while processing a message, servicing something, executing a job or inside a logical clock tick/instant. Let's call this X's client code. Inside the client code, I imagine you often will create a new pending request to some service and get a future back to wait on. Instead of waiting directly on the future and stalling the entire thread, you'd typically want to get a notification from X at a later point. Let's call this future F, and let's for simplicity assume there is only one of it.

This gives us the need to wait for either something to do in X or F. We need to provide some interface/mechanism for X's client code to reschedule some execution when a future becomes ready. This can be implemented in a number of ways:

A. F supplies a callback from the promise-fulfilling thread which is used to signal X.
B. X supplies a work-to-do callback hook. We somehow use this hook to create a combined future with F and wait for it.
C. X supplies a future<void> which becomes ready when there is work to be done. We wait for the combined future of F and this future.
D. Futures and X both depend on the same condition-expression waiting mechanism and both expose this fact. We wait on this mechanism.
E. X is rewritten to work with futures internally. This fact is either exposed directly or a mechanism to schedule a future-dependent execution is given.
F. *slap*! Futures should not be exposed directly to X's client code. Rather, the future-dependent execution should be done in some external thread which then notifies X. This could be a fire-and-forget thread per future. Or it could be a reusable service thread which can wait on multiple futures and dispatch arbitrary functions accordingly.

I've been arguing against A, which otherwise seems like a simple solution. B seems a bit analogous, but I suppose it depends on what X is.

Thoughts on this? Is this a real problem and typical use case, or am I just programming in strange ways?

Johan

Johan Torp:
I often have some kind of top flow control mechanism for some threads which waits - without any time limit/poll - when there is nothing to do. It might be:
- A message pump
- An asio::io_service (calling run)
- A job queue
- A logical clock in reactive programming
Within this thread, code is executed while processing a message, servicing something, executing a job or inside a logical clock tick/instant. Let's call this X's client code. Inside the client code, I imagine you often will create a new pending request to some service and get a future back to wait on. Instead of waiting directly on the future and stalling the entire thread, you'd typically want to get a notification from X at a later point. Let's call this future F, and let's for simplicity assume there is only one of it.
This gives us the need to wait for either something to do in X or F.
The fundamental question is whether the layer you're calling should expose

future<X> async_do_something( Y y );

or

void async_do_something( Y y, function<void(X)> done );

Obviously, in a message-passing-based architecture, the second interface would be more convenient as it allows you to just post an appropriate message in 'done'.

The "completion callback" approach tries to express the second interface in terms of the first, but I'm not quite sure that this is correct. Expressing the first in terms of the second seems more natural to me.

future<X> async_do_something( Y y )
{
    promise<X> px;
    future<X> fx = px.get_future();
    async_do_something( y, bind( &promise<X>::set_value, move(px), _1 ) );
    return fx;
}

There's the small problem with exceptions though.

void async_do_something( Y y, function<void(X)> done, function<void(exception_ptr)> failed );

future<X> async_do_something( Y y )
{
    shared_ptr< promise<X> > px( new promise<X> );
    future<X> fx = px->get_future();
    async_do_something( y,
        bind( &promise<X>::set_value, px, _1 ),
        bind( &promise<X>::set_exception, px, _1 ) );
    return fx;
}

which only reaffirms my experience that each time you see something move-only, you are going to need shared_ptr to get around its limitations. But that's another story. :-)
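The adapter sketched above is directly runnable with modern std:: facilities. This version is an illustration of the same pattern, not the code from the post: the callback-based layer here is a hypothetical stand-in that happens to invoke its callbacks synchronously, and lambdas replace bind:

```cpp
#include <exception>
#include <functional>
#include <future>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical callback-based lower layer: invokes exactly one of the
// two callbacks (synchronously here, for simplicity).
void async_do_something(int y,
                        std::function<void(std::string)> done,
                        std::function<void(std::exception_ptr)> failed)
{
    if (y >= 0)
        done("result:" + std::to_string(y));
    else
        failed(std::make_exception_ptr(std::runtime_error("negative input")));
}

// The future-based interface expressed on top of the callback-based one.
// A shared_ptr<promise> works around promise being move-only when it
// must be shared by the success and failure callbacks.
std::future<std::string> async_do_something(int y)
{
    auto px = std::make_shared<std::promise<std::string>>();
    std::future<std::string> fx = px->get_future();
    async_do_something(y,
        [px](std::string v) { px->set_value(std::move(v)); },
        [px](std::exception_ptr e) { px->set_exception(e); });
    return fx;
}
```

The failure callback feeds set_exception, so errors surface as exceptions from future::get - the exception-propagation point made in the post.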

Peter Dimov-5 wrote:
The fundamental question is whether the layer you're calling should expose
future<X> async_do_something( Y y );
or
void async_do_something( Y y, function<void(X)> done );
Obviously, in a message-passing-based architecture, the second interface would be more convenient as it allows you to just post an appropriate message in 'done'.
The "completion callback" approach tries to express the second interface in terms of the first, but I'm not quite sure that this is correct. Expressing the first in terms of the second seems more natural to me.
Interesting way of putting it. I was hoping we could do better than the second interface - that futures could solve a lot of the problems associated with that approach:
- Exception propagation, as you pointed out.
- It does not allow detecting that there are no futures associated with a particular promise, which has been proposed.
- You need to handle safe and thread-safe shutdown from within the "done" callback, if the future-listening thread wants to terminate before the active object.
- It also suffers from all of the problems with moving functors/code from one thread to another. See my arguments regarding executing foreign threads' code at: http://www.nabble.com/-future--Early-draft-of-wait-for-multiple-futures-inte...

I guess most done-callbacks will be thin wrappers which do minimal things, like posting a message containing the value, but the problems are still there.

The alternatives A-F which I listed in my previous post assume interface 1. It's about trying to "marry" different architectures with futures. I was hoping futures could be used as in interface 1 given any architecture you might have - but it seems difficult. Does this seem like a reasonable goal? Do you have any comments on the alternatives A-F in my previous post? Or any thoughts regarding the problems I listed with executing foreign threads' code?

Johan

Johan Torp:
Interesting way of putting it. I was hoping we could do better than the second interface, that futures could solve a lot of the problems which are associated with that approach:
- Exception propagation, as you pointed out
- It does not allow detecting that there are no futures associated with a particular promise, which has been proposed
- You need to handle safe and thread-safe shutdown from within the "done" callback, if the future-listening thread wants to terminate before the active object
- It also suffers from all of the problems with moving functors/code from one thread to another. See my arguments regarding executing foreign threads' code at: http://www.nabble.com/-future--Early-draft-of-wait-for-multiple-futures-inte...
I guess most done-callbacks will be thin wrappers which do minimal things like posting a message containing the value, but the problems are still there.
There's no question that the future-based interface can be safer and more convenient. But this doesn't necessarily make it "the" low-level interface on top of which one can (or must) express all other interfaces. The callback-based interface, on the other hand, can be difficult to use, but both future-based and message-based interfaces can be built upon it. A future-based interface built on top of a callback interface doesn't seem to suffer from the deficiencies you list, and neither does a message-based interface (given an appropriately powerful and robust message-passing framework in which to express it.)

Peter Dimov-5 wrote:
There's no question that the future-based interface can be safer and more convenient. But this doesn't necessarily make it "the" low-level interface on top of which one can (or must) express all other interfaces.
I wasn't suggesting this - for instance, I doubt Microsoft will be willing to rewrite their message pumps to use futures internally :) I wonder if all existing architectures can somehow be adapted to work smoothly with futures - without the future-complete callback. Either by intrusively adding explicit support for futures or by writing some simple helper functions which map futures into the main flow of control. For instance, a message passing architecture might provide something like this:

active_object a;
future<X> x = a.do_foo_async();
// Can this be implemented?
repost_as_message_when_ready(message_channel, x, bind(&make_foo_complete_message, _1));

If futures can be integrated into any architecture, I imagine they can become "the glue" between user architecture and various asynchronous libraries. Does this seem like a reasonable goal for futures? Peter Dimov-5 wrote:
The callback-based interface, on the other hand, can be difficult to use, but both future-based and message-based interfaces can be built upon it.
A future-based interface built on top of a callback interface doesn't seem to suffer from the deficiencies you list, and neither does a message-based interface (given an appropriately powerful and robust message-passing framework in which to express it.)
You're right. Maybe the more powerful but more dangerous callback-style interfaces are better in many situations, rather than forcing futures on the users. Future return value interfaces have a certain elegance/clarity to them which I guess I'm drawn to :) Johan
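The `repost_as_message_when_ready` helper that Johan asks about above can be sketched in standard C++. This is a minimal illustration, not the proposed library API: `message_channel` and the helper's signature are hypothetical stand-ins for whatever the user's message-passing architecture provides. The idea is that the "done" step never runs user logic on a foreign thread; a helper thread merely translates the future's value into a message.

```cpp
#include <future>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

// Minimal thread-safe message channel, standing in for the user's own
// architecture (hypothetical, for illustration only).
struct message_channel {
    void post(std::string m) {
        std::lock_guard<std::mutex> lk(mtx_);
        q_.push(std::move(m));
    }
    std::string pop() {  // blocks; kept as a trivial busy-wait for brevity
        for (;;) {
            std::lock_guard<std::mutex> lk(mtx_);
            if (!q_.empty()) {
                std::string m = q_.front();
                q_.pop();
                return m;
            }
        }
    }
private:
    std::mutex mtx_;
    std::queue<std::string> q_;
};

// Waits for the future on a helper thread, then reposts the result as a
// message into the channel, mapping the future back into the main flow
// of control.
template <class X, class MakeMessage>
std::thread repost_as_message_when_ready(message_channel& ch,
                                         std::future<X> f,
                                         MakeMessage make_message) {
    return std::thread([&ch, f = std::move(f), make_message]() mutable {
        ch.post(make_message(f.get()));
    });
}
```

A caller would hand the returned thread to whatever joins worker threads at shutdown; a real implementation would also need to propagate exceptions from `f.get()` as an error message.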

Peter Dimov-5 wrote:
A future-based interface built on top of a callback interface doesn't seem to suffer from the deficiencies you list, and neither does a message-based interface (given an appropriately powerful and robust message-passing framework in which to express it.)
One of my points is valid even for these two cases. The future interface allows the promise-fulfiller to detect that all associated futures have died and that it needn't fulfill the request. That is, if we add this functionality to the promise class. Johan

Johan Torp:
One of my points is valid even for these two cases. The future interface allows the promise-fulfiller to detect that all associated futures have died and that it needn't fulfill the request. That is, if we add this functionality to the promise class.
In principle, the callback interface could return a "task handle" instead of void, one that can be used to cancel the task. This looks very similar to asio, by the way.

On 6/6/08, Peter Dimov <pdimov@pdimov.com> wrote:
Johan Torp:
One of my points is valid even for these two cases. The future interface allows the promise-fulfiller to detect that all associated futures have died and that it needn't fulfill the request. That is, if we add this functionality to the promise class.
In principle, the callback interface could return a "task handle" instead of void, one that can be used to cancel the task. This looks very similar to asio, by the way.
-- dott. Corrado Zoccolo, PhD - Department of Computer Science - University of Pisa, Italy

On Monday 02 June 2008 04:54 am, Johan Torp wrote:
Great, thanks for the effort. Could you post the code somewhere or mail it to me at johan [.] torp [at] gmail [.] com? I've been worried that an "ideal" future will need to hold a lot of state and become somewhat complicated.
I've committed the changes to libpoet cvs now. I've got future_selector implemented, and it seems to work. I haven't completely converted my own scheduler over to properly using it. I've done some testing, but no benchmarking. My implementation has become a bit complicated, somewhat more so than absolutely necessary. It boils down to 2 essential things I had to add to my future implementation in order to run all user code in future-waiting threads, and wait efficiently.

1) I added an event queue for a future, which any thread can push functors onto, but only future-waiting threads which call future::get/ready/has_exception will cause the functors to be popped off and run.

2) For efficiency, I added a signal which is emitted whenever a functor is pushed onto an event queue. The signal publishes another event which interested observers can push onto their own event queue. When the event is run, it simply polls the event queue that originally emitted the signal. This allows a future which is waiting on N dependencies to only poll the event queues which might actually contain events, instead of having to poll all N of them every time.
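Point 1 above - an event queue whose functors are pushed by any thread but executed only by future-waiting threads - can be sketched roughly like this. This is an illustrative model, not the libpoet code; the class name and member layout are invented for the example.

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

// Shared state modeling point 1: any thread may push_event(), but the
// queued functors only run inside ready()/wait(), i.e. in the thread
// that is actually waiting on the future.
class lazy_shared_state {
public:
    void push_event(std::function<void()> ev) {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            events_.push_back(std::move(ev));
        }
        cond_.notify_all();  // point 2: tell observers work has arrived
    }
    void mark_ready() {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            ready_ = true;
        }
        cond_.notify_all();
    }
    bool ready() {  // drains pending events in the calling thread first
        run_pending_events();
        std::lock_guard<std::mutex> lk(mtx_);
        return ready_;
    }
    void wait() {
        for (;;) {
            run_pending_events();
            std::unique_lock<std::mutex> lk(mtx_);
            if (ready_) return;
            cond_.wait(lk, [this] { return ready_ || !events_.empty(); });
        }
    }
private:
    void run_pending_events() {
        std::deque<std::function<void()>> batch;
        {
            std::lock_guard<std::mutex> lk(mtx_);
            batch.swap(events_);
        }
        for (auto& ev : batch) ev();  // user code runs in *this* thread
    }
    std::mutex mtx_;
    std::condition_variable cond_;
    std::deque<std::function<void()>> events_;
    bool ready_ = false;
};
```

The real implementation also emits a signal per push so that a future waiting on N dependencies only polls queues that might contain events; that indirection is omitted here.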

Anthony Williams-4 wrote:
I don't think your proposed interface is powerful enough to implement composed futures, but I might be wrong. For instance, how would you implement future operators with it?
future<bool> operator||(future<bool> lhs, future<bool> rhs);
Using wait callbacks:
is_ready doesn't trigger the callback, so this won't work. OTOH, I think is_ready should trigger the callback, even for the "run extra work in wait()" thread pool use case. Johan

Johan Torp:
is_ready doesn't trigger the callback, so this won't work.
OTOH, I think is_ready should trigger the callback, even for the "run extra work in wait()" thread pool use case.
ready() should trigger a separate "ready callback", since its semantics are not the same.

(f1 || f2).ready :- f1.ready || f2.ready
(f1 || f2).wait :- wait_for_any(f1, f2)

(Ignoring the problem in which f1 completes with an exception and f2 is still active.) Java futures don't have this callback proliferation problem because they are abstract.

class future
{
public:
    virtual void wait() = 0;
    virtual bool ready() = 0;
};

but we'd rather like to pass and return futures by value. :-)
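Peter's rules for the composite can be made concrete over the abstract interface. A minimal sketch, with invented names (`any_of`, `flag_future`) and a crude polling wait standing in for a real wait_for_any; note that Johan later disputes whether this is the right `ready()` semantics for a value-composing operator||.

```cpp
#include <memory>
#include <thread>
#include <utility>

// Peter's abstract-future shape.
struct abstract_future {
    virtual ~abstract_future() = default;
    virtual void wait() = 0;
    virtual bool ready() = 0;
};

// Trivial leaf future for illustration: readiness is an externally set flag.
struct flag_future : abstract_future {
    bool ready_flag = false;
    bool ready() override { return ready_flag; }
    void wait() override {
        while (!ready_flag) std::this_thread::yield();
    }
};

// Composite implementing (f1 || f2).ready :- f1.ready || f2.ready and
// (f1 || f2).wait :- wait_for_any(f1, f2), the latter via crude polling.
struct any_of : abstract_future {
    std::shared_ptr<abstract_future> f1, f2;
    any_of(std::shared_ptr<abstract_future> a,
           std::shared_ptr<abstract_future> b)
        : f1(std::move(a)), f2(std::move(b)) {}
    bool ready() override { return f1->ready() || f2->ready(); }
    void wait() override {
        while (!ready()) std::this_thread::yield();
    }
};
```

The virtual interface is exactly what forces reference semantics; the thread discussion is about recovering the same composability for by-value futures without exposing callbacks.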

"Peter Dimov" <pdimov@pdimov.com> writes:
Johan Torp:
is_ready doesn't trigger the callback, so this won't work.
OTOH, I think is_ready should trigger the callback, even for the "run extra work in wait()" thread pool use case.
ready() should trigger a separate "ready callback", since its semantics are not the same.
(f1 || f2).ready :- f1.ready || f2.ready (f1 || f2).wait :- wait_for_any(f1, f2)
Interesting thought. An is_ready() query is certainly not a blocking wait, so shouldn't call the wait callback. A "ready callback" would allow a scheduler to prioritize the task without taking over the waiting thread. This goes along with something mentioned earlier about flagging a future as "needed" in order to trigger lazy evaluation, or prioritize the task in a thread pool.
(Ignoring the problem in which f1 completes with an exception and f2 is still active.)
I think that's a non-problem. If f1 completes with an exception, f1 is ready, so (f1 || f2) is ready and should propagate the exception, regardless of the state of f2. This is the same logic as if f1 completes with a value. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

Anthony Williams:
"Peter Dimov" <pdimov@pdimov.com> writes:
(Ignoring the problem in which f1 completes with an exception and f2 is still active.)
I think that's a non-problem. If f1 completes with an exception, f1 is ready, so (f1 || f2) is ready and should propagate the exception, regardless of the state of f2.
The "problem" (bad choice of a word on my part) is with choosing between the behavior you outlined and the alternative, in which f1 || f2 only propagates an exception when both f1 and f2 end with an exception. Both approaches make sense (at least at first sight). I've chosen to implement the first in the pseudocode because it's easier, but the implementability of the second should probably be a concern as well.

"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
"Peter Dimov" <pdimov@pdimov.com> writes:
(Ignoring the problem in which f1 completes with an exception and f2 is still active.)
I think that's a non-problem. If f1 completes with an exception, f1 is ready, so (f1 || f2) is ready and should propagate the exception, regardless of the state of f2.
The "problem" (bad choice of a word on my part) is with choosing between the behavior you outlined and the alternative, in which f1 || f2 only propagates an exception when both f1 and f2 end with an exception. Both approaches make sense (at least at first sight). I've chosen to implement the first in the pseudocode because it's easier, but the implementability of the second should probably be a concern as well.
OK. Let's call it "first_value()" as it returns the first value rather than the first ready future, and we don't have to worry about operator semantics. Here's an implementation, based on my operator||. Note that if the first future ready threw an exception, it takes the value or exception from the second. If the second had a value, that's what you want. If it didn't, you've got two exceptions, and you can either pick one at random or throw a new exception. As before, if you're worried about is_ready(), you can have it spawn a thread rather than do lazy evaluation.

namespace detail
{
    template<typename R>
    class future_first_value
    {
        boost::shared_ptr<packaged_task<R> > task;
        shared_future<R> lhs;
        shared_future<R> rhs;
    public:
        future_first_value(boost::shared_ptr<packaged_task<R> > const& task_,
                           shared_future<R> const& lhs_,
                           shared_future<R> const& rhs_):
            task(task_),lhs(lhs_),rhs(rhs_)
        {}

        R operator()()
        {
            unsigned const index=wait_for_any(lhs,rhs);
            bool const is_exceptional=
                index?rhs.has_exception():lhs.has_exception();
            if(!is_exceptional)
            {
                return index?rhs.get():lhs.get();
            }
            else
            {
                return index?lhs.get():rhs.get();
            }
        }

        static void run(packaged_task<R>& pt)
        {
            try
            {
                pt();
            }
            catch(task_already_started&)
            {}
        }
    };
}

template<typename R>
unique_future<R> first_value(shared_future<R> const& lhs,shared_future<R> const& rhs)
{
    boost::shared_ptr<packaged_task<R> > task(new packaged_task<R>);
    *task=packaged_task<R>(detail::future_first_value<R>(task,lhs,rhs));
    task->set_wait_callback(detail::future_first_value<R>::run);
    return task->get_future();
}

Anthony
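Anthony's code relies on his library's wait_for_any and wait callbacks. For readers without that library, the same first_value semantics can be restated eagerly over standard shared_futures - this is only a behavioral sketch (polling stands in for wait_for_any, which the standard library does not provide), not his implementation:

```cpp
#include <chrono>
#include <future>
#include <thread>

// Eager restatement of first_value's semantics: return the first available
// result; if the first ready future holds an exception, fall back to the
// other side's value (blocking for it if necessary).
template <class R>
R first_value(std::shared_future<R> lhs, std::shared_future<R> rhs) {
    using namespace std::chrono_literals;
    for (;;) {
        for (auto* f : {&lhs, &rhs}) {
            if (f->wait_for(0s) == std::future_status::ready) {
                try {
                    return f->get();
                } catch (...) {
                    // First result was exceptional: take the other side.
                    // If that one also throws, its exception propagates.
                    return (f == &lhs ? rhs : lhs).get();
                }
            }
        }
        std::this_thread::sleep_for(1ms);  // crude stand-in for wait_for_any
    }
}
```

Unlike Anthony's lazy version, this one does the work in the calling thread immediately, so it has none of the is_ready() subtleties discussed above.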

Peter Dimov-5 wrote:
is_ready doesn't trigger the callback, so this won't work.
OTOH, I think is_ready should trigger the callback, even for the "run extra work in wait()" thread pool use case.
ready() should trigger a separate "ready callback", since its semantics are not the same.
(f1 || f2).ready :- f1.ready || f2.ready (f1 || f2).wait :- wait_for_any(f1, f2)
No, (f1 || f2).ready != f1.ready || f2.ready. f1 can be ready and false, in which case we need to wait for f2 to become ready until the composite future is ready.
Java futures don't have this callback proliferation problem because they are abstract.
class future { public: virtual void wait() = 0; virtual bool ready() = 0; };
There are other ways of solving this too, without exposing callbacks.
but we'd rather like to pass and return futures by value. :-)
What do you mean? Johan

Johan Torp <johan.torp@gmail.com> writes:
Peter Dimov-5 wrote:
is_ready doesn't trigger the callback, so this won't work.
OTOH, I think is_ready should trigger the callback, even for the "run extra work in wait()" thread pool use case.
ready() should trigger a separate "ready callback", since its semantics are not the same.
(f1 || f2).ready :- f1.ready || f2.ready (f1 || f2).wait :- wait_for_any(f1, f2)
No, (f1 || f2).ready != f1.ready || f2.ready. f1 can be ready and false, in which case we need to wait for f2 to become ready until the composite future is ready.
What does it mean for a shared_future<string> to be "ready and false"? I view "f1 || f2" as a short-hand for a call to wait_for_any(f1,f2) followed by either f1.get() or f2.get() depending on which was ready first.
Java futures don't have this callback proliferation problem because they are abstract.
class future { public: virtual void wait() = 0; virtual bool ready() = 0; };
There are other ways of solving this too, without exposing callbacks.
Can you suggest some? Anthony

On Friday 30 May 2008 10:44 am, Anthony Williams wrote:
Johan Torp <johan.torp@gmail.com> writes:
Peter Dimov-5 wrote:
is_ready doesn't trigger the callback, so this won't work.
OTOH, I think is_ready should trigger the callback, even for the "run extra work in wait()" thread pool use case.
ready() should trigger a separate "ready callback", since its semantics are not the same.
(f1 || f2).ready :- f1.ready || f2.ready (f1 || f2).wait :- wait_for_any(f1, f2)
No, (f1 || f2).ready != f1.ready || f2.ready. f1 can be ready and false, in which case we need to wait for f2 to become ready until the composite future is ready.
What does it mean for a shared_future<string> to be "ready and false"?
I view "f1 || f2" as a short-hand for a call to wait_for_any(f1,f2) followed by either f1.get() or f2.get() depending on which was ready first.
I believe that in Johan's operator||, the value of the returned future logically corresponds to f1.get() || f2.get(). That is, the operator|| is acting more consistently with the usual meaning of operator|| for boolean values, as opposed to just being a shorthand way to express a "wait for either future" operation. This kind of confusion is why I don't like operator overloading :)

Anthony Williams-4 wrote:
No, (f1 || f2).ready != f1.ready || f2.ready. f1 can be ready and false, in which case we need to wait for f2 to become ready until the composite future is ready.
What does it mean for a shared_future<string> to be "ready and false"?
I view "f1 || f2" as a short-hand for a call to wait_for_any(f1,f2) followed by either f1.get() or f2.get() depending on which was ready first.
I was talking about future operators. That is, the composite future we got from f1 || f2, not selecting the first one which is ready. This should become ready when it has an evaluated value of true or false. Anthony Williams-4 wrote:
Java futures don't have this callback proliferation problem because they are abstract.
class future { public: virtual void wait() = 0; virtual bool ready() = 0; };
There are other ways of solving this too, without exposing callbacks.
Can you suggest some?
I already have - all of my proposals have solved this without exposing any external callbacks. I propose adding a parent-child relationship to futures which is only exposed to the wait-for-any/all mechanism. The promise-fulfillers do not only notify their own condition, they traverse their future's parents and notify them too. The future-expressions can be built up with expression templates or using some sort of future collection. Read my other posts in the review request thread and you should get an idea. Johan
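Johan's parent-child notification idea can be sketched as follows. This is an illustrative model only (`future_node` and its members are invented names), and it shows the wait-for-any case, where a parent becomes ready as soon as any child does; a wait-for-all barrier would instead keep a count of outstanding children.

```cpp
#include <condition_variable>
#include <memory>
#include <mutex>
#include <vector>

// Each future node owns its own condition. Fulfilling a leaf notifies its
// condition and then walks the registered parents, so a composite future
// can block on its own condition alone - no callbacks are exposed to users.
struct future_node {
    std::mutex mtx;
    std::condition_variable cond;
    bool ready = false;
    std::vector<std::shared_ptr<future_node>> parents;

    void notify() {  // called by the promise-fulfiller
        {
            std::lock_guard<std::mutex> lk(mtx);
            ready = true;
        }
        cond.notify_all();
        for (auto& p : parents) p->notify();  // propagate upward (any-case)
    }
    void wait() {
        std::unique_lock<std::mutex> lk(mtx);
        cond.wait(lk, [this] { return ready; });
    }
};
```

An expression like f1 || f2 would build a parent node and register it with both children; waiting on the composite then blocks on one condition instead of polling each child.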

Johan Torp <johan.torp@gmail.com> writes:
Anthony Williams-4 wrote:
No, (f1 || f2).ready != f1.ready || f2.ready. f1 can be ready and false, in which case we need to wait for f2 to become ready until the composite future is ready.
What does it mean for a shared_future<string> to be "ready and false"?
I view "f1 || f2" as a short-hand for a call to wait_for_any(f1,f2) followed by either f1.get() or f2.get() depending on which was ready first.
I was talking about future operators. That is the composite future we got from f1 || f2, not selecting the first one which is ready. This should become ready when it has an evaluated value of true or false.
Are you envisaging "f1 || f2" as a short-hand for "f1.get() || f2.get()" that may not need to wait for both? I see that as having very limited utility, since it only works for future<R> where R can be used with ||. My "f1 || f2" is a future operator that returns a composite future. It is the future that is subject to ||, not the returned value.
There are other ways of solving this too, without exposing callbacks.
Can you suggest some?
I already have - all of my proposals have solved this without exposing any external callbacks. I propose adding a parent-child relationship to futures which is only exposed to the wait-for-any/all mechanism. The promise-fulfillers do not only notify their own condition, they traverse their future's parents and notify them too.
OK. This is similar to what I've done with detail::future_waiter. I could probably extend it to provide a full future that had the semantics of the future_or I posted earlier. I'll have a look next week. Anthony

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Friday, May 30, 2008 5:56 PM Subject: Re: [boost] [future] Early draft of wait for multiple futures interface
Johan Torp <johan.torp@gmail.com> writes:
Anthony Williams-4 wrote:
No, (f1 || f2).ready != f1.ready || f2.ready. f1 can be ready and false, in which case we need to wait for f2 to become ready until the composite future is ready.
What does it mean for a shared_future<string> to be "ready and false"?
I view "f1 || f2" as a short-hand for a call to wait_for_any(f1,f2) followed by either f1.get() or f2.get() depending on which was ready first.
I was talking about future operators. That is the composite future we got from f1 || f2, not selecting the first one which is ready. This should become ready when it has an evaluated value of true or false.
Are you envisaging "f1 || f2" as a short-hand for "f1.get() || f2.get()" that may not need to wait for both? I see that as very limited utility, since it only works for future<R> where R can be used with ||.
My "f1 || f2" is a future operator that returns a composite future. It is the future that is subject to ||, not the returned value.
There are other ways of solving this too, without exposing callbacks.
Can you suggest some?
I already have - all of my proposals have solved this without exposing any external callbacks. I propose adding a parent-child relationship to futures which is only exposed to the wait-for-any/all mechanism. The promise-fulfillers do not only notify their own condition, they traverse their future's parents and notify them too.
OK. This is similar to what I've done with detail::future_waiter. I could probably extend it to provide a full future that had the semantics of the future_or I posted earlier. I'll have a look next week.
Do you mean that future_waiter will be public and not an implementation detail? Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com>
OK. This is similar to what I've done with detail::future_waiter. I could probably extend it to provide a full future that had the semantics of the future_or I posted earlier. I'll have a look next week.
Do you mean that future_waiter will be public and not an implementation detail?
I mean that something based on future_waiter would be public, yes. Anthony

Anthony Williams-4 wrote:
I was talking about future operators. That is the composite future we got from f1 || f2, not selecting the first one which is ready. This should become ready when it has an evaluated value of true or false.
Are you envisaging "f1 || f2" as a short-hand for "f1.get() || f2.get()" that may not need to wait for both? I see that as very limited utility, since it only works for future<R> where R can be used with ||.
My "f1 || f2" is a future operator that returns a composite future. It is the future that is subject to ||, not the returned value.
I see. As others have pointed out, maybe we shouldn't supply the operators because of this possible confusion. Johan

Johan Torp <johan.torp@gmail.com> writes:
Anthony Williams-4 wrote:
I don't think your proposed interface is powerful enough to implement composed futures, but I might be wrong. For instance, how would you implement future operators with it?
future<bool> operator||(future<bool> lhs, future<bool> rhs);
Using wait callbacks:
is_ready doesn't trigger the callback, so this won't work.
That depends on what you mean by "work". It isn't ready, so is_ready() returns false. It just happens to always return false until you do something to make it ready. If you wait on it, it becomes ready. If you don't wait, it doesn't. This is the essential problem with lazy futures. Anthony
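The lazy-future behavior Anthony describes can be modeled in a few lines (an illustrative toy, not the library's implementation): ready() merely reports state, while wait() runs the wait callback, which computes the value and only then flips the future to ready.

```cpp
#include <functional>

// Toy model of a lazy future: until someone waits, is_ready stays false,
// because it is the wait callback itself that produces the value.
struct lazy_future {
    std::function<void()> wait_callback;  // e.g. evaluates f1.get() || f2.get()
    bool is_ready = false;

    bool ready() const { return is_ready; }  // a query; never triggers work
    void wait() {
        if (!is_ready) {
            wait_callback();  // lazy evaluation happens here, in the waiter
            is_ready = true;
        }
    }
};
```

This makes Johan's objection concrete: a caller polling ready() in a loop will spin forever unless it eventually calls wait() or get().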

Anthony Williams-4 wrote:
is_ready doesn't trigger the callback, so this won't work.
That depends on what you mean by "work".
It isn't ready, so is_ready() returns false. It just happens to always return false until you do something to make it ready.
If you wait on it, it becomes ready. If you don't wait, it doesn't. This is the essential problem with lazy futures.
Those semantics are way too strange. A user would not expect having to call wait() or get() to make a future become ready. I have been thinking about not type-erasing the laziness and instead providing a lazy_future class or composite future class. But I think inherent, type-erased laziness is a better idea. Johan

On Friday 30 May 2008 06:23 am, Anthony Williams wrote:
Given wait_for_any and wait_for_all (see my latest revision), I don't think we need these more complex composite classes in many cases, and where we do they might be better implemented specifically for the case at hand.
One problem is that wait_for_any is not sufficient to implement an efficient scheduler. You have to copy all the futures into wait_for_any, so each wait_for_any call is O(N). So something like the future_selecter (should be spelled future_selector?) class I mentioned earlier would still be needed.

Frank Mori Hess <frank.hess@nist.gov> writes:
On Friday 30 May 2008 06:23 am, Anthony Williams wrote:
Given wait_for_any and wait_for_all (see my latest revision), I don't think we need these more complex composite classes in many cases, and where we do they might be better implemented specifically for the case at hand.
One problem is wait_for_any is not sufficient to implement an efficient scheduler. You have to copy all the futures into wait_for_any so each wait_for_any call is O(N). So something like the future_selecter (should be spelled future_selector?) class I mentioned earlier would still be needed.
What do you mean by O(N) in this context? You have N futures to wait for. You have to somehow register that you're waiting for each one => you need O(N) operations. I suppose that if one of the futures was already ready, you could skip the rest, but that doesn't really strike me as much of an improvement. Once it's done registering, wait_for_any then blocks until one of them is ready. No polling, just a single wait. Could you elaborate on what you were getting at? Anthony

Anthony Williams-4 wrote:
One problem is wait_for_any is not sufficient to implement an efficient scheduler. You have to copy all the futures into wait_for_any so each wait_for_any call is O(N). So something like the future_selecter (should be spelled future_selector?) class I mentioned earlier would still be needed.
What do you mean by O(N) in this context?
You have N futures to wait for. You have to somehow register that you're waiting for each one => you need O(N) operations.
I suppose that if one of the futures was already ready, you could skip the rest, but that doesn't really strike me as much of an improvement.
Once it's done registering, wait_for_any then blocks until one of them is ready. No polling, just a single wait.
Could you elaborate on what you were getting at?
I think that it's a common use case to serve or ignore one ready future and then go back to waiting on the remaining ones. This would require O(N^2) with your interface. Johan

On Friday 30 May 2008 11:45 am, Anthony Williams wrote:
One problem is wait_for_any is not sufficient to implement an efficient scheduler. You have to copy all the futures into wait_for_any so each wait_for_any call is O(N). So something like the future_selecter (should be spelled future_selector?) class I mentioned earlier would still be needed.
What do you mean by O(N) in this context?
I mean there are N method requests in my scheduler, and I want to wait until one of them is ready.
You have N futures to wait for. You have to somehow register that you're waiting for each one => you need O(N) operations.
Yes. But the scheduler is going to repeatedly wait on the same N futures (plus or minus one). So as Johan already pointed out, passing iterators to a separate container of futures to wait_for_any() will cause the scheduler to call wait_for_any N times and take O(N^2) to dispatch N method requests. On the other hand, if the container of futures and wait_for_any are integrated into one class (future_selector), then it doesn't have to do the full setup/tear-down with each wait.
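The complexity argument can be sketched with a persistent selector. This is an invented illustration, not libpoet's future_selector: sources register once (O(1) each) and push their index when ready, so the scheduler pops ready indices without re-registering; dispatching N requests is O(N) overall instead of the O(N^2) of N separate wait_for_any calls over the same set.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Persistent alternative to repeated wait_for_any: promise-fulfillers push
// the index of the future that became ready; the scheduler thread pops one
// ready index at a time without tearing down and rebuilding the wait set.
class future_selector {
public:
    void notify_ready(std::size_t index) {  // called by promise-fulfillers
        {
            std::lock_guard<std::mutex> lk(mtx_);
            ready_.push_back(index);
        }
        cond_.notify_one();
    }
    std::size_t select() {  // blocks until some future is ready
        std::unique_lock<std::mutex> lk(mtx_);
        cond_.wait(lk, [this] { return !ready_.empty(); });
        std::size_t i = ready_.front();
        ready_.pop_front();
        return i;
    }
private:
    std::mutex mtx_;
    std::condition_variable cond_;
    std::deque<std::size_t> ready_;
};
```

A scheduler would loop on select(), dispatch the corresponding method request, and optionally register a replacement future under the freed index.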

Frank Mori Hess <frank.hess@nist.gov> writes:
On Friday 30 May 2008 11:45 am, Anthony Williams wrote:
One problem is wait_for_any is not sufficient to implement an efficient scheduler. You have to copy all the futures into wait_for_any so each wait_for_any call is O(N). So something like the future_selecter (should be spelled future_selector?) class I mentioned earlier would still be needed.
What do you mean by O(N) in this context?
I mean there are N method requests in my scheduler, and I want to wait until one of them is ready.
You have N futures to wait for. You have to somehow register that you're waiting for each one => you need O(N) operations.
Yes. But the scheduler is going to repeatedly wait on the same N futures (plus or minus one).
Aha: it's the repeated waits that matter. For one wait it's O(N) in all cases. If you wait again on (almost) the same set, until the set is empty, it's an issue if you have to redo the set up, as that is then O(N^2) in total. That's a different use case to the one I imagined. I would call it a "future dispatcher" and have it invoke a callback on each future as it became ready. Anthony

On Friday 30 May 2008 16:24 pm, Anthony Williams wrote:
Aha: it's the repeated waits that matter. For one wait it's O(N) in all cases. If you wait again on (almost) the same set, until the set is empty, it's an issue if you have to redo the set up, as that is then O(N^2) in total.
That's a different use case to the one I imagined. I would call it a "future dispatcher" and have it invoke a callback on each future as it became ready.
Interesting, being able to attach a callback to such a future dispatcher would actually make it possible for me to implement my scheduler. I also wouldn't need to have a future_combining_barrier which can convert a heterogeneous group of input futures into a result future of arbitrary type. But I think I've finally figured out how to properly run the combiner in a waiting thread as Johan has been suggesting, so I think I'm still going to implement future_selector. It seems more powerful to return a future, although the need does seem far more abstract now that I realize I don't absolutely need the feature myself.

On Fri, May 30, 2008 at 10:24 PM, Anthony Williams <anthony.ajw@gmail.com> wrote:
Aha: it's the repeated waits that matter. For one wait it's O(N) in all cases. If you wait again on (almost) the same set, until the set is empty, it's an issue if you have to redo the set up, as that is then O(N^2) in total.
That's a different use case to the one I imagined. I would call it a "future dispatcher" and have it invoke a callback on each future as it became ready.
Hum, I think that boost already has such a dispatcher: it is called asio::io_service ;). IMHO futures wait sets should be used to wait for very few events. If the number of events grows significantly, it is probably a sign that the application would be better redesigned as an event driven state machine. I do not think it is worth optimizing futures for large N. -- gpd

On Friday 30 May 2008 17:41, Giovanni Piero Deretta wrote:
Hum, I think that boost already has such a dispatcher: it is called asio::io_service ;).
IMHO futures wait sets should be used to wait for very few events. If the number of events grows significantly, it is probably a sign that the application would be better redesigned as an event driven state machine. I do not think it is worth optimizing futures for large N.
That's essentially where I started. To do an event driven design requires the ability to generate an event when a future completes. That means a completion callback as part of the future's public interface. A completion callback has been resisted as too dangerous, and waits on groups of futures offered as a safer alternative. Therefore, I'm trying to insist the wait functionality be sufficient to build a decent scheduler on.

On Friday 30 May 2008 20:21, Frank Mori Hess wrote:
That's essentially where I started. To do an event driven design requires the ability to generate an event when a future completes. That means a completion callback as part of the future's public interface. A completion callback has been resisted as too dangerous, and waits on groups of futures offered as a safer alternative. Therefore, I'm trying to insist the wait functionality be sufficient to build a decent scheduler on.
What if we specifically support asio for handling future completion events? That is, the future lets you register an asio::io_service along with a completion handler, and the promise just posts the completion handler to the io_service when it is fulfilled. Then the completion handler won't actually get run in the promise-fulfilling thread. Would that be safe enough and is it sensible (I don't know asio very well)?

----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com> To: <boost@lists.boost.org> Sent: Friday, May 30, 2008 5:45 PM Subject: Re: [boost] [future] Early draft of wait for multiple futures interface
Frank Mori Hess <frank.hess@nist.gov> writes:
On Friday 30 May 2008 06:23 am, Anthony Williams wrote:
Given wait_for_any and wait_for_all (see my latest revision), I don't think we need these more complex composite classes in many cases, and where we do they might be better implemented specifically for the case at hand.
One problem is wait_for_any is not sufficient to implement an efficient scheduler. You have to copy all the futures into wait_for_any so each wait_for_any call is O(N). So something like the future_selecter (should be spelled future_selector?) class I mentioned earlier would still be needed.
What do you mean by O(N) in this context?
You have N futures to wait for. You have to somehow register that you're waiting for each one => you need O(N) operations.
I suppose that if one of the futures was already ready, you could skip the rest, but that doesn't really strike me as much of an improvement.
Once it's done registering, wait_for_any then blocks until one of them is ready. No polling, just a single wait.
Could you elaborate on what you were getting at?
The O(N) comes from the wait function:

    unsigned wait()
    {
        all_futures_lock lk(futures);
        for(;;)
        {
            for(unsigned i=0;i<futures.size();++i) // O(N)
            {
                if(futures[i].future->done)
                {
                    return futures[i].index;
                }
            }
            cv.wait(lk);
        }
    }

and from the all_futures_lock, which will be initialized (O(N)), unlocked on wait (O(N)) and locked after wait (O(N)):

    struct all_futures_lock
    {
        unsigned count;
        boost::scoped_array<boost::unique_lock<boost::mutex> > locks;

        all_futures_lock(std::vector<registered_waiter>& futures):
            count(futures.size()),
            locks(new boost::unique_lock<boost::mutex>[count])
        {
            for(unsigned i=0;i<count;++i) // O(N)
            {
                locks[i]=boost::unique_lock<boost::mutex>(futures[i].future->mutex);
            }
        }

        void lock()
        {
            boost::lock(locks.get(),locks.get()+count); // O(N)
        }

        void unlock()
        {
            for(unsigned i=0;i<count;++i) // O(N)
            {
                locks[i].unlock();
            }
        }
    };

Best,
Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony.ajw@gmail.com>
The O(N) comes from wait function.
and on the all_futures_lock which will be inialized (O(N)), unlocked on wait (O(N)) and locked after wait (O(N))
Yes, but these are additive, so the result is O(N). You have N futures to wait for, so you cannot do better than O(N): the best you can do is a constant factor. As Frank and Johan have posted, it's the O(N) per wait, with O(N) waits, that's the killer.

Anthony
participants (8)
- Anthony Williams
- Corrado Zoccolo
- Frank Mori Hess
- Frank Mori Hess
- Giovanni Piero Deretta
- Johan Torp
- Peter Dimov
- vicente.botet