Review Request: future library (Gaskill version)

I wanted to formally request a review of my Futures library. Latest Version: http://braddock.com/~braddock/future/

The library has matured greatly over the past year. It has been heavily used as a key component in a mid-sized commercial application on both Linux and Windows. Extensive unit tests and (just now) documentation have been written - in fact there are more than twice as many lines of test code and documentation as there are in the library proper.

The library incorporates many ideas from this list, from other prospective submissions, other languages, and from academic papers.

The library does not currently use jamfiles or boostdoc (I tried, I really did). It is a header-only library. The documentation is currently straight HTML, and should be very easy to translate to boostdoc if the submission looks good.

NOTE: to avoid confusion, there is an older 2005 boost.future candidate in the vault by Thorsten Schuett that is not related to mine (although it was studied early on).

Thanks, Braddock Gaskill braddock@braddock.com

Braddock Gaskill:
I wanted to formally request a review of my Futures library. Latest Version: http://braddock.com/~braddock/future/
The library has matured greatly over the past year. ...
I no longer like future<>::cancel. :-) Have you considered the alternative of automatically invoking the cancel handler when no futures remain? This requires that a promise and a future be created at once:

pair<promise,future> pc = create_channel();

or, if empty futures are allowed:

future f;     // empty/detached
promise p(f); // attach p to f

This mirrors the future's ability to detect the lack of a promise object.
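The scheme Peter describes could be sketched roughly as follows (C++11 for brevity; create_channel and every other name here are hypothetical illustrations, not the API of the library under review). The cancel handler fires when the last future reference disappears before fulfillment, while the promise's own reference keeps the shared state alive:

```cpp
#include <functional>
#include <memory>
#include <utility>

// Hypothetical sketch: cancel-on-last-future-destroyed.
struct shared_state {
    std::function<void()> on_cancel;  // installed by the executor, if any
    bool fulfilled = false;
};

class future {
public:
    explicit future(std::shared_ptr<shared_state> s) : state_(std::move(s)) {}
    future(future&&) = default;
    future& operator=(future&&) = default;
    ~future() {
        // use_count() == 2 means: this future plus the promise. So this is
        // the 1 -> 0 transition of *future* references, and the handler
        // fires exactly once.
        if (state_ && state_.use_count() == 2 && !state_->fulfilled &&
            state_->on_cancel)
            state_->on_cancel();
    }
private:
    std::shared_ptr<shared_state> state_;
};

class promise {
public:
    promise() : state_(std::make_shared<shared_state>()) {}
    future get_future() { return future(state_); }
    void set_cancel_handler(std::function<void()> f) { state_->on_cancel = std::move(f); }
    void set_value() { state_->fulfilled = true; }  // value itself omitted in sketch
private:
    std::shared_ptr<shared_state> state_;
};

// Promise and future created at once, as in the quoted proposal.
std::pair<promise, future> create_channel() {
    promise p;
    future f = p.get_future();
    return {std::move(p), std::move(f)};
}
```

A fulfilled promise suppresses the cancel handler, so only abandonment (all futures gone while the task is still pending) triggers cancellation.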

On Wednesday 09 April 2008 22:17 pm, Peter Dimov wrote:
I no longer like future<>::cancel. :-) Have you considered the alternative of automatically invoking the cancel handler when no futures remain? This requires that a promise and a future be created at once:
pair<promise,future> pc = create_channel();
Would that really be necessary? Presumably, you would only want it to call the cancel handler when the future reference count makes the transition from 1 to 0, not potentially multiple times while the reference count sits at 0.
or, if empty futures are allowed:
future f;     // empty/detached
promise p(f); // attach p to f
-- Frank

On Thu, 10 Apr 2008 05:17:25 +0300, Peter Dimov wrote:
I no longer like future<>::cancel. :-)
future<T>::cancel is an impurity - I completely agree, and I devoted an aside to that in my documentation. But many will find it useful. Future variables really should have nothing to do with task management. This is a major confusion which I've seen in prior future implementations and standard proposals. Futures themselves should not be inherently tied to thread creation or anything else - that would be very limiting. I provide a few hooks so that you can tie futures to any scheduler, rpc scheme, or other asynchronous "backend".
Have you considered the alternative of automatically invoking the cancel handler when no futures remain? This requires that a promise and a future be created at once:
pair<promise,future> pc = create_channel();
The thought crossed my mind, but there are too many places in real-world usage (I've had a lot now) where I find I want to get a future from a promise. I think it might be awkward to store both, and defeats the intention of automatic cancelation.
or, if empty futures are allowed:
I allow empty futures. I tried without, but it got too awkward at times.
future f;     // empty/detached
promise p(f); // attach p to f
If I can create a new promise from a future, then I can no longer detect a "broken promise" exception (when the last promise associated with a future goes out of scope for whatever reason, possibly leaving a future forever waiting). Thanks, Braddock Gaskill
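As an aside, C++11's std::future (which post-dates this thread) standardized exactly the broken-promise detection described here: if the last promise goes away without supplying a value, a waiting future receives an exception instead of blocking forever. A minimal demonstration:

```cpp
#include <future>

// Uses standard C++11 facilities, not the library under review: destroying
// a std::promise that never received a value stores a broken_promise error
// in the shared state, which get() then throws.
bool detect_broken_promise() {
    std::future<int> f;
    {
        std::promise<int> p;
        f = p.get_future();
    }   // p destroyed with neither set_value() nor set_exception()
    try {
        f.get();
    } catch (const std::future_error& e) {
        return e.code() == std::future_errc::broken_promise;
    }
    return false;
}
```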

Braddock Gaskill:
pair<promise,future> pc = create_channel();
The thought crossed my mind, but there are too many places in real-world usage (I've had a lot now) where I find I want to get a future from a promise. I think it might be awkward to store both, and defeats the intention of automatic cancelation.
Can you please go into more detail? It seems to me that if you need to create futures on demand, then the non-automatic cancelation should not be used. If some future holder calls cancel, later futures created by this promise will not work. I can't think of a situation where this would be desirable. Storing a future in addition to the promise does disable the implicit cancelation - by design; presumably if you need the capability to create more futures at some later point, you don't want the task canceled in the meantime. And if you don't mind the task being canceled since you no longer need to create futures, you just reset() your local future copy.

On Friday 11 April 2008 10:36 am, Peter Dimov wrote:
It seems to me that if you need to create futures on demand, then the non-automatic cancelation should not be used. If some future holder calls cancel, later futures created by this promise will not work. I can't think of a situation where this would be desirable.
Storing a future in addition to the promise does disable the implicit cancelation - by design; presumably if you need the capability to create more futures at some later point, you don't want the task canceled in the meantime. And if you don't mind the task being canceled since you no longer need to create futures, you just reset() your local future copy.
Would support for releasing a future's reference count be acceptable in this scheme? That is, something like

future<void> fire();
//...
{
    future<void> forget = fire();
    forget.release(); // promise is not cancelled when forget destructs
}

Releasing a future would make the associated promise uncancelable; would this be a problem? Tangentially, would the addition of a weak_future make future reference counting more palatable? -- Frank
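Frank's release() idea might look roughly like this (a self-contained sketch with hypothetical names; the real library's reference counting is more involved). Releasing simply drops the future's reference up front, so its later destruction can no longer be the 1-to-0 transition that triggers cancellation:

```cpp
#include <functional>
#include <memory>

// Illustrative shared state; in a real library the promise would own this.
struct state {
    std::function<void()> on_cancel;
    bool done = false;
};

class future {
public:
    explicit future(std::shared_ptr<state> s) : s_(std::move(s)) {}
    void release() { s_.reset(); }  // fire-and-forget: detach immediately
    ~future() {
        // Last future gone (only the promise's reference remains) while the
        // task is unfinished: cancel it.
        if (s_ && s_.use_count() == 2 && !s_->done && s_->on_cancel)
            s_->on_cancel();
    }
private:
    std::shared_ptr<state> s_;
};
```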

Frank Mori Hess:
Would support for releasing a future's reference count be acceptable in this scheme? That is, something like
future<void> fire();
//...
{
    future<void> forget = fire();
    forget.release(); // promise is not cancelled when forget destructs
}
Good point about fire and forget tasks. I'm not sure I have a satisfactory answer yet. There is no way to distinguish a result-only task from a perform-side-effects task automatically, so the user must have some way to say which is which. On the other hand, a newly created promise doesn't have a cancel handler by default, so whoever installs the cancel handler (the executor) can easily take an argument that disables said installation.

On Fri, 11 Apr 2008 17:36:43 +0300, Peter Dimov wrote:
pair<promise,future> pc = create_channel();
The thought crossed my mind, but there are too many places in real-world usage (I've had a lot now) where I find I want to get a future from a promise. I think it might be awkward to store both, and defeats the intention of automatic cancelation.
Can you please go into more detail?
Frank nailed it with the fire-and-forget example, but I did go back and grep through my application code to see where I was deriving futures from promises.

1) I converted a promise to a future when I needed status functions like ready() from the future. I've kept the promise interface pretty narrow, so that usage could be changed if promise were made broader.

2) In my scheduler I had a "class task" object for each job which contained a promise. The user could also hold a handle to this task object and get a copy of the corresponding future. I suppose code like that could be restructured if the extra constraint was justified.

-braddock

Hello Peter, this future library uses your exception_ptr (providing a partial emulation of N2179), as I suppose should be the case for any future implementation. Do you plan to submit your implementation to Boost? Best regards, _____________________ Vicente Juan Botet Escriba

vicente.botet:
Hello Peter,
this future library uses your exception_ptr (providing a partial emulation of N2179), as I suppose should be the case for any future implementation.
Do you plan to submit your implementation to Boost?
My implementation is merely a proof of concept. I know Anthony Williams has one (specific to MSVC) that is much more impressive (although I don't see it in the SVN). I see that Emil Dotchevski also has something along these lines in boost/exception/cloning.hpp. The future library also seems to contain an enhanced implementation.

Braddock Gaskill <braddock@braddock.com> writes:
I wanted to formally request a review of my Futures library. Latest Version: http://braddock.com/~braddock/future/
The library incorporates many ideas from this list, from other prospective submissions, other languages, and from academic papers.
Have you looked at N2561 (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2561.html)? What do you think? I should have an implementation ready soon, which I was intending to submit to boost. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

Hello, how does your library position itself with respect to the standard proposals?

N2561 Anthony Williams: An Asynchronous Future Value
N2094 Howard Hinnant: Multithreading API for C++0X - A Layered Approach
N2185 Peter Dimov: Proposed Text for Parallel Task Execution
N2276 Anthony Williams: Thread Pools and Futures

Good luck _____________________ Vicente Juan Botet Escriba ----- Original Message ----- From: "Braddock Gaskill" <braddock@braddock.com> To: <boost@lists.boost.org> Sent: Thursday, April 10, 2008 3:39 AM Subject: [boost] Review Request: future library (Gaskill version)
I wanted to formally request a review of my Futures library. Latest Version: http://braddock.com/~braddock/future/
The library has matured greatly over the past year. It has been heavily used as a key component in a mid-sized commercial application on both Linux and Windows. Extensive unit tests and (just now) documentation have been written - in fact there are more than twice as many lines of test code and documentation than are in the library proper.
The library incorporates many ideas from this list, from other prospective submissions, other languages, and from academic papers.
The library does not currently use jamfiles or boostdoc (I tried I really did). It is a header-only library. The documentation is currently straight HTML, and should be very easy to translate to boostdoc if the submission looks good.
NOTE - to avoid confusion, note that there is an older 2005 boost.future candidate in the vault by Thorsten Schuett that is not related to mine (although it was studied early on).
Thanks, Braddock Gaskill braddock@braddock.com

On Thu, 10 Apr 2008 21:53:33 +0200, vicente.botet wrote:
How does your library position itself with respect to the standard proposals? N2561 Anthony Williams: An Asynchronous Future Value N2094 Howard Hinnant: Multithreading API for C++0X - A Layered Approach,
N2185 Peter Dimov: Proposed Text for Parallel Task Execution,
N2276 Anthony Williams: Thread Pools and Futures.
Hi Vicente, the Dimov and Hinnant proposals were primary sources of inspiration for this library when it was written last spring. Much of the exception transport code is from Peter's other N2096 proposal. I will look at Anthony's newer standards proposals this weekend and give a comparison. -braddock

On Thu, 10 Apr 2008 21:53:33 +0200, vicente.botet wrote:
How does your library position itself with respect to the standard proposals? N2561 Anthony Williams: An Asynchronous Future Value
I've now taken a good look at the recent C++0X N2561 "An Asynchronous Future Value" proposal. Both my library and N2561 are heavily based on Peter Dimov's earlier API... I dare say a few method name changes would make N2561 a proper subset of my future and promise classes and future_wrapper helper. I'm disregarding the future C++ move semantics, of course. We need a future<C++0x>::wait() for that... :)

unique_future<T> is an interesting concept. Still wrapping my head around that.

My base classes offer two features which N2561 does not (and which I would really hope to see in any C++0X standard).

1) First - future<T>::add_callback(f).

add_callback is a hook called when the future is fulfilled. End users probably shouldn't have to touch it, but framework authors (who write schedulers, asio, co-routines, etc.) will DEFINITELY need it. add_callback() enables the following:

- Future Operators ((f1 && f2) || (f3 && f2), etc.) - With add_callback, custom future operators can be written by users.
- Guarded Schedulers (see my library doc "Future Concept Tutorial" section at http://braddock.com/~braddock/future) - Guards are a fundamental concept in the academic future languages I've studied (see some of Ian Foster's papers on Strand and PCN, for example). I have found you can do some amazing things with a guarded task scheduler (I've implemented one outside the future lib).
- Basic "future fulfillment" event notification (i.e., just use add_callback as an event handler). Gives you essentially a signals/slots capability. Not a fan.

An alternative to add_callback() would be to provide just the operators themselves - Guards can reasonably be derived from the combination ops. Any of these mechanisms solve the busy wait in the N2561 motivating example.

2) Second - Lazy Futures.

This is something I picked up from the Oz language future implementation, and have found very useful. A Lazy Future essentially flags itself when it is "needed" - i.e., when someone is blocked on f.wait() or f.get(), or has explicitly called f.set_is_needed(). This allows, for example, a task to only process _IF_ the result is actually needed. Again, see my library doc Tutorial where I show how to easily create a "Lazy Job Queue". Permits nice memoization patterns as well.

Lazy Futures are also needed for "Lazy Future Streams". A Stream is the primary means of inter-task communication in future-related academic languages. It permits producer/consumer patterns and one-to-many channels. A Lazy stream allows the producer task to produce new items only as fast as his fastest consumer needs them (see Ian T. Foster's work again - or my test_future.hpp unit test code). Note I provide an easy-to-use future stream library with std iterator interface.

I also provide a future::cancel(), which has been discussed in the other posts and which I'm not terribly attached to.

I really like seeing that the N2561 C++ proposal uses a split future/promise. Broken_promise exceptions have saved me many times in real-world applications. And I like that you aren't using the implicit type conversion (which has bitten me many times in real-world applications! I only use my future::get() method now). (I still provide implicit conversion, but would take it out very fast if others are no longer attached to it.)

Braddock Gaskill
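A minimal single-threaded sketch of how an add_callback() hook lets users build future operators, here a both() helper standing in for the f1 && f2 case (all names illustrative, not the library's actual API; a real implementation would guard this state with a mutex):

```cpp
#include <functional>
#include <memory>
#include <vector>

// Toy value-less future state with callback support.
struct future_impl {
    bool ready = false;
    std::vector<std::function<void()>> callbacks;

    void set_ready() {
        ready = true;
        for (auto& cb : callbacks) cb();  // notify everyone waiting
        callbacks.clear();
    }
    void add_callback(std::function<void()> f) {
        if (ready) f();                   // already fulfilled: run now
        else callbacks.push_back(std::move(f));
    }
};

// A composite future fulfilled once both operands are fulfilled; this is
// the kind of operator a user could write on top of add_callback alone.
std::shared_ptr<future_impl> both(std::shared_ptr<future_impl> a,
                                  std::shared_ptr<future_impl> b) {
    auto result = std::make_shared<future_impl>();
    auto remaining = std::make_shared<int>(2);
    auto on_one = [result, remaining] {
        if (--*remaining == 0) result->set_ready();
    };
    a->add_callback(on_one);
    b->add_callback(on_one);
    return result;
}
```

An any-of (operator||) variant follows the same pattern with a first-one-wins flag instead of a countdown.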

----- Original Message ----- From: "Braddock Gaskill" <braddock@braddock.com> To: <boost@lists.boost.org> Sent: Monday, April 14, 2008 12:33 AM Subject: Re: [boost] Review Request: future library (Gaskill version)
On Thu, 10 Apr 2008 21:53:33 +0200, vicente.botet wrote:
How does your library position itself with respect to the standard proposals? N2561 Anthony Williams: An Asynchronous Future Value
I've now taken a good look at the recent C++0X N2561 "An Asynchronous Future Value" proposal.
Both my library and N2561 are heavily based on Peter Dimov's earlier API... I dare say a few method name changes would make N2561 a proper subset of my future and promise classes and future_wrapper helper.
I thought that the future and promise separation was proposed by Christopher Kohlhoff http://www.nabble.com/-futures--composite-or-to9069443.html#a9473680 :) I think that you should add the reference to his post.
I'm disregarding the future C++ move semantics, of course. We need a future<C++0x>::wait() for that... :)
unique_future<T> is an interesting concept. Still wrapping my head around that.
Me too. There is still the packaged_task class. What do you think about it?
My base classes offer two features which N2561 does not (and which I would really hope to see in any C++0X standard).
-- 1) First - future<T>::add_callback(f).
add_callback is a hook called when the future is fulfilled. End users probably shouldn't have to touch it, but framework authors (who write schedulers, asio, co-routines, etc) will DEFINITELY need it.
add_callback() enables the following:
-Future Operators ((f1 && f2) || (f3 && f2) etc) - With add_callback, custom future_operators can be written by users.
From the new documentation, these operators are already provided by your library. I don't much like overloading the && and || operators when they don't have logical semantics, but this is only a question of style and taste; maybe & and | are less controversial. And I find the op() function very strange.
-Guarded Schedulers (See my library doc "Future Concept Tutorial" section at http://braddock.com/~braddock/future) - Guards are a fundamental concept in the academic future languages I've studied (see some of Ian Foster's papers on Strand and PCN, for example). I have found you can do some amazing things with a guarded task scheduler (I've implemented one outside the future lib).
I like this usage of guards. Interesting. Could you add the reference in the documentation? I'll take a look.
-Basic "future fulfillment" event notification (ie, just use add_callback as an event handler). Gives you essentially a signals/slot capability. Not a fan.
An alternative to add_callback() would be to provide just the operators themselves - Guards can reasonably be derived from the combination ops.
Any of these mechanisms solve the busy wait in the N2561 motivating example.
I have not found the use of wait in the example ...
-- 2) Second - Lazy Futures.
This is something I picked up from the Oz language future implementation, and have found very useful. Could you add the reference? A Lazy Future essentially flags itself when it is "needed" - i.e., when someone is blocked on f.wait() or f.get(), or has explicitly called f.set_is_needed(). This allows, for example, a task to only process _IF_ the result is actually needed.
Again, see my library doc Tutorial where I show how to easily create a "Lazy Job Queue". Permits nice Memoization patterns as well.
Lazy Futures are also needed for "Lazy Future Streams". A Stream is the primary means of inter-task communication in future-related academic languages. It permits producer/consumer patterns and one-to-many channels. A Lazy stream allows the producer task to produce new items only as fast as his fastest consumer needs them (See Ian T. Foster's work again - or my test_future.hpp unit test code).
Note I provide an easy-to-use future stream library with std iterator interface.
I also provide a future::cancel(), which has been discussed in the other posts and which I'm not terribly attached to.
IMO, the fact that cancel cannot really ensure the function is stopped makes this operation not very safe. But maybe it is useful in some cases ...
I really like seeing the N2561 C++ proposal uses a split future/promise. Broken_promise exceptions have saved me many times in real-world applications. And I like that you aren't using the implicit type conversion (which has bitten me many times in real-world applications! I only use my future::get() method now).
(I still provide implicit conversion, but would take it out very fast if others are no longer attached to it).
I have just a question: why are there three ways to get the value from a future, f, f() and f.get()? I don't like the implicit conversion too much. N2561 has removed the implicit conversion only from unique_future but preserved it for shared_future. Can someone explain why?

template <>
class unique_future<void> {
public:
    // ...
    void move(); // <<<<<<<<<
    // ...
};

template <typename R>
class shared_future {
public:
    // ...
    // retrieving the value
    operator R const & () const; // <<<<<<<<<
    R const & get() const;
    // ...
};

If this implicit conversion is definitively removed, the operators || and && will no longer need the op function, which is quite odd. Best _____________________ Vicente Juan Botet Escriba

On Sunday 13 April 2008 22:06, vicente.botet wrote:
1) First - future<T>::add_callback(f).
add_callback is a hook called when the future is fulfilled. End users probably shouldn't have to touch it, but framework authors (who write schedulers, asio, co-routines, etc) will DEFINITELY need it.
I agree with Braddock, although in libpoet I chose to make poet::future depend on thread_safe_signals and so used signals/slots instead of rolling yet another callback mechanism. The schedulers in libpoet rely on the signals coming from futures to know when one of the method requests in their queue has become ready so they can wake up and process it.
add_callback() enables the following:
-Future Operators ((f1 && f2) || (f3 && f2) etc) - With add_callback, custom future_operators can be written by users.
From the new documentation these operators are already provided by your library. I don't like too much the overloading of the && and || operators when they don't have logical semantics, but this is only a question of style
I too dislike the operator|| and && overloading. Hopefully, these overloads are only enabled when the user deliberately includes a separate header for them, making them optional. -- Frank

Braddock Gaskill <braddock@braddock.com> writes:
On Thu, 10 Apr 2008 21:53:33 +0200, vicente.botet wrote:
How do your library positions with respect to the standard proposals? N2561 Anthony Williams: An Asynchronous Future Value
I've now taken a good look at the recent C++0X N2561 "An Asynchronous Future Value" proposal.
Thanks for your comments. I'll try and incorporate them in an updated paper. Anthony

I haven't downloaded and compared your and Anthony's implementations yet, so forgive me if I misunderstand anything. Braddock Gaskill-2 wrote:
1) First - future<T>::add_callback(f).
add_callback is a hook called when the future is fulfilled. End users probably shouldn't have to touch it, but framework authors (who write schedulers, asio, co-routines, etc) will DEFINITELY need it.
add_callback() enables the following:
-Future Operators ((f1 && f2) || (f3 && f2) etc) - With add_callback, custom future_operators can be written by users.
Isn't it more natural to consider the promise object a signal and the future object a slot? If we exposed a mechanism to wait for multiple promises we could implement promise operators instead. ((p1 && p2) || (p3 && p2)) would create a new promise which internally listens to multiple promises. A binary future which takes two promises and a binary operator would be enough for this example. For efficiency reasons we'd probably need some mechanism which can add arbitrarily many promises at runtime. Braddock Gaskill-2 wrote:
2) Second - Lazy Futures.
This would correspond to lazy promises. Best Regards, Johan

I've downloaded and started looking at your implementation. Well actually, I already use a part of it in my job :) Please disregard my previous reply - I realize now we don't want to require users to have access to the promises to create compound futures. Braddock Gaskill-2 wrote:
I'm disregarding the future C++ move semantics, of course. We need a future<C++0x>::wait() for that... :)
unique_future<T> is an interesting concept. Still wrapping my head around that.
Still, boost::future might be too general a name if we believe C++0x will provide both a reference-semantics and a move-semantics future object. Braddock Gaskill-2 wrote:
My base classes offer two features which N2561 does not (and which I would really hope to see in any C++0X standard).
1) First - future<T>::add_callback(f).
Exposing this in the interface poses two problems:

1. If any of the callbacks raises an exception it will be forwarded to the promise-fulfilling thread.
2. remove_callback cannot be implemented "instantly" in a deadlock-safe manner. User code which returns from remove_callback() will still need to be prepared for callbacks.

I agree with you that the functionality is _very_ useful. Have you considered implementing a future-intrusive mechanism to wait for multiple futures instead? This way a separate waiting thread would be needed - but isn't that the desired use case? You probably need to be able to wait until either any or all of a dynamically created group of futures are ready. Future return value composition, for instance operators && and ||, might be nice to support but will probably require a great deal of thought. Braddock Gaskill-2 wrote:
2) Second - Lazy Futures.
This is something I picked up from the Oz language future implementation, and have found very useful. A Lazy Future essentially flags itself when it is "needed" - ie, when someone is blocked on a f.wait(), and f.get(), or has explicitly called "f.set_is_needed()". This allows, for example, a task to only process _IF_ the result is actually needed.
Do we really want to use the future value as a parallel programming primitive? I haven't read about lazy future streams yet but my initial instinct is a fear of making the future object the solution to everything. Introducing "is_needed" couples future-listeners and promise-fulfillers. If I had a const promise reference I'd expect to be able to create a temporary future, query it, and then destroy it. With lazy futures, the lifetime of futures starts to matter in implicit ways which might not be obvious to the "future using part" of the code. Analogously, I think it would be strange if a broadcaster chose to not do something because nobody was listening, or if an observable in the observer-observable pattern changed its behaviour if it wasn't observed. Still, the concept might be transparent and useful enough. I need to read up on future streams and other applications some more. Best Regards, Johan
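For concreteness, the lazy-future behaviour being debated can be sketched as a single-threaded toy (illustrative names only; in the real design a task scheduler would notice is_needed() and run the producer, rather than get() running it inline):

```cpp
#include <functional>

// Toy lazy future: the producer runs only once the value is actually
// demanded, and the result is memoized afterwards.
class lazy_future {
public:
    explicit lazy_future(std::function<int()> producer)
        : producer_(std::move(producer)) {}

    bool is_needed() const { return needed_; }
    void set_is_needed() { needed_ = true; }

    int get() {
        set_is_needed();          // blocking on the value marks it needed
        if (!computed_) {
            value_ = producer_(); // a real scheduler would do this step
            computed_ = true;     // asynchronously after seeing is_needed()
        }
        return value_;
    }

private:
    std::function<int()> producer_;
    bool needed_ = false;
    bool computed_ = false;
    int value_ = 0;
};
```

This also illustrates Johan's concern: whether the producer ever runs depends on whether some consumer demanded the value, coupling the two sides.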

I have some more questions related to the callback support.

1. If thread A is blocked waiting for a future and thread B puts a value into the promise, is A unblocked right away, or do the callbacks complete first? In other words, is A guaranteed to observe the side effects of the callbacks?

2. What is the motivation for allowing more than one callback? This turns the promise into a signal. Usually we want to keep components separate and composable. Can't we just allow set_ready_handler and F* get_ready_handler<F> (a la function<>::target)? The client can then install a signal.

On Sun, 13 Apr 2008 15:54:08 +0300, Peter Dimov wrote:
I have some more questions related to the callback support.
1. If thread A is blocked waiting for a future and thread B puts a value into the promise, is A unblocked right away, or do the callbacks complete first? In other words, is A guaranteed to observe the side effects of the callbacks?
Hi Peter, No, someone waiting on the future is not guaranteed to see callback side-effects. All listeners are notified and the future_impl mutex unlocked before the callbacks are called.
2. What is the motivation for allowing more than one callback? This turns the promise into a signal.
Hehe, I go back and forth on one or many callbacks every time I revisit the design - last time being last week. My justification is that add_callback() is really needed for authors of other frameworks to hook into things (see prior post on guards and operators). My fear with only one callback is that, for example, ASIO adds future support, uses add_the_callback internally, and later a user of ASIO uses add_the_callback for his own purposes and breaks ASIO. I have considered making add_callback a friend function in a separate future_ext.hpp header just to distance it from standard future<> usage. It is really only necessary to make the library extensible. Braddock Gaskill

Hello Gaskill, From: "Braddock Gaskill" <braddock@braddock.com> To: <boost@lists.boost.org> Sent: Monday, April 14, 2008 12:55 AM Subject: Re: [boost] Review Request: future library (Gaskill version) <snip>
2. What is the motivation for allowing more than one callback? This turns the promise into a signal.
Hehe, I go back and forth on one or many callbacks every time I revisit the design - last time being last week.
My justification is that add_callback() is really needed for authors of other frameworks to hook into things (see prior post on guards and operators). My fear with only one callback is that, for example, ASIO adds future support, uses add_the_callback internally, and later a user of ASIO uses add_the_callback for his own purposes and breaks ASIO.
I have considered making add_callback a friend function in a separate future_ext.hpp header just to distance it from standard future<> usage. It is really only necessary to make the library extensible.
I usually put these kinds of interfaces in a backdoor class which is a friend class. This pattern was already used by the thread class in order to separate the safe interface (scoped_lock) from the unsafe one (lock/unlock), but that backdoor was in a detail namespace and not in the public interface. The new thread library no longer contains this indirection. The user of the library is not aware of this interface, but the authors of other libraries will use this backdoor class knowing that the door it opens is a little bit more risky or unsafe and needs careful usage.

template<typename R>
class future {
    // ...
private:
    template <typename T> friend class future_backdoor;
    callback_reference add_callback(const boost::function<void (void)> &f);
    void remove_callback(callback_reference &ref);
};

// future_backdoor.hpp
// ...
template<typename R>
class future_backdoor {
    future<R>& fut_;
public:
    future_backdoor(future<R>& fut) : fut_(fut) {}
    callback_reference add_callback(const boost::function<void (void)> &f)
    { return fut_.add_callback(f); }
    void remove_callback(callback_reference &ref)
    { fut_.remove_callback(ref); }
};

The aware user could use this as follows:

future<T> fut;
//...
future_backdoor<T> bd_fut(fut);
bd_fut.add_callback(f);

or directly:

future_backdoor<T>(fut).add_callback(f);

future_backdoor behaves like an unsafe cast, a little bit like const_cast, for example. What do you think? _____________________ Vicente Juan Botet Escriba

Some minor modifications to the backdoor friend declaration ----- Original Message ----- From: "vicente.botet" <vicente.botet@wanadoo.fr> To: <boost@lists.boost.org> Sent: Monday, April 14, 2008 2:56 AM Subject: Re: [boost] Review Request: future library (Gaskill version)
Hello Gaskill,
From: "Braddock Gaskill" <braddock@braddock.com> To: <boost@lists.boost.org> Sent: Monday, April 14, 2008 12:55 AM Subject: Re: [boost] Review Request: future library (Gaskill version)
<snip>
2. What is the motivation for allowing more than one callback? This turns the promise into a signal.
Hehe, I go back and forth on one or many callbacks every time I revisit the design - last time being last week.
My justification is that add_callback() is really needed for authors of other frameworks to hook into things (see prior post on guards and operators). My fear with only one callback is that, for example, ASIO adds future support, uses add_the_callback internally, and later a user of ASIO uses add_the_callback for his own purposes and breaks ASIO.
I have considered making add_callback a friend function in a separate future_ext.hpp header just to distance it from standard future<> usage. It is really only necessary to make the library extensible.
I used to put these kinds of interfaces in a backdoor class which is a friend class. This pattern was already used by the thread class to separate the safe interface (scoped_lock) from the unsafe one (lock/unlock), but that backdoor lived in a detail namespace and not in the public interface. The new thread library no longer contains this indirection.
The ordinary user of the library is not aware of this interface, but the author of another library can use this backdoor class, knowing that the door it opens is a little more risky or unsafe and needs careful usage.
template<typename R>
class future {
    // ...
    typedef future_backdoor<R> backdoor;
private:
    friend class backdoor;
    callback_reference add_callback(const boost::function<void (void)> &f);
    void remove_callback(callback_reference &ref);
};
// future_backdoor.hpp
// ...
template<typename R>
class future_backdoor {
    future<R>& fut_;
public:
    future_backdoor(future<R>& fut) : fut_(fut) {}
    callback_reference add_callback(const boost::function<void (void)> &f)
    { return fut_.add_callback(f); }
    void remove_callback(callback_reference &ref)
    { return fut_.remove_callback(ref); }
};
The aware user could use this as follows:
future<T> fut;
//...
future_backdoor<T> bd_fut(fut);
or future<T>::backdoor bd_fut(fut);
bd_fut.add_callback(f);
or directly
future_backdoor<T>(fut).add_callback(f);
or future<T>::backdoor(fut).add_callback(f);
future_backdoor behaves like an unsafe cast, a little like const_cast, for example.
What do you think?
--------------------------- Vicente Juan Botet Escriba

I have added a new API section to my future library documentation. The new section documents the library class and method prototypes. http://braddock.com/~braddock/future/index.html#api -braddock

Hello, is it correct that user-defined exceptions thrown by the deferred function object will be rethrown as std::runtime_error (if derived from std::exception) or std::bad_exception? If yes, what about specifying an mpl::vector with user-defined exception types (passing it to future_wrapper etc.)? regards, oliver
I wanted to formally request a review of my Futures library. Latest Version: http://braddock.com/~braddock/future/
The library has matured greatly over the past year. It has been heavily used as a key component in a mid-sized commercial application on both Linux and Windows. Extensive unit tests and (just now) documentation have been written - in fact there are more than twice as many lines of test code and documentation than are in the library proper.
The library incorporates many ideas from this list, from other prospective submissions, other languages, and from academic papers.
The library does not currently use jamfiles or boostdoc (I tried I really did). It is a header-only library. The documentation is currently straight HTML, and should be very easy to translate to boostdoc if the submission looks good.
NOTE - to avoid confusion, note that there is an older 2005 boost.future candidate in the vault by Thorsten Schuett that is not related to mine (although it was studied early on).
Thanks, Braddock Gaskill braddock@braddock.com

Sorry - I didn't read your documentation carefully. As you say in the docs, 'However, arbitrary user-defined exception types cannot be supported.' What about specifying an mpl::vector with arbitrary user-defined exception types? I've been using your future library since last year with this little modification (at least for me it works). Oliver

template< typename R, typename V = mpl::vector<> >
class future_wrapper {
public:
    future_wrapper(const boost::function<R (void)> &fn, const promise<R> &ft)
        : fn_(fn), ft_(ft) {} // stores fn and ft

    void operator()() throw() { // executes fn() and places the outcome into ft
        typedef typename boost::mpl::fold<
            V,
            detail::exec_function,
            detail::catch_exception< boost::mpl::_1, boost::mpl::_2 >
        >::type exec_type;
        detail::catch_ellipsis< exec_type >::exec( fn_, ft_);
    }

private:
    boost::function<R (void)> fn_;
    promise<R> ft_;
};

struct exec_function {
    template< typename R >
    static void exec( boost::function< R ( void) > const& fn, boost::promise< R > & ft)
    { ft.set( fn() ); }

    static void exec( boost::function< void ( void) > const& fn, boost::promise< void > & ft)
    { fn(); ft.set(); }
};

template< typename P, typename E >
struct catch_exception {
    template< typename R >
    static void exec( boost::function< R ( void) > const& fn, boost::promise< R > & ft)
    { try { P::exec( fn, ft); } catch ( E const& e) { ft.set_exception( e); } }

    static void exec( boost::function< void ( void) > const& fn, boost::promise< void > & ft)
    { try { P::exec( fn, ft); } catch ( E const& e) { ft.set_exception( e); } }
};

template< typename P >
struct catch_ellipsis {
    template< typename R >
    static void exec( boost::function< R ( void) > const& fn, boost::promise< R > & ft)
    { try { P::exec( fn, ft); } catch (...) { ft.set_exception( boost::detail::current_exception() ); } }

    static void exec( boost::function< void ( void) > const& fn, boost::promise< void > & ft)
    { try { P::exec( fn, ft); } catch (...) { ft.set_exception( boost::detail::current_exception() ); } }
};

Hi Oliver, This is very elegant in the context of the future library. I was thinking about some kind of MPL generation for the exception_ptr library, but I didn't think to wrap the call. ----- Original Message ----- From: "Kowalke Oliver (QD IT PA AS)" <Oliver.Kowalke@qimonda.com> To: <boost@lists.boost.org> Sent: Monday, April 14, 2008 9:26 AM Subject: Re: [boost] Review Request: future library (Gaskill version)
I see only one problem: the interface of future_wrapper is changed. Maybe something like

template< typename R, typename V = BOOST_EXCEPTION_PTR_USER_EXCEPTION_VECTOR >
class future_wrapper ...

would allow the interface to be preserved. Another problem is that we would need to do the same for each class using the exception_ptr library. It would be nice if exec_function, catch_exception and catch_ellipsis did not depend directly on promise<R>. I don't know if it is possible to make a wrap_call template with a template template parameter Promise and have these three classes nested.

template <template<class> class PromiseTmpl>
struct wrap_call {
    struct exec_function {
        template< typename R, class Policy = promise_policy< PromiseTmpl< R > > >
        static void exec( boost::function< R ( void) > const& fn, PromiseTmpl< R > & ft)
        { Policy::set( ft, fn() ); }
        // ...
    };

    template < typename V = BOOST_EXCEPTION_PTR_USER_EXCEPTION_VECTOR >
    struct call {
        typedef typename boost::mpl::fold<
            V,
            typename wrap_call<PromiseTmpl>::exec_function,
            typename wrap_call<PromiseTmpl>::template catch_exception< boost::mpl::_1, boost::mpl::_2 >
        >::type exec_type;
        typedef typename wrap_call<PromiseTmpl>::template catch_ellipsis< exec_type > type;
    };
};

A policy class would be needed to take care of the promise concept interface:

template <class Promise>
struct promise_policy {
    template <class R>
    static void set(Promise& p, const R &v) { p.set(v); }
    static void set_exception(Promise& p, const exception_ptr &e) { p.set_exception(e); }
};

And use it as follows:

template< typename R, typename V = BOOST_EXCEPTION_PTR_USER_EXCEPTION_VECTOR >
class future_wrapper {
    // ...
    future_wrapper(const boost::function<R (void)> &fn, const promise<R> &ft)
        : fn_(fn), ft_(ft) {} // stores fn and ft
    void operator()() throw() { // executes fn() and places the outcome into ft
        exception_ptr::wrap_call<promise>::call<V>::exec( fn_, ft_);
    }
    // ...

Do you think that this could work? Best
_____________________
Vicente Juan Botet Escriba

Hello Vicente,
This is very elegant on the context of the future library.
thank you :))
It will be nice if the exec_function, the catch_exception and the catch_ellipsis do not depend directly on the promise<R>.
If desired, I would suggest that exception_ptr handle arbitrary user-defined exception types. The function detail::_exp_current_exception (I'm referring to exception_ptr_impl.hpp from Braddock's future library) must be a template which gets the arbitrary user-defined exception types as template arguments. I append a first shot of the code (modified versions of future and exception_ptr):

#include <iostream>
#include <cstdlib>
#include <stdexcept>

#include <boost/bind.hpp>
#include <boost/mpl/vector.hpp>
#include <boost/thread.hpp>

#include <future.hpp>

struct X {
    std::string msg;
    X( std::string const& msg_) : msg( msg_) {}
};

int add( int a, int b) {
    throw X("abc");
    return a + b;
}

void execute( boost::function< void() > fn) { fn(); }

int main( int argc, char *argv[]) {
    try {
        boost::promise< int > p;
        boost::future_wrapper< int, boost::mpl::vector< X > > wrapper(
            boost::bind( add, 11, 13), p);
        boost::future< int > f( p);
        boost::thread t( boost::bind( & execute, wrapper) );
        std::cout << "add = " << f.get() << std::endl;
        return EXIT_SUCCESS;
    }
    catch ( X const& x) { std::cerr << x.msg << std::endl; }
    catch ( std::exception const& e) { std::cerr << e.what() << std::endl; }
    catch ( ... ) { std::cerr << "unhandled exception" << std::endl; }
    return EXIT_FAILURE;
}

The code works at least with gcc 4.2.3. Oliver

On Mon, 14 Apr 2008 09:26:07 +0200, Kowalke Oliver (QD IT PA AS) wrote:
What about specifying an mpl::vector with arbitrary user-defined exception types? I've been using your future library since last year with this little modification (at least for me it works).
I really want to improve the handling of arbitrary user-defined exceptions, and this looks at first glance like just the ticket! I'll definitely take a close look at this (no time to refresh my mpl tonight). -braddock

Hello Braddock, where does the code below block? I believe in boost::bind(Add, fa, 3), where fa is implicitly converted to an int - right? So JobQueue::schedule is executed only once fa has been assigned a value?!

future<int> fb = q.schedule<int>(boost::bind(Add, fa, 3), fa); // fa + 3

Oliver

On Mon, 14 Apr 2008 09:58:52 +0200, Kowalke Oliver (QD IT PA AS) wrote:
where does the code below block? I believe in boost::bind(Add, fa, 3), where fa is implicitly converted to an int - right? So JobQueue::schedule is executed only once fa has been assigned a value?!
future<int> fb = q.schedule<int>(boost::bind(Add, fa, 3), fa); // fa + 3
Hi Oliver, It doesn't block. This is code from the "Guards" section of my documentation - the schedule() method is modified to accept a future<T> as the second argument for use as a guard. fa is not implicitly converted to an int (and does not block), because the second schedule() arg is in the context of a future<int>. fa is also not implicitly converted to an int within the bind, because the bind is templated such that it gladly accepts fa as a future<int> without forcing it into an int context until the function is called.

The guard magic comes from the change we made to the schedule() method just before that example - the job is not REALLY scheduled until fa (the guard) is fulfilled (via a callback). The following code does not deadlock, for example:

promise<int> p9;
future<int> f9(p9);
future<int> f8 = q.schedule<int>(boost::bind(Add, f9, 3), f9); // no block
p9.set(99); // we still get here without a deadlock
assert(f8.get() == 102);

future/example1.cpp contains all of the example code from the documentation if you actually want to build it.

The lack of clarity is certainly yet another argument for getting rid of the implicit conversion method, though. If no one speaks up in its defense soon, I'm likely to axe implicit conversion. -braddock

Is there a schedule for a review of this library?

Hi Braddock, I have received your request and have added the Futures library to the review queue. Cheers, ron On Apr 9, 2008, at 9:39 PM, Braddock Gaskill wrote:

Hi all, Having just uploaded my prototype of N2561 futures to my website, I would like it to be considered for review alongside Braddock Gaskill's version. It is not as rich in features as Braddock's version, but this is the current proposal before the C++ committee. I intend to update the proposal in time for the next committee mailing, which is 16th May, so would be grateful if people could comment before then, even if we don't get a formal review. It currently requires the boost trunk. http://www.justsoftwaresolutions.co.uk/files/n2561_future.hpp http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2561.html Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

Hi, why don't we need to protect the get_future() function from multiple-thread access?

// Result retrieval
unique_future<R> get_future()
{
    if(!future)
    {
        throw future_moved();
    }
    if(future_obtained)
    {
        throw future_already_retrieved();
    }
    future_obtained=true;
    return unique_future<R>(future);
}

Best
_____________________
Vicente Juan Botet Escriba
----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Monday, May 05, 2008 10:41 AM Subject: [boost] Review Request: future library (N2561/Williams version)
Hi all,
Having just uploaded my prototype of N2561 futures to my website, I would like it to be considered for review alongside Braddock Gaskill's version. It is not as rich in features as Braddock's version, but this is the current proposal before the C++ committee. I intend to update the proposal in time for the next committee mailing, which is 16th May, so would be grateful if people could comment before then, even if we don't get a formal review. It currently requires the boost trunk.
http://www.justsoftwaresolutions.co.uk/files/n2561_future.hpp http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2561.html
Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
why don't we need to protect the get_future() function from multiple-thread access?
By design you can only call this function once, so if multiple threads called it concurrently and that was safe, only one would get the future, and the others would get an exception. The user should therefore use appropriate synchronization to ensure correct results anyway, so making this call thread-safe would not be of benefit. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Tuesday, May 06, 2008 8:42 AM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
why don't we need to protect the get_future() function from multiple-thread access?
By design you can only call this function once, so if multiple threads called it concurrently and that was safe, only one would get the future, and the other would get an exception. The user should therefore use appropriate synchronization to ensure correct results anyway, so making this call thread-safe would not be of benefit.
I don't understand the need for this function. Could you show a use case for the promise::get_future() function? _____________________ Vicente Juan Botet Escriba

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Tuesday, May 06, 2008 8:42 AM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
why don't we need to protect the get_future() function from multiple-thread access?
By design you can only call this function once, so if multiple threads called it concurrently and that was safe, only one would get the future, and the other would get an exception. The user should therefore use appropriate synchronization to ensure correct results anyway, so making this call thread-safe would not be of benefit.
I don't understand the need for this function. Could you show a use case for the promise::get_future() function?
promise::get_future() is the only way to get a future from a promise. Since the whole point of using promise is to get the future, it's rather pointless without it. Were you getting at something else? Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Thursday, May 08, 2008 2:35 PM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Tuesday, May 06, 2008 8:42 AM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
why don't we need to protect the get_future() function from multiple-thread access?
By design you can only call this function once, so if multiple threads called it concurrently and that was safe, only one would get the future, and the other would get an exception. The user should therefore use appropriate synchronization to ensure correct results anyway, so making this call thread-safe would not be of benefit.
I don't understand the need for this function. Could you show a use case for the promise::get_future() function?
promise::get_future() is the only way to get a future from a promise. Since the whole point of using promise is to get the future, it's rather pointless without it.
Why is this not an internal feature?
Were you getting at something else?
Sorry, but I thought that it was up to the promise to communicate with its future when the user does a set_value or set_exception or whatever. When will the user need the future of a promise? Vicente

vicente.botet:
I don't understand the need for this function. Could you show a use case for the promise::get_future() function?
promise::get_future() is the only way to get a future from a promise. Since the whole point of using promise is to get the future, it's rather pointless without it.
Why is this not an internal feature?
You have to have a way to create the initial future, but for the "one time" semantics one can have for example promise::promise( future& f ); instead of get_future. FWIW, Braddock has argued (see the archives) that it's convenient to be able to obtain a future from a promise, without the "one time" restriction. ("His" futures are "shared" though, as are "mine".) I argued that without this feature, a promise can detect that no futures are left and can cancel its task.

Hi all, I have updated my prototype futures library implementation in light of various comments received, and my own thoughts. The new version is available for download, again under the Boost Software License. It still needs to be compiled against the Boost Subversion trunk, as it uses the Boost Exception library, which is not available in an official Boost release. Sample usage can be seen in the test harness. The support for alternative allocators is still missing.

Changes

* I have removed the try_get/timed_get functions, as they can be replaced with a combination of wait() or timed_wait() and get(), and they don't work with unique_future<R&> or unique_future<void>.

* I've also removed the move() functions on unique_future. Instead, get() returns an rvalue reference to allow moving in those types with move support. Yes, if you call get() twice on a movable type then the second get() returns an empty shell of an object, but I don't really think that's a problem: if you want to call get() multiple times, use a shared_future. I've implemented this with both rvalue references and the Boost.Thread move emulation, so you can have a unique_future<boost::thread> if necessary. test_unique_future_for_move_only_udt() in test_futures.cpp shows this in action with a user-defined movable-only type X.

* Finally, I've added a set_wait_callback() function to both promise and packaged_task. This allows for lazy futures which don't actually run the operation to generate the value until the value is needed: no threading required. It also allows a thread pool to do task stealing if a pool thread waits for a task that hasn't started yet. The callbacks must be thread-safe, as they are potentially called from many waiting threads simultaneously.

At the moment, I've specified the callbacks as taking a non-const reference to the promise or packaged_task for which they are set, but I'm open to just making them be any callable function and leaving it up to the user to call bind() to do that.

I've left the wait operations as wait() and timed_wait(), but I've had a suggestion to use wait()/wait_for()/wait_until(), which I'm actively considering.

Please download it, try it out, and let me know what you think. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Sunday, May 11, 2008 11:53 AM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
Hi all,
I have updated my prototype futures library implementation in light of various comments received, and my own thoughts.
The new version is available for download, again under the Boost Software License. It still needs to be compiled against the Boost Subversion Trunk, as it uses the Boost Exception library, which is not available in an official boost release.
Hi, Sorry for the question, but where can we download the new version? I have followed the old link http://www.justsoftwaresolutions.co.uk/files/n2561_future.hpp and there has been no modification. Best Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com>
I have updated my prototype futures library implementation in light of various comments received, and my own thoughts.
The new version is available for download, again under the Boost Software License. It still needs to be compiled against the Boost Subversion Trunk, as it uses the Boost Exception library, which is not available in an official boost release.
Hi,
Sorry for the question, but where can we download the new version? I have followed the old link http://www.justsoftwaresolutions.co.uk/files/n2561_future.hpp and there is no modification.
Oops. It's http://www.justsoftwaresolutions.co.uk/files/n2561_futures_revised_20080511.... Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

Nice work! Let's hope we can all agree on a best interface - and get it standardized directly :) Anthony Williams-3 wrote:
* Finally, I've added a set_wait_callback() function to both promise and packaged_task. This allows for lazy-futures which don't actually run the operation to generate the value until the value is needed: no threading required. It also allows for a thread pool to do task stealing if a pool thread waits for a task that's not started yet. The callbacks must be thread-safe as they are potentially called from many waiting threads simultaneously. At the moment, I've specified the callbacks as taking a non-const reference to the promise or packaged_task for which they are set, but I'm open to just making them be any callable function, and leaving it up to the user to call bind() to do that.
If you haven't, please read my concerns about direct callbacks in http://www.nabble.com/Review-Request%3A-future-library-%28Gaskill-version%29... Anthony Williams-3 wrote:
I've left the wait operations as wait() and timed_wait(), but I've had a suggestion to use wait()/wait_for()/wait_until(), which I'm actively considering.
I like timed_wait because that's the same name as in condition_variable. wait_until could be confusing - one might think the thread will wait until the specified time regardless of whether the future is set or not. time_limited_wait or wait_max_time are further alternatives. Best Regards, Johan

Johan Torp <johan.torp@gmail.com> writes:
Nice work! Let's hope we can all agree on a best interface - and get it standardized directly :)
Yes, I hope so.
Anthony Williams-3 wrote:
* Finally, I've added a set_wait_callback() function to both promise and packaged_task. This allows for lazy-futures which don't actually run the operation to generate the value until the value is needed: no threading required. It also allows for a thread pool to do task stealing if a pool thread waits for a task that's not started yet. The callbacks must be thread-safe as they are potentially called from many waiting threads simultaneously. At the moment, I've specified the callbacks as taking a non-const reference to the promise or packaged_task for which they are set, but I'm open to just making them be any callable function, and leaving it up to the user to call bind() to do that.
If you haven't, please read my concerns about direct callbacks in http://www.nabble.com/Review-Request%3A-future-library-%28Gaskill-version%29...
I have read your comments. My primary reason for including this came from thread pools: if you know that the current thread (from the pool) is blocked on a future related to a task in the pool, you can move it up the queue, or maybe even invoke on the blocked thread. Braddock suggested lazy futures, and I think that's also an important use case. For one thing, it shows that futures are useful without thread pools: you can use them in single-threaded code.
Anthony Williams-3 wrote:
I've left the wait operations as wait() and timed_wait(), but I've had a suggestion to use wait()/wait_for()/wait_until(), which I'm actively considering.
I like timed_wait because that's the same name as in condition_variable. wait_until could be confusing - one might think the thread will wait until the specified time regardless of the future is set or not. time_limited_wait or wait_max_time are further alternatives.
This was alongside a suggestion that we change the names for condition_variable waits. The important part was the separation of timed_wait(duration) vs timed_wait(absolute_time) with distinct names, so it was clear which you were calling, and you wouldn't accidentally pass a duration when you meant a fixed time point. We could go for timed_wait_for() and timed_wait_until(), but they strike me as rather long-winded. Maybe that's a good thing ;-) Anthony

Anthony Williams-3 wrote:
I have read your comments. My primary reason for including this came from thread pools: if you know that the current thread (from the pool) is blocked on a future related to a task in the pool, you can move it up the queue, or maybe even invoke on the blocked thread. Braddock suggested lazy futures, and I think that's also an important use case. For one thing, it shows that futures are useful without thread pools: you can use them in single-threaded code.
I don't quite understand you. What is the "current" thread in a thread pool? If there are dependencies between tasks in a thread pool, shouldn't prioritizing be the task of an external scheduler - and solved before the tasks are initiated? I'd like to know your thoughts on what the thread pool should be and what problems it should solve, more specifically than what's explained in N2276.

I thought the most common use case for futures was the active object pattern. We should all try to agree on which use cases/design patterns/higher-level abstractions are most important and which we want to support. IMHO, this should be top priority for the "future ambition". Even though no higher-level abstractions built on futures will make it to C++0x or Boost anytime soon, it's important that the future interface needn't change to support them in the - future :)

To me, being able to wait for any or all of a number of futures seems like an important use case. I'd use it to implement "I'm waiting on the result of a number of time-consuming commands and queries". Maybe this is implemented better in another way - any ideas? Anthony Williams-3 wrote:
This was alongside a suggestion that we change the names for condition_variable waits. The important part was the separation of timed_wait(duration) vs timed_wait(absolute_time) with distinct names, so it was clear which you were calling, and you wouldn't accidentally pass a duration when you meant a fixed time point.
We could go for timed_wait_for() and timed_wait_until(), but they strike me as rather long-winded. Maybe that's a good thing ;-)
Yes, long names are a good thing :) duration_timed_wait/absolute_timed_wait are other alternatives. duration and absolute_time will have two different types, right? If so, I don't think they should have different function names because:
- IMO it doesn't increase code readability to repeat type information in symbol names
- It reduces genericity. Function overloading can be used to implement LSP for generic functions.

template<class TimeType>
void foo_algorithm(future<void>& f, TimeType t)
{
    ... do stuff ...
    f.timed_wait(t); // LSP for TimeType, more generic
}

I vote for 2 x time_limited_wait. Best Regards, Johan

Johan Torp <johan.torp@gmail.com> writes:
Anthony Williams-3 wrote:
I have read your comments. My primary reason for including this came from thread pools: if you know that the current thread (from the pool) is blocked on a future related to a task in the pool, you can move it up the queue, or maybe even invoke on the blocked thread. Braddock suggested lazy futures, and I think that's also an important use case. For one thing, it shows that futures are useful without thread pools: you can use them in single-threaded code.
I don't quite understand you. What is the "current" thread in a thread pool?
In this case I mean the thread that called some_future.wait().
If there are dependencies between tasks in a thread-pool, shouldn't prioritizing be the task of an external scheduler - and solved before the tasks are initiated? I'd like to know your thoughts on what the thread pool should be and what problems it should solve more specifically than what's explained in N2276.
Suppose you're using a thread pool to provide a parallel version of quick-sort. The easiest way to do that is to partition the values into those less than and those not-less-than the chosen pivot (as you would for a single-threaded version), submit tasks to the thread pool to sort each half, and then wait for them to finish. This doubles the number of tasks with each level of recursion. At some point the number of tasks will exceed the number of threads in the pool, in which case you have some tasks waiting on others that have been submitted to the pool but not yet scheduled. If you can arrange for the implementation to identify this scenario as it happens, and thus schedule the task being waited for to run on the waiting thread, you can achieve greater thread reuse within the pool, and reduce the number of blocked threads. One way to do this is have the pool use packaged_tasks internally, and set a wait callback which is invoked when a thread waits on a future from a pool task. When the callback is invoked by the waiting thread (as part of the call to wait()), if that waiting thread is a pool thread, it can proceed as above. If not, then it might arrange to schedule the waited-for task next, or just do nothing: the task will get its turn in the end.
I thought the most common use case for futures was the active object pattern.
That's one possible use. I wouldn't have pegged it as "most common" unless you're considering all cases of a background thread performing operations for a foreground thread as uses of active object.
We should all try to agree what use cases/design patterns/higher level abstractions are most important and which we want to support. IMHO, this should be top priority for the "future ambition". Even though no higher level abstractions built on futures will make it to C++0x or boost anytime soon, it's important that the future interface needn't change to support them in the - future :)
I agree we should think about the higher-level abstractions we want to support, to ensure the "futures" abstraction provides the necessary baseline. I'd like higher-level stuff to be built on top of C++0x futures without having to replace them with a different low-level abstraction that provides a similar feature set.
To me, being able to wait for any or all of a number of futures seems like an important use case. I'd use it to implement "i'm waiting on the result of a number of time-consuming commands and queries". Maybe this is implemented better in another way - any ideas?
Waiting for one of a number of tasks is an important use case. I'm not sure how best to handle it. I've seen people talk about "future_or" and "f1 || f2", but I'm not sure if that's definitely the way to go.
Anthony Williams-3 wrote:
This was alongside a suggestion that we change the names for condition_variable waits. The important part was the separation of timed_wait(duration) vs timed_wait(absolute_time) with distinct names, so it was clear which you were calling, and you wouldn't accidentally pass a duration when you meant a fixed time point.
We could go for timed_wait_for() and timed_wait_until(), but they strike me as rather long-winded. Maybe that's a good thing ;-)
Yes, long names is a good thing :) duration_timed_wait/absolute_timed_wait are other alternatives. duration and absolute_time will have two different types, right? If so I don't think they should have different function names because: - IMO it doesn't increase code readability to repeat type information in symbol names - It reduces genericity. Function overloading can be used to implement LSP for generic functions.
template<class TimeType>
void foo_algorithm(future<void>& f, TimeType t)
{
    ... do stuff ...
    f.timed_wait(t); // LSP for TimeType, more generic
}
I vote for 2 x time_limited_wait.
duration and absolute_time will have distinct types. In Boost at the moment, for boost::condition_variable, duration is anything that implements the Boost Date-Time duration concept, such as boost::posix_time::milliseconds, and absolute_time is boost::system_time. However, even though distinct overloads will be called, this is not necessarily desirable, as the semantics are distinct. The members of the LWG are discussing renaming condition_variable::timed_wait to have distinct names for the duration and absolute time overloads in order to ensure that the user has absolute clarity of intent: wait_for(absolute_time) or wait_until(duration) won't compile. Anthony

Anthony Williams:
One way to do this is have the pool use packaged_tasks internally, and set a wait callback which is invoked when a thread waits on a future from a pool task. When the callback is invoked by the waiting thread (as part of the call to wait()), if that waiting thread is a pool thread, it can proceed as above.
It actually doesn't matter whether the waiting thread is a pool thread or not. If the task hasn't been scheduled, it can be "stolen" and executed synchronously from within the wait().

"Peter Dimov" <pdimov@pdimov.com> writes:
Anthony Williams:
One way to do this is have the pool use packaged_tasks internally, and set a wait callback which is invoked when a thread waits on a future from a pool task. When the callback is invoked by the waiting thread (as part of the call to wait()), if that waiting thread is a pool thread, it can proceed as above.
It actually doesn't matter whether the waiting thread is a pool thread or not. If the task hasn't been scheduled, it can be "stolen" and executed synchronously from within the wait().
Yes, you could do that. I'm not convinced it's necessarily a good idea, though. Different threads potentially have different priorities or access permissions. Also, thread interruption will behave differently: in my prototype implementation, future::wait() is an interruption point. If a call to wait() stole a task from a thread pool, interrupting the thread would instead interrupt the task, which is not necessarily what was intended, as this may have consequences for other threads waiting on that task. Anthony

Anthony Williams-3 wrote:
Suppose you're using a thread pool to provide a parallel version of quick-sort. The easiest way to do that is to partition the values into those less than and those not-less-than the chosen pivot (as you would for a single-threaded version), and submit tasks to the thread pool to sort each half and then waits for them to finish. This doubles the number of tasks with each level of recursion. At some point the number of tasks will exceed the number of threads in the pool, in which case you have some tasks waiting on others that have been submitted to the pool but not yet scheduled.
If you can arrange for the implementation to identify this scenario as it happens, and thus schedule the task being waited for to run on the waiting thread, you can achieve greater thread reuse within the pool, and reduce the number of blocked threads.
One way to do this is have the pool use packaged_tasks internally, and set a wait callback which is invoked when a thread waits on a future from a pool task. When the callback is invoked by the waiting thread (as part of the call to wait()), if that waiting thread is a pool thread, it can proceed as above. If not, then it might arrange to schedule the waited-for task next, or just do nothing: the task will get its turn in the end.
I misunderstood - I thought set_wait_callback was Gaskill's proposed callback for when a future becomes ready. If I understand you correctly, the use case is as follows; each future correlates to an unfinished task.

A. When a worker thread calls wait(), instead of/prior to blocking, it might perform another task.
B. By detecting when client/non-worker threads are waiting, we can use that information to serve them faster.

A seems very dangerous. The worker thread has a stack associated with the task it is carrying out. If the thread crashes or an exception is thrown, only the current task is affected. If it should carry out another task [task2] on top of the first task [task1], task2 crashing would destroy task1 too. Also, we could get a problem with too-deep stack nesting if the same thread starts working on more and more tasks.

B might be useful. It can't detect waiting by periodic is_ready polling - which with today's interface is needed to wait for more than one future. Rather than implicitly trying to guess client threads' needs, wouldn't it be better to either:
- Let the thread pool be a predictable FIFO queue. Trust client code to do the scheduling and not submit too many tasks at the same time.
- Open up the pool interface and let users control some kind of prioritization or scheduling.

I'm probably misunderstanding something here. Anthony Williams-3 wrote:
Waiting for one of a number of tasks is an important use case. I'm not sure how best to handle it. I've seen people talk about "future_or" and "f1 || f2", but I'm not sure if that's definitely the way to go.
I think we should allow waiting on a dynamically changing number of futures. Operators are poorly suited for this because they require function recursion. Maybe some kind of future container with wait_all() and wait_any() functions. My biggest question is how this will map to condition_variables. Also, could this facility be some layer below futures which can be reused? Dependency deduction and lazy return value composition functionality - like operators - should probably be built on top of the dynamic waiting facility. Anthony Williams-3 wrote:
However, even though distinct overloads will be called, this is not necessarily desirable, as the semantics are distinct.
In what way are the semantics different? Anthony Williams-3 wrote:
The members of the LWG are discussing renaming condition_variable::timed_wait to have distinct names for the duration and absolute time overloads in order to ensure that the user has absolute clarity of intent: wait_for(absolute_time) or wait_until(duration) won't compile.
To be extra clear, I'll repeat myself: this will complicate writing parallel algorithms which work generically with any time type, for both library vendors and users. My 5 cents is still that 2 x time_limited_wait is clear and readable enough, but it's no strong opinion. For good or bad, you are forcing users to supply their intent twice - by both argument type and method name. Is this a general strategy for the standard library? Best Regards, Johan

Johan Torp <johan.torp@gmail.com> writes:
I misunderstood - I thought set_wait_callback was Gaskill's proposed callback for when a future becomes ready.
No problem.
If I understand you correctly this use case is as follows; Each future correlates to an unfinished task. A. When a worker thread calls wait(), instead of/prior to blocking it might perform another task B. By detecting when client/non-worker threads are waiting we can use that information to serve them faster.
Yes.
A seems very dangerous. The worker thread has a stack associated with the task it is carrying out. If the thread crashes or an exception is thrown, only the current task is affected. If it should carry out another task [task2] on top of the first task [task1], task2 crashing would destroy task1 too. Also, we could get a problem with too deep stack nesting if the same thread starts working on more and more tasks.
If the thread "crashes", you've got a serious bug: all bets are off. It doesn't matter whether that's the same thread that's performing another task or not. If the task throws an exception, I expect this to be swallowed by the task-launching code, and get stored in the future corresponding to that task. This therefore will not affect the original waiting task. Stack overflow is potentially a real problem. If a thread pool implementation decides to do such task nesting, it needs to ensure the stack is sufficiently large to handle it. For example, if you're going to allow N nested tasks, the stack for that pool thread ought to be N times larger than the stack for a single task.
B might be useful. It can't detect waiting by periodic is_ready-polling - which with todays interface is needed to wait for more than one future.
I would use timed_wait() calls when waiting for more than one future: doing a busy-wait with is_ready just consumes CPU time which would be better spent actually doing the work that will set the futures to ready, and timed_wait is more expressive than sleep:

void wait_for_either(jss::unique_future<int>& a, jss::unique_future<int>& b)
{
    if(a.is_ready() || b.is_ready())
    {
        return;
    }
    while(!a.timed_wait(boost::posix_time::milliseconds(1)) &&
          !b.timed_wait(boost::posix_time::milliseconds(1)));
}

This will trigger the callback.
Rather than implicitly trying to guess client threads' needs wouldn't it be better to either: - Let the thread-pool be a predictable FIFO queue. Trust client code to do the scheduling and not submit too many tasks at the same time.
That's not appropriate for situations where a task on the pool can submit more tasks to the same pool, as in my quicksort example.
- Open up the pool interface and let users control some kind of prioritization or scheduling
Allowing callbacks doesn't preclude this option.
I'm probably misunderstanding something here.
There are many ways of scheduling threads in a thread pool, not all of which are suitable for all circumstances. Providing set_wait_callback gives the writer of the thread pool flexibility to choose the most appropriate model for the circumstances he is trying to handle.
Anthony Williams-3 wrote:
Waiting for one of a number of tasks is an important use case. I'm not sure how best to handle it. I've seen people talk about "future_or" and "f1 || f2", but I'm not sure if that's definitely the way to go.
I think we should allow waiting on a dynamically changing number of futures. Operators are poorly suited for this because they require function recursion. Maybe some kind of future container with wait_all() and wait_any() functions. My biggest question is how this will map to condition_variables. Also, could this facility be some layer below futures which can be reused?
My wait_for_either above could easily be extended to a dynamic set, and to do wait_for_both instead. Condition variables are more complicated, since they don't inherently support wait-for-any or wait-for-all operations, and every wake from a timed_wait call removes the thread from the waitset for that cv. To make it work, you would have to supply all the predicates associated with each cv, and all the associated mutexes. I think I'd rather just use futures.
Dependency deduction and lazy return value composition functionality - like operators - should probably be built on top of the dynamic waiting facility.
It would be relatively simple to build an operator on top of wait_for_either().
Anthony Williams-3 wrote:
However, even though distinct overloads will be called, this is not necessarily desirable, as the semantics are distinct.
In what way are the semantics different?
timed_wait(duration) waits for a specified amount of time to elapse. timed_wait(absolute_time) waits until the clock reads the specified time. This can be important if you're using a cv in a loop:

boost::mutex m;
boost::condition_variable cv;
bool done=false;

template<typename Duration>
bool wait_with_background_processing(Duration d)
{
    boost::system_time timeout=boost::get_system_time()+d;
    boost::unique_lock<boost::mutex> lk(m);
    while(!done)
    {
        if(!cv.timed_wait(lk,d)) // oops, meant timeout
        {
            return false;
        }
        lk.unlock();
        do_background_processing();
        lk.lock();
    }
    return true;
}

This code will compile and run, but will wait up to the specified duration every time round the loop rather than waiting only the specified duration in total. By giving the duration and absolute-time overloads different names, you can avoid this bug.
Anthony Williams-3 wrote:
The members of the LWG are discussing renaming condition_variable::timed_wait to have distinct names for the duration and absolute time overloads in order to ensure that the user has absolute clarity of intent: wait_for(absolute_time) or wait_until(duration) won't compile.
To be extra clear, I'll repeat myself: this will complicate writing parallel algorithms which work generically with any time type, for both library vendors and users.
Generic code needs to know whether it's got a duration or an absolute time, so this is not an issue, IMHO.
My 5 cents is still that 2 x time_limited_wait is clear and readable enough but it's no strong opinion. For good or bad you are forcing users to supply their intent twice - by both argument type and method name. Is this a general strategy for the standard library?
This is an important strategy with condition variables, and it is probably sensible to do the same elsewhere in the standard library for consistency. Anthony

Anthony Williams-3 wrote:
If the thread "crashes", you've got a serious bug: all bets are off. It doesn't matter whether that's the same thread that's performing another task or not.
I agree that something is seriously wrong and that we perhaps don't need to handle things gracefully. But if the threading API allows us to detect "crashing" threads somehow, we could avoid spreading a thread-local problem to the whole process. The client thread could even be notified with a thread_crash exception set in the future. I haven't had time to read up on what possibilities the C++0x threading API will supply here, but I suppose you know. Maybe there isn't even a notion of a thread crashing without crashing the process. At the very least, I see value in not behaving worse than if the associated client thread had spawned its own worker thread. That is:

std::launch_in_pool(&crashing_function);

should not behave worse than

std::thread t(&crashing_function);

Anthony Williams-3 wrote:
B might be useful. It can't detect waiting by periodic is_ready-polling - which with todays interface is needed to wait for more than one future.
I would use timed_wait() calls when waiting for more than one future: doing a busy-wait with is_ready just consumes CPU time which would be better spent actually doing the work that will set the futures to ready, and timed_wait is more expressive than sleep:
void wait_for_either(jss::unique_future<int>& a, jss::unique_future<int>& b)
{
    if(a.is_ready() || b.is_ready())
    {
        return;
    }
    while(!a.timed_wait(boost::posix_time::milliseconds(1)) &&
          !b.timed_wait(boost::posix_time::milliseconds(1)));
}
It could as well have been implemented by:

while (!a.is_ready() && !b.is_ready())
{
    a.timed_wait(boost::posix_time::milliseconds(1));
}

You can't detect that b is needed here. I would not implement a dynamic wait by timed_waiting on every single future, one at a time. Rather, I would have done something like:

void wait_for_any(const vector<future<void>>& futures)
{
    while (1)
    {
        for (...f in futures...)
            if (f.is_ready())
                return;
        sleep(10ms);
    }
}

Anthony Williams-3 wrote:
- Let the thread-pool be a predictable FIFO queue. Trust client code to do the scheduling and not submit too many tasks at the same time.
That's not appropriate for situations where a task on the pool can submit more tasks to the same pool, as in my quicksort example.
Ah - I knew I missed something. Agreed, child tasks should be prioritized. But that mechanism could be kept internal in the thread pool. Anthony Williams-3 wrote:
My wait_for_either above could easily be extended to a dynamic set, and to do wait_for_both instead.
Still you don't really wait for more than one future at a time. Both your suggestion and mine above are depressingly inefficient if you were to wait on 1000s of futures simultaneously. I don't know if this will be a real use case or not. If the many-core prediction comes true and we get 1000s of cores, it might very well be. Anthony Williams-3 wrote:
My 5 cents is still that 2 x time_limited_wait is clear and readable enough but it's no strong opinion. For good or bad you are forcing users to supply their intent twice - by both argument type and method name. Is this a general strategy for the standard library?
This is an important strategy with condition variables, and it is probably sensible to do the same elsewhere in the standard library for consistency.
I understand your point, even though I'm not sure it's the best strategy. Rather than arguing with more experienced people, I'll adapt whatever public code I write to this. Johan

Johan Torp <johan.torp@gmail.com> writes:
Anthony Williams-3 wrote:
If the thread "crashes", you've got a serious bug: all bets are off. It doesn't matter whether that's the same thread that's performing another task or not.
I agree that something is seriously wrong and that we perhaps don't need to handle things gracefully. But if the threading API allows us to detect "crashing" threads somehow, we could avoid spreading a thread-local problem to the whole process. The client thread could even be notified with a thread_crash exception set in the future. I haven't had time to read up on what possibilities the C++0x threading API will supply here, but I suppose you know. Maybe there isn't even a notion of a thread crashing without crashing the process.
No, there isn't. A thread "crashes" as a result of undefined behaviour, in which case the behaviour of the entire application is undefined.
At the very least, I see value in not behaving worse than if the associated client thread had spawned its own worker thread. That is:

std::launch_in_pool(&crashing_function);

should not behave worse than

std::thread t(&crashing_function);
It doesn't: it crashes the application in both cases ;-)
Anthony Williams-3 wrote:
B might be useful. It can't detect waiting by periodic is_ready-polling - which with todays interface is needed to wait for more than one future.
I would use timed_wait() calls when waiting for more than one future: doing a busy-wait with is_ready just consumes CPU time which would be better spent actually doing the work that will set the futures to ready, and timed_wait is more expressive than sleep:
void wait_for_either(jss::unique_future<int>& a, jss::unique_future<int>& b)
{
    if(a.is_ready() || b.is_ready())
    {
        return;
    }
    while(!a.timed_wait(boost::posix_time::milliseconds(1)) &&
          !b.timed_wait(boost::posix_time::milliseconds(1)));
}
It could as well have been implemented by:
while (!a.is_ready() && !b.is_ready())
{
    a.timed_wait(boost::posix_time::milliseconds(1));
}
This has a redundant check on a.is_ready(), and as you mention below, it doesn't cause a wait callback on "b" to be called. Also, this is biased towards waiting on a. By alternating the timed wait you're sharing the load.
You can't detect that b is needed here. I would not implement dynamic wait by timed_waiting on every single future, one at a time. Rather i would have done something like:
void wait_for_any(const vector<future<void>>& futures)
{
    while (1)
    {
        for (...f in futures...)
            if (f.is_ready())
                return;
        sleep(10ms);
    }
}
If it was a large list, I wouldn't /just/ do a timed_wait on each future in turn. The sleep here lacks expression of intent, though. I would write a dynamic wait_for_any like so:

void wait_for_any(const vector<future<void>>& futures)
{
    while (1)
    {
        for (...f in futures...)
        {
            for (...g in futures...)
                if (g.is_ready())
                    return;
            if(f.timed_wait(1ms))
                return;
        }
    }
}

That way, you're never just sleeping: you're always waiting on a future. Also, you share the wait around, but you still check each one every time you wake.
Anthony Williams-3 wrote:
- Let the thread-pool be a predictable FIFO queue. Trust client code to do the scheduling and not submit too many tasks at the same time.
That's not appropriate for situations where a task on the pool can submit more tasks to the same pool, as in my quicksort example.
Ah - I knew I missed something. Agreed, child tasks should be prioritized. But that mechanism could be kept internal in the thread pool.
The pool can only do that if the pool knows you're waiting on a child task.
Anthony Williams-3 wrote:
My wait_for_either above could easily be extended to a dynamic set, and to do wait_for_both instead.
Still, you don't really wait for more than one future at a time. Both your suggestion and mine above are depressingly inefficient if you were to wait on 1000s of futures simultaneously. I don't know if this will be a real use case or not. If the many-core prediction comes true and we get 1000s of cores, it might very well be.
You're right: if there are lots of futures, then you can consume considerable CPU time polling them, even if you then wait/sleep. What is needed is a mechanism to say "this future belongs to this set" and "wait for one of the set". Currently, I can imagine doing this by spawning a separate thread for each future in the set, which then does a blocking wait on its future and notifies a "combined" value when done. The other threads in the set can then be interrupted when one is done. Of course, you need /really/ lightweight threads to make that worthwhile, but I expect threads to become cheaper as the number of cores increases. Alternatively, you could do it with a completion-callback, but I'm not entirely comfortable with that.
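The thread-per-future idea can be sketched as follows, using std::thread and std::shared_future as stand-ins (wait_for_one is a hypothetical name). One caveat made explicit in the code: this sketch joins the helper threads rather than interrupting them, so it demonstrates the mechanism without addressing its cost:

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <future>
#include <mutex>
#include <thread>
#include <vector>

// One helper thread does a blocking wait on each future and notifies a
// shared condition variable when its future completes. A real version
// would interrupt the remaining waiters once one fires; here we join
// them, so the call only returns after all futures eventually complete.
std::size_t wait_for_one(std::vector<std::shared_future<void>>& futures) {
    std::mutex m;
    std::condition_variable cv;
    const std::size_t none = futures.size();   // sentinel: nothing ready yet
    std::size_t winner = none;
    std::vector<std::thread> waiters;
    for (std::size_t i = 0; i < futures.size(); ++i)
        waiters.emplace_back([&, i] {
            futures[i].wait();                 // blocking wait, no polling
            std::lock_guard<std::mutex> lk(m);
            if (winner == none) {
                winner = i;                    // first completion wins
                cv.notify_one();
            }
        });
    std::size_t result;
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return winner != none; });
        result = winner;
    }
    for (auto& t : waiters) t.join();
    return result;
}
```

This is exactly the "thread per future is expensive" trade-off discussed below: the waiting itself is blocking and efficient, but each combination burns a thread.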
Anthony Williams-3 wrote:
My 5 cents is still that 2 x time_limited_wait is clear and readable enough but it's no strong opinion. For good or bad you are forcing users to supply their intent twice - by both argument type and method name. Is this a general strategy for the standard library?
This is an important strategy with condition variables, and it is probably sensible to do the same elsewhere in the standard library for consistency.
I understand your point, even though I'm not sure it's the best strategy. Rather than arguing with more experienced people, I'll adapt whatever public code I write to this.
Currently the WP uses overloads of timed_wait for condition variables. I expect we'll see whether the committee prefers that or wait_for/wait_until after the meeting in June. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

Anthony Williams-3 wrote:
Maybe there isn't even a notion of a thread crashing without crashing the process.
No, there isn't. A thread "crashes" as a result of undefined behaviour, in which case the behaviour of the entire application is undefined.
I thought Windows' SetUnhandledExceptionFilter could handle this but I was wrong. Anthony Williams-3 wrote:
At the very least, I see value in not behaving worse than if the associated client thread had spawned its own worker thread. That is: std::launch_in_pool(&crashing_function); should not behave worse than std::thread t(&crashing_function);
It doesn't: it crashes the application in both cases ;-)
You're right. Deadlocks will, however, be able to "spread" in this non-obvious way. Let's say thread C1 adds task T1 to the pool, which is processed by worker thread W1. C1 then blocks until T1 is finished. When T1 waits on a future, it starts working on job T2, which deadlocks. This deadlock now spreads to the uninvolved thread C1 too. Don't know how much of a problem this is, though - effective thread re-use might be worth more than this unexpected behaviour. Anthony Williams-3 wrote:
If it was a large list, I wouldn't /just/ do a timed_wait on each future in turn. The sleep here lacks expression of intent, though. I would write a dynamic wait_for_any like so:
void wait_for_any(const vector<future<void>>& futures) { while (1) { for (...f in futures...) { for (...g in futures...) if (g.is_ready()) return; if(f.timed_wait(1ms)) return; } } }
That way, you're never just sleeping: you're always waiting on a future. Also, you share the wait around, but you still check each one every time you wake.
Maybe you would, but I doubt most users would. I wouldn't expect that waiting on a future expresses interest in the value. Anthony Williams-3 wrote:
You're right: if there's lots of futures, then you can consume considerable CPU time polling them, even if you then wait/sleep. What is needed is a mechanism to say "this future belongs to this set" and "wait for one of the set".
Exactly my thoughts. Wait for all would probably be needed too. And to build composites, you need to be able to add both futures and these future-sets to a future-set. Might be one class for wait_for_any and another one for wait_for_all. Anthony Williams-3 wrote:
Currently, I can imagine doing this by spawning a separate thread for each future in the set, which then does a blocking wait on its future and notifies a "combined" value when done. The other threads in the set can then be interrupted when one is done. Of course, you need /really/ lightweight threads to make that worthwhile, but I expect threads to become cheaper as the number of cores increases.
Starting a thread to wait for a future doesn't seem very suitable to me. Imagine 10% of the core threads each waiting on (combinatorial) results from the remaining 90%. Also, waiting on many futures is probably applicable even on single-core processors. For instance, if you have 100s of pending requests to different types of distributed services, you could model each request with a future and be interested in the first response which arrives. Windows threads today aren't particularly light-weight. This might mean that condition_variable isn't a suitable abstraction to build futures on :( At least not the way it works today. But I don't think it's a good idea to change condition_variables this late. It is a pretty widespread, well-working and well-understood concurrency model. OTOH changing future's waiting model this late is not good either. Anthony Williams-3 wrote:
Alternatively, you could do it with a completion-callback, but I'm not entirely comfortable with that.
I'm not comfortable with this either, for the reasons I expressed in my response to Gaskill's proposal. This issue is my biggest concern with the future proposal. The alternatives I've seen so far:
1. Change/alter condition variables
2. Add future-complete callback (Gaskill's proposal)
3. Implement wait_for_many with a thread per future
4. Implement wait_for_many with periodic polling with timed_waits
5. Introduce a new wait_for_many mechanism (public class or implementation detail)
6. Don't ever support waiting on multiple futures
7. Don't support it until the next version, but make sure we don't need to alter future semantics/interface when adding it.
Alternative 7 blocks the possibility of writing some exciting libraries on top of futures until a new future version is available. Do you have further alternatives? Johan

On Wednesday 14 May 2008 08:57 am, Johan Torp wrote:
Anthony Williams-3 wrote:
Alternatively, you could do it with a completion-callback, but I'm not entirely comfortable with that.
I'm not comfortable with this either, for the reasons I expressed in my response to Gaskill's proposal.
This issue is my biggest concern with the future proposal. The alternatives I've seen so far:
1. Change/alter condition variables
2. Add future-complete callback (Gaskill's proposal)
3. Implement wait_for_many with a thread per future
4. Implement wait_for_many with periodic polling with timed_waits
5. Introduce a new wait_for_many mechanism (public class or implementation detail)
6. Don't ever support waiting on multiple futures
7. Don't support it until the next version, but make sure we don't need to alter future semantics/interface when adding it.
Alternative 7 blocks the possibility of writing some exciting libraries on top of futures until a new future version is available.
In my libpoet scheduler code I described in an earlier post today, I currently use a future-complete callback (signals/slots, more specifically). Although I could make my scheduler work with wait_for_many support instead, it would be more cumbersome and less efficient. First, with signals/slots I can avoid having to make the scheduler thread wake up every time any input future of any method request in the scheduler queue becomes ready. Instead, the individual method requests can observe their inputs, and only invoke a signal in turn when all of their futures are ready. Then the scheduler only has to observe the method requests and doesn't have to wake up (possibly spuriously) and check every method request in its queue every time a single input future becomes ready. Second, if all I can do is wait for multiple futures, then the scheduler has to additionally maintain and keep updated a separate container with all the input futures of the method requests currently queued. -- Frank
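The signal/slot observation described above can be approximated with a small hand-rolled callback holder. This is an illustrative sketch, not libpoet's actual API, and note that the slots run in the promise-fulfilling thread - the very property being debated in this thread:

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

// A promise-like object whose observers register completion slots.
// Slots registered after fulfilment fire immediately.
class callback_promise {
public:
    void on_complete(std::function<void(int)> slot) {
        std::unique_lock<std::mutex> lk(m_);
        if (ready_) {
            int v = value_;
            lk.unlock();
            slot(v);                        // already fulfilled: fire now
        } else {
            slots_.push_back(std::move(slot));
        }
    }
    void set_value(int v) {
        std::vector<std::function<void(int)>> to_fire;
        {
            std::lock_guard<std::mutex> lk(m_);
            ready_ = true;
            value_ = v;
            to_fire.swap(slots_);
        }
        for (auto& s : to_fire) s(v);       // run slots outside the lock
    }
private:
    std::mutex m_;
    bool ready_ = false;
    int value_ = 0;
    std::vector<std::function<void(int)>> slots_;
};
```

A method request with N inputs can register a counting slot on each and signal the scheduler only once the count reaches N, which is exactly the wakeup-saving behaviour described above.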

Frank Mori Hess wrote:
First, with signals/slots I can avoid having to make the scheduler thread wake up every time any input future of any method request in the scheduler queue becomes ready. Instead, the individual method requests can observe their inputs, and only invoke a signal in turn when all of their futures are ready. Then the scheduler only has to observe the method requests and doesn't have to wake up (possibly spuriously) and check every method request in its queue every time a single input future becomes ready.
Second, if all I can do is wait for multiple futures, then the scheduler has to additionally maintain and keep updated a separate container with all the input futures of the method requests currently queued.
If you have the time, please look at my proposed solution at http://www.nabble.com/-future--Early-draft-of-wait-for-multiple-futures-inte.... I think it solves both your problems without moving "user code execution" from future-blocking threads to promise-fulfilling ones. I'm very interested in seeing if it works well with scheduling and I think your input would be very valuable. Johan

On Wednesday 14 May 2008 19:21 pm, Johan Torp wrote:
Frank Mori Hess wrote:
First, with signals/slots I can avoid having to make the scheduler thread wake up every time any input future of any method request in the scheduler queue becomes ready. Instead, the individual method requests can observe their inputs, and only invoke a signal in turn when all of their futures are ready. Then the scheduler only has to observe the method requests and doesn't have to wake up (possibly spuriously) and check every method request in its queue every time a single input future becomes ready.
Second, if all I can do is wait for multiple futures, then the scheduler has to additionally maintain and keep updated a separate container with all the input futures of the method requests currently queued.
If you have the time, please look at my proposed solution at http://www.nabble.com/-future--Early-draft-of-wait-for-multiple-futures-int erface-to17242880.html. I think it solves both your problems without moving "user code execution" from future-blocking threads to promise-fulfilling ones. I'm very interested in seeing if it works well with scheduling and I think your input would be very valuable.
Ah, yes it seems like some kind of composable future_switch and future_barrier could work quite well for my use case. Do they actually need to be classes, though? What if they were just free functions, for example future<void> future_barrier(const future<void> &a1, const future<void> &a2, ... , const future<void> &aN); -- Frank

Frank Mori Hess wrote:
Ah, yes it seems like some kind of composable future_switch and future_barrier could work quite well for my use case. Do they actually need to be classes though? What if they were just free functions for example
future<void> future_barrier(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
If we want to support dynamically adding futures to future_switch and maybe future_barrier, a free function won't suffice. However, I believe these functions are really useful and should be implemented on top of the proposed mechanisms. template<class ReturnType, class Arg1, class Arg2> future<ReturnType> barrier_compose(const future<Arg1> &a1, const future<Arg2>& a2, const function<ReturnType(Arg1, Arg2)>& compose); Johan

On Friday 16 May 2008 05:23 am, Johan Torp wrote:
Frank Mori Hess wrote:
Ah, yes it seems like some kind of composable future_switch and future_barrier could work quite well for my use case. Do they actually need to be classes though? What if they were just free functions for example
future<void> future_barrier(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
If we want to support dynamically adding futures to future_switch and maybe future_barrier a free function won't suffice.
Yes, you're right. Especially with future_switch there should be a way to wait on a group of futures whose number is only known at runtime. -- Frank

On Friday 16 May 2008 10:15 am, Frank Mori Hess wrote:
On Friday 16 May 2008 05:23 am, Johan Torp wrote:
Frank Mori Hess wrote:
Ah, yes it seems like some kind of composable future_switch and future_barrier could work quite well for my use case. Do they actually need to be classes though? What if they were just free functions for example
future<void> future_barrier(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
If we want to support dynamically adding futures to future_switch and maybe future_barrier a free function won't suffice.
Yes, you're right. Especially with future_switch there should be a way to wait on a group of futures whose number is only known at runtime.
Hmm, actually what I said is wrong. As I believe has been pointed out before by others on this list, you could build up an arbitrary-size group of futures to wait on at runtime by taking the return value from the free future_barrier or future_switch function and passing it to another call. So I'm still partial to the free function approach. It is similar to operator|| and operator&& in Gaskill's library except not operators, no comb<T> class or op() function, and overloaded to accept more than 2 arguments: future<void> future_barrier(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN); future<void> future_switch(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN); A naive implementation would probably be pretty inefficient when evaluating a future that has been produced after passing through many future_barrier/future_switch calls. But I imagine some optimizations could be found under the hood in the implementation to avoid unnecessarily long chains of futures depending on each other. These prototypes assume the future lib supports conversion of any future<T> type to future<void>. -- Frank
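The chaining argument can be made concrete with std::shared_future as a stand-in. future_barrier and future_barrier_all below are the hypothetical free functions under discussion, implemented naively with one waiter thread per pairwise combination (exactly the long-chain inefficiency acknowledged above):

```cpp
#include <cassert>
#include <chrono>
#include <cstddef>
#include <future>
#include <vector>

// Binary barrier: ready once both inputs are ready. Naive - one helper
// thread blocks on both inputs per combination.
std::shared_future<void> future_barrier(std::shared_future<void> a,
                                        std::shared_future<void> b) {
    return std::async(std::launch::async, [a, b] { a.wait(); b.wait(); }).share();
}

// An arbitrary runtime-sized (non-empty) group falls out by folding the
// binary free function, at the cost of an N-deep chain of barriers.
std::shared_future<void> future_barrier_all(
        const std::vector<std::shared_future<void>>& fs) {
    std::shared_future<void> acc = fs.front();
    for (std::size_t i = 1; i < fs.size(); ++i)
        acc = future_barrier(acc, fs[i]);
    return acc;
}
```

This is the tree-of-depth-N shape criticised below; a library implementation could flatten the internal chain.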

Frank Mori Hess wrote:
If we want to support dynamically adding futures to future_switch and maybe future_barrier a free function won't suffice.
Yes, you're right. Especially with future_switch there should be a way to wait on a group of futures whose number is only known at runtime.
Hmm, actually what I said is wrong. As I believe has been pointed out before by others on this list, you could build up an arbitrary-size group of futures to wait on at runtime by taking the return value from the free future_barrier or future_switch function and passing it to another call. So I'm still partial to the free function approach. It is similar to operator|| and operator&& in Gaskill's library except not operators, no comb<T> class or op() function, and overloaded to accept more than 2 arguments:
It is possible; what I meant by "won't suffice" is that it will be very inefficient. Imagine implementing "or" for 1000 futures. If you have a "collection" you can just add children and get a tree of depth 2 where the root node has 1000 children. If you rely on free functions, say of arity 2, you get a tree of depth 1000 and need 999 extra parent nodes, each of which has one leaf node and one of the other parent nodes as a child. Frank Mori Hess wrote:
future<void> future_barrier(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
future<void> future_switch(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
A naive implementation would probably be pretty inefficient when evaluating a future that has been produced after passing through many future_barrier/future_switch calls. But I imagine some optimizations could be found under the hood in the implementation to avoid unnecessarily long chains of futures depending on each other.
This would be nice and could potentially solve the inefficiency problem. You are allowed to wait for any node in a tree. The main difficulty is figuring out which nodes only exist in the tree and which are visible to the outside world. The nodes which only exist in the tree can be re-arranged into a compact tree. Maybe we can use unique_future to guarantee this. I.e. the combination functions always return unique_futures, and if we get these as in-parameters, we can safely re-arrange the tree. Johan

On Saturday 17 May 2008 07:32, Johan Torp wrote:
Frank Mori Hess wrote:
future<void> future_barrier(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
future<void> future_switch(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
A naive implementation would probably be pretty inefficient when evaluating a future that has been produced after passing through many future_barrier/future_switch calls. But I imagine some optimizations could be found under the hood in the implementation to avoid unnecessarily long chains of futures depending on each other.
This would be nice and could potentially solve the inefficiency problem. You are allowed to wait for any node in a tree. The main difficulty is figuring out which nodes only exist in the tree and which are visible to the outside world. The nodes which only exist in the tree can be re-arranged into a compact tree. Maybe we can use unique_future to guarantee this. I.e. the combination functions always return unique_futures, and if we get these as in-parameters, we can safely re-arrange the tree.
Returning a unique_future seems okay. To get around an exponential blowup in the number of overloads when overloading each parameter for shared_future/unique_future, you'd need some kind of implicitly convertible wrapper class. You could have future_barrier/future_switch accept and return a wrapper like the comb<T> from Braddock's library. The comb<T> could carry any state information which is needed to optimize the tree. Another solution would be to just provide another overload that accepts iterators to a user-supplied container of future<void>s. -- Frank

On Saturday 17 May 2008 07:32 am, Johan Torp wrote:
Frank Mori Hess wrote:
future<void> future_barrier(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
future<void> future_switch(const future<void> &a1, const future<void>& a2, ... , const future<void> &aN);
A naive implementation would probably be pretty inefficient when evaluating a future that has been produced after passing through many future_barrier/future_switch calls. But I imagine some optimizations could be found under the hood in the implementation to avoid unnecessarily long chains of futures depending on each other.
This would be nice and could potentially solve the inefficiency problem. You are allowed to wait for any node in a tree. The main difficulty is figuring out which nodes only exist in the tree and which are visible to the outside world. The nodes which only exist in the tree can be re-arranged into a compact tree. Maybe we can use unique_future to guarantee this. I.e. the combination functions always return unique_futures, and if we get these as in-parameters, we can safely re-arrange the tree.
In case you're interested, I've just implemented future_barrier and future_select (changed name from future_switch) free functions in libpoet cvs: http://www.comedi.org/cgi-bin/viewvc.cgi/libpoet/poet/future_waits.hpp?view=... I didn't do any clever under-the-hood optimizations, only provided overloads that accept 2 to 10 future<T> arguments, plus one that accepts first/last iterators to a container of futures.

Frank Mori Hess wrote:
In case you're interested, I've just implemented future_barrier and future_select (changed name from future_switch) free functions in libpoet cvs:
http://www.comedi.org/cgi-bin/viewvc.cgi/libpoet/poet/future_waits.hpp?view=...
I didn't do any clever under-the-hood optimizations, only provided overloads that accept 2 to 10 future<T> arguments, plus one that accepts first/last iterators to a container of futures.
Interesting! Do you use these within poet anywhere, or are they for poet users only? I think it would be good to add some functionality, though. For simplicity, let's only talk about the select case. This is what I propose:
1. Clients will probably be interested in which of the child futures fired. The interface should support this.
2. Upon notification from a child, I'd want to be able to check a condition and then either
a - become ready
b - prune the "consumed" child future and go back to sleep
2.1 This condition checking should probably be done by the first thread which is waiting for the future - it belongs to the future-listening threads, not the promise-fulfilling one. If nobody is waiting for the future, we should save this evaluation and let it be performed lazily as soon as someone queries the future or waits for it.
2.2 This evaluation is not just a predicate which can make the future ready. It can also be actual calculations. These would be performed from within future::wait(), get() and is_ready().
2.3 We probably need to supply a default value in case all children have been consumed.
3. We probably want to support composite futures of different types. If we want to support 1 or 2, we want access to the typed value of the ready future. In practice, this probably means the evaluation function should be added at the same time as the child future to ensure type safety. See the interface example below.
A simplified example of what an interface which supports 1, 2 and 3 might look like is: template<class ReturnType, class Arg1Type, class Arg2Type> future<ReturnType> future_select(future<Arg1Type> f1, function<optional<ReturnType>(Arg1Type)> e1, future<Arg2Type> f2, function<optional<ReturnType>(Arg2Type)> e2, ReturnType default_value); I'm not entirely satisfied with this, though.
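For illustration, here is a polling-based sketch of that proposed interface using std::shared_future, std::function and std::optional as stand-ins. The 1ms polling replaces the lazy, waiter-driven evaluation described in point 2.1; this is a behaviour sketch, not a proposed implementation:

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <future>
#include <optional>

// Two-child select with per-child evaluation functions. An evaluator
// returning a value makes the select ready; returning nullopt prunes
// ("consumes") that child. The default is returned once all children
// are consumed.
template <class ReturnType, class Arg1, class Arg2>
ReturnType future_select(std::shared_future<Arg1> f1,
                         std::function<std::optional<ReturnType>(Arg1)> e1,
                         std::shared_future<Arg2> f2,
                         std::function<std::optional<ReturnType>(Arg2)> e2,
                         ReturnType default_value) {
    using namespace std::chrono;
    bool consumed1 = false, consumed2 = false;
    while (!consumed1 || !consumed2) {
        if (!consumed1 && f1.wait_for(milliseconds(1)) == std::future_status::ready) {
            if (auto r = e1(f1.get())) return *r;  // evaluator accepted the value
            consumed1 = true;                      // prune and go back to sleep
        }
        if (!consumed2 && f2.wait_for(milliseconds(1)) == std::future_status::ready) {
            if (auto r = e2(f2.get())) return *r;
            consumed2 = true;
        }
    }
    return default_value;  // every child was consumed without a result
}
```

A lazy version would defer the evaluator calls into wait()/get()/is_ready() of the returned future, per point 2.2.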
In the end, I want users to be able to easily "lift" ordinary functions to futures: int foo_func(int, double, bool) => future<int> foo_func(future<int>, future<double>, future<bool>) I have also been thinking about whether we could move part of this composed waiting into a condition_variable_node class to separate concerns. Johan

On Friday 23 May 2008 05:45 am, Johan Torp wrote:
Interesting! Do you use these within poet anywhere or is it for poet users only?
It's only for users at the moment, I haven't rewritten the schedulers to use it.
I think it would be good to add some functionality though. For simplicity, let's only talk about the select case. This is what I propose:
How about an additional future_select overload which accepts two boost::shared_container_iterators as arguments, and returns a future<shared_container_iterator>? The returned iterator would point at the future which is ready or has an exception. Then the caller could use the returned iterator to erase the element from the container before calling future_select again. The shared_container_iterator would ensure the input container of futures doesn't die before the returned future becomes ready.
1. Clients will probably be interested in which of the child futures fired. The interface should support this.
2. Upon notification from a child, I'd want to be able to check a condition and then either a - become ready b - prune the "consumed" child future and go back to sleep 2.1 This condition checking should probably be done by the first thread which is waiting for the future - it belongs to the future-listening threads, not the promise-fulfilling one. If nobody is waiting for the future, we should save this evaluation and let it be performed lazily as soon as someone queries the future or waits for it. 2.2 This evaluation is not just a predicate which can make the future ready. It can also be actual calculations. These would be performed from within future::wait(), get() and is_ready(). 2.3 We probably need to supply a default value in case all children have been consumed.
Having a future execute a callback when a wait completes seems only marginally useful to me. I'm not sure it's worth the complication.
3. We probably want to support composite futures from different types. If we want to support 1 or 2 we want to get access to the typed value of the ready future. In practice, this probably means the evaluation function should be added at the same time as the child future to ensure type safety. See the interface example below.
A simplified example of what an interface which supports 1, 2 and 3 might look like is:
template<class ReturnType, class Arg1Type, class Arg2Type> future<ReturnType> future_select(future<Arg1Type> f1, function<optional<ReturnType>(Arg1Type)> e1, future<Arg2Type> f2, function<optional<ReturnType>(Arg2Type)> e2, ReturnType default_value);
I'm not entirely satisfied with this though.
Oh, now its clear to me why you were using the name future_switch.
In the end, I want users to be able to easily "lift" ordinary functions to futures: int foo_func(int, double, bool) => future<int> foo_func(future<int>, future<double>, future<bool>)
That's exactly what poet::active_function does, or did you have something different in mind?
I have also been thinking about whether we could move part of this composed waiting into a condition_variable_node class to separate concerns.

Frank Mori Hess wrote:
How about an additional future_select overload which accepts two boost::shared_container_iterators as arguments, and returns a future<shared_container_iterator>? The returned iterator would point at the future which is ready or has an exception. Then the caller could use the returned iterator to erase the element from the container before calling future_select again. The shared_container_iterator would ensure the input container of futures doesn't die before the returned future becomes ready.
That would restrict users to putting their futures in a container created as a shared_ptr, right? Williams' shared_future already has reference semantics and could be copied. unique_futures should somehow be moved in and owned by the composite future, as nobody else should look at its result. Not sure what is best here. Frank Mori Hess wrote:
Having a future execute a callback at the end of a wait completing seems only marginally useful to me. I'm not sure it's worth the complication.
- snip -
That's exactly what poet::active_function does, or did you have something different in mind?
Yes, I'm thinking of a passive function which doesn't have an internal thread associated with it. One that is evaluated lazily as soon as someone is interested in it. I think this could be very useful, do you? I haven't worked very much with active objects, so your 5 cents are probably worth more than mine :) But yes, it requires some complexity - probably comparable to that of active_function. Johan

On Friday 23 May 2008 10:50 am, Johan Torp wrote:
Frank Mori Hess wrote:
How about an additional future_select overload which accepts two boost::shared_container_iterators as arguments, and returns a future<shared_container_iterator>? The returned iterator would point at the future which is ready or has an exception. Then the caller could use the returned iterator to erase the element from the container before calling future_select again. The shared_container_iterator would insure the input container of futures doesn't die before the return future becomes ready.
That would restrict users to put their futures in a container created as a shared_ptr, right?
Yes, but the shared_container_iterator isn't really necessary, I could make future_select return a future<Iterator> for any type of iterator input. I was just trying to make it safer. Unfortunately, even using shared_container_iterator still wouldn't prevent the user from modifying the container so as to invalidate the returned future iterator anyways. Maybe I'll just suggest using shared_container_iterator in the docs.
Yes, I'm thinking of a passive function which doesn't have an internal thread associated with it. One that is evaluated lazily as soon as someone is interested in it.
I think this could be very useful, do you?
I don't know, it might be. I've never tried to program something using a lot of nested lazy evaluation. Could you use boost::bind to achieve the same effect? Doesn't it do something like this by default if you do a recursive bind of a function to a parameter, and don't wrap the function in boost::protect()?

Frank Mori Hess wrote:
Yes, I'm thinking of a passive function which doesn't have an internal thread associated with it. One that is evaluated lazily as soon as someone is interested in it.
I think this could be very useful, do you?
I don't know, it might be. I've never tried to program something using a lot of nested lazy evaluation. Could you use boost::bind to achieve the same effect? Doesn't it do something like this by default if you do a recursive bind of a function to a parameter, and don't wrap the function in boost::protect()?
I'm not sure I understand you. boost.bind itself has nothing to do with threads waiting on each other. You will probably use boost.bind to create the evaluation functions (e1 and e2 in my previous post) though. Johan -- View this message in context: http://www.nabble.com/Review-Request%3A-future-library-%28Gaskill-version%29... Sent from the Boost - Dev mailing list archive at Nabble.com.

On Friday 23 May 2008 14:32 pm, Johan Torp wrote:
I'm not sure I understand you. boost.bind itself has nothing to do with threads waiting on each other. You will probably use boost.bind to create the evaluation functions (e1 and e2 in my previous post) though.
I must have read too much into what you said before. I was thinking you were entirely in single-threaded mode, and all your futures were just coming from the same kind of lazy evaluations.

Anyways, I think the main deficiency with the prototype of future_select() I've coded is it will always be at least O(N) where N is the number of input futures. So if you are waiting for N futures one at a time, it will take N calls to future_select giving O(N^2) complexity. If you have some kind of "future combination" class that contains the set of futures being waited on, and can remove completed futures from it in O(1) time, you might do better. It would take O(N) time to create the "future combination" initially, but then each wait for the next completed future could be O(1).

Okay, to summarize where I think we are:

1) In order to wait on an arbitrary number of futures, determined at runtime, we need some kind of container.

2) Since containers are homogeneous, that means some kind of type erasure (future<void>).

3) When a future becomes ready, we will often want to dispatch it somehow, which might mean getting its value, or at least identifying the input future which became ready. That could mean doing some kind of type unerasure, or doing an inefficient loop over all the original input futures until you find the completed one.

Your solution is to bind the dispatcher function to the future before it is type erased. For thread-safety, you want to run the dispatcher function in one of the waiting threads, right after the future becomes ready. There will be deadlock hazards running the user's dispatcher function while holding the future's associated mutex. Also, it's possible multiple threads will be waiting on the future, in which case how much thread-safety benefit is there to running code in one of the waiting threads versus a callback run in the promise thread?

What if we allow a key value to be associated with each input future, then return the key of the completed future? Then it would be left to the user to map the key back to whatever future they were originally interested in.

Frank Mori Hess wrote:
1) In order to wait on an arbitrary number of futures, determined at runtime, we need some kind of container.
Yes, but it needn't be exposed to users. As you suggested, it can be built up with free functions or expression templates, similar to Gaskill's comb. This is a detail though, I don't want to focus on it now. Frank Mori Hess wrote:
2) Since containers are homogeneous, that means some kind of type erasure (future<void>).
Yes, type erasure will be needed. I hope we can rely on boost.function for that, similar to the predicate version of condition_variable::wait(). Frank Mori Hess wrote:
3) When a future becomes ready, we will often want to dispatch it somehow which might mean getting its value, or at least identifying the input future which became ready. That could mean doing some kind of type unerasure, or doing an inefficient loop over all the original input futures until you find the completed one. Your solution is to bind the dispatcher function to the future before it is type erased. For thread-safety, you want to run the dispatcher function in one of the waiting threads, right after the future becomes ready.
Thread safety is not my primary concern. The "predicate" code might throw an exception and the promise-fulfilling thread can't be expected to catch it. That would couple the promise-fulfiller and the future-listeners in an awkward way. An alternative might be to catch the exceptions and set them on the composite future. Thinking of it, this would be the expected behaviour, wouldn't it? Frank Mori Hess wrote:
There will be deadlock hazards running the user's dispatcher function while holding the future's associated mutex.
Good point. I hope we don't need to hold any mutex while evaluating the predicate. Frank Mori Hess wrote:
Also, it's possible multiple threads will be waiting on the future, in which case how much thread-safety benefit is there to running code in one of the waiting threads versus a callback run in the promise thread?
The callback does not need to be thread safe unless it holds references to some particular thread's data. Only one of the waiting threads will evaluate it. Also, if there is only one "listening" thread it can safely reference its own data. Frank Mori Hess wrote:
What if we allow a key value to be associated with each input future, then return the key of the completed future? Then it would be left to the user to map the key back to whatever future they were originally interested in.
This would mean a runtime mapping. This costs something, but more importantly it could introduce bugs because of the added runtime insecurity. I'm hoping we can avoid it. I'm doing a thesis related to C++0x and threading. Peter Dimov suggested I could try and implement an event-like waiting primitive, similar to Windows' WaitForMultipleObjects. If we had such a primitive, we might get better ways to solve this problem. I think part of the reason this is so tricky is because condition variables aren't meant to be used for multiple waiting. Maybe Dimov or somebody else has sketches of such an interface. Maybe we should solve this problem first. What do you think?

Johan

Johan Torp:
Frank Mori Hess wrote:
1) In order to wait on an arbitrary number of futures, determined at runtime, we need some kind of container.
Yes, but it needn't be exposed to users. As you suggested, it can be built up with free functions or expressive templates, similar to Gaskill's comb.
Suppose that the programmer wants to spawn n tasks, where n is not a compile-time constant, and is only interested in whichever task returns a value first. Something like:

T f( int x );

// ...

vector<future<T>> v;

for( int i = 0; i < n; ++i )
{
    v.push_back( async( f, i ) );
}

// wait for any of v[i] to complete, get the T

How could this be rephrased with the container being an implementation detail? Here's one way:

future<T> ft;

for( int i = 0; i < n; ++i )
{
    ft = ft || async( f, i );
}

T t = ft.get();

This however relies on:

future<T> operator|| ( future<T>, future<T> );

You can't use 'comb' as the return value.

Peter Dimov-5 wrote:
future<T> operator|| ( future<T>, future<T> );
You can't use 'comb' as the return value.
In the expression template solution, operator|| would return an unspecified future-expression type, and futures can be constructed from it. Just like boost.function and boost.bind interact. Other overloads would also work with these unnamed future-expression temporaries.

Johan

On Sunday 25 May 2008 14:02, Peter Dimov wrote:
Here's one way:
future<T> ft;
for( int i = 0; i < n; ++i ) { ft = ft || async( f, i ); }
T t = ft.get();
That reminds me, I've been so focused on trying to make future_select work with heterogeneous input futures that I neglected the homogeneous case. That is, future_select() always returns a future<void> even when all the input futures have the same type. And after all, maybe supporting the homogeneous case is good enough. For example, in my use case I've already taken care to type-erase the method requests in the schedulers down to a uniform type, because the scheduler needs to store them in a container anyways. So the only other thing I'd need is some kind of explicit future container that can be re-used to "wait for any" repeatedly without rebuilding the entire set of futures each time. Maybe something with a queue-like interface:

template<typename T>
class future_selector
{
public:
    future<T> front();
    void pop();
    void push(const future<T> &f);
};

where the future returned by front() would have the value from the next future in the future_selector to become ready. pop() would discard the completed future previously being returned by front(), so front() can return a new future corresponding to the next one to complete.

Frank Mori Hess-2 wrote:
That reminds me, I've been so focused on trying to make future_select work with heterogeneous input futures, that I neglected the homogeneous case. That is, future_select() always returns a future<void> even when all the input futures have the same type. And after all, maybe supporting the homogeneous case is good enough. For example, in my use case I've already taken care to type-erase the method requests in the schedulers down to a uniform type, because the scheduler needs to store them in a container anyways.
So the only other thing I'd need is some kind of explicit future container that can be re-used to "wait for any" repeatedly without rebuilding the entire set of futures each time. Maybe something with a queue-like interface:
template<typename T>
class future_selector
{
public:
    future<T> front();
    void pop();
    void push(const future<T> &f);
};
where the future returned by front() would have the value from the next future in the future_selector to become ready. pop() would discard the completed future previously being returned by front(), so front() can return a new future corresponding to the next one to complete.
I still think solving the heterogeneous case is worthwhile. The interface isn't much more complicated; it could look like this:

template<typename ResultType>
class future_selector
{
public:
    template<class T>
    void push(const future<T> &f, function<optional<ResultType>(T)>& eval);
};

template<class ReturnType>
future::future(future_selector<ReturnType>&)

Johan

On Monday 26 May 2008 03:53, Johan Torp wrote:
Frank Mori Hess-2 wrote:
So the only other thing I'd need is some kind of explicit future container that can be re-used to "wait for any" repeatedly without rebuilding the entire set of futures each time. Maybe something with a queue-like interface:
template<typename T>
class future_selector
{
public:
    future<T> front();
    void pop();
    void push(const future<T> &f);
};
where the future returned by front() would have the value from the next future in the future_selector to become ready. pop() would discard the completed future previously being returned by front(), so front() can return a new future corresponding to the next one to complete.
I still think solving the heterogeneous case is worthwhile. The interface isn't much more complicated; it could look like this:
template<typename ResultType>
class future_selector
{
public:
    template<class T>
    void push(const future<T> &f, function<optional<ResultType>(T)>& eval);
};
template<class ReturnType> future::future(future_selector<ReturnType>&)
The thing is, this whole exercise started for me to see if it was possible to get rid of the public signal/slots on futures and replace it with just various waits on futures. Once we start adding callback functions to the future_select and future_barrier, which are executed on completion, it's really a sign of failure to me. I'd rather just leave the public slots on futures as that is simpler, more powerful, and no less dangerous.

Frank Mori Hess-2 wrote:
The thing is, this whole exercise started for me to see if it was possible to get rid of the public signal/slots on futures and replace it with just various waits on futures. Once we start adding callback functions to the future_select and future_barrier, which are executed on completion, it's really a sign of failure to me. I'd rather just leave the public slots on futures as that is simpler, more powerful, and no less dangerous.
I agree public slots are simpler and more powerful. IMO they are a lot more dangerous though. Some examples:

If I connect a slot that might throw an exception, that exception will be thrown from the promise-fulfilling thread's set()-call. This is totally unexpected.

If a slot acquires a lock, it can lead to unexpected deadlocks if the promise-fulfilling thread is holding an external lock while calling future::set(). Assume you have two locks which you always should acquire in some specified order to avoid deadlocks. Say you should acquire mutex1 before mutex2;

slot:
    scoped_lock l(mutex1);

promise-fulfiller:
    scoped_lock l(mutex2);
    promise.set(100); // It is totally unexpected that this code will acquire mutex1

other thread:
    scoped_lock l(mutex1);
    scoped_lock l(mutex2);

I don't know how common this particular example would be but I'm guessing there are lots of similar problems out there. Basically, you don't expect future::set to run a lot of arbitrary code. And if you do, you have coupled the future-listeners and future-fulfillers in a very implicit way. If futures are internal to a library such as poet it is fine. But to expose a generic future object with such an interface is far from optimal, IMHO.

Johan

On Monday 26 May 2008 16:13, Johan Torp wrote:
I agree public slots are simpler and more powerful. IMO they are a lot more dangerous though. Some examples:
If I connect a slot that might throw an exception, that exception will be thrown from the promise-fulfilling thread's set()-call. This is totally unexpected.
That's true. However, the connect call of the future which connects a slot to the future-complete signal could return a future<void>. The returned future<void> would either become ready if the slot runs successfully, or transport the exception the slot threw. I think you could even make it a template function and have it return a future<R>, and let it accept the completed future's value as an input parameter.
If a slot acquires a lock, it can lead to unexpected deadlocks if the promise-fulfilling thread is holding an external lock while calling future::set(). Assume you have two locks which you always should acquire
That is even more true when the callback code is run in future::get() or future::ready(), since the callback code will have to be run while holding the future's mutex (unless perhaps if it is a unique_future). If the user callback code is run in the promise-setting thread, it can be done without holding any locks internal to the library.
I don't know how common this particular example would be but I'm guessing there are lots of similar problems out there. Basically, you don't expect future::set to run a lot of arbitrary code. And if you do,
Is future::get or future::ready running a lot of arbitrary code any less surprising?

Frank Mori Hess-2 wrote:
If I connect a slot that might throw an exception, that exception will be thrown from the promise-fulfilling thread's set()-call. This is totally unexpected.
That's true. However, the connect call of the future which connects a slot to the future-complete signal could return a future<void>. The returned future<void> would either become ready if the slot runs successfully, or transport the exception the slot threw. I think you could even make it a template function and have it return a future<R>, and let it accept the completed future's value as an input parameter.
You would need to return a connection too so that you can disconnect. And after a wait-for-any composition has become ready, how would you know which of all the connected slots you need to check for an exception? I think that having a composite future in which the exception appears is a more natural interface. Frank Mori Hess-2 wrote:
If a slot acquires a lock, it can lead to unexpected deadlocks if the promise-fulfilling thread is holding an external lock while calling future::set(). Assume you have two locks which you always should acquire
That is even more true when the callback code is run in future::get() or future::ready(), since the callback code will have to be run while holding the future's mutex (unless perhaps if it is a unique_future). If the user callback code is run in the promise-setting thread, it can be done without holding any locks internal to the library.
Do you really need to hold the future's mutex while doing the evaluation? Can't other threads just see it as not_ready until the evaluation is complete and one of the future-listeners call future::set() or future::set_exception() on the composite future? Frank Mori Hess-2 wrote:
Is future::get or future::ready running a lot of arbitrary code any less surprising?
IMHO, yes. The future-listeners are the ones which set up the lazy evaluation so they should be prepared for it. Just like condition_variable's predicate wait. If you set up a composite future and then share it to other future-listening threads, you can state that this future can throw this and that or does xxx. The promise-fulfiller OTOH should be de-coupled from what its listeners are doing.

Johan

On Tuesday 27 May 2008 05:46 am, Johan Torp wrote:
Do you really need to hold the future's mutex while doing the evaluation? Can't other threads just see it as not_ready until the evaluation is complete and one of the future-listeners call future::set() or future::set_exception() on the composite future?
You're probably right, I shouldn't say things can't be done until I've tried to do them :)
If you set up a composite future and then share it to other future-listening threads, you can state that this future can throw this and that or does xxx. The promise-fulfiller OTOH should be de-coupled from what it's listeners are doing.
That's a good point, if the only way to add callback code is in the factory functions for the composite futures, it means the callback code is only run by the newly created composite future (or possibly a copy of it), but doesn't affect the input futures or any other copies of the input futures which may already exist.

Johan Torp:
I'm doing a thesis related to C++0x and threading. Peter Dimov suggested I could try and implement an event-like waiting primitive, similar to Windows' WaitForMultipleObjects. If we had such a primitive, we might get better ways to solve this problem. I think part of the reason this is so tricky is because condition variables aren't meant to be used for multiple waiting. Maybe Dimov or somebody else has sketches of such an interface.
I believe that futures only need the basic "manual reset event" in Windows terms:

class event
{
private:

    bool is_set_;

public:

    event(); // post: !is_set_

    void set(); // post: is_set_
    void reset(); // post: !is_set_

    void wait(); // block until is_set_

    bool try_wait();
    bool try_wait_for(...);
    bool try_wait_until(...);
};

The advantage of this primitive is that it maps to Windows directly. The disadvantage is that it's not robust when reset() is used, because this introduces a race. This is not a problem for futures since they are one-shot and their underlying ready_ event is never reset.

A more robust primitive is the so-called "eventcount", which doesn't need to be reset and avoids the race:

class event
{
private:

    unsigned k_;

public:

    event(); // post: k_ == 0

    void signal(); // effects: ++k_

    unsigned get(); // returns: k_
    void wait( unsigned k ); // block until k_ != k

    // ... try_wait ...
};

Typical use is:

// waiter

for( ;; )
{
    unsigned k = event_.get();
    if( predicate ) return;
    event_.wait( k );
}

// setter

predicate = true;
event_.signal();

Frank Mori Hess <frank.hess@nist.gov> writes:
1) In order to wait on an arbitrary number of futures, determined at runtime, we need some kind of container.
2) Since containers are homogeneous, that means some kind of type erasure (future<void>).
I don't think that waiting for any of a dynamic set of futures of different types is common. My new wait_for_any implementation allows waiting for one of a container of identical futures, or one of a compile-time set of futures of heterogeneous types. I think that should cover most real uses. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

On Friday 23 May 2008 10:50, Johan Torp wrote:
Yes, I'm thinking of a passive function which doesn't have an internal thread associated with it. One that is evaluated lazily as soon as someone is interested in it.
I think this could be very useful, do you?
Hmm, so it does seem that I would need something like what you've been suggesting to convert my scheduler over to waiting on futures instead of relying directly on signals/slots. Or, at least I would minimally need to be able to construct a future<X> from a future<Y>, where X and Y are unrelated types, by specifying a passive conversion function. So for example, when future<Y> fut_y; becomes ready, the future<X> would become ready with the value returned by conversion_function(fut_y.get()). The more general solution that accepts multiple arguments would be like future_barrier, except it would accept an additional first argument that is the conversion function, and it would return a future<R>, where the value R for the returned future is obtained by calling the conversion function with all the values from the input futures. So it might look something like:

R combining_function(const T1 &t1, const T2 &t2, ..., const TN &tn);

future<T1> f1;
future<T2> f2;
//...
future<TN> fN;

future<R> result = future_combining_barrier(&combining_function, f1, f2, ..., fN);

Frank Mori Hess-2 wrote:
The more general solution that accepts multiple arguments would be like future_barrier, except it would accept an additional first argument that is the conversion function, and it would return a future<R> where the value R for the returned future is obtained by calling the conversion function with all the values from the input futures. So it might look something like
R combining_function(const T1 &t1, const T2 &t2, ..., const TN &tn);
future<T1> f1;
future<T2> f2;
//...
future<TN> fN;
future<R> result = future_combining_barrier(&combining_function, f1, f2, ..., fN);
This is how I imagine the barrier/wait-for-all case looks too. The select/switch/wait-for-any case is trickier if we want to supply the same type safety for heterogeneous types.

Johan

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Monday, May 12, 2008 12:52 PM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
duration and absolute_time will have distinct types. In Boost at the moment, for boost::condition_variable, duration is anything that implements the Boost Date-Time duration concept, such as boost::posix_time::milliseconds, and absolute_time is boost::system_time.
Please could you add this to the documentation of Boost.Thread (duration_type is anything that implements the Boost Date-Time duration concept)? I have not seen it in the documentation. The same should be true for the other uses of duration_type as a template parameter. I think it is becoming urgent to settle the time versus duration concepts usable in real-time applications and to provide specific models.

template<typename lock_type, typename duration_type>
bool timed_wait(lock_type& lock, duration_type const& rel_time);

Effects: Atomically call lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), after the period of time indicated by the rel_time argument has elapsed, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Best,
Vicente

_____________________
Vicente Juan Botet Escriba

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Monday, May 12, 2008 12:52 PM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
duration and absolute_time will have distinct types. In Boost at the moment, for boost::condition_variable, duration is anything that implements the Boost Date-Time duration concept, such as boost::posix_time::milliseconds, and absolute_time is boost::system_time.
Please could you add this to the documentation of Boost.Thread (duration_type is anything that implements the Boost Date-Time duration concept)? I have not seen it in the documentation. The same should be true for the other uses of duration_type as a template parameter. I think it is becoming urgent to settle the time versus duration concepts usable in real-time applications and to provide specific models.
It's on my list.

Anthony

----- Original Message ----- From: "Johan Torp" <johan.torp@gmail.com> To: <boost@lists.boost.org> Sent: Monday, May 12, 2008 12:14 PM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
I thought the most common use case for futures was the active object pattern. We should all try to agree what use cases/design patterns/higher level abstractions are most important and which we want to support. IMHO, this should be top priority for the "future ambition". Even though no higher level abstractions built on futures will make it to C++0x or boost anytime soon, it's important that the future interface needn't change to support them in the - future :)
I don't know if the most common use case is the active object pattern; I use futures every time I call a function which could finish or not depending on its internals. Sometimes the function should wait on some messages, and then the best is to return a future. But the function can also finish on the same thread if it doesn't need external interaction. The client could later on wait on the future (active) or attach a callback to be called when the future is ready (reactive). For example:

{
    // ...
    future<T> f = call_some_fct();
    // ...
    // when the result is needed, either wait by using it, or
    if (!f)
        // attach some callback and change the internal state
    else
        // continue
    // ...
}

I agree with you that the best test for the future interface is to show some good usages in the tutorial and examples of the library. The threadpool library could be one, but not the only one.

Imagine now we had a function which returns a value and has an out parameter (maybe by reference):

Result f(InOut& ref);

Now we want to refactor this function (either because the new implementation will introduce an IO wait, or because the old blocking implementation can no longer be supported). Which should be the interface of the new function? The first thing could be to try:

future<Result> f(InOut& ref);

but nothing forbids the caller from using the ref parameter before the operation has completed, when it could be invalid. If we want consistency, what we would need is something like:

future<Result> f(future<InOut&>& ref);

IMO this does not work in any proposal? Do we need future to work with in/out references? Yet another example:

future<Out&> f();

Note that future<Out&> and future<InOut&> should not mean the same. Do we need two future-of-reference classes? future<InOut&> would need a constructor future<InOut&>(InOut&), but the associated promise should copy the value when doing the set_value. Any thoughts? Does all this make sense, or am I completely lost?

This example was there only to introduce other use cases, and in particular futures of reference types or pointer types. If you have a better choice for the f function, do not hesitate. Anthony, sorry for the shortcut (futures instead of unique_future or shared_future). In order to be coherent with the thread library (mutex/shared_mutex), shouldn't unique_future be named future?

Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Imagine now we had a function which returns a value and has an out parameter (maybe by reference):
Result f(InOut& ref);
Now we want to refactor this function (either because the new implementation will introduce an IO wait, or because the old blocking implementation can no longer be supported). Which should be the interface of the new function? The first thing could be to try:
future<Result> f(InOut& ref);
but nothing forbids the caller from using the ref parameter before the operation has completed, when it could be invalid. If we want consistency, what we would need is something like:
future<Result> f(future<InOut&>& ref);
IMO this does not work in any proposal?
You can write it with my proposal, but I'm not sure it means what you intend, given what you write below.
Do we need future to work with in/out references?
Yet another example: future<Out&> f();
Note that future<Out&> and future<InOut&> should not mean the same. Do we need two future-of-reference classes? future<InOut&> would need a constructor future<InOut&>(InOut&), but the associated promise should copy the value when doing the set_value.
If the promise associated with your future<InOut&> should copy the value, then what you want is a future<InOut>. If you had a future<InOut&>(InOut&) constructor, then you could /still/ use the original InOut& before the future had returned, just as if you passed a plain reference to the function. With my (updated) proposal, unique_future<T>::get() returns an rvalue-reference, so you could move/copy this into the InOut value you intend to use, or you could use shared_future<T>, where get() returns a const reference.
Anthony, sorry for the shortcut (futures instead of unique_future or shared_future). In order to be coherent with the thread library (mutex/shared_mutex), shouldn't unique_future be named future?
unique_future/shared_future is by analogy to unique_ptr/shared_ptr, which I think is a closer match than mutex/shared_mutex.

Anthony

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Tuesday, May 13, 2008 8:51 AM Subject: Re: [boost] Review Request: future library : what does future ofreferences exactly means?
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Imagine now we had a function which returns a value and has an out parameter (maybe by reference)
Result f(InOut& ref);
Now we want to refactor this function (either because the new implementation will introduce an IO wait, or because the old blocking implementation can no longer be supported). What should the interface of the new function be? The first thing could be to try
future<Result> f(InOut& ref);
but nothing forbids the caller from using the ref parameter before the operation has completed, at which point it could be invalid. If we want consistency, what we would need is something like
future<Result> f(future<InOut&>& ref);
IMO this does not work in any proposal?
You can write it with my proposal, but I'm not sure it means what you intend, given what you write below.
Do we need futures to work with in/out references?
Yet another example future<Out&> f();
Note that future<Out&> and future<InOut&> should not mean the same. Do we need two future-of-reference classes? future<InOut&> will need a constructor future<InOut&>(InOut&), but the associated promise should copy the value when doing the set_value.
If the promise associated with your future<InOut&> should copy the value, then what you want is a future<InOut>. If you had a future<InOut&>(InOut&) constructor, then you could /still/ use the original InOut& before the future had returned, just as if you passed a plain reference to the function.
With my (updated) proposal, unique_future<T>::get() returns an rvalue-reference, so you could move/copy this into the InOut value you intend to use, or you could use shared_future<T>, where get() returns a const reference.
Well, let me come back to my initial example. Suppose that I had

Result f(InOut& ref);
// ...
{
    InOut v = 0;
    // ...
    v = 13;
    Result r = f(v);
    // ... use r or v;
    g(r, v);
    v = 15;
}

Do you mean that the following works with your proposal?

unique_future<Result> f(shared_future<InOut>& ref);
// ...
{
    shared_future<InOut> v;
    v->get() = 0;
    // ...
    v->get() = 13;
    unique_future<Result> r = f(v);
    // ... use r or v;
    g(r->get(), v->get());
    v->get() = 15;
}
Anthony, sorry for the shortcut (futures instead of unique_future or shared_future). In order to be coherent with the thread library (mutex/shared_mutex), shouldn't unique_future be named future?
unique_future/shared_future is by analogy to unique_ptr/shared_ptr, which I think is a closer match than mutex/shared_mutex.
OK. I understand Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Well, let me come back to my initial example. Suppose that I had

Result f(InOut& ref);
// ...
{
    InOut v = 0;
    // ...
    v = 13;
    Result r = f(v);
    // ... use r or v;
    g(r, v);
    v = 15;
}

Do you mean that the following works with your proposal?

unique_future<Result> f(shared_future<InOut>& ref);
// ...
{
    shared_future<InOut> v;
    v->get() = 0;
    // ...
    v->get() = 13;
    unique_future<Result> r = f(v);
    // ... use r or v;
    g(r->get(), v->get());
    v->get() = 15;
}
Not quite. You can't set a value on a future by writing to the result of a call to get() - you need a promise or a packaged task for that. However, you can write:

void foo()
{
    promise<InOut> p;
    shared_future<InOut> v = p.get_future();
    p.set_value(13);
    unique_future<Result> r = f(v);
    // v.get() may have changed if f took v by reference
    g(r.get(), v.get());
}

Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Tuesday, May 13, 2008 10:03 PM Subject: Re: [boost] Review Request: future library : what does futureofreferences exactly means?
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Well, let me come back to my initial example. Suppose that I had

Result f(InOut& ref);
// ...
{
    InOut v = 0;
    // ...
    v = 13;
    Result r = f(v);
    // ... use r or v;
    g(r, v);
    v = 15;
}

Do you mean that the following works with your proposal?

unique_future<Result> f(shared_future<InOut>& ref);
// ...
{
    shared_future<InOut> v;
    v->get() = 0;
    // ...
    v->get() = 13;
    unique_future<Result> r = f(v);
    // ... use r or v;
    g(r->get(), v->get());
    v->get() = 15;
}
Not quite. You can't set a value on a future by writing to the result of a call to get() - you need a promise or a packaged task for that. However, you can write:
void foo()
{
    promise<InOut> p;
    shared_future<InOut> v = p.get_future();
    p.set_value(13);
    unique_future<Result> r = f(v);
    // v.get() may have changed if f took v by reference
    g(r.get(), v.get());
}
Sorry, this is exactly the behaviour I wanted to refactor. Thanks Vicente

vicente.botet wrote:
Imagine now we had a function which returns a value and has an out parameter (maybe by reference)
Result f(InOut& ref);
Now we want to refactor this function (either because the new implementation will introduce an IO wait, or because the old blocking implementation can no longer be supported). What should the interface of the new function be? The first thing could be to try
future<Result> f(InOut& ref);
but nothing forbids the caller from using the ref parameter before the operation has completed, at which point it could be invalid. If we want consistency, what we would need is something like
future<Result> f(future<InOut&>& ref);
Firstly, this is a dangerous design. The calling thread must keep the reference alive until the future is ready - no matter what. For instance, what if its owner thread tells it to terminate? Secondly, you can't prohibit the calling thread from accessing its original reference. So the problem isn't really solved. Presently, I haven't seen any use cases which would motivate reference support for futures. Not allowing references is a safety net in its own right. vicente.botet wrote:
Yet another example future<Out&> f();
This is also very dangerous, the promise-fulfilling thread must guarantee that the reference is valid until program termination as it can't detect when the future dies. Even if it could detect future destruction, it would be a strange design. Shared_ptrs or some kind of moving should be applied here. Johan -- View this message in context: http://www.nabble.com/Review-Request%3A-future-library-%28Gaskill-version%29... Sent from the Boost - Dev mailing list archive at Nabble.com.

--------------------------- Vicente Juan Botet Escriba ----- Original Message ----- From: "Johan Torp" <johan.torp@gmail.com> To: <boost@lists.boost.org> Sent: Tuesday, May 13, 2008 11:42 AM Subject: Re: [boost] Re view Request: future library : what does future of references exactly means?
vicente.botet wrote:
Imagine now we had a function which returns a value and has an out parameter (maybe by reference)
Result f(InOut& ref);
Now we want to refactor this function (either because the new implementation will introduce an IO wait, or because the old blocking implementation can no longer be supported). What should the interface of the new function be? The first thing could be to try
future<Result> f(InOut& ref);
but nothing forbids the caller from using the ref parameter before the operation has completed, at which point it could be invalid. If we want consistency, what we would need is something like
future<Result> f(future<InOut&>& ref);
Firstly, this is a dangerous design. The calling thread must keep the reference alive until the future is ready - no matter what. For instance, what if its owner thread tells it to terminate?
What do you propose instead? What do you think of

future<Result> f(const InOut& in, future<InOut>& out);

future<InOut> fv;
r = f(v, fv);
v = fv.get();
Secondly, you can't prohibit the calling thread from accessing its original reference. So the problem isn't really solved.
You are right, the problem is not solved.
Presently, I haven't seen any use cases which would motivate reference support for futures. Not allowing references is a safety net in its own right.
vicente.botet wrote:
Yet another example future<Out&> f();
This is also very dangerous, the promise-fulfilling thread must guarantee that the reference is valid until program termination as it can't detect when the future dies. Even if it could detect future destruction, it would be a strange design. Shared_ptrs or some kind of moving should be applied here.
This is no more dangerous than Out& f(); We know how dangerous it is and we use it all the time. This reference usually points to a member object, and the object can be deleted. Vicente

vicente.botet wrote:
If we want consistency what we would need is something like
future<Result> f(future<InOut&>& ref);
Firstly, this is a dangerous design. The calling thread must keep the reference alive until the future is ready - no matter what. For instance, what if it's owner thread tells it to terminate?
What do you propose instead? What do you think of

future<Result> f(const InOut& in, future<InOut>& out);

future<InOut> fv;
r = f(v, fv);
v = fv.get();
If the result were binary - succeed or "not possible" - I would use

unique_future<boost::optional<InOut> > f();

otherwise

unique_future<tuple<Result, InOut> > f();

or

unique_future<variant<ErrorCode, InOut> >

I'm assuming InOut is movable and we have r-value references. Otherwise, use unique_ptr/auto_ptr instead of InOut. If we need shared_futures, I would use shared_ptr instead of InOut. vicente.botet wrote:
Yet another example future<Out&> f();
This is also very dangerous, the promise-fulfilling thread must guarantee that the reference is valid until program termination as it can't detect when the future dies. Even if it could detect future destruction, it would be a strange design. Shared_ptrs or some kind of moving should be applied here.
This is no more dangerous than Out& f();
Sharing references between threads is always very dangerous and should be avoided. The owning thread typically cannot know when the other thread will access it, and must hence keep it alive until the program terminates and can never alter the data again (unless it's a concurrent object). vicente.botet wrote:
We know how dangerous it is and we use it all the time. This reference usually points to a member object, and the object can be deleted.
The single-threaded case is a lot less dangerous. As a side note, I think return-by-reference is an almost deprecated programming style. Code which depends on Boost can use auto_ptr, shared_ptr, variant, optional and tuples to easily return data, unless performance is _really_ critical. R-value references will remove the expensive copying for movable types too. Those are my thoughts on the matter. Johan
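Johan's by-value alternative can be sketched in later std:: terms (the function name and the `std::launch::deferred` policy are illustrative choices, not from any proposal here): instead of an InOut& parameter, everything the task produces travels through the future by value.

```cpp
#include <future>
#include <string>
#include <tuple>
#include <utility>

// Hypothetical refactoring: the "Result" and the would-be out-parameter are
// both returned through the future, so no reference outlives the call.
std::future<std::tuple<int, std::string>> start_work(std::string inout) {
    return std::async(std::launch::deferred, [inout]() mutable {
        int result = 42;           // the "Result"
        inout += " (updated)";     // the former InOut&, now an owned value
        return std::make_tuple(result, std::move(inout));
    });
}
```

The caller then unpacks both outputs after the wait, e.g. `auto [r, v] = start_work("x").get();`, with no aliasing of the caller's original object.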

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Sunday, May 11, 2008 11:53 AM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
* Finally, I've added a set_wait_callback() function to both promise and packaged_task. This allows for lazy-futures which don't actually run the operation to generate the value until the value is needed: no threading required. It also allows for a thread pool to do task stealing if a pool thread waits for a task that's not started yet. The callbacks must be thread-safe as they are potentially called from many waiting threads simultaneously. At the moment, I've specified the callbacks as taking a non-const reference to the promise or packaged_task for which they are set, but I'm open to just making them be any callable function, and leaving it up to the user to call bind() to do that.
Hi, Why don't you allow multiple callbacks? I suppose that this is related to the implementation of do_callback:

void do_callback(boost::unique_lock<boost::mutex>& lock)
{
    if(callback && !done)
    {
        boost::function<void()> local_callback = callback;
        relocker relock(lock);
        local_callback();
    }
}

You need to call all the callbacks with the mutex unlocked, and you need to protect against other concurrent set_wait_callback calls. So you would need to copy the list of callbacks before unlocking. Is this correct? Anyway, IMO, having a single callback could motivate a user wrapper which offers "buggy" multiple callbacks. Vicente
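The wait-callback idea quoted above enables lazy futures; `std::launch::deferred` later standardised something very similar, and can serve as a sketch here (the names below are illustrative, not the proposal's API): the work runs only when the value is actually requested, on the waiting thread.

```cpp
#include <future>

// Counter observed by the test to show the work is deferred. Single-threaded
// here, since a deferred task runs on the thread that calls get()/wait().
int computed = 0;

std::future<int> lazy_value() {
    return std::async(std::launch::deferred, [] {
        ++computed;   // executes only when someone waits on the future
        return 7;
    });
}
```

Creating the future does no work; the computation happens inside the first get().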

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Why don't you allow multiple callbacks? I suppose that this is related to the implementation of do_callback
void do_callback(boost::unique_lock<boost::mutex>& lock)
{
    if(callback && !done)
    {
        boost::function<void()> local_callback = callback;
        relocker relock(lock);
        local_callback();
    }
}
You need to call all the callbacks with the mutex unlocked, and you need to protect against other concurrent set_wait_callback calls. So you would need to copy the list of callbacks before unlocking.
Is this correct?
That is correct with respect to the implementation, but I don't actually see the need for multiple callbacks. The callbacks are set as part of the promise or packaged_task. I can't imagine why that would require multiple callbacks. In any case, the user can provide that facility on their own if required. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL
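Anthony's point that users can layer multiple callbacks themselves can be sketched with a minimal fan-out wrapper (hypothetical, not part of any proposal): register one `callback_list` as the single wait callback, and it invokes all the callbacks added to it.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical user-side wrapper: one registered slot that fans out to many.
struct callback_list {
    std::vector<std::function<void()>> cbs;

    void add(std::function<void()> f) { cbs.push_back(std::move(f)); }

    void operator()() const {
        for (const auto& f : cbs) f();  // invoke in registration order
    }
};
```

Note this inherits the single-callback thread-safety requirements: if callbacks may be added concurrently with invocation, the wrapper needs its own locking.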

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Wednesday, May 14, 2008 10:55 AM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Why don't you allow multiple callbacks? I suppose that this is related to the implementation of do_callback

void do_callback(boost::unique_lock<boost::mutex>& lock)
{
    if(callback && !done)
    {
        boost::function<void()> local_callback = callback;
        relocker relock(lock);
        local_callback();
    }
}

You need to call all the callbacks with the mutex unlocked, and you need to protect against other concurrent set_wait_callback calls. So you would need to copy the list of callbacks before unlocking.
Is this correct?
That is correct with respect to the implementation,
BTW, Braddock's implementation does a move of the list of callbacks before doing the callbacks. What do you think about this approach?
but I don't actually see the need for multiple callbacks. The callbacks are set as part of the promise or packaged_task. I can't imagine why that would require multiple callbacks. In any case, the user can provide that facility on their own if required.
What about the guarded schedule of Braddock?

template <typename T>
future<T> schedule(boost::function<T (void)> const& fn,
                   future<void> guard = future<void>())
{
    promise<T> prom; // create promise
    future_wrapper<T> wrap(fn, prom);
    guard.add_callback(boost::bind(&JobQueue3::queueWrapped<T>,
                                   this, wrap, prom));
    return future<T>(prom); // return a future created from the promise
}

Several tasks can be scheduled guarded by the same future. Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Wednesday, May 14, 2008 10:55 AM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Why don't you allow multiple callbacks? I suppose that this is related to the implementation of do_callback

void do_callback(boost::unique_lock<boost::mutex>& lock)
{
    if(callback && !done)
    {
        boost::function<void()> local_callback = callback;
        relocker relock(lock);
        local_callback();
    }
}

You need to call all the callbacks with the mutex unlocked, and you need to protect against other concurrent set_wait_callback calls. So you would need to copy the list of callbacks before unlocking.
Is this correct?
That is correct with respect to the implementation,
BTW, Braddock's implementation does a move of the list of callbacks before doing the callbacks. What do you think about this approach?
That's an interesting idea: rather than calling the callback every time some thread waits on a future, it's only called on the first call. You could make the callback clear itself when it was invoked if you want that behaviour, as the promise or packaged_task is passed in to the callback.
but I don't actually see the need for multiple callbacks. The callbacks are set as part of the promise or packaged_task. I can't imagine why that would require multiple callbacks. In any case, the user can provide that facility on their own if required.
What about the guarded schedule of Braddock?

template <typename T>
future<T> schedule(boost::function<T (void)> const& fn,
                   future<void> guard = future<void>())
{
    promise<T> prom; // create promise
    future_wrapper<T> wrap(fn, prom);
    guard.add_callback(boost::bind(&JobQueue3::queueWrapped<T>,
                                   this, wrap, prom));
    return future<T>(prom); // return a future created from the promise
}
Several task can be scheduled guarded by the same future.
That's a completion callback, not a wait callback. My proposal doesn't offer completion callbacks. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Wednesday, May 14, 2008 12:15 PM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
----- Original Message ----- From: "Anthony Williams" <anthony_w.geo@yahoo.com> To: <boost@lists.boost.org> Sent: Wednesday, May 14, 2008 10:55 AM Subject: Re: [boost] Review Request: future library (N2561/Williams version)
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Why don't you allow multiple callbacks? I suppose that this is related to the implementation of do_callback

void do_callback(boost::unique_lock<boost::mutex>& lock)
{
    if(callback && !done)
    {
        boost::function<void()> local_callback = callback;
        relocker relock(lock);
        local_callback();
    }
}

You need to call all the callbacks with the mutex unlocked, and you need to protect against other concurrent set_wait_callback calls. So you would need to copy the list of callbacks before unlocking.
Is this correct?
That is correct with respect to the implementation,
BTW, Braddock's implementation does a move of the list of callbacks before doing the callbacks. What do you think about this approach?
That's an interesting idea: rather than calling the callback every time some thread waits on a future, it's only called on the first call. You could make the callback clear itself when it was invoked if you want that behaviour, as the promise or packaged_task is passed in to the callback.
but I don't actually see the need for multiple callbacks. The callbacks are set as part of the promise or packaged_task. I can't imagine why that would require multiple callbacks. In any case, the user can provide that facility on their own if required.
What about the guarded schedule of Braddock?

template <typename T>
future<T> schedule(boost::function<T (void)> const& fn,
                   future<void> guard = future<void>())
{
    promise<T> prom; // create promise
    future_wrapper<T> wrap(fn, prom);
    guard.add_callback(boost::bind(&JobQueue3::queueWrapped<T>,
                                   this, wrap, prom));
    return future<T>(prom); // return a future created from the promise
}
Several task can be scheduled guarded by the same future.
That's a completion callback, not a wait callback. My proposal doesn't offer completion callbacks.
Why don't you want to provide completion callbacks? Do you have another proposal for completion callbacks? Maybe you could open the interface and return the last callback set. This would result in a chain of responsibilities, and the client must be aware of that, with two potential problems: a growing stack, and clients forgetting their responsibilities.

template<typename F, typename U>
boost::function0<void> set_wait_callback(F f, U* u)
{
    boost::function0<void> cb = callback;
    callback = boost::bind(f, boost::ref(*u));
    return cb;
}

In this case it is also needed to reset the callback; otherwise the old callback could be invoked again:

void do_callback(boost::unique_lock<boost::mutex>& lock)
{
    if(callback && !done)
    {
        boost::function<void()> local_callback = callback;
        callback.clear();
        relocker relock(lock);
        local_callback();
    }
}

Vicente

"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Why don't you want to provide completion callbacks? Do you have another proposal for completion callbacks?
I don't like them because you're chaining work onto the thread that sets the future value, and that strikes me as dangerous.
Maybe you could open the interface and return the last callback set. This would result in a chain of responsibilities, and the client must be aware of that, with two potential problems: a growing stack, and clients forgetting their responsibilities.
I really don't see the need for multiple wait callbacks. Anthony -- Anthony Williams | Just Software Solutions Ltd Custom Software Development | http://www.justsoftwaresolutions.co.uk Registered in England, Company Number 5478976. Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL

On Wednesday 14 May 2008 08:28 am, Anthony Williams wrote:
"vicente.botet" <vicente.botet@wanadoo.fr> writes:
Why don't you want to provide completion callbacks? Do you have another proposal for completion callbacks?
I don't like them because you're chaining work onto the thread that sets the future value, and that strikes me as dangerous.
Maybe you could open the interface and return the last callback set. This would result in a chain of responsibilities, and the client must be aware of that, with two potential problems: a growing stack, and clients forgetting their responsibilities.
I really don't see the need for multiple wait callbacks.
To add my 2 cents: in libpoet, I allow multiple completion callbacks by providing poet::future::connect_update(), which takes a slot (from thread_safe_signals). A completion callback allowed me to solve the "wait for multiple futures" problem which arose in my scheduler classes (which contain a queue of method requests, each of which must wait for a set of input futures before it can be executed). The completion callback notifies a condition variable which the scheduler thread is waiting on, so it can wake up and see if any method requests have become ready. I chose signals/slots over a simple callback because I find simple callbacks unbearably primitive (no multiple callbacks, no automatic connection management). I may be more enthusiastic about using signals/slots than most, however. As was mentioned earlier, it is possible to tack a signal onto a simple callback. You could bundle each future with a signal and make the callback just invoke the signal. However, that requires every future to have its own signal. By integrating the signals/slots into the future library, the future and all its copies can accept slots and forward them to a single signal which is invoked when their promise is fulfilled or reneged. If there are no completion callbacks, I regard at least the ability to wait on an arbitrary number of futures simultaneously as essential. Something that provides functionality analogous to POSIX select() on a set of file descriptors. -- Frank
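Frank's select()-like requirement can be sketched naively without completion callbacks (all names here are illustrative): poll each future with a short timeout until one becomes ready. A real implementation would instead block on a condition variable notified by a completion callback, exactly as Frank describes, to avoid spinning.

```cpp
#include <chrono>
#include <cstddef>
#include <future>
#include <vector>

// Hypothetical "wait for any" by polling: returns the index of the first
// future found ready. Busy-waits, so suitable only as an illustration.
std::size_t wait_for_any(std::vector<std::future<int>>& fs) {
    for (;;) {
        for (std::size_t i = 0; i < fs.size(); ++i) {
            if (fs[i].wait_for(std::chrono::milliseconds(1)) ==
                std::future_status::ready) {
                return i;
            }
        }
    }
}
```

If no promise is ever fulfilled, this loops forever; a production version would take a deadline and expose a timeout result.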
participants (10)
-
Anthony Williams
-
Anthony Williams
-
Braddock Gaskill
-
Frank Mori Hess
-
Frank Mori Hess
-
Johan Torp
-
Kowalke Oliver (QD IT PA AS)
-
Peter Dimov
-
Ronald Garcia
-
vicente.botet