
Hi, I was looking at the futures implementation in the boost vault. It looks very promising, I must say. One of the open issues, however, is thread cancellation, which I imagine will be important in the case of the "or" futures group. Has there been any progress on this front? One of the coolest things about boost libraries has been the gradual improvement of legacy C/C++ code by non-invasive means. I can't imagine solving this problem without invasive means (or traditionally undesirable means, i.e. something like thread->die()). I'm not saying that it's non-invasive or nothing, but there are people smarter than me working on this problem! Thanks in advance, Sohail

I just started looking at the boost::futures zip file in the vault as well. I'd really like to use it for a current project if it is near maturity. Can I use it in conjunction with my own thread pool or with an active object? For example, I'd want something like:

  int f();
  future<int> val = myThreadPool.queue<int>(f);
  val.ready()
  val.cancel()
  etc.

The documentation is incomplete except for the simple case, which appears to spawn a new thread for every new future? The ability to use a future with the asio::io_service in particular would be quite powerful.

Thanks, Braddock Gaskill Dockside Vision Inc

On Tue, 20 Feb 2007 12:11:56 -0800, Sohail Somani wrote:
I was looking at the futures implementation in the boost vault. It looks very promising, I must say. One of the open issues however is thread cancellation which I imagine will be important in the case of the "or" futures group.
Has there been any progress on this front? One of the coolest things about boost libraries has been the gradual improvement of legacy C/C++ code by non-invasive means. I can't imagine solving this problem without invasive means (or traditionally undesirable means, i.e. something like thread->die()). I'm not saying that its non-invasive or nothing, but there are people smarter than me working on this problem!
Thanks in advance,
Sohail

Braddock Gaskill wrote:
I just started looking at the boost::futures zip file in the vault as well. I'd really like to use it for a current project if it is near maturity.
It's far from mature. Just a first (but functional) sketch.
Can I use it in conjunction with my own thread pool or with an active object?
If you write the thread pool or active object support, certainly. The library as is doesn't know anything about thread pools or active objects. It just spawns a new thread for every future.
For example, I'd want something like: int f(); future<int> val = myThreadPool.queue<int>(f); val.ready() val.cancel() etc..
val.cancel() isn't supported at all, mainly because Boost.Thread doesn't support cancel.
The documentation is incomplete except for the simple case, which appears to spawn a new thread for every new future?
Yes.
The ability to use a future with the asio::io_service in particular would be quite powerful.
Agreed. And I'm happy to assist, if needed. Regards Hartmut
Thanks, Braddock Gaskill Dockside Vision Inc

On Fri, 09 Mar 2007 13:57:52 -0600, Hartmut Kaiser wrote:
Can I use it in conjunction with my own thread pool or with an active object?
If you write the thread pool or active object support, certainly. The library as is doesn't know anything about thread pools or active objects. It just spawns a new thread for every future.
I noticed that none of the "future" C++ standardization proposals by Hinnant and Dimov, nor Sutter's Concur, implicitly creates a new thread upon future construction the way your simple_future implementation does. As far as I can tell, their future classes are very simple and almost completely agnostic to the scheduling, threading, etc. particulars. I.e., the Hinnant proposal at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2094.html#futures would do the following if a new thread was needed:

  std::future<int> f1 = std::launch_in_thread(f);

but more likely in the real world, since thread creation is expensive:

  std::future<int> f1 = std::launch_in_pool(f);

In the meanwhile, Sutter is on an Active Object kick:

  future<int> f1 = myActiveProxy.f();

I noticed that your implementation of simple_future takes a considerably different tack, where future_base is derived and typed, and the derived future sub-class itself has knowledge of how to launch the thread with the functions. Glancing through the code, this seems to add considerable complexity and dependencies to class future, and encourages users to sub-type class future, a la simple_future. Is there any reason task creation, scheduling, and threading can't be entirely disentangled from the future concept?
val.cancel() isn't supported at all, mainly because Boost.Thread doesn't support cancel.
Thread lifetime won't necessarily (usually?) be linked to asynchronous function call lifetime. Sutter makes use of a future::cancel(), and Dimov notes "Once a thread layer and a cancelation layer are settled upon, future would need to offer support for canceling the active asynchronous call when the caller is no longer interested in it;" Maybe this could be handled with an optional "cancel" function passed to the future constructor, the way a custom deleter can be passed to a shared_ptr. If cancel() is a no-op for some scenarios, it could either throw or be ignored. I have a serious need for this functionality, so I'm willing to put in a bit of effort to work up some solution. Thanks, Braddock Gaskill Dockside Vision Inc
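As a rough illustration of the shared_ptr-deleter analogy (using the later std::promise/std::future purely for concreteness; cancellable_future and cancel_fn are invented names, not part of any proposal mentioned here):

#include <exception>
#include <functional>
#include <future>
#include <iostream>
#include <stdexcept>

// Illustrative only: a future handle that carries an optional cancel
// callback, in the spirit of passing a custom deleter to shared_ptr.
template <class T>
class cancellable_future
{
public:
    cancellable_future(std::future<T> f, std::function<void()> cancel_fn = {})
        : f_(std::move(f)), cancel_fn_(std::move(cancel_fn)) {}

    // Request cancellation; a no-op if no cancel hook was supplied.
    void cancel() { if (cancel_fn_) cancel_fn_(); }

    T get() { return f_.get(); }

private:
    std::future<T> f_;
    std::function<void()> cancel_fn_;
};

int main()
{
    std::promise<int> p;
    bool cancelled = false;  // stands in for whatever flag the scheduler really uses

    cancellable_future<int> f(p.get_future(), [&cancelled] { cancelled = true; });

    f.cancel();  // the consumer loses interest
    if (cancelled)
        p.set_exception(std::make_exception_ptr(std::runtime_error("cancelled")));

    try { f.get(); }
    catch (const std::exception& e) { std::cout << e.what() << '\n'; }
}

The point is only that the cancel hook travels with the future handle, while whoever owns the worker decides what "cancel" actually means.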

Braddock Gaskill wrote:
I noticed that none of the "future" C++ standardization proposals by Hinnant and Dimov, as well as Sutter's Concur implicitly create a new thread upon future construction like your simple_future implementation. As far as I can tell, their future classes are very simple and almost completely agnostic to the scheduling, threading, etc particulars.
Indeed, the intent has been for futures to support arbitrary asynchronous task execution, not just the simple thread per task model. One could imagine using futures over an interprocess protocol or TCP/IP. My current proposal for std::future<R> is available at http://www.pdimov.com/cpp/N2185.html and will be part of the pre-meeting mailing. We'll see how it fares. As an aside, does anyone have a success story about active objects? I can't seem to grasp their significance; in particular, how are they supposed to scale to many cores/HW threads if every access is implicitly serialized?

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Peter Dimov
Indeed, the intent has been for futures to support arbitrary asynchronous task execution, not just the simple thread per task model. One could imagine using futures over an interprocess protocol or TCP/IP.
My current proposal for std::future<R> is available at
May I ask why you saw it necessary to have std::fork in that incarnation? Won't you /always/ do the same thing? I'm not sure why we couldn't have it be implied that construction of a future implies "fork". Thanks, Sohail

Sohail Somani wrote:
Indeed, the intent has been for futures to support arbitrary asynchronous task execution, not just the simple thread per task model. One could imagine using futures over an interprocess protocol or TCP/IP.
My current proposal for std::future<R> is available at
May I ask why you saw it necessary to have std::fork in that incarnation?
Lawrence Crowl has made an excellent point that it's important for us to support the most common use case in a simple way, even if this means that we have to sacrifice generality to do so. Hence std::fork and the simple examples.
Won't you /always/ do the same thing?
I'm not sure why we couldn't have it be implied that construction of a future implies "fork".
I didn't want to sacrifice the generality we already did have. :-) You can still make your non-blocking API (an active object, a C++ RPC interface) return a future. std::fork is merely a convenient way to use an efficient system-provided thread pool. Nothing stops you from writing your own 'fork' that doesn't, as shown in N2096.
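To illustrate that last point, here is a hand-rolled 'fork' sketch written with the later std:: names (std::promise, std::future, std::thread) as stand-ins; my_fork is a made-up name, it spawns a thread per call rather than using a pool, and a void specialization is omitted:

#include <future>
#include <thread>
#include <utility>

// Run f() on its own thread and hand back a future for its result. The
// future side neither knows nor cares how the work was launched.
template <class F>
auto my_fork(F f) -> std::future<decltype(f())>
{
    using R = decltype(f());
    std::promise<R> p;
    std::future<R> result = p.get_future();

    std::thread([p = std::move(p), f]() mutable {
        try {
            p.set_value(f());
        } catch (...) {
            p.set_exception(std::current_exception());
        }
    }).detach();

    return result;
}

// Usage: std::future<int> fi = my_fork([] { return 42; });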

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Peter Dimov
I'm not sure why we couldn't have it be implied that construction of a future implies "fork".
I didn't want to sacrifice the generality we already did have. :-) You can still make your non-blocking API (an active object, a C++ RPC interface) return a future. std::fork is merely a convenient way to use an efficient system-provided thread pool. Nothing stops you from writing your own 'fork' that doesn't, as shown in N2096.
I think I get it. I suppose I was thinking along the lines of (default) allocators. I've implemented a futures-like concept that went kind of along those lines so I guess I am a bit biased :) Thanks, Sohail

Peter Dimov wrote:
Braddock Gaskill wrote:
I noticed that none of the "future" C++ standardization proposals by Hinnant and Dimov, as well as Sutter's Concur implicitly create a new thread upon future construction like your simple_future implementation. As far as I can tell, their future classes are very simple and almost completely agnostic to the scheduling, threading, etc particulars.
Indeed, the intent has been for futures to support arbitrary asynchronous task execution, not just the simple thread per task model. One could imagine using futures over an interprocess protocol or TCP/IP.
My current proposal for std::future<R> is available at
http://www.pdimov.com/cpp/N2185.html
and will be part of the pre-meeting mailing. We'll see how it fares.
As an aside, does anyone have a success story about active objects? I can't seem to grasp their significance; in particular, how are they supposed to scale to many cores/HW threads if every access is implicitly serialized?
Sure, they're useful. Couple ways they can scale. You might have a large number of totally independent active objects running at any given time. For example, imagine an 'endpoint' handler for telephone calls -- it's a complex state machine that executes a bunch of commands as new events arrive. If you handle lots of calls and you map 1 per thread then you are effectively scaling to many cores by having lots of active objects. Probably a better design is to have a pool of threads wake up the AO when a new event arrives, b/c at any moment most of the objects are idle and there's no point in tying up a thread. Totally different approach is for a single active object to have a pool of threads to dispatch execution of long running tasks within the AO. In this design even though the start of tasks is serialized the end might not be. The AO needs to be essentially stateless for this approach to work. HTH, Jeff
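For readers who have not seen the pattern, here is a stripped-down sketch of the serialized flavor Jeff describes (one queue, one worker thread per object), written against modern std:: facilities; active_object and enqueue are invented names, and a real 'call object' would queue richer events than int-returning functions:

#include <condition_variable>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

class active_object
{
public:
    active_object() : done_(false), worker_([this] { run(); }) {}

    ~active_object()
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // Queue a call; the caller gets a future for the eventual result.
    std::future<int> enqueue(std::function<int()> f)
    {
        auto task = std::make_shared<std::packaged_task<int()>>(std::move(f));
        std::future<int> result = task->get_future();
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push([task] { (*task)(); });
        }
        cv_.notify_one();
        return result;
    }

private:
    void run()
    {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !q_.empty(); });
                if (done_ && q_.empty())
                    return;
                job = std::move(q_.front());
                q_.pop();
            }
            job();  // calls run one at a time: only this thread touches the object's state
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_;
    std::thread worker_;
};

// Usage: active_object ao;  std::future<int> r = ao.enqueue([] { return 7; });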

Jeff Garland wrote:
Peter Dimov wrote:
As an aside, does anyone have a success story about active objects? I can't seem to grasp their significance; in particular, how are they supposed to scale to many cores/HW threads if every access is implicitly serialized?
Sure, they're useful. Couple ways they can scale. You might have a large number of totally independent active objects running at any given time.
I figured that many active objects would be able to scale, I just wasn't able to come up with a realistic use case.
For example, imagine an 'endpoint' handler for telephone calls -- it's a complex state machine that executes a bunch of commands as new events arrive. If you handle lots of calls and you map 1 per thread then you are effectively scaling to many cores by having lots of active objects.
I'm afraid that I still don't get it. What does an active object represent in this scheme, and who is invoking its methods and waiting for its results? [...]
Totally different approach is for a single active object to have a pool of threads to dispatch execution of long running tasks within the AO. In this design even though the start of tasks is serialized the end might not be. The AO needs to be essentially stateless for this approach to work.
A stateless object is not an interesting case since it's essentially a bunch of functions inside a namespace. One could simply use future+fork to spawn the calls. The classic active object idiom, as I understand it, deals with the shared state problem by implicitly serializing the method calls via a message queue, so that concurrent access to the state is eliminated. It's possible that the term has grown since then. :-)

Peter Dimov wrote:
For example, imagine an 'endpoint' handler for telephone calls -- it's a complex state machine that executes a bunch of commands as new events arrive. If you handle lots of calls and you map 1 per thread then you are effectively scaling to many cores by having lots of active objects.
I'm afraid that I still don't get it. What does an active object represent in this scheme, and who is invoking its methods and waiting for its results?
Well, for sake of discussion, let's say it represents the 'call' between one or more telephones. So when someone starts to place a call a new active object/thread is created on a server to process the call. Depending on the kind of phone system it will do stuff like collect digits, execute timeouts, see if the other end is available, keep track of billing, setup network connections, handle disconnects from phones, etc. Each method on the object is inherently asynchronous and often depends on getting information from remote servers. This kind of object obviously would live in a larger distributed system, but at the server level the strategy is to code the 'call object' as an active object that responds to events. In the real world it might be a tad more complicated -- like you might have active objects for devices as well -- but hopefully that explains the basic idea.
[...]
Totally different approach is for a single active object to have a pool of threads to dispatch execution of long running tasks within the AO. In this design even though the start of tasks is serialized the end might not be. The AO needs to be essentially stateless for this approach to work.
A stateless object is not an interesting case since it's essentially a bunch of functions inside a namespace. One could simply use future+fork to spawn the calls.
Sure, but the point of the active object is that the concurrency approach is a detail hidden behind the interface of the object.
The classic active object idiom, as I understand it, deals with the shared state problem by implicitly serializing the method calls via a message queue, so that concurrent access to the state is eliminated. It's possible that the term has grown since then. :-)
Ok, I was a bit sloppy... the object can have state, it's just that the methods that are spawned to execute in a thread pool must be independent of any state changes the others make. In the case of some objects (eg: the call object) the order in which things arrive impacts the results; for other kinds of objects that isn't true. If you haven't already you might want to have a look at this paper: http://citeseer.ist.psu.edu/lavender96active.html Jeff

Jeff Garland wrote:
Sure, but the point of the active object is that the concurrency approach is a detail hidden behind the interface of the object.
It's clear that you're using a broader definition of "active object" - any object that provides an asynchronous interface. I'm not, I'm interested in the specific case of:
The classic active object idiom, as I understand it, deals with the shared state problem by implicitly serializing the method calls via a message queue, so that concurrent access to the state is eliminated.
The significance of the classic definition is that you can take an existing single threaded object and turn it into an active object _automatically_. In Concur, for example, the language can do that for you; the various proposals for a Boost implementation do something similar. If you use the broader definition, AO becomes a pattern, and you can't implement it in either the language or the library.
If you haven't already you might want to have a look at this paper:
I will, thank you.

On Sat, 10 Mar 2007 01:54:57 +0200, Peter Dimov wrote:
My current proposal for std::future<R> is available at http://www.pdimov.com/cpp/N2185.html and will be part of the pre-meeting mailing. We'll see how it fares.
Thanks Peter,

I coded up a quick straightforward implementation of your proposal. I'll make it available as soon as I test it a little. Maybe it can be grafted into the vault implementation to provide composites, etc. I have a few questions/comments:

1) void set_exception( exception_ptr p );

This is tricky. I don't really want to pass and throw an exception pointer to the caller, because responsibility for deleting the exception pointer will be very difficult to get right. Imagine the case where two objects each hold references to the same future, and so the exception gets re-thrown twice as they both try to access the value. Which one deletes the exception pointer? Should the future handle deletion responsibility when the last future<> reference is gone? What type should the pointer be? We can't assume that all exceptions are even derived from std::exception. Passing by value is a no-go because of slicing issues due to the unknown exact type of the exception (even the calling code probably won't know it).

2) Are multiple calls to set_value() permitted for a single future object?

Your prior proposal (N2096) returned the value by reference, implying that multiple set_value() calls could not be allowed. This proposal returns by value, so multiple calls are at least possible. IMHO, it should not be allowed. The future object does not have any way to ensure that the "consumer" gets every successive value the "producer" might set. Too confusing.

3) What should 'timespec' be? I'm using boost::threads::xtime.

4) set_cancel_handle(thread::handle)

This thread handle seems to assume that future<> is being used with a thread-per-invocation, and does not appear to provide a mechanism for canceling a queued invocation, which is necessary for at least my application. I'd need a generic "callback-on-cancel()".

5) join()/try_join() vs wait()/ready()

As far as I can tell, you renamed wait() and ready() to join() and try_join(). In my opinion, these names are far less clear, and again my anti-thread-per-invocation bias doesn't like the implied thread semantics. In particular, try_join() makes little sense in a polling context. A naive programmer would be more likely to want to use has_value() instead, which would be WRONG if the method throws:

  while (!has_value())  // infinite loop if has_exception
      do something;

6) operator R() const;

My opinion here, but if I understand this then the semantics are far too subtle.

  future<int> x = std::fork(myfunc);
  // 200 lines later
  z = x + y;  // this blocks or throws, mystifying young programmers everywhere

I liked your prior proposal N2096, which would have:

  z = x() + y;  // x is not your grandfather's variable

Also, what is the meaning of:

  mysubroutine(x)  // Does this block or throw, or just use a future argument?

What is the meaning if mysubroutine is declared:

  template<T> mysubroutine(T a);

Thanks, Braddock Gaskill Dockside Vision Inc

Braddock Gaskill wrote:
On Sat, 10 Mar 2007 01:54:57 +0200, Peter Dimov wrote:
My current proposal for std::future<R> is available at http://www.pdimov.com/cpp/N2185.html and will be part of the pre-meeting mailing. We'll see how it fares.
Thanks Peter,
I coded up a quick straight-forward implementation of your proposal. I'll make it available as soon as I test it a little. Maybe it can be grafted into the vault implementation to provide composites, etc.
I have one, too; look at the 'Implementability' section for the links.
I have a few questions/comments:
1) void set_exception( exception_ptr p);
This is tricky. I don't really want to pass and throw to the caller an exception pointer because responsibility for deleting the exception pointer will be very difficult to get right. Imagine the case where two objects each hold references to the same future, and so the exception gets re-thrown twice as they both try to access the value. Which one deletes the exception pointer?
Should the future handle deletion responsibility when the last future<> reference is gone?
What type should the pointer be? We can't assume that all exceptions are even derived from std::exception.
exception_ptr is described in http://www.pdimov.com/cpp/N2179.html and is typically a reference-counted smart pointer. If the proposal for language support doesn't pass, we'll have to live with something along the lines of http://www.pdimov.com/cpp/N2179/exception_ptr.hpp http://www.pdimov.com/cpp/N2179/exception_ptr.cpp as linked from N2185.
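For reference, the intended usage pattern looks roughly like this (shown with the std:: spellings N2179 proposes; the promise/future plumbing around it is elided):

#include <exception>
#include <iostream>
#include <stdexcept>

// How a futures library is meant to use the N2179 interface: capture on
// the producer side, rethrow on the consumer side.
int main()
{
    std::exception_ptr ep;  // reference-counted handle to a stored exception

    try {
        throw std::runtime_error("worker failed");
    } catch (...) {
        ep = std::current_exception();   // producer side: what set_exception() would receive
    }

    try {
        if (ep)
            std::rethrow_exception(ep);  // consumer side: what future::get() would do
    } catch (const std::exception& e) {
        std::cout << e.what() << '\n';
    }
}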
2) Are multiple calls to set_value() permitted for a single future object?
Allowed, with the second and subsequent calls ignored.
3) What should 'timespec' be? I'm using boost::threads::xtime
'struct timespec' as described in POSIX <time.h>, basically:

  struct timespec
  {
      time_t tv_sec;
      long   tv_nsec;
  };
4) set_cancel_handle(thread::handle )
This thread handle seems to assume that future<> is being used with a thread-per-invocation, and does not appear to provide a mechanism for canceling a queued invocation, which is necessary for at least my application. I'd need a generic "callback-on-cancel()".
Yes, you are right that set_cancel_handle is limited to canceling threads. I was short on time since the pre-mailing deadline was yesterday so I went with what I already had implemented. set_cancel_callback would've been more general, but could open the door to deadlocks. I'll probably revise the paper for the meeting to generalize the cancel support to accept an arbitrary callback if I'm satisfied with the semantics of such a design.
5) join()/try_join() vs wait()/ready()
As far as I can tell, you renamed the wait() and ready() to join() and try_join(). In my opinion, these names are far less clear, and again my anti-thread-per-invocation bias doesn't like the implied thread semantics.
Yes, I could go either way on that one. I opted for consistency with the join/try_join/timed_join family of thread functions.
6) operator R() const;
My opinion here, but if I understand this then the semantics of this are far too subtle.
future<int> x = std::fork(myfunc);
// 200 lines later
z = x + y;  // this blocks or throws, mystifying young programmers everywhere
Another tradeoff. I think that the implicit conversion here is worth keeping because the parallelized syntax is closer to the traditional sequential one. It's a matter of taste.
Also, what is the meaning of: mysubroutine(x) // Does this block or throw, or just use a future argument?
What is the meaning if mysubroutine is declared: template<T> mysubroutine(T a);
Whether this is a bug or a feature is also a matter of taste. The conversion allows some mysubroutines to work on a future<X> as if it were an X, doing the 'obvious' thing.
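A toy model of that conversion, just to make the tradeoff visible (toy_future is invented; a real future would block in the conversion until the value is ready, and get() here stands in for whatever explicit accessor a proposal provides):

#include <iostream>

// A stand-in for a future with an implicit conversion to its result type.
template <class R>
class toy_future
{
public:
    explicit toy_future(R v) : v_(v) {}
    operator R() const { return v_; }  // N2185-style implicit conversion
    R get() const { return v_; }       // hypothetical explicit accessor
private:
    R v_;
};

int add_one(int v) { return v + 1; }

int main()
{
    toy_future<int> x(41);
    int a = x;           // conversion: with a real future this could block or throw
    int b = add_one(x);  // converts silently, doing the 'obvious' thing
    int c = x.get();     // explicit and greppable
    std::cout << a << ' ' << b << ' ' << c << '\n';
}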

In the spirit of greppability, which is very useful in the case of *_cast, I hope the proposal can be reworked into a syntax that one can grep for when trying to find instances of where a statement may block. -----Original Message----- From: boost-bounces@lists.boost.org on behalf of Peter Dimov Sent: Sat 3/10/2007 5:17 AM To: boost@lists.boost.org Subject: Re: [boost] [futures] boost::futures
future<int> x = std::fork(myfunc);
// 200 lines later
z = x + y;  // this blocks or throws, mystifying young programmers everywhere
Another tradeoff. I think that the implicit conversion here is worth keeping because the parallelized syntax is closer to the traditional sequential one. It's a matter of taste.

Here is another question/comment as I move through this implementation: I see a strong need for future<void>. If I build my multi-threaded task scheduling system around future, I will likely have some invocations which do not return a value, but which I do want to synchronize with, receive exceptions from, and possibly cancel(). Has there been any work/thoughts on how to handle this? Should future<void> be allowed? Thanks for the other info, Peter, and for your implementation - very useful. Braddock Gaskill Dockside Vision Inc

Braddock Gaskill wrote:
Here is another question/comment as I move through this implementation:
I see a strong need for future<void>.
If I build my multi-threaded task scheduling system around future, I will likely have some invocations which do not return a value, but which I do want to synchronize with, receive exceptions from, and possibly cancel().
Has there been any work/thoughts how to handle this? Should future<void> be allowed?
N2185 does include a future<void> specialization.

Peter Dimov wrote:
As an aside, does anyone have a success story about active objects? I can't seem to grasp their significance; in particular, how are they supposed to scale to many cores/HW threads if every access is implicitly serialized?
I don't know if this applies directly, but I've recently looked at Intel's Threading Building Blocks and I must say I'm quite impressed by how they've handled things. (It's free to download for evaluation, see http://www.intel.com/software/products/tbb/) Their library is solely aimed at parallelizing CPU-limited algorithms across many cores, by means of a task scheduler and some nifty breadth-first/depth-first evaluation schemes. It's different from the concept of an 'active object', which may reside on different threads/processes/CPUs/machines/planets, but I think it could be one direction to think of in terms of futures and how they spawn recursively to create job objects that are handled by a thread pool (typically one thread per core). The two problems are a bit different, but I just thought I should mention it here as I liked it a lot and I think a boost-ish implementation of something similar would be of great value to the world. Cheers, /Marcus

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Braddock Gaskill
I noticed that none of the "future" C++ standardization proposals by Hinnant and Dimov, as well as Sutter's Concur implicitly create a new thread upon future construction like your simple_future implementation. As far as I can tell, their future classes are very simple and almost completely agnostic to the scheduling, threading, etc particulars.
You are right. The idea is to be able to scale to whatever the node can handle. So 100 futures on a single processor node would probably not execute in 100 threads(!)
Is there any reason task creation, scheduling, and threading can't be entirely disentangled from the future concept?
Maybe not entirely, but at some point the future would probably tell some "future scheduler" to schedule the future. I think this is as far as the futures concept should know about scheduling or threads (i.e., not much!)
val.cancel() isn't supported at all, mainly because Boost.Thread doesn't support cancel.
Thread lifetime won't necessary (usually?) be linked to asynchronous function call lifetime. Sutter makes use of a future::cancel(), and Dimov notes "Once a thread layer and a cancelation layer are settled upon, future would need to offer support for canceling the active asynchronous call when the caller is no longer interested in it;"
I think the real power of futures is in future groups, which would require thread cancellation. It seems like you would have to modify the worker code either to tell the API that you are cancellable or to check whether you've been asked to be cancelled. I think an API would be ideal, which is probably what Dimov is getting at. Sohail PS: IMVHO active objects are iffy, I like futures.
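One cooperative shape this could take, sketched with std:: facilities (the cancel_token and work names are invented; the worker periodically checks a shared flag rather than being killed):

#include <atomic>
#include <chrono>
#include <future>
#include <memory>
#include <thread>

// A worker that has been modified to "check if you've been asked to be
// cancelled": it polls a shared flag between slices of work.
int work(std::shared_ptr<std::atomic<bool>> cancel_token)
{
    int progress = 0;
    while (!cancel_token->load()) {
        ++progress;  // a slice of real work
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        if (progress == 1000)
            break;   // finished normally
    }
    return progress;
}

int main()
{
    auto cancel_token = std::make_shared<std::atomic<bool>>(false);
    std::future<int> f = std::async(std::launch::async, work, cancel_token);

    cancel_token->store(true);  // e.g. an "or" group already got its first result
    int done = f.get();         // the worker notices the flag and returns early
    (void)done;
}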

Braddock Gaskill wrote:
I noticed that none of the "future" C++ standardization proposals by Hinnant and Dimov, as well as Sutter's Concur implicitly create a new thread upon future construction like your simple_future implementation. As far as I can tell, their future classes are very simple and almost completely agnostic to the scheduling, threading, etc particulars.
ie, the Hinnant proposal at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2094.html#futures
In short, I completely agree with your points. But remember, the futures implementation in the vault was written long before any of these proposals were published, so it's just a useful experiment, even if not with optimal results. I'm personally very interested in the futures concept as well, so please count me in when it comes to any new implementations. We should try to get it as close as possible to a future standard, though. Regards Hartmut

Hi Braddock, On Fri, 09 Mar 2007 09:05:42 -0500, "Braddock Gaskill" <braddock@braddock.com> said:
The ability to use a future with the asio::io_service in particular would be quite powerful.
I've attached a quick and dirty futures implementation that I wrote last year to demonstrate how futures and asio might fit together. Unlike the various futures proposals that are around, I prefer to keep the producer and consumer interfaces separate. Hence I have promise<T> (producer interface) and future<T> (consumer interface) -- these names are borrowed from the Alice programming language. Cheers, Chris

Christopher Kohlhoff wrote:
Unlike the various futures proposals that are around, I prefer to keep the producer and consumer interfaces separate. Hence I have promise<T> (producer interface) and future<T> (consumer interface) -- these names are borrowed from the Alice programming language.
promise<> is an interesting name. The best I had been able to come up with so far was source<>. I also like the (obvious in hindsight) approach of making future<> constructible from promise<>. I was exploring something like

  pair< future<R>, source<R> > create_channel();

This does have the advantage that producers can only get a source/promise and consumers can only get a future, but I wasn't quite satisfied with it. To elaborate on Chris's point, the significance of keeping the producer and the consumer interface separate is that it allows future<R> to be made convertible to future<R2> whenever R is convertible to R2 (or R2 is void); this also allows extensions in the spirit of Frank Mori Hess's operator[], while still preserving the full generality of the producer (i.e. it doesn't tie it to a specific executor). I didn't have time before the deadline to flesh out such a design, but if someone is willing and able to defend a similar proposal in Oxford, I will be glad to help with it.
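A minimal sketch of that channel idea, written against the std:: types that later standardized the producer/consumer split (create_channel itself is not from any of the proposals quoted here):

#include <future>
#include <utility>

// Hand the producer only the promise end and the consumer only the future.
template <class R>
std::pair<std::future<R>, std::promise<R>> create_channel()
{
    std::promise<R> producer_end;
    std::future<R> consumer_end = producer_end.get_future();
    return { std::move(consumer_end), std::move(producer_end) };
}

// Usage:
//   auto channel = create_channel<int>();
//   give std::move(channel.second) to the producer, keep channel.first as the consumer's future.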

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Peter Dimov
To elaborate on Chris's point, the significance of keeping the producer and the consumer interface separate is that it allows future<R> to be made convertible to future<R2> whenever R is convertible to R2 (or R2 is void); this also allows extensions in the spirit of Frank Mori Hess's operator[],
Are you getting me confused with someone else? I don't remember any operator[]? One additional feature I am planning to add to libpoet is the ability to set a promise with a future. In terms of Chris's promise class, that would be something like a promise::operator()(const Future<T> &) This occurred to me while thinking about writing an active queue example program, where the queue would accept and return future<T> elements. Allowing a promise to be set with a future value would save having to poll the futures in the active queue object until they are ready to be used as values to fulfill promises. Frank

Hess, Frank wrote:
To elaborate on Chris's point, the significance of keeping the producer and the consumer interface separate is that it allows future<R> to be made convertible to future<R2> whenever R is convertible to R2 (or R2 is void); this also allows extensions in the spirit of Frank Mori Hess's operator[],
Are you getting me confused with someone else? I don't remember any operator[]?
I needed to go and check the archives; with so much independent development in this area it's easy to make attribution mistakes. My otherwise unreliable memory turned out to be correct in this case; in: http://lists.boost.org/Archives/boost/2007/03/117571.php you say: "You can do things like assign a Future<T> to a Future<U> if T is implicitly converible to U, without blocking. You can also extract elements from future containers and such, like getting a Future<int> from a Future<std::vector<int> > without blocking." Getting a Future<int> from Future< vector<int> > is an application of operator[], although I admit I don't know how it's expressed in your implementation.

On Mon, 12 Mar 2007 16:38:42 +0200, "Peter Dimov" <pdimov@mmltd.net> said:
To elaborate on Chris's point, the significance of keeping the producer and the consumer interface separate is that it allows future<R> to be made convertible to future<R2> whenever R is convertible to R2 (or R2 is void);
Are there any implications for the movability of the result if you allow such conversions? Cheers, Chris

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Christopher Kohlhoff
I've attached a quick and dirty futures implementation that I wrote last year to demonstrate how futures and asio might fit together. Unlike the various futures proposals that are around, I prefer to keep the producer and consumer interfaces separate. Hence I have promise<T> (producer interface) and future<T> (consumer interface) -- these names are borrowed from the Alice programming language.
Splitting promise from future seems like a good idea, it should make the future class in libpoet a little less confusing. One thing your code reminded me of is forwarding exceptions to the future, something I put off then completely forgot about in libpoet. Since I'm trying to provide an active function object that wraps an ordinary passive function, for the general case I would need to call something like your promise::fail() from a catch block that catches exceptions thrown by the passive function. Unfortunately, that would require something like a templated catch. It seems the best I can do is provide special handling for some particular type of exception, like a boost::shared_ptr<std::exception> and if the passive function throws anything else, it just gets forwarded to the future as something like a poet::unknown_exception. Frank

Hess, Frank wrote:
Splitting promise from future seems like a good idea, it should make the future class in libpoet a little less confusing. One thing your code reminded me of is forwarding exceptions to the future, something I put off then completely forgot about in libpoet. Since I'm trying to provide an active function object that wraps an ordinary passive function, for the general case I would need to call something like your promise::fail() from a catch block that catches exceptions thrown by the passive function. Unfortunately, that would require something like a templated catch. It seems the best I can do is provide special handling for some particular type of exception, like a boost::shared_ptr<std::exception> and if the passive function throws anything else, it just gets forwarded to the future as something like a poet::unknown_exception.
This won't be a problem in the next C++ if N2179 passes: http://www.pdimov.com/cpp/N2179.html My implementation provides a partial emulation of N2179: http://www.pdimov.com/cpp/N2179/exception_ptr.hpp http://www.pdimov.com/cpp/N2179/exception_ptr.cpp

On Monday 12 March 2007 15:32 pm, Peter Dimov wrote:
This won't be a problem in the next C++ if N2179 passes:
http://www.pdimov.com/cpp/N2179.html
My implementation provides a partial emulation of N2179:
http://www.pdimov.com/cpp/N2179/exception_ptr.hpp http://www.pdimov.com/cpp/N2179/exception_ptr.cpp
Yes, that's just what I was looking for. I didn't see any license specified on your implementation though. Is it all right for me to copy your implementation into a Boost-licensed library? Also, it would be nice if the top-level webpage on your site had a way to navigate down to your exception_ptr and future proposals (or am I just missing it?). -- Frank

Frank Mori Hess wrote:
On Monday 12 March 2007 15:32 pm, Peter Dimov wrote:
This won't be a problem in the next C++ if N2179 passes:
http://www.pdimov.com/cpp/N2179.html
My implementation provides a partial emulation of N2179:
http://www.pdimov.com/cpp/N2179/exception_ptr.hpp http://www.pdimov.com/cpp/N2179/exception_ptr.cpp
Yes, that's just what I was looking for. I didn't see any license specified on your implementation though. Is it all right for me to copy your implementation into a Boost-licensed library? Also, it would be nice if the top-level webpage on your site had a way to navigate down to your exception_ptr and future proposals (or am I just missing it?).
I will update the files to use the Boost license in a few days (and will link to the papers from the front page as soon as the mailing is made officially available on the committee web site).

Peter Dimov wrote:
I will update the files to use the Boost license in a few days (and will link to the papers from the front page as soon as the mailing is made officially available on the committee web site).
FYI, I did that. Two more papers that may be of interest: N2178, Proposed Text for Chapter 30, Thread Support Library [threads] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2178.html N2195, Proposed Text for Chapter 29, Atomic Operations Library [atomics] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2195.html

Peter Dimov wrote:
N2195, Proposed Text for Chapter 29, Atomic Operations Library [atomics] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2195.html
That's excellent news. Has anyone started on a boostification of that with support for real hardware targets (i.e. processor memory barriers)? Regards Timmo Stange

Timmo Stange wrote:
Peter Dimov wrote:
N2195, Proposed Text for Chapter 29, Atomic Operations Library [atomics] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2195.html
That's excellent news. Has anyone started on a boostification of that with support for real hardware targets (i.e. processor memory barriers)?
Not yet.

On Tue, 13 Mar 2007 00:30:39 +1100, Christopher Kohlhoff wrote:
various futures proposals that are around, I prefer to keep the producer and consumer interfaces separate. Hence I have promise<T> (producer interface) and future<T> (consumer interface) -- these names are
Okay, this got me thinking. A weakness in the future<> concept, as I understand it, is that if a future is never set(), then the invoking thread can hang waiting for it. Not very RAII. The situation is much improved if future<T> and promise<T> are split, and promise<T> is reference counted. If the last promise<T> for a particular future<T> goes out of scope, then any thread waiting on the matching future<T> would be failed with a promise_broken exception or some such. Is this already part of the promise<T> concept? Braddock Gaskill Dockside Vision Inc
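This is essentially the behavior the eventual standard facility adopted; a small demonstration with std::promise, where the waiting side gets a std::future_error carrying the broken_promise code:

#include <future>
#include <iostream>

int main()
{
    std::future<int> f;
    {
        std::promise<int> p;
        f = p.get_future();
    }   // p dies here, unfulfilled

    try {
        f.get();
    } catch (const std::future_error& e) {
        std::cout << "broken promise: " << e.what() << '\n';
    }
}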

On Monday 12 March 2007 17:12 pm, Braddock Gaskill wrote:
A weakness in the future<> concept, as I understand it, is that if a future is never set(), then the invoking thread can hang waiting for it. Not very RAII.
The situation is much improved if future<T> and promise<T> are split, and promise<T> is reference counted. If the last promise<T> for a particular future<T> goes out of scope, then any thread waiting on the matching future<T> would be failed with a promise_broken exception or some such.
Is this already part of the promise<T> concept?
No, at least it wasn't in the code Chris posted. I like the idea though, thanks. I'm going to incorporate it into my code. -- Frank

On Mon, 12 Mar 2007 17:25:27 -0400, "Frank Mori Hess" <frank.hess@nist.gov> said:
On Monday 12 March 2007 17:12 pm, Braddock Gaskill wrote:
A weakness in the future<> concept, as I understand it, is that if a future is never set(), then the invoking thread can hang waiting for it. Not very RAII.
The situation is much improved if future<T> and promise<T> are split, and promise<T> is reference counted. If the last promise<T> for a particular future<T> goes out of scope, then any thread waiting on the matching future<T> would be failed with a promise_broken exception or some such.
Is this already part of the promise<T> concept?
No, at least it wasn't in the code Chris posted. I like the idea though, thanks. I'm going to incorporate it into my code.
I agree, it's an excellent idea. I think it would be more idiomatic to name the exception "broken_promise" though :) Cheers, Chris

On Mar 12, 2007, at 9:30 AM, Christopher Kohlhoff wrote:
Hi Braddock,
On Fri, 09 Mar 2007 09:05:42 -0500, "Braddock Gaskill" <braddock@braddock.com> said:
The ability to use a future with the asio::io_service in particular would be quite powerful.
I've attached a quick and dirty futures implementation that I wrote last year to demonstrate how futures and asio might fit together. Unlike the various futures proposals that are around, I prefer to keep the producer and consumer interfaces separate. Hence I have promise<T> (producer interface) and future<T> (consumer interface) -- these names are borrowed from the Alice programming language.
I've been looking at this:

...
future<std::string> resolve(std::string host_name)
{
  promise<std::string> result;
  boost::asio::ip::tcp::resolver::query query(host_name, "0");
  resolver_.async_resolve(query,
      boost::bind(&Resolver::handle_resolve, this, _1, _2, result));
  return result;
}

private:
void handle_resolve(boost::system::error_code ec,
    boost::asio::ip::tcp::resolver::iterator iterator,
    promise<std::string> result)
{
  if (ec)
    result.fail(boost::system::system_error(ec));
  else
    result(iterator->endpoint().address().to_string());
}

And am somewhat disappointed that the low-level worker function needs to be aware of promise. What if there existed a:

template <class R>
  template <class F>
    promise_functor<R, F>
    promise<R>::operator()(F f);

This would be in addition to the current setter functionality in promise. You could call this on a promise: result(f) and a promise_functor<R, F> would be returned as the result (I'm not at all very attached to the name promise_functor). Executing that functor would be roughly equivalent to setting the promise:

future<std::string> resolve(std::string host_name)
{
  promise<std::string> result;
  boost::asio::ip::tcp::resolver::query query(host_name, "0");
  resolver_.async_resolve(query,
      result(boost::bind(&Resolver::handle_resolve, this, _1, _2)));
  return result;
}

private:
std::string handle_resolve(boost::system::error_code ec,
    boost::asio::ip::tcp::resolver::iterator iterator)
{
  if (ec)
    throw boost::system::system_error(ec);
  return iterator->endpoint().address().to_string();
}

The promise_functor adaptor would execute f under a try/catch, setting the promise's value on successful termination, else catching the exception and setting the promise's failure mode. Variadic templates and perfect forwarding would make the implementation nearly painless (today it would be a pain). Having a ready-to-run functor that can be produced from a promise might ease setting the promise's value with code that is (by design or by accident) promise-ignorant. Just a thought, and not a fully fleshed out one. -Howard

On Wed, 14 Mar 2007 15:50:34 -0400, Howard Hinnant wrote:
You could call this on a promise: result(f) and a promise_functor<R, F> would be returned as the result (I'm not at all very attached to the name promise_functor). Executing that functor would be roughly equivalent to setting the promise:
I believe this is similar to the 'class task' wrapper that Peter Dimov proposed in N2096 at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2096.html I think easy to use wrappers are a vital part of implementing future. I'm not sure I see a great advantage to putting the wrapper functionality directly within the promise class constructor though.

On Mar 14, 2007, at 4:05 PM, Braddock Gaskill wrote:
On Wed, 14 Mar 2007 15:50:34 -0400, Howard Hinnant wrote:
You could call this on a promise: result(f) and a promise_functor<R, F> would be returned as the result (I'm not at all very attached to the name promise_functor). Executing that functor would be roughly equivalent to setting the promise:
I believe this is similar to the 'class task' wrapper that Peter Dimov proposed in N2096 at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2096.html
<nod> You caught me! :-)
I think easy to use wrappers are a vital part of implementing future. I'm not sure I see a great advantage to putting the wrapper functionality directly within the promise class constructor though.
Actually I was proposing a promise operator()(F) which returns the wrapper, thus keeping the wrapper data (the F) out of the promise class. I.e. promise_functor is Peter's class task. And promise is roughly Peter's class task with the functor stripped out of it. The functor, the promise, and the future all need to point to the same underlying return_value. By having the promise create the return_value, and then subsequently be responsible for producing the future and functor (or equivalently, the future and functor are constructed from the promise), you ensure that all three "handles" point to the same return_value. The template type of the functor is clumsy to have to deal with directly, and so it is best if that type is deduced somewhere, such as in a functor factory function, or a templated promise::operator(). -Howard
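Roughly, such an adaptor might look like the following (written against std::promise so the sketch is self-contained; promise_functor and make_promise_functor are the hypothetical names from this thread, and the void and multi-argument cases are omitted):

#include <exception>
#include <future>
#include <memory>
#include <utility>

// Run F under try/catch and install either the result or the exception
// into the shared promise.
template <class R, class F>
class promise_functor
{
public:
    promise_functor(std::shared_ptr<std::promise<R>> p, F f)
        : p_(std::move(p)), f_(std::move(f)) {}

    void operator()()
    {
        try {
            p_->set_value(f_());                          // normal result
        } catch (...) {
            p_->set_exception(std::current_exception());  // failure mode
        }
    }

private:
    std::shared_ptr<std::promise<R>> p_;  // shared so copies of the functor agree
    F f_;
};

// Factory so the functor type F is deduced rather than spelled out.
template <class R, class F>
promise_functor<R, F> make_promise_functor(std::shared_ptr<std::promise<R>> p, F f)
{
    return promise_functor<R, F>(std::move(p), std::move(f));
}

// Usage (hypothetical):
//   auto p = std::make_shared<std::promise<std::string>>();
//   std::future<std::string> f = p->get_future();
//   auto job = make_promise_functor(p, [] { return std::string("done"); });
//   job();  // fulfills the promise, or records the exception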

On Wed, 14 Mar 2007 15:50:34 -0400, "Howard Hinnant" <hinnant@twcny.rr.com> said:
And am somewhat disappointed that the low-level worker function needs to be aware of promise.
In some cases you do want it to be aware of the promise if you chain many async operations together, and only fulfill promise at the end of the chain. However a simpler interface for the one-async-operation case would be nice, I agree.
What if there existed a:
template <class R> template <class F> promise_functor<R, F> promise<R>::operator()(F f);
This would be in addition to the current setter functionality in promise.
Hmm, I think I'd prefer a non-member function, to make the distinction between the act of setting the promise and the act of composing another function object clearer when reading the code. I don't see any implementation reason to make it a member function - is there one? Cheers, Chris

On Mar 14, 2007, at 5:33 PM, Christopher Kohlhoff wrote:
What if there existed a:
template <class R> template <class F> promise_functor<R, F> promise<R>::operator()(F f);
This would be in addition to the current setter functionality in promise.
Hmm, I think I'd prefer a non-member function, to make the distinction between the act of setting the promise and the act of composing another function object clearer when reading the code. I don't see any implementation reason to make it a member function - is there one?
No I don't think there's a reason. A friend free function can do anything a member can. I think we're just looking at syntax:

future<std::string> resolve(std::string host_name)
{
  promise<std::string> result;
  boost::asio::ip::tcp::resolver::query query(host_name, "0");
  resolver_.async_resolve(query,
      result(boost::bind(&Resolver::handle_resolve, this, _1, _2)));
  return result;
}

vs:

future<std::string> resolve(std::string host_name)
{
  promise<std::string> result;
  boost::asio::ip::tcp::resolver::query query(host_name, "0");
  resolver_.async_resolve(query,
      make_promise_functor(result,
          boost::bind(&Resolver::handle_resolve, this, _1, _2)));
  return result;
}

-Howard

Howard Hinnant wrote:
What if there existed a:
template <class R> template <class F> promise_functor<R, F> promise<R>::operator()(F f);
I'd actually expect this syntax to call f and install the result into the promise in one easy step. :-)

On Mar 14, 2007, at 8:01 PM, Peter Dimov wrote:
Howard Hinnant wrote:
What if there existed a:
template <class R>
template <class F>
promise_functor<R, F>
promise<R>::operator()(F f);
I'd actually expect from this syntax to call f and install the result into the promise in one easy step. :-)
That sounds like a thread launch, which indeed ought to be easy. We've got (at least) 4 basic concepts floating around here:

* return_value<R>
  This is a private implementation detail class that nobody ever sees. It is not copyable nor movable. It represents the result of a computation, which can be either normal or exceptional. It has getters and setters for normal and exceptional results. It lives on the heap and various handles share its ownership.

* future<R>  // return_value getter
  This forwards a getter request to return_value<R>. The getter may need to wait for a signal from a setter.

* promise<R>  // return_value setter
  This is a setter of return_value<R>, signaling any waiting getters. Setting normal or exceptional result is explicit.

* functor<R, F>  // return_value setter
  This is a functor that executes a function F and stores the normal or exceptional result in a return_value<R>, possibly via a promise<R>. The result setting is implicit, depending on whether F returns normally, or has an exception propagate out of it.

Purposefully (and wisely imho) left out of this mix is what actually executes a setter for return_value<R>. functor<R, F> and promise<R> are simply thin interfaces to setting return_value<R>. But they don't represent asynchronous execution. Thus you can create a function that calls promise<R>::set(r) which is asynchronously executed, say in a thread or a thread_pool. Or you can adapt an existing function by constructing a functor<R, F> with it, and then execute that adapted function in a thread or thread_pool. Either way, the setting of return_value can happen synchronously, in a dedicated thread asynchronously, or be queued in a container of functors (possibly priority sorted) waiting to be processed.

Ultimately I think we want to support syntax along the lines of:

  future<T> t = f( b );
  c = g( d );
  e = h(t(), c);

whereby functors and promises are hidden in relatively low level code, and high level code can just get a future, assume that it is being executed asap, and then join with it later. Perhaps that morphs the above example into:

  std::thread_pool launch_in_pool;

  int main()
  {
      std::future<T> t = launch_in_pool(std::bind(f, b));
      c = g( d );
      e = h(t(), c);
  }

-Howard

I've made a futures implementation in the hopes of combining the best features from all the proposals and various implementations currently floating around. It is still rough around the edges, but is mostly covered by unit tests.

http://braddock.com/~braddock/future/

I'm looking for input. I'd be willing to add additional functionality needed and progress it to the point where it could be formally submitted into boost if there is interest. It could perhaps be merged into the existing futures implementation in the vault to add combined groups, etc.

I used Peter's exception_ptr code, which he said he would post under a boost license.

-Braddock Gaskill

Below are the README comments:

GOAL:
Provide a definitive future implementation with the best features of the numerous implementations and proposals floating around, in the hopes of avoiding multiple incompatible future implementations (coroutines, active objects, asio, etc). Also, to explore the combined implementation of the best future concepts.

FEATURES:
-split promise<T>/future<T> concept.
-Agnostic to scheduling, any active object implementation, thread model, etc. A basic future which can be used in many contexts.
-promise<T> is reference counted - if the last promise<T> goes out of scope before the future is set, the future is automatically failed with a broken_promise exception.
-add_callback() permits arbitrary functions to be called when the future is set. This is needed to build non-intrusive compound waiting structures, like future_groups, or to better support continuation scheduling concepts, as in coroutines.
-Value type does not require a default constructor, and no wasteful excess default construction is performed. Value type only needs to be copy constructible.
-set_cancel_handler() permits an arbitrary function to be called to cancel the future, if possible.
-atomic set_or_throw() permits the caller to detect if a set was successful (ie, if the caller was the first to transition the future to ready). This enables promise/future to be used as a type of atomic semaphore.
-future<void> and promise<void> specializations, as per Dimov proposal.
-Pass-by-reference specializations, such as future<int&>, permit return by reference, as per Dimov proposal.
-Uses Peter Dimov's exception_ptr implementation to make a best effort to pass exception types between threads.
-implementation of all three of the f, f(), and f.get() blocking accessor syntaxes, for better or for worse.

TODO:
The following class of sub-type assignments should be made possible, as discussed by Frank Hess and Peter Dimov on the list.

class A {};
class B : public A {};
promise<shared_ptr<B> > p;
future<shared_ptr<A> > f(p);

future_group classes and simple function wrappers should be provided, although kept separate from the main future<T> implementation.

RATIONALE:
Some method names were reverted to Peter Dimov's earlier N2096 proposal because of better semantics, in the author's opinion.
set_value() -> set() - primary use of future
join() -> wait() - more descriptive, less thread-centric
timed_join() -> timed_wait()
try_join() -> ready() - join implies blocking, but this returns state

The user-supplied cancel_handler is called after the future/promise is unlocked. Otherwise, the cancel_handler could cause a deadlock if it invokes any methods upon the promise/future. This means that worker threads COULD call set() upon a canceled future, but these calls would be ignored. This is (IMHO) correct behavior.

A cancel() invocation sets the future's state with a future_cancel exception - it does not guarantee that the work is actually canceled, because in many cases it cannot be stopped.

Should set() silently ignore calls on an already set promise, or should it throw? Peter Dimov's proposal says ignore, and I'm inclined to agree, because if a user sets up a scenario where a promise is shared between multiple functions, or where a future is canceled, it is impossible for the author of the individual function to foresee the exceptional situation. I offer set_or_throw() to still allow use of future for atomic signaling.

REFERENCES:
Transporting Values and Exceptions between Threads
N2096, Peter Dimov
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2096.html

Proposed Text for Parallel Task Execution, Peter Dimov (DRAFT)
http://www.pdimov.com/cpp/N2185.html

Multithreading API for C++0X - A Layered Approach, Howard Hinnant
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2027.html

Concur and C++ Futures, Herb Sutter, presentation video & slides
http://video.google.com/videoplay?docid=7625918717318948700
http://www.nwcpp.org/Downloads/2006/The_Concur_Project_-_NWCPP.pdf

Language Support for Transporting Exceptions Between Threads
N2179, Peter Dimov
http://www.pdimov.com/cpp/N2179.html

Active Object libpoet framework by Frank Mori Hess
http://source.emptycrate.com/projects/activeobjects/
http://www.comedi.org/projects/libpoet/index.html

Boost Coroutine library

Boost Vault future library

Chris Kohlhoff's future implementation, posted to boost-devel list

Enjoy,
Braddock Gaskill
Dockside Vision inc
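As a quick illustration of the split promise/future concept and the broken_promise behaviour listed above, a hedged sketch (the class and method names are taken from the README; the header name and everything else is assumed, not taken from the actual sources):

#include "future.hpp"                      // assumed header name for the implementation above
#include <iostream>

future<int> make_future_and_drop_promise()
{
    promise<int> p;
    return future<int>(p);                 // the only promise dies when this function returns
}

int main()
{
    future<int> f = make_future_and_drop_promise();
    try {
        f.get();                           // the promise was never set ...
    } catch (broken_promise const&) {      // ... so the future fails automatically
        std::cout << "broken promise detected" << std::endl;
    }

    // set_or_throw() as an atomic "first writer wins" signal:
    promise<int> p1;
    future<int> f1(p1);
    p1.set_or_throw(1);                    // first transition to ready succeeds
    p1.set(2);                             // silently ignored, per the rationale above
    std::cout << f1.get() << std::endl;    // prints 1
    return 0;
}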

Hello,
I've made a futures implementation in the hopes of combining the best features from all the proposals and various implementations currently floating around. It is still rough around the edges, but is mostly covered by unit tests.
In your implementation I'm missing operators for logical combination of futures, like those supported by Thorsten Schütt's futures implementation (vault files). Oliver

On Wed, 14 Mar 2007 06:49:57 +0100, Oliver.Kowalke wrote:
I've made a futures implementation in the hopes of combining http://braddock.com/~braddock/future/
In your implementation I'm missing operators for logical combination of futures, like those supported by Thorsten Schütt's futures implementation (vault files).
Hi Oliver,

Yes, I would like to either add this, or merge into Schutt's implementation if he is interested. An example of an unintrusive operator||() is below.

I added a public add_callback() method which calls an observer function when the future's state changes. This should allow grouping of futures or combined waits to be handled unintrusively outside of the main future classes. This public observer method is also needed for things like future-aware task scheduling (like Frank Hess's draft active object or boost::coroutine).

template <typename T>
struct future_or {
  future_or(future<T> &a, future<T> &b) : a_(a), b_(b) {}
  future_or(const future_or &f) : a_(f.a_), b_(f.b_), p_(f.p_) {}

  struct future_or_failed : public std::exception { };

  void operator()() {
    boost::mutex::scoped_lock lck(mutex_);
    if (a_.has_value())
      p_.set(a_.get());
    else if (b_.has_value())
      p_.set(b_.get());
    else if (a_.has_exception() && b_.has_exception()) // both failed
      p_.set_exception(future_or_failed());
  }

  future<T> a_,b_;
  promise<T> p_;
  boost::mutex mutex_;
};

template<typename T>
future<T> operator||(future<T> &a, future<T> &b) {
  future_or<T> fa(a, b);
  a.add_callback(fa);
  return fa.p_;
}

void TestCase7() {
  // test future or || example
  promise<int> p1;
  promise<int> p2;
  future<int> f1(p1);
  future<int> f2(p2);
  future<int> f1_or_f2 = f1 || f2;
  BOOST_CHECK(!f1_or_f2.ready());
  p1.set(97);
  p2.set(-5); // will be ignored by the or
  BOOST_CHECK(f1_or_f2.ready());
  BOOST_CHECK_EQUAL(f1_or_f2.get(), 97);
}

On Wednesday, March 14, 2007 12:44 PM, Braddock Gaskill wrote:
I've made a futures implementation in the hopes of combining http://braddock.com/~braddock/future/
In your implementation I'm missing operators for logical combination of futures, like those supported by Thorsten Schütt's futures implementation (vault files).
Hi Oliver, Yes, I would like to either add this, or merge into Schutt's implementation if he is interested. And example of an unintrusive operator||() is below.
<snip> Looks good to me - so could the other members agree to Braddock's implementation in order to have one base for further development/evolution of the futures concept? Oliver

On Wed, 14 Mar 2007 07:43:47 -0400, Braddock Gaskill wrote:
An example of an unintrusive operator||() is below.

template<typename T>
future<T> operator||(future<T> &a, future<T> &b) {
  future_or<T> fa(a, b);
  a.add_callback(fa);
  return fa.p_;
}
Minor bug in the code I just posted, I left out b.add_callback(fa) in here. Are there any thoughts on how && and || operators should properly handle exceptions?

Braddock Gaskill wrote:
Minor bug in the code I just posted, I left out b.add_callback(fa) in here.
Are there any thoughts on how && and || operators should properly handle exceptions?
These operators should create a new (composite) future, exposing the same interface as the (simple) futures you're composing. This composite future should handle exceptions in a similar way to the embedded ones, i.e. propagate the exceptions caught in the embedded futures to the caller, as appropriate. Also, does your implementation of operator|| allow for constructs like f1 || f2 || f3? Regards Hartmut

On Wed, 14 Mar 2007 07:27:06 -0500, Hartmut Kaiser wrote:
Braddock Gaskill wrote: These operators should create a new (composite) future, exposing the same interface as the (simple) futures you're composing. This composite future should handle the exceptions in a similar way as the embedded ones, i.e. propagate the exceptions caught in the embedded futures to the caller, as appropriate.
So, that would mean that for f3 = f1 || f2, if f1 propagates an exception while f2 succeeds, f3 still propagates an exception?
Also, does your implementation of operator|| allow for constructs like f1 || f2 || f3 ?
I don't have a real implementation of composition. I posted that simple operator|| example to show that it can be done without any changes to the base future<T>/promise<T> implementation, so that they can proceed more or less independently. I'm hoping we can make use of an existing composition implementation.

As I understand it, the point you raise is whether you want f1 || f2 || f3 to have the semantics of returning a future<variant<T1, T2, T3> > without modification to the base future class. I would think that could be done if operator||(future<T1>, future<T2>) actually returns a proxy class which is implicitly convertible to a future, and properly specialized operator|| functions are provided. I haven't given this much thought yet though.

But that raises another point with the variant/tuple semantics... if I do f3 = f1 || f2; then f3 would have the type future<variant<f1::type, f2::type> >. If I then do a separate f5 = f3 || f4; does f5 have the type future<variant<f1::type, f2::type, f4::type> >, or future<variant<variant<f1::type, f2::type>, f4::type> >? Should there be a separate future_group concept to disambiguate?

Composition overloading with || or && gets very hairy or impossible if the current proposal to have a future implicitly convertible to its value, with blocking, goes through. Maybe Peter has thoughts on this.

I like the tuples/variant idea of composition, but how do you handle exceptions? The semantics of composition still seem far less settled than the basic future concept, at least in my mind. Any references are appreciated; I have seen very few. I would like to discuss it.

Thanks for the feedback!
Braddock Gaskill
Dockside Vision Inc

Braddock Gaskill wrote:
These operators should create a new (composite) future, exposing the same interface as the (simple) futures you're composing. This composite future should handle the exceptions in a similar way as the embedded ones, i.e. propagate the exceptions caught in the embedded futures to the caller, as appropriate.
So, that would mean that for f3 = f1 || f2, if f1 propagates an exception while f2 succeeds, f3 still propagates an exception?
I think this is very much use case dependent, and anything you code into a library for good will hurt somebody else. So my best guess here is to implement policy-based behavior, allowing the user to customize exception handling and propagation.
Also, does your implementation of operator|| allow for constructs like f1 || f2 || f3 ?
I don't have a real implementation of composition. I posted that simple operator|| example to show that it can be done without any changes to the base future<T>/promise<T> implementation, so that they can proceed more or less independently. I'm hoping we can make use of an existing composition implementation.
The implementation of composition in the futures library in the vault could be used as a starting point; however, from today's POV I'd prefer to reimplement it using Eric Niebler's excellent proto library. This simplifies the required meta-template magic considerably.
As I understand it, the point you raise is whether you want f1 || f2 || f3 to have the semantics of returning a future<variant<T1, T2, T3> > without modification to the base future class. I would think that could be done if operator||(future<T1>, future<T2>) actually returns a proxy class which is implicitly convertible to a future, and properly specialized operator|| functions are provided. I haven't given this much thought yet though.
But that raises another point with the variant/tuple semantics... if I do f3 = f1 || f2; f3 would then have the type future<variant<f1::type, f2::type> >. If I then do a separate f5 = f3 || f4;
then does f5 have the type future<variant<f1::type, f2::type, f4::type> >, or future<variant<variant<f1::type, f2::type>, f4::type> >?
As long as you have variants only, you can flatten the structure: future<variant<f1::type, f2::type, f4::type> >. But as soon as you start combining || and &&, this isn't always possible anymore. An additional optimization can be done if all futures in an operator|| sequence return the same type, in which case you don't need to use a variant.
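A tiny sketch of the result-type computation Hartmut describes, limited to the same-type optimization he mentions (the trait name or_result is invented here purely for illustration):

#include <boost/variant.hpp>

// result type of f1 || f2 when the operands yield L and R
template <class L, class R>
struct or_result
{
    typedef boost::variant<L, R> type;     // different types: a variant is needed
};

template <class T>
struct or_result<T, T>
{
    typedef T type;                        // same type on both sides: no variant at all
};

// e.g. or_result<int, std::string>::type is boost::variant<int, std::string>,
//      while or_result<int, int>::type is plain int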
Should there be a separate future_group concept to disambiguate?
What do you mean by that?
Composition overloading with || or && gets very hairy or impossible if the current proposal to have a future implicitly convertible to its value, with blocking, goes through. Maybe Peter has thoughts on this.
I personally don't like the idea of having the future convert implicitly to its value type at all. I'm not sure this can be implemented completely fail-proof without running into surprising behavior.
I like the tuples/variant idea of composition, but how do you handle exceptions?
As I said, you might need to use policies allowing for the user to specify what to do with exceptions.
The semantics of composition still seems far less settled than the basic future concept, at least in my mind. Any references are appreciated, I have seen very few. I would like to discuss it.
Regards Hartmut

On 3/14/07, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
[about combining futures] But as soon as you start combining || and &&, this isn't always possible anymore. An additional optimization can be done if all futures in an operator|| sequence return the same type, in which case you don't need to use a variant.
Instead of going with generalized future composition, I've tried the route of 'wait_all(<future-list>)' and 'wait_any(<future-tuple>)'. Sure, it is less flexible than overloading || and && and permitting complex expressions, but it is easy to implement, its behavior as a blocking point is immediate, and most importantly it is simple to understand. If you really need to wait for multiple futures with complex patterns, simply use a full-blown reactor (i.e. asio).
Should there be a separate future_group concept to disambiguate?
What do you mean by that?
Composition overloading with || or && gets very hairy or impossible if the current proposal to have a future implicitly convertible to its value, with blocking, goes through. Maybe Peter has thoughts on this.
I personally don't like the idea of having the future convert implicitly to its value type at all. I'm not sure this can be implemented completely fail-proof without running into surprising behavior.
I agree. What about an optional<T> like interface for future<T>? You can do:

future<some_type1> a = async_op1(...);
some_type1 xa = *a; // may block if not yet ready

or, as a more convoluted example:

future<some_type1> a = async_op1(...);
future<some_type2> b = async_op2(...);
future<void> timeout = post_timer(timeout_interval);

while(true) {
  wait_any(a, b, timeout);
  if(a) {
    some_type1 xa = *a; // guaranteed not to block
    // do something with xa
  }
  if(b) {
    some_type2 xb = *b; // guaranteed not to block
    // do something with xb
  }
  if((a && b) || timeout) { // no boolean operator overloading here, just plain builtins
    a.cancel(); // no-op if already completed.
    b.cancel(); // ditto
    break;
  }
}
I like the tuples/variant idea of composition, but how do you handle exceptions?
In this case wait_any could report a failed future as ready. operator* would throw the forwarded exception. Not sure about that though. The futures I had implemented didn't support exception forwarding. gpd

On Wed, 14 Mar 2007 17:47:09 +0100, Giovanni Piero Deretta wrote:
Instead of going with generalized future composition, I've tried the route of 'wait_all(<future-list>) ' and 'wait_any(<future-tuple>)'
This is exactly what I personally would like to see. I don't relish the thought of digging out a return type or exception from a future<variant<float, string, tuple<int, double, bool> > >. I would just want something that wakes me up when dinner is ready and lets me figure out which of my futures are valid or have failed, if I even care.

wait(f1 || (f2 && f3)); might be nice syntax, does actually add functionality, and is simple enough as well.

IMHO, all of this fancy blocking stuff should really be part of a larger event framework anyway. I'd love to throw a pair of futures together with a non-blocking file read and a listening UDP socket, and have the wait() yield my thread to another task while I'm dreaming. But this is a far bigger problem than the future concept. add_callback() is my answer to disentangling future<T> from the general blocking problem.
I personally don't like the idea of having the future convert implicitely to its value type at all.
Nor do I. Herb Sutter has a good slide in his Concur presentation against it as well.
I agree. What about an optional<T> like interface for future<T>? future<some_type1> a = async_op1(...); some_type1 xa = *a; //may block if not yet ready
Personally, I just prefer a no-nonsense get(). But I'm willing to implement whatever the general consensus is, and/or whatever gets past the C++ language committees. Braddock Gaskill Dockside Vision Inc
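For illustration, here is a rough sketch of how a two-future wait_any() along the lines discussed above could be layered on nothing but the public add_callback() and ready() calls; the waiter class and all other names are assumptions for this sketch, not part of any existing implementation:

#include "future.hpp"                      // assumed header for the future/promise under discussion
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/bind.hpp>

namespace detail {
    struct any_waiter {
        boost::mutex mutex_;
        boost::condition cond_;
        void notify()                      // invoked by either future's callback
        {
            boost::mutex::scoped_lock lock(mutex_);
            cond_.notify_all();
        }
    };
}

template <class T1, class T2>
void wait_any(future<T1> &a, future<T2> &b)
{
    // shared_ptr keeps the waiter alive even if a callback fires after we return
    boost::shared_ptr<detail::any_waiter> w(new detail::any_waiter);
    a.add_callback(boost::bind(&detail::any_waiter::notify, w));
    b.add_callback(boost::bind(&detail::any_waiter::notify, w));

    boost::mutex::scoped_lock lock(w->mutex_);
    while (!a.ready() && !b.ready())       // the re-check covers a future that was already set
        w->cond_.wait(lock);               // woken by whichever future is set first
}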

Braddock Gaskill wrote:
Instead of going with generalized future composition, I've tried the route of 'wait_all(<future-list>) ' and 'wait_any(<future-tuple>)'
This is exactly what I personally would like to see. I don't relish the thought of digging out a return type or exception from a future<variant<float, string, tuple<int, double, bool> > >. I would just want something that wakes me up when dinner is ready and lets me figure out which of my futures are valid or have failed, if I even care.
You don't need to use the return type if you don't want to use it afterwards. And in the case you're interested in the actual return value, a function like the above will have to return the variant<float, string, tuple<int, double, bool> > as well. So what do you gain from using a function? Operator overloading gives you variable future counts with far less coding effort. With functions you have to spell out a specialization for every possible number of futures you want to combine.
wait(f1 || (f2 && f3)); might be nice syntax, does actually add functionality, and is simple enough as well.
Yeah, sure. The library in the vault has a futurize() function doing exactly what you want; just pass an arbitrarily complex future expression to it. No need to know the returned data type. Regards Hartmut

On Wednesday, March 14, 2007 8:52 AM, Hartmut Kaiser wrote:
Braddock Gaskill wrote:
These operators should create a new (composite) future, exposing the same interface as the (simple) futures you're composing. This composite future should handle the exceptions in a similar way as the embedded ones, i.e. propagate the exceptions caught in the embedded futures to the caller, as appropriate.
So, that would mean that for f3 = f1 || f2, if f1 propagates an exception while f2 succeeds, f3 still propagates an exception?
I think this is very much use case dependent, and anything you code into a library for good will hurt somebody else. So my best guess here is to implement policy-based behavior, allowing the user to customize exception handling and propagation.
But if you think of it as logical or, the statement f1 || f2 || f3 says: "I don't care which one of these actually finishes, just that one does", just like b1 || b2 || b3 says, "This statement is true if one of b1, b2 or b3 is true". If this were policy-based, it would be very confusing, I would think.

Braddock Gaskill wrote:
So, that would mean that for f3 = f1 || f2, if f1 propagates an exception while f2 succeeds, f3 still propagates an exception?
I think that one sensible meaning of f1 || f2 is to wait until either one of f1 or f2 returns a value, or both fail.

The "first to complete" approach is supported by my proposed future<>, but not in a composable way. You can hand the same future<> to two producers and the first one to place a value or an exception into it wins. But I haven't investigated the infrastructure that would allow operator||.

As for operator&&, it doesn't deliver any extra functionality. Instead of waiting for f1 && f2, you just wait for f1, then f2:

future<int> f1 = fork( f, x );
future<int> f2 = fork( f, y );

std::cout << f1 + f2 << std::endl;

There's no need to do:

future< pair<int, int> > f3 = f1 && f2;
pair<int,int> p = f3;
std::cout << p.first + p.second << std::endl;

On 3/14/07, Peter Dimov <pdimov@mmltd.net> wrote:
Braddock Gaskill wrote:
So, that would mean that for f3 = f1 || f2, if f1 propagates an exception while f2 succeeds, f3 still propagates an exception?
I think that one sensible meaning of f1 || f2 is to wait until either one of f1 or f2 returns a value, or both fail.
The "first to complete" approach is supported by my proposed future<>, but not in a composable way. You can hand the same future<> to two producers and the first one to place a value or an exception into it wins. But I haven't investigated the infrastructure that would allow operator||.
As for operator&&, it doesn't deliver any extra functionality. Instead of waiting for f1 && f2, you just wait for f1, then f2:
future<int> f1 = fork( f, x ); future<int> f2 = fork( f, y );
std::cout << f1 + f2 << std::endl;
There's no need to do:
future< pair<int, int> > f3 = f1 && f2; pair<int,int> p = f3; std::cout << p.first + p.second << std::endl;
While there is no added functionality in being able to wait for two or more futures at the same time, it could improve performance: the waiting thread needs to be woken only once, and you could save some context switches. gpd

On Wed, 14 Mar 2007 22:48:31 +0200, Peter Dimov wrote:
As for operator&&, it doesn't deliver any extra functionality. Instead of waiting for f1 && f2, you just wait for f1, then f2:
future<int> f1 = fork( f, x ); future<int> f2 = fork( f, y );
std::cout << f1 + f2 << std::endl;
I gotta say I'm hard-pressed to think of a real-life situation where this is not a sufficient replacement for operator&&. operator|| still has merit though - how would you see that working with implicitly convertible/blocking futures as proposed? Or would it just have to be dropped? -braddock

On Fri, 16 Mar 2007 06:03:14 -0400, Braddock Gaskill wrote:
On Wed, 14 Mar 2007 22:48:31 +0200, Peter Dimov wrote:
As for operator&&, it doesn't deliver any extra functionality. Instead of waiting for f1 && f2, you just wait for f1, then f2:
future<int> f1 = fork( f, x ); future<int> f2 = fork( f, y );
std::cout << f1 + f2 << std::endl;
I gotta say I'm hard-pressed to think of a real-life situation where this is not a sufficient replacement for operator&&.
Actually, I retract that. Once you have an ||, then you need an && if you want to do:

f = f1 || (f2 && f3); // assuming no implicit block-and-convert syntax
f.wait();

On Wed, 14 Mar 2007 07:43:47 -0400, Braddock Gaskill wrote:
An example of an unintrusive operator||() is below.

template<typename T>
future<T> operator||(future<T> &a, future<T> &b) {
  future_or<T> fa(a, b);
  a.add_callback(fa);
  return fa.p_;
}
Minor bug in the code I just posted, I left out b.add_callback(fa) in here.
Are there any thoughts on how && and || operators should properly handle exceptions?
How to handle future< int > && future< std::string > && future< my_class >? Should promise contain a tuple (fusion container?) for the result types?
Oliver

Oliver.Kowalke@qimonda.com wrote:
How to handle future< int > && future< std::string > && future< my_class >? Should promise contain a tuple (fusion container?) for the result types?
The composed future of future<T1> || future<T2> returns a variant<T1, T2>, and a composed future of future<T1> && future<T2> returns a tuple<T1, T2>. Regards Hartmut
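In other words, for futures yielding int and std::string the composite value types would have roughly these shapes (purely illustrative typedefs; no operator implementation exists or is implied):

#include <boost/variant.hpp>
#include <boost/tuple/tuple.hpp>
#include <string>

// f1 || f2 would become ready with whichever value arrives first:
typedef boost::variant<int, std::string> or_result_type;   // i.e. future<or_result_type>

// f1 && f2 would become ready only when both values are available:
typedef boost::tuple<int, std::string>   and_result_type;  // i.e. future<and_result_type>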

On Wednesday 14 March 2007 07:43 am, Braddock Gaskill wrote:
On Wed, 14 Mar 2007 06:49:57 +0100, Oliver.Kowalke wrote:
In your implementation I'm missing operators for logical combination of futures, like those supported by Thorsten Schütt's futures implementation (vault files).
Hi Oliver, Yes, I would like to either add this, or merge into Schutt's implementation if he is interested. And example of an unintrusive operator||() is below.
I have to say I really am not a big fan of needless overloading of the language's operators like logical || and &&, especially when you are overloading them with functions that really don't correspond to the original operators. What if I want to overload || with a function that actually does correspond to logical or, like

Future<bool> operator||(const Future<T> &a, const Future<U> &b);

-- Frank

Frank Mori Hess wrote:
I have to say I really am not a big fan of needless overloading of the language's operators like logical || and &&. Especially when you are overloading them with functions that really don't correspond to the original operators. What if I want to overload || with a function that actually does correspond to logical or, like
Future<bool> operator||(const Future<T> &a, const Future<U> &b);
This seems like a reasonable argument against these composition overloads. - Michael Marcin

Frank Mori Hess wrote:
I have to say I really am not a big fan of needless overloading of the language's operators like logical || and &&. Especially when you are overloading them with functions that really don't correspond to the original operators. What if I want to overload || with a function that actually does correspond to logical or, like
Future<bool> operator||(const Future<T> &a, const Future<U> &b);
1. Why would you do that? Do you have a use case?
2. If you really have to do that, simply do not include the header(s) defining the operators.

Regards Hartmut

On Wednesday 14 March 2007 15:05 pm, Hartmut Kaiser wrote:
Frank Mori Hess wrote:
I have to say I really am not a big fan of needless overloading of the language's operators like logical || and &&. Especially when you are overloading them with functions that really don't correspond to the original operators. What if I want to overload || with a function that actually does correspond to logical or, like
Future<bool> operator||(const Future<T> &a, const Future<U> &b);
1. Why would you do that? Do you have a use case?
Making active functions that take futures as arguments and return futures as return values is the primary feature of libpoet. See http://www.comedi.org/projects/libpoet/ So the operator|| above would be used if you wanted to || two results and pass the result of the || to another active function without blocking waiting for the futures to become ready. Really it was just an example though, I'm not currently planning to implement this (although maybe I will someday if I discover it to be useful).
2. If you really have to do that, simply do not include the header(s) defining the operators.
I don't really have to define an operator||, I could just give the function an ordinary name. My point is, composing futures like in Thorsten Schütt's implementation doesn't really have to define an operator|| either. And, if I had to choose which function had a more legitimate claim to use operator||, I'd choose my example. Not including the headers isn't really a solution, since it precludes the use of the conflicting functions in the same code. Pointless overloading isn't a good thing. What does it buy you here? A pair of parentheses. -- Frank

Frank Mori Hess wrote:
Future<bool> operator||(const Future<T> &a, const Future<U> &b);
1. Why do you would do that? Do you have a use case?
Making active functions that take futures as arguments and return futures as return values is the primary feature of libpoet. See
http://www.comedi.org/projects/libpoet/
So the operator|| above would be used if you wanted to || two results and pass the result of the || to another active function without blocking waiting for the futures to become ready. Really it was just an example though, I'm not currently planning to implement this (although maybe I will someday if I discover it to be useful).
Passing the result of f1 || f2 does not block execution at all. f1 || f2 yields just another future, which you will have to dereference first to make it return its value (or block, if the value is not available). So I'd rather cross that bridge when I get there. Relying on fictional use cases is at least questionable.
2. If you really have to do that, simply do not include the header(s) defining the operators.
I don't really have to define an operator||, I could just give the function an ordinary name. My point is, composing futures like in Thorsten Schütt's implementation doesn't really have to define an operator|| either. And, if I had to choose which function had a more legitimate claim to use operator||, I'd choose my example.
Not including the headers isn't really a solution, since it precludes the use of the conflicting functions in the same code.
This doesn't preclude anything. You said you don't _want_ to use the overloaded semantics of futures at all (because you don't like them). How would this then conflict with other code? I stand by my point: just don't include the header containing the questionable operators and you don't have to pay for them.
Pointless overloading isn't a good thing. What does it buy you here? A pair of parenthesis.
Syntactic sugar improves expressiveness, which makes code more readable. But as always, this is a matter of style and personal preference. I don't want to start a religious discussion here. Just don't use the syntactic sugar if you don't like it. The usage of operator overloading has one advantage over a similar solution using functions, though. When combining futures you ultimately want to flatten the composed return types: i.e. for f1 || f2 || f3 (with different return types T1, T2 and T3) you want to get a variant<T1, T2, T3> as the composite return type and not a variant<variant<T1, T2>, T3> (as you would get using simple functions taking two parameters). This naturally involves some meta-template magic to construct the required return types. Using functions you'll have to do this by hand; using operators you can reuse existing meta-template frameworks like proto, allowing you to implement this with minimal effort. Regards Hartmut

On Wednesday 14 March 2007 16:38 pm, Hartmut Kaiser wrote:
Frank Mori Hess wrote:
results and pass the result of the || to another active function without blocking waiting for the futures to become ready. Really it was just an example though, I'm not currently planning to implement this (although maybe I will someday if I discover it to be useful).
Passing the result of f1 || f2 does not block execution at all. f1 || f2 yields just another future, which you will have to dereference first to make it return its value (or block, if the value is not available).
I thought you asked me why I'd want to use the operator|| that I suggested, so I gave an example of its use. You seem to be talking about the other operator|| that I didn't like.
Not including the headers isn't really a solution, since it precludes the use of the conflicting functions in the same code.
This doesn't preclude anything. You said you don't _want_ to use the overloaded semantics of futures at all (because you don't like them). How would this then conflict with other code? I stand by my point: just don't include the header containing the questionable operators and you don't have to pay for them.
Ah, I was thinking the operator||, etc. was the only way provided to compose the futures. If they are also available via normal function names, then I retract my complaint.
The usage of operator overloading has one advantage over a similar solution using functions, though. When combining futures you ultimately want to flatten the composed return types: i.e. for a f1 || f2 || f3 (with different return types T1, T2 and T3) you want to get a variant<T1, T2, T3> as the composite return type and not a variant<variant<T1, T2>, T3> (as you would get using simple functions taking two parameters).
This naturally involves some meta template magic to construct the required return types. Using functions you'll have to do this by hand, using operators you can reuse existing meta-template frameworks like proto allowing to implement this with minimal effort.
Why couldn't you just overload the function to take varying numbers of arguments? I'll just use the name "compose_variant" for illustration:

variant<T1, T2> compose_variant(T1 t1, T2 t2);
variant<T1, T2, T3> compose_variant(T1 t1, T2 t2, T3 t3);
// ...

-- Frank

Frank Mori Hess wrote:
The usage of operator overloading has one advantage over a similar solution using functions, though. When combining futures you ultimately want to flatten the composed return types: i.e. for a f1 || f2 || f3 (with different return types T1, T2 and T3) you want to get a variant<T1, T2, T3> as the composite return type and not a variant<variant<T1, T2>, T3> (as you would get using simple functions taking two parameters).
This naturally involves some meta template magic to construct the required return types. Using functions you'll have to do this by hand, using operators you can reuse existing meta-template frameworks like proto allowing to implement this with minimal effort.
Why couldn't you just overload the function to take varying numbers of arguments? I'll just use the name "compose_variant" for illustration:
variant<T1, T2> compose_variant(T1 t1, T2 t2);
variant<T1, T2, T3> compose_variant(T1 t1, T2 t2, T3 t3);
// ...
Ease of implementation. No need to implement a varying number of overloads for one function, or a single complex function with default arguments for all but one parameter. But that's a matter of style and taste... Regards Hartmut

On Tuesday 13 March 2007 23:55 pm, braddock wrote:
GOAL:
Provide a definitive future implementation with the best features of the numerous implementations and proposals floating around, in the hopes to avoid multiple incompatible future implementations (coroutines, active objects, asio, etc). Also, to explore the combined implementation of the best future concepts.
It certainly seems like a worthwhile goal. I haven't looked at it in detail yet but I will. I'm sure I'll have some complaints, I mean feedback for you soon.
Active Object libpoet framework by Frank Mori Hess http://source.emptycrate.com/projects/activeobjects/ http://www.comedi.org/projects/libpoet/index.html
Just correcting an attribution, I didn't write the library at http://source.emptycrate.com/projects/activeobjects/ That is by Jason Turner. -- Frank

On Tuesday 13 March 2007 23:55 pm, braddock wrote:
I'm looking for input. I'd be willing to add additional functionality needed and progress it to the point where it could be formally submitted into boost if there is interest. It could perhaps be merged into the existing futures implementation in the vault to add combined groups, etc.
-add_callback() permits arbitrary functions to be called when the future is set. This is needed to build non-intrusive compound waiting structures, like future_groups, or to better support continuation scheduling concepts, as in coroutines.
I'd prefer a signal/slot to add_callback(). That is: thread_safe_signals and moving to Boost.Signals when a thread-safe version is accepted.
TODO:
The following class of sub-type assignments should be made possible, as discussed by Frank Hess and Peter Dimov on the list.
class A {}; class B : public A {}; promise<shared_ptr<B> > p; future<shared_ptr<A> > f(p);
That isn't what I was talking about (although I agree it should be made possible). But B doesn't have to be derived from A. If you can do

A a;
B b = a;

then you should be able to do (without blocking)

promise<A> p;
future<A> fa(p);
future<B> fb = fa;

For example, if I have an active_function that returns a future<unsigned> then I should be able to pass that return value to an active_function that takes a future<int> as a parameter and it should "just work".

Another suggestion is to rename promise::set() (boring) to promise::fulfill() (makes me smile). And if there is an opportunity to work the name "empty_promise" in as a class or a concept, that would be clever too.

Other things I would need to replace poet::future with your future class:

-Constructing a future from another with an incompatible template type via a user-supplied conversion function. Useful for extracting future elements from future containers, for example.

-Adding a promise::fulfill() that accepts a future with a compatible template type. The effect would be that promise::fulfill() would be called with the value from the future when it is ready.

-- Frank

On Wed, 14 Mar 2007 14:09:37 -0400, Frank Mori Hess wrote:
I'd prefer a signal/slot to add_callback(). That is: thread_safe_signals and moving to Boost.Signals when a thread-safe version is accepted.
I would prefer a signal too, although I wanted to minimize the boost-isms in the interface in the hope that it will be closer to whatever future concept gets into C++0x (if any). I'd probably expose a thread_safe_signal once available.
then you should be able to do (without blocking) promise<A> p; future<A> fa(p); future<B> fb = fa;
Does libpoet already do this? Any advice on implementation? I split my future_impl from the actual value type so that I could more easily support this, but haven't worked out the details yet.
Another suggestion is to rename promise::set() (boring) to promise::fulfill() (makes me smile). And if there is an opportunity to work the name "empty_promise" in as a class or a concept, that would be clever too.
Personally, I love your fulfill() name. I would also prefer fail() to set_exception()...it seems more descriptive since it is more than just an accessor method. I didn't want to stray TOO far from Peter's C++ language proposal though. If I add a default constructor to future<T>, I'll be sure to have it throw empty_promise or empty_future if get() is called before it is initialized. ;)
-Constructing a future from another with an incompatible template type via a user-supplied conversion function. Useful for extracting future elements from future containers, for example.
I think I saw you mention this in another post, and hoped you would come back with it. Can you give me a short example of how this syntax would work that I can work towards?
-Adding a promise::fulfill() that accepts a future with a compatible template type. The effect would be that promise::fulfill() would be called with the value from the future when it is ready.
So this effectively chains the futures? I.e.:

promise<T> p1;
future<T> f1(p1);
promise<U> p2;
future<U> f2(p2);
p2.fulfill(f1);
f2.wait(); // actually waits for f1 to complete.

On Wednesday 14 March 2007 15:26 pm, Braddock Gaskill wrote:
On Wed, 14 Mar 2007 14:09:37 -0400, Frank Mori Hess wrote:
then you should be able to do (without blocking) promise<A> p; future<A> fa(p); future<B> fb = fa;
Does libpoet already do this? Any advice on implementation? I split my future_impl from the actual value type so that I could more easily support this, but haven't worked out the details yet.
Yes. I have a future_body and a future_body_proxy class, both derived from future_body_base. A future_body handles the simple case of a future waiting on a value. A future_body_proxy observes another future_body_base and becomes ready when the future_body_base it is observing becomes ready. It then sets its value by applying a conversion function to the value from the other future_body_base. This handles implicit conversions, and conversions done with a user-specified conversion function. See the poet/future.hpp file: http://www.comedi.org/cgi-bin/viewcvs.cgi/libpoet/poet/future.hpp
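A stripped-down sketch of the pattern Frank describes follows; this is not libpoet's actual code, just the shape of it (and it converts lazily in get() to keep the sketch short, whereas the real proxy converts when the observed body becomes ready):

#include <boost/shared_ptr.hpp>
#include <boost/function.hpp>

template <class T>
class future_body_base {
public:
    virtual ~future_body_base() {}
    virtual bool ready() const = 0;
    virtual T get() const = 0;             // blocks until ready in the real implementation
};

// observes a body holding a U and converts its value to T on demand
template <class T, class U>
class future_body_proxy : public future_body_base<T> {
public:
    future_body_proxy(boost::shared_ptr<future_body_base<U> > observed,
                      boost::function<T (const U&)> convert)
        : observed_(observed), convert_(convert) {}

    bool ready() const { return observed_->ready(); }
    T get() const { return convert_(observed_->get()); }

private:
    boost::shared_ptr<future_body_base<U> > observed_;
    boost::function<T (const U&)> convert_;
};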
-Constructing a future from another with an incompatible template type via a user-supplied conversion function. Useful for extracting future elements from future containers, for example.
I think I saw you mention this in another post, and hoped you would come back with it. Can you give me a short example of how this syntax would work that I can work towards?
I just have a constructor that takes an additional argument for the conversion function. The conversion function argument is really a boost::function<T (const U&)>. See http://www.comedi.org/projects/libpoet/classpoet_1_1future.html#a3
-Adding a promise::fulfill() that accepts a future with a compatible template type. The effect would be that promise::fulfill() would be called with the value from the future when it is ready.
So this effectively chains the futures? I.e.:

promise<T> p1;
future<T> f1(p1);
promise<U> p2;
future<U> f2(p2);
p2.fulfill(f1);
f2.wait(); // actually waits for f1 to complete.
I think of it as chaining the promises, but yes you're understanding me correctly. -- Frank

On Wednesday 14 March 2007 15:26 pm, Braddock Gaskill wrote:
I would also prefer fail() to set_exception()...it seems more descriptive since it is more than just an accessor method.
Oh, and clearly the true name of promise::set_exception() is promise::break(). -- Frank

On Wed, 14 Mar 2007 15:26:11 -0400, "Braddock Gaskill" <braddock@braddock.com> said:
On Wed, 14 Mar 2007 14:09:37 -0400, Frank Mori Hess wrote:
Another suggestion is to rename promise::set() (boring) to promise::fulfill() (makes me smile). And if there is an opportunity to work the name "empty_promise" in as a class or a concept, that would be clever too.
Personally, I love your fulfill() name. I would also prefer fail() to set_exception()...it seems more descriptive since it is more than just an accessor method. I didn't want to stray TOO far from Peter's C++ language proposal though.
I totally agree on using fulfill/fail rather than set/set_exception. IMHO pithy names like that also aid in understanding the concepts. Perhaps Peter can be persuaded :)

I also didn't see an operator() on the promise<T> to set the value (although I might have missed it). I think that's important because it lets a promise participate more readily in boost::bind compositions, etc.

Cheers,
Chris
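A hedged sketch of what such an operator() would buy: with a forwarding operator()(const T&) that calls set(), a promise can be dropped straight into bind-style callback slots. The produce() function and the header name below are stand-ins, and the operator() itself is only a proposal in this thread, not an existing interface:

#include "future.hpp"              // assumed header for the implementation discussed in this thread
#include <boost/function.hpp>
#include <cassert>

// stand-in for any asynchronous API that reports its result through a plain callback
void produce(boost::function<void (int)> deliver)
{
    deliver(42);
}

int main()
{
    promise<int> p;
    future<int> f(p);

    // compiles only if promise<int> gains an operator()(const int&) that forwards to set(),
    // as suggested above; otherwise you would have to wrap the promise in boost::bind
    produce(p);

    assert(f.get() == 42);
    return 0;
}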

Hello,
I've made a futures implementation in the hopes of combining the best features from all the proposals and various implementations currently floating around. It is still rough around the edges, but is mostly covered by unit tests.
http://braddock.com/~braddock/future/
I'm looking for input. I'd be willing to add additional functionality needed and progress it to the point where it could be formally submitted into boost if there is interest. It could perhaps be merged into the existing futures implementation in the vault to add combined groups, etc.
I used Peter's exception_ptr code, which he said he would post under a boost license.
-Braddock Gaskill
I've tested your implementation with a thread and I sometimes get a segmentation fault (linux, gcc-4.1.0). File future_detail.hpp line 79 causes the segmentation fault.

Regards,
Oliver

Code:

#include <cstdlib>
#include <iostream>
#include <stdexcept>
#include <string>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/thread.hpp>
#include "future.hpp"

struct X {
    std::string execute() {
        return "std::string X::execute()";
    }
};

void execute( boost::promise< std::string > p, boost::function< std::string() > fn) {
    p.set( fn() );
}

int main( int argc, char *argv[]) {
    try {
        X x;
        boost::function< std::string() > fn( boost::bind( & X::execute, x) );
        boost::promise< std::string > p;
        boost::future< std::string > f( p);
        boost::thread t( boost::bind( & execute, p, fn) );
        std::cout << f.get() << std::endl;
        t.join();
        return EXIT_SUCCESS;
    } catch ( std::exception const& e) {
        std::cerr << e.what() << std::endl;
    } catch ( ... ) {
        std::cerr << "unhandled exception" << std::endl;
    }
    return EXIT_FAILURE;
}

Gdb:

Program terminated with signal 11, Segmentation fault.
#0 basic_string (this=0xbff79740, __str=@0x0) at /opt/gcc-4.1.2-src/src/i486-linux-gnu/libstdc++-v3/include/bits/basic_string.h:283
283 /opt/gcc-4.1.2-src/src/i486-linux-gnu/libstdc++-v3/include/bits/basic_string.h: No such file or directory.
in /opt/gcc-4.1.2-src/src/i486-linux-gnu/libstdc++-v3/include/bits/basic_string.h
(gdb) bt
#0 basic_string (this=0xbff79740, __str=@0x0) at /opt/gcc-4.1.2-src/src/i486-linux-gnu/libstdc++-v3/include/bits/basic_string.h:283
#1 0x0804d8da in boost::detail::future_impl::get<std::string> (this=0x8055048, value=0x0) at /home/kowalke/Projects/test_future/src/future_detail.hpp:79
#2 0x0804d94b in boost::future<std::string>::get (this=0xbff796c4) at /home/kowalke/Projects/test_future/src/future.hpp:210
#3 0x0804a007 in main () at /home/kowalke/Projects/test_future/src/test.cpp:43

On Thu, Mar 15, 2007 at 08:24:43AM +0100, Oliver.Kowalke@qimonda.com wrote:
I've made a futures implementation in the hopes of combining [...] It is still rough around the edges, but is mostly
I've tested your implementation with a thread and I sometimes get a segmentation fault (linux, gcc-4.1.0).
Thanks Oliver, I'll get that fixed this afternoon. Since there definitely does seem to be interest in this, I'll also go back and clean up some of those "rough edges", get a real multi-threaded unit test in place, and get in a few of the new batch of good ideas so people can really start playing with this. -braddock

On Thu, Mar 15, 2007 at 08:24:43AM +0100, Oliver.Kowalke@qimonda.com wrote:
I've tested your implementation with a thread and I sometimes get a segmentation fault (linux, gcc-4.1.0).
Thanks Oliver, I'll get that fixed this afternoon.
There is a new version of my future implementation at:
http://braddock.com/~braddock/future/

CHANGES:
-fixed the seg-fault problem that Oliver found
-added a bunch of new unit tests, including some multi-thread ones which exercise various timing permutations.
-added support for automatic conversions of types which are assignable, as discussed with Frank... ie:

promise<int> p;
future<long> lf(p);
future<unsigned char> ucf(p);

I learned a lot from Frank's libpoet implementation for this typing magic, but tried to do things a bit tighter. There is no future "proxying" or "chaining" of futures, per se; instead all future references point to the same implementation object under the hood, but do abstract the actual retrieval of the value. The effect is largely the same. The promise/future split helps a lot.

I haven't done the same assignment type conversions to the promise class yet, but I suppose I should.

Braddock Gaskill
Dockside Vision Inc

Hello Braddock,
There is a new version of my future implementation at: http://braddock.com/~braddock/future/
CHANGES: -fixed this seg-fault problem that Oliver found
-added a bunch of new unit tests, including some multi-thread ones which exercise various timing permutations.
-added support for automatic conversions of types which are assignable, as discussed with Frank...ie: promise<int> p; future<long> lf(p); future<unsigned char> ucf(p);
I learned a lot from Frank's libpoet implementation for this typing magic, but tried to do things a bit tighter. There is no future "proxying" or "chaining" of futures, per se, instead all future references point to the same implementation object under the hood, but do abstract the actual retrieval of the value. The effect is largely the same. The promise/future split helps a lot.
I haven't done the same assignment type conversions to the promise class yet, but I suppose I should.
Braddock Gaskill Dockside Vision Inc
Thanks!

I still believe that futures should be combinable via operator&& and operator|| - for users of the boost::future library it is more intuitive that the resulting future of a future combination contains the result instead of a future<bool>, rather than being forced to check all related futures for their return status and return value. I would prefer the following syntax:

promise< T1 > p1;
future< T1 > f1( p1);
promise< T2 > p2;
future< T2 > f2( p2);
promise< T3 > p3;
future< T3 > f3( p3);

future< tuple< T1, T2, T3 > > f_and( f1 && f2 && f3);
future< variant< T1, T2, T3 > > f_or( f1 || f2 || f3);
future< variant< tuple< T1, T2 >, T3 > > f_mixed( f1 && f2 || f3);

as Hartmut suggested.

regards,
Oliver

On Fri, 16 Mar 2007 08:23:38 +0100, Oliver.Kowalke wrote:
I still believe that futures should be combinable via operator&& and operator|| - for users of the boost::future library it is more intuitive
I'm open to this idea - just waiting for the dust to settle on exactly how it should work. If you make real-life use of these composition operators, you could help with your most complex real-life usage example. braddock
that the resulting future of a future combination contains the result instead of a future<bool>, rather than being forced to check all related futures for their return status and return value. I would prefer the following syntax:

promise< T1 > p1;
future< T1 > f1( p1);
promise< T2 > p2;
future< T2 > f2( p2);
promise< T3 > p3;
future< T3 > f3( p3);

future< tuple< T1, T2, T3 > > f_and( f1 && f2 && f3);
future< variant< T1, T2, T3 > > f_or( f1 || f2 || f3);
future< variant< tuple< T1, T2 >, T3 > > f_mixed( f1 && f2 || f3);
as Hartmut suggested.
regards, Oliver

I'm open to this idea - just waiting for the dust to settle on exactly how it should work. If you make real-life use of these composition operators, you could help with your most complex real-life usage example. braddock
I suggest that

future< tuple< T1, T2, T3 > > f_and( f1 && f2 && f3);
future< variant< T1, T2, T3 > > f_or( f1 || f2 || f3);

behave like ordinary futures f1, f2, f3. That means the current thread is blocked in future< T >::get(), not in the ctor.

What happens in exceptional cases? I would prefer to have the ability to check which future (f1, f2, f3) failed and which exception was thrown.

Oliver

Oliver.Kowalke@qimonda.com wrote:
I suggest that
future< tuple< T1, T2, T3 > > f_and( f1 && f2 && f3); future< variant< T1, T2, T3 > > f_or( f1 || f2 || f3);
behave like ordinary futures f1, f2, f3. That means the current thread is blocked in future< T >::get(), not in the ctor.
Sure, that's the exact behavior of the futures library in the Vault.
What happens in exceptional cases? I would prefer that I get the ability to check which future (f1, f2, f3) failed and which exception was thrown.
Agreed. I still think the behavior wrt exceptions should be controllable by a policy. Regards Hartmut

I suggest that
future< tuple< T1, T2, T3 > > f_and( f1 && f2 && f3); future< variant< T1, T2, T3 > > f_or( f1 || f2 || f3);
behave like ordinary futures f1, f2, f3. That means the current thread is blocked in future< T >::get(), not in the ctor.
It should also be possible to tell from which future the result was taken.
Sure, that's the exact behavior of the futures library in the Vault.
The problem with the futures library in the vault is that it sometimes raises an assertion - this is well known by the author, but he doesn't know why it is raised or how to fix it. Oliver

On Friday 16 March 2007 03:23 am, Oliver.Kowalke@qimonda.com wrote:
I still believe that futures should be combinable via operator&& and operator|| - for users of the boost::future library it is more intuitive that the resulting future of a future combination contains the result instead of a future<bool>, rather than being forced to check all related futures for their return status and return value.
I've clearly failed to communicate my criticisms. I was not presenting an alternate syntax for the future combination functions. I was presenting another function with semantics that bear no direct relation to the combination functions, but whose prototype would conflict with the proposed use of operator|| and && for the combination functions. Maybe it would have been clearer if I had said

future<bool> operator||(const future<bool> &, const future<bool>&);

I was not objecting to the functionality of combining futures (I don't really have an opinion on that); I was objecting to the choice of overloading operators instead of using normal function names.

To a user of libpoet, where the correspondence between futures and their values is emphasized, seeing something like future_a || future_b in code, the principle of least surprise would lead them to guess that the meaning of the overloaded operation would be to return a future whose value corresponded to applying logical or to the values of future_a and future_b. This behaviour would correspond to an active_function created from the ordinary operator|| acting on values. The confusion only underscores my point.

Overloading a function name (or operator) with multiple functions that bear no direct semantic relationship to each other is merely confusing. The namespace for possible operators is extremely limited. Unfortunately, this seems to make everyone with a neat idea want to overload an operator with it, so that everyone notices their function and is more likely to use it.

-- Frank

On Thursday 15 March 2007 23:30 pm, Braddock Gaskill wrote:
There is no future "proxying" or "chaining" of futures, per se, instead all future references point to the same implementation object under the hood, but do abstract the actual retrieval of the value. The effect is largely the same.
Is this an implementation detail, or does it change the semantics? For example, if class A is convertible to B and B is convertible to C, but A is not directly convertible to C, can I get a future<C> from a future<A> by going through a future<B> as an intermediary? Or would it try (and fail) to convert the A value directly to a C value? -- Frank

On Fri, 16 Mar 2007 09:30:24 -0400, Frank Mori Hess wrote:
On Thursday 15 March 2007 23:30 pm, Braddock Gaskill wrote:
There is no future "proxying" or "chaining" of futures, per se, instead all future references point to the same implementation object under the hood, but do abstract the actual retrieval of the value. The effect is largely the same.
Is this an implementation detail, or does it change the semantics?
Implementation detail. Eliminates the method forwarding and signal/callback connection/disconnection, and eliminates proxy mutexes. Looking at your implementation laid the groundwork though, and I'd appreciate it if you could put some eyeballs on what I did. The split promise/future concept eliminates the nasty setValue() error case, as I think you pointed out in an earlier post.
example, if I the class A is convertible to B and B is convertible to C, but A is not directly convertible to C, can I get a future<C> from a future<A> by going through a future<B> as an intermediary?
This should work. It will chain through a couple of my detail::return_value_type_adaptor instances. See test_future.cpp. Not exactly the same case, but implementation-wise it is doing the same thing.

void TestCase9() {
  // assignable, but different, types
  boost::promise<int> p;
  boost::future<long> lfut(p);
  boost::future<int> ifut(p);
  boost::future<unsigned int> uifut = lfut;
  boost::future<unsigned char> ucfut(uifut);
  p.set(27);
  BOOST_CHECK_EQUAL(lfut.get(), 27);
  BOOST_CHECK_EQUAL(ucfut.get(), 27U);
}

One thing I did not do (yet) was add a generic conversion function capability. This is easily done now (in fact I did a rough cut and then removed it). I was hoping to get at least one real-life example of how it would be used - say, exactly how the syntax of a future<vector<T> > to future<T> conversion might work, as you had mentioned. There seems to be a fine line between embedded conversion and creating a more explicit function

future<T> element_extractor(future<vector<T> > f, int index);

Braddock Gaskill
Dockside Vision Inc

On Friday 16 March 2007 11:05 am, Braddock Gaskill wrote:
One thing I did not do (yet) was add a generic conversion function capability. This is easily done now (in fact I did a rough cut and then removed it). I was hoping to get at least one real-life example usage of how it is used - say exactly how the syntax of a future<vector<T> > to future<T> conversion might work, as you had mentioned?
Here's an example extracting at(1) from a future vector:

poet::future<std::vector<double> > myVec;
poet::future<double> myVecElement(myVec,
    boost::bind<const double&>(&std::vector<double>::at, _1, 1));

-- Frank

On Fri, 16 Mar 2007 12:11:38 -0400, Frank Mori Hess wrote:
On Friday 16 March 2007 11:05 am, Braddock Gaskill wrote:
One thing I did not do (yet) was add a generic conversion function
Here's an example extracting at(1) from a future vector:
poet::future<std::vector<double> > myVec; poet::future<double> myVecElement(myVec, boost::bind<const double&>(&std::vector<double>::at, _1, 1));
Hrm, this has profound implications. I don't know if they are good or bad, but they go beyond my current understanding of the future idiom. This allows you to embed not just conversion functions, but ANY function, in the guise of a future<T>.

future<double> func1(const future<double> &a);
future<double> func2(const future<double> &a);
future<double> func3(const future<double> &a);

promise<double> p;
future<double> f0(p);
future<double> f1(f0, func1); // f1.get() equiv to func1(f0)
future<double> f2(f1, func2); // f2.get() equiv to func2(func1(f0))
f2.get(); // Does the caller realize this is actually a call to func2(func1(f0))?
future<double> f3 = std::fork(bind(&func3, f2)); // Did the author of func3() realize his call to a.get() might do _anything_?

Of course the assignable type conversions already let the cat out of the bag, not just the generic conversion functions. In fact, my add_callback() hook opens a similar bag of worms for the promise::set() call. I really only want add_callback as a hook to allow independent event-notification models and composition to be developed on top, not for end-user use. So, good, bad, or awesome? ;)

On Fri, 16 Mar 2007 13:32:05 -0400, Braddock Gaskill wrote:
future<double> func1(const future<double> &a);
future<double> func2(const future<double> &a);
future<double> func3(const future<double> &a);
My pseudo-code typing was wrong here. I suppose it should be:

double func1(double a);

with f1.get() equivalent to func1(f0.get()), etc. But you get the idea...
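(To make the concern concrete, here is a small, self-contained illustration - not the library's code - of how arbitrary calls can end up hiding behind a single get() once conversion functions can be chained.)

#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <iostream>

// A toy "future-like" object whose value is produced by a stored callable.
// This is the behaviour being worried about, boiled down to its essence.
class chained_value
{
public:
    explicit chained_value(boost::function<double ()> producer)
        : producer_(producer) {}

    // Wrap this value with one more conversion step; nothing runs yet.
    chained_value then(boost::function<double (double)> step) const
    {
        return chained_value(boost::bind(step, boost::bind(producer_)));
    }

    double get() const { return producer_(); }  // runs the whole chain

private:
    boost::function<double ()> producer_;
};

double func1(double a) { return a + 1.0; }
double func2(double a) { return a * 2.0; }

int main()
{
    chained_value f0(boost::bind(&func1, 20.0));  // stand-in for a ready future
    chained_value f2 = f0.then(&func2);           // f2.get() == func2(func1(20.0))
    std::cout << f2.get() << std::endl;           // prints 42; two user calls hide behind one get()
    return 0;
}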

On Friday 16 March 2007 13:32 pm, Braddock Gaskill wrote:
On Fri, 16 Mar 2007 12:11:38 -0400, Frank Mori Hess wrote:
On Friday 16 March 2007 11:05 am, Braddock Gaskill wrote:
One thing I did not do (yet) was add a generic conversion function
Here's an example extracting at(1) from a future vector:
poet::future<std::vector<double> > myVec;
poet::future<double> myVecElement(myVec,
    boost::bind<const double&>(&std::vector<double>::at, _1, 1));
Hrm, this has profound implications. I don't know if they are good or bad, but they go beyond my current understanding of the future idiom.
It could be implemented as a free function instead of a constructor, to give it a little conceptual separation from the future class proper.

-- Frank

On Fri, 16 Mar 2007 13:37:33 -0400, Frank Mori Hess wrote:
Hrm, this has profound implications. I don't know if they are good or bad, but they go beyond my current understanding of the future idiom.
It could be implemented as a free function instead of a constructor, to give it a little conceptual separation from the future class proper.
Yeah, maybe I'll make a separate "future_friends.hpp" with free functions for adding a future callback or a general conversion function. That way people can hook into the implementation if they are building a broader framework and know what they are doing (like libpoet vector adaptors or compositions), but with a clear "do not abuse this at home" banner. I can see many interesting uses of these future adaptor functions, but at the end of the day an object that you call to get a return value and any arbitrary behavior is called a "functor", not a "future". I've previously developed a sort of heavyweight future concept that I called a "place". I'm excited about discovering futures because they are MORE constrained; my "place" concept was powerful and convenient, but it always left me wondering whether I'd covered every conceivable case. I'll still be using "place" where needed, but future everywhere else... -braddock
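(A sketch of what such a "future_friends.hpp" might contain. The header name comes from the message above; the signatures are only my guess at the shape of the hooks, not an actual API.)

// future_friends.hpp (hypothetical) - hooks kept out of the future class
// itself, aimed at framework authors rather than end users.
#include <boost/function.hpp>

template <typename T> class future;  // as provided by the library

namespace future_friends {

// Run 'callback' once the promise behind 'f' has been fulfilled.
// Intended for building schedulers and composition layers on top.
template <typename T>
void add_ready_callback(future<T> &f, boost::function<void ()> callback);

// Build a future<To> whose get() applies 'convert' to the value of 'f'.
// Keeping this as a free function makes "arbitrary code behind get()"
// an explicit, opt-in facility rather than a future constructor.
template <typename To, typename From>
future<To> make_converted_future(const future<From> &f,
                                 boost::function<To (From)> convert);

} // namespace future_friends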

On Friday 16 March 2007 14:24 pm, Braddock Gaskill wrote:
Yeah, maybe I'll make a separate "future_friends.hpp" with free functions for adding a future callback or a general conversion function. That way people can hook into the implementation if they are building a broader framework and know what they are doing (like libpoet vector adaptors or compositions), but with a clear "do not abuse this at home" banner.
I can see many interesting uses of these future adaptor functions, but at the end of the day an object that you call to get a return value and any arbitrary behavior is called a "functor" not a "future".
I might just drop the constructor with conversionFunction from libpoet's future class. The same effect can already be achieved with an active_function, for example extraction from a future vector:

poet::future<std::vector<double> > myVecFuture(/*...*/);
poet::active_function<double (std::vector<double>, unsigned i)> conversionFunction(
    boost::bind<const double&>(&std::vector<double>::at, _1, _2));
poet::future<double> myVecElementFuture = conversionFuture = conversionFunction(myVecFuture, 1);

It's a bit overkill to create a new thread just to extract a vector element, but the user can keep a long-lived scheduler around for use with trivial active_functions if they care.

-- Frank
participants (15)
- braddock
- Braddock Gaskill
- Christopher Kohlhoff
- Frank Mori Hess
- Giovanni Piero Deretta
- Hartmut Kaiser
- Hess, Frank
- Howard Hinnant
- Jeff Garland
- Marcus Lindblom
- Michael Marcin
- Oliver.Kowalke@qimonda.com
- Peter Dimov
- Sohail Somani
- Timmo Stange