
Hi,

I uploaded a new version of the futures library to the boost vault.
http://tinyurl.com/9lws4

Just a reminder: futures execute functions in separate threads. Later the
future can be used to retrieve the result of the function.

The major change is the addition of iterators over futures. Last time some
people complained during the discussion that there were operators missing
which they needed for combining futures. So instead of adding more operators,
I added iterators, which can be used to implement almost any combination of
futures.

    simple_future<int> f1 = bind(fac, 4);
    simple_future<int> f2 = bind(fac, 6);
    simple_future<int> f3 = bind(fac, 8);

    future<int> f = f1 || f2 || f3;
    for(future<int>::iterator it = f.begin(); f.end() != it; ++it){
      cout << *it << endl; // prints fac(4), fac(6) and fac(8)
      if(*it > XXX)
        break;
    }

That way, you can wait for as many futures to finish as you want, and you can
decide, while waiting, how many you want to wait for.

All feedback is appreciated.

Thanks,
Thorsten
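A short sketch using only the constructs shown in the message above
(simple_future, operator||, future<int>::iterator), under the assumption that
advancing the iterator waits until the next result is available: it stops
after the first two results instead of waiting for all three.

    simple_future<int> f1 = bind(fac, 4);
    simple_future<int> f2 = bind(fac, 6);
    simple_future<int> f3 = bind(fac, 8);

    future<int> f = f1 || f2 || f3;

    int received = 0;
    for(future<int>::iterator it = f.begin(); f.end() != it; ++it){
      cout << *it << endl;
      if(++received == 2)  // two results are enough, don't wait for the third
        break;
    }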

Thorsten Schuett <schuett <at> zib.de> writes:
> Hi,
> I uploaded a new version of the futures library to the boost vault.
> http://tinyurl.com/9lws4
> Just a reminder: futures execute functions in separate threads. Later the
> future can be used to retrieve the result of the function.

How about limiting the number of threads and queueing, all of course
configurable in code?

Aristid Breitkreuz

Thorsten Schuett <schuett <at> zib.de> writes:
> Hi,
> I uploaded a new version of the futures library to the boost vault.
> http://tinyurl.com/9lws4
> Just a reminder: futures execute functions in separate threads. Later the
> future can be used to retrieve the result of the function.

On Saturday 15 October 2005 15:57, Aristid Breitkreuz wrote:
> How about limiting the number of threads and queueing, all of course
> configurable in code?

I was thinking about binding a bunch of futures to a thread pool. Thereby you
can control the number of spawned threads and you can reuse the created
threads. But I am not sure whether you can create deadlocks by limiting the
number of concurrently running threads.

Actually you can create deadlocks. Let's say the number of threads in the
following is limited to one. On executing (1) the first thread is spawned.
The spawned thread will try to spawn a second thread in (2). But it has to
wait for the first thread to terminate, which will never happen. :-(

    int foo(){
      return 0;
    }

    int bar(){
      future<int> f = int_[foo]; //(2)
      return f();
    }

    future<int> f = int_[bar]; //(1)
    cout << f() << endl;

Thorsten
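A standalone sketch of the same trap, built with standard C++ threading
facilities rather than the futures library (single_thread_pool and submit()
are illustrative names, not part of the library): the pool is limited to one
worker, so the future queued in (2) can never start because the only worker
is busy inside bar(), which in turn blocks waiting for that future.

    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <future>
    #include <iostream>
    #include <memory>
    #include <mutex>
    #include <thread>

    class single_thread_pool {
    public:
        single_thread_pool() : worker_(&single_thread_pool::run, this) {}
        // Shutdown/joining is omitted; the program below never gets that far.

        std::future<int> submit(std::function<int()> fn) {
            auto task = std::make_shared<std::packaged_task<int()>>(std::move(fn));
            std::future<int> result = task->get_future();
            {
                std::lock_guard<std::mutex> lock(m_);
                queue_.push_back([task] { (*task)(); });
            }
            cv_.notify_one();
            return result;
        }

    private:
        void run() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return !queue_.empty(); });
                    job = queue_.front();
                    queue_.pop_front();
                }
                job();  // while bar() runs here, nothing else can start
            }
        }

        std::mutex m_;
        std::condition_variable cv_;
        std::deque<std::function<void()>> queue_;
        std::thread worker_;  // exactly one worker: the "limit" from the example
    };

    single_thread_pool pool;

    int foo() { return 0; }

    int bar() {
        std::future<int> f = pool.submit(foo);  // (2) queued behind bar() itself
        return f.get();                         // blocks forever
    }

    int main() {
        std::future<int> f = pool.submit(bar);  // (1) the single worker starts bar()
        std::cout << f.get() << std::endl;      // never prints: deadlock
    }

Kim Barrett's reply below describes one way out: let the thread that needs
the value evaluate a still-queued task itself instead of blocking on it.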

At 9:15 AM +0200 10/17/05, Thorsten Schuett wrote:
> On Saturday 15 October 2005 15:57, Aristid Breitkreuz wrote:
> > How about limiting the number of threads and queueing, all of course
> > configurable in code?
>
> I was thinking about binding a bunch of futures to a thread pool. Thereby
> you can control the number of spawned threads and you can reuse the created
> threads. But I am not sure whether you can create deadlocks by limiting the
> number of concurrently running threads.
>
> Actually you can create deadlocks. Let's say the number of threads in the
> following is limited to one. On executing (1) the first thread is spawned.
> The spawned thread will try to spawn a second thread in (2). But it has to
> wait for the first thread to terminate, which will never happen. :-(
It has been a long time since I did anything with futures (good grief, more
than 15 years!), so my recollections on this may be a bit fuzzy. And I
haven't carefully read the proposal for futures in C++. But with those
caveats...

I think the blocking behavior you describe is not actually appropriate for
futures. The thing to remember is that futures are not the same as lazy
evaluation (a la Haskell and its ilk). Rather, by executing a future
expression you are essentially asking for help from some other (otherwise
idle) resource (i.e. processor). If there aren't any idle resources, then you
always have the option of doing the work yourself. In other words, if there
is no free thread in the thread pool, you could evaluate the future
expression immediately in the current thread.

However, if I remember correctly, the actual technique to use is a queue of
pending futures and a thread pool in which each thread attempts to obtain
from that queue a pending future to work on. So in the case you describe,
there is no blocking; you just enqueue the future and move on, hoping that
some other resource will have evaluated it by the time you need it. Note that
if a future is still queued when it is requested, the requesting thread might
pull it from the queue and process it directly, rather than blocking while it
waits for some other resource to process it.

A mechanism for aborting / discarding pending futures is also useful for many
kinds of problems, e.g. a search algorithm might discard pending futures once
a "good enough" solution has been found. Determining whether a pending future
is no longer needed may be difficult without garbage collection. (The pending
queue can hold weak references to the futures, so that they can be collected
if not referenced from elsewhere.) This is another area where futures are
different from lazy evaluation.

For the record, I found futures *very* difficult to use effectively. The
optimal granularity at which to spawn futures seemed to depend on the number
of available processors, the overhead in spawning a future, the overhead in
obtaining a future's value, and probably other things that I've since
forgotten. As a result, it seemed very difficult to determine exactly where
to insert future spawns into a computation. There has been a lot of work over
the years on automatically parallelizing computations (and tailoring them to
specific target configurations) that is applicable, but I remain suspicious
of manual use of futures.
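A minimal sketch of the claim-or-help-out scheme described above, assuming
nothing about any particular library (pending_future, try_run() and get() are
hypothetical names; queue management and templating over the result type are
omitted): a pending task can be claimed either by a pool worker or by the
thread that ends up needing its value, so calling get() on a still-queued
task evaluates it inline instead of blocking.

    #include <condition_variable>
    #include <functional>
    #include <mutex>

    struct pending_future {
        std::function<int()> fn;
        bool claimed = false;  // set once somebody has started evaluating fn
        bool done = false;
        int result = 0;
        std::mutex m;
        std::condition_variable cv;

        // Evaluate fn unless another thread already claimed it. Pool workers
        // call this after popping the task from the shared pending queue.
        bool try_run() {
            {
                std::lock_guard<std::mutex> lock(m);
                if (claimed) return false;
                claimed = true;
            }
            int r = fn();  // run outside the lock
            {
                std::lock_guard<std::mutex> lock(m);
                result = r;
                done = true;
            }
            cv.notify_all();
            return true;
        }

        // Called by the thread that needs the value: instead of blocking on a
        // still-queued task, help out and evaluate it in the current thread.
        int get() {
            try_run();
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return done; });
            return result;
        }
    };

Workers that pop an already-claimed task simply skip it. Under this scheme
the nested example from earlier in the thread cannot deadlock: when bar()
asks for the value of a foo() future that no worker has picked up yet, it
evaluates foo() directly in the current thread.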
participants (3)
- Aristid Breitkreuz
- Kim Barrett
- Thorsten Schuett