[futures] future streams, lazy futures, implementation work

I've been continuing work on my futures implementation. The latest is available here: http://braddock.com/~braddock/future/

I've been enriching the implementation, and I'd love some feedback on a few new features. It is still VERY EARLY, but slowly maturing.

STREAMS:

Future streams - everywhere in the academic literature on futures and similar concepts, the future "stream" seems to be the number one cited pattern. Essentially, a future stream is a linked list where the "next" field is actually a future to a pointer to the next node. A producer writes to the stream by setting the next future to point to a new node, at which point waiting consumers can make use of it. Streams are usually implemented in garbage-collected, pass-by-reference languages, but using shared_ptrs and a few tricks I have made a straightforward C++-safe implementation. I have made separate future_stream and promise_stream classes that offer many-to-many, one-to-many, or one-to-one communication channels.

LAZY OR AS-NEEDED FUTURES:

I drew some inspiration from Oz and its "as-needed" futures. I added three methods to promise and future: is_needed(), set_needed(), and wait_until_needed(). Additionally, whenever a future blocks while attempting to obtain a value, set_needed() is automatically called. Any callback registered with set_callback() is invoked when the future enters the needed state, so this mechanism can be used in many frameworks. This addition permits lazy evaluation of functions which may fulfill a promise, and requires NO awareness on the part of the future user (although users can explicitly signal that they need the future if they want).

EXAMPLE OF LAZY PRODUCER:

// produces 24 int values (0..23) AS NEEDED
void lazy_producer(boost::promise_stream<int> pstream)
{
    for (int i = 0; i < 24; ++i) {
        pstream.wait_until_needed();
        pstream.send(i);
    }
}

// consumes values and sums them until the stream ends
int consumer(boost::future_stream<int>::hold stream_start)
{
    int accum = 0;
    boost::future_stream<int>::iterator iter = stream_start.release();
    try {
        for (;;)
            accum += iter.recv();
    } catch (boost::broken_promise &e) {} // producer stopped
    return accum;
}

The broken_promise throw is a mechanism to indicate that the last producer has left the stream; it needs to be handled better. Note that consumer() has no knowledge of whether the producer is lazy or not.

OPERATIONS/GROUPS:

I've added some EARLY support for logical operations on and combinations of futures. I've tried to reconcile the ambiguity between the implicit future conversion and the overloaded || and && operators. A solution I'd like some feedback on is to introduce a new templated function boost::op(const future<T> &f). Usage is as follows, where a, b, and c are futures:

future<void> combination = op(a) && (op(b) || op(c));

op() returns a wrapper object around the future. The wrapper class can then be used in operator||() and operator&&() functions. This way there is no risk of ambiguity, while still getting the expressive power of the binary operators. (A toy sketch of this shape appears further below.) I haven't tried the tuple/variant ideas yet.

OTHER CHANGES:

- The implementation is now header-only.
- future_wrapper<T> provides a basic future wrapper function object for any function which returns a value of type T; the value is returned in a future<T> automatically, with proper exception handling (based on Peter's future_wrapper). A rough sketch of the pattern follows below.
- future_wrapper<void> is a specialization of future_wrapper<T> for functions which return void.
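To give a feel for the future_wrapper<T> pattern, here is a rough self-contained sketch written against the standard-library thread facilities purely for illustration. The names here (toy_future_wrapper, get_future) are placeholders invented for the example and do not reflect the actual class interface:

#include <exception>
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <thread>

// Placeholder name; the real future_wrapper<T> interface may differ.
template <class T>
class toy_future_wrapper {
public:
    explicit toy_future_wrapper(std::function<T()> fn)
        : fn_(std::move(fn)), promise_(std::make_shared<std::promise<T>>()) {}

    std::future<T> get_future() { return promise_->get_future(); }

    // Invoking the wrapper runs the function and fulfills the future,
    // forwarding any exception through the future instead of letting it escape.
    void operator()() {
        try {
            promise_->set_value(fn_());
        } catch (...) {
            promise_->set_exception(std::current_exception());
        }
    }
    // (A void specialization, as in the library, would call fn_() and then
    //  set_value() with no argument.)

private:
    std::function<T()> fn_;
    std::shared_ptr<std::promise<T>> promise_; // shared so copies fulfill the same future
};

int answer() { return 42; }

int main() {
    toy_future_wrapper<int> wrapped(answer);
    std::future<int> f = wrapped.get_future();
    std::thread worker(wrapped);       // run the wrapped call on another thread
    std::cout << f.get() << std::endl; // prints 42, or rethrows the worker's exception
    worker.join();
}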
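Going back to the OPERATIONS/GROUPS idea, the following toy shows the general shape of the op() wrapper trick, again using standard-library types for illustration only. The wrapper type (op_t), the polling wait-any in operator||(), and having the operators return the wrapper itself rather than a future<void> are all simplifications made up for this sketch; the real implementation differs:

#include <chrono>
#include <future>
#include <iostream>
#include <utility>

// Toy wrapper returned by op(); the operators are defined only on the wrapper,
// so implicit conversions on the future type itself cannot cause ambiguity.
template <class T>
struct op_t {
    std::shared_future<T> f;
};

template <class T>
op_t<T> op(std::shared_future<T> f) { return op_t<T>{std::move(f)}; }

// "both ready" composition
template <class A, class B>
op_t<void> operator&&(op_t<A> a, op_t<B> b) {
    return op_t<void>{ std::async(std::launch::deferred,
        [a, b] { a.f.wait(); b.f.wait(); }).share() };
}

// "either ready" composition (crude polling wait-any, good enough for a toy)
template <class A, class B>
op_t<void> operator||(op_t<A> a, op_t<B> b) {
    return op_t<void>{ std::async(std::launch::deferred, [a, b] {
        for (;;) {
            if (a.f.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready) return;
            if (b.f.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready) return;
        }
    }).share() };
}

int main() {
    std::promise<int> pa, pb, pc;
    std::shared_future<int> a = pa.get_future().share();
    std::shared_future<int> b = pb.get_future().share();
    std::shared_future<int> c = pc.get_future().share();

    op_t<void> combination = op(a) && (op(b) || op(c));

    pa.set_value(1);
    pc.set_value(3);      // b is never fulfilled; (b || c) is still satisfied
    combination.f.wait(); // returns once a and at least one of b, c are ready
    std::cout << "combination ready" << std::endl;
}

The point is only that the operators are overloaded on the wrapper type rather than on the future type itself, so the implicit future conversion can never make the expression ambiguous.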
IDEA:

I'm contemplating making promise<T> a proper subclass of future<T>. This is because in real-world code I am often doing:

promise<int> p;
future<int> f(p); // get a future interface to the value
f.ready();        // or whatever future method I want to access

Since I can always obtain a future<T> from a promise<T>, there doesn't seem to be any reason not to make promise a subclass of future<T>, so that future methods can be called directly if desired. So, should promise just be a subclass of future, with the additional set() methods provided?

I'm currently starting to use this future library in a large project, in the context of a full multi-threaded task-scheduling system, which should mature things further (it is certainly helping me understand real-world usage). I'll probably look to submit the library for review after a couple of months of usage, if there is interest.

Looking forward to feedback.

Braddock Gaskill
Dockside Vision Inc