
Hurd, Matthew <hurdm <at> sig.com> writes:
Had a little thought that might be relevant. Please spank me if this is OT given the rather specific implementation I'm dropping in on...
Not at all. If this proves useful, I would certainly want to support it.
As mentioned previously, future values could give you a lazy way of evaluating the result, even when you aren't multithreaded.
In a truly "active" object (that is, one with at least one thread), you could call the methods with a future instead of a normal value, so that when the result arrives it percolates through the implied queue.
Now for a new thought. What about using an expression template mechanism so that when you combine futures they are glommed at compile time to remain lazy?
    boost::future<double> sum     = some_thing_maybe_active.total();
    boost::future<double> count   = some_thing_maybe_active.count();
    boost::future<double> average = sum / count;
sum / count forms a lazy expression
If a method groks a future, then this becomes interesting...
boost::future<double> result = some_thing_maybe_active.do_stuff(average);
You can end up with the active object percolating all the way through to "result" without any blocking until the result of "result" is used.
Yes, certainly. I'm not au fait with expression templates, but you could alternatively implement arithmetic operations on future<T> that maintain shared pointers to the operands and store a function pointer to std::plus, etc.
If you get rid of the thread(s) in the active object idea, you end up with a mechanism for lazy evaluation.
This I don't really follow. If you take away the thread from the active object, then the calling thread must perform the evaluation itself, eventually. What you have left is equivalent to a sequence of boost::binds to an ordinary function, isn't it?
Perhaps there is a more general mechanism / pattern (and bike-shed name) lurking here...
Regards,
Matt Hurd
Are you suggesting that the 'future' template is convenient syntactic sugar for lazy evaluation, as a separate concern from the threading involved? If so, I can't disagree... Matt