On Sunday, January 04, 2015 06:52:41 Niall Douglas wrote:
On 3 Jan 2015 at 7:15, Hartmut Kaiser wrote:
First of all, I fully support Thomas here. Futures (and the extensions proposed in the 'Concurrency TS') are a wonderful concept allowing asynchronous computation. They go beyond 'classical' futures, which merely represent a result that has not been computed yet: these futures allow for continuation-style coding, as you can attach continuations and compose new futures based on logical operations on others.
They are also severely limited and limiting:
1. They tie your code into "future islands" which are fundamentally incommensurate with all code which doesn't use the same future as your code. Try mixing code using boost::future and std::future, for example: it's a nightmare of racy, unmaintainable mess code that is far too easy to write. If Compute provided a boost::compute::future, it would be yet another future island, and I'm not sure that's wise design.
I absolutely agree. "Future islands" are a big problem which needs a solution very soon. To some extent the shared state as described in the standard could be the interface to be used by the different islands; what we miss here is a properly defined interface, etc. I probably didn't make that clear enough in my initial mail, but I think this unifying future interface should be the way forward, so that different domains can use it to implement their islands. FWIW, we already have that in HPX, and we are currently integrating OpenCL events within our "future island"; this works exceptionally well.
2. Every time you touch them with a change you unavoidably spend thousands of CPU cycles going through the memory allocator and (effectively) the internal shared_ptr. This makes using futures for a single SHA round, for example, a poor design despite how nice and clean it is.
I am not sure I fully understand that statement. All I read is that a particular implementation seems to be bad and you project that onto the general design. I would like to see this SHA future code, though, and experiment with it a bit.
3. They force you to deal with exceptions even where that is not appropriate, and most implementations will do one or more internal throw-catches which, if the exception type has a vtable, can be particularly slow.
I think this is a void statement. You always have to deal with exceptions in one way or another... But yes, exception handling is slow; so what? It only happens in exceptional circumstances. What's the problem here?
4. The compiler's optimiser really struggles to do much with the current future design because of all the implicit visibility to other threads. Even a very simple use of future requires hundreds of CPU instructions to be generated as a minimum, none of which can be elided because the compiler can't know visibility effects to other threads. I'll grant you that a HPX type design makes this problem much more tractable because the real problem here is the potential presence of hardware concurrency.
Which is still there, even in HPX :P Inter-thread communication is expensive on current architectures regardless of the higher-level abstraction you use, so what's the point?
This is why Chris has proposed async_result from ASIO instead, that lets the caller of an async API supply the synchronisation method to be used for that particular call. async_result is superior to futures in all but one extremely important way: async_result cannot traverse an ABI boundary, while futures can.
What's the difference between async_result and a future? I am unable to find that in the ASIO documentation.
What do you mean by 'making everything a future'? Having all functions return futures? If so, then yes: if you want to make a function asynchronously callable, let it return a future. There is nothing wrong with that (well, except that std::future is utterly bulky and slow, as it is usually tied to std::thread, which in turn usually represents kernel threads - for a proposed solution see my talk at MeetingC++ 2014 [2]). For the record, I'd just love it if there were more HPX-type thinking in how C++ concurrency is standardised.
However, I have learned with age and experience that people don't care much for whole new ways of thinking and approaching problems. They prefer some small incremental library which can be tacked onto their existing code without much conceptual change. To that end, when facing the limitations of std::future they can see the cost-benefit of boost::future, and can conceptualise replacing std::future with boost::future in their code. So that is a viable mental step for them.
Replacing the entire concurrency engine and indeed paradigm in your C++ runtime is, I suspect, too scary for most, even if the code changes are straightforward. It'll be the "bigness" of the concept which scares them off.
Neither Hartmut nor I am proposing to use HPX within Boost. However, we want to release an HPX-enhanced C++ stdlib in the near future to address this exact deficiency.
To that end, the non-allocating basic_future toolkit I proposed on this list before Christmas has, I think, the best chance of "fixing" futures. Each programmer can roll their own future type, with optional amounts of interoperability and composition with other future islands. A future type lightweight enough for a SHA round then becomes possible, as does some big thick future type providing STL future semantics or composition with many other custom future types. One also gains most of the (static) benefits of ASIO's async_result, but one still has ABI stability.
I missed that. Can you link the source/documentation/proposal once more please?
Niall