
Anthony Williams-3 wrote:
Maybe there isn't even a notion of a thread crashing without crashing the process.
No, there isn't. A thread "crashes" as a result of undefined behaviour, in which case the behaviour of the entire application is undefined.
I thought Windows' SetUnhandledExceptionFilter could handle this, but I was wrong.

Anthony Williams-3 wrote:
At the very least, I see a value in not behaving worse than if the associated client thread had spawned its own worker thread. That is: std::launch_in_pool(&crashing_function); should not behave worse than std::thread t(&crashing_function);
It doesn't: it crashes the application in both cases ;-)
You're right. Deadlocks will however be able to "spread" in this non-obvious way: let's say thread C1 adds task T1 to the pool, and T1 is processed by worker thread W1. C1 then blocks until T1 is finished. When T1 waits on a future, W1 starts working on another queued job T2, which deadlocks. This deadlock now spreads to the uninvolved thread C1 too (a sketch of the scenario follows below). I don't know how much of a problem this is, though - effective thread re-use might be worth more than this unexpected behaviour.
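To make the failure mode concrete, here is a minimal sketch. The pool API (launch_in_pool), the future/promise interface and the "help out while waiting" behaviour are all assumptions for illustration - none of this is the proposal's actual implementation:

promise<void> never_set;            // nobody ever sets this value

void t2()                           // T2: effectively deadlocks
{
    never_set.get_future().wait();  // blocks forever
}

void t1()                           // T1: runs on pool worker W1
{
    future<void> f2 = launch_in_pool(&t2);
    // Because T1 runs inside the pool, waiting here makes W1 pick up
    // the next queued job (T2) instead of idling. T2 never finishes,
    // so W1 never returns to T1, and T1 never completes either.
    f2.wait();
}

int main()                          // client thread C1
{
    future<void> f1 = launch_in_pool(&t1);
    f1.wait();                      // the deadlock has spread to C1
    return 0;
}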
Anthony Williams-3 wrote:
If it was a large list, I wouldn't /just/ do a timed_wait on each future in turn. The sleep here lacks expression of intent, though. I would write a dynamic wait_for_any like so:
void wait_for_any(const std::vector<future<void>>& futures)
{
    while (true)
    {
        for (auto& f : futures)
        {
            // First poll every future for readiness...
            for (auto& g : futures)
                if (g.is_ready())
                    return;
            // ...then block briefly on this one, so we are always
            // waiting on a future rather than just sleeping.
            if (f.timed_wait(milliseconds(1)))
                return;
        }
    }
}
That way, you're never just sleeping: you're always waiting on a future. Also, you share the wait around, but you still check each one every time you wake.
Maybe you would, but I doubt most users would. I wouldn't expect that waiting on a future expresses interest in the value.

Anthony Williams-3 wrote:
You're right: if there are lots of futures, then you can consume considerable CPU time polling them, even if you then wait/sleep. What is needed is a mechanism to say "this future belongs to this set" and "wait for one of the set".
Exactly my thoughts. Wait for all would probably be needed too. And to build composites, you need to be able to add both futures and these future-sets to a future-set. It might be one class for wait_for_any and another one for wait_for_all - a rough interface sketch follows below.
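Just to make the idea concrete, something along these lines. The class name, its members and the Reply type are all made up for illustration; nothing here is part of any proposal:

class wait_for_any_set
{
public:
    template <typename T>
    void add(future<T>& f);            // register an individual future
    void add(wait_for_any_set& inner); // compose: nest another set

    void wait();                       // block until any member is ready
    bool timed_wait(const milliseconds& timeout);
};

// A wait_for_all_set could mirror the same shape, completing only when
// every member (rather than any member) is ready.

// Intended use: react to whichever of many pending requests finishes first.
void handle_first_reply(std::vector<future<Reply>>& replies)
{
    wait_for_any_set any;
    for (auto& r : replies)
        any.add(r);
    any.wait();
}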
Anthony Williams-3 wrote:
Currently, I can imagine doing this by spawning a separate thread for each future in the set, which then does a blocking wait on its future and notifies a "combined" value when done. The other threads in the set can then be interrupted when one is done. Of course, you need /really/ lightweight threads to make that worthwhile, but I expect threads to become cheaper as the number of cores increases.
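For reference, a sketch of that thread-per-future approach. It assumes boost::thread-style interruption and that a blocking future::wait() is an interruption point - neither of which is given by the proposal:

#include <boost/thread.hpp>
#include <vector>

template <typename T>
void wait_for_any_spawning(std::vector<future<T>>& futures)
{
    boost::mutex m;
    boost::condition_variable one_ready;
    bool done = false;

    boost::thread_group waiters;
    for (std::size_t i = 0; i < futures.size(); ++i)
    {
        future<T>* f = &futures[i];
        waiters.create_thread([f, &m, &one_ready, &done]
        {
            f->wait();                       // blocking wait on one future
            boost::lock_guard<boost::mutex> lk(m);
            done = true;
            one_ready.notify_one();          // the "combined" notification
        });
    }

    boost::unique_lock<boost::mutex> lk(m);
    while (!done)
        one_ready.wait(lk);                  // wake when the first future is ready
    lk.unlock();

    waiters.interrupt_all();                 // cancel the remaining blocking waits
    waiters.join_all();
}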
Starting a thread to wait for a future doesn't seem very suitable to me. Imagine 10% of the core threads each waiting on (combinatorial) results from the remaining 90%. Also, waiting on many futures is probably applicable even on single-core processors. For instance, if you have 100s of pending requests to different types of distributed services, you could model each request with a future and be interested in the first response which arrives. Windows threads today aren't particularly light-weight. This might mean that condition_variable isn't a suitable abstraction to build futures on :( At least not the way it works today. But I don't think it's a good idea to change condition_variables this late. It is a pretty widespread, well-working and well-understood concurrency model. OTOH, changing future's waiting model this late is not good either.

Anthony Williams-3 wrote:
Alternatively, you could do it with a completion-callback, but I'm not entirely comfortable with that.
I'm not comfortable with this either, for the reasons I expressed in my response to Gaskill's proposal. This issue is my biggest concern with the future proposal. The alternatives I've seen so far:

1. Change/alter condition variables
2. Add a future-complete callback (Gaskill's proposal)
3. Implement wait_for_many with a thread per future
4. Implement wait_for_many with periodic polling using timed_waits
5. Introduce a new wait_for_many mechanism (public class or implementation detail)
6. Don't ever support waiting on multiple futures
7. Don't support it until the next version, but make sure we don't need to alter future semantics/interface when adding it.

Alternative 7 blocks the possibility of writing some exciting libraries on top of futures until a new future version is available. Do you have further alternatives?

Johan