I agree that making lifetime the user's problem is not particularly nice. I'd prefer to split the executor interface so that only the part containing submit() is copyable and the rest is non-copyable. We have used a similar design internally with success; it makes the interface that gets passed around and copied as small as possible. A sketch of the idea follows.
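A minimal sketch of such a split, assuming a simple thread pool (all names here are hypothetical, not taken from any proposal):

    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <vector>

    class pool_executor;  // the small, copyable part (defined below)

    // Non-copyable owner: holds the worker threads and the task queue,
    // and joins everything on destruction.
    class pool_context {
    public:
        explicit pool_context(unsigned nthreads) {
            for (unsigned i = 0; i != nthreads; ++i)
                workers_.emplace_back([this] { run(); });
        }
        pool_context(const pool_context&) = delete;
        pool_context& operator=(const pool_context&) = delete;
        ~pool_context() {
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
            for (std::thread& t : workers_) t.join();
        }
        pool_executor get_executor() noexcept;  // defined below

    private:
        friend class pool_executor;
        void submit(std::function<void()> f) {
            { std::lock_guard<std::mutex> lk(m_); queue_.push_back(std::move(f)); }
            cv_.notify_one();
        }
        void run() {
            for (;;) {
                std::function<void()> f;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
                    if (queue_.empty()) return;  // done_ set and queue drained
                    f = std::move(queue_.front());
                    queue_.pop_front();
                }
                f();
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::deque<std::function<void()>> queue_;
        std::vector<std::thread> workers_;
        bool done_ = false;
    };

    // Copyable handle exposing only submit(); cheap to pass around.
    // The pool_context must outlive every copy of it.
    class pool_executor {
    public:
        void submit(std::function<void()> f) { ctx_->submit(std::move(f)); }
    private:
        friend class pool_context;
        explicit pool_executor(pool_context& ctx) noexcept : ctx_(&ctx) {}
        pool_context* ctx_;
    };

    inline pool_executor pool_context::get_executor() noexcept {
        return pool_executor(*this);
    }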
What you propose is similar to the split in [p0113r0], which has an execution_context and an executor_type. Using a shared_ptr for the copyable part does solve the lifetime issue, but then I don't see the advantage of the split. There is, however, a problem with the shared_ptr approach that my current implementation in make_executors_copyable shares: the destructor of the shared state can be called on a thread that belongs to the executor itself. That means the destructor must check whether the thread it is about to join is the current thread and, if so, skip the join (see the sketch below).
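Roughly, the guard in the shared state's destructor could look like this (names hypothetical; request_stop() stands in for whatever signals the workers to finish):

    #include <thread>
    #include <vector>

    // Hypothetical shared state held through shared_ptr copies of the
    // executor. The last copy may be dropped from inside one of the
    // pool's own threads, so the destructor must not join the thread it
    // is running on (a self-join throws std::system_error with
    // resource_deadlock_would_occur).
    struct pool_state {
        std::vector<std::thread> workers_;

        void request_stop() { /* signal the workers to finish; elided */ }

        ~pool_state() {
            request_stop();
            for (std::thread& t : workers_) {
                if (t.get_id() == std::this_thread::get_id())
                    t.detach();  // we *are* this worker: detach, don't join
                else
                    t.join();
            }
        }
    };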
In [p0113r0], the execution_context must outlive the executor_type copies, which can be plain references to the execution_context.
E.g.:

    class priority_scheduler : public execution_context
    {
    public:
      class executor_type
      {
      public:
        executor_type(priority_scheduler& ctx, int pri) noexcept
          : context_(ctx), priority_(pri)
        {
        }

        // ...

      private:
        priority_scheduler& context_;
        int priority_;
      };

      executor_type get_executor(int pri = 0) noexcept
      {
        return executor_type(*this, pri);
      }

      // ...
    };
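To make the lifetime requirement concrete, a usage sketch (the priority values are just illustrative):

    priority_scheduler sched;             // the execution_context
    auto ex_hi = sched.get_executor(1);   // lightweight handles that hold
    auto ex_lo = sched.get_executor(0);   // references to sched
    // ... submit work through ex_hi / ex_lo ...
    // sched must stay alive until the last executor_type copy is gone;
    // otherwise the stored priority_scheduler& dangles.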
I don't think that an execution_context should represent a scheduling policy. This does not make sense conceptually (see below).
I don't see the need for the split in [p0113r0], as passing the executors by reference is equivalent.
There is a clear conceptual difference between the execution_context and executor concepts (see the Parallelism TS and N4406). The execution_context is a type whose sole purpose is to express the thread-safety guarantees of the scheduled tasks, i.e.:

  seq: the scheduled tasks cannot run concurrently with anything else
  par: the scheduled tasks may run concurrently with any other task of the
       same batch (see the parallel algorithms)

HPX extensions:

  seq(task): the scheduled tasks can run concurrently only with tasks not
             from the same batch; tasks from the same batch have to be
             serialized
  par(task): the scheduled tasks can run concurrently with any other task,
             even those not from the same batch

At the same time, executors encapsulate the 'how and when' of task execution, i.e. the various scheduling policies and requirements. BTW, this distinction allows for integration with yet another concept, which we call execution_parameters. These encapsulate, for instance, grain-size control (e.g. how many tasks should run on the same thread of execution?) and control over the amount of resources the executor may use (e.g. how many cores should those tasks run on?). All in all, in HPX we allow for:

    vector<int> v = { ... };
    parallel::for_each(
        par.on(my_executor).with(static_chunk_size),
        begin(v), end(v),
        [](auto v) { ... });

Thus, letting most APIs (such as the parallel algorithms, define_task_block, etc.) take an execution_policy instead of just an executor is a Good Thing(tm). For other, mostly lower-level APIs - like future::then, async, dataflow, etc. - passing just the executor instance is sufficient, as the API implies running the task asynchronously anyway.

Regards
Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu
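For those lower-level APIs, the call site might look roughly like this (a sketch in the spirit of HPX's interfaces; my_executor_type is a placeholder, and the exact overload sets may differ from current HPX):

    my_executor_type exec;  // placeholder for any HPX-conforming executor

    // async: run the task through the executor and get a future back
    hpx::future<int> f = hpx::async(exec, [] { return 42; });

    // then: attach a continuation scheduled through the same executor
    hpx::future<int> g = f.then(exec, [](hpx::future<int> r) {
        return r.get() + 1;
    });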
So, do we want a design that forces the user to ensure that the executor (execution_context) outlives the executor sinks (executor_type)? Or just a copyable executor?
Best,
Vicente

[p0113r0] Executors and Asynchronous Operations, Revision 2,
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0113r0.html