On 27/05/15 12:08, Niall Douglas wrote:
On 27 May 2015 at 7:02, Vicente J. Botet Escriba wrote:
So the question is, do we have the proposed interface for expected, or something else? Would the "to be named" type be a literal type?

I would far prefer if Expected were layered with increasing amounts of complexity, so you only pay for what you use.
I also think you need to await the WG21 variant design to become close to completion. It makes no sense to have an Expected not using that variant implementation; you're just duplicating work.

My apologies. No, seriously, these are implementation details. Could you describe the interface changes that expected needs?

That's a huge question, Vicente. And it's very hard to answer in detail, because I don't really know.

In other words, besides the slow compile time of expected, what don't you need from expected? I see that you want to be able to store an error_code or an exception_ptr. Wouldn't the interface of expected be enough?
I don't think variant is an implementation detail. I suspect people will want to explicitly convert from an ordered variant into an expected and vice versa, for example. They will also want to "repack" a variant from one ordering of type options into another without actually copying around any data.

This is IMO a strong requirement; it should be enough to be able to convert from one to the other. Anyway, if people consider this a must, a concrete proposal for an expected interface on top of variant supporting this cast will be needed. And also for optional, I guess.

I can also see a Hana heterogeneous sequence or std::tuple being reduced into a variant and/or into an expected. And so on.

tuple -> product, variant -> sum.
I don't see how you would like to convert one into the other.
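For concreteness, here is a minimal sketch of what the explicit variant/expected conversion discussed above could look like in the simple two-alternative case. It is my own illustration, not anything from the proposals: it borrows std::variant and std::expected from later standards purely for illustration, and it assumes T and E are distinct types.

#include <expected>    // C++23, used here only to make the sketch self-contained
#include <variant>
#include <utility>

// variant<T, E> -> expected<T, E>: the first alternative is the value,
// the second alternative is the error.
template <class T, class E>
std::expected<T, E> to_expected(std::variant<T, E> v)
{
    if (auto p = std::get_if<T>(&v))
        return std::move(*p);                               // value alternative
    return std::unexpected(std::get<E>(std::move(v)));      // error alternative
}

// expected<T, E> -> variant<T, E>: the inverse mapping.
template <class T, class E>
std::variant<T, E> to_variant(std::expected<T, E> e)
{
    if (e)
        return std::variant<T, E>(std::in_place_index<0>, std::move(*e));
    return std::variant<T, E>(std::in_place_index<1>, std::move(e).error());
}

Repacking a variant from one ordering of alternatives into another without copying is a separate, stronger requirement that this sketch does not attempt.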
Until all this ecosystem stuff becomes more final, it is hard to imagine a new expected interface.
Other than that, I found the interface generally good. I only ever needed about 10% of it personally, but it would be nice if the remainder were available by switching it on, maybe with an explicit conversion to a more intricate subclass. The more intricate subclass would of course live in a separate header. Pay only for what you use.
Could we define this interface?

If I remember rightly, in your WG21 paper you had a table somewhere where you compared futures to optional to expected with a tick list of features shared and features not shared.
This table was comparing type interfaces.
I think you need an orthogonal table of:
1. Lightweight monad.
2. Intermediate monad.
3. Featureful monad.
... and another table showing the progression from minimal monad to maximum monad.
I believe that we should stop using the word monad in this way. I'm just wondering what the operations of these monads would be. I'm aware of the operations of a Monad and the operations of an Error Monad. I'm aware of the operations of a Functor, an Applicative, .... These operations have nothing to do with the operations the concrete type provides.
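To make that vocabulary concrete, here is a minimal sketch, with names of my own choosing, of unit and bind written as free functions over std::optional; any type for which such a pair can be written models the Monad concept, independently of which member functions the concrete type happens to have.

#include <optional>
#include <string>
#include <charconv>

// unit: lift a plain value into the monad.
template <class T>
std::optional<T> unit(T v) { return std::optional<T>(std::move(v)); }

// bind: f is a callable T -> std::optional<U>; an empty input short-circuits.
template <class T, class F>
auto bind(std::optional<T> const& m, F f) -> decltype(f(*m))
{
    if (m) return f(*m);
    return std::nullopt;
}

// Tiny usage example: each step may fail independently.
inline std::optional<int> parse(std::string const& s)
{
    int out{};
    auto res = std::from_chars(s.data(), s.data() + s.size(), out);
    if (res.ec != std::errc{}) return std::nullopt;
    return out;
}

inline std::optional<int> half(int i)
{
    if (i % 2 != 0) return std::nullopt;
    return i / 2;
}

// bind(parse("42"), half) holds 21; bind(parse("x"), half) is empty.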
If you examine https://github.com/ned14/boost.spinlock/blob/master/include/boost/spinlock/future.hpp you should see that monad<>, from which future<> derives, implements simple then(), bind() and map().

AFAIK, monad is a concept (a type class), not a type: any type implementing bind/unit becomes a monad. Your concrete monad class template doesn't provide the monad interface; it merits another name. Besides, I only see comments in future.hpp.

And it implements get() and all the getters, and swap, and assignments.

BTW, why don't the setters need to be redefined? These operations need to be thread-safe, don't they? How would you manage to implement .then() if you don't redefine the setter operations?
future<> only implements then(), but I am planning for it to allow a then(F(monad)), which would allow future to also provide bind() and map().

I've implemented then() in Boost.Thread as it was in the C++ proposals. But this doesn't make future<T> a monad: the continuation of the monadic bind function takes a T as parameter, not a future<T>. I proposed to the C++ standards committee a function future<T>::next (bind) that takes a continuation having T as parameter (future<R>(T)), and a future<T>::catch_error that takes as parameter a continuation taking an error as parameter. The proposal was not accepted. I have not added them to Boost.Thread, and will not add them as members, as both can be implemented on top of then(). The alternative to then() is bind() + catch_error(). Note that future is an Error Monad.

One advantage of a member implementation with respect to a non-member one is the syntax f.next(...).next(...).catch_error(...). However, if uniform call syntax is adopted in the next C++ standard, the non-member functions will gain this syntactic advantage as well. The other advantage is that a member function doesn't introduce a new name at a more global scope. I'm looking for a way to introduce non-member functions at a more restricted scope; I have not found any yet without changing the language (see Explicit namespaces).

I'm all for having non-member functions for map/bind/... implemented on top of the concrete classes' interfaces. The proposed expected contains more than needed: the map/bind/catch_error/catch_exception member functions could and should be non-members. The major question I have is how these non-member functions should be customized. The C++ standard committee is not a fan of non-member functions.
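As a sketch of that layering: this is my own code, written against Boost.Thread's future::then(); the names next and catch_error follow the rejected proposal mentioned above, and a non-void, copyable T is assumed for brevity.

#define BOOST_THREAD_VERSION 4            // enables future::then() in Boost.Thread
#include <boost/thread/future.hpp>
#include <exception>
#include <utility>

// next(): attach a continuation that takes the value T rather than the
// future<T>; an exception stored in the antecedent simply propagates
// through ready.get() into the returned future.
template <class T, class F>
auto next(boost::future<T> f, F cont)
    -> boost::future<decltype(cont(std::declval<T>()))>
{
    return f.then([cont](boost::future<T> ready) {
        return cont(ready.get());
    });
}

// catch_error(): attach a continuation that runs only when the antecedent
// holds an exception; a value passes through untouched.
// handler is exception_ptr -> T.
template <class T, class F>
boost::future<T> catch_error(boost::future<T> f, F handler)
{
    return f.then([handler](boost::future<T> ready) -> T {
        try {
            return ready.get();
        } catch (...) {
            return handler(std::current_exception());
        }
    });
}

// With members this would read f.next(g).next(h).catch_error(recover);
// as non-members it reads catch_error(next(next(f, g), h), recover).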
My idea is you can eventually switch freely between asynchronous monadic programming and synchronous monadic programming using a simple cast, so I am implementing a "two layer" monad where the first is a synchronous monad and the second is an asynchronous monad. But I'm a long way away from any of that right now.

IMO, we don't need to add more operations than needed to these classes; they already have too many. We need to define the minimal interface (data) that allows us to define other non-member functions (algorithms). This is where the monad, functor, ... abstractions make sense. Neither expected/optional/... nor your synchronized monad class needs to have bind/map/... member functions. These functions can be defined as non-members using the specific expected/optional/... interface. This decoupling is essential; otherwise we will end up including too many member functions.
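A minimal sketch of that decoupling, again borrowing std::expected from a later standard purely for illustration: map and catch_error written as non-member algorithms over the minimal observers of the concrete type (operator bool, operator*, error()) rather than as members.

#include <expected>

// map: apply f to the stored value, propagate the error unchanged.
template <class T, class E, class F>
auto map(std::expected<T, E> const& x, F f)
    -> std::expected<decltype(f(*x)), E>
{
    if (x) return f(*x);
    return std::unexpected(x.error());
}

// catch_error: handler is E -> T (or E -> expected<T, E>);
// a stored value passes through untouched.
template <class T, class E, class F>
std::expected<T, E> catch_error(std::expected<T, E> const& x, F handler)
{
    if (x) return x;
    return handler(x.error());
}

The same two algorithms could be written, unchanged in spirit, over optional or over a synchronous result/monad class, which is the point of keeping them out of the class interfaces.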
future<T>, being asynchronous, needs a specific function to add a continuation to be run when the future becomes ready. This is not the case for the synchronous classes, which can make use of the synchronous getter interface. I now see future<T>::then() as an interface specific to future rather than something that needs to be generalized.
In other words, would the "to be named" type appear in the AFIO interface? Is it for this reason that you need to name it, or is it really an implementation detail? This merits clarification.

Right now AFIO's synchronisation object is a struct async_io_op which contains a shared_future<...>. I'm going to replace async_io_op with a custom afio::future<T> which *always* carries a shared_ptr plus some T (or void). Internally that converts into a future<...>, but that isn't important to know.
My question was about the "to be named" type. Would this type be visible
on the AFIO interface?
I know nothing at all about AFIO, but why is the following not good for you, then?
template <class T>
using your_name = future<T>;
That custom afio::future<T> will subclass the lightweight future<T> I am currently building.
Strictly speaking, there is nothing stopping me building the same custom afio::future<T> right now using std::shared_future. However, I would need to implement continuations via .then(), and if I am to bother with that, I might as well do the whole lightweight future because, as you know from Boost.Thread, most of the tricky work needed to avoid deadlocks is in the continuations implementation.
Yes, I'm aware of the difficulty. Any help to make it more robust is welcome. As I said in another thread ("Do we want a non-backward-compatible version with non-blocking futures?"), I was working on a branch for non-blocking futures and a branch for lightweight executors. I must take the time to merge the two together. Any PRs providing optimizations that improve performance are also welcome.

Vicente

P.S. Sorry to repeat myself everywhere.