Vicente J. Botet Escriba
On 07/03/15 17:38, Louis Dionne wrote:
Vicente J. Botet Escriba writes: [...]
I see your datatype much more as a tag.
That's exactly what they are. However, Fusion and MPL did not see tags for what they actually are: a powerful way of lifting the type system one level higher to make representation differences unimportant. Fusion and MPL only saw tags as part of their dispatching system. I think it's mostly a philosophical difference at this point.
Do you have an equivalence in Haskell to your datatype/generalized type?
No, but shooting from the hip you could approximate them with type classes. For example, the Tuple data type could be implemented as

    -- The different actual representations for tuples
    data Tuple0       = Tuple0
    data Tuple1 a     = Tuple1 a
    data Tuple2 a b   = Tuple2 a b
    data Tuple3 a b c = Tuple3 a b c
    -- etc...

    -- The Tuple generalized type representing all those guys
    class Tuple t where
        len :: t -> Int

    instance Tuple Tuple0         where len _ = 0
    instance Tuple (Tuple1 a)     where len _ = 1
    instance Tuple (Tuple2 a b)   where len _ = 2
    instance Tuple (Tuple3 a b c) where len _ = 3
    -- etc...

Admittedly, the above is missing something fundamental if we want to implement other functions like `head`, but that reaches the limit of my Haskell-fu. I think looking at Haskell's type families [1] might provide more insight.
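For comparison, here is a rough C++ sketch of the same idea: several concrete representations share one tag, and an algorithm dispatches on that tag rather than on the representation. The names here (TupleTag, len_impl, the nested `tag` alias) are my own for illustration; this is not how Hana actually implements its tags.

    #include <cstddef>

    struct TupleTag { };                        // the "generalized type"

    struct Tuple0 { using tag = TupleTag; };    // representation #1

    template <typename A, typename B>           // representation #2
    struct Tuple2 { using tag = TupleTag; A a; B b; };

    template <typename Tag> struct len_impl;    // algorithms dispatch on the tag

    template <> struct len_impl<TupleTag> {
        static constexpr std::size_t apply(Tuple0 const&) { return 0; }

        template <typename A, typename B>
        static constexpr std::size_t apply(Tuple2<A, B> const&) { return 2; }
    };

    template <typename Xs>
    constexpr std::size_t len(Xs const& xs) {
        return len_impl<typename Xs::tag>::apply(xs);
    }

    // len(Tuple0{}) == 0,  len(Tuple2<int, char>{1, 'x'}) == 2

In Fusion and MPL, the tag in the middle is essentially just the dispatching hook; the point above is that the same tag can also be read as the "generalized type" that all those representations share.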
[...]
This is because this nuance is _voluntarily_ left aside. More generally, the design choice I made was to leave parametric data types to the documentation level only, as is documented in the section about generalized data types[1].
I can understand that you want to leave the constraints as comments or documentation, as the Standard library does. However, I would expect implementations to do a check at compile time when possible.
Strictly speaking, a "check" is done at compile-time, in that you'll get a nasty compiler error if the types don't match. Nothing is "erased" and left to deal with by some runtime, if that's what you meant. Of course, in practice, Hana will static_assert in some places to make these errors more explicit.
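As an illustration of that kind of check, here is a minimal C++17 sketch over a plain std::tuple. `checked_transform` is a hypothetical name of mine, not Hana's code; it only shows how a static_assert can turn a deep instantiation error into an explicit message.

    #include <cstddef>
    #include <tuple>
    #include <type_traits>
    #include <utility>

    template <typename Tuple, std::size_t ...I, typename F>
    auto checked_transform_impl(Tuple const& xs, std::index_sequence<I...>, F const& f) {
        // Check up front that f can be applied to every element, so a failure
        // is reported with a readable message rather than a long backtrace.
        static_assert(
            (std::is_invocable_v<F const&, std::tuple_element_t<I, Tuple> const&> && ...),
            "transform: the function cannot be applied to every element of the tuple");
        return std::make_tuple(f(std::get<I>(xs))...);
    }

    template <typename ...T, typename F>
    auto checked_transform(std::tuple<T...> const& xs, F const& f) {
        return checked_transform_impl(xs, std::index_sequence_for<T...>{}, f);
    }

    // checked_transform(std::make_tuple(1, 2.2), [](auto x) { return x + 1; });  // fine
    // checked_transform(std::make_tuple(1, "x"), [](int i) { return i + 1; });   // trips the static_assert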
As I already said on this list during the last informal review, what I'm making is a tradeoff between mathematical correctness and usability of the library. While Hana provides structures that satisfy the laws rigorously, I want users to be able to use those structures in a quick and dirty way as a means of being more productive when needed. For example, I would not object to someone writing [pseudocode]:
    auto to_string = [](auto x) { /* convert x to a std::string */ };
    auto xs = hana::make_tuple(1, 2.2, "abc");
    auto ys = hana::transform(xs, to_string);
Since `1`, `2.2` and `"abc"` do not share a common data type and the signature of `transform` is (for a Functor F)
transform : F(T) x (T -> U) -> F(U)
, the above should normally be ill-formed. However, the library makes such usage possible because I kept in mind that we're working in a "dirty" heterogeneously-typed context. Bottom line:
I don't think heterogeneous types are dirty. We just need a different kind of function (an overloaded one) to transform them.
If you have a heterogeneous type such as pair, for example, the mathematical transform function should be
transform : pair(T,U) x (T -> R) x (U -> S) -> pair(R, S)
As C++ has overloaded functions, we don't need to pass two functions, just a single overloaded one.
That's an interesting point of view. However, with this approach, you can't say that Pair is a Functor. It instead becomes a BiFunctor [2]. Furthermore, let's say you have a pair(T, T). transform is then

    transform : pair(T, T) x (T -> R) x (T -> S) -> pair(R, S)

transform should hence still receive two functions, regardless of the fact that we could provide a single overloaded function.

Let me now use a tuple instead of a pair as an example because it sticks more closely to what happens in Hana (Pair is not a Functor in Hana). transform then becomes:

    transform : Tuple(T1, ..., Tn) x (T1 -> R1) x ... x (Tn -> Rn) -> Tuple(R1, ..., Rn)

First, if you use this approach, you can't say that Tuple is a Functor. It has to be a N-Functor, which is the N-argument equivalent of the BiFunctor. Furthermore, you have to provide N different functions to be able to transform it. So for example, when transforming a tuple(int, int, int), you would have to say

    transform(tuple(1, 2, 3),
              [](int i) { return i + 1; },
              [](int i) { return i + 1; },
              [](int i) { return i + 1; }
    )

That's not "heterogeneous" programming; it's just an equivalent way of writing

    r1 = f1(t1)
    r2 = f2(t2)
    ...
    rn = fn(tn)

where `tk` are the elements of a tuple, and `fk` are the functions mapped on it by the N-Functor's `transform`. What we actually want is

    r1 = f(t1)
    r2 = f(t2)
    ...
    rn = f(tn)

where `f` is defined as

    f : (intersection of the data types of the tuple's elements) -> (some other type)

In other words, `f` is a function whose domain is "the set of all objects with which I can do X". If there are objects in the tuple that don't support "X", then the program is ill-formed. Concepts will put us much closer to this.

The principle behind Hana's generalized types is really that we "lift" the types of the objects and then consider objects up-to-different-representations. I also think it can be seen as a quotient of the set of all types by the equivalence relation

    T ~ U  if and only if  T and U model exactly the same concepts in exactly the same way

Concretely, the mathematical universe we're working with is not only a set; it's actually the C++ category, where objects are Types and morphisms are functions. Hence, that "quotient" would have to also take morphisms into account. But I'm reaching the end of my math-fu now.
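To make the contrast concrete, here is a quick C++17 sketch of both styles over a plain std::tuple. The names transform_n and transform1 are mine for illustration; neither is part of Hana.

    #include <cstddef>
    #include <tuple>
    #include <utility>

    // "N-Functor" style: one function per element; element k goes through fk.
    template <typename Tuple, std::size_t ...I, typename ...F>
    auto transform_n_impl(Tuple const& xs, std::index_sequence<I...>, F const& ...f) {
        return std::make_tuple(f(std::get<I>(xs))...);
    }

    template <typename ...T, typename ...F>
    auto transform_n(std::tuple<T...> const& xs, F const& ...f) {
        static_assert(sizeof...(T) == sizeof...(F), "one function per element");
        return transform_n_impl(xs, std::index_sequence_for<T...>{}, f...);
    }

    // "Functor" style: a single (possibly generic or overloaded) function applied
    // to every element; ill-formed if some element is outside its domain.
    template <typename ...T, typename F>
    auto transform1(std::tuple<T...> const& xs, F const& f) {
        return std::apply(
            [&](auto const& ...x) { return std::make_tuple(f(x)...); }, xs);
    }

    // transform_n(std::make_tuple(1, 2, 3),
    //             [](int i) { return i + 1; },
    //             [](int i) { return i + 1; },
    //             [](int i) { return i + 1; });
    // transform1(std::make_tuple(1, 2, 3), [](int i) { return i + 1; });

The second version is the style the transform discussed above corresponds to: a single function whose domain must cover every element.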
[...]
How does having either and maybe<A> make it more difficult to interact with heterogeneous objects?
Let's try to implement a type _maybe<A>:
    template <typename A>
    struct _maybe {
        struct empty { };
        union { empty e; A a; };   // anonymous union so that either member can
                                   // be initialized in the constructors below
        bool is_nothing;

        constexpr _maybe() : e{}, is_nothing{true} { }
        constexpr _maybe(A a_) : a{a_}, is_nothing{false} { }
    };
Now, let's try to implement, for example, the `maybe` function,
which has signature maybe :: B x (A -> B) x _maybe<A> -> B.
Remember that those A's and B's are actually generalized types,
not usual C++ types (we're working with heterogeneous stuff):
template
[...]
Notice meta::quote.
I wonder if meta::transform couldn't take care of the quote.
If it takes care of the quote, then we would always need to pass templates to higher-order algorithms. Basically, that would be like accepting metafunctions instead of metafunction classes in the MPL interface. There are good reasons not to do that.
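For readers without the MPL background: roughly, a metafunction is a raw template, while a metafunction class is an ordinary type with a nested apply, and a quote-style adapter wraps the former into the latter so it can be passed around as a type. The sketch below uses names of my own (quote, invoke, ptr_of) and is only an illustration, not the actual implementation of meta::quote or of MPL's quoting utilities.

    #include <type_traits>

    // Adapt a raw template (a "metafunction") into a type with a nested
    // apply (a "metafunction class").
    template <template <class...> class F>
    struct quote {
        template <class ...T>
        using apply = F<T...>;
    };

    // A higher-order "algorithm" that expects a metafunction class:
    template <class MFC, class ...T>
    using invoke = typename MFC::template apply<T...>;

    template <class T>
    struct ptr_of { using type = T*; };

    static_assert(std::is_same<invoke<quote<ptr_of>, int>, ptr_of<int>>::value, "");

Making the algorithms accept raw templates directly, so that no quoting is ever needed, is exactly the alternative being rejected above.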
[...]
My point was more about how the library checks that the first argument of transform is a Functor and that the second argument is a "function" from the Functor's underlying type T to another type U.
transform : Functor(T) x (T -> U) -> Functor(U)
If the implementation declares transform as
auto transform = [] (auto F, auto Fun)
Couldn't the library check that the arguments passed in F and Fun respect the requirements?
Yes, but that would require `Fun` stating that it is a function
from data type T to data type U, which is impractical. In particular
it means that this:
transform(make_tuple(1, 2, 3), [](int i) {
    return i + 1;
});
would not work. Instead one would need to write
transform(make_tuple(1, 2, 3), hana::function