
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice. On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following: curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3); Is there any interest in such a thing? -- Eric Niebler BoostPro Computing http://www.boostpro.com
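A minimal sketch of the idea as described, not the attached implementation (the trait name and the use of C++14 generic lambdas are assumptions made purely for brevity): the wrapper invokes the wrapped function object as soon as a call with the collected arguments is well-formed, and otherwise remembers the arguments and returns a new wrapper.

    #include <cassert>
    #include <functional>
    #include <type_traits>
    #include <utility>

    // Is F invocable with Args...? (an expression-SFINAE stand-in for the real check)
    template <typename F, typename... Args>
    struct is_callable_with
    {
        template <typename G>
        static auto test(int)
            -> decltype(std::declval<G>()(std::declval<Args>()...), std::true_type{});
        template <typename>
        static std::false_type test(...);
        static constexpr bool value = decltype(test<F>(0))::value;
    };

    template <typename F>
    class curryable
    {
        F f_;
    public:
        explicit curryable(F f) : f_(std::move(f)) {}

        // Enough arguments: invoke the wrapped function object immediately.
        template <typename... Args,
                  typename std::enable_if<
                      is_callable_with<F const&, Args&&...>::value, int>::type = 0>
        decltype(auto) operator()(Args&&... args) const
        {
            return f_(std::forward<Args>(args)...);
        }

        // Not enough arguments: remember them and return a new curryable.
        template <typename... Args,
                  typename std::enable_if<
                      !is_callable_with<F const&, Args&&...>::value, int>::type = 0>
        auto operator()(Args... args) const
        {
            auto f = f_;
            auto bound = [f, args...](auto&&... rest) -> decltype(auto)
            {
                return f(args..., std::forward<decltype(rest)>(rest)...);
            };
            return curryable<decltype(bound)>(bound);
        }
    };

    int main()
    {
        curryable<std::plus<int>> p{std::plus<int>{}};
        auto curried = p(1);   // one argument is not enough, so it binds
        int i = curried(2);    // the second argument completes the call
        assert(i == 3);
    }

The key design decision, which much of the discussion below turns on, is the first overload: a call is evaluated as soon as it becomes well-formed.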

On 23/08/11 20:38, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
Might be handy, but I'd think the following syntax would be more useful: std::plus<int> p; auto curried = curry(p)(1); int i = curried(2); assert(i == 3); John Bytheway
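A curry() spelling like this would be a thin factory over a wrapper such as the curryable<> sketch above; curry is a hypothetical name here, not an existing Boost function (requires <type_traits> and <utility>):

    template <typename F>
    curryable<typename std::decay<F>::type> curry(F&& f)
    {
        return curryable<typename std::decay<F>::type>(std::forward<F>(f));
    }

    // std::plus<int> p;
    // auto curried = curry(p)(1);
    // int i = curried(2);   // i == 3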

John Bytheway <jbytheway+boost@gmail.com> writes:
On 23/08/11 20:38, Eric Niebler wrote:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3); Is there any interest in such a thing?
I'd be very interested, Eric! -- John Wiegley BoostPro Computing, Inc. http://www.boostpro.com

On 08/23/2011 09:38 PM, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
It could be argued that Phoenix actors should be doing this. I also asked a long time ago for an easy function to turn a pfo into a phoenix actor, which I think would make phoenix much more ubiquitous. Assuming that function was called lazy, and Phoenix did currying automatically, you could do: int i = (lazy(std::plus<int>)(1) + 4)(3); assert(i == 1+3+4)

On 08/23/2011 11:15 PM, Mathias Gaunard wrote:
On 08/23/2011 09:38 PM, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
It could be argued that Phoenix actors should be doing this.
I also asked a long time ago for an easy function to turn a pfo into a phoenix actor, which I think would make phoenix much more ubiquitous.
Assuming that function was called lazy, and Phoenix did currying automatically, you could do
int i = (lazy(std::plus<int>)(1) + 4)(3)
Sorry, this should have been int i = (lazy(std::plus<int>())(1) + 4)(3) of course. And I remember why Phoenix doesn't do currying: that's because it only works with monomorphic function objects. Detecting and propagating monomorphism could be nice though. It could eventually provide better error messages, faster compilation times, and automatic currying on monomorphic function objects.

Similar to Mathias's thought, I would see the following approach as more appropriate: boost::accumulative_function<r(a1, a2, ...)>, because essentially currying is calling the same thing repeatedly and remembering the result each time. cheers, Jinqiang ZHANG On Wed, Aug 24, 2011 at 7:21 AM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 08/23/2011 11:15 PM, Mathias Gaunard wrote:
On 08/23/2011 09:38 PM, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
It could be argued that Phoenix actors should be doing this.
I also asked a long time ago for an easy function to turn a pfo into a phoenix actor, which I think would make phoenix much more ubiquitous.
Assuming that function was called lazy, and Phoenix did currying automatically, you could do
int i = (lazy(std::plus<int>)(1) + 4)(3)
Sorry, this should have been
int i = (lazy(std::plus<int>())(1) + 4)(3) of course.
And I remember why Phoenix doesn't do currying: that's because it only works with monomorphic function objects.
Detecting and propagating monomorphism could be nice though. It could eventually provide better error messages, faster compilation times, and automatic currying on monomorphic function objects.

On 8/23/2011 5:21 PM, Mathias Gaunard wrote:
On 08/23/2011 11:15 PM, Mathias Gaunard wrote:
On 08/23/2011 09:38 PM, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
It could be argued that Phoenix actors should be doing this.
I also asked a long time ago for an easy function to turn a pfo into a phoenix actor, which I think would make phoenix much more ubiquitous.
Assuming that function was called lazy, and Phoenix did currying automatically, you could do
int i = (lazy(std::plus<int>)(1) + 4)(3)
Sorry, this should have been int i = (lazy(std::plus<int>())(1) + 4)(3) of course.
Could you explain this? I'm guessing that lazy(std::plus<int>()) creates a binary function, and that lazy(std::plus<int>())(1) binds the first argument and returns a unary function. But what does it mean to add an integer to a unary function? That doesn't make sense to me.
And I remember why Phoenix doesn't do currying: that's because it only works with monomorphic function objects.
What only works with monomorphic functions?
Detecting and propagating monomorphism could be nice though. It could eventually provide better error messages, faster compilation times, and automatic currying on monomorphic function objects.
What do monomorphic functions have to do with this? My currying code works with polymorphic functions. Sorry if I'm being dense. It's pretty late over here. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Tuesday, August 23, 2011 10:59:36 PM Eric Niebler wrote:
On 8/23/2011 5:21 PM, Mathias Gaunard wrote:
On 08/23/2011 11:15 PM, Mathias Gaunard wrote:
On 08/23/2011 09:38 PM, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
It could be argued that Phoenix actors should be doing this.
I also asked a long time ago for an easy function to turn a pfo into a phoenix actor, which I think would make phoenix much more ubiquitous.
Assuming that function was called lazy, and Phoenix did currying automatically, you could do
int i = (lazy(std::plus<int>)(1) + 4)(3)
Sorry, this should have been int i = (lazy(std::plus<int>())(1) + 4)(3) of course.
I would rather extend and reuse bind for that.
Could you explain this? I'm guessing that lazy(std::plus<int>()) creates a binary function, and that lazy(std::plus<int>())(1) binds the first argument and returns a unary function. But what does it mean to add an integer to a unary function? That doesn't make sense to me.
And I remember why Phoenix doesn't do currying: that's because it only works with monomorphic function objects.
What only works with monomorphic functions?
Detecting and propagating monomorphism could be nice though. It could eventually provide better error messages, faster compilation times, and automatic currying on monomorphic function objects.
What do monomorphic functions have to do with this? My currying code works with polymorphic functions.
I also fail to see why currying should only work on monomorphic objects. However, the problem comes with function objects having operator() overloads with different arity. Another problem I couldn't solve yet is how to decide which overload gets curried or not. Consider: struct foo { void operator()(int); void operator()(int, int); }; curryable<foo> f; auto curried = f(1); // To curry or not to curry, that is the question TBH, I haven't studied your code yet, Eric. It would be cool if it solved these problems. Additionally, I second the call to integrate it into phoenix. Using bind and placeholders is just not elegant enough. There is even a feature request about it in trac: https://svn.boost.org/trac/boost/ticket/5541
Sorry if I'm being dense. It's pretty late over here.
And here it is too early ... I will report back after studying the code. All in all, I think such a thing would be very useful!

On 8/24/2011 4:28 AM, Thomas Heller wrote:
However, the problem comes with function objects having operator() overloads with different arity. Another problem I couldn't solve yet is how to decide which overload gets curried or not. Consider:
struct foo { void operator()(int); void operator()(int, int); };
curryable<foo> f; auto curried = f(1); // To curry or not to curry, that is the question
As soon as enough arguments are collected to call the curried function, it gets called. So in this case, f(1) calls f::operator()(int).
TBH, I haven't studied your code yet, Eric. It would be cool if it solved these problems.
I doubt this approach has been tried before. It uses my is_callable_with_args hack, which is rather obscure.
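A sketch of what such a detection check can look like with expression SFINAE, applied to Thomas's foo example; this is a reconstruction for illustration, not the actual is_callable_with_args code, and foo's operators are given trivial const bodies here so the snippet compiles:

    #include <type_traits>
    #include <utility>

    template <typename F, typename... Args>
    struct is_callable_with_args
    {
        template <typename G>
        static auto test(int)
            -> decltype(std::declval<G>()(std::declval<Args>()...), std::true_type{});
        template <typename>
        static std::false_type test(...);
        static constexpr bool value = decltype(test<F>(0))::value;
    };

    struct foo
    {
        void operator()(int) const {}
        void operator()(int, int) const {}
    };

    // f(1) is already a complete call, so an "eager" curryable invokes it at once;
    // only argument lists that don't form a valid call keep currying.
    static_assert(is_callable_with_args<foo const&, int>::value, "unary call is well-formed");
    static_assert(is_callable_with_args<foo const&, int, int>::value, "binary call is well-formed");
    static_assert(!is_callable_with_args<foo const&, int, int, int>::value, "no ternary overload");

    int main() {}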
Additionally, I second the call to integrate it into phoenix. Using bind and placeholders is just not elegant enough. There is even a feature request about it in trac: https://svn.boost.org/trac/boost/ticket/5541
I'm not certain *all* phoenix lambdas should be curryable. That's not to say they shouldn't -- I'm really not sure. -- Eric Niebler BoostPro Computing http://www.boostpro.com

on Wed Aug 24 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/24/2011 4:28 AM, Thomas Heller wrote:
However, the problem comes with function objects having operator() overloads with different arity. Another problem I couldn't solve yet is how to decide which overload gets curried or not. Consider:
struct foo { void operator()(int); void operator()(int, int); };
curryable<foo> f; auto curried = f(1); // To curry or not to curry, that is the question
As soon as enough arguments are collected to call the curried function, it gets called. So in this case, f(1) calls f::operator()(int).
That's an asymmetry about most currying syntax that I never liked, at least for C++. I suppose when all functions are fully lazy there's no asymmetry, but that's not C++. In C++ we have parens to trigger evaluation. Even in Phoenix, laziness only goes partway: you still need parens to trigger final evaluation. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 8/24/2011 12:55 PM, Dave Abrahams wrote:
on Wed Aug 24 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/24/2011 4:28 AM, Thomas Heller wrote:
However, the problem comes with function objects having operator() overloads with different arity. Another problem I couldn't solve yet is how to decide which overload gets curried or not. Consider:
struct foo { void operator()(int); void operator()(int, int); };
curryable<foo> f; auto curried = f(1); // To curry or not to curry, that is the question
As soon as enough arguments are collected to call the curried function, it gets called. So in this case, f(1) calls f::operator()(int).
That's an asymmetry about most currying syntax that I never liked, at least for C++.
Could you explain what you mean by asymmetry here? That my currying code prefers one function over another based on the available arguments?
I suppose when all functions are fully lazy there's no asymmetry, but that's not C++. In C++ we have parens to trigger evaluation. Even in Phoenix, laziness only goes partway: you still need parens to trigger final evaluation.
I'm afraid I don't see your objection. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Wed, Aug 24, 2011 at 12:42 PM, Eric Niebler <eric@boostpro.com> wrote:
On 8/24/2011 12:55 PM, Dave Abrahams wrote:
on Wed Aug 24 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
[...]
As soon as enough arguments are collected to call the curried function, it gets called. So in this case, f(1) calls f::operator()(int).
That's an asymmetry about most currying syntax that I never liked, at least for C++.
Could you explain what you mean by asymmetry here? That my currying code prefers one function over another based on the available arguments?
I think the asymmetry that Dave is referring to is that some applications of a curried function just...well...curry the argument, while others actually evaluate (immediately!) the underlying function with all the previously curried arguments in addition to the given argument. So there is an asymmetry in the effects of function application. At least, that's what I immediately thought of when seeing Dave's comment. - Jeff

on Wed Aug 24 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/24/2011 12:55 PM, Dave Abrahams wrote:
on Wed Aug 24 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/24/2011 4:28 AM, Thomas Heller wrote:
However, the problem comes with function objects having operator() overloads with different arity. Another problem I couldn't solve yet is how to decide which overload gets curried or not. Consider:
struct foo { void operator()(int); void operator()(int, int); };
curryable<foo> f; auto curried = f(1); // To curry or not to curry, that is the question
As soon as enough arguments are collected to call the curried function, it gets called. So in this case, f(1) calls f::operator()(int).
That's an asymmetry about most currying syntax that I never liked, at least for C++.
Could you explain what you mean by asymmetry here? That my currying code prefers one function over another based on the available arguments?
I mean this, for a ternary function f: f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => calls f That last step looks asymmetric to me. In a lazy language, f(x)(y)(z) *doesn't* call f... until you actually use the result for something... which is more consistent-looking. I suppose the symmetrical non-lazy version looks like: f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f -- Dave Abrahams BoostPro Computing http://www.boostpro.com
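A minimal sketch of that symmetrical, explicitly triggered variant (hypothetical names; C++14 generic lambdas for brevity): applying an argument only binds it, and only the trailing empty call evaluates.

    #include <cassert>
    #include <utility>

    template <typename F>
    struct deferred
    {
        F f;

        // Applying an argument only binds it; nothing is evaluated here.
        template <typename A>
        auto operator()(A a) const
        {
            auto g = f;
            auto bound = [g, a](auto&&... rest) {
                return g(a, std::forward<decltype(rest)>(rest)...);
            };
            return deferred<decltype(bound)>{bound};
        }

        // Only the empty call actually evaluates.
        auto operator()() const { return f(); }
    };

    int main()
    {
        auto add3 = [](int x, int y, int z) { return x + y + z; };
        deferred<decltype(add3)> d{add3};

        auto bound = d(1)(2)(3);   // still not evaluated
        assert(bound() == 6);      // the trailing () triggers the call
    }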

Dave Abrahams wrote:
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
This allows you to express bind( f, x, y, z ), which was impossible before, but you've now lost the capability to express bind( f, x, y, _1 ), which was. bind( f, _1, y, z ), which is often needed in practice, is possible under neither, which makes me view this whole exercise as somewhat academic.

Peter Dimov wrote:
Dave Abrahams wrote:
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
This allows you to express bind( f, x, y, z ), which was impossible before, but you've now lost the capability to express bind( f, x, y, _1 ), which was.
My casting operator suggestion addresses this I think, though there could be something I overlooked.
bind( f, _1, y, z ), which is often needed in practice, is possible under neither, which makes me view this whole exercise as somewhat academic.
I was about to suggest a solution to that also, but I wanted to keep my post short. We would need some kind of syntax to enable binding arbitrary function arguments rather than just the first, and ideally more than one at a time as well. I don't think it would be hard to do, but it might be hard to do in a way that is very satisfying. If we could use simply _ as a placeholder argument then it seems like that would be an intuitive and elegant syntax to me. It may well be that the exercise is academic if the end result is no better than bind for the general case. Bind seems to be one of the easier boost libraries for people to pick up and use successfully. Perhaps Eric can expand upon his rationale for not using bind. Regards, Luke

on Wed Aug 24 2011, "Peter Dimov" <pdimov-AT-pdimov.com> wrote:
Dave Abrahams wrote:
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
This allows you to express bind( f, x, y, z ), which was impossible before, but you've now lost the capability to express bind( f, x, y, _1 ), which was.
bind( f, _1, y, z ), which is often needed in practice, is possible under neither, which makes me view this whole exercise as somewhat academic.
Yes, it has always been my view that bind was unambiguously better for non-lazy languages, and at least more flexible even for lazy ones. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Wednesday, August 24, 2011 02:15:03 PM Dave Abrahams wrote:
on Wed Aug 24 2011, "Peter Dimov" <pdimov-AT-pdimov.com> wrote:
Dave Abrahams wrote:
I suppose the symmetrical non-lazy version looks like: f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
This allows you to express bind( f, x, y, z ), which was impossible before, but you've now lost the capability to express bind( f, x, y, _1 ), which was.
bind( f, _1, y, z ), which is often needed in practice, is possible under neither, which makes me view this whole exercise as somewhat academic.
Yes, it has always been my view that bind was unambiguously better for non-lazy languages, and at least more flexible even for lazy ones.
I'm starting to think likewise. After all, most alternative solutions looked a lot like bind, or at least would lead to something like bind.

On 8/24/2011 6:15 PM, Dave Abrahams wrote:
on Wed Aug 24 2011, "Peter Dimov" <pdimov-AT-pdimov.com> wrote:
Dave Abrahams wrote:
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
I simply don't see the problem with: f(x) ==> curry f(x, y) or f(x)(y) ==> curry f(x, y, z) or etc. ==> evaluate Can you say specifically what you think the problem is? This is how it's done in Haskell, for instance (although Haskell doesn't use parens around argument lists).
This allows you to express bind( f, x, y, z ), which was impossible before, but you've now lost the capability to express bind( f, x, y, _1 ), which was.
bind( f, _1, y, z ), which is often needed in practice, is possible under neither, which makes me view this whole exercise as somewhat academic.
I don't see any technical obstacles to: f(_, y, z) where _ is a placeholder. You could use positional placeholders for argument reordering.
Yes, it has always been my view that bind was unambiguously better for non-lazy languages, and at least more flexible even for lazy ones.
Better and more flexible in what ways? -- Eric Niebler BoostPro Computing http://www.boostpro.com
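A minimal sketch of that placeholder idea (hypothetical names; only a leading placeholder and a fixed arity of three are handled, and C++14 is assumed for brevity): a complete argument list evaluates immediately, while a leading _ turns the call into partial application.

    #include <cassert>
    #include <type_traits>
    #include <utility>

    struct placeholder {};
    constexpr placeholder _{};   // a real library would put this in a namespace

    template <typename F>
    struct partial_callable
    {
        F f;

        template <typename A, typename B, typename C>
        auto operator()(A&& a, B&& b, C&& c) const
        {
            return dispatch(std::is_same<typename std::decay<A>::type, placeholder>{},
                            std::forward<A>(a), std::forward<B>(b), std::forward<C>(c));
        }

    private:
        // First argument is the placeholder: defer, returning a unary function object.
        template <typename A, typename B, typename C>
        auto dispatch(std::true_type, A&&, B&& b, C&& c) const
        {
            auto g = f;
            auto b2 = b;
            auto c2 = c;
            return [g, b2, c2](auto&& a) { return g(std::forward<decltype(a)>(a), b2, c2); };
        }

        // No placeholder: the call is complete, evaluate immediately.
        template <typename A, typename B, typename C>
        auto dispatch(std::false_type, A&& a, B&& b, C&& c) const
        {
            return f(std::forward<A>(a), std::forward<B>(b), std::forward<C>(c));
        }
    };

    int main()
    {
        auto digits = [](int x, int y, int z) { return x * 100 + y * 10 + z; };
        partial_callable<decltype(digits)> f{digits};

        assert(f(1, 2, 3) == 123);   // complete call: evaluated immediately
        auto g = f(_, 2, 3);         // placeholder: partial application
        assert(g(1) == 123);         // the missing first argument arrives later
    }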

on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/24/2011 6:15 PM, Dave Abrahams wrote:
on Wed Aug 24 2011, "Peter Dimov" <pdimov-AT-pdimov.com> wrote:
Dave Abrahams wrote:
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
I simply don't see the problem with:
f(x) ==> curry f(x, y) or f(x)(y) ==> curry f(x, y, z) or etc. ==> evaluate
Can you say specifically what you think the problem is?
It just rubs me the wrong way that sometimes passing arguments to a function has no execution semantics, but if you happen to complete an argument list, suddenly it's called.
This is how it's done in Haskell, for instance (although Haskell doesn't use parens around argument lists).
Well, no, it's not exactly how it's done in Haskell, because in Haskell f is lazy, and that's what makes the difference. You probably know this, but I can pass: (f x y z) -- which would be f(x, y, z) in C++ to another function that will then throw it away, and f will never be invoked. let oar x y = if x then x else y if I invoke: oar True (f x y z) -- analogous to oar(True, f(x, y, z)) in C++ You can prove this to yourself by invoking putStrLn in f.
Yes, it has always been my view that bind was unambiguously better for non-lazy languages, and at least more flexible even for lazy ones.
Better and more flexible in what ways?
It's better because it avoids confusion about when things are invoked. It's more flexible because it allows the 2nd argument to be bound before the first argument is available. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 8/25/2011 4:16 PM, Dave Abrahams wrote:
on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
I simply don't see the problem with:
f(x) ==> curry f(x, y) or f(x)(y) ==> curry f(x, y, z) or etc. ==> evaluate
Can you say specifically what you think the problem is?
It just rubs me the wrong way that sometimes passing arguments to a function has no execution semantics, but if you happen to complete an argument list, suddenly it's called.
OK. You say tomahto.
This is how it's done in Haskell, for instance (although Haskell doesn't use parens around argument lists).
Well, no, it's not exactly how it's done in Haskell, because in Haskell f is lazy, and that's what makes the difference.
You probably know this, but I can pass:
(f x y z) -- which would be f(x, y, z) in C++
to another function that will then throw it away, and f will never be invoked.
let oar x y = if x then x else y
if I invoke:
oar True (f x y z) -- analogous to oar(True,f(x y z)) in C++
You can prove this to yourself by invoking putStrLn in f.
Yes, I know. I don't consider that difference particularly relevant to this discussion, though. If (f x y z) type-checks as Int and you pass it as an argument to a function that takes an Int, it type-checks. If you pass (f x y) as an argument to a function that takes a unary-function-that-returns-Int, it type-checks.
Yes, it has always been my view that bind was unambiguously better for non-lazy languages, and at least more flexible even for lazy ones.
Better and more flexible in what ways?
It's better because it avoids confusion about when things are invoked.
Meh.
It's more flexible because it allows the 2nd argument to be bound before the first argument is available.
You snipped the part of my message where I said that that problem is easily solvable, and now you're pretending I never said it. :-/ -- Eric Niebler BoostPro Computing http://www.boostpro.com

on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/25/2011 4:16 PM, Dave Abrahams wrote:
on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
I simply don't see the problem with:
f(x) ==> curry f(x, y) or f(x)(y) ==> curry f(x, y, z) or etc. ==> evaluate
Can you say specifically what you think the problem is?
It just rubs me the wrong way that sometimes passing arguments to a function has no execution semantics, but if you happen to complete an argument list, suddenly it's called.
OK. You say tomahto.
Let's call the whole thing eagerly!
This is how it's done in Haskell, for instance (although Haskell doesn't use parens around argument lists).
Well, no, it's not exactly how it's done in Haskell, because in Haskell f is lazy, and that's what makes the difference.
Yes, I know. I don't consider that difference particularly relevant to this discussion, though. If (f x y z) type-checks as Int and you pass it as an argument to a function that takes an Int, it type-checks. If you pass (f x y) as an argument to a function that takes a unary-function-that-returns-Int, it type-checks.
Yeah... but in C++ we generally distinguish nullary functions from their results, and we generally care when things are invoked.
Yes, it has always been my view that bind was unambiguously better for non-lazy languages, and at least more flexible even for lazy ones.
Better and more flexible in what ways?
It's better because it avoids confusion about when things are invoked.
Meh.
That's an argument?
It's more flexible because it allows the 2nd argument to be bound before the first argument is available.
You snipped the part of my message where I said that that problem is easily solvable, and now you're pretending I never said it. :-/
Please, can't we at least make a show of assuming we're discussing this with general goodwill? I didn't notice you saying it was easily solvable, nor did I notice a proposed solution, so I'm not pretending. If you said it, great. How do you solve it? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 8/26/2011 1:10 PM, Dave Abrahams wrote:
on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/25/2011 4:16 PM, Dave Abrahams wrote:
on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
I simply don't see the problem with:
f(x) ==> curry f(x, y) or f(x)(y) ==> curry f(x, y, z) or etc. ==> evaluate
Can you say specifically what you think the problem is?
It just rubs me the wrong way that sometimes passing arguments to a function has no execution semantics, but if you happen to complete an argument list, suddenly it's called.
OK. You say tomahto.
Let's call the whole thing eagerly!
This is how it's done in Haskell, for instance (although Haskell doesn't use parens around argument lists).
Well, no, it's not exactly how it's done in Haskell, because in Haskell f is lazy, and that's what makes the difference.
Yes, I know. I don't consider that difference particularly relevant to this discussion, though. If (f x y z) type-checks as Int and you pass it as an argument to a function that takes an Int, it type-checks. If you pass (f x y) as an argument to a function that takes a unary-function-that-returns-Int, it type-checks.
Yeah... but in C++ we generally distinguish nullary functions from their results, and we generally care when things are invoked.
That's true. But nobody wants to curry a nullary function, so I guess you're arguing by extension that we should likewise be concerned about potential confusion when more arguments are involved. And I still don't see what is confusing about it. Can you give an example where plausible code gives surprising results because of implicit currying instead of explicit binding?
Yes, it has always been my view that bind was unambiguously better for non-lazy languages, and at least more flexible even for lazy ones.
Better and more flexible in what ways?
It's better because it avoids confusion about when things are invoked.
Meh.
That's an argument?
About as convincing an argument as "it rubs me the wrong way." ;-)
It's more flexible because it allows the 2nd argument to be bound before the first argument is available.
You snipped the part of my message where I said that that problem is easily solvable, and now you're pretending I never said it. :-/
Please, can't we at least make a show of assuming we're discussing this with general goodwill? I didn't notice you saying it was easily solvable, nor did I notice a proposed solution, so I'm not pretending. If you said it, great. How do you solve it?
Sorry, I regretted that as soon as I clicked send. Damn fingers. Here's the bit you snipped: Eric Niebler wrote:
I don't see any technical obstacles to:
f(_, y, z)
where _ is a placeholder. You could use positional placeholders for argument reordering.
You might say the syntactic advantages of this over bind(f, _, y, z) are minimal. You might be right. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Fri, Aug 26, 2011 at 7:31 PM, Eric Niebler <eric@boostpro.com> wrote:
On 8/26/2011 1:10 PM, Dave Abrahams wrote:
on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/25/2011 4:16 PM, Dave Abrahams wrote:
on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote: <snip> Eric Niebler wrote: I don't see any technical obstacles to:
f(_, y, z)
where _ is a placeholder. You could use positional placeholders for argument reordering. You might say the syntactic advantages of this over bind(f, _, y, z) are minimal. You might be right.
Exactly my thinking. I additionally have some problems in thinking about the possible semantics: auto g = f(_, y, z); // pretend that f is a curryable function auto h = g(1); // same as f(1, y, z)? what if f had a 4th argument? As much as I like the idea, I can almost always construct trivial use cases where the proposed syntax is ambiguous.

On 8/26/2011 1:46 PM, Thomas Heller wrote:
I additionally have some problems in thinking about possible semantics:
auto g = f(_, y, z); // pretend that f is a curryable function
If f has a 4th argument, the above returns a binary function and ...
auto h = g(1); // same as f(1, y, z)? what if f had a 4th argument?
... that returns a unary function, same as f(1, y, z).
As much as I like the idea, I can almost always construct trivial use cases where the proposed syntax is ambiguous.
It's not ambiguous, AFAICT. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Fri, Aug 26, 2011 at 8:51 PM, Eric Niebler <eric@boostpro.com> wrote:
On 8/26/2011 1:46 PM, Thomas Heller wrote:
I additionally have some problems in thinking about possible semantics:
auto g = f(_, y, z); // pretend that f is a curryable function
If f has a 4th argument, the above returns a binary function and ...
auto h = g(1); // same as f(1, y, z)? what if f had a 4th argument?
... that returns a unary function, same as f(1, y, z).
As much as I like the idea, I can almost always construct trivial use cases where the proposed syntax is ambiguous.
It's not ambiguous, AFAICT.
Right ... ambiguous is the wrong word here ... it's just that the semantics have to be defined, and they might not be what you expect ... So how is "painless currying" different from bind? To me it looks like they both have the same effect of partial function application. Bind is more verbose, and is thus easier to understand at first sight. While painless currying looks like a very clean and elegant solution to the same problem, it can lead to some undesired effects. The question now is: do we really need this?

on Fri Aug 26 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/26/2011 1:10 PM, Dave Abrahams wrote:
Yeah... but in C++ we generally distinguish nullary functions from their results, and we generally care when things are invoked.
That's true. But nobody wants to curry a nullary function, so I guess you're arguing by extension that we should likewise be concerned about potential confusion when more arguments are involved. And I still don't see what is confusing about it. Can you give an example where plausible code gives surprising results because of implicit currying instead of explicit binding?
When the function object takes a variable number (or two different numbers) of arguments, it's not clear when you're supposed to invoke it.
Yes, it has always been my view that bind was unambiguously better for non-lazy languages, and at least more flexible even for lazy ones.
Better and more flexible in what ways?
It's better because it avoids confusion about when things are invoked.
Meh.
That's an argument?
About as convincing an argument as "it rubs me the wrong way." ;-)
My argument, though subjective and aesthetic, at least is not content-free: I describe what about the situation gives me the willies.
Eric Niebler wrote:
I don't see any technical obstacles to:
f(_, y, z)
where _ is a placeholder. You could use positional placeholders for argument reordering.
That's more general than straight currying, and it's clearer: f(x,_,_) is a curried version of f that is a bit more explicit about what it means than just writing f(x).
You might say the syntactic advantages of this over bind(f, _, y, z) are minimal. You might be right.
I wouldn't say that. I like the placeholder syntax; it's similar to what MPL does, and frankly I had assumed we already had libraries (phoenix?) that did things this way. The only question is what happens when you write f(x,y,z). IIUC, phoenix leaves you with a nullary function, but you're proposing to actually call f. I'm just not sure that's quite as appropriate in C++ as it is in Haskell. Suppose you *wanted* a nullary function object (e.g. so you could launch a thread with it) rather than its result. Then you need to switch to a different syntax for the final argument? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 8/26/2011 2:05 PM, Dave Abrahams wrote:
Eric Niebler wrote:
I don't see any technical obstacles to:
f(_, y, z)
where _ is a placeholder. You could use positional placeholders for argument reordering.
That's more general than straight currying, and it's clearer: f(x,_,_) is a curried version of f that is a bit more explicit about what it means than just writing f(x).
You might say the syntactic advantages of this over bind(f, _, y, z) are minimal. You might be right.
I wouldn't say that. I like the placeholder syntax; it's similar to what MPL does, and frankly I had assumed we already had libraries (phoenix?) that did things this way.
All the library solutions we have so far involve an explicit call to a function named "bind" or similar.
The only question is what happens when you write f(x,y,z). IIUC, phoenix leaves you with a nullary function, but you're proposing to actually call f. I'm just not sure that's quite as appropriate in C++ as it is in Haskell. Suppose you *wanted* a nullary function object (e.g. so you could launch a thread with it) rather than its result. Then you need to switch to a different syntax for the final argument?
Ah, thank you. Good point. I think I'll let this issue drop now. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On 08/25/11 15:16, Dave Abrahams wrote:
on Thu Aug 25 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/24/2011 6:15 PM, Dave Abrahams wrote:
on Wed Aug 24 2011, "Peter Dimov" <pdimov-AT-pdimov.com> wrote:
Dave Abrahams wrote:
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
I simply don't see the problem with:
f(x) ==> curry f(x, y) or f(x)(y) ==> curry f(x, y, z) or etc. ==> evaluate
Can you say specifically what you think the problem is?
It just rubs me the wrong way that sometimes passing arguments to a function has no execution semantics, but if you happen to complete an argument list, suddenly it's called.
[snip] Since an n-dimensional array can be thought of as a function of n indexes, I would think then that subscripting multi-dimensional arrays would rub you the wrong way: int f[1][1][1]; f[0] => doesn't return an int f[0][0] => doesn't return an int f[0][0][0] => *does* return an int Instead, would you prefer the following symmetrical version? f[0] => doesn't return an int f[0][0] => doesn't return an int f[0][0][0] => doesn't return an int f[0][0][0]() => *does* return an int -regards, Larry

on Fri Aug 26 2011, Larry Evans <cppljevans-AT-suddenlink.net> wrote:
Since an n-dimensional array can be thought of as a function of n indexes, I would think then that subscripting multi-dimensional arrays would rub you the wrong way:
int f[1][1][1];
f[0] => doesn't return an int f[0][0] => doesn't return an int f[0][0][0] => *does* return an int
Instead, would you prefer the following symmetrical version?
f[0] => doesn't return an int f[0][0] => doesn't return an int f[0][0][0] => doesn't return an int f[0][0][0]() => *does* return an int
No, sorry, but that's just silly. Array indexing is entirely consistent: f[0] => returns int(&)[1][1] f[0][0] => returns int(&)[1] f[0][0][0] => returns int& -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 08/26/11 15:24, Dave Abrahams wrote:
on Fri Aug 26 2011, Larry Evans <cppljevans-AT-suddenlink.net> wrote:
Since an n-dimensional array can be thought of as a function of n indexes, I would think then that subscripting multi-dimensional arrays would rub you the wrong way:
int f[1][1][1];
f[0] => doesn't return an int f[0][0] => doesn't return an int f[0][0][0] => *does* return an int
Instead, would you prefer the following symmetrical version?
f[0] => doesn't return an int f[0][0] => doesn't return an int f[0][0][0] => doesn't return an int f[0][0][0]() => *does* return an int
No, sorry, but that's just silly. Array indexing is entirely consistent:
f[0] => returns int(&)[1][1] f[0][0] => returns int(&)[1] f[0][0][0] => returns int&
Then so is (using haskell notation): f :: X -> Y -> Z -> int f x => returns Y -> Z -> int f x y => returns Z -> int f x y z => returns int which is Eric's point. IOW, when supplied with enough arguments, the function returns the result_type of the function, just as, when a multi-dimensional array is supplied enough arguments (or indices) it returns the value type of the array. -Larry

AMDG On 08/26/2011 02:51 PM, Larry Evans wrote:
On 08/26/11 15:24, Dave Abrahams wrote:
No, sorry, but that's just silly. Array indexing is entirely consistent:
f[0] => returns int(&)[1][1] f[0][0] => returns int(&)[1] f[0][0][0] => returns int&
Then so is (using haskell notation):
f :: X -> Y -> Z -> int
f x => returns Y -> Z -> int f x y => returns Z -> int f x y z => returns int
which is Eric's point. IOW, when supplied with enough arguments, the function returns the result_type of the function, just as, when a multi-dimensional array is supplied enough arguments (or indices) it returns the value type of the array.
That's just how functions that take multiple arguments work in Haskell. It isn't how C++ works. Trying to create the same behavior in a language that supports multi-argument functions directly seems like it would cause more confusion than good. In Christ, Steven Watanabe

Larry Evans <cppljevans <at> suddenlink.net> writes:
Then so is (using haskell notation):
f :: X -> Y -> Z -> int
f x => returns Y -> Z -> int f x y => returns Z -> int f x y z => returns int
which is Eric's point. IOW, when supplied with enough arguments, the function returns the result_type of the function, just as, when a multi-dimensional array is supplied enough arguments (or indices) it returns the value type of the array.
Exactly. All this applies to return types only. It doesn't apply at all to when the function will actually be evaluated. And this is essentially Dave's point (IIUC). Consider printf, for example, either the vararg or the variadic template version. Or any other function with default arguments. What should those look like with automatic currying/evaluation based on the number of arguments provided? Also, it's unclear even when we have finally passed the final argument - do we want evaluation, or do we want a nullary function? In Haskell, everything is lazy and functions are pure, so it doesn't matter when and how many times you evaluate them. C++ is very different here. I'm on Dave's side here: we should be explicit about the time when we want to trigger actual evaluation. Thanks, Maxim

On 5 September 2011 12:10, Maxim Yanchenko
I'm on Dave's side here: we should be explicit about the time when we want to trigger actual evaluation.
The problem is that existing generic algorithms won't trigger evaluation, so currying couldn't be used with, say, std::transform. That's why I would make partial application explicit, with a normal function call causing evaluation. But I think that was considered too verbose.

On Mon, Sep 5, 2011 at 4:28 AM, Daniel James <dnljms@gmail.com> wrote:
On 5 September 2011 12:10, Maxim Yanchenko
I'm on Dave's side here: we should be explicit about the time when we
want to
trigger actual evaluation.
The problem is that existing generic algorithms won't trigger evaluation, so currying couldn't be used with, say, std::transform. That's why I would make partial application explicit, with a normal function call causing evaluation. But I think that was considered too verbose.
If you want to be explicit, why not curryN then? curry3(f)(x)(y) -> not yet evaluated... curry3(f)(x)(y)(z) -> evaluated There is still Dave's symmetry problem, but at least the evaluation behavior is clear. Maybe, also, curry3(f)(x,y)(z) == curry3(f)(x)(y)(z) and the like. - Jeff
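A sketch of what an explicit-arity curryN could look like (curry3 is hypothetical; C++14 for brevity; the grouped curry3(f)(x,y)(z) form is omitted to keep it short):

    #include <cassert>

    template <typename F>
    auto curry3(F f)
    {
        return [f](auto x) {
            return [f, x](auto y) {
                return [f, x, y](auto z) { return f(x, y, z); };
            };
        };
    }

    int main()
    {
        auto add3 = [](int a, int b, int c) { return a + b + c; };
        auto c = curry3(add3);

        auto partial = c(1)(2);    // not yet evaluated
        assert(partial(3) == 6);   // exactly the third application evaluates
    }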

On Mon, Sep 5, 2011 at 1:28 PM, Daniel James <dnljms@gmail.com> wrote:
On 5 September 2011 12:10, Maxim Yanchenko
I'm on Dave's side here: we should be explicit about the time when we want to trigger actual evaluation.
The problem is that existing generic algorithms won't trigger evaluation, so currying couldn't be used with, say, std::transform. That's why I would make partial application explicit, with a normal function call causing evaluation. But I think that was considered too verbose.
Actually, that is not the case. If you take phoenix, bind or lambda and call the resulting functor with not enough arguments, it will give you a pretty ugly error. And this is what currying is about ... returning another function object if the supplied arguments aren't enough to evaluate the lazy function object. What you mean is supplying too many arguments, which won't be affected.

On 5 September 2011 19:11, Thomas Heller <thom.heller@googlemail.com> wrote:
On Mon, Sep 5, 2011 at 1:28 PM, Daniel James <dnljms@gmail.com> wrote:
On 5 September 2011 12:10, Maxim Yanchenko
I'm on Dave's side here: we should be explicit about the time when we want to trigger actual evaluation.
The problem is that existing generic algorithms won't trigger evaluation, so currying couldn't be used with, say, std::transform. That's why I would make partial application explicit, with a normal function call causing evaluation. But I think that was considered too verbose.
Actually, that is not the case.
It is if you have to be explicit about the time when you want to trigger actual evaluation.

on Mon Sep 05 2011, Thomas Heller <thom.heller-AT-googlemail.com> wrote:
On Mon, Sep 5, 2011 at 1:28 PM, Daniel James <dnljms@gmail.com> wrote:
On 5 September 2011 12:10, Maxim Yanchenko
I'm on Dave's side here: we should be explicit about the time when we want to trigger actual evaluation.
The problem is that existing generic algorithms won't trigger evaluation, so currying couldn't be used with, say, std::transform. That's why I would make partial application explicit, with a normal function call causing evaluation. But I think that was considered too verbose.
Actually, that is not the case. If you take phoenix, bind or lambda and call the resulting functor with not enough arguments, it will give you a pretty ugly error. And this is what currying is about ... returning another function object if the supplied arguments aren't enough to evaluate the lazy function object. What you mean is supplying too many arguments, which won't be affected.
I don't think you're understanding what he means at all. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Monday, September 05, 2011 11:12:17 AM Dave Abrahams wrote:
on Mon Sep 05 2011, Thomas Heller <thom.heller-AT-googlemail.com> wrote:
On Mon, Sep 5, 2011 at 1:28 PM, Daniel James <dnljms@gmail.com> wrote:
On 5 September 2011 12:10, Maxim Yanchenko
I'm on Dave's side here: we should be explicit about the time when we want to trigger actual evaluation.
The problem is that existing generic algorithms won't trigger evaluation, so currying couldn't be used with, say, std::transform. That's why I would make partial application explicit, with a normal function call causing evaluation. But I think that was considered too verbose.
Actually, that is not the case. If you take phoenix, bind or lambda and call the resulting functor with not enough arguments, it will give you a pretty ugly error. And this is what currying is about ... returning another function object if the supplied arguments aren't enough to evaluate the lazy function object. What you mean is supplying too many arguments, which won't be affected.
I don't think you're understanding what he means at all.
Sorry, missed the part about triggering the evaluation explicitly.

On 26/08/2011 15:04, Larry Evans wrote:
Since an n-dimensional array can be thought of as a function of n indexes, I would think then that subscripting multi-dimensional arrays would rub you the wrong way:
int f[1][1][1];
f[0] => doesn't return an int f[0][0] => doesn't return an int f[0][0][0] => *does* return an int
-regards, Larry
Now that you mention this, I have considered the same issue currently being discussed when trying to implement a multi-dimensional subscript operator facade. Initially I started invoking the function as soon as there were enough arguments (using Eric's technique); but then I considered several overloads with different numbers of arguments, and the only solution I could come up with was invoking the underlying function on a conversion operator, which would do more harm than good (especially considering C++0x auto). Eventually I simply had to discard this use case scenario. Then for curried functions, explicit invocation via () seems like a viable solution to the problem. Agustín K-ballo Bergé.- http://talesofcpp.blogspot.com

On 08/26/11 17:56, Agustín K-ballo Bergé wrote:
On 26/08/2011 15:04, Larry Evans wrote:
Since an n-dimensional array can be thought of as a function of n indexes, I would think then that subscripting multi-dimensional arrays would rub you the wrong way:
int f[1][1][1];
f[0] => doesn't return an int f[0][0] => doesn't return an int f[0][0][0] => *does* return an int
-regards, Larry
Now that you mention this, I have considered the same issue currently being discussed when trying to implement a multi-dimensional subscript operator facade. Initially I started invoking the function as soon as there were enough arguments (using Eric's technique); but then I considered several overloads with different numbers of arguments, and the only solution I could come up with was invoking the underlying function on a conversion operator, which would do more harm than good (especially considering C++0x auto). Eventually I simply had to discard this use case scenario. Then for curried functions, explicit invocation via () seems like a viable solution to the problem.
I encountered a somewhat similar problem when implementing a multi-dimensional array where the number of dimensions was determined only at run-time. Thus, the number of indices needed to get an actual value out instead of a subarray was unknown until runtime. As a workaround, both: operator[](unsigned I) and: operator()() were defined for the array. The latter just assumed all the remaining indices were 0 and returned that result. The former (the operator[]) returned a subarray. I don't think there was any check for too many indices (it was just a prototype). Code is here: http://svn.boost.org/svn/boost/sandbox/variadic_templates/sandbox/stepper/bo... -regards, Larry

On Thu, Aug 25, 2011 at 6:21 PM, Eric Niebler <eric@boostpro.com> wrote:
On 8/24/2011 6:15 PM, Dave Abrahams wrote:
on Wed Aug 24 2011, "Peter Dimov" <pdimov-AT-pdimov.com> wrote:
Dave Abrahams wrote:
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
I simply don't see the problem with:
f(x) ==> curry f(x, y) or f(x)(y) ==> curry f(x, y, z) or etc. ==> evaluate
Can you say specifically what you think the problem is? This is how it's done in Haskell, for instance (although Haskell doesn't use parens around argument lists).
This allows you to express bind( f, x, y, z ), which was impossible before, but you've now lost the capability to express bind( f, x, y, _1 ), which was.
bind( f, _1, y, z ), which is often needed in practice, is possible under neither, which makes me view this whole exercise as somewhat academic.
I don't see any technical obstacles to:
f(_, y, z)
where _ is a placeholder. You could use positional placeholders for argument reordering.
I'm a big fan of this syntax! +1 for both implicit currying and _ placeholders! -- gpd

Dave Abrahams wrote:
on Wed Aug 24 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/24/2011 12:55 PM, Dave Abrahams wrote:
on Wed Aug 24 2011, Eric Niebler <eric-AT-boostpro.com> wrote:
On 8/24/2011 4:28 AM, Thomas Heller wrote:
However, the problem comes with function objects having operator() overloads with different arity. Another problem I couldn't solve yet is how to decide which overload gets curried or not. Consider:
struct foo { void operator()(int); void operator()(int, int); };
curryable<foo> f; auto curried = f(1); // To curry or not to curry, that is the question
As soon as enough arguments are collected to call the curried function, it gets called. So in this case, f(1) calls f::operator()(int).
That's an asymmetry about most currying syntax that I never liked, at least for C++.
Could you explain what you mean by asymmetry here? That my currying code prefers one function over another based on the available arguments?
I mean this, for a ternary function f:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => calls f
That last step looks asymmetric to me.
In a lazy language, f(x)(y)(z) *doesn't* call f... until you actually use the result for something... which is more consistent-looking.
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
I tend to agree, but can't we have both? What if the lazy f implicitly converts to its return type through a casting operator? This way we can pass the result of f(x)(y)(z) around as a function object, but make lazy execution of the function object implicit. f(x) => doesn't call f f(x)(y) => doesn't call f result_type result = f(x)(y)(z) => calls f through operator result_type() If f had a void return type, I guess we would have to force its call with operator(), and use of operator() would otherwise be optional. I suppose forgetting to force the call would be error-prone if people are used to relying on the implicit conversion to force the function call. In general I see no reason not to make a nullary function object castable to its operator() return type; does the loss of type safety hurt us in this case? I like the simplicity of not having the extra () in the simple case where I want to get its result immediately. It may be more intuitive also, but I guess it could do more harm than good if it leads to astonishment. Just thinking out loud, Luke
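A small sketch of that conversion-operator idea (hypothetical names, C++11): the fully bound object is still a nullary function object, but it also converts implicitly to the result type, and that conversion performs the call.

    #include <cassert>
    #include <functional>
    #include <utility>

    template <typename F, typename A, typename B>
    struct bound_call
    {
        typedef decltype(std::declval<F const&>()(std::declval<A const&>(),
                                                  std::declval<B const&>())) result_type;
        F f;
        A a;
        B b;

        // Explicit invocation is still available (e.g. for launching a thread) ...
        result_type operator()() const { return f(a, b); }

        // ... and implicit conversion to the result type also evaluates the call.
        operator result_type() const { return f(a, b); }
    };

    int main()
    {
        bound_call<std::plus<int>, int, int> c{std::plus<int>(), 1, 2};

        int i = c;            // evaluated through the conversion operator
        assert(i == 3);
        assert(c() == 3);     // the explicit nullary call still works
    }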

On Aug 24, 2011, at 1:38 PM, Dave Abrahams wrote:
I mean this, for a ternary function f:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => calls f
That last step looks asymmetric to me.
In a lazy language, f(x)(y)(z) *doesn't* call f... until you actually use the result for something... which is more consistent-looking.
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
What about using [] for currying and () for calling? f[x] => returns binary function f[x][y] => returns unary function f[x][y][z] => returns nullary function All of these have the same effect: f(x, y, z) f[x](y, z) f[x][y](z) f[x][y][z]() Josh
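A minimal sketch of that split (hypothetical names; C++14 for brevity): operator[] always binds exactly one argument and never evaluates, while operator() always evaluates with whatever arguments remain.

    #include <cassert>
    #include <utility>

    template <typename F>
    struct bracket_curry
    {
        F f;

        // operator[] curries exactly one argument.
        template <typename A>
        auto operator[](A a) const
        {
            auto g = f;
            auto bound = [g, a](auto&&... rest) {
                return g(a, std::forward<decltype(rest)>(rest)...);
            };
            return bracket_curry<decltype(bound)>{bound};
        }

        // operator() always evaluates.
        template <typename... Args>
        auto operator()(Args&&... args) const
        {
            return f(std::forward<Args>(args)...);
        }
    };

    int main()
    {
        auto add3 = [](int x, int y, int z) { return x + y + z; };
        bracket_curry<decltype(add3)> f{add3};

        assert(f(1, 2, 3) == 6);
        assert(f[1](2, 3) == 6);
        assert(f[1][2](3) == 6);
        assert(f[1][2][3]() == 6);
    }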

On Wed, Aug 24, 2011 at 6:41 PM, Joshua Juran <jjuran@gmail.com> wrote:
On Aug 24, 2011, at 1:38 PM, Dave Abrahams wrote:
I mean this, for a ternary function f:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => calls f
That last step looks asymmetric to me.
In a lazy language, f(x)(y)(z) *doesn't* call f... until you actually use the result for something... which is more consistent-looking.
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
What about using [] for currying and () for calling?
f[x] => returns binary function f[x][y] => returns unary function f[x][y][z] => returns nullary function
All of these have the same effect:
f(x, y, z) f[x](y, z) f[x][y](z) f[x][y][z]()
My vote is to Keep It Simple: use a "curry" function to "curry-fy" an immediate (i.e., normal C++) function F, with each application of "(arg)" directly either (a) evaluating F with the current argument together with all the previously curried arguments (if possible); or (b) currying the argument. This was Eric's original proposal, as far as I could tell. Just add to this a family of "curryN" ("curry2", "curry3", ...) functions that "curry-fy" an immediate function *and* fix its arity to N; thus, the first N-1 "(arg)" applications would curry the argument. - Jeff

Le 25/08/11 03:41, Joshua Juran a écrit :
On Aug 24, 2011, at 1:38 PM, Dave Abrahams wrote:
I mean this, for a ternary function f:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => calls f
That last step looks asymmetric to me.
In a lazy language, f(x)(y)(z) *doesn't* call f... until you actually use the result for something... which is more consistent-looking.
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f f(x)(y) => doesn't call f f(x)(y)(z) => doesn't call f f(x)(y)(z)() => calls f
What about using [] for currying and () for calling?
f[x] => returns binary function f[x][y] => returns unary function f[x][y][z] => returns nullary function
All of these have the same effect:
f(x, y, z) f[x](y, z) f[x][y](z) f[x][y][z]()
I like it. This lets us see a curryable as a functor that maps arguments to new functors. Which operator could reflect this mapping better than the subscript operator? It solves the asymmetry issue and the variadic one as well. Best, Vicente

Could you explain what you mean by asymmetry here? That my currying code prefers one function over another based on the available arguments?
I mean this, for a ternary function f:
f(x) => doesn't call f
f(x)(y) => doesn't call f
f(x)(y)(z) => calls f
That last step looks asymmetric to me.
In a lazy language, f(x)(y)(z) *doesn't* call f... until you actually use the result for something... which is more consistent-looking.
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f
f(x)(y) => doesn't call f
f(x)(y)(z) => doesn't call f
f(x)(y)(z)() => calls f
... which makes it lazy again :-P Regards Hartmut --------------- http://boost-spirit.com

on Thu Aug 25 2011, "Hartmut Kaiser" <hartmut.kaiser-AT-gmail.com> wrote:
Could you explain what you mean by asymmetry here? That my currying code prefers one function over another based on the available arguments?
I mean this, for a ternary function f:
f(x) => doesn't call f
f(x)(y) => doesn't call f
f(x)(y)(z) => calls f
That last step looks asymmetric to me.
In a lazy language, f(x)(y)(z) *doesn't* call f... until you actually use the result for something... which is more consistent-looking.
I suppose the symmetrical non-lazy version looks like:
f(x) => doesn't call f
f(x)(y) => doesn't call f
f(x)(y)(z) => doesn't call f
f(x)(y)(z)() => calls f
... which makes it lazy again :-P
Not in the sense that I was using "lazy." I mean "lazy" in the sense that there's no syntactic distinction between f(x) and its result, but the computation only executes when it's finally needed. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 08/24/2011 06:55 PM, Dave Abrahams wrote:
That's an asymmetry about most currying syntax that I never liked, at least for C++. I suppose when all functions are fully lazy there's no assymmetry, but that's not C++. In C++ we have parens to trigger evaluation. Even in Phoenix, laziness only goes partway: you still need parens to trigger final evaluation.
ML-based languages are not lazy and have had currying for 40 years. Yet the academics behind functional programming, lambda calculus, theorem proving and logic still seem to fancy them a lot.

On 24 August 2011 23:11, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 08/24/2011 06:55 PM, Dave Abrahams wrote:
That's an asymmetry about most currying syntax that I never liked, at least for C++. I suppose when all functions are fully lazy there's no assymmetry, but that's not C++. In C++ we have parens to trigger evaluation. Even in Phoenix, laziness only goes partway: you still need parens to trigger final evaluation.
ML-based languages are not lazy and have had currying for 40 years. Yet the academics behind functional programming, lambda calculus, theorem proving and logic still seem to fancy them a lot.
But ML doesn't have C++ style overloading. Currying certain overloaded functions is ambiguous. Eric's implementation disambiguates by preferring the original function with the least number of arguments, but allows multiple arguments to be passed at once (implicit uncurrying?) so that you can skip over an overload. I don't think the 'academics' would like that much. But regardless of such nit-picking, my issue with currying in C++ is that what we currently consider to be a safe API change (adding an unambiguous overload or giving a parameter a default value) will become a breaking change.
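A concrete illustration of that breaking change (the overloads and the curry wrapper here are hypothetical):

// Today: a single binary overload, so currying is unambiguous.
struct scale
{
    int operator()(int x, int factor) const { return x * factor; }
};
// curry(scale())(2)      -> partial application, waiting for 'factor'
// curry(scale())(2)(10)  -> 20

// Later someone adds a perfectly "safe" unary convenience overload:
struct scale_v2
{
    int operator()(int x, int factor) const { return x * factor; }
    int operator()(int x) const { return x * 2; }
};
// curry(scale_v2())(2)   -> now a complete call returning 4 immediately;
//                           every caller that relied on getting a partial
//                           application back silently changes meaning.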

On 25/08/2011 01:18, Daniel James wrote:
On 24 August 2011 23:11, Mathias Gaunard<mathias.gaunard@ens-lyon.org> wrote:
On 08/24/2011 06:55 PM, Dave Abrahams wrote:
That's an asymmetry about most currying syntax that I never liked, at least for C++. I suppose when all functions are fully lazy there's no assymmetry, but that's not C++. In C++ we have parens to trigger evaluation. Even in Phoenix, laziness only goes partway: you still need parens to trigger final evaluation.
ML-based languages are not lazy and have had currying for 40 years. Yet the academics behind functional programming, lambda calculus, theorem proving and logic still seem to fancy them a lot.
But ML doesn't have C++ style overloading. Currying certain overloaded functions is ambiguous.
Which is why I suggested only applying currying to monomorphic functions.

On 25 August 2011 10:33, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 25/08/2011 01:18, Daniel James wrote:
On 24 August 2011 23:11, Mathias Gaunard<mathias.gaunard@ens-lyon.org> wrote:
On 08/24/2011 06:55 PM, Dave Abrahams wrote:
That's an asymmetry about most currying syntax that I never liked, at least for C++. I suppose when all functions are fully lazy there's no assymmetry, but that's not C++. In C++ we have parens to trigger evaluation. Even in Phoenix, laziness only goes partway: you still need parens to trigger final evaluation.
ML-based languages are not lazy and have had currying for 40 years. Yet the academics behind functional programming, lambda calculus, theorem proving and logic still seem to fancy them a lot.
But ML doesn't have C++ style overloading. Currying certain overloaded functions is ambiguous.
Which is why I suggested only applying currying to monomorphic functions.
But that's not much good for C++. Explicitly specifying the number of arguments is a bit better, but then the situations where that can do something that bind can't are quite rare. I think explicit partial function calls would be a better fit for C++, i.e.

std::transform(begin(), end(), inserter, partial(std::plus<int>(), 10));

It would be clear when you're explicitly calling the function, and when you're not. And the implementation would be considerably simpler. It could support placeholders if you don't like simplicity. I think an 'uncurry' is also possible, I'm not sure how useful it would be.

On Thursday, August 25, 2011 11:16:21 AM Daniel James wrote:
On 25 August 2011 10:33, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 25/08/2011 01:18, Daniel James wrote:
On 24 August 2011 23:11, Mathias Gaunard<mathias.gaunard@ens-lyon.org>
wrote:
On 08/24/2011 06:55 PM, Dave Abrahams wrote:
That's an asymmetry about most currying syntax that I never liked, at least for C++. I suppose when all functions are fully lazy there's no assymmetry, but that's not C++. In C++ we have parens to trigger evaluation. Even in Phoenix, laziness only goes partway: you still need parens to trigger final evaluation.
ML-based languages are not lazy and have had currying for 40 years. Yet the academics behind functional programming, lambda calculus, theorem proving and logic still seem to fancy them a lot.
But ML doesn't have C++ style overloading. Currying certain overloaded functions is ambiguous.
Which is why I suggested only applying currying to monomorphic functions.
But that's not much good for C++. Explicitly specifying the number of arguments is a bit better, but then the situations where that can do something that bind can't are quite rare.
I think explicit partial function calls would be a better fit for C++, i.e.
std::transform(begin(), end, inserter, partial(std::plus<int>(), 10));
It would be clear when you're explicitly calling the function, and when you're not. And the implementation would be considerably simpler. It could support placeholders if you don't like simplicity.
So, what is the difference to bind? Or the old bind1st bind2nd? Or just _1 + 10, or 10 + _1? The more I think of it the more confident I am that the current bind implementations are the best for the general case.
I think an 'uncurry' is also possible, I'm not sure how useful it would be.

On 25 August 2011 13:59, Thomas Heller <thom.heller@googlemail.com> wrote:
On Thursday, August 25, 2011 11:16:21 AM Daniel James wrote:
I think explicit partial function calls would be a better fit for C++, i.e.
std::transform(begin(), end, inserter, partial(std::plus<int>(), 10));
It would be clear when you're explicitly calling the function, and when you're not. And the implementation would be considerably simpler. It could support placeholders if you don't like simplicity.
So, what is the difference to bind? Or the old bind1st bind2nd? Or just _1 + 10, or 10 + _1?
When bind receives extra arguments it discards them; when partial receives extra arguments it appends them to the end of the argument list. It's a lot simpler to implement and use than bind. Obviously std::plus isn't the most interesting example since it only ever has two arguments. The name partial comes from partial application: http://en.wikipedia.org/wiki/Partial_application
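A minimal sketch of that 'partial' for a single bound argument (the name comes from the proposal above; this particular implementation is just an illustration): extra call arguments are appended, never dropped.

#include <algorithm>
#include <functional>
#include <iterator>
#include <utility>
#include <vector>

template<class F, class A0>
struct partial1
{
    F f; A0 a0;
    template<class... Args>
    auto operator()(Args&&... args) const
        -> decltype(this->f(this->a0, std::forward<Args>(args)...))
    {
        return f(a0, std::forward<Args>(args)...);
    }
};

template<class F, class A0>
partial1<F, A0> partial(F f, A0 a0) { return partial1<F, A0>{f, a0}; }

// usage, roughly as in the std::transform example above:
//   std::vector<int> in = {1, 2, 3}, out;
//   std::transform(in.begin(), in.end(), std::back_inserter(out),
//                  partial(std::plus<int>(), 10));   // out == {11, 12, 13}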

on Wed Aug 24 2011, Mathias Gaunard <mathias.gaunard-AT-ens-lyon.org> wrote:
On 08/24/2011 06:55 PM, Dave Abrahams wrote:
That's an asymmetry about most currying syntax that I never liked, at least for C++. I suppose when all functions are fully lazy there's no assymmetry, but that's not C++. In C++ we have parens to trigger evaluation. Even in Phoenix, laziness only goes partway: you still need parens to trigger final evaluation.
ML-based languages are not lazy and have had currying for 40 years. Yet the academics behind functional programming, lambda calculus, theorem proving and logic still seem to fancy them a lot.
No doubt; I was just expressing a personal sense of discomfort. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 24/08/2011 04:59, Eric Niebler wrote:
Sorry, this should have been int i = (lazy(std::plus<int>())(1) + 4)(3) of course.
Could you explain this?
lazy is just

template<class T>
phoenix::function<T> lazy(const T& f)
{
    return phoenix::function<T>(f);
}

lazy(std::plus<int>()) turns std::plus<int>() into a lazy function (I said phoenix actor before -- I think that's incorrect terminology, sorry); i.e. the operator() members return an expression instead of evaluating it. Of course you could also call this 'make_function' instead of 'lazy'. In this particular case, phoenix::function< std::plus<int> >() would work just as well, of course.

Now what I'm suggesting is to add currying in Boost.Phoenix by implementing the expected logic in the call node evaluation. This has the desired effect of allowing

lazy(std::plus<int>())(1)(2)

But it also has the interesting effect of allowing

(lazy(std::plus<int>())(1) + 4)(3)

as I wrote above. More about this below. I think that's pretty interesting since it essentially allows us to write (f + g)(x) to do f(x) + g(x). I haven't thought about this enough to tell whether it is really desirable or not.

Let me unroll the example. So lazy(std::plus<int>())(1) + 4 is a tree similar to (pseudo-proto with values embedded in the types)

plus<
    call<
        terminal< std::plus<int> >,
        terminal< int, 1 >
    >,
    terminal< int, 4 >
>

Now, when you run (lazy(std::plus<int>())(1) + 4)(3) you evaluate the above tree with the tuple (3) as the state. When evaluating a call node, you do the following:

- if enough arguments are passed, evaluate the arguments then call the function on the evaluated arguments (default transform -- what is currently being done)
- if not enough arguments are passed, add terminal children to the call node, which reference the values from the state tuple, until the function has enough arguments. Then evaluate the node as above.

So you end up with something semantically equivalent to evaluating the following tree with the default transform

plus<
    call<
        terminal< std::plus<int> >,
        terminal< int, 1 >,
        terminal< int, 3 >
    >,
    terminal< int, 4 >
>

i.e., std::plus<int>()(1, 3) + 4.

This, however, appears to have some possible issues, but nothing really problematic. Let's consider I want to call

foo(bar(1)) + _1

with foo taking one argument which must be a function, and bar taking two integers. Both foo and bar are lazy functions. When I call it with a state of (2), this will "expand" to

foo(bar(1))(2) + _1(2)
foo(bar(1)(2)) + _1(2)
foo(bar(1, 2)) + 2

which is not what I wanted (foo(bar(1)) + 2). phoenix::lambda[foo(bar(1))] + _1, however, will do what's desired, since it will mask the arguments to the lambda-body. Of course, foo(bar(1))() works as expected.
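For concreteness, the piece that already works today looks roughly like this (a sketch only; the curried forms above are the proposal, not current Phoenix behaviour):

#include <boost/phoenix/phoenix.hpp>
#include <functional>
#include <cassert>

int main()
{
    using boost::phoenix::arg_names::_1;

    boost::phoenix::function<std::plus<int> > plus_;   // i.e. lazy(std::plus<int>())

    // Building the expression does not call std::plus; evaluating it does.
    int i = (plus_(1, _1) + 4)(3);   // std::plus<int>()(1, 3) + 4
    assert(i == 8);

    // What the proposal adds on top is plus_(1)(3): supplying the missing
    // argument with a second call instead of a placeholder.
    return 0;
}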
Detecting and propagating monomorphism could be nice though. It could eventually provide better error messages, faster compilation times, and automatic currying on monomorphic function objects.
What do monomorphic functions have to do with this? My currying code works with polymorphic functions.
I think it would be much safer to restrict it to monomorphic functions to avoid ambiguities. But then, I don't have a strong opinion on this.

On Wed, Aug 24, 2011 at 6:16 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 24/08/2011 04:59, Eric Niebler wrote:
Sorry, this should have been int i = (lazy(std::plus<int>())(1) + 4)(3) of course.
Could you explain this?
lazy is just
template<class T> phoenix::function<T> lazy(const T& f) { return phoenix::function<T>(f); }
This can be done, and makes sense. It is, in a way, similar to bind though; that is why i didn't add something like this.
lazy(std::plus<int>()) turns std::plus<int>() into a lazy function (I said phoenix actor before -- I think that's incorrect terminology, sorry); i.e. the operator() members return an expression instead of evaluating it.
Exactly, see phoenix::function as some kind of an expression generator. To get the terminology right, the call to operator() returns an expression, which is a phoenix actor.
Of course you could also call this 'make_function' instead of 'lazy'. In this particular case, phoenix::function< std::plus<int> >() would work just as well, of course.
Right, i like lazy though.
Now what I'm suggesting is to add currying in Boost.Phoenix by implementing the expected logic in the call node evaluation. This has the desired effect of allowing lazy(std::plus<int>())(1)(2)
But it also has the interesting effect of also allowing to do (lazy(std::plus<int>())(1) + 4)(3) as I wrote above. More about this below.
I think that's pretty interesting since it essentially allows us to write (f + g)(x) to do f(x) + g(x) I haven't thought about this enough to tell whether it is really desirable or not.
Let me unroll the example.
so lazy(std::plus<int>())(1) + 4 is a tree similar to (pseudo-proto with values embedded in the types)

plus<
    call<
        terminal< std::plus<int> >,
        terminal< int, 1 >
    >,
    terminal< int, 4 >
>
Now, when you run (lazy(std::plus<int>())(1) + 4)(3) you evaluate the above tree with the tuple (3) as the state.
When evaluating a call node, you do the following: - if enough arguments are passed, evaluate the arguments then call the function on the evaluated arguments (default transform -- what is currently being done) - if not enough arguments are passed, add terminal children to the call node, which reference the value from the state tuple until the function has enough arguments. Then evaluate the node as above.
So you end up to something semantically equivalent to evaluating the following tree with the default transform
plus<
    call<
        terminal< std::plus<int> >,
        terminal< int, 1 >,
        terminal< int, 3 >
    >,
    terminal< int, 4 >
>
i.e., std::plus<int>(1, 3) + 4.
This however, appears to have some possible issues, but nothing really problematic:
let's consider I want to call
foo(bar(1)) + _1
with foo taking one argument which must be a function, and bar taking two integers. both foo and bar are lazy functions.
when I call it with a state of (2), this will "expand" to
foo(bar(1))(2) + _1(2)
foo(bar(1)(2)) + _1(2)
foo(bar(1, 2)) + 2
which is not what I wanted (foo(bar(1)) + 2)
phoenix::lambda[foo(bar(1))] + _1, however, will do what's desired, since it will mask the arguments to the lambda-body.
Of course, foo(bar(1))() works as expected.
Nice! I like the general idea of it! However, I am not really sure if adding stuff like this to phoenix::actor doesn't break stuff. Unfortunately i currently can't implement a feature like this, too handicapped for such a big task. However, patches are very much welcome!
Detecting and propagating monomorphism could be nice though. It could eventually provide better error messages, faster compilation times, and automatic currying on monomorphic function objects.
What do monomorphic functions have to do with this? My currying code works with polymorphic functions.
I think it would be much safer to restrict it to monomorphic functions to avoid ambiguities.
But then, I don't have a strong opinion on this.
As noted in one of my other posts, the problem is not really whether the function is polymorphic. The problem is with variadic functions. I can't see a way to automatically guess which overload the user wanted to call.

On 24/08/2011 18:43, Thomas Heller wrote:
Nice! I like the general idea of it! However, I am not really sure if adding stuff like this to phoenix::actor doesn't break stuff.
My suggestion would only affect evaluation of the 'call' proto node, not phoenix::actor.
As noted in one of my other posts, the problem is not really if the function is polymorphic or not. The problem is in variadic functions. I can't see a way to automically guess which overload the user wanted to call.
By polymorphic functions, I meant to say "functions with a template or overloaded operator()", which covers variadic functions. But that clarifies the problem indeed, the problem is in variable arity.

On Wed, Aug 24, 2011 at 6:52 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 24/08/2011 18:43, Thomas Heller wrote:
Nice! I like the general idea of it! However, I am not really sure if adding stuff like this to phoenix::actor doesn't break stuff.
My suggestion would only affect evaluation of the 'call' proto node, not phoenix::actor.
Of course, in order to transform such a call node though, you need to implement the specific behaviour in the operator() overloads of phoenix::actor. I can see this happening though ... could be almost trivial ... we could add a post-processing action step to the evaluation which returns false_ if the expression shouldn't be evaluated just yet, or true_ otherwise. A function_eval node (which is, in a way, equivalent to your imaginary call node) can then select the proper action based on which types it holds. For that we can reuse the logic Eric proposed.
As noted in one of my other posts, the problem is not really if the function is polymorphic or not. The problem is in variadic functions. I can't see a way to automically guess which overload the user wanted to call.
By polymorphic functions, I meant to say "functions with a template or overloaded operator()", which covers variadic functions.
But that clarifies the problem indeed, the problem is in variable arity.

On 08/24/2011 07:09 PM, Thomas Heller wrote:
Of course, in order to transform such a call node though, you need to implement the specific behaviour in the operator() overloads of phoenix::actor. I can see this happening though ... could be almost trivial ... we could add a post processing actions step to the evaluation which returns false_ if the expression shouldnt be evaluated just yet, or true_ otherwise. A function_eval node (which is, in a way equivalent to your imaginary call node) can then select the proper action based on which types it holds. For that we can reuse the logic Eric proposed.
Ok, it seems phoenix::function doesn't work like I thought. Why doesn't it just do a proto::make_expr(proto::tag::function, f, a0, a1..., aN)? I don't really understand how Phoenix works. For some reason I thought the node type for function application in Proto was called 'call', looks like it's actually 'function', sorry.

On Thu, Aug 25, 2011 at 12:04 AM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 08/24/2011 07:09 PM, Thomas Heller wrote:
Of course, in order to transform such a call node though, you need to implement the specific behaviour in the operator() overloads of phoenix::actor. I can see this happening though ... could be almost trivial ... we could add a post processing actions step to the evaluation which returns false_ if the expression shouldnt be evaluated just yet, or true_ otherwise. A function_eval node (which is, in a way equivalent to your imaginary call node) can then select the proper action based on which types it holds. For that we can reuse the logic Eric proposed.
Ok, it seems phoenix::function doesn't work like I thought. Why doesn't it just do a proto::make_expr(proto::tag::function, f, a0, a1..., aN)? I don't really understand how Phoenix works.
For some reason I thought the node type for function application in Proto was called 'call', looks like it's actually 'function', sorry.
As i said, it is function_eval. To be more precise, boost::phoenix::detail::function_eval. It is used by phoenix::function and phoenix::bind. Don't remember why i didn't just reuse proto::tag::function here ... One reason was that i would have to rewrite the function evaluation once again to support phx2 style result type deduction .... Anyway. Hooking up this function_eval thing with curry capabilities will automatically give you "lazy" through bind and curry.

2011/8/23 Eric Niebler <eric@boostpro.com>
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
I think it's a very useful thing to have!

curryable should probably be renamed to curried (this is the term http://en.wikipedia.org/wiki/Currying uses). A generator for curried objects named curry would be a nice addition too, as would be the reverse - uncurry.

By the way, don't you need to manually pass arity in the general case? What if the underlying functor is callable with one argument, but also with two? I think this is the case with functors generated by bind -- they appear to be callable with any number of arguments.

FWIW, Egg <http://p-stade.sourceforge.net/egg/doc/html/egg/function_adaptors.html> has curryN (where N is a number literal) and uncurry.

Roman Perepelitsa.

On 8/24/2011 1:58 AM, Roman Perepelitsa wrote:
2011/8/23 Eric Niebler <eric@boostpro.com>
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
I think it's a very useful thing to have!
curryable should probably be renamed to curried (this is the term http://en.wikipedia.org/wiki/Currying uses). A generator for curried objects named curry would be a nice addition too, as would be the reverse - uncurry.
Thanks. It's important to get the terminology right. But I thought uncurry had to do with tuples holding arguments.
By the way, don't you need to manually pass arity in the general case? What if the underlying functor is callable with one argument, but also with two?
As soon as enough arguments are collected to make a valid call of the curried function, it gets called. This seems reasonable to me.
I think this is the case with functors generated by bind -- they appear to be callable with any number of arguments.
Ouch. :-( One option, which I don't like, is to "fix" bind to disable any operator() overloads that result in an invalid invocation of the bound function. That can be done easily with the is_callable_with metafunction in my curryable implementation, but would reduce portability somewhat.
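The general shape of such a metafunction, sketched independently of the attachment (a decltype-based stand-in; the real is_callable_with and Proto's machinery may differ):

#include <utility>

// is_callable_with<F, Args...>::value is true when f(args...) would be a
// well-formed call expression.
template<class F, class... Args>
class is_callable_with
{
    template<class G>
    static auto test(int)
        -> decltype((void)std::declval<G>()(std::declval<Args>()...), char());
    template<class G>
    static char (&test(...))[2];

public:
    static bool const value = sizeof(test<F>(0)) == 1;
};

// A SFINAE-friendly bind (or the curried wrapper itself) can then enable its
// operator() overloads only when is_callable_with says the underlying call
// would be valid, e.g. via std::enable_if.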
FWIW, Egg <http://p-stade.sourceforge.net/egg/doc/html/egg/function_adaptors.html> has curryN (where N is a number literal) and uncurry.
Of course, I'd prefer it if it just worked without needing to be told the arity. That's what I was aiming for. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Wed, Aug 24, 2011 at 5:13 PM, Eric Niebler <eric@boostpro.com> wrote:
On 8/24/2011 1:58 AM, Roman Perepelitsa wrote:
2011/8/23 Eric Niebler <eric@boostpro.com> <snip> By the way, don't you need to manually pass arity in the general case? What if the underlying functor is callable with one argument, but also with two?
As soon as enough arguments are collected to make a valid call of the curried function, it gets called. This seems reasonable to me.
I think this is the case with functors generated by bind -- they appear to be callable with any number of arguments.
Ouch. :-(
Exactly. They need to be called with N arguments, where N is at least the highest placeholder index used (_1 has index 1, _2 has 2 and so on ...). This is needed to allow omitting some of the passed parameters -- for example asio callbacks, where you might not be interested in the number of bytes written.
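A quick check of the behaviour being described, relying on Boost.Bind's documented rule that surplus arguments are silently ignored (a small sketch, not from the thread's attachments):

#include <boost/bind.hpp>
#include <cassert>

int negate_it(int x) { return -x; }

int main()
{
    // Only _1 is named, so the second argument (think bytes_transferred in an
    // asio handler) is accepted and discarded.
    int r = boost::bind(&negate_it, _1)(5, 12345);
    assert(r == -5);
    return 0;
}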
One option, which I don't like, is to "fix" bind to disable any operator() overloads that result in an invalid invocation of the bound function. That can be done easily with the is_callable_with metafunction in my curryable implementation, but would reduce portability somewhat.
We can certainly disable certain operator() overloads for phoenix actors. this is on my TODO for a very long time now ... Its actually quite simple. just introspect the current expression and look for placeholders. Reusing your is_callable_with metafunction would be cool to disable the bind functions for functors.
FWIW, Egg <http://p-stade.sourceforge.net/egg/doc/html/egg/function_adaptors.html> has curryN (where N is a number literal) and uncurry.
Of course, I'd prefer it if it just worked without needing to be told the arity. That's what I was aiming for.
I would like to pursue a general curry approach which just calls the function whenever it could be called, and the curryN approach to tell exactly which overload i would like to have.

On 08/24/2011 08:13 AM, Eric Niebler wrote:
On 8/24/2011 1:58 AM, Roman Perepelitsa wrote:
I think this is the case with functors generated by bind -- they appear to be callable with any number of arguments. Ouch. :-(
One option, which I don't like, is to "fix" bind to disable any operator() overloads that result in an invalid invocation of the bound function. That can be done easily with the is_callable_with metafunction in my curryable implementation, but would reduce portability somewhat.
That isn't "broken" behavior. Very often that is exactly what you want from bind. -- Michael Caisse Object Modeling Designs www.objectmodelingdesigns.com

On Sat, Aug 27, 2011 at 2:43 PM, Michael Caisse < boost@objectmodelingdesigns.com> wrote:
On 08/24/2011 08:13 AM, Eric Niebler wrote:
On 8/24/2011 1:58 AM, Roman Perepelitsa wrote:
I think this is the case with functors generated by bind -- they appear
to be callable with any number of arguments.
Ouch. :-(
One option, which I don't like, is to "fix" bind to disable any operator() overloads that result in an invalid invocation of the bound function. That can be done easily with the is_callable_with metafunction in my curryable implementation, but would reduce portability somewhat.
That isn't "broken" behavior. Very often that is exactly what you want from bind.
Why would you prefer enabling operator() overloads of bind that result in a compiler error over "SFINAE-disabling" them? (Other than compile time considerations and minimizing complexity.) - Jeff

On 08/27/2011 04:20 PM, Jeffrey Lee Hellrung, Jr. wrote:
On Sat, Aug 27, 2011 at 2:43 PM, Michael Caisse< boost@objectmodelingdesigns.com> wrote:
On 08/24/2011 08:13 AM, Eric Niebler wrote:
On 8/24/2011 1:58 AM, Roman Perepelitsa wrote:
I think this is the case with functors generated by bind -- they appear
to be callable with any number of arguments.
Ouch. :-(
One option, which I don't like, is to "fix" bind to disable any operator() overloads that result in an invalid invocation of the bound function. That can be done easily with the is_callable_with metafunction in my curryable implementation, but would reduce portability somewhat.
That isn't "broken" behavior. Very often that is exactly what you want from bind.
Why would you prefer enabling operator() overloads of bind that result in a compiler error over "SFINAE-disabling" them? (Other than compile time considerations and minimizing complexity.)
- Jeff
Hi Jeff -

I might be confused. I thought we were talking about the bound functor allowing a different number of arguments than the function that it is bound to. For example, more arguments like this:

void log( std::string name, timestamp_t stamp );
...
function<void(std::string,timestamp_t)> notify = bind( &log, _2, _1 );
...
notify( timestamp, source, lat, long, message );

There is no error here. Have I missed the point?

michael -- Michael Caisse Object Modeling Designs www.objectmodelingdesigns.com

On Sat, Aug 27, 2011 at 10:00 PM, Michael Caisse < boost@objectmodelingdesigns.com> wrote:
On 08/27/2011 04:20 PM, Jeffrey Lee Hellrung, Jr. wrote:
On Sat, Aug 27, 2011 at 2:43 PM, Michael Caisse <boost@objectmodelingdesigns.com> wrote:
On 08/24/2011 08:13 AM, Eric Niebler wrote:
On 8/24/2011 1:58 AM, Roman Perepelitsa wrote:
I think this is the case with functors generated by bind -- they appear
to be callable with any number of arguments.
Ouch. :-(
One option, which I don't like, is to "fix" bind to disable any operator() overloads that result in an invalid invocation of the bound function. That can be done easily with the is_callable_with metafunction in my curryable implementation, but would reduce portability somewhat.
That isn't "broken" behavior. Very often that is exactly what you want
from bind.
Why would you prefer enabling operator() overloads of bind that result
in a compiler error over "SFINAE-disabling" them? (Other than compile time considerations and minimizing complexity.)
- Jeff
Hi Jeff -
I might be confused. I thought we were talking about the bound functor allowing a different number of arguments than the function that it is bound to. For example, more arguments like this:
void log( std::string name, timestamp_t stamp ); ...
function<void(std::string,timestamp_t)> notify = bind( &log, _2, _1 ); ...
notify( timestamp, source, lat, long, message );
There is no error here. Have I missed the point?
As far as I understood it, yes. As I understood Eric's proposal, "bind( &log, _2, _1 )( timestamp, source, lat, long, message )" would still compile fine, but "bind( &log, _2, _1 )( 0.0, (void*)(0) )" (for example) would result in a compiler error *at the call site* due to operator() overloads within boost::bind function objects (whatever those implementation objects are) being SFINAE-disabled for any argument binding that results in an "invalid invocation of the bound function". This behavior requires something like Boost.Proto's can_be_called (I may have the precise name wrong here...) machinery to implement.

This buys one the ability to likewise use function object types returned by boost::bind in the same can_be_called machinery to enable compile-time introspection of *that* type. In the context of this thread, this allows one to decide when an argument passed to a curried function triggers the underlying function to be evaluated.

HTH,
- Jeff

On 08/28/2011 04:06 AM, Jeffrey Lee Hellrung, Jr. wrote:
On Sat, Aug 27, 2011 at 10:00 PM, Michael Caisse< boost@objectmodelingdesigns.com> wrote:
On 08/27/2011 04:20 PM, Jeffrey Lee Hellrung, Jr. wrote:
<snip>
in a compiler error over "SFINAE-disabling" them? (Other than compile time considerations and minimizing complexity.)
- Jeff
Hi Jeff -
I might be confused. I thought we were talking about the bound functor allowing a different number of arguments than the function that it is bound to. For example, more arguments like this:
<snip>
There is no error here. Have I missed the point?
As far as I understood it, yes. As I understood Eric's proposal, "bind( &log, _2, _1 )( timestamp, source, lat long, message )" would still compile fine, but "bind(&log, _2, _1 )( 0.0, (void*)(0) )" (for example) would result in a compiler error *at the call site* due to operator() overloads within boost::bind function objects (whatever those implementation objects are) being SFINAE-disabled for any argument binding that results in an "invalid invocation of the bound function". This behavior requires something like Boost.Proto's can_be_called (I may have the precise name wrong here...) machinery to implement. This buys one the ability to likewise use function object types returned by boost::bind in the same can_be_called machinery to enable compile-time introspection of *that* type. In the context of this thread, this allows one to decide when an argument passed to a curried function triggers the underlying function to be evaluated.
HTH,
- Jeff
Thanks Jeff. I guess I wasn't following close enough (o; -- Michael Caisse Object Modeling Designs www.objectmodelingdesigns.com

On Tue, Aug 23, 2011 at 03:38:19PM -0400, Eric Niebler wrote:
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
For what it's worth, I made a feature request[1] about partial function application in Phoenix on the trac a few months ago after discussing it with Heller on IRC, but interest seemed mild, probably for lack of an investigation of the side effects and the lack of a proof-of-concept implementation. [1] https://svn.boost.org/trac/boost/ticket/5541 -- Lars Viklund | zao@acc.umu.se

On Wednesday, August 24, 2011 09:48:26 AM Lars Viklund wrote:
On Tue, Aug 23, 2011 at 03:38:19PM -0400, Eric Niebler wrote:
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are
bound to invoke the wrapped function. With it you can do the following: curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
For what it's worth, I made a feature request[1] about partial function application in Phoenix on the trac a few months ago after discussing it with Heller on IRC, but interest seemed mild, probably for lack of an investigation of the side effects and the lack of a proof-of-concept implementation.
More a lack of time than a lack of interest. FWIW, the feature described is currently perfectly doable with phoenix right now, with a combination of lambda, bind and placeholders. Which is, admittedly, not very elegant to do. A lot of boilerplate for such a seemingly simple task.

On 24/08/2011 10:41, Thomas Heller wrote:
FWIW, the feature described is currently perfectly doable with phoenix right now. With a combination of lambda, bind and placeholders. Which is, admittingly, not very elegant to do. A lot of boilerplate for such a seemingly simple task.
bind requires knowing the arity of the function, unless there is a trick to avoid that?

On Wed, Aug 24, 2011 at 6:18 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 24/08/2011 10:41, Thomas Heller wrote:
FWIW, the feature described is currently perfectly doable with phoenix right now. With a combination of lambda, bind and placeholders. Which is, admittingly, not very elegant to do. A lot of boilerplate for such a seemingly simple task.
bind requires knowing the arity of the function, unless there is a trick to avoid that?
Not that i know of. calling bind(f()); assumes a nullary functor.

On 08/23/11 14:38, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
Hi Eric,

I downloaded then renamed curryable.h to curryable.hpp (so my emacs editor would select c++ mode). I also made corresponding changes in the source, "curryable.h" -> "curryable.hpp". I'm using boost 1.46 and compiling the result gives errors:

In file included from /home/evansl/prog_dev/boost-svn/ro/boost_1_46_0/boost/preprocessor/iteration/detail/iter/forward2.hpp:50:0,
                 from ./curryable.hpp:157,
                 from /home/evansl/prog_dev/boost-svn/ro/boost_1_46_0/boost/preprocessor/iteration/detail/iter/forward1.hpp:47,
                 from curryable.hpp:113,
                 from curryable.cpp:1:
./curryable.hpp:117:1: error: macro "BOOST_PP_CHECK_2" requires 2 arguments, but only 1 given
./curryable.hpp:117:1: error: macro "BOOST_PP_CHECK_2" requires 2 arguments, but only 1 given
./curryable.hpp:117:1: error: pasting ")" and "BOOST_PP_CAT" does not give a valid preprocessing token
./curryable.hpp:175:1: error: macro "BOOST_PP_CHECK_2" requires 2 arguments, but only 1 given
./curryable.hpp:175:1: error: macro "BOOST_PP_CHECK_2" requires 2 arguments, but only 1 given
./curryable.hpp:175:1: error: pasting ")" and "BOOST_PP_CAT" does not give a valid preprocessing token
In file included from /home/evansl/prog_dev/boost-svn/ro/boost_1_46_0/boost/preprocessor/iteration/detail/iter/forward2.hpp:55:0,

What should I do to avoid those errors?

TIA.

-Larry

On 8/24/2011 12:29 PM, Larry Evans wrote:
Hi Eric,
I downloaded then renamed curryable.h to currable.hpp (so my emacs editor would select c++ mode). I also made corresponding changes in source "curryable.h" -> "curryable.hpp". I'm using boost 1.46 and compiling the result gives an errors:
In file included from /home/evansl/prog_dev/boost-svn/ro/boost_1_46_0/boost/preprocessor/iteration/detail/iter/forward2.hpp:50:0, from ./curryable.hpp:157, from /home/evansl/prog_dev/boost-svn/ro/boost_1_46_0/boost/preprocessor/iteration/detail/iter/forward1.hpp:47, from curryable.hpp:113, from curryable.cpp:1: ./curryable.hpp:117:1: error: macro "BOOST_PP_CHECK_2" requires 2 arguments, but only 1 given ./curryable.hpp:117:1: error: macro "BOOST_PP_CHECK_2" requires 2 arguments, but only 1 given ./curryable.hpp:117:1: error: pasting ")" and "BOOST_PP_CAT" does not give a valid preprocessing token ./curryable.hpp:175:1: error: macro "BOOST_PP_CHECK_2" requires 2 arguments, but only 1 given ./curryable.hpp:175:1: error: macro "BOOST_PP_CHECK_2" requires 2 arguments, but only 1 given ./curryable.hpp:175:1: error: pasting ")" and "BOOST_PP_CAT" does not give a valid preprocessing token In file included from /home/evansl/prog_dev/boost-svn/ro/boost_1_46_0/boost/preprocessor/iteration/detail/iter/forward2.hpp:55:0,
What should I do to avoid those errors?
Debug it? I don't know, I don't get those errors with either msvc (10.0) or gcc (4.3). -- Eric Niebler BoostPro Computing http://www.boostpro.com

Eric Niebler <eric@boostpro.com> writes:
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
Most definitely! -Dave

On Tue, Aug 23, 2011 at 3:38 PM, Eric Niebler <eric@boostpro.com> wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
Possibly. I also wish templates worked more like this (partial specialization). I think we all need to learn more functional programming before embarking on C++1x. Tony

On 8/23/2011 3:38 PM, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
If I can correct the curryable.h code, I believe the use of

#elif BOOST_PP_ITERATION_FLAGS() == 1

and

#elif BOOST_PP_ITERATION_FLAGS() == 2

is wrong because BOOST_PP_ITERATION_FLAGS() is always evaluated in these lines even when BOOST_PP_IS_ITERATING is not defined. I think instead you must specify:

#else
#if BOOST_PP_ITERATION_FLAGS() == 1
....
#else
#if BOOST_PP_ITERATION_FLAGS() == 2
....
#endif
#endif

instead. Although this change still produces compiler errors with gcc and Boost 1.47 and below (Paul Mensonides is looking into that since he knows how iteration works with Boost PP), it solves the problem compiling with gcc and the latest Boost PP on the trunk.

On Wed, 31 Aug 2011 16:09:30 -0400, Edward Diener wrote:
If I can correct the curryable.h code I believe the use of
#elif BOOST_PP_ITERATION_FLAGS() == 1
and
#elif BOOST_PP_ITERATION_FLAGS() == 2
is wrong because BOOST_PP_ITERATION_FLAGS() is always evaluated in these lines even when BOOST_PP_IS_ITERATING is not defined. I think instead you must specify:
It isn't "evaluated" so much as macro-expanded and parsed.
#else
#if BOOST_PP_ITERATION_FLAGS() == 1
....
#else
#if BOOST_PP_ITERATION_FLAGS() == 2
....
#endif
#endif
It should be sufficient to do this:

#if !BOOST_PP_IS_ITERATING
#else
#if BOOST_PP_ITERATION_FLAGS() == 1
...
#elif BOOST_PP_ITERATION_FLAGS() == 2
...
#endif
#endif

-Paul

On Wed, 31 Aug 2011 16:09:30 -0400, Edward Diener wrote:
instead. Although this change still produces compiler errors with gcc and Boost 1.47 and below ( Paul Mensonsides is looking into that since he knows how iteration works with Boost PP ), it solves the problem compiling with gcc and the latest Boost PP on the trunk.
Okay, here's what's going on...

1) GCC performs macro expansion and parsing on the expressions of all #if/#elif/#endif blocks that it encounters when not already skipping because of an outer #if/#endif. It is questionable whether this is what was intended, but it is arguable either way. Thus, code like this

#if !BOOST_PP_IS_ITERATING
// ...
#elif BOOST_PP_ITERATION_FLAGS() == 1
// ...
#elif BOOST_PP_ITERATION_FLAGS() == 2
// ...
#endif

should be altered to

#if !BOOST_PP_IS_ITERATING
#else
#if BOOST_PP_ITERATION_FLAGS() == 1
// ...
#else
// ...
#endif
#endif

The second problem is that the iteration flags are not "evaluated" in the way that the iteration bounds are evaluated but are instead defined to be whatever was put in the iteration parameters. The error occurs during the nested iteration (that includes the SUB, etc.). The ultimate problem is in the way that BOOST_PP_ITERATION_FLAGS() is defined in the <= 1.47 codebase, which is essentially:

CAT(ITERATION_FLAGS_, ITERATION_DEPTH())

This concatenation forms ITERATION_FLAGS_2, in this case, which, in turn, tries to extract the flags via ARRAY_ELEM(3, ITERATION_PARAMS_2). The ITERATION_PARAMS_2 value, however, uses SUB, which uses one of the parentheses detection macros to check for zero. However, those detection macros require CAT--which doesn't expand because we're already "inside" CAT--which prevents the intermediate from expanding from a single "argument" to two "arguments", and then you get the diagnostic.

The fix, which has already been in the trunk for a while, is to alter the headers as follows:

boost/preprocessor/iteration/iterate.hpp
- alter the definition of ITERATION_FLAGS() from
  (BOOST_PP_CAT(BOOST_PP_ITERATION_FLAGS_, BOOST_PP_ITERATION_DEPTH()))
  to
  (BOOST_PP_CAT(BOOST_PP_ITERATION_FLAGS_, BOOST_PP_ITERATION_DEPTH())())
  (i.e. changing ITERATION_FLAGS_n into a function-like macro)

boost/preprocessor/iteration/detail/iter/forward1.hpp ... 2.hpp ... 3.hpp ... 4.hpp ... 5.hpp
- each of these five headers contains definitions of ITERATION_FLAGS_n in three places near the top. These all need to be changed to nullary function-like macros instead of object-like macros.

That will fix the problem. Alternatively, the iteration flags are "supposed" to be used to distinguish between different iterations at the same depth. Here, however, they are being used where they don't need to be. The iteration flags could have been elided and just use ITERATION_DEPTH() instead...

#if !IS_ITERATING
#else
#if ITERATION_DEPTH() == 1
#elif ITERATION_DEPTH() == 2
#endif
#endif
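For readers less steeped in Boost.PP internals, the distinction the fix hinges on is the ordinary object-like versus function-like macro rule (a generic illustration, not the actual library code):

/* Object-like: the bare name expands wherever it appears. */
#define ITER_FLAGS_OBJ 3

/* Function-like: nothing expands until the name is followed by (),
   which gives the library a point at which to defer expansion. */
#define ITER_FLAGS_FN() 3

/* ITER_FLAGS_OBJ   -> 3
   ITER_FLAGS_FN    -> left alone (no parentheses)
   ITER_FLAGS_FN()  -> 3 */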

On 9/1/2011 12:07 AM, Paul Mensonides wrote:
On Wed, 31 Aug 2011 16:09:30 -0400, Edward Diener wrote:
instead. Although this change still produces compiler errors with gcc and Boost 1.47 and below ( Paul Mensonsides is looking into that since he knows how iteration works with Boost PP ), it solves the problem compiling with gcc and the latest Boost PP on the trunk.
Okay, here's what's going on... snipped...
Thanks for the full explanation, Paul. I will alert Larry Evans, who brought up this problem originally, about your post.

On 1 September 2011 05:07, Paul Mensonides <pmenso57@comcast.net> wrote:
1) GCC performs macro expansion and parsing on the expressions of all #if/ #elif/#endif blocks that it encounters when not already skipping because of an outer #if/#endif. It is questionable whether this is what was intended, but it is arguable either way.
I think it was made due to: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36320

On 9/1/2011 5:43 PM, Daniel James wrote:
On 1 September 2011 05:07, Paul Mensonides<pmenso57@comcast.net> wrote:
1) GCC performs macro expansion and parsing on the expressions of all #if/ #elif/#endif blocks that it encounters when not already skipping because of an outer #if/#endif. It is questionable whether this is what was intended, but it is arguable either way.
I think it was made due to:
The remark in the resolution above refers to "my reading of the standard" but does not say which standard is being referred to. Is this part of C99 or C++11? I have asked about this on the C++ standard NG and am awaiting an answer. I am surprised by the evaluation of "#elif constant expression" when the corresponding #if statement is true, because it means that #if - #elif is not equivalent to #if - #else - #if in this particular case, and I am sure many C++ programmers would have expected that the two were indeed equivalent.

On Thu, 01 Sep 2011 18:23:16 -0400, Edward Diener wrote:
The remark refers in the resolution above refers to "my reading of the standard" but does not say which standard is being referred to. Is this part of C99 or C++11 ? I have asked about this on the C++ standard NG and am awaiting an answer.
I am surprised by the evaluation of "#elif constant expression" when the corrwesponding #if statament is true because it means that #if - #elif is not equivalent to #if - #else - #if in this particular case, and I am sure many C++ programmers would have expected that the two were indeed equivalent.
It doesn't "evaluate" it; it just parses it. Even in normal code and, in most cases, even with dynamically-typed code, the compiler or interpreter still has to parse the else-if expressions. Neither C99 or C++98 (or C+ +11) are particularly clear on this. To me that means that the solution is on the code side rather than requiring a compiler to allow semi- unspecified behavior in the short term and to alter the standard if it is important enough in the long term. -Paul

On 9/1/2011 9:40 PM, Paul Mensonides wrote:
On Thu, 01 Sep 2011 18:23:16 -0400, Edward Diener wrote:
The remark refers in the resolution above refers to "my reading of the standard" but does not say which standard is being referred to. Is this part of C99 or C++11 ? I have asked about this on the C++ standard NG and am awaiting an answer.
I am surprised by the evaluation of "#elif constant expression" when the corrwesponding #if statament is true because it means that #if - #elif is not equivalent to #if - #else - #if in this particular case, and I am sure many C++ programmers would have expected that the two were indeed equivalent.
It doesn't "evaluate" it; it just parses it. Even in normal code and, in most cases, even with dynamically-typed code, the compiler or interpreter still has to parse the else-if expressions.
So if I have:

#if 1
#else
nonC++gobbledygook
#endif

The compiler still parses nonC++gobbledygook and issues an error if it is invalid C++ code?
Neither C99 or C++98 (or C++11) are particularly clear on this. To me that means that the solution is on the code side rather than requiring a compiler to allow semi-unspecified behavior in the short term and to alter the standard if it is important enough in the long term.
I admit that I have always thought that an #if - #else - #endif path which is not taken can be anything and does not have to be valid C++. You seem to be saying otherwise.

On Sat, Sep 3, 2011 at 8:15 AM, Edward Diener <eldiener@tropicsoft.com>wrote:
On 9/1/2011 9:40 PM, Paul Mensonides wrote:
On Thu, 01 Sep 2011 18:23:16 -0400, Edward Diener wrote:
The remark refers in the resolution above refers to "my reading of the
standard" but does not say which standard is being referred to. Is this part of C99 or C++11 ? I have asked about this on the C++ standard NG and am awaiting an answer.
I am surprised by the evaluation of "#elif constant expression" when the corrwesponding #if statament is true because it means that #if - #elif is not equivalent to #if - #else - #if in this particular case, and I am sure many C++ programmers would have expected that the two were indeed equivalent.
It doesn't "evaluate" it; it just parses it. Even in normal code and, in most cases, even with dynamically-typed code, the compiler or interpreter still has to parse the else-if expressions.
So if I have:
#if 1
#else
nonC++gobbledygook
#endif
The compiler still parses nonC++gobbledygook and issues an error if it is invalid C++ code ?
No, I believe here Paul is referring to "regular" if/else-if expressions, not preprocessor directives.

Neither C99 or C++98 (or C++11) are particularly clear on this. To me that means that the solution is on the code side rather than requiring a compiler to allow semi-unspecified behavior in the short term and to alter the standard if it is important enough in the long term.
I admit that I have always thought that an #if - #else - #endif path which is not taken can be anything and does not have to be valid C++. You seem to be saying otherwise.
I think Paul is saying that for

#if 1
#elif nonC++gobbledygook
#endif

the preprocessor may still parse the #elif expression, but it won't be evaluated. If you want to avoid it being parsed, do

#if 1
#else
#if nonC++gobbledygook
#endif
#endif

- Jeff

On 9/3/2011 4:06 PM, Jeffrey Lee Hellrung, Jr. wrote:
On Sat, Sep 3, 2011 at 8:15 AM, Edward Diener<eldiener@tropicsoft.com>wrote:
On 9/1/2011 9:40 PM, Paul Mensonides wrote:
On Thu, 01 Sep 2011 18:23:16 -0400, Edward Diener wrote:
The remark refers in the resolution above refers to "my reading of the
standard" but does not say which standard is being referred to. Is this part of C99 or C++11 ? I have asked about this on the C++ standard NG and am awaiting an answer.
I am surprised by the evaluation of "#elif constant expression" when the corrwesponding #if statament is true because it means that #if - #elif is not equivalent to #if - #else - #if in this particular case, and I am sure many C++ programmers would have expected that the two were indeed equivalent.
It doesn't "evaluate" it; it just parses it. Even in normal code and, in most cases, even with dynamically-typed code, the compiler or interpreter still has to parse the else-if expressions.
So if I have:
#if 1
#else
nonC++gobbledygook
#endif
The compiler still parses nonC++gobbledygook and issues an error if it is invalid C++ code ?
No, I believe here Paul is referring to a "regular" if/else if expressions, not preprocessor directives.
Neither C99 or C++98 (or C++11) are particularly clear on this. To me that means that the solution is on the code side rather than requiring a compiler to allow semi-unspecified behavior in the short term and to alter the standard if it is important enough in the long term.
I admit that I have always thought that an #if - #else - #endif path which is not taken can be anything and does not have to be valid C++. You seem to be saying otherwise.
I think Paul is saying that for
#if 1
#elif nonC++gobbledygook
#endif
the preprocessor may still parse the #elif expression, but it won't be evaluated. If you want to avoid it being parsed, do
#if 1
#else
#if nonC++gobbledygook
#endif
#endif
I understand what you are saying but I still think it is a terrible C++ inconsistency that #elif is not the same as #else - #if. Then again maybe I see it that way because I was never aware of the "difference" but I would bet if you asked 100 very good C++ programmers the difference between them, 99 would almost immediately say that there is no difference, #elif is purely a shorthand for #else - #if.

On Sat, 03 Sep 2011 23:26:57 -0400, Edward Diener wrote:
On 9/3/2011 4:06 PM, Jeffrey Lee Hellrung, Jr. wrote:
the preprocessor may still parse the #elif expression, but it won't be evaluated. If you want to avoid it being parsed, do
#if 1 #else #if nonC++gobbledygook #endif #endif
I understand what you are saying but I still think it is a terrible C++ inconsistency that #elif is not the same as #else - #if. Then again maybe I see it that way because I was never aware of the "difference" but I would bet if you asked 100 very good C++ programmers the difference between them, 99 would almost immediately say that there is no difference, #elif is purely a shorthand for #else - #if.
For the record, I'm not saying that I think it is *good* that gcc is doing that. But, I don't think that the standard forbids it. Therefore, there are only two acceptable options: 1) use the "safe" method and, if worthwhile, 2) change the standard. I don't consider changing gcc and relying on unspecified behavior to be an option. -Paul

On 9/3/2011 11:54 PM, Paul Mensonides wrote:
On Sat, 03 Sep 2011 23:26:57 -0400, Edward Diener wrote:
On 9/3/2011 4:06 PM, Jeffrey Lee Hellrung, Jr. wrote:
the preprocessor may still parse the #elif expression, but it won't be evaluated. If you want to avoid it being parsed, do
#if 1 #else #if nonC++gobbledygook #endif #endif
I understand what you are saying but I still think it is a terrible C++ inconsistency that #elif is not the same as #else - #if. Then again maybe I see it that way because I was never aware of the "difference" but I would bet if you asked 100 very good C++ programmers the difference between them, 99 would almost immediately say that there is no difference, #elif is purely a shorthand for #else - #if.
For the record, I'm not saying that I think it is *good* that gcc is doing that. But, I don't think that the standard forbids it. Therefore, there are only two acceptable options: 1) use the "safe" method and, if worthwhile, 2) change the standard. I don't consider changing gcc and relying on unspecified behavior to be an option.
Since parsing or not parsing the #elif expression, when its branch is not taken, can change the behavior of the translation unit (a possible compiler error versus no compiler error), I would think the standard has to rule on this issue. I do not think it should say that the result is implementation-defined.

On 08/23/11 14:38, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
With the 2 attached files (compiled with gcc4.6 and with boost trunk [which svn update showed was revision 74198]), the output is:

/home/evansl/prog_dev/boost-svn/ro/trunk/sandbox-local/build/gcc4_6v/boost-svn/ro/trunk/sandbox/rw/variadic_templates/sandbox/painless_currying/curryable.exe
type_u<0>()
type_u<0>(type_u const&)
type_u<0>(type_u const&)
type_u<1>()
type_u<1>(type_u const&)
type_u<0>(type_u const&)
type_u<0>(type_u const&)
type_u<1>(type_u const&)
type_u<2>()
sout<<type_u<0>
sout<<type_u<1>
sout<<type_u<2>

Compilation finished at Sat Sep 3 08:20:45

which shows several copy CTOR's executed. These copy CTOR calls were caused by the code like this:

curryable (Fun f, Arg0 a0, Arg1 a1, Arg2 a2)
: fun (f), arg0 (a0), arg1 (a1), arg2 (a2)
{ }

produced by the BOOST_PP code. Is there any way to prevent these copies, maybe by declarations like:

Arg0 const& arg0;

instead of the existing:

Arg0 arg0;

?

-regards, Larry

On 09/03/11 08:28, Larry Evans wrote: [snip]
With the 2 attached files (compiled with gcc4.6 and with boost trunk [which svn update showed was revision 74198]), the output is:
[snip]
which shows several copy CTOR's executed. These copy CTOR calls were caused by the code like this:
curryable (Fun f, Arg0 a0, Arg1 a1, Arg2 a2): fun (f), arg0 (a0), arg1 (a1), arg2 (a2) { }
produced by the BOOST_PP code. Is there any way to prevent these copies, maybe by declarations like:
Arg0 const& arg0;
instead of the existing:
Arg0 arg0;
?
[snip]
OOPS. I should have thought a bit more. The reason

Arg0 const& arg0;

won't work is that the arguments to the curryable CTOR:

curryable (Fun f, Arg0 a0)

may be gone by the time f is actually called, and that's why they have to be copied. Sorry for noise :( -Larry
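For what it's worth, one way to keep the by-value storage Larry concludes is necessary, while avoiding some of the copies, is to forward into the stored members. A minimal C++11 sketch with hypothetical names (curry_state here is not Eric's attached code):

#include <utility>   // std::forward

// Hypothetical two-argument state holder, shown only to illustrate
// reducing copies while still storing arguments by value.
template<typename Fun, typename Arg0, typename Arg1>
struct curry_state
{
    Fun  fun;
    Arg0 arg0;   // stored by value: the caller's objects may be gone
    Arg1 arg1;   // by the time fun is finally invoked

    // Forward into the members instead of taking the parameters by
    // value first, so rvalue arguments are moved rather than copied.
    template<typename F, typename A0, typename A1>
    curry_state(F&& f, A0&& a0, A1&& a1)
      : fun(std::forward<F>(f))
      , arg0(std::forward<A0>(a0))
      , arg1(std::forward<A1>(a1))
    {}
};

Lvalue arguments are still copied once, which seems unavoidable if the wrapper must outlive them, but rvalue arguments are moved instead of copied.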

On 09/03/11 08:28, Larry Evans wrote:
On 08/23/11 14:38, Eric Niebler wrote:
After playing around with functional languages, I've come to envy how easy they make it to curry functions. Call a 2-argument function with 1 argument and you get a function that takes 1 argument. Pass another argument and it evaluates the function. Simple. In contrast, C++ users have to use binders, which are not as nice.
On a lark, I implemented a wrapper that turns any TR1-style function object into a "curryable" function object (attached). Successive function call invocations bind arguments until enough arguments are bound to invoke the wrapped function. With it you can do the following:
curryable<std::plus<int> > p; auto curried = p(1); int i = curried(2); assert(i == 3);
Is there any interest in such a thing?
With the 2 attached files (compiled with gcc4.6 and with boost trunk [which svn update showed was revision 74198]), the output is: [snip]
With the revised curryable.cpp, the output is:
/home/evansl/prog_dev/boost-svn/ro/trunk/sandbox-local/build/gcc4_6v/boost-svn/ro/trunk/sandbox/rw/variadic_templates/sandbox/painless_currying/curryable.exe
test<1>:
type_u<0>()
type_u<0,1>(type_u const&)
type_u<0,1>(type_u const&)
type_u<1>()
type_u<1,1>(type_u const&)
type_u<0,1>(type_u const&)
type_u<0,1>(type_u const&)
type_u<1,1>(type_u const&)
type_u<2>()
sout<<type_u<0>
sout<<type_u<1>
sout<<type_u<2>
test<0>:
type_u<0>()
type_u<0,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1>()
type_u<1,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<2>()
type_u<2,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<2,0>(type_u const&)
type_u<3>()
type_u<3,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<2,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<2,0>(type_u const&)
type_u<3,0>(type_u const&)
type_u<4>()
type_u<4,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<2,0>(type_u const&)
type_u<3,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<2,0>(type_u const&)
type_u<3,0>(type_u const&)
type_u<4,0>(type_u const&)
type_u<5>()
type_u<5,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<2,0>(type_u const&)
type_u<3,0>(type_u const&)
type_u<4,0>(type_u const&)
type_u<0,0>(type_u const&)
type_u<1,0>(type_u const&)
type_u<2,0>(type_u const&)
type_u<3,0>(type_u const&)
type_u<4,0>(type_u const&)
type_u<5,0>(type_u const&)

Compilation finished at Sat Sep 3 09:41:23

which reveals that, when passed arguments that cannot possibly be valid, the arguments just keep accumulating in the curryable<F,Args...>::argI. It would be helpful if some diagnostic were issued in case too many arguments are supplied. -regards, Larry

On 9/3/2011 10:48 AM, Larry Evans wrote:
when passed arguments that cannot possibly be valid, the arguments just keep accumulating in the curryable<F,Args...>::argI.
Correct.
It would be helpful if some diagnostic were issued in case too many arguments are supplied.
I agree it would be helpful, but I don't think that's possible. Without true introspection, there is no way to know a priori what the max arity of the bound function is, hence no way to know when we've accumulated more arguments than are valid. It just keeps collecting arguments in a vain attempt to satisfy the can_be_called predicate. -- Eric Niebler BoostPro Computing http://www.boostpro.com
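For readers unfamiliar with the can_be_called idea, here is a minimal, hedged sketch of such a trait (an illustration only, not Eric's actual implementation): it reports whether an object of type F can be invoked with arguments of types Args..., which is what the wrapper keeps testing as arguments accumulate.

#include <type_traits>   // std::true_type, std::false_type
#include <utility>       // std::declval

// value is true iff an F can be invoked with arguments of types Args...
template<typename F, typename... Args>
class can_be_called
{
    template<typename G>
    static auto test(int)
        -> decltype(std::declval<G>()(std::declval<Args>()...),
                    std::true_type());

    template<typename G>
    static std::false_type test(...);

public:
    static bool const value = decltype(test<F>(0))::value;
};

// For example:
//   can_be_called<std::plus<int>, int, int>::value        // true
//   can_be_called<std::plus<int>, int, int, int>::value   // false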

On 09/03/11 11:12, Eric Niebler wrote:
On 9/3/2011 10:48 AM, Larry Evans wrote:
when passed arguments that cannot possibly be valid, the arguments just keep accumulating in the curryable<F,Args...>::argI.
Correct.
It would be helpful if some diagnostic were issued in case too many arguments are supplied.
I agree it would be helpful, but I don't think that's possible. Without true introspection, there is no way to know a priori what the max arity of the bound function is, hence no way to know when we've accumulated more arguments than are valid. It just keeps collecting arguments in a vain attempt to satisfy the can_be_called predicate.
OK, so *now* I see a good use-case for Dave Abrahams' point: http://lists.boost.org/Archives/boost/2011/08/185096.php about using () to cause evaluation. The user would then know if invalid arguments were supplied.

On 09/03/11 11:21, Larry Evans wrote:
On 09/03/11 11:12, Eric Niebler wrote:
On 9/3/2011 10:48 AM, Larry Evans wrote:
when passed arguments that cannot possibly be valid, the arguments just keep accumulating in the curryable<F,Args...>::argI.
Correct.
It would be helpful if some diagnostic were issued in case too many arguments are supplied.
I agree it would be helpful, but I don't think that's possible. Without true introspection, there is no way to know a priori what the max arity of the bound function is, hence no way to know when we've accumulated more arguments than are valid. It just keeps collecting arguments in a vain attempt to satisfy the can_be_called predicate.
OK, so *now* I see a good use-case for Dave Abrahams' point:
http://lists.boost.org/Archives/boost/2011/08/185096.php
about using () to cause evaluation. The user would then know if invalid arguments were supplied.
However, in that case (only evaluate if an empty arglist is provided), there's really no need to carry the functor, f, along, since it's only useful at the end. Hence, instead of supplying () to trigger evaluation, why not just supply the f argument last? IOW, instead of:

a_curryable();

do this:

a_curryable(f);

to trigger evaluation (or a compile-time error message if the arguments accumulated by a_curryable are not valid). -regards, Larry
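A rough, fixed-arity sketch of the shape Larry is suggesting (args0/args1/args2 are made-up names, and the sketch is capped at two accumulated arguments, unlike Eric's open-ended curryable): arguments are collected by successive calls, and supplying the function object last triggers evaluation, so a mismatch surfaces as a compile-time error at that final call.

#include <functional>   // std::plus, used in the usage comment below

template<typename Arg0, typename Arg1>
struct args2
{
    Arg0 arg0;
    Arg1 arg1;

    // Supplying the function object last triggers evaluation; if fun
    // cannot be called with (arg0, arg1), this is a compile-time error.
    template<typename Fun>
    auto operator()(Fun fun) -> decltype(fun(arg0, arg1))
    {
        return fun(arg0, arg1);
    }
};

template<typename Arg0>
struct args1
{
    Arg0 arg0;

    template<typename Arg1>
    args2<Arg0, Arg1> operator()(Arg1 arg1) { return {arg0, arg1}; }
};

struct args0
{
    template<typename Arg0>
    args1<Arg0> operator()(Arg0 arg0) { return {arg0}; }
};

// Usage:
//   int i = args0()(1)(2)(std::plus<int>());   // i == 3
//   args0()(1)(2)(3);   // error in this sketch: 3 is not callable with (1, 2)

Note the trade-off: the diagnostic is only possible here because the sketch fixes a maximum arity, whereas Eric's wrapper, which cannot know the bound function's arity, has to keep accumulating silently.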
participants (26)
- Agustín K-ballo Bergé
- Daniel James
- Dave Abrahams
- Edward Diener
- Eric Niebler
- Giovanni Piero Deretta
- Gottlob Frege
- greened@obbligato.org
- Hartmut Kaiser
- Jeffrey Lee Hellrung, Jr.
- Jinqiang Zhang
- John Bytheway
- John Wiegley
- Joshua Juran
- Larry Evans
- Lars Viklund
- Mathias Gaunard
- Maxim Yanchenko
- Michael Caisse
- Paul Mensonides
- Peter Dimov
- Roman Perepelitsa
- Simonson, Lucanus J
- Steven Watanabe
- Thomas Heller
- Vicente J. Botet Escriba