Re: [boost] New libraries implementing C++11 features in C++03

On 24 Nov 2011 09:11:19 Joel de Guzman wrote:
Man! Of course you should use the examples in 1.45. Those examples are for 1.48 and use new features implemented in 1.48! Sheesh!
I fully understand that. The point is that the Phoenix interface is evolving much more quickly than that of probably any other Boost library. Even apart from the fact that there already have been two bottom-up rewrites, there are enough significant changes between point releases that even small example programs are tied to a particular version of the library. Why is the optimal way to write a small calculator example today so different from the optimal way to write it six months ago? I'm not disputing that you're making forward progress, just highlighting this as another barrier to entry that Local likely won't have.
Double Sheesh! If you've spent even a few seconds looking into the error (beyond fault-finding), you will see that a simple textual search of "error" will reveal this:
Yes, it's possible to wade through the error messages and figure out what went wrong. And Qi is clearly trying to help out with traceable assertions; that may well be the best we can do. But no reasonable person would claim that hundreds of lines of error message, mostly filled with types that the user never directly constructed, are the most desirable way to report a simple syntax error. This problem is endemic to all ET libraries and compilers, and it would be very, very hard to do much better. Again, though, I think you're underselling this as a barrier to entry. How many people would ever learn C++ if all libraries did this?
Now having said that, and before I leave this thread, let me remind everyone that **you can also have statement syntax in Phoenix** if you have a complex statement and you fear writing complex phoenix lambda expressions. It's called phoenix::functions.
The important issue here isn't just statement /syntax/, it's that we have options available that don't turn what looks like an ordinary C++ statement into a massive AST with its own mini-compiler behind the scenes.

On Thu, Nov 24, 2011 at 11:08 PM, Brent Spillner <spillner@acm.org> wrote:
Yes, it's possible to wade through the error messages and figure out what went wrong. And Qi is clearly trying to help out with traceable assertions; that may well be the best we can do. But no reasonable person would claim that hundreds of lines of error message, mostly filled with types that the user never directly constructed, are the most desirable way to report a simple syntax error. This problem is endemic to all ET libraries and compilers, and it would be very, very hard to do much better. Again, though, I think you're underselling this as a barrier to entry. How many people would ever learn C++ if all libraries did this?
The "unintelligible" error messages are *not* the fault of the library. The *compiler* spews out these error messages. There's nothing you can do in C++ code to tell the compiler to "output exactly this message if you find an error here" -- sure, with C++11 you can use static_assert, but you can't control what the compiler will print (unless you're writing the compiler). Seriously, I don't know why people are blaming the libraries when the compilers are the ones generating these error messages. Am I missing something here?
Now having said that, and before I leave this thread, let me remind everyone that **you can also have statement syntax in Phoenix** if you have a complex statement and you fear writing complex phoenix lambda expressions. It's called phoenix::functions.
The important issue here isn't just statement /syntax/, it's that we have options available that don't turn what looks like an ordinary C++ statement into a massive AST with its own mini-compiler behind the scenes.
You're right. In this case, it's called functions -- normal namespace or class scope functions. They work perfectly fine. I still don't get it. Am I missing anything here? Cheers -- Dean Michael Berris http://goo.gl/CKCJX

On Nov 24, 2011, at 10:17 PM, Dean Michael Berris wrote:
Seriously, I don't know why people are blaming the libraries when the compilers are the ones generating these error messages. Am I missing something here?
Yes. What you are missing is that people just looking to employ a library to make their lives easier don't care *whose* fault it is when the library fails to make their lives easier; they just stop using the library. You can say that it is the fault of the compiler rather than the fault of the library, but that doesn't change the experience of using the library. Cheers, Greg

On Thu, Nov 24, 2011 at 11:43 PM, Gregory Crosswhite <gcrosswhite@gmail.com> wrote:
On Nov 24, 2011, at 10:17 PM, Dean Michael Berris wrote:
Seriously, I don't know why people are blaming the libraries when the compilers are the ones generating these error messages. Am I missing something here?
Yes. What you are missing is that people just looking to employ a library to make their lives easier don't care *whose* fault it is when the library fails to make their lives easier; they just stop using the library. You can say that it is the fault of the compiler rather than the fault of the library, but that doesn't change the experience of using the library.
Well, then the person who doesn't put the work in to learn to use the library *and* learn to read the compiler messages properly is *the* problem. I seriously think this is a case of PEBKAC. No amount of library engineering will solve that kind of problem IMO. Cheers -- Dean Michael Berris http://goo.gl/CKCJX

On Nov 24, 2011, at 10:48 PM, Dean Michael Berris wrote:
Well, then the person who doesn't put the work in to learn to use the library *and* learn to read the compiler messages properly is *the* problem.
I really don't understand this condescending attitude towards people who decide not to use a new library because in their particular circumstances the cost of using the library was greater than the utility it provided, as if *those kind of people* are the real problem rather than the *high cost of using the library* itself. Listen, I love learning new libraries and languages. When I first discovered Boost one of the things I loved to do was to skim through the documentation of all of the libraries to see what they had to offer me because I thought it was *exciting*. However, it's not like I have an *infinite* amount of time to become an expert user in every library that exists. If a library requires a lot of time and trouble on my part and simply doesn't offer me enough to make up for this, then I will stop spending time learning how to use it so that I can spend that time learning things that are more useful, interesting, and fun. Maybe one day I will have a need that Phoenix fills so well that it will be worth the investment of my time to figure out how to harness its full power; I am completely open-minded to that possibility, and if that day comes I will be happy to invest the time to learn how to use it. Unless and until that day occurs, however, it doesn't make me a "backward-thinking" person that I will instead focus my limited time on more useful pursuits (for me) than learning how to use Boost.Phoenix --- such as arguing with people on the Boost mailing list! Cheers, Greg

On Fri, Nov 25, 2011 at 12:22 AM, Gregory Crosswhite <gcrosswhite@gmail.com> wrote:
On Nov 24, 2011, at 10:48 PM, Dean Michael Berris wrote:
Well, then the person who doesn't put the work in to learn to use the library *and* learn to read the compiler messages properly is *the* problem.
I really don't understand this condescending attitude towards people who decide not to use a new library because in their particular circumstances the cost of using the library was greater than the utility it provided, as if *those kind of people* are the real problem rather than the *high cost of using the library* itself.
No, this is not condescending -- this is being realistic. There are a lot of C++ libraries out there in the world, and only a few of them push the boundaries of what's possible with C++. Absolutely nobody is forcing anybody else to use whatever compiler or library to do whatever you need to do. We're all making choices here, and if people choose to complain rather than bone up and do the work, then I'm sorry, but there's not much sympathy going around here for that.
Listen, I love learning new libraries and languages. When I first discovered Boost one of the things I loved to do was to skim through the documentation of all of the libraries to see what they had to offer me because I thought it was *exciting*. However, it's not like I have an *infinite* amount of time to become an expert user in every library that exists. If a library requires a lot of time and trouble on my part and simply doesn't offer me enough to make up for this, then I will stop spending time learning how to use it so that I can spend that time learning things that are more useful, interesting, and fun.
I don't see how this is relevant.
Maybe one day I will have a need that Phoenix fills so well that it will be worth the investment of my time to figure out how to harness its full power; I am completely open minded to that possibility, and if that day comes I will be happy to invest the time to learn how to use it. Unless and until that day occurs, however, it doesn't make me a "backward-thinking" person that I will instead focus my limited time on more useful pursuits (for me) than learning how to use Boost.Phoenix --- such as arguing with people on the Boost mailing list!
Who's arguing? I certainly am not. Regarding the backward-thinking comment, it refers to developing libraries that use antiquated (read: non-modern) C++ approaches and hacks to shoehorn functionality that's not in the language to do something novel for novelty's sake. I also don't think it was directed at you specifically so I don't know why you're taking it personally. I was asking a question and I don't see how your answers to my questions were supposed to enlighten me about the situation I was originally curious about. So I ask again: Why are people blaming the libraries for horrible error messages when it's compilers that spew these error messages? Cheers -- Dean Michael Berris http://goo.gl/CKCJX

On Nov 24, 2011, at 11:35 PM, Dean Michael Berris wrote:
Absolutely nobody is forcing anybody else to use whatever compiler or library to do whatever you need to do. We're all making choices here and if people choose to complain rather than bone up and do the work then I'm sorry but there's not much sympathy going around here for that.
Hmm, I think that you may have misunderstood the context of the discussion that you entered. Nobody here is criticizing Phoenix for the sake of beating on Phoenix. What bothers many of us is the claim that Local doesn't belong in Boost because Phoenix is a superior solution. We have responded by pointing out the usability issues that it has in practice (regardless of whether this is its own *fault* or not) in order to make the point that Local has important advantages over it and hence the presence of Phoenix in Boost does not make Local redundant. By including both Phoenix and Local, people who want to use Phoenix can use Phoenix and people who want to use Local can use Local, and everyone will be happy. (This is not to say that there aren't other valid arguments for why it might not be a good idea to include Local in Boost, just that it is wrong-headed to declare that Local is an inherently inferior solution to Phoenix *despite* any practical difference in the end-user experiences.)
Listen, I love learning new libraries and languages. When I first discovered Boost one of the things I loved to do was to skim through the documentation of all of the libraries to see what they had to offer me because I thought it was *exciting*. However, it's not like I have an *infinite* amount of time to become an expert user in every library that exists. If a library requires a lot of time and trouble on my part and simply doesn't offer me enough to make up for this, then I will stop spending time learning how to use it so that I can spend that time learning things that are more useful, interesting, and fun.
I don't see how this is relevant.
It had been relevant because, given the full context of the discussion, it sounded a lot like you were criticizing those of us who refused to adopt new libraries because of their steep learning curve. Now that I realize you weren't saying that, I will happily admit that my paragraph is no longer (and never was) relevant. :-)
Maybe one day I will have a need that Phoenix fills so well that it will be worth the investment of my time to figure out how to harness its full power; I am completely open minded to that possibility, and if that day comes I will be happy to invest the time to learn how to use it. Unless and until that day occurs, however, it doesn't make me a "backward-thinking" person that I will instead focus my limited time on more useful pursuits (for me) than learning how to use Boost.Phoenix --- such as arguing with people on the Boost mailing list!
Who's arguing? I certainly am not.
Great, we agree on something! :-)
Regarding the backward-thinking comment, it refers to developing libraries that use antiquated (read: non-modern) C++ approaches and hacks to shoehorn functionality that's not in the language to do something novel for novelty's sake.
Local is definitely not "doing something novel for novelty's sake"; it is solving a problem that many of us face in a way that has advantages over Phoenix. It might use macros to do this, but given that there are important cases where it provides a significantly better user experience than Phoenix, citing it as an example of "backwards-thinking" merely because it uses macros rather than TMP is incredibly wrong-headed.
I also don't think it was directed at you specifically so I don't know why you're taking it personally.
*shrug* Maybe I am taking it personally, maybe I'm not. Regardless, I consider the use of the term "backwards-thinking" in this context to be wrong-headed at best and condescending at worst.
I was asking a question and I don't see how your answers to my questions were supposed to enlighten me about the situation I was originally curious about. So I ask again:
Why are people blaming the libraries for horrible error messages when it's compilers that spew these error messages?
I already answered that question for you quite clearly: we aren't. However if your library spews error messages and another library with overlapping functionality does not, then you shouldn't complain that your library is somehow being treated unfairly (because the faults of the compilers aren't being taken into account) when many of us decide that this makes the other library sufficiently superior in many circumstances to your own that it deserves to be included in Boost. Cheers, Greg

On Fri, Nov 25, 2011 at 1:27 AM, Gregory Crosswhite <gcrosswhite@gmail.com> wrote:
On Nov 24, 2011, at 11:35 PM, Dean Michael Berris wrote:
Absolutely nobody is forcing anybody else to use whatever compiler or library to do whatever you need to do. We're all making choices here and if people choose to complain rather than bone up and do the work then I'm sorry but there's not much sympathy going around here for that.
Hmm, I think that you may have misunderstood the context of the discussion that you entered.
Or you may be misunderstanding the context of the discussion I entered.
Nobody here is criticizing Phoenix for the sake of beating on Phoenix. What bothers many of us is the claim that Local doesn't belong in Boost because Phoenix is a superior solution. We have responded by pointing out the usability issues that it has in practice (regardless of whether this is its own *fault* or not) in order to make the point that Local has important advantages over it and hence the presence of Phoenix in Boost does not make Local redundant. By including both Phoenix and Local, people who want to use Phoenix can use Phoenix and people who want to use Local can use Local, and everyone will be happy.
It's not about Phoenix. Realize that this thread started as a discussion of whether libraries that try to approximate functionality from C++11 in C++03 should be included in Boost. Phoenix only came up as an example. Now people have brought up the issue of the error messages that appear when you use Phoenix *incorrectly*. The solution to that problem is to use it correctly, and to do that, learn to read the compiler diagnostics and error messages. As far as Phoenix vs Local goes, I view that discussion as myopic. That's not the point I wanted to tackle, yet it gets brought up over and over again. Read my messages again and then tell me where I distinctly said Local shouldn't be included because Phoenix is there. If you're complaining about what other people have said in a different discussion then I can't comment on that -- and I don't think it's relevant either to the discussion I keep wanting to have, which is being muddied up by feelings and personal convictions. To bring it back to what I really want to understand: Why does Local need to be in Boost? Can it not stand alone as something else outside of Boost? So what if people vote against the inclusion of Local into Boost because they feel it's an unnecessary library?
(This is not to say that there aren't other valid arguments for why it might not be a good idea to include Local in Boost, just that it is wrong-headed to declare that Local is an inherently inferior solution to Phoenix *despite* any practical difference in the end-user experiences.)
But it is inferior to Phoenix on a point-by-point basis. It's also inferior to C++11 lambdas. Why is there any doubt in this? I don't get why broken code (whether code using Phoenix or Local) should be the basis for whether a library is superior to another as far as end-user experience is concerned. It's broken code; it doesn't even compile! What *should* be the basis is the merit of the library. I have already put forth my points as to why I think libraries like and including Local ought to be approached in a different manner -- and that it probably shouldn't be in Boost. I don't see you addressing any of those points; instead you dwell on the Phoenix discussion, which isn't the one I originally wanted to have when I first responded to the thread anyway.
Listen, I love learning new libraries and languages. When I first discovered Boost one of the things I loved to do was to skim through the documentation of all of the libraries to see what they had to offer me because I thought it was *exciting*. However, it's not like I have an *infinite* amount of time to become an expert user in every library that exists. If a library requires a lot of time and trouble on my part and simply doesn't offer me enough to make up for this, then I will stop spending time learning how to use it so that I can spend that time learning things that are more useful, interesting, and fun.
I don't see how this is relevant.
It had been relevant because, given the full context of the discussion, it sounded a lot like you were criticizing those of us who refused to adopt new libraries because of their steep learning curve. Now that I realize you weren't saying that, I will happily admit that my paragraph is no longer (and never was) relevant. :-)
Okay. :)
Maybe one day I will have a need that Phoenix fills so well that it will be worth the investment of my time to figure out how to harness its full power; I am completely open minded to that possibility, and if that day comes I will be happy to invest the time to learn how to use it. Unless and until that day occurs, however, it doesn't make me a "backward-thinking" person that I will instead focus my limited time on more useful pursuits (for me) than learning how to use Boost.Phoenix --- such as arguing with people on the Boost mailing list!
Who's arguing? I certainly am not.
Great, we agree on something! :-)
Happy day. :)
Regarding the backward-thinking comment, it refers to developing libraries that use antiquated (read: non-modern) C++ approaches and hacks to shoehorn functionality that's not in the language to do something novel for novelty's sake.
Local is definitely not "doing something novel for novelty's sake"; it is solving a problem that many of us face in a way that has advantages over Phoenix. It might use macros to do this, but given that there are important cases where it provides a significantly better user experience than Phoenix, citing it as an example of "backwards-thinking" merely because it uses macros rather than TMP is incredibly wrong-headed.
Wait, why does the comparison have to be with Phoenix? Why can't the comparison be against normal class/namespace scope functions? As I've already pointed out in another message, I see absolutely zero advantage to using local functions -- whether first-class in C++1x or via some library -- compared to C++11 lambdas or Phoenix lambdas defined inline. There's also zero advantage to using local functions compared to namespace/class scope functions. It's a solution looking for a problem, and unfortunately I don't see the problem with not having local functions in C++ (as I've already pointed out as well).
I also don't think it was directed at you specifically so I don't know why you're taking it personally.
*shrug* Maybe I am taking it personally, maybe I'm not. Regardless, I consider the use of the term "backwards-thinking" in this context to be wrong-headed at best and condescending at worst.
If you look at it objectively, using macros when you can type the code out on your own is unnecessary obfuscation. With the features that are already in C++11, the libraries already in Boost, and -- don't forget -- normal functions at class/namespace scope, it's backward to think that you would get away with using macros to do what you can do with normal C++ facilities. It's 2011; we shouldn't be using C++98.
I was asking a question and I don't see how your answers to my questions were supposed to enlighten me about the situation I was originally curious about. So I ask again:
Why are people blaming the libraries for horrible error messages when it's compilers that spew these error messages?
I already answered that question for you quite clearly: we aren't. However if your library spews error messages and another library with overlapping functionality does not, then you shouldn't complain that your library is somehow being treated unfairly (because the faults of the compilers aren't being taken into account) when many of us decide that this makes the other library sufficiently superior in many circumstances to your own that it deserves to be included in Boost.
Hey, I'm not complaining that people are bashing libraries. What I'm complaining about is why people aren't complaining (meta, isn't it?) that their compilers suck at displaying better error messages for broken code. Again, *THE LIBRARY DOESN'T SPEW ERROR MESSAGES; COMPILERS DO THAT FOR BROKEN CODE*. So again, how does broken code using any library become a basis for whether the library is a good library for inclusion in Boost? It just seems silly to me. Cheers -- Dean Michael Berris http://goo.gl/CKCJX

On 24 Nov 2011, at 14:56, Dean Michael Berris wrote:
I don't get why broken code (whether code using Phoenix or Local) should be the basis for whether a library is superior to another as far as end-user experience is concerned. It's broken code; it doesn't even compile!
Because I have spent more than 10 minutes figuring out why Phoenix code wouldn't compile, and that hasn't happened with any other C++ library. In general, when writing code, programmers spend much more time with code which is either compile-time or run-time incorrect. If I wrote perfect code the first time, then my job would be much, much easier! It is the compiler's fault, but in practice it makes the library very, very hard to use. Personally, if Boost.Local were accepted I would expect to use it for a year or so, until I could assume the people I work with all had decent C++11 compilers, and then drop it for lambdas. I'm never going to start using boost::phoenix in code I share with other people. Is that a good enough reason to accept it into Boost? I'm not completely sure. Chris

On Fri, Nov 25, 2011 at 3:13 AM, Christopher Jefferson <chris@bubblescope.net> wrote:
On 24 Nov 2011, at 14:56, Dean Michael Berris wrote:
I don't get why broken code (whether code using Phoenix or Local) should be the basis for whether a library is superior to another as far as end-user experience is concerned. It's broken code; it doesn't even compile!
Because I have spent more than 10 minutes figuring out why Phoenix code wouldn't compile, and that hasn't happened with any other C++ library. In general, when writing code, programmers spend much more time with code which is either compile-time or run-time incorrect. If I wrote perfect code the first time, then my job would be much, much easier! It is the compiler's fault, but in practice it makes the library very, very hard to use.
Speaking from experience, 10 minutes is too much. I've used Phoenix v2 extensively, and I found that by reading the docs before trying to do anything with it, I had an easier time of it. That said, I haven't tried Phoenix v3, though I've been debugging Spirit code since the 2.x days -- again, reading the docs, keeping them handy, and not getting scared by compiler barfage. I'm not saying there's no learning curve -- it involves reading the documentation, trying things out, and getting the "spirit" of the library and the semantics of usage. But that's just me, and I recognize that others might not have it as good as I did when I was learning both Spirit and Phoenix.
Personally, if Boost.Local were accepted I would expect to use it for a year or so, until I could assume the people I work with all had decent C++11 compilers, and then drop it for lambdas. I'm never going to start using boost::phoenix in code I share with other people.
Is that a good enough reason to accept it into boost? I'm not completely sure.
Exactly my point of contention too. Cheers -- Dean Michael Berris http://goo.gl/CKCJX

I don't get why broken code (whether code using Phoenix or Local) should be the basis for whether a library is superior to another as far as end-user experience is concerned. It's broken code; it doesn't even compile!
Because I have spent more than 10 minutes figuring out why Phoenix code wouldn't compile, and that hasn't happened with any other C++ library.
What's the problem with having to _think_ for 10 minutes? If you get through that experience it will take you only 9 minutes when you have to do it again. I still can't understand why people complain when they have to use their brains.
In general, when writing code programmers spend much more time with code which is either compile-time, or run-time, incorrect. If I wrote perfect code first time, then my job would be much, much easier! It is the compiler's fault, but in practice, it makes the library very, very hard to use.
If you don't love fixing bugs you're in the wrong profession.
Personally, if Boost.Local were accepted I would expect to use it for a year or so, until I could assume the people I work with all had decent C++11 compilers, and then drop it for lambdas. I'm never going to start using boost::phoenix in code I share with other people.
Great, do that. Boost.Local does not have to be in Boost for this plan to succeed.
Is that a good enough reason to accept it into boost? I'm not completely sure.
Definitely no. Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

On 24 November 2011 17:59, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
Because I have spent more than 10 minutes figuring out why Phoenix code wouldn't compile, and that hasn't happened with any other C++ library.
What's the problem with having to _think_ for 10 minutes? If you get through that experience it will take you only 9 minutes when you have to do it again. I still can't understand why people complain when they have to use their brains.
When a casual phoenix user encounters the problem a second time, they're likely to have forgotten how they fixed it the first time. I think the problem is when they have to use their brain for something they don't feel they should have to, and would rather be using it for something more useful/productive/interesting/fun.

On Thu, Nov 24, 2011 at 12:59 PM, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
Personally, if Boost.Local were accepted I would expect to use it for a year or so, until I could assume the people I work with all had decent C++11 compilers, and then drop it for lambdas. I'm never going to start using boost::phoenix in code I share with other people.
Great, do that. Boost.Local does not have to be in Boost for this plan to succeed.
This statement can be interpreted in many ways, some of which are not fair at all to a library under review and are never mentioned in the Boost review process... for a library not to be accepted so that a user's "plan" of using Boost in some "way" will not "succeed". Which admission criterion is this... Can you please clarify your statement? Which "plan" is not to succeed by preventing Boost.Local from being accepted? I'd really appreciate you clarifying this. --Lorenzo

On Thu, Nov 24, 2011 at 12:59 PM, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
Personally, if Boost.Local were accepted I would expect to use it for a year or so, until I could assume the people I work with all had decent C++11 compilers, and then drop it for lambdas. I'm never going to start using boost::phoenix in code I share with other people.
Great, do that. Boost.Local does not have to be in Boost for this plan to succeed.
This statement can be interpreted in many ways, some of which are not fair at all to a library under review and are never mentioned in the Boost review process... for a library not to be accepted so that a user's "plan" of using Boost in some "way" will not "succeed". Which admission criterion is this...
I said during my review that IMHO Boost.Local does not belong in Boost. Nevertheless, it may be of value for others, which you could ensure by maintaining the library outside of Boost.
Can you please clarify your statement? Which "plan" is not to succeed by preventing Boost.Local from being accepted?
I was referring to the 'plan' of the OP to use Boost.Local for a year or so and to switch to C++11 lambdas afterwards. Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

On Thu, Nov 24, 2011 at 4:42 PM, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
Can you please clarify your statement? Which "plan" is not to succeed by preventing Boost.Local from being accepted?
I was referring to the 'plan' of the OP to use Boost.Local for a year or so and to switch to C++11 lambdas afterwards.
Thank you for the clarification. On this point, I would expect to use the C++11 algorithms that are part of Boost.Algorithm for the same duration (1 year or so for the OP, probably more like 3 years for me :( ) while I still have to use C++03 compilers: all_of, any_of, none_of, plus the upcoming algorithms below: On Wed, Nov 23, 2011 at 12:56 AM, Marshall Clow <mclow.lists@gmail.com> wrote:
I've implemented several of the new features of the C++11 standard library for the upcoming Boost.Algorithm library. I will be putting them up for comment as soon as the library review is finished. [ Examples: copy_if, find_if_not, iota, is_partitioned, partition_point, and so on ]
For example "I plan to use boost::all_of for 1 year and to switch to std::all_of after that when my C++11 compiler becomes available". In your opinion, is this plan a problem or not for Boost.Algorithm? Thanks. --Lorenzo

On 24 Nov 2011, at 17:59, Hartmut Kaiser wrote:
I don't get why broken code (whether code using Phoenix or Local) should be the basis for whether a library is superior to another as far as end-user experience is concerned. It's broken code; it doesn't even compile!
Because I have spent more than 10 minutes figuring why Phoenix code wouldn't compile, and that hasn't happened with any other C++ library.
What's the problem with having to _think_ for 10 minutes? If you get through that experience it will take you only 9 minutes when you have to do it again. I still can't understand why people complain when they have to use their brains.
This is not a helpful position to take, and to be honest I find it offensive. My job is researching AI algorithms; I consider myself to "use my brains" all day, every day, trying to design new and interesting algorithms to make progress in solving some really hard AI problems. However, I don't have infinite brain power, and 10 minutes spent decoding huge TMP error messages is 10 minutes I didn't spend solving the problems I actually care about solving. Chris

On Fri, Nov 25, 2011 at 8:59 AM, Christopher Jefferson <chris@bubblescope.net> wrote:
On 24 Nov 2011, at 17:59, Hartmut Kaiser wrote:
What's the problem with having to _think_ for 10 minutes? If you get through that experience it will take you only 9 minutes when you have to do it again. I still can't understand why people complain when they have to use their brains.
This is not a helpful position to take, and to be honest I find it offensive.
My job is researching AI algorithms, I consider myself to "use my brains" all day, every day, trying to design new and interesting algorithms to make progress in solving some really hard AI problems.
However, I don't have infinite brain power, and 10 minutes spent decoding huge TMP error messages is 10 minutes I didn't spend solving the problems I actually care about solving.
I don't see what your job has to do with it: since you're using C++, you're going to have to learn to read the error messages and debug errors anyway whenever your program is broken. Cheers -- Dean Michael Berris http://goo.gl/CKCJX

Because I have spent more than 10 minutes figuring why Phoenix code wouldn't compile, and that hasn't happened with any other C++ library.
What's the problem with having to _think_ for 10 minutes? If you get through that experience it will take you only 9 minutes when you have to do it again. I still can't understand why people complain when they have to use their brains.
This is not a helpful position to take, and to be honest I find it offensive.
My job is researching AI algorithms, I consider myself to "use my brains" all day, every day, trying to design new and interesting algorithms to make progress in solving some really hard AI problems.
However, I don't have infinite brain power, and 10 minutes spent decoding huge TMP error messages is 10 minutes I didn't spend solving the problems I actually care about solving.
+5! Matthias

On 24 November 2011 13:35, Dean Michael Berris <mikhailberis@gmail.com> wrote:
Why are people blaming the libraries for horrible error messages when it's compilers that spew these error messages?
They just don't want to use a library which has horrible error messages. It doesn't matter if it's the library's fault or the compiler's fault. Blame has nothing to do with it.

On 11/25/2011 3:10 AM, Daniel James wrote:
On 24 November 2011 13:35, Dean Michael Berris <mikhailberis@gmail.com> wrote:
Why are people blaming the libraries for horrible error messages when it's compilers that spew these error messages?
They just don't want to use a library which has horrible error messages. It doesn't matter if it's the library's fault or the compiler's fault. Blame has nothing to do with it.
Those are very good points, Daniel. So is this, from Nathan:
The problem is that errors point to code that is deep within the library's implementation, or worse, the implementations of helper libraries used by the library. To understand the error, the user has to look at the library's implementation. The user shouldn't have to do that - understanding the implementation of a library should not be a prerequisite for using it.
If you have any specific suggestions about what compilers could do to turn these errors from deep within the library implementation, into errors that do not require knowing anything about the library implementation, I would like to hear it, and I'm sure so would compiler writers.
Otherwise, you have to accept that library writers face a trade-off between the user-friendliness of error messages, and the expressiveness, terseness, and power obtained by extensive use of advanced techniques such as template metaprogramming. There is no one right answer to this tradeoff, and it is good for users to have different alternatives available to them.
It's not that we TMP/ET advocates are ignorant about it. There have been efforts to fix or at least alleviate this problem. Yet, it's still not enough to satisfy everyone. Eric Niebler noted that it is a bug if a library spews error messages and that the one encountering such problems should report it as a bug for the author to fix. We've done that in Spirit by detecting invalid expressions as early as possible with a static-assert:

    template <typename Auto, typename Expr>
    static void define(rule& lhs, Expr const& expr, mpl::false_)
    {
        // Report invalid expression error as early as possible.
        // If you got an error_invalid_expression error message here,
        // then the expression (expr) is not a valid spirit qi expression.
        BOOST_SPIRIT_ASSERT_MATCH(qi::domain, Expr);
    }

The goal is to provide better diagnostics for the user. The code is isolated (notice that this branch of code calls nothing else in order to minimize the error trace). Brent Spillner triggered this static-assert:
But that's nothing; if you omit the '>>' from the expression or term definition you get the 33 kilochar beauty copied at the bottom of this email.
With MSVC, you see the error up front:

    C:\dev\boost\boost/spirit/home/qi/nonterminal/rule.hpp(176) : error C2664: 'boost::mpl::assertion_failed' : cannot convert parameter 1 from 'boost::mpl::failed ************(__thiscall boost::spirit::qi::rule<Iterator,T1,T2,T3,T4>::define::error_invalid_expression::* ***********)(Expr)' to 'boost::mpl::assert<false>::type'

plus some more error trace after that, which leads you to the erroneous Spirit expression. G++ is worse. It spews 4 long lines of trace before the actual static-assert. You'll have to do a textual search for "error" to get to the actual error. You might think that 4 lines is not that long, but these 4 lines contain complex ET types that the casual user need not know about. Right now, we are using BOOST_MPL_ASSERT_MSG. I switched to C++11 static_assert and now I get a better error message in MSVC:

    C:\dev\boost\boost/spirit/home/qi/nonterminal/rule.hpp(179) : error C2338: Invalid Expression
    C:\dev\boost\boost/spirit/home/qi/nonterminal/rule.hpp(223) : see reference to ...

plus some more error trace after. With g++, I get:

    C:\dev\boost/boost/spirit/home/qi/nonterminal/rule.hpp:179:13: error: static assertion failed: "Invalid Expression"

which looks good, but without any trailing error trace, there's no way to follow the trail to where the actual error is. (I think that's a g++ bug! If you care about g++, they should be informed about this problem.) Sooo... despite our valiant efforts, the current state of affairs is still not good enough. People scared by error messages are still turned off. Now. What else can we do about it? Here's an idea. I'd like to hear yours. Presentation: The error trace is very noisy! It contains lots of types with full qualifications. E.g.
    Expr = boost::proto::exprns_::expr<boost::proto::tagns_::tag::shift_right, boost::proto::argsns_::list2<const boost::proto::exprns_::expr<boost::proto::tagns_::tag::subscript

It might be good to present that in a GUI where the types are collapsed by default. Qualifications in a trace can also be optionally collapsed. With this, the user will see only this, for example:

    [+] rule.hpp(179) : error C2338: Invalid Expression
    [+] rule.hpp(223) : see reference to function template instantiation [+] with [+]
    [+] calc3.cpp(54) : see reference to function template instantiation [+] with [+]
    [+] calc3.cpp(43) : while compiling class template member function [+] with [+]
    [+] calc3.cpp(90) : see reference to class template instantiation [+] with [+]

Clicking one of the [+] will reveal further information, hierarchically. It is possible to write a parser for the common compilers (MSVC and g++) and have it generate the data in HTML with some javascript for collapsing the data, or maybe just XML, since text editors can collapse XML nodes already. I care about TMP in general and especially ET. I'd like to make the situation better, lest we have more anti-TMP sentiment. I'm sure many people here care too. It's so utterly frustrating to see people avoid TMP libraries because of this prevalent problem. If there are any more nice ideas (hey, this is Boost and we are full of wonderful ideas!), please share them. Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

Joel de Guzman <joel <at> boost-consulting.com> writes:
Now. What else can we do about it? Here's an idea. I'd like to hear yours.
It might be good to present that in a GUI where the types are collapsed by default. Qualifications in a trace can also be optionally collapsed.
Hi Joel, it's a very good idea. In our team, I made a perl script that cleans out all compiler noise (GCC) and has some special handling for common cases like MPL_ASSERT, CONCEPT_CHECK etc. It has been running for years already and has proven to be extremely effective in nailing down errors. Here are a couple of examples:

* BOOST_MPL_ASSERT_RELATION:

    test.h:75: error: ************::assert_relation<2l == 3l>::************)'

Original error message:

    test.h:75: error: no matching function for call to 'assertion_failed(mpl_::failed************ mpl_::assert_relation<(mpl_::assert_::relations)1u, 2l, 3l>::************)'

* Concept checks:

    test.cpp:29: error: CONCEPT NOT SATISFIED: RandomAccessIterator<A>
    test.cpp:28: error: CONCEPT CHECK FAILED: RandomAccessIterator<list<int>::iterator >

These come from 17 lines of long error messages that I don't want to put here for their length, but you can easily imagine how they look.

* BOOST_MPL_ASSERT_MSG:

    test.cpp:311: error: ************::TAG_IS_NOT_IN_TAG_LIST::************(MTag<815, CharRange>, mpl::vector<MetaTag::OrderQty, MetaTag::Price>)

This comes from this original line:

    test.cpp:311: error: no matching function for call to `assertion_failed(mpl_::failed************(ns::oc::<unnamed>::FlagChecker<Tags, RequiredTags>::operator()(const Tag&) const [with Tag = ns::MTag<815, ns::CharRange>, Tags = MyClass::update_from_msg(const ns::SharedMsg&)::fields, RequiredTags = MyClass::update_from_msg(const ns::SharedMsg&)::r_fields]::TAG_IS_NOT_IN_TAG_LIST::************)(ns::MTag<815, ns::CharRange>, boost::mpl::vector<ns::MetaTag::OrderQty, ns::MetaTag::Price, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>))'

Here my script also unfolds the "where" part and gets rid of the repetitive mpl_::na nonsense. As you can see, these filters really help to emphasize what the problem was, without exposing too much of the guts.
This script is run from the makefile, transparently to the user, and is controlled by environment variables, so if you want to dig deeper (in case the error filter swallowed too much) you just adjust the environment variables and rerun make. Regards, Maxim
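Maxim's filter is a perl script and isn't shown here, but the core idea is easy to sketch. The toy below (C++ with std::regex, handling only the BOOST_MPL_ASSERT_RELATION pattern from his first example, and assuming the relation code 1u encodes equality) shows the kind of rewrite such a filter performs:

```cpp
#include <regex>
#include <string>

// Rewrite the noisy BOOST_MPL_ASSERT_RELATION failure into a condensed
// form. Only one pattern is handled; a real filter would carry a rule
// per assertion macro / concept check, as Maxim describes.
std::string condense_assert_relation(std::string const& line)
{
    // (mpl_::assert_::relations)1u is assumed to encode "==" here.
    static std::regex const noisy(
        R"(mpl_::assert_relation<\(mpl_::assert_::relations\)1u, ([^,]+), ([^>]+)>)");
    return std::regex_replace(line, noisy, "assert_relation<$1 == $2>");
}
```

Lines that match no rule pass through unchanged, so the filter can sit harmlessly in the compiler's output pipe.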

Le 25/11/2011 03:04, Maxim Yanchenko a écrit :
Joel de Guzman<joel<at> boost-consulting.com> writes:
Now. What else can we do about it? Here's an idea. I'd like to hear yours.
It might be good to present that in a GUI where the types are collapsed by default. Qualifications in a trace can also be optionally collapsed.
Hi Joel, it's a very correct idea. In our team, I made a perl script that cleans out all compiler noise (GCC) and has some special handling for common cases like MPL_ASSERT, CONCEPT_CHECK etc. This runs for years already and proven to be extremely effective in nailing down errors. <snip goodness> This script is being run in makefile transparent to user, and is controlled by environment variables, so if you want to get deeper (in case the error filter swallowed too much) you just adjust environment variables and rerun make.
Is this something sharable? Having this widely available would be nice.

On 11/25/2011 03:04 AM, Maxim Yanchenko wrote:
Joel de Guzman <joel <at> boost-consulting.com> writes:
Now. What else can we do about it? Here's an idea. I'd like to hear yours.
It might be good to present that in a GUI where the types are collapsed by default. Qualifications in a trace can also be optionally collapsed.
Hi Joel, it's a very correct idea. In our team, I made a perl script that cleans out all compiler noise (GCC) and has some special handling for common cases like MPL_ASSERT, CONCEPT_CHECK etc. This runs for years already and proven to be extremely effective in nailing down errors.
This sounds very interesting indeed. It would be cool to use it like ccache, i.e.
tmpMessageFilter -> ccache -> g++ Regards, Roland

On 11/25/2011 11:38 PM, Roland Bock wrote:
On 11/25/2011 03:04 AM, Maxim Yanchenko wrote:
Joel de Guzman <joel <at> boost-consulting.com> writes:
Now. What else can we do about it? Here's an idea. I'd like to hear yours.
It might be good to present that in a GUI where the types are collapsed by default. Qualifications in a trace can also be optionally collapsed.
Hi Joel, it's a very correct idea. In our team, I made a perl script that cleans out all compiler noise (GCC) and has some special handling for common cases like MPL_ASSERT, CONCEPT_CHECK etc. This runs for years already and proven to be extremely effective in nailing down errors.
This sounds very interesting indeed. It would be cool to use it like ccache, i.e.
tmpMessageFilter -> ccache -> g++
I'm very interested. How can we get ahold of this? G++ is a good start. I'd love to see something for the "other" popular compiler as well ;-) Such a tool will be invaluable in fighting the deluge of error messages. Ideally, I'd love to see the presentation dynamically, like in my original idea where users can click to expand the errors at any given point in the trace. Perhaps this is a good GSoC project. Let's have it, Maxim! :-) Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

on Fri Nov 25 2011, Joel de Guzman <joel-AT-boost-consulting.com> wrote:
On 11/25/2011 11:38 PM, Roland Bock wrote:
On 11/25/2011 03:04 AM, Maxim Yanchenko wrote:
Joel de Guzman <joel <at> boost-consulting.com> writes:
Now. What else can we do about it? Here's an idea. I'd like to hear yours.
It might be good to present that in a GUI where the types are collapsed by default. Qualifications in a trace can also be optionally collapsed.
Hi Joel, it's a very correct idea. In our team, I made a perl script that cleans out all compiler noise (GCC) and has some special handling for common cases like MPL_ASSERT, CONCEPT_CHECK etc. This runs for years already and proven to be extremely effective in nailing down errors.
This sounds very interesting indeed. It would be cool to use it like ccache, i.e.
tmpMessageFilter -> ccache -> g++
I'm very interested. How can we get ahold of this? G++ is a good start. I'd love to see something for the "other" popular compiler as well ;-) Such a tool will be invaluable in fighting the deluge of error messages.
I presume you know of stlfilt <http://bdsoft.com>?
Ideally, I'd love to see the presentation dynamically, like in my original idea where users can click to expand the errors at any given point in the trace. Perhaps this is a good GSoC project.
I've often thought about combining the filtering tricks of stlfilt with a folding error browser. Hmm... folding errors in Emacs sounds like an amazingly good idea. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Hi,
The problem is that errors point to code that is deep within the library's implementation, or worse, the implementations of helper libraries used by the library. To understand the error, the user has to look at the library's implementation. The user shouldn't have to do that - understanding the implementation of a library should not be a prerequisite for using it.
If you have any specific suggestions about what compilers could do to turn these errors from deep within the library implementation, into errors that do not require knowing anything about the library implementation, I would like to hear it, and I'm sure so would compiler writers.
Otherwise, you have to accept that library writers face a trade-off between the user-friendliness of error messages, and the expressiveness, terseness, and power obtained by extensive use of advanced techniques such as template metaprogramming. There is no one right answer to this tradeoff, and it is good for users to have different alternatives available to them.
We've been experimenting with returning error messages from TMP libraries that make sense in the domain of the library. Our approach is returning a class describing the error when a metafunction is called with invalid arguments (a pretty-printer can display that information later in a human-readable way). The assumption is that this error information is returned by a metafunction deep inside the TMP library and needs to be propagated out (maybe changed a bit along the way to make more sense to the user). For this we've used monads, with which we can simulate exceptions being thrown at compile time. Using monads increases the complexity of the TMP library; however, most of it can be hidden by another library. We've been working on such a library (not in Boost). What TMP library authors can do is wrap the body of every metafunction with a template that "adds" error propagation to it. For example, instead of:

    template <class A, class B, class C>
    struct some_metafunction_in_the_library : f<g<A, C>, h<B>, A> {};

one can write:

    template <class A, class B, class C>
    struct some_metafunction_in_the_library : try_<f<g<A, C>, h<B>, A> > {};

in which case, if f, g or h returns an error, the same error is returned by some_metafunction_in_the_library instead of breaking the compilation with a cryptic error message. At the top level it can be pretty-printed by a test harness (e.g. a simple binary built to pretty-print it, or a TMP unit testing framework). A downside is that it makes compilation slower (see http://plcportal.inf.elte.hu/en/publications/TechnicalReports/monad-tr.pdf for measurements). So far we've only been using it for improving error messages coming from template metafunctions. Cases where they can't be easily and clearly separated from runtime code haven't been addressed yet. This solution is part of a library implementing monads in C++ template metaprogramming.
Documentation: http://abel.web.elte.hu/mpllibs/metamonad/index.html Source code: https://github.com/sabel83/mpllibs/tree/master/libs/metamonad Note that there are only a few examples so far. Regards, Abel
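To make the "errors as types" idea concrete for readers, here is a minimal self-contained sketch (the names divide_by_zero, div_ and report are illustrative, not mpllibs.metamonad's actual API): a metafunction returns an error type instead of failing to compile, and only the top level turns it into a readable diagnostic:

```cpp
#include <type_traits>

struct divide_by_zero {};  // a domain-level error, represented as a type

// div_ "throws" by returning the error type instead of triggering a
// hard compilation failure deep inside the library.
template <int A, int B, bool = (B == 0)>
struct div_ { typedef divide_by_zero type; };

template <int A, int B>
struct div_<A, B, false> { typedef std::integral_constant<int, A / B> type; };

// At the top level, a pretty-printer (here: plain overloads) turns the
// result - value or error - into something the user understands.
template <int N>
constexpr int report(std::integral_constant<int, N>) { return N; }
constexpr int report(divide_by_zero) { return -1; }  // stand-in for a real diagnostic
```

Here report(div_<10, 2>::type{}) yields 5, while div_<1, 0> quietly propagates divide_by_zero instead of producing a page of instantiation context.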

Ábel Sinkovics wrote:
Hi,
The problem is that errors point to code that is deep within the library's implementation, or worse, the implementations of helper libraries used by the library. To understand the error, the user has to look at the library's implementation. The user shouldn't have to do that - understanding the implementation of a library should not be a prerequisite for using it.
...
So far we've only been using it for improving error messages coming from template metafunctions. Cases where they can't be easily and clearly separated from runtime code haven't been addressed yet.
This solution is part of a library implementing monads in C++ template metaprogramming. Documentation: http://abel.web.elte.hu/mpllibs/metamonad/index.html Source code: https://github.com/sabel83/mpllibs/tree/master/libs/metamonad Note that there are only a few examples so far.
I've been poking around Haskell/C++/monads just out of curiosity for some time. Can't say I've really made much progress understanding it all. But I did find your links very intriguing. In my own work, I've come to rely on Boost.ConceptChecks (BCC) to short-circuit the error list. Also, once one becomes familiar with it, it is very easy to use. It's basically a free compile-time assertion to check metafunction arguments. Robert Ramey
Regards, Abel

On 11/26/2011 9:22 PM, Ábel Sinkovics wrote:
Hi,
The problem is that errors point to code that is deep within the library's implementation, or worse, the implementations of helper libraries used by the library. To understand the error, the user has to look at the library's implementation. The user shouldn't have to do that - understanding the implementation of a library should not be a prerequisite for using it.
If you have any specific suggestions about what compilers could do to turn these errors from deep within the library implementation, into errors that do not require knowing anything about the library implementation, I would like to hear it, and I'm sure so would compiler writers.
Otherwise, you have to accept that library writers face a trade-off between the user-friendliness of error messages, and the expressiveness, terseness, and power obtained by extensive use of advanced techniques such as template metaprogramming. There is no one right answer to this tradeoff, and it is good for users to have different alternatives available to them.
We've been experimenting with returning error messages from TMP libraries that make sense in the domain of the library. Our approach is returning a class describing the error when a metafunction is called with invalid arguments (a pretty-printer can display that information later in a human-readable way). The assumption is that this error information is returned by a metafunction deeply inside the TMP library and need to be propagated out (maybe changed a bit along the way to make more sense to the user). For this we've used monads, using which we could simulate exceptions being thrown at compile-time. Using monads increases the complexity of the TMP library, however, most of it can be hidden by another library. We've been working on such a library (not in Boost).
What TMP library authors can do is wrapping the body of every metafunction with a template that "adds" error propagation to it. For example instead of:
template <class A, class B, class C> struct some_metafunction_in_the_library : f<g<A, C>, h<B>, A> {};
one can write
template <class A, class B, class C> struct some_metafunction_in_the_library : try_<f<g<A, C>, h<B>, A> > {};
in which case if f, g or h returns an error, the same error is returned by some_metafunction_in_the_library instead of breaking the compilation with a cryptic error message. At the top level it can be pretty-printed by a test harness (eg. simple binary built to pretty-print it or by a TMP unit testing framework). A downside of using it is that it makes compilation slower (see http://plcportal.inf.elte.hu/en/publications/TechnicalReports/monad-tr.pdf for measurements).
So far we've only been using it for improving error messages coming from template metafunctions. Cases where they can't be easily and clearly separated from runtime code haven't been addressed yet.
This solution is part of a library implementing monads in C++ template metaprogramming. Documentation: http://abel.web.elte.hu/mpllibs/metamonad/index.html Source code: https://github.com/sabel83/mpllibs/tree/master/libs/metamonad
Note that there are only a few examples so far.
I've been reading the docs. This is very clever! I wonder how it applies to expression templates with runtime code or Fusion with runtime components? A simple example would be a template plus function:

    template <typename A, typename B>
    auto plus(A const& a, B const& b) -> decltype(a + b)
    { return a + b; }

The typical problem is an error if a+b is not allowed (e.g. a is an int but b is a std::vector). I don't see how your solution will help more than static_assert or better yet, Boost concepts. Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com
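For comparison, here is roughly what the static_assert route Joel mentions looks like for that plus function (is_addable and checked_plus are hypothetical names; this is a C++14 sketch, not anyone's shipped API): the expression is validated up front by a SFINAE detection trait, so an invalid call stops at one readable message instead of a long trace:

```cpp
#include <type_traits>
#include <utility>

// Detection trait: does "a + b" compile for types A and B?
template <typename A, typename B, typename = void>
struct is_addable : std::false_type {};

template <typename A, typename B>
struct is_addable<A, B,
    decltype(void(std::declval<A>() + std::declval<B>()))>
  : std::true_type {};

template <typename A, typename B>
auto checked_plus(A const& a, B const& b)
{
    // Report the invalid expression as early as possible, with a
    // message readable without knowing any implementation details.
    static_assert(is_addable<A, B>::value,
                  "Invalid expression: no operator+ for these operand types");
    return a + b;
}
```

Calling checked_plus(1, std::vector<int>{}) would fail with the single static_assert message rather than an instantiation backtrace; valid calls behave exactly like plus.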

Hi Joel,
I've been reading the docs. This is very clever! I wonder how it applies to expression templates with runtime code or Fusion with runtime components? A simple example would be a template plus function:

    template <typename A, typename B>
    auto plus(A const& a, B const& b) -> decltype(a + b)
    { return a + b; }

The typical problem is an error if a+b is not allowed (e.g. a is an int but b is a std::vector). I don't see how your solution will help more than static_assert or better yet, Boost concepts. Regards,
Thank you for checking our solution. The goal of the "compile-time exceptions" is to be able to propagate error messages out from a chain of recursive template metafunction calls. What may be useful for template functions is the following: when you use a static assertion to verify the template arguments of your template function, you can write metafunctions as predicates that "throw" a "compile-time exception" when the condition fails (and return some value otherwise). You should be able to use them as predicates for static assertions:

    BOOST_MPL_ASSERT((is_exception< your predicate that may throw goes here... >))

This will break the compilation with some error message. If the "exception" that was "thrown" supports some pretty-printing solution (such as http://abel.web.elte.hu/mpllibs/metatest/index.html#_pretty_printing_custom_...), the developer can give the problematic predicate to a pretty-printer in a separate compilation unit and get a human-readable and useful error message. Concept-checking conditions may be pretty-printed as well, in a similar way. As I said earlier, this solution was originally developed for returning error messages from template metafunctions; it may be useful in template functions as well - I'll play with it and see if it can help there. Regards, Abel

On Fri, Nov 25, 2011 at 6:10 AM, Daniel James <dnljms@gmail.com> wrote:
On 24 November 2011 13:35, Dean Michael Berris <mikhailberis@gmail.com> wrote:
Why are people blaming the libraries for horrible error messages when it's compilers that spew these error messages?
They just don't want to use a library which has horrible error messages. It doesn't matter if it's the library's fault or the compiler's fault. Blame has nothing to do with it.
But it's not the library emitting these horrible error messages. So why is this tied with the library and not the compiler? Why aren't people saying "I don't want to use this compiler because it's crappy at generating error messages for *any* code"? -- Dean Michael Berris http://goo.gl/CKCJX

On 25 November 2011 02:48, Dean Michael Berris <mikhailberis@gmail.com> wrote:
But it's not the library emitting these horrible error messages. So why is this tied with the library and not the compiler?
Imagine you know nothing of Boost. You're using C++ and you're used to the normal error messages: not great, but you understand them. Then one day someone points you to this exciting lambda emulation called Phoenix. You see the demos and they look great. You sit down and try writing a small program. And pages and pages of error messages fill your screen. You don't then think, "my compiler generates bad error messages", as it's normally good enough. You think, "I don't like this", and it happened as a result of using Phoenix. So the link is made: "Phoenix creates horrible error messages".
Why aren't people saying "I don't want to use this compiler because it's crappy at generating error messages for *any* code"?
People are reluctant to change from what they're used to. The following is from a post about google's use of clang. Remember that many C++ programmers won't even bother to try clang, let alone give it a week. On 31 October 2011 20:54, Chandler Carruth <chandlerc@google.com> wrote:
That said, while the feedback was overwhelmingly positive, there were definitely some who were less enthusiastic. Many of these people had worked with GCC for so long that they exhibited strong change aversion. The messages from GCC are very familiar, and map to an existing set of problem descriptions for these people. They faced a learning curve when the messages changed *at all*, and that was costly. Interestingly, for a surprising number of people in this bucket, after a few months of using Clang, they were reluctant to switch back. They had slowly noticed and started using several common elements of Clang's diagnostics (such as typo-correction and macro backtraces) without even realizing it. When they looked at GCC's messages, they didn't have the information they wanted to understand the problem.
The full post is at: http://lists.cs.uiuc.edu/pipermail/cfe-dev/2011-October/018352.html

On Fri, Nov 25, 2011 at 7:48 PM, Daniel James <dnljms@gmail.com> wrote:
On 25 November 2011 02:48, Dean Michael Berris <mikhailberis@gmail.com> wrote:
But it's not the library emitting these horrible error messages. So why is this tied with the library and not the compiler?
Imagine you know nothing of Boost. You're using C++ and you're used to the normal error messages, not great but you understand them. Then one day someone points you to this exciting lambda emulation called Phoenix, you see the demos and they look great. You sit down and try writing a small program. And pages and pages of error messages fill your screen. You don't then think, "my compiler generates bad error messages" as it's normally good enough. You think, "I don't like this" and it happened as a result of using Phoenix. So the link is made, "Phoenix creates horrible error messages".
This is exactly where I was a few years ago, and what I thought was exactly this: wow, GCC creates horrible error messages. I never for one second blamed the library, because I *knew* that it's not the library's fault - the error messages are generated by the compiler, not by the library. Why are we accepting illogical reasoning *at all* as a valid basis for "argument"?
Why aren't people saying "I don't want to use this compiler because it's crappy at generating error messages for *any* code"?
People are reluctant to change from what they're used to. The following is from a post about google's use of clang. Remember that many C++ programmers won't even bother to try clang, let alone give it a week.
I work at Google. Before I joined Google, I had been using four different compilers. Having a sane build system, *patience*, and willingness to learn are *required* to get anything done in this world. Why are we suddenly surprised by, or even encouraging, being lazy just for the sake of it? Besides, this is anecdotal. Hardly evidence. Anyway, I think this discussion is moot now because I haven't seen one logical refutation of my reasoning, nor have I seen one clear, logically sound answer to the questions I've asked. Have a good day guys, I'm now crawling under a rock. See you all in another 6/7 months. Cheers -- Dean Michael Berris http://goo.gl/CKCJX

Dean Michael Berris wrote:
On Fri, Nov 25, 2011 at 6:10 AM, Daniel James <dnljms@gmail.com> wrote:
On 24 November 2011 13:35, Dean Michael Berris <mikhailberis@gmail.com> wrote:
Why are people blaming the libraries for horrible error messages when it's compilers that spew these error messages?
They just don't want to use a library which has horrible error messages. It doesn't matter if it's the library's fault or the compiler's fault. Blame has nothing to do with it.
But it's not the library emitting these horrible error messages. So why is this tied with the library and not the compiler? Why aren't people saying "I don't want to use this compiler because it's crappy at generating error messages for *any* code"?
I'm astounded at this line of thought, Dean. When choosing between two courses of action with reasonably equivalent outcomes, surely the one with the least cost is better. Thus, if using a library that is prone to triggering horrible error messages when misused is compared against using another library that does not trigger such error messages when misused, the choice is clear.

You've complained that the error messages only occur when the library user makes a mistake. When challenged with the notion that errors occur as a normal part of coding, you seemed incredulous. (I rarely write mistake-free code on the first try, particularly with libraries like Spirit, despite decades of experience.) A developer writes some code, compiles it, inspects the error messages, tries to determine the cause, changes the code, and repeats. Many libraries trigger reasonably easy-to-grok error messages. Others, notably those using TMP, trigger difficult-to-grok error messages. That's a fact. It doesn't matter whether those messages could be better by virtue of library or compiler changes. Thus, the middle steps are much easier with some libraries than others.

If, given a particular library and compiler, a developer encounters troublesome error messages and has little time, inclination, or expectation of value to learn enough to deal with them, that developer will select another solution that makes arriving at working code easier. This should be obvious. For many, then, avoiding a TMP-based program is a desirable choice. For them, a library like Local is a better means to the desired end. (This has nothing to do with whether Local should be accepted.) Both kinds of libraries can coexist in Boost, although those in each group should compare and contrast with the other choices to help users decide, once multiple choices exist.
_____
Rob Stewart
robert.stewart@sig.com
Software Engineer using std::disclaimer;
Dev Tools & Components
Susquehanna International Group, LLP  http://www.sig.com

IMPORTANT: The information contained in this email and/or its attachments is confidential. If you are not the intended recipient, please notify the sender immediately by reply and immediately delete this message and all its attachments. Any review, use, reproduction, disclosure or dissemination of this message or any attachment by an unintended recipient is strictly prohibited. Neither this message nor any attachment is intended as or should be construed as an offer, solicitation or recommendation to buy or sell any security or other financial instrument. Neither the sender, his or her employer nor any of their respective affiliates makes any warranties as to the completeness or accuracy of any of the information contained herein or that this message or any of its attachments is free of viruses.

On 1 December 2011 12:26, Stewart, Robert <Robert.Stewart@sig.com> wrote:
I'm astounded at this line of thought, Dean. When choosing between two courses of action with reasonably equivalent outcomes, surely the one with the lower cost is better. Thus, if using a library that is prone to triggering horrible error messages when misused is compared against using another library that does not trigger such error messages when misused, the choice is clear.
But the way people avoid the hard to read error messages is to move the error detection to runtime, where it is easy to control the error messages. Unfortunately, error detection at runtime is usually much less thorough (as it is both hard to do and people don't want to incur the cost in correct code) and hence much more expensive.

The choice is between hard to read errors found early vs. (typically) much less thorough error checking found later. To me, the choice is clear, in that I want to find as many errors as possible at compile time, even if those error messages are painful to understand and compile times are noticeably increased.

--
Nevin ":-)" Liber <mailto:nevin@eviloverlord.com> (847) 691-1404

I'm astounded at this line of thought, Dean. When choosing between two courses of action with reasonably equivalent outcomes, surely the one with the lower cost is better. Thus, if using a library that is prone to triggering horrible error messages when misused is compared against using another library that does not trigger such error messages when misused, the choice is clear.
But the way people avoid the hard to read error messages is to move the error detection to runtime, where it is easy to control the error messages.
But that's not the case with Boost.Local. Boost.Local catches errors at compile time; the errors just look much better than with Phoenix because they come from actual C++ statements, not expression trees that approximate them.

Regards,
Nate

Nevin Liber wrote:
On 1 December 2011 12:26, Stewart, Robert <Robert.Stewart@sig.com> wrote:
I'm astounded at this line of thought, Dean. When choosing between two courses of action with reasonably equivalent outcomes, surely the one with the lower cost is better. Thus, if using a library that is prone to triggering horrible error messages when misused is compared against using another library that does not trigger such error messages when misused, the choice is clear.
But the way people avoid the hard to read error messages is to move the error detection to runtime, where it is easy to control the error messages. Unfortunately, error detection at runtime is usually much less thorough (as it is both hard to do and people don't want to incur the cost in correct code) and hence much more expensive. The choice is between hard to read errors found early vs. (typically) much less thorough error checking found later. To me, the choice is clear, in that I want to find as many errors as possible at compile time, even if those error messages are painful to understand and compile times are noticeably increased.
That's a false dichotomy. I'm not suggesting there are no cases such as you describe, but it isn't a necessary difference: type safety can be managed in multiple ways. Furthermore, the choice may well be between compile-time versus runtime execution of logic, in which case performance may be a sufficient driver to incur the troublesome error messages.

_____
Rob Stewart
robert.stewart@sig.com
Software Engineer using std::disclaimer;
Dev Tools & Components
Susquehanna International Group, LLP  http://www.sig.com

On 24 Nov 2011, at 12:48, Dean Michael Berris wrote:
On Thu, Nov 24, 2011 at 11:43 PM, Gregory Crosswhite <gcrosswhite@gmail.com> wrote:
On Nov 24, 2011, at 10:17 PM, Dean Michael Berris wrote:
Seriously, I don't know why people are blaming the libraries when the compilers are the ones generating these error messages. Am I missing something here?
Yes. What you are missing is that people just looking to employ a library to make their lives easier don't care *whose* fault it is when the library fails to make their lives easier; they just stop using the library. You can say that it is the fault of the compiler rather than the fault of the library, but that doesn't change the experience of using the library.
Well, then the person who doesn't put the work in to learn to use the library *and* learn to read the compiler messages properly is *the* problem. I seriously think this is a case of PEBKAC. No amount of library engineering will solve that kind of problem IMO.
Really? You think it is completely the user's fault that they get upset by regularly seeing 100k+ error messages in spirit, and choose to use another system like lex/yacc instead? I really hope not, as the size of the compile errors is the only reason I have stopped using libraries like spirit in projects I work on.

I view libraries like spirit as inspirational. They show the direction that I think C++ should take, and the power the language has. I really hope any future extensions (like concepts) are designed to help make libraries like spirit both more powerful and more user friendly. However, at the moment, while they do the best that they can, it is unfortunately often not good enough in our current compilers.

Chris

On Fri, Nov 25, 2011 at 1:00 AM, Christopher Jefferson <chris@bubblescope.net> wrote:
On 24 Nov 2011, at 12:48, Dean Michael Berris wrote:
Well, then the person who doesn't put the work in to learn to use the library *and* learn to read the compiler messages properly is *the* problem. I seriously think this is a case of PEBKAC. No amount of library engineering will solve that kind of problem IMO.
Really? You think it is completely the user's fault that they get upset by regularly seeing 100k+ error messages in spirit, and choose to use another system like lex/yacc instead? I really hope not, as the size of the compile errors is the only reason I have stopped using libraries like spirit in projects I work on.
There are two components to this problem of error messages: the code and the compiler. Let's get something out of the way: if your code is correct, then you don't see error messages. This is a given. Now the gap comes between the solution in the user's head and the solution that becomes code -- until you get the semantics of the library you're using, you can't even begin to understand how to use it *properly*. If the compiler you're using were able to point you to the problem in a manner meant for humans to understand, then you *wouldn't* complain about the error messages generated because the code you wrote is wrong. The compiler used to be something that just translated code from one form to another, and unfortunately the human aspect of this equation got ignored.

So there are two solutions that directly address the error message problem:

1) Learn how your compiler identifies errors and adapt to these conventions. Seriously, it's common sense: learn to use the tool and you'll be more effective with it. This applies to any tool.

2) Use a compiler that displays better error messages until a point when it stops complaining because your code has become acceptable.

A corollary to 2 is that you should file a bug in your compiler vendor's bug tracking or support system to fix their horrible error messages. Another corollary is that if the compiler is open source, try to actually do something about it.

So... my point is that even with Lex/Yacc or Antlr or <insert parser generating solution here>, you *still* have to learn to use the tool and you *still* have to deal with the error messages one way or another. What you can change is not only the library you use but also the compiler you use.
I view libraries like spirit as inspirational. They show the direction that I think C++ should take, and the power the language has. I really hope any future extensions (like concepts) are designed to help make libraries like spirit both more powerful and more user friendly. However, at the moment while they do the best that they can, it is unfortunately often not good enough in our current compilers.
I believe you have it backwards: the current compilers are not good enough to handle perfectly valid C++ in a manner that's helpful to the humans who actually write and debug code that fails to compile. I believe if the same number of people who complain about Spirit complained to the GNU foundation, Intel, Microsoft, Apple, EDG, Comeau, <insert compiler vendor here> and said "your error messages suck", we'd be advancing the state of the art for *all* of C++.

Anyway, I'm getting tired of this -- I'm crawling back under the rock where I've been hiding and will let you guys figure this one out.

Cheers
--
Dean Michael Berris
http://goo.gl/CKCJX

On 11/24/2011 8:08 PM, Brent Spillner wrote:
On 24 Nov 2011 09:11:19 Joel de Guzman wrote:
Man! Of course you should use the examples in 1.45. Those examples are for 1.48 and use new features implemented in 1.48! Sheesh!
I fully understand that. The point is that the Phoenix interface is evolving much more quickly than that of probably any other Boost library. Even apart from the fact that there already have been two bottom-up rewrites, there are enough significant changes between point releases that even small example programs are tied to a particular version of the library. Why is the optimal way to write a small calculator example today so different from the optimal way to write it six months ago? I'm not disputing that you're making forward progress, just highlighting this as another barrier to entry that Local likely won't have.
The funny thing is that the new feature came from Spirit, not Phoenix. The Phoenix API has been stable for many years now. Triple sheesh for ya! :-)
Double Sheesh! If you've spent even a few seconds looking into the error (beyond fault-finding), you will see that a simple textual search of "error" will reveal this:
Yes, it's possible to wade through the error messages and figure out what went wrong. And Qi is clearly trying to help out with traceable assertions; that may well be the best we can do.
Wade? Use search. It's a textual search for "error". Anyway, I'm really outta here. Enough strawman arguments.

Regards,
--
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com
participants (18)
- Brent Spillner
- Christopher Jefferson
- Daniel James
- Dave Abrahams
- Dean Michael Berris
- Gregory Crosswhite
- Hartmut Kaiser
- Joel de Guzman
- Joel Falcou
- Lorenzo Caminiti
- Matthias Schabel
- Maxim Yanchenko
- Nathan Ridge
- Nevin Liber
- Robert Ramey
- Roland Bock
- Stewart, Robert
- Ábel Sinkovics