Re: [Boost-users] Wave: how to recover after errors?
Max Motovilov wrote:
Yet something fishy happens with the state I can't quite put my finger on. Here's my example input:
=============
#define Foo(x) bar##x
Foo(foo1)
#define Foo(x) x##bar
#define Bar(x) x##foo
Foo(foo2)
Bar(bar1)
=============
[snip]
Yes, I'll have to look into this. My current implementation was a first shot and I expected to run into problems. Wave is not written for error recovery, so I'll have to invest some time to insert reliable synchronisation points.
That's fixed now in CVS. Regards Hartmut
That's fixed now in CVS.
Hartmut,

Verified with the CVS code: treatment of ignored macro re-definitions is now consistent (ignoring the exception results in a skipped #define directive, and the old macro definition stays in place). A complex file that used to be a problem before now seems to go through fine. Now the only thing missing is being able to intercept and recover the new definition :)

Thanks a lot! ...Max...
Max Motovilov wrote:
Verified with the CVS code: treatment of ignored macro re-definitions is now consistent (ignoring the exception results in a skipped #define directive, and the old macro definition stays in place). A complex file that used to be a problem before now seems to go through fine. Now the only thing missing is being able to intercept and recover the new definition :)
What exactly do you expect in this context? Regards Hartmut
What exactly do you expect in this context?
Well, like I said, it is typical for a C/C++ preprocessor to allow macro re-definitions in violation of the standard, and there's likely quite a bit of [sloppy] code that won't be processed correctly otherwise. I am not 100% sure what is the best paradigm for handling this situation in the context of Wave.

It would, of course, be nice to have a generic mechanism for rolling back to the pre-error state of the lexer/preprocessor system while providing sufficient information to enact recovery (i.e. providing the name of a failed macro so that user's code can call undefine_macro() and re-process the failed definition). I imagine that may be quite expensive, however, as it might entail having to save the state at the beginning of any operation that may consume tokens from the input without returning them to the user -- potentially any operation!

Perhaps a cheaper way is adding another hook to the context policy? For example,

template <typename X> bool pre_exception( const X& err );

that will have a chance to recover and return true, or return false and allow the exception to proceed. Again, I am not sure just how easy it would be to incorporate such a policy hook into the existing code. In simple cases, the sequence of:

action;
if( test ) throw some_exception( parameters );

could be rewritten as

do { action; } while( !test && checked_throw( some_exception( parameters ) ) );

where

template <typename X>
bool checked_throw( const X& x )
{
    if( !pre_exception(x) ) throw x;
    return false;
}

but it requires "action" to leave the system state unchanged whenever "test" is true at the end of it. This may be easy to achieve in some cases and hard in others; aside from that, enough information has to be delivered to pre_exception() to undertake sensible actions, which in all likelihood will require creating a separate exception class for each type of error, or at least many of them.

After all, the easiest course may be to provide a default error recovery option for most errors. Which may not be all that bad: for macro re-definition, use the new definition; for #undef of an unknown macro -- ignore, etc. I guess your is_recoverable() mechanics is good enough for that already, as long as the default recovery actions are reasonable.

Wave is a fairly specialized library, and the natural expectation is that it would be used by either C/C++ compilers or other applications working with C or C++ code in a way similar to a compiler (in my case -- a code instrumentation tool that attempts partial parsing of class definitions). Thus any such code should benefit from handling preprocessor errors in the same exact way a typical C++ compiler would handle them.

I figured you were asking for my 2c, so now you got them in spades ;-)

...Max...
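[Pulled together, a minimal compilable sketch of the proposal above might look like the following. All names are illustrative rather than part of Wave; the error flag stands in for "test", and checked_throw() returns true after a successful recovery so that the do/while retries the action -- the polarity Max corrects in a follow-up at the end of the thread.]

#include <iostream>
#include <stdexcept>
#include <string>

// Illustrative exception type carrying enough context to recover.
struct redefinition_error : std::runtime_error {
    std::string macro;
    explicit redefinition_error( const std::string& name )
        : std::runtime_error( "macro redefined: " + name ), macro( name ) {}
    ~redefinition_error() throw() {}
};

// User-supplied policy hook: true = recovered, false = let the throw proceed.
template <typename X>
bool pre_exception( const X& err )
{
    std::cerr << "recovering from: " << err.what() << "\n";
    return true;
}

// Consults the hook before actually throwing.
template <typename X>
bool checked_throw( const X& x )
{
    if( !pre_exception(x) ) throw x;
    return true;    // recovered -- tell the retry loop to run again
}

int main()
{
    int attempts = 0;
    bool error;
    do {
        ++attempts;                  // the "action"
        error = ( attempts < 2 );    // the "test": pretend the first try fails
    } while( error && checked_throw( redefinition_error( "Foo" ) ) );
    std::cout << "succeeded after " << attempts << " attempts\n";
}

[As written, the retry relies on the action being repeatable, which is exactly the state-preservation requirement discussed in the replies below.]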
Max Motovilov wrote:

First of all, thanks for your suggestions!
What exactly do you expect in this context?
Well, like I said, it is typical for a C/C++ preprocessor to allow macro re-definitions in violation of the standard, and there's likely quite a bit of [sloppy] code that won't be processed correctly otherwise. I am not 100% sure what is the best paradigm for handling this situation in the context of Wave.
One of the reasons Wave was written was exactly this sloppiness of existing preprocessors, which makes it nearly impossible to rely on the Standard and be portable at the same time.
It would, of course, be nice to have a generic mechanism for rolling back to the pre-error state of the lexer/preprocessor system while providing sufficient information to enact recovery (i.e. providing the name of a failed macro so that user's code can call undefine_macro() and re-process the failed definition). I imagine that may be quite expensive, however, as it might entail having to save the state at the beginning of any operation that may consume tokens from the input without returning them to the user -- potentially any operation!
Yes, this involves deep copying of the whole context object, which may turn out to be a costly operation -- not good as a 'preventive' error handling scheme. Error recovery itself may be costly, even very costly, because in case of an error time doesn't matter so much anymore, but this should not have a heavy impact on normal operation.
Perhaps a cheaper way is adding another hook to the context policy? For example,
template<typename X> bool pre_exception( const X& err );
that will have a chance to recover and return true or return false and allow the exception to proceed. Again, I am not sure just how easy it would be to incorporate such a policy hook into the existing code. In simple cases, the sequence of:
action;
if( test ) throw some_exception( parameters );
could be rewritten as
do { action; } while( !test && checked_throw( some_exception( parameters ) ) );
where
template <typename X>
bool checked_throw( const X& x )
{
    if( !pre_exception(x) ) throw x;
    return false;
}
but it requires "action" to leave the system state unchanged whenever "test" is true at the end of it.
This solution seems cheaper at first look, but it may turn out to be very costly afterwards, because Wave has to provide the user-supplied function with all the information it may need to decide what to do.
This may be easy to achieve in some cases and hard in others;
Yeah, that's not always possible.
aside from that, enough information has to be delivered to pre_exception() to undertake sensible actions which in all likelihood will require creating a separate exception class for each type of error or at least many of them.
Wouldn't a lot of different exception types make the overall error handling a lot more difficult for the average user?
After all, the easiest course may be to provide a default error recovery option for most errors.
Default error recovery is bad for two reasons:

- first of all, it leads towards more weakness of Wave with regard to the Standard, as it makes it more forgiving. As was pointed out already, Wave's main focus is Standards compliance, and it should report as much error information as possible to the user.
- second, what seems to be a good default in one context may be a very bad idea in others; i.e. undefining the first definition of a macro in case of a redefinition and using the second definition provided may be useful for you, while others may want to keep the first definition and disregard the second one.

But I agree with you that Wave as a library should give the user the possibility to recover from errors in a way that is appropriate to him/her.
Which may not be all that bad: for macro re-definition, use the new definition; for #undef of an unknown macro -- ignore, etc. I guess your is_recoverable() mechanics is good enough for that already, as long as the default recovery actions are reasonable.
The question is: what is reasonable?
Wave is a fairly specialized library and natural expectation is that it would be used by either C/C++ compilers or other applications working with C or C++ code in a way similar to a compiler (in my case -- a code instrumentation tool that attempts partial parsing of class definitions).
AFAIU, compilers normally do not recover from errors in the sense that they try to correct them. And that's for a good reason: compilers are normally not able to read the programmer's mind, so they don't have enough information to do the right thing :-P Compilers simply report errors and try to continue from that point on, leaving the required corrections to the author of the source code.
Thus any such code should benefit from handling preprocessor errors in the same exact way a typical C++ compiler would handle them.
That's correct, but it normally doesn't involve any default error correction as you suggested above. Overall, I'm still not sure how to solve all these issues, and I guess we will have to take smaller steps first to get a feeling for how to design a generic error recovery scheme suitable for most of the needs a user of Wave might have. Thanks again! Regards Hartmut
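[For reference, the consumer-side shape of what is being discussed: iterate the context and, on a recoverable error, report it and continue. The catch logic follows the is_recoverable() mechanics mentioned above; the typedefs and setup are schematic, loosely patterned on the Wave driver sample rather than copied from it, so treat the exact names as assumptions.]

#include <boost/wave.hpp>
#include <boost/wave/cpplexer/cpp_lex_token.hpp>
#include <boost/wave/cpplexer/cpp_lex_iterator.hpp>
#include <iostream>
#include <string>

typedef boost::wave::cpplexer::lex_token<> token_type;
typedef boost::wave::cpplexer::lex_iterator<token_type> lex_iterator_type;
typedef boost::wave::context<std::string::iterator, lex_iterator_type>
    context_type;

void preprocess( std::string& input, const char* filename )
{
    context_type ctx( input.begin(), input.end(), filename );
    context_type::iterator_type it = ctx.begin(), end = ctx.end();
    while( it != end ) {
        try {
            std::cout << (*it).get_value();
            ++it;    // this is where preprocess errors surface
        }
        catch( boost::wave::cpp_exception const& e ) {
            if( !boost::wave::is_recoverable( e ) )
                throw;    // fatal: give up
            // recoverable: the offending directive was skipped, keep going
            std::cerr << e.file_name() << "(" << e.line_no() << "): "
                      << e.description() << std::endl;
        }
    }
}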
This solution seems cheaper at first look, but it may turn out to be very costly afterwards, because Wave has to provide the user-supplied function with all the information it may need to decide what to do.
That's what I thought -- ultimately, the recovery may require access to (and, what's even worse, understanding of, on the part of Wave's user) large parts of the Wave context. There may be some compromise point though -- to make the whole mechanism more useful, if not perfectly generic.
Wouldn't a lot of different exception types make the overall error handling a lot more difficult for the average user?
I tend to disagree with the idea of an "average" user ever using Wave for two reasons: first, writing any kind of C++-source-code-processing application is a fairly advanced activity, not undertaken lightly; and second, just setting up a "hello world" app with Wave is already nontrivial, especially if one wants his compile times to be shorter than the respective lunch break. Of course it is possible to start with one of your boilerplates but that's not generally a good approach to mastery of a tool (as legions of "MFC" programmers have so profoundly demonstrated! :-) ). It took me a long evening that rolled over into a medium-sized weekend session to get the Wave driver-like project to work from scratch; my usual ramp-up time on a new Boost library (Spirit EXCEPTED!!!) is half an hour. I guess what I am trying to say is adding a few more class names to Wave shouldn't change the complexity side of the equation too much.
Default error recovery is bad for two reasons:

- first of all, it leads towards more weakness of Wave with regard to the Standard, as it makes it more forgiving. As was pointed out already, Wave's main focus is Standards compliance, and it should report as much error information as possible to the user.
- second, what seems to be a good default in one context may be a very bad idea in others; i.e. undefining the first definition of a macro in case of a redefinition and using the second definition provided may be useful for you, while others may want to keep the first definition and disregard the second one.
I agree that "default error recovery turned on by default" is a bad idea. However, a "default error recovery policy" -- default in a sense that Wave implements it itself requiring from the user only to turn it on -- may be OK, wherever such policy is indeed default or well accepted in existing C/C++ preprocessors.
The question is: what is reasonable?
At risk of repeating myself ;-) I'll say -- whatever most C/C++ preprocessors do by default or can be coaxed into doing by command-line options is probably reasonable. While you state Wave's primary purpose as being a benchmark for standard-conformal preprocessor functionality, I still suspect that most practical uses will require it to work with existing code. And if some bad practices prevail in the existing code, the tool maker often has to provide an option of supporting them in some way, just to make his tool more useful. Note how Micro$oft still preserves an option for non-conformant visibility of loop control variable names (even though they finally changed the default to OFF in 8.0).
AFAIU compilers normally do not recover from errors in the sense that they try to correct them.
That's only 99% true, I think. There's a degree of tolerance present everywhere, which usually shows up in discouraging, but not outright prohibiting, certain practices either at the compiler level or even at the language definition level (like support for "implicit int" names, which is prohibited by the C++ standard, or using base class names without an access tag, which is still allowed even though its meaning switched from public to private at a certain point in time). I'd say that the ability to re-define macro names falls squarely within that 1% -- from the times when I was actively working concurrently with a large number of C++ compilers on various platforms, I can't remember a single one that would fully prohibit the practice or produce different results. There may not be all that many of those "accepted errors" at the preprocessor level, but being able to support them appears to be a clear plus...

Oh, and I don't think I ever remembered to thank you for coming up with Wave in the first place! True, it is possible to use an available C++ preprocessor on the front end of a C++ code-analyzing tool, but having a _library_ to do it, along with the lexical analysis, is a whole lot less messy -- especially outside of the Unix environment (read: in Win*), where having to string together multiple executables into a pipe may bring along some undesirable externalities.

Regards,
...Max...
-----Original Message----- From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Max Motovilov
The question is: what is reasonable?
At risk of repeating myself ;-) I'll say -- whatever most C/C++ preprocessors do by default or can be coaxed into doing by command-line options is probably reasonable. While you state Wave's primary purpose as being a benchmark for standard-conformal preprocessor functionality, I still suspect that most practical uses will require it to work with existing code.
The solution is to fix existing code, not break the preprocessor.
And if some bad practices prevail in the existing code, the tool maker often has to provide an option of supporting them in some way, just to make his tool more useful.
If bad practices in the form of undefined or ill-formed code exist in some code, then the implementation should reject that code. There is nothing good about preserving defects--even if they keep people from having to update code. All it does is propagate bad code forever.
Note how Micro$oft still preserves an option for non-conformant visibility of loop control variable names (even though they finally changed the default to OFF in 8.0).
...and the option shouldn't even exist. If code will not build on a new implementation, fix the code or compile it on the old implementation.
AFAIU compilers normally do not recover from errors in the sense that they try to correct them.
public to private at a certain point in time). I'd say that the ability to re-define macro names falls squarely within that
You can legally redefine macro names:

#undef MACRO
#define MACRO ...
1% -- from the times when I was actively working concurrently with a large number of C++ compilers on various platforms, I can't remember a single one that would fully prohibit the practice or produce different results. There may not be all that many of those "accepted errors" at the preprocessor level, but being able to support them appears to be a clear plus...
IMO, it is a clear negative. It is in the users' best interest, and the interest of the community as a whole, for implementations to reject any translation unit that contains code that either introduces undefined behavior or is ill-formed. It is not always reasonable to diagnose the former, but it is always reasonable to diagnose the latter. Anything less than that encourages writing bad code. In nearly every case, the fixes to the code are simple. The fixes required are numerous, but they are one-time fixes, and you don't even have to search for them--the implementation should point them out to you. Regards, Paul Mensonides
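[To make the "simple fix" concrete: the standard only permits redefining a macro with an identical replacement list, so the usual one-line repair for a differing redefinition is the intervening #undef shown in the snippet above. The macro name here is purely illustrative.]

// Ill-formed: redefinition with a different replacement list.
#define BUFSIZE 512
// #define BUFSIZE 1024    // a conforming preprocessor must diagnose this

// The one-line fix:
#undef  BUFSIZE             // remove the old definition first
#define BUFSIZE 1024        // now a perfectly legal (re)definition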
Paul,

I've argued both sides of this issue to death in the past, so I'm certainly not going to do it again. Suffice it to say that if you cannot fix the defective code (typically, because it is part of the codebase you are not expected to modify -- for example, part of the library headers shipped with your compiler, or a 3rd party library you have to use), and you cannot direct the tool to work around it, then you have to throw away one or the other. The real world choice is often in favor of the bad code, not the good tool. I am not saying that it's right, just that it's the reality.

Note that Boost libraries (and their developers, which, I understand, means you too :) ) go to great lengths to preserve compatibility with as many compilers as possible, despite sometimes having to work around glaring bugs and/or incompatibilities with the standard. Otherwise the number of Boost users would have been a lot smaller, and irrelevancy is a bigger risk than indirect support of bad practices. This is obviously a matter of balance, but I imagine that the heaviest weight on the opposite side of the scales comes from the expense of supporting the workarounds, not from the fact that those workarounds somehow encourage people to use broken compilers.

Programming, after all, is not an art unto itself; it is a process of building software for a specific purpose, and with a whole lot of constraints (budget, schedule, learning curve, irrational preferences of the management -- you name it...). The fewer constraints are violated by a specific tool, the more likely it is to be used. That's all I'm saying...

Regards,
...Max...
-----Original Message----- From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Max Motovilov
I've argued both sides of this issue to death in the past, so I'm certainly not going to do it again. Suffice it to say that if you cannot fix the defective code (typically, because it is part of the codebase you are not expected to modify -- for example, part of the library headers shipped with your compiler, or a 3rd party library you have to use), and you cannot direct the tool to work around it, then you have to throw away one or the other. The real world choice is often in favor of the bad code, not the good tool. I am not saying that it's right, just that it's the reality.
I'm well aware of the status quo. I'm also aware that when users can easily take backdoor outs that "solve the problem", the problem never actually gets solved. As you say, it is a matter of balance, but in this case, we are talking about a preprocessor whose primary goal is to be conformant to the standard.
Note that Boost libraries (and their developers, which, I understand, means you too :) ) go to great lengths to preserve compatibility with as many compilers as possible, despite sometimes having to work around glaring bugs and/or incompatibilities with the standard. Otherwise the number of Boost users would have been a lot smaller, and irrelevancy is a bigger risk than indirect support of bad practices. This is obviously a matter of balance, but I imagine that the heaviest weight on the opposite side of the scales comes from the expense of supporting the workarounds, not from the fact that those workarounds somehow encourage people to use broken compilers.
It isn't just users. Workarounds in library code encourage implementors not to fix their compilers, and thus the situation propagates. As always, other things become priorities--like changing the language however they see fit. Many compilers like to be able to say they support the Boost libraries, but, in reality, it is the Boost libraries supporting them. This limits what Boost can do significantly. I'm fully aware that it takes a broad user base to gain the clout that Boost currently enjoys. However, it would be nice to have a clean Boost library in parallel to the hacked to death Boost library (which is what we have now), and challenge implementors to support that.

Back to the concrete case of a preprocessor--more specifically, a preprocessing library. There are many preprocessors out there--full of bugs and permissiveness. The fundamental purpose of Wave is to be a strictly conformant preprocessor. What you ask for violates a fundamental principle of Wave's existence. If you want a preprocessor that emulates some other preprocessor--just use that other preprocessor. Having to retokenize the output is insignificant compared to intentionally breaking another tool. As a library author, or a tool maker, for that matter, I'd rather have fewer, but higher-caliber, clients than many clients that don't care about ideals like Standard C++.
Programming, after all, is not an art unto itself; it is a process of building software for a specific purpose, and with a whole lot of constraints (budget, schedule, learning curve, irrational preferences of the management -- you name it...).
It may not be an art unto itself, but it is an art nevertheless. I don't care to support any programmer or organization whose goal is only getting the job done instead of getting the job done right. It reminds me of some industries that intentionally make inferior products which cost the same to make as their better products, just so they can sell them for less and justify the prices of their superior products. I don't care to support that mentality.
The fewer constraints are violated by a specific tool, the more likely it is to be used. That's all I'm saying...
I understand your point of view; I just don't agree that this is a case where it is absolutely necessary. I also don't think that anything will ever change unless people draw a line in the sand. Over the years, implementors should have broken code incrementally, rather than intentionally preserve broken code. There should be no option in VC++ to change the 'for' loop variable scope, and they've known about the correct 'for' loop scope for many years. It isn't even a transition path--it will probably be around forever. How much more broken code has been written in that time because VC++ has allowed it?

Say, for example, that I was implementing some library, and this library was working only because of a compiler's (or a few compilers') permissiveness. (Normally, I'd just run some tool on it that is strict to make sure it is right, but such tools don't exist because they all emulate the permissiveness of said compilers.) So, the code doesn't get fixed--maybe I don't even know it is wrong. Users don't have a serious problem because they just switch on (assuming it isn't the default) the allowance in their tools, so I never hear about it as a significant problem. Now some other library author produces a different library, but my library is popular, so this library has to be compatible with the set of allowances that my library requires. It propagates endlessly. (Raise your hand if you like Microsoft's min/max macros or the utter pile of crap that constitutes the Windows headers.) Add another library and another library, and eventually they all become crap because they are so burdened with legacy compatibility. They continue to function, but they don't improve.

There should only be C++ (not Microsoft C++, not Borland C++, not GNU C++). Any code that requires allowances by a compiler is not C++, period. Of course, compilers aren't going to be perfect, but there is a difference between intentional permissiveness and bugs. For the sake of portability, we will always have to work around bugs, but we shouldn't have to work around permissiveness, nor propagate it. Regards, Paul Mensonides
"Paul Mensonides"
However, it would be nice to have a clean Boost library in parallel to the hacked to death Boost library (which is what we have now), and challenge implementors to support that.
Great idea! Why don't we add a simple way to turn off all BOOST_WORKAROUNDs? Then a scrappy group of interested testers can start publishing no-workarounds test results. -- Dave Abrahams Boost Consulting www.boost-consulting.com
David Abrahams wrote:
"Paul Mensonides"
writes: However, it would be nice to have a clean Boost library in parallel to the hacked to death Boost library (which is what we have now), and challenge implementors to support that.
Great idea! Why don't we add a simple way to turn off all BOOST_WORKAROUNDs?
I think there already is: #define BOOST_STRICT_CONFIG See boost/detail/workaround.hpp -- Daniel Wallin
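[A minimal usage sketch, assuming the semantics of workaround.hpp at the time: with BOOST_STRICT_CONFIG defined before any Boost header, BOOST_WORKAROUND(symbol, test) expands to 0, so every workaround branch is compiled out. The BOOST_MSVC test below is just a typical example.]

#define BOOST_STRICT_CONFIG          // must precede any Boost header
#include <boost/detail/workaround.hpp>

#if BOOST_WORKAROUND(BOOST_MSVC, < 1300)
    // workaround path: never taken under BOOST_STRICT_CONFIG
#else
    // conforming path
#endif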
Daniel Wallin
David Abrahams wrote:
"Paul Mensonides"
writes: However, it would be nice to have a clean Boost library in parallel to the hacked to death Boost library (which is what we have now), and challenge implementors to support that.
Great idea! Why don't we add a simple way to turn off all BOOST_WORKAROUNDs?
I think there already is:
#define BOOST_STRICT_CONFIG
See boost/detail/workaround.hpp
Oh, right, I forgot. So anyone who wants to start an embarrass-the-nonconforming-vendors campaign is free to do so. -- Dave Abrahams Boost Consulting www.boost-consulting.com
-----Original Message----- From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of David Abrahams
I think there already is:
#define BOOST_STRICT_CONFIG
See boost/detail/workaround.hpp
Oh, right, I forgot. So anyone who wants to start an embarrass-the-nonconforming-vendors campaign is free to do so.
Except that I'm not referring to just turning off the explicit workarounds. Nonconforming compilers implicitly change what the interface to libraries might be as well as the implementation. Regards, Paul Mensonides
On 12/10/05, David Abrahams
Oh, right, I forgot. So anyone who wants to start an embarrass-the-nonconforming-vendors campaign is free to do so.
I'm not in favour of starting an embarrass-the-nonconforming-vendors campaign, but I'm certainly curious about which tests would pass on which compilers using no workarounds. Although I'm not sure it would be nice to make these tests very public. After all, I doubt we want to fight compiler vendors; it wouldn't make any sense.
-- Dave Abrahams Boost Consulting www.boost-consulting.com
-- Felipe Magno de Almeida
Paul, I hear you :) The frustration surely breaks through. But, ugly workarounds or not, being _able_ to use tools such as Boost libraries is a very big deal. And, to a large degree, you are helping drag the [kicking and screaming] compiler vendors into the realm of standard C++ BECAUSE your libraries can be immediately used with MOST of the compilers out there. A shaming campaign is a pretty nice twist, I agree -- but just cutting off the non-conforming compilers would mean marginalization. You may be happy with a few "power clients", I don't know, but there's gonna be a whole lot of unhappy people left in the cold... though _I_ personally can probably get by with M$VC8 :) ...Max...
-----Original Message----- From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Max Motovilov
Paul,
I hear you :) The frustration surely breaks through. But, ugly workarounds or not, being _able_ to use tools such as Boost libraries is a very big deal.
I agree.
And, to a large degree, you are helping drag the [kicking and screaming] compiler vendors into the realm of standard C++ BECAUSE your libraries can be immediately used with MOST of the compilers out there.
To some extent, yes. Let me be clear. Despite how much I hate workarounds, I recognize that they are necessary in production environments. However, what we are talking about here is a little different than the usual. We aren't talking about source code workarounds for an implementation; we're talking about implementation workarounds for source code. Compilers do that for compatibility with legacy code, usually code that worked with previous versions of those same compilers. If compilers had not done that, and had broken code incrementally, it would have caused an awful lot of work to fix code. However, it would have removed a great deal more work later (still ongoing into the foreseeable future) for compiler vendors, library authors, and C++ programmers in general. The payoff of permissiveness is short-term and insignificant compared to the long-term cost. Maybe we just disagree about how significant that long-term cost actually is.
A shaming campaign is a pretty nice twist, I agree -- but just cutting off the non-conforming compilers would mean marginalization.
A shaming campaign is not the intent. I don't want to shame vendors. If anything, I want to encourage vendors. The intent is to see how much farther we can go with standard C++ than we currently do on account of implementation deficiencies. There are techniques and idioms that are barely usable that should be generally usable. That "barely usable" status is a significant barrier not just to the use of those techniques, but to the discovery and development of those that might be built on top of them. As a concrete example, we have yet to see the full effect of 'enable_if'. Workarounds often manifest themselves implicitly as interface changes, as well as waste the creative time of authors in both interface and implementation. As I said above, for production code, workarounds are often necessary. However, the necessity of workarounds need not come at the cost of idealism--which sometimes requires development of parallel or even diverging branches (which is something that we don't have at Boost). But again, the above is about source code workarounds for implementations, not vice versa. I am completely against any permissiveness in compiler (or preprocessor) implementations, because that is what propagates the problem at a fundamental level.
You may be happy with a few "power clients", I don't know, but there's gonna be a whole lot of unhappy people left in the cold... though _I_ personally can probably get by with M$VC8 :)
Obviously the more clients the better--unless there is another significant cost that comes with it. I don't know if it's still the same situation now, but a number of years ago, VB had an awfully large user base. If C++ had become more VB-like, it may have come out higher in number of users than it has now. However, the cost would have been significant. (Of course, now we get C++ vendors making C++ more VB-like anyway.) The point is that popularity is not the only measure of success. In fact, popularity is often an indication of lower quality (e.g. MFC and pop music). Obviously, that is not a generalization that can be blindly applied, but it is true often enough that it should cause legitimate skepticism.

Again, I'm not saying that we should stop applying workarounds--which would be foolish in the extreme. I am saying that we shouldn't start applying workarounds in a preprocessor or compiler implementation context, nor start the cycle of propagated compatibility for misfeatures or bugs in previous versions. I'm also saying (and this is not related to the primary point) that it would be nice to have a distinct, clean version of Boost (not an interface-to-interface mirror image) that, though idealistic, would serve as a breeding ground for the cutting edge and something to aspire to for vendors.

I'm not just talking here, I've done it for the part of Boost that I consider my responsibility. (As usual, my problem is documentation--very important--which I hate writing and apparently suck at anyway.) My first observation based on that experience is that the pp-lib is stagnant. There isn't much I can do to improve it (even with massive workarounds) without much better implementations as the common starting point. A second observation is that I know, within my small area of expertise, how much farther we can go and how much better things can be with better implementations. A third observation is that pristine codebases can still serve as conformance tests for vendors. The original version of Loki was like that also. Once a significant degree of notoriety is achieved, the risk of marginalization is considerably less. Regards, Paul Mensonides
payoff of permissiveness is short-term and insignificant compared to the long-term cost. Maybe we just disagree about how significant that long-term cost actually is.
We probably disagree about the short-term costs, not the long-term ones. The long term costs are an incremental and mounting burden, short-term costs are a barrier to entry. E.g. if the new compiler (VC7.1 for the sake of the argument) does not compile the existing code base, the management keeps everybody stuck with the old one (VC6 in this example) and bye-bye Boost, or most of it anyway. Yes, in principle it would have been better to resolve a few minor things such as incorrect scope of 'for' variables, or whatnot, but it's anyone's guess how it will turn out in every particular case.
that comes with it. I don't know if its still the same situation now, but a number of years ago, VB had an awfully large user base. If C++ had become more VB-like, it may have come out higher in number of users than it has now.
Well, as we both know, the beauty of C++ is that it can be as VB-like as you want. Didn't take me all that long to get all of VB's syntactic simplicity of access to COM and dispatch interfaces (special thanks for PP and enable_if go here...) -- too bad I can't put that library into open source right now... The biggest problem is that users often just don't know what functionality and convenience are available in C++ for the price of asking and C++ is still thought of as a strange monstrosity fit for a select few, and, if it cannot be avoided altogether, it is best "managed" [pun intended] by using design and coding practices dating back ten years or worse. I wonder if the best thing that could possibly happen to C++ at the moment would be a 4th edition of Bjarne's book, based on Boost in the same degree as 3rd was based on the standard library. I know there are good recent books that explore certain corners of modern C++ thinking, but they somehow don't seem to resonate. What little I've seen of college-level C++ curriculum looked quite atrocious. And in terms of continued education, there's just not one _single_ book that I could recommend to junior guys and tell them "write like THAT". In my mind, "like THAT" being -- learn to use and actively search for available library code and don't be scared if IT uses advanced paradigms -- YOU don't have to. But then again, even a new Bjarne's book might not cut through all the hype the industry is always awash in :(
vendors making C++ more VB-like anyway.) The point is that popularity is not the only measure of success. In fact, popularity is often an indication of lower quality (e.g. MFC and pop music).
Unfortunately, while popularity may be a poor indicator of quality, it is a very strong indicator of relevance. Almost to the point of being one and the same thing ;)
I've done it for the part of Boost that I consider my responsibility. (As usual, my problem is documentation--very important--which I hate writing and apparently suck at anyway.)
I don't think Boost-PP in particular suffers from deficient documentation. Again, what it needs is a massive how-to, which is best delivered in the form of a [chapter of a] book. On the other hand, Boost-PP is more of a toolmaker's tool than an end-user's tool anyway... but it is an absolute must-have for the toolmaker!
My first observation based on that experience is that the pp-lib is stagnant. There isn't much I can do to improve it (even with massive workarounds) without much better implementations as the common starting point.
I think for starters it would be nice to get it better adopted by other parts of Boost (not a dig at you by any means, but at those who can't be bothered to use it). There's just no excuse for a library to introduce an arbitrary, unconfigurable limit on the number of supported arguments in a user's function, or on the length of a [variable by design] list of template parameters, etc. The only piece of Boost I have to replace with my own version in every new release is boost/tuple/detail/tuple_basic.hpp, which is STILL limited to 10 elements and which I had rewritten using Boost-PP back in 2003... by the way, if anyone DOES want it in Boost after all, I'd be glad to re-submit ;) Sorry, this thread went way off-topic. Mea culpa. ...Max...
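[A sketch of the kind of rewrite being described: with Boost.PP, the arity ceiling becomes one user-configurable macro instead of ten hand-written parameters. The names here are illustrative, not the actual tuple_basic.hpp patch.]

#include <boost/preprocessor/repetition/enum_params_with_a_default.hpp>

#ifndef MY_TUPLE_MAX_ARITY
#define MY_TUPLE_MAX_ARITY 16    // override on the compiler command line
#endif

struct null_type {};             // marks unused parameter slots

// Expands to: template< class T0 = null_type, ..., class T15 = null_type >
template< BOOST_PP_ENUM_PARAMS_WITH_A_DEFAULT(
    MY_TUPLE_MAX_ARITY, class T, null_type ) >
struct my_tuple
{
    // element storage would be generated the same way, e.g. with
    // BOOST_PP_REPEAT over MY_TUPLE_MAX_ARITY
};

// usage: my_tuple<int, double> t;  // remaining slots default to null_type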
On 12/12/05 12:35 AM, "Max Motovilov"
payoff of permissiveness is short-term and insignificant compared to the long-term cost. Maybe we just disagree about how significant that long-term cost actually is.
We probably disagree about the short-term costs, not the long-term ones. The long term costs are an incremental and mounting burden, short-term costs are a barrier to entry. E.g. if the new compiler (VC7.1 for the sake of the argument) does not compile the existing code base, the management keeps everybody stuck with the old one (VC6 in this example) and bye-bye Boost, or most of it anyway. Yes, in principle it would have been better to resolve a few minor things such as incorrect scope of 'for' variables, or whatnot, but it's anyone's guess how it will turn out in every particular case.
I think one of your fundamental assumptions is bad. You're insisting that Wave has its preprocessor bug-for-bug compatible with major compiler(s), specifically MSVC++. However, AFAIK, we are _not_ competing against the actual implementations out there. We are doing something akin to research, not trying to conquer other products. If you want something bug-for-bug compatible with a published[1] preprocessor, then use that preprocessor (or something marketed as a work-alike). Don't try to use Wave then complain that our features aren't competitive. Our goal for Wave is getting a preprocessor/compiler library that is as close to being theoretically perfectly conforming as possible. Any changes to Wave would generally be for gaining standard conformity. Sometimes we optionally include considerations for current practices (e.g. allowing '$' in identifiers). But we don't want to go crazy on this because the time to implement every quirk on every compiler out there would take up our time for actual improvements. By going for perfect conformance, Wave can serve as an inspiration or a check for other compilers out there. I think someone else here said that Wave could also help in researching how far higher level techniques could go. [1] I wanted to say commercial, but that would exclude the F/OSS full-blown compilers out there (like GCC). [SNIP]
I wonder if the best thing that could possibly happen to C++ at the moment would be a 4th edition of Bjarne's book, based on Boost in the same degree as 3rd was based on the standard library. I know there are good recent books that explore certain corners of modern C++ thinking, but they somehow don't seem to resonate. What little I've seen of college-level C++ curriculum looked quite atrocious. And in terms of continued education, there's just not one _single_ book that I could recommend to junior guys and tell them "write like THAT". In my mind, "like THAT" being -- learn to use and actively search for available library code and don't be scared if IT uses advanced paradigms -- YOU don't have to.
Maybe Bjarne could study Wave (in the future) for some inspiration.
But then again, even a new Bjarne's book might not cut through all the hype the industry is always awash in :(
[SNIP]
I think for starters it would be nice to get it better adopted by other parts of Boost (not a dig at you by any means, but at those who can't be bothered to use it). There's just no excuse for a library to introduce an arbitrary, unconfigurable limit on the number of supported arguments in a user's function, or on the length of a [variable by design] list of template parameters, etc. The only piece of Boost I have to replace with my own version in every new release is boost/tuple/detail/tuple_basic.hpp, which is STILL limited to 10 elements and which I had rewritten using Boost-PP back in 2003... by the way, if anyone DOES want it in Boost after all, I'd be glad to re-submit ;)
Someone gave me patches to convert base_from_member to use the Boost.PP, which was also originally stuck at 10 constructor arguments at the most. Maybe you should re-submit a patch for Boost.Tuple. -- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com
I think one of your fundamental assumptions is bad. You're insisting that Wave has its preprocessor bug-for-bug compatible with major compiler(s),
Well, let's be accurate -- I was _asking_ to be able to achieve such compatibility on the user's side, not to change Wave itself into non-compliance.
actual implementations out there. We are doing something akin to research,
So far I was under the impression that you [meaning the Boost community in the larger sense] were creating libraries that are extremely useful practically. Granted, the research part directed towards the future C++ standard is very important, but c'mon, this stuff appears to be used massively. Whether each particular user request is worth satisfying is a whole other matter -- no _commercial_ product satisfies them all, either -- but you really can't say that Boost is not user- and use-driven...
in identifiers). But we don't want to go crazy on this because the time to implement every quirk on every compiler out there would take up our time for actual improvements.
Makes perfect sense. ...Max...
-----Original Message----- From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Max Motovilov
the long-term cost. Maybe we just disagree about how significant that long-term cost actually is.
We probably disagree about the short-term costs, not the long-term ones. The long term costs are an incremental and mounting burden, short-term costs are a barrier to entry.
Yes, but not an unbreakable barrier.
E.g. if the new compiler (VC7.1 for the sake of the argument) does not compile the existing code base, the management keeps everybody stuck with the old one (VC6 in this example)
Which is a situation that can only exist for so long before management is forced--the hardware industry isn't going to stop making new hardware because of this, and nobody is going to update VC6. Also, in most cases the existing code base can be tweaked in minor ways so that it still compiles on the old compiler until it will compile on a new one. In the case of the scope of for-loop variables, it doesn't take that much to update massive codebases. Similarly, it takes even less time to fix erroneous macro redefinitions. If some library code that you aren't legally allowed to modify requires the permissiveness, complain loudly to the authors of that library. If VC7.1, for example, didn't allow the old for-loop behavior, things would change faster than you might think. It wouldn't be just one client complaining to the authors of that third-party library. Even if the library authors refuse, the market will produce alternatives. I realize that it isn't as easy as I'm making it sound, but sometimes sacrifices are necessary. Sometimes more effort must be expended in the short-term to reduce effort in the long-term. The willingness to do so, coupled with idealism and curiosity, is what ultimately drives technology.
and bye-bye Boost, or most of it anyway. Yes, in principle it would have been better to resolve a few minor things such as incorrect scope of 'for' variables, or whatnot, but it's anyone's guess how it will turn out in every particular case.
Any given particular case is unimportant when compared to the general case and the effect on the whole, especially into the future. It is unfortunate that a great deal of management is so short-sighted, and it is unfortunate that some programmers will be "left out in the cold", as you put it. But that is unlikely to be the case in most situations--not management being clueless, but management not being eventually forced. Of course, this is an entirely hypothetical scenario where (e.g.) Microsoft is willing to break client code to do what is right--which isn't going to happen; they have a long and continuing track record of doing what is wrong. In this particular case, Wave, we are in a unique position in having control over an implementation, and we are at a point where we (ultimately, meaning Hartmut) can choose to stand up for what is right or choose to propagate the status quo.
VB-like, it may have come out higher in number of users than it has now.
Well, as we both know, the beauty of C++ is that it can be as VB-like as you want.
Right, but it does take some background work and some self-enforced limitations.
Didn't take me all that long to get all of VB's syntactic simplicity of access to COM and dispatch interfaces (special thanks for PP and enable_if go here...) -- too bad I can't put that library into open source right now...
Interesting. This has nothing to do with anything, but way back I used to use VB, and I was continually knocking my head against the wall trying to do some things. So, I moved to C++, but it wasn't a direct move. Instead, my first projects in C++ were COM components (the manual way, not with ATL) so that I could easily use them from VB. Of course, I eventually recovered from the VB disease. :) Nowadays, I think there are better ways than COM or CORBA for writing reusable components. (And, incidentally, those better ways don't involve making all languages a thin veneer over one runtime.)
The biggest problem is that users often just don't know what functionality and convenience are available in C++ for the price of asking and C++ is still thought of as a strange monstrosity fit for a select few, and, if it cannot be avoided altogether, it is best "managed" [pun intended] by using design and coding practices dating back ten years or worse.
Yes.
I wonder if the best thing that could possibly happen to C++ at the moment would be a 4th edition of Bjarne's book, based on Boost in the same degree as 3rd was based on the standard library. I know there are good recent books that explore certain corners of modern C++ thinking, but they somehow don't seem to resonate. What little I've seen of college-level C++ curriculum looked quite atrocious.
I doubt this will ever change regardless of what books exist. I don't believe this is always true (and it is nothing but my own conjecture, at that), but the reason good mathematics and science professors teach is often so that they can work on their own projects or studies with the facilities provided by a university. The same kind of environment typically doesn't exist in computer science. Nowadays you don't need a school's facilities to work on your own projects. You don't need particle accelerators or electron microscopes. Supercomputers are usually only necessary when your projects aren't about computer science, but about something else that happens to use computer science.
vendors making C++ more VB-like anyway.) The point is that popularity is not the only measure of success. In fact, popularity is often an indication of lower quality (e.g. MFC and pop music).
Unfortunately, while popularity may be a poor indicator of quality, it is a very strong indicator of relevance. Almost to the point of being one and the same thing ;)
Relevance to what, exactly? I don't mean to be snide; I just want a clarification. As I've said before (not just in this conversation), I don't care to support abject stupidity. If that means that fewer people use what I write, so be it. (I'm certainly not trying to imply that I am without flaws, BTW.) I will not sacrifice my principles for the sake of popularity or wealth--even if the alternative is marginalization. I'm aware that I'm taking the hard line here. We'd be in a lot better position if compilers had done so, and the situation will only get worse as time goes on--if anything will ultimately kill C++, it is this. Again, I'm not advocating the removal of workarounds in source code--that isn't practical. I am against adding permissiveness to implementations, and for removing it where it already exists (i.e. workarounds *for* source code). Once the permissiveness is there, it becomes a normal feature to users (rather than a compatibility hack). Furthermore, the permissiveness required by a library forces all users to give up the checking that a less permissive tool would give them.
I've done it for the part of Boost that I consider my responsibility. (As usual, my problem is documentation--very important--which I hate writing and apparently suck at anyway.)
I don't think Boost-PP in particular suffers from deficient documentation. Again, what it needs is a massive how-to, which is best delivered in the form of a [chapter of a] book.
Which is where it is deficient, and that is the part that I find the most difficult. Specifically because I find it difficult to put myself in the shoes of those without my expertise. When I have a conversation with a particular person, I can gauge their understanding of what I'm saying by their responses. With documentation, the audience is abstract and the conversation is far less tailored to need.
On the other hand, Boost-PP is more of a toolmaker's tool than an end-user's tool anyway...
Yes, it is.
My first observation based on that experience is that the pp-lib is stagnant. There isn't much I can do to improve it (even with massive workarounds) without much better implementations as the common starting point.
I think for starters it would be nice to get it better adopted by other parts of Boost (not a dig at you by any means, but at those who can't be bothered to use it).
That's fine; but the pp-lib remains stagnant even with greater use. As far as preprocessors are concerned, one by one things are getting better. Metrowerks used to be horrible (if not the worst), but now it can handle most of Chaos. EDG's new front-ends (not yet in an official release of Comeau, BTW) can handle it too, and should be significantly faster. GCC can handle it as of the last several releases. Walter just rewrote the Digital Mars preprocessor, and it is a 100% turnaround. Unfortunately, there are still some that must be supported by Boost, like VC, that aren't even in the ballpark. If the pp-lib were to become incompatible with the needs of the rest of Boost, the rest of Boost couldn't use it, and as you mentioned, it is a toolmaker's tool.
There's just no excuse for a library to introduce an arbitrary, unconfigurable limit on the number of supported arguments in a user's function, or length of a [variable by design] list of template parameters etc.
I agree. Regards, Paul Mensonides
disease. :) Nowadays, I think there are better ways than COM or CORBA for writing reusable components. (And, incidentally, those better ways don't involve making all languages a thin veneer over one runtime.)
No doubt about it, but with COM, it is often a matter of using EXISTING components. Which is where Visual Basic used to have so much of a foothold, and still has, as far as I can tell.
I doubt this will ever change regardless of what books exist. I don't believe [skipped] The same kind of environment typically doesn't exist in computer science.
But there's another difference too -- practicing industrial programmers often stand to benefit from improving their skills and sometimes are even pressured to do so :-) Good books are the #1 aid in this quest. I don't think this situation is anywhere close to what you have for math or physics (sorry, that's what I think about when I say "science" -- must be my own background showing through) -- unless you are in research, you're not very likely to need [to be current in] them so much in your work.
Relevance to what, exactly?
To what is being done (and used) in the world at large. Much as I am frustrated by the legacy of bad design decisions that the industry is doomed to carry (and slowly replace with other bad design decisions, or so it seems) I remember all too well the fate of Algol 68, Prolog, VAX/VMS -- to name just a few. ...Max...
On 12/9/05 8:47 AM, "Hartmut Kaiser"
Max Motovilov wrote:
[SNIP]
aside from that, enough information has to be delivered to pre_exception() to undertake sensible actions which in all likelihood will require creating a separate exception class for each type of error or at least many of them.
Wouldn't a lot of different exception types make the overall error handling a lot more difficult for the average user? [TRUNCATE]
We could be like the Standard and use a hierarchy for Wave's exception types. Further, we should use std::exception, and/or derived classes, as the base types. -- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com
Daryle Walker wrote:
aside from that, enough information has to be delivered to pre_exception() to undertake sensible actions, which in all likelihood will require creating a separate exception class for each type of error, or at least many of them.

Wouldn't a lot of different exception types make the overall error handling a lot more difficult for the average user? [TRUNCATE]
We could be like the Standard and use a hierarchy for Wave's exception types.
Agreed. That's the only way to go.
Further, we should use std::exception, and/or derived classes, as the base types.
The existing Wave exceptions are already derived from std::exception. Regards Hartmut
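[A sketch of the shape being agreed on here: one std::exception-rooted base plus one derived class per diagnostic, each carrying enough context to act on. Class names are illustrative, not Wave's actual ones.]

#include <exception>
#include <string>

class preprocess_error : public std::exception {
    std::string msg_;
public:
    explicit preprocess_error( const std::string& msg ) : msg_( msg ) {}
    virtual ~preprocess_error() throw() {}
    virtual const char* what() const throw() { return msg_.c_str(); }
};

class macro_redefinition_error : public preprocess_error {
    std::string name_;
public:
    explicit macro_redefinition_error( const std::string& name )
        : preprocess_error( "redefinition of macro " + name ), name_( name ) {}
    ~macro_redefinition_error() throw() {}
    // enough context to recover: undefine the macro and re-process
    const std::string& macro_name() const { return name_; }
};

// catch( macro_redefinition_error const& e ) { /* e.macro_name() ... */ }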
....make that

template <typename X>
bool checked_throw( const X& x )
{
    if( !pre_exception(x) ) throw x;
    return true;
}
participants (7)

- Daniel Wallin
- Daryle Walker
- David Abrahams
- Felipe Magno de Almeida
- Hartmut Kaiser
- Max Motovilov
- Paul Mensonides