Wave C++ Review Begins Today - February 7, 2005

The Wave C++ preprocessor library review begins today, February 7, 2005.

The library author is Hartmut Kaiser <hartmutkaiser@t-online.de>. The review manager is Tom Brinkman <reportbase@yahoo.com>. Download at http://spirit.sourceforge.net/dl_more/wave-1.1.12.zip

Here are some questions you might want to answer in your review:

* What is your evaluation of the design?
* What is your evaluation of the implementation?
* What is your evaluation of the documentation?
* What is your evaluation of the potential usefulness of the library?
* Did you try to use the library? With what compiler? Did you have any problems?
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
* Are you knowledgeable about the problem domain?

And finally, every review should answer this question:

* Do you think the library should be accepted as a Boost library?

Be sure to say this explicitly so that your other comments don't obscure your overall opinion.

The Wave C++ preprocessor library is a Standards-conformant implementation of the mandated C99/C++ preprocessor functionality packed behind a simple-to-use interface, which integrates well with the well-known idioms of the Standard Template Library (STL). It is not a monolithic application; rather, it is a modular library which exposes mainly a context object and an iterator interface. The context object helps to configure the actual preprocessing process (search paths, predefined macros, etc.) and also generates the exposed iterators. Iterating over the sequence defined by the two iterators returns the preprocessed tokens, which are built on the fly from the given input stream. The C++ preprocessor iterator itself is fed by a C++ lexer iterator, which implements an abstract interface. The C++ lexers packaged with the Wave library may be used standalone, too, and are not tied to the C++ preprocessor iterator at all.
The flexible interface exposed by the library makes it usable in a variety of applications, ranging from simply lexing (tokenising) a C99/C++ input stream to complex preprocessing with fine control over the details of the macro expansion process. For instance, it is very simple to inject your own #pragma handling into the preprocessing, allowing you to extend it in virtually unlimited ways.

The Wave C++ preprocessor library is known to be compilable with VC7.1, Intel 7.1 and 8.0, and gcc 3.2.x and 3.3.x. Work is currently underway to make it usable with VC6.5 as well. The library contains samples which show different ways of using it. A full-blown command-line-oriented preprocessing driver application is included in the Boost tools section as well.

The review candidate version of Wave is accessible here: http://spirit.sourceforge.net/dl_more/wave-1.1.12.zip Additionally, it is included in the Boost-sandbox CVS. If you're interested in what's new or changed since the last releases, please refer to the enclosed documentation or the changelog file contained inside the archive.

The Wave C++ preprocessor library review begins today February 7, 2005.
I obviously did not have a chance to look into this submission in detail (though it does seem like quite an achievement), but I have a couple of general questions:

1. Why do we need it? I mean, why do we need it here in Boost? I admit there may be a couple dozen people in the world who are interested in implementing/using custom C++ preprocessors, but does that make it a widely reusable component? Note I do not comment on the quality of the submission (I am most probably not qualified enough to comment on that). After all, this library/utility already exists and is available to the public.

2. How are we supposed to test this submission (by test I mean: make sure it works correctly)? The submission package does not include any tests, while with a utility of this complexity, I actually expected compliance testing facilities to exceed the implementation in size.

3. How are we supposed to comment on the implementation? Beyond the sheer volume of the submission (more than 1 MB in headers and sources), IMHO one needs to be an expert in both Spirit and the C++ preprocessor specification to make any intelligent comments on what is written.

4. Why would you need 500k of headers? After all, the public interface should be something around: take this file, parse it, produce text output?

Please do not consider the above as negative comments on the library itself, I just wonder.

Gennadiy

Gennadiy Rozental wrote:
The Wave C++ preprocessor library review begins today February 7, 2005.
I obviously did not have a chance to look into this submission in detail (though it does seem like quite an achievement), but I have a couple of general questions:
1. Why do we need it? I mean, why do we need it here in Boost? I admit there may be a couple dozen people in the world who are interested in implementing/using custom C++ preprocessors, but does that make it a widely reusable component? Note I do not comment on the quality of the submission (I am most probably not qualified enough to comment on that). After all, this library/utility already exists and is available to the public.
IIRC it was discussed several times on this list that:

- Boost should be more than a collection of libraries; it additionally should provide the C++ developer with general tools usable to improve their work.

- Many Boosters agreed that it would be a good thing for Boost to have a publicly available C++ compiler (or at least parts of one), which may be used for a broad range of tasks (just remember your recent discussion with Christopher, where the idea of having a C++ refactoring tool (to add intrusive profiling) popped up again). I assume a preprocessor is an integral part of such a tool suite, just the first step towards this goal.

- Boost is the melting pot for the next C++ Standard (currently mainly for the library working group), but only by having a codebase available to play with will we be able to test different approaches to make the preprocessor more usable. Just take the recent discussion about macro scopes and such. Without a reference implementation in our hands it's very hard to judge the strengths and weaknesses of a particular proposal.

- Wave may be a helpful tool for developers sticking with older compilers (and bad preprocessors) but wanting to use the Boost.PP library in their code.
2. How are we supposed to test this submission (by test I mean: make sure it works correctly)? The submission package does not include any tests, while with a utility of this complexity, I actually expected compliance testing facilities to exceed the implementation in size.
Very good point. I must admit that I underestimated the significance of adding an integrated test procedure to the Boost submission package. Certainly I do have a comprehensive test suite here (otherwise I wouldn't have been able to make the library as compliant as it is), but it isn't a fully automated test suite yet and I currently can't give it away. But I agree with you: there should be such a test suite, and I'd be willing to provide one given that Wave is accepted into Boost. To answer your question regarding how you may test whether the library does what it claims: just take your everyday code and pass it through Wave before compiling (just as you do with other compilers you trust). But perhaps others who have used Wave may want to comment on the issue of compliance as well.
3. How are we supposed to comment on the implementation? Beyond the sheer volume of the submission (more than 1 MB in headers and sources), IMHO one needs to be an expert in both Spirit and the C++ preprocessor specification to make any intelligent comments on what is written.
I think you won't have to understand every bit of a library to give a judgement. Looking at the documentation and skimming over the code often gives a good impression of its quality.
4. Why would you need 500k of headers? After all, the public interface should be something around: take this file, parse it, produce text output?
That's very much related to your 1st question. If we agree to make Wave a part of Boost to meet the goals I've reiterated above, you'll have to take on the burden of adding these headers to Boost as well.

BTW: part of these headers aren't required to compile a program using Wave (especially the files related to the grammars); part of them are required to compile the libraries only. So the question is where to put these headers. I've put them all into one place (the boost/wave subdirectory) to comply with the Boost directory structure. But this is an issue I'm ready to discuss. In the end it doesn't matter for the overall Boost size where these files end up: in the boost/wave or the libs/wave directory.

If you look at the main header of the Wave library (the file cpp_context.hpp), which is the one containing the public interface, you'll see that it exposes a really sparse public interface (as described in the documentation):

- predefining macros,
- managing include paths,
- perhaps adjusting some additional parameters (such as the mode to work in, i.e. C++ or C99, or the maximum allowed include nesting depth),
- getting the current state the preprocessor is working in, and
- getting the iterators to work on.

Nothing more. The overall Wave codebase is heavily structured into namespaces (reflected by the directory structure), and the only classes in the boost::wave namespace are the context<> template and the iterators returned by the context<>. All the other classes reside in deeper namespaces (such as boost::wave::util) because they aren't part of the public interface.

Hope this answers your questions.

Regards Hartmut

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Hartmut Kaiser
Gennadiy Rozental wrote:
I obviously did not have a chance to look into this submission in detail (though it does seem like quite an achievement), but I have a couple of general questions:
1. Why do we need it? I mean, why do we need it here in Boost? I admit there may be a couple dozen people in the world who are interested in implementing/using custom C++ preprocessors, but does that make it a widely reusable component?
A C/C++ preprocessor is certainly a widely reusable (and reused) component. Obviously, source code reuse is going to be significantly less than for a less specific (and considerably simpler) component like shared_ptr, but so what? C++ programmers could benefit greatly from tools that currently don't exist largely because the language is so complex; a library-based preprocessor simplifies the task of analyzing C++ source considerably. Because of realities like that, direct reuse isn't the only type of reuse worth pursuing. Furthermore, it comes down to whether or not it is possible for something to be too specific to be part of Boost, and, if so, where that line should be drawn. There is already precedent in Boost for highly specific (non-general-purpose) libraries, such as the Python library.
- Many Boosters agreed that it would be a good thing for Boost to have a publicly available C++ compiler (or at least parts of one), which may be used for a broad range of tasks (just remember your recent discussion with Christopher, where the idea of having a C++ refactoring tool (to add intrusive profiling) popped up again). I assume a preprocessor is an integral part of such a tool suite, just the first step towards this goal.
This is an example of indirect reuse--user X may not care about Y, but user X does care about Z which uses Y--which might even be more important in the overall scheme of things. That happens all the time with libraries like the type_traits library, the MPL, and the pp-lib.
- Boost is the melting pot for the next C++ Standard (currently mainly for the library working group), but only by having a codebase available to play with will we be able to test different approaches to make the preprocessor more usable. Just take the recent discussion about macro scopes and such. Without a reference implementation in our hands it's very hard to judge the strengths and weaknesses of a particular proposal.
I'll second this as well. IMO, as a generalization, the library working group is much more focused on reality than theory. It is often the case that they know better what we actually need. (No offense is intended by that comment, BTW. It has simply been my observation. How many template facilities were added because of the STL, for example?)
- Wave may be a helpful tool for developers sticking with older compilers (and bad preprocessors) but wanting to use the Boost.PP library in their code.
Or libraries that are much better than the pp-lib... :)
2. How are we supposed to test this submission (by test I mean: make sure it works correctly)? The submission package does not include any tests, while with a utility of this complexity, I actually expected compliance testing facilities to exceed the implementation in size.
Very good point. I must admit, that I underestimated the significance of adding an integrated test procedure to the Boost submission package.
Though not directly relevant to Gennadiy's point (and not in opposition to it), testing against Chaos far exceeds the rigor of any C or C++ validation suite currently in existence.
4. Why would you need 500k of headers? After all, the public interface should be something around: take this file, parse it, produce text output?
That's very much related to your 1st question. If we agree to make Wave a part of Boost to meet the goals I've reiterated above, you'll have to take on the burden of adding these headers to Boost as well.
Note also that a preprocessor is a non-trivial entity. There are several general-purpose facilities that Wave must implement itself because they aren't, for example, in Boost, such as a copy-on-write string (which is a very valid option in single-threaded code). If some of those general-purpose components were in other libraries (or were libraries themselves), Wave's size would decrease. As a case in point, Wave would be even bigger if it didn't reuse Spirit. Likewise, it is a template-based library, which makes it more extensible (e.g. to Unicode source files) without resorting to unnecessary runtime dispatch. That also increases the header size. Generic code is simply bigger than non-generic code: 10% actual code, 10% regurgitating typedefs, and 80% 'typename' and 'template'. :) Regards, Paul Mensonides

Gennadiy Rozental wrote:
The Wave C++ preprocessor library review begins today February 7, 2005.
I obviously did not have a chance to look into this submission in detail (though it does seem like quite an achievement), but I have a couple of general questions:
1. Why do we need it? I mean, why do we need it here in Boost? I admit there may be a couple dozen people in the world who are interested in implementing/using custom C++ preprocessors, but does that make it a widely reusable component? Note I do not comment on the quality of the submission (I am most probably not qualified enough to comment on that). After all, this library/utility already exists and is available to the public.
I have a potential use for it. I am considering writing a Spirit-based IDL compiler for the interfaces library which would accept pseudocode as input (see http://tinyurl.com/5oj6n) and output C++ class definitions. Using Wave I could allow the pseudocode input to contain preprocessing directives. Jonathan

Jonathan Turkanis wrote:
I have a potential use for it.
I am considering writing a Spirit-based IDL compiler for the interfaces library which would accept pseudocode as input (see http://tinyurl.com/5oj6n) and output C++ class definitions. Using Wave I could allow the pseudocode input to contain preprocessing directives.
FYI, the submitted library contains the waveidl sample, which shows how to use the preprocessor library in conjunction with a custom lexer and for this reason contains an IDL lexer, which may be used as a starting point for you. Additionally, the pair of iterators exposed by the wave::context<> object is directly usable as input iterators for Spirit. Regards Hartmut

Gennadiy Rozental wrote:
2. How are we supposed to test this submission (by test I mean: make sure it works correctly)? The submission package does not include any tests, while with a utility of this complexity, I actually expected compliance testing facilities to exceed the implementation in size.
Just wanted to post the results produced by Wave while running against the publicly available preprocessor testsuite bundled with the MCPP preprocessor (available here: http://www.m17n.org/mcpp/index_eng.html). This test suite consists of over 250 single tests grouped into 168 category tests; Wave passes 161 of these. It's a very comprehensive test of all kinds of different pieces mandated by the Standard. The failing tests are mainly related to multi-character input sequences, which is one of the known problems of the Wave preprocessing engine.

Regards Hartmut

---------------------------------------------------------------
Legend:
  *  test passed
  o  test compiled but gave wrong result
  -  test didn't pass

General Standards conformance
  n_1:      Trigraph sequences.                 3  *
  n_2:      Line splicing by <backslash>.       5  *
  n_3:      Handling of comment.                3  *
  n_3_4:    Handling of comment and \ \n        1  *
  n_4:      Tokens spelled by digraphs.         2  *
  n_5:      Spaces or tabs in pp-directive.     1  *
  n_6:      #include directive.                 3  *
  n_7:      #line directive.                    3  *
  n_8:      #error directive                    1  *
  n_8_2:    #error directive                    1  *
  n_9:      #pragma directive.                  1  *
  n_10:     #if, #elif pp-directive.            2  *
  n_11:     Operator "defined" in #if.          2  *
  n_12:     Pp-number and type of #if expr.     7  *
  n_13:     Valid operators in #if expr.        4  *
  n_13_5:   Usual arithmetic conversion.        2  *
  n_13_7:   Short-circuit evaluation of #if.    1  *
  n_13_8:   Grouping of #if sub-expressions.    5  *
  n_13_13:  #if expression with macros.         2  *
  n_15:     #ifdef, #ifndef directives.         2  *
  n_18:     #define directive.                  3  *
  n_19:     Valid re-definitions of macros.     2  *
  n_20:     Macro name identical to keyword.    1  *
  n_21:     Tokenization (no token merging).    1  *
  n_22:     Tokenization of pp-number.          3  o  passes 2 of 3
  n_23:     ## operator in macro definition.    2  *  (C99 mode)
  n_24:     # operator in macro definition.     4  *
  n_25:     Pre-expansion of macro args.        5  *
  n_26:     No recursive replacement.           5  *
  n_27:     Rescanning of a macro.              5  -  passes 4 of 5
  n_28:     Standard pre-defined macros.        7  *  (C99 mode)
  n_29:     #undef directive.                   2  *
  n_30:     Macro call crossing lines.          1  *
  n_32:     Escape sequence in char-const.      2  *
  n_37:     Translation limits.                 8  *
                                   subtotal:  104  2

Implementation defined behaviour
  i_32_3:   Character constant in #if.          2  *
  i_33:     Wide character constant in #if.     1  *
  i_34:     Multi-byte character constant.      1  *
  i_35:     Multi-character character const.    1  *
  i_35_3:   Multi-character wide character.     1  o  passes 0 of 1
                                   subtotal:    6  1

Error reporting
  e_4_3:    Illegal pp-token                    1  *
  e_7_4:    #line error                         1  *
  e_12_8:   Out of range integer pp-token       1  *
  e_14:     Illegal #if expression              6  *
  e_14_7:   Keywords in #if expression          2  *
  e_14_9:   Division by 0 in #if expression     1  *
  e_14_10:  Overflow of constant expression     1  -
  e_15_3:   #ifdef, #ifndef syntax errors       3  *
  e_16:     Trailing junk of #else, #endif      2  *
  e_17:     Ill-formed group (#if, #else...)    7  *
  e_18_4:   #define syntax errors               6  *
  e_19_3:   Redefinitions of macros             5  *
  e_23_3:   Operator ## placement               4  *  reported at macro invocation time
  e_24_5:   Operator # operand must be param    1  *  reported at macro invocation time
  e_27_6:   Error of rescanning                 1  *
  e_29_3:   #undef errors                       3  *
  e_31:     Illegal macro calls                 2  *
  e_31_3:   Invalid macro call                  1  *
  e_32_5:   Range error of character constant   1  -  passes 0 of 1
  e_33_2:   Range error of wide char constant   1  -  passes 0 of 1
  e_35_2:   Out of range of char constant       1  -  passes 0 of 1
                                   subtotal:   51  4

Total number of tests passed/unpassed:        161  7

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Tom Brinkman
Here are some questions you might want to answer in your review:
* What is your evaluation of the design? * What is your evaluation of the implementation?
I think that the design and implementation are good. The library has reasonable performance in preprocessing complex examples and is more conformant to the standards than most other vendors' preprocessors. The library is capable of handling Chaos, for example, which can be said of only a handful of preprocessors. I do not have an extremely detailed knowledge of the implementation, but I do have a fairly detailed knowledge of the overall structure and design of the library. There is room for improvement in a few areas, and movement in that direction continues.
* What is your evaluation of the documentation?
The documentation is pretty good. Library usage is straightforward, which is a testament to the quality of the design.
* What is your evaluation of the potential usefulness of the library?
Nearly every tool that analyzes C or C++ source effectively needs the ability to preprocess that source. Having a pluggable preprocessor is a boon for tool developers, including possible future Boost tools. That said, the potential usefulness of the library (as a library) is fairly restricted to tool development. OTOH, the driver can be used as a replacement for faulty preprocessors without a great deal of effort. Furthermore, the tracing ability of the library (and, by extension, the driver) makes it hands-down the best tool for debugging complex preprocessor metaprograms. This is especially true because tracing can be turned on and off mid-expansion with pragmas (in particular, with the _Pragma operator borrowed from C99).
* Did you try to use the library? With what compiler? Did you have any problems?
I have not extensively used the library as a library, but I have extensively used the driver. I test all Chaos code against Wave, and I regularly use the preprocessor in ways that most compilers cannot handle without help (from Wave). My use of the driver has spanned a couple of years now, and during that time Hartmut has fixed nearly all problems that I've encountered.
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I have been following Wave's design, conformance, and performance from its inception. Over the last several years, I have had many protracted discussions with Hartmut over what the preprocessor is supposed to do, etc. Saying that Wave can handle Chaos is no small endorsement. There is exactly one other preprocessor that can handle *all* of Chaos: gcc's.
* Are you knowledgeable about the problem domain?
One might say that. :) I am probably the most knowledgeable person in the world regarding preprocessor metaprogramming--with the possible exception of Vesa.
And finally, every review should answer this question:
* Do you think the library should be accepted as a Boost library?
Yes, the library should be accepted into Boost--as should the driver. Beyond the utility and capabilities of the library, the existence of the library makes a worthwhile political statement to compiler vendors. I endorse the concept of a Boost preprocessor, and I endorse this particular realization of that concept. Furthermore, I endorse the author (Hartmut). He has been a very responsible and responsive implementor/maintainer--for Spirit as well as the Wave project. Regards, Paul Mensonides

"Tom Brinkman" wrote:
The Wave C++ preprocessor library review begins today...
I think Wave should become part of Boost. I took a look at the documentation and briefly at its code. I don't use it now and didn't try to compile the code. A few notes are below.

/Pavel
________________________________________________________
1. Some parts may be separated:

- flex_string may be a standalone library (flex_string.hpp: the link to the CUJ HTML page should be replaced with a link to Andrei's website; CUJ is accessible to subscribers only).
- load_file_to_string in cpp_iteration_context.hpp may be part of string_algos
- transform_iterator.hpp
- time_conversion_helper.hpp (don't know where to put it, but it would come in very handy many times)
________________________________________________________
2. The documentation may contain diagrams and/or tables identifying which parts of the library belong to which module.
________________________________________________________
3. preface.html: the sentence "...is by far one of the most powerful compile-time reflection/metaprogramming facilities that any language has ever supported." may be questioned by those using Lisp ;-)
________________________________________________________
4. Performance question: if I add a non-existent path on a slow (e.g. network) drive, will the path be examined every time during parsing, or will it be checked only once? Also, is there a chance to cache files found on slow drives, or to keep a cached file listing of these? I ask because this is a nasty problem with the BCB preprocessor.
________________________________________________________
5. Is it possible to set the language mode (C99, C++98) per individual file (or directory)?
________________________________________________________
6. Does the library deal with Microsoft's #region and #endregion?
________________________________________________________
7. There should be example(s) with #pragma wave system in supported_pragmas.html. (This feature may prove _very_ useful, IMHO.) Other pragmas may also have examples in the docs.
________________________________________________________
8. Documentation should give an example of the often-mentioned @config-file.
________________________________________________________
9. A Wave driver Win32 executable should be available in Boost, a version that doesn't need any external DLLs. Reasons: people using old compilers may like it but may be unable/unwilling/scared to compile it. Having an exe would make initial playing with the system less troubling.
________________________________________________________
10. Question: is the #pragma wave system executed before any following #includes are scanned? For example, if I have:

#pragma wave system "generate xyz.h somehow"
#include "xyz.h"

May it be possible to somehow pass the current header and the translation unit name into the "system" command? May it be possible to use something like

#include CURRENT_FILE_NAME_BASE + ".generated_hpp"
________________________________________________________
11. tracing_facility.html: some mess on the line starting with "When preprocessed with ...". Maybe also an "expand" pragma could be added, which simply expands a block of code, w/o explaining how.
________________________________________________________
12. Wishes:

a. Check for digraphs/trigraphs. Maybe the driver could have an option that checks for the presence of digraphs and trigraphs and reports an error when it finds some. It may be used e.g. to check computer-generated random string tables.

b. Sometimes it may be useful to be able to "partially preprocess" a given source. E.g. I would like to have a cleaner version of STLport just for my platform. It would be nice if I could specify a list of #defines and #undefines to be processed, the rest left as is.

c. Own preprocessor: say there's an app used for many customers. Code is shipped to them. I do not want one customer to see code related to others. I would like to have something like my own private preprocessor:

@@ifdef CUSTOMER_X
....
@@else
....
@@endif
@@ifdef CUSTOMER_Y
@@include "licensing_info_text"
#include <...>
...
@@endif

Could it be described how/whether this can be done with Wave? Or maybe even a sample.
________________________________________________________
EOF

Pavel Vozenilek wrote:
I think Wave should become part of Boost.
Thanks!
1. Some parts may be separated:
- flex_string may be standalone library
(flex_string.hpp: the link to CUJ HTML page should be replaced to link to Andrei's website, CUJ is accessible to subscribers only).
This was discussed on this list already in conjunction with the discussion about the const_string library. I don't know what the current status is, but IIRC there were some people interested in putting together several different string implementations compatible with std::string et al.
- load_file_to_string in cpp_iteration_context.hpp may be part of string_algos
Could you elaborate?
- transform_iterator.hpp
Transform_iterator _is_ part of Boost already; the difference is that Wave uses a slightly different version: for performance reasons the version contained in Wave returns the transformed value by reference (I don't know whether this is possible in general; in Wave it is used to flatten a parse tree). That's the reason why it is named ref_transform_iterator.
- time_conversion_helper.hpp (don't know where to put it but it would come very handy many times)
Hmmm... Where does this belong? Any ideas?
________________________________________________________ 2. The documentation may contain diagrams and/or tables identifying what parts of library belong to what module.
Good point. Noted.
________________________________________________________ 3. preface.html: sentence
"...is by far one of the most powerful compile-time reflection/metaprogramming facilities that any language has ever supported."
may be questioned by those using Lisp ;-)
Haha! No religious war please!
________________________________________________________ 4. Performance question: if I add a non-existent path on a slow (e.g. network) drive, will the path be examined every time during parsing, or will it be checked only once?
Also, is there a chance to cache files found on slow drives, or to keep a cached file listing of these?
I ask because this is a nasty problem with the BCB preprocessor.
Never thought about this problem, so Wave doesn't try to optimise this yet. But I assume it should be possible to add such features. The preprocessor currently exposes some hooks, i.e. functions called in certain situations, such as 'macro defined', 'macro expanded', etc. Possibly additional hooks should be added to allow implementing such functionality outside of the library. This would keep Wave smaller and more focused; OTOH it would allow adding those features where needed. I'll think about that.
________________________________________________________ 5. Is it possible to set language mode (C99, C++98) per individual file (or directory)?
Should be possible, but I'll have to investigate this further. Will report back on this later.
________________________________________________________ 6. Does the library deal with Microsoft's #region and #endregion?
No. But good point, it should be added (at least these should be ignored). The library already has the option to recognise MS specific tokens, such as __declspec etc. #region and #endregion fall into the same category.
________________________________________________________ 7. There should be example(s) with #pragma wave system in supported_pragmas.html.
(This feature may prove _very_ useful, IMHO.)
Other pragmas may also have examples in docs.
Noted.
________________________________________________________ 8. Documentation should give an example of the often-mentioned @config-file.
Ok.
________________________________________________________ 9. A Wave driver Win32 executable should be available in Boost, a version that doesn't need any external DLLs.
Reasons: people using old compilers may like it but may be unable/unwilling/scared to compile it. Having an exe would make initial playing with the system less troubling.
That's one of the goals. As you may have noted, the Wave driver already resides in the boost/tools directory, and the Jamfiles are set up so that all libraries are linked statically.
________________________________________________________ 10. Question: is the #pragma wave system executed before any following #includes are scanned?
Yes. It's executed at the point of its occurrence. That's needed because the stdout contents of the executed command are inserted in place of the #pragma (but not rescanned for macros afterwards).
For example if I have:

    #pragma wave system "generate xyz.h somehow"
    #include "xyz.h"
Should work. But as I've said above, you may output the anticipated contents of xyz.h directly to stdout (std::cout) and it will be intercepted by Wave as the replacement text for the overall #pragma directive.
May it be possible to pass somehow current header and end TU name into the "system" command?
The body of the #pragma is macro expanded before the command is executed, so it is possible to pass the __FILE__ pp constant as a command line parameter to the executed command:

    // file test.cpp
    #pragma wave system(cmd /c echo __FILE__)

will generate "test.cpp" as the output (on Windows; on Linux et al. you'd use /bin/echo). The name of the translation unit isn't available from inside the preprocessed stream, but may be added as a predefined 'macro' (such as __FILE__).
May it be possible to use something as #include CURRENT_FILE_NAME_BASE + ".generated_hpp"
That's explicitly allowed anyway (but not with the '+'). The #include directive may contain a macro, which should expand to a syntactically valid file name ("" or <> syntax) - see Standard 16.2.4 [cpp.include].
________________________________________________________ 11. tracing_facility.html: some mess on line starting with "When preprocessed with ...".
Noted. My version of Dreamweaver inserts these messy things from time to time - *sigh*.
Maybe also "expand" pragma could be added, which simply expands block of code, w/o explaining how.
Could you elaborate, please?
________________________________________________________ 12. Wishes:
a. Check for digraphs/trigraphs. Maybe the driver could have an option that checks for the presence of digraphs and trigraphs and reports an error when it finds some.
It may be used e.g. to check computer generated random string tables.
This can be implemented by a special driver program which analyses the tokens returned by the preprocessor iterators. The trigraph tokens are flagged as such, so this should be possible already.
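A standalone checker along these lines could look roughly as follows. This is a hedged sketch operating on raw source text rather than on Wave's token stream (the function name and approach are illustrative, not Wave's API):

```cpp
// Scan raw source text for trigraph sequences (??= ??/ ??' ??( ??) ??! ??< ??> ??-)
// and return the offset of each occurrence.
#include <cstring>
#include <string>
#include <vector>

std::vector<std::size_t> find_trigraphs(const std::string& src) {
    static const char punct[] = "=/'()!<>-";  // characters that complete a trigraph
    std::vector<std::size_t> hits;
    for (std::size_t i = 0; i + 2 < src.size(); ++i)
        if (src[i] == '?' && src[i + 1] == '?' && std::strchr(punct, src[i + 2]))
            hits.push_back(i);
    return hits;
}
```

A driver built on Wave's iterators could instead inspect the token flags Hartmut mentions, which would avoid false positives in comments.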
b. Sometimes it may be useful to be able to "partially preprocess" given source. E.g. I would like to have cleaner version of STLport just for my platform.
It would be nice if I could specify list of #defines and #undefines to be processed, the rest left as is.
Hmmm, that's more complex, I'll have to think about this.
c. Own preprocessor: say there's app used for many customers. Code is shipped to them. I do not want one customer see code related to others.
I would like to have something as my own private preprocessor:
    @@ifdef CUSTOMER_X
    ....
    @@else
    ....
    @@endif

    @@ifdef CUSTOMER_Y
    @@include "licensing_info_text"
    #include <...>
    ...
    @@endif
Could it be described how/whether it can be done with Wave? Or maybe even a sample.
By default Wave respects the standard directive syntax only (starting with a '#'). But you can implement this very easily by providing your own lexing component, which fakes the pp tokens (used internally for #include, #define etc.) when recognising the @@include, @@define etc. in the input stream.

Thanks for your suggestions and reports!

Regards Hartmut
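As a rough alternative to the custom lexing component described above, a simple pre-pass could rewrite the @@ directives to standard '#' syntax before handing the text to any conforming preprocessor. This is a hedged sketch, not Wave's mechanism (the function name is invented):

```cpp
// Rewrite lines whose first non-whitespace characters are "@@" so that
// @@ifdef -> #ifdef, @@include -> #include, and so on.
#include <sstream>
#include <string>

std::string translate_at_directives(const std::string& in) {
    std::istringstream is(in);
    std::ostringstream os;
    std::string line;
    while (std::getline(is, line)) {
        std::size_t p = line.find_first_not_of(" \t");
        if (p != std::string::npos && line.compare(p, 2, "@@") == 0)
            line.replace(p, 2, "#");   // turn the private directive into a standard one
        os << line << '\n';
    }
    return os.str();
}
```

The lexer-based approach is cleaner because it keeps the standard '#' directives untouched for the customer's compiler while Wave handles the '@@' ones, but a pre-pass is trivial to prototype.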

"Hartmut Kaiser" wrote: ________________________________________________________
1. Some parts may be separated: - load_file_to_string in cpp_iteration_context.hpp may be part of string_algos
Could you elaborate?
Looking again - not really worth the trouble.
- time_conversion_helper.hpp (don't know where to put it but it would come very handy many times)
Hmmm... Where does this belong? Any ideas?
Maybe as a small, standalone utility in date-time.
________________________________________________________
9. Wave driver Win32 execuable should be available in Boost, version that doesn't need any external DLLs.
That's one of the goals. As you may have noted, the wave driver already resides in the boost/tools directory and the Jamfiles are setup such, that all libraries are linked statically.
Very good. I didn't notice (having it in tools/). ________________________________________________________
Maybe also "expand" pragma could be added, which simply expands block of code, w/o explaining how.
Could you elaborate, please?
When I want to check how preprocessed source looks, now I do things like cl /E xyz.cpp and then search through a multimegabyte file. If I could have

    ...
    #pragma wave expand
    .. what is here gets expanded but the rest not
    #pragma wave expand-end
    ....

This would help to avoid these huge files when I am not interested in trace output. ________________________________________________________
I would like to have something as my own private preprocessor:
    @@ifdef CUSTOMER_X
    ....
    @@else
    ....
    @@endif

    @@ifdef CUSTOMER_Y
    @@include "licensing_info_text"
    #include <...>
    ...
    @@endif
Could it be described how/whether it can be done with Wave? Or maybe even a sample.
By default Wave respects the standard directive syntax only (starting with a '#'). But you can implement this very easily by providing your own lexing component, which fakes the pp tokens (used internally for #include, #define etc.) when recognising the @@include, @@define etc. in the input stream.
Maybe docs could give some hints what is needed to touch.
________________________________________________________
One possible feature: would it make sense to have a macro such as __TU_FILE__ which always expands into the name of the "initially processed file"? Say one may like to automatically register modules where a certain feature is used. (No idea how this would play with precompiled headers.) /Pavel

Pavel Vozenilek wrote:
- time_conversion_helper.hpp (don't know where to put it but it would come very handy many times)
Hmmm... Where does this belong? Any ideas?
Maybe as small, standalone utility in date-time.
Good idea. Other thoughts?
________________________________________________________
Maybe also "expand" pragma could be added, which simply expands block of code, w/o explaining how.
Could you elaborate, please?
When I want to check how preprocessed source looks like now I do things like cl /E xyz.cpp and then search through multimegabyte file.
If I could have
... #pragma wave expand .. what is here gets expanded but the rest not #pragma wave expand-end ....
This would help to avoid these huge files when I am not interested in trace output.
Understood. That will require an additional thought (or two). I'll get back to you on this, ok?
________________________________________________________
I would like to have something as my own private preprocessor:
    @@ifdef CUSTOMER_X
    ....
    @@else
    ....
    @@endif

    @@ifdef CUSTOMER_Y
    @@include "licensing_info_text"
    #include <...>
    ...
    @@endif
Could it be described how/whether it can be done with Wave? Or maybe even a sample.
By default Wave respects the standard directive syntax only (starting with a '#'). But you can implement this very easily by providing your own lexing component, which fakes the pp tokens (used internally for #include, #define etc.) when recognising the @@include, @@define etc. in the input stream.
Maybe docs could give some hints what is needed to touch.
Ok, I'll add a new section about lexer customization, because this seems to be interesting for several people.
________________________________________________________ One possible feature: would it make sense to have macro as __TU_FILE__ which always expands into the name of "initially processed file"?
Say one may like to automatically register modules where certain feature is used.
(No idea how this would play with precompiled headers.)
The name of the translation unit is constant, right? And since Wave doesn't support precompiled headers yet, it shouldn't be a problem at all. And even if Wave gets such a feature added, it is possible to keep track of this predefined macro. I'll give it an additional thought, but currently I don't see any reason not to add it to Wave. Regards Hartmut

On Wed, Feb 09, 2005 at 09:20:20PM +0100, Hartmut Kaiser <HartmutKaiser@t-online.de> wrote:
Pavel Vozenilek wrote:
________________________________________________________ 6. Does the library deal with Microsoft's #region and #endregion?
No. But good point, it should be added (at least these should be ignored). The library already has the option to recognise MS specific tokens, such as __declspec etc. #region and #endregion fall into the same category.

What about gcc's #include_next?
Andreas Pokorny

Andreas Pokorny wrote:
________________________________________________________ 6. Does the library deal with Microsoft's #region and #endregion?
No. But good point, it should be added (at least these should be ignored). The library already has the option to recognise MS specific tokens, such as __declspec etc. #region and #endregion fall into the same category. What about gcc's #include_next?
This is implemented already. For this you'll have to define BOOST_WAVE_SUPPORT_INCLUDE_NEXT (in wave_config.hpp) while compiling your app. I've made this optional, because it isn't a Standard feature. Regards Hartmut
Andreas Pokorny

-----Original Message----- On Behalf Of Hartmut Kaiser
________________________________________________________ 3. preface.html: sentence
"...is by far one of the most powerful compile-time reflection/metaprogramming facilities that any language has ever supported."
may be questioned by those using Lisp ;-)
Haha! No religious war please!
Indeed. :) I think that is a partial quote from me. Lisp and Scheme certainly have powerful macro systems. However, Lisp/Scheme macros cannot generate nor deal with syntactically invalid code. I.e. the C preprocessor, while certainly not as powerful as the Lisp or Scheme macro systems, is more powerful in certain areas--like the deferral of syntactic correctness. A very simple example:

    #define RBRACKET ]
    int array[ RBRACKET = { ... };

This may not seem like it is very useful, but with preprocessor metaprogramming it is. Unlike the Lisp and Scheme macro systems, the preprocessor doesn't manipulate/transform expressions (i.e. syntactic elements in general). Instead, it writes them as tokens in arbitrary order. Combined with template metaprogramming, it *is* one of the most powerful compile-time reflection/metaprogramming facilities--though I agree that it isn't at the very top of that list (which would more likely be Scheme's hygienic system in particular). An intentional environment would easily surpass all of these languages.
________________________________________________________ 5. Is it possible to set language mode (C99, C++98) per individual file (or directory)?
Should be possible, but I'll have to investigate this further. Will report back on this later.
I'm not sure that this is a good idea. I assume that you (Pavel) are referring to inside a single translation unit, such as, for example, 'a.c' includes both 'b.h' and 'c.h' but includes 'b.h' as C++ and 'c.h' as C. The differences in the preprocessor between C and C++ are few, though C's is currently better. You'd be better off just enabling the "new" C features in C++.
________________________________________________________ 7. There should be example(s) with #pragma wave system in supported_pragmas.html.
(This feature may show _very_ useful, IMHO.)
Other pragmas may also have examples in docs.
Note, BTW, that one of the features of C99 that isn't present in C++ is the _Pragma operator. It acts just like #pragma, but it can be the result of macro expansion. Wave supports this and there are some pragmas that are executed mid-macro expansion. "system" is one of those. Likewise, the "trace" pragma, which can be used to trace macro expansion, is executed mid-expansion.
Could you elaborate, please?
________________________________________________________ 12. Wishes:
a. Check for digraphs/trigraphs. Maybe the driver could have an option that checks for the presence of digraphs and trigraphs and reports an error when it finds some.
It may be used e.g. to check computer generated random string tables.
Trigraphs are technically single characters (rather than token spellings). They are translated into their equivalents in phase one. This is why trigraphs are replaced inside string literals. Digraphs, OTOH, are distinct tokens that semantically mean the same thing as their equivalents. They are not subject to any kind of "replacement". A would-be digraph in a string literal is simply not a digraph.
This can be implemented by a special driver program which analyses the tokens returned by the preprocessor iterators. The trigraph tokens are flagged as such, so this should be possible already.
b. Sometimes it may be useful to be able to "partially preprocess" given source. E.g. I would like to have cleaner version of STLport just for my platform.
It would be nice if I could specify list of #defines and #undefines to be processed, the rest left as is.
Hmmm, that's more complex, I'll have to think about this.
That is actually a lot more complex. It implies a heavy duty dependency analysis on, for example, macros. I think that implementing this would be more complex (to do correctly) than the entire preprocessor altogether. Regards, Paul Mensonides

Hartmut Kaiser wrote:
Pavel Vozenilek wrote:
I think Wave should become part of Boost.
I agree. It is an invaluable tool. This would especially be the case if:
1. There is a wave.jam toolset for BBv2.
2. It is easy to chain wave in the build process. That is:

    exe demo : main.cpp ;                        # compiler preprocessor
    exe demo : main.cpp : <preprocessor>wave ;   # wave preprocessor
1. Some parts may be separated:
- flex_string may be standalone library
(flex_string.hpp: the link to CUJ HTML page should be replaced to link to Andrei's website, CUJ is accessible to subscribers only).
This was discussed already on this list in conjunction with the discussion about the const_string library. I don't know what the current status is, but IIRC there were some people interested in putting together several different string implementations compatible with std::string et al.
I have been working on this using a variant of flex_string for my fixed_string class, because flex_string does not handle the way I am implementing fixed_string. One issue I have with flex_string is that it is too big. Granted, that is because the std::string definition is big as well. Given this, I have been factoring out the various methods into groups (iterator, element access, etc.) The new approach makes use of the CRTP technique, with the helper macro:

    #define BOOST_CRTP_IMPL(T)\
        T & derived(){ return *static_cast<T *>(this); }\
        const T & derived() const{ return *static_cast<const T *>(this); }

This is more in line with the approach for iterators and can be expanded to include common patterns for container implementations (e.g. with iterators/reverse iterators). This would make it easier to use subsets of the std::string interface. For example, Maxim's const_string doesn't use the mutating methods. I have also made several modifications to Andrei's flex_string: using Boost's reverse_iterator (for msvc-6.0 support) and the string's char_traits for copy/move/etc. vs using the pod_* methods. Regards, Reece
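The CRTP helper could be exercised roughly like this. The facade and string class below are invented names for illustration, not Reece's actual classes:

```cpp
// A facade mixin built on the BOOST_CRTP_IMPL helper: empty() is defined
// once, in terms of the derived class's size().
#include <string>

#define BOOST_CRTP_IMPL(T)\
    T & derived(){ return *static_cast<T *>(this); }\
    const T & derived() const{ return *static_cast<const T *>(this); }

template <class Derived>
struct size_facade {
    BOOST_CRTP_IMPL(Derived)
    bool empty() const { return derived().size() == 0; }  // reuses Derived::size()
};

struct tiny_string : size_facade<tiny_string> {
    std::string data;
    std::size_t size() const { return data.size(); }
};
```

Factoring each interface group into such a facade lets a class like const_string pull in only the subsets it wants.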

Reece Dunn wrote:
I think Wave should become part of Boost.
I agree. It is an invaluable tool.
Thanks!
This would especially be the case if: 1. There is a wave.jam toolset for BBv2. 2. It is easy to chain wave in the build process. That is:
    exe demo : main.cpp ;                        # compiler preprocessor
    exe demo : main.cpp : <preprocessor>wave ;   # wave preprocessor
Great idea! Unfortunately I'm not an expert in BBv2, so somebody else will have to take this...
This was discussed already on this list in conjunction with the dicussion about the const_string library. I don't know, what's the current status, but IIRC there were some people interested in putting together several different string implementations compatible with std::string et.el.
I have been working on this using a variant of flex_string for my fixed_string class because flex_string does not handle the way I am implementing flex_string.
One issue I have with flex_string is that it is too big. Granted, that is because the std::string definition is big as well. Given this, I have been factoring out the various methods into groups (iterator, element access, etc.)
The new approach makes use of the CRTP technique, with the helper macro:
    #define BOOST_CRTP_IMPL(T)\
        T & derived(){ return *static_cast<T *>(this); }\
        const T & derived() const{ return *static_cast<const T *>(this); }
This is more in line with the approach for iterators and can be expanded to include common patterns for container implementations (e.g. with iterators/reverse iterators).
This would make it easier to use subsets of the std::string interface. For example, Maxim's const_string doesn't use the mutating methods.
I have also made several modifications to Andrei's flex_string: using Boost's reverse_iterator (for msvc-6.0 support) and the strings char_traits for copy/move/etc. vs using the pod_* methods.
May I try to test Wave with your current implementation(s)? Where can I find these? I'm not interested at all in keeping and maintaining a separate copy of flex_string (or whatever string class) inside Wave (just for the record: Wave was successfully tested with several different string implementations, such as std::string, flex_string and Maxim's const_string - it shouldn't be a problem to use just another one). As soon as there is a suitable string class available in Boost, I'll happily change Wave to use it. Regards Hartmut

"Hartmut Kaiser" <HartmutKaiser@t-online.de> writes:
Transform_iterator _is_ part of Boost already; the difference is that Wave uses a slightly different version (for performance reasons the version contained in Wave returns the transformed value by reference - I don't know whether this is commonly possible - in Wave it is used to flatten a parse tree). That's the reason why it is named ref_transform_iterator.
The boost::transform_iterator also returns by reference, if you set it up right. No need for code duplication. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
Transform_iterator _is_ part of Boost already; the difference is that Wave uses a slightly different version (for performance reasons the version contained in Wave returns the transformed value by reference - I don't know whether this is commonly possible - in Wave it is used to flatten a parse tree). That's the reason why it is named ref_transform_iterator.
The boost::transform_iterator also returns by reference, if you set it up right. No need for code duplication.
Even better, I'll look into it. The transform_iterator in Wave is perhaps a leftover from the old IA library; I haven't looked into this for 2 years or so... Regards Hartmut

I'll have to respond to myself here... Hartmut Kaiser wrote:
The boost::transform_iterator also returns by reference, if you set it up right. No need for code duplication.
Even better, I'll look into it. The transform_iterator in Wave is perhaps a leftover from the old IA library; I haven't looked into this for 2 years or so...
After a second look at the transform_iterator in Wave I (re-)discovered that it already uses the boost::transform_iterator (I had completely forgotten about this ;-). The own implementation kicks in only when used in conjunction with an older Boost version (containing the old IA library). There are several places in Wave which allow its compilation with older Boost versions (back to V1.30.2), but I assume that this backwards compatibility can be removed once Wave is accepted into Boost. Regards Hartmut

Tom Brinkman wrote:
* What is your evaluation of the design?
I think it's good.
* What is your evaluation of the implementation?
Very good. Hartmut's work is high quality.
* What is your evaluation of the documentation?
I haven't read it in detail, but after skimming it, it seems complete and accurate.
* What is your evaluation of the potential usefulness of the library?
While most developers may not find it useful, it could be indispensable for others, and will hopefully serve as a building block for more powerful c++ tools.
* Did you try to use the library? With what compiler?
No.
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I have been following the development of Wave ever since Hartmut started it. I paid more attention when he first started, but have been keeping an eye on it in the past few years.
* Are you knowledgeable about the problem domain?
To an extent. I know Spirit fairly well, and I originally wrote the c++ lexers Wave uses. Obviously Hartmut has improved them, and Wave actually drove some of the features of SLex.
And finally, every review should answer this question:
* Do you think the library should be accepted as a Boost library?
Yes -- Dan Nuffer

On 02/07/2005 07:48 PM, Tom Brinkman wrote: [snip]
The review candidate version of Wave is accessible here:
The wave/libs/wave/build/Jamfile.v2 file in the above .zip file contains:

    lib boost_wave : $(SOURCES).cpp
        /boost/filesystem//boost_filesystem
        : <toolset>vc-7_1:<rtti>off # workaround for compiler bug
        # Not supported by V2
        # <no-warn>cpp.re.cpp
        ;

but I couldn't find in boost 1.32.0 any Jamfile.v2 in /boost/filesystem. Should this rather be $(BOOST_ROOT)/libs/filesystem///boost_filesystem, since I did find in $(BOOST_ROOT)/libs/filesystem/Jamfile.v2:

    lib boost_filesystem : $(SOURCES).cpp : $(linkage) ;

?

On 02/11/2005 10:33 AM, Larry Evans wrote: [snip]
Should this rather be $(BOOST_ROOT)/libs/filesystem///boost_filesystem since I did find in $(BOOST_ROOT)/libs/filesystem/Jamfile.v2:

These should be:

{=======================================
Should this rather be $(BOOST_ROOT)/libs/filesystem/build//boost_filesystem
since I did find in $(BOOST_ROOT)/libs/filesystem/build/Jamfile.v2:
}========================================

In addition, the zip file's .html files contain links to files in cvs.sourceforge.net instead of other files in the zip. That would be OK if the files were the same; however, in the zip file:

libs/wave/doc/class_reference_context.html#header_synopsis

contains the boldface heading "Header wave/context.hpp synopsis" and, although not apparent in my mozilla browser, the characters "wave/context.hpp" are a link to:

http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/spirit/spirit/wave/wave/cpp_context.hpp?rev=HEAD&content-type=text/vnd.viewcvs-markup

which is confusing since the name is cpp_context.hpp instead of context.hpp as it appears in the heading. Also, the sourceforge cpp_context.hpp contains:

    template <
        typename IteratorT,
        typename TokenT,
        typename InputPolicyT = iteration_context_policies::load_file_to_string,
        typename TraceT = trace_policies::default_tracing
    >
    class context {

yet the zip file cpp_context.hpp contains:

    template <
        typename IteratorT,
        typename LexIteratorT,
        typename InputPolicyT = iteration_context_policies::load_file_to_string,
        typename TraceT = context_policies::default_preprocessing_hooks
    >
    class context {

So which code should we be reviewing?

Larry Evans wrote:
In addition, the zip file's .html files contain links to files in cvs.sourceforge.net instead of other files in the zip. That would be OK if the files were the same; however, in the zip file:
libs/wave/doc/class_reference_context.html#header_synopsis
contains the boldface heading:
Header wave/context.hpp synopsis
and, although not apparent in my mozilla browser, the characters:
wave/context.hpp
are a link to:
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/spirit/spirit/wave/wave/cpp_context.hpp?rev=HEAD&content-type=text/vnd.viewcvs-markup
which is confusing since the name is cpp_context.hpp instead
[snip]
So which code should we be reviewing?
That's a documentation bug! Thanks for reporting. Please review the code in the posted zip here: http://spirit.sourceforge.net/dl_more/wave-1.1.13.zip or (which is equivalent) in the boost-sandbox CVS. The development of Wave in the Spirit CVS is abandoned. Regards Hartmut

Larry Evans wrote:
The wave/libs/wave/build/Jamfile.v2 file in the above .zip file contains:
    lib boost_wave : $(SOURCES).cpp
        /boost/filesystem//boost_filesystem
        : <toolset>vc-7_1:<rtti>off # workaround for compiler bug
        # Not supported by V2
        # <no-warn>cpp.re.cpp
        ;
but I couldn't find in boost 1.32.0 any Jamfile.v2 in /boost/filesystem. Should this rather be $(BOOST_ROOT)/libs/filesystem///boost_filesystem since I did find in $(BOOST_ROOT)/libs/filesystem/Jamfile.v2:
lib boost_filesystem : $(SOURCES).cpp : $(linkage) ; ?
Thanks for noting, I'll look into this. Regards Hartmut

"Tom Brinkman" <reportbase@yahoo.com> wrote
The Wave C++ preprocessor library review begins today February 7, 2005.
The library author is Hartmut Kaiser <hartmutkaiser@t-online.de>
The review manager is Tom Brinkman <reportbase@yahoo.com>
Download at http://spirit.sourceforge.net/dl_more/wave-1.1.12.zip
Here are some questions you might want to answer in your review: * How much effort did you put into your evaluation?
I spent an hour looking at the documentation, and compiled the driver and quickstart apps.
* What is your evaluation of the design?
Overall it seems intuitive and simple. As to how easy it is to customise/extend the rules, which would seem to be the main point, I can't say, however. Tokens and file positions could be reused as part of a compiler. Token groups are useful.
* What is your evaluation of the implementation?
Makes good use of boost components.
* What is your evaluation of the documentation?
Good. Reasonably short, but clarifies the main points of the library.
* What is your evaluation of the potential usefulness of the library?
The potential usefulness would be in experimenting with preprocessor technology, extensions etc., as well as ultimately being part of a C++ compiler suite. Even if reimplementing the rules, much of the nuts-and-bolts stuff, such as parsing includes and the 'list' of tokens, is nice to have off the shelf, and because it reuses boost components it should be quite fast to put something together from this.
* Did you try to use the library? With what compiler?
Tried in VC 7.1
Did you have any problems?
The only problems were with the wave.exe driver app. It appears that both gcc (via DJGPP and cygwin) and VC7.1 have invalid filenames in headers. "Invalid filenames", e.g. "c++config.h", threw exceptions reported from boost::filesystem. Hence no substantial translation units were processed.
* Are you knowledgeable about the problem domain?
As far as the C++ preprocessor goes No.
And finally, every review should answer this question:
* Do you think the library should be accepted as a Boost library?
Yes. regards Andy Little

Andy Little wrote:
Did you have any problems?
The only problems were with the wave.exe driver app. It appears that both gcc (via DJGPP and cygwin) and VC7.1 have invalid filenames in headers. "Invalid filenames", e.g. "c++config.h", threw exceptions reported from boost::filesystem. Hence no substantial translation units were processed.
As you've mentioned, this stems from the Boost.Filesystem library, and actually I don't know how to deal with this problem. But I'll try to investigate this and find a way to circumvent these problems. Perhaps the authors of Boost.Filesystem could help out? Beman?
And finally, every review should answer this question:
* Do you think the library should be accepted as a Boost library?
Yes.
Thanks! Regards Hartmut

"Hartmut Kaiser" <HartmutKaiser@t-online.de> wrote in message news:1D0K8R-1t65Fg0@afwd00.sul.t-online.com...
Andy Little wrote:
Did you have any problems?
The only problems were with the wave.exe driver app. It appears that both gcc (via DJGPP and cygwin) and VC7.1 have invalid filenames in headers. "Invalid filenames", e.g. "c++config.h", threw exceptions reported from boost::filesystem. Hence no substantial translation units were processed.
As you've mentioned, this stems from the Boost.Filesystem library and actually I don't know, how to deal with this problem. But I'll try to investigate this and to find a way to circumvent these problems. Perhaps the authors of Boost.Filesystem could help out? Beman?
It may be possible to use boost::filesystem::path::default_name_check( name_check new_check ) function to achieve this? BTW While on the subject, documentation on errors, messages, exceptions etc is a bit sparse IMO regards Andy Little

Andy Little wrote:
As you've mentioned, this stems from the Boost.Filesystem library and actually I don't know, how to deal with this problem. But I'll try to investigate this and to find a way to circumvent these problems. Perhaps the authors of Boost.Filesystem could help out? Beman?
It may be possible to use boost::filesystem::path::default_name_check( name_check new_check ) function to achieve this?
I'm already using the boost::filesystem::native name checking routine, which on Windows should allow for the '+' symbol. So I'm out of ideas here.
BTW While on the subject, documentation on errors, messages, exceptions etc is a bit sparse IMO
Noted. Will add some text wrt this. Thanks! Regards Hartmut

Tom Brinkman wrote:
The Wave C++ preprocessor library review begins today February 7, 2005.
It's not stated when the review will end. Did I miss anything? I am still reviewing Wave and I do not know how much time I still have. Is there a strict duration for each review, like one week? Regards, -- Joel de Guzman http://www.boost-consulting.com http://spirit.sf.net

On 02/07/2005 07:48 PM, Tom Brinkman wrote:
The Wave C++ preprocessor library review begins today February 7, 2005. * What is your evaluation of the documentation?
The attached review.log file is only part of my evaluation. As time permits, I'll post updates to this review.log file.

On 02/07/2005 07:48 PM, Tom Brinkman wrote:
The Wave C++ preprocessor library review begins today
* What is your evaluation of the design?
Didn't have time to evaluate that.
* What is your evaluation of the implementation?
No time, again.
* What is your evaluation of the documentation?
Needs some work. Details on what work are provided by my previous posts and the current attachment.
* What is your evaluation of the potential usefulness of the library?
It would be very useful.
* Did you try to use the library? With what compiler? Did you have any problems?
Yes, with the g++ compiler, version 3.3.4. However, as noted by other respondents, the boost filesystem doesn't believe the character '+' is a valid filename character; yet some g++ system header filenames include that character. Another problem was that my wave took about 4 minutes while using most of the cpu before aborting when parsing a BOOST_PP_ENUM_PARAMS call.
* How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
A lot into reading the documentation. A fair amount in getting it to work.
* Are you knowledgeable about the problem domain?
Not very.
And finally, every review should answer this question:
* Do you think the library should be accepted as a Boost library? Be sure to say this explicitly so that your other comments don't obscure your overall opinion.
Most definitely, but I'd sure like some help getting wave to work for me. I suspect I've done something wrong since Paul Mensonides has obviously used it to do something more complicated than BOOST_PP_ENUM_PARAMS :(

Larry Evans wrote:
* What is your evaluation of the documentation?
Needs some work. Details on what work provided by my previous posts and the current attachment.
Thanks for these fixes, I'm currently incorporating them into the documentation.
* Did you try to use the library? With what compiler? Did you have any problems?
Yes, with the g++ compiler, version 3.3.4. However, as noted by other respondents, the boost filesystem doesn't believe the character '+' is a valid filename character; yet some g++ system header filenames include that character.
Another problem was that Wave ran for about four minutes, using most of the CPU, before aborting while parsing a BOOST_PP_ENUM_PARAMS call.
I really would like to further investigate this. I plan to build Wave here on my Windows machine with Cygwin to try to reproduce your problem. I'll get back to you as soon as I have any new results.
* Do you think the library should be accepted as a Boost library? Be sure to say this explicitly so that your other comments don't obscure your overall opinion.
Most definitely, but I'd sure like some help getting wave to work for me. I suspect I've done something wrong since Paul Mensonides has obviously used it to do something more complicated than BOOST_PP_ENUM_PARAMS :(
Thanks! Definitely it is usable in more sophisticated contexts, even if it is slower than gcc. Regards Hartmut

Michael Walter wrote:
Thanks! Definitely it is usable in more sophisticated contexts, even if it is slower than gcc.
Apologies if this was already discussed further above: How much slower is wave in your experience than gcc, and do you have ideas where that comes from?
The speed difference depends heavily on the complexity of the code to preprocess; it may range from a factor of 2 up to a factor of 10. Compared to other more or less conformant preprocessors (such as EDG), Wave is comparable or slightly slower in speed for simple macro expansions and way faster in complex cases. The reasons for this are:
A) Even though the reviewed version of Wave is 1.1.13 and the project has been alive for slightly more than 3 years now, this is (from the design point of view) still the initial version. My main goal for this first version was conformity and flexibility, not performance.
B) As I know today, Wave has much room for performance improvement, for instance by using a token class which is more efficient wrt copying (pointing into a symbol table instead of carrying the symbol text itself). Because of Wave's modularity it is very simple to use such a class; essentially this comes down to changing a single template parameter.
C) Paul Mensonides is currently trying to convince me to completely re-implement the macro expansion engine, which along the way should speed things up.
D) As performance analysis showed, Wave currently spends:
- about 30% of the required time parsing (with Spirit) the input for preprocessor directives and parsing/evaluating expressions in #if/#elif directives. This is way too much, and I hope the picture will change when the new version of Spirit becomes available.
- another 35% of the execution time in the flex_string code. This may be changed by avoiding excessive string copying (as I pointed out above, by changing the token class implementation). Another option is to use a more efficient string implementation such as the const_string developed by Maxim.
- about 10% of the time on memory allocation/deallocation. This is tightly connected with the string problem, so I hope to get rid of it as stated above.
- about 15% of the time inside list's and vector's members, which is tightly coupled to the macro expansion process. I hope to lessen this during the rewriting of the macro expansion engine.
E) The overall preprocessing could be sped up by implementing more efficient file lookup techniques, such as file name caching and/or automatic detection of "#pragma once" candidate files (through analysis for include guards).
Hope this answers your questions. Regards Hartmut

Thanks for your thoughtful and detailed answer. I will try to review the library in detail (and check boost.org whether reviews from "non-regulars" matter at all :-), but from what I read on the mailing list it sounds very nice. Regards, Michael

Michael Walter wrote:
Thanks for your thoughtful and detailed answer. I will try to review the library in detail (and check boost.org whether reviews from "non-regulars" matter at all :-),
"Non-regulars" count like everyone else. What matters is the content of your review.
Regards, Michael
Jonathan

Michael Walter wrote:
Thanks for your thoughtful and detailed answer. I will try to review the library in detail (and check boost.org whether reviews from "non-regulars" matter at all :-),
Sure they do!
but from what I read on the mailing list it sounds very nice.
Thanks. Regards Hartmut

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Hartmut Kaiser
C) Paul Mensonides currently tries to convince me to completely re-implement the macro expansion engine, which along the lines should speed up things.
:) Heh heh. If Hartmut does this, which I'm pretty sure he will, macro expansion should be much faster. Note that it isn't slow now (in the EDG sense), it just has a higher "constant factor". Regards, Paul Mensonides

Tom Brinkman wrote:
The Wave C++ preprocessor library review begins today February 7, 2005.
I'm currently looking into this library to see if it is possible to do some source-to-source transformation. For this I need correct line/column information. Using the slex-based lexer, I found that it did not return the correct line/column (end of token instead of start). The following change fixes this issue in cpp_slex_lexer.hpp, slex_functor::get (line ~500):
PositionT const &pos = first.get_position();
=> PositionT pos = first.get_position();
I also noted the following typo in class_reference_contextpolicy.html: "his policy type is used as a template parameter to the wave::context<> object, where the default policy proviedes empty hooks functions only." proviedes => *provides*
I don't have any feedback yet (I'm just getting started), but I thought this little bit might be useful to others. Baptiste Lepilleur.

Baptiste Lepilleur wrote:
I'm currently looking into this library to see if it is possible to do some source-to-source transformation. For this I need correct line/column information. Using the slex-based lexer, I found that it did not return the correct line/column (end of token instead of start). The following change fixes this issue in cpp_slex_lexer.hpp, slex_functor::get (line ~500):
PositionT const &pos = first.get_position();
=> PositionT pos = first.get_position();
Thanks for the fix.
I also noted the following typo in class_reference_contextpolicy.html: "his policy type is used as a template parameter to the wave::context<> object, where the default policy proviedes empty hooks functions only." proviedes => *provides*
Noted. Regards Hartmut
participants (15)
- Andreas Pokorny
- Andy Little
- Baptiste Lepilleur
- Dan Nuffer
- David Abrahams
- Gennadiy Rozental
- Hartmut Kaiser
- Joel de Guzman
- Jonathan Turkanis
- Larry Evans
- Michael Walter
- Paul Mensonides
- Pavel Vozenilek
- Reece Dunn
- Tom Brinkman