New Borland compiler has a conforming preprocessor?

Borland put some work into their preprocessor for the new release, so I have tried running the preprocessor library regression tests with the extra lines:

# elif defined(__BORLANDC__) && __BORLANDC__ >= 0x581
#    define BOOST_PP_CONFIG_FLAGS() (BOOST_PP_CONFIG_STRICT())

So far so good. I am far from a preprocessor expert though, and wonder what other tests I should look at before pushing this update? I believe Borland also supports variadic macros; is there any way to enable this in the PP config? [Not sure about _Pragma support. Anything else I should look for in a C++0x/C99 preprocessor?]

-- AlisdairM

AlisdairM wrote:
I am far from a preprocessor expert though, and wonder what other tests I should look at before pushing this update?
Well, I would not call myself an expert either and I don't know if this post should affect your update plans. Anyway: since you seem to have that new compiler installed I would find it interesting to know whether the "tuple testers" in the detail directory of Boost.Preprocessor work as expected now (they did not work with older versions of the Borland preprocessor):

#include <boost/preprocessor/detail/is_nullary.hpp>
#include <boost/preprocessor/detail/is_unary.hpp>
#include <boost/preprocessor/detail/is_binary.hpp>

BOOST_PP_IS_NULLARY(foo) // should expand to 0
BOOST_PP_IS_UNARY(foo) // should expand to 0
BOOST_PP_IS_BINARY(foo) // should expand to 0

BOOST_PP_IS_NULLARY(()) // should expand to 1
BOOST_PP_IS_UNARY((foo)) // should expand to 1
BOOST_PP_IS_BINARY((foo,bar)) // should expand to 1

Regards,
Tobias

Tobias Schwinger wrote:
Anyway: since you seem to have that new compiler installed I would find it interesting to know whether the "tuple testers" in the detail directory of Boost.Preprocessor work as expected now (they did not work with older versions of the Borland preprocessor):
#include <boost/preprocessor/detail/is_nullary.hpp>
#include <boost/preprocessor/detail/is_unary.hpp>
#include <boost/preprocessor/detail/is_binary.hpp>

BOOST_PP_IS_NULLARY(foo) // should expand to 0
BOOST_PP_IS_UNARY(foo) // should expand to 0
BOOST_PP_IS_BINARY(foo) // should expand to 0

BOOST_PP_IS_NULLARY(()) // should expand to 1
BOOST_PP_IS_UNARY((foo)) // should expand to 1
BOOST_PP_IS_BINARY((foo,bar)) // should expand to 1
AFAICT this is correct now, with the BOOST_PP_CONFIG_STRICT() setting.

-- AlisdairM

AlisdairM wrote:
AFAICT this is correct now, with the BOOST_PP_CONFIG_STRICT() setting.
Further testing shows:

- BCB6 passes all tests in the preprocessor library with the 'strict' config too.
- BCB6 fails your additional test, with and without the 'strict' config.
- BCB2006 passes your additional test.

It looks like we could do with an extra test in the suite?

-- AlisdairM

AlisdairM wrote:
Tobias Schwinger wrote:
Anyway: since you seem to have that new compiler installed I would find it interesting to know whether the "tuple testers" in the detail directory of Boost.Preprocessor work as expected now (they did not work with older versions of the Borland preprocessor):
Tobias hit the nail on the head here. :) The Borland preprocessor is actually quite good. In fact, the separate (non-integrated) command-line preprocessor doesn't have this bug, but until now the integrated preprocessor had a problem in situations like this:

#define CAT(a, b) PRIMITIVE_CAT(a, b)
#define PRIMITIVE_CAT(a, b) a ## b

#define SPLIT(i, im) PRIMITIVE_CAT(SPLIT_, i)(im)
#define SPLIT_0(a, b) a
#define SPLIT_1(a, b) b

#define IS_NULLARY(x) \
    SPLIT(0, CAT(IS_NULLARY_R_, IS_NULLARY_C x)) \
    /**/
//                              ^^^^^^^^^^^^^^
#define IS_NULLARY_C() 1

#define IS_NULLARY_R_1 1, ~
#define IS_NULLARY_R_IS_NULLARY_C 0, ~

The Borland preprocessor would "merge" the marked function-like macro name with whatever follows it. E.g. if 'x' was 'foo', that part would become 'IS_NULLARY_Cfoo'. It is a pretty localized and predictable bug, but it ruins all sorts of detection idioms. Besides this, the rest of the preprocessor was pretty good--no major problems with order of evaluation or skipped steps. If this is fixed, then the preprocessor should be able to use the strict configuration without any problems, because this was the only thing that the Borland configuration was working around.

There doesn't appear to be a way that I can get my hands on the new preprocessor, so I can't do more extensive testing. However, if you're willing, I can give you a stable copy of Chaos (much more advanced), and you can try some of the more advanced examples in the documentation.

Regarding variadic macros... The pp-lib is not designed to support variadics, so there is no option to enable such support. Chaos, however, is built from the ground up to support them. The differences that matter between the current C++ and C99 preprocessors are variadic macros and placemarkers. Placemarkers are a fancy way of making empty arguments (no preprocessing tokens) work properly with concatenation and stringizing. E.g.

#define MACRO(x) abc ## x +

MACRO()

Because the argument to 'x' is empty, when MACRO gets expanded, the preprocessor must substitute 'x' in the replacement list with a placemarker:

abc ## <placemarker> +

so that token-pasting works like it should (pasting 'abc' with nothing, leaving 'abc +'):

abc ## <placemarker> +
|__________________|
         |

rather than pasting 'abc' with '+':

abc ## +
|______|
   |

Regards,
Paul Mensonides
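To make the detection idiom concrete, here is a rough expansion trace of the example above -- a sketch of my own, following the standard's expansion order:

// IS_NULLARY(()):
//   '(' follows IS_NULLARY_C, so IS_NULLARY_C () expands to 1;
//   CAT(IS_NULLARY_R_, 1) -> IS_NULLARY_R_1 -> 1, ~
//   SPLIT's second argument is now '1, ~', so it becomes
//   SPLIT_0(1, ~) -> 1
//
// IS_NULLARY(foo):
//   no '(' follows IS_NULLARY_C, so it does not expand;
//   CAT pastes only the adjacent identifier:
//   IS_NULLARY_R_ ## IS_NULLARY_C foo -> IS_NULLARY_R_IS_NULLARY_C foo
//     -> 0, ~ foo
//   SPLIT_0(0, ~ foo) -> 0
//
// Old Borland merged 'IS_NULLARY_C foo' into 'IS_NULLARY_Cfoo', so the
// paste produced 'IS_NULLARY_R_IS_NULLARY_Cfoo' -- not a macro -- and the
// whole detection collapsed.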

Paul Mensonides wrote:
There doesn't appear to be a way that I can get my hands on the new preprocessor, so I can't do more extensive testing. However, if you're willing, I can give you a stable copy of Chaos (much more advanced), and you can try some of the more advanced examples in the documentation.
I'm game! Even better if you have any test suite in there yet. I'm happy to copy examples out of the docs, but my experience with PP tells me I'm happiest leaving the examples to the experts, for now! ;?)

[I have actually used the PP lib in a couple of places in our production code, mainly because my lack of PP knowledge was making simple-looking problems really hard. The PP library is a godsend in these circumstances!]

-- AlisdairM

Hi Paul,

Here are two questions regarding the BOOST_PP_IS_*ARY macros:

My favourite use case is to allow macros to accept "user-friendly optional data structures". E.g.:

#define DO_UNARY(macro,x) \
    BOOST_PP_IIF(BOOST_PP_IS_UNARY(x),macro,BOOST_PP_TUPLE_EAT(1))(x)
#define DO_BINARY(macro,x) \
    BOOST_PP_IIF(BOOST_PP_IS_BINARY(x),macro,BOOST_PP_TUPLE_EAT(1))(x)

DO_UNARY(A_MACRO,(elem)) // passes '(elem)' to A_MACRO
DO_UNARY(A_MACRO,-) // expands to nothing
DO_BINARY(ANOTHER_MACRO,(x,y)) // passes '(x,y)' to ANOTHER_MACRO
DO_BINARY(ANOTHER_MACRO,-) // expands to nothing

or (the inverse logic for "non-generic scalar" arguments)

#define OPTIONAL(x) \
    BOOST_PP_IIF(BOOST_PP_IS_UNARY(x),BOOST_PP_TUPLE_EAT(1), \
        BOOST_PP_TUPLE_REM(1))(x)

OPTIONAL(foo) // expands to 'foo'
OPTIONAL( (none) ) // expands to nothing

Is it safe to use this technique with sequences, e.g.

#define OPT_SEQ_CAT(seq) \
    BOOST_PP_IIF(BOOST_PP_IS_UNARY(seq),BOOST_PP_SEQ_CAT, \
        BOOST_PP_TUPLE_EAT(1))(seq)

OPT_SEQ_CAT(-) // expands to nothing
OPT_SEQ_CAT((b)(o)(o)(s)(t)) // expands to 'boost'

?! And why do these components live in 'detail' (as opposed to 'tuple')? Because of Borland?

Thanks,
Tobias

( Includes to preprocess the code in this post:

#include <boost/preprocessor/seq/cat.hpp>
#include <boost/preprocessor/tuple/eat.hpp>
#include <boost/preprocessor/tuple/rem.hpp>
#include <boost/preprocessor/control/iif.hpp>
#include <boost/preprocessor/detail/is_unary.hpp>
#include <boost/preprocessor/detail/is_binary.hpp>
)

Tobias Schwinger wrote:
Here are two questions regarding the BOOST_PP_IS_*ARY macros:
Is it safe to use this technique with sequences, e.g.
#define OPT_SEQ_CAT(seq) \
    BOOST_PP_IIF(BOOST_PP_IS_UNARY(seq),BOOST_PP_SEQ_CAT, \
        BOOST_PP_TUPLE_EAT(1))(seq)
OPT_SEQ_CAT(-) // expands to nothing
OPT_SEQ_CAT((b)(o)(o)(s)(t)) // expands to 'boost'
?!
Yes--with regard to what I say below. Those macros detect "parenthetic expressions"--which are non-pathological (see below) sequences of preprocessing tokens that begin with something that's parenthesized. E.g. these also work:

IS_UNARY((+) -)
IS_UNARY((+)(not, unary))

What compilers/preprocessors do you usually use?
And why do these components live in 'detail' (as opposed to 'tuple')? Because of Borland?
Because of Borland and others. The problem with these macros is that they don't work on several supported compilers. On others, they work, but only sometimes (IOW, they are unstable). In the latter case, it isn't just a simple case of "if you pass it this, it doesn't work". Rather, it is the result of fundamentally broken preprocessors that do things in weird orders (probably as some sort of optimization scheme). The library can get away with using them internally because the library is doing its best to force certain things to happen "nearby" where they should happen--and it's doing it all over the entire library (look at the VC configuration, for example). The result for client use is highly unpredictable because client code doesn't contain the scaffolding (the pp-lib bends over backward to minimize this in user code). On reasonably good preprocessors, this isn't a problem at all, and using them is perfectly safe and predictable.

-----

Regarding your DO_UNARY-esque macros... They would be more general if they were something like:

#define DO_UNARY(macro, x) \
    BOOST_PP_IIF(BOOST_PP_IS_UNARY(x),macro,BOOST_PP_TUPLE_EAT(1)) \
    /**/

DO_UNARY(macro, x)(x)

Even though it is more verbose, it keeps the actual invocation of 'macro' out of DO_UNARY (so-to-speak). The way you have it will fail if 'macro' tries to use DO_UNARY. This isn't a big deal at a small scale, but it's what I call a "vertical dependency". Vertical dependencies drastically lower the encapsulation of macros, which creates difficult-to-diagnose problems. The way that I have it won't fail if 'macro' tries to use DO_UNARY.

-----

BTW, with variadics, you can do all kinds of fun things related to optional/default arguments or overloading based on number of arguments. Some simple examples...

#define A_1(a) -a
#define A_2(a, b) a - b
#define A_3(a, b, c) a - b - c

#define A(...) \
    CHAOS_PP_QUICK_OVERLOAD(A_, __VA_ARGS__)(__VA_ARGS__) \
    /**/

A(1) // -1
A(1, 2) // 1 - 2
A(1, 2, 3) // 1 - 2 - 3

The QUICK_OVERLOAD macro is a constant-time operation, but is limited to something like 25 arguments. There is also an OVERLOAD macro that doesn't have that limitation, but isn't constant-time. (To be technically accurate, it counts n arguments in floor(n / 10) + n mod 10 + 2 steps.)

-----

// B(a, b = 123) -> a + b
#define B(...) \
    CHAOS_PP_NON_OPTIONAL(__VA_ARGS__) \
    + CHAOS_PP_DEFAULT(123, __VA_ARGS__) \
    /**/

B(1, 2) // 1 + 2
B(67) // 67 + 123

Note that there is no difference (because of placemarkers) between 0 and 1 arguments. Furthermore, emptiness cannot be detected without restricting input (not counting what I call pathological input--which is something like passing just LPAREN()). Because of that, optional arguments (with or without default values) must always be "attached" to a non-optional argument--hence the name NON_OPTIONAL. Beyond that, you can have any number of optional or default arguments, but there must always be one that is required--which is fine for the majority of circumstances.

// M(x, y = 2, z = 3) -> x + y + z
#define M(...) \
    CHAOS_PP_NON_OPTIONAL(__VA_ARGS__) \
    + CHAOS_PP_DEFAULT_AT(0, 2, __VA_ARGS__) \
    + CHAOS_PP_DEFAULT_AT(1, 3, __VA_ARGS__) \
    /**/

M(a) // a + 2 + 3
M(a, b) // a + b + 3
M(a, b, c) // a + b + c

The library itself uses optional arguments in a variety of places:

CHAOS_PP_AUTO_REPEAT(3, A, data) // A(s, 0, data) A(s, 1, data) A(s, 2, data)
CHAOS_PP_AUTO_REPEAT(3, B) // B(s, 0) B(s, 1) B(s, 2)
CHAOS_PP_AUTO_REPEAT(3, C, d1, d2) // C(s, 0, d1, d2) C(s, 1, d1, d2) C(s, 2, d1, d2)

(etc.)

As referred to above, the 'count' and 'macro' arguments are required, but the algorithm makes the auxiliary data argument optional (as well as variadic), which it propagates on to the called macro.

Regards,
Paul Mensonides
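The vertical dependency Paul describes can be shown with a small sketch of my own (the Boost.PP names are real; INNER and NESTED are made-up):

#include <boost/preprocessor/control/iif.hpp>
#include <boost/preprocessor/detail/is_unary.hpp>
#include <boost/preprocessor/tuple/eat.hpp>

// Tobias's form: 'macro' is invoked inside DO_UNARY's own expansion.
#define DO_UNARY(macro, x) \
    BOOST_PP_IIF(BOOST_PP_IS_UNARY(x), macro, BOOST_PP_TUPLE_EAT(1))(x)

#define INNER(x) got x
#define NESTED(x) DO_UNARY(INNER, x)

DO_UNARY(INNER, (a))  // fine: expands to 'got (a)'
DO_UNARY(NESTED, (a)) // broken: NESTED((a)) is expanded while DO_UNARY is
                      // still being rescanned, so the DO_UNARY inside it is
                      // "painted blue" and emitted literally, unexpanded

As Paul notes, his two-part form avoids this because the trailing '(x)' invocation is written outside DO_UNARY's replacement list, after DO_UNARY's own rescan is finished.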

Hi Paul,

thanks for your detailed reply.

Paul Mensonides wrote:
Tobias Schwinger wrote:
Here are two questions regarding the BOOST_PP_IS_*ARY macros:
Is it safe to use this technique with sequences, e.g.
[...]
Yes--with regard to what I say below. Those macros detect "parenthetic expressions"--which are non-pathological (see below) sequences of preprocessing tokens that begin with something that's parenthesized. E.g. these also work:

IS_UNARY((+) -)
IS_UNARY((+)(not, unary))
Non-pathological means "for every opening parenthesis there's a closing one" (I guess it applies at least to the outer nesting level)? Correct?
What compilers/preprocessors do you usually use?
I'm interested in writing compiler-independent, portable C++ code that works in practice ;-)...
And why do these components live in 'detail' (as opposed to 'tuple')? Because of Borland?
Because of Borland and others. The problem with these macros is that they don't work on several supported compilers. On others, they work, but only sometimes (IOW, they are unstable). In the latter case, it isn't just a simple case of "if you pass it this, it doesn't work".
Do these problems (besides the ones with BCC) apply to pathological token sequences only?
Rather, it is the result of fundamentally broken preprocessors that do things in weird orders (probably as some sort of optimization scheme). The library can get away with using them internally because the library is doing its best to force certain things to happen "nearby" where they should happen--and it's doing it all over the entire library (look at the VC configuration, for example). The result for client use is highly unpredictable because client code doesn't contain the scaffolding (the pp-lib bends over backward to minimize this in user code).
The only two places where is_*ary seem to be used are:

list/adt.hpp
facilities/apply.hpp

I can't see any scaffolding in the VC case there. Am I looking in the wrong spot?
On reasonably good preprocessors, this isn't a problem at all, and using them is perfectly safe and predictable.
Would you mind listing the "reasonably good" ones (that is, if there are more on the list than GNU and Wave)?
Regarding your DO_UNARY-esque macros... They would be more general if they were something like: [...]
Sorry for picking over-generalized examples (in fact, there was no intention to provide a general solution). They were intended to illustrate why these IS_*ARY macros would be great for (very dedicated and non-generic ;-) ) user code. I'll leave the design of generic preprocessor components to the experts (although I definitely feel fascinated by their implementation details).
BTW, with variadics, you can do all kinds of fun things related to optional/default arguments or overloading based on number of arguments. Some simple examples...
[... code]
Amazing! Of course I wouldn't ask questions about using IS_*ARY for optional arguments if all preprocessors supported these extensions. Until then I will hopefully find the time for an in-depth study of Chaos!

BTW, are both variadic macros and overloading part of C99?

Thanks,
Tobias

Tobias Schwinger wrote:
Yes--with regard to what I say below. Those macros detect "parenthetic expressions"--which are non-pathological (see below) sequences of preprocessing tokens that begin with something that's parenthesized. E.g. these also work:

IS_UNARY((+) -)
IS_UNARY((+)(not, unary))
Non-pathological means "for every opening parenthesis there's a closing one" (I guess it applies at least to the outer nesting level)? Correct?
Well, for all nesting levels really. Many arguments to library macros are typed in the sense that they must be within some domain of input, e.g. the number of repetitions, a macro compatible with a particular signature, etc. For those arguments, the input must correspond to the domain. Other arguments are untyped with a few domain restrictions (not yet getting into pathological input). IS_UNARY's argument is one of these--you can pass anything to it except a parenthetic expression of a different arity. E.g. IS_UNARY((1, 2)) is illegal input and will cause an error. Other arguments are untyped with no restrictions--except pathological input. Pathological input consists of passing half-open parentheses (only possible through indirect macros like LPAREN and RPAREN) and commas in certain situations. These tokens are functional operators to the preprocessor, and they can foul up all of the internals of a macro definition. E.g.

#define A(m, args) m(args)

A(MACRO, RPAREN()) // -> m()) -> probably an error
A(MACRO, LPAREN()) // -> m(() -> probably an error
A(MACRO, COMMA())  // -> m(,) -> probably an error

That is what I mean by pathological input, and you can pass it at any level of scanning. Lastly, some arguments are untyped and have no restrictions at all--including pathological input. These are a precious few, however--e.g.

#define ID(x) x

ID(LPAREN())

In general, because of the way that the preprocessor works, you simply cannot design macros to deal with pathological input.
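For reference, the indirect macros mentioned above are just parenthesis and comma "factories" (Boost.PP ships equivalents such as BOOST_PP_LPAREN(); this is the obvious minimal sketch):

#define LPAREN() (
#define RPAREN() )
#define COMMA()  ,

// Written literally, '(' ')' and ',' are argument punctuation and are
// consumed while the enclosing call's arguments are being delimited.
// Hidden behind a function-like macro, they only appear after that
// delimiting has already happened -- which is what makes them pathological.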
What compilers/preprocessors do you usually use?
I'm interested in writing compiler-independent, portable C++ code that works in practice ;-)...
Then I'd avoid directly using those macros. They are perfectly legal, but even though they should work portably and consistently, they don't.
but only sometimes (IOW, they are unstable). In the latter case, it isn't just a simple case of "if you pass it this, it doesn't work".
Do these problems (besides the ones with BCC) apply to pathological token sequences only?
No, it is more like how a particular sequence of tokens is constructed. I'll try to explain what I'm referring to better... Say you have some token sequence '()' and you pass it to IS_NULLARY, which ultimately results in '1' (as it should). Say then you have some other macro M that ultimately results in '()'. You test this macro, and it works. Then you try IS_NULLARY(M()) and that works. Then, you have yet another macro N that also ultimately results in '()' (and it does what it should), but when you try to pass it to IS_NULLARY(N()), it suddenly doesn't work--and not because of particular names being disabled (i.e. it works fine on a conformant preprocessor). It is suddenly related to *how* a particular sequence of tokens is constructed. Some preprocessors seem to do things in strange orders or delay doing certain things until they *think* the results are necessary, as some sort of optimization. Because the assumptions about how these changes will affect accurate preprocessing results are wrong, it causes all kinds of problems. That is *precisely* the problem with these macros on several popular preprocessors.
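A hypothetical illustration of the difference (M and N are made-up names; both reach '()', but N does so through an extra expansion step):

#define EMPTY()

#define M() ()          // '()' sits literally in the replacement list
#define N() ( EMPTY() ) // the contents only become empty after another scan

IS_NULLARY(())    // 1 everywhere
IS_NULLARY(M())   // 1 on a conforming preprocessor
IS_NULLARY(N())   // also 1 on a conforming preprocessor, but preprocessors
                  // that reorder or defer expansion may get this one wrong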
VC configuration, for example). The result for client use is highly unpredictable because client code doesn't contain the scaffolding (the pp-lib bends over backward to minimize this in user code).
The only two places where is_*ary seem to be used are:
list/adt.hpp
facilities/apply.hpp
I can't see any scaffolding in the VC case there. Am I looking in the wrong spot?
I'm referring to the sorts of manipulation that these macros do more than the macros themselves. Part of why they aren't used more often is because they can be unstable, and part of it is because the "other way to do it that requires more macros" is already in place because of preprocessors where they don't work at all.

Take a look at 'cat.hpp'. It is a good example of the kind of scaffolding required for VC++ and Metrowerks (before they rewrote their preprocessor). That scaffolding is not really about making CAT work properly; it is about making 1) input to CAT work properly and 2) input that uses CAT work properly as input to other macros--such as IS_UNARY. The library does this kind of thing all over the place, and it has a kind of stabilizing effect on the net result (though it isn't a complete solution). Client code, however, doesn't do this (and it shouldn't have to--neither should the library, really). How stable the library is in general on horribly broken preprocessors (like VC++) is a testament to how much effort the library puts into forcing a degree of stabilization.
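The scaffolding in question looks roughly like this (paraphrased from Boost.PP's 'cat.hpp'; the real file selects the branch via the library's configuration flags rather than this hypothetical BROKEN_PP test):

#define BOOST_PP_CAT(a, b) BOOST_PP_CAT_I(a, b)

#if !BROKEN_PP
#    define BOOST_PP_CAT_I(a, b) a ## b
#else
// Extra indirection: the paste happens in CAT_I, and CAT_II forces an
// immediate rescan of the result, pinning the expansion "nearby" rather
// than leaving it to a preprocessor that defers work in odd orders.
#    define BOOST_PP_CAT_I(a, b) BOOST_PP_CAT_II(a ## b)
#    define BOOST_PP_CAT_II(res) res
#endif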
On reasonably good preprocessors, this isn't a problem at all, and using them is perfectly safe and predictable.
Would you mind listing the "reasonably good" ones (that is, if there are more on the list than GNU and Wave)?
Wave
Unicals
GCC
EDG's new preprocessor
Digital Mars' new preprocessor
Metrowerks' new preprocessor
(probably Borland's new preprocessor)

...and there are others that I've seen embedded in other source processing tools, and I might be forgetting a few off the top of my head. This list doesn't mean that these preprocessors are perfect, but they are generally pretty good. EDG's new preprocessor is not yet available in a version of Comeau (I have a prerelease beta), and I'm not sure if it is available in a new release of Intel's compiler. I have a prerelease beta that corrects several issues with Digital Mars' new preprocessor, but I don't know if those corrections have made it into an official release yet.
Regarding your DO_UNARY-esque macros... They would be more general if they were something like: [...]
Sorry for picking over-generalized examples (in fact, there was no intention to provide a general solution). They were intended to illustrate why these IS_*ARY macros would be great for (very dedicated and non-generic ;-) ) user code.
Yes.
BTW, with variadics, you can do all kinds of fun things related to optional/default arguments or overloading based on number of arguments. Some simple examples...
[... code]
Amazing! Of course I wouldn't ask questions about using IS_*ARY for optional arguments if all preprocessors supported these extensions. Until then I will hopefully find the time for an in-depth study of Chaos!
Be warned--the implementation of much of Chaos is extremely complex. It isn't difficult to use (in most cases), however. The parts where it can be difficult to use are the parts that are exposed as interfaces that allow users to construct components similar to those provided by the library itself. I.e. the philosophy is something like, "If component XYZ is generalizable enough to be used by the library itself in multiple places, then it should be exposed as a proper interface so that client code can use it if desired." The detection macros like IS_UNARY are perfect examples of that--they are all proper, documented library interfaces (though there are other things that are far more complex). In other words, Chaos doesn't just provide you with tools that have complex implementations but solve simple or common tasks; it also provides you with tools to make the complex implementations themselves (Chaos reuses itself a lot, which is something that the pp-lib doesn't do).
BTW. are both variadic macros and overloading part of C99?
Variadic macros and placemarkers are part of C99 (and part of the next C++ standard when it is released). Overloading can be simulated with those things--which is what Chaos does. Even without them, however, Chaos is still significantly more powerful than the pp-lib, and with them, it is significantly more powerful than without.

The downside of Chaos is that it cannot be implemented without a very good preprocessor and therefore isn't as portable as the limited subset of possible functionality provided by the Boost pp-lib. Things are improving, slowly but surely--just look at the number of "new preprocessor" instances in the list of preprocessors above. Plus, both Wave and Unicals are relatively new. For Boost, there are three significant hurdles left: Microsoft's, Sun's, and IBM's preprocessors. VC++ is *by far* the biggest hurdle and the most broken "modern" preprocessor that I've seen (Metrowerks used to hold that honor). Though VC++ is probably the most popular compiler, the significance of Boost's support for Sun and IBM can't be ignored. If at some point Chaos or something like it (say, an upgraded Boost.PP) were to become part of Boost, no Boost libraries could really use it until VC++ is fixed, because supporting VC++ is too important.

Regards,
Paul Mensonides
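For the curious, overloading on argument count can be simulated with the classic C99 counting idiom -- a sketch of mine, not Chaos's actual (more general) implementation, and capped at five arguments here:

#define CAT(a, b) PRIMITIVE_CAT(a, b)
#define PRIMITIVE_CAT(a, b) a ## b

// Slide the arguments past a reversed count and read off the survivor:
#define NUM_ARGS(...) NUM_ARGS_I(__VA_ARGS__, 5, 4, 3, 2, 1,)
#define NUM_ARGS_I(a, b, c, d, e, n, ...) n

#define OVERLOAD(prefix, ...) CAT(prefix, NUM_ARGS(__VA_ARGS__))

#define F_1(a) -a
#define F_2(a, b) a - b
#define F(...) OVERLOAD(F_, __VA_ARGS__)(__VA_ARGS__)

F(1)    // expands to -1
F(1, 2) // expands to 1 - 2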

Paul Mensonides wrote:
[...] That is what I mean by pathological input, and you can pass it at any level of scanning. Lastly, some arguments are untyped and have no restrictions at all--including pathological input. These are a precious few, however--e.g.
OK. Definition understood.
I'm interested in writing compiler-independent, portable C++ code that works in practice ;-)...
Then I'd avoid directly using those macros. They are perfectly legal, but even though they should work portably and consistently, they don't.
<snip>
No, it is more like how a particular sequence of tokens is constructed. I'll try to explain what I'm referring to better... Say you have some token sequence '()' and you pass it to IS_NULLARY, which ultimately results in '1' (as it should). Say then you have some other macro M that ultimately results in '()'. You test this macro, and it works. Then you try IS_NULLARY(M()) and that works. Then, you have yet another macro N that also ultimately results in '()' (and it does what it should), but when you try to pass it to IS_NULLARY(N()), it suddenly doesn't work--and not because of particular names being disabled (i.e. it works fine on a conformant preprocessor). It is suddenly related to *how* a particular sequence of tokens is constructed. Some preprocessors seem to do things in strange orders or delay doing certain things until they *think* the results are necessary, as some sort of optimization. Because the assumptions about how these changes will affect accurate preprocessing results are wrong, it causes all kinds of problems. That is *precisely* the problem with these macros on several popular preprocessors.
So things may break (for odd reasons) if the macro arguments are in turn macros. Guessing from the Boost.Preprocessor source, the IS_*ARY macros seem to /basically/ work unless I'm using Borland. So if the input is intended to be fed by the user directly (without more layers of PP metaprogramming or any macros in it), things would be reasonably safe, wouldn't they?
Take a look at 'cat.hpp'. [...]
Aha - interesting! I've wondered before what this code does... So it's about stabilizing the expansion order.
Be warned--the implementation of much of Chaos is extremely complex. It isn't difficult to use (in most cases), however. The parts where it can be difficult to use are the parts that are exposed as interfaces that allow users to construct components similar to those provided by the library itself. I.e. the philosophy is something like, "If component XYZ is generalizable enough to be used by the library itself in multiple places, then it should be exposed as a proper interface so that client code can use it if desired."
Sounds like these parts will probably be the most fascinating ones to explore. The code looks extremely beautiful, but I haven't had the time to decode all the idioms involved to really understand it yet ;-(.

Thanks again for your reply. If there were a "most informative list member award" you'd most certainly be one of the top candidates ;-).

Regards,
Tobias

Tobias Schwinger wrote:
So things may break (for odd reasons) if the macro arguments are in turn macros.
*sometimes* It doesn't appear to be predictable in the general case. It should be safe enough if the input is literally (e.g.) (unary) or not a unary parenthetic expression. E.g.

#define A(x) IS_UNARY(x)

A((...))
A(~)

Something like this should consistently work. Something like

A(MACRO())
IS_UNARY(MACRO())

might or might not.
Guessing from the Boost.Preprocessor source, the IS_*ARY macros seem to /basically/ work unless I'm using Borland.
Or IBM or Sun, which use the Borland configuration. (I don't even think those macros exist in the Borland configuration!?)
So if the input is intended to be fed by the user directly (without more layers of PP metaprogramming or any macros in it), things would be reasonably safe, wouldn't they?
Yes, I think so.
Take a look at 'cat.hpp'. [...]
Aha - interesting! I've wondered before what this code does... So it's about stabilizing the expansion order.
Yeah, it's a hack that is broadly applied to the library.
philosophy is something like, "If component XYZ is generalizable enough to be used by the library itself in multiple places, then it should be exposed as a proper interface so that client code can use it if desired."
Sounds like these parts will probably be the most fascinating ones to explore.
To me they are--they are often the ones that do the gross manipulation. To me, the important thing about Chaos is not the functionality presented to some client, but rather the idioms and techniques that it contains.
The code looks extremely beautiful, but I haven't had the time to decode all the idioms involved to really understand it yet ;-(.
If you want to understand it, feel free to ask me. Some things are very hard to understand without a little help and background.
Thanks again for your reply. If there were a "most informative list member award" you'd most certainly be one of the top candidates ;-).
Thanks. I guess all you need to do is to master a really esoteric field!

Regards,
Paul Mensonides

Tobias Schwinger wrote:
Take a look at 'cat.hpp'. [...]
Aha - interesting! I've wondered before what this code does... So it's about stabilizing the expansion order.
The EDG configuration, in case you're wondering, isn't working around any semantic problems or bugs. Older versions of the EDG preprocessor--like the one in Comeau's current release--have a significant flaw that prevents Chaos from working (fixed in newer versions). However, the Boost pp-lib doesn't exploit techniques related to it, and so the EDG configuration isn't doing a bug workaround per se. Rather, it is a speed workaround--and a strange one. Basically, it just adds delays. E.g. instead of

#define A(x) B(C(x))

...it does...

#define A(x) A2(x)
#define A2(x) B(C(x))

This "extra" macro, used consistently in places where arguments contain macro expansions, drastically improves the speed of that preprocessor (not that it improves the speed enough...). I don't know exactly why, but it apparently has something to do with the data structures that EDG was using during macro expansion. The more you know... :)

Regards,
Paul Mensonides

On Sat, 21 Jan 2006 15:24:42 +0100, Tobias Schwinger <tschwinger@neoscientists.org> wrote:
AlisdairM wrote:
I am far from a preprocessor expert though, and wonder what other tests I should look at before pushing this update?
Well, I would not call myself an expert either and I don't know if this post should affect your update plans. Anyway: since you seem to have that new compiler installed I would find it interesting to know whether the "tuple testers" in the detail directory of Boost.Preprocessor work as expected now (they did not work with older versions of the Borland preprocessor):
#include <boost/preprocessor/detail/is_nullary.hpp>
#include <boost/preprocessor/detail/is_unary.hpp>
#include <boost/preprocessor/detail/is_binary.hpp>

BOOST_PP_IS_NULLARY(foo) // should expand to 0
BOOST_PP_IS_UNARY(foo) // should expand to 0
BOOST_PP_IS_BINARY(foo) // should expand to 0

BOOST_PP_IS_NULLARY(()) // should expand to 1
BOOST_PP_IS_UNARY((foo)) // should expand to 1
BOOST_PP_IS_BINARY((foo,bar)) // should expand to 1
Regards,
Tobias
After some adjusting of boost/config/borland.hpp, and adding the suggested test, these are the results of config_info (nullary, unary, and binary results at the end):

#Unknown ISO C++ Compiler
# __BORLANDC__ =0x0581
# __CDECL__ =1
# __CONSOLE__ =1
# _CPPUNWIND =1
# __cplusplus =1
# __FLAT__ =1
# __FUNC__ ="print_compiler_macros"
# _M_IX86 =500
# __TLS__ =1
# _WCHAR_T [no value]
# _Windows =1
# __WIN32__ =1
# _WIN32 =1
# _RTLDLL [no value]
# _WCHAR_T_DEFINED [no value]
# _DLL =1
# __i386__ =1
# _WCHAR_T [no value]
# __STDC_HOSTED__ =1
#
#*********************************************************************
#
#Dinkumware standard library version 402
# _CPPLIB_VER =402
# _HAS_EXCEPTIONS =1
#
#*********************************************************************
#
#Detected Platform: Win32
# Type char is signed
# Type wchar_t is unsigned
# byte order for type short =0 8
# byte order for type int =0 8 16 24
# byte order for type long =0 8 16 24
# sizeof(wchar_t) =2
# sizeof(short) =2
# sizeof(int) =4
# sizeof(long) =4
# sizeof(size_t) =4
# sizeof(ptrdiff_t) =4
# sizeof(void*) =4
# sizeof(void(*)(void)) =4
# sizeof(float) =4
# sizeof(double) =8
# sizeof(long double) =10
# CHAR_BIT =8
# CHAR_MAX =127
# WCHAR_MAX =0x7fff
# SHRT_MAX =32767
# INT_MAX =2147483647L
# LONG_MAX =2147483647L
# __STDC_IEC_559__ =1
# __STDC_IEC_559_COMPLEX__ =1
# __STDC_ISO_10646__ =200009L
#
#*********************************************************************
#
#Boost version 103301
# BOOST_USER_CONFIG =<boost/config/user.hpp>
# BOOST_COMPILER_CONFIG ="boost/config/compiler/borland.hpp"
# BOOST_STDLIB_CONFIG ="boost/config/stdlib/dinkumware.hpp"
# BOOST_PLATFORM_CONFIG ="boost/config/platform/win32.hpp"
# BOOST_BCB_PARTIAL_SPECIALIZATION_BUG [no value]
# BOOST_DEDUCED_TYPENAME =typename
# BOOST_FUNCTION_SCOPE_USING_DECLARATION_BREAKS_ADL [no value]
# BOOST_HAS_DECLSPEC [no value]
# BOOST_HAS_FTIME [no value]
# BOOST_HAS_PARTIAL_STD_ALLOCATOR [no value]
# BOOST_MSVC6_MEMBER_TEMPLATES [no value]
# BOOST_NESTED_TEMPLATE =template
# BOOST_NO_DEPENDENT_NESTED_DERIVATIONS [no value]
# BOOST_NO_FUNCTION_TEMPLATE_ORDERING [no value]
# BOOST_NO_HASH [no value]
# BOOST_NO_INTEGRAL_INT64_T [no value]
# BOOST_NO_IS_ABSTRACT [no value]
# BOOST_NO_LONG_LONG_NUMERIC_LIMITS [no value]
# BOOST_NO_MEMBER_TEMPLATE_FRIENDS [no value]
# BOOST_NO_MS_INT64_NUMERIC_LIMITS [no value]
# BOOST_NO_PRIVATE_IN_AGGREGATE [no value]
# BOOST_NO_SFINAE [no value]
# BOOST_NO_SLIST [no value]
# BOOST_NO_SWPRINTF [no value]
# BOOST_NO_USING_TEMPLATE [no value]
# BOOST_STD_EXTENSION_NAMESPACE =std
# BOOST_UNREACHABLE_RETURN(0) [no value]
#
#nullary(foo):0
#unary(foo):0
#binary(foo):0
#nullary(()):1
#unary((foo)):1
#binary((foo,bar)):1

If I understand it right, Borland has corrected the preprocessor.

Best regards,

Zara

Sebastian Redl wrote:
This sounds like a contradiction. If wchar_t is unsigned, then the max value should be 0x7ffff.
It appears wchar_t is defined to use only the first 15 bits on this compiler (assuming you intended to write 0xffff), as the min value is zero, not a big negative. This was the case with BCB6 too; I don't have a copy of BCB5 to hand, but I guess it has always been the case on this platform.

-- AlisdairM

AlisdairM wrote:
Borland put some work into their preprocessor for the new release, so I have tried running the preprocessor library regression tests with the extra lines:
# elif defined(__BORLANDC__) && __BORLANDC__ >= 0x581
#    define BOOST_PP_CONFIG_FLAGS() (BOOST_PP_CONFIG_STRICT())
Based on the results of your tests with IS_NULLARY (etc.), I've committed this to the CVS, so the library should pick up the new version automatically.

Regards,
Paul Mensonides
participants (5)
- AlisdairM
- Paul Mensonides
- Sebastian Redl
- Tobias Schwinger
- Zara