[preprocessor] Variadics suggestion

Hi Folks,

First of all, I'd like to say "thank you" to Edward Diener and Paul Mensonides for incorporating variadics support into Boost.Preprocessor, which has been on my wishlist for a long time.

In fact, I wanted it badly enough that I've been using my own slightly modified version of the library assuming variadics for a while now, and I was interested to see what a cross-platform version would look like (mine was only tested on MSVC). I think the chosen approach of converting variadics to non-variadics is probably superior to the breaking algorithm changes I made in my own version, and I would humbly suggest a few more additions in the same spirit.

The reason I chose to use variadics in the first place was to simplify the user interface for the common case of manipulating what I'll call a "variadic sequence" (a series of concatenated tuples), which I wanted to look similar to the following:

#define FILTER_PARAMETERS (A, int, 3)\
                          (B, float, 2.0f)\
                          (C, const char*, "Howdy!")

Of course, without variadics you need either a set of macros for each tuple size, or else double parentheses around each tuple, which becomes even more burdensome when users are also required to parenthesize inner arguments, leading to gems like this one:

#define FILTER_PARAMETERS ((A, (TypeConverter<double, float>), (TypeConverter<double,float>())))

My solution admitting the preferred format was to replace most private SEQ_*(X) macros with SEQ_*(...), which also required modifying SEQ_ELEM() and SEQ_HEAD() to return (X) instead of X. However, it occurs to me now that a conversion function to go from variadic to regular sequences would have been easier to implement and backwards compatible; see the attached file, which I've tested in basic cases on MSVC 10 and gcc-4.3.4. The usage would be:

#define SEQ (a,b,c)(1,2,3,4)
BOOST_PP_VARIADIC_SEQ_TO_SEQ( SEQ ) // expands to ((a,b,c))((1,2,3,4))

I'm submitting this because I couldn't find a way to use some combination of TUPLE/VARIADIC and SEQ transformations to do the same thing succinctly, but if I'm just missing something, please let me know.

Next, I would suggest having a documented IS_EMPTY(...) function for detecting an empty variadic list. Since is_empty.hpp already exists for single arguments (though undocumented), I would guess the variadic version would look quite similar. My own (MSVC) version is attached.

Finally, I would suggest using the preceding IS_EMPTY(...) function to allow VARIADIC_SIZE(...) to return 0, i.e. something along the lines of:

# define BOOST_PP_VARIADIC_SIZE(...) BOOST_PP_IIF( BOOST_PP_IS_EMPTY(__VA_ARGS__), 0, BOOST_PP_VARIADIC_SIZE_I( __VA_ARGS__ ) )

where BOOST_PP_VARIADIC_SIZE_I is the current definition of BOOST_PP_VARIADIC_SIZE.

Thanks,
-Nick Kitten
Software Engineer
Center for Video Understanding Excellence
ObjectVideo, Inc.

#define SEQ (a,b,c)(1,2,3,4)
BOOST_PP_VARIADIC_SEQ_TO_SEQ( SEQ ) // expands to ((a,b,c))((1,2,3,4))
In your implementation couldn't you use just two overload macros instead of 256? Like this:

# define BOOST_PP_VARIADIC_SEQ_TO_SEQ(seq) BOOST_PP_CAT( BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_ seq, BOOST_PP_NIL )()
#
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_(...) (( __VA_ARGS__ )) BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_(...) (( __VA_ARGS__ )) BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_BOOST_PP_NIL()
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_BOOST_PP_NIL()

Also, on a side note, when variadics are enabled it would be nice to add an `IS_PAREN` macro like this:

#define IS_PAREN(x) IS_PAREN_CHECK(IS_PAREN_PROBE x)
#define IS_PAREN_CHECK(...) IS_PAREN_CHECK_N(__VA_ARGS__,0)
#define IS_PAREN_PROBE(...) ~, 1,

#ifndef _MSC_VER
#define IS_PAREN_CHECK_N(x, n, ...) n
#else // MSVC workarounds
#define IS_PAREN_CHECK_RES(x) x
#define IS_PAREN_CHECK_II(x, n, ...) n
#define IS_PAREN_CHECK_I(x) IS_PAREN_CHECK_RES(IS_PAREN_CHECK_II x)
#define IS_PAREN_CHECK_N(...) IS_PAREN_CHECK_I((__VA_ARGS__))
#endif

This macro could then be used to add support for zero-element sequences in Boost.PP.
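For reference, the intended behavior of that macro would be:

IS_PAREN(())    // 1
IS_PAREN(xxx)   // 0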

On Mon, 24 Sep 2012 15:09:03 -0700, paul Fultz wrote:
#define SEQ (a,b,c)(1,2,3,4)
BOOST_PP_VARIADIC_SEQ_TO_SEQ( SEQ ) // expands to ((a,b,c))((1,2,3,4))
In your implementations couldn't you use just two overload macros instead of 256? Like this:
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ(seq) BOOST_PP_CAT( BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_ seq, BOOST_PP_NIL )()
#
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_(...) (( __VA_ARGS__ )) BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_(...) (( __VA_ARGS__ )) BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_BOOST_PP_NIL()
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_BOOST_PP_NIL()
Also on a side note, when variadics are enabled it would be nice to add an `IS_PAREN` macro like this:
Following the other similar macros that are already there, IS_NULLARY, IS_UNARY, and IS_BINARY, this would be named IS_N_ARY or IS_VARIADIC.

However, what many people fail to realize is just how enormously difficult (and in some cases impossible) it is to modernize Boost.Preprocessor when support for VC++ (in particular) is required. It is usually easy to get a trivial case working. It is amazingly difficult to get non-trivial cases working because the nature of VC++'s macro replacement algorithm leads to combinatorial difficulties. E.g. it isn't that difficult to write macros A and B and test them individually to confirm that they work, but then some other macro C (possibly even written by a user) uses both and everything breaks. The point is that there is no black-box testing. It is a *horrible* preprocessor. Edward has had some exposure to this when we worked on the limited variadic support that we did add.

Besides the above, adding significant work to Boost.Preprocessor is iterating a dead horse. It is *way* outdated in terms of modern preprocessor metaprogramming technique. It needs a complete rewrite (i.e. it needs to be Chaos or similar to it) to support variadics properly. To modernize it, one has to abandon VC++'s preprocessor or somehow get MS to fix it (which is also beating a dead horse). There have been claims (on this list) that you can modernize it with current VC++, but, sorry, you can't. Not without throwing out a predictable, self-terminating recursion model. You could maybe get about a third of the way from Boost.Preprocessor to Chaos.

Regards,
Paul Mensonides

On Mon, 24 Sep 2012 16:15:34 -0400, Kitten, Nicholas wrote:
Hi Folks,
First of all, I'd like to say "thank you" to Edward Diener and Paul Mensonides for incorporating variadics support into Boost.Preprocessor, which has been on my wishlist for a long time.
In fact, I wanted it badly enough that I've been using my own slightly modified version of the library assuming variadics for a while now, and I was interested to see what a cross-platform version would look like (mine was only tested on MSVC). I think the chosen approach of converting variadics to non-variadics is probably superior to the breaking algorithm changes I made in my own version, and I would humbly suggest a few more additions in the same spirit.
Get a good preprocessor (such as Wave or a different compiler) and use Chaos. Bad preprocessors are precisely the reason that Boost.Preprocessor is not much more functional than it is.
The reason I chose to use variadics in the first place was to simplify the user interface for the common case of manipulating what I'll call a "variadic sequence," (a series of concatenated tuples), which I wanted to look similar to the following:
#define FILTER_PARAMETERS (A, int, 3)\
                          (B, float, 2.0f)\
                          (C, const char*, "Howdy!")
Chaos calls (1, 2, ..., n)(1, 2, ..., n) an n-ary sequence. It calls (1)(1, 2)(1, 2, 3)(1, 2)(1) a variadic sequence. It has first-class support for both. E.g. for a binary sequence:

#define A(s, a, b) a + b
#define B(s, a, b, x) a + b + x
#define C(s, a, b, x, y) a + b + x + y

CHAOS_PP_SEQ_AUTO_FOR_EACH(A, (a, b)(p, q)(x, y))
CHAOS_PP_SEQ_AUTO_FOR_EACH(B, (a, b)(p, q)(x, y), z)
CHAOS_PP_SEQ_AUTO_FOR_EACH(C, (a, b)(p, q)(x, y), z, w)
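(Each element's components arrive as separate macro arguments, so the first of those would presumably expand to a + b p + q x + y, the second to a + b + z p + q + z x + y + z, and the third to a + b + z + w p + q + z + w x + y + z + w.)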
#define SEQ (a,b,c)(1,2,3,4)
BOOST_PP_VARIADIC_SEQ_TO_SEQ( SEQ ) // expands to ((a,b,c))((1,2,3,4))
This is probably doable.
I'm submitting this because I couldn't find a way to use some combination of TUPLE/VARIADICS and SEQ transformations to do the same thing succinctly, but if I'm just missing something, please let me know.
Next, I would suggest having a documented IS_EMPTY(...) function for detecting an empty variadic list. Since is_empty.hpp already exists for single arguments (though undocumented), I would guess the variadic version would look quite similar. My own (MSVC) version is attached.
No. Your implementation causes undefined behavior which happens to work on VC++. There is no way to make a general-purpose emptiness-detection macro. The closest you can get is to restrict the domain to arguments which do not end in function-like macro names.
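For reference, the usual shape of such a restricted-domain test on a conforming preprocessor is something like the following sketch of the well-known four-probe idiom (not the attached MSVC version); it misfires exactly when the argument ends in the name of a function-like macro:

#define ARG16(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, ...) a16
#define HAS_COMMA(...) ARG16(__VA_ARGS__, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0)
#define TRIGGER_PAREN(...) ,

// empty iff only the last probe (the one that places "()" after the argument) yields a comma
#define IS_EMPTY(...) \
    IS_EMPTY_I(HAS_COMMA(__VA_ARGS__), \
               HAS_COMMA(TRIGGER_PAREN __VA_ARGS__), \
               HAS_COMMA(__VA_ARGS__ ()), \
               HAS_COMMA(TRIGGER_PAREN __VA_ARGS__ ())) \
    /**/
#define IS_EMPTY_I(a, b, c, d) HAS_COMMA(CAT5(IS_EMPTY_CASE_, a, b, c, d))
#define CAT5(a, b, c, d, e) a ## b ## c ## d ## e
#define IS_EMPTY_CASE_0001 ,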
Finally, I would suggest using the preceding IS_EMPTY(...) function to allow VARIADIC_SIZE(...) to return 0, i.e. something along the lines of:
Again, no. Emptiness is a valid argument. E.g.

VARIADIC_SIZE()    // 1
VARIADIC_SIZE(,)   // 2
VARIADIC_SIZE(,,)  // 3

Not only is this the nature of the preprocessor, an example use case would be cv-qualifiers. E.g.

()(const)(volatile)(const volatile)

This is a unary sequence. Each element is a value of type "cv-qualifier". Any time that you want to consider emptiness as zero arguments is a special case, not the general case. Moreover, you can only do so when the above domain restriction on IS_EMPTY is known to be viable.

Regards,
Paul Mensonides

On Mon, 24 Sep 2012 15:09:03 -0700, paul Fultz wrote:
#define SEQ (a,b,c)(1,2,3,4)
BOOST_PP_VARIADIC_SEQ_TO_SEQ( SEQ ) // expands to ((a,b,c))((1,2,3,4))
In your implementations couldn't you use just two overload macros instead of 256? Like this:
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ(seq) BOOST_PP_CAT( BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_ seq, BOOST_PP_NIL )()
#
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_(...) (( __VA_ARGS__ )) BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_(...) (( __VA_ARGS__ )) BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_1_BOOST_PP_NIL()
# define BOOST_PP_VARIADIC_SEQ_TO_SEQ_2_BOOST_PP_NIL()
That would be the succinct solution I was looking for - it didn't occur to me that recursion wouldn't be an issue here because of the order of macro substitution. Thanks a ton!

On Tue, Sep 25, 2012 at 3:45 AM, Paul Mensonides <pmenso57@comcast.net> wrote:
Get a good preprocessor (such as Wave or a different compiler) and use Chaos. Bad preprocessors are precisely the reason that Boost.Preprocessor is not much more functional than it is.
Yup, and support for bad preprocessors would be the reason I'm using Boost.Preprocessor instead of Chaos (or a dedicated code generator)
#define SEQ (a,b,c)(1,2,3,4)
BOOST_PP_VARIADIC_SEQ_TO_SEQ( SEQ ) // expands to ((a,b,c))((1,2,3,4))
This is probably doable.
Next, I would suggest having a documented IS_EMPTY(...) function for detecting an empty variadic list.
No. Your implementation causes undefined behavior which happens to work on VC++.
Finally, I would suggest using the preceding IS_EMPTY(...) function to allow VARIADIC_SIZE(...) to return 0
Again, no. Emptiness is a valid argument.
1 out of 3... I'll take it :)
Besides the above, adding significant work to Boost.Preprocessor is iterating a dead horse. It is *way* outdated in terms of modern preprocessor metaprogramming technique. It needs a complete rewrite (i.e. it needs to be Chaos or similar to it) to support variadics properly. To modernize it, one has to abandon VC++'s preprocessor or somehow get MS to fix it (which is also beating a dead horse). There have been claims (on this list) that you can modernize it with current VC++, but, sorry, you can't. Not without throwing out a predictable, self-terminating recursion model. You could maybe get about a third of the way from Boost.Preprocessor to Chaos.
I understand why you feel that way, but it doesn't change the sad reality that VC's broken preprocessor (and code using it) is everywhere. I might even say the same about any working C preprocessor when compared to D's syntactically and type-safe template mixins, but unfortunately I don't get to choose the tools my company uses.

So until that bright future where FORTRAN is completely dead, Google and Apple have deposed MS, and we've all long forgotten about the tragic mess which was the VC++ preprocessor, I'll just say thanks again for patiently educating the ignorant like me, and for iterating on a dead horse ;)

Best,
-Nick Kitten
Software Engineer
Center for Video Understanding Excellence
ObjectVideo, Inc.

On Tue, 25 Sep 2012 15:33:57 -0400, Kitten, Nicholas wrote:
#define SEQ (a,b,c)(1,2,3,4)
BOOST_PP_VARIADIC_SEQ_TO_SEQ( SEQ ) // expands to ((a,b,c))((1,2,3,4))
This is probably doable.
1 out of 3... I'll take it :)
VARIADIC_SEQ_TO_SEQ added to trunk (boost/preprocessor/seq/variadic_seq_to_seq.hpp). Please review and/or test (in larger contexts than using it by itself). If everything appears to work okay, I will merge to the release branch.
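One larger-context exercise would be feeding the converted result straight into the existing sequence algorithms, e.g. (just a sketch; FILTER_PARAMETERS and DECLARE_MEMBER are arbitrary test names):

#include <boost/preprocessor/seq/for_each.hpp>
#include <boost/preprocessor/seq/variadic_seq_to_seq.hpp>
#include <boost/preprocessor/tuple/elem.hpp>

#define FILTER_PARAMETERS (A, int, 3)(B, float, 2.0f)(C, const char*, "Howdy!")

// elem is one of the original tuples, e.g. (A, int, 3)
#define DECLARE_MEMBER(r, data, elem) \
    BOOST_PP_TUPLE_ELEM(3, 1, elem) BOOST_PP_TUPLE_ELEM(3, 0, elem) = BOOST_PP_TUPLE_ELEM(3, 2, elem);

BOOST_PP_SEQ_FOR_EACH(DECLARE_MEMBER, ~, BOOST_PP_VARIADIC_SEQ_TO_SEQ(FILTER_PARAMETERS))
// should expand to: int A = 3; float B = 2.0f; const char* C = "Howdy!";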
Besides the above, adding significant work to Boost.Preprocessor is iterating a dead horse.
I understand why you feel that way, but it doesn't change the sad reality that VC's broken preprocessor (and code using it) is everywhere.
Right, and what will change that sad fact? Attempting to work around it forever isn't working. Similarly, is the whole world supposed to not use variadic templates because VC++ doesn't implement them? At least that's likely to change, but a line needs to be drawn. It is one thing for a workaround to be an implementation detail. It is another when it affects the interface. In the latter case, IMO, the best thing to do is provide the interface that *should* exist and either not support the compilers that don't work or provide some clunkier interface for those compilers. Doing otherwise cripples technological advancement. Regards, Paul Mensonides

on Wed Sep 26 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:
Right, and what will change that sad fact? Attempting to workaround forever isn't working. Similarly, is the whole world supposed to not use variadic templates because VC++ doesn't implement them? At least that's likely to change, but a line needs to be drawn. It is one thing for a workaround to be an implementation detail. It is another when it affects the interface. In the latter case, IMO, the best thing to do is provide the interface that *should* exist and either not support the compilers that don't work or provide some clunkier interface for those compilers. Doing else cripples technological advancement.
I'll mention again that I think we should accept some libraries that won't work on broken compilers (like Chaos). "Boost works on this compiler" is an important selling point.

--
Dave Abrahams
BoostPro Computing
Software Development Training
http://www.boostpro.com
Clang/LLVM/EDG Compilers  C++  Boost

On Thu, Oct 4, 2012 at 9:31 PM, Dave Abrahams <dave@boostpro.com> wrote:
I'll mention again that I think we should accept some libraries that won't work on broken compilers (like Chaos). "Boost works on this compiler" is an important selling point.
+1 IMO, Chaos should have been a part of Boost years ago. If a popular non-compliant compiler such as VC++ were able to be supported, then that would be great, but since it realistically cannot be supported, that's the problem of Microsoft. If Chaos were a part of boost and more libraries started using it, Microsoft would be much more concerned about fixing their horrible preprocessor. -- -Matt Calabrese

On Thu, Oct 4, 2012 at 9:46 PM, Matt Calabrese <rivorus@gmail.com> wrote:
On Thu, Oct 4, 2012 at 9:31 PM, Dave Abrahams <dave@boostpro.com> wrote:
I'll mention again that I think we should accept some libraries that won't work on broken compilers (like Chaos). "Boost works on this compiler" is an important selling point.
+1
IMO, Chaos should have been a part of Boost years ago. If a popular non-compliant compiler such as VC++ were able to be supported, then that would be great, but since it realistically cannot be supported, that's the problem of Microsoft. If Chaos were a part of boost and more libraries started using it, Microsoft would be much more concerned about fixing their horrible preprocessor.
+1 Chaos is an amazing piece of work! I could be the review manager if there were a Chaos submission for a Boost review. --Lorenzo

On Fri, Oct 5, 2012 at 5:47 AM, Matt Calabrese <rivorus@gmail.com> wrote:
On Thu, Oct 4, 2012 at 9:31 PM, Dave Abrahams <dave@boostpro.com> wrote:
I'll mention again that I think we should accept some libraries that won't work on broken compilers (like Chaos). "Boost works on this compiler" is an important selling point.
IMO, Chaos should have been a part of Boost years ago. If a popular non-compliant compiler such as VC++ were able to be supported, then that would be great, but since it realistically cannot be supported, that's the problem of Microsoft. If Chaos were a part of boost and more libraries started using it, Microsoft would be much more concerned about fixing their horrible preprocessor.
+1 (or more!)

Paul

---
Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbristow@hetp.u-net.com

On Thu, 04 Oct 2012 21:31:48 -0400, Dave Abrahams wrote:
on Wed Sep 26 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:
Right, and what will change that sad fact? Attempting to workaround forever isn't working. Similarly, is the whole world supposed to not use variadic templates because VC++ doesn't implement them? At least that's likely to change, but a line needs to be drawn. It is one thing for a workaround to be an implementation detail. It is another when it affects the interface. In the latter case, IMO, the best thing to do is provide the interface that *should* exist and either not support the compilers that don't work or provide some clunkier interface for those compilers. Doing else cripples technological advancement.
I'll mention again that I think we should accept some libraries that won't work on broken compilers (like Chaos). "Boost works on this compiler" is an important selling point.
1) How is the modularization/git transition going?

2) For a macro library, do we still need to have a BOOST_ prefix or could I just keep the CHAOS_PP_ prefix? I cannot use BOOST_PP_ without bending over backwards to find contrived names for everything, and the namespace of brief names beginning with BOOST_ is tiny--especially when a library provides hundreds of user-interface (i.e. not implementation-detail) macros.

3) For those of you familiar with Chaos, how many actually use the lambda mechanism? I've been internally debating whether to preserve it for years. It complicates things. The other thing is whether C90/C++98 should be supported (i.e. variadic/placemarker-less mode). Currently, Chaos does support these, and the lack of them in some cases leads to interesting techniques (which is half the reason for Chaos in the first place).

4) For those of you familiar with Chaos, what other things can you think of that would need to be done to Boost-ify it?

-----

For those of you not familiar with Chaos...

Chaos (a.k.a. chaos-pp) is a preprocessor metaprogramming library in the vein of Boost.Preprocessor except radically expanded both in terms of available tools and technological innovation. Its current state, which is in the CVS repository at http://sourceforge.net/projects/chaos-pp, contains 425 interface headers which define 567 primary interface macros and 2161 primary and secondary interface macros. The secondary interface macros are related interfaces such as BOOST_PP_ENUM_PARAMS (primary) vs. BOOST_PP_ENUM_PARAMS_Z (secondary). (Without the lambda mechanism above, the total number would be about a third less, as most interface macros define a lambda binding.)

Chaos contains *zero* workarounds for broken preprocessors, so it is almost completely unusable on VC++, for example, and probably others. It is almost completely usable on gcc and Wave except in some very dark corners (Hartmut: think partial spanning invocation). Last time I checked, it worked well on EDG-based compilers (sans possibly the above), but that was some time ago that I checked. Essentially, Chaos sets a very high mark which a preprocessor must meet.

A quick example:

#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/arithmetic/inc.h>
#include <chaos/preprocessor/control/if.h>
#include <chaos/preprocessor/facilities/empty.h>
#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>
#include <chaos/preprocessor/tuple/eat.h>

// interface:

#define DELINEATE(c, sep, m, ...) \
    DELINEATE_S(CHAOS_PP_STATE(), c, sep, m, __VA_ARGS__) \
    /**/
#define DELINEATE_S(s, c, sep, m, ...) \
    DELINEATE_I(s, c, CHAOS_PP_EMPTY, sep, m, __VA_ARGS__) \
    /**/

// implementation:

#define DELINEATE_I(s, c, s1, s2, m, ...) \
    CHAOS_PP_IF(c)( \
        DELINEATE_II, CHAOS_PP_EAT \
    )(CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
        CHAOS_PP_DEC(c), s1, s2, m, __VA_ARGS__) \
    /**/

#define DELINEATE_INDIRECT() DELINEATE_I
#define DELINEATE_II(_, s, c, s1, s2, m, ...) \
    CHAOS_PP_EXPR_S _(s)(DELINEATE_INDIRECT _()( \
        s, c, s2, s2, m, __VA_ARGS__ \
    )) \
    m _(s, c, __VA_ARGS__) s1() \
    /**/

// reusable:

#define REPEAT(c, m, ...) DELINEATE(c, CHAOS_PP_EMPTY, m, __VA_ARGS__)
#define REPEAT_S(s, c, m, ...) \
    DELINEATE_S(s, c, CHAOS_PP_EMPTY, m, __VA_ARGS__) \
    /**/

#define ENUM(c, m, ...) DELINEATE(c, CHAOS_PP_COMMA, m, __VA_ARGS__)
#define ENUM_S(s, c, m, ...) \
    DELINEATE_S(s, c, CHAOS_PP_COMMA, m, __VA_ARGS__) \
    /**/

// recursive:

#define TTP(s, n, id) \
    template<CHAOS_PP_EXPR(ENUM( \
        CHAOS_PP_INC(n), class CHAOS_PP_EAT, ~ \
    ))> class id ## n \
    /**/

CHAOS_PP_EXPR(ENUM(3, TTP, T))

->

template<class> class T0 ,
template<class, class> class T1 ,
template<class, class, class> class T2

Complicated?...yes. But compare that to the implementation of BOOST_PP_REPEAT. Unlike BOOST_PP_REPEAT, however, this macro can be recursively reentered about 500 times. The actual implementations of these in Chaos are far fancier.

A list of headers is at

http://chaos-pp.cvs.sourceforge.net/viewvc/chaos-pp/chaos-pp/built-docs/headers.html

of which the first link <chaos/preprocessor.h> leads to a more organized view of the library's contents. Alternately, a big list of primary interface macros is at

http://chaos-pp.cvs.sourceforge.net/viewvc/chaos-pp/chaos-pp/built-docs/primary.html

There are a lot of bits of sample code throughout the documentation, though the topical documentation is incomplete.

Regards,
Paul Mensonides

2) For a macro library, do we still need to have a BOOST_ prefix or could I just keep the CHAOS_PP_ prefix? I cannot use BOOST_PP_ without bending over backwards to find contrived names for everything, and the namespace of brief names beginning with BOOST_ is tiny--especially when a library provides hundreds of user-interface (i.e. not implementation detail macros).
Perhaps `BOOST_CPP_` could be used.
3) For those of you familiar with Chaos, how many actually use the lambda mechanism? I've been internally debating whether to preserve it for years. It complicates things. The other thing is whether C90/C++98 should be supported (i.e. variadic/placemarker-less mode). Currently, Chaos does support these, and the lack of them in some cases leads to interesting techniques (which is half the reason for Chaos in first place).
First, I think it should just support C99/C++11; for older preprocessors Boost.PP can be used.

I never use the lambda expressions because they can be very slow, but I thought it would be nice to support some generic "invokable" expressions, perhaps supporting a form of bind for macros, like this:

CHAOS_PP_EXPR(CHAOS_PP_REPEAT(3, CHAOS_PP_BIND(CHAOS_PP_CAT, T, _1))) // T0 T1 T2

As well as supporting lambda or even user-defined invokable expressions. Ultimately, bind would just be implemented as this (yes, I know Chaos already has a bind macro):

#define CHAOS_PP_BIND(m, ...) (CHAOS_PP_BIND)(m, __VA_ARGS__)

And then the invoker would call the appropriate macro using the same mechanism used for generics:

#define CHAOS_PP_GENERIC(m, e) CHAOS_PP_CAT(CHAOS_PP_CAT(CHAOS_PP_TYPEOF(e), _), m)

#define CHAOS_IP_INVOKER_0(e) CHAOS_PP_MACRO_INVOKE
#define CHAOS_IP_INVOKER_1(e) CHAOS_PP_GENERIC(INVOKE, e)
#define CHAOS_PP_INVOKER(e) CHAOS_PP_CAT(CHAOS_IP_INVOKER_, CHAOS_PP_IS_VARIADIC(e))(e)

#define CHAOS_PP_MACRO_INVOKE() CHAOS_IP_MACRO_INVOKE
#define CHAOS_IP_MACRO_INVOKE(s, m, ...) CHAOS_PP_EXPR_S(s)(m CHAOS_PP_OBSTRUCT()(__VA_ARGS__))

So for the expression `(CHAOS_PP_BIND)(...)` it will call `CHAOS_PP_BIND_INVOKE`, and if lambda expressions were "typed" as well, like `(CHAOS_PP_LAMBDA)(...)`, then it would call `CHAOS_PP_LAMBDA_INVOKE`. So instead of trampolining in higher-order macros, it would just call the `INVOKER`:

#define CHAOS_PP_DELINEATE(n, sep, m) CHAOS_PP_DELINEATE_S(CHAOS_PP_STATE(), n, sep, m)
#define CHAOS_PP_DELINEATE_S(s, n, sep, m) DETAIL_CHAOS_PP_DELINEATE_U(s, n, sep, m, CHAOS_PP_INVOKER(m))
#define DETAIL_CHAOS_PP_DELINEATE_U(s, n, sep, m, _m) \
    DETAIL_CHAOS_PP_DELINEATE_I(CHAOS_PP_OBSTRUCT(), CHAOS_PP_INC(s), CHAOS_PP_DEC(n), sep, m, _m)
#define DETAIL_CHAOS_PP_DELINEATE_INDIRECT() DETAIL_CHAOS_PP_DELINEATE_U
#define DETAIL_CHAOS_PP_DELINEATE_I(_, s, n, sep, m, _m) \
    CHAOS_PP_WHEN _(n) \
    ( \
        CHAOS_PP_EXPR_S _(s)(DETAIL_CHAOS_PP_DELINEATE_INDIRECT _()(s, n, sep, m, _m) sep _()) \
    ) \
    _m()(s, m, s, n)

I don't know what anyone else thinks of that.
http://chaos-pp.cvs.sourceforge.net/viewvc/chaos-pp/chaos-pp/built-docs/primary.html
There are a lot of bits of sample code throughout the documentation, though the topical documentation incomplete.
The documentation is really missing explanations on recursion steps, and parametric resumptions. I never really fully understood what those additional repetition macros did. Thanks, Paul

On Fri, 05 Oct 2012 21:16:50 -0700, paul Fultz wrote:
3) For those of you familiar with Chaos, how many actually use the lambda mechanism? I've been internally debating whether to preserve it for years. It complicates things. The other thing is whether C90/C++98 should be supported (i.e. variadic/placemarker-less mode). Currently, Chaos does support these, and the lack of them in some cases leads to interesting techniques (which is half the reason for Chaos in first place).
First, I think it should just support C99/C++11, for older preprocessors Boost PP can be used.
The biggest reason that I consider leaving C90/C++03 support in place is that there are several techniques that would otherwise not get used--and Chaos is my repository of techniques which are fleshed out enough to be useful.
I never use the lambda expression because they can be very slow, but I thought it would be nice to support some generic "invokable" expressions so perhaps supporting a form of bind for macros, like this:
CHAOS_PP_EXPR(CHAOS_PP_REPEAT(3, CHAOS_PP_BIND(CHAOS_PP_CAT, T, _1))) // T0 T1 T2
I personally never use the lambda expressions either, though the speed is not usually a big issue (since I'm not usually generating huge things). However, I can't define _1 et al--at least not permanently, and having a separate macro tends to make things clearer (mnemonic names, etc.) rather than more difficult.

One of the things that I've briefly been playing with--which is not fleshed out, and I'm not sure can be fleshed out--is doing lambda in a difficult way. The basic (seed of an) idea is to use a double (and maybe triple) rail wall. Essentially, what Chaos calls a "rail" is some macro expression that holds itself, through any number of invocations, until some context is reached. For example:

#define FLIP(...) __VA_ARGS__
#define FLOP(...) __VA_ARGS__

#define WAIT(macro) \
    CHAOS_PP_IIF(CHAOS_PP_IS_NULLARY(macro(())))( \
        WAIT_I, CHAOS_PP_EAT \
    )(macro) \
    /**/
#define WAIT_ID() WAIT
#define WAIT_I(macro) CHAOS_PP_DEFER(WAIT_ID)()(macro)

#define SCAN(...) __VA_ARGS__

With the above, you can delay the invocation of a macro arbitrarily with something like:

MACRO WAIT(FLIP)(ARGS)

Essentially, this just expands to

MACRO WAIT_ID ()(FLIP)(ARGS)

over and over again until the FLIP context is detected, which then results in the deferred expression:

MACRO ( ARGS )

In reality, you'd have to use rail-specific implementations of IIF and IS_NULLARY so that the rail can go through things like regular IIF and SPLIT (inside IS_NULLARY, if I recall correctly). So, if we have

#define ARG(n) ARG_A WAIT(FLIP)(n)
#define ARG_A(n) ARG_B WAIT(FLOP)() ) (,,,,, n) (
#define ARG_B() CHAOS_PP_EMPTY(

// here the (,,,,, n) is a placeholder using some encoding that is way out
// past LIMIT_TUPLE or some value like that.

#define APPLY(...) \
    APPLY_A( \
        SCAN( ( FLIP(__VA_ARGS__) ) ) \
    ) \
    /**/
#define APPLY_A(...) __VA_ARGS__

APPLY( inline ARG(1)( const ARG(1)& ); )

This produces the somewhat ungodly-looking:

(inline ARG_B WAIT_ID ()(FLOP)() ) (,,,,, 1) (( const ARG_B WAIT_ID () (FLOP)() ) (,,,,, 1) (& );)

However, that can be simplified to pseudo-show what would happen on the FLOP by replacing ARG_B WAIT_ID()(FLOP)() with "EMPTY(":

(inline "EMPTY(" ) (,,,,, 1) (( const "EMPTY(" ) (,,,,, 1) (& );)

Still doesn't look fun, but this can be parsed:

( inline "EMPTY(" ) ( ,,,,, 1 ) ( ( const "EMPTY(" ) ( ,,,,, 1 ) (&) ; )

So, let's say you parse it and replace ( ,,,,, n) with the n-th lambda argument (not shown here; let's call it X) followed by POST, which is defined as below (this would probably require some delaying of POST) and is shown simplified as ")":

#define POST(...) __VA_ARGS__ POST_I WAIT_ID()(FLOP)
#define POST_I() )

(inline "EMPTY(" ) (,,,,, 1) (( const "EMPTY(" ) (,,,,, 1) (& );)
(inline "EMPTY(" ) X POST (( const "EMPTY(" ) X POST (& );)
(inline "EMPTY(" ) X ( const "EMPTY(" ) X & ")"; ")"

...and then FLOP followed by the removal of the outer parentheses:

(inline EMPTY( ) X ( const EMPTY( ) X & ) ; )
inline X ( const X & ) ;

Now, I'm not sure if this scales to every scenario. And it is playing extremely fast and loose with hanging parentheses. I haven't spent a great deal of time playing with it. But this would be magic:

#define _0 ARG(0)
#define _1 ARG(1)

APPLY( (A)(B),
    template<class T> class _0 : public _1 {
        public:
            inline _0(const _0& c) : _1(1, 2, 3) {
                // ...
            }
    };
)

#undef _0
#undef _1

If that can be made to work, that would be freakishly clever. You'd then have to do lambda bindings as another type of "control flag" to delay them (and hide the actual macros' names).
As well as supporting lambda or even user-defined invokable expressions. Ultimately, bind would just be implemented as this(Yes I know chaos already has a bind macro):
#define CHAOS_PP_BIND(m, ...) (CHAOS_PP_BIND)(m, __VA_ARGS__)
The library already allows what you pass to be a deferred expression in terms of NEXT(s). So you can put an arbitrary remapper:

#define MACRO(x, y) x - y
#define DROP_AND_SWAP(s, y, x) (x, y)

CHAOS_PP_EXPR(CHAOS_PP_ENUM( 3, MACRO DROP_AND_SWAP, 3 ))

...contrived, I know.
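(That one would presumably come out as 3 - 0 , 3 - 1 , 3 - 2: ENUM hands each repetition to MACRO DROP_AND_SWAP as (s, n, 3), and DROP_AND_SWAP drops the s and swaps the remaining two arguments before MACRO sees them.)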
I don't know what anyone else thinks of that, as well.
You could probably do something like that. However, depending on whatever it is (say lambda), you usually want to parse the expression into a quickly substitutable form once at the beginning.
http://chaos-pp.cvs.sourceforge.net/viewvc/chaos-pp/chaos-pp/built-docs/primary.html
There are a lot of bits of sample code throughout the documentation, though the topical documentation incomplete.
The documentation is really missing explanations on recursion steps, and parametric resumptions. I never really fully understood what those additional repetition macros did.
A lot of this type of stuff is extremely tricky subject matter. A typical user isn't writing algorithms, etc. They are just using the provided higher-level macros. What Chaos does, for the most part, is take all implementation-level macros that could be useful as interfaces and define them as interfaces. I agree, however, that over-arching concepts, themes, idioms, and techniques need adequate documentation.

Where I got the term "parametric" from in this context I have no idea! Essentially, those algorithms process through from (e.g.) EXPR_1 through EXPR_10, but leave an extra EXPR_1, then jump back to EXPR_2, leave an extra EXPR_2, and so on. I.e. they use the recursion backend like this:

X X X X X X X X X X X X X X X

I.e. it is sort of a backend linear multiplier.

The _X macros use the backend in an exponential fashion--but those have some usability issues. I actually have a few combinations-of-both (i.e. _PARAMETRIC_X--whatever that means) algorithms laying around somewhere that are superior to the _X versions.

In both cases, however, these algorithms trade some speed for headroom. _PARAMETRIC is a little slower than normal, and _X is slower yet.

Regards,
Paul Mensonides

On Sat, 06 Oct 2012 07:19:14 +0000, Paul Mensonides wrote:
#define _0 ARG(0)
#define _1 ARG(1)

APPLY( (A)(B),
    template<class T> class _0 : public _1 {
        public:
            inline _0(const _0& c) : _1(1, 2, 3) {
                // ...
            }
    };
)
Implementation of this incoming. Regards, Paul Mensonides

Well, here's what I have so far:

#define FLIP(...) __VA_ARGS__
#define FLOP(...) __VA_ARGS__

#define DO_FLIP(s, ...) \
    (CHAOS_PP_EXPR_S(s)(FLIP(__VA_ARGS__))) \
    /**/
#define DO_FLOP(s, ...) \
    CHAOS_PP_REM_CTOR(CHAOS_PP_EXPR_S(s)(FLOP(__VA_ARGS__))) \
    /**/

#define WAIT(m) \
    CHAOS_PP_IIF_SHADOW(CHAOS_PP_IS_NULLARY(m(())))( \
        WAIT_I, CHAOS_PP_EAT \
    )(m) \
    /**/
#define WAIT_ID() WAIT
#define WAIT_I(m) CHAOS_PP_DEFER(WAIT_ID)()(m)

#define FLAG(...) FLAG_A WAIT(FLIP)(__VA_ARGS__)
#define FLAG_A(...) FLAG_B WAIT(FLOP)() ) (,,,,,,,,,, __VA_ARGS__) (
#define FLAG_B() CHAOS_PP_DEFER(CHAOS_PP_EMPTY)(
#define FLAG_C() )

#define DROP(a, b, c, d, e, f, g, h, i, j, ...) __VA_ARGS__

#define ARG(n) FLAG(0XARG, n)

#define ARGS(...) \
    CHAOS_PP_QUICK_OVERLOAD(ARGS_, __VA_ARGS__)(__VA_ARGS__) \
    /**/
#define ARGS_1(n) FLAG(0XREST, n)
#define ARGS_2(n, m) \
    FLAG( \
        0XRANGE, n, \
        CHAOS_PP_INC(CHAOS_PP_SUB(m, n)) \
    ) \
    /**/

#define BIND(id) FLAG(0XBIND, id)

#define QS_HEAD(...) \
    CHAOS_PP_REM_CTOR(CHAOS_PP_SPLIT(0, QS_HEAD_I __VA_ARGS__)) \
    /**/
#define QS_HEAD_I(...) (__VA_ARGS__),

#define APPLY(args, ...) \
    APPLY_BYPASS(CHAOS_PP_LIMIT_EXPR, args, __VA_ARGS__) \
    /**/
#define APPLY_BYPASS(s, args, ...) \
    DO_FLOP( \
        s, \
        CHAOS_PP_EXPR_S(s)(APPLY_A(s, args, DO_FLIP(s, __VA_ARGS__))) \
    ) \
    /**/
#define APPLY_A(s, args, ...) \
    CHAOS_PP_IIF(CHAOS_PP_IS_VARIADIC(__VA_ARGS__))( \
        APPLY_C, APPLY_B \
    )(CHAOS_PP_OBSTRUCT(), CHAOS_PP_PREV(s), args, __VA_ARGS__) \
    /**/
#define APPLY_A_ID() APPLY_A
#define APPLY_B(_, s, args, ...) __VA_ARGS__
#define APPLY_C(_, s, args, ...) \
    APPLY_D(s, args, QS_HEAD(__VA_ARGS__)) \
    CHAOS_PP_EXPR_S _(s)(APPLY_A_ID _()( \
        s, args, CHAOS_PP_EAT __VA_ARGS__ \
    )) \
    /**/
#define APPLY_D(s, args, ...) \
    CHAOS_PP_CAT( \
        APPLY_, \
        CHAOS_PP_CAT( \
            0, \
            CHAOS_PP_VARIADIC_ELEM(0, DROP(__VA_ARGS__,,,,,,,,,,)) \
        ) \
    )(CHAOS_PP_OBSTRUCT(), s, args, __VA_ARGS__) \
    /**/
#define APPLY_0(_, s, args, ...) \
    (CHAOS_PP_EXPR_S _(s)(APPLY_A_ID _()( \
        s, args, __VA_ARGS__ \
    ))) \
    /**/
#define APPLY_00XARG(_, s, args, a, b, c, d, e, f, g, h, i, j, k, l) \
    CHAOS_PP_SEQ_ELEM(l, args) APPLY_E \
    /**/
#define APPLY_00XREST(_, s, args, a, b, c, d, e, f, g, h, i, j, k, l) \
    CHAOS_PP_SEQ_ENUMERATE( \
        CHAOS_PP_SEQ_DROP(l, args) \
    ) APPLY_E \
    /**/
#define APPLY_00XRANGE(_, s, args, a, b, c, d, e, f, g, h, i, j, k, l, m) \
    CHAOS_PP_SEQ_ENUMERATE( \
        CHAOS_PP_SEQ_TAKE(m, CHAOS_PP_SEQ_DROP(l, args)) \
    ) APPLY_E \
    /**/
#define APPLY_00XBIND(_, s, args, a, b, c, d, e, f, g, h, i, j, k, l) \
    APPLY_BIND_I WAIT(FLOP)(l) APPLY_E \
    /**/
#define APPLY_BIND_I(l) l CHAOS_PP_DEFER(CHAOS_PP_OBSTRUCT)()()
#define APPLY_E(...) __VA_ARGS__ FLAG_C WAIT(FLOP)()

Given the above,

APPLY( (X)(Y),
    template<class T> class ARG(0) { };
    template<class T> class ARG(1) : public ARG(0)<T> {
        public:
            inline ARG(1)(const ARG(1)& o) : ARG(0)<T>(o) { }
    };
)

expands to

template<class T> class X { };
template<class T> class Y : public X<T> {
    public:
        inline Y(const Y& o) : X<T>(o) { }
};

Furthermore, ARGS(n) (i.e. plural) will result in all of the arguments including and following the n-th argument, and ARGS(n, m) will result in the arguments from the n-th argument to the m-th argument, inclusive. So,

APPLY( (A)(B)(?)(C)(D), { ARGS(0, 1), ARGS(3) } )

results in { A, B, C, D }.

Lastly, the above defines a means of producing a binding to a macro. For example,

#define CAT_ BIND(CHAOS_PP_CAT_ID)

APPLY( (P)(Q), CAT_(ARGS(0)) )

results in

CHAOS_PP_CAT_ID()(P, Q)

which, when scanned as in

CHAOS_PP_EXPR(APPLY( (P)(Q), CAT_(ARGS(0)) ))

results in PQ.
This doesn't work with all cases. For example: APPLY( (int), void f(ARG(0)); ) results in an error. What probably needs to be done is something along the lines of introducing commas (and flops that eat those commas) and iterating over the elements of the non-flag tuples. Regards, Paul Mensonides

I never use the lambda expression because they can be very slow, but I thought it would be nice to support some generic "invokable" expressions so perhaps supporting a form of bind for macros, like this:
CHAOS_PP_EXPR(CHAOS_PP_REPEAT(3, CHAOS_PP_BIND(CHAOS_PP_CAT, T, _1))) // T0 T1 T2
I personally never use the lambda expressions either, though the speed is not usually a big issue (since I'm not usually generating huge things). However, I can't define _1 et al--at least permanently, and having a separate macro tends to make things clearer (mnemonic names, etc.) rather than more difficult.
The _1, _2, etc, are not macros, but instead are parsed by the invoker.
Now, I'm not sure if this scales to every scenario. And it is playing extremely fast and loose with hanging parenthesis. I haven't spent a great deal of time playing with it. But, this would be magic:
#define _0 ARG(0)
#define _1 ARG(1)

APPLY( (A)(B),
    template<class T> class _0 : public _1 {
        public:
            inline _0(const _0& c) : _1(1, 2, 3) {
                // ...
            }
    };
)

#undef _0
#undef _1
If that can be made to work, that would be freakishly clever. You'd then have to do lambda bindings as another type of "control flag" to delay them (and hide the actual macros' names).
This seems really cool.
As well as supporting lambda or even user-defined invokable expressions. Ultimately, bind would just be implemented as this(Yes I know chaos already has a bind macro):
#define CHAOS_PP_BIND(m, ...) (CHAOS_PP_BIND)(m, __VA_ARGS__)
The library already allows what you pass to be a deferred expression in terms of NEXT(s). So you can put an arbitrary remapper:
#define MACRO(x, y) x - y
#define DROP_AND_SWAP(s, y, x) (x, y)
CHAOS_PP_EXPR(CHAOS_PP_ENUM( 3, MACRO DROP_AND_SWAP, 3 ))
...contrived, I know.
I didn't realize a deferred expression could be given; this seems to be a better way to approach the problem. You should really write a book on this.
A lot of this type of stuff is extremely trickly subject matter. A typical user isn't writing algorithms, etc.. They are just using the provided higher-level macros. What Chaos' does, for the most part, is take all implementation-level macros that could be useful as interfaces and define them as interfaces. I agree, however, that over-arching concepts, themes, idioms, and techniques need adequate documentation.
Where I got the term "parametric" from in this context I have no idea! Essentially, those algorithms process through from (e.g.) EXPR_1 through EXPR_10, but leave an extra EXPR_1, then jump back to EXPR_2, leave an extra EXPR_2, and so on. I.e. they use the recursion backend like this:
X X X X X X X X X X X X X X X
I.e. it is sort of a backend linear multiplier.
The _X macros use the backend in an exponential fashion--but those have some usability issues. I actually have a few combinations-of-both (i.e. _PARAMETRIC_X--whatever that means) algorithms laying around somewhere that are superior to the _X versions.
In both cases, however, these algorithms trade some speed for headroom. _PARAMETRIC is a little slower than normal, and _X is slower yet.
So the extra headroom can be used for the macro that's passed in, or for more repetitions, or both?

On 10/10/2012 3:15 PM, paul Fultz wrote:
The _1, _2, etc, are not macros, but instead are parsed by the invoker.
Uh... you /can't/ parse them unless you restrict the input. The only way you can interact with them would be through token-pasting--which you can't do unless you know what you've got is an identifier or pp-number (that doesn't include decimal points).
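(E.g. the obvious route of token-pasting a detection prefix onto each token, something along the lines of CAT(IS_PLACEHOLDER_, x), only has defined behavior when x happens to be an identifier or an integer-like pp-number; pasting that prefix onto a string literal or a lone + does not form a valid preprocessing token, which is undefined behavior.)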
Now, I'm not sure if this scales to every scenario. And it is playing extremely fast and loose with hanging parenthesis. I haven't spent a great deal of time playing with it. But, this would be magic:
#define _0 ARG(0)
#define _1 ARG(1)

APPLY( (A)(B),
    template<class T> class _0 : public _1 {
        public:
            inline _0(const _0& c) : _1(1, 2, 3) {
                // ...
            }
    };
)

#undef _0
#undef _1
If that can be made to work, that would be freakishly clever. You'd then have to do lambda bindings as another type of "control flag" to delay them (and hide the actual macros' names).
This seems really cool.
I've played with this some (see my other email from a few days ago), but it is not fully fleshed out. I believe that it is possible, however. The basic idea is to make the placeholders accessible to the parser, which requires using somewhat elaborate methods involving commas and unmatched parentheses.
As well as supporting lambda or even user-defined invokable expressions. Ultimately, bind would just be implemented as this(Yes I know chaos already has a bind macro):
#define CHAOS_PP_BIND(m, ...) (CHAOS_PP_BIND)(m, __VA_ARGS__)

The library already allows what you pass to be a deferred expression in terms of NEXT(s). So you can put an arbitrary remapper:
#define MACRO(x, y) x - y
#define DROP_AND_SWAP(s, y, x) (x, y)
CHAOS_PP_EXPR(CHAOS_PP_ENUM( 3, MACRO DROP_AND_SWAP, 3 ))
...contrived, I know.
I didn't realize deferred expression could be given, this seems to be a better way to approach the problem. You should really write a book on this.
The underlying /reason/ that it allows the argument to expand to a deferred expression in terms of NEXT(s) is so that whatever is called can be reused recursively also. E.g.

#define A(s, n) \
    n B(CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), n) \
    /**/
#define A_ID() A

#define B(_, s, n) \
    CHAOS_PP_EXPR_S _(s)(CHAOS_PP_REPEAT_S _( \
        s, n, A_ID _() \
    )) \
    /**/

CHAOS_PP_EXPR(CHAOS_PP_REPEAT( 10, A ))

Here REPEAT calls A, A calls REPEAT, REPEAT calls A, and so on. Essentially, the algorithm is starting a bootstrap on the call just like at the top level. If you use that bootstrap to rearrange the arguments, you lose the ability to self-recurse as in the above.

For the case of REPEAT, you /could/ add more bootstraps to the top level, but that is only because REPEAT does not care what the result of your macro is. In other cases, such as with predicates and state-transition operators as in WHILE, the algorithm needs to invoke the predicate and do something with the result. The operator here is involved with the algorithm because its result is passed to the predicate. In the case of something like FOLD, the predicate is internal (i.e. some variety of "is cons"), so the algorithm doesn't care what you do with the operator you pass (provided it doesn't produce unmatched parentheses). However, the operator itself will care.

In my own use, however, I've rarely had scenarios where I needed to see-saw as in the above example, so using the extra bootstrap to mess with the arguments is usually viable.
A lot of this type of stuff is extremely trickly subject matter. A typical user isn't writing algorithms, etc.. They are just using the provided higher-level macros. What Chaos' does, for the most part, is take all implementation-level macros that could be useful as interfaces and define them as interfaces. I agree, however, that over-arching concepts, themes, idioms, and techniques need adequate documentation.
Where I got the term "parametric" from in this context I have no idea! Essentially, those algorithms process through from (e.g.) EXPR_1 through EXPR_10, but leave an extra EXPR_1, then jump back to EXPR_2, leave an extra EXPR_2, and so on. I.e. they use the recursion backend like this:
X X X X X X X X X X X X X X X
I.e. it is sort of a backend linear multiplier.
The _X macros use the backend in an exponential fashion--but those have some usability issues. I actually have a few combinations-of-both (i.e. _PARAMETRIC_X--whatever that means) algorithms laying around somewhere that are superior to the _X versions.
In both cases, however, these algorithms trade some speed for headroom. _PARAMETRIC is a little slower than normal, and _X is slower yet.
So the extra headroom can be used for the macro thats passed in, or for more repetitions, or both?
It depends on the algorithm. For something like REPEAT, all of the calls to user-provided macros are already trampolined back to s + 1.

Playing with it a bit, I sort of misrepresented what the "parametric" algorithms do. I haven't ever used them in real code, so I was sort of recalling a combination of them and algorithms that I have laying around somewhere--which reminds me of why they got the name "parametric". Here's what they /actually/ do....

When the end of the recursion backend is hit, the algorithm trampolines /itself/ back to s + 1. For example,

CHAOS_PP_EXPR_S(500)(CHAOS_PP_REPEAT_PARAMETRIC_S( 500, 20, M ))

There isn't enough headroom here to directly produce the 20 repetitions, so the algorithm produces as many as it can, and then trampolines itself--which you will see if you run the above. That result can be /resumed/ with an additional bootstrap

CHAOS_PP_EXPR_S(500)(
    CHAOS_PP_EXPR_S(500)(CHAOS_PP_REPEAT_PARAMETRIC_S( 500, 20, M ))
)

and this can be done as many times as needed. The "parametric" is short for /parametric resumption/, which is a piece of terminology I made up to describe the above scenario--re-bootstrapping the result by using it as a parameter again. I know--not very good terminology here. Regardless, however, the M will always be invoked with 501 in this scenario.

The _X algorithms, on the other hand, are more complicated still. They do the same sort of trampolined jump that the parametric algorithms do, but they use the recursion backend exponentially on the way. However, they take an additional argument which is the size of a buffer at the end of the backend which they won't use. They also don't trampoline their higher-order arguments. For example,

CHAOS_PP_EXPR_S(500)(CHAOS_PP_REPEAT_X_S( 500, 5, 20, M ))

The exponential here is sufficient to produce all 20 repetitions /without/ a parametric resumption. However, notice that the M is invoked with s-values which are all over the place but never exceed 512 - 5.

And that is the flaw of these algorithms. That behavior doesn't scale. What they need to do is take a /relative/ number of steps which they can use (not a number of steps left over at the end). The exponential algorithms can get enormous numbers of steps extremely quickly. The number is actually:

/f/(/s/, /δ/, /p/, /b/, /ω/) = /p/ * (1 - /b/^(/ω/ - /s/ - /δ/ - 1)) / (1 - /b/) - 1

where /p/ is the number of wrapping bootstraps, /b/ is the exponential base used by the algorithm, /ω/ is LIMIT_EXPR which is 512 currently, and /δ/ is the passed buffer size. The /ω/ is global, so we have:

/g/(/s/, /δ/, /p/, /b/) = /f/(/s/, /δ/, /p/, /b/, 512)

The value of /b/ is algorithm-specific. I believe it is always 2 (which can be increased fairly easily), but for REPEAT_X, I /know/ it is 2. So, for REPEAT_X we have:

/h/(/s/, /δ/, /p/) = /g/(/s/, /δ/, /p/, 2)

So, in the case above, we have /h/(500, 5, 1) = 62. However, at the beginning with no buffer, we'd have /h/(1, 0, 1), which is

335195198248564927489350624955146153186984145514
809834443089036093044100751838674420046857454172
585692250796454662151271343847070298664248660841
2251521022

The number type that REPEAT_X uses cannot count that high, but the arbitrary-precision arithmetic mechanism could /in theory/.

Of course, the above is ridiculously high. Assuming infinite memory, if you generated one repetition per nanosecond, it would take about 3 x 10^226 years to complete (if I did the math right).
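(As a quick check of the REPEAT_X case: /ω/ - /s/ - /δ/ - 1 = 512 - 500 - 5 - 1 = 6, so /h/(500, 5, 1) = (1 - 2^6) / (1 - 2) - 1 = 63 - 1 = 62, matching the figure above; with no buffer, /h/(1, 0, 1) works out to 2^510 - 2, which is the 154-digit number quoted.)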
So, obviously you don't need that many, but what the algorithm should do is use a relative depth so that the number of available steps is not dependent on s. Instead, it would be something like:

/f/(/δ/, /p/, /b/) = /p/ * (1 - /b/^(/δ/ - 1)) / (1 - /b/) - 1

where /δ/ is now the number of backend steps that the algorithm is allowed to use. So EXPR(REPEAT_X(10, n, M)) could generate something like 510 steps before trampolining without going past s + 10 - 1 in the backend. Even something as low as 20 yields unrealistic numbers of steps (about 524286).

So, what these algorithms /should/ be are algorithms which allow parametric resumption but use either linear multipliers or exponential multipliers with relative upper bounds on the backend. The linear version would be faster, but the exponential version would yield a lot more steps. The current _PARAMETRIC and _X algorithms don't do that.

Regards,
Paul Mensonides

On 10/11/2012 2:18 AM, Paul Mensonides wrote:
And that is the flaw of these algorithms. That behavior doesn't scale. What they need to do is take a /relative/ number of steps which they can use (not a number of steps left over at the end). The exponential algorithms can get enormous numbers of steps extremely quickly. The number is actually:
Translating the ASCII-ized garbage...
/f/(/s/, /?/, /p/, /b/, /?/) = /p/ * (1 - /b/^(/?/ - /s/ - /?/ - 1)) / (1 - /b/) - 1
f(s, delta, p, b, omega) = p * (1 - b^(omega - s - delta - 1)) / (1 - b) - 1 omega = LIMIT_EXPR delta = buffer size
where /p/ is the number of wrapping bootstraps, /b/ is the exponential base used by the algorithm, /?/ is LIMIT_EXPR which is 512 currently, and /?/ is the passed buffer size. The /?/ is global, so we have:
/g/(/s/, /?/, /p/, /b/) = /f/(/s/, /?/, /p/, /b/, 512)
g(s, delta, p, b) = f(s, delta, p, b, 512)
The value of /b/ is algorithm-specific. I believe it is always 2 (which can be increased fairly easily), but for REPEAT_X, I /know/ it is 2. So, for REPEAT_X we have:
/h/(/s/, /?/, /p/) = /g/(/s/, /?/, /p/, 2)
h(s, delta, p) = g(s, delta, p, 2)
So, in the case above, we have /h/(500, 5, 1) = 62. However, at the beginning with no buffer, we'd have /h/(1, 0, 1) which is
335195198248564927489350624955146153186984145514 809834443089036093044100751838674420046857454172 585692250796454662151271343847070298664248660841 2251521022
The number type that REPEAT_X uses cannot count that high, but the arbitrary precision arithmetic mechanism could /in theory/.
Of course, the above is ridiculously high. Assuming infinite memory, if you generated one repetition per nanosecond, it would take about 3 x 10^226 years to complete (if I did the math right).
So, obviously you don't need that many, but the what the algorithm should do is uses a relative depth so that the number of available steps is not dependent on s. Instead, it would be something like:
/f/(/?/, /p/, /b/) = /p/ * (1 - /b/^(/?/ - 1)) / (1 - /b/) - 1
f(delta, p, b) = p * (1 - b^(delta - 1)) / (1 - b) - 1 Where delta is now the number of backend steps...
Where /?/ is now the number of backend steps that the algorithm is allowed to use. So EXPR(REPEAT_X(10, n, M)) could generate something like 510 steps before trampolining without going past s + 10 - 1 in the backend. Even something as low as 20 yields unrealistic numbers of steps (about 524286).
So, what these algorithms /should/ be are algorithms which allow parametric resumption but use either linear multipliers or exponential multipliers with relative upper bounds on the backend. The linear version would be faster, but the exponential version would yield a lost more steps. The current _PARAMETRIC and _X algorithms don't do that.
Regards, Paul Mensonides

Chaos contains *zero* workarounds for broken preprocessors so it is almost completely unusable on VC++, for example, and probably others. It is almost completely usable on gcc and Wave except in some very dark corners (Hartmet: think partial spanning invocation). Last time I checked, it worked well on EDG-based compilers (sans possibly the above), but that was some time ago that I checked. Essentially, Chaos sets a very high mark which a preprocessor must meet.
Chaos mostly works on clang too, except for this bug:

http://llvm.org/bugs/show_bug.cgi?id=12767

which affects the `CHAOS_PP_HIGHER_ORDER` macro and all the `CHAOS_PP_AUTO_*` macros.

On 10/5/2012 9:49 PM, Paul Mensonides wrote:
On Thu, 04 Oct 2012 21:31:48 -0400, Dave Abrahams wrote:
on Wed Sep 26 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:
Right, and what will change that sad fact? Attempting to workaround forever isn't working. Similarly, is the whole world supposed to not use variadic templates because VC++ doesn't implement them? At least that's likely to change, but a line needs to be drawn. It is one thing for a workaround to be an implementation detail. It is another when it affects the interface. In the latter case, IMO, the best thing to do is provide the interface that *should* exist and either not support the compilers that don't work or provide some clunkier interface for those compilers. Doing else cripples technological advancement.
I'll mention again that I think we should accept some libraries that won't work on broken compilers (like Chaos). "Boost works on this compiler" is an important selling point.
1) How is the modularization/git transition going?
2) For a macro library, do we still need to have a BOOST_ prefix or could I just keep the CHAOS_PP_ prefix? I cannot use BOOST_PP_ without bending over backwards to find contrived names for everything, and the namespace of brief names beginning with BOOST_ is tiny--especially when a library provides hundreds of user-interface (i.e. not implementation detail macros).
BOOST_CHAOS_PP_... seems normal to me.

On Sun, 07 Oct 2012 15:49:40 -0400, Edward Diener wrote:
2) For a macro library, do we still need to have a BOOST_ prefix or could I just keep the CHAOS_PP_ prefix? I cannot use BOOST_PP_ without bending over backwards to find contrived names for everything, and the namespace of brief names beginning with BOOST_ is tiny--especially when a library provides hundreds of user-interface (i.e. not implementation detail macros).
BOOST_CHAOS_PP_... seems normal to me.
It seems *way* too long to me.

-Paul

On Mon, Oct 8, 2012 at 5:00 PM, Paul Mensonides <pmenso57@comcast.net>wrote:
On Sun, 07 Oct 2012 15:49:40 -0400, Edward Diener wrote:
2) For a macro library, do we still need to have a BOOST_ prefix or could I just keep the CHAOS_PP_ prefix? I cannot use BOOST_PP_ without bending over backwards to find contrived names for everything, and the namespace of brief names beginning with BOOST_ is tiny--especially when a library provides hundreds of user-interface (i.e. not implementation detail macros).
BOOST_CHAOS_PP_... seems normal to me.
It seems *way* to long to me.
BOOST_CH_... Like BOOST_PP stands for Boost.PreProcessor, BOOST_CH could stand for BOOST_CHAOS /bikeshed -- gpd
-Paul

On Mon, Oct 8, 2012 at 9:00 AM, Paul Mensonides <pmenso57@comcast.net> wrote:
On Sun, 07 Oct 2012 15:49:40 -0400, Edward Diener wrote:
2) For a macro library, do we still need to have a BOOST_ prefix or could I just keep the CHAOS_PP_ prefix? I cannot use BOOST_PP_ without bending over backwards to find contrived names for everything, and the namespace of brief names beginning with BOOST_ is tiny--especially when a library provides hundreds of user-interface (i.e. not implementation detail macros).
BOOST_CHAOS_PP_... seems normal to me.
It seems *way* to long to me.
I don't think so, it's not long. However, "chaos" doesn't mean anything, so I'd prefer a more descriptive name... maybe, for pp metaprogramming:

BOOST_PPMETA

or:

BOOST_METAPP

Note these opinions are all very subjective...

--Lorenzo

On Mon, 08 Oct 2012 09:25:18 -0700, Lorenzo Caminiti wrote:
On Mon, Oct 8, 2012 at 9:00 AM, Paul Mensonides <pmenso57@comcast.net> wrote:
On Sun, 07 Oct 2012 15:49:40 -0400, Edward Diener wrote:
2) For a macro library, do we still need to have a BOOST_ prefix or could I just keep the CHAOS_PP_ prefix? I cannot use BOOST_PP_ without bending over backwards to find contrived names for everything, and the namespace of brief names beginning with BOOST_ is tiny--especially when a library provides hundreds of user-interface (i.e. not implementation detail macros).
BOOST_CHAOS_PP_... seems normal to me.
It seems *way* too long to me.
I don't think so, it's not long. However, "chaos" doesn't mean anything so I'd prefer a more descriptive name... maybe for pp meta-programming:
"Chaos" does mean something. It was a reference to chaos theory, where things look random but are ultimately following rules. There are plenty of other "terms" I used in Chaos that I look at now and think "where on Earth did that come from?" Regardless, the name must be short. Unlike normal code, there are no "say it once" scenarios. I.e. there is no namespace chaos { void f(int); inline void g() { f(123); } } -or- using namespace chaos; Instead, it must be qualified everywhere--even throughout the implementation--thousands and thousands of times. What if every name defined by Boost had boost::library:: on it throughout the implementation? In Chaos, I use CHAOS_PP_ for an interface prefix and CHAOS_IP_ for implementation macros (where the distinction is sometimes helpful when debugging). I suppose BOOST_CP_ and BOOST_CI_ aren't too bad if they aren't taken, though I actually think CHAOS_PP_ is better. The best name would be BOOST_PP_ but that is, surprisingly, already taken. There is another issue I have with submitting Chaos to Boost. Boost currently requires the use of .hpp file suffixes. However, like Boost.Preprocessor, but unlike all of the rest of Boost, Chaos is a C library also. The rule should be amended to allow for headers which are not specifically C++ headers. Chaos uses the generic .h for precisely this reason. Note that if .hpp is required, I simply will not submit the library. No hacks--including those introduced to workaround rules which should not apply. You have to understand my point of view: I do not really care whether Chaos becomes part of Boost or not. It is just that people keep asking me for that. That is fine unless doing so makes the library inferior according to my subjective or objective opinion. If I change the prefix, from CHAOS_PP_ to BOOST_CHAOS_ or similar, I have to make all of the libray source code longer, and all of my other present and future code which uses it longer--for no benefit to me. Why would I do that? I don't really care how many people use the library. Perhaps that's too strong. I don't mind if lots of people use it, but not if gaining users requires compromising the purpose of the library. I also don't require the Boost stamp of approval. I have internal motivations to do the work that I do. Not that I mind it, but I do not need the approval of others--otherwise I would never have gotten involved in extensive preprocessor usage. That's just asking for controversy and has garnered plenty of it. As Chaos is defined by a no-compromise approach, it isn't a question of whether Chaos is suitable for Boost. It's a question of whether Boost is suitable for Chaos. That's not simply arrogance. It's about the *purpose* of the library. I gave up on VC++ for lots of reasons including its preprocessor, so I don't really care whether the library's presence in Boost somehow causes VC ++'s preprocessor to be fixed--which it probably wouldn't anyway. Even if it was fixed, it likely wouldn't be for the right reasons, and that will just lead to a similar situations with different issues. If the library became part of Boost, and MS subsequently fixed their preprocessor, I still wouldn't touch VC++ with a ten foot pole unless I somehow became convinced of a radical change in MS's philosophy (unlikely)--which would require proving it, not just saying it. 
The language should be implemented correctly because it is an ISO standard and the future portability goals that give that weight--not because some popular library needs it or a large enough portion of the user base wants it. Generally speaking (not WRT Chaos), I'm willing to work with compilers that are trying to implement the standard--not just the parts of the standard that they agree with or want to. If a compiler does not have 100% conformance (in the limit) as a primary design goal, I'd rather that compiler fail. In the case of VC++, I no longer want the preprocessor to be fixed. Instead, I want VC++ to fall into disuse and die, however likely or unlikely that is. Developing, implementing, and using a standard is about the future of computing.

Look at Boost itself. It is an enormous mess of workarounds. The barrier to entry is enormously high almost entirely due to an edifice of compiler-dependent workarounds. With Boost, you don't target C++, you target the union (not intersection) of MS C++, GCC C++, clang C++, Sun C++, and so on. That's okay for something intended to be portable and practical at a specific point in time, but it stunts innovation even more than backwards compatibility requirements do. For me, Boost.Preprocessor is an ugly-but-practical solution that exists at a certain point of time. It has nowhere to go of technological significance given its requirements. Not that I'm suggesting this, but what would Boost look like if it was written to target *C++* in the theoretical case where such compilers existed? I believe it again would be a platform for innovation (in the technical C++ sense), which is something I think has been lost to some degree. At the very least, it would be clean and often elegant code. When either backward compatibility or compiler compatibility are fundamental design goals, eventually a library reaches a critical mass, can no longer innovate, and can barely be maintained. Not that I'm opposed to the modularization work (though I'd take hg over git any day of the week), but I'm not convinced that a lack of modularization is what is causing the majority of the underlying maintenance issues. (Perhaps modularization is an effective way to shelve those sub-libraries to make room for those sub-libraries that have not yet reached critical mass.)

WRT Chaos, the library was and is supposed to be a technical exercise used to explore what a preprocessor metaprogramming library could be in theory via the creation, development, and usage of advanced technique. As a preprocessor metaprogramming library, it *happens* to be practical in all situations where Boost.Preprocessor would be practical and the preprocessor *happens* to be good enough. Those cases (that I know of) include gcc and EDG-based compilers (last time I checked), and *almost* include clang. I haven't checked any others recently, so there may be more. So what motivation is there for me to make things more obnoxious to both implement and use given the fundamental purpose of the library? Chaos being in Boost gains me nothing which I consider worthwhile. It would only be altruistic work--which I don't mind, but I will not subvert the purpose of the library. Regards, Paul Mensonides

on Mon Oct 08 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:
There is another issue I have with submitting Chaos to Boost. Boost currently requires the use of .hpp file suffixes. However, like Boost.Preprocessor, but unlike all of the rest of Boost, Chaos is a C library also. The rule should be amended to allow for headers which are not specifically C++ headers. Chaos uses the generic .h for precisely this reason. Note that if .hpp is required, I simply will not submit the library. No hacks--including those introduced to work around rules which should not apply.
I'm not attached to that rule for non-C++ code. -- Dave Abrahams

on Fri Oct 05 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:
On Thu, 04 Oct 2012 21:31:48 -0400, Dave Abrahams wrote:
on Wed Sep 26 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:
Right, and what will change that sad fact? Attempting to work around it forever isn't working. Similarly, is the whole world supposed to not use variadic templates because VC++ doesn't implement them? At least that's likely to change, but a line needs to be drawn. It is one thing for a workaround to be an implementation detail. It is another when it affects the interface. In the latter case, IMO, the best thing to do is provide the interface that *should* exist and either not support the compilers that don't work or provide some clunkier interface for those compilers. Doing otherwise cripples technological advancement.
I'll mention again that I think we should accept some libraries that won't work on broken compilers (like Chaos). "Boost works on this compiler" is an important selling point.
1) How is the modularization/git transition going?
I think https://groups.google.com/forum/?fromgroups=#!topic/ryppl-dev/12z1ahWsNtk contains the best answers to that question so far. Short answer: quite well; we just need to keep moving it forward.
2) For a macro library, do we still need to have a BOOST_ prefix or could I just keep the CHAOS_PP_ prefix? I cannot use BOOST_PP_ without bending over backwards to find contrived names for everything, and the namespace of brief names beginning with BOOST_ is tiny--especially when a library provides hundreds of user-interface (i.e. not implementation detail macros).
I don't have a strong position on this one, but will point out that there are other options such as BOOST_CHAOS_ or BOOST_CPP_
A quick example:
#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/control/if.h>
#include <chaos/preprocessor/facilities/empty.h>
#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>
#include <chaos/preprocessor/tuple/eat.h>
// interface:
#define DELINEATE(c, sep, m, ...) \
    DELINEATE_S(CHAOS_PP_STATE(), c, sep, m, __VA_ARGS__) \
    /**/
#define DELINEATE_S(s, c, sep, m, ...) \
    DELINEATE_I(s, c, CHAOS_PP_EMPTY, sep, m, __VA_ARGS__) \
    /**/
// implementation:
#define DELINEATE_I(s, c, s1, s2, m, ...) \
    CHAOS_PP_IF(c)( \
        DELINEATE_II, CHAOS_PP_EAT \
    )(CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
        CHAOS_PP_DEC(c), s1, s2, m, __VA_ARGS__) \
    /**/
#define DELINEATE_INDIRECT() DELINEATE_I
#define DELINEATE_II(_, s, c, s1, s2, m, ...) \
    CHAOS_PP_EXPR_S _(s)(DELINEATE_INDIRECT _()( \
        s, c, s2, s2, m, __VA_ARGS__ \
    )) \
    m _(s, c, __VA_ARGS__) s1() \
    /**/
// reusable:
#define REPEAT(c, m, ...) DELINEATE(c, CHAOS_PP_EMPTY, m, __VA_ARGS__)
#define REPEAT_S(s, c, m, ...) \
    DELINEATE_S(s, c, CHAOS_PP_EMPTY, m, __VA_ARGS__) \
    /**/
What's the point of showing REPEAT here? It doesn't seem to be used below. Are you just illustrating that you can build REPEAT and ENUM using the same foundation?
#define ENUM(c, m, ...) DELINEATE(c, CHAOS_PP_COMMA, m, __VA_ARGS__)
#define ENUM_S(s, c, m, ...) \
    DELINEATE_S(s, c, CHAOS_PP_COMMA, m, __VA_ARGS__) \
    /**/
// recursive:
#define TTP(s, n, id) \
    template<CHAOS_PP_EXPR(ENUM( \
        CHAOS_PP_INC(n), class CHAOS_PP_EAT, ~ \
Did CHAOS_PP_INC come in with CHAOS_PP_DEC? I don't see an #include for it.
    ))> class id ## n \
    /**/
CHAOS_PP_EXPR(ENUM(3, TTP, T))
-> template<class> class T0 , template<class, class> class T1 , template<class, class, class> class T2
Complicated?...yes. But compare that to the implementation of BOOST_PP_REPEAT. Unlike BOOST_PP_REPEAT, however, this macro can be recursively reentered about 500 times. The actual implementations of these in Chaos are far fancier.
I went to look at the code and found the mercurial repo doesn't appear to contain most of chaos, which was a little confusing. Just thought it would be worth telling people to look at the CVS. -- Dave Abrahams

On 10/9/2012 6:07 PM, Dave Abrahams wrote:
A quick example:
#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/control/if.h>
#include <chaos/preprocessor/facilities/empty.h>
#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>
#include <chaos/preprocessor/tuple/eat.h>
// interface:
#define DELINEATE(c, sep, m, ...) \
    DELINEATE_S(CHAOS_PP_STATE(), c, sep, m, __VA_ARGS__) \
    /**/
#define DELINEATE_S(s, c, sep, m, ...) \
    DELINEATE_I(s, c, CHAOS_PP_EMPTY, sep, m, __VA_ARGS__) \
    /**/
// implementation:
#define DELINEATE_I(s, c, s1, s2, m, ...) \
    CHAOS_PP_IF(c)( \
        DELINEATE_II, CHAOS_PP_EAT \
    )(CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
        CHAOS_PP_DEC(c), s1, s2, m, __VA_ARGS__) \
    /**/
#define DELINEATE_INDIRECT() DELINEATE_I
#define DELINEATE_II(_, s, c, s1, s2, m, ...) \
    CHAOS_PP_EXPR_S _(s)(DELINEATE_INDIRECT _()( \
        s, c, s2, s2, m, __VA_ARGS__ \
    )) \
    m _(s, c, __VA_ARGS__) s1() \
    /**/
// reusable:
#define REPEAT(c, m, ...) DELINEATE(c, CHAOS_PP_EMPTY, m, __VA_ARGS__)
#define REPEAT_S(s, c, m, ...) \
    DELINEATE_S(s, c, CHAOS_PP_EMPTY, m, __VA_ARGS__) \
    /**/
What's the point of showing REPEAT here? It doesn't seem to be used below. Are you just illustrating that you can build REPEAT and ENUM using the same foundation?
It is just a tacit illustration of generality and reusability. Doing the same a la Boost.Preprocessor has far greater implications. In fact, in Boost.Preprocessor, there are relatively few core algorithms--each of which has its own recursion state. Then there are a few derived algorithms which are hacked to look like they use one of those states. Lastly, there are a bunch of algorithms implemented in terms of the more low-level ones. None of those are reentrant. The above redirection, on the other hand, has no usability consequences other than that a natively built REPEAT or ENUM would be /slightly/ faster due to not passing around and invoking the separators.

Chaos' recursion backend is just a bunch of macros that look like this: #define EXPR_1(...) __VA_ARGS__ #define EXPR_2(...) __VA_ARGS__ #define EXPR_3(...) __VA_ARGS__ // ...

When a macro such as MACRO(args) is called (provided args is used in the replacement list without being an operand of # or ##), args is scanned for macro replacement once on "entry" to MACRO (where a disabling context on MACRO does not yet exist) and again after MACRO(args) has been replaced by MACRO's (parameterized) replacement list (where a disabling context on MACRO does exist). What Chaos does, fundamentally, is control /when/ a macro is invoked. Given a function-like macro A(args), DEFER(A)(args) causes the invocation of A to /not/ happen through one scan for macro replacement. I.e. the result of scanning DEFER(A)(args) for macro replacement is A(args). A subsequent scan of that result will cause A(args) to be replaced.

Chaos then uses this to cause an algorithmic step to bootstrap into the next step. For example, assuming it is going to continue, B(1, args) expands to something that looks like EXPR_2(B_ID()(2, args)) where B_ID() just expands to B when replaced (used to avoid blue paint). When you say EXPR_1(B(1, args)), the scan on entry to EXPR_1 yields EXPR_2(B_ID()(2, args)). When that sequence of tokens is scanned again, after EXPR_1(B(1, args)) is replaced, the disabling context for B no longer exists (though now one for EXPR_1 does exist). That scan proceeds to cause EXPR_2(B_ID()(2, args)) to expand and now the entire process is self-bootstrapping provided there are enough EXPR_s macros to terminate.

The actual algorithms in Chaos tend to be a lot more complicated than this. For example, the REPEAT et al algorithms generate their repetitions this way, but trampoline all of the higher-order calls back to the beginning. E.g. EXPR_s(REPEAT_S(s, 3, M)) pseudo-expands to M(s + 1, 0) M(s + 1, 1) M(s + 1, 2) where the state parameter is always s + 1. This type of thing occurs when the number of steps required by the algorithm is bounded by the input in a non-higher-order way (and when it isn't too complicated). For example, something like EXPR_s(FOLD_LEFT_S(s, op, seq, x)) generates op(s + 1, seq[n - 1], ... op(s + 1, seq[1], op(s + 1, seq[0], x)) ... ) prior to expanding any of it. It can do this because the input sequence (not the higher-order op argument) determines the length of the algorithm.

Hence the name "Chaos". That doesn't come from "random" or "messy". It is a reference to chaos theory where a relatively simple set of rules and initial conditions can produce a complex result. Aside from the enormous number of workarounds in Boost.Preprocessor, the complexity of Chaos' implementation is drastically higher than that of Boost.Preprocessor.
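For anyone who wants to play with the idea outside of Chaos, here is a minimal self-contained sketch of the same deferral trick. To be clear, this is not Chaos's implementation: every name below is invented for the example, it uses a single exponential EVAL tree instead of the numbered EXPR_s backend described above, and it assumes a conforming preprocessor (gcc, clang, EDG)--treat it as a sketch only.

// deferral primitives: emit a macro name *without* letting it be invoked yet
#define EMPTY()
#define DEFER(id) id EMPTY()
#define OBSTRUCT(...) __VA_ARGS__ DEFER(EMPTY)()
#define EXPAND(...) __VA_ARGS__
#define EAT(...)

#define PRIMITIVE_CAT(a, ...) a ## __VA_ARGS__

// minimal boolean machinery: BOOL(0) is 0, BOOL(anything else) is 1
#define CHECK_N(x, n, ...) n
#define CHECK(...) CHECK_N(__VA_ARGS__, 0,)
#define PROBE(x) x, 1,
#define NOT(x) CHECK(PRIMITIVE_CAT(NOT_, x))
#define NOT_0 PROBE(~)
#define COMPL(b) PRIMITIVE_CAT(COMPL_, b)
#define COMPL_0 1
#define COMPL_1 0
#define BOOL(x) COMPL(NOT(x))
#define IIF(c) PRIMITIVE_CAT(IIF_, c)
#define IIF_0(t, ...) __VA_ARGS__
#define IIF_1(t, ...) t
#define IF(c) IIF(BOOL(c))
#define WHEN(c) IF(c)(EXPAND, EAT)

// saturating decrement, only as far as this example needs (extend as needed)
#define DEC(x) PRIMITIVE_CAT(DEC_, x)
#define DEC_0 0
#define DEC_1 0
#define DEC_2 1
#define DEC_3 2
#define DEC_4 3

// each level pushes its argument through several extra scans;
// nesting the levels multiplies the number of scans available
#define EVAL(...)  EVAL1(EVAL1(EVAL1(__VA_ARGS__)))
#define EVAL1(...) EVAL2(EVAL2(EVAL2(__VA_ARGS__)))
#define EVAL2(...) EVAL3(EVAL3(EVAL3(__VA_ARGS__)))
#define EVAL3(...) __VA_ARGS__

// REPEAT defers its own recursive call (via the _INDIRECT thunk) so that the
// call is only completed on a later scan, after the disabling context on
// REPEAT has expired
#define REPEAT_INDIRECT() REPEAT
#define REPEAT(count, macro, ...) \
    WHEN(count) \
    ( \
        OBSTRUCT(REPEAT_INDIRECT)()(DEC(count), macro, __VA_ARGS__) \
        OBSTRUCT(macro)(DEC(count), __VA_ARGS__) \
    )

#define M(n, prefix) prefix(n);
EVAL(REPEAT(3, M, callback)) // -> callback(0); callback(1); callback(2);

The important part is REPEAT_INDIRECT plus OBSTRUCT: the recursive call is emitted as plain tokens, and only a later scan (supplied here by EVAL, or by the EXPR_s backend in Chaos) actually invokes it.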
One has to know what they are doing when directly implementing an algorithm (rather than using those already provided). Under the right circumstances, with bad input, you can get bad output which is way larger than anything Boost.Preprocessor can generate and way larger than even template instantiation error messages. In some cases, there is so much headroom that we're talking death by memory exhaustion. I.e. assuming infinite memory, it is possible to construct an invocation that won't terminate for millennia.

Assuming that they are used in order, the library can do a binary search to find the next available index (which the library calls s for "state"). This is like the "automatic recursion" in Boost.Preprocessor except that it is more efficient. Though it isn't necessary, the library distinguishes between higher-order algorithms and non-higher-order algorithms. A higher-order algorithm ALGO is natively invoked as EXPR(ALGO(...)), which deduces the state, or EXPR_S(s)(ALGO_S(s, ...)), which does not.

Non-higher-order algorithms are implemented under what the library calls "bypass semantics". They may not use higher-order algorithms in their implementation. What they do is start at the end of the recursion backend (i.e. EXPR_512 or whatever it is right now) and go backward. Because these algorithms are non-higher-order, they cannot be reentrant. A non-higher-order algorithm ALGO is natively invoked as ALGO(...) by "top-level" code or ALGO_BYPASS(s, ...) when being used by another non-higher-order algorithm. The restrictions on algorithms operating under bypass semantics are such that "normal" algorithms can always deduce the state (i.e. the binary search doesn't break) and so they can assume that they have all of the remaining backend to use (sometimes creatively). Higher-order algorithms, on the other hand, may use the non-higher-order algorithms.

The library goes on to make a distinction between algorithmic steps (i.e. related to EXPR_s and the recursion backend) and algorithmic entry points. A call such as EXPR(REPEAT(3, M)) can have M reenter REPEAT around 500 times recursively. However, while a large number of steps is sometimes necessary, that many entry points is not. To that end, the library defines about 16 separate entry point thunks which essentially translate AUTO_REPEAT(...) into EXPR(REPEAT(...)). However, there are only a low number of those that are shared over the entire library, so one top-level call implemented entirely using the AUTO_ macros can only have a reentry depth of 16 at maximum--which is usually more than enough. There are only a low number of these thunks because each AUTO_ macro has to replicate macros to enable them for that macro. The replication could be avoided if the syntax was, for example, AUTO(REPEAT, ...) translating to EXPR(REPEAT(...)), though Chaos doesn't provide that.
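To make the reentrancy point concrete with the REPEAT/ENUM wrappers shown earlier in this thread, something along these lines should work (hypothetical and untested; FN, PARAM, fn, and arg are invented names): the macro handed to the outer algorithm receives the state s and re-enters through EXPR_S(s)(ENUM_S(s, ...)) instead of re-deducing the state.

// inner (reentrant) enumeration: the parameter list of one declaration
#define PARAM(s, i, _) int arg ## i

// outer repetition: one function declaration per n, re-entering ENUM_S at
// the state s that the algorithm passes in
#define FN(s, n, _) \
    void fn ## n( \
        CHAOS_PP_EXPR_S(s)(ENUM_S(s, CHAOS_PP_INC(n), PARAM, ~)) \
    ); \
    /**/

CHAOS_PP_EXPR(REPEAT(3, FN, ~))
// -> void fn0(int arg0); void fn1(int arg0 , int arg1);
//    void fn2(int arg0 , int arg1 , int arg2);

The TTP example above does the same thing with the deduced form (CHAOS_PP_EXPR); the _S form shown here is the one that skips the deduction.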
#define ENUM(c, m, ...) DELINEATE(c, CHAOS_PP_COMMA, m, __VA_ARGS__)
#define ENUM_S(s, c, m, ...) \
    DELINEATE_S(s, c, CHAOS_PP_COMMA, m, __VA_ARGS__) \
    /**/
// recursive:
#define TTP(s, n, id) \
    template<CHAOS_PP_EXPR(ENUM( \
        CHAOS_PP_INC(n), class CHAOS_PP_EAT, ~ \
Did CHAOS_PP_INC come in with CHAOS_PP_DEC? I don't see an #include for it.
No. I missed it. It actually comes in (at least) via recursion/expr.h because that defines the NEXT(s) and PREV(s) macros which are implemented in terms of INC and DEC.
    ))> class id ## n \
    /**/
CHAOS_PP_EXPR(ENUM(3, TTP, T))
-> template<class> class T0 , template<class, class> class T1 , template<class, class, class> class T2
Complicated?...yes. But compare that to the implementation of BOOST_PP_REPEAT. Unlike BOOST_PP_REPEAT, however, this macro can be recursively reentered about 500 times. The actual implementations of these in Chaos are far fancier.
I went to look at the code and found the mercurial repo doesn't appear to contain most of chaos, which was a little confusing. Just thought it would be worth telling people to look at the CVS.
I disabled the hg repo for the moment. I was starting a new version, started debating whether to include a lambda mechanism, started playing with alternate lambda mechanisms, and ran out of time pending work I have to do for my actual job. The library as it currently exists is in CVS and is stable. If Chaos' integration into Boost proceeds, I will shelve the hg repository at Sourceforge and develop the Boost version in a private hg repo pending Boost's git migration. On the other hand, if Chaos' integration into Boost does not proceed, I will develop the next iteration of the library in the hg repo at Sourceforge.
Regards, Paul Mensonides

on Wed Oct 10 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:
On 10/9/2012 6:07 PM, Dave Abrahams wrote:
What's the point of showing REPEAT here? It doesn't seem to be used below. Are you just illustrating that you can build REPEAT and ENUM using the same foundation?
It is just a tacit illustration of generality and reusability. Doing the same a la Boost.Preprocessor has far greater implications. In fact, in Boost.Preprocessor, there are relatively few core algorithms--each of which has its own recursion state. Then there are a few derived algorithms which are hacked to look like they use one of those states. Lastly, there are a bunch of algorithms implemented in terms of the more low-level ones. None of those are reentrant. The above redirection, on the other hand, has no usability consequences other than that a natively built REPEAT or ENUM would be /slightly/ faster due to not passing around and invoking the separators.
Very nice :-) Sorry, TL;DR on the rest of the message right now but I will try to come back to it in coming days. -- Dave Abrahams

On 10/04/2012 09:31 PM, Dave Abrahams wrote:
on Wed Sep 26 2012, Paul Mensonides<pmenso57-AT-comcast.net> wrote:
Right, and what will change that sad fact? Attempting to work around it forever isn't working. Similarly, is the whole world supposed to not use variadic templates because VC++ doesn't implement them? At least that's likely to change, but a line needs to be drawn. It is one thing for a workaround to be an implementation detail. It is another when it affects the interface. In the latter case, IMO, the best thing to do is provide the interface that *should* exist and either not support the compilers that don't work or provide some clunkier interface for those compilers. Doing otherwise cripples technological advancement.
I'll mention again that I think we should accept some libraries that won't work on broken compilers (like Chaos). "Boost works on this compiler" is an important selling point.
Equally important is that Boost is targeted to the C++ standard by the very nature of what Boost sets out to be. If some compiler does not follow the C++ standard in some area, that is no reason for not accepting a library which does not intend to find workarounds for that compiler in that area. This is more so the case if the compiler which does not follow the C++ standard in that particular area makes no effort to fix its problems for that area. The latter is clearly the case for VC++ and the C++ preprocessor. So having a library, such as Chaos, which follows the C++ standard as it relates to the C++ preprocessor seems an easy decision to me on Boost's part, if Paul wanted to submit his library as part of Boost.

On 09/24/2012 04:15 PM, Kitten, Nicholas wrote:
Hi Folks,
First of all, I'd like to say "thank you" to Edward Diener and Paul Mensonides for incorporating variadics support into Boost.Preprocessor, which has been on my wishlist for a long time.
Thanks !
In fact, I wanted it badly enough that I've been using my own slightly modified version of the library assuming variadics for a while now, and I was interested to see what a cross-platform version would look like (mine was only tested on MSVC). I think the chosen approach of converting variadics to non-variadics is probably superior to the breaking algorithm changes I made in my own version, and I would humbly suggest a few more additions in the same spirit.
The reason I chose to use variadics in the first place was to simplify the user interface for the common case of manipulating what I'll call a "variadic sequence," (a series of concatenated tuples), which I wanted to look similar to the following:
#define FILTER_PARAMETERS (A, int, 3)\ (B, float, 2.0f)\ (C, const char*, "Howdy!")
Of course, without variadics you need either a set of macros for each tuple size, or else double parenthesis around each tuple, which becomes even more burdensome when users are also required to parenthesize inner arguments, leading to gems like this one:
#define FILTER_PARAMETERS ((A, (TypeConverter<double, float>), (TypeConverter<double,float>()))
My solution admitting the preferred format was to replace most private SEQ_*(X) macros with SEQ_*(...), which also required modifying SEQ_ELEM() and SEQ_HEAD() to return (X) instead of X. However, it occurs to me now that a conversion function to go from variadic to regular sequences would've been easier to implement and backwards compatible; see the attached file, which I've tested in basic cases on MSVC 10 and gcc-4.3.4. The usage would be:
#define SEQ (a,b,c)(1,2,3,4) BOOST_PP_VARIADIC_SEQ_TO_SEQ( SEQ ) // expands to ((a,b,c))((1,2,3,4))
I'm submitting this because I couldn't find a way to use some combination of TUPLE/VARIADICS and SEQ transformations to do the same thing succinctly, but if I'm just missing something, please let me know.
Next, I would suggest having a documented IS_EMPTY(...) function for detecting an empty variadic list. Since is_empty.hpp already exists for single arguments (though undocumented), I would guess the variadic version would look quite similar. My own (MSVC) version is attached.
Please see my Variadic Macro Library in the sandbox for some further programming efforts with variadic macros. The IS_EMPTY(...) there is largely taken from one of Paul's posts on the Internet. As Paul has pointed out many times, no completely perfect implementation of IS_EMPTY(...) is possible. In my VMD library I attempted some clever preprocessor programming with variadic macros, knowing that even though IS_EMPTY(...) is always flawed, an understanding of how and when it is flawed might allow some ideas to work if one avoids the flawed situations. Also, dealing with VC++ is a real PITA, but I tried my best to work around the VC++ preprocessor. However, I would not wish the VC++ preprocessor on anyone who needs to do serious preprocessor programming. It really is too bad that MS has taken the road of supporting an essentially broken implementation because it works most of the time if one does fairly simple things with it, even while they know it is broken according to the C/C++ standard.
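For readers who have not seen it, the commonly circulated shape of such an emptiness check looks roughly like the following. This is only a sketch from memory--not the VMD implementation, and every PP_-prefixed name is invented for the example--and it assumes the preprocessor accepts an empty argument list for "...", which common compilers do on conforming settings; the trailing comment points at the kind of flawed situation being referred to.

// 1 if the (top-level) arguments contain a comma, 0 otherwise
#define PP_ARG16(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, ...) _15
#define PP_HAS_COMMA(...) PP_ARG16(__VA_ARGS__, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0)
// expands to a comma only when it ends up directly followed by parentheses
#define PP_TRIGGER_PARENTHESIS_(...) ,
#define PP_PASTE5(_0, _1, _2, _3, _4) _0 ## _1 ## _2 ## _3 ## _4
#define PP_IS_EMPTY_CASE_0001 ,
#define PP_IS_EMPTY_I(_0, _1, _2, _3) \
    PP_HAS_COMMA(PP_PASTE5(PP_IS_EMPTY_CASE_, _0, _1, _2, _3))
#define PP_IS_EMPTY(...) \
    PP_IS_EMPTY_I( \
        /* 1: do the arguments themselves contain a comma? */ \
        PP_HAS_COMMA(__VA_ARGS__), \
        /* 2: do the arguments begin with a parenthesis? */ \
        PP_HAS_COMMA(PP_TRIGGER_PARENTHESIS_ __VA_ARGS__), \
        /* 3: does appending () to the arguments produce a comma? */ \
        PP_HAS_COMMA(__VA_ARGS__ ()), \
        /* 4: only truly empty input yields the 0001 pattern overall */ \
        PP_HAS_COMMA(PP_TRIGGER_PARENTHESIS_ __VA_ARGS__ ()))

PP_IS_EMPTY()        // 1
PP_IS_EMPTY(a, b)    // 0
PP_IS_EMPTY((a, b))  // 0
// Known flawed situation: if the argument ends in the name of a
// function-like macro, tests 3 and 4 invoke it, so the result (or an
// error, if that macro needs more arguments) depends on what it expands
// to--exactly the kind of case that has to be avoided by convention.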
participants (9)
- Dave Abrahams
- Edward Diener
- Giovanni Piero Deretta
- Kitten, Nicholas
- Lorenzo Caminiti
- Matt Calabrese
- Paul A. Bristow
- paul Fultz
- Paul Mensonides