
-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Andrei Alexandrescu (See Website For Email)
>> What high integration are you referring to here? I don't see much of a high integration between template metaprogramming and the rest of the language. In fact, there is very little integration. Instead, it's more like they coexist without interfering with each other, and communication is entirely one way. Even with the type system, the pattern-matching capabilities are limited to (in essence) syntactic matching rather than more general property matching. You have to bend over backwards to do that (read: SFINAE).
> It's not syntactic any day of the week. Not at *all*.
There is a near-direct correlation between syntax and the mechanism's matching capabilities. I don't mean that it literally is syntactic matching. It can only match those type properties that can be formed syntactically--and it cannot even match all of those. Furthermore, the primary utility of that pattern matching is with templates themselves, not the "non-template part of C++." The template mechanism alone (meaning without major hacks) cannot do even the simplest manipulation of the non-template part. Even with major hacks, it can still only do a very limited amount of such manipulation.

Other than a few commonalities (like name binding), there is virtually no integration between the two. The non-template part can statically cause the instantiation of a template and dynamically use it, and the template mechanism can statically introduce non-template code, but the template mechanism cannot statically invoke the non-template part, nor can the non-template part dynamically cause the instantiation of a template. There are only two kinds of communication, and, from the template side in particular, the communication is one-way. That isn't integration; that is a layered separation. For metaprogramming, the template mechanism is usable (and a lot of interesting things can be done with it), but it is not robust.
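To make the "bend over backwards" point concrete, here is roughly what matching a property like "T has a nested type named iterator" looks like--a minimal sketch using the usual sizeof/SFINAE trick (has_iterator is a made-up name, not a library facility):

    #include <vector>

    // Detects whether T has a nested type named 'iterator'.
    template<class T>
    struct has_iterator {
        typedef char yes;            // sizeof == 1
        typedef char (&no)[2];       // sizeof == 2

        // This overload participates only if U::iterator is a valid type;
        // otherwise SFINAE silently removes it from the overload set.
        template<class U> static yes test(typename U::iterator*);
        template<class U> static no  test(...);

        static bool const value = sizeof(test<T>(0)) == sizeof(yes);
    };

    int const check1 = has_iterator<std::vector<int> >::value;  // 1
    int const check2 = has_iterator<int>::value;                // 0

The property has to be encoded as an overload-resolution side effect because the pattern matching itself has no way to express it directly.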
> Templates perform genuine recursive pattern matching. They feed on types and constants and offer types and constants back. I see good integration there.
The mechanism is systematic, but that doesn't say anything about its integration with the non-template part of C++.
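For reference, the kind of recursive pattern matching in question looks roughly like this--a purely illustrative sketch in which partial specializations match the structural shape of a cons-style typelist, consuming types and producing a constant:

    struct nil {};                                   // empty typelist
    template<class Head, class Tail> struct cons {}; // non-empty typelist

    template<class List> struct length;              // primary template

    template<> struct length<nil> {                  // matches the empty list
        static int const value = 0;
    };

    template<class Head, class Tail>
    struct length< cons<Head, Tail> > {              // matches head/tail shape
        static int const value = 1 + length<Tail>::value;
    };

    // length< cons<int, cons<char, cons<double, nil> > > >::value == 3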
> If, on the other hand, you want to discuss the limited amount of introspection C++ offers, that's a different subject.
No, it really isn't a different subject. We are talking about metaprogramming, not simplistic type substitution ("regular" template usage). There are two purposes of metaprogramming: introspection and generation (i.e. generative metaprogramming). The template mechanism is only mediocre at both. BTW, I'm not saying that the preprocessor does better overall. The preprocessor is terrible at introspection, but (comparatively) excels at many types of generation. There are serious strengths and weaknesses in both mechanisms.
>> What idioms are those? This, BTW, is from the perspective of a user, not library internals. What is really the difference, idiom-wise, between fold(list, state), fold<list, state>, and FOLD(list, state)?
> I am referring to idioms that ask you to #define certain things and then invoke macros that will use them (or worse, include files that will use them!) and then #undef them
Which is no different from limiting scope in any other context. How many people, for example, complain about the internal linkage of local structures, which could otherwise be used for function objects (etc.)? When you write such a #define (in the context of metaprogramming), you are defining a function which, in the case above, is passed to a higher-order algorithm. The #undef merely controls the lifetime of that function explicitly. In some ways, the preprocessor treats such 'functions' as first-class objects better than the core language does.
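Concretely, the idiom looks roughly like this (a sketch using Boost.Preprocessor's sequence fold; ADD is a made-up operation): the #define introduces the 'function', the higher-order algorithm consumes it, and the #undef ends its lifetime:

    #include <boost/preprocessor/seq/fold_left.hpp>

    // The 'function': op(s, state, elem) is the signature that
    // BOOST_PP_SEQ_FOLD_LEFT expects of its operation argument.
    #define ADD(s, state, elem) (state + elem)

    // Expands to (((0 + 1) + 2) + 3)
    int const sum = BOOST_PP_SEQ_FOLD_LEFT(ADD, 0, (1)(2)(3));

    // Explicitly end the 'function's' lifetime.
    #undef ADD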
> , or that have you define files that rely on things like "I am iterating now through this file",
What is wrong with that? More specifically, what is wrong with viewing the preprocessing pass as a sort of script execution? That is exactly what it is. In fact, parsing non-preprocessor C++ is itself a sort of top-to-bottom script execution (e.g. function prototypes, forward declarations, etc.). Thinking of it any other way is, with C++ in its current form, fundamentally flawed.
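For anyone who hasn't seen it, the file-iteration idiom looks roughly like this (a sketch modeled on Boost.Preprocessor's file iteration; "tuple.hpp" and the tupleN templates are placeholders):

    // tuple.hpp
    #ifndef BOOST_PP_IS_ITERATING

    #  ifndef TUPLE_HPP_
    #  define TUPLE_HPP_

    #  include <boost/preprocessor/cat.hpp>
    #  include <boost/preprocessor/iteration/iterate.hpp>
    #  include <boost/preprocessor/repetition/enum_params.hpp>

       // Ask the preprocessor to re-include this very file for N = 1..3.
    #  define BOOST_PP_ITERATION_LIMITS (1, 3)
    #  define BOOST_PP_FILENAME_1       "tuple.hpp"
    #  include BOOST_PP_ITERATE()

    #  endif

    #else // "I am iterating through this file now"

    #  define N BOOST_PP_ITERATION()

       // One class template per iteration: tuple1<T0>, tuple2<T0, T1>, ...
       template<BOOST_PP_ENUM_PARAMS(N, class T)>
       struct BOOST_PP_CAT(tuple, N) { /* ... */ };

    #  undef N

    #endif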
> all that comma handling,
There is actually very little comma handling in real preprocessor code. Granted, there is some, but with variadics the difficulty of handling commas nearly disappears--as I showed you before. Having to write TYPELIST(int, (std::pair<int, int>), double) is not that much of a burden.

As I've also said before, the preprocessor's primary strength is its disassociation from the underlying language--for generation, the syntax and semantics of the generated language interfere.** That is also its primary weakness--it cannot do any realistic introspection into the syntax and semantics of the underlying language. Commas are a mildly annoying case precisely because a particular part of the syntax of the preprocessor collides with a particular part of the syntax of the underlying language.

** This is both a good thing and a bad thing. It is a good thing because you can directly express the generation; syntactic (and semantic) integration leads to roundabout generation. On the other hand, syntactic integration allows for uniform syntax--which, in turn, allows for infinite "meta-levels" (i.e. metacode that operates on metacode (etc.), which operates on runtime code).
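To illustrate the comma point, here is roughly how a nested comma is handled--a sketch assuming C99-style variadic macro support, with made-up macro names:

    #include <utility>

    // A nested comma would normally split macro arguments, so a type like
    // std::pair<int, int> is shielded in parentheses and unwrapped with a
    // variadic helper.
    #define UNPAREN(...) __VA_ARGS__
    #define DECLARE(name, type) UNPAREN type name;

    DECLARE(p, (std::pair<int, int>))   // -> std::pair<int, int> p;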
> the necessarily long names,
For library facilities, yes. You need to apply a namespace prefix. That isn't really that different from normal C++, except when actually designing the library facilities. In normal client C++, you can explicitly qualify names or manually bring them into scope. You can do the same things with the preprocessor. (There are also a variety of ways of locally shortening names without defining macros.)
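For example (a sketch; DECLARE_MEMBERS is a made-up 'function' macro), the library prefix plays the role of explicit qualification, and a short local alias (here done the obvious way, with a macro) plays the role of bringing a name into scope:

    #include <boost/preprocessor/repetition/repeat.hpp>

    // 'Function' macro for the repetition algorithm: expands to "int x0;" etc.
    #define DECLARE_MEMBERS(z, n, prefix) int prefix ## n;

    struct members_x {
        // Explicitly "qualified" with the library prefix...
        BOOST_PP_REPEAT(3, DECLARE_MEMBERS, x)
    };

    // ...or locally "brought into scope" under a shorter alias.
    #define REPEAT BOOST_PP_REPEAT

    struct members_y {
        REPEAT(3, DECLARE_MEMBERS, y)
    };

    #undef REPEAT
    #undef DECLARE_MEMBERS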
> the dynamic binding (a historical mistake that keeps on coming again and again in PL design! yuck!) etc.
...which hardly matters when there is no lexical scoping or namespaces/packages. Dynamic versus static binding is irrelevant when a symbol unambiguously refers to a distinct entity.
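To be clear about what "dynamic binding" means here: a macro's dependencies are looked up at the point of use rather than at the point of definition, which only matters if the same name can refer to different entities at different times (a sketch with made-up names):

    // MESSAGE is not bound when GREETING is defined; it is looked up each
    // time GREETING() is expanded.
    #define GREETING() MESSAGE()

    #define MESSAGE() "hello"
    char const * a = GREETING();   // -> "hello"
    #undef MESSAGE

    #define MESSAGE() "goodbye"
    char const * b = GREETING();   // -> "goodbye"
    #undef MESSAGE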
> IMHO those idioms' place is in programming language history books as "awkward first attempts".
One of the most effective aspects of the preprocessor, BTW, is that there is no fundamental distinction between input/output and argument/return value (i.e. A() at the top level produces output, but the A() in B(A()) produces a return value). This lack of distinction eliminates a great deal of cruft in the generation process. If you do it in another language (i.e. an external tool), you have to distinguish between them--which is more verbose. The preprocessor is naturally more domain-specialized for generation than existing external (general-purpose) tools. Granted, you could define such abstractions in another language (or define another language), but that would eventually lead you nearly full circle--you'd end up with a custom preprocessor that does similar things. Yes, by throwing out other important considerations, you could make it better.

Now, improving the preprocessor (as you and Joel both mention) is a much better alternative. However, I, for one, won't use non-standard extensions unless 1) there is no other way or 2) using them is drastically better than the alternative. There isn't much that you can do to the preprocessor to make it better to that extent. What that means is that the improvements need to be standardized--and the main thing preventing that is the common "wisdom" that the use of macros is evil--for whatever reasons (or lack of reasons). In other words, your approach leads to a circular non-solution. Denouncing the preprocessor is not going to lead to a better preprocessor. Improving the preprocessor requires first swaying the general opinion on preprocessor-like use (the hard part--an uphill battle) and then adding or updating features (the easy part).

Regards,
Paul Mensonides