
-----Original Message-----
From: boost-bounces@lists.boost.org On Behalf Of Andy Little
The preprocessor isn't doing the work. It is just generating the code that does the work. Use of the preprocessor (here) is just making all that code write itself based on an initial sequence of fundamental units.
Unfortunately it won't compile on VC7.1, though I preprocessed it.
Yeah, that doesn't surprise me. There is even a hack in place for Comeau. The 'multiply' and 'divide' metafunctions shouldn't be necessary for the return types of the 'quantity' multiplicative operators. You should be able to say (e.g.) DIM(D1 * D2). However, EDG's overly eager substitution failure (i.e., the substitution failure of SFINAE) bails at nearly the first sign of an expression. That's really quite annoying, as SFINAE is not being manipulated here at all (other than the normal overloading of template functions).
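To illustrate the hack, here is a hypothetical sketch of the shape of the workaround (my illustration, not the code from this thread; all names are made up): the return type is spelled with an explicit metafunction rather than an expression, so EDG never has to substitute into DIM(D1 * D2).

    // Hypothetical sketch of the workaround; names are illustrative only.
    template<int L, int T> struct dim {};        // length/time exponents only

    template<class D1, class D2> struct multiply;
    template<int L1, int T1, int L2, int T2>
    struct multiply< dim<L1, T1>, dim<L2, T2> > {
        typedef dim<L1 + L2, T1 + T2> type;      // multiplying adds exponents
    };

    template<class D> struct quantity { double value; };

    // Return type via the metafunction instead of an expression:
    template<class D1, class D2>
    quantity<typename multiply<D1, D2>::type>
    operator*(quantity<D1> a, quantity<D2> b) {
        quantity<typename multiply<D1, D2>::type> r = { a.value * b.value };
        return r;
    }

    // 'divide' is analogous, subtracting exponents.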
The time element of the dimension works out as being to the power -(2 1/2), FWIW!
You can certainly implement a static rational with a little template metaprogramming. Reducing rational values with template metaprogramming would be expensive in general, but here we're mostly talking about very "small" rationals. You'd just end up with two "fields" per dimension (numerator and denominator) instead of one, and you'd have to use them as rational values. E.g. the way that I wrote it, you basically have:
dim<mass, time, ...>
...where 'mass' and 'time' are integers. Instead, you have to have 'mass' and 'time' be types that represent rational values (such as 'rational<-5, 2>').
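A minimal sketch of what such a static rational might look like (my own illustration, assuming nothing beyond the rational<-5, 2> spelling above; sign normalization of negative denominators is omitted for brevity):

    // Compile-time gcd for reducing rationals.
    template<long A, long B> struct static_gcd {
        static const long value = static_gcd<B, A % B>::value;
    };
    template<long A> struct static_gcd<A, 0> {
        static const long value = A < 0 ? -A : A;
    };

    template<long N, long D = 1> struct rational {
        static const long num = N / static_gcd<N, D>::value;
        static const long den = D / static_gcd<N, D>::value;
    };

    // Multiplying quantities now *adds* rational exponents:
    // n1/d1 + n2/d2 = (n1*d2 + n2*d1) / (d1*d2), reduced by 'rational'.
    template<class R1, class R2> struct rational_add {
        typedef rational<R1::num * R2::den + R2::num * R1::den,
                         R1::den * R2::den> type;
    };

    // rational<-5, 2> is the -(2 1/2) time exponent mentioned above.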
Right (and I understand it's for fun), but presumably there is unlikely to be an advantage over using templates in compilation speed, and certainly not in legibility?
The above *is* using templates. I personally think that it is more legible (it's a DSL, after all) in ad hoc situations. E.g.

    DIM(length / (time * time)) acceleration;

vs.

    typedef divide<length, multiply<time, time>::type>::type acceleration;

In scenarios where such things are built in stages, it hardly matters:

    DIM(length / time) velocity;
    DIM(velocity / time) acceleration;

vs.

    typedef divide<length, time>::type velocity;
    typedef divide<velocity, time>::type acceleration;

In such situations there probably isn't a huge advantage over using regular templates. Furthermore, such situations would likely be more common. As far as compilation speed goes, the DIM version is probably faster--it does less template instantiation. However, for most cases where such a library is needed, the actual amount of compile-time calculation is minimal (unless you're doing rationals).
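For the curious, here is roughly how an expression-based DIM can work. This thread predates C++11, so the real macro presumably wrapped a typeof-style compiler extension; with decltype the same idea is a one-liner (again, a sketch, not the actual library code):

    template<int L, int T> struct dim {};

    // Declarations suffice: DIM uses the operators in an unevaluated
    // context, so they are never called at run time.
    template<int L1, int T1, int L2, int T2>
    dim<L1 + L2, T1 + T2> operator*(dim<L1, T1>, dim<L2, T2>);

    template<int L1, int T1, int L2, int T2>
    dim<L1 - L2, T1 - T2> operator/(dim<L1, T1>, dim<L2, T2>);

    #define DIM(expr) decltype(expr)

    dim<1, 0> length;
    dim<0, 1> duration;   // 'time' would clash with the C library function

    DIM(length / (duration * duration)) acceleration;   // dim<1, -2>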
FWIW, I have finally started using the preprocessor library in pqs. I needed to generate a large number of powers of 10.
The preprocessor code I ended up with is basically copied from http://boost-consulting.com/tmpbook/preprocessor.html, which I found thanks to the useful link in the preprocessor docs ;-). It seems to work well, but I don't make any claim to really understand how it works! (Note: I'm quite happy to leave it that way, too, unless I really need to know otherwise ;-))
It isn't important that you understand how it works. It is important that you understand what each of the pieces is doing.
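For illustration, the kind of generation Andy describes might look something like this (a guess at the shape, not the actual pqs code): pre-C++11 there is no way to compute floating-point powers of ten at compile time, so the literals are produced by token-pasting each index onto 1e.

    #include <boost/preprocessor/repetition/repeat.hpp>
    #include <boost/preprocessor/cat.hpp>

    template<int N> struct power10;

    // BOOST_PP_CAT(1e, 3) pastes to the literal 1e3, and so on.
    #define POWER10_SPEC(z, n, unused)                            \
        template<> struct power10<n> {                            \
            static double value() { return BOOST_PP_CAT(1e, n); } \
        };

    BOOST_PP_REPEAT(10, POWER10_SPEC, ~)   // power10<0> ... power10<9>
    #undef POWER10_SPEC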
That is the purpose of extracting the type. In a scheme like this, the user never even sees the actual types of these variables. Instead, the type is 'named' via an expression.
Presumably there is a chance that these end up in the executable, though, which could tend to get messy?
Given a horrible compiler, yes. Even so, we're only talking about one byte per variable.
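(The one byte comes from the fact that an object of an empty class type still has nonzero size, so a poor compiler may actually reserve storage for it. A small illustration, with made-up names:)

    struct length_tag {};            // a dimension marker with no data

    int main() {
        length_tag length;           // sizeof(length) == 1 on typical ABIs
        (void)length;                // a decent compiler elides it entirely
        return 0;
    }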
[...]
All of this is slightly more complex than what I wrote (which is nowhere near being a full-fledged physical quantities library), but it is all possible.
I have to admit that I prefer to use the preprocessor as a last resort; however, that might be seen as an improvement on my previous position, which was to avoid its use altogether if possible.
BTW, the preprocessor metaprogramming used in the example isn't necessary. Rather, it was easier to write ENUM_PARAMS(7, int X) than to write int X0, int X1, etc. This isn't a scenario where the library needs to expand (or contract) based on user configuration. Instead, the number of fundamental units is small and more-or-less fixed.
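Concretely, that refers to BOOST_PP_ENUM_PARAMS; e.g. (a hypothetical seven-unit dim, just to show the expansion):

    #include <boost/preprocessor/repetition/enum_params.hpp>

    // BOOST_PP_ENUM_PARAMS(7, int X) expands to:
    //   int X0, int X1, int X2, int X3, int X4, int X5, int X6
    template<BOOST_PP_ENUM_PARAMS(7, int X)>
    struct dim {};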
Nevertheless, I have found the Boost.Preprocessor library really useful recently, as said above, and also just plain impressive, because it seems to make use of the preprocessor a bit more like a traditional programming language, hence easier to understand or even just plain usable.
Just remember that overly broad guidelines (like "avoid macros") actually hurt programming more than they help. There are indeed bad categories of macro use, and guidelines need to target those categories specifically. The same is true of a lot of different things in programming. For example, throwing OO at every problem is not a good idea. Nor is throwing genericity at every problem. All of these things are good or bad only in relation to the alternatives. No programming idiom is inherently good or bad. Of course, that implies that you have to know the alternatives and that you have to be smart (or, more accurately, wise) in your deployment of an idiom or technique. There are always tradeoffs (if there aren't, you're overlooking at least one alternative).

Regards,
Paul Mensonides