
I was playing around with Dave's and Aleksey's dimensional analysis example, and I decided that I couldn't leave well enough alone. To that end, I decided to reimplement it so that combining units could be written using infix operator notation. Anyway, the attached file implements it so the following works (on EDG).

    #include "dim.hpp"
    #include <iostream>

    int main(void) {
        using namespace dim;
        using dim::time; // annoying names from C headers!
        DIM( length / time ) velocity;
        DIM( velocity / time ) acceleration;
        DIM( mass * acceleration ) force;
        quantity<float, DIM(mass)> m = 5.0f;
        quantity<float, DIM(acceleration)> a = 9.8f;
        quantity<float, DIM(force)> f = m * a;
        std::cout << "force = " << f.value() << '\n';
        return 0;
    }

I find syntax like

    DIM( mass * length / (time * time) ) force;

mildly amusing, so I thought I'd share.

Regards,
Paul Mensonides

Paul Mensonides wrote:
I was playing around with Dave's and Aleksey's dimensional analysis example, and I decided that I couldn't leave well enough alone. To that end, I decided to reimplement it so that combining units could be written using infix operator notation. Anyway, the attached file implements it so the following works (on EDG).
Neato! I could actually really use something like this that's extensible so I can define my own units and conversions between them. Maybe I'll fiddle with it some but time is tight at the moment (when isn't it?). -Dave

"David Greene" wrote
Paul Mensonides wrote:
I was playing around with Dave's and Aleksey's dimensional analysis example, and I decided that I couldn't leave well enough alone. To that end, I decided to reimplement it so that combining units could be written using infix operator notation. Anyway, the attached file implements it so the following works (on EDG).
Neato! I could actually really use something like this that's extensible so I can define my own units and conversions between them. Maybe I'll fiddle with it some but time is tight at the moment (when isn't it?).
**** Advert ****
No time to define your own units? No time to look up those pesky conversion factors? Now there is no need to define your own units or conversions ... pqs does it all for You! Do you need ready-made output of units of fractional dimension, such as Johnson-noise voltage density? Physical constants? Standardised, repeatable conversion factors? pqs features all this and lots, Lots More!
NEW! IMPROVED! Now includes detailed documentation, including a Getting Started section, tests and examples. Tested on VC7.1, VC8.0, gcc3.2 and gcc4.0.
pqs version 3.0.6 is in the Boost vault now: http://tinyurl.com/7m5l8
pqs is also in the Boost review queue. pqs is based on the SI as recommended by NIST.
~~~ Advert ~~~
regards
Andy Little

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Andy Little
pqs Version 3.0.6 is in the Boost vault Now: http://tinyurl.com/7m5l8
This (my small sample code) is in no way supposed to be a full-blown library, BTW. It is just for amusement. The things that I find amusing in it are that dimensions are specified using variables, not types (somewhat unusual), and the way that the reconstruction of type from variables occurs. Regards, Paul Mensonides

"Paul Mensonides" wrote
-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Andy Little
pqs Version 3.0.6 is in the Boost vault Now: http://tinyurl.com/7m5l8
This (my small sample code) is in no way supposed to be a full-blown library, BTW. It is just for amusement.
How about rational dimension elements rather than integers? Would that be possible using the preprocessor? This occurs rarely but needs to be possible. One example is in measuring electrical noise, where the units for noise-voltage density are Volts * (Hertz to_power -1/2). The time element of the dimension works out as being to power -(2 1/2), FWIW!
The things that I find amusing in it are that dimensions are specified using variables, not types (somewhat unusual), and the way that the reconstruction of type from variables occurs.
Is that the same as how Boost.Typeof works...?

    acceleration a;
    mass m;
    BOOST_TYPEOF(m * a) f;
    // or
    BOOST_AUTO(f, m * a);

regards
Andy Little

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Andy Little
This (my small sample code) is in no way supposed to be a full-blown library, BTW. It is just for amusement.
How about rational dimension elements rather than integers. Would that be possible using the preprocessor?
The preprocessor isn't doing the work. It is just generating the code that does the work. Use of the preprocessor (here) is just making all that code write itself based on an initial sequence of fundamental units.
This occurs rarely but needs to be possible. One example is in measuring electrical noise where units for noise-voltage density are
Volts * (Hertz to_power -1/2).
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Out of curiosity, how does this occur starting from SI's fundamental units?
The time element of the dimension works out as being to power -(2 1/2) FWIW!
You can certainly implement a static rational with a little template metaprogramming. Reducing rational values with template metaprogramming would be expensive in general, but here we're mostly talking about very "small" rationals. You'd just end up with two "fields" per dimension (numerator and denominator) instead of one, and you'd have to use them as rational values. E.g. the way that I wrote it, you basically have:

    dim<mass, time, ...>

...where 'mass' and 'time' are integers. Instead, you'd have 'mass' and 'time' be types that represent rational values (such as 'rational<-5, 2>').
The things that I find amusing in it are that dimensions are specified using variables, not types (somewhat unusual), and the way that the reconstruction of type from variables occurs.
Is that the same as how Boost.typeof works ...?
acceleration a; mass m;
BOOST_TYPEOF( m * a) f; //or BOOST_AUTO(f, m * a);
regards Andy little
It is probably similar (as far as reconstruction goes). I think that specifying dimensions using variables instead of types is different. Instead of the above, you have something like:

    // fundamentals:
    <unspecified-type> length;
    <unspecified-type> time;
    <unspecified-type> mass;
    // etc.

    // composites:
    DIM(length / time) velocity;
    DIM(velocity / time) acceleration;   // or DIM(length / (time * time))
    DIM(mass * acceleration) force;      // or DIM(mass * length / (time * time))

All of the above are variable declarations, not typedefs. They are variables that have no function except to be used in expressions for the purpose of extracting a type. In a scheme like this, the user never even sees the actual types of these variables. Instead, the type is 'named' via an expression.

Using this scheme, however, you can't say something like:

    2 * time

So you'd have to say something like:

    r<2> * time

(or r<1, 2> to get such a fractional exponent at all). All of this is slightly more complex than what I wrote (which is nowhere near being a full-fledged physical quantities library), but it is all possible.

Regards,
Paul Mensonides

"Paul Mensonides" wrote
-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Andy Little
This (my small sample code) is in no way supposed to be a full-blown library, BTW. It is just for amusement.
How about rational dimension elements rather than integers. Would that be possible using the preprocessor?
The preprocessor isn't doing the work. It is just generating the code that does the work. Use of the preprocessor (here) is just making all that code write itself based on an initial sequence of fundamental units.
Unfortunately it won't compile on VC7.1, though I preprocessed it. It seems to be based on doing maths inside array brackets to get different sizes? Sounds quite like Boost.Typeof, though with more direct maths!
This occurs rarely but needs to be possible. One example is in measuring electrical noise where units for noise-voltage density are
Volts * (Hertz to_power -1/2).
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Out of curiosity, how does this occur starting from SI's fundamental units?
If:

    K is Boltzmann's constant, which is approx 1.38e-23 Joules per Kelvin
    T is absolute temperature in Kelvin
    R is resistance in Ohms

then Johnson noise density in Volts / (Hertz to_power 1/2) == sqrt(4 * K * R * T). However, volts, energy and resistance are all compound units, of course:

    resistance = voltage / current
    power = current * voltage
    power = energy / time
    energy = force * distance
    force = mass * acceleration
    acceleration = distance / (time * time)

I think it's possible to derive the units of Johnson noise density from fundamental units that way.
The time element of the dimension works out as being to power -(2 1/2) FWIW!
You can certainly implement a static rational with a little template metaprogramming. Reducing rational values with template metaprogramming would be expensive in general, but here we're mostly talking about very "small" rationals. You'd just end up with two "fields" per dimension (numerator and denominator) instead of one, and you'd have to use them as rational values. E.g. the way that I wrote it, you basically have:
dim<mass, time, ...>
...where 'mass' and 'time' are integers. Instead, you have to have 'mass' and 'time' be types that represent rational values (such as 'rational<-5, 2>').
Right (and I understand it's for fun), but presumably there is unlikely to be an advantage over using templates in compilation speed, and certainly not in legibility?

FWIW, I have finally started using the preprocessor library in pqs. I needed to generate a large number of power-of-10 functors with a rational exponent, where the range is configurable by the user via a config macro. (For anyone interested, these functors are in <boost/pqs/detail/united_value/operations/coherent_exponent_eval.hpp>. Note: pqs is in the review queue but not an official Boost library.) The preprocessor code I ended up with is basically copied from http://boost-consulting.com/tmpbook/preprocessor.html, which I found thanks to the useful link in the preprocessor docs ;-). It seems to work well, but I don't make any claim to really understand how it works! (Note: I'm quite happy to leave it that way too, unless I really need to know otherwise ;-))
The things that I find amusing in it are that dimensions are specified using variables, not types (somewhat unusual), and the way that the reconstruction of type from variables occurs.
Is that the same as how Boost.typeof works ...?
[..]
It is probably similar (as far as reconstruction goes). I think that specifying dimensions using variables instead of types is different. Instead of the above, you have something like:
[cut example]
All of the above are variable declarations, not typedefs. They are variables that have no function except to be used in expressions for the purpose of extracting type. In a scheme like this, the user never even sees the actual types of these variables. Instead, the type is 'named' via an expression.
Presumably there is a chance that these end up in the executable, though, which could tend to get messy? [...]
All of this is slightly more complex that what I wrote (which is no where near being a full-fledged physical quantities library), but it is all possible.
I have to admit that I prefer to use the preprocessor as a last resort; however, that might be seen as an improvement on my previous position, which was to avoid its use at all if possible. Nevertheless, I have found the Boost.Preprocessor library really useful recently, as said above, and also just plain impressive, because it makes use of the preprocessor a bit more like a traditional programming language, hence easier to understand or even just plain usable. The Preprocessor library has to be the most original Boost library, I reckon!

regards
Andy Little

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Andy Little
The preprocessor isn't doing the work. It is just generating the code that does the work. Use of the preprocessor (here) is just making all that code write itself based on an initial sequence of fundamental units.
Unfortunately it won't compile on VC7.1, though I preprocessed it.
Yeah, that doesn't surprise me. There is even a hack in place for Comeau. The 'multiply' and 'divide' metafunctions shouldn't be necessary for the return types of the 'quantity' multiplicative operators; you should be able to say (e.g.) DIM(D1 * D2). However, EDG's overly eager substitution failure (i.e. SFINAE) bails at nearly the first sign of an expression. That's really quite annoying, as SFINAE is not being manipulated here at all (other than the normal overloading of template functions).
The time element of the dimension works out as being to power -(2 1/2) FWIW!
You can certainly implement a static rational with a little template metaprogramming. Reducing rational values with template metaprogramming would be expensive in general, but here we're mostly talking about very "small" rationals. You'd just end up with two "fields" per dimension (numerator and denominator) instead of one, and you'd have to use them as rational values. E.g. the way that I wrote it, you basically have:
dim<mass, time, ...>
...where 'mass' and 'time' are integers. Instead, you have to have 'mass' and 'time' be types that represent rational values (such as 'rational<-5, 2>').
Right (and I understand it's for fun), but presumably there is unlikely to be an advantage over using templates in compilation speed, and certainly not in legibility?
The above *is* using templates. I personally think that it is more legible (it's a DSL, after all) in ad hoc situations. E.g.

    DIM(length / (time * time)) acceleration;

vs.

    typedef divide<length, multiply<time, time>::type>::type acceleration;

In scenarios where such things are built in stages, it hardly matters:

    DIM(length / time) velocity;
    DIM(velocity / time) acceleration;

vs.

    typedef divide<length, time>::type velocity;
    typedef divide<velocity, time>::type acceleration;

In such situations there probably isn't a huge advantage over using regular templates. Furthermore, such situations would likely be more common. As far as compilation speed goes, the DIM version is probably faster--it does less template instantiation. However, for most uses of such a library, the actual amount of calculation is minimal (unless you're doing rationals).
FWIW I have finally started using the preprocessor library in pqs. I needed to generate a large number of power of 10
The preprocessor code I ended up with is basically copied from http://boost-consulting.com/tmpbook/preprocessor.html ,which I found thanks to the useful link in the preprocessor docs ;-). seems to work well but I dont make any claim to really understand how it works though! (note: I'm quite happy to leave it that way too unless I really need to know otherwise ;-))
It isn't important that you understand how it works. It is important that you understand what each of the pieces is doing.
the purpose of extracting type. In a scheme like this, the user never even sees the actual types of these variables. Instead, the type is 'named' via an expression.
Presumably there is a chance that these end up in the executable though which could tend to get messy?
Given a horrible compiler, yes. Even so, we're only talking about one byte per variable.
[...]
All of this is slightly more complex that what I wrote (which is no where near being a full-fledged physical quantities library), but it is all possible.
I have to admit that I prefer to use the preprocessor as a last resort, however that might be seen as an improvement on my previous position which was to avoid its use at all if possible.
BTW, the preprocessor metaprogramming used in the example isn't necessary. Rather, it was easier to write ENUM_PARAMS(7, int X) than to write int X0, int X1, etc. This isn't a scenario where the library needs to expand (or contract) based on user configuration. Instead, the number of fundamental units is small and more or less fixed.
Nevertheless I have found the Boost.Preprocessor library really useful recently as said above and also just plain impressive because it seems to makes use of the preprocessor a bit more like a traditional programming language, hence easier to understand or even just plain useable.
Just remember that overly broad guidelines (like "avoid macros") actually hurt programming more than they help. There are indeed bad categories of macro use, and guidelines need to target those categories specifically.

The same is true of a lot of different things in programming. For example, throwing OO at every problem is not a good idea. Nor is throwing genericity at every problem. All of these things are good or bad only in relation to the alternatives. No programming idiom is inherently good or bad. Of course, that implies that you have to know the alternatives and that you have to be smart (or, more accurately, wise) in your deployment of an idiom or technique. There are always tradeoffs (if there aren't, you're overlooking at least one alternative).

Regards,
Paul Mensonides
participants (3)
-
Andy Little
-
David Greene
-
Paul Mensonides