Re: [boost] formal review request : mcs::units

From: Andreas Harnack <ah.boost.02@justmail.de>
I might have been a bit imprecise here, so please consider:
const unsigned int Time      = 2;
const unsigned int Length    = 3;
const unsigned int Mass      = 5;
const unsigned int Current   = 7;
const unsigned int Temp      = 11;
const unsigned int Amount    = 13;
const unsigned int Intensity = 17;
template <unsigned int N, unsigned int D=1> struct Rational { ... };
typedef Rational<Time>                          time;
typedef Rational<Length>                        length;
typedef Rational<Mass*Length, Time*Time>        force;
typedef Rational<Mass*Length*Length, Time*Time> energy;
quantity<energy, double> e;
so the representation of energy would be just (5*3*3)/(2*2) = 45/4.
You're right, there is a computational limit, but I wouldn't expect to see a dimension raised to the power of 31. Exponents of 4 are about the highest I've ever seen, and (2*3*5*7*11*13*17)^3 still fits in 57 bits, so we might want to use long or even long long unsigned ints, but that should be fine for most situations.
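For illustration, here is a minimal sketch of the encoding Andreas describes, written as runnable C++. The names Rational, static_gcd and dim_multiply are made up for this sketch and are not part of mcs::units or of any posted code; the point is only that multiplying quantities reduces to multiplying the encoded numerators and denominators and then normalizing.

// Sketch only: compile-time rationals built from prime-encoded base dimensions.
// None of these names come from mcs::units; they just illustrate the idea.
#include <iostream>

const unsigned long Time = 2, Length = 3, Mass = 5;

// compile-time gcd for normalizing the fraction N/D
template<unsigned long A, unsigned long B>
struct static_gcd { static const unsigned long value = static_gcd<B, A % B>::value; };
template<unsigned long A>
struct static_gcd<A, 0> { static const unsigned long value = A; };

template<unsigned long N, unsigned long D = 1>
struct Rational
{
    static const unsigned long num = N / static_gcd<N, D>::value;
    static const unsigned long den = D / static_gcd<N, D>::value;
};

// multiplying two quantities multiplies their dimensions: (a/b) * (c/d), normalized
template<class R1, class R2>
struct dim_multiply
{
    typedef Rational<R1::num * R2::num, R1::den * R2::den> type;
};

typedef Rational<Length>                   length;   // 3/1
typedef Rational<Mass*Length, Time*Time>   force;    // 15/4
typedef dim_multiply<force, length>::type  energy;   // force * length

int main()
{
    std::cout << energy::num << "/" << energy::den << "\n"; // prints 45/4
    return 0;
}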
If exponents of 4 are the highest you've seen, shouldn't you consider the size of (2*3*5*7*11*13*17)^4, which requires 76 bits? Also, I may want to extend the list - some people have requested a "money" unit, for example, which would be assigned the value 19 in your system. Then I can't even represent all powers of 3. I suspect this idea, while clever, does not cover the problem domain sufficiently well to be superior to lists.

- James Jones        Administrative Data Mgmt.
  Webmaster          375 Raritan Center Pkwy, Suite A
  Data Architect     Edison, NJ 08837
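For reference, the bit counts quoted in this exchange can be checked directly (a throwaway snippet, not library code):

// Quick check of the bit counts claimed above.
#include <cmath>
#include <iostream>

int main()
{
    const double product = 2.0 * 3 * 5 * 7 * 11 * 13 * 17;   // 510510
    std::cout << std::ceil(3 * std::log2(product)) << "\n";  // 57 bits for the cube
    std::cout << std::ceil(4 * std::log2(product)) << "\n";  // 76 bits for the fourth power
    return 0;
}

The cube is roughly 1.3e17 and fits in 64 bits; the fourth power is roughly 6.8e22 and does not.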

If exponents of 4 are the highest you've seen, shouldn't you consider the size of (2*3*5*7*11*13*17)^4, which requires 76 bits? Also, I may want to extend the list - some people have requested a "money" unit, for example, which would be assigned the value 19 in your system. Then I can't even represent all powers of 3. I suspect this idea, while clever, does not cover the problem domain sufficiently well to be superior to lists.
I agree that this is a clever idea; I imagine that implementing it would not be too difficult, just to see how much it speeds compilation relative to the optimized list-based code that Steven put together. Essentially, as James points out above, we would be trading generality for performance. I would certainly be happy to test out a replacement for dimension.hpp based on this approach. One concern I have is how to ensure that each fundamental unit has a unique prime number associated with it. I guess the current system has the same problem, though counting integers is clearly easier than counting primes.

That actually raises a question I had a while ago: is there some method to count specializations of a template class? That is, if I have

template<long N> struct ordinal { typedef typename boost::mpl::int_<N> value; };

I want to be able to do something like this (warning: not C++):

struct tag1 : public next_ordinal::type { };
struct tag2 : public next_ordinal::type { };
struct tag3 : public next_ordinal::type { };

etc... where next_ordinal::type resolves to ordinal<1>, ordinal<2>, and ordinal<3> on sequential calls. Anyone have any ideas? This would simplify and make safer the process of defining user-defined unit systems.

Matthias

----------------------------------------------------------------
Matthias Schabel, Ph.D.
Assistant Professor, Department of Radiology
Utah Center for Advanced Imaging Research
729 Arapeen Drive
Salt Lake City, UT 84108
801-587-9413 (work)
801-585-3592 (fax)
801-706-5760 (cell)
801-484-0811 (home)
matthias dot schabel at hsc dot utah dot edu
----------------------------------------------------------------

Hello Matthias,
Is there some method to count specializations of a template class? That is, if I have
template<long N> struct ordinal { typedef typename boost::mpl::int_<N> value; };
I want to be able to do something like this (warning: not C++):
struct tag1 : public next_ordinal::type { };
struct tag2 : public next_ordinal::type { };
struct tag3 : public next_ordinal::type { };
etc... where next_ordinal::type resolves to ordinal<1>, ordinal<2>, and ordinal<3> on sequential calls. Anyone have any ideas? This would simplify and make safer the process of defining user-defined unit systems.
Maybe something like this?

// the_library.hpp
template< unsigned int N = 0 >
struct ordinal : public mpl::int_< N >
{
    typedef ordinal< N + 1 > next;
};

// We may use ordinals in the lib, in that case
// initial_ordinal below should be the first
// available ordinal for users
typedef ordinal< > initial_ordinal;

// user_code.hpp
struct tag1 : public initial_ordinal {};
struct tag2 : public tag1::next {};
struct tag3 : public tag2::next {};

--
Best regards,
Andrey
mailto:andysem@mail.ru

Hi Andrey,
Maybe something like this?
// the_library.hpp
template< unsigned int N = 0 >
struct ordinal : public mpl::int_< N >
{
    typedef ordinal< N + 1 > next;
};

// We may use ordinals in the lib, in that case
// initial_ordinal below should be the first
// available ordinal for users
typedef ordinal< > initial_ordinal;
// user_code.hpp
struct tag1 : public initial_ordinal {};
struct tag2 : public tag1::next {};
struct tag3 : public tag2::next {};
The problem with this is that you still need to keep track of previously defined tags. That is, you need to know what the last tag defined was to call ::next on it. Imagine that you wanted to extend the SI system with a new fundamental unit for fabulousness. As it stands now, you would do something like this in your code:

namespace boost {
namespace units {

struct fabulousness_tag : public ordinal<10> { };

typedef fundamental_dimension<fabulousness_tag>::type fabulousness_type;

namespace SI {

// SI unit of fabulousness
typedef unit<SI::system,fabulousness_type> fabulousness;

static const fabulousness fab, fabs;

} // namespace SI
} // namespace units
} // namespace boost

Here, you need to know that there are already 9 tags defining the pre-existing fundamental types in the SI system. What I want to do is to query the compiler as to how many previous distinct instantiations of ordinal<N> have already been encountered, so you could write

struct fabulousness_tag : public ordinal<N_prev_ordinal_instantiations+1> { };

This would allow you to add tags without ever having to look around to determine the current upper limit... It may not be possible, though.

Matthias

Hello Matthias,

Saturday, February 10, 2007, 9:35:05 PM, you wrote:
Hi Andrey,
Maybe something like this?
[snip]
The problem with this is that you still need to keep track of previously defined tags. That is, you need to know what the last tag defined was to call ::next on it.
[snip]
This would allow you to add tags without ever having to look around to determine the current upper limit... It may not be possible, though.
I think you won't get any closer to it than this, since tags may be declared in different translation units, which are not visible to each other until linking. And even if they are all in one TU, the tag type is static, along with its integral constants. The only way to get another tag is to define another type.

--
Best regards,
Andrey
mailto:andysem@mail.ru

james.jones@firstinvestors.com wrote:
I suspect this idea, while clever, does not cover the problem domain sufficiently well to be superior to lists.
Please have a look at the attached list. (I hope sending attachments works.) This is a list of all SI units I could find in a quick search. The first two columns are the numerator and denominator the corresponding rational number would have. Of course, this is not a proof, but the numbers suggest that for practical purposes we're not even getting near a range that's likely to be dangerous.

By the way, there's no need to worry about growing intermediate results: a/b * c/d is equal to a/d * c/b, and these two factors can be normalized before the multiplication is carried out. If that's done and a/b and c/d were in normal form, then so will be the result, and there's no intermediate result growing larger than the final product.

Andreas

num | den | factorization      | dimension        | quantity                                          | unit                       | symbol
9   | 1   | 3^2                | m^2              | area                                              | square metre               | m2
27  | 1   | 3^3                | m^3              | volume                                            | cubic metre                | m3
3   | 2   | 3/2                | m/s              | speed, velocity                                   | metre per second           | m/s
3   | 4   | 3/2^2              | m/s^2            | acceleration                                      | metre per second squared   | m/s2
1   | 3   | 1/3                | 1/m              | wavenumber                                        | reciprocal metre           | m-1
5   | 27  | 5/3^3              | kg/m^3           | mass density                                      | kilogram per cubic metre   | kg/m3
5   | 9   | 5/3^2              | kg/m^2           | surface density                                   | kilogram per square metre  | kg/m2
27  | 5   | 3^3/5              | m^3/kg           | specific volume                                   | cubic metre per kilogram   | m3/kg
7   | 9   | 7/3^2              | A/m^2            | current density                                   | ampere per square metre    | A/m2
7   | 3   | 7/3                | A/m              | magnetic field strength                           | ampere per metre           | A/m
13  | 27  | 13/3^3             | mol/m^3          | concentration                                     | mole per cubic metre       | mol/m3
5   | 27  | 5/3^3              | kg/m^3           | mass concentration                                | kilogram per cubic metre   | kg/m3
17  | 9   | 17/3^2             | cd/m^2           | luminance                                         | candela per square metre   | cd/m2
1   | 2   | 1/2                | 1/s              | frequency                                         | hertz (d)                  | Hz
15  | 4   | 5*3/2^2            | kg*m/s^2         | force                                             | newton                     | N
5   | 12  | 5/(3*2^2)          | kg/m*s^2         | pressure, stress                                  | pascal                     | Pa (N/m2)
45  | 4   | 5*3^2/2^2          | kg*m^2/s^2       | energy, work, heat                                | joule                      | J (N m)
45  | 8   | 5*3^2/2^3          | kg*m^2/s^3       | power, radiant flux                               | watt                       | W (J/s)
14  | 1   | 7*2                | A*s              | electric charge                                   | coulomb                    | C
45  | 56  | 5*3^2/(7*2^3)      | kg*m^2/A*s^3     | electric potential                                | volt                       | V (W/A)
784 | 45  | 7^2*2^4/(5*3^2)    | A^2*s^4/kg*m^2   | capacitance                                       | farad                      | F (C/V)
45  | 392 | 5*3^2/(7^2*2^3)    | kg*m^2/A^2*s^3   | electric resistance                               | ohm                        | Ω (V/A)
392 | 45  | 7^2*2^3/(5*3^2)    | A^2*s^3/kg*m^2   | electric conductance                              | siemens                    | S (A/V)
45  | 28  | 5*3^2/(7*2^2)      | kg*m^2/A*s^2     | magnetic flux                                     | weber                      | Wb (V s)
5   | 28  | 5/(7*2^2)          | kg/A*s^2         | magnetic flux density                             | tesla                      | T (Wb/m2)
45  | 196 | 5*3^2/(7^2*2^2)    | kg*m^2/A^2*s^2   | inductance                                        | henry                      | H (Wb/A)
11  | 1   | 11                 | K                | Celsius temperature                               | degree Celsius (e)         | °C
17  | 1   | 17                 | cd               | luminous flux                                     | lumen                      | lm (cd sr) (c)
17  | 9   | 17/3^2             | cd/m^2           | illuminance                                       | lux                        | lx (lm/m2)
1   | 2   | 1/2                | 1/s              | activity referred to a radionuclide (f)           | becquerel (d)              | Bq
9   | 4   | 3^2/2^2            | m^2/s^2          | absorbed dose, specific energy (imparted), kerma  | gray                       | Gy (J/kg)
5   | 6   | 5/(3*2)            | kg/m*s           | dynamic viscosity                                 | pascal second              | Pa s
45  | 4   | 5*3^2/2^2          | kg*m^2/s^2       | moment of force                                   | newton metre               | N m
5   | 4   | 5/2^2              | kg/s^2           | surface tension                                   | newton per metre           | N/m
5   | 8   | 5/2^3              | kg/s^3           | heat flux density, irradiance                     | watt per square metre      | W/m2
45  | 44  | 5*3^2/(11*2^2)     | kg*m^2/K*s^2     | heat capacity, entropy                            | joule per kelvin           | J/K
9   | 44  | 3^2/(11*2^2)       | m^2/K*s^2        | specific heat capacity, specific entropy          | joule per kilogram kelvin  | J/(kg K)
9   | 4   | 3^2/2^2            | m^2/s^2          | specific energy                                   | joule per kilogram         | J/kg
15  | 88  | 5*3/(11*2^3)       | kg*m/K*s^3       | thermal conductivity                              | watt per metre kelvin      | W/(m K)
5   | 12  | 5/(3*2^2)          | kg/m*s^2         | energy density                                    | joule per cubic metre      | J/m3
15  | 56  | 5*3/(7*2^3)        | kg*m/A*s^3       | electric field strength                           | volt per metre             | V/m
14  | 27  | 7*2/3^3            | A*s/m^3          | electric charge density                           | coulomb per cubic metre    | C/m3
14  | 9   | 7*2/3^2            | A*s/m^2          | surface charge density                            | coulomb per square metre   | C/m2
14  | 9   | 7*2/3^2            | A*s/m^2          | electric flux density, electric displacement      | coulomb per square metre   | C/m2
784 | 135 | 7^2*2^4/(5*3^3)    | A^2*s^4/kg*m^3   | permittivity                                      | farad per metre            | F/m
15  | 196 | 5*3/(7^2*2^2)      | kg*m/A^2*s^2     | permeability                                      | henry per metre            | H/m
45  | 52  | 5*3^2/(13*2^2)     | kg*m^2/mol*s^2   | molar energy                                      | joule per mole             | J/mol
45  | 572 | 5*3^2/(13*11*2^2)  | kg*m^2/mol*K*s^2 | molar entropy                                     | joule per mole kelvin      | J/(mol K)
14  | 5   | 7*2/5              | A*s/kg           | exposure                                          | coulomb per kilogram       | C/kg
9   | 8   | 3^2/2^3            | m^2/s^3          | absorbed dose rate                                | gray per second            | Gy/s
13  | 54  | 13/(3^3*2)         | mol/m^3*s        | catalytic activity                                | katal per cubic metre      | kat/m3
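A minimal sketch of the cross-normalization trick described above, written as a runtime function for clarity; a compile-time version would apply the same gcd reductions in template metafunctions. The rational struct and multiply function are illustrative names only, not taken from any posted implementation.

// Sketch of overflow-resistant rational multiplication:
// (a/b) * (c/d) is computed as (a/d) * (c/b), with each factor reduced first,
// so no intermediate value exceeds the (reduced) final product.
#include <iostream>
#include <numeric>  // std::gcd (C++17)

struct rational { unsigned long long num, den; };

rational multiply(rational lhs, rational rhs)
{
    // reduce a/d and c/b before multiplying
    unsigned long long g1 = std::gcd(lhs.num, rhs.den);
    unsigned long long g2 = std::gcd(rhs.num, lhs.den);
    return { (lhs.num / g1) * (rhs.num / g2),
             (lhs.den / g2) * (rhs.den / g1) };
}

int main()
{
    rational force  = { 15, 4 };   // kg*m/s^2
    rational area   = { 1,  9 };   // 1/m^2
    rational result = multiply(force, area);
    std::cout << result.num << "/" << result.den << "\n"; // 5/12, i.e. pressure (Pa)
    return 0;
}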

On 9 Feb 2007, at 12:31, Andreas Harnack wrote:
james.jones@firstinvestors.com wrote:
I suspect this idea, while clever, does not cover the problem domain sufficiently well to be superior to lists.
Please have a look at the attached list. (I hope sending attachments works.)
This is a list of all SI units I could find in a quick search. The first two columns are the numerator and denominator the corresponding rational number would have. Of course, this is not a proof, but the numbers suggest that for practical purposes we're not even getting near a range that's likely to be dangerous.
By the way, there's no need to worry about growing intermediate results: a/b * c/d is equal to a/d * c/b, and these two factors can be normalized before the multiplication is carried out. If that's done and a/b and c/d were in normal form, then so will be the result, and there's no intermediate result growing larger than the final product.
Andreas
Have you thought about negative powers, e.g. number densities (m^-3)?

Matthias

In message <7384990EC1040E4C8BC31CADFFDD428001C2A304@cedarrapids>, james.jones@firstinvestors.com writes
From: Andreas Harnack <ah.boost.02@justmail.de> ...
You're right, there is a computational limit, but I wouldn't expect to see a dimension raised to the power of 31. Exponents of 4 are about the highest I've ever seen, and (2*3*5*7*11*13*17)^3 still fits in 57 bits, so we might want to use long or even long long unsigned ints, but that should be fine for most situations.
If exponents of 4 are the highest you've seen, shouldn't you consider the size of (2*3*5*7*11*13*17)^4, which requires 76 bits? But ISTM that this information could be given by a sequence of powers, one for each prime in the sequence: i.e. 4 bits * 7 in this second example. What am I missing?
Alec

--
Alec Ross
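What Alec is suggesting amounts to storing a small signed exponent per base dimension in a packed bit field rather than a single prime-power product. A minimal sketch, assuming (arbitrarily) 8 bits per exponent and the seven SI base dimensions; none of these names come from the library.

// Sketch: store one small signed exponent per SI base dimension in a
// fixed-width bit field, instead of a single prime-factor product.
// Multiplying quantities then adds exponents field by field.
#include <cstdint>
#include <iostream>

const int NUM_DIMS = 7;   // length, mass, time, current, temperature, amount, intensity
const int BITS     = 8;   // 8 bits per exponent for simplicity

typedef std::uint64_t packed_dim;

packed_dim pack(const int (&exponents)[NUM_DIMS])
{
    packed_dim d = 0;
    for (int i = 0; i < NUM_DIMS; ++i)
        d |= (packed_dim)(std::uint8_t)exponents[i] << (BITS * i);
    return d;
}

packed_dim multiply(packed_dim a, packed_dim b)
{
    packed_dim r = 0;
    for (int i = 0; i < NUM_DIMS; ++i) {
        int ea = (std::int8_t)(a >> (BITS * i));  // sign-extend each field
        int eb = (std::int8_t)(b >> (BITS * i));
        r |= (packed_dim)(std::uint8_t)(ea + eb) << (BITS * i);
    }
    return r;
}

int main()
{
    const int force[NUM_DIMS]  = { 1, 1, -2, 0, 0, 0, 0 };   // m kg s^-2
    const int length[NUM_DIMS] = { 1, 0,  0, 0, 0, 0, 0 };   // m
    packed_dim energy = multiply(pack(force), pack(length)); // m^2 kg s^-2
    std::cout << std::hex << energy << "\n";
    return 0;
}

With 4-bit fields, as in Alec's "4 bits * 7" example, the same scheme covers exponents in the range -8..7 using 28 bits.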

I'm not sure if this library can handle this, so I'll just describe the problem. When working in 3D simulations it is necessary to transform vectors to different spaces. Bugs sometimes crop up when, for instance, adding an object space vector to a world space vector, or adding a vector from object A's object space to one from object B's object space. It would be nice to catch such inconsistencies at compile time. I have a feeling a units-like library would be able to handle this, but I have no idea how it would work. Is this too far off the main path for this units library?

Thanks,

Michael Marcin

Hi Michael,
When working in 3D simulations it is necessary to transform vectors to different spaces. Bugs sometimes crop up when, for instance, adding an object space vector to a world space vector, or adding a vector from object A's object space to one from object B's object space. It would be nice to catch such inconsistencies at compile time. I have a feeling a units-like library would be able to handle this, but I have no idea how it would work.
Here are two ways to do it, with the quantity wrapping the vector or the vector containing quantities:

// mcs::units - A C++ library for zero-overhead dimensional analysis and
// unit/quantity manipulation and conversion
//
// Copyright (C) 2003-2007 Matthias Christian Schabel
//
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)

// unit_example_15.cpp

#include <iostream>

#include <boost/array.hpp>
#include <boost/units/quantity.hpp>

namespace boost {
namespace units {
namespace wo {

struct world_space_tag : public ordinal<1> { };
struct object_space_tag : public ordinal<2> { };

typedef fundamental_dimension<world_space_tag>::type world_space_type;
typedef fundamental_dimension<object_space_tag>::type object_space_type;

/// placeholder class defining test unit system
struct system { };

/// unit typedefs
typedef unit<system,dimensionless_type> dimensionless;
typedef unit<system,world_space_type>   world_space_unit;
typedef unit<system,object_space_type>  object_space_unit;

static const world_space_unit  world_space;
static const object_space_unit object_space;

} // namespace wo
} // namespace units
} // namespace boost

int main(void)
{
    using namespace boost::units;
    using namespace boost::units::wo;

    {
        typedef boost::array<double,3> vector;

        const vector vec1 = { 0, 0, 0 }, vec2 = { 1, 1, 1 };

        quantity<world_space_unit,vector>  wsv1(vec1*world_space),  wsv2(vec2*world_space);
        quantity<object_space_unit,vector> osv1(vec1*object_space), osv2(vec2*object_space);

        quantity<world_space_unit,vector>  wsv3(wsv1);
        quantity<object_space_unit,vector> osv3(osv1);

        // compile-time error if either of these is uncommented
        // quantity<world_space_unit,vector>  wsv4(osv1);
        // quantity<object_space_unit,vector> osv4(wsv1);
    }

    {
        typedef quantity<world_space_unit>  world_space_quantity;
        typedef quantity<object_space_unit> object_space_quantity;

        typedef boost::array<world_space_quantity,3>  world_space_vector;
        typedef boost::array<object_space_quantity,3> object_space_vector;

        world_space_vector  wsv1 = { 0*world_space, 0*world_space, 0*world_space },
                            wsv2 = { 1*world_space, 1*world_space, 1*world_space };
        object_space_vector osv1 = { 0*object_space, 0*object_space, 0*object_space },
                            osv2 = { 1*object_space, 1*object_space, 1*object_space };

        world_space_vector  wsv3(wsv1);
        object_space_vector osv3(osv1);

        // compile-time error if either of these is uncommented
        // world_space_vector  wsv4(osv1);
        // object_space_vector osv4(wsv1);
    }

    return 0;
}

You should be able to drop any well-defined vector class into this and have it work... Naturally, you can also define your own conversion function (see conversion.hpp for the default implementation). Because quantities obey the laws of dimensional analysis, the only operations supported by them are addition, subtraction, multiplication, division, and rational powers and roots. If you want to do other things, you will need to extract the raw value_type using the value() member function...

Another way to do this would be to define two unit systems, the world unit system and the object unit system, and define conversions between the two of them. For this sort of application, there are many paths to Rome. The right one probably depends on the details of the application.

Cheers,

Matthias

Matthias Schabel wrote:
Hi Michael,
When working in 3D simulations it is necessary to transform vectors to different spaces. Bugs sometimes crop up when, for instance, adding an object space vector to a world space vector, or adding a vector from object A's object space to one from object B's object space. It would be nice to catch such inconsistencies at compile time. I have a feeling a units-like library would be able to handle this, but I have no idea how it would work.
Here are two ways to do it, with the quantity wrapping the vector or the vector containing quantities:
<snip>
That's pretty cool and would likely catch many of the problems. The other error cases seem harder to detect at compile time. At least for my purposes there is only one world space, but there can be an arbitrary number of object spaces. In addition, it can be difficult to define correct conversions between spaces.

This is probably a pretty bad example, but let's assume we have a node that can be attached to a parent node with an offset from the parent node's origin.

struct node
{
    node* parent;
    std::vector<vec> offsets;   // in *this's object space
    vec position;               // in *this's object space
    std::size_t which_offset;   // index into parent's offsets vector
};

vec get_parent_space_position( node* n )
{
    return n->position + n->parent->offsets[n->which_offset];
    // oops: n->position is in n's object space,
    // but n->parent->offsets' positions are in n->parent's object space.
    // should be:
    // return n->position
    //      + n->parent->position  // transform to parent's space
    //      + n->parent->offsets[n->which_offset];
}

The same operation is valid in some cases but not in others, i.e. n->position + n->offsets[0] is valid but n->position + n->parent->offsets[0] is not. Because of the recursive nature they are the same variable, so I don't know how you could embed such validity checks in a type. Maybe this is asking too much from a library and programming is supposed to be hard :).

Thanks,

Michael Marcin

Michael Marcin wrote:
Matthias Schabel wrote:
Hi Michael,
When working in 3D simulations it is necessary to transform vectors to different spaces. Bugs sometimes crop up when, for instance, adding an object space vector to a world space vector, or adding a vector from object A's object space to one from object B's object space. It would be nice to catch such inconsistencies at compile time. I have a feeling a units-like library would be able to handle this, but I have no idea how it would work.

Here are two ways to do it, with the quantity wrapping the vector or the vector containing quantities:
<snip>
That's pretty cool and would likely catch many of the problems. The other error cases seem harder to detect at compile time. At least for my purposes there is only one world space but there can be an arbitrary number of object spaces. In addition it can be difficult to define correct conversions between spaces.
<snip>
If you think about it, you begin to realize that in *all* code involving arithmetic calculations (not just in physics or finance), you should wrap virtually every double with a type and define allowable operations using those types. This would catch a lot of errors at compile time.
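For what it's worth, a minimal sketch of that idea outside of any units library; the tagged_double wrapper and the tag names are purely illustrative, not anything proposed for mcs::units.

// Sketch: a tagged wrapper around double that only permits operations
// between values carrying the same tag, catching mixups at compile time.
#include <iostream>

template<class Tag>
struct tagged_double
{
    double value;
    explicit tagged_double(double v) : value(v) {}
};

// only same-tag addition is defined; adding different tags fails to compile
template<class Tag>
tagged_double<Tag> operator+(tagged_double<Tag> a, tagged_double<Tag> b)
{
    return tagged_double<Tag>(a.value + b.value);
}

struct price_tag {};
struct quantity_tag {};

typedef tagged_double<price_tag>    price;
typedef tagged_double<quantity_tag> item_count;

int main()
{
    price p1(9.99), p2(0.01);
    item_count n(3);

    price total = p1 + p2;    // fine
    // price bad = p1 + n;    // compile-time error if uncommented
    std::cout << total.value << "\n";
    return 0;
}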

If you think about it, you begin to realize that in *all* code involving arithmetic calculations (not just in physics or finance), you should wrap virtually every double with a type and define allowable operations using those types. This would catch a lot of errors at compile time.
My experience with implementing "simple" dimensional analysis has taught me one thing very clearly: rendering problems down to the right level of abstraction is very difficult, especially when constrained by the practical realities of something concrete like a programming language.

Matthias
participants (8)
- Alec Ross
- Andreas Harnack
- Andrey Semashev
- Deane Yang
- james.jones@firstinvestors.com
- Matthias Schabel
- Matthias Troyer
- Michael Marcin