mcs::units informal review request

I've been making incremental improvements (I like to think, anyway) to the mcs::units dimensional analysis and unit library. The most recent version (v0.5.4, just uploaded to the Boost Vault) includes wrappers for all <cmath> functions for which operations on quantities make sense, provides Boost.Serialization support for units and quantities, and has a significantly expanded and more flexible system for unit conversion. I have also tightened up requirements on construction to improve safety in unit computations by making strict unit construction the default. In the hope of getting the library review-ready and addressing potential issues before the formal review, I'd like some concrete feedback from anyone who has downloaded and used it (especially the v0.5.x branch). To get things started, here are a few questions I have:

1) At the moment, the library enforces strict construction; that is, quantities must be fully defined at construction:

quantity<double,SI::length> q1(1.5*SI::meters);

is acceptable, but

quantity<double,SI::length> q2(1.5);

is not. Basically, construction of quantities (other than copy construction) is restricted to products/divisors of scalars and units, scalars and quantities, and/or units and quantities. This entails some redundancy, but also avoids errors in generic code where the unit system may change. In addition, I think it makes the intent of the code clearer. Direct construction from a value_type is supported, but through a static member function:

quantity<double,SI::length> q3 = quantity<double,SI::length>::from_value(1.5);

Obviously, in my opinion, this sort of thing should be restricted to libraries where it is necessary and shouldn't be ubiquitous in user code. I know that there is a camp in favor of implicit unit conversions (which can be supported in the library through a #define, though I think this causes more problems than it is worth), and I have some further thoughts on this below, but I'd like input from the non-implicit crowd as to whether this is a reasonable approach.

2) I'm not currently satisfied with the I/O aspects, particularly the lack of a mechanism for internationalization. I'd love to hear any design/implementation suggestions, especially from anyone familiar with facets/locales, on how to most gracefully accomplish this...

3) At the moment, it is possible to have completely generic rules for explicit conversion of quantities between systems by specializing the conversion_helper class. The default implementation also allows simpler conversions to be defined via the convert_base_unit template classes for each fundamental dimension (length, time, etc...) to be interconverted between the two systems. I think it may be possible to implement a system that allows fine granularity in implicit unit conversion; for example, SI::second and CGS::second are degenerate units, so they could be implicitly converted to one another with no computation. Thus, implicit conversion would be allowed in this case:

1.5*SI::seconds/SI::kelvin <-> 1.5*CGS::seconds/CGS::kelvin

since these are identical quantities, but not in this one:

1.5*SI::meter*SI::seconds/SI::kelvin <-/-> 1.5*CGS::centimeter*CGS::seconds/CGS::kelvin

Is this of sufficient interest to invest the effort?

4) Any comments on implementation from those brave enough to look at the guts of the library would be especially welcome.

Regards, Matthias

Hello Matthias, Saturday, January 13, 2007, 8:20:43 AM, you wrote: [snip]
To get things started, here are a few questions I have :
1) At the moment, the library enforces strict construction; that is, quantities must be fully defined at construction :
quantity<double,SI::length> q1(1.5*SI::meters);
is acceptable, but
quantity<double,SI::length> q2(1.5);
is not. Basically, construction of quantities (other than copy construction) is restricted to products/divisors of scalars and units, scalars and quantities, and/or units and quantities. This entails some redundancy, but also avoids errors in generic code where the unit system may change. In addition, I think it makes the intent of the code clearer. Direct construction from a value_type is supported, but through a static member function:
quantity<double,SI::length> q3 = quantity<double,SI::length>::from_value(1.5);
Obviously, in my opinion, this sort of stuff should be restricted to libraries where it is necessary and shouldn't be ubiquitous in user code. I know that there is a camp in favor of implicit unit conversions (which can be supported in the library through a #define, though I think this causes more problems than it is worth), and I have some further thoughts on this below, but I'd like input from the non-implicit crowd as to whether this is a reasonable approach.
[snip] I haven't read any docs or seen the library; just a thought on the spot. Maybe a cast-like syntax would be more natural here?

quantity< double, SI::length > q1 = quantity_cast< SI::length >(1.5);
quantity< int, SI::length > q2 = quantity_cast< SI::length >(100);

And construction with no explicit casting, but via an explicit constructor, should be possible too:

quantity< double, SI::length > q1(1.5);
quantity< int, SI::length > q2(100);

Therefore simply doing this will not work:

quantity< int, SI::length > q2 = 100;

I think that is sufficient to make generic code safe. And this IMO should go implicitly:

// Conversion from int to double via assignment
q1 = q2;

// And via construction
quantity< double, SI::length > q3 = quantity_cast< SI::length >(100);

And quantity conversion should require explicit casting:

quantity< double, SI::centigrade > q4;
quantity< double, SI::kelvin > q5;
q5 = quantity_cast< SI::kelvin >(q4);

I don't know; maybe there already is something like that in the library.

PS: And another couple of cents. Maybe the representation type of the "quantity" template should be optional, defaulted to double or, better, to the type that is natural for the quantity type. I.e., for "SI::length", "double" would be natural.

-- Best regards, Andrey mailto:andysem@mail.ru

Hi Andrey -
I haven't read any docs or seen the library. Just a thought on spot. May be a cast-like synax would be more natural here?
quantity< double, SI::length > q1 = quantity_cast< SI::length >(1.5); quantity< int, SI::length > q2 = quantity_cast< SI::length >(100);
I like the idea of quantity_cast, being partial to making anything potentially dangerous easily identifiable in code - I think a similar syntax could be used for three different use cases:

1) construction from a value_type:

template<class Y,class Unit> quantity<Y,Unit> quantity_cast(const Y&);

2) casting to a different value_type:

template<class Y,class Z,class Unit> quantity<Y,Unit> quantity_cast(const quantity<Z,Unit>&);

and 3) casting to a different unit system:

template<class Y,class System1,class System2,class Dim>
quantity<Y,unit<System2,Dim> > quantity_cast(const quantity<Y,unit<System1,Dim> >&);

This should be relatively easy to implement...(famous last words)
And construction with no explicit casting but via explicit constructor should be possible too:
quantity< double, SI::length > q1(1.5); quantity< int, SI::length > q2(100);
My paradigm in designing this library is 1) to prevent Mars Climate Orbiter-style disasters and 2) to facilitate changing unit systems in code with maximal safety. My concern with an explicit constructor taking only a value_type argument is that someone who decided to change from the SI to the CGS system could easily forget to update the constructor arguments:

quantity<double,SI::length> q1(1.0); // 1 meter

would get changed to

quantity<double,CGS::length> q1(1.0); // 1 centimeter

with no indication that there was any problem. If we allow explicit unit system conversion, this change would behave as expected:

quantity<double,SI::length> q1(1.0*meter); // 1 meter

becomes

quantity<double,CGS::length> q1(1.0*meter); // 100 centimeters

and would, therefore, be safe. The same is true if we require quantity_cast for unit system conversion.
And this IMO should go implicitly:
// Conversion from int to double via assignment q1 = q2;
I'm not a big lover of implicit type conversion. This is again an issue of maximizing the safety of using quantities. If we allow implicit value_type conversions, we permit truncation in cases of things like double->int. Preventing this does add one more layer of safety netting. On the other hand, there is something to be said for having quantities simply delegate their value_type conversions to the value_type itself - I guess I'm happy to go either way, depending on the consensus...
And quantity conversion should require explicit casting:
quantity< double, SI::centigrade > q4; quantity< double, SI::kelvin > q5; q5 = quantity_cast< SI::kelvin >(q4);
Of course, the centigrade<->kelvin issue opens a new can of worms since that conversion is affine, not linear. At present, this conversion is not implemented in the library. My preference would be to define a value_type that models a unit with offset that could be converted to linear value_types... Right now, the library allows implicit unit system conversions (as described above), and, with quantity_cast, would allow casting as well.
PS: And another couple of cents. Maybe the representation type of "quantity" template should be optional defaulted to double or, the better way, to the type that is natural for the quantity type. I.e., for "SI::length" the "double" would be natural.
Paul Bristow has brought this up, too. I really, really wish there were a template typedef facility. Lacking that, I'm inclined to define, in the boost::units::SI namespace, another quantity class that defaults to the SI system and a double-precision value_type. This could invert the order of template arguments, too:

namespace SI {

template<class Unit,class Y = double>
class quantity : public boost::units::quantity<Y,Unit>
{ ... };

}

Thanks for the input. Matthias

On 1/15/07, Matthias Schabel <boost@schabel-family.org> wrote:
And quantity conversion should require explicit casting:
quantity< double, SI::centigrade > q4; quantity< double, SI::kelvin > q5; q5 = quantity_cast< SI::kelvin >(q4);
Of course, the centigrade<->kelvin issue opens a new can of worms since that conversion is affine, not linear. At present, this conversion is not implemented in the library. My preference would be to define a value_type that models a unit with offset that could be converted to linear value_types... Right now, the library allows implicit unit system conversions (as described above), and, with quantity_cast, would allow casting as well.
That difference between an affine space and a linear (vector) space is an important one. The Boost date & time library makes the distinction: http://www.boost.org/doc/html/date_time.html#date_time.domain_concepts

If this library made the vector/point distinction, temperature conversion would be a non-issue. Something like:

dimensioned<int, SI::kelvin> k1(100*SI::kelvin), k2(200*SI::kelvin);
quantity<int, SI::kelvin> kdiff = k2 - k1;
quantity<int, SI::celsius> cdiff = kdiff; // Celsius and Kelvin differences are really the same thing.

—Ben

Hello Matthias, Tuesday, January 16, 2007, 7:23:44 AM, you wrote: [snip]
And construction with no explicit casting but via explicit constructor should be possible too:
quantity< double, SI::length > q1(1.5); quantity< int, SI::length > q2(100);
My paradigm in designing this library is to be able to 1) prevent the Mars Climate Orbiter disasters and 2) to facilitate changing of unit systems in code with maximal safety. My concern with having an explicit constructor with only a value_type argument is that someone who decided to change from SI to CGS systems could easily forget to update the constructor arguments :
quantity<double,SI::length> q1(1.0); // 1 meter
would get changed to
quantity<double,CGS::length> q1(1.0); // 1 centimeter
with no indication that there was any problem. If we allow explicit unit system conversion, this change would behave as expected:
quantity<double,SI::length> q1(1.0*meter); // 1 meter
becomes
quantity<double,CGS::length> q1(1.0*meter); // 100 centimeters
and, therefore, be safe. The same is true if we require quantity_cast for unit system conversion.
Well, maybe you're right here.
And this IMO should go implicitly:
// Conversion from int to double via assignment q1 = q2;
I'm not a big lover of implicit type conversion. This is again an issue of maximizing the safety of using quantities. If we allow implicit value_type conversions, we permit truncation in cases like double->int. Preventing this does add one more layer of safety netting. On the other hand, there is something to be said for having quantities simply delegate their value_type conversions to the value_type itself - I guess I'm happy to go either way, depending on the consensus...
IMHO, such implicit conversions would be more natural for users (for me at least). Of course, such conversions should be valid only if the representation types are implicitly convertible (a user may ensure they are not, if necessary). And if there is a precision loss on such a conversion, a well-mannered compiler will issue a warning.
And quantity conversion should require explicit casting:
quantity< double, SI::centigrade > q4; quantity< double, SI::kelvin > q5; q5 = quantity_cast< SI::kelvin >(q4);
Of course, the centigrade<->kelvin issue opens a new can of worms since that conversion is affine, not linear.
My humble knowledge may have confused me, but I thought that the same temperature in Centigrade and in Kelvin will always differ by about 273 (i.e. the conversion is linear). Which is not right in the case of Fahrenheit.
At present, this conversion is not implemented in the library. My preference would be to define a value_type that models a unit with offset that could be converted to linear value_types... Right now, the library allows implicit unit system conversions (as described above), and, with quantity_cast, would allow casting as well.
Maybe the library should offer an opportunity to extend the out-of-the-box set of supported quantities. If you agree with me here, there should be a way of specifying user-defined quantities and conversion rules between them. I see it something like this:

template< typename FromT, typename ToT >
struct conversion_rule;

template< >
struct conversion_rule< unit< SI::kelvin, SI::grade >, unit< SI::centigrade, SI::grade > >
{
    template< typename T >
    static T apply(T const& value) { return value - 273; }
};

template< >
struct conversion_rule< unit< SI::centigrade, SI::grade >, unit< SI::kelvin, SI::grade > >
{
    template< typename T >
    static T apply(T const& value) { return value + 273; }
};

template< class System2, class System1, class Dim, class Y >
quantity< Y, unit< System2, Dim > >
quantity_cast(const quantity< Y, unit< System1, Dim > >& from)
{
    typedef conversion_rule< unit< System1, Dim >, unit< System2, Dim > > conversion_rule_t;
    // here acquire the actual value of the quantity; it'll be of type Y
    return conversion_rule_t::apply(from.get());
}
Although structures as conversion rules are not the best way to implement this (making use of free functions for this purpose would be more flexible because of ADL involvement), the main idea is to extract the conversion algorithm into a user-defined entity.
PS: And another couple of cents. Maybe the representation type of "quantity" template should be optional defaulted to double or, the better way, to the type that is natural for the quantity type. I.e., for "SI::length" the "double" would be natural.
Paul Bristow has brought this up, too. I really, really wish there was a template typedef facility. Lacking that, I'm inclined to define, in the boost::units::SI namespace, another quantity class that defaults to the SI system and double precision value_type. This could invert the order of template arguments, too:
namespace SI {
template<class Unit,class Y = double> class quantity : public boost::units::quantity<Y,Unit> { ... };
}
As Peder Holt mentioned, it is quite possible to make these template parameters independent of their position. I'd do something like this:

struct unit_base {};

template< typename T1, typename T2 >
struct unit : public unit_base {};

template< typename T >
struct is_unit : public is_base_and_derived< unit_base, T > {};

// Double bool check is required to detect erroneous quantity
// instantiations
template< typename T1, typename T2, bool = is_unit< T1 >::value, bool = is_unit< T2 >::value >
struct quantity_tmplt_params;

template< typename T1, typename T2 >
struct quantity_tmplt_params< T1, T2, true, false >
{
    typedef T1 unit_type;
    typedef typename mpl::if_<
        is_same< T2, void >,
        typename unit_type::default_value_type,
        T2
    >::type value_type;
};

template< typename T1, typename T2 >
struct quantity_tmplt_params< T1, T2, false, true >
{
    typedef T2 unit_type;
    typedef typename mpl::if_<
        is_same< T1, void >,
        typename unit_type::default_value_type,
        T1
    >::type value_type;
};

// It's better to use void for defaults as it shortens mangled names
template< typename T1, typename T2 = void >
class quantity : public quantity_impl<
    typename quantity_tmplt_params< T1, T2 >::value_type,
    typename quantity_tmplt_params< T1, T2 >::unit_type
>
{
};

-- Best regards, Andrey mailto:andysem@mail.ru

Hi Andrey -
IMHO, such implicit conversions would be more natural for users (for me at least). Of course, such conversions should be valid only if the representation types are implicitly convertible (a user may ensure they are not, if necessary). And if there is a precision loss on such a conversion, a well-mannered compiler will issue a warning.
OK, I have implicit value_type conversion implemented for the next release. I've also flipped the order of template parameters in quantity and added double as a default value_type...everything seems to work as before, but now you can write quantity<SI::length> q1(1.5*SI::meters); etc...
My humble knowledge may have confused me, but I thought that the same temperature in Centigrade and in Kelvin will always differ by about 273 (i.e. the conversion is linear). Which is not right in the case of Fahrenheit.
What I mean by linear vs. affine is that, while the scale factor for converting temperature differences between Kelvin and centigrade is one, there is a nonzero translation of the origin (MathWorld has a good description of linear and affine transformations):

http://mathworld.wolfram.com/LinearTransformation.html
http://mathworld.wolfram.com/AffineTransformation.html

The point that Ben was making in his post is that, while absolute temperature conversion between Kelvin and centigrade requires an offset, conversion of temperature differences does not. As far as I can tell, integrating this directly into the library would require strong coupling to a special value_type flagging whether a quantity was an absolute temperature or a difference, which I think is undesirable. By defining special value_types

class absolute_temperature;
class temperature_difference;

and defining the operators between them so that you can add or subtract temperature_differences to/from absolute_temperatures to get another absolute_temperature, add or subtract two temperature_differences to get another temperature_difference, and subtract two absolute_temperatures to get a temperature_difference, you should be able to get the correct behavior. Maybe I'll try to put together a quick example of this...
Maybe the library should offer an opportunity to extend the out of box set of supported quantities. If you agree with me here, there should be a way of specifying user-defined quantities and conversion rules between them. I see it something like this:
This already exists in the library; I'm changing the syntax a little for the next release, but there are basically two ways to control unit conversion:

1) if your unit system is "normal" - so that it doesn't require special treatment of the value to perform dimensional analysis on quantities - you can just define specializations of the convert_base_unit class for each fundamental unit (length, time, etc...) for forward and inverse conversions:

template<>
struct convert_base_unit<length_tag,SI::system,CGS::system>
{
    template<class Y> static Y factor() { return Y(100); }
};

template<>
struct convert_base_unit<length_tag,CGS::system,SI::system>
{
    template<class Y> static Y factor() { return Y(0.01); }
};

2) if you need more radical surgery, all conversions are mediated by the conversion_helper class, so explicit quantity conversion between unit systems looks like this:

// explicit conversion from a different unit system
template<class Y, class System, class Dim>
template<class System2,class Dim2>
quantity< Y,unit<System,Dim> >::quantity(const quantity< Y,unit<System2,Dim2> >& source)
{
    *this = conversion_helper<System2,System>::convert_quantity(source);
}

where the default conversion_helper implementation (in conversion.hpp) is a bit messy, but really just goes through all the terms in the source unit and accumulates the scale factors to get a final scale factor for the conversion. If you specialize this for a new unit system, you can pretty much do anything you want here...
Although the structures as the conversion rules is not the best way to implement it (making use of free functions for this purpose would be more flexible because of ADL involvement), the main idea is to extract the conversion algorithm to a user-defined entity.
Hmmm...I actually was having the opposite problem when trying to implement quantity_cast : I could get one overload to work for casting from a raw value_type, but the overload implementing value_type conversion wouldn't resolve...maybe I'm doing something dumb, but it seems that the compiler isn't recognizing that the typedef'd length is really unit<SI::system,length_type>. Strange.
As it was mentioned by Peder Holt, it is quite possible to make these template parameters irrelevant of the position. I'd do something like that:
[snip] This looks interesting although, as Janek pointed out, it only makes sense to have a default type for the value_type, so having arbitrary ordering of the template parameters doesn't seem to provide much real benefit for the added compile-time overhead. As always, I'm willing to be flexible if we can come up with a convincing reason why this is needed/desirable... Thanks, Matthias

----------------------------------------------------------------
Matthias Schabel, Ph.D.
Assistant Professor, Department of Radiology
Utah Center for Advanced Imaging Research
729 Arapeen Drive
Salt Lake City, UT 84108
801-587-9413 (work)
801-585-3592 (fax)
801-706-5760 (cell)
801-484-0811 (home)
matthias dot schabel at hsc dot utah dot edu
----------------------------------------------------------------

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Matthias Schabel
Sent: 17 January 2007 02:29
To: boost@lists.boost.org
Subject: Re: [boost] mcs::units informal review request

I've also flipped the order of template parameters in quantity and added double as a default value_type...everything seems to work as before, but now you can write
quantity<SI::length> q1(1.5*SI::meters)
or what most people will do (as you said - and your examples should encourage):

using namespace SI;
...
quantity<length> q1(1.5 * meters);

which seems pretty much as neat as you can get. Thanks - hope it wasn't too much of a PITA to change. (I've amazed myself by missing the obvious the first time round - e.g. Math Toolkit, where it took a question on the Boost list to suggest the 'obvious' addition of "= double".)

Paul

--- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

On 1/16/07, Matthias Schabel <boost@schabel-family.org > wrote:
...
My humble knowledge may have confused me but I thought that the same
temperature in Centigrade and in Kelvin will always differ by about 273 (i.e. the conversion is linear). Which is not right in the case of Fahrenheit.
What I mean by linear vs. affine is that, while the scale factor for converting temperature differences in Kelvin to/from centigrade is one, there is a nonzero translation of the origin (MathWorld has a good description of linear and affine transformations):
http://mathworld.wolfram.com/LinearTransformation.html http://mathworld.wolfram.com/AffineTransformation.html
The point that Ben was making in his post is that, while absolute temperature conversion between Kelvin and centigrade requires an offset, conversion of temperature differences does not. As far as I can tell, to integrate this directly into the library would require strong coupling between a special value_type flagging whether a quantity was an absolute temperature or a difference, which I think is undesirable. By defining special value_types
class absolute_temperature; class temperature_difference;
and defining the operators between these so that you can add or subtract temperature_differences to/from absolute_temperatures to get another absolute_temperature, add or subtract two temperature_differences to get another temperature difference, and subtract two absolute_temperatures to get a temperature_difference you should be able to get the correct behavior. Maybe I'll try to put together a quick example of this...
I do think it would be great to distinguish between affine and vector spaces in a library like this, but in practical terms it seems like there are three levels of commitment that users might have to dimensional analysis:

1. Type of dimension (length versus temperature).
2. Units of measure (meters versus feet).
3. Absolute versus relative quantities (°C = K+273.15 versus °C = K).

It seems to me that these are separate problems, each building upon the one before it, and that an ideal library would let the user work at any of these levels. I think 1 and 2 can be handled with appropriate dimensional types and casts among them; 3 can be handled by separate types for absolute and relative quantities. I think these could be implemented independently in that order. I'm thinking usage like this:

// First: type of dimension
quantity<double, distance> x1 = 42.0 * distance;

// Second: units of measure
quantity<double, meters> x2 = 20.0 * meters;

// Casting to explicitly add units of measure to unitless quantities.
x2 = quantity_cast<quantity<double, meters> >(x1); // The user claims x1 is in meters.

// Allow casting a reference or pointer, too (important for interfacing with numerical solvers).
quantity<double, meters>& xref = quantity_cast<quantity<double, meters>&>(x1);
quantity<double, meters>* xptr = quantity_cast<quantity<double, meters>*>(x1);

// Third, since most people won't want to get into this, let quantity be what
// most would expect (i.e., a linear space) but add absolute_ and relative_ to be clear.
absolute_quantity<double, temperature> t1 = 1000.0 * temperature; // Unspecified temperature type.
absolute_quantity<double, temperature> t2 = 1010.0 * temperature;
relative_quantity<double, temperature> tdiff = t2 - t1; // tdiff is now 10 temperature units.

quantity<double, kelvin> t3 = 1020.0 * kelvin; // General kelvin quantity.
relative_quantity<double, kelvin> t3rel = quantity_cast<relative_quantity<double, kelvin> >(t3); // Explicitly say it's relative.
// So now t3rel is 1020.0 K.
absolute_quantity<double, celsius> t3C = quantity_cast<absolute_quantity<double, celsius> >(t3); // Explicitly say it's absolute.
// Now t3C is 746.85°C.
t3C /= 2.0; // Specialize absolute temperatures to have scalar multiplication.
// Now t3C is 236.85°C (= 1020.0 K / 2).

absolute_quantity<double, seconds> t = 10.0 * seconds;
// t *= 2.0; // This shouldn't compile because absolute time doesn't have scalar multiplication.

—Ben

PS Temperature is particularly odd in that absolute temperature is (almost) a linear space even though temperature differences are in a different linear space. That is, 0°C × 2 = 273.15K × 2 = 546.3K = 273.15°C. (I say "almost" because there's no negative absolute temperature, so it can't really be a linear space.)

PPS Where can I find the code under discussion?

Hi Ben,
I do think it would be great to distinguish between affine and vector spaces in a library like this, but in practical terms it seems like there are three levels of commitment that users might have to dimensional analysis: 1. Type of dimension (length versus temperature). 2. Units of measure (meters versus feet). 3. Absolute versus relative quantities (°C = K+273.15 versus °C = K).
The way I've developed the abstractions for this library is similar to the way you describe:

1) "raw" dimensional analysis, as demonstrated in unit_example_1.cpp. This is purely compile-time metaprogramming, and is a little messy because of that. However, I suspect that this will be a relatively rare use case for end users of the library. Furthermore, if it turned out to be useful, it would certainly be possible to add some metaprogramming and/or preprocessor support to simplify the syntax and handling of dimensional analysis typelists.

2) units, defined as you do: a type of dimension with an associated unit system but no numeric value - SI::length, SI::energy, SI::power, etc...

3) quantities, defined as an amount of a specified unit: 1.5*meters, (1.5+0.2*i)*ohms, etc... Quantities need to implement two algebras: the compile-time dimensional analysis algebra and the algebra of the underlying value_type. This is where handling of the difference between absolute and relative quantities would happen. The advantage of this is that I have already implemented the quantity algebra in such a way that the value_type algebra simply delegates to the value_type itself (this requires typeof support or manual specialization of operator helpers for heterogeneous algebras), so the dimensional analysis is completely decoupled from the value_type algebra.
I'm thinking usage like this: // First: type of dimension quantity<double, distance> x1 = 42.0 * distance;
As it stands, the library assumes that quantities are values associated with a specific unit system. That being said, you can easily implement a unit system for "abstract" dimensioned quantities, so you would write:

quantity<double,abstract::length> x1 = 42.0*abstract::length;
// Second: units of measure quantity<double, meters> x2 = 20.0 * meters;
This is basically identical to the existing SI system as implemented.
// Casting to explicitly add units of measure to unitless quantities. x2 = quantity_cast<quantity<double, meters> >(x1); // The user claims x1 is in meters.
This will hopefully be implemented for the next release.
// Allow casting a reference or pointer, too (important for interfacing with numerical solvers). quantity<double, meters>& xref = quantity_cast<quantity<double, meters>&>(x1); quantity<double, meters>* xptr = quantity_cast<quantity<double, meters>*>(x1);
This is a good idea; of course, since it involves pointers and references, I'm sure the actual implementation will be ugly to get constness and everything else right...
// Third, since most people won't want to get into this, let quantity be what // most would expect (i.e., a linear space) but add absolute_ and relative_ to be clear. absolute_quantity<double, temperature> t1 = 1000.0 * temperature; // Unspecified temperature type. absolute_quantity<double, temperature> t2 = 1010.0 * temperature;
relative_quantity<double, temperature> tdiff = t2 - t1; // tdiff is now 10 temperature units.
[snip] The way I'd propose to do this within the existing framework is to implement two different value_types:

template<class Y> class absolute_measure;
template<class Y> class difference_measure;

with the appropriate algebra defined within and between the two classes (don't quote me on this - I'd need to think carefully about the exact algebra):

absolute_measure - absolute_measure = difference_measure
absolute_measure +- difference_measure = absolute_measure
absolute_measure +*/ absolute_measure = error
absolute_measure */ scalar = absolute_measure (?)
difference_measure -> normal quantity algebra, implicitly convertible to value_type (Y)

Then you would have

quantity<absolute_measure<double>,SI::temperature> t1 = 1000.0*SI::kelvin, t2 = 1010.0*SI::kelvin;
quantity<difference_measure<double>,SI::temperature> tdiff = t2-t1;

quantity_casts would just work by delegating to the appropriate value_type constructor when converting explicitly between absolute_measure and difference_measure...
PS Temperature is particularly odd in that absolute temperature is (almost) a linear space even though temperature differences are in a different linear space. That is, 0°C × 2 = 273.15K × 2 = 546.3K = 273.15°C. (I say "almost" because there's no negative absolute temperature, so it can't really be a linear space.)
I'm not sure I follow the significance here...
PPS Where can I find the code under discussion?
It's in the Boost Vault: http://www.boost-consulting.com/vault/index.php?&direction=0&order=&directory=Units ; the file is mcs_units_v0.5.4.zip. Thanks for the feedback. Matthias

I've just posted mcs::units v0.5.5 - changes include

1) inverted quantity template parameter order and double precision default value_type, allowing this syntax:

quantity<SI::length> q1(1.0*SI::metre); // added metre, etc... for our Commonwealth friends

2) implemented implicit value_type conversion (for value_types that are themselves implicitly convertible)
3) implemented quantity_cast for three cases (see unit_example_5.cpp):
a) construction of quantities from raw value types
b) conversion of value_types
c) conversion of unit systems

I hope to have an example of absolute_measure/difference_measure soon... Matthias

On 1/18/07, Matthias Schabel <boost@schabel-family.org> wrote:
... 3) implemented quantity_cast for three cases (see unit_example_5.cpp) : a) construction of quantities from raw value types b) conversion of value_types c) conversion of unit systems
It isn't clear to me what the semantics of these three are. I'd say quantity_cast should be a dimensionally safe operation. Is that the semantics you have in mind?

It seems like (a) should just be handled by multiplication. That is,

double x = 42.0;
quantity<SI::length> y = x * SI::meter;

For (b), it might be better to have a static_quantity_cast, as in

quantity<SI::length> x = 42.0 * SI::meter;
quantity<SI::length, int> y = static_quantity_cast<quantity<SI::length, int> >(x);

For (c), do you mean this:

quantity<SI::length> x = 42.0 * SI::meter;
quantity<CGS::length> y = quantity_cast<quantity<CGS::length> >(x);

where y ends up being 4200 cm? —Ben

PS I must say I'm thrilled to see someone putting serious effort into this. I've thought about it for years.

3) implemented quantity_cast for three cases (see unit_example_5.cpp) : a) construction of quantities from raw value types b) conversion of value_types c) conversion of unit systems
It isn't clear to me what the semantics of these three are.
Semantics are:

a) construct quantity from value_type

double x = 10.0;
quantity<length,double> q = quantity_cast< quantity<length,double> >(x);

b) change value_type

quantity<length,double> y = 10.0*meters;
quantity<length,std::complex<double> > q = quantity_cast< std::complex<double> >(y);

c) change unit system

quantity<SI::length> z = 10.0*meters;
quantity<CGS::length> q = quantity_cast<CGS::length>(z);
quantity_cast should be a dimensionally safe operation. Is that the semantics you have in mind?
I was thinking the opposite - I was under the impression that explicit casting usually indicates something that's potentially _unsafe_...
It seems like (a) should just be handled by multiplication. That is,

double x = 42.0;
quantity<SI::length> y = x * SI::meter;
This is the default method for quantity construction.
For (b), it might be better to have a static_quantity_cast, as in

quantity<SI::length> x = 42.0 * SI::meter;
quantity<SI::length, int> y = static_quantity_cast<quantity<SI::length, int> >(x);
I guess I don't feel strongly one way or the other - using quantity_cast as the syntax for all three conversions is simpler since there isn't any potential for confusion, but I'm happy to go with the flow if there is consensus otherwise.
For (c), do you mean this:

quantity<SI::length> x = 42.0 * SI::meter;
quantity<CGS::length> y = quantity_cast<quantity<CGS::length> >(x);

where y ends up being 4200 cm?
Exactly. There are essentially three components to a fully-specified quantity: the unit system, the unit type, and the value type. As it is implemented now, quantity_cast allows casting of all three of these...
PS I must say I'm thrilled to see someone putting serious effort into this. I've thought about it for years.
I'm glad for the interest...obviously there's been a lot of discussion of this kind of library in Boost over the past few years. Because many of these have been contentious, I'm trying to get as much early feedback as possible so I can make the library as flexible as possible without having it grow beyond control... Please let me know as you have a chance to play with the code and docs if you have other concerns, suggestions for improvement, editorial input, etc... Cheers, Matthias

Thanks for the clarification... On 1/19/07, Matthias Schabel <boost@schabel-family.org> wrote:
3) implemented quantity_cast for three cases (see unit_example_5.cpp) : a) construction of quantities from raw value types b) conversion of value_types c) conversion of unit systems
It isn't clear to me what the semantics of these three are. I'd
Semantics are :
a) construct quantity from value_type

double x = 10.0;
quantity<length,double> q = quantity_cast< quantity<length,double> >(x);

b) change value_type

quantity<length,double> y = 10.0*meters;
quantity<length,std::complex<double> > q = quantity_cast< std::complex<double> >(y);

c) change unit system

quantity<SI::length> z = 10.0*meters;
quantity<CGS::length> q = quantity_cast<CGS::length>(z);
quantity_cast should be a dimensionally safe operation. Is that the semantics you have in mind?
I was thinking the opposite - I was under the impression that explicit casting usually indicates something that's potentially _unsafe_...
With the exception of (a), these are safe(ish) operations. For (c), the operation is very dimensionally safe. (I'd argue it should be implicit, but I'm fine with being more strict for now.) For (b) the semantics really are the same as a static_cast since the only thing changing is the type, so it's dimensionally safe even if it's type-dangerous. But (a) I worry about. For something like that I'd rather see another name for the cast since it is a potential hole in the unit system – something like reinterpret_quantity_cast. (There's always reinterpret_cast for doing things that are really unsafe.)

I picture a few reasonable casts:

(0) Casting equivalent units (meters -> feet). This is very very safe and so should have its own cast (if it has a cast at all). (Perhaps quantity could just have an explicit constructor so that static_cast works? Then a precompiler definition could toggle that "explicit" for those who want this to be implicit.)
(1) static_casting enclosed types (perhaps "quantity_static_cast"?)
(2) Adding explicit dimensions to something that's got only general units. That is, quantity<length> -> quantity<meters> but not quantity<length> -> quantity<time>. This seems reasonably safe. (Perhaps that would get quantity_cast?)
(3) a. Casting doubles to quantity<D>s.
    b. Casting back to the enclosed type for use in other libraries.
These are both potentially dangerous, but not too bad as long as it allows only conversions to and from enclosed types (e.g., quantity<length> -> double and double -> quantity<time>) but disallows quantity-to-quantity conversion (e.g., quantity<length> -> quantity<time>). (Perhaps reinterpret_quantity_cast?)

So what I'm proposing is three new casts:

(1) quantity_static_cast to static_cast the enclosed type. (This would show up in searches for static_cast.)
(2) quantity_cast to add or remove quantity information (length <-> meters).
(3) reinterpret_quantity_cast to go to and from, e.g., doubles. (This would also show up in searches for quantity_cast and for reinterpret_.) This wouldn't be necessary for scalars since division and multiplication would work, but would be useful for arrays.

—Ben

With the exception of (a), these are safe(ish) operations. For (c), the operation is very dimensionally safe. (I'd argue it should be implicit, but I'm fine with being more strict for now.)
The rationale for preventing implicit conversions is that much of the point of this library is to provide dimensional analysis with zero runtime overhead - as soon as you allow implicit unit conversions, that's out the window. In addition, you run the risk of losing precision if implicit conversion is allowed between unit systems having significantly different scales (say astronomical and high energy physics units). Even implicit value_type conversion poses a similar problem, but at least there the precedent is already established with built-in value_types... The performance issue is a big one because one of the major anticipated use areas for this sort of library is numerical computing where performance is critical and no overhead is acceptable at runtime...
For (b) the semantics really are the same as a static_cast since the only thing changing is the type, so it's dimensionally safe even if it's type-dangerous. But (a) I worry about. For something like that I'd rather see another name for the cast since it is a potential hole in the unit system – something like reinterpret_quantity_cast. (There's always reinterpret_cast for doing things that are really unsafe.)
Is it possible to overload the system reinterpret_cast, or is it not really a library function? If it was, that might be an option, too...
I picture a few reasonable casts: (0) Casting equivalent units (meters -> feet). This is very very safe and so should have its own cast (if it has a cast at all). (Perhaps quantity could just have an explicit constructor so that static_cast works? Then a precompiler definition could toggle that "explicit" for those who want this to be implicit.)
Actually this is how it is currently implemented. You can do explicit unit system conversions via constructor, and there is a precompiler option:

#define MCS_UNITS_ENABLE_IMPLICIT_CONVERSIONS

for enabling implicit conversions. So maybe the value_type quantity_cast is redundant... Let's see what others have to say on the topic...
(2) Adding explicit dimensions to something that's got only general units. That is, quantity<length> -> quantity<meters> but not quantity<length> -> quantity<time>. This seems reasonably safe. (Perhaps that would get quantity_cast?)
As now implemented, if you want to have generic code that works for units of any system, you need to template it on unit system like this (from unit_example_4.cpp):

/// the physical definition of work - computed for an arbitrary unit system
template<class System,class Y>
quantity<unit<System,energy_type>,Y>
work(quantity<unit<System,force_type>,Y> F,
     quantity<unit<System,length_type>,Y> dx)
{
    return F*dx;
}
b. Casting back to the enclosed type for use in other libraries. These are both potentially dangerous, but not too bad as long as it allows only conversions to and from enclosed types (e.g., quantity<length> -> double and double -> quantity<time>) but disallows quantity-to-quantity conversion (e.g., quantity<length> -> quantity<time>). (Perhaps reinterpret_quantity_cast?)
This is supported through the value() member function in quantity, too...
So what I'm proposing is three new casts:

(1) quantity_static_cast to static_cast the enclosed type. (This would show up in searches for static_cast.)
(2) quantity_cast to add or remove quantity information (length <-> meters).
(3) reinterpret_quantity_cast to go to and from, e.g., doubles. (This would also show up in searches for quantity_cast and for reinterpret_.) This wouldn't be necessary for scalars since division and multiplication would work, but would be useful for arrays.
I think we basically agree on the desirable functionality; I'm not particularly wedded to the specific syntax I've chosen now, but would like to get more input from others before making changes... Thanks for the comments, Matthias

Hello Matthias, [snip]
For (b) the semantics really are the same as a static_cast since the only thing changing is the type, so it's dimensionally safe even if it's type-dangerous. But (a) I worry about. For something like that I'd rather see another name for the cast since it is a potential hole in the unit system – something like reinterpret_quantity_cast. (There's always reinterpret_cast for doing things that are really unsafe.)
Is it possible to overload the system reinterpret_cast, or is it not really a library function? If it was, that might be an option, too...
It's an operator and it cannot be overloaded. -- Best regards, Andrey mailto:andysem@mail.ru

On 1/19/07, Andrey Semashev <andysem@mail.ru> wrote:
Hello Matthias,
[snip]
For (b) the semantics really are the same as a static_cast since the only thing changing is the type, so it's dimensionally safe even if it's type-dangerous. But (a) I worry about. For something like that I'd rather see another name for the cast since it is a potential hole in the unit system – something like reinterpret_quantity_cast. (There's always reinterpret_cast for doing things that are really unsafe.)
Is it possible to overload the system reinterpret_cast, or is it not really a library function? If it was, that might be an option, too...
It's an operator and it cannot be overloaded.
However, assuming that a quantity<D,T> looks like a T in memory, reinterpret_cast should Just Work. Of course, it would let you change both D and T. (My thought was to have a cast that can change quantity<D,T> to and from a T.) —Ben

However, assuming that a quantity<D,T> looks like a T in memory, reinterpret_cast should Just Work. Of course, it would let you change both D
This is true and, since this is strictly a compile-time library, quantity<D,T> will just be a T in memory, so a special reinterpret cast is probably not necessary...
and T. (My thought was to have a cast that can change quantity<D,T> to and from a T.)
As Andrey points out, conversion to T can be through the value() member function and conversion from T via explicit constructor specifying the dimensions, so any cast would be redundant. That said, I don't mind a little redundancy if it increases transparency... Matthias

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Matthias Schabel Sent: 19 January 2007 19:51 To: boost@lists.boost.org Subject: Re: [boost] mcs::units informal review request
I think we basically agree on the desirable functionality; I'm not particularly wedded to the specific syntax I've chosen now, but would like to get more input from others before making changes...
I'm getting the feeling that you've got things right unless someone comes up with some good examples of how it isn't working nicely. That isn't going to happen, IMO, until people are actually using the system 'for real and in anger'. People's comments so far suggest that more commentary on what the examples exemplify might forestall criticism? Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

Hello Ben, [snip]
I picture a few reasonable casts:

(0) Casting equivalent units (meters -> feet). This is very very safe and so should have its own cast (if it has a cast at all). (Perhaps quantity could just have an explicit constructor so that static_cast works? Then a precompiler definition could toggle that "explicit" for those who want this to be implicit.)
(1) static_casting enclosed types (perhaps "quantity_static_cast"?)
(2) Adding explicit dimensions to something that's got only general units. That is, quantity<length> -> quantity<meters> but not quantity<length> -> quantity<time>. This seems reasonably safe. (Perhaps that would get quantity_cast?)
(3) a. Casting doubles to quantity<D>s.
    b. Casting back to the enclosed type for use in other libraries.
These are both potentially dangerous, but not too bad as long as it allows only conversions to and from enclosed types (e.g., quantity<length> -> double and double -> quantity<time>) but disallows quantity-to-quantity conversion (e.g., quantity<length> -> quantity<time>). (Perhaps reinterpret_quantity_cast?)
So what I'm proposing is three new casts:

(1) quantity_static_cast to static_cast the enclosed type. (This would show up in searches for static_cast.)
(2) quantity_cast to add or remove quantity information (length <-> meters).
(3) reinterpret_quantity_cast to go to and from, e.g., doubles. (This would also show up in searches for quantity_cast and for reinterpret_.) This wouldn't be necessary for scalars since division and multiplication would work, but would be useful for arrays.
Sorry to interleave your discussion. I'd add (2.5) quantity_cast to translate compatible quantities, e.g. meters <-> miles. I'm also not quite happy with having three casts instead of one. IMO, this would complicate the library usage with almost nothing in return. Although I can imagine some search-related benefit to having quantity_static_cast, it isn't worth it. And as for reinterpret_quantity_cast, I can really see no use for it. The value of the quantity may be obtained via a simple method, like "get" or "value", and the opposite conversion may be done with an explicit constructor applying a dimension. I'm not quite sure I followed you in "would be useful for arrays", so I may be missing something. And after all, reinterpret_quantity_cast may need a better name if it is to be implemented. The difference in word order between quantity_static_cast and reinterpret_quantity_cast is confusing, and I feel I'm getting older while typing "reinterpret_quantity_cast". :) -- Best regards, Andrey mailto:andysem@mail.ru

Matthias Schabel said: (by the date of Thu, 18 Jan 2007 14:49:12 -0700)
I've just posted mcs::units v0.5.5 [on boost vault] - changes include
I want to thank you very much for your effort. My time currently is very limited, so I had a chance for a very short glance at your examples. And I liked very much what I've seen. I recalled another common request: some people wanted to add currency and unit conversions between various currencies (using a multiplier that can change during the program run). I work in engineering so it's not for me. But for those people who need it, will it be possible? -- Janek Kozicki

Hi Janek,
I want to thank you very much for your effort. My time currently is very limited, so I had a chance for a very short glance at your examples. And I liked very much what I've seen.
Thanks for the encouragement - I'm actually on vacation right now (as embarrassing as that is to admit), so I've had some time to devote to getting things shaped up... I certainly appreciate the feedback - please keep it coming as you find time to look at the library in more detail...
I recalled another common request: some people wanted to add currency and unit conversions between various currencies (using a multiplier that can change during the program run). I work in engineering so it's not for me. But for those people who need it, will it be possible?
I recall the same; my opinion is that the full dimensional analysis machinery is probably overkill for this kind of application, as I have a hard time envisioning units like $ m/s^2 being useful, but it should be possible, again, by using currencies as value types...something like this:

// currency unit systems
struct currency_system { };

struct currency_tag : public ordinal<1> { };

typedef dimension< boost::mpl::list< dim< currency_tag,static_rational<1> > > >::type currency_type;

typedef unit<currency_system,currency_type> currency;

class us_dollar { ... };
class canadian_dollar { ... };

quantity<currency,us_dollar> usd(us_dollar(date1));
quantity<currency,canadian_dollar> cd(canadian_dollar(date2));

This way the value_type takes care of all the conversions (say, to and from constant US dollars fixed at a certain date, or however you like) and all the units library does is keep track of the fact that the quantity is a currency. By doing this, everything is completely decoupled. If implicit conversion of currencies is supported, this is transparent through quantity... Matthias

Haven't had a chance to look at the code or docs yet, but have been following the thread. Sounds great so far! Matthias Schabel wrote:
I recall the same; my opinion is that the full dimensional analysis machinery is probably overkill for this kind of application as I have a hard time envisioning units like $ m/s^2 being useful, but it should be possible, again, by currencies as value types...something like this:
I'm not one of the people who initially requested this, but I can absolutely see a use for dimensional analysis when it comes to finances. Think of $/flop, watt/flop or even $-watt/flop for procuring computing systems. Engineering, at its basic level, is trading off technical innovation for money. Measuring such things gets one into...interesting units. :) -Dave

I'm not one of the people who initially requested this, but I can absolutely see a use for dimensional analysis when it comes to finances. Think of $/flop, watt/flop or even $-watt/flop for procuring computing systems.
Good point. Anyway, the library was designed to make implementing new unit systems relatively easy (as library implementation goes), so this kind of usage should be simple. For things like $/watt, you can even just extend the existing SI system... Matthias

Matthias Schabel wrote:
I'm not one of the people who initially requested this, but I can absolutely see a use for dimensional analysis when it comes to finances. Think of $/flop, watt/flop or even $-watt/flop for procuring computing systems.
Good point. Anyway, the library was designed to make implementing new unit systems relatively easy (as library implementation goes), so this kind of usage should be simple. For things like $/watt, you can even just extend the existing SI system...
(butting in) I'm not sure about that. You had two ideas; the first was to forward everything to the value_type, so you'll have a us_dollar value_type, etc. I think it's not convenient. Most users will find that something like

quantity<currency/*, default to double*/> q1(1500.0 * us_dollar);

makes more sense than

quantity<currency, us_dollar> q1(1500.0);

It certainly fits the rest of the library better, IMO. More than that, the second idea of extending the SI system pretty much, AFAIU, forces the value_type to *not* be anything like us_dollar - it should fit length and mass too! So I believe it boils down to providing runtime-changeable conversion ratios for some units (currency), in addition to the currently fixed ratios for other units (physics). Whether or not you want to support it is a completely different matter... Yuval

Am Samstag, den 20.01.2007, 17:32 +0200 schrieb Yuval Ronen:
So I believe it boils down to providing a runtime-changeable conversion ratios for some units (currency), in addition to the currently fixed ratios for other units (physics). Whether or not you want to support it, is a completely different matter...
Worse, I think you can't fix the set of currencies at compile-time. Even if you were willing to change your program whenever a new currency is added to the set of world currencies (admittedly relatively rarely but it does happen), you might quickly want to add stuff like wheat or stocks (if only for completeness sake). Any ideas how to deal with this properly?
Yuval
Aristid PS: Now I think I'll check out the thing you're discussing about.

Hi Aristid,
Worse, I think you can't fix the set of currencies at compile-time. Even if you were willing to change your program whenever a new currency is added to the set of world currencies (admittedly relatively rarely but it does happen), you might quickly want to add stuff like wheat or stocks (if only for completeness sake).
Any ideas how to deal with this properly?
I really don't know of a way (at least in C++) to provide zero overhead compile-time dimensional analysis where you don't specify the set of currencies in advance. mcs::units allows arbitrary (or nearly so) unit conversions at runtime, but is restricted to a static set of fundamental units (whatever they may be). The kind of application you're envisioning probably calls for a runtime system or (even better) a specific library for dealing with currency... Matthias

Am Samstag, den 20.01.2007, 11:35 -0700 schrieb Matthias Schabel:
Hi Aristid,
Worse, I think you can't fix the set of currencies at compile-time. Even if you were willing to change your program whenever a new currency is added to the set of world currencies (admittedly relatively rarely but it does happen), you might quickly want to add stuff like wheat or stocks (if only for completeness sake).
Any ideas how to deal with this properly?
I really don't know of a way (at least in C++) to provide zero overhead compile-time dimensional analysis where you don't specify the set of currencies in advance.
Well... they all share the "price" dimension, right?
mcs::units allows arbitrary (or nearly so) unit conversions at runtime, but is restricted to a static set of fundamental units (whatever they may be).
Like I feared.
The kind of application you're envisioning probably calls for a runtime system or (even better) a specific library for dealing with currency...
Well, I don't actually have a specific application in mind. It's just that I'm currently writing an application that deals with money and I read Yuval's post... So you think mcs::units cannot possibly support run-time currencies in the near or far future?
Matthias
Aristid

Well... they all share the "price" dimension, right?
Potentially, yes...
mcs::units allows arbitrary (or nearly so) unit conversions at runtime, but is restricted to a static set of fundamental units (whatever they may be).
Like I feared.
Just to clarify, the set of fundamental units is defined at compile time, but is not mandated by the library. That is, you could define apples, oranges, flops, 1970s USD, and tribbles as the set of fundamental units, and the library would happily perform compile-time dimensional analysis on integer or fractional powers of combinations of these units...
So you think mcs::units cannot possibly support run-time currencies in the near and far future?
If you're willing to do runtime conversions within the value_type (as I discussed in a previous post on currency conversion), then mcs::units could support arbitrary runtime conversion of currencies relatively easily. Matthias ---------------------------------------------------------------- Matthias Schabel, Ph.D. Assistant Professor, Department of Radiology Utah Center for Advanced Imaging Research 729 Arapeen Drive Salt Lake City, UT 84108 801-587-9413 (work) 801-585-3592 (fax) 801-706-5760 (cell) 801-484-0811 (home) matthias dot schabel at hsc dot utah dot edu ----------------------------------------------------------------

Hi Yuval,
I'm not sure about that. You had two ideas, the first was to forward everything to the value_type, so you'll have a us_dollar value_type, etc. I think it's not convenient. Most users will find something like
quantity<currency/*, default to double*/> q1(1500.0 * us_dollar);
makes more sense than
quantity<currency, us_dollar> q1(1500.0);
It certainly fits the rest of the library better, IMO. More than that,
Maybe - for me, it makes more sense to modify the value_type in this way, but this was just an off-the-cuff example of how one could accomplish time-varying currency conversions with compile-time dimension checking in the simplest way (from the ease-of-implementation standpoint). I'm not sure I agree that using a double precision value_type fits the rest of the library better, though - I have spent a significant amount of time and effort to get the library to gracefully and correctly handle any value_type that models the necessary concepts, so it should be value_type agnostic as much as possible.
the second idea of extending the SI system pretty much, AFAIU, forces the value_type to *not* be anything like us_dollar - it should fit length and mass too!
This was presented as an option for convenience; you certainly don't have to extend the SI system. But you may be right; if we're mixing physical units and currency units, you may need to implement specific currency systems and the conversions between them...
So I believe it boils down to providing runtime-changeable conversion ratios for some units (currency), in addition to the currently fixed ratios for other units (physics). Whether or not you want to support it is a completely different matter...
You can do this already; it requires more work, because the conversion_helper class (in conversion.hpp) needs to be specialized to work with currency systems to accommodate the time-varying conversion factor. That all being said, I personally don't feel that dealing with currencies (among a number of other specialized use cases that have been proposed along the way during the discussions of this domain) is a reasonable expectation for this library to support out of the box. I have tried to make it flexible enough to deal with a wide range of potential applications, but for those applications that lie outside standard dimensional analysis and units, I expect that users will use this library as a foundation for their specific application. Matthias

Matthias Schabel wrote:
// currency unit systems
struct currency_system { };

struct currency_tag : public ordinal<1> { };

typedef dimension< boost::mpl::list< dim< currency_tag,static_rational<1> > > >::type currency_type;

typedef unit<currency_system,currency_type> currency;

class us_dollar { ... };

class canadian_dollar { ... };

quantity<currency,us_dollar> usd(us_dollar(date1));
quantity<currency,canadian_dollar> cd(canadian_dollar(date2));
This way the value_type takes care of all the conversions (say to and from constant US dollars fixed at a certain date or however you like) and all the units library does is keep track of the fact that the quantity is a currency. By doing this, everything is completely decoupled. If implicit conversion of currencies is supported, this is transparent through quantity...
Would something like this work for creating a quantity whose unit can change during runtime, and allow support for multiple units all converting to a particular "system" (as you specify the meaning)? For instance, I work on fluid flow analysis programs and have started working on a unit library based on ch. 3 in the MPL book. One thing I need is for user entry of arbitrary (and sometimes user defined) units. So an entry of pressure might be in psi or inches of mercury; it also might be in gage (with some user defined reference atmospheric pressure) or absolute. These values need to be stored in those units (otherwise you run afoul of float migration) and are the ultimate entry data parameters for all the system's equations. How does your library meet this need?

Another issue we have is that equations are more often than not referenced from some source written in arbitrary units. It would be nice to be able to reflect this in code so that if some source says "* 9.32 psi a" this could be done in the code itself with minimal to 0 impact on runtime performance. Does your library meet this need? I envision something akin to:

typedef quantity<SI::pressure> pressure_qty;

X f() { static pressure_qty const p9 = 9.32 * psi; }

Hi Noah,
Would something like this work for creating a quantity whose unit can change during runtime and allow support for multiple units all converting to a particular "system" (as you specify the meaning)?
I am drawing a "line in the sand" at implementation of runtime unit systems; this is not because it can't or shouldn't be done, but because it is basically a completely different problem domain with different objectives and performance criteria. It would certainly be possible to integrate a runtime unit and quantity library, if someone else wanted to implement it, with my proposed library by simply specializing the unit and quantity classes, something like this:

struct runtime_system { };
struct runtime_dim { };

template<> class unit<runtime_system,runtime_dim> { ... };

typedef unit<runtime_system,runtime_dim> runtime_unit;

template<class Y> class quantity<runtime_unit,Y = double> { ... };

That being said, it is possible to have compile-time units with runtime-varying unit conversions (as I alluded to in the previous post on currency conversion). You basically need to specialize the conversion_helper class (in conversion.hpp) to do what you want it to...
For instance, I work on fluid flow analysis programs and have started working on a unit library based on ch. 3 in the MPL book. One thing I need is for user entry of arbitrary (and sometimes user defined) units. So an entry of pressure might be in psi or inches of mercury; it also might be in gage (with some user defined reference atmospheric pressure) or absolute. These values need to be stored in those units (otherwise you run afoul of float migration) and are the ultimate entry data parameters for all the system's equations. How does your library meet this need?
My library does not deal with issues of runtime units, therefore, does not provide any facility for doing these sorts of things. I do believe that this is an interesting area, and potentially worthy of implementation, but I don't have the time, inclination, or expertise to do it. I would like to keep the focus in the present discussion on zero runtime overhead dimensional analysis systems and reserve consideration of runtime units for another time/ library...
Another issue we have is that equations are more often than not referenced from some source written in arbitrary units. It would be nice to be able to reflect this in code so that if some source says "* 9.32 psi a" this could be done in the code itself with minimal to 0 impact on runtime performance. Does your library meet this need?
I envision something akin to:
typedef quantity<SI::pressure> length_qty;
X f() { static length_qty const p9 = 9.32 * psi; }
There are various ways to accomplish this with the library; explicit conversion of units between unit systems would allow something like you wrote:

static quantity<SI::pressure> const p9(9.32*psi);

assuming you either

1) defined a pounds-inches unit system with appropriate conversions to the SI system, or
2) simply defined psi as a constant in SI units.

Either option is probably acceptable; the former is best if many of the program's computations will be done in a pounds-inches system, the latter if you are just occasionally using non-SI units in a predominantly SI code.

Cheers,

Matthias

Matthias Schabel said: (by the date of Sat, 20 Jan 2007 11:04:43 -0700)
I am drawing a "line in the sand" at implementation of runtime unit systems;
My library does not deal with issues of runtime units, therefore, does not provide any facility for doing these sorts of things. I do believe that this is an interesting area, and potentially worthy of implementation, but I don't have the time, inclination, or expertise to do it. I would like to keep the focus in the present discussion on zero runtime overhead dimensional analysis systems and reserve consideration of runtime units for another time/library...
In my opinion it is a wise decision. Andy Little tried to do both runtime and compile-time units, and simply got overwhelmed by the complexity of the problem. IIRC there was an agreement among the reviewers that he tried doing too much, and that was the cause of his failure. So, Matthias, keep doing what you are doing now. Do not add unnecessary complexity to your project: solve only a small part of the problem, but with a good solution.

-- Janek Kozicki

Matthias Schabel wrote:
Hi Noah,
Would something like this work for creating a quantity whose unit can change during runtime and allow support for multiple units all converting to a particular "system" (as you specify the meaning)?
I am drawing a "line in the sand" at implementation of runtime unit systems; this is not because it can't or shouldn't be done, but because it is basically a completely different problem domain with different objectives and performance criteria.
I think it is a definite must that any unit library be at least extensible enough to support the two problems I described. I personally don't see much use in being able to convert, statically, between two disparate unit systems. I don't know of any project that would do this. Perhaps for times when you are using third-party libraries that assume one set of units while you work in another this would be necessary, but that doesn't seem a general enough problem for a boost library, and both would have to use this system.

The primary goal of a boost units library should be to support safe unit conversions of user-defined units in a way that ensures that conversions are safe and inexpensive (as in not done more than once per assignment), and the primary use of this will be during runtime. I was hoping that your library could be used as a base for such a runtime solution, but you make it sound like more trouble than it's worth. I will probably continue to look at ways to work this in, especially if you get accepted, but since I already have a very simple answer to the problem it may be placed on the back burner. It would be nice to be positive that units share the same base system, and to provide a general solution for when they might not, but I really think, practically speaking, that the likelihood of someone needing two different static base systems is next to nil.

I don't believe Andy's problem was that runtime units are too large a problem, but that these unit libraries try to do too much in static mode. It certainly makes for a great exercise in TMP, but without support for conversions to arbitrary, runtime-selected units I don't see how they have a whole lot of practical value. By support I mean either built in or easily extended in a well-documented manner.
You have a great library, something that might be a great backbone for a runtime units system and provide an extra level of safety, but I think the primary use of a unit library for most people is going to be runtime units; you either need to support this directly or better document the methods to use your library for such purposes. 99.99% of the time users are going to stick with a single system, usually the SI system, as their static set of units and will need to do a lot of conversions into and out of these base units. I understand your desire to keep your library simple and focused on a single task and goal, but I just don't see much need out there for a library that does static unit conversions but has no concept of runtime units.

On 1/22/07, Noah Roberts <roberts.noah@gmail.com> wrote:
You have a great library, something that might be a great backbone for a runtime units system and provide an extra level of safety, but I think the primary use of a unit library for most people is going to be runtime units; you either need to support this directly or better document the methods to use your library for such purposes. 99.99% of the time users are going to stick with a single system, usually the SI system, as their static set of units and will need to do a lot of conversions into and out of these base units. I understand your desire to keep your library simple and focused on a single task and goal, but I just don't see much need out there for a library that does static unit conversions but has no concept of runtime units.
I think the majority of people who participated in Andy Little's review and those who have responded to this thread disagree. We (I include myself, and hopefully have interpreted others' responses correctly) envision this being mostly a compile-time problem, with a much smaller use-case for run-time support. The purpose being to catch at compile time errors that would have before gone unnoticed, such as:

// returns feet!
double get_altitude_from_sensor();

// assumes meters!
bool should_deploy_chute(double altitude);

// Somewhere in another module
while (true)
{
    double altitude = get_altitude_from_sensor();
    if (should_deploy_chute(altitude)) break;
}
deploy_chute();

Sound familiar? ;)

I'd like to encourage Matthias in what looks like a very promising library!

--Michael Fawcett

Michael Fawcett wrote:
On 1/22/07, Noah Roberts <roberts.noah@gmail.com> wrote:
You have a great library, something that might be a great backbone for a runtime units system and provide an extra level of safety, but I think the primary use of a unit library for most people is going to be runtime units; you either need to support this directly or better document the methods to use your library for such purposes. 99.99% of the time users are going to stick with a single system, usually the SI system, as their static set of units and will need to do a lot of conversions into and out of these base units. I understand your desire to keep your library simple and focused on a single task and goal, but I just don't see much need out there for a library that does static unit conversions but has no concept of runtime units.
I think the majority of people who participated in Andy Little's review and those who have responded to this thread disagree. We (I include myself, and hopefully have interpreted others' responses correctly) envision this being mostly a compile-time problem, with a much smaller use-case for run-time support.
The purpose being to catch at compile-time errors that would have before gone unnoticed, such as:
// returns feet!
double get_altitude_from_sensor();

// assumes meters!
bool should_deploy_chute(double altitude);

// Somewhere in another module
while (true)
{
    double altitude = get_altitude_from_sensor();
    if (should_deploy_chute(altitude)) break;
}
deploy_chute();
Sound familiar? ;)
Yes, it does sound familiar, but it is just a common way of writing bad code. In any given project your base unit should be the same for any given function. Anything that accepts a length should accept either feet or meters, and a mix of both in your project is a Really Bad Thing. Really, the only thing this static unit library provides is a way to enforce a policy of base units. This is a good thing but is rather incomplete without a way to interact with the user, who will want his/her display and entry in different unit formats.

A value class that did automatic conversions to/from the base unit is, in my opinion, more useful. Such a class, which always converts to an SI unit appropriate for a given dimension (and static dimensional analysis is definitely useful), provides the same safety as the static unit conversion/enforcement provided here. Another value type that has no unit but has conversion from a value with a unit provides an interface for functions to use in calculations, so that conversion is not done in several places throughout the calculations. If I were to use the currently proposed library, that is where its value type would reside.

Now, if the library is extensible to handle this (in my experience, broader) need, then all that needs to be done is to more thoroughly document how one would go about it. Like I said before, it would be nice to use this library as a backbone for a runtime unit system, something I happen to be working on, but it seems like the way to do this isn't exactly straightforward. In my opinion, a unit system library in the boost family should answer the runtime question, at least as far as providing a documented and relatively easy method for doing so.

On 1/22/07, Noah Roberts <roberts.noah@gmail.com> wrote:
Yes, it does sound familiar but it is just a common way of writing bad code. In any given project your base unit should be the same for any given function. Anything that accepts a length should accept either feet or meters, and a mix of both in your project is a Really Bad Thing.
And how can you enforce this? Take for instance a database that holds radar characteristics. The units that pilots use are always feet (for altitude) and nautical miles (for distances). Ground elevation data is stored as DTED files which are in meters, where the distance between elevation postings is in latitude/longitude. Now convert an AGL (Above Ground Level) altitude to an MSL (Mean Sea Level) altitude. Wait...that requires adding feet (the altitude of the aircraft is always given in feet) to the ground elevation at that particular latitude/longitude (but wait, pilots measure distance in nautical miles, plus the ground elevation is given in meters!). These are all just errors waiting to happen that a good units library will catch at compile-time.
Really the only thing this static unit library provides is a way to enforce a policy of base units. This is a good thing but is rather incomplete without a way to interact with the user, who will want his/her display and entry in different unit formats.
I don't see how this library prevents a program from doing just that, albeit from a limited set of different units, but surely it had to be limited anyways to give them a set of options to choose from? --Michael Fawcett

Michael Fawcett wrote:
On 1/22/07, Noah Roberts <roberts.noah@gmail.com> wrote:
Yes, it does sound familiar but it is just a common way of writing bad code. In any given project your base unit should be the same for any given function. Anything that accepts a length should accept either feet or meters, and a mix of both in your project is a Really Bad Thing.
And how can you enforce this? Take for instance a database that holds radar characteristics. The units that pilots use are always feet (for altitude) and nautical miles (for distances). Ground elevation data is stored as DTED files which are in meters, where the distance between elevation postings is in latitude/longitude. Now convert an AGL (Above Ground Level) altitude to an MSL (Mean Sea Level) altitude. Wait...that requires adding feet (the altitude of the aircraft is always given in feet) to the ground elevation at that particular latitude/longitude (but wait, pilots measure distance in nautical miles, plus the ground elevation is given in meters!). These are all just errors waiting to happen that a good units library will catch at compile-time.
Well, I guess you're now more interested in getting defensive than in talking about this, so this is likely to become non-productive.

What you are talking about above is exactly what I mean. Those are *interface* issues. Doing all the conversions through casts, which will be converting between these units in-place, is less efficient. A good units library would convert all of the above to a base, meters usually, instantly, and provide only that value to the underlying equations while reporting values using the units logical for the user for that use. There is no reason why the different calculations you speak of should actually do their calculations in different "systems".

To make this clearer, let's look at your above equation: MSLm = AGLft + GLm. Let's also assume you're doing this about 1,000,000 times in some internal calculation that will result in a value reported in miles. Your answer seems to be something that resolves to:

MSLAm = (AGLft * ft/meter) + GLm

Sure, the ft/meter conversion factor is statically calculated/provided and the developer doesn't see the equation like that, but you still have an unnecessary conversion worked in. This isn't zero runtime overhead. I do like the fact that you have a cast required to advertise this fact. The alternative approach would be more like:

MSLAm = AGLm + GLm

The problem here is that AGL makes less sense in meters for the user. Let's further say that AGL might be measured in any given unit for some user (this is _very_ common in the field I'm working in). Now we need some way of getting an AGL value from the user in an arbitrary unit while making sure calculations don't keep converting it and adding overhead we don't want or need. The best way to do this is to have the AGL value convert when the user enters it, or when the calculation performed to find it is assigned to the value... and only at those times.
Then your whole set of underlying functions would all use a particular "system" that these runtime unit values convert to. Your unit setup could be used in the underlying computations to enforce a given set of unit->dimension pairs, but there is still most definitely a need to provide an easy-to-use runtime equivalent. Without that, the unit library is not as useful as it could be and doesn't answer what seems to me the more general and practical use. I also think it should be easy for the developer to write the equations in one set of units but have the code calculate in the base units without adding conversion overhead. Hence my question about qty<length> qt = 9.3 * psi, and I have to say I really like the syntax that you've used.

I'm not saying this library is no good or not useful. You've done good work, and I think you have a better solution than Little's attempt. What I'm saying is that in order to answer the general case it needs to answer runtime units, either directly or by thoroughly documenting how it could be done.

[snip]
What you are talking about above is exactly what I mean. Those are *interface* issues. Doing all the conversions through casts, which will be converting between these units in-place, is less efficient. A good units library would convert all of the above to a base, meters usually, instantly and provide only that value to the underlying equations while reporting values using the units logical for the user for that use. There is no reason why the different calculations you speak of should actually do their calculations in different "systems".
Noah - I really, really recommend you look at the review of PQS. There are several posts that deal with the question of why you can't do all your calculations in SI units and why it's bad to force library users to use SI (even though it is nominally an international standard).
user (This is _very_ common in the field I'm working in). Now we need some way of getting an AGL value from the user in an arbitrary unit but making sure calculations don't keep converting it and adding overhead we don't want or need. The best way to do this is to have the AGL value convert when the user enters it or when the calculation performed to find it is assigned to the value...and only at those times. Then your whole underlying functions would all use a particular "system" that these runtime united values convert to.
If all you're concerned with is being able to convert some input into a quantity in a pre-specified (at compile-time) system, and back again at the end, that's a relatively easy problem (other than deciding how to parse input).
seems to me as the more general and practical use. I also think it should be easy for the developer to write the equations in one set of units but have the code calculate in the base units without adding
See my previous post on electrostatic force for why this doesn't work - you can't always guarantee that equations remain the same in different unit systems...in fact, many of the "natural" unit systems used in physics are chosen specifically to simplify the form of common equations.
conversion overhead. Hence my question about qty<length> qt = 9.3 * psi
I'm hoping you mean:

quantity<SI::pressure> qt = 9.3*psi;

? To solve the input/output problem, you could just write a runtime function that converts units in a variety of input systems into the system in which you want to do your computations (again, assuming that the internal system is fixed at compile time):

template<class System,class Y>
quantity<unit<System,pressure_type>,Y> pressure_converter(const std::string& q);

quantity<SI::pressure> qt = pressure_converter<SI,double>("9.3 psi");

or you could just define the conversion factors in the desired unit system directly:

static const quantity<SI::mass> pound(kilogram/2.2);
static const quantity<SI::length> inch(meter*2.54/100.0);
static const quantity<SI::pressure> psi = 1.0*pound/(inch*inch);

quantity<SI::pressure> qt = 9.3*psi;
I'm not saying this library is no good or not useful. You've done good work and I think you have a better solution than Little's attempt. What I'm saying is that in order to answer the general case it needs to answer runtime units either directly or by thoroughly documenting how it could be done.
It sounds like you have me and Michael confused... anyway, no hard feelings. I'm mainly concerned with not letting feature-creep derail the potential consideration for inclusion into Boost of a library that may only solve a subset of the complete dimensional analysis/units problem. I absolutely agree that there is scope for runtime units, just not from me right now.

Matthias

----------------------------------------------------------------
Matthias Schabel, Ph.D.
Assistant Professor, Department of Radiology
Utah Center for Advanced Imaging Research
729 Arapeen Drive
Salt Lake City, UT 84108
801-587-9413 (work)
801-585-3592 (fax)
801-706-5760 (cell)
801-484-0811 (home)
matthias dot schabel at hsc dot utah dot edu
----------------------------------------------------------------

On 1/22/07, Noah Roberts <roberts.noah@gmail.com> wrote:
Michael Fawcett wrote:
On 1/22/07, Noah Roberts <roberts.noah@gmail.com> wrote:
Yes, it does sound familiar but it is just a common way of writing bad code. In any given project your base unit should be the same for any given function. Anything that accepts a length should accept either feet or meters, and a mix of both in your project is a Really Bad Thing.
And how can you enforce this? Take for instance a database that holds radar characteristics. The units that pilots use are always feet (for altitude) and nautical miles (for distances). Ground elevation data is stored as DTED files which are in meters, where the distance between elevation postings is in latitude/longitude. Now convert an AGL (Above Ground Level) altitude to an MSL (Mean Sea Level) altitude. Wait...that requires adding feet (the altitude of the aircraft is always given in feet) to the ground elevation at that particular latitude/longitude (but wait, pilots measure distance in nautical miles, plus the ground elevation is given in meters!). These are all just errors waiting to happen that a good units library will catch at compile-time.
Well, I guess you're now more interested in getting defensive than in talking about this, so this is likely to become non-productive.
Not at all. Just trying to clarify use cases and possible implementations.
What you are talking about above is exactly what I mean. Those are *interface* issues. Doing all the conversions through casts, which will be converting between these units in-place, is less efficient. A good units library would convert all of the above to a base, meters usually, instantly and provide only that value to the underlying equations while reporting values using the units logical for the user for that use. There is no reason why the different calculations you speak of should actually do their calculations in different "systems".
I agree, but getting into and out of these different "systems" is often where the problem lies.
The problem here is that AGL makes less sense in meters for the user. Let's further say that AGL might be measured in any given unit for some user (This is _very_ common in the field I'm working in).
In my field as well, although in practice, pilots and analysts rarely look at AGL in any unit other than feet, but the option is at least there for them if they want to.
Now we need some way of getting an AGL value from the user in an arbitrary unit but making sure calculations don't keep converting it and adding overhead we don't want or need. The best way to do this is to have the AGL value convert when the user enters it or when the calculation performed to find it is assigned to the value...and only at those times. Then your whole underlying functions would all use a particular "system" that these runtime united values convert to.
Oh, I absolutely agree, 100%. Doing the conversions over and over is costly. The costly calculations should all be done in the same "system" for maximum efficiency. But again, how do you *ensure* that programmers remember to do the conversion going into and out of that "system", and how do you ensure that it is safe and correct (i.e. a compile-time error rather than a (possibly unnoticed) run-time one... *crash*, and not necessarily a software crash: maybe a plane crash, or a polar lander crash ;))?
Your unit setup could be used in the underlying computations to enforce a given set of unit->dimension pairs but there is still most definitely a need to provide an easy to use runtime equivalent. Without such the unit library is not as useful as it could be and doesn't answer what seems to me as the more general and practical use. I also think it should be easy for the developer to write the equations in one set of units but have the code calculate in the base units without adding conversion overhead. Hence my question about qty<length> qt = 9.3 * psi and I have to say I really like that syntax that you've used.
I think you are wrongly attributing code to me. Matthias has done all of the work, I was merely commenting on it.
I'm not saying this library is no good or not useful. You've done good work and I think you have a better solution than Little's attempt. What I'm saying is that in order to answer the general case it needs to answer runtime units either directly or by thoroughly documenting how it could be done.
I'm having problems seeing why run-time units are necessary for the use case you came up with (or I'm confused as to your requirements). Let's say the system we are designing does something trivial - converts an elevation from AGL to MSL. The UI presents the user with an edit box that allows him to type in a number, and a combo box that allows him to choose the unit (feet, meters, etc). He hits "Calculate" and another read-only edit box is populated with a number, and then he can choose what unit that number is displayed in with yet another combo box. All of that can be accomplished using the library as is. All checking is done at compile-time to ensure that the units and calculations are done correctly. Are there more complicated uses that you were envisioning? Not defensive, just curious, --Michael Fawcett

I'd like to encourage Matthias in what looks like a very promising library!
--Michael Fawcett
Michael,

Thanks for the encouragement. Any concrete feedback?

Matthias

On 1/22/07, Matthias Schabel <boost@schabel-family.org> wrote:
I'd like to encourage Matthias in what looks like a very promising library!
--Michael Fawcett
Michael,
Thanks for the encouragement. Any concrete feedback?
Not yet. I'd like to try to convert a small portion of a project of ours to use your library before commenting fully. --Michael Fawcett

Hi Noah,
I think it is a definite must that any unit library needs to be at least extensible to support the two problems I described. I personally don't see much use in being able to convert, statically, between two disparate unit systems. I don't know of any project that would do this. Perhaps
Boost Units efforts have a long and storied history; it is quite illuminating to go through the archives of this mailing list and read, in particular, the discussion and reviews of Andy Little's library (searching on [PQS] and [Quan] in the subject line will get most of them for you...). Furthermore, this discussion goes much further back, starting with Walter Brown's SI Units library and the Barton and Nackman text.

While I understand the desire and, perhaps, even need for a runtime unit system, if you are in the majority in wanting this, it has been a relatively silent majority. While you may not be able to envision applications for a zero-overhead compile-time unit library, there are many physicists, engineers, and others out there who need precisely that. For many of these potential users, catching a single dimensional error at compile time can save many painful hours of debugging. In many, if not most, applications in scientific and high-performance computing the amount of acceptable overhead incurred is exactly zero, a goal that is impossible to achieve by any implementation of runtime unit checking. Since this is the application domain in which I am knowledgeable, that's been my focus.
have to use this system. The primary goal of a boost units library should be to support safe unit conversions of user defined units in a way that ensures that conversions are safe and inexpensive (as in not done more than once per assignment) and the primary use of this will be during runtime.
This could be the primary goal of a boost runtime units library. It is not the objective of the library I've proposed here. I understand that there is a potential user community for a runtime units system, and would fully support (your?) efforts to implement such a thing and have it incorporated into Boost as a complement to the compile-time units library. But, as I'm sure Andy Little would tell you, there are a huge number of complex decisions to be made in such an undertaking.
I was hoping that your library could be used as a base for such a runtime solution but you make it sound like more trouble than worth. I will probably continue to look at ways to work this in, especially if you get accepted, but since I already have a very simple answer to the problem it may be placed on the back burner. It would be nice to be
I would be more than happy to help you understand the current library implementation, and suggest ways of reimplementing the dimensional analysis functionality at runtime. In principle, this should be relatively straightforward and, as I mentioned in a previous post, it would be possible to simply specialize the unit and quantity classes for runtime support. Of course, this still leaves a significant amount of work in developing an efficient runtime system, implementing all the operators correctly, settling on a syntax for unit construction, IO, internationalization, etc...
positive that units share the same base system and provide a general solution when they might not but I really think, practically speaking, that the likelihood of someone needing to have two different static base systems is next to nil.
If you want to write a generic library that implements basic formulas for electromagnetism, for which the equations themselves differ depending on whether you choose to use SI or one of the several CGS variant electromagnetic units (esu/emu/gaussian), it is impossible to get compile-time overloading with runtime units, so you would have to check the units at each function invocation. While this is fine for toy programs or interactive unit conversion calculators, in a simulation code where this function might be invoked millions of times the overhead quickly becomes unacceptable. Furthermore, this is quite inelegant: if this function calls another one using units, the runtime checking will be replicated at each layer, adding further inefficiency. Similarly, any function that takes runtime units as arguments will need to check them for validity before doing anything. This can rapidly become a significant fraction of the total execution time for something simple like electrostatic force. Compare:

vector< quantity<runtime> >
electrostatic_force(const quantity<runtime>& Q1,
                    const quantity<runtime>& Q2,
                    const vector< quantity<runtime> >& r)
{
    assert(Q1 == SI_runtime_charge);
    assert(Q2 == SI_runtime_charge);
    for (int i=0;i<3;++i) assert(r[i] == SI_runtime_length);

    using namespace boost::units::SI::constants;

    const vector< quantity<runtime> > ret =
        Q1*Q2*unit_vector(r)/(4*pi*epsilon_0*dot(r,r));

    for (int i=0;i<3;++i) assert(ret[i] == SI_runtime_force);

    return ret;
}

with

vector< quantity<SI::force> >
electrostatic_force(const quantity<SI::charge>& Q1,
                    const quantity<SI::charge>& Q2,
                    const vector< quantity<SI::length> >& r)
{
    using namespace boost::units::SI::constants;

    return Q1*Q2*unit_vector(r)/(4*pi*epsilon_0*dot(r,r));
}

Which one of these is more self-documenting? More runtime efficient? Now imagine you want to be able to do this in CGS electrostatic units.
Here we go (note that the equation is different):

vector< quantity<runtime> >
electrostatic_force(const quantity<runtime>& Q1,
                    const quantity<runtime>& Q2,
                    const vector< quantity<runtime> >& r)
{
    if (unit_system(Q1) == SI && unit_system(Q2) == SI &&
        unit_system(r[0]) == SI && unit_system(r[1]) == SI &&
        unit_system(r[2]) == SI)
    {
        ... as above ...
    }

    if (unit_system(Q1) == CGS && unit_system(Q2) == CGS &&
        unit_system(r[0]) == CGS && unit_system(r[1]) == CGS &&
        unit_system(r[2]) == CGS)
    {
        assert(Q1 == CGS_runtime_charge);
        assert(Q2 == CGS_runtime_charge);
        for (int i=0;i<3;++i) assert(r[i] == CGS_runtime_length);

        const vector< quantity<runtime> > ret =
            Q1*Q2*unit_vector(r)/dot(r,r);

        for (int i=0;i<3;++i) assert(ret[i] == CGS_runtime_force);

        return ret;
    }
}

That's ugly and slow... For compile-time units:

vector< quantity<CGS::force> >
electrostatic_force(const quantity<CGS::charge>& Q1,
                    const quantity<CGS::charge>& Q2,
                    const vector< quantity<CGS::length> >& r)
{
    return Q1*Q2*unit_vector(r)/dot(r,r);
}

No mess. No fuss. No overhead.
You have a great library, something that might be a great backbone for a runtime units system and provide an extra level of safety, but I think the primary use of a unit library for most people is going to be runtime units; you either need to support this directly or better document how to use your library for such purposes. 99.99% of the time users are going to stick with a single system, usually the SI system, as their base system.
I guess I'll take this as a mixed compliment: a great library for 0.01% of users... sigh... Matthias ---------------------------------------------------------------- Matthias Schabel, Ph.D. Assistant Professor, Department of Radiology Utah Center for Advanced Imaging Research 729 Arapeen Drive Salt Lake City, UT 84108 801-587-9413 (work) 801-585-3592 (fax) 801-706-5760 (cell) 801-484-0811 (home) matthias dot schabel at hsc dot utah dot edu ----------------------------------------------------------------

I would be more than happy to help you understand the current library implementation, and suggest ways of reimplementing the dimensional analysis functionality at runtime.
No, I don't want this. As I indicated previously, I think compile-time dimensional analysis checking is very important. I also don't want the runtime overhead of unit-based checking... I have that now and hate it.

What I want to see is two features I believe are completely complementary. First, to be able to write equations in any set of units but have zero, or next to zero, runtime overhead due to conversions. Second, to have a value that will display and record in arbitrary units and that interacts seamlessly with the values being used in equations.

The first I see as being *partially* answered by your solution. It would be fully realized if I could do the following:

static quantity<meters> const a_constant = 9.31 * ft;

In this case the conversion is only done once, and this is an acceptable overhead for my, and I believe most, cases. The equation can be written in the language of the domain yet use a standard set of unit/dimension pairs and be basically invisible to the developer and maintainer.

The second part I don't see being answered by your library at all. I do have a solution in mind that should be perfectly complementary to your library iff your library can deal with arbitrary units. Basically what you would end up with is something like:

length_table::quantity qty(length_table::ft) = f();

where f() is defined as:

units::quantity<length> f();

and the above constant:

static units::quantity<length> const a_const = 9.31 * length_table::ft;

I think such needs are going to be rather common, but I don't see documentation on how to accomplish it.

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Matthias Schabel Sent: 13 January 2007 05:21 To: boost@lists.boost.org Subject: [boost] mcs::units informal review request
I've been making incremental improvements (I like to think, anyway) in the mcs::units dimensional analysis and unit library. .. I'm hoping to get some concrete feedback from anyone who has downloaded and used the library (especially the v0.5.x branch).
This seems to be 'maturing' very nicely and it feels right (though my understanding of it is rudimentary). I'm impressed with the Boost Library interoperability in the serialization & interval demos, and the documentation is persuasive that the philosophy is good. It is nice to see typical output from examples.
To get things started, here are a few questions I have :
1) At the moment, the library enforces strict construction; that is, quantities must be fully defined at construction :
quantity<double,SI::length> q1(1.5*SI::meters);
is acceptable, but
quantity<double,SI::length> q2(1.5);
is not.

Explicit is fine with me, but can you ease the pain? In practice, many would usually be working only with SI units, so this would simplify to

quantity<double, length> q1(1.5 * meters);

Or could we have SI as the template class default, or some global choice? A macro? And the FPType default to double? (Since nearly all work will be done using double, it would be nice to avoid typing/seeing double so often in the code.) It would be much nicer to write

quantity<length> L = 2.0 * meters; // quantity of length

to get double, or

quantity<float, length> L = 2.0 * meters; // quantity of length

to get float, but the argument order makes this impossible. And/or some typedefs perhaps? Like

typedef quantity<double,SI::length> length_type;
2) I'm not currently satisfied with the I/O aspects, particularly the lack of a mechanism for internationalization. I'd love to hear any design/implementation suggestions, especially from anyone familiar with facets/locales, on how to most gracefully accomplish this...
No advice - except that I'm sure you don't need/want to be told it is probably a nightmare. ;-) And it would be useful to consider infinity and NaNs too?
3 Is this of sufficient interest to invest the effort? IMO - No.
4) Any comments on implementation from those brave enough to look at the guts of the library would be especially welcome.
I note that the items in the test folder are really examples (and might live in an examples folder under /libs), and that there are no tests using Boost.Test. It is never too early to start devising unit tests, even if it seems a PITA to start. But I hope to study it more closely and even try it out ;-) Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

<snip>
and the FPType default to double? (Since nearly all work will be done using double, it would be nice to avoid typing/seeing double so often in the code. It would be much nicer to write
quantity<length> L = 2.0 * meters; // quantity of length
to get double?
quantity<float, length> L = 2.0 * meters; // quantity of length
to get float, but order makes this impossible.
Not impossible, just add another layer of indirection:

template<typename T1,typename T2>
class quantity :
    public quantity_impl<
        typename mpl::if_< is_unit<T1>,T2,T1 >::type,
        typename mpl::if_< is_unit<T1>,T1,T2 >::type
    >
{ };
Peder

Not impossible, just add another layer of indirection:
template<typename T1,typename T2>
class quantity :
    public quantity_impl<
        typename mpl::if_< is_unit<T1>,T2,T1 >::type,
        typename mpl::if_< is_unit<T1>,T1,T2 >::type
    >
{ };
This is intriguing, but I'm not sure how one would enable a default template argument on just the value_type... Any ideas? Matthias

2007/1/16, Matthias Schabel <boost@schabel-family.org>:
Not impossible, just add another layer of indirection:
template<typename T1,typename T2>
class quantity :
    public quantity_impl<
        typename mpl::if_< is_unit<T1>,T2,T1 >::type,
        typename mpl::if_< is_unit<T1>,T1,T2 >::type
    >
{ };
This is intriguing, but I'm not sure how one would enable a default template argument on just the value_type... Any ideas?
template<typename T1,typename T2=double>
struct quantity : public ...
{ };

Using this quantity, you would then get a compilation error if you tried using quantity<double>, because then both T1 and T2 would evaluate to double... Alternatively:

template<typename T1,typename T2=double>
struct quantity
{
    // Implementation of quantity here; T1 is the value type, T2 is the unit type.
};

template<typename T1>
struct quantity<T1,double> : public quantity<double,T1>
{ };

Peder
Matthias

Paul,
This seems to be 'maturing' very nicely and it feels right (though my understanding of it is rudimentary). I'm impressed with the Boost Library interoperability with serialization & interval demos, and the documentation is persuasive that the philosopy is good.
It is nice to see typical output from examples.
Thanks for the positive feedback.
Explicit is fine with me, but can you ease the pain?
In practice, many would usually be working only with SI units, so this would simplify to
quantity<double, length> q1(1.5 * meters);
Right now you get this syntax if you (gasp) do:

using namespace SI;
Or could we have SI as the template class default, or some global choice? macro?
This is somewhat problematic with the current implementation because the unit system is encapsulated as a template argument of the unit, which is itself a template argument of quantity. I'll have to think about it a bit more...
and the FPType default to double? (Since nearly all work will be done using double, it would be nice to avoid typing/seeing double so often in the code. It would be much nicer to write
quantity<length> L = 2.0 * meters; // quantity of length
to get double?
quantity<float, length> L = 2.0 * meters; // quantity of length
to get float, but order makes this impossible.
As I mention in my previous post, I could easily add a quantity in the SI namespace that allows double precision by default and inverts the order of the template arguments. I am also not strongly opposed to flipping the order in quantity itself, so the value_type is the second argument: template<class Unit,class Y = double> class quantity;
And/or some typedefs perhaps ? Like
typedef quantity<double,SI::length> length_type;
I already have typedefs for units themselves:

typedef unit<SI::system,length_type> length;

but would be happy to add quantity typedefs, too. You're probably right that a simple double-precision SI unit set is going to be by far the predominant use case in practical code... Maybe

typedef quantity<double,SI::length> SI_length_q;

(analogous to time_t nomenclature)? Or I could rename the unit typedefs to length_unit, etc., to free up the simpler typedefs:

typedef unit<SI::system,length_type> length_unit;
typedef quantity<double,SI::length_unit> length;

I'd be happy to hear additional opinions/ideas about this point...
No advice - except that I'm sure you don't need/want to be told it is probably a nightmare. ;-)
Harrumph... I may have to do the unthinkable and buy Langer & Kraft's book...
And it would be useful to consider infinity and NaNs too?
IO of value_type will be delegated to the value_type itself, so this is decoupled from the library.
I note that the items in the test folder are really examples (and might live in an examples folder under /libs), and that there are no tests using Boost.Test. It is never too early to start devising unit tests, even if it seems a PITA to start.
Good suggestion - I'll make the move for the next release. Anyone know of a good tutorial on using Boost.Test?
But I hope to study it more closely and even try it out ;-)
Please do, when you have a chance. I'd like to optimize the chances for success before submission, especially given the general level of contentiousness that this issue seems to have generated in the past... With a little luck I'll be able to have default behavior that makes most people reasonably happy, then facilitate enough flexibility to accommodate those with other needs relatively easily. Matthias

Matthias Schabel said: (by the date of Mon, 15 Jan 2007 21:42:39 -0700)
As I mention in my previous post, I could easily add a quantity in the SI namespace that allows double precision by default and inverts the order of the template arguments. I am also not strongly opposed to flipping the order in quantity itself, so the value_type is the second argument:
template<class Unit,class Y = double> class quantity;
You can look at Boost.Parameter: http://www.boost.org/libs/parameter/doc/html/index.html It allows both argument orders in a single template; I mean:

template<class Unit,class Y = double> class quantity;
template<class Y, class Unit> class quantity;

become the same template. But in fact, maybe following Ockham's razor you should just follow your suggestion and use

template<class Unit,class Y = double> class quantity;

because 'class Unit' *cannot* have a reasonable default, while class Y = double *can*.

I hope that you have read the reviews of Andy Little's quantity library, and why it was rejected. Some of the reasons were considered show-stoppers. I remember a few of them, but you had better read the reviews yourself. Here's what I remember:

1) Support for powers of N (where N is different from 10) in multipliers: usually people operate using kilo (10^3) and mega (10^6), but sometimes a unit is expressed in a different system, like a power of 2 (hard drive capacity, file size: http://en.wikipedia.org/wiki/Kibi ).

2) Try to decouple A) dimension analysis (length, temperature, velocity) B) from dimension units (meter, inch, kelvin, fahrenheit, kph) C) from dimension multipliers (kilo, mega, kibi). Because:

A) People might want to use ONLY dimension analysis in their program, without any units at all. During compilation the compiler will check ONLY whether the dimensions of the variables match. For example, one variable is a length and another is a temperature, and the compiler will catch an error when someone tries to add them. BUT the program will NOT perform any conversions (for example from inches to meters), because the dimension units are not used.

B) If someone decides to use them, he agrees that some additional work is done by the program when converting (implicitly or explicitly) from inches to meters.

C) Only those multipliers that are declared, either by the user or in an included file that contains some predeclared multipliers, are used by the program.
Allow the user to choose between explicit and implicit conversion, when he wants. You don't know if someone wants to store 1 km or 1000 m in his variable.

3) Do not enforce SI on anyone: make it easy to declare one's own unit system. This point is not about imperial versus metric; it is about physics. For instance, parsecs and light-years do not belong to SI. But try to store 100 parsecs with 'double' precision using meters (or yottameters, 10^24 (?)). This is not practical; any calculations that operate on parsecs must be done in parsecs, or we lose precision. Of course you can provide a few #include <boost/unit/SI.hpp>, #include <boost/unit/imperial.hpp>, #include <boost/unit/relativity.hpp>, etc. files. But you never know what kind of unit system will be useful for somebody. It should be equally easy to declare that I will use a meter (1 line) and a second (1 line, making 2 lines total) in my program, instead of including the whole SI.hpp (1 line), which I don't need. For instance, in calculations involving general relativity, speed is expressed dimensionless (because it is always a fraction of light-speed).

4) Make input/output as decoupled as possible. Basic cout << of the unit is enough for a beginning. Do NOT make it sophisticated in any way, otherwise you will start writing another library.

5) Leave the door open for adapting your library into other math libraries. For example, let's say that I have a 3D vector of meters:

vector<3,quantity<meter> > a,b;
vector<3,quantity<meter*meter> > c;

c=dot_product(a,b);

What is the return type of the function dot_product? Think about it, and you will see what I mean. Where is that function defined? Well, boost still does not have a small vector library, but you should leave an open door for making one.

Ok, I don't remember any more. Better you have a look at those reviews. I still have them in my inbox, so in fact I can forward them to you. -- Janek Kozicki |

Janek,
become the same template. But in fact, maybe following Ockham's razor you should just follow your suggestion and use
template<class Unit,class Y = double> class quantity;
because 'class Unit' *cannot* have a reasonable default. While class Y=double *can* have a default.
I agree that that's the most sensible choice. Since there's really no gain, and potentially some compile-time cost on top of a library that's already compile-time heavy, I'll just flip the order of the template arguments for the quantity class and default to double precision value_type.
I hope that you have read the reviews of Andy's Little quantity library, and why it was rejected. Some of the reasones were considered show-stoppers. I remember few of them, but better you will read the reviews.
I followed the debate fairly closely, and saved a number of the posts. I have made an effort to address as many issues as seemed feasible without experiencing extreme "feature creep"... I don't think it is reasonable to try to address every conceivable application in unit conversion with this library, so I am targeting a flexible system that can form a foundation for dimensional analysis that can be used to build more specialized user libraries.
1) support for powers of N (where N is different from 10) in multipliers: usually people operate using kilo (10^3), mega (10^6). But sometimes a unit is expressed in different system, like a power of 2 (harddrive capacity, file size: http://en.wikipedia.org/wiki/ Kibi )
My approach is to delegate this behavior to a value_type that handles powers (see scaled_value.hpp and unit_example_6.cpp). I have not defined metric or binary prefixes in that file, but that would be easy. The rationale for this approach is that a kilometer in the MKS system really is 1000 meters, just as a kilobyte is 2^10 bytes... Therefore, it is more a question of how to preserve precision in numerical computations using values that may have large power values (nanometers/femtoseconds etc...), which falls to the value_type.
C) only those multipliers that are declared either by the user, or in an included file that contains some predeclared multipliers are used by the program. Allow the user to chose between explicit and implicit conversion, when he wants. You don't know if someone wants to store 1km or 1000m in his variable.
As I alluded to above, if the user wants 1 km, it can be done like this:

quantity< scaled_value<double,10,3>,SI::length>
    q1(scaled_value<double,10,3>(1.0)*meter);

while 1000 m could be either

quantity<double,SI::length> q2(double(1000)*meter);

or

quantity< scaled_value<double,10,0>,SI::length>
    q3(scaled_value<double,10,0>(1000.0)*meter);

If you don't care about the potential loss of precision, of course you can just define prefix factors as constants (like I do in si_units.hpp):

static const long double kilo(1e3);

quantity<double,length> q2(1*kilo*meter);

Part of the problem is that there are many equally reasonable needs in specific domains; it isn't possible to accommodate all of them in a single library. That's why I've tried to make this library as flexible as possible, to facilitate add-on libraries dealing with special requirements.
3) Do not enforce SI on anyone:
I absolutely agree. This library is completely system agnostic, with SI (and CGS) systems provided for convenience.
Make it easy to declare one's own unit system. This point it is not about imperial and metric. It is about physics. For instance parsecs and light-years do not belong to SI. But try to store 100 parsecs with 'double' precision using meters. (or yottameters 10^24 (?)). This is not practical, any calculations that operate on parsecs must be done in parsecs, or we lose precision.
This forms the core of the example documentation, in which a mini-SI system is implemented. The code is in test_system.hpp. If you look at the documentation, you can see how to do completely general dimensional analysis on any units you choose to define.
Of course you can provide a few #include <boost/unit/SI.hpp>, #include <boost/unit/imperial.hpp>, #include <boost/unit/relativity.hpp>, etc. files. But you never know what kind of unit system will be useful for somebody.
This was a major design consideration for the library - the whole header file for test_system.hpp is only about 50 lines of code, including IO, various convenience typedefs, etc... - basically everything one would need to define for a unit system with length, time, and mass only... Physics-based systems such as various natural units should be easy to implement. Systems such as imperial units are not intrinsically harder, but are left to the interested individual because they do not actually define any base units - is the fundamental unit of length in imperial units the inch? the foot? the yard? the mile? The same problem arises for mass, area, etc...
It should be equally easy to declare that I will use a meter (1 line) and a second (1 line, makes 2 lines total) in my program, instead of including whole SI.hpp (1 line), which I don't need.
This would just involve increasing the granularity of the SI header, splitting it into mini-headers for each fundamental unit: <boost/units/systems/SI.hpp> would become

#include <boost/units/systems/SI_length.hpp>
#include <boost/units/systems/SI_mass.hpp>
#include <boost/units/systems/SI_time.hpp>

etc... Certainly easy enough to implement, since there is no coupling between fundamental units. This fact, which I should probably make clearer, also makes it easy for the user to add new fundamental units to existing unit systems:

#include <boost/units/systems/SI.hpp>

namespace SI {

struct foo_tag : public ordinal<10> { };

typedef dimension<
    boost::mpl::list< dim<foo_tag,static_rational<1> > > >::type foo_type;

typedef unit<system,foo_type> foo;

static const foo foo_unit;

}

quantity<double,SI::foo> q1(1.5*foo_unit);

q1*quantity<double,SI::length>(1.0*meter) -> 1.5 foo meter

Adding preprocessor shortcuts would make this even simpler, though I suspect that the number of users who will implement their own unit systems will be a small minority...
4) Make input/output as decoupled as possible. Basic cout << of the unit is enough for beginning. Do NOT make it sophisticated in any way, otherwise you will start writing another library.
This is what is already implemented. I demonstrate interoperability with Boost.Serialization in one of the examples. I'm happy to stop there for now...
5) leave door open for adapting your library into other math libraries. For example, let's say that I have a 3D vector of meters:
vector<3,quantity<meter> > a,b; vector<3,quantity<meter*meter> > c;
c=dot_product(a,b);
What is the return type of function dot_product ? Think about it, and you will see what I mean. Where that function is defined? Well, boost still has not a small vector library, but you should leave an open door for making one.
I've dealt with this quite carefully in the existing library; take a look at:

unit_example_7.cpp for interoperability with my own array class (output only, since the array class code is not included)
unit_example_8.cpp for interoperability with Boost.Quaternion
unit_example_9.cpp for interoperability with std::complex and a toy reimplementation of complex that handles operators correctly

The latter example shows some of the problems that can arise when heterogeneous operators are not implemented by the container class. For example, in the case you provide, to be completely general, the return type of the dot_product function should be

template<class A,class B>
add_typeof_helper<
    multiply_typeof_helper<A,B>::type,
    multiply_typeof_helper<A,B>::type >::type
dot_product(const vector<N,A>& a,const vector<N,B>& b)

In your particular case, quantity<meter*meter> is correct, but not in general... This algebra is (to the best of my knowledge) correctly implemented in the library as it stands. For new value_types with unconventional algebras, the following helper classes need to be specialized:

add_typeof_helper
subtract_typeof_helper
multiply_typeof_helper
divide_typeof_helper
power_typeof_helper
root_typeof_helper

so the compiler can determine the result type of an algebraic expression involving that type.
Ok, I don't remember any more. Better you will have a look at that reviews. I still have them in my inbox, so in fact I can forward them to you.
I'd appreciate that; thanks for the input. If you get a chance to look at the library itself, I'd appreciate your opinion... Matthias

Matthias Schabel wrote:
The rationale for this approach is that a kilometer in the MKS system really is 1000 meters, just as a kilobyte is 2^10 bytes...
Poking my head into the discussion with a completely offtopic comment: This is "incorrect" as a matter of standards and clear communication. kilo = 10^3 in the MKS (SI) system, never ever 1024. The IEEE recommends a list of "binary prefixes" (IEC 60027-2/IEEE 1541) for computer usage. For instance, "Kibi" for 1024, with prefix Ki. Similarly for Mebi, Gibi, etc. I heartily recommend this practice for reduction of ambiguity, and I note that it is (very) slowly catching on around the Net. It's also the subject of a legal standardization process in the EU (prEN 60027-2:2006) http://en.wikipedia.org/wiki/IEEE_1541 Of course, this has no bearing on the library under discussion :-) Back to lurking ... -- ------------------------------------------------------------------------------- Kevin Lynch voice: (617) 353-6025 Physics Department Fax: (617) 353-9393 Boston University office: PRB-361 590 Commonwealth Ave. e-mail: krlynch@bu.edu Boston, MA 02215 USA http://budoe.bu.edu/~krlynch -------------------------------------------------------------------------------

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Kevin Lynch Sent: 17 January 2007 17:18 To: boost@lists.boost.org Subject: Re: [boost] mcs::units informal review request
Matthias Schabel wrote:
The rationale for this approach is that a kilometer in the MKS system really is 1000 meters, just as a kilobyte is 2^10 bytes...
Poking my head into the discussion with a completely offtopic comment:
This is "incorrect" as a matter of standards and clear communication. kilo = 10^3 in the MKS (SI) system, never ever 1024. The IEEE recommends a list of "binary prefixes" (IEC 60027-2/IEEE 1541) for computer usage. For instance, "Kibi" for 1024, with prefix Ki. Similarly for Mebi, Gibi, etc. I heartily recommend this practice for reduction of ambiguity, and I note that it is (very) slowly catching on around the Net. It's also the subject of a legal standardization process in the EU (prEN 60027-2:2006)
http://en.wikipedia.org/wiki/IEEE_1541
Of course, this has no bearing on the library under discussion ...
On the contrary, doesn't the proposed library promise to make it easy to include this as an additional system of units and abbreviations? The ease of use might encourage use of this sensible standard? Paul PS Not that I am suggesting Matthias should do it now ;-) --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

Hello Matthias, Tuesday, January 16, 2007, 7:42:39 AM, you wrote:
Paul,
[snip]
Explicit is fine with me, but can you ease the pain?
In practice, many would usually be working only with SI units, so this would simplify to
quantity<double, length> q1(1.5 * meters);
Right now you get this syntax if you (gasp) do using namespace SI;
Or could we have SI as the template class default, or some global choice? macro?
This is somewhat problematic with the current implementation because the unit system is encapsulated as a template argument of the unit, which is itself a template argument of quantity. I'll have to think about it a bit more...
Maybe a simple "using namespace SI" would be sufficient? IMO, macros should be a kind of last resort and may eventually bring more problems than convenience. [snip]
And/or some typedefs perhaps ? Like
typedef quantity<double,SI::length> length_type;
I already have typedefs for units themselves:
typedef unit<SI::system,length_type> length;
but would be happy to add quantity typedefs, too. You're probably right that a simple double-precision SI unit set is going to be by far the predominant use case in practical code... Maybe
typedef quantity<double,SI::length> SI_length_q;
(analogous to time_t nomenclature)? Or I could rename the unit typedefs to length_unit, etc... to free up the simpler typedefs:
typedef unit<SI::system,length_type> length_unit;
typedef quantity<double,SI::length_unit> length;
I'd be happy to hear additional opinions/ideas about this point...
I think these typedefs, if they are necessary (I'm not quite sure they are, though), should have quite distinctive and telling names and should reside in system namespaces, like SI. Even when importing the whole SI namespace via "using", the library names should not conflict with commonly used symbols, such as "length" or "time". Maybe:

namespace SI {

typedef quantity<double, length> length_quantity_t;
typedef unit<system, length_type> length_unit_t;

}

[snip] -- Best regards, Andrey mailto:andysem@mail.ru

hmmm... decided to try and test this guy out, and based on the docs I would think the following code would compile, but it doesn't in VC++ 8.0... g++ gobbles it up ok though.

#include <boost/units/quantity.hpp>
#include <boost/units/io.hpp>
#include <boost/units/systems/si_units.hpp>

using namespace boost::units;
using namespace boost::units::SI;

int const TEST_LIMIT = 5000000;

quantity<volume> f_mcs(quantity<length> x, quantity<length> y, quantity<length> z)
{
    static quantity<length> const C = 3.14 * meters;

    quantity<volume> r = 0 * cubic_meters;

    for (int i = 0; i < TEST_LIMIT; ++i)
        r = r + ((x + y) * z * C);

    return r;
}

int main() { }

output:

1>profile_units.cpp
1>.\profile_units.cpp(22) : error C2752: 'boost::units::add_typeof_helper<X,Y>' : more than one partial specialization matches the template argument list
1>        with
1>        [
1>            X=boost::units::quantity<boost::units::SI::length>,
1>            Y=boost::units::quantity<boost::units::SI::length>
1>        ]
1>        C:\Documents and Settings\nroberts\Desktop\mcs_units_v0.5.5\units\boost/units/quantity.hpp(296): could be 'boost::units::add_typeof_helper<boost::units::quantity<Unit,Y>,boost::units::quantity<Unit2,Y>>'
1>        C:\Documents and Settings\nroberts\Desktop\mcs_units_v0.5.5\units\boost/units/operators.hpp(71): or 'boost::units::add_typeof_helper<X,X>'
1>.\profile_units.cpp(22) : error C2893: Failed to specialize function template 'add_typeof_helper<boost::units::quantity<Unit,Y>,boost::units::quantity<Unit2,Y>>::type boost::units::operator +(const boost::units::quantity<Unit,Y> &,const boost::units::quantity<Unit2,Y> &)'
1>        With the following template arguments:
1>        'boost::units::SI::length'
1>        'boost::units::SI::length'
1>        'double'
1>        'double'
1>.\profile_units.cpp(22) : error C2784: 'add_typeof_helper<boost::units::unit<System,Dim>,boost::units::unit<System,Dim2>>::type boost::units::operator +(boost::units::unit<System,Dim>,boost::units::unit<System,Dim2>)' : could not deduce template argument for 'boost::units::unit<System,Dim>' from 'boost::units::quantity<Unit>'
1>        with
1>        [
1>            Unit=boost::units::SI::length
1>        ]
1>        C:\Documents and Settings\nroberts\Desktop\mcs_units_v0.5.5\units\boost/units/unit.hpp(154) : see declaration of 'boost::units::operator +'
1>.\profile_units.cpp(22) : error C2676: binary '+' : 'boost::units::quantity<Unit>' does not define this operator or a conversion to a type acceptable to the predefined operator
1>        with
1>        [
1>            Unit=boost::units::SI::length
1>        ]

hmmm...decided to try and test this guy out and based on the docs I would think the following code would compile, but it doesn't in VC++ 8.0...g++ gobbles it up ok though.
[snip] Thanks for pointing this out - this is a problem when MCS_HAS_TYPEOF is not defined. I believe that just putting a #define MCS_HAS_TYPEOF at the beginning of your code should solve the problem. I've implemented a more permanent correction for the next release... Matthias

hmmm...decided to try and test this guy out and based on the docs I would think the following code would compile, but it doesn't in VC++ 8.0...g++ gobbles it up ok though.
A better fix is to replace the code between the #ifdef MCS_HAS_TYPEOF ... #endif // MCS_HAS_TYPEOF with the following:

#ifdef MCS_HAS_TYPEOF

template<typename X> struct unary_plus_typeof_helper  { typedef typeof(+X()) type; };
template<typename X> struct unary_minus_typeof_helper { typedef typeof(-X()) type; };

template<typename X,typename Y> struct add_typeof_helper      { typedef typeof(X()+Y()) type; };
template<typename X,typename Y> struct subtract_typeof_helper { typedef typeof(X()-Y()) type; };
template<typename X,typename Y> struct multiply_typeof_helper { typedef typeof(X()*Y()) type; };
template<typename X,typename Y> struct divide_typeof_helper   { typedef typeof(X()/Y()) type; };

#else // MCS_HAS_TYPEOF

template<typename X> struct unary_plus_typeof_helper  { typedef X type; };
template<typename X> struct unary_minus_typeof_helper { typedef X type; };

template<typename X,typename Y>
struct add_typeof_helper
{
    BOOST_STATIC_ASSERT((is_same<X,Y>::value == true));
    typedef X type;
};

template<typename X,typename Y>
struct subtract_typeof_helper
{
    BOOST_STATIC_ASSERT((is_same<X,Y>::value == true));
    typedef X type;
};

template<typename X,typename Y>
struct multiply_typeof_helper
{
    BOOST_STATIC_ASSERT((is_same<X,Y>::value == true));
    typedef X type;
};

template<typename X,typename Y>
struct divide_typeof_helper
{
    BOOST_STATIC_ASSERT((is_same<X,Y>::value == true));
    typedef X type;
};

#endif // MCS_HAS_TYPEOF

Matthias

Matthias Schabel wrote:
hmmm...decided to try and test this guy out and based on the docs I would think the following code would compile, but it doesn't in VC++ 8.0...g++ gobbles it up ok though.
A better fix is to replace the code between the #ifdef MCS_HAS_TYPEOF ... #endif // MCS_HAS_TYPEOF with the following:
Ok, I probably won't get to do this until Friday again...that's the day I get to work on personal things at work for a few hours...I don't have VC. I was profiling my own version and it was dog slow compared to doubles. I wanted to check yours. I did here at home with g++ on Linux, and our two versions compare equally wrt the static dim quantity. I had to really vamp up the optimizations of course to get them as fast as doubles (I couldn't find the right options for VC), and I still can't do as deep a recursion level without a seg fault...this is actually by quite a margin...several orders of magnitude. Both versions fail at the same point. Here is test code...add a 0 to the end of tlim and I get a crash on the quantity version:

#include <boost/units/io.hpp>
#include <boost/units/systems/si_units.hpp>

int const tlim = 50000;

double f_dbl(double x, double y, double z, int count)
{
    static double const C = 3.14;
    if (!count) return 0;
    return ((x + y) * z * C) + f_dbl(x, y, z, count - 1);
}

using namespace boost::units;
using namespace boost::units::SI;

quantity<volume> f_qty(quantity<length> x, quantity<length> y, quantity<length> z, int count)
{
    static quantity<length> const C = 3.14 * meters;
    if (!count) return 0 * cubic_meters;
    return ((x + y) * z * C) + f_qty(x, y, z, count - 1);
}

#include <iostream>

int main()
{
    std::cout << f_dbl(1,2,3,tlim) << std::endl;
    std::cout << f_qty(1 * meters, 2 * meters, 3 * meters, tlim) << std::endl;
}

COMMAND LINE:
nroberts@localhost ~/projects/prof_units $ g++ -I/home/nroberts/units -pg -O3 prof_mcs.cpp

This is likely implementation dependent and maybe there's still more ops.

Ok, I probably won't get to do this until Friday again...that's the day I get to work on personal things at work for a few hours...I don't have VC.
I don't have access to it at all, so I appreciate any help in getting things to function correctly under VC++...
I was profiling my own version and it was dog slow compared to doubles. I wanted to check yours. I did here at home with g++ on Linux and our two versions compare equally wrt the static dim quantity. I had to really vamp up the optimizations of course to get them as fast as doubles (I couldn't find the right options for VC) and I still can't do
I haven't spent much time (following the Boost admonition to focus on clarity and correctness before optimization) tuning performance yet. I don't believe that there is anything in the quantity class that can't, in principle, be optimized away. That being said, principle and practice can be separated by a significant gulf at times...good compiler inlining will be critical to optimization. Any input on optimizing the library would be most welcome - I expect that it should be possible to have code using quantities run exactly as fast as for built in types, but that remains to be proven...
as deep a recursion level without a seg fault...this is actually by quite a margin...several orders of magnitude. Both versions fail at the same point. Here is test code...add a 0 to the end of tlim and I get a crash on the quantity version:
I'm personally more concerned about runtime performance than recursion at this point, but this is an interesting point... I'm not completely clear on what you mean about increasing tlim : do you mean that the equivalent code with doubles can recurse several orders of magnitude deeper, and mcs::units recurses one order of magnitude deeper than your quantity code?
COMMAND LINE:
nroberts@localhost ~/projects/prof_units $ g++ -I/home/nroberts/units -pg -O3 prof_mcs.cpp
This is likely implementation dependent and maybe there's still more ops.
It looks like I'm having a dumb day today. Do you mean that there are more operations in the quantity code than there are for doubles??? Anyway, thanks for the feedback... Matthias

Matthias Schabel wrote:
Any input on optimizing the library would be most welcome - I expect that it should be possible to have code using quantities run exactly as fast as for built in types, but that remains to be proven...
So far that seems to run true if you use the highest optimization level for g++. In VC++ I'm finding otherwise.
as deep a recursion level without a seg fault...this is actually by quite a margin...several orders of magnitude. Both versions fail at the same point. Here is test code...add a 0 to the end of tlim and I get a crash on the quantity version:
I'm personally more concerned about runtime performance than recursion at this point, but this is an interesting point... I'm not completely clear on what you mean about increasing tlim : do you mean that the equivalent code with doubles can recurse several orders of magnitude deeper, and mcs::units recurses one order of magnitude deeper than your quantity code?
tlim is the variable in the code I pasted that sets the recursion depth. Yours and mine both have the same issue and crash at the same level (or close enough). Doubles can recurse several orders of magnitude deeper. If you add a zero to 50000, that is the point at which both mcs units and my own similar construct crash...
It looks like I'm having a dumb day today. Do you mean that there are more operations in the quantity code than there are for doubles???
There must be but I haven't looked at the asm. It is possible that even the recursive double function is getting inlined and the quantity version not, but I used recursion hoping the actual functions wouldn't be inlined, and only the quantity operations within them would be, so I could get an accurate assessment of double vs. quantity speed. It is important in my case that they are the same or close to it. Having a recursion depth limit of 50,000 isn't a big deal...not for me anyway, but it is interesting to note. BTW, my project, which has different goals than yours but some similar constructs (and similar issues), is unitlib on sourceforge.

Matthias Schabel wrote:
To get things started, here are a few questions I have :
1) At the moment, the library enforces strict construction; that is, quantities must be fully defined at construction :
quantity<double,SI::length> q1(1.5*SI::meters);
is acceptable, but
quantity<double,SI::length> q2(1.5);
On the one hand it is useful to know what unit the value is in and to enforce such knowledge be explicitly stated in the code. On the other hand, since this library has no knowledge of units with conversion factors other than 1 in any given system...specifying the system in the template parameter is probably enough to say what is needed; the second use of the unit then could be considered redundant. I would leave the requirement since it necessitates that the developer know what unit they are working with in case they don't know the system and can't be bothered to look.
2) I'm not currently satisfied with the I/O aspects, particularly the lack of a mechanism for internationalization. I'd love to hear any design/implementation suggestions, especially from anyone familiar with facets/locales, on how to most gracefully accomplish this...
I don't think the unit library should do I/O. I think it is one area that would be better served by being left out completely, letting the user devise the method best suited to their I/O requirements, including i18n. Surely the developer knows what dimensions they need to be able to display and can do something as simple as overloading >> and << for that purpose.
4) Any comments on implementation from those brave enough to look at the guts of the library would be especially welcome.
Since this library is heavily based on MPL it should use MPL constructs. Multiplying dimensions for example should be done with mpl::times<> instead of creating a new operator (multiply<>) for that purpose. It would also be good if the concepts were documented with concepts (the boost::concepts library) and enforced in algorithms. There is also a lack of metafunctions to identify units and quantities as types (only dimensions have this functionality) for use in such things as enable_if and other metaprogramming utilities. I also disagree with the assessment that runtime unit conversions are unnecessary. It is my contention that the more common use for a unit library is to do conversions at runtime with minimal cost in an easy and efficient (from programmer perspective) way. Compile time dimension enforcement is a very useful tool but compile time unit enforcement without runtime conversion and unit selection is not. The target audience for this library, as specified, seems too small to warrant boost inclusion. That's my take.

Noah Roberts wrote: Sorry, meant to post this in the "formal" review request. Oh well. No need to repost it there.

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
4) Any comments on implementation from those brave enough to look at the guts of the library would be especially welcome.
Since this library is heavily based on MPL it should use MPL constructs. Multiplying dimensions for example should be done with mpl::times<> instead of creating a new operator (multiply<>) for that purpose.
Not necessarily. static_multiply has to be able to operate on an mpl::list.
It would also be good if the concepts were documented with concepts (the boost::concepts library) and enforced in algorithms.
I don't think it matters much. Have you tried something that should be illegal but worked or gave an incomprehensible error message?
There is also a lack of meta functions to identify units and quantities as types (only dimensions have this functionality) for use in such things as enable_if and other meta programming utilities.
Good point.
I also disagree with the assessment that runtime unit conversions are unnecessary. It is my contention that the more common use for a unit library is to do conversions at runtime with minimal cost in an easy and efficient (from programmer perspective) way. Compile time dimension enforcement is a very useful tool but compile time unit enforcement without runtime conversion and unit selection is not. The target audience for this library, as specified, seem too small to warrant boost inclusion.
That's my take.
I don't understand what you want beyond what is provided in conversion.hpp, systems/si/convert_si_to_cgs.hpp, and systems/cgs/convert_cgs_to_si.hpp.

In Christ,
Steven Watanabe

Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
4) Any comments on implementation from those brave enough to look at the guts of the library would be especially welcome. Since this library is heavily based on MPL it should use MPL constructs. Multiplying dimensions for example should be done with mpl::times<> instead of creating a new operator (multiply<>) for that purpose.
Not necessarily. static_multiply has to be able to operate on an mpl::list.
I don't know where you come up with that requirement. It certainly makes sense to me that if you have a library that is as intimately tied with MPL as this one is that you use the standards created by the MPL.
I also disagree with the assessment that runtime unit conversions are unnecessary. It is my contention that the more common use for a unit library is to do conversions at runtime with minimal cost in an easy and efficient (from programmer perspective) way. Compile time dimension enforcement is a very useful tool but compile time unit enforcement without runtime conversion and unit selection is not. The target audience for this library, as specified, seem too small to warrant boost inclusion.
That's my take.
I don't understand what you want beyond what is provided in conversion.hpp, systems/si/convert_si_to_cgs.hpp, and systems/cgs/convert_cgs_to_si.hpp
quantity<SI::length> l = 1.5 * ft;

And no I don't really think ft being a quantity is adequate.

quantity<SI::length> cent(cents);
cent = 2 * meters;
assert(cent.value() == 200);

quantity<SI::length> unknown(user_unit);
unknown = 5 * meters;

quantity<SI::length> m(meters);
m = unknown;
assert(m.value() == 5);

void f_cgs(quantity<CGS::length> l);
f_cgs(quantity_cast<CGS>(cent));

The syntax is not necessarily what mcs does or would do but the concepts are what is important. It's been argued that this adds unnecessary complexity or would cause inefficiencies. I don't agree.

Not necessarily. static_multiply has to be able to operate on an mpl::list.
I don't know where you come up with that requirement. It certainly makes sense to me that if you have a library that is as intimately tied with MPL as this one is that you use the standards created by the MPL.
As Steven pointed out, this is not a detail - how would you write a template specialization of mpl::times<> that works for an arbitrary mpl::list? By defining my own compile time operators, I can default to mpl lists and specialize where necessary.
quantity<SI::length> l = 1.5 * ft;
And no I don't really think ft being a quantity is adequate.
You can already do this. Define an imperial unit system with ft as the unit of length, then specialize the conversion_helper template:

template<class Y>
class conversion_helper< quantity<unit<imperial::system,length_type>,Y>,
                         quantity<unit<si::system,length_type>,Y> >
{
    public:
        static quantity<unit<si::system,length_type>,Y>
        convert(const quantity<unit<imperial::system,length_type>,Y>& source)
        {
            return quantity<unit<si::system,length_type>,Y>::from_value(source.value()*0.3048);
        }
};

static const unit<imperial::system,length_type> ft;

With conversion_helper, you can define basically arbitrary quantity conversions between any two quantities (even if they are not dimensionally congruent, if you want, which can be useful in natural unit systems, for example)...
quantity<SI::length> cent(cents); cent = 2 * meters; assert(cent.value() == 200);
I have no idea what this means. Cents == centimeters? If so, centimeter is not an SI unit of length. And with no value, the first line doesn't give you a quantity anyway...
quantity<SI::length> unknown(user_unit); unknown = 5 * meters; quantity<SI::length> m(meters); m = unknown; assert(m.value() == 5);
void f_cgs(quantity<CGS::length> l); f_cgs(quantity_cast<CGS>(cent));
The syntax is not necessarily what mcs does or would do but the concepts are what is important.
It's been argued that this adds unnecessary complexity or would cause inefficiencies. I don't agree.
I'm sorry to say that I can't make heads or tails out of what you want here. I would need some real functioning C++ code demonstrating what you think you need to say whether I think it's possible within the existing framework of this library... Matthias

AMDG Matthias Schabel <boost <at> schabel-family.org> writes:
As Steven pointed out, this is not a detail - how would you write a template specialization of mpl::times<> that works for an arbitrary mpl::list?
Well, it is possible, but not a good idea.

template<>
struct mpl::times_impl<mpl::list0<>::tag, mpl::list0<>::tag>
{
    template<class X, class Y>
    struct apply
    {
        //...
    };
};
quantity<SI::length> cent(cents); cent = 2 * meters; assert(cent.value() == 200);
I have no idea what this means. Cents == centimeters? If so, centimeter is not an SI unit of length. And with no value, the first line doesn't give you a quantity anyway...
I see. What Noah is asking for is

template<class Unit, class T>
class quantity
{
    //...
    private:
        T val_;
        T multiplier;
};

IMO, separating val_ into two factors is not generally useful.
quantity<SI::length> unknown(user_unit); unknown = 5 * meters; quantity<SI::length> m(meters); m = unknown; assert(m.value() == 5);
void f_cgs(quantity<CGS::length> l); f_cgs(quantity_cast<CGS>(cent));
The syntax is not necessarily what mcs does or would do but the concepts are what is important.
It's been argued that this adds unnecessary complexity or would cause inefficiencies. I don't agree.
It does add overhead -- about 2x. This may be fine for many uses, but for some it would be unacceptable. In Christ, Steven Watanabe

Matthias Schabel wrote:
Not necessarily. static_multiply has to be able to operate on an mpl::list. I don't know where you come up with that requirement. It certainly makes sense to me that if you have a library that is as intimately tied with MPL as this one is that you use the standards created by the MPL.
As Steven pointed out, this is not a detail - how would you write a template specialization of mpl::times<> that works for an arbitrary mpl::list? By defining my own compile time operators, I can default to mpl lists and specialize where necessary.
SHRUG - I don't know. When I find I have to alter standards because of a path of implementation I chose I look at my choice and see if there might be an alternative. Granted, MPL isn't exactly a standard but it does try to set a few. A dimension type that works with mpl::times<> doesn't seem like that outrageous a thing to do when compared to implementing a new type of wheel and all that entails.
quantity<SI::length> cent(cents); cent = 2 * meters; assert(cent.value() == 200);
I have no idea what this means. Cents == centimeters? If so, centimeter is not an SI unit of length. And with no value, the first line doesn't give you a quantity anyway...
Yes, I know...your library can't do this. That's my point, so your saying so doesn't exactly do much to counter it. I think it pretty obvious what it 'means': 2 meters get assigned to a value in centimeters...should come out with the value 200...your syntax would have to be expanded. ...
I'm sorry to say that I can't make heads or tails out of what you want here.
It's a moot point. No, it isn't possible within your framework...at least not easy. But as you're set in your way and not really open to this anyway...and since I'm in the minority it seems...it's going to be what it's going to be. I'm not going to struggle against the wind on this one.

quantity<SI::length> cent(cents); cent = 2 * meters; assert(cent.value() == 200);
saying so doesn't exactly do much to counter. I think it pretty obvious what it 'means'. 2 meters get assigned to a value in centimeters...should come out with the value 200...your syntax would have to be expanded.
Lengths in the SI unit system are not measured in centimeters, so the first line here doesn't make sense. If you want this behavior, you need to enable implicit unit conversions by defining

#define MCS_UNITS_ENABLE_IMPLICIT_CONVERSIONS

Then this works how you seem to want it to:

quantity<CGS::length> cent(1*centimeter);
cent = 2*meters;

will give cent.value() == 200.
It's a moot point. No, it isn't possible within your framework...at least not easy. But as you're set in your way and not really open to this anyway...and since I'm in the minority it seems...it's going to be what it's going to be. I'm not going to struggle against the wind on this one.
I'm not trying to discourage you from having your voice heard. This library has undergone a number of dramatic changes in response to various suggestions and requests. If your concept is something that appears to be feasible and sufficiently interesting to a broad spectrum of users, I would be happy to consider it. However, I am having a hard time really understanding specifically what you want to do. My best current guess is that you want a quantity class that is capable of holding a unit having a specified dimension but with runtime-varying unit system. That is,

quantity<abstract::length,Y>

would represent a length in any unit system you wanted, where the unit system can change dynamically. But, to get back to Steven's question, what would this support that the existing library with explicit (and possibly implicit) conversions doesn't already? For user input, you can just do this:

quantity<SI::length> getInput(const unit_enum& input_unit,double val)
{
    typedef quantity<SI::length> return_type;

    switch (input_unit)
    {
        case FEET    : return return_type(val*imperial::feet);
        case LEAGUES : return return_type(val*nautical::leagues);
        case METERS  :
        default      : return return_type(val*SI::meters);
    };
}

As long as you're happy to do internal calculations in a pre-specified unit system, this should be all you need. Basically, as I see it, the space of possibilities for quantity types looks like this:

1) dimension and unit system fixed at compile time
2) dimension fixed, unit system free at compile time
3) dimension free, unit system fixed at compile time
4) dimension free, unit system free at compile time

Here, 1) is the zero-overhead approach I've implemented, 2) is the case where unit system can change at runtime, but dimension is fixed, 3) is the case where dimension can change at runtime, but unit system is fixed, and 4) is the case of completely dynamic runtime units. Of these, only 1) is an option if you do not want to pay a runtime cost for performing unit computations.
It is also the solution that has generated by far the most interest in this mailing list over the many discussions of units and quantities. Cheers, Matthias

Matthias Schabel said: (by the date of Tue, 13 Feb 2007 00:20:09 -0700)
quantity<SI::length> getInput(const unit_enum& input_unit,double val)
{
    typedef quantity<SI::length> return_type;

    switch (input_unit)
    {
        case FEET    : return return_type(val*imperial::feet);
        case LEAGUES : return return_type(val*nautical::leagues);
        case METERS  :
        default      : return return_type(val*SI::meters);
    };
}
I like this approach.

-- Janek Kozicki

Janek Kozicki wrote:
Matthias Schabel said: (by the date of Tue, 13 Feb 2007 00:20:09 -0700)
quantity<SI::length> getInput(const unit_enum& input_unit,double val)
{
    typedef quantity<SI::length> return_type;

    switch (input_unit)
    {
        case FEET    : return return_type(val*imperial::feet);
        case LEAGUES : return return_type(val*nautical::leagues);
        case METERS  :
        default      : return return_type(val*SI::meters);
    };
}
I like this approach.
I wouldn't recommend it. It doesn't scale too well.

On 2/13/07, Noah Roberts <roberts.noah@gmail.com> wrote:
Janek Kozicki wrote:
Matthias Schabel said: (by the date of Tue, 13 Feb 2007 00:20:09 -0700)
quantity<SI::length> getInput(const unit_enum& input_unit,double val)
{
    typedef quantity<SI::length> return_type;

    switch (input_unit)
    {
        case FEET    : return return_type(val*imperial::feet);
        case LEAGUES : return return_type(val*nautical::leagues);
        case METERS  :
        default      : return return_type(val*SI::meters);
    };
}
I like this approach.
I wouldn't recommend it. It doesn't scale too well.
I too like the approach. I'm curious how it doesn't scale well. With the existence of some runtime support (for which I agree with others, should not be in this iteration of the library) and the programmer's acceptance that it would incur overhead, how would you implement that function? Wouldn't you still need the switch?

// Note the change to abstract::length
quantity<abstract::length> getInput(const unit_enum& input_unit,double val)
{
    typedef quantity<abstract::length> return_type;

    switch (input_unit)
    {
        case FEET    : return return_type(val*imperial::feet);
        case LEAGUES : return return_type(val*nautical::leagues);
        case METERS  :
        default      : return return_type(val*SI::meters);
    };
}

I think everyone is trying really hard to see your point of view, Noah. All of the questions are to gain understanding, not personal attacks, so please don't take them that way.

--Michael Fawcett

Michael Fawcett wrote:
On 2/13/07, Noah Roberts <roberts.noah@gmail.com> wrote:
Janek Kozicki wrote:
Matthias Schabel said: (by the date of Tue, 13 Feb 2007 00:20:09 -0700)
quantity<SI::length> getInput(const unit_enum& input_unit,double val)
{
    typedef quantity<SI::length> return_type;

    switch (input_unit)
    {
        case FEET    : return return_type(val*imperial::feet);
        case LEAGUES : return return_type(val*nautical::leagues);
        case METERS  :
        default      : return return_type(val*SI::meters);
    };
}

I like this approach.
I wouldn't recommend it. It doesn't scale too well.
I too like the approach. I'm curious how it doesn't scale well.
Well, I should say in our case it didn't. You end up with several large functions with long lists of case statements and then separately, somewhere, a list of conversion factors. In this case we are talking about a static unit converter somewhere, and correct me if I'm wrong but I think they are in several different tables...not just tables for each dimension you will be doing conversions in but in different "systems". Add the enum...it just became nightmarish or I wouldn't be looking for a new way to do these things. Any time you have 2 or more things that need to be modified in parallel I find there is bound to be difficulties. Hence the desire to merge it all into one concept.

I like the idea of systems in order to ensure that the same base unit is used in all places, but I don't like the idea of having to make several systems for all the different units a user might want to have their reports in. I don't think that will scale too well either. For instance, we have probably 20+ units for measuring pressure alone.

I don't like the idea of getting rid of casts to make this work. The casts ensure that you are not doing conversions in the middle of calculation cycles. This is important to us because in our set of products there are several complex calculations performed in loops and even on today's computers you feel the hit. Enforcing a base system ensures conversions are done outside of this area in a pre/post manner. But without enabling implicit conversions and removing that barrier it looks like doing the other things I need doesn't work too hot.

With the existence of some runtime support (for which I agree with others, should not be in this iteration of the library) and the programmer's acceptance that it would incur overhead, how would you implement that function? Wouldn't you still need the switch?
I wouldn't make the function. There needs to be a way to set up a set of unit converters that are attached to the value and will do the conversion when needed. I personally prefer on assignment and have found, so far, that this works. Frankly put, I don't have a use for a library that does static unit conversions. We work in one system of units and convert into/out of those units when required to arbitrary units the user specifies and there are of course only a few dimensions we do this in. I have a hard time seeing a situation in which this isn't the way you would want to go about it.

Well, I should say in our case it didn't. You end up with several large functions with long lists of case statements and then separately, somewhere, a list of conversion factors. In this case we are talking about a static unit converter somewhere, and correct me if I'm wrong but I think they are in several different tables...not just tables for each dimension you will be doing conversions in but in different "systems".
At the risk of repeating myself, I would just like to point out that this library is primarily intended as a dimensional analysis library to ensure correctness, not a general purpose runtime unit conversion library. That being said, I have attempted to make the conversion architecture as flexible as possible; by appropriately specializing

template<class System1,class Dim1,
         class System2,class Dim2,
         class Y>
class conversion_helper< quantity<unit<System1,Dim1>,Y>,
                         quantity<unit<System2,Dim2>,Y> >

you should be able to get just about any unit conversion behavior you want. Default behavior is to only allow explicit unit conversions, but implicit conversion can be optionally enabled. I happen to implement conversions for SI and CGS units using forward and backward tables, which entails defining a conversion factor from SI->CGS and a conversion factor from CGS->SI for each fundamental unit, but nothing in the library mandates this design choice. You can roll your own to do just about anything...
there are of course only a few dimensions we do this in. I have a hard time seeing a situation in which this isn't the way you would want to go about it.
Imagine I want to compute the electrostatic force between two charged particles; the equation for this depends on the unit system you are using. In SI, the expression is

F = (1/4 pi epsilon_0) q1 q2 / r12^2

in electrostatic units, this is

F = q1 q2 / r12^2

Imagine I want to write a function that computes this for a computationally intensive multi-body simulation where the function is called millions or billions of times. With mcs::units, you can just do this:

quantity<si::force> electrostatic_force(const quantity<si::charge>& q1,
                                        const quantity<si::charge>& q2,
                                        const quantity<si::length>& r12)
{
    static const double k_C = 1.0/(4.0*pi*si::constants::epsilon_0);
    return k_C*q1*q2/(r12*r12);
}

quantity<esu::force> electrostatic_force(const quantity<esu::charge>& q1,
                                         const quantity<esu::charge>& q2,
                                         const quantity<esu::length>& r12)
{
    return q1*q2/(r12*r12);
}

Note that there is no runtime overhead, and, more importantly, we can select the appropriate equation at compile-time. In this case, this is not just a semantic advantage; doing the calculation in SI units incurs an additional multiply for each function invocation. With runtime units, you would have to check the unit system each time the function was called - bad for simulation work.

Cheers,
Matthias

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
Frankly put, I don't have a use for a library that does static unit conversions. We work in one system of units and convert into/out of those units when required to arbitrary units the user specifies and there are of course only a few dimensions we do this in. I have a hard time seeing a situation in which this isn't the way you would want to go about it.
As has been stated before, this library is primarily about type safety. The conversions are a secondary concern.
From what you are saying I think that you have not seriously considered what the implementation should look like. There are two basic possibilities
1) Store everything in some standard unit system and also store the conversion factor to the requested system.

2) Store the value in the requested unit system and remember the conversion factor to the standard system.

How should addition and subtraction be defined? You cannot require that the operands have the same unit. Otherwise harmless-looking code such as

1.0 * joules + (2.0 * kilograms * pow<2>(2.5 * meters / second)) / 2;

might not work. Converting the second operand to match the first will work but loses precision. If you use 2. to avoid the precision loss brought about by the conversions involved in 1., by the time you are finished you will end up losing more precision than if you had simply converted to SI to begin with.

Both of these require storing twice as much information as the current implementation. The inefficiency is probably more than a simple factor of two because the size increase will most likely force the compiler to continually load and store instead of keeping the values in a register. Any more complex approach will add even more overhead. For example, storing a sorted dimension list necessitates a call to merge for every single multiplication or division.

In Christ,
Steven Watanabe
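Steven's option 2 can be sketched in a few lines. This is a hypothetical illustration (not the mcs::units API, and the names `runtime_length`/`add` are invented for the example): the value is stored in the user's unit together with a factor converting it to a fixed base system, and addition must round-trip through the base system.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch (not the mcs::units API) of option 2: store the
// value in the user's unit together with the factor that converts it
// to a fixed base system (SI meters here).
struct runtime_length {
    double value;   // magnitude in the user's unit
    double to_si;   // multiply by this to get meters
    double in_si() const { return value * to_si; }
};

// Addition cannot require equal units, so both operands are converted
// to the base system first; every '+' costs two extra multiplies and a
// divide, and some unit must be picked for the result (here, the left
// operand's) - the overhead being discussed.
runtime_length add(const runtime_length& a, const runtime_length& b) {
    return runtime_length{ (a.in_si() + b.in_si()) / a.to_si, a.to_si };
}
```

Adding 1 meter and 100 centimeters this way yields 2 meters, but only after three floating-point operations that a static-unit quantity would not need.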

Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
Frankly put, I don't have a use for a library that does static unit conversions. We work in one system of units and convert into/out of those units when required to arbitrary units the user specifies and there are of course only a few dimensions we do this in. I have a hard time seeing a situation in which this isn't the way you would want to go about it.
As has been stated before, this library is primarily about type safety. The conversions are a secondary concern.
From what you are saying I think that you have not seriously considered what the implementation should look like. There are two basic possibilities
1) Store everything in some standard unit system and also store the conversion factor to the requested system.
2) Store the value in the requested unit system and remember the conversion factor to the standard system.
How should addition and subtraction be defined? You cannot require that the operands have the same unit. Otherwise harmless looking code such as 1.0 * joules + (2.0 * kilograms * pow<2>(2.5 * meters / second)) / 2; might not work. Converting the second to match the first will work but loses precision. If you use 2. to avoid the precision loss brought about by the conversions involved in 1., by the time you are finished you will end up losing more precision than if you had simply converted to SI to begin with.
"Converting the second to match the first" doesn't make sense to me. I guess you might be talking about operands of '+'? Yeah, not necessary. The answer is pretty simple. All arithmetic operators work in the base system. The result is converted on assignment if necessary.
Both of these require storing twice as much information as the current implementation.
There is one condition in which that is necessary. That is if you are using a value that is entered by the user and the calculation loop keeps requesting it from whatever object it is provided by. Then, in order that the conversion isn't repeatedly done, you would want to store two values...the user-entered value and the converted value. The reason you wouldn't just store the converted value is double creep...the number might change on the user. I imagine that a developer who needs to can work with this so that it does not happen...so that the calculation loop is not asking for the same conversion on the same value repeatedly. Any other time it is sufficient that you keep only the user-entered value and the unit (i.e. the conversion factor) to translate it into the base system...OR...the converted value in the base system without a conversion factor.

The inefficiency is probably more than a simple factor of two because the size increase will most likely force the compiler to continually load and store instead of keeping the values in a register.
Any more complex approach will add even more overhead. For example, storing a sorted dimension list necessiates a call to merge for every single multiplication or division.
Don't know why any of that would be necessary.
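Noah's rule above - all arithmetic happens in the base system, with conversion only on assignment - could be sketched like this. The type and member names here are illustrative, not anything proposed in the thread:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of "all arithmetic operators work in the base
// system; the result is converted on assignment". Storage is always
// SI (pascals); the display unit matters only when reading back out.
struct pressure_unit { double pascals_per_unit; };

struct pressure_qty {
    double si_value;        // always pascals
    pressure_unit display;  // unit requested by the user

    explicit pressure_qty(pressure_unit u) : si_value(0.0), display(u) {}

    // "Conversion on assignment": the right-hand side is a plain
    // base-system value produced by arithmetic; nothing is converted
    // until the value is read out in the display unit.
    pressure_qty& operator=(double pascals) { si_value = pascals; return *this; }

    double in_display_unit() const { return si_value / display.pascals_per_unit; }
};
```

Under this scheme intermediate arithmetic pays no conversion cost; the single divide happens only at the boundary where the user's unit is wanted.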

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
"Converting the second to match the first" doesn't make sense to me. I guess you might be talking about operands of '+'? Yeah, not necessary.
The answer is pretty simple. All arithmetic operators work in the base system. The result is converted on assignment if necessary.
Ok. What happens here:

std::cout << (x + y) << std::endl;

Is the unit printed
a. the unit of x
b. the unit of y
c. the base unit
Both of these require storing twice as much information as the current implementation.
There is one condition in which that is necessary. That is if you are using a value that is entered by the user and the calculation loop keeps requesting it from whatever object it is provided by. Then, in order that the conversion isn't repeatedly done you would want to store two values...the user entered value and the converted value. The reason you wouldn't just store the converted value is double creep...the number might change on the user. I imagine that a developer who needs to can work with this so that it does not happen...so that the calculation loop is not asking for the same conversion on the same value repeatedly.
Any other time it is sufficient that you keep only the user entered value and the unit (ie the conversion factor) to translate it into the base system...OR...the converted value in the base system without a conversion factor.
The former is exactly what I said. It stores two doubles instead of one. Further,

temp = a + b;
c = temp * d;

is not equivalent to

c = d * (a + b);

Every operation incurs an extra multiplication per operand. This is a far cry from zero overhead. The latter does not have any significant benefit over the current implementation.

In Christ,
Steven Watanabe

Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
"Converting the second to match the first" doesn't make sense to me. I guess you might be talking about operands of '+'? Yeah, not necessary.
The answer is pretty simple. All arithmetic operators work in the base system. The result is converted on assignment if necessary.
Ok. What happens here:
std::cout << (x + y) << std::endl;
Nothing good would probably come of it the way I see things; but since mcs does magic unit label stuff it could just output that. Without that, allowing the user to override certain behavior, this could be better behaved:

std::cout << ((x + y) * unit) << std::endl;

You're probably not too worried about the moderate overhead of the extra work there when I/O is going to be the bottleneck anyway. Except in debugging, the developer is going to have a pretty strong idea of which units need labels and will create them. Other units and dimensions are of no consequence.
The former is exactly what I said. It stores two doubles instead of one. Further,
temp = a + b; c = temp * d;
is not equivalent to c = d * (a + b);
Every operation incurs an extra multiplication per operand. This is a far cry from zero overhead.
But you don't have to pay this cost unless you need to. Imagine:

class qty_withunit;
class bare_qty;

template < typename T1, typename T2 >
bare_qty operator * ( T1 const& l, T2 const& r)
{
    return bare_qty(...);
}

greatly simplified for posting purposes...the return type alone is a few lines of code. Quantities have assignment ops that react to anything that is_quantity<>, as do all operators. You then write your functions like so:

bare_qty f(bare_qty) { some calcs and a return; }

and use them like so:

qty_withunit x(unit);
x = f(x);

and possibly:

qty_withunit y = f(x) * unit;

Yes, there is an overhead to the unit, but it is localized to the places where conversion is necessary. The library user does have to use some care, and I haven't thought of a way to allow them to use this stuff willy-nilly in something needing extreme performance, but I don't think that is too much to expect.

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
Steven Watanabe wrote:
std::cout << (x + y) << std::endl;
Nothing good would probably come of it the way I see things; but since mcs does magic unit label stuff it could just output that. Without, allowing for the user to override certain behavior this could be more well behaved:
std::cout << ((x + y) * unit) << std::endl;
You're probably not too worried about the moderate overhead of the extra work there when I/O is going to be bottlenecking anyway.
Except in debugging, the developer is going to have a pretty strong idea of what units need labels and create them. Other units and dimensions are of no consequence.
Agreed. If you have to specify the unit at this point, what is the purpose of tracking it dynamically at all, though?
Yes, there is an overhead to the unit but it is localized to the places where conversion is necessary. The library user does have to use some care, and I haven't thought of a way to allow them to use this stuff willy nilly in something needing extreem performance, but I don't think that is too much to expect.
I'm afraid that I am still unconvinced that runtime units are useful. Here are the basic scenarios and solutions. Perhaps I forgot something?

1) The dimensions are known at compile time but the unit system depends on user input.

a) Losing a few bits of precision doesn't hurt. Do all internal calculations in SI (e.g.) and track the units to print separately. The printed units need never appear anywhere outside of the I/O code. It is cleaner to keep them there rather than having them possibly propagate out, only to be translated immediately into SI anyway.

b) Every bit of precision matters. Track everything at runtime using sorted vectors of fundamental units. If performance also matters then there is not much that a library can do to help.

In Christ,
Steven Watanabe
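Scenario 1a - internals stay in static SI quantities, and the user-selected unit lives only at the I/O boundary - could look something like this. All names here (`display_unit`, `format_length`) are invented for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <string>

// Hypothetical sketch of scenario 1a: all internal state is a base-
// system value (a double in meters here), and the user-selected unit
// exists only in the I/O layer as a (name, factor) pair.
struct display_unit {
    std::string name;
    double per_meter;   // units of this display unit per meter
};

// Formatting converts exactly once, at the boundary; no unit
// information ever propagates into the computation code.
std::string format_length(double meters, const display_unit& u) {
    return std::to_string(meters * u.per_meter) + " " + u.name;
}
```

The computation code never sees `display_unit`; only the printing call does, which is exactly the containment being argued for.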

Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
Steven Watanabe wrote:
std::cout << (x + y) << std::endl;

Nothing good would probably come of it the way I see things; but since mcs does magic unit label stuff it could just output that. Without, allowing for the user to override certain behavior this could be more well behaved:
std::cout << ((x + y) * unit) << std::endl;
You're probably not too worried about the moderate overhead of the extra work there when I/O is going to be bottlenecking anyway.
Except in debugging, the developer is going to have a pretty strong idea of what units need labels and create them. Other units and dimensions are of no consequence.
Agreed. If you have to specify the unit at this point, what is the purpose of tracking it dynamically at all, though?
I hadn't really considered this use case. For most purposes I think x + y will be assigned to something and stored, and most, if not all, calculations would be done in functions working in fundamental units. However, it is a legitimate use. Why track units at all at this point? Well, theoretically you could have the unit selected at some earlier point and stored in a variable so that you might do:

std::cout << ((x + y) * user_selected_unit) << std::endl;

The alternative to requiring this notation would be to pick one or the other...probably the left operand. In my opinion that isn't verbose enough and is rather arbitrary. My intended use is more akin to:

quantity<pressure> dp(psi);
dp = x - y;
std::cout << dp << std::endl;

with lots of stuff going on in between.
Yes, there is an overhead to the unit but it is localized to the places where conversion is necessary. The library user does have to use some care, and I haven't thought of a way to allow them to use this stuff willy nilly in something needing extreem performance, but I don't think that is too much to expect.
I'm afraid that I am still unconvinced that runtime units are useful. Here are the basic scenarios and solutions. Perhaps I forgot something?
1) The dimensions are known at compile time but the unit system depends on user input.
a) Losing a few bits of precision doesn't hurt.
Do all internal calculations in SI (e.g.) and track the units to print separately. The printed units need never appear anywhere outside of the IO code. It is cleaner to keep them there rather than having them possibly propagate out, only to be translated immediately into SI anyway.
There are a few things I am concerned about. First, a user-entered value is a value that has a set unit. It makes sense for these to be together. Since these should be kept together, it makes sense for them to also be held with a static dimension to coincide with their future use in calculations. That being the case, it makes sense for this to be similar to and compatible with a static-dimension quantity.

Second, I don't like the idea of having two separate unit systems...one for the static quantity and one for the dynamic. Optimally a psi unit would be used in both of the following:

quantity<pressure> dp(psi);

and inside a pressure calculation function:

stat_quantity<pressure> calc_whatever()
{
    static stat_quantity<pressure> const C = 5.43 * psi;
    ...
}

The second is because oftentimes equations are written with a given set of units and contain constants in units possibly not in the base. This allows you to keep the code in line with the domain it models. It would be especially nice if that same psi could be used in static conversions, but that's a heavy feature for minor benefit. I also don't believe static conversions to new base systems are going to be that common in most applications.
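The "psi constant inside an SI function" concern can be handled by converting once at static initialization, as later suggested in the thread with quantity_cast. A minimal sketch, using plain doubles and an assumed psi-to-pascal factor rather than the library's types:

```cpp
#include <cassert>
#include <cmath>

// Assumed conversion factor for illustration (pascals per psi).
constexpr double pascals_per_psi = 6894.757293168;

// Hypothetical helper: express a psi value in the SI base system.
constexpr double psi(double v) { return v * pascals_per_psi; }

double calc_whatever(double p_in_pa) {
    // The constant stays written in the domain's unit (psi), but the
    // conversion happens once, at static initialization, and is then
    // cached; the surrounding arithmetic stays in SI.
    static const double C = psi(5.43);
    return p_in_pa + C;   // stand-in for the real equation
}
```

This keeps the code in line with the domain it models while avoiding a per-call conversion cost.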

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
Steven Watanabe wrote:
std::cout << (x + y) << std::endl;

Nothing good would probably come of it the way I see things; but since mcs does magic unit label stuff it could just output that. Without, allowing for the user to override certain behavior this could be more well behaved:
std::cout << ((x + y) * unit) << std::endl;
You're probably not too worried about the moderate overhead of the extra work there when I/O is going to be bottlenecking anyway.
Except in debugging, the developer is going to have a pretty strong idea of what units need labels and create them. Other units and dimensions are of no consequence.
Agreed. If you have to specify the unit at this point, what is the purpose of tracking it dynamically at all, though?
I hadn't really considered this use case. For most purposes I think x + y will be assigned to something and stored and most, if not all, calculations would be done in functions working in fundamental units.
However, it is a legitimate use. Why track units at all at this point? Well theoretically you could have the unit selected at some earlier point and stored in a variable so that you might do:
std::cout << ((x + y) * user_selected_unit) << std::endl;
The alternative to requiring this notation would be to pick one or the other...probably the left operand. In my opinion that isn't verbose enough and is rather arbitrary.
My intended use is more akin to:
quantity<pressure> dp(psi);
dp = x - y;
std::cout << dp << std::endl;
with lots of stuff going on in between.
My question was: Why should x and y store the unit, when that information is just going to be thrown away and replaced by user_selected_unit? Is this converted value going to be used for any other purpose than to output it or convert it back to the base system for more calculation?
There are a few things I am concerned about. First, a user entered value is a value that has a set unit. It makes sense for these to be together. Since these should be kept together it makes sense for them to be also held with a static dimension to coincide with their future use in calculations. That being the case it makes sense for this to be similar and compatible with a static dimension quantity. Second, I don't like the idea of having two separate unit systems...one for the static quantity and one for the dynamic. Optimally a psi unit would be used in both of the following:
A few clarifications are in order. I have assumed that the only use for a dynamic unit system is to read and write quantities in units to be determined at runtime. If this assumption is wrong then I will accept the utility of runtime units. Even then, I don't really want to add them to Units. If it is possible to cleanly implement it as a separate library on top of Units, then that would be my first choice. The dimension code can be used wholesale. A runtime quantity can be constructed from a static quantity. Adding two runtime quantities yields a static quantity. A static quantity should be constructible from a runtime quantity too. That is the only hitch I see.

There is one ugliness in your proposal.

quantity<pressure> p1(psi);
quantity<pressure> p2(pascals);
p1 = whatever;
p2 = p1;

Now, what should the unit of p2 be?
quantity<pressure> dp(psi);
and inside a pressure calculation function:
stat_quantity<pressure> calc_whatever()
{
    static stat_quantity<pressure> const C = 5.43 * psi;
    ...
}
The second is because often times equations are written with a given set of units and contain constants in units possibly not in the base. Allows you to keep the code in line with the domain it models.
It would be especially nice if that same psi could be used in static conversions but that's a heavy feature for minor benefit.
I also don't believe static conversions to new base systems are going to be that common in most applications.
I agree with you there. However, static conversions are common enough to warrant consideration and can be cleanly integrated into the rest of the library.

In Christ,
Steven Watanabe

Matthias Schabel wrote:
std::cout << ((x + y) * user_selected_unit) << std::endl;
How is this different from/better than
std::cout << quantity<your_system::psi>(x+y) << std::endl;
or, if you prefer,
typedef quantity<your_system::psi> psi;
std::cout << psi(x+y) << std::endl;
??
Well, like I said earlier, this solution requires a system for every possible unit you're using. It also doesn't allow one to assign units, since they are distinct types. The only way to create a runtime-unit quantity with your plan is to use something akin to boost::any to wrap each unit "type" into something that can be assigned to/from. I started going that direction and realized quite quickly that it's a big mess, and there isn't much need for it except the desire to do everything at compile time, which is unnecessary.

AMDG Matthias Schabel <boost <at> schabel-family.org> writes:
std::cout << ((x + y) * user_selected_unit) << std::endl;
How is this different from/better than
std::cout << quantity<your_system::psi>(x+y) << std::endl;
or, if you prefer,
typedef quantity<your_system::psi> psi;
std::cout << psi(x+y) << std::endl;
??
The unit is fixed at compile time that way. In some programs you may want the user to specify what units he wants the result to be in. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
Steven Watanabe wrote:
std::cout << (x + y) << std::endl;

Nothing good would probably come of it the way I see things; but since mcs does magic unit label stuff it could just output that. Without, allowing for the user to override certain behavior this could be more well behaved:
std::cout << ((x + y) * unit) << std::endl;
You're probably not too worried about the moderate overhead of the extra work there when I/O is going to be bottlenecking anyway.
Except in debugging, the developer is going to have a pretty strong idea of what units need labels and create them. Other units and dimensions are of no consequence.

Agreed. If you have to specify the unit at this point, what is the purpose of tracking it dynamically at all, though?

I hadn't really considered this use case. For most purposes I think x + y will be assigned to something and stored and most, if not all, calculations would be done in functions working in fundamental units.
However, it is a legitimate use. Why track units at all at this point? Well theoretically you could have the unit selected at some earlier point and stored in a variable so that you might do:
std::cout << ((x + y) * user_selected_unit) << std::endl;
The alternative to requiring this notation would be to pick one or the other...probably the left operand. In my opinion that isn't verbose enough and is rather arbitrary.
My intended use is more akin to:
quantity<pressure> dp(psi);
dp = x - y;
std::cout << dp << std::endl;
with lots of stuff going on in between.
My question was: Why should x and y store the unit, when that information is just going to be thrown away and replaced by user_selected_unit?
Well, presumably it is used elsewhere.
Is this converted value going to be used for any other purpose than to output it or convert it back to the base system for more calculation?
No.
There are a few things I am concerned about. First, a user entered value is a value that has a set unit. It makes sense for these to be together. Since these should be kept together it makes sense for them to be also held with a static dimension to coincide with their future use in calculations. That being the case it makes sense for this to be similar and compatible with a static dimension quantity. Second, I don't like the idea of having two separate unit systems...one for the static quantity and one for the dynamic. Optimally a psi unit would be used in both of the following:
A few clarifications are in order. I have assumed that the only use for a dynamic unit system is to read and write quantities in units to be determined at runtime.
The other use is to write code like so:

quantity f()
{
    static quantity const C = 9.43 * psi;
    ... equation written with psi constant ...
}

If this assumption is wrong then I will accept the utility of runtime units. Even then, I don't really want to add them to Units. If it is possible to cleanly implement it as a separate library on top of Units, then that would be my first choice.
This is why I think the concepts in the library need to be better documented. What makes a quantity a quantity, a unit a unit, etc.? It's one thing to say, "Hey, just override XYZ struct deep inside the implementation and it will work," and another to provide real documentation so a person doesn't have to know the implementation upside down to do it.
The dimension code can be used wholesale. A runtime quantity can be constructed from a static quantity. Adding two runtime quantities yields a static quantity. A static quantity should be constructible from a runtime quantity too. That is the only hitch I see.
There is one ugliness in your proposal.
quantity<pressure> p1(psi);
quantity<pressure> p2(pascals);
p1 = whatever;
p2 = p1;
Now, what should the unit of p2 be?
pascals. The rule is conversion on assignment. This does cause some need to be careful with such things as std containers that work through assignment.

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
My question was: Why should x and y store the unit, when that information is just going to be thrown away and replaced by user_selected_unit?
Well, presumably it is used elsewhere.
Is this converted value going to be used for any other purpose than to output it or convert it back to the base system for more calculation?
No.
Then why go to the trouble of providing all the arithmetic functions? As far as I can see runtime_quantity can be very basic.

template<class BaseUnit>
struct runtime_unit
{
    double conversion_factor_;
    std::string name_;

    friend istream& operator>>(istream& is, runtime_unit& self)
    {
        // fancy stuff for parsing a unit
    }
};

runtime_unit<SI::force> pound = { ..., "pound" };
runtime_unit<SI::force> dyne = { ..., "dyne" };

template<class BaseUnit, class Unit>
struct static_runtime_unit
{
    static runtime_unit<BaseUnit>& value()
    {
        static runtime_unit<BaseUnit> result = { ... };
        return(result);
    }
};

template<class BaseUnit>
class runtime_quantity
{
public:
    runtime_quantity(const quantity<BaseUnit>& q) :
        value_(q.value()),
        unit_(static_runtime_unit<BaseUnit,BaseUnit>::value()) { }

    friend ostream& operator<<(ostream& os, const runtime_quantity& self)
    {
        // overly simplified
        return(os << self.value_ << ' ' << self.unit_.name_);
    }
    friend istream& operator>>(istream& is, runtime_quantity& self)
    {
        // overly simplified
        return(is >> self.value_ >> self.unit_);
    }

    double value() const { return(value_); }
    runtime_unit<BaseUnit> unit() const { return(unit_); }
    quantity<BaseUnit> quantity() const
    {
        return(quantity<BaseUnit>::from_value(
            value_ * unit_.conversion_factor_));
    }
private:
    double value_;
    runtime_unit<BaseUnit> unit_;
};

template<class Quantity, class BaseUnit>
runtime_quantity<BaseUnit> operator*(const runtime_unit<BaseUnit>&,
                                     const Quantity&);
A few clarifications are in order. I have assumed that the only use for a dynamic unit system is to read and write quantities in units to be determined at runtime.
The other use is to write code like so:
quantity f() { static quantity const C = 9.43 * psi; ... equation written with psi constant ... }
I don't understand what is wrong with

quantity<psi_type> f()
{
    static quantity<psi_type> const C = 9.43 * psi;
    ... equation written with psi constant ...
}
quantity<pressure> p1(psi);
quantity<pressure> p2(pascals);
p1 = whatever;
p2 = p1;
Now, what should the unit of p2 be?
pascals. The rule is conversion on assignment. This does cause some need to be careful with such things as std containers that work through assignment.
Right. I can't see a way to make a std container that holds quantities of different units safe at all. Explicit conversion is safer.

p2 = runtime_quantity_cast(pascals, p1);

In Christ,
Steven Watanabe

Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
My question was: Why should x and y store the unit, when that information is just going to be thrown away and replaced by user_selected_unit? Well, presumably it is used elsewhere.
Is this converted value going to be used for any other purpose than to output it or convert it back to the base system for more calculation? No.
Then why go to the trouble of providing all the arithmetic functions?
Because they are occasionally useful.

dp = inlet - outlet;

vs.

dp = (inlet.quantity() - outlet.quantity()) * dp.unit();
quantity f() { static quantity const C = 9.43 * psi; ... equation written with psi constant ... }
I don't understand what is wrong with
quantity<psi_type> f() { static quantity<psi_type> const C = 9.43 * psi; ... equation written with psi constant ... }
Well, if we are talking about in terms of the current proposal I believe that would result in a lot of conversions from psi to pascals (assuming si base). Because now C is in some psi system whereas the rest of the function uses the SI system. Incredibly inefficient.
quantity<pressure> p1(psi);
quantity<pressure> p2(pascals);
p1 = whatever;
p2 = p1;
Now, what should the unit of p2 be?

pascals. The rule is conversion on assignment. This does cause some need to be careful with such things as std containers that work through assignment.
Right. I can't see a way to make a std container that holds quantities of different units safe at all.
Most commonly you would not want them to convert to a different unit on assignment. You would want the elements in your container to stay a certain unit while converting whatever value is being assigned to that unit. When that is not what is wanted a wrapper is needed.

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
Then why go to the trouble of providing all the arithmetic functions?
Because they are occasionally useful.
dp = inlet - outlet;
vs.
dp = (inlet.quantity() - outlet.quantity()) * dp.unit();
Well, ok. I personally prefer the latter, but I can see why some might not want to type more. I would definitely want

dp = (inlet - outlet) * dp.unit();
quantity f() { static quantity const C = 9.43 * psi; ... equation written with psi constant ... }
I don't understand what is wrong with
quantity<psi_type> f() { static quantity<psi_type> const C = 9.43 * psi; ... equation written with psi constant ... }
Well, if we are talking about in terms of the current proposal I believe that would result in a lot of conversions from psi to pascals (assuming si base). Because now C is in some psi system whereas the rest of the function uses the SI system. Incredibly inefficient.
How was I supposed to know that the rest of the function was in SI? Even so,

quantity<SI::pressure> f()
{
    static quantity<SI::pressure> const C =
        quantity_cast<SI::system>(9.43 * psi);
    ... equation written with psi constant ...
}

OR

quantity<psi_type> f()
{
    static quantity<SI::pressure> const C =
        quantity_cast<SI::system>(9.43 * psi);
    ... equation written with psi constant ...
}
quantity<pressure> p1(psi);
quantity<pressure> p2(pascals);
p1 = whatever;
p2 = p1;
Now, what should the unit of p2 be?

pascals. The rule is conversion on assignment. This does cause some need to be careful with such things as std containers that work through assignment.
Right. I can't see a way to make a std container that holds quantities of different units safe at all.
Most commonly you would not want them to convert to a different unit on assignment. You would want the elements in your container to stay a certain unit while converting whatever value is being assigned to that unit. When that is not what is wanted a wrapper is needed.
typedef runtime_quantity<SI::length> length_t;
std::vector<length_t> v;
// fill v
v.insert(v.begin(), ...);
// the units of the elements are now unspecified.

A type that is almost a value type but isn't quite is asking for trouble.

In Christ,
Steven Watanabe

Steven Watanabe wrote:
If it is possible to cleanly implement it as a separate library on top of Units then that would be my first choice.
I have also expressed an interest in actually doing this, but there hasn't been much interest in making sure it actually works, so I've trekked off on my own. From my perspective it seems like the goal is to fast-track this static unit library into Boost without first making sure people are going to use it, and second that they can. If the expectation is that people will build runtime unit systems on top of this, then steps should be taken to make that possible, easy, and clean. However, when I brought this option up earlier it was deemed unimportant. That's fine, but I can't really support the library without this.

Not necessarily. static_multiply has to be able to operate on an mpl::list.
Exactly...
There is also a lack of meta functions to identify units and quantities as types (only dimensions have this functionality) for use in such things as enable_if and other meta programming utilities.
Good point.
I've added

is_dimension<D>
is_unit<U>
is_unit_of_system<U,S>
is_quantity<Q>
is_quantity_of_system<Q,S>

for the next version. I should probably also allow querying the existence of an explicit conversion...although I suspect is_convertible already does that...

Matthias
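The kind of trait being added can be sketched in a few lines. This is a hypothetical illustration only - the `quantity<>` template below is a stand-in, not the real mcs::units one - showing how such a metafunction combines with enable_if, as Noah requested:

```cpp
#include <cassert>
#include <type_traits>

// Stand-in quantity template for illustration; not the library's.
template<class Unit> struct quantity { double value; };

struct si_length {};   // illustrative unit tag

// A minimal is_quantity<> metafunction: false in general, true for
// any instantiation of the quantity<> template.
template<class T> struct is_quantity : std::false_type {};
template<class U> struct is_quantity< quantity<U> > : std::true_type {};

// This overload participates in resolution only for quantity types,
// which is the enable_if use case the thread mentions.
template<class Q>
typename std::enable_if<is_quantity<Q>::value, double>::type
raw_value(const Q& q) { return q.value; }
```

With such traits, generic code can constrain templates to units or quantities of a given system instead of relying on accidental instantiation errors.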
participants (14)

- Andrey Semashev
- Aristid Breitkreuz
- Ben FrantzDale
- Ben FrantzDale
- David Greene
- Janek Kozicki
- Kevin Lynch
- Matthias Schabel
- Michael Fawcett
- Noah Roberts
- Paul A Bristow
- Peder Holt
- Steven Watanabe
- Yuval Ronen