
A new alpha version of mcs::units has been posted to SourceForge since the vault is not accepting uploads: www.sourceforge.net/projects/mcs-units (mcs_units_v0.7.0_alpha_4.zip). This version includes some significant new functionality that Steven Watanabe and I have been working on:

- heterogeneous and mixed unit systems, allowing constructs such as watt hr, m^2/cm^(1/2), etc., in which fundamental units from different unit systems are combined and multiple fundamental units may have the same dimensional signature
- fine-grained implicit unit conversion, allowing implicit conversions as long as they are allowed for every fundamental dimension present
- conversions between homogeneous and heterogeneous unit systems
- trigonometric functions and angle systems for degrees, gradians, and radians (with implicit conversions to SI and CGS radians)
- a nearly complete set of CODATA physical constants

Comments and bug reports (especially on the heterogeneous unit/implicit conversion functionality) are welcomed, as always. Documentation is not completely up to date, but virtually all functionality is demonstrated in the examples.

Matthias

It seems that the mcs-units library is moving along well. I have one question, though, about scope and whether the following should be within the scope of this library or within the scope of another one.

In building probability and likelihood models, I often encounter the issue of transforming from either to their logarithms and vice versa. Code would be much simpler and less error-prone if the appropriate domain was defined, the natural arithmetic operators were used, and the conversions occurred (or could be prevented to catch errors) as necessary, but without necessarily requiring explicit bookkeeping to distinguish these quantities and their logarithms.

This suggests a (small) set of types with appropriate conversions, much as in the units library. However, probabilities are of course unitless.

Should this idea be developed as a separate library or does it make any sense to fold it into the existing units framework?

Cheers,
Brook

In building probability and likelihood models, I often encounter the issue of transforming from either to their logarithms and vice versa. Code would be much simpler and less error-prone if the appropriate domain was defined, the natural arithmetic operators were used, and the conversions occurred (or could be prevented to catch errors) as necessary, but without necessarily requiring explicit bookkeeping to distinguish these quantities and their logarithms.
This suggests a (small) set of types with appropriate conversions, much as in the units library. However, probabilities are of course unitless.
Should this idea be developed as a separate library or does it make any sense to fold it into the existing units framework?
We've tried to make mcs::units as flexible and extensible as possible in order to accommodate a wide range of value types. As you point out, probabilities and log probabilities don't have associated units (by definition, since taking the log of a unit doesn't make any sense from a dimensional analysis standpoint), so I don't think this is something that should be directly supported in the library itself. That being said, mcs::units can be used for tagging and conversion in this way if you're willing to specialize the conversion_helper class template for your probability/log_probability classes. If you want to try this, I can walk you through the steps...

Matthias
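As a minimal sketch of the kind of small, separate type pair Brook describes (this is plain C++, not mcs::units code; the names probability and log_probability are made up for illustration), the bookkeeping can be pushed into two thin wrappers with explicit conversions:

    #include <cmath>

    // Distinct types for a probability and its logarithm, so the compiler
    // tracks which domain a value lives in; conversions are explicit.
    class probability {
        double p_;
    public:
        explicit probability(double p) : p_(p) {}
        double value() const { return p_; }
        probability operator*(const probability& o) const
        { return probability(p_ * o.p_); }
    };

    class log_probability {
        double lp_;
    public:
        explicit log_probability(double lp) : lp_(lp) {}
        explicit log_probability(const probability& p) : lp_(std::log(p.value())) {}
        double value() const { return lp_; }
        // multiplying the underlying probabilities means adding the logs
        log_probability operator*(const log_probability& o) const
        { return log_probability(lp_ + o.lp_); }
        probability to_probability() const
        { return probability(std::exp(lp_)); }
    };

Note that operator* on log_probability returns the same type as its operands (by adding the stored logs), which is exactly the behavior Steven points out later in the thread that a units-style framework would not give you for free.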

Matthias Schabel writes:
In building probability and likelihood models, I often encounter the issue of transforming from either to their logarithms and vice versa. Code would be much simpler and less error-prone if the appropriate domain was defined, the natural arithmetic operators were used, and the conversions occurred (or could be prevented to catch errors) as necessary, but without necessarily requiring explicit bookkeeping to distinguish these quantities and their logarithms.
This suggests a (small) set of types with appropriate conversions, much as in the units library. However, probabilities are of course unitless.
Should this idea be developed as a separate library or does it make any sense to fold it into the existing units framework?
We've tried to make mcs::units as flexible and extensible as possible in order to accommodate a wide range of value types. As you point out, probabilities and log probabilities don't have associated units (by definition, since taking the log of a unit doesn't make any sense from a dimensional analysis standpoint), so I don't think this is something that should be directly supported in the library itself.
Voice from the back of the room:

a) It might be handy, though, to have (for example) a log function that enforces unitless quantities. For example,

log ( velocity * time / meters )

is fine, but if you forget to multiply by the time, you get a helpful error message.

b) It might also be handy, if possible, to have some functions that take units and know what to do; for example, sqrt (m*m) properly returns meters, and so forth.

< returns to lurk mode >

----------------------------------------------------------------------
Dave Steffen, Ph.D.                      Fools ignore complexity.
Software Engineer IV                     Pragmatists suffer it.
Numerica Corporation                     Some can avoid it.
ph (970) 461-2000 x227                   Geniuses remove it.
dgsteffen@numerica.us                        -- Alan Perlis

AMDG Dave Steffen <dgsteffen <at> numerica.us> writes:
a) It might be handy, though, to have (for example) a log function that enforces unitless quantities. For example,
log ( velocity * time / meters )
is fine, but if you forget to multiply by the time, you get a helpful error message.
b) It might also be handy, if possible, to have some functions that take units and know what to do; for example, sqrt (m*m) properly returns meters, and so forth.
Absolutely. We already have sin/cos &c. We don't have sqrt, but root<2>(m * m) will give the correct answer (albeit inefficiently). I'll look through cmath and add all the applicable functions.

In Christ,
Steven Watanabe
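A small usage sketch of the two points, in the style of the quantity code quoted elsewhere in this thread (the SI::area and SI::length typedefs here are assumptions; quantity<SI::velocity>, SI::meters_per_second, SI::seconds, SI::meters, and root<2> are the spellings discussed in the thread):

    #include <cmath>
    // mcs::units headers and using-declarations omitted

    quantity<SI::velocity> v(9.0 * SI::meters_per_second);

    // (a) velocity * time / length reduces to a dimensionless quantity, which
    // converts implicitly to its value type, so std::log applies; if you drop
    // the *SI::seconds the expression no longer reduces, and a dedicated
    // log() overload could reject it at compile time.
    double x = std::log(v * SI::seconds / SI::meters);

    // (b) root<2>() keeps the units: the square root of an area is a length.
    quantity<SI::area>   a = 4.0 * SI::meters * SI::meters;
    quantity<SI::length> l = root<2>(a);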

Steven Watanabe writes:
AMDG
Dave Steffen <dgsteffen <at> numerica.us> writes:
a) It might be handy, though, to have (for example) a log function that enforces unitless quantities. For example,
log ( velocity * time / meters )
is fine, but if you forget to multiply by the time, you get a helpful error message.
b) It might also be handy, if possible, to have some functions that take units and know what to do; for example, sqrt (m*m) properly returns meters, and so forth.
Absolutely.
We already have sin/cos &c.
We don't have sqrt, but root<2>(m * m) will give the correct answer (albeit inefficiently). I'll look through cmath and add all the applicable functions.
Cool. Our application could / should be using this sort of thing, but there are some significant difficulties involved. So we won't be using units any time soon, but at some point we'll have to tackle these issues.

----------------------------------------------------------------------
Dave Steffen, Ph.D.                      Fools ignore complexity.
Software Engineer IV                     Pragmatists suffer it.
Numerica Corporation                     Some can avoid it.
ph (970) 461-2000 x227                   Geniuses remove it.
dgsteffen@numerica.us                        -- Alan Perlis

a) It might be handy, though, to have (for example) a log function that enforces unitless quantities. For example,
log ( velocity * time / meters )
is fine, but if you forget to multiply by the time, you get a helpful error message.
Cases where the function argument reduces to a dimensionless quantity already all work, because dimensionless quantities can be implicitly converted to their value type:

std::cout << std::sqrt(quantity<SI::velocity>(9.0*SI::meters_per_second)*SI::seconds/SI::meters) << std::endl;

gives

3
b) It might also be handy, if possible, to have some functions that take units and know what to do; for example, sqrt (m*m) properly returns meters, and so forth.
True - we can support integral and rational powers, but not irrational ones... Matthias

AMDG Matthias Schabel <boost <at> schabel-family.org> writes:
a) It might be handy, though, to have (for example) a log function that enforces unitless quantities. For example,
log ( velocity * time / meters )
is fine, but if you forget to multiply by the time, you get a helpful error message.
Cases where the function argument reduces to a dimensionless quantity already all work, because dimensionless quantities can be implicitly converted to their value type:
std::cout << std::sqrt(quantity<SI::velocity> (9.0*SI::meters_per_second)*SI::seconds/SI::meters) << std::endl;
gives
3
But they cannot be found by ADL, and they return the raw value_type. The preferable output from your example would be

3 dimensionless

In Christ,
Steven Watanabe
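To spell that out with the expression from Matthias's example (a sketch; the library names are as quoted above):

    // std::sqrt applies here only because the dimensionless result converts
    // implicitly to double, and what comes back is a raw double:
    double r = std::sqrt(quantity<SI::velocity>(9.0*SI::meters_per_second)
                         * SI::seconds / SI::meters);

    // A library-supplied sqrt overload, found by argument-dependent lookup,
    // could instead be called unqualified and return a dimensionless
    // quantity, so the value would still print as "3 dimensionless".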

Dave Steffen wrote:
Voice from the back of the room:
a) It might be handy, though, to have (for example) a log function that enforces unitless quantities. For example,
log ( velocity * time / meters )
is fine, but if you forget to multiply by the time, you get a helpful error message.
There's just plain going to be times when you have to break out of the dim/unit analysis model. For instance, in fluid flow analysis, what I do, you often have empirical functions...guesswork...it is not at all uncommon to take a quantity to some variable power that might itself be a quantity. There's no way to enforce dimensions on this. You have to break out and go back in and watch carefully what you are doing in these parts of the code. Wrap functions with quantities and do what you have to...just because the function works outside of dimensions doesn't mean it can't accept and return statically dimensioned values...we know what goes in and what comes out.

Noah Roberts wrote:
There's just plain going to be times when you have to break out of the dim/unit analysis model. For instance, in fluid flow analysis, what I do, you often have empirical functions...guesswork...it is not at all uncommon to take a quantity to some variable power that might itself be a quantity. There's no way to enforce dimensions on this.
Is this really right? Are you sure there isn't a constant lurking around that resolves all the dimensions properly? Surely, these formulas have to be adjusted appropriately if you use different units. Can you provide a concrete example?

Engineering approximations are often derived by regression to empirical functional forms, which can result in all kinds of dimensional weirdness, including floating point powers. In principle these could be accommodated within a dimensional analysis framework. In practice, floating point powers are impossible for a compile-time library. On the other hand, we do support rational powers and, for engineering approximations, you might as well use a rational approximation to the powers since it is unlikely that they will be exactly equal to some irrational value... Matthias
There's just plain going to be times when you have to break out of the dim/unit analysis model. For instance, in fluid flow analysis, what I do, you often have empirical functions...guesswork...it is not at all uncommon to take a quantity to some variable power that might itself be a quantity. There's no way to enforce dimensions on this.
Is this really right? Are you sure there isn't a constant lurking around that resolves all the dimensions properly? Surely, these formulas have to be adjusted appropriately if you use different units.
Can you provide a concrete example?
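To make the rational-approximation suggestion concrete (a sketch; pow<N> and root<N> are assumed spellings by analogy with the root<2> mentioned earlier in the thread): an empirical exponent like 0.75 can be written as 3/4, so the dimensions of the result stay statically known.

    quantity<SI::velocity> v(9.0 * SI::meters_per_second);

    // v^(3/4) as an integral power followed by an integral root: the
    // dimensional analysis remains a compile-time computation, which an
    // irrational exponent such as sqrt(2) can never be.
    std::cout << root<4>( pow<3>(v) ) << std::endl;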

Matthias, Matthias Schabel wrote:
Engineering approximations are often derived by regression to empirical functional forms, which can result in all kinds of dimensional weirdness, including floating point powers. In principle these could be accommodated within a dimensional analysis framework. In practice, floating point powers are impossible for a compile-time library. On the other hand, we do support rational powers and, for engineering approximations, you might as well use a rational approximation to the powers since it is unlikely that they will be exactly equal to some irrational value...
I am still skeptical for the need of anything fancy here. In my experience, fractional powers of units/dimensions are needed only if you need to use such units in the interface. For example, in finance, volatility, which is in units of time raised to the negative 1/2 power, is a common input or output parameter. But if the fractional powers occur only within the calculation of a formula, then I don't see the need for fractional dimensions. The (appropriate) trick is to convert anything that is going to be raised to a weird power into a unitless quantity before you put it into the power function.

Let me concoct an example: Suppose you have a funky force field F that depends on distance and velocity in some weird way. In particular, suppose that if distance is in meters and velocity is in meters per second, you've decided that the force in newtons is given (approximately) by

F(d, v) = d^{1/3} / v^{sqrt(2)} - v^{-5/4}

You want to code this formula up so that you can input each of the quantities in different units from the original ones. The trick is to rewrite it as:

F(d, v) = [(d/d_0)^{1/3} / (v/v_0)^{sqrt(2)} - (v/v_0)^{-5/4}] * F_0

where d_0 = 1 meter, v_0 = 1 meter/second, and F_0 = 1 newton. Then if your library has implicit unit conversions, the formula will be properly calculated, no matter what units the input values d and v are in.

But surely I'm not telling you anything new? I'm pretty sure I learned tricks like this from physicists. So I still need an example where this approach does not work or where the extra divisions cause a serious problem.

Deane
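A sketch of the rewritten formula in the quantity style used elsewhere in this thread (quantity<SI::length>, quantity<SI::force>, and SI::newtons are assumed names; dividing by a unit quantity yields a dimensionless value that converts implicitly to double, as shown above):

    #include <cmath>
    // mcs::units headers and using-declarations omitted

    quantity<SI::force> F(const quantity<SI::length>& d,
                          const quantity<SI::velocity>& v)
    {
        const double dr = d / (1.0 * SI::meters);            // d / d_0
        const double vr = v / (1.0 * SI::meters_per_second); // v / v_0

        // the weird powers are taken on plain doubles...
        const double f = std::pow(dr, 1.0/3.0) / std::pow(vr, std::sqrt(2.0))
                       - std::pow(vr, -1.25);

        // ...and the result is re-dimensioned by F_0 = 1 newton
        return f * SI::newtons;
    }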

AMDG Deane Yang <deane_yang <at> yahoo.com> writes:
F(d, v) = [(d/d_0)^{1/3} / (v/v_0)^{sqrt(2)} - (v/v_0)^{-5/4}] * F_0
where d_0 = 1 meter, v_0 = 1 meter/second, and F_0 = 1 newton.
Then if your library has implicit unit conversions, the formula will be properly calculated, no matter what units the input values d and v are in.
But surely I'm not telling you anything new? I'm pretty sure I learned tricks like this from physicists. So I still need an example where this approach does not work or where the extra divisions cause a serious problem.
Deane
If d_0, v_0, and F_0 are units instead of quantities, everything will work perfectly and there will be no extra divisions at runtime when the quantity is already in the correct system.

I can think of two cases where temporarily bypassing quantity can be useful. The first is when you already have a function that operates on the raw value_type. Then you can simply write an overload taking a quantity which forwards to that function. The second case is when the value_type is a complex UDT. In this case it may be possible to write the function in a much more efficient way by using it directly. Of course, the usual caveats about optimization apply.

In Christ,
Steven Watanabe
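A sketch of the first case (the legacy function and the quantity<SI::force>/SI::newtons names are made up for illustration; the point is the overload that strips and re-attaches the units):

    // existing routine working on raw doubles: metres in, newtons out
    double legacy_force(double metres);

    // quantity overload that forwards to it
    quantity<SI::force> legacy_force(const quantity<SI::length>& d)
    {
        return legacy_force(d / (1.0 * SI::meters)) * SI::newtons;
    }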

Hi Deane,
I am still skeptical for the need of anything fancy here. In my experience, fractional powers of units/dimensions are needed only if you need to use such units in the interface. For example, in finance, volatility, which is in units of time raised to the negative 1/2 power, is a common input or output parameter.
I agree with you in principle that many of the fancy aspects are not strictly necessary. However, we've designed the library with the "principle of least surprise" in mind - anything that is reasonable and can be implemented is allowed, and anything that is allowed should behave in the most reasonable/expected way.
Let me concoct an example: Suppose you have a funky force field F that depends on distance and velocity in some weird way. In particular, suppose that if distance is in meters and velocity is in meters per second, you've decided that the force in newtons is given (approximately) by
F(d, v) = d^{1/3} / v^{sqrt(2)} - v^{-5/4}
You want to code this formula up so that you can input each of the quantities in different units from the original ones.
The trick is to rewrite it as:
F(d, v) = [(d/d_0)^{1/3} / (v/v_0)^{sqrt(2)} - (v/v_0)^{-5/4}] * F_0
where d_0 = 1 meter, v_0 = 1 meter/second, and F_0 = 1 newton.
Then if your library has implicit unit conversions, the formula will be properly calculated, no matter what units the input values d and v are in.
Of course this is fine, but it requires that you rewrite the equation, which creates the potential for errors by itself. In addition, as Steven pointed out, you may not have control of the function.
But surely I'm not telling you anything new? I'm pretty sure I learned tricks like this from physicists. So I still need an example where this approach does not work or where the extra divisions cause a serious problem.
As a physicist, of course I know these tricks. However, there are issues of efficiency to consider - if the equation you wrote above is expressed in SI units, then it is possible to do the de-dimensionalized calculation with no computational overhead above the actual function evaluation (that is, the divisions by units don't incur additional overhead). However, if you pass non-SI units to the function, you will incur unit conversion overhead.

In addition, the library we're proposing is trying to provide three things simultaneously:

1) safety, by performing rigorous unit checking on expressions that are as close to arbitrary as possible
2) convenience of expressing dimensional equations in any sensible way
3) zero runtime overhead

While there are limits to what can be accomplished, I think we've actually done a pretty good job of meeting these objectives...

Matthias

Deane Yang wrote:
Noah Roberts wrote:
There's just plain going to be times when you have to break out of the dim/unit analysis model. For instance, in fluid flow analysis, what I do, you often have empirical functions...guesswork...it is not at all uncommon to take a quantity to some variable power that might itself be a quantity. There's no way to enforce dimensions on this.
Is this really right? Are you sure there isn't a constant lurking around that resolves all the dimensions properly? Surely, these formulas have to be adjusted appropriately if you use different units.
Can you provide a concrete example?
Formula for Fluid Compressibility through a Venturi Tube:

Y = {[k t^(2/k) / (k-1)] [(1 - b^4) / (1 - b^4 t^(2/k))] [(1 - t^((k-1)/k)) / (1 - t)]}^0.5

With k being the SP heat ratio of the fluid passing through the venturi. Static dimensional analysis is impossible here. In fact, dimensional analysis at all just isn't appropriate in these odd cases.

Noah Roberts wrote:
Deane Yang wrote:
Can you provide a concrete example?
Formula for Fluid Compressibility through a Venturi Tube:
Y = {[k t^(2/k) / (k-1)] [(1 - b^4) / (1 - b^4 t^(2/k))] [(1 - t^((k-1)/k)) / (1 - t)]}^0.5
With k being the SP heat ratio of the fluid passing through the venturi. Static dimensional analysis is impossible here. In fact dimensional analysis at all just isn't appropriate in these odd cases.
So you're saying that the dimension/unit library does *not* need to worry about formulas like this, right? If so, my comments below are off-topic. But I'm still a skeptic. I don't see why the trick I outlined before can't be played here, too. Tell me what each variable means and what units it is in (if any). Or give me an online reference for this formula.

On 3/6/07, Deane Yang <deane_yang@yahoo.com> wrote:
Noah Roberts wrote:
Deane Yang wrote:
Can you provide a concrete example?
Formula for Fluid Compressibility through a Venturi Tube:
Y = {[k t^(2/k) / (k-1)] [(1 - b^4) / (1 - b^4 t^(2/k))] [(1 - t^((k-1)/k)) / (1 - t)]}^0.5
With k being the SP heat ratio of the fluid passing through the venturi. Static dimensional analysis is impossible here. In fact dimensional analysis at all just isn't appropriate in these odd cases.
So you're saying that the dimension/unit library does *not* need to worry about formulas like this, right? If so, my comments below are off-topic.
But I'm still a skeptic. I don't see why the trick I outlined before can't be played here, too. Tell me what each variable means and what units it is in (if any). Or give me an online reference for this formula.
It appears that he's referring to the equation given here: http://en.wikipedia.org/wiki/Orifice_plate Look for the equation under the heading "Flow of gases through an orifice". Above it and below it are references to the variable units. --Michael Fawcett

Michael Fawcett wrote:
On 3/6/07, Deane Yang <deane_yang@yahoo.com> wrote:
Noah Roberts wrote:
Deane Yang wrote:
Can you provide a concrete example? Formula for Fluid Compressibility through a Venturi Tube:
Y = {[k t^(2/k) / (k-1)] [(1 - b^4) / (1 - b^4 t^(2/k))] [(1 - t^((k-1)/k)) / (1 - t)]}^0.5
With k being the SP heat ratio of the fluid passing through the venturi. Static dimensional analysis is impossible here. In fact dimensional analysis at all just isn't appropriate in these odd cases.
So you're saying that the dimension/unit library does *not* need to worry about formulas like this, right? If so, my comments below are off-topic.
What I'm saying is that there are just times when you'll have to break out. You'll have to get the value out of the quantity, use it, and put the result into a quantity with the dimensions it's supposed to have. There's just no way around it that I can see. In other words I don't think there's much use in trying to solve ALL problems that might come up, especially wrt exponents.
But I'm still a skeptic. I don't see why the trick I outlined before can't be played here, too. Tell me what each variable means and what units it is in (if any). Or give me an online reference for this formula.
It appears that he's referring to the equation given here:
Could be. It's the Fluid Compressibility function for a Venturi Tube or Nozzle shaped orifice. I'm not an engineer...I just do what I'm told on that end and ask questions /if I need to/.

Y = fluid compressibility factor
t = fluid temp (F I believe)
b = beta - orifice/pipe size
k = SP heat
Look for the equation under the heading "Flow of gases through an orifice". Above it and below it are references to the variable units.
The only variable that has units is temperature. The rest are ratios. But in that EQ, t^(2/k) is the problem term: k is an unknown, so you can't statically create a result type for that EQ. You have to take the temp value out and use it as a double, and then plug the compressibility into a dimensional quantity (it has none, but it's good to have it in one that's labeled as such). There are other occasions when you might plug that value into a mass flow rate or something.

You could say that t^x is a dimensionless quantity and tell people to multiply that by 1 * whatever dimensions they expect. I don't like that idea, though, because it doesn't really convey the importance of getting it right the way breaking out of the static system does. When you have to cast out of the quantity system into doubles, you're going to be careful.

This problem can be solved with runtime dimensional analysis...unfortunately it doesn't do it right most of the time, because we don't care what the true dimension of t^x is...it's empirical and doesn't use dimensional analysis.
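A sketch of that break-out pattern for the compressibility formula quoted above (plain doubles in; the caller is responsible for pulling the value out of its quantity and, if desired, wrapping Y back into a labelled dimensionless quantity; variable meanings follow the list above):

    #include <cmath>

    // Y = {[k t^(2/k)/(k-1)] [(1-b^4)/(1-b^4 t^(2/k))] [(1-t^((k-1)/k))/(1-t)]}^0.5
    double compressibility(double t, double b, double k)
    {
        const double t2k = std::pow(t, 2.0 / k);
        const double b4  = std::pow(b, 4.0);

        return std::sqrt( (k * t2k / (k - 1.0))
                        * ((1.0 - b4) / (1.0 - b4 * t2k))
                        * ((1.0 - std::pow(t, (k - 1.0) / k)) / (1.0 - t)) );
    }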

Noah Roberts wrote:
You could say that t^x is a dimensionless quantity and tell people to multiply that by 1 * whatever dimensions they expect. I don't like that idea, though, because it doesn't really convey the importance of getting it right the way breaking out of the static system does. When you have to cast out of the quantity system into doubles, you're going to be careful.
Maybe I'm wrong on that assumption. Thinking more about it, that might be just what is needed. Assuming some EQ t = t^x you could do:

    // this is total make-believe...
    quantity<temp>  t = 5 * F;
    quantity<nodim> x = 3.14;
    t = pow(t, x) * temp() * F;  // omit last part?

Worth thinking about anyway.

AMDG Brook Milligan <brook <at> biology.nmsu.edu> writes:
It seems that the mcs-units library is moving along well. I have one question, though, about scope and whether the following should be within the scope of this library or within the scope of another one.
In building probability and likelihood models, I often encounter the issue of transforming from either to their logarithms and vice versa. Code would be much simpler and less error-prone if the appropriate domain was defined, the natural arithmetic operators were used, and the conversions occurred (or could be prevented to catch errors) as necessary, but without necessarily requiring explicit bookkeeping to distinguish these quantities and their logarithms.
This suggests a (small) set of types with appropriate conversions, much as in the units library. However, probabilities are of course unitless.
Should this idea be developed as a separate library or does it make any sense to fold it into the existing units framework?
It would be better to develop a separate library. Units supports only linear conversions. You need multiplication to yield the same type as the operands; Units usually gives a different type. The specializations that you would need to define to make Units do what you want would require as much code as if you defined the classes by hand.
Cheers, Brook
In Christ, Steven Watanabe
participants (7)

- Brook Milligan
- Dave Steffen
- Deane Yang
- Matthias Schabel
- Michael Fawcett
- Noah Roberts
- Steven Watanabe