
The review for the Quantitative Units library, submitted by Matthias Schabel and Steven Watanabe, begins today, March 26, and ends April 4.
I apologize to the authors and other reviewers in advance for not actively participating in the discussion over the review period, nor for providing as detailed a review report as I feel this submission deserves; project deadlines being what they are, however ... you know the drill. I'd also like to thank the authors for "diving into the breach" on this one; there has been no end of discussion of dimensional analysis and units on the list over the last few years. Many proposals have come and gone, with much heated discussion of their various merits. I commend the authors on their courage and fortitude in rising to the challenge.

WHERE I'M COMING FROM
=====================

My background is in theoretical and experimental particle physics. I have done extensive scientific simulation work, electronic data acquisition system design and implementation, software control systems, and data analysis. I'm a "tight loop bit twiddler" ... one of the wackos who really needs zero-overhead, compile-time dimensional and unit analysis, because I can't afford a performance hit ... "reinterpret_cast is your friend, not your enemy".

I would find the ability to define non-standard unit systems invaluable (I'm one of those "natural units for the problem" types ... CGS for E&M, a solar mass/AU system for astrophysics, "hbar=c=1" for particle physics, etc.). While I understand the usefulness of runtime conversions to others, they are not useful to my work. Likewise, I don't need anything beyond simple debugging output support, but again, I understand the potential usefulness to others of configurable unit IO support. To me, layout compatibility with the underlying fundamental type is quite useful.

This is all just to say that I think the library _as described in the documentation_ is a good fit for my needs. That said, I think the library deserves to be reviewed in light of what it claims to be, not what we might want it to be.
I do think it is reasonable to consider whether the library is "flexible" enough to support extensions that address those other needs.

MY REVIEW
=========
What is your evaluation of the design?
I really like the design. I think it comes as close to the standard domain notation as it is possible to get in compile time C++. The interface seems clean, and conversions between units and unit systems seem very nicely specified. The issues with pow<R> and root<R> are unfortunate, but that is a language shortcoming, not a library shortcoming. I'm not thrilled with the "quantity_cast" business, but my understanding from earlier mailing list discussion is that this corner of the library will see significant interface changes, so I won't belabor this issue.
What is your evaluation of the implementation?
I fear that I have not given the implementation sufficient study to make useful comments, though I do note that my favorite CODATA constant (Fermi's constant) is missing ... that itself is nearly inexcusable :-)

I did play with example 14 (performance measurements) a bit, on Fedora 6, g++ 4.1.1, and an Intel T7400. Unoptimized, the quantity versions run an order of magnitude more slowly than the double versions. With any optimization at all, there is no runtime difference (2.6 s for both), satisfying the zero-overhead guarantee.
What is your evaluation of the documentation?
Overall, it's very well done. I very much like the discussion of dimensional analysis ... it's clear, concise, and at the right level of mathematical sophistication. It beats the pants off the discussion in the BIPM's SI brochure, in substantially less space.

I would like to see a warning on example 4 ... the presented "measurement error" type is insufficient for actual use in dealing with measurement errors, since it doesn't deal properly with correlations (quite a hard problem at compile time, I think): assuming x is a measurement<double>, with x.e the error, a correct error propagation would have (2x).e == (x+x).e == 2(x.e), whereas this class would get the addition wrong. Additionally, it is not well protected against internal overflow in the various arithmetic operations. But this isn't a review of "measurement<>", and it is overall still a nice toy example of the relevant library functionality and interface.

I'm not sure that examples 7 or 8 bring much more to the docs (not that there's really anything "wrong" with them, per se). They aren't "realistic", as they can't be used in production code, and no solutions to their drawbacks are presented. As toy examples they are fine, but by this point in the documentation there are already a number of excellent, clarifying toy examples. Further, example 9 DOES show how to solve the drawbacks of example 8.

Examples 14, 15, and 18 could use more discussion :-) Particularly example 14, since the results seem to be highly dependent on the quality of compiler optimizations.
What is your evaluation of the potential usefulness of the library?
I think it will be extremely useful to a broad group of users, particularly in scientific simulations.
Did you try to use the library? With what compiler? Did you have any problems?
Regrettably, not beyond building and compiling some of the examples. I have not had the opportunity to write any code of my own. But all of the examples compiled and ran as expected.
How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
A few hours' study of the documentation and examples, and following the discussion on the mailing list. I have done more in-depth study of some of the alternative libraries (Walter Brown's SIunits and Andy Little's pqs); while those are both good libraries, I would be more likely to actually use the current library, because its interface and flexibility suit my tastes better.
Are you knowledgeable about the problem domain?
Yes. I'm involved in experiments that do precision measurement, and a good understanding of dimensional analysis, units, and the underlying standards-based approaches to their measurement and definition is important to my work.
Do you think the library should be accepted as a Boost library? Be sure to say this explicitly so that your other comments don't obscure your overall opinion.
I vote FOR acceptance. And I got my vote in this time before the review period ended in my time zone :-)

--
-------------------------------------------------------------------------------
Kevin Lynch				voice: (617) 353-6025
Physics Department			Fax: (617) 353-9393
Boston University			office: PRB-361
590 Commonwealth Ave.			e-mail: krlynch@bu.edu
Boston, MA 02215 USA			http://budoe.bu.edu/~krlynch
-------------------------------------------------------------------------------

Kevin, I appreciate your efforts to get a review in under the wire - and fully understand the deadline phenomenon. This "labor of love" has expanded to take much more time than I ever intended, and I'm glad that at least a subset of prospective users like the end result...
find the ability to define non-standard unit systems invaluable (I'm one of those "natural units for the problem" types ... CGS for E&M, solar mass/AU system for astrophysics, "hbar=c=1" for particle physics, etc).
If you ever get the chance to put a few of these systems together, I'd be happy to add them to the distro. I'm reluctant to do it myself because they are mostly outside the realm of my personal experience and I'd rather not screw them up... I think an extended set of CGS electromagnetic units (esu/emu/gaussian) with associated EM functions would be a great application of the library, since this is a clear case where getting units wrong can lead to big trouble.
I'm not thrilled with the "quantity_cast" business, but my
In the sandbox version, quantity_cast is only used for the (unsafe) operation of direct mutating access to the quantity value_type...other uses have been removed.
make useful comments. Although I note that my favorite CODATA constant (Fermi's constant) is missing ... that itself is nearly inexcusable :-)
Ouch...I only left it out because NIST doesn't provide the SI value in their tables, just GeV... I'll get right on it ;^)
I would like to see a warning on example 4 ... the presented "measurement error" type is insufficient for actual use in dealing with measurement errors, since it doesn't deal properly in correlations (quite a hard problem at compile time, I think): assuming x is a measurement<double>, with x.e the error, a correct error propagation would have (2x).e == (x+x).e == 2(x.e) whereas this class would get the addition wrong. Additionally, it is not well protected against internal overflow in the various arithmetic operation. But this isn't a review of "measurement<>", and it is overall still a nice toy example of the relevant library functionality and interface.
I'm actually thinking of including measurement<> with the library itself as a way to provide both physical constants and their associated measurement errors (from CODATA, in particular), so your comments are very timely. I was planning on tweaking the implementation, but I had naively failed to consider the issue of correlated errors... I'll have to see what's possible with a reasonable amount of effort. I certainly don't want to try to come up with a general solution for dealing with error covariances - any such solution would be messy and complex - but the self correlation case is clearly important... If you have any other ideas/input, it would be most welcome.
Examples 14, 15 and 18 could use more discussion :-) Particularly example 14, since the results seem to be highly dependent on the quality of compiler optimizations.
This will be forthcoming. We will have substantial revamping of the documentation to do. Sigh. Matthias

AMDG

Matthias Schabel <boost <at> schabel-family.org> writes:
I'm actually thinking of including measurement<> with the library itself as a way to provide both physical constants and their associated measurement errors (from CODATA, in particular)
I'm not sure that I'm really keen on this. measurement really doesn't belong in this library. In Christ, Steven Watanabe

I'm actually thinking of including measurement<> with the library itself as a way to provide both physical constants and their associated measurement errors (from CODATA, in particular)
I'm not sure that I'm really keen on this. measurement really doesn't belong in this library.
How about just a wrapper that provides value() and error() accessors and auto conversion to value_type, but no algebra or other functionality? Matthias

AMDG

Matthias Schabel <boost <at> schabel-family.org> writes:
How about just a wrapper that provides value() and error() accessors and auto conversion to value_type, but no algebra or other functionality?
I'd prefer to leave the constants as plain quantity<double>s on the grounds of simplicity. In Christ, Steven Watanabe

Steven Watanabe writes:
AMDG
Matthias Schabel <boost <at> schabel-family.org> writes:
I'm actually thinking of including measurement<> with the library itself as a way to provide both physical constants and their associated measurement errors (from CODATA, in particular)
I'm not sure that I'm really keen on this. measurement really doesn't belong in this library.
<lurk mode off>

A voice from the back of the room seconds that motion. The work I do is heavily involved with measurements, and there are all kinds of interesting, difficult, and even unsolved issues. I suggest that a measurement library would be built on top of an existing units library, not as part of it. I really don't think you want to open this can of worms yet. :-)

<sits back down ... lurk mode on>

----------------------------------------------------------------------
Dave Steffen, Ph.D.                     Fools ignore complexity.
Software Engineer IV                    Pragmatists suffer it.
Numerica Corporation                    Some can avoid it.
ph (970) 461-2000 x227                  Geniuses remove it.
dgsteffen<at>numerica<dot>us                      -- Alan Perlis
----------------------------------------------------------------------

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Dave Steffen
Sent: 05 April 2007 18:00
To: boost@lists.boost.org
Subject: Re: [boost] units review
I'm not sure that I'm really keen on this. measurement really doesn't belong in this library.
A voice from the back of the room seconds that motion.
The work I do is heavily involved with measurements, and there are all kinds of interesting, difficult, and even unsolved issues. I suggest that a measurement library would be built on top of an existing units library, not as part of it.
I really don't think you want to open this can of worms yet. :-)
It's a REALLY, REALLY interesting can, though - but I agree we should learn to walk with the units library before trying to run with measurements and their uncertainty.

Paul

---
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
+44 1539561830 & SMS, Mobile +44 7714 330204 & SMS
pbristow@hetp.u-net.com

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Paul A Bristow
Sent: Thursday, April 05, 2007 12:10 PM
To: boost@lists.boost.org
Subject: Re: [boost] units review
It's a REALLY, REALLY interesting can though - but I agree we should wait to walk with the units library before trying to run with measurements and their uncertainty.
This whole issue is a.k.a. error propagation? Eric.

Matthias Schabel wrote:
I'm actually thinking of including measurement<> with the library itself as a way to provide both physical constants and their associated measurement errors (from CODATA, in particular), so your comments are very timely. I was planning on tweaking the implementation, but I had naively failed to consider the issue of correlated errors... I'll have to see what's possible with a reasonable amount of effort. I certainly don't want to try to come up with a general solution for dealing with error covariances - any such solution would be messy and complex - but the self correlation case is clearly important... If you have any other ideas/input, it would be most welcome.
I regret that I don't have any good ideas. Solving this is a problem near and dear to me ... in my early forays into C++, I wrote the equivalent of your measurement<> class, spending way too much time on it before I realized that it certainly wasn't going to do the right thing. The experiment I currently expend most of my effort on has an ultimate goal of measuring the positive muon lifetime at 1 ppm (hence my love for the Fermi constant :-) Keeping very careful track of errors and correctly propagating them through calculations is critical for us.

Long story short, I've spent a lot of time thinking very hard about how one might go about doing so "correctly" in pure C++, and I haven't found a way to do it (note, however, that I have no real experience or facility in template metaprogramming ... most of my scientific programming effort has gone into floating point and numerical issues, and interacting with DAQ hardware). Even ignoring the parameter covariances (by assuming all variable errors are independent, or equivalently, a diagonal covariance matrix), I think it's a very, very hard problem. The very simple example I posted earlier:

measurement<> x(1.,1.);
assert( 2*x == x+x );

will assert, but that's just the tip of the iceberg. A robust, correct implementation would need to be able to track which measurement instance it was looking at _across function calls_ and perhaps even across translation units. None of the following should assert ... all of them will, with either your implementation or mine:

measurement<> x(1.,1.);
measurement<> f(measurement<> y, measurement<> z) { return y+z; }
measurement<> y = f(x,x) + f(x,x); // the two calls are not independent
assert( y.e == 4*x.e );

measurement<> z = x;   // not independent
measurement<> w = z+x; // not independent
assert( w.e == 2*x.e );

measurement<> a(1.,1.);
extern measurement<> f(measurement<>& b);
measurement<> c = a + f(a); // what happens here?
I think you would need to write the equivalent of a computer algebra system to track what variables are "independent", and you would have to do so across programming boundaries in a non-trivial way. I think this is a problem akin to global compiler optimization, and I'm not sure there's even a way to do it in C++ without significant restrictions. The only way I know to deal with this problem is to use a computer algebra package designed to support this need, and even then, in the packages I know, you have to supply all calculations in the equivalent of a single translation unit. For instance, there's a Mathematica package (the name escapes me) and open-source tools like "fussy", which I use extensively. I'd love for someone to figure out how to do it right, but I'm not going to hold my breath...

Kevin

I think you would need to write the equivalent of a computer algebra system to track what variables are "independent", and you would have to do so across programming boundaries in a non-trivial way.
Kevin, Thanks for the thorough synopsis - clearly, the covariance issue is the least of the challenges here. You're probably right that it is basically impossible to implement in full generality without huge runtime overhead since you would need to track the complete history of operations for every variable... Matthias
participants (6)
-
Dave Steffen
-
Eric Lemings
-
Kevin Lynch
-
Matthias Schabel
-
Paul A Bristow
-
Steven Watanabe