[Review][PQS] Review deadline

The deadline for reviews of Andy Little's Physical Quantities System is the end of this Friday (June 9). So far, we haven't gotten many reviews. If you're thinking about writing one, it would be great if you could finish it up.

Thanks,
Fred Bertsch

Hello! I'm a newbie to the Boost mailing list, but I want to add my comments because I believe Andy's PQS library is very useful for scientific programming. I hope that it gets accepted into Boost, either in its present form, or possibly with some improvements as I'll discuss below.

Unfortunately I haven't had time to examine the code in depth, but I feel very qualified to comment because I have implemented a similar template library as part of my job. (I got the inspiration from Barton & Nackman's book, _Scientific and Engineering C++_.) I can't share the code because it's proprietary, but we have been using it for the last six years in an application that's heavy on physical and geometric calculation and numerical analysis. It has provided all the benefits Andy promises - self-documenting code, compiler detection of errors in physical formulas, elimination of units confusion, etc. The benefits are excellent, and it's easy to use for the most part. It has saved us a lot of headaches. Andy's library looks to be similar in all those respects, and I expect it will provide the same benefits. Many of our design choices are the same. Actually, the PQS library is in many ways much more extensive than my work and should be useful in a broader range of applications.

One important point is that we found that our library was extremely efficient, with all of the work being done at compile-time. A comparison of run-times using the quantities library versus built-in floating-point types showed a speed degradation of only 1% on highly math-intensive code.

What I implemented was more similar to Andy's "t2_quantity" and "t3_quantity," and I find these (his future planned parts of the library) to be the most useful in what I do. I look forward especially to their completion, and I can testify to their utility. If the value of the library is not evident to some people as it stands now, I think a vision of the completed library may make it more clear. For that reason (and because it's what I'm most familiar with), I'll make a comment about where I think the library is going - realizing that I'm departing briefly from the topic of reviewing the library as it is today. Then I'll come back to specific comments on the current code.

In my work, the advantage of the "t2_quantity" is that it provides an abstraction like a "length" that's independent of the units used to represent it - i.e., 1000 m is the *same* as 1 km, and a variable of type "t2_quantity" could contain that length as a value. All my scientific formulas then work on values like length and time rather than meters and seconds. A specific choice of units to represent the values is then only needed for input (e.g., from numeric values in a text file or hard-coded constants) or output (e.g., for human-readable results). All the computations become independent of units, and all numeric values input or output are enforced by the compiler to have documented units. Code is clear and simple, like this:

    length x = 1000.0 * meter;
    velocity v = 2.0 * nautical_mile / hour;
    ...
    time t = x / v;   // typically LOTS of computations here
    ...
    cout << "Time in seconds = " << t / second;

I think the currently implemented PQS library is a good start and provides most of the same benefits to the user. The unit-specific types of the "t1_quantity" probably in some cases can also avoid some of the multiplication/division that would occur with my example above.
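To make the pattern concrete, here is a minimal sketch of the kind of thing I mean - hypothetical names only, not my production code and not PQS itself:

    // Sketch: dimension carried as meter/second exponents in the type;
    // the numeric value is always stored internally in SI units; named
    // unit constants are how numbers get in and out of the system.
    #include <iostream>

    template <int M, int S>              // exponents of meter and second
    struct quantity { double val; };     // val is always in SI units

    typedef quantity<1, 0>  length;
    typedef quantity<0, 1>  duration;    // "time" would clash with <ctime>
    typedef quantity<1, -1> velocity;

    // scaling by a plain number keeps the dimension
    template <int M, int S>
    quantity<M, S> operator*(double k, quantity<M, S> q)
    { return quantity<M, S>{k * q.val}; }

    // division subtracts the exponents at compile time
    template <int M1, int S1, int M2, int S2>
    quantity<M1 - M2, S1 - S2> operator/(quantity<M1, S1> a, quantity<M2, S2> b)
    { return quantity<M1 - M2, S1 - S2>{a.val / b.val}; }

    // dividing equal dimensions yields quantity<0,0>: a bare number
    inline double raw(quantity<0, 0> q) { return q.val; }

    // unit constants - the only sanctioned entry and exit points
    const length   meter         = {1.0};
    const length   nautical_mile = {1852.0};
    const duration second        = {1.0};
    const duration hour          = {3600.0};

    int main() {
        length   x = 1000.0 * meter;
        velocity v = 2.0 * nautical_mile / hour;  // held as m/s internally
        duration t = x / v;                       // one runtime division
        std::cout << "Time in seconds = " << raw(t / second) << '\n';
    }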
Returning to the current library per se, I want to add some specific comments to some of those that have already been made in the reviews:

There's been quite a bit of discussion about names. First, the name of the library. I wouldn't mind a different name than PQS, but I'd prefer to stay away from calling it "units" - precisely because of what I said above about "t2_quantity" - the advantage is actually in the ability to make your physical formulas units-INDEPENDENT. Obviously, units are still needed and handling them is a big part of the library, but the part of value is in making the units more transparent to the users. So to focus on the units (meters, seconds) versus the dimensions/quantities (length, time) would seem misleading in the library name. Then again, it's just a name.

Secondly, the term "historical units" has been suggested to replace "incoherent units." To me, "historical units" suggests they are archaic and no longer in use, and as an American I unfortunately know better. Even in certain scientific applications, units like feet and nautical miles, for instance, are still in common use. "Traditional units" was also suggested - I like that one better.

I have to agree with the other reviewers that the names "t1_quantity," "t2_quantity," and "t3_quantity" are not very descriptive. Let me explain the naming scheme I used - maybe it will help get us thinking about better names. In my case, I tried to stick to one class template (the equivalent of t2_quantity), but I found a need for a type that did its dimension-checking at runtime so that the dimensions were not part of the type itself. This came up whenever I had an aggregate class (such as a covariance matrix, or a set of polynomial coefficients) that needed to contain a mix of values with different physical dimensions. Even though (in my application) the dimensions of each element of the aggregate are known a priori, I couldn't find another way to implement those aggregates that was time-efficient without also being impossibly cumbersome. So, because of this property of being able to have different kinds of quantities "mixed" within an aggregate, I called this type "mix_measure" (i.e., t3_quantity), while the other type was simply "measure" (for t2_quantity). I'm not necessarily proposing you use my names, but I'm saying I think it's possible to find names that do reflect something of their differences in functionality or purpose.

Lastly, it does seem from some of the other reviews that the documentation could be more clear. I'm disappointed that the purpose and the value and ease of use of the library are not more obvious to people from reading the documentation. It's clear to me because I'm very familiar with the concept and I've used it successfully. One example is the questions about adding monetary conversions or pixel units to the library. I think PQS is definitely the wrong place to put these things. It's more than just a units conversion tool. The most obvious difference is that I've never heard anyone talk in units like dollars squared. The PQS library provides functionality meant for scientific computations involving things like cubic meters per second squared. Dollars (and to a lesser extent, pixels) are just a linear thing that does not make use of the machinery afforded by this library.

One final note: I do think it's important that this library eventually be extended and completed, and also that the various parts (t1_quantity, t2_quantity, etc.) work well together and have similar interfaces. It's possible that in implementing the future parts of the library, Andy will find he needs to change some things in the current code to make this happen. I'm not too familiar with the Boost process for changing existing libraries. So I don't know if there would be any value in putting the acceptance on hold until the other parts of the library are complete. But I would hate to see this important work get dropped, and I recommend its acceptance, with this one caveat about possibly accepting a more extended version instead at a later date. However, I think the library is quite useful as it stands now, and it provides a good foundation for the future work.

Sincerely,

-- Leland Brown

all numeric values input or output are enforced by the compiler to have documented units. Code is clear and simple, like this:
length x = 1000.0 * meter;
velocity v = 2.0 * nautical_mile / hour;
...
time t = x / v;   // typically LOTS of computations here
...
cout << "Time in seconds = " << t / second;
I like this notation. I'm not sure, however, why t = x / v should involve lots of computation. If length, velocity, and time are implemented as I proposed in my other email, x would contain the length in meters, v the velocity in meters per second, and t the time in seconds. Thus, one division. The conversions into SI units would be done in the assignments above.

Regards
-Gerhard

--
Gerhard Wesp
ZRH office voice: +41 (0)44 668 1878
ZRH office fax:   +41 (0)44 200 1818
For the rest I claim that raw pointers must be abolished.

Hi Leland, "Leland Brown" wrote
Hello! I'm a newbie to the Boost mailing list, but I want to add my comments because I believe Andy's PQS library is very useful for scientific programming. I hope that it gets accepted into Boost, either in its present form, or possibly with some improvements as I'll discuss below.
Great. Thanks!
Unfortunately I haven't had time to examine the code in depth, but I feel very qualified to comment because I have implemented a similar template library as part of my job. (I got the inspiration from Barton & Nackman's book, _Scientific and Engineering C++_.) I can't share the code because it's proprietary, but we have been using it for the last six years in an application that's heavy on physical and geometric calculation and numerical analysis. It has provided all the benefits Andy promises - self-documenting code, compiler detection of errors in physical formulas, elimination of units confusion, etc. The benefits are excellent, and it's easy to use for the most part. It has saved us a lot of headaches. Andy's library looks to be similar in all those respects, and I expect it will provide the same benefits. Many of our design choices are the same. Actually, the PQS library is in many ways much more extensive than my work and should be useful in a broader range of applications.
One important point is that we found that our library was extremely efficient, with all of the work being done at compile-time. A comparison of run-times using the quantities library versus built-in floating-point types showed a speed degradation of only 1% on highly math-intensive code.
FWIW, one area that is lacking in the documentation/examples/tests is some comparative performance testing versus built-in types. Another thing on the to-do list.
What I implemented was more similar to Andy's "t2_quantity" and "t3_quantity," and I find these (his future planned parts of the library) to be the most useful in what I do. I look forward especially to their completion, and I can testify to their utility. If the value of the library is not evident to some people as it stands now, I think a vision of the completed library may make it more clear. For that reason (and because it's what I'm most familiar with), I'll make a comment about where I think the library is going - realizing that I'm departing briefly from the topic of reviewing the library as it is today. Then I'll come back to specific comments on the current code.
In my work, the advantage of the "t2_quantity" is that it provides an abstraction like a "length" that's independent of the units used to represent it - i.e., 1000 m is the *same* as 1 km, and a variable of type "t2_quantity" could contain that length as a value. All my scientific formulas then work on values like length and time rather than meters and seconds. A specific choice of units to represent the values is then only needed for input (e.g., from numeric values in a text file or hard-coded constants) or output (e.g., for human-readable results). All the computations become independent of units, and all numeric values input or output are enforced by the compiler to have documented units. Code is clear and simple, like this:
length x = 1000.0 * meter;
velocity v = 2.0 * nautical_mile / hour;
...
time t = x / v;   // typically LOTS of computations here
...
cout << "Time in seconds = " << t / second;
That is very reminiscent of the approach taken by a pioneering units library by Walter Brown: http://www.oonumerics.org/tmpw01/brown.pdf

This approach can be simulated by the PQS library. There is an example in <libs/examples/clcpp_response.cpp>. (The example was originally written in response to an article on comp.std.c++, hence the title.) The example compares (I hope) types that don't have units as part of the type as opposed to those that do. With the unitless type, accuracy in calculations is lost. Demonstrating this another way: if you want to work in units of parsecs and you have a variable representing 1 parsec, with the unitless version you will have a variable whose internal numeric value is 30856780000000000, whereas if the internal value is held in the unit of parsecs the internal value will be just 1. However, when implementing the (so-called) t2_quantity with runtime-modifiable units, this type of problem must occur.

The t2_quantity was implemented in a previous implementation of pqs. I removed it from the tentative Boost version for lack of time. The t2_quantity was implemented in terms of the t1_quantity. Basically, the t2_quantity held a pointer to an abstract base class which always indirected to a t1_quantity member. This meant that the t1_quantity output function overloads and conversion functions were available. I think the functionality is quite similar to Boost Variant, FWIW.

Typical usage might be like this, extrapolated from an example from a previous version of pqs:

    namespace pqs = boost::pqs;

    int main()
    {
        typedef pqs::t2_quantity<pqs::length::abstract_quantity> t2_length;

        // default ctor: units of meters, numeric value of 0
        t2_length();

        // t2_quantity initialised by t1_quantity
        t2_length length(pqs::length::ft(1));
        std::cout << "get units to string : " << length.units_str() << '\n';
        std::cout << "get numeric_value   : " << length.numeric_value() << '\n';

        // set the numeric_value of the quantity
        length.set_numeric_value(2.0);

        // change t2_quantity internal units - pq value scaled to new units
        length.set_units<pqs::length::um>();
        // alt version using an expression
        length.set_units(pqs::length::mm());

        // assign t2_quantity from t1_quantity of arbitrary units
        // internal value changed, but units not changed (here miles)
        length = pqs::length::mi(1);

        // install t2_quantity from t1_quantity of arbitrary units
        // internally the pq is replaced by the new type
        length.install(pqs::length::mi(1));

        // return a t1_quantity cast from t2_quantity
        // t2_quantity value unaffected
        pqs::length::yd yds = length;
    }
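Much simplified, the indirection scheme looked something like this sketch (hypothetical names only - the ft/mi stand-ins below are not the real pqs unit classes):

    #include <iostream>
    #include <memory>
    #include <string>

    struct length_holder {                    // abstract base: erases the unit
        virtual ~length_holder() {}
        virtual double      numeric_value() const = 0;
        virtual std::string units_str() const = 0;
        virtual double      in_meters() const = 0;
    };

    template <class T1Length>                 // wraps one concrete unit type
    struct holder_impl : length_holder {
        T1Length q;
        explicit holder_impl(T1Length q_) : q(q_) {}
        double      numeric_value() const { return q.value; }
        std::string units_str() const { return T1Length::name(); }
        double      in_meters() const { return q.value * T1Length::to_si(); }
    };

    class t2_length {                         // runtime-units quantity
        std::shared_ptr<length_holder> p_;
    public:
        template <class T1Length>
        t2_length(T1Length q) : p_(new holder_impl<T1Length>(q)) {}
        template <class T1Length>
        void install(T1Length q) { p_.reset(new holder_impl<T1Length>(q)); }
        double      numeric_value() const { return p_->numeric_value(); }
        std::string units_str() const { return p_->units_str(); }
        double      in_meters() const { return p_->in_meters(); }
    };

    // two stand-in "t1"-style unit types for the demo
    struct ft { double value;
                static double to_si() { return 0.3048; }
                static std::string name() { return "ft"; } };
    struct mi { double value;
                static double to_si() { return 1609.344; }
                static std::string name() { return "mi"; } };

    int main() {
        t2_length len = ft{1.0};
        std::cout << len.numeric_value() << ' ' << len.units_str() << '\n';
        len.install(mi{1.0});                 // swap internal unit at runtime
        std::cout << len.in_meters() << " m\n";
    }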
I think the currently implemented PQS library is a good start and provides most of the same benefits to the user. The unit-specific types of the "t1_quantity" probably in some cases can also avoid some of the multiplication/division that would occur with my example above.
The t1_quantity is designed to be fast, with as much calculation as possible done at compile time rather than runtime. OTOH, it will be interesting to see whether the implementation of the t1_quantity/t2_quantity combo will have the functionality you need. If not, it is very simple to simulate using the t1_quantity, by always using only the default units and not including output functions. This will effectively give you a unitless quantity. Again, examples demonstrating such use will help, I hope.
Returning to the current library per se, I want to add some specific comments to some of those that have already been made in the reviews:
There's been quite a bit of discussion about names. First, the name of the library. I wouldn't mind a different name than PQS, but I'd prefer to stay away from calling it "units" - precisely because of what I said above about "t2_quantity" - the advantage is actually in the ability to make your physical formulas units-INDEPENDENT. Obviously, units are still needed and handling them is a big part of the library, but the part of value is in making the units more transparent to the users. So to focus on the units (meters, seconds) versus the dimensions/quantities (length, time) would seem misleading in the library name. Then again, it's just a name.
OK. I will obviously need to have a good think about names!
Secondly, the term "historical units" has been suggested to replace "incoherent units." To me, "historical units" suggests they are archaic and no longer in use, and as an American I unfortunately know better. Even in certain scientific applications, units like feet and nautical miles, for instance, are still in common use. "Traditional units" was also suggested - I like that one better.
I kind of like 'traditional units' too...
I have to agree with the other reviewers that the names "t1_quantity," "t2_quantity," and "t3_quantity" are not very descriptive.
Yep.
Let me explain the naming scheme I used - maybe it will help get us thinking about better names. In my case, I tried to stick to one class template (the equivalent of t2_quantity), but I found a need for a type that did its dimension-checking at runtime so that the dimensions were not part of the type itself.
Now this sounds more like the t3_quantity.
This came up whenever I had an aggregate class (such as a covariance matrix, or a set of polynomial coefficients) that needed to contain a mix of values with different physical dimensions. Even though (in my application) the dimensions of each element of the aggregate are known a priori, I couldn't find another way to implement those aggregates that was time-efficient without also being impossibly cumbersome.
This sounds like the sort of problem that pqs needs to be able to deal with.

So, because of this property of being able to have different kinds of quantities "mixed" within an aggregate, I called this type "mix_measure" (i.e., t3_quantity), while the other type was simply "measure" (for t2_quantity). I'm not necessarily proposing you use my names, but I'm saying I think it's possible to find names that do reflect something of their differences in functionality or purpose.
OK. I'm interested in all suggestions re naming ... :-)
Lastly, it does seem from some of the other reviews that the documentation could be more clear. I'm disappointed that the purpose and the value and ease of use of the library are not more obvious to people from reading the documentation. It's clear to me because I'm very familiar with the concept and I've used it successfully.
OK. I think the main problem has been putting the definition of terms section at the start rather than the end, a suggestion that I ignored previously, FWIW ;-)
One example is the questions about adding monetary conversions or pixel units to the library. I think PQS is definitely the wrong place to put these things. It's more than just a units conversion tool. The most obvious difference is that I've never heard anyone talk in units like dollars squared. The PQS library provides functionality meant for scientific computations involving things like cubic meters per second squared. Dollars (and to a lesser extent, pixels) are just a linear thing that does not make use of the machinery afforded by this library.
Yes. These things really need a different library. In both cases (money and pixels) the conversion factors aren't constant. (In the case of pixels, a runtime mode change, for example.)
One final note: I do think it's important that this library eventually be extended and completed, and also that the various parts (t1_quantity, t2_quantity, etc.) work well together and have similar interfaces. It's possible that in implementing the future parts of the library, Andy will find he needs to change some things in the current code to make this happen. I'm not too familiar with the Boost process for changing existing libraries. So I don't know if there would be any value in putting the acceptance on hold until the other parts of the library are complete.
The situation as I understand it is that, if the review manager decides the library should be accepted, he will often add requirements and suggest changes. As the author I also need to consider the needs of existing users, so ideally the interface shouldn't change too much from the current one. However, from watching the progress of other libraries, the (indeterminate) period between acceptance and inclusion in a Boost release can sometimes result in radical modifications. A solution to the current dissatisfaction with names in PQS is going to break the interface, and there isn't any way around that. With regard to extending the library, I intend to add the t2_quantity, t3_quantity, matrix, vector, complex and quaternion.
But I would hate to see this important work get dropped, and I recommend its acceptance, with this one caveat about possibly accepting a more extended version instead at a later date. However, I think the library is quite useful as it stands now, and it provides a good foundation for the future work.
Thanks again for your review.

regards
Andy Little

Leland Brown said: (by the date of Tue, 6 Jun 2006 21:44:02 +0000 (UTC))
Hello! I'm a newbie to the Boost mailing list, but I want to add my comments because I believe Andy's PQS library is very useful for scientific programming. I hope that it gets accepted into Boost, either in its present form, or possibly with some improvements as I'll discuss below.
Hello, according to the Boost Formal Review Process: http://www.boost.org/more/formal_review_process.htm all mailing list members "are encouraged to submit Formal Review comments". Which means that you can, too. I was just surprised to discover it myself, because I thought that only a few "selected people" could submit reviews. Quite to the contrary. Especially because you are so familiar with this topic - if you'd like to see it included, simply write a review according to the guidelines of the Boost Formal Review Process, and vote! (Mostly copy'n'paste of your post, to which I'm replying ;)

-- Janek Kozicki

Sorry, I don't have the time to read the PQS documentation in-depth right now. Still, I'd like to make one suggestion.

I think that handling of dimensional quantities and conversion factors are orthogonal concepts and should be separated. I suggest that for maximum transparency the library should *exclusively* handle quantities expressed in SI units. I'm aware that this "mildly forces" developers to adopt SI units. I consider this a Good Thing.

Conversion factors between non-SI units and SI units should be constant dimensional quantities, e.g. (assuming constructors from double):

    const length foot = .3048; // Meter
    const power european_horse_power = 735.4987; // Watt
    const mass pound = 0.4535924; // Kilogram
    // ...

This way, one could e.g. construct dimensional quantities like this:

    const power deux_chevaux = 2 * european_horse_power;

Non-SI quantities would have to stay "out of the system" in normal floating-point variables:

    length altitude;
    double altitude_ft = altitude / foot;

Ignoring dimensionality, this is the notation Mathematica chooses.

I have done some engineering simulations, written a flight simulation framework, and used one that uses US units (D6; see www.bihrle.com). The D6 source code is riddled with magic constants. I've collected some conversion constants in units.h of cpp-lib, see http://gwesp.tx0.org/software/.

Regards
-Gerhard

--
Gerhard Wesp
ZRH office voice: +41 (0)44 668 1878
ZRH office fax:   +41 (0)44 200 1818
For the rest I claim that raw pointers must be abolished.

Gerhard Wesp <gwesp <at> google.com> writes:
I think that handling of dimensional quantities and conversion factors are orthogonal concepts and should be separated.
An interesting idea. It might also help with some of the confusion about the purpose of the library.
I suggest that for maximum transparency the library should *exclusively* handle quantities expressed in SI units.
Conversion factors between non-SI units and SI units should be constant dimensional quantities, e.g. (assuming constructors from double):
const length foot = .3048; // Meter
const power european_horse_power = 735.4987; // Watt
const mass pound = 0.4535924; // Kilogram
// ...
This way, one could e.g. construct dimensional quantities like this:

    const power deux_chevaux = 2 * european_horse_power;
Yes! That is the way I like to see construction of dimensional quantities. Cf my example:
length x = 1000.0 * meter;
velocity v = 2.0 * nautical_mile / hour;
BUT - the difference is, I would like to see the compiler enforce this sort of strong typing and self-documenting for SI units also (as in my "meter" example above). Otherwise, if SI quantities can be constructed directly from double, nothing stops me from doing:

    force f = 10.0; // kg

which is an undetected error, because I apparently intended kg, which is not a force unit. I prefer requiring:

    force f = 10.0 * kilogram; // error will be caught by compiler!

and

    const length foot = .3048 * meter;

FWIW, when I wrote the equivalent of the (so-called for now) t2_quantity, I implemented it exactly as you describe, keeping all quantities exclusively in SI units, but I kept that fact hidden from the user in order to prevent direct conversions from built-in types and require the user to document his units, SI or otherwise. Likewise, similarly to your other example:
length altitude;
double altitude_ft = altitude / foot;
I would write:

    double altitude_m = altitude / meter;

if I needed the value as a nondimensional "double."

-- Leland

On Wed, Jun 07, 2006 at 09:39:25PM +0000, Leland Brown wrote:
force unit. I prefer requiring:
force f = 10.0 * kilogram; // error will be caught by compiler!
Seems like a classic case for explicit constructors. I actually thought about this but didn't mention it in my posting. I'm perfectly fine with using them, so if you really need the conversion from double you can write (and you do need it, to avoid a chicken-and-egg problem!):

    const length meter(1);

while

    force f = 10;

would then be illegal.

Regards
-Gerhard

--
Gerhard Wesp
ZRH office voice: +41 (0)44 668 1878
ZRH office fax:   +41 (0)44 200 1818
For the rest I claim that raw pointers must be abolished.
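Gerhard's suggestion, sketched out (a hypothetical length class, not PQS code): the one explicit conversion from double is used exactly once, to bootstrap the unit constant, and raw numbers can't leak in after that.

    class length {
        double si_;                              // meters, hidden
    public:
        explicit length(double si) : si_(si) {}  // deliberate escape hatch
        friend length operator*(double k, length u) { return length(k * u.si_); }
        friend double operator/(length a, length b) { return a.si_ / b.si_; }
    };

    const length meter(1);                       // bootstraps the system
    const length foot = 0.3048 * meter;

    int main() {
        length altitude = 100.0 * foot;          // units documented
        double altitude_m = altitude / meter;    // explicit unit on output
        // length bad = 100.0;                   // error: ctor is explicit
        (void)altitude_m;
    }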

"Gerhard Wesp" wrote
Sorry, I don't have the time to read the PQS documentation in-depth right now.
Still, I'd like to make one suggestion.
I think that handling of dimensional quantities and conversion factors are orthogonal concepts and should be separated.
They are in PQS. This is the distinction between abstract and concrete quantities and units in the documentation. However, in source code it is more convenient to combine these elements in one type. This is the t1_quantity. The t1_quantity can be passed as a template parameter, carrying all the dimension and unit information with it. This is not possible with external constants or doubles etc.
I suggest that for maximum transparency the library should *exclusively* handle quantities expressed in SI units.
Unfortunately, in everyday life non-SI units crop up frequently.
I'm aware that this "mildly forces" developers to adopt SI units. I consider this a Good Thing.
PQS tries to favour SI quantities over non-SI ones. By using SI quantities rather than non-SI ones, you will get faster and more accurate results, for example.
Conversion factors between non-SI units and SI units should be constant dimensional quantities, e.g. (assuming constructors from double):
const length foot = .3048; // Meter
const power european_horse_power = 735.4987; // Watt
const mass pound = 0.4535924; // Kilogram
In PQS the constants are encoded as template parameters, as part of the type. The unit defined in <boost/pqs/meta/unit.hpp> holds all the required information. Any multiplier is held as a rational. In the case of SI quantities the rational evaluates to 1, which means the rational can be optimised out of the calculation. Exponents are held as powers, so multiplication of two SI quantities only involves multiplying runtime values. Any multiplication of their units is in fact addition of powers done at compile time. This makes for much faster and more accurate calculations.

Consider the calculation 1 km * 1 millisecond. With the external-units approach this is 1 * km * 1 * millisecond, which, expanding the constants, would be

    1 * 1000. * 1 * .001

In pqs the internal calculation at runtime is simply

    1. * 1.

The type of the result encodes the calculation 1000 * .001 as plus<3,-3>::value in the unit of the result.

In pqs the user doesn't have to remember or look up the 'magic constants' that you describe below. This is an important point, because it means that quantities and conversions are potentially more consistent and predictable across applications.
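A toy sketch of just this one mechanism (hypothetical, and far simpler than the rational multipliers pqs actually uses): the power of ten lives in the type, so only the mantissas are multiplied at runtime.

    template <int Exp10>
    struct scaled { double value; };      // represents value * 10^Exp10

    template <int A, int B>
    scaled<A + B> operator*(scaled<A> x, scaled<B> y)
    { return scaled<A + B>{x.value * y.value}; }  // one multiply: 1. * 1.

    int main() {
        scaled<3>  km{1.0};               // 1 km = 1.0 * 10^3  m
        scaled<-3> ms{1.0};               // 1 ms = 1.0 * 10^-3 s
        scaled<0>  r = km * ms;           // 3 + (-3) folded at compile time
        (void)r;                          // r.value == 1.0
    }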
This way, one could e.g. construct dimensional quantities like this:

    const power deux_chevaux = 2 * european_horse_power;
Non-SI quantities would have to stay "out of the system" in normal floating point variables:
length altitude;
double altitude_ft = altitude / foot;
Ignoring dimensionality, this is the notation Mathematica chooses.
I have done some engineering simulations, written a flight simulation framework and used one that uses US units (D6; see www.bihrle.com).
A goal of pqs is to be usable in that sort of situation. To be really effective it will need a lot of supporting classes (matrix, vector, quat), though.
The D6 source code is riddled with magic constants. I've collected some conversion constants in units.h of cpp-lib, see http://gwesp.tx0.org/software/.
One function of pqs is to provide these constants for conversion in a consistent way. It is of course possible to extract them from the headers without using the rest of the library, too.

regards
Andy Little

Andy Little wrote:
Consider the calculation 1 km * 1 millisecond.
The external units approach
1 * km * 1 * millisecond, which expanding the constants would be
1 * 1000. * 1 * .001
In pqs the internal calculation at runtime is simply
1. * 1.
The type of the result encodes the calculation 1000 * .001 as plus<3,-3>::value in the unit of the result.
This, I think, is one of the great strengths of pqs. The prefix is encoded in the type system. Now, Andy claims that this is why prefix_offset is required. I believe I understand his argument, but I'm not convinced yet. I'll have to do some thinking on that.

I much prefer pqs' interface to conversion over the "external conversion constants" system. If I want to output something in a particular unit, I'll create a pqs variable with the appropriate unit and assign to it.

-Dave

Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
Consider the calculation 1 km * 1 millisecond.
The external units approach
1 * km * 1 * millisecond, which expanding the constants would be
1 * 1000. * 1 * .001
In pqs the internal calculation at runtime is simply
1. * 1.
The type of the result encodes the calculation 1000 * .001 as plus<3,-3>::value in the unit of the result.
BTW, I think this is an excellent example. It shows clearly how keeping track of the units at compile time would benefit the user, and also gives an idea of how it works.

-- Leland

Gerhard Wesp wrote:
Sorry, I don't have the time to read the PQS documentation in-depth right now.
Still, I'd like to make one suggestion.
I think that handling of dimensional quantities and conversion factors are orthogonal concepts and should be separated.
I suggest that for maximum transparency the library should *exclusively* handle quantities expressed in SI units.
I'm aware that this "mildly forces" developers to adopt SI units. I consider this a Good Thing.
I disagree. In my work, most quantities are transformed into some set of simulation units that are only rarely SI units, and I believe this to be true of much other scientific programming. The basis for choosing the units is instead attempting to maximize numeric stability, and so choosing units that keep things as close to order 1 as possible. Then, since real world units are the measured inputs, and the useful outputs, simple conversions are done at the beginning and end. As such, I prefer the current approach of the library that allows for non-SI units (though not easily, yet).

John Phillips

John Phillips <phillips <at> delos.mps.ohio-state.edu> writes:
The basis for choosing the units is instead attempting to maximize numeric stability, and so choosing units that keep things as close to order 1 as possible.
BTW, this post will be only of interest to those with a mathematics or numerical analysis background. Others can safely skip it without missing anything.

This is a very interesting issue to me! I'm not sure what you have in mind as far as keeping things close to order 1 improving numerical stability, but let me explain what comes to mind for me. First, there may be a possibility of overflow or underflow with values significantly far from order 1, especially with single-precision floating-point values. In my application, I use double precision for everything, so this doesn't concern me.

The second issue relates to things like inversion of ill-conditioned matrices, where a matrix may be nearly singular due to a large difference in the orders of magnitude of its eigenvalues. Clearly this can cause severe numerical problems, and keeping things near order 1 helps prevent this. In some cases, FWIW, I think this problem can be resolved in another way, or even seen as a symptom of a different problem, which can be addressed - as I'll explain. And the discipline imposed by strongly-typed dimensional analysis may force the implementer of the algorithm to correct the problem. (Of course, you can read this as "annoy the implementer because his mathematically correct algorithm won't compile," depending on your perspective :-) )

Consider, for example, a function that inverts a matrix A using an SVD, in order to avoid numerical problems. Let's assume A is a symmetric, positive definite matrix composed of values having different physical dimensions - such as a covariance matrix of a set of parameters having different dimensions. I don't use SVD (so forgive me if I display my ignorance); but as I understand it, it would essentially find the eigenvalues and eigenvectors, and invert the eigenvalues; but for any very small eigenvalues it would replace the very large reciprocal value with a zero instead.

If we're solving Av=b, then loosely speaking the numerical problem is that many values of v solve the equation, some with very large coordinate values. Geometrically, the SVD can be described as choosing from among these values of v the one closest to the origin. What has always bugged me about this description is that "closest" requires a distance metric, like d=sqrt(x*x+y*y), but this is meaningless in a physical sense if the components x and y have different physical dimensions. Sure, if you have numerical values in some particular units system, you can do the computation and get a number. But what doesn't sit right with me is that if you change your units, you can get a different "closest" point! So even without roundoff errors and the like, the answer that comes out of the algorithm will change depending on what physical units are chosen. To me, it seems that a physical problem should have a particular solution regardless of the units chosen (aside from issues of computational accuracy).

Likewise, the eigenvalues of this matrix will have some sort of hybrid dimensions, just like the distance metric above. And the eigenvectors can vary based on choice of units, because even the notion of two vectors being orthogonal is dependent on the relationship among the units of the parameters defining the matrix. Because of this, as you pointed out in another post, a strongly-typed dimensions library would not even be able to compile the SVD (unless all the elements of the matrix are of the same dimensionality). And how do I set the threshold of what's a "very small" eigenvalue for the SVD? Perhaps .000001? In what units, since the matrix elements are of different dimensions? Again, the meaning of my threshold changes depending on the relationship among my parameter units.

I used to think all this was just the price you pay for using an SVD. But with the way I've constructed my matrix, it can be decomposed as:

    A = DCD

where D is a diagonal matrix of dimensioned quantities, and C is a dimensionless symmetric matrix. It's easy, in fact, to find such a diagonal matrix D, with no a priori knowledge of the magnitude of the values, by taking the square roots of the diagonal elements of A. Then C is just the correlation matrix corresponding to the covariance A, and its elements tend to be close to order 1. Now the matrix to be inverted (C) is likely to avoid numerical problems caused by large differences in magnitude, and the solution will be independent of the units chosen - since they will just become multipliers on the elements of D. The SVD/eigenvalue computation can be done without violating the strong typing, and the choice of units no longer impacts the numerical stability of the matrix inversion. There may be a small penalty (such as calculating the square roots), but as compared to an SVD algorithm I think it's almost negligible. I expect a similar technique could be found for cases of nondefinite matrices or non-square matrices, and other types of matrix calculations.

In parts of the algorithm that are not "mixing" dimensions (as the distance formula above does) - where strongly-typed computations would compile - you're always adding apples to apples, so the choice of units won't influence numerical problems like summing values of very different magnitudes. Most matrix multiplications should probably be type-safe, for instance, and thus independent of units as far as numerical stability.

These ideas would never have occurred to me when I was just manipulating matrices as sets of numbers. It wasn't until I started to think of them as strongly-typed physical quantities that this perspective emerged. I think that's also part of the value of using a dimensional analysis library. I hope what I've suggested here makes some sense. There may be reasons it doesn't apply in certain situations. But if anyone finds it helpful, it's something you can try.

Regards,

-- Leland
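The decomposition described above, restated in standard notation:

    \[
      A = D\,C\,D, \qquad
      D = \operatorname{diag}\bigl(\sqrt{A_{11}},\dots,\sqrt{A_{nn}}\bigr), \qquad
      C_{ij} = \frac{A_{ij}}{\sqrt{A_{ii}\,A_{jj}}},
    \]

so C is the dimensionless correlation matrix with unit diagonal, and A^{-1} = D^{-1} C^{-1} D^{-1} requires inverting only the well-scaled C.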

On Thu, Jun 08, 2006 at 06:02:54PM -0400, John Phillips wrote:
instead attempting to maximize numeric stability, and so choosing units that keep things as close to order 1 as possible. Then, since real world
Can you elaborate on this? In particular, an example of a simple problem that can be better conditioned by choosing the "right" units would interest me very much! (Assuming we can express numbers from about 1e-300 to 1e300.)

Regards
-Gerhard

--
Gerhard Wesp
ZRH office voice: +41 (0)44 668 1878
ZRH office fax:   +41 (0)44 200 1818
For the rest I claim that raw pointers must be abolished.

Gerhard Wesp wrote:
On Thu, Jun 08, 2006 at 06:02:54PM -0400, John Phillips wrote:
instead attempting to maximize numeric stability, and so choosing units that keep things as close to order 1 as possible. Then, since real world
Can you elaborate on this? In particular, an example of a simple problem that can be better conditioned by choosing the "right" units would interest me very much! (Assuming we can express numbers from about 1e-300 to 1e300).
Try an experiment. Using your favorite platform, compare the results of sin(x) for x in the range [0, 2*pi] with those from [100*pi, 102*pi] or [10^13*pi, (10^13+2)*pi]. In all cases with which I'm familiar, the underlying representation of the sine function drifts some for large arguments. This is because it is a series expansion, and the series only includes a set number of terms. As the argument gets big, the latter terms get more and more important. So, the only way to stabilize it would be to check the argument in advance of the calculation and rescale to the range [0, 2*pi]. However, that check implies a cost for everyone using the function, no matter what numbers they are using in the argument, and so is unacceptable to most users. Thus, the choice is: start with entries of order 1, sacrifice accuracy, or sacrifice performance. I hope that makes it more clear.

John
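The experiment in code (exact digits vary by platform; the point is only that the two calls disagree). Mathematically 1e13*pi is an even multiple of pi, so sin(big + x) should equal sin(x); in double precision the offset carries an absolute rounding error on the order of 1e-3 radians, and the results differ around the third decimal place.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double pi = 3.14159265358979323846;
        double x   = 1.0;
        double big = 1e13 * pi;   // nominally a multiple of 2*pi
        std::printf("sin(x)           = %.17g\n", std::sin(x));
        std::printf("sin(1e13*pi + x) = %.17g\n", std::sin(big + x));
    }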

On Fri, Jun 09, 2006 at 11:55:17AM -0400, John Phillips wrote:
Gerhard Wesp wrote:
On Thu, Jun 08, 2006 at 06:02:54PM -0400, John Phillips wrote:
instead attempting to maximize numeric stability, and so choosing units that keep things as close to order 1 as possible. Then, since real world
Can you elaborate on this? In particular, an example of a simple problem that can be better conditioned by choosing the "right" units would interest me very much! (Assuming we can express numbers from about 1e-300 to 1e300).
Try an experiment. Using your favorite platform, compare the results of sin(x) for x in the range [0, 2*pi] with those from [100*pi, 102*pi] or [10^13*pi, (10^13+2)*pi]. In all cases with which I'm familiar, the underlying representation of the sine function drifts some for large arguments. This is because it is a series expansion, and the series only includes a set number of terms. As the argument gets big, the latter terms get more and more important. So, the only way to stabilize it would be to check the argument in advance of the calculation and rescale to the range [0, 2*pi]. However, that check implies a cost for everyone using the function, no matter what numbers they are using in the argument, and so is unacceptable to most users. Thus, the choice is: start with entries of order 1, sacrifice accuracy, or sacrifice performance. I hope that makes it more clear.

John
Arguments to sin are always unitless, so that particular example doesn't work very well. One easy way to get numbers that are too large is to start doing lazy matrix analysis. This problem bit me a few days ago:

1. Start with a 3x3 stress matrix of order 1e10.
2. Compute a cofactor matrix (made up of sums of squares, of order 1e20).
3. Compute vector magnitudes in order to pick out the largest column (1e40).

We use single-precision floats, so 1e40 overflows. Here, the choice is: use O(1) numbers, sacrifice performance, or break completely.

Geoffrey
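The same overflow in miniature (hypothetical values of the same orders only): entries of order 1e10, squared terms of order 1e20, squared magnitudes of order 1e40, which is past FLT_MAX (~3.4e38), so single precision goes to infinity.

    #include <cstdio>

    int main() {
        float stress   = 1e10f;                // order of the matrix entries
        float cofactor = stress * stress;      // sums of squares: ~1e20, still ok
        float norm_sq  = cofactor * cofactor;  // squared column magnitude: ~1e40
        std::printf("%g %g %g\n", stress, cofactor, norm_sq);  // last prints inf
    }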

Geoffrey Irving said: (by the date of Fri, 9 Jun 2006 09:09:25 -0700)
We use single precision floats, so 1e40 overflows. Here, the choice is use O(1) numbers, sacrifice performance, or break completely.
Benchmark performance with double on your system. On mine (AMD X2, but running on a 32-bit platform) double is faster.

-- Janek Kozicki

On Sat, Jun 10, 2006 at 02:37:25AM +0200, Janek Kozicki wrote:
Geoffrey Irving said: (by the date of Fri, 9 Jun 2006 09:09:25 -0700)
We use single precision floats, so 1e40 overflows. Here, the choice is use O(1) numbers, sacrifice performance, or break completely.
benchmark performance with double on your system. On mine (AMD X2, but running on 32bit platform) double is faster.
Indeed. A quick test on nocona has doubles beating floats by about 4% on a small example. I'll have to try more tests to see if that holds up once things fit in cache. The main reason we switched to floats was memory and cache footprint (sometimes switching to floats on a large example gets you back down to fitting into 32G), but superlinear time complexity seems to be kicking in again these days, so maybe it's time to reconsider. I just hope I don't have to doubly templatize all our code to store data in floats for cache throughput and compute in doubles.

As for the original topic, I probably can't salvage my example unless I cheat (say, by unrolling the power method and dropping intermediate normalizations, or applying some sort of extremely naive high-order polynomial regression). All the nice Taylor series examples seem unitless.

Thanks,
Geoffrey

On Fri, Jun 09, 2006 at 06:19:58PM -0700, Geoffrey Irving wrote:
polynomial regression). All the nice Taylor series examples seem unitless.
They have to be, don't they? Because you add up different powers of the argument. I took this once as a "heuristic" explanation to myself why the transcendental functions only work for dimensionless arguments.

On the other hand, the square root can be approximated by a series as well, and this function does make sense with dimensional arguments.

Regards
-Gerhard

--
Gerhard Wesp
ZRH office voice: +41 (0)44 668 1878
ZRH office fax:   +41 (0)44 200 1818
For the rest I claim that raw pointers must be abolished.

On Sat, Jun 10, 2006 at 09:54:14AM +0200, Gerhard Wesp wrote:
On Fri, Jun 09, 2006 at 06:19:58PM -0700, Geoffrey Irving wrote:
polynomial regression). All the nice Taylor series examples seem unitless.
They have to be, don't they? Because you add up different powers of the argument. I took this once as a "heuristic" explanation to myself why the transcendental functions only work for dimensionless arguments.
On the other hand, the square root can be approximated by a series as well, and this function does make sense with dimensional arguments.
That's cool. Square root isn't an example either, because it has a branch cut and therefore doesn't have any infinitely converging series. If you want to use a series square root, you need to remove the units first.

In general, if f(z) is a total analytic unit-correct function, then we have f(z a) = f(z) a^p for some p. If p is fractional or < 0, f is not total, so p is a positive integer. But then f(inf) = f(z inf) = f(z) inf^p = inf, so f is a polynomial. That makes your heuristic argument rigorous: the only examples are polynomials.

Geoffrey
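The argument above, restated in symbols: if f is entire (total analytic) and unit-correct, there is a fixed p with f(az) = a^p f(z) for every scale factor a. Setting z = 1 gives

    \[
      f(a) \;=\; f(1)\,a^{p}.
    \]

A negative or fractional p would put a pole or branch point at 0, contradicting analyticity, so p is a nonnegative integer and each unit-correct term is a monomial; sums of such terms are polynomials.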

Gerhard Wesp wrote:
On Fri, Jun 09, 2006 at 06:19:58PM -0700, Geoffrey Irving wrote:
polynomial regression). All the nice Taylor series examples seem unitless.
They have to be, don't they? Because you add up different powers of the argument. I took this once as a "heuristic" explanation to myself why the transcendental functions only work for dimensionless arguments.
Yes, that's right.
On the other hand, the square root can be approximated by a series as well, and this function does make sense with dimensional arguments.
sqrt(x) has no Taylor series expansion at x = 0, but sqrt(1+x) does. You'll find that if you use the latter to compute the square root of a dimensioned quantity, the dimension can be factored out of the series, so that the series itself is completely dimensionless.
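Explicitly: write the dimensioned quantity as q = L^2 (1 + x), with L carrying the units and x dimensionless. Then

    \[
      \sqrt{q} = L\sqrt{1+x}
               = L\left(1 + \tfrac{x}{2} - \tfrac{x^{2}}{8} + \tfrac{x^{3}}{16} - \cdots\right),
    \]

so the series operates only on the dimensionless x, and the unit L factors out in front.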

On Sat, Jun 10, 2006 at 11:32:07PM -0400, Deane Yang wrote:
dimensioned quantity, the dimension can be factored out of the series, so that the series itself is completely dimensionless.
OK, I'll do that exercise once I have a couple of spare minutes or so!

Regards
-Gerhard

--
Gerhard Wesp
ZRH office voice: +41 (0)44 668 1878
ZRH office fax:   +41 (0)44 200 1818
For the rest I claim that raw pointers must be abolished.

I just took one quick look at the "Definition of Terms" section in the docs and I'm concerned about what appear to be not-quite-concepts being defined and what appear to be the inventive naming conventions used there.

1. General multiword terms such as "physical_quantity_system" should not be spelled with underscores between the words. Nobody else does that; it's confusing because one can't tell whether these are meant to be identifiers.

2. Many of the terms, e.g. abstract_quantity, seem like they are weak versions of generic programming concepts, and should be strengthened... oh, I see that AbstractQuantity is in fact listed in the Concepts section of the docs, with an appropriate naming convention. But then, why the redundant terms? Or are they not redundant? Regardless, many of the other terms, such as abstract_quantity_id, anonymous_quantity, etc. seem to be concept-like and don't appear in the concepts section. Are you sure they're not needed in order to rigorously define the library's requirements?

The Concepts section is not ready for prime-time. For example, let's look at the AbstractQuantity table:

+--------------------------------------------+------------------------------------------------+
|Expression                                  |Type                                            |
+============================================+================================================+
|AbstractQuantity::dimension                 |Dimension                                       |
+--------------------------------------------+------------------------------------------------+

Okay, so what is AbstractQuantity here? It can't be a type or a namespace, because you just used that for a concept identifier. I'm not kidding, when there's language support for concepts it won't seem so strange that this is confusable. Until then, you should add Boost concept checking classes to the library anyway, and those would occupy the identifiers. There's an established convention for choosing and describing the identifiers in concept tables. Use it. Also, put code elements in a code font.

And what is Dimension? Your column header implies it's a type, but I bet it's not. And if it is, it violates Boost naming conventions, and must be fixed. Also is AbstractQuantity::dimension a type or a value? It could be either.

+--------------------------------------------+------------------------------------------------+
|AbstractQuantity::id                        |AbstractQuantityId                              |
+--------------------------------------------+------------------------------------------------+

Same exact problems.

+--------------------------------------------+------------------------------------------------+
|binary_operation<AbstractQuantity           |AbstractQuantity D where D::dimension is        |
|Lhs,Op,AbstractQuantity Lhs>::type          |binary_operation<Lhs::dimension, Op,            |
|                                            |Rhs::dimension>::type and D::id is              |
|                                            |binary_operation<Lhs::id, Op,                   |
|                                            |Rhs::id>::type, except where Op = pow           |
+--------------------------------------------+------------------------------------------------+

Okay, left column: this isn't even valid C++ code. Right column: Everything looks like it could be comprehensible, until you get to "except." Well, what is the requirement in the "except" case? Does "except..." apply to the whole clause or only to the part of the clause after "and?"

+--------------------------------------------+------------------------------------------------+
|binary_operation<AbstractQuantity Lhs,      |[AbstractQuantity D where D::dimension is       |
|pow, Rational Exp>::type                    |binary_operation<Lhs::dimension,times,Exp>::type|
|                                            |and D::id is the anonymous_quantity_id          |
+--------------------------------------------+------------------------------------------------+

Left column: not C++.

+--------------------------------------------+------------------------------------------------+
|DimensionallyEquivalent<AbstractQuantity    |true if DimensionallyEquivalent<Lhs::dimension, |
|Lhs,AbstractQuantity Rhs>::value            |Rhs::dimension>::value==true else false         |
+--------------------------------------------+------------------------------------------------+

Left column:

1. Not C++.

2. Your library contains a non-concept-checking template that uses CamelCase? What's wrong with the Boost convention? Oh, but I see elsewhere in the docs you do have dimensionally_equivalent. Which is it?

3. Why isn't DimensionallyEquivalent a conforming metafunction? It costs essentially nothing.

+--------------------------------------------+------------------------------------------------+
|Dimensionless<AbstractQuantity>::value      |bool                                            |
+--------------------------------------------+------------------------------------------------+
|IsNamedQuantity<AbstractQuantity>::value    |bool                                            |
+--------------------------------------------+------------------------------------------------+
|IsAnonymousQuantity<AbstractQuantity>::value|bool                                            |
+--------------------------------------------+------------------------------------------------+

Ditto #2 and #3 above. Also, technically speaking it's not clear whether these are types or values. Also, are there _no_ semantics associated with the values? Could I pick true or false for any of them and still have a conforming AbstractQuantity? If so, why bother requiring them?

Some of the table cells are split across pages (e.g. pp. 30-31).

Value_type should be ValueType. The description of Value_type should not be done in terms of a single template where you intend to require one as a parameter.

Why BOOST_PQS_INT32? Doesn't the standard or Boost provide enough types for this without you having to define a macro? And even if not, why not use a typedef instead?

Aside from nits and naming convention issues, I feel especially strongly that we must not muddle the ideas of generic programming. I think this is an important domain and I hope we'll be able to accept a different version of this library, but, with regret, I vote against the inclusion of this one in its current state.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Hi David, "David Abrahams" wrote
I just took one quick look at the "Definition of Terms" section in the docs and I'm concerned about what appear to be not-quite-concepts being defined and what appear to be the inventive naming conventions used there.
1. General multiword terms such as "physical_quantity_system" should not be spelled with underscores between the words. Nobody else does that; it's confusing because one can't tell whether these are meant to be identifiers
Ok.
2. Many of the terms, e.g. abstract_quantity, seem like they are weak versions of generic programming concepts, and should be strengthened... oh, I see that AbstractQuantity is in fact listed in the Concepts section of the docs, with an appropriate naming convention. But then, why the redundant terms? Or are they not redundant? Regardless, many of the other terms, such as abstract_quantity_id, anonymous_quantity, etc. seem to be concept-like and don't appear in the concepts section. Are you sure they're not needed in order to rigorously define the library's requirements?
The Concepts section is not ready for prime-time. For example, let's look at the AbstractQuantity table:
+--------------------------------------------+------------------------------------------------+
|Expression                                  |Type                                            |
+============================================+================================================+
|AbstractQuantity::dimension                 |Dimension                                       |
+--------------------------------------------+------------------------------------------------+
Okay, so what is AbstractQuantity here? It can't be a type or a namespace, because you just used that for a concept identifier. I'm not kidding, when there's language support for concepts it won't seem so strange that this is confusable. Until then, you should add Boost concept checking classes to the library anyway, and those would occupy the identifiers. There's an established convention for choosing and describing the identifiers in concept tables. Use it. Also, put code elements in a code font.
OK.
And what is Dimension? Your column header implies it's a type, but I bet it's not. And if it is, it violates Boost naming conventions, and must be fixed.
OK.
Also is AbstractQuantity::dimension a type or a value? It could be either.
OK. I guess that's unclear.
+--------------------------------------------+------------------------------------------------+
|AbstractQuantity::id                        |AbstractQuantityId                              |
+--------------------------------------------+------------------------------------------------+
Same exact problems.
+--------------------------------------------+------------------------------------------------+
|binary_operation<AbstractQuantity           |AbstractQuantity D where D::dimension is        |
|Lhs,Op,AbstractQuantity Lhs>::type          |binary_operation<Lhs::dimension, Op,            |
|                                            |Rhs::dimension>::type and D::id is              |
|                                            |binary_operation<Lhs::id, Op,                   |
|                                            |Rhs::id>::type, except where Op = pow           |
+--------------------------------------------+------------------------------------------------+
Okay, left column: this isn't even valid C++ code.
Right column: Everything looks like it could be comprehensible, until you get to "except." Well, what is the requirement in the "except" case? Does "except..." apply to the whole clause or only to the part of the clause after "and?"
OK, I could put brackets around it, I guess.
+--------------------------------------------+------------------------------------------------+
|binary_operation<AbstractQuantity Lhs,      |[AbstractQuantity D where D::dimension is       |
|pow, Rational Exp>::type                    |binary_operation<Lhs::dimension,times,Exp>::type|
|                                            |and D::id is the anonymous_quantity_id          |
+--------------------------------------------+------------------------------------------------+
Left column: not C++.
+--------------------------------------------+------------------------------------------------+
|DimensionallyEquivalent<AbstractQuantity    |true if DimensionallyEquivalent<Lhs::dimension, |
|Lhs,AbstractQuantity Rhs>::value            |Rhs::dimension>::value==true else false         |
+--------------------------------------------+------------------------------------------------+
Left column:
1. not C++.
2. Your library contains a non-concept-checking template that uses CamelCase? What's wrong with the boost convention? Oh, but I see elsewhere in the docs you do have dimensionally_equivalent. Which is it?
OK. Fair point.
3. Why isn't DimensionallyEquivalent a conforming metafunction? It costs essentially nothing.
Sure, it's confused...
+--------------------------------------------+------------------------------------------------+
|Dimensionless<AbstractQuantity>::value      |bool                                            |
+--------------------------------------------+------------------------------------------------+
|IsNamedQuantity<AbstractQuantity>::value    |bool                                            |
+--------------------------------------------+------------------------------------------------+
|IsAnonymousQuantity<AbstractQuantity>::value|bool                                            |
+--------------------------------------------+------------------------------------------------+
Ditto #2 and #3 above. Also, technically speaking it's not clear whether these are types or values. Also, are there _no_ semantics associated with the values? Could I pick true or false for any of them and still have a conforming AbstractQuantity? If so, why bother requiring them?
Yes. These should be functions..
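For reference, a minimal sketch (hypothetical names, not the library's code) of the kind of conforming boolean metafunction being asked for: deriving from mpl::true_ or mpl::false_ supplies both ::type and ::value, so the type-or-value ambiguity disappears at essentially no cost.

#include <boost/mpl/bool.hpp>

// A conforming boolean metafunction: the nested ::type is an MPL
// Integral Constant and ::value is its boolean value. Both come
// for free by deriving from mpl::true_ / mpl::false_.
template <class Lhs, class Rhs>
struct is_same_dimension : boost::mpl::false_ {};

// Specialization: identical dimension types compare equal.
template <class D>
struct is_same_dimension<D, D> : boost::mpl::true_ {};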
Some of the table cells are split across pages. (e.g. p. 30-31)
Ok.
Value_type should be ValueType. The description of Value_type should not be done in terms of a single template where you intend to require one as a parameter.
Ok.
Why BOOST_PQS_INT32? Doesn't the standard or Boost provide enough types for this without you having to define a macro? And even if not, why not use a typedef instead?
I tried that but it wasn't acceptable as a non-type parameter IIRC.
Aside from nits and naming convention issues, I feel especially strongly that we must not muddle the ideas of generic programming. I think this is an important domain and I hope we'll be able to accept a different version of this library, but, with regret, I vote against the inclusion of this one in its current state.
OK. Thanks for the review. regards Andy Little

Why BOOST_PQS_INT32? Doesn't the standard or Boost provide enough types for this without you having to define a macro? And even if not, why not use a typedef instead?
I tried that but it wasn't acceptable as a non-type parameter IIRC.
Eh? boost::int32_t is just a typedef for ... well, int on most platforms, so sure it's usable as a non-type parameter. Using a macro for this looks just silly to me, unless you have a concrete example that shows otherwise? John.

"John Maddock"wrote
Why BOOST_PQS_INT32? Doesn't the standard or Boost provide enough types for this without you having to define a macro? And even if not, why not use a typedef instead?
I tried that but it wasn't acceptable as a non-type parameter IIRC.
Eh? boost::int32_t is just a typedef for ... well, int on most platforms, so sure it's usable as a non-type parameter. Using a macro for this looks just silly to me, unless you have a concrete example that shows otherwise?
Maybe. I started out using boost::int32_t (actually I started out using int, then changed from there to boost::int32_t one day, which caused compile errors, but what that situation or compiler was I don't remember). Anyway, changing to a macro solved that particular compilation error. Changing the macro to use boost::int32_t doesn't currently cause compile errors, at least in VC7.1. Not tested on gcc, though. If I find the exact situation, I will certainly present it. regards Andy Little
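For what it's worth, a minimal sketch (hypothetical names) of the code in question: boost::int32_t used directly as a non-type template parameter, which does compile on conforming compilers; whatever originally failed is unknown.

#include <boost/cstdint.hpp>

// boost::int32_t is a typedef for a built-in integral type (usually
// int), so it can serve as a non-type template parameter directly,
// with no macro needed.
template <boost::int32_t N>
struct exponent
{
    static const boost::int32_t value = N;
};

int main()
{
    return exponent<42>::value == 42 ? 0 : 1;
}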

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of David Abrahams
| Sent: 07 June 2006 12:59
| To: boost@lists.boost.org
| Subject: [boost] [Review][PQS] Concept names?
|
| Aside from nits and naming convention issues,

I am sure that your expertise is invaluable and welcomed here.

| I feel especially strongly that we must not muddle the ideas of generic programming.

Could you elaborate on this? Are you saying that there is a problem with the documentation, or with the design? (Boosters seem to continue to accept vital tools like Test and Build whose documentation is in a MUCH less than satisfactory state, so I can't see that this should be grounds for rejecting a submission, only urging its improvement.)

| I think this is an important domain and I hope we'll be able to accept a
| different version of this library, but, with regret, I vote against
| the inclusion of this one in its current state.

We have been trying for years to get towards a useful solution to this exceedingly important area, which has application in 9 out of 10 real-life programs. The C++ language tantalizingly promises to make possible auto-type checking, converting and displaying the myriad units, but keeps tripping us up with vital missing features like typeof, or we'll fall at the compile-speed hurdle. What design changes would persuade you to vote for this attempt?

Paul

---
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
+44 1539561830 & SMS, Mobile +44 7714 330204 & SMS
pbristow@hetp.u-net.com

"Paul A Bristow" <pbristow@hetp.u-net.com> writes:
boost-bounces@lists.boost.org wrote:
Paul, can you fix your mailer's misattribution problem? Dave Abrahams wrote:
| Aside from nits and naming convention issues,
I am sure that your expertise is invaluable and welcomed here.
| I feel especially strongly that we must not muddle the ideas of generic programming.
Could you elaborate on this?
I think I did fairly extensively in the post you're responding to.
Are you saying that there is a problem with the documentation, or with the design?
Yes. :) Documentation and design are not wholly distinguishable from one another, especially not when it comes to generic programming.
(Boosters seem to continue to accept vital tools like Test and Build whose documentation is in a MUCH less than satisfactory state, so I can't see that this should be grounds for rejecting a submission, only urging its improvement).
The sins of the past do not justify making the same mistake again. IMO we have recently allowed too many libraries into Boost with inadequate documentation and especially with a muddled expression of generic programming, which is too poorly understood in the C++ community at large. For years, Boost set the standard for generic programming outside the STL, and that standard has recently become diluted. For the record, although I agree both should/must have better docs, neither Boost.Test nor Boost.Build has much to do with generic programming.
| I think this is an important domain and I hope we'll be able to
| accept a different version of this library, but, with regret, I
| vote against the inclusion of this one in its current state.
We have been trying for years to get towards a useful solution to this exceedingly important area, which has application in 9 out of 10 real-life programs.
Well, maybe.
The C++ language tantalizingly promises to make possible auto-type checking, converting and displaying the myriad units, but keeps tripping us up with vital missing features like typeof, or we'll fall at the compile-speed hurdle.
What design changes would persuade you to vote for this attempt?
I haven't looked at enough of it to know if there are other issues, but in order to resolve my problems with the issues I've raised: Clarity and conformance to Boost/C++ standards and conventions. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
The sins of the past do not justify making the same mistake again. IMO we have recently allowed too many libraries into Boost with inadequate documentation and especially with a muddled expression of generic programming, which is too poorly understood in the C++ community at large. For years, Boost set the standard for generic programming outside the STL, and that standard has recently become diluted.
I can't speak to the quality of boost documentation, but I'd like to support the principle that good documentation is crucial. In fact, I'd say that the documentation *is* the library, and the code is merely one possible implementation of the library. I do not know whether I will be able to provide a full review of PQS, but I will say that I find it difficult to vote yes with the documentation in its current state. I would like to see, at the very least, the documentation revised according to all the suggestions that have been made already and the library resubmitted.

"David Abrahams" wrote
"Paul A Bristow" writes:
What design changes would persuade you to vote for this attempt?
I haven't looked at enough of it to know if there are other issues, but in order to resolve my problems with the issues I've raised:
Clarity and conformance to Boost/C++ standards and conventions.
To be honest David, I am finding this quite difficult to handle. On the one hand I think the PQS library is good, and there seems to be interest and a need. On the other hand, at least unofficially, Boost is your party, and my impression is that for whatever reason you wouldn't be too happy about this library becoming part of boost. Coincidentally nor would I.

The situation with PQS is that to do it justice would take more time than I am prepared to invest. I would also need to learn a lot about the internals of boost, which would tie me in deeper than I wish. I have spent a lot of time, particularly on the documentation, over the past few months, but unfortunately writing documentation doesn't come easy to me and I think I would find it quite difficult to complete to the required standard, at least without a huge investment of time. (At the end of the day I am basically an average part-time coder that somehow got involved way above my level.)

I think the best solution in light of your comments is to withdraw PQS from consideration and hope that someone else more in touch with the boost way comes forward with a Units library. Funnily enough I think both you and I would feel much happier that way.

regards Andy Little

"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
"David Abrahams" wrote
"Paul A Bristow" writes:
What design changes would persuade you to vote for this attempt?
I haven't looked at enough of it to know if there are other issues, but in order to resolve my problems with the issues I've raised:
Clarity and conformance to Boost/C++ standards and conventions.
To be honest David, I am finding this quite difficult to handle. On the one hand I think the PQS library is good, and there seems to be interest and a need. On the other hand, at least unofficially, Boost is your party
I don't know what that means. I am one of several moderators; it's not "mine."
and my impression is that for whatever reason you wouldn't be too happy about this library becoming part of boost.
No, not for "whatever reasons," for exactly the reasons I posted. It seems like you're not responding to what I wrote, but something else. I would even be open to being convinced to change my vote, if the author exhibited sufficient interest in and responsiveness to my concerns. I haven't looked at the code, but I really like the idea of what this library does, and it probably has a pretty nice interface -- at the code level.
Coincidentally nor would I. The situation with PQS is that to do it justice would take more time than I am prepared to invest.
Wow. Why did you submit it?
I would also need to learn a lot about the internals of boost which would tie me in deeper than I wish. I have spent a lot of time, particularly on the documentation, over the past few months, but unfortunately writing documentation doesn't come easy to me
Nor to most people. Writing documentation takes a great deal of attention, and anyone submitting a Boost library should be prepared to spend at least as long documenting as coding.
and I think I would find it quite difficult to complete to the required standard, at least without a huge investment of time. (At the end of the day I am basically an average part-time coder that somehow got involved way above my level.)
I think the best solution in light of your comments is to withdraw PQS from consideration and hope that someone else more in touch with the boost way comes forward with a Units library.
Funnily enough I think both you and I would feel much happier that way.
That's not funny at all, and it's not what I'd like at all. I'm not sure what gave you that impression. I thought I made it clear that "I hope we'll be able to accept a different version of this library" and also that my negative vote was made with regret. If you do decide to simply withdraw without making improvements, I'll be sorry. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" wrote
"Andy Little" wrote
"David Abrahams" wrote
"Paul A Bristow" writes:
What design changes would persuade you to vote for this attempt?
I haven't looked at enough of it to know if there are other issues, but in order to resolve my problems with the issues I've raised:
Clarity and conformance to Boost/C++ standards and conventions.
To be honest David, I am finding this quite difficult to handle. On the one hand I think the PQS library is good, and there seems to be interest and a need. On the other hand, at least unofficially, Boost is your party
I don't know what that means. I am one of several moderators; it's not "mine."
Put it another way. You have put a lot of work into Boost. Your vote in a review is an order of magnitude more powerful than mine. (And incidentally that is as it should be. OTOH if you were in my house and wanted to knock a wall down, your vote certainly wouldn't be anything like as powerful as mine!)
and my impression is that for whatever reason you wouldn't be too happy about this library becoming part of boost.
No, not for "whatever reasons," for exactly the reasons I posted. It seems like you're not responding to what I wrote, but something else.
AFAIK your impressions have been formed not even by downloading the library itself but by downloading the pdf documentation, which I put there as some previous reviewers said they found it helpful to print it out. Once there, you headed for the two areas that other reviewers found poor and started slashing! (Pavel Vozenilek made the same overall point but without needing to twist the knife.) That is the sum total of your review AFAICS. If this formed the total substance of a review of mine, I would not feel justified in casting a vote at all.
I would even be open to being convinced to change my vote, if the author exhibited sufficient interest in and responsiveness to my concerns.
The author's name is Andy, BTW. The point re using underscores is trivial. It was done because QuickBook won't accept '-' in link names. It speeded things up slightly. The C++ concepts section is a mess? Sure, as I said to Pavel Vozenilek. It was the first time I have written this kind of documentation and I found it difficult. I decided to spend time on other areas of the documentation before the review.
I haven't looked at the code, but I really like the idea of what this library does, and it probably has a pretty nice interface -- at the code level.
Wow! That is encouraging. It would have been helpful to have that included in the review. It would have lightened the tone. As it stands I read every point made as negative.
Coincidentally nor would I. The situation with PQS is that to do it justice would take more time than I am prepared to invest.
Wow. Why did you submit it?
It's a very good library, but I am too old to see the need to fight for every inch if the environment is hostile. That is a waste of energy. I have better things to do.
I would also need to learn a lot about the internals of boost which would tie me in deeper than I wish. I have spent a lot of time, particularly on the documentation, over the past few months, but unfortunately writing documentation doesn't come easy to me
Nor to most people. Writing documentation takes a great deal of attention, and anyone submitting a Boost library should be prepared to spend at least as long documenting as coding.
and I think I would find it quite difficult to complete to the required standard, at least without a huge investment of time. (At the end of the day I am basically an average part-time coder that somehow got involved way above my level.)
I think the best solution in light of your comments is to withdraw PQS from consideration and hope that someone else more in touch with the boost way comes forward with a Units library.
Funnily enough I think both you and I would feel much happier that way.
That's not funny at all, and it's not what I'd like at all. I'm not sure what gave you that impression. I thought I made it clear that "I hope we'll be able to accept a different version of this library" and also that my negative vote was made with regret.
FWIW I read that "different version" as implying a version of the library written by someone else. That was the impact. Re-reading it, I still get that impression. It's ambiguous and impersonal.
If you do decide to simply withdraw without making improvements, I'll be sorry.
OK, that is helpful, as were the encouraging comments above. OTOH I already did withdraw it in a mail to Fred Bertsch. I'm not quite sure whether I can un-withdraw it or not. I will have to see what he says. regards Andy Little

interest and a need. On the other hand, at least unofficially, Boost is your party
I don't know what that means. I am one of several moderators; it's not "mine."
Put it another way. You have put a lot of work into Boost. Your vote in a review is an order of magnitude more powerful than mine...
It is my understanding that if any weighting of votes is done, it is based on the answer to "Are you knowledgeable about the problem domain?"; I sincerely hope having "boost" in your email address makes no difference at all :-). Darren P.S. I won't be reviewing as I've neither time nor need for the library, but I've been skimming the reviews, and my impression is that if it is rejected, it will be simply because it is half-finished (t2_quantity, docs, etc.), and once finished it would be very useful to a large group of people.

Darren Cook wrote:
P.S. I won't be reviewing as I've neither time nor need for the library, but I've been skimming the reviews, and my impression is that if it is rejected, it will be simply because it is half-finished (t2_quantity, docs, etc.)
No. There are some fundamental design problems with the library that prohibit various uses of it. Oleg and I have pointed out showstoppers for the two of us. There are issues for other people as well. That said, I very much want to encourage further work on this library. It _is_ very important. I'm disappointed that Andy does not seem committed to it. The volume of feedback indicates that there's a lot of interest. -Dave

On HP-UX, there is a bug in the <sys/statvfs.h> header which results in a link error when building the boost.filesystem library in the 32-bit data model (the default). Below is a reproducer; setting the _FILE_OFFSET_BITS macro to 64 is necessary to unearth the buggy code in the header:

bash-3.00$ cat statvfs_bug.cpp
#define _FILE_OFFSET_BITS 64
#include <sys/statvfs.h>
int main() { statvfs((const char*)0, 0); }
bash-3.00$ aCC statvfs_bug.cpp
ld: Unsatisfied symbol "statvfs(char const*,statvfs*)" in file statvfs_bug.o
1 errors.
bash-3.00$ aCC statvfs_bug.cpp +DD64
bash-3.00$

The bug will be fixed in HP-UX 11.31, where the macro _STATVFS_ACPP_PROBLEMS_FIXED will be defined to indicate the fix. Module operations.cpp in the boost.filesystem library unconditionally defines the _FILE_OFFSET_BITS macro as 64, which triggers the bug when building the library on HP-UX with BBv1. The BBv2 build specifies +DD64 (64-bit data model) and is not affected by this problem.

The attached patch conditionalizes the definition of the _FILE_OFFSET_BITS macro so it does not get defined as 64 on an HP-UX system with the buggy header when building in the 32-bit data model. I tested it on an 11.23 system by building the filesystem library with BBv1.

Can this patch be applied to both CVS HEAD and RC_1_34? Currently, operations.cpp is identical in both HEAD and RC.

Thanks, Boris Gubenko

cal-bear> diff -u operations.cpp.orig operations.cpp
--- operations.cpp.orig Fri Jun 9 09:40:54 2006
+++ operations.cpp Fri Jun 9 09:41:08 2006
@@ -16,7 +16,10 @@
 #define _POSIX_PTHREAD_SEMANTICS // Sun readdir_r() needs this
+#if !(defined(__HP_aCC) && defined(_ILP32) && \
+      !defined(_STATVFS_ACPP_PROBLEMS_FIXED))
 #define _FILE_OFFSET_BITS 64 // at worst, these defines may have no effect,
+#endif
 #define __USE_FILE_OFFSET64 // but that is harmless on Windows and on POSIX
 // 64-bit systems or on 32-bit systems which don't have files larger
 // than can be represented by a traditional POSIX/UNIX off_t type.
cal-bear>

Boris Gubenko wrote:
On HP-UX, there is a bug in the <sys/statvfs.h> header which results in a link error when building the boost.filesystem library in the 32-bit data model (the default)...
...
The attached patch conditionalizes the definition of the _FILE_OFFSET_BITS macro so it does not get defined as 64 on an HP-UX system with the buggy header when building in the 32-bit data model. I tested it on an 11.23 system by building the filesystem library with BBv1.
Can this patch be applied to both CVS HEAD and RC_1_34? Currently, operations.cpp is identical in both HEAD and RC.
Done. Thanks for the patch! --Beman

I believe that in its current state PQS cannot be accepted into Boost. Reasons:

1) It is almost useless for scientists (of course, it is very useful for engineers). It lacks, or makes really hard, the ability to plug in different systems, such as C.G.S.E. (centimeter, gramme, second, where the intensity of the electric field from one electron is given by the formula E = e/r^2; note, no k coefficient, as in SI), the standard model in theoretical physics (velocity of light is 1, Planck constant is 1, etc.), or the relativistic model, where only the velocity of light is 1 (it means that the dimensions of time and length are equal: length is time and time is length). This list can be expanded dramatically (almost every physical problem can benefit from some sort of special dimension usage).

2) In PQS the concept of dimensional analysis is tightly coupled with the concept of units. I believe that they _must_ be uncoupled and implemented independently. Dimensional analysis has its own value even without any units at all.

Below is a description of a Dimensional/Units library that would be good enough for Boost (and for users in the first place) IMO. I envision the following workflow as a typical usage of a Dimensional/Units library:

1) Define a dimensional space (the types of the basic dimensions; the number of dimensions can vary from application to application significantly: 7 in SI; 6 in relativistic theory; 2 in a simple financial analysis (time and money); etc.). At this step one can also establish some restrictions on dimension usage, like: money cannot be in any power different from 1, if that makes sense in a particular application. As a result of this stage one would have a number of dimension types: D1, D2, ..., Dn. In SI these would be LengthDim, MassDim, etc.

2) Define facets and manipulators for input/output of values in this dimensional space. This is the only place where units come into play. I believe that there is really no need to have all this mm/mkm/nm stuff in code. A simple Length should be used.

3) Define all sensible conversions/relations to/with other dimensional spaces. For example, if we already have SI and are going to define a relativistic model, we can state that Length and Time in the relativistic model can be treated as Length in the SI model, and define a conversion coefficient (if our relativistic model assumes parsecs as its native Length unit, then this coefficient would map parsecs to meters). Note that this is not required if we have no need to cooperate with SI.

4) Define all desired value types, as in the following example:

namespace relativistic {
    typedef quantity<double, LengthDim> Length;
}

5) Use it in your code:

int main()
{
    using namespace relativistic;
    Length l1, l2;
    std::cin >> l1;           // in parsecs
    std::cin >> kilo >> l2;   // in kiloparsecs
    Length sum = l1 + l2;
    //Length sum = l1 * l2;   // error
    std::cout << giga << sum; // outputs "xxx GParsecs"
}

Of course, the library would have many different dimensional spaces out of the box, not only SI. Typically the user would be involved only in the final step (5) of this process, but the library would provide primitives to make life as easy as possible for those who cannot escape steps 1-4.

So far, dimensions were defined as compile-time entities, whose only influence at run time is on output. But run-time dimensional analysis can be very useful too. It means that a Boost.Dimensional library should/can (note that this item is optional, though) provide it. It can be useful in validating formulas originating from user input at run time, and in real dimensional analysis.
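To make the compile-time side concrete, here is a minimal sketch (hypothetical names, not the interface proposed above): multiplication adds dimension exponents, division subtracts them, and adding mismatched dimensions simply fails to compile.

#include <iostream>

// A quantity carrying Mass/Length/Time exponents in its type.
template <int M, int L, int T>
struct quantity
{
    explicit quantity(double v) : value(v) {}
    double value;
};

// Addition requires identical dimensions; anything else won't compile.
template <int M, int L, int T>
quantity<M, L, T> operator+(quantity<M, L, T> a, quantity<M, L, T> b)
{
    return quantity<M, L, T>(a.value + b.value);
}

// Multiplication adds exponents; division subtracts them.
template <int M1, int L1, int T1, int M2, int L2, int T2>
quantity<M1 + M2, L1 + L2, T1 + T2>
operator*(quantity<M1, L1, T1> a, quantity<M2, L2, T2> b)
{
    return quantity<M1 + M2, L1 + L2, T1 + T2>(a.value * b.value);
}

template <int M1, int L1, int T1, int M2, int L2, int T2>
quantity<M1 - M2, L1 - L2, T1 - T2>
operator/(quantity<M1, L1, T1> a, quantity<M2, L2, T2> b)
{
    return quantity<M1 - M2, L1 - L2, T1 - T2>(a.value / b.value);
}

typedef quantity<0, 1,  0> length;        // L
typedef quantity<0, 1, -1> velocity;      // L/T
typedef quantity<0, 1, -2> acceleration;  // L/T^2
typedef quantity<0, 0,  0> dimensionless;

int main()
{
    length        H(1.0);
    velocity      V(2.0);
    acceleration  G(9.8);
    dimensionless pi_group = H * G / (V * V); // H*G/V^2, dimension cancels
    std::cout << pi_group.value << '\n';
    // length bad = H + V;  // would not compile: dimensions differ
    return 0;
}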
Below is a description of what I mean by real dimensional analysis.

Consider a simple experiment: drop a ball of mass M from a table whose height is H, with horizontal velocity V (the gravitational acceleration is G). The question is: how does the horizontal distance L from the initial ball position to the point where it'll first touch the floor depend on the parameters {M, H, V, G}? In other words, we need to find a function f such that L ~ f(M, H, V, G). Almost everyone knows how to solve it using Newtonian mechanics, but we can do something even without it, by using only dimensional analysis:

1) Is it possible to construct a dimensionless value from the given parameters?

M^m * H^h * V^v * G^g ~ Mass^m * Length^h * (Length/Time)^v * (Length/Time^2)^g == 1
<=> m == 0 && h + v + g == 0 && v + 2*g == 0
<=> m == 0 && h == g && v == -2*g

We can conclude that it is H * G / V^2.

2) Let's try to construct a Length:

M^m * H^h * V^v * G^g ~ Mass^m * Length^h * (Length/Time)^v * (Length/Time^2)^g == Length
<=> m == 0 && h + v + g == 1 && v + 2*g == 0
<=> m == 0 && h == g + 1 && v == -2*g
=> H^(g+1) * G^g / V^(2*g), e.g. H^2 * G / V^2 for g == 1

Now we can conclude that L = (H^(g+1) * G^g / V^(2*g)) * f(H * G / V^2), where f is an arbitrary function and g can be any rational number. If g is set to -1/2 it becomes:

L = (H * V^2 / G)^(1/2) * f(H * G / V^2)

Note that the exact answer is:

L = (2 * H * V^2 / G)^(1/2)

Also note that mass has disappeared from our equations completely, so using only dimensional analysis we can conclude that L is not affected by the mass of our ball. From this simple example it can be seen that dimensional analysis can be used to determine a functional relationship between experimentally observable and unknown parameters, which can be used to reduce problem complexity significantly. A utility that provided such functionality would be very useful for many experimentalists, and of course a library that could be used to implement such a utility would be welcome too.

I hope that you, Andy, can transform your PQS library into something like what was described here. And I want to thank you for making PQS, and thank you in advance if you'll be the one to make the Boost.Dimensional/Units library. Best regards, Oleg Abrosimov.

On Fri, Jun 09, 2006 at 12:13:39PM -0500, David Greene wrote:
That said, I very much want to encourage further work on this library. It _is_ very important. I'm disappointed that Andy does not seem committed to it. The volume of feedback indicates that there's a lot of interest.
It is very hard for a volunteer to put a life-time of work into a library when he doesn't even know whether or not it will be accepted. I've followed the review of PQS with great interest, and was amazed by Andy's patience and politeness. Yet the end result of the review is a list of things that are "wrong" with the "currently unusable library". No wonder that he wisely thinks: I'm wasting my time here. I think it's smart to bail out before wasting MORE time.

Here is how I think that a review procedure could be improved:

1) It should be divided into at least five votes:

A. Is the concept ok? Do we want SUCH a library in boost?
B. Is the presented library a good starting point, or do we think we should start from scratch?
C. Is the presented API of the library on the right track?
D. Is the internal implementation on the right track?
E. Is the documentation good enough for a boost library?

2) If the answer is NO to any of the above questions, then either it should be accompanied by a constructive list of improvements (if *this* was added/changed THEN I would vote yes), or the vote shouldn't be counted. Then the author has something to work with. Such a procedure would be a motivation: if I continue to work on it, and add this, improve that, change this, then it WILL be accepted.

3) A library should be accepted as soon as it is "good enough". Nothing motivates a volunteer open source coder more than having their code already in the CVS repository and starting to build a userbase. If a library is added when it's at 80%, then you can almost be SURE that as a result the author will carry it to 99% all by himself, without needing any 'pressure' from others.

-- Carlo Wood <carlo@alinoe.com>

Carlo Wood wrote:
On Fri, Jun 09, 2006 at 12:13:39PM -0500, David Greene wrote:
That said, I very much want to encourage further work on this library. It _is_ very important. I'm disappointed that Andy does not seem committed to it. The volume of feedback indicates that there's a lot of interest.
It is very hard for a volunteer to put a life-time of work into a library when he doesn't even know whether or not it will be accepted.
Not every library in Boost has been accepted the first go-'round. Many prominent libraries such as Serialization had to iterate.
I've followed the review of PQS with great interest, and was amazed by Andy's patience and politeness. Yet, the end result of the review is a list of things that are "wrong" with the "currently unusable library". No wonder that he wisely thinks: I'm wasting my time here. I think it's smart to bail out before wasting MORE time.
Every single message I've seen that points out needed changes has included the specific criteria the reviewer wants improved. I don't know how it can get more constructive than that. All of the messages have encouraged further work on the library, noting that it is a very important domain to cover. I have said this multiple times.
Here is how I think that a review procedure could be improved: 1) It should be divided into at least five votes: A. Is the concept ok? Do we want SUCH a library in boost?
It's pretty clear that the consensus for pqs is "yes."
B. Is the presented library a good starting point, or do we think we should start from scratch?
Here it's a little harder to tell. From what Andy has said, it sounds like the changes can be accommodated without starting over. A reviewer often doesn't have the context to make such a determination.
C. Is the presented API of the library on the right track?
There have been some disagreements on this for pqs but again, people have been very clear about what they're looking for.
D. Is the internal implementation on the right track?
Ditto.
E. Is the documentation good enough for a boost library?
This has been made very clear and Andy has graciously accepted the suggested documentation changes.
2) If the answer is NO to any of the above questions, then either it should be accompanied by a constructive list of improvements (if *this* was added/changed THEN I would vote yes), or the vote shouldn't be counted. Then the author has something to work with.
Every single official review for pqs (and almost all non-official reviews) has included exactly these items. Boost reviewers are very specific. Sometimes that's taken as excessive criticism, but it's not meant to be. As Dave A. said, it's much better to get a clear "no" with detailed explanations than a "yes" with vague statements about areas of improvement.
Such a procedure would be a motivation: if I continue to work on it, and add this, improve that, change this, then it WILL be accepted.
I stated such things in my review, as did others.
3) A library should be accepted as soon as it is "good enough". Nothing motivates a volunteer open source coder more than having their code already in the CVS repository and starting to build a userbase. If a library is added when it's at 80%, then you can almost be SURE that as a result the author will carry it to 99% all by himself, without needing any 'pressure' from others.
The question, of course, is what is "good enough." In my view, pqs is not yet quite "good enough" because it's missing some important basic functionality. Andy indicates that it can be added without starting over. I'd like to see a little progress toward that before accepting pqs into Boost. As I told Andy in another post, it doesn't have to be a complete implementation of everything (other unit systems, etc.). But it does have to contain the foundation to complete the work. Oleg has some very good ideas about separating units from dimensional analysis that are worth considering. This might require a more fundamental change to the library. I actually prefer pqs' user interface to Oleg's, but some sort of hybrid might be very interesting indeed. -Dave

David Greene <greened@obbligato.org> writes:
E. Is the documentation good enough for a boost library?
This has been made very clear and Andy has graciously accepted the suggested documentation changes.
Andy has indeed graciously accepted criticism of the documentation, for which I commend him. What's missing for me is a clear intention to actively pursue better docs himself, as opposed to being willing to accept specific edits that other people happen to suggest. If we leave the quality of our documentation (or code, for that matter) up to people who rewrite it for us, we won't have much quality at all. IMO the library author has to be willing to take responsibility for making the docs work; any help from the outside is a bonus. <soapbox> Learning to write good documentation isn't easy, and we need to work hard at helping people to learn that skill. That said, it *has* to be a "teach a man to fish" sort of thing, because at the end of the day, there are just too many docs to be written. </soapbox> -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
David Greene <greened@obbligato.org> writes:
E. Is the documentation good enough for a boost library?
This has been made very clear and Andy has graciously accepted the suggested documentation changes.
Andy has indeed graciously accepted criticism of the documentation, for which I commend him.
What's missing for me is a clear intention to actively pursue better docs himself, as opposed to being willing to accept specific edits that other people happen to suggest. If we leave the quality of our documentation (or code, for that matter) up to people who rewrite it for us, we won't have much quality at all. IMO the library author has to be willing to take responsibility for making the docs work; any help from the outside is a bonus.
There's an alternative: convincing someone other than the programmer to become the long-term documenter of the lib. Producing quality documentation for a library is a challenging and rewarding task, and distributing responsibilities among several people might work better than expecting authors to excel at coding as well as documenting. Now, not that we have a pool of aspiring documenters, but if we publicized the position a little some volunteers might appear, for specific libs at least. Joaquín M López Muñoz Telefónica, Investigación y Desarrollo

Joaquín Mª López Muñoz wrote:
There's an alternative: convincing someone other than the programmer to become the long-term documenter of the lib. Producing quality documentation for a library is a challenging and rewarding task, and distributing responsibilities among several people might work better than expecting authors to excel at coding as well as documenting. Now, not that we have a pool of aspiring documenters, but if we publicized the position a little some volunteers might appear, for specific libs at least.
Joaquín M López Muñoz Telefónica, Investigación y Desarrollo
Actually, if you check the Wiki, you will find a list of people who have offered to assist with documentation for libraries. John Phillips

Joaquín Mª López Muñoz <joaquin@tid.es> writes:
David Abrahams wrote:
David Greene <greened@obbligato.org> writes:
E. Is the documentation good enough for a boost library?
This has been made very clear and Andy has graciously accepted
the suggested documentation changes.
Andy has indeed graciously accepted criticism of the documentation,
for which I commend him.
What's missing for me is a clear intention to actively pursue better
docs himself, as opposed to being willing to accept specific edits
that other people happen to suggest. If we leave the quality of our
documentation (or code, for that matter) up to people who rewrite it
for us, we won't have much quality at all. IMO the library author
has to be willing to take responsibility for making the docs work; any
help from the outside is a bonus.
There's an alternative: convincing someone else other than the
programmer to become the long-term documenter of the lib.
Sure, that's fine, if the person presenting the library as a Boost submission does it, and before the review. But then, if that's handled, the docs will probably have been cleaned up well before the review starts.
Producing quality documentation for a library is a challenging and
rewarding task and distributing responsibilities among several
people might work better than expecting authors to excel at coding
as well as documenting. Now, not that we have a pool of aspiring
documenters, but if we publicized the position a little some
volunteers might appear, for specific libs at least.
Tried that already; we need someone to take a leadership position in documentation. There was a guy we appointed to be the "documentation wizard" last year but he disappeared. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams"
David Greene <greened@obbligato.org> writes:
E. Is the documentation good enough for a boost library?
This has been made very clear and Andy has graciously accepted the suggested documentation changes.
Andy has indeed graciously accepted criticism of the documentation, for which I commend him.
Your criticism mainly concerned the C++ Concepts section of the documentation for PQS. What might help is some examples of what you consider good C++ Concept documentation.
What's missing for me is a clear intention to actively pursue better docs himself, as opposed to being willing to accept specific edits that other people happen to suggest. If we leave the quality of our documentation (or code, for that matter) up to people who rewrite it for us, we won't have much quality at all.
I haven't asked and don't expect anyone else to write the PQS documentation for me, FWIW. I have stated that a truly generic quantities library (encompassing all unit systems) is beyond my skill and knowledge, and that someone else would need to write that if it is the requirement for a Boost Quantities library. I would be happy to do what I could to help in the SI area, though, in that case.
IMO the library author has to be willing to take responsibility for making the docs work; any help from the outside is a bonus.
I agree with that!

FWIW Here are some rough notes to self regarding the PQS documentation.

*Non C++ concepts (definition of terms).* Remove C++ specific and especially PQS library implementation details from here. Move this section to the back of the docs.

*C++ Concepts* Some entities are metafunctions hiding as Concepts. Look at other C++ concept documentation and see how it works. Link to headers.

*Writing Tools.* Frankly I have had some problems with QuickBook. This is partly because I used an early version - problems with links, layout, features, formatting, some bugs; partly because it provides an alien layout; partly issues with not being able to integrate a map when required and not being able to integrate html, Javascript etc. Maybe newer QuickBook is better. Maybe try a different html generator. Maybe go back to raw html. OTOH maybe a bad workman blames his tools ...

*Getting started section.* Basically seems to be acceptable. Try to improve the examples and pick up on comments made during the review. Consistent examples, copy-paste to code, show output, link to actual code etc.

*Informal semantics of Operations section* I am repeating myself 3 times showing the functionality of the t1_quantities operations: first in the Getting Started section, second in the Informal Semantics section, finally in the C++ Concepts section. This actually works well for users concentrating on the Getting Started section because it's light, but I wonder if I can somehow combine the informal semantics with the C++ Concepts without getting very tedious indeed.

*Synopsis* Unfinished. Maybe move this forward so users can see it after the Getting Started section.

*Overall Layout.* Docs are very incomplete; many sections missing. Lose pdf compatibility. Try moving away from the serial layout back to the preferred star/hierarchical layout. As always, use diagrams, not text, where possible (especially when adding Geometry etc). Add some larger, more ambitious examples. Link to the code examples. Show more hints and tricks (such as Typeof when available). Show alternative usages/views (i.e. 'Jesper Schmidt's'/'SIunits' style) other than the Simple Interface shown in the Getting Started section. Mechanisms for switching quantities/floats for checking without loss in performance etc.

That's it so far and it may all change of course. I must spend another good few sessions rereading all the reviews and comments too.

regards Andy Little

"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
"David Abrahams"
David Greene <greened@obbligato.org> writes:
E. Is the documentation good enough for a boost library?
This has been made very clear and Andy has graciously accepted the suggested documentation changes.
Andy has indeed graciously accepted criticism of the documentation, for which I commend him.
Your criticism mainly concerned the C++ Concepts section of the documentation for PQS. What might help is some examples of what you consider good C++ Concept documentation.
The C++ standard does a pretty decent job. There's also the SGI STL website. There's also the Boost Graph library. The "new iterator concepts" document in the iterator library docs does pretty well. And you can always start at http://www.boost.org/more/generic_programming.html#concept.
What's missing for me is a clear intention to actively pursue better docs himself, as opposed to being willing to accept specific edits that other people happen to suggest. If we leave the quality of our documentation (or code, for that matter) up to people who rewrite it for us, we won't have much quality at all.
I haven't asked and don't expect anyone else to write the PQS documentation for me, FWIW.
I didn't think you had, but I was trying to make it clear in general that graciously accepting criticism isn't enough.
I have stated that a truly generic quantities library ( encompassing all unit systems) is beyond my skill and knowledge and that someone else would need to write that if that is the requirement for a Boost Quantities library.
Let me be very clear: my complaint was not with the level of generality of the library, if that's what you mean by "truly generic;" it was with the quality of the specification.
I would be happy to do what I could to help in the SI area though, in that case.
IMO the library author has to be willing to take responsibility for making the docs work; any help from the outside is a bonus.
I agree with that!
FWIW Here are some rough notes to self regarding the PQS documentation
*Non C++ concepts (definition of terms).* Remove C++ specific and especially PQS library implementation details from here. Move this section to the back of the docs.
*C++ Concepts* Some entities are metafunctions hiding as Concepts. Look at other C++ concept documentation and see how it works. Link to headers.
Good. Let me stress again that you shouldn't underestimate the value of writing concept checking classes and archetypes for your library (see Boost.ConceptCheck). Those will lead almost directly to coherent concept documentation.
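For instance, a rough sketch of what a checking class for AbstractQuantity might look like in the Boost.ConceptCheck style (the required members here are assumptions for illustration, not the library's actual requirements):

#include <boost/concept_check.hpp>

template <class Q>
struct AbstractQuantityConcept
{
    // Naming the required associated types is itself the check:
    // instantiation fails if a model doesn't provide them.
    typedef typename Q::dimension dimension;
    typedef typename Q::id        id;

    void constraints() {}  // purely compile-time requirements here
};

// An example model, and the check itself:
struct length_quantity { struct dimension {}; struct id {}; };

int main()
{
    boost::function_requires< AbstractQuantityConcept<length_quantity> >();
    return 0;
}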
*Writing Tools.*
Frankly I have had some problems with QuickBook. This is partly because I used an early version - problems with links, layout, features, formatting, some bugs; partly because it provides an alien layout; partly issues with not being able to integrate a map when required and not being able to integrate html, Javascript etc. Maybe newer QuickBook is better. Maybe try a different html generator. Maybe go back to raw html.
OTOH maybe a bad workman blames his tools ...
*Getting started section.* Basically seems to be acceptable. Try to improve the examples and pick up on comments made during the review. Consistent examples, copy paste to code, show output, link to actual code etc.
*Informal semantics of Operations section* I am repeating myself 3 times showing the functionality of the t1_quantities operations: first in the Getting Started section, second in the Informal Semantics section, finally in the C++ Concepts section. This actually works well for users concentrating on the Getting Started section because it's light, but I wonder if I can somehow combine the informal semantics with the C++ Concepts without getting very tedious indeed.
*Synopsis* Unfinished. Maybe move this forward so users can see it after getting started section.
*Overall Layout.* Docs are very incomplete; many sections missing. Lose pdf compatibility. Try moving away from the serial layout back to the preferred star/hierarchical layout. As always, use diagrams, not text, where possible (especially when adding Geometry etc). Add some larger, more ambitious examples. Link to the code examples. Show more hints and tricks (such as Typeof when available). Show alternative usages/views (i.e. 'Jesper Schmidt's'/'SIunits' style) other than the Simple Interface shown in the Getting Started section. Mechanisms for switching quantities/floats for checking without loss in performance etc.
That's it so far and it may all change of course. I must spend another good few sessions rereading all the reviews and comments too.
regards Andy Little
-- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" wrote
"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
Your criticism mainly concerned the C++ Concepts section of the documentation for PQS. What might help is some examples of what you consider good C++ Concept documentation.
The C++ standard does a pretty decent job. There's also the SGI STL website. There's also the Boost Graph library. The "new iterator concepts" document in the iterator library docs does pretty well. And you can always start at http://www.boost.org/more/generic_programming.html#concept.
I have had a look at these sources and I will look at them in more detail; however, they seem to represent only Concepts with runtime requirements. Many of the concepts used in the PQS library have only compile-time requirements. The only example of documentation for this type of Concept that I know of is MPL. (I assume you see MPL as a good example of compile-time-only concept documentation.) PQS uses both forms of Concept. Is there any means or language convention to distinguish the two forms? My current intention is to put the compile-time requirement Concepts under a separate section from the runtime requirement Concepts. regards Andy Little

"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
"David Abrahams" wrote
"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
Your criticism mainly concerned the C++ Concepts section of the documentation for PQS. What might help is some examples of what you consider good C++ Concept documentation.
The C++ standard does a pretty decent job. There's also the SGI STL website. There's also the Boost Graph library. The "new iterator concepts" document in the iterator library docs does pretty well. And you can always start at http://www.boost.org/more/generic_programming.html#concept.
I have had a look at these sources and I will look at them in more detail; however, they seem to represent only Concepts with runtime requirements.
I don't think so.

X a(b);

that the syntax is valid is a compile-time requirement. Any associated semantics are runtime requirements.

std::iterator_traits<T>::value_type

100% compile-time requirement.
Many of the concepts used in the PQS library have only compile-time requirements. The only example of documentation for this type of Concept that I know of is MPL. (I assume you see MPL as a good example of compile-time-only concept documentation.)
Yes, **in the context of the MPL**. There is a background assumption that nested members of templates are types and not values, which you can't reasonably make in the PQS context.
PQS uses both forms of Concept. Is there any means or language convention to distinguish the two forms?
There are not really two different forms AFAICT.
My current intention is to put the compile-time requirement Concepts under a separate section from the runtime requirement Concepts.
Almost every concept you write that has runtime requirements also has compile-time requirements, so I don't know if this division makes much sense. But I'm much more concerned with the contents of the concept descriptions than the order in which they're presented. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Hi David, Firstly I should qualify the following by saying that I haven't yet spent as long as I should have on the subject of Concept documentation, and am continuing to RTM(s). "David Abrahams" wrote
"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
"David Abrahams" wrote
"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
Your criticism mainly concerned the C++ Concepts section of the documentation for PQS. What might help is some examples of what you consider good C++ Concept documentation.
The C++ standard does a pretty decent job. There's also the SGI STL website. There's also the Boost Graph library. The "new iterator concepts" document in the iterator library docs does pretty well. And you can always start at http://www.boost.org/more/generic_programming.html#concept.
I have had a look at these sources and I will look at them in more detail; however, they seem to represent only Concepts with runtime requirements.
I don't think so.
X a(b);
that the syntax is valid is a compile-time requirement. Any associated semantics are runtime requirements.
std::iterator_traits<T>::value_type
100% compile-time requirement.
Many of the concepts used in the PQS library have only compile-time requirements. The only example of documentation for this type of Concept that I know of is MPL. (I assume you see MPL as a good example of compile-time-only concept documentation.)
Yes, **in the context of the MPL**. There is a background assumption that nested members of templates are types and not values, which you can't reasonably make in the PQS context.
Is there not a universal syntax for C++ concept documentation to which everyone should conform or in which every construct can be expressed?
PQS uses both forms of Concept. Is there any means or language convention to distinguish the two forms?
There are not really two different forms AFAICT.
OK. I have difficulties with that because it seems to contradict your remark about **in the context of MPL**. MPL is exclusively a compile-time library. The TMP in PQS borrows heavily from MPL. If I implement more planned features of the PQS library, I will need runtime and compile-time versions of many Concepts. An example of this is (let's call it) the DimensionA Concept, which in the TMP book's Dimensional Analysis example is modelled by the mpl::vector of dimensional exponents. In PQS it is also desirable to have a runtime version, this time modelled (say, for simplicity) by something encapsulating a std::vector<int>. This would be a model of (let's call it) DimensionB. The two Concepts need to be distinguished. I'm not sure if there is a naming convention in use to distinguish the two. I thought of a Meta prefix for the DimensionA category (MetaDimension) and none for the latter (just Dimension). OTOH I thought of a namespace prefix, meta::Dimension, but I think that would be unconventional. I read you weren't happy about use of the word meta, but it seems to have the right connotations to me. FWIW, Const might do, but I think the difference between these two is not well described by that.
My current intention is to put the compile-time requirement Concepts under a separate section from the runtime requirement Concepts.
Almost every concept you write that has runtime requirements also has compile-time requirements, so I don't know if this division makes much sense. But I'm much more concerned with the contents of the concept descriptions than the order in which they're presented.
I have put my latest effort at getting one PQS Concept (AbstractQuantity) right into the Boost vault in Physical Quantities Units/AbstractQuantity.html. http://tinyurl.com/7m5l8

BTW I am aware the formatting is poor, and external links go nowhere, but currently I am just trying to get the syntax right (or, more accurately, into the right ballpark).

FWIW in rewriting the PQS library, I am now aiming to make the Concepts wide enough so that the examples using mpl::vectors in the TMP book should work, though they will need specialisations (overloads?) of the metafunctions in the PQS Requirements to work, of course. I guess this may have been what you were getting at in your review of PQS? IOW some parts of the PQS library provide models of the Concepts, while other parts (metafunctions) act in terms of Concepts, so they can be applied to models other than the ones supplied.

------------

The following is some questions I am trying to answer. Maybe they just reflect my current confusion over this whole subject. Anyway, I thought it might be interesting to see my struggles as a newcomer to this subject. I am currently looking over examples and may find answers to some questions there.

Questions resulting from trying to do the AbstractQuantity Concepts in the above-mentioned AbstractQuantity.html:

Associated metafunctions

1) Should I use the term "return type" with metafunctions? e.g.

typedef f<T>::type result;

I see the above in terms of a function f returning a type, which IMO is quite acceptable in the TMP domain. I would then have a Returns clause in the specification describing the return type of f. It seems to be used this way in the MPL docs anyway.

2) Overloaded metafunctions. I have overloaded? specialised? the metafunctions in a similar way to how runtime functions are overloaded. An obvious example is the pqs::meta::binary_operation metafunction in the sample. This is meant to be entirely equivalent to runtime operator functions such as operator+ (std::string + std::string, int + int; IOW it's just an operator). Now, however, I need to express the overload of the Concept arguments. The way it seems to be done is to put in some preamble (sometimes called Notation?) that in the following, Lhs and Rhs are models of Concept X. The problem is that this is remote from the per-item description:

binary_operation<Lhs,Op,Rhs>

IMO it would make more sense to say e.g.

binary_operation<AbstractQuantity Lhs,Op,AbstractQuantity Rhs>

3) Why no BooleanConstant Concept? boost::type_traits uses true_type or false_type. MPL uses the IntegralConstant Concept. Should there be a (say) BooleanConstant Concept? (Also IMO there should then be a True Concept and a False Concept.) This would be more generic and could apply to MPL and type_traits return types.

4) Am I right in saying there is a fair amount of variability in Concept documentation? It strikes me that there could be a more abstract syntax to describe Concepts. Many times I want to say, for example: T Is True, meaning that in code where true_ is some model of True, then one (among several) ways for a type t to model this Concept is:

struct c : true_ {};

regards Andy Little
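On question 2: metafunction "overloading" is conventionally expressed by partial specialization on the argument types, much as operator+ is overloaded on its operand types at run time. A minimal sketch (hypothetical names, not the PQS code):

#include <boost/mpl/int.hpp>

struct plus_op;   // operation tag, analogous to the '+' in operator+

// Primary template declared but not defined; each supported operation
// is "overloaded" via partial specialization.
template <class Lhs, class Op, class Rhs>
struct binary_operation;

// Specialization: addition of two compile-time integer exponents.
template <int L, int R>
struct binary_operation< boost::mpl::int_<L>, plus_op, boost::mpl::int_<R> >
{
    typedef boost::mpl::int_<L + R> type;  // the metafunction's "return"
};

// binary_operation< boost::mpl::int_<2>, plus_op, boost::mpl::int_<3> >::type
// is boost::mpl::int_<5>.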

Andy, I'm impressed with the thought and effort you're putting into this. Keep it up. "Andy Little" <andy@servocomm.freeserve.co.uk> writes:
Is there not a universal syntax for C++ concept documentation to which everyone should conform or in which every construct can be expressed?
Every construct? Probably not. I can't prove that the existing idioms cover everything, including constraints I've never seen. However, they cover most things and you're not doing anything so exotic that it shouldn't fit. FWIW, once we have concept support in the language we will be using pseudosignatures rather than valid expressions to express syntactic constraints, so we can expect that to change. In the meantime, though, the things that can be expressed using established conventions should be so expressed.
PQS uses both forms of Concept. Is there any means or language convention to distinguish the two forms?
There are not really two different forms AFAICT.
OK. I have difficulties with that because it seems to contradict your remark about **in the context of MPL**. MPL is exclusively a compile-time library. The TMP in PQS borrows heavily from MPL.
All that means is that when you're reading the docs for an MPL metafunction and you see foo<...>::type and have a "Type" column header, you know you're dealing with a type member and not a value member, and you don't need further disambiguation. That's ALL it means. So if you want to avoid ambiguity there, say your "compile-time" templates are MPL metafunctions. Well, you'd better make sure they _are_ metafunctions first :)
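For illustration, here is that convention in miniature, using plain boost::mpl; the typedef name "three" is invented for this sketch and is not PQS code:

    #include <boost/mpl/int.hpp>
    #include <boost/mpl/plus.hpp>
    #include <boost/static_assert.hpp>

    namespace mpl = boost::mpl;

    // An MPL metafunction is "invoked" by reading its nested ::type;
    // that is what a "Type" column in the MPL docs is telling you.
    typedef mpl::plus< mpl::int_<1>, mpl::int_<2> >::type three;

    // The ::value member belongs to the *result* (an IntegralConstant),
    // not to the metafunction itself.
    BOOST_STATIC_ASSERT(( three::value == 3 ));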
If I implement more of the planned features of the PQS library, I will need runtime and compile-time versions of many Concepts. An example of this is (let's call it) a DimensionA Concept, which in the TMP book's Dimensional Analysis example is modelled by the mpl::vector of dimensional exponents.
In PQS it is also desirable to have a runtime version, this time modelled (say, for simplicity) by something encapsulating a std::vector<int>. This would be a model of (let's call it) DimensionB.
The two Concepts need to be distinguished, but I'm not sure whether there is an established naming convention for doing so.
Static vs. Dynamic.
I thought of a Meta prefix for the DimensionA category (MetaDimension) and none for the latter (just Dimension).
It's not really meta.
OTOH I thought of a namespace prefix, meta::Dimension, but I think that would be unconventional. I read that you weren't happy about use of the word "meta", but it seems to have the right connotations to me.
Not me. Meta implies that it will be operating on itself.
FWIW, Const might do but I think the difference between these two is not well described by that.
static :)
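To make the DimensionA/DimensionB distinction above concrete, a rough sketch; the names are illustrative only and not PQS code, and the static form follows the TMP book's dimensional-analysis example:

    #include <boost/mpl/vector_c.hpp>
    #include <vector>

    // Static dimension (DimensionA): the base-dimension exponents are
    // part of the type and checked at compile time. For brevity only
    // mass, length and time are shown: here mass^0 length^1 time^-1,
    // i.e. velocity.
    typedef boost::mpl::vector_c<int, 0, 1, -1> velocity_dimension;

    // Dynamic dimension (DimensionB): the same exponents held as
    // run-time data, so they can come from user input or a file.
    struct dynamic_dimension
    {
        std::vector<int> exponents; // e.g. {0, 1, -1} for velocity
    };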
My current intention is to put the compile-time requirement Concepts under a separate section from the runtime requirement Concepts.
Almost every concept you write that has runtime requirements also has compile-time requirements, so I don't know if this division makes much sense. But I'm much more concerned with the contents of the concept descriptions than the order in which they're presented.
I have put my latest effort at getting one PQS Concept (AbstractQuantity) right, into the Boost vault in Physical Quantities Units/ AbstractQuantity.html. http://tinyurl.com/7m5l8
BTW I am aware the formatting is poor and the external links go nowhere, but currently I am just trying to get the syntax right (or, more accurately, into the right ballpark).
This concerns me:

  A model of AbstractQuantity is also either a model of NamedQuantity or of AnonymousQuantity. Types modelling NamedQuantity are usually created by the programmer, while types modelling AnonymousQuantity usually arise as temporary results of calculations (somewhat analogous to L-values and R-values).

AFAIK there's no precedent for a concept that sometimes refines one concept and sometimes refines another, but always refines one of them, and I think it will make things very difficult if you continue that way.

What is an Entity Relationship Diagram? Do you possibly mean a diagram of concept refinement? If so, why not just say that?

I don't know what I'm supposed to see below "Notation," but it's completely unfamiliar to me. Is there supposed to be a table somewhere? It looks like:

  Notation

  In the AbstractQuantity Synopsis and Functions tables, Lhs, Rhs and Q are types that are models of AbstractQuantity, and R is a type that is a model of Rational.

  AbstractQuantity Synopsis

  concept AbstractQuantity{};

  Requirements:

  meta::binary_operation<Lhs,meta::plus,Rhs>;
  meta::binary_operation<Lhs,meta::minus,Rhs>;
  meta::binary_operation<Lhs,meta::times,Rhs>;
  meta::binary_operation<Lhs,meta::divides,Rhs>;
  meta::binary_operation<Q,meta::pow,R>;
  meta::dimension<Q>;
  meta::dimensionally_equivalent<Lhs,Rhs>;
  meta::is_dimensionless<Q>;
  meta::is_named_quantity<Q>;
  meta::is_anonymous_quantity<Q>;
  meta::is_same_quantity<Lhs,Rhs>;

  Notes

  Math on the associated Dimension of an AbstractQuantity works logarithmically, so addition and subtraction of the DimensionalExponents of a Dimension transform to no-ops (the types of dimension<Lhs> and dimension<Rhs> are required to be dimensionally equivalent), multiplication to addition, division to subtraction, and raising to a power to multiplication.

  Functions

  meta::binary_operation<Lhs,meta::plus,Rhs>

  Description: Returns the type resulting from addition of two types modelling AbstractQuantity.
  Expression:  meta::binary_operation<Lhs,meta::plus,Rhs>::type
  Requires:    meta::dimensionally_equivalent<Lhs,Rhs> models True
  Returns:     A model of AbstractQuantity Q where meta::dimensionally_equivalent<Q,Lhs> models True and meta::dimensionally_equivalent<Q,Rhs> models True; and, if meta::is_same_quantity<Lhs,Rhs> models True, then meta::is_same_quantity<Q,Lhs> models True, else meta::is_anonymous_quantity<Q> models True.
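For what it's worth, one way the meta::binary_operation requirement above could be modelled is sketched below. Apart from that metafunction's name, everything here (the quantity type, the result) is an invented stand-in, not actual PQS code:

    namespace pqs { namespace meta {

    struct plus; // operation tag, analogous to runtime operator+

    // Primary template left undefined: an unsupported combination of
    // arguments is then a compile-time error.
    template <class Lhs, class Op, class Rhs>
    struct binary_operation;

    }} // namespace pqs::meta

    struct my_length; // a hypothetical model of AbstractQuantity

    namespace pqs { namespace meta {

    // A specialisation (not an overload) supplying the result type of
    // my_length + my_length; per the Returns clause above, adding the
    // same quantity to itself yields that same (named) quantity.
    template <>
    struct binary_operation<my_length, plus, my_length>
    {
        typedef my_length type;
    };

    }} // namespace pqs::meta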
FWIW, in rewriting the PQS library I am now aiming to make the Concepts wide enough that the examples using mpl::vectors in the TMP book should work, though of course they will need specialisations (overloads?) of the metafunctions in the PQS Requirements.
Sorry, not "of course," because you lost me. Make concepts wide? You mean the tables? Some examples from the TMP book should work with PQS? How?
I guess this may have been what you were getting at in your review of PQS? IOW, some parts of the PQS library provide models of the Concepts, while other parts (the metafunctions) act in terms of the Concepts and so can be applied to models other than the ones supplied.
Still lost.
------------
The following are some questions I am trying to answer. Maybe they just reflect my current confusion over this whole subject; anyway, I thought they might be interesting as a record of my struggles as a newcomer to it. I am currently looking over examples and may find answers to some of the questions there.
Questions resulting from trying to write the AbstractQuantity Concepts in the above-mentioned AbstractQuantity.html:
Associated metafunctions
1) Should I use the term "return type" with metafunctions?
e.g. typedef f<T>::type result;
I see the above in terms of a function f returning a type, which IMO is quite acceptable in the TMP domain. I would then have a Returns clause in the specification describing the return type of f. The term seems to be used this way in the MPL docs anyway.
I would do whatever the MPL reference manual does, as long as you make it clear that the template is a metafunction.
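Purely as an illustration of that style (this is not actual MPL or PQS documentation), a reference entry for the hypothetical f above might read:

    Expression:   f<T>::type
    Return type:  A type.
    Precondition: T is a model of SomeConcept.
    Semantics:    The result of applying f to T.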
2) Overloaded metafunctions.
I have overloaded? specialised?
Specialized of course. Where did you ever see the term "overload" applied to class templates? I understand that there's an analogy there, but it's not an exact one, and my whole point was that you shouldn't be inventing new naming conventions and notation in a domain where you don't yet feel very comfortable.
the metafunctions in a similar way to how runtime functions are overloaded. An obvious example is the pqs::meta::binary_operation metafunction in the sample. This is meant to be entirely equivalent to runtime operator functions such as operator+ (std::string + std::string, int + int; IOW it's just an operator).
Now, however, I need to express the overload on the Concept arguments.
Sorry, I'm lost again. Why not revert to standard terminology? What is a "concept argument?" Concepts are neither types nor templates (unless you mean concept checking classes, which I again highly recommend you write), so I don't see how they can either be or have arguments.
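For reference, a concept checking class in the Boost Concept Check Library style might look like the sketch below. The class and function names are invented, and it assumes (as the synopsis suggests) that meta::dimension is a metafunction with a nested ::type:

    #include <boost/concept_check.hpp>

    // Instantiating constraints() exercises the required expressions,
    // so a concept violation becomes a compile error at the point of use.
    template <class Q>
    struct AbstractQuantityConcept
    {
        void constraints()
        {
            // assumed associated metafunction from the synopsis
            typedef typename pqs::meta::dimension<Q>::type dim;
            // ... exercise the other required expressions here
        }
    };

    // Generic code then both documents and enforces its requirements:
    template <class Q>
    void do_physics(Q const&)
    {
        boost::function_requires< AbstractQuantityConcept<Q> >();
        // ...
    }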
The way it seems to be done is to state in some preamble (sometimes called "Notation"?) that, in the following, Lhs and Rhs are models of Concept X.
That's a familiar idiom before a concept requirements table, yes.
The problem is that that is remote from the per-item description:
binary_operation<Lhs,Op,Rhs>
IMO it would make more sense to say, e.g.:
binary_operation<AbstractQuantity Lhs,Op,AbstractQuantity Rhs>
Maybe it would (in fact something like that will be available with the language support for concepts), but as I said this is not the time to invent new notations. Get comfortable with the existing conventions first. If you wanted to look at ConceptGCC and actually write conforming new-style concepts, I'd find it hard to fault you... but I don't think that would be as useful to your readers, and for you I think that might be overreaching at this stage.
3) Why no BooleanConstant Concept?
boost::type_traits uses true_type or false_type. MPL uses the IntegralConstant Concept. Should there be a (say) BooleanConstant Concept?
Why? Do you have some code that needs an IntegralConstant whose values can only be true or false (i.e., the code won't work as expected if the value is 2)? If so, you might consider a BooleanConstant concept. But you might also consider just making the code work right when the value is 2.
(Also, IMO there should then be a True Concept and a False Concept.)
The existence of concepts is driven by the requirements of generic code. If you don't have code that requires a type's conformance to these properties, don't make a concept. Also, some concepts don't need to be separately named. For example, you can say "a nonzero-valued MPL IntegralConstant" or "a true-valued MPL IntegralConstant" when you mean the True concept. Whether it's worthwhile to make a separate True concept might depend on how often you find yourself needing it in your documentation.
This would be more generic and could apply to MPL and type_traits return types.
More generic how?
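As an aside on this point: one way to "make the code work right when the value is 2", as suggested above, is to normalise any IntegralConstant through a comparison. A minimal sketch; "to_bool" is an invented name:

    #include <boost/mpl/bool.hpp>

    // Accepts any MPL IntegralConstant and yields mpl::true_ or
    // mpl::false_, so a value of 2 behaves the same as true instead
    // of breaking code that expected exactly 0 or 1.
    template <class C>
    struct to_bool
    {
        typedef boost::mpl::bool_<(C::value != 0)> type;
    };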
4) Am I right in saying there is a fair amount of variability in Concept documentation?
Sorry, it's too subjective a question for me to answer definitively. I'd guess that most concept documentation exists at Boost. There is some poor concept documentation out there, and also some in here.
It strikes me that there could be a more abstract syntax to describe Concepts. Many times I want to say, for example:
T Is True, meaning that, in code where true_ is some model of True, one (among several) ways for a type t to model this Concept is
struct c : true_ {};
I usually say a "true-valued IntegralConstant" for that. You don't need to invent a more abstract syntax to describe that True concept. The standard requirements table and other notations will do just fine:

True Concept

Refines: IntegralConstant

Requirements: In the table below, T is a model of True.

+--------------------+--------------------+
|Expression          |Semantics           |
+====================+====================+
|T::value            |nonzero             |
+--------------------+--------------------+

-- Dave Abrahams Boost Consulting www.boost-consulting.com
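A model of that table can then be checked with an ordinary static assertion. Here "c" is the model from the "struct c : true_ {};" example above, with true_ taken to be mpl::true_ (an assumption for this sketch):

    #include <boost/mpl/bool.hpp>
    #include <boost/static_assert.hpp>

    typedef boost::mpl::true_ true_; // one possible model of True

    struct c : true_ {}; // models True by inheriting ::value

    // c meets the single requirement in the table: T::value is nonzero.
    BOOST_STATIC_ASSERT( c::value );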

"David Abrahams"
Almost every concept you write that has runtime requirements also has compile-time requirements, so I don't know if this division makes much sense. But I'm much more concerned with the contents of the concept descriptions than the order in which they're presented.
Thanks for the comments. I have just got hold of your and Alexey Gurtovoy's TMP book. It seems that studying chapter 5 is going to help quite a lot! regards Andy Little

"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
Andy has indeed graciously accepted criticism of the documentation, for which I commend him.
Your criticism mainly concerned the C++ Concepts section of the documentation for PQS. What might help are some examples of what you consider good C++ Concept documentation.
<snip> Sorry about the overquoting in my other reply to this message; it was user error in my mailer :( -- Dave Abrahams Boost Consulting www.boost-consulting.com

E. Is the documentation good enough for a boost library? ... ... If we leave the quality of our documentation (or code, for that matter) up to people who rewrite it for us, we won't have much quality at all. IMO the library author has to be willing to take responsibility for making the docs work; any help from the outside is a bonus.
It may be worth distinguishing between reference docs and tutorial docs. I agree that the library authors have to be responsible for the former (which, judging by the subject of this thread, is what we are talking about). But many times the tutorial documentation is better written by someone else, someone who can see the wood, not the trees. Incidentally, when I'm in the early stages of evaluating a library, the existence of more than one "how-to" article, and of articles written by project outsiders, is more important than just about anything else. Reference docs are useful but less important, as I can always look at the source if I really need to understand something. Darren

Darren Cook <darren@dcook.org> writes:
E. Is the documentation good enough for a boost library? ... ... If we leave the quality of our documentation (or code, for that matter) up to people who rewrite it for us, we won't have much quality at all. IMO the library author has to be willing to take responsibility for making the docs work; any help from the outside is a bonus.
It may be worth distinguishing between reference docs and tutorial docs.
I agree that the library authors have to be responsible for the former (which, judging by the subject of this thread, is what we are talking about).
But many times the tutorial documentation is better written by someone else, someone who can see the wood, not the trees.
Maybe, but the library author has to be responsible for both. Who does the actual writing of either one is not my concern.
Incidentally, when I'm in the early stages of evaluating a library, the existence of more than one "how-to" article, and of articles written by project outsiders, is more important than just about anything else. Reference docs are useful but less important, as I can always look at the source if I really need to understand something.
That's fine for you if you're comfortable with it, but it won't tell you anything about how to use the library correctly. It will just tell you how to do something that works with this particular version of this particular implementation. "Use the source, Luke" is not one of those open-source sayings we have traditionally encouraged around Boost. For Boost, coherent specification is at _least_ as important as working implementations. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Here is how I think the review procedure could be improved:

1) It should be divided into at least five votes:

A. Is the concept ok? Do we want SUCH a library in boost?
B. Is the presented library a good starting point, or do we think we should start from scratch?
C. Is the presented API of the library on the right track?
D. Is the internal implementation on the right track?
E. Is the documentation good enough for a boost library?
I like this - these are fairly orthogonal issues (except perhaps B, which C and D also address). They may deserve separate evaluations in a review, to make it more clear what should be done with the library from here. -- Leland

Carlo Wood <carlo@alinoe.com> writes:
On Fri, Jun 09, 2006 at 12:13:39PM -0500, David Greene wrote:
That said, I very much want to encourage further work on this library. It _is_ very important. I'm disappointed that Andy does not seem committed to it. The volume of feedback indicates that there's a lot of interest.
It is very hard for a volunteer to put a lifetime of work into a library when he doesn't even know whether or not it will be accepted.
I very much agree, and I appreciate the courage it takes to submit one's hard work to a Boost review.
I've followed the review of PQS with great interest, and was amazed by Andy's patience and politeness. Yet, the end result of the review is a list of things that are "wrong" with the "currently unusable library".
Now, now. The only message in the boost archive containing both "pqs" and "unusable" is this one: http://article.gmane.org/gmane.comp.lib.boost.devel/142241/match=pqs+unusabl... which makes no such accusation. To say nothing of the fact that nobody seems to have written "currently unusable library" before now. I don't think anyone has specifically said there was anything "wrong" with the library, either, though I might've missed that. If you're going to put things in quotes, especially as a critique of others' behavior, they ought to be actual quotations. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
"David Abrahams" wrote
"Andy Little" wrote
"David Abrahams" wrote
"Paul A Bristow" writes:
What design changes would persuade you to vote for this attempt?
I haven't looked at enough of it to know if there are other issues, but in order to resolve my problems with the issues I've raised:
Clarity and conformance to Boost/C++ standards and conventions.
To be honest, David, I am finding this quite difficult to handle. On the one hand, I think the PQS library is good; there seems to be interest and a need. On the other hand, at least unofficially, Boost is your party
I don't know what that means. I am one of several moderators; it's
Not even sure why I mentioned moderators.
not "mine."
Put it another way. You have put a lot of work into Boost. Your vote in a review is an order of magnitude more powerful than mine (and incidentally that is as it should be.
Not sure about either of those statements. Certainly it should be clear from my review that I only had a very brief look at the library, which ought to dilute my vote considerably.
OTOH if you were in my house and wanted to knock a wall down, your vote certainly wouldn't be anything like as powerful as mine!)
Yeah, but it's not "my house;" it's ours. I may have put in a few more years in construction than some other people here, it's true.
and my impression is that for whatever reason you wouldn't be too happy about this library becoming part of Boost.
No, not for "whatever reasons," for exactly the reasons I posted. It seems like you're not responding to what I wrote, but something else.
AFAIK your impressions have been formed not even by downloading the library itself but by downloading the pdf documentation,
Yes.
which I put there as some previous reviewers said they found it helpful to print it out. Once there, you headed for the two areas that other reviewers found poor
Hadn't read the other reviews.
and started slashing! (Pavel Vozenilek made the same overall point, but without needing to twist the knife.) That is the sum total of your review AFAICS.
It certainly wasn't my intention that my review contain any "knife twisting." On the other hand, it was intentionally pointed -- I wanted to make sure that it was well understood (failed obviously). I didn't mean my review to be hurtful; if it was, I'm sorry.
If this formed the total substance of a review of mine I would not feel justified in casting a vote at all.
Where concept documentation is concerned, IMO, a focused "no" with very specific critiques (even if it is weakened by being based only on a narrow view of the submission) is more valuable than a "yes" with a general request to improve the docs, especially if lots of other people are voting yes. In case it isn't obvious, I feel very strongly about the importance of the specification -- in many ways it's more important than the implementation -- and I want that to be taken seriously.
I would even be open to being convinced to change my vote, if the author exhibited sufficient interest in and responsiveness to my concerns.
The author's name is Andy, BTW.
Yes, I know. I was clumsily trying to make a more general statement about what I am doing with this vote. If I had wanted it to be personal, I'd have said "you" and not "Andy."
The point re using underscores is trivial. It was done because QuickBook won't accept '-' in link names. It sped things up slightly.
There may be a trivial reason for it, but it's not a trivial point. It has a big impact on comprehensibility.
The C++ concepts section is a mess. Sure, as I said to Pavel Vozenilek. It was the first time I had written this kind of documentation, and I found it difficult. I decided to spend time on other areas of the documentation before the review.
I understand that you decided other areas were more important, but I hope you can understand that sensible concept docs are a priority for me. BTW, if you are having trouble writing concept docs you can ask on the list for help with specific problems. I'd be happy to try to help.
I haven't looked at the code, but I really like the idea of what this library does, and it probably has a pretty nice interface -- at the code level.
Wow! That is encouraging. It would have been helpful to have that included in the review. It would have lightened the tone. As it stands I read every point made as negative.
Agreed, to soften my post I could have added my speculation about the interface being nice, but that really is just speculation. Aside from that I said essentially the same thing ("I think this is an important domain") in my original posting,
Coincidentally nor would I. The situation with PQS is that to do it justice would take more time than I am prepared to invest.
Wow. Why did you submit it?
It's a very good library, but I am too old to see the need to fight for every inch if the environment is hostile. That is a waste of energy. I have better things to do.
The environment is not hostile; just demanding. Actually, not the environment -- it might just be me. Boost doesn't always do what I want.
That's not funny at all, and it's not what I'd like at all. I'm not sure what gave you that impression. I thought I made it clear that "I hope we'll be able to accept a different version of this library" and also that my negative vote was made with regret.
FWIW I read that "different version" as implying a version of the library written by someone else. That was the impact. Re-reading it, I still get that impression. It's ambiguous and impersonal.
Sorry, it was meant to be impersonal (since it was critical -- I assumed a personal post would have been viewed as an attack -- that sure didn't work out well) but not ambiguous. You obviously invested a lot of effort; I hope we'll be able to accept _your_ library. That said, to get _my_ personal "yes" vote, I insist that certain things be cleaned up substantially. Of course you could choose to ignore me, but obviously I hope you won't, or I wouldn't have voted.
If you do decide to simply withdraw without making improvements, I'll be sorry.
OK, that is helpful, as were the encouraging comments above. OTOH I already did withdraw it in a mail to Fred Bertsch. I'm not quite sure about whether I can un-withdraw it or not. I will have to see what he says.
I'm sure we can convince him to come around. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams"
"Andy Little"
OK, that is helpful, as were the encouraging comments above. OTOH I already did withdraw it in a mail to Fred Bertsch. I'm not quite sure about whether I can un-withdraw it or not. I will have to see what he says.
I'm sure we can convince him to come around.
Yes. He says that it's OK. So, to clarify: I am not withdrawing PQS from the review. Apologies for the see-saw act. regards Andy Little

OK, that is helpful, as were the encouraging comments above. OTOH I already did withdraw it in a mail to Fred Bertsch. I'm not quite sure about whether I can un-withdraw it or not. I will have to see what he says.
I am extremely grateful to you, Andy, for not withdrawing the library. We'll continue with the review, then! :) The library may or may not pass the review, but I hope that the completed review process will at the very least help you improve the library. -Fred Bertsch

On 2006-06-06, fred bertsch <fred.bertsch@gmail.com> wrote:
The deadline for reviews of Andy Little's Physical Quantities System is the end of this Friday. (June 9)
So far, we haven't gotten many reviews. If you're thinking about writing one, it would be great if you could finish it up.
Well, you've got more now, but I would hope to get mine in sometime in the next few days [as somebody who has written one version of a similar library, in use every day for the last 4 years, and who was also somewhat (ahem) involved in the previous discussions]. However, with a 2-month-old baby this might be tricky, and with 24,000+ lines of code and 50-odd pages of docs I'm not sure I can do it justice (yes, I can pick holes, but I'd like to offer constructive criticism too). phil -- change name before "@" to "phil" for email

--- Phil Richards wrote:
However, with a 2 month old baby this might be tricky,
Congrats, anyway! I myself just got hired into my first real programming job, so I also won't have much time to submit a review while I'm getting up to speed. Cromwell D. Enage

Cromwell Enage said (on Wed, 7 Jun 2006 11:56:22 -0700 (PDT)):
So far, we haven't gotten many reviews. If you're thinking about writing one, it would be great if you could finish it up.
Well, you've got more now, but I would hope to get mine in sometime in the next few days. However, with a 2-month-old baby this might be tricky,
Congrats, anyway! I myself just got hired into my first real programming job, so I also won't have much time to submit a review while I'm getting up to speed.
Cromwell D. Enage
Well, then, considering the above, I think it would be wise to extend the deadline by a few days, or another week? It would be very sad if it got rejected because the reviewers didn't have enough time to grasp the whole library. And it is HUGE. If I follow correctly, we currently have 5 votes, of which one is abstaining, 2 negative and 2 positive. This is not good news (for me)... I really would like to see PQS in Boost - I can't wait to start using it in my application. -- Janek Kozicki

Janek Kozicki wrote:
It would be very sad if it got rejected because the reviewers didn't have enough time to grasp the whole library. And it is HUGE.
If I follow correctly, we currently have 5 votes, of which one is abstaining, 2 negative and 2 positive. This is not good news (for me)... I really would like to see PQS in Boost - I can't wait to start using it in my application.
What's preventing you from using it now? If it's just because it's not "officially" in Boost, then that's a poor criterion for using third-party code. The review process is to ensure that libraries in Boost have been thoroughly vetted and are of sufficient quality to serve the Boost community. It is not meant as a general test of whether one should use the library. In this case, I brought up some missing functionality that I consider critical to make the library as general as possible. Boost prides itself on defining not just libraries, but _reusable components_. Our goal should be to strive for this as much as possible. I use Boost because I don't want to reinvent the wheel. As pqs currently stands, it does not achieve that goal for me. I would have to implement something that looks much like pqs but doesn't have its rigid assumption of powers-of-10 prefixes. There is no shame in a Boost rejection. Many libraries have been rejected, changed, and accepted in a later review. -Dave

Hi Phil, "Phil Richards" wrote
Well, you've got more now, but I would hope to get mine in sometime in the next few days [as somebody who has written one version of a similar library in use everyday for the last 4 years, and also somewhat (ahem) involved in the previous discussions].
Yes. Thanks for that !
However, with a 2 month old baby this might be tricky, and with 24,000+ lines of code and 50 odd pages of docs I'm not sure I can do it justice (yes, I can pick holes, but I'd like to offer constructive criticism too).
No problem, and congratulations... regards Andy Little
participants (20)
- Andy Little
- Beman Dawes
- Boris Gubenko
- Carlo Wood
- Cromwell Enage
- Darren Cook
- David Abrahams
- David Greene
- Deane Yang
- fred bertsch
- Geoffrey Irving
- Gerhard Wesp
- Janek Kozicki
- Joaquín Mª López Muñoz
- John Maddock
- John Phillips
- Leland Brown
- Oleg Abrosimov
- Paul A Bristow
- Phil Richards