Re: [boost] [Review][PQS] Review deadline


"Michael Fawcett" wrote
On 6/8/06, Andy Little wrote:
Hi Michael,
-----Original Message-----
From: Michael Fawcett [mailto:michael.fawcett@gmail.com]
Sent: 08 June 2006 15:49
To: andy@servocomm.freeserve.co.uk
Subject: Re: [boost] [Review][PQS] Review deadline
A goal of pqs is to be usable in that sort of situation. To be really effective it will need a lot of supporting classes (matrix, vector, quat) though.
Did you plan on using Boost::uBLAS for this, along with Boost::quaternion? This seemed tangential enough to the PQS discussions that I didn't want to clutter up the mailing list, but I am definitely curious.
FWIW it's extremely important. I think it would be worth asking on the list. I don't think many people realise! I have discussed it in various replies but no one has asked me the direct question:
Neither of those libraries above will work with anything other than numeric types AFAIK. IOW where T is a type (simplifying and ignoring e.g. floating point promotion etc), it must satisfy:
T result = T() + T();
T result = T() * T();
(among others)
If T is a physical quantity, the second requirement can never be true. A quantity-compatible library must be written something like this, where Op is +, -, *, /, ==, etc.:

typeof( T1() Op T2() ) result = T1() Op T2();
This was a private e-mail discussion, but Andy thought it important to be brought up on the list.
blas::vec3<float> a(1, 0, 0);
blas::vec3<double> b(0, 1, 0);
blas::vec3<double> result = cross_product(a, b); // returns vec3<double>!
I accomplished this using result_of, which I think never became a boost library (from the sandbox maybe? Did MPL::transform consume this functionality?). It was used something like:
template <typename LHS, typename RHS>
typename boost::result_of_plus<LHS, RHS>::type
operator +(const LHS &lhs, const RHS &rhs)
{
    typename boost::result_of_plus<LHS, RHS>::type nrv(lhs);
    nrv += rhs;
    return nrv;
}
I would hope some solution could be found where you didn't have to implement all new matrix/vector/quaternion classes and their corresponding functionality; rather, you could just ensure the return type from the operation follows normal promotion rules (this is actually what I thought the promotion traits library was going to do).
OK, what I was going on to say was that this is only the beginning of the complexities involved in strong type checking on physical quantities.

A transformation matrix can perform the operations of translation and scaling. Translation of an entity (let's say a position vector implemented as length quantities) is basically an addition. Therefore the translation part of the matrix must be comprised of length quantities. However, scaling of a quantity can only be performed by a numeric type. We can deduce from this that some elements of the transformation matrix must be numeric, and some length quantities.

In fact the type makeup of a matrix for 2d transforms ends up looking like the following, where N is a numeric, Q is a quantity and R is the reciprocal of the quantity, or 1/Q:

( N N R )
( N N R )
( Q Q N )

The 3D transform version is similar, except with the extra row and column of course.

The problem is that this is quite a heavy modification to ask of a numeric matrix library, especially when most users will be using it for numeric values. (Incidentally, numerics work in the above of course; they just all resolve to N.) The point of all this is that the price of strong type checking (i.e. using PQS rather than numerics) may be discarding all your current libraries, which are based on the assumption of numerics. That is quite a high price! I don't know any solution to this problem (except to exit the strong type checking), but basically there it is.

regards
Andy Little

On 6/8/06, Andy Little <andy@servocomm.freeserve.co.uk> wrote:
A transformation matrix can perform the operations of translation and scaling.
Translation of an entity (let's say a position vector implemented as length quantities) is basically an addition. Therefore the translation part of the matrix must be comprised of length quantities. However, scaling of a quantity can only be performed by a numeric type. We can deduce from this that some elements of the transformation matrix must be numeric, and some length quantities.
In fact the type makeup of a matrix for 2d transforms ends up looking like the following:
where N is a numeric, Q is a quantity and R is the reciprocal of the quantity or 1/Q
( N N R )
( N N R )
( Q Q N )
The 3D transform version is similar except with the extra row and column of course.
The problem is that this is quite a heavy modification to ask of a numeric matrix library, especially when most users will be using it for numeric values. (Incidentally, numerics work in the above of course; they just all resolve to N.) The point of all this is that the price of strong type checking (i.e. using PQS rather than numerics) may be discarding all your current libraries, which are based on the assumption of numerics. That is quite a high price!
I don't know any solution to this problem (except to exit the strong type checking), but basically there it is.
What the library would be concerned with is that the result of the operation was possible (made sense) given the types that were held within the matrix or vector. Contrived example using theoretical code:

// Multiply accumulate (Is there a simpler way?)
template <typename X, typename Y, typename Z, typename VX, typename VY, typename VZ>
struct macc
{
    // X * VX + Y * VY + Z * VZ
    typedef typename boost::result_of_multiplies<X, VX>::type x_times_x;
    typedef typename boost::result_of_multiplies<Y, VY>::type y_times_y;
    typedef typename boost::result_of_multiplies<Z, VZ>::type z_times_z;
    typedef typename boost::result_of_plus<y_times_y, z_times_z>::type y_times_y_plus_z_times_z;
    typedef typename boost::result_of_plus<x_times_x, y_times_y_plus_z_times_z>::type x_plus_y_plus_z;
    typedef x_plus_y_plus_z type;
};

template <typename X, typename Y, typename Z, typename VX, typename VY, typename VZ>
typename macc<X, Y, Z, VX, VY, VZ>::type
dot_product(const vec3<X, Y, Z> &lhs, const vec3<VX, VY, VZ> &rhs)
{
    return lhs.x * rhs.x + lhs.y * rhs.y + lhs.z * rhs.z;
}

blas::vec3<int, short, float> a(1, 0, 0);
blas::vec3<double, int, float> b(0, 1, 0);
double result = dot_product(a, b); // returns double

I left out PQS types there for simplicity, but I think you can see where I'm going. If the result_of struct isn't specialized for the types you are trying to combine, then you get a compile-time error. So given:

blas::vec3<length::m> len(1, 2, 3);
blas::vec3<velocity::s> vel(5, 6, 7);
// Note the result type
blas::vec3<velocity::m_per_s2> result = len / (vel * vel);

You only have to specialize for the very base operations between types. Everything more complex (matrix multiplies, etc.) can be built on top of that. For the above, operator / would need to have been implemented to return boost::result_of_divides<length::m, velocity::s2>::type, while operator * would need to have been implemented to return boost::result_of_multiplies<velocity::s, velocity::s>::type.

I see you were using BOOST_AUTO in some other posts, which would probably clean some of my code up, but I'm not familiar enough with it to use it in my examples.

This would require the math library to be more heavily templated than it currently is. Note that a matrix would need to hold multiple types (up to NxN). For instance, what would be the syntax for a 3D transform? Perhaps something like:

matrix4x4 <
    float,  float,  float,  length,
    float,  float,  float,  length,
    float,  float,  float,  length,
    length, length, length, float
> m;

maybe?

--Michael Fawcett
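A side note for readers: result_of_plus never became an actual Boost component as far as I know, so here is a minimal hedged sketch (names hypothetical) of the kind of trait the operator above assumes, specialized per permitted combination of operand types:

// Hypothetical sketch of the result_of_plus trait assumed above: one
// primary template with no definition, plus hand-written specializations
// for each permitted combination of operand types.
template <typename LHS, typename RHS>
struct result_of_plus;                   // primary: undefined, so LHS + RHS
                                         // is rejected unless specialized

template <typename T>
struct result_of_plus<T, T>              // same type: result is that type
{
    typedef T type;
};

template <>
struct result_of_plus<float, double>     // mixed numerics follow promotion
{
    typedef double type;
};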

"Michael Fawcett" wrote
What the library would be concerned with is that the result of the operation was possible (made sense) given the types that were held within the matrix or vector.
Before continuing I'd better say I'm no expert in linear algebra! I have only looked at 3D transform matrices. But yes, using Boost.Typeof or result_of to run through the expression. IOW the operations decide the type of the result. Obvious really, but linear algebra libraries often assume numeric types. However, AFAIK a matrix can have matrix or maybe vector or complex elements, so perhaps there are other candidates than physical quantities for the elements?
Contrived example using theoretical code:
// Multiply accumulate (Is there a simpler way?)
I think it's simpler with Boost.Typeof. You could use it instead of the macc struct for deducing the result of the expression directly, but gcc won't like it, so you could use it in the body of macc:

template <typename X, typename Y, typename Z, typename VX, typename VY, typename VZ>
struct macc
{
    typedef BOOST_TYPEOF_TPL( X() * VX() + Y() * VY() + Z() * VZ() ) type;
};
template <typename X, typename Y, typename Z, typename VX, typename VY, typename VZ>
typename macc<X, Y, Z, VX, VY, VZ>::type
dot_product(const vec3<X, Y, Z> &lhs, const vec3<VX, VY, VZ> &rhs)
{
    return lhs.x * rhs.x + lhs.y * rhs.y + lhs.z * rhs.z;
}
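As an aside, on a C++11 compiler the same deduction can be written directly with decltype, with no typeof registration needed - a minimal sketch, assuming default-constructible element types as above:

// C++11 sketch: decltype deduces the result type of the mixed-type
// expression directly, playing the role of BOOST_TYPEOF_TPL above.
template <typename X, typename Y, typename Z, typename VX, typename VY, typename VZ>
struct macc
{
    typedef decltype( X() * VX() + Y() * VY() + Z() * VZ() ) type;
};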
blas::vec3<int, short, float> a(1, 0, 0);
blas::vec3<double, int, float> b(0, 1, 0);
double result = dot_product(a, b); // returns double
I left out PQS types there for simplicity, but I think you can see where I'm going.
I think so...

If the result_of struct isn't specialized for the types you are trying to combine, then you get a compile-time error. So given:
blas::vec3<length::m> len(1, 2, 3);
blas::vec3<velocity::s> vel(5, 6, 7);
// Note the result type
blas::vec3<velocity::m_per_s2> result = len / (vel * vel);
You only have to specialize for the very base operations between types. Everything more complex (matrix multiplies, etc) can be built on top of that.
OK. That sounds much better than what I have done, which was to write it out longhand. I implemented a 4 x 4 matrix as a tuple of tuples FWIW. But I'm not much of an expert, as I said. The matrix and vect stuff is in the <boost/pqs/three_d/xx.hpp> directory in pqs_3_1_1, for anyone who wants to see the gory details.
For the above, operator / would need to have been implemented to return boost::result_of_divides<length::m, velocity::s2>::type, while operator * would need to have been implemented to return boost::result_of_multiplies<velocity::s, velocity::s>::type. I see you were using BOOST_AUTO in some other posts which would probably clean some of my code up, but I'm not familiar enough with it to use in my examples.
This would require the math library to be more heavily templated than it currently is. Note that a matrix would need to hold multiple types (up to NxN). For instance, what would be the syntax for a 3D transform? Perhaps something like:

matrix4x4 <
    float,  float,  float,  length,
    float,  float,  float,  length,
    float,  float,  float,  length,
    length, length, length, float
> m;
maybe?
The above shape would, I am reasonably sure, be (or the other way, dependent on whether it's an R*C or C*R matrix IIRC):

matrix4x4 <
    float,  float,  float,  typeof(1 / length()),
    float,  float,  float,  typeof(1 / length()),
    float,  float,  float,  typeof(1 / length()),
    length, length, length, float
> m;
If you work through the calcs using dimensional analysis, you find that concatenating two matrices then gives you the same type of matrix (give or take promotion), which is kind of nice! Of course you can generalise the float by replacing it with typeof(length()/length()) too. Then finally replace length by T, stick a single template param T on the front, giving matrix4x4<T>, and Bob's your uncle!

That is the basic idea, and similarly for complex, vect etc. The only one I haven't tried is quaternion, but hopefully I can try it sometime!

regards
Andy Little
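To make that layout concrete, here is a hedged sketch (hypothetical names, and using C++11 decltype in place of the typeof above) of how all three element kinds of the matrix can be derived from the single parameter T:

// Hypothetical sketch of the matrix4x4<T> element layout described above,
// deriving all three slot types from one quantity type T (e.g. a length).
template <typename T>
struct transform4x4_layout
{
    typedef decltype( T() / T() )  N;   // dimensionless slot ("float")
    typedef T                      Q;   // quantity slot
    typedef decltype( N() / T() )  R;   // reciprocal slot, 1/T
    // element layout:
    //   N N N R
    //   N N N R
    //   N N N R
    //   Q Q Q N
};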

Andy Little wrote:
If you work through the calcs using dimensional analysis you find that concatenating two matrices then gives you the same type of matrix (give or take promotion), which is kind of nice! Of course you can generalise the float by replacing it with typeof(length()/length()) too. Then finally replace length by T, stick a single template param T on the front, giving matrix4x4<T>, and Bob's your uncle!
That is the basic idea, and similarly for complex, vect etc. The only one I haven't tried is quaternion, but hopefully I can try it sometime!
regards Andy Little
Maybe I'm not getting it yet, but I don't think this gets you out of the woods. The problem I see is that matrices are used for many things other than just transformations. For example, a user might want to diagonalize a matrix, or find the eigenvalues, or any number of other things. If so, having units on the matrix becomes problematic, I think.

The old war horse of diagonalization routines is Gaussian elimination (by any of a number of different names). Though it has some issues as a numeric routine, it provides a good example of the problem I think I see with the units being included in matrices. Because anyone who wants to see the problem will need to remember the algorithm, I'll describe it as I go. My apologies to those people who don't need this reminder.

Gaussian elimination has two basic parts. The first is the actual elimination part; the second is an adjustment that avoids stability issues. In the elimination phase, different rows or columns of the matrix are added together with a possible multiplicative factor. The result of this addition is placed in one of the rows or columns in the sum. This is done to adjust the matrix so that one element becomes zero. It looks like this:

/ a_11  a_12 \      / a_11                     a_12                     \
|            |  ->  |                                                   |
\ a_21  a_22 /      \ a_21 - a_11*(a_21/a_11)  a_22 - a_12*(a_21/a_11)  /

The other operation is trading the contents of two rows or columns. This is called pivoting, and it is done to make sure there are no zero (or very small) divisors in the elimination step.

Both of these operations are valid because there is a matrix product that does not change the values of the eventual diagonal elements that has the same effect as these apparently ad hoc adjustments. However, since both move values around in the matrix, it is quite likely that they will cause type system errors in a matrix such as the transformation matrices discussed above. The only way I see to avoid this while enforcing types is to actually perform the matrix products as part of the algorithm. However, matrix products have worse scaling than the typical implementations of elimination and pivoting, so there would be a huge efficiency hit in large problems. This would be a game breaker for scientific and engineering calculations, of course, so it just can't be the route the library chooses.

The other choice is to develop a way to shut off the library for some operations and extrapolate the resultant types after the fact. I would guess that this can be done, but I don't currently know enough to even begin to do it. So, matrices and vectors will be a major struggle, would be my guess right now.

John Phillips
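To see exactly where the type system would object, here is a sketch of one elimination step on a plain numeric 2x2 matrix; with statically typed elements, the two subtractions below would combine values from differently-dimensioned slots and refuse to compile:

// One Gaussian elimination step on a plain 2x2 numeric matrix (sketch).
// With per-element static dimensions, the updates of a[1][0] and a[1][1]
// would mix values from differently-typed slots and fail to compile.
void eliminate_first_column(double a[2][2])
{
    double factor = a[1][0] / a[0][0];  // pivoting assumed done: a[0][0] != 0
    a[1][0] -= a[0][0] * factor;        // becomes exactly zero
    a[1][1] -= a[0][1] * factor;
}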

Andy Little wrote (in an earlier post):
OK, what I was going on to say was that this is only the beginning of the complexities involved in strong type checking on physical quantities.
Yeah, and wait till you hear this one! It gets worse - though I also have a solution that works for me. Read on, if you dare :-)

Warning - this post is primarily for the more mathematically inclined - though I think it brings up an important issue that anyone wanting to implement a matrix library on dimensioned quantities needs to think about.

John Phillips <phillips <at> mps.ohio-state.edu> writes:
Maybe I'm not getting it yet, but I don't think this gets you out of the woods. The problem I see is that matrices are used for many things other than just transformations. For example, a user might want to diagonalize a matrix, or find the eigenvalues, or any number of other things. If so, having units on the matrix becomes problematic, I think.
With certain restrictions, this can be done. But it brings up other problems. Consider: it's a basic tenet of matrix algebra that for a square matrix A (as long as it has an inverse), both of these are true:

A * inv(A) = I
inv(A) * A = I

where I is the identity matrix of the same size as A.

Now suppose A is a matrix of elements having different dimensions. For instance, if A is a 2x2 covariance matrix of position and velocity values, it might have units like:

A:      ( m^2    m^2/s   )
        ( m^2/s  m^2/s^2 )

Its inverse has well-defined dimensions, with the units you might expect, like:

inv(A): ( 1/m^2  s/m^2   )
        ( s/m^2  s^2/m^2 )

But if you multiply A * inv(A) or inv(A) * A, you'll get two different products, neither of which is the identity matrix!!! The results are:

A * inv(A) = ( 1    0 s )    (I call this "I1" below)
             ( 0/s  1   )

inv(A) * A = ( 1    0/s )    (I call this "I2" below)
             ( 0 s  1   )

I =          ( 1    0   )
             ( 0    1   )

Of course, we all learned in high school physics that "0" and "0 seconds" are *not* the same thing, and Andy's library faithfully embodies that truth. Thus, we have three *different* matrix values above.

To make matters worse - for the same reason, neither A * I = A nor I * A = A. In fact, neither expression (A * I) nor (I * A) will compile, because they both involve addition of quantities with different dimensions. This addition would be (correctly, I think) not allowed by the library, even though in this particular case, one of the addends has a numerical value of zero. To multiply A by the identity, we would need (I1 * A) or (A * I2). What a mess!

BTW, I believe John's example of Gaussian elimination may fail not only because of things like pivoting (as he points out), but even without that, because of this issue - addition or subtraction of values where one is numerically exactly zero but has the wrong dimensions.

So what's a programmer to do? In my case, I was already using something akin to "t3_quantity" for these "mixed" matrices, and in that context I came up with the following solution: I introduced a special value that a t3_quantity can hold, called an adimensional zero. Normally, a value of my t3_quantity has specific dimensions (or is specifically dimensionless), even if its numerical value happens to be zero. An adimensional zero is a special case. What this means is that its dimensions are undefined, ambiguous. Semantically, this means it can be added to a value of any dimension (zero + x = x, of course, with the dimensions of the result the same), and when multiplied by any value the result is another adimensional zero (zero * x = zero).

I define a constant called "zero," and then define the identity matrix as:

I = ( 1     zero )
    ( zero  1    )

Now (I1 == I) and (I2 == I) both evaluate to true, (A * I) and (I * A) both compile and give A as the result, etc. Problem solved! It works out quite well in all my linear algebra. But sadly, at the price of increasing the run-time overhead already required by my "t3_quantity."

Anybody got a better idea?

-- Leland
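A minimal sketch of the adimensional-zero rule on a run-time checked quantity (hypothetical type and field names, only length and time dimensions, roughly in the spirit of a t3_quantity; assumes C++11 aggregate initialization):

#include <cassert>

// Hypothetical run-time checked quantity with an "adimensional zero":
// a zero whose dimensions are deliberately left ambiguous.
struct adim_quantity
{
    double value;
    int    length_exp, time_exp;  // dimension exponents
    bool   adim_zero;             // true: value is 0, dimensions are wild
};

adim_quantity operator+(adim_quantity a, adim_quantity b)
{
    if (a.adim_zero) return b;    // zero + x = x, dimensions taken from x
    if (b.adim_zero) return a;
    assert(a.length_exp == b.length_exp && a.time_exp == b.time_exp);
    a.value += b.value;
    return a;
}

adim_quantity operator*(adim_quantity a, adim_quantity b)
{
    if (a.adim_zero || b.adim_zero)
        return adim_quantity{0.0, 0, 0, true};  // zero * x = zero (adimensional)
    return adim_quantity{a.value * b.value,
                         a.length_exp + b.length_exp,
                         a.time_exp + b.time_exp, false};
}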

Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
However AFAIK a matrix can have matrix or maybe vector or complex elements so perhaps there are other candidates than physical quantities for the elements?
Actually, yes! In fact, I played around quite a bit with allowing my matrix elements to be matrices themselves, and even implemented some of it. Mostly just for fun, and to prove to myself it could be done - I'm not sure how much anyone actually needs this. Essentially, these can be thought of as partitioned matrices.

When you do that, though, you have to be a lot more careful in your arithmetic inside the matrix functions. A step in your algorithm that says "a * b" with scalar elements can become any of these with matrices:

A * B, A * B', A' * B, A' * B'
B * A, B * A', B' * A, B' * A'

(where I use the prime ' here to represent matrix transpose). In other words, you have to get your multiplications in the right order and know when to do a transpose. (Actually, for me, that's the fun part! I'm a mathematician at heart.) But if you write your matrix operations carefully this way, it is in fact possible to have matrices of matrices and have the results work out correctly! This can even be done with something more esoteric like a Cholesky decomposition.

I'm less familiar with handling complex elements, but I think a similar issue comes up with making sure you know when to take the complex conjugate of an element. However, standard matrix libraries are probably more used to dealing with complex values.

-- Leland

Leland Brown said: (by the date of Fri, 9 Jun 2006 04:31:28 +0000 (UTC))
Actually, yes! In fact, I played around quite a bit with allowing my matrix elements to be matrices themselves, and even implemented some of it.
wouldn't it be tensors, then? -- Janek Kozicki |

Janek Kozicki <janek_listy <at> wp.pl> writes:
Leland Brown said: (by the date of Fri, 9 Jun 2006 04:31:28 +0000 (UTC))
Actually, yes! In fact, I played around quite a bit with allowing my matrix elements to be matrices themselves, and even implemented some of it.
wouldn't it be tensors, then?
They act like partitioned matrices. The elements are submatrices of the larger matrix. The same results can be obtained by putting all the individual scalar elements into one big matrix, so there's not really any new functionality with this, just notational convenience for some problems. Tensors are a bit over my head, but I assume their semantics are different than simply a partitioned matrix. I don't think what I implemented would produce tensor algebra. -- Leland

Leland Brown wrote:
Janek Kozicki <janek_listy <at> wp.pl> writes:
Leland Brown said: (by the date of Fri, 9 Jun 2006 04:31:28 +0000 (UTC))
Actually, yes! In fact, I played around quite a bit with allowing my matrix elements to be matrices themselves, and even implemented some of it.
wouldn't it be tensors, then?
They act like partitioned matrices. The elements are submatrices of the larger matrix. The same results can be obtained by putting all the individual scalar elements into one big matrix, so there's not really any new functionality with this, just notational convenience for some problems.
Tensors are a bit over my head, but I assume their semantics are different than simply a partitioned matrix. I don't think what I implemented would produce tensor algebra.
-- Leland
What it means to be a tensor is defined in terms of transformation properties. If the object transforms the proper way, it is a tensor; if not, it is not. Matrices may or may not be tensors, and certainly, many types of tensors can't be represented as simple matrices (since they can be more than 2 dimensional).

The matrix-elements-are-other-matrices approach is something I've only used in the way Leland describes it. It is effectively a partitioning, and I have only used it in cases where it helps organize what I'm looking at. Maybe others have done other things with it.

John Phillips

Michael Fawcett <michael.fawcett <at> gmail.com> writes:
Contrived example using theoretical code:
// Multiply accumulate (Is there a simpler way?)
template <typename X, typename Y, typename Z, typename VX, typename VY, typename VZ>
struct macc
{
    // X * VX + Y * VY + Z * VZ
    typedef typename boost::result_of_multiplies<X, VX>::type x_times_x;
    typedef typename boost::result_of_multiplies<Y, VY>::type y_times_y;
    typedef typename boost::result_of_multiplies<Z, VZ>::type z_times_z;
    typedef typename boost::result_of_plus<y_times_y, z_times_z>::type y_times_y_plus_z_times_z;
    typedef typename boost::result_of_plus<x_times_x, y_times_y_plus_z_times_z>::type x_plus_y_plus_z;
    typedef x_plus_y_plus_z type;
};

template <typename X, typename Y, typename Z, typename VX, typename VY, typename VZ>
typename macc<X, Y, Z, VX, VY, VZ>::type
dot_product(const vec3<X, Y, Z> &lhs, const vec3<VX, VY, VZ> &rhs)
{
    return lhs.x * rhs.x + lhs.y * rhs.y + lhs.z * rhs.z;
}
Yes, surely a matrix library written like this could do the trick for PQS. But it has its limitations too - see my other comment below.
This would require the math library to be more heavily templated than it currently is. Note that a matrix would need to hold multiple types (up to NxN). For instance, what would be the syntax for a 3D transform? Perhaps something like:

matrix4x4 <
    float,  float,  float,  length,
    float,  float,  float,  length,
    float,  float,  float,  length,
    length, length, length, float
> m;
But consider the following situation (which, though it may sound contrived, is pretty similar to what came up in my real application). Suppose I have a compile-time constant like:

const int NUM_OF_ITEMS = 5;

And I need a vector containing 3 dimensionless quantities, a time, and NUM_OF_ITEMS length values. And then I need a scaling matrix for that!

Yes, the value "5" is known a priori, and I could write out the whole 10x10 transform. But I've defined NUM_OF_ITEMS as a symbolic constant because it's subject to change. Next month I find out we have 7 items instead of 5. I'd like it to be a single-point code change:

const int NUM_OF_ITEMS = 7;

Can I get the compiler to generate the 12x12 matrix definition for me? I bet it's theoretically possible with template metaprogramming, but I doubt it would be worth anybody's time to implement such a monster. This is the point where I threw up my hands and accepted that I was going to need the t3_quantity.

-- Leland
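For what it's worth, the type-list half of that "monster" is at least expressible; a sketch assuming C++11 variadic templates (hypothetical names, and only the repeated type list, not the full matrix machinery):

#include <tuple>

// Sketch: build a type list containing N repetitions of T - the raw
// material for generating vector/matrix definitions from NUM_OF_ITEMS
// instead of writing them out by hand.
template <typename T, int N, typename... Acc>
struct repeat : repeat<T, N - 1, T, Acc...> {};

template <typename T, typename... Acc>
struct repeat<T, 0, Acc...>
{
    typedef std::tuple<Acc...> type;
};

// repeat<length, NUM_OF_ITEMS>::type tracks NUM_OF_ITEMS automatically.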

Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
"Michael Fawcett" wrote
On 6/8/06, Andy Little wrote:
-----Original Message----- From: Michael Fawcett [mailto:michael.fawcett <at> gmail.com]
I don't think many people realise! I have discussed it in various replies but no one has asked me the direct question:

Neither of those libraries above will work with anything other than numeric types AFAIK. IOW where T is a type (simplifying and ignoring e.g. floating point promotion etc), it must satisfy:

T result = T() + T();
T result = T() * T();

(among others)

If T is a physical quantity, the second requirement can never be true.
I totally agree - and this is a big problem with using any existing matrix libraries that I know of. *Except* - with your t3_quantity, your last statement above is not true. This *will* work:

t3_quantity result = t3_quantity() * t3_quantity();

For anyone who scratches their head as to what t3_quantity would be good for, here's one answer! It can be used with existing linear algebra libraries with a minimum of effort. (It would still be good to use t3_quantity only when necessary, however, because of the run-time penalty.)
blas::vec3<float> a(1, 0, 0);
blas::vec3<double> b(0, 1, 0);
blas::vec3<double> result = cross_product(a, b); // returns vec3<double>!

template <typename LHS, typename RHS>
typename boost::result_of_plus<LHS, RHS>::type
operator +(const LHS &lhs, const RHS &rhs)
{
    typename boost::result_of_plus<LHS, RHS>::type nrv(lhs);
    nrv += rhs;
    return nrv;
}

In my work I implemented something very similar to this - but only for vectors with elements of homogeneous dimensions (e.g., a length vector, a velocity vector). When I have a vector of mixed quantities (length, velocity, time, dimensionless - together in one vector), I end up going to something more like t3_quantity. (FYI, this occurs in the context of least-squares estimation of model parameters, where the parameters are of various dimensions.)

In fact the type makeup of a matrix for 2d transforms ends up looking like the following, where N is a numeric, Q is a quantity and R is the reciprocal of the quantity, or 1/Q:

( N N R )
( N N R )
( Q Q N )
The 3D transform version is similar except with the extra row and column of course.
The problem is that this is quite a heavy modification to ask of a numeric matrix library, especially when most users will be using it for numeric values.
Yes, unfortunately, that's true. It will make it very hard to integrate any dimensional analysis library with any existing matrix library.
The point of all this is that the price of strong type checking (i.e. using PQS rather than numerics) may be discarding all your current libraries, which are based on the assumption of numerics. That is quite a high price!
Again, unfortunately true. (But not because of bad PQS implementation! AFAICS the situation would be the same with any strongly-typed dimensional analysis library.)
I don't know any solution to this problem (except to exit the strong type checking), but basically there it is.
You can exit the strong type checking, or you can pay the performance penalty for t3_quantity. In effect what I ended up doing was both - by putting a compile-time switch in my t3_quantity to turn off the dimensional analysis. Then I get the best of both worlds - I live with the performance penalty for the sake of the dimensions-checking (and enforced documentation of units) until my formulas are debugged, and then I get the benefit of the speed in my production code - all while using a matrix library that doesn't care about any of this. (And BTW, it did find several bugs in my computations by flagging dimensions problems!) -- Leland
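The switch itself can be as simple as one typedef selected by a macro; a sketch (hypothetical names, with t3_quantity standing in for the run-time checked type discussed in this thread) of the arrangement described above:

// Hypothetical compile-time switch between checked and raw arithmetic:
// debug builds get dimension checking, release builds get speed.
#ifdef CHECK_DIMENSIONS
typedef t3_quantity scalar;   // run-time dimensional analysis while debugging
#else
typedef double scalar;        // plain numerics for production
#endif

// Application code and the matrix library both use 'scalar' throughout,
// so flipping the macro is a single-point change.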

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Leland Brown
| Sent: 09 June 2006 04:48
| To: boost@lists.boost.org
| Subject: Re: [boost] [Review][PQS] Review deadline
|
| (And BTW, it did find several bugs in my
| computations by flagging dimensions problems!)

Can you elaborate a little on the value of using dimensional analysis by sharing some of these with us? We are all assuming that there is a correctness payoff (some think a BIG payoff) for using a system like yours/PQS/... but it is useful to have some evidence that our instinct is correct. (This ignores the convenience of handling units, of course.)

The potential users of a 'units/quantity' feature are very much more numerous than any uber-super-pointer IMO. In fact I would describe it as a 'killer application'. (I'd also like to throw in optional estimates of uncertainty to further muddy the water.) So I am still very keen for the collective neurons of Boost (especially those with Meta/Template minds) to solve it, if this is possible - I am coming to fear that the language may not really make it as practicable as I thought.

Paul

PS If only the MKS people had known the grief they would cause by making the kilogram the fundamental mass unit... As a schoolboy, I thought it a little odd, but its full horror never crossed my mind. Some of the perpetrators must be still alive - I wonder if they are aware of what they have done?

---
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
+44 1539561830 & SMS, Mobile +44 7714 330204 & SMS
pbristow@hetp.u-net.com

Paul A Bristow <pbristow <at> hetp.u-net.com> writes:
| (And BTW, it did find several bugs in my | computations by flagging dimensions problems!)
Can you elaborate a little on the value of using dimensional analysis by sharing some of these with us?
One example I was able to dig up was in approximating a function's derivatives using finite differences. In simplified form, my code had looked something like this:

length f( time t );
...
t3_quantity t  = // some time value
t3_quantity dt = // some time value
...
t3_quantity x    = f(t);
t3_quantity xdot = f(t+dt) - x;   // incorrect!
...
velocity v = xdot;                // runtime error caught here!

The erroneous line should have been:

t3_quantity xdot = ( f(t+dt) - x ) / dt;

I had left out the division, which changes the units. Incidentally, when using t1_quantity or t2_quantity types, the error would be caught at compile-time, and on the line actually containing the error:

time t  = // some time value
time dt = // some time value
...
length x      = f(t);
velocity xdot = f(t+dt) - x;      // error here - caught by compiler!

(Because these errors are caught at compile-time, and thus usually before the code is checked in, the fixes don't appear in the change history, so I can't produce actual examples I've encountered for those cases.)

Another example is my mistake in an earlier post, where I tried to show code taking sqrt(acceleration/length) to get a time value. But actually it gives a frequency, so if the error were not corrected I'd get the reciprocal of the value I want. Dimensional analysis would prevent the invalid assignment from occurring.

-- Leland

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Leland Brown
| Sent: 09 June 2006 22:41
| To: boost@lists.boost.org
| Subject: Re: [boost] [Review][PQS] Review deadline
|
| Paul A Bristow <pbristow <at> hetp.u-net.com> writes:
|
| > | (And BTW, it did find several bugs in my
| > | computations by flagging dimensions problems!)
| >
| > Can you elaborate a little on the value of using
| > dimensional analysis by sharing some of these with us?
|
| One example I was able to dig up was in approximating a
| function's derivatives using finite differences.

An interesting example.

| (Because these errors are caught at compile-time,
| and thus usually before the code is checked in,
| *** the fixes don't appear in the change history. ***

- only as an oath from the programmer ;-)

And a very interesting observation about potential pitfalls of software metrics. Thank you.

Paul

---
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
+44 1539561830 & SMS, Mobile +44 7714 330204 & SMS
pbristow@hetp.u-net.com

"Leland Brown"wrote
Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
[...]
For anyone who scratches their head as to what t3_quantity would be good for, here's one answer! It can be used with existing linear algebra libraries with a minimum of effort. (It would still be good to use t3_quantity only when necessary, however, because of the run-time penalty.)
OK. That is the sort of functionality I saw for the t3_quantity. In a simple implementation it would need to carry its dimension and unit information around at runtime though AFAICS, which would be quite an overhead in terms of size and speed.
blas::vec3<float> a(1, 0, 0);
blas::vec3<double> b(0, 1, 0);
blas::vec3<double> result = cross_product(a, b); // returns vec3<double>!

template <typename LHS, typename RHS>
typename boost::result_of_plus<LHS, RHS>::type
operator +(const LHS &lhs, const RHS &rhs)
{
    typename boost::result_of_plus<LHS, RHS>::type nrv(lhs);
    nrv += rhs;
    return nrv;
}
In my work I implemented something very similar to this - but only for vectors with elements of homogeneous dimensions (e.g., a length vector, a velocity vector). When I have a vector of mixed quantities (length, velocity, time, dimensionless - together in one vector), I end up going to something more like t3_quantity. (FYI, this occurs in the context of least-squares estimation of model parameters, where the parameters are of various dimensions.)
Having only investigated transform matrices, I had hoped that integrity of quantities can be maintained in most cases. But I haven't done extensive experiments. In the case of vectors, I have only used vectors where all elements are one type of quantity. The vectors are used to represent position, direction and so on in 3 dimensions. A container that holds different quantities I would consider to be a tuple. But I stress I am not an expert. [...]
The problem is that this is quite a heavy modification to ask of a numeric matrix library, especially when most users will be using it for numeric values.
Yes, unfortunately, that's true. It will make it very hard to integrate any dimensional analysis library with any existing matrix library.
The point of all this is that the price of strong type checking (i.e. using PQS rather than numerics) may be discarding all your current libraries, which are based on the assumption of numerics. That is quite a high price!
Again, unfortunately true. (But not because of bad PQS implementation! AFAICS the situation would be the same with any strongly-typed dimensional analysis library.)
The question then is: when are the benefits of strong type checking justified (so use a quantity type), and when aren't they (so use a float type)? That would be a good question to answer in the PQS docs AFAICS. But not a trivial one.
I don't know any solution to this problem (except to exit the strong type checking), but basically there it is.
You can exit the strong type checking, or you can pay the performance penalty for t3_quantity. In effect what I ended up doing was both - by putting a compile-time switch in my t3_quantity to turn off the dimensional analysis. Then I get the best of both worlds - I live with the performance penalty for the sake of the dimensions-checking (and enforced documentation of units) until my formulas are debugged, and then I get the benefit of the speed in my production code - all while using a matrix library that doesn't care about any of this. (And BTW, it did find several bugs in my computations by flagging dimensions problems!)
That sounds like an interesting usage. I would guess that the only problem apart from slow performance would be that the t3_quantity would use a lot of space compared with a float, which would have an impact if used in some situations.

regards
Andy Little

Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
"Leland Brown"wrote
When I have a vector of mixed quantities (length, velocity, time, dimensionless - together in one vector), I end up going to something more like t3_quantity. (FYI, this occurs in the context of least-squares estimation of model parameters, where the parameters are of various dimensions.)
In cases of vectors, I have only used vectors where all elements are one type of quantity. The vectors are used to represent position, direction and so on in 3 dimensions. A container that holds different quantities I would consider to be a tuple. But I stress I am not an expert.
I see what you mean. But unfortunately, in the least-squares estimation, the tuples get used extensively as vectors in heavy linear algebra equations. So I'm stuck with using vectors of heterogeneous units in a matrix library.
The question then is: when are the benefits of strong type checking (so use a Quantity type) justified, and when arent they (so use a float type). That would be a good question to answer in the PQS docs AFAICS. But not a trivial one.
True on both counts.
I would guess that the only problem apart from slow performance would be that the t3_quantity would use a lot of space compared with a float, which would have an impact if used in some situations.
Yes, but perhaps not as much space (or time) as you would think. I allocate a few bits to each dimension exponent and combine them in a single integer value. Then in a multiplication I can add all the exponents at once with only one addition operation (and I can check for matching dimensions with only one integer comparison). This works well for me because I have only integer exponents and only length and time dimensions, so I can easily allocate plenty of bits to avoid overflow. Perhaps you can make use of something similar in your design. -- Leland
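A sketch of that packing scheme, with hypothetical field widths (8 bits per exponent, stored with a bias of 128 so negative exponents fit, and assuming exponents stay small enough that fields never carry into each other):

// Hypothetical packed-exponent quantity: each dimension exponent lives in
// its own bit field of one integer. Multiplication updates every exponent
// with a single integer addition; a dimension check is a single comparison.
const unsigned BIAS = 0x8080u;  // bias of 128 in both the length and time fields

struct packed_quantity
{
    double   value;
    unsigned dims;  // bits 0-7: length exponent + 128, bits 8-15: time exponent + 128
};

packed_quantity operator*(packed_quantity a, packed_quantity b)
{
    packed_quantity r;
    r.value = a.value * b.value;
    r.dims  = a.dims + b.dims - BIAS;  // one addition sums all the exponents
    return r;
}

bool same_dimensions(packed_quantity a, packed_quantity b)
{
    return a.dims == b.dims;           // one comparison checks every exponent
}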

Andy Little wrote:
Having only investigated transform matrices, I had hoped that integrity of quantities can be maintained in most cases. But I haven't done extensive experiments.
In cases of vectors, I have only used vectors where all elements are one type of quantity. The vectors are used to represent position, direction and so on in 3 dimensions. A container that holds different quantities I would consider to be a tuple. But I stress I am not an expert.
[...]
regards Andy Little
I think the question of even a position vector depends on what coordinate system you choose. In cartesian coordinates, all of the components of the vector have the same units, but in any other system (spherical, or cylindrical, for example) the components don't all have the same units. Since the coordinate system is another thing that should be problem dependent, even simple vectors like the position of a particle may have mixed units. John Phillips

On Mon, Jun 12, 2006 at 11:10:14AM -0400, John Phillips wrote:
Andy Little wrote:
Having only investigated transform matrices, I had hoped that integrity of quantities can be maintained in most cases. But I haven't done extensive experiments.
In cases of vectors, I have only used vectors where all elements are one type of quantity. The vectors are used to represent position, direction and so on in 3 dimensions. A container that holds different quantities I would consider to be a tuple. But I stress I am not an expert.
[...]
regards Andy Little
I think the question of even a position vector depends on what coordinate system you choose. In cartesian coordinates, all of the components of the vector have the same units, but in any other system (spherical, or cylindrical, for example) the components don't all have the same units. Since the coordinate system is another thing that should be problem dependent, even simple vectors like the position of a particle may have mixed units.
John Phillips
The vector type models vector spaces written in component form, so it can't correctly be used for spherical or cylindrical coordinates. Tuples should suffice for those systems. Are there useful linear coordinate systems with different units? Hmm. I suppose spacetime counts, which is probably something you care about quite a bit. Of course, when computing in spacetime, you probably use the same units for time and space anyways...but I'll have to defer to you on that one. Geoffrey

At 11:10 AM 6/12/2006, you wrote:
the same units. Since the coordinate system is another thing that should be problem dependent, even simple vectors like the position of a particle may have mixed units.
John Phillips
(and other comments on vectors, matrices, tensors etc.)

I have no idea how practical this would be, but I think a different take on this might resolve these problems, at least in principle.

I would say that it is a mistake to view the elements of a vector (matrix, whatever -- for simplicity, I'll just refer to matrices henceforth) as having "units" per se. Rather, a matrix is itself a quantity, just as a scalar is, but one with, for lack of a better term, structure. Just as a scalar can have associated units (type), so can a matrix. The "unit" in this case is structured in the same way as the quantity. One consequence, perhaps the most direct, of this "structure" in the quantity is that there are accessor functions on that matrix that return scalar quantities with scalar units.

To multiply two matrices they must be compatible, not only in shape but in other aspects of their structure. Given that the units (complete structure) of the two matrices are compatible, they can be multiplied together and the "unit" for the result is well defined. Once compatibility has been checked, the unitless "values" for the quantities (e.g., old-fashioned matrices of floats) are numerically multiplied and the result is cast to the proper structured unit.

This is just a matter of separating the abstract interface from the particular implementation. It is a conceptual error to automatically equate a position in 3-space with an array of floats or doubles with one dimension and three elements representing orthogonal distances from an origin. The latter is how the concept might (or might not) be implemented.

Topher Cooper
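A hedged sketch of this separation (all names invented): the unit structure is a compile-time tag over plain numeric storage, and multiplication checks compatibility once via a trait, multiplies the unitless values, and re-tags the result:

#include <cstddef>

// Hypothetical sketch: plain numeric storage, with the structured "unit"
// carried as a compile-time tag on the matrix as a whole.
struct numeric2x2 { double a[2][2]; };  // old-fashioned unitless storage

inline numeric2x2 mul(const numeric2x2& x, const numeric2x2& y)
{
    numeric2x2 r;
    for (std::size_t i = 0; i < 2; ++i)
        for (std::size_t j = 0; j < 2; ++j)
            r.a[i][j] = x.a[i][0] * y.a[0][j] + x.a[i][1] * y.a[1][j];
    return r;
}

template <typename UnitStructure>          // tag describing per-slot units
struct typed_matrix { numeric2x2 values; };

template <typename U1, typename U2>
struct product_unit;                       // defined only for compatible tags

template <typename U1, typename U2>
typed_matrix<typename product_unit<U1, U2>::type>
operator*(const typed_matrix<U1>& x, const typed_matrix<U2>& y)
{
    // compatibility was checked by instantiating product_unit; now just
    // multiply the unitless values and re-tag the result
    typed_matrix<typename product_unit<U1, U2>::type> r;
    r.values = mul(x.values, y.values);
    return r;
}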

On 6/9/06, Andy Little <andy@servocomm.freeserve.co.uk> wrote:
In cases of vectors, I have only used vectors where all elements are one type of quantity. The vectors are used to represent position, direction and so on in 3 dimensions. A container that holds different quantities I would consider to be a tuple. But I stress I am not an expert.
I am not an expert either, but in practice I have used (incorrectly? should I have used a tuple or a custom data type?) vectors for representing latitude/longitude/altitude with mixed units. Something like vec3<double, unsigned short, double> for decimal degrees, meters, decimal degrees. I often need to move back and forth from those units to nautical miles and feet, and I bet a library like PQS would help me out more than meaningful variable names. I am ambivalent about whether the vector should allow mixed data types or not. I could settle for using a tuple instead, as long as vector functions like normalize(my_vec3) or length(my_vec) worked on tuple types as well. --Michael Fawcett

"Michael Fawcett" wrote
On 6/9/06, Andy Little wrote:
In cases of vectors, I have only used vectors where all elements are one type of quantity. The vectors are used to represent position, direction and so on in 3 dimensions. A container that holds different quantities I would consider to be a tuple. But I stress I am not an expert.
I am not an expert either, but in practice I have used (incorrectly? should I have used a tuple or a custom data type?) vectors for representing latitude/longitude/altitude with mixed units. Something like vec3<double, unsigned short, double> for decimal degrees, meters, decimal degrees.
Well, IMO this sounds more like a navigation coordinate (or whatever the standard name for it is), and this sounds specific enough to make it its own class. Maybe you could convert it to a vector relative to the center of the earth:

struct nav_coord{
    boost::pqs::angle::s latitude, longitude;
    boost::pqs::length::m altitude;
    operator boost::pqs::three_d::vect<boost::pqs::length::m>();
};

AFAIK a vector classically represents a magnitude and direction without more information though, so the vector would be a position vector.
I often need to move back and forth from those units to nautical miles and feet, and I bet a library like PQS would help me out more than meaningful variable names.
You could return the distance (or distance vector) from a distance function (between two navigation coordinates presumably) in (say) meters. In PQS, conversions to non-SI units are automatic, so the result can be assigned to (a vector in) nautical miles or feet. (In PQS vectors are designed to work this way as well as scalars):

/* show that conversion works for vectors as well as scalars
   NB not tested in pqs_3_1_1 release version
*/
#include <iostream>
// note: ideally it's ..
//#include <boost/pqs/three_d/out/vect.hpp>
// but in pqs_3_1_1...
#include <boost/pqs/three_d/vect_out.hpp>
#include <boost/pqs/t1_quantity/types/out/length.hpp>

namespace pqs = boost::pqs;

int main()
{
    typedef pqs::length::naut_mile naut_mile;
    typedef pqs::length::ft foot;

    pqs::three_d::vect<naut_mile> naut_mile_vect(
        naut_mile(1), naut_mile(1), naut_mile(1)
    );
    pqs::three_d::vect<foot> ft_vect = naut_mile_vect;
    std::cout << ft_vect << '\n';

    // convert absolute length of vect to meters FWIW
    pqs::length::m length_of_vect = magnitude(ft_vect);
    std::cout << length_of_vect << '\n';
}

// output:
// [6076.12 ft, 6076.12 ft, 6076.12 ft]
// 3207.76 m
I am ambivalent about whether the vector should allow mixed data types or not. I could settle for using a tuple instead, as long as vector functions like normalize(my_vec3) or length(my_vec) worked on tuple types as well.
My guess is the nav coordinates would work best as a class? This would enable, for example, conversion to/from position vectors etc. as above, which isn't possible with a tuple.

regards
Andy Little

Andy Little wrote:
AFAIK a vector classically represents a magnitude and direction without more information though, so the vector would be a position vector
Yes, anything that is a representation of a magnitude and a direction is, in the math and physics senses of the word, a vector. Then there are different representations of that vector, and that is where it is possible for mixed units to appear.

In general, the only representation that has no concerns about mixed units is cartesian coordinates in a space where the sense of scale in all directions has the same units. The x, y and z positions in cartesian 3 space is an example. However, that same vector is represented by a magnitude and 2 direction angles in spherical 3 space, and there is no a priori reason to prefer one representation to another. The choice of representation is always current application dependent. Thus, for some not explicitly defined potential vector library it is possible to have mixed units in even the simplest of applications.

There are also spaces where the units (or more accurately, the dimensions) are not the same in all directions, so any vectors in those spaces will have mixed units in any coordinate system. A commonly used one is called "phase space" and it includes the position and momentum variables for a system all in the same space. Thinking of them together turns out to be quite important in some applications, so the example can be quite meaningful for some people.

John Phillips

"John Phillips" wrote
Andy Little wrote:
AFAIK a vector classically represents a magnitude and direction without more information though, so the vector would be a position vector
Yes, anything that is a representation of a magnitude and a direction is, in the math and physics senses of the word, a vector. Then there are different representations of that vector, and that is where it is possible for mixed units to appear. In general, the only representation that has no concerns about mixed units is cartesian coordinates in a space where the sense of scale in all directions has the same units. The x, y and z positions in cartesian 3 space is an example. However, that same vector is represented by a magnitude and 2 direction angles in spherical 3 space, and there is no a priori reason to prefer one representation to another. The choice of representation is always current application dependent.
In creating C++ types, my first thought would be to keep the cartesian and polar types separate for efficiency reasons. If necessary (which I don't see it would be) a polymorphic_vector could provide the interface of both types.

Regarding a cartesian_vector (I'm not sure if that is technically the correct nomenclature...?), I originally wrote it with a separate template parameter for each element. There is no runtime cost, but there are other costs. One is in writing code for the type. An addition would require 6 template parameters (allowing for implicit conversions), not 2. The second is in compilation time, and the third is in additional documentation. A fourth is design complexities - should all the elements be addable, for example?

My current thinking is that it is better to design a type with the minimal interface which will cover some reasonable percentage of use. It is notable that in the PQS review, two reviewers have stated that the design of the so-called t1_quantity type in PQS is overcomplicated. Two decisions complicated the design of t1_quantity. The first was the requirement to distinguish dimensionally equivalent quantities (torque and energy, say). The second was the use of rational rather than integer powers of dimension. The type would be considerably leaner without these requirements, but they were a result of responding to demands for more flexibility and took considerable time to implement.
Thus, for some not explicitly defined potential vector library it is possible to have mixed units in even the simplest of applications.
There are also spaces where the units (or more accurately, the dimensions) are not the same in all directions, so any vectors in those spaces will have mixed units in any coordinate system. A commonly used one is called "phase space" and it includes the position and momentum variables for a system all in the same space. Thinking of them together turns out to be quite important in some applications, so the example can be quite meaningful for some people.
That is interesting, though I find anything beyond the usual space difficult to visualise. The ability to visualise things such as this seems to be what marks out mathematicians.

regards
Andy Little

Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
There are also spaces where the units (or more accurately, the dimensions) are not the same in all directions, so any vectors in those spaces will have mixed units in any coordinate system. A commonly used one is called "phase space" and it includes the position and momentum variables for a system all in the same space. Thinking of them together turns out to be quite important in some applications, so the example can be quite meaningful for some people.
That is interesting, though I find anything beyond the usual space difficult to visualise. The ability to visualise things such as this seems to be what marks out mathematicians.
... and engineers? physicians? I'm not sure to which category I could belong (certainly not mathematician), but "phase space" is quite common when working with engineering problems. It is the most handy way to represent what is going on in any given point of space. However, vector operations between vectors in "phase space" are different than dot_product and cross_product. Those two operations make no sense in fact. It is, though, common to multiply such vectors by some matrices... -- Janek Kozicki |

Janek Kozicki said: (by the date of Tue, 13 Jun 2006 17:31:58 +0200)
... and engineers? physicians? I'm not sure to which category I could belong (certainly not mathematician)
sorry. reading that made me laugh, obviously I'm tired now. I forgot that I work in material engineering :) -- Janek Kozicki |

Janek Kozicki <janek_listy@wp.pl> wrote:
Janek Kozicki said: (by the date of Tue, 13 Jun 2006 17:31:58 +0200)
... and engineers? physicians? I'm not sure to which category I could belong (certainly not mathematician)
sorry. reading that made me laugh, obviously I'm tired now. I forgot
obviously you are, as you did not check the meaning of the word "physician" (Pol. lekarz) ;) I'm pretty sure that you meant physicist. B.

On Tue, Jun 13, 2006 at 05:31:58PM +0200, Janek Kozicki wrote:
Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
There are also spaces where the units (or more accurately, the dimensions) are not the same in all directions, so any vectors in those spaces will have mixed units in any coordinate system. A commonly used one is called "phase space" and it includes the position and momentum variables for a system all in the same space. Thinking of them together turns out to be quite important in some applications, so the example can be quite meaningful for some people.
That is interesting, though I find anything beyond the usual space difficult to visualise. The ability to visualise things such as this seems to be what marks out mathematicians.
... and engineers? physicians? I'm not sure to which category I could belong (certainly not mathematician), but "phase space" is quite common when working with engineering problems. It is the most handy way to represent what is going on at any given point in space. However, vector operations between vectors in "phase space" are different from dot_product and cross_product; in fact those two operations make no sense there.
It is common, though, to multiply such vectors by some matrices...
I would suggest that instead of trying to make an extremely general vector class, it'd be better to make an extremely specific vector class, together with an extremely general way to build other vector classes.

Specifically, you can make vector3 (or vector<3>, perhaps) a straightforward single-unit Euclidean vector. It can have L2 norms and L^inf norms and cross products and all the operations that are undefined for vectors with components of different units. Then we could define a vector space variant of boost::operators to convert any tuple-like type into a vector space type. If you wanted phase space, you'd do something like this:

template<class Q, class P>
class phase_vector : public boost::vector_operators<phase_vector<Q,P> >
{
public:
    static const int vector_operators_dimension = 2;

    vector<Q> position;
    vector<P> momentum;

    phase_vector(const vector<Q>& position, const vector<P>& momentum)
        : position(position), momentum(momentum)
    {}

    // these would need to be generalized to get<n>()
    vector<Q>& first() { return position; }
    vector<P>& second() { return momentum; }
};

and the vector_operators class would fill in scalar multiplication and addition-related operators. In my opinion, this would be far more useful than writing phase space as

vector<m,m,m,kg_m_div_s,kg_m_div_s,kg_m_div_s>

Geoffrey
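For what it's worth, a minimal sketch of how such a vector_operators base could fill in an operator via CRTP (two-component case only; everything here is hypothetical, namespaces omitted, and the element types are assumed to support += themselves):

template <class Derived>
struct vector_operators
{
    // found by argument-dependent lookup, because a base class is an
    // associated class of Derived
    friend Derived operator+(Derived a, Derived b)
    {
        a.first()  += b.first();
        a.second() += b.second();
        return a;
    }
};

Scalar multiplication and the remaining operators would follow the same pattern, generalized over get<n>().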

Geoffrey Irving said: (by the date of Tue, 13 Jun 2006 09:30:39 -0700)
I would suggest that instead of trying to make an extremely general vector class, it'd be better to make an extremely specific vector class, together with an extremely general way to build other vector classes.
Specifically, you can make vector3 (or vector<3>, perhaps) a straightforward single unit Euclidean vector. It can have L2 norms and L^inf norms and cross products and all the operations that are undefined for vectors with components of different units. Then we could define a vector space variant of boost::operators to convert any tuple-like type into a vector space type.
I like this idea very much. Not general, but specific, with the underlying operations exportable to other classes.
If you wanted phase space, you'd do something like this:
template<class Q,class P> class phase_vector:public boost::vector_operators<phase_vector>; <snip>
I'm afraid that this example is too simplified. phase_vector would need the ability to be multiplied by matrix_with_units (weird name I know ;) - but it's rather not a phase_matrix)
and the vector_operators class would fill in scalar multiplication and addition-related operators. In my opinion, this would be far more useful than writing phase space as
vector<m,m,m,kg_m_div_s,kg_m_div_s,kg_m_div_s>
Indeed it would be more useful, once it is done right. But I have the impression that it will be more difficult than getting vector<3> right, or getting the units system right :/

PS: I like vector<3>, I think that Andy can't argue with this name :>

In fact all this discussion is great for deciding on a specification for such a library. When the specification is agreed upon, then implementation should be easy enough :) -- Janek Kozicki |

On Tue, Jun 13, 2006 at 09:13:43PM +0200, Janek Kozicki wrote:
Geoffrey Irving said: (by the date of Tue, 13 Jun 2006 09:30:39 -0700)
I would suggest that instead of trying to make an extremely general vector class, it'd be better to make an extremely specific vector class, together with an extremely general way to build other vector classes.
Specifically, you can make vector3 (or vector<3>, perhaps) a straightforward single unit Euclidean vector. It can have L2 norms and L^inf norms and cross products and all the operations that are undefined for vectors with components of different units. Then we could define a vector space variant of boost::operators to convert any tuple-like type into a vector space type.
I like this idea very much. Not general, but specific, with the underlying operations exportable to other classes.
If you wanted phase space, you'd do something like this:
template<class Q,class P> class phase_vector:public boost::vector_operators<phase_vector>; <snip>
I'm afraid that this example is too simplified. phase_vector would need the ability to be multiplied by matrix_with_units (weird name I know ;) - but it's rather not a phase_matrix)
This example handles that just fine, except for one detail. You just define a matrix_operators class, and then define multiplication between any class inheriting from matrix_operators and anything inheriting from vector_operators where the types match. The annoying detail is that you'd have to return the right type, so you'd need some policy mechanism for specifying the return types of such things. For my personal use, I wouldn't mind having to specify a bunch of policy details each time I defined one of these special vectors, since all the policy details would hopefully have intuitive physical meanings. Assuming it's possible to set them up in a way that captures all the reasonable operations, that is.
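One possible shape for that return-type policy (a sketch only; every name here is hypothetical): a trait specialized per matrix/vector pair, which the generic multiplication defers to.

template <class Q, class P> class phase_vector;   // as sketched above
template <class Q, class P> class phase_matrix;   // hypothetical counterpart

template <class M, class V>
struct product_result;  // primary template left undefined:
                        // unrelated (M,V) pairs simply fail to compile

// e.g. a unit-aware phase_matrix acting on a phase_vector
template <class Q, class P>
struct product_result< phase_matrix<Q,P>, phase_vector<Q,P> >
{
    typedef phase_vector<Q,P> type;
};

template <class M, class V>
typename product_result<M,V>::type
operator*(const M& m, const V& v);  // in practice this would be constrained to
                                    // the matrix_operators/vector_operators
                                    // bases rather than left fully open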
and the vector_operators class would fill in scalar multiplication and addition-related operators. In my opinion, this would be far more useful than writing phase space as
vector<m,m,m,kg_m_div_s,kg_m_div_s,kg_m_div_s>
Indeed it would be more useful, once it is done right. But I have the impression that it will be more difficult than getting vector<3> right, or getting the units system right :/
PS: I like vector<3>, I think that Andy can't argue with this name :>
Or vector<T,3>, since there's no reason not to allow floats and ints. Template specialization is probably required to get all the lower-dimensional versions fast, but that's happily transparent to the user. Specialization also allows the low-dimension versions to have nice member variables like x,y,z.
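A sketch of that specialization idea (hypothetical): the generic version stores an array and loops, while the 3D specialization gets named members and unrolled arithmetic.

template <int N, typename T>
struct small_vector
{
    T e[N];
    small_vector& operator+=(const small_vector& o)
    {
        for (int i = 0; i < N; ++i) e[i] += o.e[i];
        return *this;
    }
};

template <typename T>
struct small_vector<3, T>
{
    T x, y, z;
    small_vector& operator+=(const small_vector& o)
    {
        x += o.x; y += o.y; z += o.z;  // no loop for the optimizer to miss
        return *this;
    }
};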
In fact all this discussion is great for deciding on a specification for such a library. When the specification is agreed upon, then implementation should be easy enough :)
Geoffrey

--- Geoffrey Irving <irving@cs.stanford.edu> wrote:
On Tue, Jun 13, 2006 at 09:13:43PM +0200, Janek Kozicki wrote:
Geoffrey Irving said: (by the date of Tue, 13 Jun 2006 09:30:39 -0700)
Specifically, you can make vector3 (or vector<3>, perhaps) a straightforward single unit Euclidean vector. It can have L2 norms and L^inf norms and cross products and all the operations that are undefined for vectors with components of different units. Then we could define a vector space variant of boost::operators to convert any tuple-like type into a vector space type.
In my opinion, this would be far more useful than writing phase space as
vector<m,m,m,kg_m_div_s,kg_m_div_s,kg_m_div_s>
PS: I like vector<3>, I think that Andy can't argue with this name :>
Or vector<T,3>
There's been a lot of discussion on this issue of vectors and how/whether to deal with vectors or tuples of mixed units. I'd like to chime in with some of my thoughts: 1 - why I think mixed vectors are both sensible and important, and 2 - why I think this could be easy to implement, at least partially.

1. First, what's the real difference between a vector and a tuple? There are probably differences in the way they're visualized by various people, but I think we should be primarily concerned with the differences in semantics - what operations are meaningful and/or useful on each, which distinguish one kind from the other. The way I think about them, I see at least three differences:

A. Vectors have operations like magnitude (length), dot products, cross products, angle between two vectors, etc. For tuples, in general, none of these functions have a meaningful definition. (This is probably why they are hard to "visualize" as vectors.)

B. Vectors can be transformed to other vectors by matrix multiplication. Thus, it's useful to have them be compatible with or embedded in some sort of matrix library. Tuples typically are not suited for such use.

C. Vectors, like matrices, can be indexed numerically (1st row, 2nd column, etc.), so it's easy to loop over the elements. This is part of what makes them suited for matrix calculations. Tuples are frequently referenced by name instead of number - e.g., the members of a struct.

So what about "vectors" of mixed dimensions/units? As far as property A goes, they would not act like vectors. But for property B, there *are* many engineering applications that need mixed aggregates to have these operations. The bulk of my own work falls into this category, and this is, in fact, the situation for which I developed my dimensional analysis library in the first place. Janek Kozicki also commented on the need for matrix multiplication with vectors in phase space.

In my work, I tend to visualize these mentally as tuples, not as vectors in some N-dimensional space. But it turns out the mathematics I need to perform requires them to be involved in lots of matrix calculations - which also means the matrices themselves have mixed units.

In summary, I think it's important to allow vectors whose elements have different physical dimensions - even though certain operations like vector length will fail unless all the dimensions are the same.

2. The good news is that I think this is almost trivial to implement using the "t3_quantity" or "free_quantity" or whatever we decide to call it. And with the other two "quantities" I found it extremely difficult to implement in a general way, so I suggest we don't bother. If the user needs mixed vectors, he can use "free_quantity." (Or he can write his own matrix operations for his special case, or exit the strong typing and dimension checking for the matrix operations.)

We can do this if we define vectors as vector<N,T> like this:

vector<2,double>              // 2D dimensionless vector
vector<3,pqs::length::km>     // 3D position vector in km
vector<6,pqs::free_quantity>  // 6-element vector of mixed units
                              // (e.g., phase space)

And I agree that this is better than:

vector<m,m,m,kg_m_div_s,kg_m_div_s,kg_m_div_s>

Likewise with matrices, perhaps use matrix<M,N,T> like this:

matrix<3,3,double>
matrix<2,3,pqs::time::s>
matrix<7,6,pqs::free_quantity>

etc. FWIW, in my library I made the type parameter default to double, which allows simply vector<3> if you want a unitless vector.
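A sketch of that declaration (hypothetical; the pqs element types are assumed to come from the library under review):

template <int N, typename T = double>
class vector
{
public:
    T&       operator[](int i)       { return e[i]; }
    const T& operator[](int i) const { return e[i]; }
private:
    T e[N];
};

vector<3>                     v;  // unitless, double by default
vector<3, pqs::length::km>    p;  // 3D position in km
vector<6, pqs::free_quantity> s;  // mixed units, e.g. phase space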
Templatize specialization is probably required to get all the lower dimensional versions fast, but that's happily transparent to the user. Specialization also allows the low dimension versions to have nice member variables like x,y,z.
I agree. Especially since cross products only exist for 3D, template specialization is probably needed for that case at least. -- Leland

I'm quoting Leland Brown completely, so that this very insightful comment now has sane text formatting. I fully agree with all observations. I hope we can put it together somewhere and make a specification for this upcoming vector/matrix-with-units library? (part of pqs in fact) My few comments are below. Leland Brown said: (by the date of Wed, 14 Jun 2006 02:21:31 -0700 (PDT))
There's been a lot of discussion on this issue of vectors and how/whether to deal with vectors or tuples of mixed units. I'd like to chime in with some of my thoughts: 1 - why I think mixed vectors are both sensible and important, and 2 - why I think this could be easy to implement, at least partially.
1. First, what's the real difference between a vector and a tuple? There are probably differences in the way they're visualized by various people, but I think we should be primarily concerned with the differences in semantics - what operations are meaningful and/or useful on each, which distinguish one kind from the other. The way I think about them, I see at least three differences:
A. Vectors have operations like magnitude (length), dot products, cross products, angle between two vectors, etc. For tuples, in general, none of these functions have a meaningful definition. (This is probably why they are hard to "visualize" as vectors.)
B. Vectors can be transformed to other vectors by matrix multiplication. Thus, it's useful to have them be compatible with or embedded in some sort of matrix library. Tuples typically are not suited for such use.
C. Vectors, like matrices, can be indexed numerically (1st row, 2nd column, etc.), so it's easy to loop over the elements. This is part of what makes them suited for matrix calculations. Tuples are frequently referenced by name instead of number - e.g., the members of a struct.
So what about "vectors" of mixed dimensions/units? As far as property A, they would not act like vectors. But for property B, there *are* many engineering applications that need mixed aggregates to have these operations. The bulk of my own work falls into this category, and this is, in fact, the situation for which I developed my dimensional analysis library in the first place. Janek Kozicki also commented on the need for matrix multiplication with vectors in phase space.
In my work, I tend to visualize these mentally as tuples, not as vectors in some N-dimensional space. But it turns out the mathematics I need to perform requires them to be involved in lots of matrix calculations - which also means the matrices themselves have mixed units.
In summary, I think it's important to allow vectors whose elements have different physical dimensions - even though certain operations like vector length will fail unless all the dimensions are the same.
2. The good news is that I think this is almost trivial to implement using the "t3_quantity" or "free_quantity" or whatever we decide to call it. And with the other two "quantities" I found it extremely difficult to implement in a general way, so I suggest we don't bother. If the user needs mixed vectors, he can use "free_quantity." (Or he can write his own matrix operations for his special case, or exit the strong typing and dimension checking for the matrix operations.)
We can do this if we define vectors as vector<N,T> like this:
vector<2,double>              // 2D dimensionless vector
vector<3,pqs::length::km>     // 3D position vector in km
vector<6,pqs::free_quantity>  // 6-element vector of mixed units
                              // (e.g., phase space)
And I agree that this is better than:
vector<m,m,m,kg_m_div_s,kg_m_div_s,kg_m_div_s>
Likewise with matrices, perhaps use matrix<M,N,T> like this:
matrix<3,3,double>
matrix<2,3,pqs::time::s>
matrix<7,6,pqs::free_quantity>
etc. FWIW, in my library I made the type parameter default to double, which allows simply vector<3> if you want a unitless vector.
Template specialization is probably required to get all the lower-dimensional versions fast, but that's happily transparent to the user. Specialization also allows the low-dimension versions to have nice member variables like x,y,z.
I agree. Especially since cross products only exist for 3D, template specialization is probably needed for that case at least.
Also, specializations can distinguish whether T is free_quantity or not. And depending on that, they could provide category A operations (dot product, cross product, magnitude, etc.) - because such operations can work for a vector with components of the same type.

The second point is about matrix operations. The need to "bundle" matrix operations within this library limits the possible usage of ublas/lapack libraries. Possible solutions:

1. Include a call, transparent to the user, to external methods that solve matrix problems, like the most popular Ax=b (the bare-bones approach is to invert matrix A, but there are more subtle and efficient methods). The user will work within this library, and this library will use an external backend, while taking care of units.

2. Taking care of units and calling external methods may be too complicated. It would be simpler to implement the operations on our own. Tempting, but can we provide "all" the functionality?

3. Don't do it at all. Just make matrix classes (like in the examples above) that cannot do any operations except multiplication with a vector. The user will decide how he wants to handle the rest - write his own code to solve Ax=b while taking care of units, or call lapack and temporarily turn units "off".

For the beginning we can take approach 3, because it minimizes the amount of work, so we have a small library to start with. But later maybe we can try to improve towards 1 or 2. (A sketch of approach 3 follows at the end of this post.)

Another question - would quaternions be something like vector<4,double> ? For me it looks like a good idea. Template specialization can offer category A operations that are specific to quaternions, when someone works with unitless vector<4>. It would be just like template specialization will provide cross product for vector<3>, which is specific only for vector<3>. Besides quaternions are also known to be used together with matrix<4,4,double>.

One last note: the above design does not allow vectors and matrices to be resized at runtime. This limits the library's usage. Should there be a slower complementary library added that allows data to be resized at runtime? Any ideas? Personally I see a limited need for that; the only exception is working with FEM. But FEM would also certainly require a working method that solves Ax=b. So maybe we should first focus on vectors/matrices that are not resizable at runtime.

Heh, I just recognized a similarity between this problem and pqs, look:

everything determined                          everything determined
during compilation                             during runtime stage
--------------------+----------------------+------------------
t1_quantity         | t2_quantity          | t3_quantity
fixed_quantity      | scaled_quantity      | free_quantity
                    |                      |
vector<3>           |                      | vector.resize(3)
matrix<4,4>         | ?                    | matrix.resize(4,4)
fixed_vector ?      |                      | free_vector ?

-- Janek Kozicki |
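A sketch of approach 3 (hypothetical names; assumes the vector<N,T> sketch given earlier and a result_of_multiplies-style trait, for which a placeholder is shown):

template <typename LHS, typename RHS>
struct result_of_multiplies { typedef LHS type; };  // placeholder for the real
                                                    // product-type trait

template <int M, int N, typename T>
class matrix
{
public:
    T&       operator()(int r, int c)       { return e[r][c]; }
    const T& operator()(int r, int c) const { return e[r][c]; }
private:
    T e[M][N];
};

// the only operation provided: matrix * vector
template <int M, int N, typename T1, typename T2>
vector<M, typename result_of_multiplies<T1,T2>::type>
operator*(const matrix<M,N,T1>& a, const vector<N,T2>& x)
{
    vector<M, typename result_of_multiplies<T1,T2>::type> y;
    for (int r = 0; r < M; ++r)
    {
        y[r] = a(r,0) * x[0];
        for (int c = 1; c < N; ++c)
            y[r] = y[r] + a(r,c) * x[c];  // with free_quantity elements, the
                                          // dimension check happens right here
    }
    return y;
}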

On 6/14/06, Janek Kozicki <janek_listy@wp.pl> wrote:
Another question - would quaternions be something like vector<4,double> ? For me it looks like a good idea. Template specialization can offer category A operations that are specific to quaternions, when someone works with unitless vector<4>.
It would be just like template specialization will provide cross product for vector<3>, which is specific only for vector<3>.
Besides quaternions are also known to be used together with matrix<4,4,double>.
I don't think this is the correct thing to do. There are uses of vector<4> where quaternion operations don't make sense, but classic vector operations do.

In graphics programming, vector<4>s are sometimes used as the color component, with data encoded into each field to be used by a hardware shader. There are often times where the 'w' component is used to signal that the vertex is at infinity, but should otherwise still be treated as a classic vector<3>. For lights, the 'w' component is used to signal the type of light, directional or point (omni-directional). That type specifies how the x,y,z components should be interpreted. If the light is directional, then x,y,z is a direction vector. If the type is a point light, the x,y,z components are a position in space.

I agree about only allowing certain operations using template specialization for the vector/quaternion classes, but I disagree that quaternion can simply be vector<4> with the provided quaternion specializations. I think it should be a separate class following the same principles.

I currently use enable_if and is_same from boost to determine what functions should be exposed. There are convenience functions like:

as_array() to be used like glVertex3fv(my_vec.as_array());

that only make sense if the vector contains elements all of the same type. Otherwise, in cases like vec3<float, short, float>, as_array() would be disabled. I suspect that we could do the same for the rest of the operations (dot_product, cross_product, etc).

--Michael Fawcett
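Michael's actual code isn't shown, but the gating he describes might look roughly like this as a free function (a sketch only; note that SFINAE applies to templates, so a non-template member-function version would need more care):

#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_same.hpp>

template <typename X, typename Y, typename Z>
struct vec3 { X x; Y y; Z z; };

// participates in overload resolution only when all three element types match
template <typename X, typename Y, typename Z>
typename boost::enable_if_c<
    boost::is_same<X,Y>::value && boost::is_same<Y,Z>::value,
    const X*
>::type
as_array(const vec3<X,Y,Z>& v)
{
    return &v.x;  // assumes the three members are laid out contiguously,
                  // as GL-oriented code of this kind typically does
}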

Michael Fawcett said: (by the date of Wed, 14 Jun 2006 13:16:12 -0400) Hi Michael,
I currently use enable_if and is_same from boost to determine what functions should be exposed. There are convenience functions like:
as_array() to be used like glVertex3fv(my_vec.as_array());
that only make sense if the vector contains elements all of the same type. Otherwise, in cases like vec3<float, short, float>, as_array() would be disabled. I suspect that we could do the same for the rest of the operations (dot_product, cross_product, etc).
In the current design (see parent post) it is decided that all components of vector/matrix/quaternion have the same underlying type (specified as the second template argument). IMO this is a good decision, because the main goal here is to work with units (and then math), and this most of the time involves math, physics or engineering. With such target usage it makes no sense to use different underlying types. The examples that you have provided below are not from the math, physics or engineering fields, so they do not apply here. It is possible that this library will work in such scenarios, but this is not the design target. Possible solutions are given below.
On 6/14/06, Janek Kozicki <janek_listy@wp.pl> wrote:
Another question - would quaternions be something like vector<4,double> ? For me it looks like a good idea. Template specialization can offer category A operations that are specific to quaternions, when someone works with unitless vector<4>.
It would be just like template specialization will provide cross product for vector<3>, which is specific only for vector<3>.
Besides quaternions are also known to be used together with matrix<4,4,double>.
I don't think this is the correct thing to do. There are uses of vector<4> where quaternion operations don't make sense, but classic vector operations do.
In graphics programming vector<4>s are sometimes used as the color component, with data encoded into each field to be used by a hardware shader.
It is more like a tuple with four components. Of course storing them as a vector is handy, but there is no math inside; it's just a container for data.
There are often times where the 'w' component is used to signal that the vertex is at infinity, but should otherwise still be treated as a classic vector<3>.
so it would be a vector<3> with a bool flag. Makes sense, because there is no cross_product defined for vector<4> in which the last component is some bool flag.
For lights, the 'w' component is used to signal the type of light, directional or point (omni-directional). That type specifies how the x,y,z components should be interpreted. If the light is directional, then the x,y,z is a direction vector. If the type is a point light, the x,y,z components are a position in space.
so it's a vector<3> with an enum flag.
I agree about only allowing certain operations using template specialization for the vector/quaternion classes, but I disagree that quaternion can simply be vector<4> with the provided quaternion specializations. I think it should be a separate class following the same principles.
Your arguments did not convince me at all, because they are out of place.

Also - think about it - there is only one applicable definition of multiplication between vector<3> and vector<4>, and this definition will assume vector<4> to be a quaternion which rotates the vector<3>. No other definition for such a multiplication exists, so there is no conflict with other math fields. It's specific, just like the cross product for vector<3>.

So I still think that quaternion can be a template specialization for unitless (dimensionless) vector<4>. Also we can add a typedef vector<4,double> quaternion; // etc... -- Janek Kozicki |

On Thu, Jun 15, 2006 at 07:11:23PM +0200, Janek Kozicki wrote:
Your arguments did not convince me at all, because they are out of place.
Also - think about it - there is only one applicable definition of multiplication between vector<3> and vector<4>, and this definition will assume vector<4> to be a quaternion which rotates the vector<3>.
I wish that were the case, but sadly, quaternion multiplication is slightly ambiguous. The problem is that the normal quaternion rotation is actually a conjugation:

rotated_v = q v q^-1

where v = (x,y,z) is canonically viewed as the quaternion (0,x,y,z). rotated_v will come out looking like (0,rx,ry,rz), so we can drop the zero and view it as a vector again.

This could be a problem if the conversion between vector<3> and quaternion was an actual C++ conversion, since q * v could mean two completely different things. I imagine most people handle this by disallowing that conversion.
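One way to sidestep this is to provide no conversion and no operator* between quaternion and vector<3> at all - only a named function for the conjugation. A sketch (hypothetical quat type with members w,x,y,z whose operator* is Hamilton multiplication; conjugate(q) equals the inverse for unit quaternions, and vector<3,T> is the indexable type sketched earlier):

template <typename T>
vector<3,T> rotate(const quat<T>& q, const vector<3,T>& v)
{
    quat<T> pv;                         // v viewed as the pure quaternion (0,x,y,z)
    pv.w = T(0);
    pv.x = v[0]; pv.y = v[1]; pv.z = v[2];

    quat<T> r = q * pv * conjugate(q);  // the conjugation q v q^-1, for unit q

    vector<3,T> out;
    out[0] = r.x; out[1] = r.y; out[2] = r.z;
    return out;
}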
No other definition for such a multiplication exists, so there is no conflict with other math fields. It's specific, just like the cross product for vector<3>.
So I still think that quaternion can be a template specialization for unitless (dimensionless) vector<4>. Also we can add a typedef vector<4,double> quaternion; // etc...
That's bad for the following simple reason. If you give people two vector<4>'s and ask them to multiply them, they'll expect the answer to either not exist or be component-wise multiplication. There's very little chance they'll expect fancy quaternion multiplication. Quaternions need a multiplication operator to be useful, so they can't be the same as vector<4>. Geoffrey

Geoffrey Irving said: (by the date of Thu, 15 Jun 2006 10:23:24 -0700)
On Thu, Jun 15, 2006 at 07:11:23PM +0200, Janek Kozicki wrote:
So I still think that quaternion can be a template specialization for unitless (dimensionless) vector<4>. Also we can add a typedef vector<4,double> quaternion; // etc...
That's bad for the following simple reason. If you give people two vector<4>'s and ask them to multiply them, they'll expect the answer to either not exist or be component-wise multiplication. There's very little chance they'll expect fancy quaternion multiplication. Quaternions need a multiplication operator to be useful, so they can't be the same as vector<4>.
You are absolutely right; I forgot about this one when I was talking about quaternions. So it means that quaternions will be implemented as a separate class. Though it is possible that quaternion will use vector<4> inside as a backend to store data...
Also - think about it - there is only one applicable definition of multiplication between vector<3> and vector<4>, and this definition will assume vector<4> to be a quaternion which rotates the vector<3>.
I wish that were the case, but sadly, quaternion multiplication is slightly ambiguous. The problem is that the normal quaternion rotation is actually a conjugation:
rotated_v = q v q^-1
where v = (x,y,z) is canonically viewed as the quaternion (0,x,y,z). rotated_v will come out looking like (0,rx,ry,rz), so we can drop the zero and view it as a vector again.
This could be a problem if the conversion between vector<3> and quaternion was an actual C++ conversion, since q * v could mean two completely different things. I imagine most people handle this by disallowing that conversion.
IMHO it's a good solution. Such a conversion would not be used anyway. Also, the documentation must explicitly state that q*v is actually a conjugation. In fact, when I started working with yade I had some problems: one of the questions I asked myself was what q*v actually does - does it perform a rotation (full conjugation)? Or maybe it's just multiplication? I had to check that in the source, because I was using a third-party math library, not one written by me. -- Janek Kozicki |

On 6/15/06, Janek Kozicki <janek_listy@wp.pl> wrote:
Michael Fawcett said: (by the date of Wed, 14 Jun 2006 13:16:12 -0400)
Hi Michael,
I currently use enable_if and is_same from boost to determine what functions should be exposed. There are convenience functions like:
as_array() to be used like glVertex3fv(my_vec.as_array());
that only make sense if the vector contains elements all of the same type. Otherwise, in cases like vec3<float, short, float>, as_array() would be disabled. I suspect that we could do the same for the rest of the operations (dot_product, cross_product, etc).
In the current design (see parent post) it is decided that all components of vector/matrix/quaternion have the same underlying type (specified as the second template argument). IMO this is a good decision, because the main goal here is to work with units (and then math), and this most of the time involves math, physics or engineering. With such target usage it makes no sense to use different underlying types.
I was really just agreeing with the current sentiment that we should allow some operations and disallow others, depending on the template parameters. I was under the impression that the vector<3, free_quantity> syntax was just one alternative to the lengthier one; in fact, I got the impression that Leland Brown currently uses mixed type vectors and matrices with success. I was also unable to find the post where it was agreed to drop mixed types. Could you quote next time?
The examples that you have provided below are not from the math, physics or engineering fields, so they do not apply here. It is possible that this library will work in such scenarios, but this is not the design target.
Possible solutions are given below.
Although I'm ambivalent about the implementation, I can't see why you wouldn't either support mixed types (act like a tuple), or support operations on tuples. See below.
It is more like a tuple with four components. Of course storing them as a vector is handy, but there is no math inside; it's just a container for data.
Sure, but often dot products are used on that data if the author knows it makes sense. Why do you care what is stored in there as the library author? The vector<4, float> (or whatever we are calling it now) may be the color component, but it may store a normal vector in the first 3 components with a texture lookup id in the last component. Who cares if the author wants to say normalize(myvec4.as_vec3()); as long as the types in myvec4 allow normalize to function correctly.
There are often times where the 'w' component is used to signal that the vertex is at infinity, but should otherwise still be treated as a classic vector<3>.
so it would be a vector<3> with a bool flag. Makes sense, because there is no cross_product defined for vector<4> in which the last component is some bool flag.
No, but again, why do you care? The cross product IS defined for a vec<3> of all the same type, so the user will be able to use that if it makes sense. In actuality it would probably be a vec<4, float> so that I could pass it directly on to the video card without an intermediate copy being made.
For lights, the 'w' component is used to signal the type of light, directional or point (omni-directional). That type specifies how the x,y,z components should be interpreted. If the light is directional, then the x,y,z is a direction vector. If the type is a point light, the x,y,z components are a position in space.
so it's a vector<3> with an enum flag.
I agree about only allowing certain operations using template specialization for the vector/quaternion classes, but I disagree that quaternion can simply be vector<4> with the provided quaternion specializations. I think it should be a separate class following the same principles.
Your arguments did not convince me at all, because they are out of place.
In practice, 99% of the time, my vectors and matrices are of the same type. A few other posters (John Philips and Leland Brown notably) have mentioned successful use of mixed type vectors and matrices, which is convincing to me. I fail to see the problem with allowing mixed types implementation-wise, which is my only real argument against disallowing them. If the operation supports the types, allow it; if it doesn't, fail to compile.
So I still think that quaternion can be a template specialization for unitless (dimensionless) vector<4>. Also we can add a typedef vector<4,double> quaternion; // etc...
I think not. vector<4> * vector<4> had better be component-wise; at least, that's how it worked on the PlayStation2.

I think I can agree that vectors and matrices can store the same type if we allow operations to accept tuple types, if that is suitable for others. Perhaps dot_product could be implemented as:

template <typename LHS, typename RHS>
BOOST_TYPEOF_TPL( something here )
dot_product(const LHS &lhs, const RHS &rhs)
{
    return lhs.get<0>() * rhs.get<0>()
         + lhs.get<1>() * rhs.get<1>()
         + lhs.get<2>() * rhs.get<2>();
}

but then our vectors have to act like tuples, and then I guess you have to ask: why isn't a tuple adequate, with a bunch of free functions implemented like the above? PQS would still kick in at the individual operation level (i.e. all of those multiplications and additions).

Regards,

--Michael Fawcett

--- Michael Fawcett <michael.fawcett@gmail.com> wrote:
I was under the impression that the vector<3, free_quantity> syntax was just one alternative to the lengthier one,
That was my understanding too. I see no reason to preclude the type parameter from being free_quantity, and this would automatically allow mixed vectors - and automatically ensure they're only used in ways that make sense. Operations on these vectors will call operations on their elements, and those individual operations on free_quantity will still check for consistency of dimensions and produce errors if the mathematics doesn't make sense.

An example may help. Consider these vector declarations:

vector<2,free_quantity> v1;
v1[0] = 1 * meter();
v1[1] = 2 * meter() / second();

vector<2,free_quantity> v2;
v2[0] = 3 * gram();
v2[1] = 4 * gram();

vector<2,free_quantity> v3;
v3[0] = 5 / second();
v3[1] = 6;

Then v1.magnitude() will produce an error, because it involves adding (1 m)^2 + (2 m/s)^2, and the + operator on free_quantity should complain that the dimensions are not consistent. On the other hand, v2.magnitude() could be allowed - it's a straightforward computation; the result is 5 g. Likewise, the dot product v1.dot(v3) can be done:

v1.dot(v3) = (1 m)*(5/s) + (2 m/s)*6 = 5 m/s + 12 m/s = 17 m/s

Here the computation succeeds because both terms in the sum have the same dimensions (velocity).
in fact, I got the impression that Leland Brown currently uses mixed type vectors and matrices with success.
Yes. In fact my dot product example above is exactly the kind of thing that occurs in my algorithm many, many times (except that it more often involves matrices as well and not just vectors). My application involves a set of input quantities of different dimensions, which influence the output of a process. We need to find the best set of inputs to get the right outputs. If there are N inputs and M outputs, then it involves MxN matrices. The same kind of problem comes up in a variety of engineering disciplines. The math is the same regardless of whether the input (or output) quantities have equal dimensions. In my case, they do not, so I need mixed vectors and matrices.

Another example is a covariance matrix, describing the statistical relationships between quantities of different dimensions. In this case, the elements of the matrix will have many different dimensions. And a common computation called "propagation of errors" requires this matrix to be multiplied by other matrices of mixed dimensions.

But a key point is: always, as in my dot product example, the individual addition operations will end up having consistent dimensions if the matrices have been designed correctly. And if they haven't - well, that's exactly where using PQS with a matrix library is really useful to spot the error! -- Leland

"Leland Brown" wrote
I was under the impression that the vector<3, free_quantity> syntax was just one alternative to the lengthier one,
That was my understanding too. I see no reason to preclude the type parameter from being free_quantity, and this would automatically allow mixed vectors - and automatically ensure they're only used in ways that make sense.
FWIW the type T of allowable elements of a vector<N,T> should be described in terms of a Concept. IOW, in my limited understanding of ConceptGCC (http://www.generic-programming.org/software/ConceptGCC/), the vector definition would be something like:

template <int N, Quantity Q> struct vector {/*...*/};

Current C++ can't enforce the Quantity concept, so it will have to rely on external documentation. (Though I really ought to try to get ConceptGCC running.) Working this way, and assuming a PQS type is a model of Quantity, things should work. This would allow the Geometry library to be decoupled from PQS.

BTW Leland, thanks for the post. There is a huge amount I would like to comment on, but for time reasons I'll just lay down my garbled notes in the hope they make some sense:

- Yes free quantity,
- Must be able to switch this in and out for performance reasons, greatly affects how PQS might be used.
- Need various views.

Apologies if that makes no sense ... regards Andy Little

Andy Little said: (by the date of Fri, 16 Jun 2006 10:01:06 +0100)
Working this way, and assuming a PQS type is a model of Quantity, things should work. This would allow the Geometry library to be decoupled from PQS.
yeah, decoupling is what we need :)
- Yes free quantity,
- Must be able to switch this in and out for performance reasons, greatly affects how PQS might be used.
- Need various views.
Can you elaborate a bit on views? Is it Jesper's mechanism ( length x = 10 * meter(); ), or maybe the ability to choose between using only SI base units, or all SI units including powers of 10, or physicists' units (electronvolt, speed of light = 1)? -- Janek Kozicki |

Hi Janek, "Janek Kozicki" <janek_listy@wp.pl> wrote in message news:20060618012314.6b5d8aee@absurd...
Andy Little said: (by the date of Fri, 16 Jun 2006 10:01:06 +0100)
Working this way, and assuming a PQS type is a model of Quantity, things should work. This would allow the Geometry library to be decoupled from PQS.
yeah, decoupling is what we need :)
- Yes free quantity,
- Must be able to switch this in and out for performance reasons, greatly affects how PQS might be used.
- Need various views.
Can you elaborate a bit on views? Is it Jesper's mechanism ( length x = 10 * meter(); ), or maybe the ability to choose between using only SI base units, or all SI units including powers of 10, or physicists' units (electronvolt, speed of light = 1)?
I see the easy-to-use syntax

boost::pqs::length::m

as a 'view' of the

boost::pqs::t1_quantity<abstract_quantity<...>,unit<...>,value_type>

Another view might be my::length, where only base units are used. I can think of other ways to present the raw t1_quantity too. One might be just to present the quantities and units you need in one header.

As for other unit systems, there is so much work in simply completing the library for the SI system that I have decided not to try to cover other unit systems, nor make any guarantees that PQS will work for them. The priority is to get what is there to an acceptable standard for Boost. PQS needs a lot of work to get up to boost standards. regards Andy Little

Leland Brown said: (by the date of Thu, 15 Jun 2006 14:33:06 -0700 (PDT))
--- Michael Fawcett <michael.fawcett@gmail.com> wrote:
I was under the impression that the vector<3, free_quantity> syntax was just one alternative to the lengthier one,
That was my understanding too. I see no reason to preclude the type parameter from being free_quantity, and this would automatically allow mixed vectors - and automatically ensure they're only used in ways that make sense.
I have read your whole post, and I totally agree (it was an interesting example too!). I think there was a misunderstanding. The problem is about the underlying data type: do we allow one field of a vector<3> to be double, another field to be int, and another field to be Boost.Rational? Please see my direct reply to Michael, titled "Re: [boost] [PQS] free_quantity contra int,double and Boost.Rational" -- Janek Kozicki |

--- Janek Kozicki <janek_listy@wp.pl> wrote:
I have read your whole post, and I totally agree (it was an interesting example too!). I think there was a misunderstanding.
The problem is about the underlying data type: do we allow one field of a vector<3> to be double, another field to be int, and another field to be Boost.Rational?
Please see my direct reply to Michael, titled "Re: [boost] [PQS] free_quantity contra int,double and Boost.Rational"
Ok, yes, I see the misunderstanding now. I agree - I don't see a need to mix the underlying numeric types. -- Leland

Michael Fawcett said: (by the date of Thu, 15 Jun 2006 15:28:08 -0400)
On 6/15/06, Janek Kozicki <janek_listy@wp.pl> wrote:
in current design (see parent post) it is decided that all components of vector/matrix/quaternion have the same underlying type
I was really just agreeing with the current sentiment that we should allow some operations and disallow others, depending on the template parameters. I was under the impression that the vector<3, free_quantity> syntax was just one alternative to the lengthier one; in fact, I got the impression that Leland Brown currently uses mixed type vectors and matrices with success. I was also unable to find the post where it was agreed to drop mixed types. Could you quote next time?
Hello Michael, Quote: Leland Brown said:
We can do this if we define vectors as vector<N,T> like this:
vector<2,double>              // 2D dimensionless vector
vector<3,pqs::length::km>     // 3D position vector in km
vector<6,pqs::free_quantity>  // 6-element vector of mixed units
                              // (e.g., phase space)
And I agree that this is better than:
vector<m,m,m,kg_m_div_s,kg_m_div_s,kg_m_div_s>
Well, actually it is not spelt out clearly above. I made some assumptions when reading it; perhaps we have a misunderstanding here, which I really hope will be cleared up now. The point is that we can have mixed types in two different categories:

A. mixing different physical quantities (like velocity, acceleration and momentum in a single vector)

B. mixing different underlying numerical data types (like short, double and Boost.Rational in a single vector)

My assumption is that A is allowed, and free_quantity allows the physical quantity to change during runtime, while fixed_quantity doesn't allow that. Also my assumption is that B is NOT allowed, and the whole vector uses the same underlying numerical data type (i.e. all fields in a vector are Boost.Rational), and there is no way to change that underlying numerical type during runtime.

An option (which in my understanding you favor) is to be able to declare a vector which uses a different underlying numerical type in each field, and where those types cannot be changed during runtime. I'm curious about opinions from other people, but personally I oppose such functionality. Because (as I already said) when working in physics/math/engineering it makes no sense to use different underlying types, and also because it will add extra complexity to this library, like type promotion problems, or operator[] problems. We have limited time to spend working on this library. Perhaps this functionality could be added later, if there is really a big need for it.

What do others think? -- Janek Kozicki |

Michael Fawcett said: (by the date of Thu, 15 Jun 2006 15:28:08 -0400)
It is more like a tuple with four components. Of course storing them as a vector is handy, but there is no math inside; it's just a container for data.
Sure, but often dot products are used on that data if the author knows it makes sense. Why do you care what is stored in there as the library author? The vector<4, float> (or whatever we are calling it now) may be the color component, but it may store a normal vector in the first 3 components with a texture lookup id in the last component. Who cares if the author wants to say normalize(myvec4.as_vec3()); as long as the types in myvec4 allow normalize to function correctly.
You are assuming that there is a function in vector<4> that converts it into vector<3> by dropping some undefined (random?) field. In fact such an operation makes little sense in linear algebra; if the user needs to do that, he would do better to declare a vector<3> and assign each field individually, so that he can choose which field is left out, and maybe also change the field order. I'm against adding such a function - it's too imprecise - sorry about that.

Also, Geoffrey Irving said in the same thread that a similar conversion between quaternion and vector<3> would confuse users a lot (and each user could expect something different to be done), so better to disallow it. Quote follows: Geoffrey Irving wrote:
where v = (x,y,z) is canonically viewed as the quaternion (0,x,y,z). rotated_v will come out looking like (0,rx,ry,rz), so we can drop the zero and view it as a vector again. This could be a problem if the conversion between vector<3> and quaternion was an actual C++ conversion, since q * v could mean two completely different things. I imagine most people handle this by disallowing that conversion.
There are often times where the 'w' component is used to signal that the vertex is at infinity, but should otherwise still be treated as a classic vector<3>.
so it would be a vector<3> with a bool flag. Makes sense, because there is no cross_product defined for vector<4> in which the last component is some bool flag.
No, but again, why do you care? The cross product IS defined for a vec<3> of all the same type, so the user will be able to use that if it makes sense. In actuality it would probably be a vec<4, float> so that I could pass it directly on to the video card without an intermediate copy being made.
no problem with that. This will of course work.
I think I can agree that vectors and matrices can store the same type if we allow operations to accept tuple types, if that is suitable for others.
Perhaps dot_product could be implemented as:
template <typename LHS, typename RHS>
BOOST_TYPEOF_TPL( something here )
dot_product(const LHS &lhs, const RHS &rhs)
{
    return lhs.get<0>() * rhs.get<0>()
         + lhs.get<1>() * rhs.get<1>()
         + lhs.get<2>() * rhs.get<2>();
}
but then our vectors have to act like tuples, and then I guess you have to ask, why isn't a tuple adequate with a bunch of free functions implemented like above? PQS would still kick in at the individual operation level (i.e. all of those multiplications and additions).
this is an interesting idea. -- Janek Kozicki |

--- Janek Kozicki <janek_listy@wp.pl> wrote:
3. Don't do it at all. Just make matrix classes (like in the examples above) that cannot do any operations except multiplication with a vector. The user will decide how he wants to handle the rest - write his own code to solve Ax=b while taking care of units, or call lapack and temporarily turn units "off".
I'd probably also include multiplication of one matrix by another, which is an equally straightforward operation. -- Leland

Leland Brown said: (by the date of Fri, 16 Jun 2006 09:45:53 -0700 (PDT))
--- Janek Kozicki <janek_listy@wp.pl> wrote:
3. Don't do it at all. Just make matrix classes (like in the examples above) that cannot do any operations except multiplication with a vector. The user will decide how he wants to handle the rest - write his own code to solve Ax=b while taking care of units, or call lapack and temporarily turn units "off".
I'd probably also include multiplication of one matrix by another, which is an equally straightforward operation.
Fully agreed; I was thinking about it, but didn't write it down. This qualifies to be put into the wiki with the specifications of this library. -- Janek Kozicki |

Janek Kozicki writes:
Leland Brown said: (by the date of Fri, 16 Jun 2006 09:45:53 -0700 (PDT))
[...]
I'd probably also include multiplication of one matrix by another, which is an equally straightforward operation.
Fully agreed; I was thinking about it, but didn't write it down. This qualifies to be put into the wiki with the specifications of this library.
</lurk>

Allow me to ask a stupid question. Doesn't Boost already have a large, working, nifty, linear algebra (or at least vectors-and-matrices) package in UBLAS?

It seems to me that building operations for vectors-with-units and matrices-with-units is a duplication of effort. If the PQS library is as easy to use as [ it should be | we think it is ] then incorporating units into UBLAS should be both easy and a good exercise.

Personally I think we're seeing some scope-creep here. If we come up with a good, general, units/dimensions library, then things like vectors-with-units should be things people do _with_ the library, not things people do _to_ the library, right?

<lurk>

----------------------------------------------------------------------
Dave Steffen, Ph.D.                   Fools ignore complexity.
Software Engineer IV                  Pragmatists suffer it.
Numerica Corporation                  Some can avoid it.
ph (970) 419-8343 x27                 Geniuses remove it.
fax (970) 223-6797                        -- Alan Perlis
dgsteffen_AT_numerica_DOT_us

On Mon, Jun 19, 2006 at 09:43:45AM -0600, Dave Steffen wrote:
Janek Kozicki writes:
Leland Brown said: (by the date of Fri, 16 Jun 2006 09:45:53 -0700 (PDT))
[...]
I'd probably also include multiplication of one matrix by another, which is an equally straightforward operation.
Fully agreed; I was thinking about it, but didn't write it down. This qualifies to be put into the wiki with the specifications of this library.
</lurk>
Allow me to ask a stupid question.
Doesn't Boost already have a large, working, nifty, linear algebra (or at least vectors-and-matrices) package in UBLAS?
I think there is a need for a dedicated small-vector library, not least because it's highly unlikely that UBLAS code applied to fixed-size vectors will optimize sufficiently. I tried switching from a vector with separate x,y,z member variables to one with a single x[3] and compile-time for loops, and it got 30% slower (with gcc 4). Since small vector/matrix operations are often the core of extremely performance-intensive applications, it's worth duplicating a little effort to get a 30% speedup. That said, if the fusion-capable library Dave Abrahams is working on doesn't have as much of a problem here, I'll happily withdraw the above.
It seems to me that building operations for vectors-with-units and matrices-with-units is a duplication of effort. If the PQS library is as easy to use as [ it should be | we think it is ] then incorporating units into UBLAS should be both easy and a good exercise.
Small vectors and transforms require somewhat different interfaces than large vectors. For example, uBLAS does not have cross products, normalization, quaternions, exponential maps from vectors to quaternions, etc. As for units, a vector library should certainly be separate from the units library, but the ability to add units requires a little extra functionality here and there. For example, a unit-capable affine transform class has to know that y = Ax + b has different units for A and b, and therefore needs one more template parameter than it would otherwise require.
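To make that concrete (a sketch, with hypothetical names, the matrix/vector templates sketched earlier, and a placeholder result_of_divides trait): if x carries units U1 and y carries units U2 in y = Ax + b, then the elements of A must carry U2/U1 and the elements of b must carry U2, so both unit parameters end up in the transform's type.

template <typename LHS, typename RHS>
struct result_of_divides { typedef LHS type; };  // placeholder for the real
                                                 // quotient-type trait

template <typename U1, typename U2>
struct affine3
{
    matrix<3, 3, typename result_of_divides<U2,U1>::type> A;  // elements: U2/U1
    vector<3, U2>                                          b;  // elements: U2
};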
Personally I think we're seeing some scope-creep here. If we come up with a good, general, units/dimensions library, then things like vectors-with-units should be things people do _with_ the library, not things people do _to_ the library, right?
I apologize for contributing to this confusion. A small vector library is something to do _with_ PQS, and would not be part of PQS at all. The fact that it's still on the PQS thread is due to bad etiquette. Sorry about that. Dave Abrahams: it would be great if you could give us your two sentence opinion as to whether your linear algebra library subsumes all of this functionality, and whether there would be significant speed penalties compared to a far less general version specific to low dimensions. Geoffrey

On Mon, Jun 19, 2006 at 09:03:19AM -0700, Geoffrey Irving wrote:
large vectors. For example, uBLAS does not have cross products, normalization, quaternions, exponential maps from vectors to quaternions, etc.
Still, it seems that the general vector functions are a subset of the functions for 3-vectors. I wrote a linear algebra library myself (http://gwesp.tx0.org/software/) including quaternions, cross product, etc. The code for the basic vector operations is the same for the fixed 3-vector and the general n-vector. I didn't benchmark it against other libraries, but since for fixed-size vectors the dimension is a compile-time constant, optimal optimization seems to be a quality-of-implementation issue.

Regards
-Gerhard
--
Gerhard Wesp
ZRH office voice: +41 (0)44 668 1878
ZRH office fax:   +41 (0)44 200 1818
For the rest I claim that raw pointers must be abolished.

On 6/19/06 12:03 PM, "Geoffrey Irving" <irving@cs.stanford.edu> wrote:
On Mon, Jun 19, 2006 at 09:43:45AM -0600, Dave Steffen wrote: [SNIP]
It seems to me that building operations for vectors-with-units and matrices-with-units is a duplication of effort. If the PQS library is as easy to use as [ it should be | we think it is ] then incorporating units into UBLAS should be both easy and a good exercise.
Small vectors and transforms require somewhat different interfaces than large vectors. For example, uBLAS does not have cross products, normalization, quaternions, exponential maps from vectors to quaternions, etc. [TRUNCATE]
But we have a quaternion library too.... -- Daryle Walker Mac, Internet, and Video Game Junkie darylew AT hotmail DOT com

Daryle Walker said: (by the date of Wed, 21 Jun 2006 15:49:22 -0400)
On 6/19/06 12:03 PM, "Geoffrey Irving" <irving@cs.stanford.edu> wrote:
On Mon, Jun 19, 2006 at 09:43:45AM -0600, Dave Steffen wrote: [SNIP]
It seems to me that building operations for vectors-with-units and matrices-with-units is a duplication of effort. If the PQS library is as easy to use as [ it should be | we think it is ] then incorporating units into UBLAS should be both easy and a good exercise.
Small vectors and transforms require somewhat different interfaces than large vectors. For example, uBLAS does not have cross products, normalization, quaternions, exponential maps from vectors to quaternions, etc. [TRUNCATE]
But we have a quaternion library too....
Yes, there is the Boost.Quaternion library, but it is not suitable for working with geometry - only math stuff. Several functions are missing, for example:

from_rotation_matrix (builds a quaternion)
from_axis_angle (builds a quaternion)
to_euler_angle
operator* (quaternion,vector) (rotates a vector)

and a few others. I have investigated this library a bit, and the author has done some work in this direction, but finally gave up, because the only linear algebra library is Boost.uBLAS, which has a rather unsuitable vector type (mainly because it is designed as a library for big vectors/matrices, not small ones). There were problems on the interface with Boost.Quaternion, and speed issues. So currently Boost.Quaternion is unusable for our purposes here. I hope that this will change once Dave's linear algebra library is introduced... -- Janek Kozicki |
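For reference, from_axis_angle is usually implemented along these lines (a sketch, not Boost.Quaternion's API; quat and vector are the hypothetical types used earlier in this thread, and the axis is assumed to be unit length):

#include <cmath>

template <typename T>
quat<T> from_axis_angle(const vector<3,T>& axis, T angle)
{
    const T half = angle / T(2);
    const T s = std::sin(half);

    quat<T> q;
    q.w = std::cos(half);
    q.x = axis[0] * s;
    q.y = axis[1] * s;
    q.z = axis[2] * s;
    return q;
}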

--- Janek Kozicki <janek_listy@wp.pl> wrote:
One last note: the above design does not allow vectors and matrices to be resized at runtime. This limits the library's usage. Should there be a slower complementary library added that allows data to be resized at runtime? Any ideas?
I agree this is useful, and also that it has a run-time penalty. But I have an idea to reduce that penalty, as well as reducing the work for the library implementer to add this feature, without having to write a whole parallel library. I had hoped to add this to my own matrix library, but I haven't yet. (Actually, I need this capability, but I'm using a less efficient workaround for now.)

A standard way to implement dynamically-sized data structures is to use dynamic (heap) memory allocation, which can be costly. Data sized at compile time, on the other hand, can be allocated efficiently on the stack. A compromise that works in many situations is to set a maximum size for each object at compile-time, allocate memory for the maximum size, and then only use part of the structure depending on its current size set at run-time. This allows the data and its operations to have an adjustable size (up to a predetermined maximum) while avoiding heap allocation - the best of both worlds. As an example of how this can be implemented, suppose we augment our vector template with an extra template argument of type bool, which tells if the vector is resizable - e.g.:

vector<3,double,false> // 3D vector of double
vector<5,double,true>  // vector *up to* 5D

The resizable vector needs an extra member to store the actual current size, and that size should be used as the upper limit in loops, instead of the size in the template argument (see example below). Both kinds of vectors can be implemented with one definition, by inheriting from separately specialized base classes. Here's an example:

template< int N, bool resizable >
class vector_base {};

template< int N >
class vector_base< N, false >
{
protected:
    static const int _n = N;  // size is a compile-time constant
};

template< int N >
class vector_base< N, true >
{
public:
    // check 1<=new_size<=N and set _n = new_size
    void resize( int new_size );
protected:
    int _n;  // actual size set at run-time
};

Then a single vector implementation works for both:

template< int N, typename T, bool resizable >
class vector : public vector_base< N, resizable >
{
public:
    T magnitude();
    ...
private:
    T elements[N];
};

template< int N, typename T, bool resizable >
T vector< N, T, resizable >::magnitude()
{
    typedef ... T_squared;
    T_squared sum = elements[0] * elements[0];
    for (int i=1; i<this->_n; ++i)  // use _n, not N; this-> is needed
    {                               // because _n lives in a dependent base
        sum += elements[i] * elements[i];
    }
    return sqrt( sum );
}

Note that for resizable vectors, _n is variable, but for fixed-size vectors, _n is a compile-time constant, allowing the compiler to do optimizations like loop unrolling, as you'd expect for a fixed-size vector. Operations such as dot product require validating that both vectors have the same size. Again, if both are fixed-size vectors, this comparison is a compile-time constant and can be optimized out.

FWIW, I'm not happy with the syntax above of adding an extra bool parameter. For my code, I planned to use a negative value of N to indicate resizable:

vector<3,double>      // fixed 3D vector
vector<-5,double>     // up to 5D vector
matrix<4,-14,double>  // 4x1 to 4x14 matrix
matrix<-14,4,double>  // 1x4 to 14x4 matrix
matrix<-6,-6,double>  // both values resizable

but this is admittedly nonintuitive (though it does make both the library and user code more concise).
BTW, this is where I'd really like to see templatized typedefs in C++, allowing us to define:

template< int N, typename T > typedef vector< N, T, false > fixed_vector;
template< int N, typename T > typedef vector< N, T, true > resizable_vector;

Thus we'd have:

fixed_vector<N,T>
resizable_vector<N,T>

while using the common vector template defined above. In _The Design and Evolution of C++_, Bjarne Stroustrup says adding templatized typedefs would be a "technically trivial" extension, but questions the wisdom of the idea. Perhaps there's been some discussion about this topic in Boost? -- Leland

Leland Brown said: (by the date of Fri, 16 Jun 2006 14:14:31 -0700 (PDT))
--- Janek Kozicki <janek_listy@wp.pl> wrote:
One last note: above design does not allow to resize vectors and matrices in runtime. This limits the library usage. Should there be added a slower complementary library that will allow to resize data in runtime? Any ideas?
I agree this is useful, and also that it has a run-time penalty. But I have an idea to reduce that penalty, as well as reducing the work for the library implementer to add this feature, without having to write a whole parallel library. I had hoped to add this to my own matrix library, but I haven't yet. (Actually, I need this capability, but I'm using a less efficient workaround for now.)
heh, perhaps you can help write it here in boost, and then use it in your program ;) <snip>
A compromise that works in many situations is to set a maximum size for each object at compile-time, allocate memory for the maximum size, and then only use part of the structure depending on its current size set at run-time. This allows the data and its operations to have an adjustable size (up to a predetermined maximum) while avoiding heap allocation - the best of both worlds. As an example of how this can be implemented, suppose we augment our vector template with an extra template argument of type bool, which tells if the vector is resizable - e.g.:
vector<3,double,false> // 3D vector of double
where the last argument defaults to false, so it doesn't always need to be specified.
vector<5,double,true> // vector *up to* 5D
The resizable vector needs an extra member to store the actual current size, and that size should be used as the upper limit in loops, instead of the size in the template argument (see example below).
Both kinds of vectors can be implemented with one definition, by inheriting from separately specialized base classes. Here's an example:
<snip, interesting stuff>
FWIW, I'm not happy with the syntax above of adding an extra bool parameter. For my code, I planned to use a negative value of N to indicate resizable:
vector<3,double>      // fixed 3D vector
vector<-5,double>     // up to 5D vector
matrix<4,-14,double>  // 4x1 to 4x14 matrix
matrix<-14,4,double>  // 1x4 to 14x4 matrix
matrix<-6,-6,double>  // both values resizable
but this is admittedly nonintuitive (though it does make both the library and user code more concise).
that is a very interesting idea, especially because matrices would require two bools. If stated clearly in the documentation, perhaps we could use it? What do others think?
BTW, this is where I'd really like to see templatized typedefs in C++, allowing us to define:
template< int N, typename T > typedef vector< N, T, false > fixed_vector;
template< int N, typename T > typedef vector< N, T, true > resizable_vector;
Thus we'd have:
fixed_vector<N,T> resizable_vector<N,T>
while using the common vector template defined above.
uh, wait. It's not possible currently? I'm not 100% sure but I think that I was using similar code some time in the past...
In _The Design and Evolution of C++_, Bjarne Stroustrup says adding templatized typedefs would be a "technically trivial" extension, but questions the wisdom of the idea. Perhaps there's been some discussion about this topic in Boost?
At the end I want again to recall my "table of"... err, "concepts"? Because resizable_vector fits really nicely here, so that we can see more similarities in the design.

everything determined | something determined  | everything determined
during compilation    | during compile time,  | during runtime
stage                 | sth during runtime    |
----------------------+-----------------------+----------------------
t1_quantity           | t2_quantity           | t3_quantity
fixed_quantity        | scalable_quantity     | free_quantity
                      |                       |
vector<3>             | resizable_vector      | vector.resize(3)
matrix<4,4>           | resizable_matrix      | matrix.resize(4,4)
fixed_vector ?        |                       | free_vector ?

-- Janek Kozicki |

Janek Kozicki wrote:
BTW, this is where I'd really like to see templatized typedefs in C++, allowing us to define:
template< int N, typename T > typedef vector< N, T, false > fixed_vector;
template< int N, typename T > typedef vector< N, T, true > resizable_vector;
Thus we'd have:
fixed_vector<N,T> resizable_vector<N,T>
while using the common vector template defined above.
uh, wait. It's not possible currently? I'm not 100% sure but I think that I was using similar code some time in the past...
I think not. You probably mean the feature described in http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1489.pdf , but it's not in the C++0x draft yet. There is a simple way to emulate this, unfortunately with different semantics that would make metaprograms much more difficult to write:

template< int N, typename T >
struct resizable_vector : vector< N, T, true > {};

The difficulty comes from the fact that:

some_useful_template<vector<10, int, true> >

and:

some_useful_template<resizable_vector<10, int> >

are two distinct types. B.
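A short illustration of the problem Bronek describes, using the vector/resizable_vector definitions above (names are from this thread's examples, not a real library): a partial specialization written for the underlying vector template does not match the derived emulation class.

template< typename V > struct element_count;   // primary left undefined

template< int N, typename T, bool R >
struct element_count< vector<N, T, R> >        // matches vector only
{
    static const int value = N;
};

int a = element_count< vector<10, int, true> >::value;        // OK: 10
// int b = element_count< resizable_vector<10, int> >::value; // error:
//   resizable_vector<10,int> is a distinct type derived from
//   vector<10,int,true>, so the specialization does not match it.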

--- Bronek Kozicki <brok@rubikon.pl> wrote:
[Leland Brown wrote:]
BTW, this is where I'd really like to see templatized typedefs in C++, allowing us to define:
template< int N, typename T > typedef vector< N, T, false > fixed_vector;
template< int N, typename T > typedef vector< N, T, true > resizable_vector;
Thus we'd have:
fixed_vector<N,T> resizable_vector<N,T>
while using the common vector template defined above.
There is a simple way to emulate this, unfortunately with different semantics that would make metaprograms much more difficult to write: template< int N, typename T > struct resizable_vector : vector< N, T, true > {};
The difficulty comes from the fact that: some_useful_template<vector<10, int, true> > and: some_useful_template<resizable_vector<10, int> >
are two distinct types
True. And in addition, you don't inherit the constructors of vector. -- Leland

This is a repost. I have fixed the title. Apologies for the inconvenience. I'm quoting Leland Brown completely, so that this very insightful comment now has sane text formatting. I fully agree with all observations. I hope we can put it together somewhere and make a specification for this upcoming vector/matrix-with-units library? (part of pqs in fact) My few comments are below. Leland Brown said: (by the date of Wed, 14 Jun 2006 02:21:31 -0700 (PDT))
There's been a lot of discussion on this issue of vectors and how/ whether to deal with vectors or tuples of mixed units. I'd like to chime in with some of my thoughts: 1 - why I think mixed vectors are both sensible and important, and 2 - why I think this could be easy to implement, at least partially.
1. First, what's the real difference between a vector and a tuple? There are probably differences in the way they're visualized by various people, but I think we should be primarily concerned with the differences in semantics - what operations are meaningful and/or useful on each, which distinguish one kind from the other. The way I think about them, I see at least three differences:
A. Vectors have operations like magnitude (length), dot products, cross products, angle between two vectors, etc. For tuples, in general, none of these functions have a meaningful definition. (This is probably why they are hard to "visualize" as vectors.)
B. Vectors can be transformed to other vectors by matrix multiplication. Thus, it's useful to have them be compatible with or embedded in some sort of matrix library. Tuples typically are not suited for such use.
C. Vectors, like matrices, can be indexed numerically (1st row, 2nd column, etc.), so it's easy to loop over the elements. This is part of what makes them suited for matrix calculations. Tuples are frequently referenced by name instead of number - e.g., the members of a struct.
So what about "vectors" of mixed dimensions/units? As far as property A, they would not act like vectors. But for property B, there *are* many engineering applications that need mixed aggregates to have these operations. The bulk of my own work falls into this category, and this is, in fact, the situation for which I developed my dimensional analysis library in the first place. Janek Kozicki also commented on the need for matrix multiplication with vectors in phase space.
In my work, I tend to visualize these mentally as tuples, not as vectors in some N-dimensional space. But it turns out the mathematics I need to perform requires them to be involved in lots of matrix calculations - which also means the matrices themselves have mixed units.
In summary, I think it's important to allow vectors whose elements have different physical dimensions - even though certain operations like vector length will fail unless all the dimensions are the same.
2. The good news is that I think this is almost trivial to implement using the "t3_quantity" or "free_quantity" or whatever we decide to call it. With the other two "quantities" I found it extremely difficult to implement in a general way, so I suggest we don't bother. If the user needs mixed vectors, he can use "free_quantity." (Or he can write his own matrix operations for his special case, or step outside the strong typing and dimension checking for the matrix operations.)
We can do this if we define vectors as vector<N,T> like this:
vector<2,double>              // 2D dimensionless vector
vector<3,pqs::length::km>     // 3D position vector in km
vector<6,pqs::free_quantity>  // 6-element vector of mixed units (e.g., phase space)
And I agree that this is better than:
vector<m,m,m,kg_m_div_s,kg_m_div_s,kg_m_div_s>
Likewise with matrices, perhaps use matrix<M,N,T> like this:
matrix<3,3,double>
matrix<2,3,pqs::time::s>
matrix<7,6,pqs::free_quantity>
etc. FWIW, in my library I made the type parameter default to double, which allows simply vector<3> if you want a unitless vector.
Template specialization is probably required to get all the lower-dimensional versions fast, but that's happily transparent to the user. Specialization also allows the low-dimension versions to have nice member variables like x,y,z.
I agree. Especially since cross products only exist for 3D, template specialization is probably needed for that case at least.
Also, specializations can distinguish whether T is free_quantity or not, and depending on that they could provide category A operations (dot product, cross product, magnitude, etc.) - because such operations can work for a vector whose components are all of similar type.

The second point is about matrix operations. The need to "bundle" matrix operations within this library limits the possible use of ublas/lapack libraries. Possible solutions:

1. include a call, transparent to the user, to external methods that solve matrix problems, like the most popular Ax=b (the bare-bones approach is to invert matrix A, but there are more subtle and efficient methods). The user will work within this library, and this library will use an external backend while taking care of units.

2. taking care of units while calling external methods may be too complicated. It would be simpler to implement the operations on our own. Tempting, but can we provide "all" the functionality?

3. don't do it at all. Just make matrix classes (like in the examples above) that cannot do any operations except multiplication with a vector. The user will decide how he wants to handle that - write his own code to solve Ax=b while taking care of units, or call lapack and temporarily turn units "off".

For the beginning we can take approach 3, because it minimizes the amount of work, so we have a small library to start with. But later maybe we can try to improve towards 1 or 2.

Another question - would quaternions be something like vector<4,double>? For me it looks like a good idea. Template specialization can offer category A operations that are specific to quaternions when someone works with a unitless vector<4> - just as template specialization will provide the cross product, which is specific to vector<3> only. Besides, quaternions are also known to be used together with matrix<4,4,double>.

One last note: the above design does not allow resizing vectors and matrices at runtime. This limits the library usage. Should there be a slower complementary library added that allows resizing data at runtime? Any ideas? Personally I see a limited need for that, the only exception being work with FEM. But FEM would also certainly require a working method that solves Ax=b. So maybe it is better to focus first on vectors/matrices that are not resizable at runtime.

Heh, I just recognized a similarity of this problem with pqs, look:

everything determined |                      | everything determined
during compilation    |                      | during runtime
stage                 |                      |
----------------------+----------------------+----------------------
t1_quantity           | t2_quantity          | t3_quantity
fixed_quantity        | scaled_quantity      | free_quantity
                      |                      |
vector<3>             |                      | vector.resize(3)
matrix<4,4>           | ?                    | matrix.resize(4,4)
fixed_vector ?        |                      | free_vector ?

-- Janek Kozicki |
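A sketch of what a "category A" operation might look like if its result type is deduced rather than assumed equal to the element type, so that it still works when T is a strongly typed quantity. This assumes the vector<N,T> template from this message with an added operator[] accessor and default-constructible element types; C++11 decltype is used for brevity where the thread assumed Boost.Typeof would do the job:

template< int N, typename T1, typename T2 >
auto dot( const vector<N,T1>& a, const vector<N,T2>& b )
    -> decltype( T1() * T2() )
{
    decltype( T1() * T2() ) sum = a[0] * b[0];
    for (int i = 1; i < N; ++i)
        sum = sum + a[i] * b[i];   // '+' assumed closed over the product type
    return sum;
}

// e.g. the dot of a length-typed and a force-typed vector<3> would yield an
// energy-typed scalar, with no "T result = T() * T()" requirement anywhere.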

"Janek Kozicki" wrote
Geoffrey Irving said: (by the date of Tue, 13 Jun 2006 09:30:39 -0700)
I would suggest that instead of trying to make an extremely general vector class, it'd be better to make an extremely specific vector class, together with an extremely general way to make other vector classes.
Specifically, you can make vector3 (or vector<3>, perhaps) a straightforward single unit Euclidean vector. It can have L2 norms and L^inf norms and cross products and all the operations that are undefined for vectors with components of different units. Then we could define a vector space variant of boost::operators to convert any tuple-like type into a vector space type.
PS: I like vector<3> , I think that Andy can't argue with this name :>
I like it, and I agree that it would be vector<3,T>. It conflicts with (later in the discussion) suggestions of tuple-like behaviour; however, I like vector<3,T> primarily because mathematically challenged souls such as myself find it easier to understand. IMO simplicity is an important and sometimes underrated design feature. The 3 there gives a good indication of what to expect. IOW the vector<3,T> is "a straightforward single unit Euclidean vector" as described by Geoffrey Irving. BTW (O.T.) Geoffrey... I find this awesome. I could watch it all day!: http://graphics.stanford.edu/~irving/movies/rle/boat_turning.avi regards Andy Little

On Wed, Jun 14, 2006 at 07:58:28PM +0100, Andy Little wrote:
"Janek Kozicki" wrote
Geoffrey Irving said: (by the date of Tue, 13 Jun 2006 09:30:39 -0700)
I would suggest that instead of trying to make an extremely general vector class, it'd be better to make an extremely specific vector class, together with an extremely general way to make other vector classes.
Specifically, you can make vector3 (or vector<3>, perhaps) a straightforward single unit Euclidean vector. It can have L2 norms and L^inf norms and cross products and all the operations that are undefined for vectors with components of different units. Then we could define a vector space variant of boost::operators to convert any tuple-like type into a vector space type.
PS: I like vector<3> , I think that Andy can't argue with this name :>
I like it, and I agree that it would be vector<3,T>. It conflicts with (later in the discussion) suggestions of tuple-like behaviour; however, I like vector<3,T> primarily because mathematically challenged souls such as myself find it easier to understand. IMO simplicity is an important and sometimes underrated design feature. The 3 there gives a good indication of what to expect. IOW the vector<3,T> is "a straightforward single unit Euclidean vector" as described by Geoffrey Irving.
BTW (O.T.) Geoffrey... I find this awesome. I could watch it all day!: http://graphics.stanford.edu/~irving/movies/rle/boat_turning.avi
Thanks! And not quite off-topic: making that would have been rather nastier without being able to templatize over dimension. Debugging in 2d is a lifesaver. Anything that makes that easier is good (vector<3,T> vs. vector3<T>). Geoffrey

Geoffrey Irving said: (by the date of Wed, 14 Jun 2006 12:49:56 -0700)
On Wed, Jun 14, 2006 at 07:58:28PM +0100, Andy Little wrote:
"Janek Kozicki" wrote
Geoffrey Irving said: (by the date of Tue, 13 Jun 2006 09:30:39 -0700)
I would suggest that instead of trying to make an extremely general vector class, it'd be better to make an extremely specific vector class, together with an extremely general way to make other vector classes.
Specifically, you can make vector3 (or vector<3>, perhaps) a straightforward single unit Euclidean vector. It can have L2 norms and L^inf norms and cross products and all the operations that are undefined for vectors with components of different units. Then we could define a vector space variant of boost::operators to convert any tuple-like type into a vector space type.
PS: I like vector<3> , I think that Andy can't argue with this name :>
I like it, and I agree that it would be vector<3,T>. It conflicts with (later in the discussion) suggestions of tuple-like behaviour; however, I like vector<3,T> primarily because mathematically challenged souls such as myself find it easier to understand. IMO simplicity is an important and sometimes underrated design feature. The 3 there gives a good indication of what to expect. IOW the vector<3,T> is "a straightforward single unit Euclidean vector" as described by Geoffrey Irving.
And not quite off-topic: making that would have been rather nastier without being able to templatize over dimension. Debugging in 2d is a lifesaver. Anything that makes that easier is good (vector<3,T> vs. vector3<T>).
good point. -- Janek Kozicki |

On Wed, Jun 14, 2006 at 07:58:28PM +0100, Andy Little wrote:
"Janek Kozicki" wrote
Geoffrey Irving said: (by the date of Tue, 13 Jun 2006 09:30:39 -0700)
I would suggest that instead of trying to make an extremely general vector class, it'd be better to make an extremely specific vector class, together with an extremely general way to make other vector classes.
Specifically, you can make vector3 (or vector<3>, perhaps) a straightforward single unit Euclidean vector. It can have L2 norms and L^inf norms and cross products and all the operations that are undefined for vectors with components of different units. Then we could define a vector space variant of boost::operators to convert any tuple-like type into a vector space type.
PS: I like vector<3> , I think that Andy can't argue with this name :>
I like it, and I agree that it would be vector<3,T>. It conflicts with (later in the discussion) suggestions of tuple-like behaviour; however, I like vector<3,T> primarily because mathematically challenged souls such as myself find it easier to understand. IMO simplicity is an important and sometimes underrated design feature. The 3 there gives a good indication of what to expect. IOW the vector<3,T> is "a straightforward single unit Euclidean vector" as described by Geoffrey Irving.
I would suggest, not a vector3 or vector<3>, but supporting math with 3D transformation matrices - i.e. 4x4 matrices - with a vector4 or vector<4>, if 3-space is the desired representation. When applying transformations to vectors, you must use one dimension higher than the level you are working in. If done properly, quaternion support can also be obtained, which is another way to represent rotations of vectors. I believe we need these constructs to work together in a complete solution, not just a piece. Dave

-----Original Message----- From: Janek Kozicki [mailto:janek_listy@wp.pl] Sent: Wednesday, June 14, 2006 2:46 PM To: boost@lists.boost.org Subject: Re: [boost] [pqs] Vector<3>

Geoffrey Irving said: (by the date of Wed, 14 Jun 2006 12:49:56 -0700)

And not quite off-topic: making that would have been rather nastier without being able to templatize over dimension. Debugging in 2d is a lifesaver. Anything that makes that easier is good (vector<3,T> vs. vector3<T>).

good point. -- Janek Kozicki |

On Wed, Jun 14, 2006 at 04:56:37PM -0400, Hickerson, David A wrote:
I would suggest, not a vector3 or vector<3>, but supporting math with 3D transformation matrices - i.e. 4x4 matrices - with a vector4 or vector<4>, if 3-space is the desired representation. When applying transformations to vectors, you must use one dimension higher than the level you are working in. If done properly, quaternion support can also be obtained, which is another way to represent rotations of vectors.
I believe we need these constructs to work together in a complete solution, not just a piece.
I suppose no one said it explicitly, but I'm pretty sure everyone who's been talking about vector<3> was actually talking about a general vector<d> template. And it is not true that "when doing transformations on vectors, you must use 1 dimension higher than the level you are working in." That is only true if you want to support both affine and projection maps at the same time. If you happen to know you only need affine (or less), using full 4x4 matrices is slow, unintuitive, and mathematically imprecise. Geoffrey
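A bare-bones sketch of the affine alternative Geoffrey is pointing at: store the linear part and the translation separately (12 numbers instead of 16), so no multiplications are spent on the constant 0s and 1. The vec3/matrix3 types here are illustrative placeholders, not an existing API:

struct vec3 { double x, y, z; };

vec3 operator+( vec3 a, vec3 b ) { return vec3{ a.x+b.x, a.y+b.y, a.z+b.z }; }

struct matrix3 { double m[3][3]; };

vec3 operator*( const matrix3& A, vec3 v )
{
    return vec3{ A.m[0][0]*v.x + A.m[0][1]*v.y + A.m[0][2]*v.z,
                 A.m[1][0]*v.x + A.m[1][1]*v.y + A.m[1][2]*v.z,
                 A.m[2][0]*v.x + A.m[2][1]*v.y + A.m[2][2]*v.z };
}

struct affine3
{
    matrix3 A;   // rotation / scale / shear
    vec3    t;   // translation
    vec3 operator()( vec3 x ) const { return A * x + t; }   // y = A*x + t
};

Composition stays closed under this representation - applying f then g gives g.A*f.A as the new linear part and g.A*f.t + g.t as the new translation - so a projective 4x4 matrix is only needed if perspective actually occurs.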

Matrices are not less mathematically precise than other methods. They do require more multiplies and adds than quaternions for rotations. As for unintuitive, that would only be so for those without any training in 3D computer graphics, robotics, or a similar field. A 4x4 matrix represents rotation, scaling, shear, perspective, and translation. A 4x4 matrix can easily be read to tell you directly its x, y, z axes and the amount of translation from the current reference frame in a purely positional transformation. When doing work in graphics or robotics, the 4x4 matrix is a powerful tool, and there are good algorithms for converting rotation matrices to quaternion representations and back. What you should avoid is Euler angles, granted they are intuitive. However, if you have ever worked in multiple fields of engineering, you will find that there are multiple orders of applying the rotations. Additionally, Euler angles are subject to gimbal lock, where matrices and quaternions are not. Geoffrey wrote:
And it is not true that "when doing transformations on vectors, you must use 1 dimension higher than the level you are working in."
He would be right had I said rotations. Transformations are rotations and translations, requiring the extra dimension.

[ x1, x2, x3, 0]
[ y1, y2, y3, 0]
[ z1, z2, z3, 0]
[ t1, t2, t3, 1]

This matrix represents a rotation given by the x, y, and z vectors, and a translation by the t vector, in 3-space. In 2-space it would be:

[ x1, x2, 0]
[ y1, y2, 0]
[ t1, t2, 1]

Those representations are in graphics format; mathematics or engineering would use the transpose. Dave -----Original Message----- From: Geoffrey Irving [mailto:irving@cs.stanford.edu] Sent: Wednesday, June 14, 2006 3:20 PM To: boost@lists.boost.org Subject: Re: [boost] [pqs] Vector<3> On Wed, Jun 14, 2006 at 04:56:37PM -0400, Hickerson, David A wrote:
I would suggest, not a vector3 or vector<3>, but supporting math with 3D transformation matrices - i.e. 4x4 matrices - with a vector4 or vector<4>, if 3-space is the desired representation. When applying transformations to vectors, you must use one dimension higher than the level you are working in. If done properly, quaternion support can also
be obtained, which is another way to represent rotations of vectors.
I believe we need these constructs to work together in a complete solution, not just a piece.
I suppose no one said it explicitly, but I'm pretty sure everyone who's been talking about vector<3> was actually talking about a general vector<d> template. And it is not true that "when doing transformations on vectors, you must use 1 dimension higher than the level you are working in." That is only true if you want to support both affine and projection maps at the same time. If you happen to know you only need affine (or less), using full 4x4 matrices is slow, unintuitive, and mathematically imprecise. Geoffrey

On Thu, Jun 15, 2006 at 04:38:48PM -0400, Hickerson, David A wrote:
<snip>
Geoffrey wrote:
And it is not true that "when doing transformations on vectors, you must use 1 dimension higher than the level you are working in."
He would be right had I said rotations. Transformations are rotations and translations, requiring the extra dimension.
[ x1, x2, x3, 0]
[ y1, y2, y3, 0]
[ z1, z2, z3, 0]
[ t1, t2, t3, 1]
The fact that you have three 0's and a 1 there shows that that representation is nonoptimal. I don't recall any standard place where those zeros and ones get filled in except the last (and increasingly fastest) stage of scan line renderers, so it would be a terrible shame if 4x4 matrices were the only kind of transform supported. Geoffrey

On Thu, Jun 15, 2006 at 04:38:48PM -0400, Hickerson, David A wrote:
<snip>
Geoffrey wrote:
And it is not true that "when doing transformations on vectors, you must use 1 dimension higher than the level you are working in."
He would be right had I said rotations. Transformations are rotations and translations, requiring the extra dimension.
[ x1, x2, x3, 0]
[ y1, y2, y3, 0]
[ z1, z2, z3, 0]
[ t1, t2, t3, 1]
The fact that you have three 0's and a 1 there shows that that representation is nonoptimal. I don't recall any standard place where those zeros and ones get filled in except the last (and increasingly fastest) stage of scan line renderers, so it would be a terrible shame if 4x4 matrices were the only kind of transform supported.
Geoffrey
The last column is used when you need the inverse or transpose of a transformation matrix. Mike

On Fri, Jun 16, 2006 at 11:55:18AM -0500, Michael Marcin wrote:
On Thu, Jun 15, 2006 at 04:38:48PM -0400, Hickerson, David A wrote:
<snip>
Geoffrey wrote:
And it is not true that "when doing transformations on vectors, you must use 1 dimension higher than the level you are working in."
He would be right had I said rotations. Transformations are rotations and translations, requiring the extra dimension.
[ x1, x2, x3, 0]
[ y1, y2, y3, 0]
[ z1, z2, z3, 0]
[ t1, t2, t3, 1]
The fact that you have three 0's and a 1 there shows that that representation is nonoptimal. I don't recall any standard place where those zeros and ones get filled in except the last (and increasingly fastest) stage of scan line renderers, so it would be a terrible shame if 4x4 matrices were the only kind of transform supported.
Geoffrey
The last column is used when you need the inverse or transpose of a transformation matrix.
The inverse of an affine transformation is an affine transformation, and has the same pattern of zeros and 1's. The transpose of an affine transformation is not a mathematically well-defined operator as far as I know. Transpose is typically only defined for linear operators. Do you have an actual use for the transpose of an affine transform? I.e., the whole transform, not just the linear part. Geoffrey
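For concreteness, the inverse Geoffrey mentions, reusing the affine3 layout sketched earlier in the thread (inverse3 is an assumed helper for the 3x3 linear part, not an existing function): if y = A*x + t then x = inv(A)*y - inv(A)*t, which has exactly the same implicit zeros-and-one pattern.

affine3 inverse( const affine3& f )
{
    affine3 r;
    r.A = inverse3( f.A );           // invert the linear part only
    vec3 s = r.A * f.t;
    r.t = vec3{ -s.x, -s.y, -s.z };  // translation becomes -inv(A)*t
    return r;
}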

Geoffrey Irving said: (by the date of Wed, 14 Jun 2006 14:19:39 -0700)
I suppose no one said it explicitly, but I'm pretty sure everyone who's been talking about vector<3> was actually talking about a general vector<d> template.
true. -- Janek Kozicki |

"Janek Kozicki"
Geoffrey Irving said: (by the date of Wed, 14 Jun 2006 14:19:39 -0700)
I suppose no one said it explicitly, but I'm pretty sure everyone who's been talking about vector<3> was actually talking about a general vector<d> template.
true.
I have been following the geometry debate with interest. It probably doesn't need to be stated that geometry on numeric types is a sophisticated field. Geometry on strongly typed quantities in C++ is, I think, a relatively unexplored field. I thought I should say where I am planning to go with the PQS library, especially to Janek, as he has experimented with using the library in his own work.

First, a lot depends on the outcome of the review. If accepted into boost, I can put the library into Boost's CVS. If not, I will have to look around and put it elsewhere, possibly on sourceforge. I'm basing that on the high level of interest in the subject of quantities brought up by the PQS review. Either way I am planning a major upheaval of the code (including possibly a change of the library name etc), which will break the current interface. It's also clear that I need to redo the documentation, which is a time-consuming process.

Regarding the geometry end, it seems to me that the geometry doesn't need to be tied to the particular pqs type. Using Boost.Typeof it should be possible to implement geometric entities that will work with the types in pqs, or others, if written in terms of Concepts. I can however see some issues with using Typeof (mainly related to gcc), which I will bring up in another thread.

The whole area of geometry on quantities is extremely interesting (in a similar way to the question "Why climb Mount Everest?", to which the reply was given, "Because it's there"); however I think it is also a relatively unknown field (though Leland Brown seems to have done a fair amount of work on the subject). With that in mind, and with my skill level, I reckon that my best bet is to try to implement vectors, quaternions and matrices in the absolute simplest way possible within PQS (this is actually the same philosophy I have used in PQS till now). I don't see PQS as being able to cover the field with authority or high performance etc to start with; I currently view the work more as an experiment to start to answer the question: what happens if we try to implement geometry on strongly typed quantities? From the comments re pqs it is obvious that there is a real potential benefit, namely helping with debugging, code review etc; however it is only in trying to implement such a library fully that the problems will reveal themselves. I don't know what the answer to the above question is yet, of course, and I think it is quite important to those interested in the subject to point that out! At the moment I am not making the assumption that using quantities for geometry is beneficial over using floats (although what Leland has said is very encouraging). I guess the only way to find out is to try to implement the functionality...

Of course, whether a half-finished experiment in strongly typed quantities is a suitable candidate for a Boost library - luckily I can leave that to my Review Manager to answer! regards Andy Little

Andy Little said: (by the date of Sat, 17 Jun 2006 14:25:50 +0100)
I thought I should say where I am planning to go with PQS library, especially to Janek , as he has experimented with using the library in his own work.
First a lot depends on the outcome of the review. If accepted into boost, I can put the library into Boosts CVS. If not I will have to look around and put it elsewhere, possibly on sourceforge. I'm basing that on the high level of interest in the subject of quantities brought up by the PQS review.
very good idea; possibly other people will be interested in being able to commit their own changes to the library. If it is decided to decouple the library into several parts, I assume that a bigger number of people will want to contribute (as it will be easier not to mess with the separated work of other people). For example, Oleg Abrosimov and Noel Belcourt could possibly contribute to the Dimensions part; Leland Brown and John Phillips to the Units part; and Geoffrey Irving, Leland Brown and me (Janek Kozicki) to the Linear Algebra part. Perhaps I have missed others - I'm sorry! This discussion is getting HUGE!
Either way I am planning a major upheaval of the code (including possibly a change of the library name etc), which will break the current interface. Its also clear that I need to redo the documentation, which is a time consuming process.
I hope that the wiki will grow into a very useful specification, that can be then turned into a very useful documentation. The task ahead of us is not easy, nor small, but I really hope that with so much interest on the boost mailing list we can do this. Although of course it will take time, because we all have other work to do.
Regarding the geometry end, it seems to me that the geometry doesnt need to be tied to the particular pqs type. Using Boost.Typeof it should be possible to implement geometric entities that will work with the types in pqs or others if written in terms of Concepts.
I want to reiterate that "Geometry" is too general a name. A better name is linear algebra - as it is exactly vectors and matrices. I'm not sure if quaternions fit this name, though. So maybe there is a better name. But certainly it is not "Geometry". If we start talking about geometry the problem will grow too big, and we will never finish it. Geometrical entities can be added later. -- Janek Kozicki |

"Janek Kozicki" <janek_listy@wp.pl> wrote in message news:20060618012319.174d7a9e@absurd...
Andy Little said: (by the date of Sat, 17 Jun 2006 14:25:50 +0100)
I thought I should say where I am planning to go with PQS library, especially to Janek , as he has experimented with using the library in his own work.
First a lot depends on the outcome of the review. If accepted into boost, I can put the library into Boosts CVS. If not I will have to look around and put it elsewhere, possibly on sourceforge. I'm basing that on the high level of interest in the subject of quantities brought up by the PQS review.
very good idea; possibly other people will be interested in being able to commit their own changes to the library. If it is decided to decouple the library into several parts, I assume that a bigger number of people will want to contribute (as it will be easier not to mess with the separated work of other people).
The current situation with no CVS home for PQS is inadequate; however, based on the feedback in the review I am working on various changes. As soon as possible I will try to put the library on a public database, but still with the intent of making the library part of boost if it is not accepted this time. The one difficulty is with using the boost namespace outside boost, which is bad form IIRC, so it may be MACRO namespace time soon if PQS is not accepted - else it will all have to be changed back for another review!
Like Oleg Abrosimov and Noel Belcourt could possibly contribute to Dimensions part, Leland Brown, John Phillips to Units part, and Geoffrey Irving, Leland Brown and me (Janek Kozicki) to Linear Algebra part. Perhaps I have missed others I'm sorry! This discussion is getting HUGE!
Yes, and the 3D vector space stuff is a bit over my head, but I guess you guys know what you are talking about! Seriously, I need to sort out the Concept documentation, so hopefully PQS can be compatible with 3rd party libraries or vice versa, or at least it can be used as one model for testing etc.
Either way I am planning a major upheaval of the code (including possibly a change of the library name etc), which will break the current interface. Its also clear that I need to redo the documentation, which is a time consuming process.
I hope that the wiki will grow into a very useful specification, that can be then turned into a very useful documentation. The task ahead of us is not easy, nor small, but I really hope that with so much interest on the boost mailing list we can do this. Although of course it will take time, because we all have other work to do.
The only problem with a Wiki is that it requires a lot of attention AFAICS, though I haven't run one. An alternative might be to put "quantities and quantity spaces" related papers in the Boost Vault. That would be less work anyway! OTOH if you want to set up a Wiki I'll not stop you!
Regarding the geometry end, it seems to me that the geometry doesnt need to be tied to the particular pqs type. Using Boost.Typeof it should be possible to implement geometric entities that will work with the types in pqs or others if written in terms of Concepts.
I want to reiterate that "Geometry" is a too general name. A better name is linear algebra - as it is exactly vectors and matrices. I'm not sure if quaternions fit this name, though. So maybe there is a better name.
"types and algorithms for quantity related spaces" ? regards Andy little

Andy Little said: (by the date of Mon, 19 Jun 2006 21:42:34 +0100)
The current situation with no CVS home for PQS is inadequate, however based on the feedback in the review I am working on various changes. As soon as possible I will try to put the library on a public database,
I'm looking forward to it.
Yes and the 3D vector space stuff is a bit over my head, but I guess you guys know what you are talking about! Seriously I need to sort out the Concept documentation, so hopefully PQS can be compatible with 3rd party libraries or vice versa. or at least it can be used as one model for testing etc.
After all the new comments on this mailing list about linear algebra, I think that maybe it's better that you completely remove all your classes related to linear algebra from PQS. David Abrahams has written that with his linear algebra library it will be possible to use units inside his vectors without any problems. So that is better for you - you can focus only on units, and just make sure that they are "perfect" - so then you can be sure that they will work with other libraries (including Dave's linear algebra). Previously I was unaware of Dave's work on a linear algebra library, so I was encouraging work on it - because Boost.Ublas is simply not suitable for that (too much focus on speed resulted in loss of genericity). (anybody correct me if I'm wrong ;)
The only problem with a Wiki is that it requires a lot of attention AFAICS, though I havent run one. An alternative might be to put "quantities and quantity spaces" related papers in the Boost Vault. That would be less work anyway! OTOH if you want to set up a Wiki I'll not stop you!
heh, If you don't want to work using wiki, then perhaps it won't work even if I start a wiki. Because you are the chief here, I'm just trying to help as I can.
I want to reiterate that "Geometry" is a too general name. A better name is linear algebra - as it is exactly vectors and matrices. I'm not sure if quaternions fit this name, though. So maybe there is a better name.
"types and algorithms for quantity related spaces" ?
now, with Dave's library, this argument is obsolete ;) -- Janek Kozicki |

"Janek Kozicki" <janek_listy@wp.pl> wrote in message news:20060622005501.43396a03@absurd...
Andy Little said: (by the date of Mon, 19 Jun 2006 21:42:34 +0100)
The current situation with no CVS home for PQS is inadequate, however based on the feedback in the review I am working on various changes. As soon as possible I will try to put the library on a public database,
I'm looking forward to it.
I have got the project accepted at sourceforge. Its called Quan. Currently I am changing PQS headers into Quan. I then need to do the same for the documentation. Hopefully http://sourceforge.net/projects/quan will be functional in a week or so.
Yes and the 3D vector space stuff is a bit over my head, but I guess you guys know what you are talking about! Seriously I need to sort out the Concept documentation, so hopefully PQS can be compatible with 3rd party libraries or vice versa. or at least it can be used as one model for testing etc.
After all the new comments on this mailing list about linear algebra, I think that maybe it's better that you completely remove all your classes related to linear algebra from PQS. David Abrahams has written that with his linear algebra library it will be possible to use units inside his vectors without any problems. So that is better for you - you can focus only on units, and just make sure that they are "perfect" - so then you can be sure that they will work with other libraries (including Dave's linear algebra).
Previously I was unaware of Dave's work on a linear algebra library, so I was encouraging work on it - because Boost.Ublas is simply not suitable for that (too much focus on speed resulted in loss of genericity).
(anybody correct me if I'm wrong ;)
FWIW I think this is the url: http://www.osl.iu.edu/research/mtl/ I suspect that MTL provides the Concepts, but I have to write the models, if that makes sense. However I am fairly sure that matrices of multiple element types aren't supported. OTOH it might be possible to use the so-called free-quantity solely for checking purposes. I will need to write it to find out.
The only problem with a Wiki is that it requires a lot of attention AFAICS, though I havent run one. An alternative might be to put "quantities and quantity spaces" related papers in the Boost Vault. That would be less work anyway! OTOH if you want to set up a Wiki I'll not stop you!
heh, If you don't want to work using wiki, then perhaps it won't work even if I start a wiki. Because you are the chief here, I'm just trying to help as I can.
OK. Well, as I haven't set up a site like Quan before, I hope I can ask your advice on the admin side, but I guess that's O.T. for boost. I'll keep you informed of significant developments. regards Andy Little

Andy Little said: (by the date of Sat, 17 Jun 2006 14:25:50 +0100)
I have been following the geometry debate with interest. It probably doesn't need to be stated that geometry on numeric types is a sophisticated field. Geometry on strongly typed quantities in C++ is, I think, a relatively unexplored field.
we had a discussion about this a few months ago, a lengthy thread named "Interest in geometry library". But then it drifted towards some other, more "geometrical" things like circles, ellipses, or defining a namespace named 'euclidean' inside which would be stored all functions related to euclidean space, and then other namespaces for other spaces. And it all got lost, because it simply grew too big. Therefore currently I would prefer to avoid the name 'geometry', because we are in fact talking about just linear algebra (that's what vectors and matrices are). -- Janek Kozicki |

On Sun, Jun 18, 2006 at 01:29:20AM +0200, Janek Kozicki wrote:
Andy Little said: (by the date of Sat, 17 Jun 2006 14:25:50 +0100)
I have been following the geometry debate with interest. It probably doesn't need to be stated that geometry on numeric types is a sophisticated field. Geometry on strongly typed quantities in C++ is, I think, a relatively unexplored field.
we had a discussion about this a few months ago, a lengthy thread named "Interest in geometry library". But then it drifted towards some other, more "geometrical" things like circles, ellipses, or defining a namespace named 'euclidean' inside which would be stored all functions related to euclidean space, and then other namespaces for other spaces. And it all got lost, because it simply grew too big.
Therefore currently I would prefer to avoid the name 'geometry', because we are in fact talking about just linear algebra (that's what vectors and matrices are).
Most people who see linear algebra will think large dimensional spaces, large dimensional linear system solves, eigen-analysis, etc. How about just 'vectors'? Geoffrey

Geoffrey Irving said: (by the date of Sun, 18 Jun 2006 11:35:50 -0700)
Therefore currently I would prefer to avoid the name 'geometry', because we are in fact talking about just linear algebra (that's what vectors and matrices are).
Most people who see linear algebra will think large dimensional spaces, large dimensional linear system solves, eigen-analysis, etc. How about just 'vectors'?
good idea. this name fits great. We call it 'vectors', and inside we have vectors/matrices/quaternions. No one will expect too much from this library. maybe even 'small vectors' ? I don't know :) -- Janek Kozicki |

I like that idea. That would apply to any field positioning and moving objects in 2-space or 3-space. I would add a conversion from the matrix transformation to a quaternion-and-vector representation and vice versa. Dave
Geoffrey Irving said:
Most people who see linear algebra will think large dimensional spaces, large dimensional linear system solves, eigen-analysis, etc.
How about just 'vectors'?
Janek Kozicki wrote: good idea. this name fits great. We call it 'vectors', and inside we have vectors/matrices/quaternions. No one will expect too much from this library.
maybe even 'small vectors' ? I don't know :)

--- Andy Little <andy@servocomm.freeserve.co.uk> wrote:
Regarding the geometry end, it seems to me that the geometry doesnt need to be tied to the particular pqs type. Using Boost.Typeof it should be possible to implement geometric entities that will work with the types in pqs or others if written in terms of Concepts.
I think this is an excellent idea! Now that I think about it, with Boost.Typeof a linear algebra library could be written that's compatible with strongly typed quantities of any kind, including PQS, but completely independent of PQS. You can leave that work to others who are experts in that area, and concentrate on just the dimensional analysis for individual quantities.
I reckon that my best bet is to try to implement vectors, quaternions and matrices in the absolute simplest way possible within PQS [...] I currently view the work more as an experiment to start to answer the question. What happens if we try to implement geometry on strongly typed quantities?
Sounds like a worthwhile experiment. A simple set of operations you could start with might be:

vector dot product
matrix multiplied by vector
matrix multiplied by matrix
addition/subtraction

These are all straightforward to implement for vectors of any size and would provide enough functionality to test the concept. BTW, Andy, thank you for all the work you've done on PQS and for your persistence in it, even with so many criticisms and suggestions. I think it could turn out to be a really valuable library. -- Leland
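A sketch of the second item on Leland's list above, matrix times vector, with the result's element type deduced so that element types which are strongly typed quantities also work. matrix<M,N,T> and vector<N,T> are the hypothetical templates from this thread, assumed to provide operator()(i,j) and operator[] accessors and default-constructible element types; C++11 decltype stands in for the Boost.Typeof machinery discussed earlier:

template< int M, int N, typename T1, typename T2 >
vector< M, decltype( T1() * T2() ) >
operator*( const matrix<M,N,T1>& m, const vector<N,T2>& v )
{
    vector< M, decltype( T1() * T2() ) > r;
    for (int i = 0; i < M; ++i)
    {
        decltype( T1() * T2() ) sum = m(i,0) * v[0];   // product type, not T1
        for (int j = 1; j < N; ++j)
            sum = sum + m(i,j) * v[j];
        r[i] = sum;
    }
    return r;
}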


"Geoffrey Irving" wrote
BTW (O.T.) Geoffrey... I find this awesome. I could watch it all day!: http://graphics.stanford.edu/~irving/movies/rle/boat_turning.avi
Thanks!
And not quite off-topic: making that would have been rather nastier without being able to templatize over dimension. Debugging in 2d is a lifesaver. Anything that makes that easier is good (vector<3,T> vs. vector3<T>).
Yes! BTW Out of respect to Janek Kozicki.. I think yade is pretty awesome too! http://yade.berlios.de/ regards Andy Little

Andy Little said: (by the date of Wed, 14 Jun 2006 21:55:44 +0100)
And not quite off-topic: making that would have been rather nastier without being able to templatize over dimension. Debugging in 2d is a lifesaver. Anything that makes that easier is good (vector<3,T> vs. vector3<T>).
Yes!
BTW Out of respect to Janek Kozicki.. I think yade is pretty awesome too!
Disclaimer: I did not bribe Andy with positive review of pqs, so that he will advertise yade in exchange ;P </joke> :) -- Janek Kozicki |

Andy Little said: (by the date of Wed, 14 Jun 2006 21:55:44 +0100)
BTW Out of respect to Janek Kozicki.. I think yade is pretty awesome too!
Disclaimer: I did not bribe Andy with positive review of pqs, so that he will advertise yade in exchange ;P
</joke> :)
oops. I should thank you for the praise - thank you! Sorry for being a bit rude :) -- Janek Kozicki |

Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
t1_quantity type in PQS is overcomplicated. Two decisions complicated the design of t1_quantity. The first was the requirement to distinguish dimensionally equivalent quantities (torque and energy say).
IMHO this distinction is not that important. We only need units so that the compiler will check if there is any mistake in the formulas... The difference between torque and energy matters only during serialization (print N*m, or print J?), so maybe instead of a complicated abstract_quantity_id, there should be just some extra argument/setting that talks to the serialization functions? Maybe this will make the design a bit leaner.
The second was the use of rational rather than integer powers of dimension.
someone suggested using integer powers holding twice the value of the represented power. That would allow one fractional power: a square root of a dimension. Maybe that will solve this? Has anyone seen, in real work, a fractional power of a unit other than a square root? -- Janek Kozicki |
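A sketch of that doubled-exponent idea (illustrative only - not PQS's actual representation): each base dimension's power is stored as an int holding twice the true power, so taking a square root just halves the stored value and stays integral.

template< int TwiceLength, int TwiceTime >   // further base dimensions omitted
struct dim {};                               // true powers are the halves

typedef dim< 2,  0 > length;        // L^1
typedef dim< 0, -2 > per_time;      // T^-1
typedef dim< 1,  0 > sqrt_length;   // L^(1/2) - representable!

template< int L1, int T1, int L2, int T2 >
dim< L1+L2, T1+T2 > operator*( dim<L1,T1>, dim<L2,T2> )
{
    return dim< L1+L2, T1+T2 >();   // multiplying quantities adds exponents
}

template< int L, int T >
dim< L/2, T/2 > sqrt( dim<L,T> )    // halving the stored (doubled) powers
{
    return dim< L/2, T/2 >();
}

Note the limit: sqrt of sqrt_length would truncate (stored 1 -> 0), so the scheme buys exactly one level of square root - which is what the question above is probing.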

Many nuclear reaction rates are modeled in series expansions based on the 1/3 power of the temperature. In general, fitting formulas to measured phenomena will happily use any power you can imagine, if it gets the behavior right. John Phillips Janek Kozicki wrote:
Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
t1_quantity type in PQS is overcomplicated. Two decisions complicated the design of t1_quantity. The first was the requirement to distinguish dimensionally equivalent quantities (torque and energy say).
IMHO this distinction is not that important. We only need units so that the compiler will check if there is any mistake in the formulas...
The difference between torque and energy matters only during serialization (print N*m, or print J?), so maybe instead of a complicated abstract_quantity_id, there should be just some extra argument/setting that talks to the serialization functions?
Maybe this will make the design a bit leaner.
The second was the use of rational rather than integer powers of dimension.
someone suggested using integer powers holding twice the value of the represented power. That would allow one fractional power: a square root of a dimension. Maybe that will solve this?
Has anyone seen, in real work, a fractional power of a unit other than a square root?

Janek Kozicki wrote:
Has anyone seen, in real work, a fractional power of a unit other than a square root?
John Phillips said: (by the date of Wed, 14 Jun 2006 10:49:39 -0400)
Many nuclear reaction rates are modeled in series expansions based on the 1/3 power of the temperature. In general, fitting formulas to measured phenomena will happily use any power you can imagine, if it gets the behavior right.
so the need for rational powers remains? I remember that one of the reviewers complained that rational powers are "too" much: Noel Belcourt said: (by the date of Thu, 8 Jun 2006 00:54:51 -0600)
8) Is there an easy way to replace the rational dimensions with integers? A number of disciplines could probably make do with integers and paying for rational dimensions seems expensive and unnecessary, although it's certainly more general.
Assuming that we want both rational and integer powers - is there any way to do it without making the library "too" complicated?
The difference between torque and energy matters only during serialization (print N*m, or print J?), so maybe instead of a complicated abstract_quantity_id, there should be just some extra argument/setting that talks to the serialization functions?
I think this argument still stands? Removing abstract_quantity_id would simplify the design... -- Janek Kozicki |

On Jun 14, 2006, at 10:11 AM, Janek Kozicki wrote:
Janek Kozicki wrote:
Has anyone seen, in real work, a fractional power of a unit other than a square root?
John Phillips said: (by the date of Wed, 14 Jun 2006 10:49:39 -0400)
Many nuclear reaction rates are modeled in series expansions based on the 1/3 power of the temperature. In general, fitting formulas to measured phenomena will happily use any power you can imagine, if it gets the behavior right.
so the need for rational powers remains? I remember that one of the reviewers complained that rational powers are "too" much:
Noel Belcourt said: (by the date of Thu, 8 Jun 2006 00:54:51 -0600)
8) Is there an easy way to replace the rational dimensions with integers? A number of disciplines could probably make do with integers and paying for rational dimensions seems expensive and unnecessary, although it's certainly more general.
Assuming that we want both rational and integer powers - is there any way to do it without making the library "too" complicated?
Hi Janek, I intend to post an, hopefully, uncomplicated solution demonstrating support for user-selectable integer and rational dimensions. Clearly, a unit-system solution needs to be parameterized on this axis so users can select which dimensional representation they need without paying for unnecessary resource consumption, like excess compile or run times, that may be imposed by a rational solution. Regards. -- Noel Belcourt

Noel Belcourt said: (by the date of Wed, 14 Jun 2006 12:43:37 -0600)
Hi Janek,
I intend to post an, hopefully, uncomplicated solution demonstrating support for user selectable integer and rational dimensions. Clearly, a unit system solution needs to be parameterized on this axis so users can select which dimensional representation they need without paying for unnecessary resource consumption, like excess compile or run times, that may be imposed by a rational solution.
Hello Noel, We are all anxious to see it! If there is any way I could help, please say so :) -- Janek Kozicki |

"Janek Kozicki" wrote
Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
t1_quantity type in PQS is overcomplicated. Two decisions complicated the design of t1_quantity. The first was the requirement to distinguish dimensionally equivalent quantities (torque and energy say).
IMHO this distinction is not that important. We only need units so that the compiler will check if there is any mistake in the formulas...
I think the ability to distinguish quantity types is important for other purposes.
The difference between torque and energy matters only during serialization (print N*m, or print J?), so maybe instead of a complicated abstract_quantity_id, there should be just some extra argument/setting that talks to the serialization functions?
Having thought about that, I have come to the conclusion that it is worthwhile to have the extra complexity in the t1_quantity/fixed_quantity. Having some form of output/serialisation for quantities is seemingly trivial, something like a toy feature, but it is very useful indeed for demonstrating and communicating what the type can do, and for diagnosing what it is doing, with minimal effort. That may seem trivial, but that type of feedback is very helpful in the first stage of trying out a library to see what it can do. I can speculate that this is part of the reason for the good level of interest in PQS, because it helps when providing short examples in discussions like this. That simple functionality is underrated IMO. regards Andy Little
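A toy sketch of the point Andy is making - two dimensionally identical quantities printing differently because the type remembers which one it is. The tag types and quantity template here are illustrative, not PQS's real interface:

#include <iostream>

struct energy_tag { static const char* symbol() { return "J"; } };
struct torque_tag { static const char* symbol() { return "N.m"; } };

template< typename Tag >
struct quantity          // both tags share the dimension L^2.M.T^-2
{
    double value;
};

template< typename Tag >
std::ostream& operator<<( std::ostream& os, const quantity<Tag>& q )
{
    return os << q.value << ' ' << Tag::symbol();
}

int main()
{
    std::cout << quantity<energy_tag>{ 50.0 } << '\n';   // prints "50 J"
    std::cout << quantity<torque_tag>{ 50.0 } << '\n';   // prints "50 N.m"
}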

Andy Little said: (by the date of Wed, 14 Jun 2006 20:22:35 +0100)
"Janek Kozicki" wrote
Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
The t1_quantity type in PQS is overcomplicated. Two decisions complicated the design of t1_quantity. The first was the requirement to distinguish dimensionally equivalent quantities (torque and energy, say).
IMHO this distinction is not that important. We only need units so that the compiler will check if there is any mistake in the formulas...
I think the ability to distinguish quantity types is important for other purposes.
I'm sorry about being a bit picky. You said it's overcomplicated (11 lines above this line). Now you say it's needed (4 lines above ;) Can you give examples of where the ability to distinguish is useful? Excluding serialization examples, because those are covered below (printing the unit correctly).
The difference between torque and energy matters only during serialization (print N*m, or print J?), so maybe instead of the complicated abstract_quantity_id there should just be some extra argument/setting that talks to the serialization functions?
Having thought about it, I have come to the conclusion that the extra complexity in t1_quantity/fixed_quantity is worthwhile. Having some form of output/serialisation for quantities seems trivial, something like a toy feature, but it is very useful indeed for demonstrating and communicating what the type can do, and for diagnosing what it is doing, with minimal effort. That kind of feedback is very helpful in the first stage of trying out a library to see what it can do. I can speculate that it is part of the reason for the good level of interest in PQS, because it helps when providing short examples in discussions like this. That simple functionality is underrated IMO.
I have difficulty understanding this paragraph. You say that PQS should be able to print "N*m" or print "J" depending on the context, because that is very helpful feedback. Right? -- Janek Kozicki |

"Janek Kozicki" wrote
Andy Little said: (by the date of Wed, 14 Jun 2006 20:22:35 +0100)
"Janek Kozicki" wrote
Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
The t1_quantity type in PQS is overcomplicated. Two decisions complicated the design of t1_quantity. The first was the requirement to distinguish dimensionally equivalent quantities (torque and energy, say).
IMHO this distinction is not that important. We only need units so that the compiler will check if there is any mistake in the formulas...
I think the ability to distinguish quantity types is important for other purposes.
I'm sorry about being a bit picky. You said it's overcomplicated (11 lines above this line). Now you say it's needed (4 lines above ;)
I think you chopped the quote :-) I think I said that some reviewers had said that t1_quantity/fixed_quantity was overcomplicated. The context in which I said it was about adding yet more functionality, this time to distinguish different unit systems. As for distinguishing torque from energy, reciprocal time from frequency, etc., I think it is useful functionality. It would be useful to add the other too FWIW, but practically it is a huge job IMO, which I don't see as realistic for me to take on. That needs someone with more experience in physics AFAICS.
Can you give examples of where the ability to distinguish is useful? Excluding serialization examples, because those are covered below (printing the unit correctly).
The only example is serialisation or standard output, but I think that is enough.
The difference between torque and energy matters only during serialization (print N*m, or print J?), so maybe instead of the complicated abstract_quantity_id there should just be some extra argument/setting that talks to the serialization functions?
Having thought about it, I have come to the conclusion that the extra complexity in t1_quantity/fixed_quantity is worthwhile. Having some form of output/serialisation for quantities seems trivial, something like a toy feature, but it is very useful indeed for demonstrating and communicating what the type can do, and for diagnosing what it is doing, with minimal effort. That kind of feedback is very helpful in the first stage of trying out a library to see what it can do. I can speculate that it is part of the reason for the good level of interest in PQS, because it helps when providing short examples in discussions like this. That simple functionality is underrated IMO.
I have difficulty understanding this paragraph. You say that PQS should be able to print "N*m" or print "J" depending on the context, because that is very helpful feedback. Right?
Yes. It's there for completeness, but the serialisation is modular and can easily be left out. The kind of place it might be useful is for students' homework, short demos (<libs/pqs/examples>), exporting to a spreadsheet, etc. (I am guessing here; it may not be suitable at all for students' homework, I am not a lecturer). For more serious use, as in yade, if the distinction is not important (as is the case for dimensional analysis checking), then the type could be referred to strictly as an anonymous-quantity, which could then represent torque or energy (these are so-called named-quantities in the PQS definitions section) without the distinction mattering for the current use. regards Andy Little

Andy Little said: (by the date of Wed, 14 Jun 2006 21:43:51 +0100)
I think the ability to distinguish quantity types is important for other purposes.
I think you chopped the quote :-)
more than once ;)
I think I said that some reviewers had said that t1_quantity/fixed_quantity was overcomplicated. The context in which I said it was about adding yet more functionality, this time to distinguish different unit systems. As for distinguishing torque from energy, reciprocal time from frequency, etc., I think it is useful functionality. It would be useful to add the other too FWIW, but practically it is a huge job IMO, which I don't see as realistic for me to take on. That needs someone with more experience in physics AFAICS.
If it's a huge job, then why not make things simpler and discard this? Especially when it turns out that the real purpose is serialization only, and the side purpose is to boast about extra functionality. I have no bad intentions; it's just how I perceive this. Do not make things more complicated than they are. http://en.wikipedia.org/wiki/Ockhams_razor Do something smaller and simpler first, but complete.
Can you give examples of where the ability to distinguish is useful? Excluding serialization examples, because those are covered below (printing the unit correctly).
The only example is serialisation or standard output, but I think that is enough.
ditto. <snip>
I have difficulty understanding this paragraph. You say that PQS should be able to print "N*m" or print "J" depending on the context, because that is very helpful feedback. Right?
Yes. It's there for completeness, but the serialisation is modular and can easily be left out. The kind of place it might be useful is for students' homework, short demos (<libs/pqs/examples>), exporting to a spreadsheet, etc. (I am guessing here; it may not be suitable at all for students' homework, I am not a lecturer).
So in fact the decision whether to print torque or energy can be an option of the serializing function. Especially when we start talking about exporting data to spreadsheets - a fairly sophisticated task. Serialization should not affect the underlying design. Currently I have the impression that serialization has forced you to add abstract_quantity_id to the underlying design. But I'm really interested in opinions from other people - do we need the ability to distinguish torque/energy, or not? It's just my opinion. We are looking for the best design of a units library (pqs), and everyone's opinion matters (not just mine). (Reminder: it's Andy who did tons of work.) Matthias Troyer in his review raised a concern about clashing id numbers between different custom quantities: <quote>
The immediate problem with the current design that I can see is that you recommend programmers to use the first available id for their new unit. If now two programmers define a new unit, these will have the same abstract id, and cannot be used together without changing the code. Using the scale factor as part of the abstract id would make such clashes much less likely. </quote>
For more serious use, as in yade, if the distinction is not important (as is the case for dimensional analysis checking), then the type could be referred to strictly as an anonymous-quantity, which could then represent torque or energy (these are so-called named-quantities in the PQS definitions section) without the distinction mattering for the current use.
Yes, an anonymous quantity will work in yade. I'm just wondering if the extra burden of abstract_quantity_id is worth the gains. Especially if this extra functionality is a "huge job", as you called it. PS: it's getting lengthy, and it's all about one design decision :) -- Janek Kozicki |

Janek Kozicki writes:
Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
The t1_quantity type in PQS is overcomplicated. Two decisions complicated the design of t1_quantity. The first was the requirement to distinguish dimensionally equivalent quantities (torque and energy, say).
IMHO this distinction is not that important. We only need units so that the compiler will check if there is any mistake in the formulas...
The difference between torque and energy matters only during serialization (print N*m, or print J?), so maybe instead of the complicated abstract_quantity_id there should just be some extra argument/setting that talks to the serialization functions?
Maybe this will make the design a bit leaner.
I completely disagree here.

1) Can anyone provide a really useful example of quantities with units encoded? I mean all these "length::m" things in PQS. There were requests to disable this and reduce it to a simple "length", dealing with units only in I/O with the help of manipulators. Note that historical units can be handled in the same way:

length l;
cin >> l; // in meters by default
cout << hyst_unit_manip << l;

2) Torque is a really bad smell in PQS! It has the same dimension as energy and, if I'm not wrong, is treated the same way, so one can add torque and energy. That is completely meaningless; there can be no meaningful equation that does that, and if PQS allows it then it is broken here IMO. The problem here is that PQS deals only with dimensions, but physical quantities have more than that: they also have rank. The difference between energy and torque is that energy is a scalar and torque is a vector or, more correctly, a pseudovector (it does not change its sign if space is inverted; velocity is a vector, it negates when space is inverted). It is completely wrong to add a scalar and a pseudovector (or vector); rank really matters! Moreover, adding a scalar to a pseudoscalar, or a vector to a pseudovector, is meaningless too; such a summation cannot occur in any correct equation. The abstract_quantity_id is an implementation artefact that hides away such things. A good Boost.Dimensional library should take rank and inversion behaviour into account. That would eliminate bad smells like the smell of torque in the current PQS implementation. Best regards, Oleg Abrosimov.
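(To make point 1 concrete, here is a minimal sketch of the manipulator idea under stated assumptions -- hyst_unit_manip above is Oleg's placeholder; the names length, feet, and meters below are hypothetical, not PQS. The quantity stores meters internally and a unit is chosen only at the I/O boundary:)

#include <iostream>

struct length { double meters; }; // always meters internally

// per-stream flag selecting the output unit: 0 = meters, 1 = feet
static const int unit_index = std::ios_base::xalloc();

std::ostream& feet(std::ostream& os)   { os.iword(unit_index) = 1; return os; }
std::ostream& meters(std::ostream& os) { os.iword(unit_index) = 0; return os; }

std::ostream& operator<<(std::ostream& os, const length& l)
{
    if (os.iword(unit_index) == 1)
        return os << l.meters / 0.3048 << " ft"; // convert at the I/O boundary only
    return os << l.meters << " m";
}

int main()
{
    length l = { 1.0 };
    std::cout << l << "\n";          // "1 m" by default
    std::cout << feet << l << "\n";  // "3.28084 ft" via the manipulator
}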

On Thu, Jun 15, 2006 at 02:17:38PM +0700, Oleg Abrosimov wrote:
the problem here is that PQS deals only with dimensions, but physical quantities have more than that: they also have rank.
What is the rank of a physical quantity? Regards -Gerhard -- Gerhard Wesp ZRH office voice: +41 (0)44 668 1878 ZRH office fax: +41 (0)44 200 1818 For the rest I claim that raw pointers must be abolished.

Gerhard Wesp said: (by the date of Thu, 15 Jun 2006 11:39:15 +0200)
On Thu, Jun 15, 2006 at 02:17:38PM +0700, Oleg Abrosimov wrote:
the problem here is that PQS deals only with dimensions, but physical quantities have more than that: they also have rank.
What is the rank of a physical quantity?
Good question indeed. I can grasp the idea of it. In fact the above quote from Oleg was a revelation for me. But how to include the concept of "rank" in the library design? Would it boil down to abstract_quantity_id, but with a different name? I'm really not sure if one can hold energy inside a vector (say, a vector field of energy); perhaps some physicist here can answer this question. But I'm sure that momentum can be held as a vector (i.e. a momentum vector). Could it mean that some quantities can be represented as vectors, while representing others as vectors doesn't make sense - would that be "rank" then? But anyway - even if we find a specification for how to implement it, I'd rather focus on it later. There is already tons of work with this library. -- Janek Kozicki |

On Jun 18, 2006, at 1:23 AM, Janek Kozicki wrote:
Gerhard Wesp said: (by the date of Thu, 15 Jun 2006 11:39:15 +0200)
On Thu, Jun 15, 2006 at 02:17:38PM +0700, Oleg Abrosimov wrote:
the problem here is that PQS deals only with dimensions, but physical quantities have more than that: they also have rank.
What is the rank of a physical quantity?
Good question indeed. I can grasp the idea of it. In fact the above quote from Oleg was a revelation for me. But how to include the concept of "rank" in the library design? Would it boil down to abstract_quantity_id, but with a different name?
I'm really not sure if one can hold energy inside a vector (say, a vector field of energy); perhaps some physicist here can answer this question. But I'm sure that momentum can be held as a vector (i.e. a momentum vector).
Could it mean that some quantities can be represented as vectors, while representing others as vectors doesn't make sense - would that be "rank" then?
As a physicist I am completely baffled and confused. What do you mean by the rank of a quantity? Do you mean the size of a vector/matrix? If so, then this is completely orthogonal to a unit library. You can hold any physical quantity inside a vector, or inside a multi_array of arbitrary dimensions. Just consider a finite-difference or finite-element representation of a field theory, and you have multi-dimensional arrays of quantities of essentially any unit you can think of. In my opinion, the "rank" (if I understand what is meant here) is thus orthogonal to the unit system. The unit is a property of the value type of the container, and the size (or rank) is a property of the container. Matthias

On Mon, Jun 19, 2006 at 10:04:21AM +0200, Matthias Troyer wrote:
As a physicist I am completely baffled and confused. What do you mean by rank of a quantity? Do you mean the size of a vector/matrix? If
I understood Oleg's post about rank as describing something that would allow one to distinguish energy from torque, e.g. Is there such a thing? Off the top of my head, I cannot think of a situation where you might want to add energy to torque, even if in the SI system they have the same dimension (Nm). Same thing for angular velocity [rad/s] and frequency (1/s). You probably don't want to add both, even if in SI they're both in s^-1. Is this maybe a deficiency of SI? Would it make sense to add the unit "radians", which is used colloquially anyway, to the system? Regards, -Gerhard -- Gerhard Wesp ZRH office voice: +41 (0)44 668 1878 ZRH office fax: +41 (0)44 200 1818 For the rest I claim that raw pointers must be abolished.

Gerhard Wesp wrote:
On Mon, Jun 19, 2006 at 10:04:21AM +0200, Matthias Troyer wrote:
As a physicist I am completely baffled and confused. What do you mean by rank of a quantity? Do you mean the size of a vector/matrix? If
I understood Oleg's post about rank as describing something that would allow one to distinguish energy from torque, e.g. Is there such a thing? Off the top of my head, I cannot think of a situation where you might want to add energy to torque, even if in the SI system they have the same dimension (Nm).
Same thing for angular velocity [rad/s] and frequency (1/s). You probably don't want to add both, even if in SI they're both in s^-1. Is this maybe a deficiency of SI? Would it make sense to add the unit "radians", which is used colloquially anyway, to the system?
You are almost right, but in the end you've chosen the wrong direction. There is a very good article in wikipedia about tensors in which tensor rank is also described: http://en.wikipedia.org/wiki/Tensor

The wrong direction is the supposed deficiency in SI. SI is all about units, and rank is an absolutely orthogonal concept to units (of course SI deals not only with units but also with dimensions, but that is natural, because units are just scales in the multidimensional space defined by the dimensions). The truth is that angular velocity is (surprise!) a pseudovector. Its direction is the direction of the rotation axis, and it appears in the following equation:

velocity_vector = cross_product(angular_velocity_pseudovector, radius_vector)

On the other hand, frequency is simply a scalar. So here we have the same situation as in the energy vs torque "paradox" ;-)

The root of such problems lies in the existence of two operations that produce the same dimension from given ones:
1) dot-product (results in a scalar, like energy)
2) cross-product (results in a vector or a pseudovector, like torque)

This problem cannot be solved inside dimensional analysis alone; I hope that is clear from this post. The solution is to take into account the rank of a quantity (rank 0 -> scalar; rank 1 -> vector, ...). It can be implemented in exactly the same manner as dimensional analysis: rank would be a compile-time constant bound to the quantity, and all operations would be defined with respect to the quantities' ranks. It means, in particular, that a dimensional vector _must_ be defined as:

1) length< vector<double> > l;

and not as:

2) vector< length<double> > l;

The reason is simple: we should define operations for vectors (rank one) that respect the rank of their results. For example:

dot(v1, v2)   -> scalar (rank 0) of dim = dim1 * dim2
cross(v1, v2) -> vector (rank 1) of dim = dim1 * dim2

In the implementation of these functions the results are linear combinations of products of the vectors' coordinates, like v1.x * v2.x or v1.x * v2.y. If approach (2) is chosen, there is no way for an operator* defined on the quantity to choose the right rank for the resulting quantity; it must be 0 for dot and 1 for cross. To fix that, the library that implements vector<> would have to be aware of dimensions, which means PQS would need its own copy of all the linear algebra code, tuned for dimensions. There was a long discussion about that on this list. With length< vector<double> > l; this limitation vanishes: length knows that it is a vector and defines its operations with awareness of both dimension _and_ rank, once for all linear algebra libraries.

I envision something like boost::operators that can be used to define quantities like length or torque (and to define new systems of units/dimensions), something like:

struct length : boost::pqs::quantity<1/*rank*/> {...};

Hope this post is helpful for the units/dimensions subcommunity on this list ;-) Best regards, Oleg Abrosimov.
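(A minimal sketch of one reading of this rank proposal, with the dimension bookkeeping omitted for brevity -- the names quantity and vec3 are hypothetical, not PQS code. The point is that dot and cross can assign different ranks to results of the same dimension, and that addition exists only for matching ranks, so torque + energy cannot compile:)

struct vec3 { double x, y, z; };

// rank carried as a compile-time constant alongside the representation
template <class Rep, int Rank>
struct quantity
{
    Rep value;
    static const int rank = Rank;
};

// dot: two rank-1 quantities yield a rank-0 (scalar) result -- energy-like
inline quantity<double, 0> dot(quantity<vec3, 1> const& a, quantity<vec3, 1> const& b)
{
    quantity<double, 0> r = { a.value.x * b.value.x
                            + a.value.y * b.value.y
                            + a.value.z * b.value.z };
    return r;
}

// cross: two rank-1 quantities yield a rank-1 (pseudovector) result -- torque-like
inline quantity<vec3, 1> cross(quantity<vec3, 1> const& a, quantity<vec3, 1> const& b)
{
    vec3 v = { a.value.y * b.value.z - a.value.z * b.value.y,
               a.value.z * b.value.x - a.value.x * b.value.z,
               a.value.x * b.value.y - a.value.y * b.value.x };
    quantity<vec3, 1> r = { v };
    return r;
}

// addition is defined only for identical ranks, so a rank-0 energy
// cannot be added to a rank-1 torque: that expression does not compile
inline quantity<double, 0> operator+(quantity<double, 0> const& a, quantity<double, 0> const& b)
{
    quantity<double, 0> r = { a.value + b.value };
    return r;
}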

On Tue, Jun 20, 2006 at 12:23:37AM +0700, Oleg Abrosimov wrote:
Gerhard Wesp wrote:
On Mon, Jun 19, 2006 at 10:04:21AM +0200, Matthias Troyer wrote:
As a physicist I am completely baffled and confused. What do you mean by rank of a quantity? Do you mean the size of a vector/matrix? If
I understood Oleg's post about rank as describing something that would allow one to distinguish energy from torque, e.g. Is there such a thing? Off the top of my head, I cannot think of a situation where you might want to add energy to torque, even if in the SI system they have the same dimension (Nm).
Same thing for angular velocity [rad/s] and frequency (1/s). You probably don't want to add both, even if in SI they're both in s^-1. Is this maybe a deficiency of SI? Would it make sense to add the unit "radians", which is used colloquially anyway, to the system?
You are almost right, but in the end you've chosen the wrong direction.
There is a very good article in wikipedia about tensors in which tensor rank is also described: http://en.wikipedia.org/wiki/Tensor
<snip>
This problem cannot be solved inside dimensional analysis alone; I hope that is clear from this post. The solution is to take into account the rank of a quantity (rank 0 -> scalar; rank 1 -> vector, ...)
Having rank also solves the following problem I was wondering about: how do you define vector-scalar multiplication in a sufficiently restrictive way? Without a notion of rank, matrix * vector could be ambiguous with scalar * vector, since (matrix * x, matrix * y, matrix * z) would be a valid vector<matrix<T> >. Geoffrey

On Tue, Jun 20, 2006 at 12:23:37AM +0700, Oleg Abrosimov wrote:
Hope this post is helpful for the units/dimensions subcommunity on this list ;-)
Definitely. Thanks for the enlightenment! -Gerhard -- Gerhard Wesp ZRH office voice: +41 (0)44 668 1878 ZRH office fax: +41 (0)44 200 1818 For the rest I claim that raw pointers must be abolished.

On Jun 19, 2006, at 7:23 PM, Oleg Abrosimov wrote:
Gerhard Wesp wrote:
On Mon, Jun 19, 2006 at 10:04:21AM +0200, Matthias Troyer wrote:
As a physicist I am completely baffled and confused. What do you mean by rank of a quantity? Do you mean the size of a vector/matrix? If
I understood Oleg's post about rank as describing something that would allow one to distinguish energy from torque, e.g. Is there such a thing? Off the top of my head, I cannot think of a situation where you might want to add energy to torque, even if in the SI system they have the same dimension (Nm).
Same thing for angular velocity [rad/s] and frequency (1/s). You probably don't want to add both, even if in SI they're both in s^-1. Is this maybe a deficiency of SI? Would it make sense to add the unit "radians", which is used colloquially anyway, to the system?
You are almost right, but in the end you've chosen the wrong direction.
There is a very good article in wikipedia about tensors in which tensor rank is also described: http://en.wikipedia.org/wiki/Tensor
The wrong direction is the supposed deficiency in SI. SI is all about units, and rank is an absolutely orthogonal concept to units (of course SI deals not only with units but also with dimensions, but that is natural, because units are just scales in the multidimensional space defined by the dimensions). The truth is that angular velocity is (surprise!) a pseudovector. Its direction is the direction of the rotation axis, and it appears in the following equation:

velocity_vector = cross_product(angular_velocity_pseudovector, radius_vector)

On the other hand, frequency is simply a scalar. So here we have the same situation as in the energy vs torque "paradox" ;-)

The root of such problems lies in the existence of two operations that produce the same dimension from given ones:
1) dot-product (results in a scalar, like energy)
2) cross-product (results in a vector or a pseudovector, like torque)

This problem cannot be solved inside dimensional analysis alone; I hope that is clear from this post. The solution is to take into account the rank of a quantity (rank 0 -> scalar; rank 1 -> vector, ...)

It can be implemented in exactly the same manner as dimensional analysis: rank would be a compile-time constant bound to the quantity, and all operations would be defined with respect to the quantities' ranks. It means, in particular, that a dimensional vector _must_ be defined as:

1) length< vector<double> > l;

and not as:

2) vector< length<double> > l;

The reason is simple: we should define operations for vectors (rank one) that respect the rank of their results. For example:

dot(v1, v2)   -> scalar (rank 0) of dim = dim1 * dim2
cross(v1, v2) -> vector (rank 1) of dim = dim1 * dim2
I feel that this discussion about "rank" is trying to achieve too much. If I understand it correctly, one wants to prevent users from mixing quantities with the same units but different semantics, and to use the "rank" or some other identifier to disambiguate them. I would propose to completely drop this idea, since there are *many* more examples where semantics play a role. Just consider:

- Total energy of a system and energy per particle have the same units, but care must be taken in adding them
- Free energy, energy, and enthalpy have the same units, but care must be taken in adding them
- The price of a product including or excluding tax has the same units, but care must be taken in adding them
- ...

There are many cases where it does not make any sense to add two quantities even if they have the same units. No matter how one extends the unit library, one will never be able to catch all of them. I thus think that the unit library should focus on unit checking and conversions and not attempt to prevent any possible user error, especially if this will make the library more complex and will require a completely new linear algebra library to be written. Matthias

On Tue, Jun 20, 2006 at 11:44:17AM +0200, Matthias Troyer wrote:
checking and conversions and not attempt to prevent any possible user error, especially if this will make the library more complex and will
Very good point and very good examples. Regards, -Gerhard -- Gerhard Wesp ZRH office voice: +41 (0)44 668 1878 ZRH office fax: +41 (0)44 200 1818 For the rest I claim that raw pointers must be abolished.

Hello, I've been using exceptions for several years now and while this programming style is very rewarding in general, one particular problem was bugging me all along: how do you make sure the catch site has all the information necessary to process the exception? This problem is trivial only in the case when the throw site has all the information that the catch site needs; but in general that is not the case. A good example that illustrates the problem I'm talking about is a file read function, which needs to be given the file name, so that if a read error occurs, the exception object would have the file name stored in it.

The typical solution I've seen, and the one I've been using in the past, is to somehow pass all relevant information to the function that's throwing the exception, so that if it throws it can encode it in the exception object. Clearly, this approach is not ideal; in the file read example, the read function doesn't need a file name, all it needs to attempt the read and to detect an error is a file handle. Besides, some files, such as standard output, don't have names anyway...

I came up with a solution which seems to solve the problem, and I wanted to see if there is sufficient interest in it to be added to boost. Essentially, the idea is to decouple all context information, such as file names, etc., from the "what went wrong" information. The "what went wrong" is indicated exclusively by the type of the exception being thrown (e.g. read_error). Any additional information should be independent of that type, because in different contexts different information may be relevant, for the same type of error. Consider the following throw statement:

throw read_error();

With the system I developed, this same error would be reported like this:

throw failed<read_error>();

Here, 'failed' is a function template, which returns an unnamed temporary of unspecified type, which derives from the type argument passed to 'failed', and another class called 'info'. Class info is essentially a map of boost::any objects, associated by their type_info. So, an object of class info can store objects of any type, but no more than one instance per type.

In the example above, you can catch the exception as read_error & (or any of the read_error base types), or as info &. At the catch site, you simply catch errors (that is, you dispatch on 'what went wrong'), and then probe the exception object for any context information available:

catch( read_error & x )
{
    if( info * xi = dynamic_cast<info *>(&x) )
        if( file_name * fn = xi->get<file_name>() )
            ......
}

How did the file name end up in the info object, if the original exception was thrown simply by 'throw failed<read_error>()'? Since you can catch the exception as info &, you can intercept *any* exception thrown by the 'failed' function template -- not to handle it, but to add relevant context information to it, regardless of what went wrong, like this:

void open_and_read_file( char const * n )
{
    boost::shared_ptr<FILE> f = io::fopen(n,"rb");
    try
    {
        read_file(f,....);
        ....
    }
    catch( info & xi )
    {
        xi.add( file_name(n) );
        throw;
    }
}

Of course, in the try block, there may be many different function calls; when we catch info &, we don't care which one of those calls failed: we catch any exception thrown by the 'failed' function template, add the file name (since it is relevant to any error that occurred in this context), and then re-throw the original object of unspecified type, as returned by the 'failed' function template.
In fact, class info is abstract, which protects against the slicing that would occur if the user tried to throw xi (in the example above). I can provide more information about my implementation if needed; it allows context information to be encoded as demonstrated by the example above, and also directly in the throw-expression -- for stuff that happens to be known at the throw site anyway, such as errno, etc. --Emil
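(For readers who want to see the moving parts, here is a minimal sketch of one way to read this mechanism -- a guess at the shape of the code, not Emil's actual library; in particular his info class is abstract to prevent slicing, which this toy version omits:)

#include <map>
#include <string>
#include <typeinfo>
#include <boost/any.hpp>

// at most one stored object per type, keyed by the type's name
class info
{
public:
    virtual ~info() {}
    template <class T> void add(T const& x) { m_[typeid(T).name()] = x; }
    template <class T> T* get()
    {
        std::map<std::string, boost::any>::iterator i = m_.find(typeid(T).name());
        return i == m_.end() ? 0 : boost::any_cast<T>(&i->second);
    }
private:
    std::map<std::string, boost::any> m_;
};

// 'failed' returns an unnamed temporary deriving from both E and info
template <class E>
struct failed_impl : E, info {};

template <class E>
failed_impl<E> failed() { return failed_impl<E>(); }

// usage, following the example in the post (read_error and file_name
// are stand-ins):
struct read_error {};
struct file_name { std::string value; file_name(char const* s) : value(s) {} };

void read_file() { throw failed<read_error>(); }

void open_and_read_file(char const* n)
{
    try { read_file(); }
    catch (info& xi) { xi.add(file_name(n)); throw; } // annotate and re-throw
}

int main()
{
    try { open_and_read_file("data.txt"); }
    catch (read_error& x)
    {
        if (info* xi = dynamic_cast<info*>(&x))
            if (file_name* fn = xi->get<file_name>())
                (void)fn; // fn->value == "data.txt"
    }
}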

Emil Dotchevski wrote:
I came up with a solution which seems to solve the problem, and I wanted to see if there is sufficient interest in it to be added to boost.
Essentially, the idea is to decouple all context information, such as file names, etc., from the "what went wrong" information. The "what went wrong" is indicated exclusively by the type of the exception being thrown (e.g. read_error). Any additional information should be independent of that type, because in different contexts different information may be relevant, for the same type of error.
This seems quite a novel approach, and would probably be interesting to play around with. Whether it's useful can probably only be determined by experimentation. Sebastian Redl

On 6/20/06, Emil Dotchevski <emildotchevski@hotmail.com> wrote:
Hello,
Hi Emil, [snipped] I liked your idea very much, but something bothers me.
catch( read_error & x )
{
    if( info * xi = dynamic_cast<info *>(&x) )
        if( file_name * fn = xi->get<file_name>() )
            ......
}
IMO, this dynamic_cast should be hidden by the library. How about:

catch(read_error& x)
{
    if(info * i = try_get_info(x))
    {
        // ... do something.
    }
}

There should be a way to inspect what is in the info object too, IMHO. I liked the << operator for the throw statement too. [snipped]
--Emil
-- Felipe Magno de Almeida

I liked your idea very much, but something bothers me.
catch( read_error & x )
{
    if( info * xi = dynamic_cast<info *>(&x) )
        if( file_name * fn = xi->get<file_name>() )
            ......
}
IMO, this dynamic_cast should be hidden by the library. How about:
catch(read_error& x)
{
    if(info * i = try_get_info(x))
    {
        // ... do something.
    }
}
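(A sketch of how thin such a helper could be -- the name try_get_info is Felipe's suggestion, the body is a guess, assuming the info class above stays a public base of every thrown exception; returning a pointer avoids copying the abstract class:)

template <class E>
info* try_get_info(E& x)
{
    return dynamic_cast<info*>(&x); // null if no info is attached
}

// usage:
// catch (read_error& x) {
//     if (info* i = try_get_info(x)) { /* probe i->get<...>() */ }
// }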
In fact, with a function like this you could completely hide the info class from the public interface, right? But the fact that class info is a public base for the exceptions being thrown is a key feature of the library. It makes it possible to catch(info &) in places where you don't care what went wrong and simply want to add context information to any exception that passes through. I think that because class info will always be a public base of all exceptions, dynamic_cast will always work, and there's no reason to provide an alternative for achieving the same thing.
There should be a way to inspect what is in the info object too, IMHO.
You mean, a way to enumerate all contained boost::any objects? But what can you do with a type_info and a void pointer? I think a more useful function would be one that automatically composes a textual description of the stuff that's in the info object. I thought about this, but decided that formatting a message for the user is beyond the scope of the exception lib. Consider that such a message has to be localized, too! That said, it's probably a good idea to design a system that automatically composes a textual description, even if it is only useful for dumping into error logs and such. But such a system would complicate the info class: you wouldn't be able to stuff just any object into it, because 'infos' would have to implement some virtual function to convert them to strings. And I like how my fread wrapper stuffs a boost::weak_ptr<FILE> into the exception, just because it could be useful to someone higher up the stack. But if you have ideas about this, I'm very interested to hear them. --Emil

Hello, I guess I'm not quite ready to request a formal review for the Exception lib I wrote, but because of the initial interest in it I do want to propose adding it to Boost. So I wrote documentation and adapted the source code to Boost. Please check it out: http://www.revergestudios.com/exception/exception.htm If you click Download on that page, you can get a .zip file with the source code, tests, and an example, complete with Boost Build Jamfiles. All of the Exception lib code is contained in boost/exception.hpp. The code has been tested with msvc 7.1, 8.0 and gcc 3.4.4. Peter Dimov mentioned the system_error proposal from Beman Dawes. I will read it carefully and think about how it can be integrated with the Exception lib. Thanks, Emil

Oleg Abrosimov wrote:
There is a very good article in wikipedia about tensors in which tensor rank is also described: http://en.wikipedia.org/wiki/Tensor
(snip...)
I envision something like boost::operators that can be used to define quantities like length or torque (and to define new systems of units/dimensions), something like:

struct length : boost::pqs::quantity<1/*rank*/> {...};
I think you have some nice ideas for the foundations of a Tensor library, but not for a part of the units and dimensions library. All of the problems you bring up are based in the representations and algebra of tensors: things like the difference between vectors and pseudo-vectors, and rank issues. You never used the terms covariant and contravariant, but that seems to be where you are working from. Unfortunately, if you start down this road, it quickly presents its own problems. From tensor theory, we know that there are indexed arrays of quantities that can interact with tensors but are not tensors themselves (the Christoffel symbols, for example). Should this also become part of the units library? What about the fact that the algebra of two-index tensors is not the same as the algebra of matrices? Do we need to add something else to account for that? What about magnitudes of vector and pseudovector quantities? The magnitude of a torque makes complete physical sense, it is a scalar, and it is still a bad idea to add one to an energy. No, on the whole I think the units and dimensions library should try to retain its focus on units and dimensions. That is already a huge issue, and it has more than sufficient complexity on its own. Tensors are good and very interesting objects, but the only concern the units library should have for them is in picking a design that doesn't automatically break them.
Hope this post is helpful for the units/dimensions subcommunity on this list ;-)
Best regards, Oleg Abrosimov.
I think all of this discussion has been quite helpful and valuable for the interested subcommunity. I doubt there is anyone paying attention to it who hasn't learned a few things in the process. John Phillips

Hi Oleg, "Oleg Abrosimov" wrote
Janek Kozicki writes:
Andy Little said: (by the date of Tue, 13 Jun 2006 00:59:11 +0100)
The t1_quantity type in PQS is overcomplicated. Two decisions complicated the design of t1_quantity. The first was the requirement to distinguish dimensionally equivalent quantities (torque and energy, say).
IMHO this distinction is not that important. We only need units so that the compiler will check if there is any mistake in the formulas...
The difference between torque and energy matters only during serialization (print N*m, or print J?), so maybe instead of the complicated abstract_quantity_id there should just be some extra argument/setting that talks to the serialization functions?
Maybe this will make the design a bit leaner.
I completely disagree here.
1) Can anyone provide a really useful example of quantities with units encoded? I mean all these "length::m" things in PQS. There were requests to disable this and reduce it to a simple "length", dealing with units only in I/O with the help of manipulators. Note that historical units can be handled in the same way:
length l;
cin >> l; // in meters by default
cout << hyst_unit_manip << l;
The genesis of PQS was the need in my own work to remember which unit a floating point variable was expressed in. In this particular application the user had various data entry boxes where the entry unit was millimeters. In my internal calculations I had to convert these lengths to meters, and I had to convert them back again for output. I found I spent a huge amount of time going back and forth between source files trying to remember whether a particular variable was in meters or millimeters. You could argue that the design of the application was poor, but it was one of those applications that grow from very simple beginnings, by gradual additions, into a huge creaking monster. Users of the application often made requests to be allowed to specify other units (feet and inches, meters etc) for data input too.

I also had an email from a software house who needed to work in imperial units and had done some tests and found PQS potentially very useful for their work.

Finally, of course, there is the famous example of the Mars lander which crashed, wasting millions of dollars, apparently because one team was using metric and the other imperial units. There are countless other examples too.
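(For illustration, the kind of code this enables -- a short sketch assuming PQS-style unit typedefs as used elsewhere in this thread; the exact spellings may differ from the library:)

pqs::length::mm entry(25.4);        // value as typed into the data entry box
pqs::length::m internal = entry;    // converted to meters by the type system
pqs::length::mm display = internal; // and back for output -- no hand-written factors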
2) Torque is a really bad smell in PQS! It has the same dimension as energy and, if I'm not wrong, is treated the same way, so one can add torque and energy. That is completely meaningless; there can be no meaningful equation that does that, and if PQS allows it then it is broken here IMO.
Actually PQS works in terms of so-called anonymous quantities, where only the dimension and unit are significant. Many calculations return an anonymous quantity because I don't see any simple way to predict from the input variables what type the result should be. The so-called named_quantities are there for users who want to distinguish quantities for their own purposes (the obvious one being more satisfying output). OTOH, if you can come up with a way to distinguish the result of force * distance into either torque or energy depending on some criteria, I would be interested. FWIW I experimented with making torque have an angle ValueType. That approach might be worth pursuing; my reason for not pursuing it was that I felt many potential users would simply find the extra complexity annoying (as I did while trying it out). And of course in the limit an angular velocity (* radius) becomes a straight velocity, etc, etc...
the problem here is that PQS deals only with dimensions, but physical quantities have more than that: they also have rank.
the difference between energy and torque is that energy is a scalar and torque is a vector or, more correctly, a pseudovector (it does not change its sign if space is inverted; velocity is a vector, it negates when space is inverted).
it is completely wrong to add a scalar and a pseudovector (or vector); rank really matters! Moreover, adding a scalar to a pseudoscalar, or a vector to a pseudovector, is meaningless too; such a summation cannot occur in any correct equation.
Sure, there is a lot of truth in what you are saying. What is lacking is a set of rules that can be used to apply these principles. I believe that Matt Calabrese was working on this aspect of quantities some time ago, so it might be worth talking to him about it. I opted to stick to dimensional analysis checking only and to ignore other features of a quantity; I have enough on my plate as it is. So I accept the criticism of PQS, but I believe a working implementation of what you are describing will throw up difficulties (based on a small excursion into that territory) that add a lot of complexity. Nevertheless I would be interested in at least some more details about how it would work in practice.

It should be clear from the amount of input in this review that this field is potentially huge and worthy of much further research. The PQS library only scratches the surface and leaves many gaps, as you are saying. However my goal is to keep PQS as simple as possible (IOW basically as it is now), but to tighten up the documentation, add supporting classes (again, as simple a set as possible), continue to look through the comments here, accept suggestions for syntax and usability improvements, etc., but not to extend into the territory you are suggesting (largely, as I keep saying, because my maths and physics capabilities simply aren't good enough). It will then be up to others like yourself to criticise, extend, improve on, reject, etc. PQS as part of that research. regards Andy Little

Andy Little wrote:
"Oleg Abrosimov" wrote
1) Can anyone provide a really useful example of quantities with units encoded? I mean all these "length::m" things in PQS. There were requests to disable this and reduce it to a simple "length", dealing with units only in I/O with the help of manipulators. Note that historical units can be handled in the same way:
length l;
cin >> l; // in meters by default
cout << hyst_unit_manip << l;
The genesis of PQS was the need in my own work to remember which unit a floating point variable was expressed in. In this particular application the user had various data entry boxes where the entry unit was millimeters. In my internal calculations I had to convert these lengths to meters, and I had to convert them back again for output. I found I spent a huge amount of time going back and forth between source files trying to remember whether a particular variable was in meters or millimeters. You could argue that the design of the application was poor, but it was one of those applications that grow from very simple beginnings, by gradual additions, into a huge creaking monster.
Yes, I would ;-) Actually, my question is: can you provide good reasons to follow the current PQS style (length::m etc.) instead of simply Length, as I and others have proposed? In this message you only said that it was not a rational decision but a historical one. That doesn't convince me, and it is not a good reason for a library that aims to be in boost IMO.

Moreover, I have a strong feeling that the solution with a simple Length would be better. In particular, it would simplify your code that uses PQS. You've said that PQS helps you to remember what units a variable is in. In a scheme with Length you simply have no need to remember it, nor think about it anymore.

Of course, I can be completely wrong. If so, I would like to see a complete example where the current scheme is beneficial. In C++, please. Best regards, Oleg Abrosimov

"Oleg Abrosimov" <beholder@gorodok.net> wrote
Moreover, I have a strong feeling that the solution with a simple Length would be better. In particular, it would simplify your code that uses PQS. You've said that PQS helps you to remember what units a variable is in. In a scheme with Length you simply have no need to remember it, nor think about it anymore.
Of course, I can be completely wrong. If so, I would like to see a complete example where the current scheme is beneficial. In C++, please.
Have you looked in the examples provided with pqs_3_1_0/pqs_3_1_1? Some examples there showing the advantages of units are:
<libs/pqs/examples/conversion_factor.cpp>
<libs/pqs/examples/capacitor_time_curve.cpp>
<libs/pqs/examples/lab_example.cpp>
<libs/pqs/examples/gravity.cpp>
<libs/pqs/examples/fibonacci_optimise-timer.cpp>
<libs/pqs/examples/noise_voltage_density.cpp>
<libs/pqs/examples/clcpp_response.cpp>
regards Andy Little

Andy Little said: (by the date of Sat, 17 Jun 2006 21:14:57 +0100)
"Oleg Abrosimov" <beholder@gorodok.net> wrote
Moreover, I have a strong feeling that the solution with a simple Length would be better. In particular, it would simplify your code that uses PQS. You've said that PQS helps you to remember what units a variable is in. In a scheme with Length you simply have no need to remember it, nor think about it anymore.
Of course, I can be completely wrong. If so, I would like to see a complete example where the current scheme is beneficial. In C++, please.
Have you looked in the examples provided with pqs_3_1_0/pqs_3_1_1?
Some examples there showing the advantages of units are:
<libs/pqs/examples/conversion_factor.cpp>
<libs/pqs/examples/capacitor_time_curve.cpp>
<libs/pqs/examples/lab_example.cpp>
<libs/pqs/examples/gravity.cpp>
<libs/pqs/examples/fibonacci_optimise-timer.cpp>
<libs/pqs/examples/noise_voltage_density.cpp>
<libs/pqs/examples/clcpp_response.cpp>
Hi Andy, this is not a good reply to a well-posed question. You should rather pick just one of those examples, copy/paste a (small) fragment of it, and write the reasoning why length::m makes sense. It's even possible that while writing this you will discover yourself that it doesn't make sense :) I'm sorry about being harsh ... but - you know, a small piece of code with good comments that are "on topic" for length::m is a thousand times better than tons of examples without explanation... -- Janek Kozicki |

"Janek Kozicki" <janek_listy@wp.pl> wrote in message news:20060618012325.13991239@absurd...
Andy Little said: (by the date of Sat, 17 Jun 2006 21:14:57 +0100)
Some examples there showing the advantages of units are:
<libs/pqs/examples/conversion_factor.cpp>
<libs/pqs/examples/capacitor_time_curve.cpp>
<libs/pqs/examples/lab_example.cpp>
<libs/pqs/examples/gravity.cpp>
<libs/pqs/examples/fibonacci_optimise-timer.cpp>
<libs/pqs/examples/noise_voltage_density.cpp>
<libs/pqs/examples/clcpp_response.cpp>
Hi Andy, this is not a good reply to a well-posed question.
You should rather pick just one of those examples, copy/paste a (small) fragment of it, and write the reasoning why length::m makes sense. It's even possible that while writing this you will discover yourself that it doesn't make sense :)
I'm sorry about being harsh ... but - you know, a small piece of code with good comments that are "on topic" for length::m is a thousand times better than tons of examples without explanation...
Yes, the examples need to be sorted out and linked to the docs, and more examples added... Ok ......... ;-) regards Andy Little

Oleg Abrosimov wrote:
Actually, my question is: can you provide good reasons to follow the current PQS style (length::m etc.) instead of simply Length, as I and others have proposed?
I can think of a couple. Here is a rather contrived one:

pqs::length universe_radius = 1.4e26 * METER;
pqs::volume universe_volume = (4.0/3.0) * M_PI * pqs::pow<3>(universe_radius);

You now have roughly 1.1e79 m^3, which could overflow if the maximum exponent of the double on your system is +37. As I said, that was contrived, but embedded systems may not have a large maximum exponent for doubles.

Here is another one that really is from an application I work on. The layer that I work on has to collect data from several legacy application outputs that use different underlying systems of units. One gives data in kJ and another gives data in kcal. Rather than having to constantly convert between the two values, which is error prone, I simply write:

// Prototypes of interface with two different legacy codes
pqs::energy::kJ GetLegacyData1(/*args*/);
pqs::energy::kcal GetLegacyData2(/*args*/);

// used in my program
pqs::energy::kJ total = GetLegacyData1() + GetLegacyData2();

Compare that to what I would have to write under a uniform units system. Inside GetLegacyData2(), instead of just the code:

return pqs::energy::kcal(ptr->value);

I'd have to write

return pqs::energy::J(KILOCALORIES * ptr->value);

Not terribly different, but there is an extra conversion in there, and that brings a chance for errors. What if I accidentally wrote KILOCALORIES / ptr->value? Assuming that I wrote the definition of KILOCALORIES (which I'd almost certainly have to do for more complicated things like angstroms^2/picosecond, which is a unit that comes up for me), I'd have to be careful that (1) I wrote the conversion KILOCALORIES correctly, and (2) I used it correctly. Suddenly, all of the nice automatic conversions that save me time (is it multiply by 4.184 or divide?) are gone and back on my shoulders. I was ecstatic to get rid of all of the conversion factors in my code, and I would be sad to see that go away.
In this message you only said that it was not a rational decision but a historical one. That doesn't convince me, and it is not a good reason for a library that aims to be in boost IMO.
Moreover, I have a strong feeling that the solution with a simple Length would be better. In particular, it would simplify your code that uses PQS.
In my opinion, the purpose of a library should be to make the user's life easier. That is why we are willing to tolerate the awful syntax of operator[](int i) when we write a class: we only write it once, but use it many times. For anyone who wants to use only the basics, you can simply write:

namespace pqs
{
    typedef boost::pqs::length::m length;
    static const length METER(1.0);
    typedef boost::pqs::time::s time;
    static const time SECOND(1.0);
    typedef boost::pqs::velocity::m_div_s velocity;
    static const velocity M_PER_S(METER/SECOND);
    // etc for other units
}

and then just use:

pqs::length inch = 0.0254 * METER;
pqs::time ps = 1.0e-12 * SECOND;
pqs::velocity v = inch/ps;

Everything is nicely stored in meters, seconds, etc. You'll suffer some compile-time penalty (compared to a much simpler system where everything can only be in the basic units) for the fact that you don't use all of the library's power. There should be no run-time penalty (assuming that you have a good optimizing compiler), since all of the conversions are known at compile time and are 1.0.
You've said that PQS helps you to remember what units a variable is in. In a scheme with Length you simply have no need to remember it, nor think about it anymore.
For me, the beauty is that I don't have to remember (or even care) what units the underlying system gave me the values in. Once the accessor function was defined, I could store them in whatever units I want and let PQS do all of the conversion.

David Walthall wrote:
Oleg Abrosimov wrote:
Actually, my question is: can you provide good reasons to follow the current PQS style (length::m etc.) instead of simply Length, as I and others have proposed?
I can think of a couple. Here is a rather contrived one:
pqs::length universe_radius = 1.4e26 * METER;
pqs::volume universe_volume = (4.0/3.0) * M_PI * pqs::pow<3>(universe_radius);
You now have roughly 1.1e79 m^3, which could overflow if the maximum exponent of the double on your system is +37. As I said, that was contrived, but embedded systems may not have a large maximum exponent for doubles.
Under/overflow will certainly be an issue for some people if they can't specify their own base units, but I don't know if a tight coupling between units and dimensions is the best solution. Could there be some sort of global setting (a facet, maybe?) where we could specify units for base dimensions for the entire application?
The layer that I work on has to collect data from several legacy application outputs that use different underlying systems of units. One gives data in kJ and another gives data in kcal. Rather than having to constantly convert between the two values, which is error prone, I simply write:
// Prototypes of interface with two different legacy codes
pqs::energy::kJ GetLegacyData1(/*args*/);
pqs::energy::kcal GetLegacyData2(/*args*/);
// used in my program
pqs::energy::kJ total = GetLegacyData1() + GetLegacyData2();
Another example would be something like this (excuse the pseudo-pqs):

pqs::pressure::psi WaterPressure(pqs::length::feet depth)
{
    const pqs::length_per_pressure::foot_per_psi pressureRatio(2.31);
    return depth/pressureRatio;
}

If units weren't specified in the function declaration, it could be called with the wrong ones, and its return value would be garbage.

Both your example and mine have a common element though. They both involve moving between numeric values and dimensions. I can't think of an instance where units would be useful when that isn't the case. If that's true, then it seems like overkill to require bound units throughout the program. A simpler solution would be to prohibit dimension variables from being assigned or returning "raw" numbers. The only way to get or set a value would be through unit functions like this:

pqs::energy energyVal = pqs::kcal(value);

My water pressure example would look like this:

pqs::pressure WaterPressure(pqs::length depth)
{
    const pqs::length_per_pressure ratio(pqs::foot_per_psi(2.31));
    return depth/ratio;
}

The value, 2.31, would be converted to the global units, whatever they happened to be, so the function would work with any unit system. Since units would be specified at the point of conversion, this method might even be a little safer. In pqs, you might do something like this:

pqs::length::m depth;
// lots of code
double depthAsDouble(depth);
SetFieldValue("DepthInFt", depthAsDouble);
cout << "Enter new depth (in feet): ";
cin >> depth;

Because depth is declared far from where it's output and reassigned, these mistakes would be easy to miss. But if the code worked like this:

pqs::length depth;
// lots of code
// Must use units function to convert to double
double depthAsDouble = pqs::feet(depth);
SetFieldValue("DepthInFt", depthAsDouble);
cout << "Enter new depth (in feet): ";
// Must use units function to convert from double
cin >> depthAsDouble;
depth = pqs::feet(depthAsDouble);

the programmer would have to declare the units at the point where they're used. This wouldn't eliminate mistakes altogether, but it might discourage the more obvious ones.

The biggest advantage of all this is that units would no longer be an integral part of the dimensions library, and the only difference between the unitless and unitful versions of the library would be in how values are assigned to and extracted from dimension variables.

"Beth Jacobson" wrote [...]
Under/overflow will certainly be an issue for some people if they can't specify their own base units, but I don't know if a tight coupling between units and dimensions is the best solution. Could there be some sort of global setting (a facet, maybe?) where we could specify units for base dimensions for the entire application?
The quantity containers and unit typedefs can easily be customised by the user to whatever they prefer:

namespace my
{
    typedef boost::pqs::length::mm distance;
    typedef boost::pqs::time::s time;
    typedef boost::pqs::velocity::mm_div_s velocity;
}

void f()
{
    my::velocity v = my::distance(1) / my::time(1);
}

[...]
Another example would be something like this (excuse the pseudo-pqs):
No problem..... ;-)
pqs::pressure::psi WaterPressure(pqs::length::feet depth)
{
    const pqs::length_per_pressure::foot_per_psi pressureRatio(2.31);
    return depth/pressureRatio;
}
If units weren't specified in the function declaration, it could be called with the wrong ones, and its return value would be garbage.
Both your example and mine have a common element though. They both involve moving between numeric values and dimensions. I can't think of an instance where units would be useful when that isn't the case. If that's true, then it seems like overkill to require bound units throughout the program. A simpler solution would be to prohibit dimension variables from being assigned or returning "raw" numbers. The only way to get or set a value would be through unit functions like this.
PQS quantities can't be converted to raw numbers:

double val = pqs::length::m(1); // Error

but a function is provided to get the numeric_value:

double val = pqs::length::m(1).numeric_value(); // Ok

Its name is quite long, so it's easy to spot in code.
pqs::energy energyVal = pqs::kcal(value);
My water pressure example would look like this
pqs::pressure WaterPressure(pqs::length depth)
{
    const pqs::length_per_pressure ratio(pqs::foot_per_psi(2.31));
    return depth/ratio;
}
The value, 2.31, would be converted to the global units, whatever they happened to be, so the function would work with any unit system. Since units would be specified at the point of conversion, this method might even be a little safer. In pqs, you might do something like this
pqs::length::m depth;
// lots of code
It's not possible to initialise a double from a quantity in PQS:
double depthAsDouble(depth);
so the above will not compile.
SetFieldValue("DepthInFt", depthAsDouble);
cout << "Enter new depth (in feet): ";
cin >> depth;
Because depth is declared far from where it's output and reassigned, these mistakes would be easy to miss.
Not so, for the above reason! In PQS the units are part of the type. IOW the numeric value and its unit are always tightly coupled in code. That is a powerful feature. Once the numeric part of a quantity is disassociated from its unit, manual checking and external documentation are required, which is often the current situation wherever doubles are used to represent quantities. Manual checking doesn't always work as well as intended... That of course is why the Mars lander crashed!

regards
Andy Little

Andy Little wrote:
"Beth Jacobson" wrote
[...]
Under/overflow will certainly be an issue for some people if they can't specify their own base units, but I don't know if a tight coupling between units and dimensions is the best solution. Could there be some sort of global setting (a facet, maybe?) where we could specify units for base dimensions for the entire application?
The quantity containers and unit typedefs can be easily customised by the user to whatever they prefer:
namespace my{
    typedef boost::pqs::length::mm distance;
    typedef boost::pqs::time::s time;
    typedef boost::pqs::velocity::mm_div_s velocity;
}
void f()
{
    my::velocity v = my::distance(1) / my::time(1);
}
That would remove the unit declarations from the code, but the program still wouldn't be unit agnostic. Say I wrote a function like this:

my::area GetArea(my::length len, my::length width)
{
    return len*width;
}

If I used it in a program using SI units, it would work exactly like what I'm proposing, but if the calling program used imperial units, it would first convert len and width from feet to meters, then multiply them, then convert the result from square meters back to square feet. The results would be the same either way (assuming no rounding issues), but it adds three unnecessary conversions.
Both your example and mine have a common element though. They both involve moving between numeric values and dimensions. I can't think of an instance where units would be useful when that isn't the case. If that's true, then it seems like overkill to require bound units throughout the program. A simpler solution would be to prohibit dimension variables from being assigned or returning "raw" numbers. The only way to get or set a value would be through unit functions like this.
PQS quantities can't be converted to raw numbers.
double val = pqs::length::m(1); // Error
but a function is provided to get the numeric_value:
double val = pqs::length::m(1).numeric_value(); // Ok
Its name is quite long so it's easy to spot in code.
Thanks. I didn't see it in the docs, but I figured there had to be some fairly straightforward way to extract the value. ...
Since units would be specified at the point of conversion, this method might even be a little safer. In pqs, you might do something like this
pqs::length::m depth;
// lots of code
It's not possible to initialise a double from a quantity in PQS:
double depthAsDouble(depth);
so the above will not compile.
Then the line would look like this:

double depthAsDouble = depth.numeric_value();
SetFieldValue("DepthInFt", depthAsDouble);
cout << "Enter new depth (in feet): ";
cin >> depth;
Because depth is declared far from where it's output and reassigned, these mistakes would be easy to miss.
Not so, for the above reason!
But when I use numeric_value(), the problem still remains: there's nothing in that line to tell you the units of the extracted value. It's not essential that there should be, but since this is one place where the library can't protect the user against units errors, it would be nice if such errors were easier to recognize.
In PQS the units are part of the type. IOW the numeric value and its unit are always tightly coupled in code. That is a powerful feature. Once the numeric part of a quantity is disassociated from its unit, manual checking and external documentation are required, which is often the current situation wherever doubles are used to represent quantities. Manual checking doesn't always work as well as intended... That of course is why the Mars lander crashed!
I agree that unit checking can be an extremely useful feature, I'm just not convinced that tight coupling is necessary to achieve that. If my assumption that unit conversions are only needed when numeric values are assigned to or extracted from dimensions is correct, then dimensions don't really need to know about units. The units functions would need to have a concept of default dimension units so they'd know what they were converting from or to, but the dimensions themselves wouldn't know or care.

That would be the real benefit of this system. Reducing the number of conversions and making units explicit at the point of conversion may have some small benefit, but making the dimensions library essentially unitless seems like a major advantage.

Of course it's your library, and I'm not trying to dictate design. But if you're looking for a way to make the dimensions library essentially independent of units while retaining the unit safety of the current system, this might be a direction to consider.

Regards,
Beth
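A minimal sketch of the boundary-only-units design being proposed here; the names (length, feet) and the metre-based internal representation are assumptions for illustration, not PQS code:

#include <iostream>

class length
{
    double val_;                          // internal unit deliberately hidden
    explicit length(double v) : val_(v) {}
    friend length feet(double);
    friend double feet(length);
public:
    length() : val_(0) {}
};

// unit functions: the unit is named exactly where a raw number crosses over
inline length feet(double ft)  { return length(ft * 0.3048); }  // stored as metres
inline double feet(length len) { return len.val_ / 0.3048; }

int main()
{
    length depth = feet(10);              // units explicit on the way in
    std::cout << feet(depth) << " ft\n";  // and explicit again on the way out
}

With this shape, length d(5); cannot compile, because the only route in or out of a length is a named unit function.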

"Beth Jacobson" wrote
Andy Little wrote:
"Beth Jacobson" wrote
[...]
Under/overflow will certainly be an issue for some people if they can't specify their own base units, but I don't know if a tight coupling between units and dimensions is the best solution. Could there be some sort of global setting (a facet, maybe?) where we could specify units for base dimensions for the entire application?
The quantity containers and unit typedefs can be easily customised by the user to whatever they prefer:
namespace my{
    typedef boost::pqs::length::mm distance;
    typedef boost::pqs::time::s time;
    typedef boost::pqs::velocity::mm_div_s velocity;
}
void f()
{
    my::velocity v = my::distance(1) / my::time(1);
}
That would remove the unit declarations from the code, but the program still wouldn't be unit agnostic. Say I wrote a function like this
my::area GetArea(my::length len, my::length width)
{
    return len*width;
}
If I used it in a program using SI units, it would work exactly like what I'm proposing, but if the calling program used imperial units, it would first convert len and width from feet to meters, then multiply them, then convert the result from square meters back to square feet. The results would be the same either way (assuming no rounding issues), but it adds three unnecessary conversions.
You could rewrite the function:

#include <boost/pqs/t1_quantity/types/length.hpp>
#include <boost/pqs/t1_quantity/types/out/area.hpp>
#include <boost/pqs/typeof_register.hpp>
#include <boost/type_traits/is_convertible.hpp>
#include <boost/utility/enable_if.hpp>
#include <boost/mpl/and.hpp>          // for boost::mpl::and_
#include <boost/typeof/typeof.hpp>    // for BOOST_AUTO
#include <iostream>

template <typename Out, typename In>
typename boost::enable_if<
    boost::mpl::and_<
        boost::is_convertible<In, boost::pqs::length::m>,
        boost::is_convertible<Out, boost::pqs::area::m2>
    >,
    Out
>::type
GetArea(In len, In width)
{
    return len * width;
}

namespace pqs = boost::pqs;

int main()
{
    pqs::length::ft length(2);
    pqs::length::ft width(1);
    BOOST_AUTO(result, GetArea<pqs::area::ft2>(length, width));
    std::cout << result << '\n';
}

but using SI units is more efficient, sure. The main use of imperial units is for input/output AFAICS. FWIW, you can also use enable_if to prevent unwanted conversions of the arguments if you wish, by using boost::is_same rather than is_convertible.
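For illustration, that is_same variation might look like this; a sketch only, reusing the pqs names from the example above:

#include <boost/type_traits/is_same.hpp>
#include <boost/utility/enable_if.hpp>

// exact-match version: the arguments must already be metres, so no argument
// conversion can happen silently inside GetArea
template <typename Out, typename In>
typename boost::enable_if<
    boost::is_same<In, boost::pqs::length::m>,
    Out
>::type
GetArea(In len, In width)
{
    return len * width;
}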
Both your example and mine have a common element though. They both involve moving between numeric values and dimensions. I can't think of an instance where units would be useful when that isn't the case. If that's true, then it seems like overkill to require bound units throughout the program. A simpler solution would be to prohibit dimension variables from being assigned or returning "raw" numbers. The only way to get or set a value would be through unit functions like this.
PQS quantities can't be converted to raw numbers.
double val = pqs::length::m(1); // Error
but a function is provided to get the numeric_value:
double val = pqs::length::m(1).numeric_value(); // Ok
Its name is quite long so it's easy to spot in code.
Thanks. I didn't see it in the docs, but I figured there had to be some fairly straightforward way to extract the value.
Oops... good catch. It's not in the docs!
Since units would be specified at the point of conversion, this method might even be a little safer. In pqs, you might do something like this
pqs::length::m depth;
// lots of code
It's not possible to initialise a double from a quantity in PQS:
double depthAsDouble(depth);
so the above will not compile.
Then the line would look like this:

double depthAsDouble = depth.numeric_value();
Yep!
SetFieldValue("DepthInFt", depthAsDouble);
cout << "Enter new depth (in feet): ";
cin >> depth;
Because depth is declared far from where it's output and reassigned, these mistakes would be easy to miss.
Not so, for the above reason!
But when I use numeric_value(), the problem still remains: there's nothing in that line to tell you the units of the extracted value. It's not essential that there should be, but since this is one place where the library can't protect the user against units errors, it would be nice if such errors were easier to recognize.
You can use the units_str(q) function:

#include <boost/pqs/t1_quantity/types/out/length.hpp>
#include <iostream>
#include <string>
#include <utility>

template <typename Q>
std::pair<std::string, double> SetFieldValue(Q q)
{
    std::pair<std::string, double> result(units_str(q), q.numeric_value());
    return result;
}

namespace pqs = boost::pqs;

int main()
{
    pqs::length::ft length(1);
    std::pair<std::string, double> field = SetFieldValue(length);
    std::cout << field.first << ' ' << field.second << '\n';
}

But sure, if you remove the units then it's up to you to deal with what the number means.
In PQS the units are part of the type. IOW the numeric value and its unit are always tightly coupled in code. That is a powerful feature. Once the numeric part of a quantity is disassociated from its unit, manual checking and external documentation are required, which is often the current situation wherever doubles are used to represent quantities. Manual checking doesn't always work as well as intended... That of course is why the Mars lander crashed!
I agree that unit checking can be an extremely useful feature, I'm just not convinced that tight coupling is necessary to achieve that.
There is another quantity planned where you can change the units at runtime, FWIW.
If my assumption that unit conversions are only needed when numeric values are assigned to or extracted from dimensions is correct, then dimensions don't really need to know about units. The units functions would need to have a concept of default dimension units so they'd know what they were converting from or to, but the dimensions themselves wouldn't know or care.
That would be the real benefit of this system. Reducing the number of conversions and making units explicit at the point of conversion may have some small benefit, but making the dimensions library essentially unitless seems like a major advantage.
It depends what unitless means. If you want to use just base units, you can. The input/output of units is a separate component, so you don't get it unless you want it.
Of course it's your library, and I'm not trying to dictate design. But if you're looking for a way to make the dimensions library essentially independent of units while retaining the unit safety of the current system, this might be a direction to consider.
It's difficult to see what you mean without a specific problem to consider.

regards
Andy Little

Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
"Beth Jacobson" wrote
Andy Little wrote:
In PQS the units are part of the type. IOW the numeric value and its unit are always tightly coupled in code. That is a powerful feature. Once the numeric part of a quantity is disassociated from its unit, manual checking and external documentation are required, which is often the current situation wherever doubles are used to represent quantities. Manual checking doesn't always work as well as intended... That of course is why the Mars lander crashed!
Agreed! Any code containing a numeric literal for a physical quantity, or any code that inserts or extracts a numeric value from a quantity, should be required to also specify the units of the numeric value. That's a critical part of the goal of the PQS library.
I agree that unit checking can be an extremely useful feature, I'm just not convinced that tight coupling is necessary to achieve that.
But I also agree here. The goal can be achieved without necessarily making the units part of the type. I'll explain.
There is another quantity planned where you can change the units at runtime, FWIW.
I mentioned much earlier that my own implementation of a quantities library was more like the so-called "t2_quantity" - at least the way I envisioned the t2_quantity. But as I read the discussion it occurs to me that my idea may not be the same as yours, and it might be useful for me to share what I have in mind. I hope it will shed some more light on the "length::m vs. length" debate.

A couple of people have suggested a syntax more like this:

length d = 5 * meter;
speed v = 2 * inch / second;
time t = d / v;
cout << "time = " << t / minute << " minutes";

I also like this usage - this is how my library works. As you've pointed out, Andy, this is trivial to achieve with the current PQS implementation, by using:

typedef pqs::length::m length;
typedef pqs::time::s time;
typedef pqs::velocity::m_per_s speed;

const length meter(1);
const length inch(0.0254);
const time second(1);
const time minute(60);

The problem is, it also allows sloppy code like this:

length d(5); // no error; but units not clearly documented!
speed v(2); // no error; but units not clearly documented!

so that the numeric values are now divorced from their units (which are defined above, in the typedefs), thus potentially leading to another Mars lander problem.

What I want is the syntax I illustrated, but to enforce specifying units along with numeric values. It occurs to me that perhaps this would work:

struct length : public pqs::length::m
{
    length( pqs::length::m x ) : pqs::length::m( x ) { }
    length& operator=( pqs::length::m rhs );
};

etc. I don't think this is a big departure from the current PQS concept. It's just a simple extension, really, but I think a very helpful one. As a user I'd rather have something like this provided for me than to have to write the extension myself for each type - length, time, speed, energy, etc.

The cost is that I may have more units conversions happening behind the scenes. But they only occur when dealing with the numeric values themselves - i.e., input/output. All my internal computations should be occurring with my types length, time, speed, etc. - which would be defined using a common set of units. For many users a little conversion cost in I/O is of no consequence, if most of the time is spent doing calculations on values in length, speed, etc. (Other users can use the existing types that are tied to units.)

One benefit is that I could write all my code to be independent of which units I choose, except where I need to specify a quantity numerically (in which case I must specify units, of course). If I later decide, say, the scale of my problem is more suited to eV than kJ (or if some of my code is reused in a differently-scaled application), I simply change the definitions for my "energy" class, etc., and recompile! As Beth Jacobson suggested, perhaps there could be an easy way for the user to select these "base units" for the entire application.

To me, having types like "length," "speed," etc. is also easier to understand. It might be a better starting place for illustrating the library to new users. Here's an example of how I think about physical quantities: My desk has a length. "Length in meters" or "length in inches" are not properties of the desk - only length. That length can then be expressed numerically in different units. A "length" object would represent the desk's length in a similarly abstract way, which could then be expressed in whatever units are desired for I/O.
Of course, internally the computer has to represent the length as a value in some unit, but that would be encapsulated in the type and thus transparent to the user. The same internal units would be used for all quantities, thus avoiding the overhead of type conversions (except on I/O).
If my assumption that unit conversions are only needed when numeric values are assigned to or extracted from dimensions is correct, then dimensions don't really need to know about units. The units functions would need to have a concept of default dimension units so they'd know what they were converting from or to, but the dimensions themselves wouldn't know or care.
Well said. I think this is the same as what I'm suggesting.
That would be the real benefit of this system. Reducing the number of conversions and making units explicit at the point of conversion may have some small benefit, but making the dimensions library essentially unitless seems like a major advantage.
Of course it's your library, and I'm not trying to dictate design. But if you're looking for a way to make the dimensions library essentially independent of units while retaining the unit safety of the current system, this might be a direction to consider.
I agree. -- Leland

"Leland Brown" wrote
Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
"Beth Jacobson" wrote
Andy Little wrote:
In PQS the units are part of the type. IOW the numeric value and its unit are always tightly coupled in code. That is a powerful feature. Once the numeric part of a quantity is disassociated from its unit, manual checking and external documentation are required, which is often the current situation wherever doubles are used to represent quantities. Manual checking doesn't always work as well as intended... That of course is why the Mars lander crashed!
Agreed! Any code containing a numeric literal for a physical quantity, or any code that inserts or extracts a numeric value from a quantity, should be required to also specify the units of the numeric value. That's a critical part of the goal of the PQS library.
I agree that unit checking can be an extremely useful feature, I'm just not convinced that tight coupling is necessary to achieve that.
But I also agree here. The goal can be achieved without necessarily making the units part of the type. I'll explain.
There is another quantity planned where you can change the units at runtime, FWIW.
I mentioned much earlier that my own implementation of a quantities library was more like the so-called "t2_quantity" - at least the way I envisioned the t2_quantity. But as I read the discussion it occurs to me that my idea may not be the same as yours, and it might be useful for me to share what I have in mind. I hope it will shed some more light on the "length::m vs. length" debate.
A couple of people have suggested a syntax more like this:
length d = 5 * meter;
speed v = 2 * inch / second;
time t = d / v;
cout << "time = " << t / minute << " minutes";
I also like this usage - this is how my library works. As you've pointed out, Andy, this is trivial to achieve with the current PQS implementation, by using:
typedef pqs::length::m length;
typedef pqs::time::s time;
typedef pqs::velocity::m_per_s speed;
const length meter(1);
const length inch(0.0254);
const time second(1);
const time minute(60);
The problem is, it also allows sloppy code like this:
length d(5); // no error; but units not clearly documented!
speed v(2); // no error; but units not clearly documented!
so that the numeric values are now divorced from their units (which are defined above, in the typedefs), thus potentially leading to another Mars lander problem.
I should clarify here that the units are still there in the type, although not visible in the source code:

std::cout << d << ' ' << v << '\n';

should output something like

5 m 2 m.s-1
What I want is the syntax I illustrated but to enforce specifying units along with numeric values.
It occurs to me that perhaps this would work:
struct length : public pqs::length::m
{
    length( pqs::length::m x ) : pqs::length::m( x ) { }
    length& operator=( pqs::length::m rhs );
};
The idea being that you would say:

length d = pqs::length::m(5);

rather than

length d(5); // Now an error

?

Would you want to allow this:

length d = pqs::length::mm(5); // initialise to 5mm converted to meters

? Then you would probably need this too:

pqs::length::ft d1 = d;

OTOH should

length d = pqs::length::mm(5);

be an error?

I can see that deriving from the current t1_quantity (now called fixed_quantity in Quan, the successor to PQS) could be beneficial for various uses. One use might be to prevent any conversions from base units, so that you would be allowed to work in base units only. I don't see that as a replacement of the current type, but rather maybe a companion.

Currently though, my priority is to finish off the original concept. It seems to work quite well, as well as being as flexible as possible. IMO some of the concerns re conversions are theoretical rather than practical. It should be remembered that the current type is much better than using a double in the role of a quantity, for the reasons mentioned in the docs. The ability to convert between units makes the fixed_quantity useful in a much wider variety of situations than if it were restricted to base units only. It also allows certain calculations to be just as efficient when using SI quantities with other than base units, but you need to read the semantics part of the docs quite carefully to find out what they are! Anyway, my feeling is that the conversion functionality gives the type a 'richness' that is satisfying for programmers to work with, and I'm not going to give that up. :-)
etc. I don't think this is a big departure from the current PQS concept. It's just a simple extension, really, but I think a very helpful one. As a user I'd rather have something like this provided for me than to have to write the extension myself for each type - length, time, speed, energy, etc.
I'm still unclear about the exact semantics you are looking for.
The cost is that I may have more units conversions happening behind the scenes. But they only occur when dealing with the numeric values themselves - i.e., input/output. All my internal computations should be occurring with my types length, time, speed, etc. - which would be defined using a common set of units. For many users a little conversion cost in I/O is of no consequence, if most of the time is spent doing calculations on values in length, speed, etc. (Other users can use the existing types that are tied to units.)
One benefit is that I could write all my code to be independent of which units I choose, except where I need to specify a quantity numerically (in which case I must specify units, of course). If I later decide, say, the scale of my problem is more suited to eV than kJ (or if some of my code is reused in a differently-scaled application), I simply change the definitions for my "energy" class, etc., and recompile!
Why not just use templates?

#include <boost/static_assert.hpp>
#include <boost/type_traits/is_convertible.hpp>

template <typename Energy>
void my_func( Energy e )
{
    BOOST_STATIC_ASSERT((boost::is_convertible<Energy, pqs::energy::J>::value));
    // use e
}

my_func(pqs::energy::kJ());
my_func(pqs::energy::eV());

As Beth Jacobson suggested, perhaps there could be an easy way for the user to select these "base units" for the entire application.
To me, having types like "length," "speed," etc. is also easier to understand. It might be a better starting place for illustrating the library to new users. Here's an example of how I think about physical quantities: My desk has a length. "Length in meters" or "length in inches" are not properties of the desk - only length. That length can then be expressed numerically in different units.
I would prefer to say it can *only* be expressed numerically if the units are known; otherwise the number is meaningless. *Assuming* what the unit is, is where the mistakes occur. The SI is pretty clear that a quantity must always have its units close by.

A "length" object
would represent the desk's length in a similarly abstract way, which could then be expressed in whatever units are desired for I/O. Of course, internally the computer has to represent the length as a value in some unit, but that would be encapsulated in the type and thus transparent to the user. The same internal units would be used for all quantities, thus avoiding the overhead of type conversions (except on I/O).
If my assumption that unit conversions are only needed when numeric values are assigned to or extracted from dimensions is correct, then dimensions don't really need to know about units. The units functions would need to have a concept of default dimension units so they'd know what they were converting from or to, but the dimensions themselves wouldn't know or care.
Well said. I think this is the same as what I'm suggesting.
That would be the real benefit of this system. Reducing the number of conversions and making units explicit at the point of conversion may have some small benefit, but making the dimensions library essentially unitless seems like a major advantage.
Of course it's your library, and I'm not trying to dictate design. But if you're looking for a way to make the dimensions library essentially independent of units while retaining the unit safety of the current system, this might be a direction to consider.
Having used the fixed_quantity aka t1_quantity and multi_unit_quantity aka t2_quantity for a while, I have found them to be very flexible in the ways that they can be used. The fixed_quantity and all the different predefined quantities in the headers represent a compile time database of SI quantities and units, and I have no doubt that this database will have a wide range of uses, but I will leave these up to the future user to explore, as there is a lot of work involved in just getting the original library functionality completed satisfactorily.

Hopefully I can get at least the CVS database for quan up onto Sourceforge in the next few days. http://sourceforge.net/projects/quan

regards
Andy Little

Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
"Leland Brown" wrote
Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
"Beth Jacobson" wrote
I agree that unit checking can be an extremely useful feature, I'm just not convinced that tight coupling is necessary to achieve that.
But I also agree here. The goal can be achieved without necessarily making the units part of the type.
<snip>
A couple of people have suggested a syntax more like this:
length d = 5 * meter;
speed v = 2 * inch / second;
time t = d / v;
cout << "time = " << t / minute << " minutes";
I also like this usage - this is how my library works. As you've pointed out, Andy, this is trivial to achieve with the current PQS implementation, by using:
typedef pqs::length::m length;
typedef pqs::time::s time;
typedef pqs::velocity::m_per_s speed;
const length meter(1);
const length inch(0.0254);
const time second(1);
const time minute(60);
The problem is, it also allows sloppy code like this:
length d(5); // no error; but units not clearly documented!
speed v(2); // no error; but units not clearly documented!
so that the numeric values are now divorced from their units (which are defined above, in the typedefs), thus potentially leading to another Mars lander problem.
I should clarify here that the units are still there in the type although not visible in the source code
True. I should have said, the units must still be part of the type, always. Here they're just not part of the type name, so that the user doesn't need to worry about units EXCEPT when needing a numeric value for a quantity, at which time you must decide what units you want for the numeric value.
What I want is the syntax I illustrated but to enforce specifying units along with numeric values.
The idea being so that you would say:
length d = pqs::length::m(5);

rather than

length d(5); // Now an error
?
Exactly.
Would you want to allow this:
length d = pqs::length::mm(5); // initialise to 5mm converted to meters
?
Yes, "length" should be compatible with any length units, like that.
Then you would probably need this too:
pqs::length::ft d1 = d;
Yes, I think so.
OTOH Should
length d = pqs::length::mm(5);
be an error ?
No, it should be allowed, because 5 mm is a valid length.
I can see that deriving from the current t1_quantity (now called fixed_quantity in Quan, the successor to PQS) could be beneficial for various uses.
<snip>
Currently though my priority is to finish off the original concept. It seems to work quite well, as well as being as flexible as possible.
Yes, that makes sense. My suggestion can easily be added as an extension later - it doesn't need to affect the current design.
I'm still unclear about the exact semantics you are looking for.
I hope it's a little more clear after answering your questions above.
Here's an example of how I think about physical quantities: My desk has a length. "Length in meters" or "length in inches" are not properties of the desk - only length. That length can then be expressed numerically in different units.
I would prefer to say it can *only* be expressed numerically if the units are known; otherwise the number is meaningless. *Assuming* what the unit is, is where the mistakes occur. The SI is pretty clear that a quantity must always have its units close by.
Yes, that's what I meant. Length is a property of the desk. But in order to express that length numerically, you need to specify the units. The numerical value is always tied to its units. The length may be expressed as 2 m, 200 cm, 2000 mm, or .002 km - but the length of the desk is the same. I just like to have a software abstraction that matches the physical abstraction of "length."
A "length" object
would represent the desk's length in a similarly abstract way, which could then be expressed in whatever units are desired for I/O.
<snip>
Having used the fixed-quantity aka t1_quantity and multi_unit_quantity aka t2_quantity for a while, I have found them to be very flexible in the ways that they can be used. The fixed_quantity and all the different predefined quantities in the headers represent a compile time database of SI quantities and units and I have no doubt that this database will have a wide range of uses, but I will leave these up to the future user to explore as there is a lot of work involved in just getting the original library functionality completed satisfactorily.
Yes, I'm sure there is! That sounds like a good plan. -- Leland

"Leland Brown" wrote
Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
<...>
I'm still unclear about the exact semantics you are looking for.
I hope it's a little more clear after answering your questions above.
Yes. I can see the point of what you are saying now. The new type might be called a base_quantity or something like that. When using it you could guarantee that length is always in meters, time in seconds and so on. The advantage would be that it is much simpler to understand what is going on, as you don't need to think about unit conversions and so on. It might also be useful where you need to guarantee that no conversions are taking place, such as if you wished to switch in a quantity for dimension checking of your calculations and switch it out for release. However, if you need to deal with other units, it should be borne in mind that it would be inferior both in speed and accuracy to the current fixed_quantity aka t1_quantity. The speed is probably not as critical as the accuracy issue.

(BTW I am in the process of stripping out and replacing the horrendous compile time unit conversion code from quan::fixed_quantity. Here is the latest version, in quan CVS at sourceforge, concept_08_July_2006-branch, of the code (with comments!) for doing a multiply of two fixed_quantities where the result is dimensioned: http://tinyurl.com/qvmaw . Now all I need to do is all the rest!)

The base_quantity (basic_quantity?) would also need its own operations defined, as derivation from fixed_quantity would otherwise mean any math would return a fixed_quantity, so it would probably need to be a standalone type. Nevertheless I can see it would be useful. Another thing on the todo list...

<...>
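To make the idea concrete, here is a rough sketch of what such a base_quantity might look like; this is hypothetical code, not Quan's, and only length and time dimensions are shown:

#include <iostream>

// dimension encoded as powers of the base dimensions; values are always
// stored in base units (metres, seconds), so no conversions ever occur
template <int LengthPow, int TimePow>
struct base_quantity
{
    double value;
    explicit base_quantity(double v) : value(v) {}
};

// multiplication adds the dimension exponents, returning another
// base_quantity rather than a fixed_quantity
template <int L1, int T1, int L2, int T2>
base_quantity<L1 + L2, T1 + T2>
operator*(base_quantity<L1, T1> a, base_quantity<L2, T2> b)
{
    return base_quantity<L1 + L2, T1 + T2>(a.value * b.value);
}

typedef base_quantity<1, 0> length;   // metres
typedef base_quantity<2, 0> area;     // square metres

int main()
{
    length len(2), width(3);
    area a = len * width;             // dimensions checked at compile time
    std::cout << a.value << " m2\n";
}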
Yes, that's what I meant. Length is a property of the desk. But in order to express that length numerically, you need to specify the units. The numerical value is always tied to its units. The length may be expressed as 2 m, 200 cm, 2000 mm, or .002 km - but the length of the desk is the same. I just like to have a software abstraction that matches the physical abstraction of "length."
Yes, I know exactly what you mean here! However AFAICS now the requirement is for (at least) 4 abstractions: fixed_quantity, multiunit_quantity, universal_quantity and base_quantity :-) (BTW I would love to call these fixed_quan, multiunit_quan, uni_quan, and base_quan :-). I guess maybe the fixed_quantity name will have to remain as it is now though, for stability :-( .)
A "length" object
would represent the desk's length in a similarly abstract way, which could then be expressed in whatever units are desired for I/O.
<snip>
Having used the fixed-quantity aka t1_quantity and multi_unit_quantity aka t2_quantity for a while, I have found them to be very flexible in the ways that they can be used. The fixed_quantity and all the different predefined quantities in the headers represent a compile time database of SI quantities and units and I have no doubt that this database will have a wide range of uses, but I will leave these up to the future user to explore as there is a lot of work involved in just getting the original library functionality completed satisfactorily.
I am currently (slowly) trying to write a roadmap for the Quan library. I will put a message up here when I have put it up somewhere. Meanwhile, here's the Quan docs URL again for those interested (the docs there are much older than CVS FWIW, which incidentally I found can be browsed with ViewCVS just like any normal html pages, which is quite cool):

http://quan.sourceforge.net/quan_matters/doc/html/index.html

regards
Andy Little

"John Phillips" wrote
There are also spaces where the units (or more accurately, the dimensions) are not the same in all directions, so any vectors in those spaces will have mixed units in any coordinate system. A commonly used one is called "phase space" and it includes the position and momentum variables for a system all in the same space. Thinking of them together turns out to be quite important in some applications, so the example can be quite meaningful for some people.
Discussion of the area of mathematical spaces beyond the everyday one brings up an important point regarding PQS and attempts to make it more *generic* re unit systems, but I can't explain it well. Nevertheless I will try:

The everyday system of units, as exemplified by the SI, is about the human scale and is relatively stable AFAICS. Moving further from what might be called the human scale, things become less and less certain. I conjecture that as one moves to other more esoteric unit systems then things are less well understood, even by physicists and mathematicians, and that would make working on a units library designed to encompass those systems much more difficult. Is not much of maths and physics a search to find models of those systems?

An important aim of PQS is to provide a standardised means of dealing with units. It is only advisable IMO to standardise things that are stable, but the further one goes into maths and physics the less stable things become, so my guess is that any standardised units library would be less satisfactory there.

Does that make sense?

regards
Andy Little

Andy Little wrote:
Discussion of the area of mathematical spaces beyond the everyday one brings up an important point regarding PQS and attempts to make it more *generic* re unit systems, but I can't explain it well. Nevertheless I will try:
The everyday system of units, as exemplified by the SI, is about the human scale and is relatively stable AFAICS. Moving further from what might be called the human scale, things become less and less certain.
I conjecture that as one moves to other more esoteric unit systems then things are less well understood, even by physicists and mathematicians, and that would make working on a units library designed to encompass those systems much more difficult. Is not much of maths and physics a search to find models of those systems?
Actually, I think your understanding of the pieces for generic unit systems is better than you realize. From a physicist's point of view, there are just a few choices that need to be made to create a unit system.

First is a choice of dimensions. What types of things are you going to measure, and how do they relate to each other? For example, the SI system chooses to measure length, time and mass as independent dimensions. This gives compound units for things like force, velocity and acceleration. Relativistic units make the choice that velocities should be unitless, so to make that happen, the units for length and time must be the same. This is not what our daily intuition would have us expect, but it is consistent with the theoretical description that unifies length and time. Energy units (The system used in almost all particle physics.) goes further and decides that everything should be measured in different powers of energy.

Once the dimensional choices are made, choices of preferred scale need to be made. SI units differ from cgs units and even Imperial units mostly because different scale choices were made. The scale choice for Relativistic units is that the speed of light should be exactly 1, and everything other than that should be SI based. In Energy units, the scale choice is an amount of energy called an Electron Volt (The amount of energy it takes to move one electron across a potential difference of one volt.). In this system, both the speed of light and Planck's constant are exactly 1.

To my understanding of what you have written, it already supports one specific choice of dimensional quantities (The choice made by the SI units.), and I think it could support other choices with minimal effort, since the choices can be phrased in terms of those made by the SI system. Given that choice of dimensional quantities, it supports scaling between different unit systems. At the moment, I think it does that scaling automagically when the values are compiled and does actual computations in SI units (Please correct me if this understanding is wrong.). I would prefer for unit conversions to only happen when explicitly requested, and for mixed unit expressions without requested conversions that match the units to produce errors.

So, there are a few differences between more generic systems and what you have done, but if I understand what you have done so far, they are not as big as you seem to think they are.

John
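A toy numeric illustration of the scale choice John describes (plain doubles, no dimension checking; the constant is the SI speed of light):

#include <iostream>

const double c_si = 299792458.0;   // speed of light in m/s

int main()
{
    double v_rel = 0.7;            // 70% of c: a bare number in relativistic units
    double v_si  = v_rel * c_si;   // rescaling to SI is a single multiply
    std::cout << v_si << " m/s\n"; // ~2.0985e8 m/s
}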
An important aim of PQS is to provide a standardised means of dealing with units. It is only advisable IMO to standardise things that are stable, but the further one goes into maths and physics the less stable things become, so my guess is that any standardised units library would be less satisfactory there.
Does that make sense?
regards Andy Little

On Tue, Jun 13, 2006 at 01:43:48PM -0400, John Phillips wrote:
From a physicists point of view, there are just a few choices that need to be made to create a unit system. [...]
Thanks for this nice overview. It really convinced me that my original view of "SI only" was too narrow. Regards -Gerhard -- Gerhard Wesp ZRH office voice: +41 (0)44 668 1878 ZRH office fax: +41 (0)44 200 1818 For the rest I claim that raw pointers must be abolished.

"John Phillips" wrote
Andy Little wrote:
I conjecture that as one moves to other more esoteric unit systems then things are less well understood, even by physicists and mathematicians, and that would make working on a units library designed to encompass those systems much more difficult. Is not much of maths and physics a search to find models of those systems?
Actually, I think your understanding of the pieces for generic unit systems is better than you realize. From a physicist's point of view, there are just a few choices that need to be made to create a unit system. First is a choice of dimensions. What types of things are you going to measure, and how do they relate to each other? For example, the SI system chooses to measure length, time and mass as independent dimensions. This gives compound units for things like force, velocity and acceleration.
[...] Relativistic units make the choice that velocities should be unitless, so to make that happen, the units for length and time must be the same. This is not what our daily intuition would have us expect, but it is consistent with the theoretical description that unifies length and time.
I'm unclear what you mean by velocity being unitless. Do you mean that velocity is treated as a dimensionless or numeric type in the relativistic system?

BTW I think I detect 2 definitions of unitless at various points in the PQS review. I think one definition that has been used is that a unitless type is a dimensionless or numeric type, while the other sees a unitless type as a dimensioned type which has the *default* units for that system. For example, in the SI system a meter is 'unitless' according to this 2nd definition, whereas pi is unitless according to the first definition.

It occurs to me that it is quite important to try to define these terms as precisely as possible. I spent some time trying to achieve that in PQS, in the definition of terms section, but looking back, and especially under the gaze of serious physicists, mathematicians, engineers (and maybe even physicians :-) ), my attempt has been very amateurish. However these (non C++) definitions are absolutely essential for clarity in a discussion of this sort, so if anyone has suggestions on sources for these sorts of terms that would be helpful.

Although the definition of terms section in PQS is poor (because, looking back, my terminology is extremely ad-hoc), nevertheless I am sure that trying to nail down precise definitions of terms has been essential in getting PQS as far as it has. In order to proceed though, I should ideally spend some time in research and then provide authoritative references etc. to justify my use of particular terms. That should give them weight and authority.

Energy units (The system used in almost all
particle physics.) goes further and decides that everything should be measured in different powers of energy. Once the dimensional choices are made, choices of preferred scale need to be made. SI units differ from cgs units and even Imperial units mostly because different scale choices were made. The scale choice for Relativistic units is that the speed of light should be exactly 1, and everything other than that should be SI based. In Energy units, the scale choice is an amount of energy called an Electron Volt (The amount of energy it takes to move one electron across a potential difference of one volt.). In this system, both the speed of light and Planck's constant are exactly 1.
OK. This seems to confirm your use of the term 'unitless' as equivalent to dimensionless or numeric. If the speed of light is a numeric type in the relativistic system, it leads me to wonder what math constants such as pi would mean (if anything) in this system? (Hmmm. Should I have asked that question? Perhaps I should just have shut up? Have I opened Pandora's box for myself? ;-) )
To my understanding of what you have written, it already supports one specific choice of dimensional quantities (The choice made by the SI units.), and I think it could support other choices with minimal effort, since the choices can be phrased in terms of those made by the SI system.
Maybe. Essentially the C++ representation of the dimension part of the quantity consists in an array (such as an mpl vector) of numbers representing a sequence of powers of the base dimensions. (Assume for example a system which only has the base dimension length; then the dimension length itself would have power 1, area power 2, and so on.)

The representation is acceptable if I only need to deal with one system, but if I am required to convert between systems, I need to add to the type representing a quantity some mechanism to tell me which system my quantity is a member of. As it happens I already have the so-called abstract-quantity, which encapsulates the dimension of a quantity together with a tag so that dimensionally-equivalent quantities can be distinguished for output purposes. The abstract-quantity could be extended to include information regarding the unit system, or the abstract-quantity could be an SI-abstract-quantity (IOW in C++ concept terms the si_abstract_quantity would be a model of AbstractQuantity). I could look into how this would work.

There are always costs involved though, as I said in a previous post. The main problem is that it increases the size of the ball-park quite considerably, to put it mildly. I would be wary of adding the functionality without adding some implementation, because if I do that I can (almost) guarantee it won't fit when someone tries to implement it, and then it will tend to hang around as a sort of useless tail which no one can get rid of.
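A bare-bones sketch of that representation, assuming a two-slot layout (length, then time) purely for illustration:

#include <boost/mpl/vector_c.hpp>
#include <boost/mpl/transform.hpp>
#include <boost/mpl/plus.hpp>
#include <boost/mpl/placeholders.hpp>
#include <boost/mpl/at.hpp>
#include <boost/static_assert.hpp>

namespace mpl = boost::mpl;

typedef mpl::vector_c<int, 1,  0> length_dim;    // L^1
typedef mpl::vector_c<int, 1, -1> velocity_dim;  // L^1 T^-1

// the dimension of a product is the element-wise sum of the power vectors
template <typename D1, typename D2>
struct multiply_dims
    : mpl::transform<D1, D2, mpl::plus<mpl::_1, mpl::_2> >
{};

// length * length has power 2 in the length slot, i.e. an area
typedef multiply_dims<length_dim, length_dim>::type area_dim;
BOOST_STATIC_ASSERT((mpl::at_c<area_dim, 0>::type::value == 2));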
Given that choice of dimensional quantities, it supports scaling between different unit systems. At the moment, I think it does that scaling automagically when the values are compiled and does actual computations in SI units (Please correct me if this understanding is wrong.).
It must be wrong currently, simply because I make the assumption in PQS that there is only one unit system (the SI system), so there is no mechanism in the type to distinguish between unit-systems. (This assumes we are using the term unit-system to refer to a very similar entity of course, which we may not be.)

Actually I think I am dealing with another unit system: the natural unit system. I have assumed that it uses the same dimensions as the SI system. I have therefore expressed quantities in natural units in terms of their scaling to SI units. An energy quantity expressed in electron volts for example would be expressed as an energy unit which if it had the numeric value 1 would be equivalent to 1.602177e-19 joules. The conversion factor is encoded in a compile time entity representing the unit.
I would prefer for unit conversions to only happen when explicitly requested, and for mixed unit expressions without requested conversions that match the units to produce errors.
First, the definition of what a unit conversion is, is problematic. It probably needs further explanation. However some unit conversions in the PQS t1_quantity are entirely lossless, due to the use of logarithmic notation to express powers of 10. For example, the result type of 1mm / 1 millisecond technically involves a conversion; however the conversion is entirely lossless in both runtime speed and accuracy, because the calculation is actually a compile time subtraction of -3 from -3, leaving the result in units equivalent to meters per second. This process is very similar to the way one would proceed in a manual calculation. Requiring the programmer to convert millimeters to meters and milliseconds to seconds before the calculation is cruel, and any good programmer will feel the ugliness of it!
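A minimal sketch of the mechanism (not PQS's actual implementation) showing why the mm / ms case costs nothing at runtime:

#include <iostream>

template <int Exp10>    // value is stored as coefficient * 10^Exp10
struct scaled
{
    double coeff;
    explicit scaled(double c) : coeff(c) {}
};

// division subtracts the exponents at compile time; for mm (10^-3 m) over
// ms (10^-3 s) the result exponent is (-3) - (-3) = 0, i.e. plain m/s,
// with no runtime scaling at all
template <int L, int R>
scaled<L - R> operator/(scaled<L> lhs, scaled<R> rhs)
{
    return scaled<L - R>(lhs.coeff / rhs.coeff);
}

int main()
{
    scaled<-3> d(1);        // 1 mm
    scaled<-3> t(1);        // 1 ms
    scaled<0> v = d / t;    // exponent folded away at compile time: 1 m/s
    std::cout << v.coeff << " m/s\n";
}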
So, there are a few differences between more generic systems and what you have done, but if I understand what you have done so far, they are not as big as you seem to think they are.
OK. My concern is that I can add the theoretical ability for PQS to encompass unit-systems other than the SI, but the only way to see if it actually works is to write a reasonable sized implementation for a reasonable number of systems. What you are asking for is, I know very well, a non-trivial expansion of the scope of the project! My solution to this is not even to attempt to go there. My goal is simply to cover the SI system and let others extend the project to more generic systems.

BTW thanks for all your comments and input, and I'm sorry that I keep sounding so negative, but I have good reasons for being sceptical, some of which I hope I have explained.

regards
Andy Little

On Wed, Jun 14, 2006 at 11:21:35AM +0100, Andy Little wrote:
I'm unclear what you mean by velocity being unitless. Do you mean that velocity is treated as a dimensionless or numeric type in the relativistic system?
I think that's what is implied, given space and time are the same. Regards -Gerhard -- Gerhard Wesp ZRH office voice: +41 (0)44 668 1878 ZRH office fax: +41 (0)44 200 1818 For the rest I claim that raw pointers must be abolished.

Andy Little wrote:
I'm unclear what you mean by velocity being unitless. Do you mean that velocity is treated as a dimensionless or numeric type in the relativistic system?
It means that for the sake of computation, it carries no units and no dimensions. So, if I wanted to talk about a velocity of a particle that is moving at 70% of the speed of light, and I make the scale choice that the speed of light is 1, then the velocity is v = 0.7. There are no units to include.

I agree with your point below that we need to be a little more careful about the meanings of some of the words we are using. I apologize for my tendency to use words the way I would with other physicists without defining them for what is largely a non-physicist audience.
BTW I think I detect 2 definitions of unitless at various points in the PQS review. I think one definition that has been used is that a unitless type is a dimensionless or numeric type, while the other sees a unitless type as a dimensioned type which has the *default* units for that system. For example in the SI system a meter is 'unitless' according to this 2nd definition, whereas pi is unitless according to the first definition.
...(snip)
OK . This seems to confirm your use of the term 'unitless' as equivalent to dimensionless or numeric. If the speed of light is a numeric type in the relativistic system it leads me to wonder what math constants such as pi would mean (if anything) in this system?
Pi, and e, and all the various others mean exactly the same things they have always meant, and have exactly the same values. All I'm doing is defining relationships between dimensions and selecting scales to measure those dimensions. No matter what scale I choose for the measurement, the ratio of the circumference of a circle to the diameter doesn't change. The values of things like pi and e are, in some sense that I think we should avoid trying to describe in detail on this list, more fundamental than any unit system.
...(snip)
Essentially the C++ representation of the dimension part of the quantity consists in an array (such as an mpl vector) of numbers representing a sequence of powers of the base dimensions. (Assume for example a system which only has the base dimension length; then the dimension length itself would have power 1, area power 2, and so on.)
So, a refinement of the system described in a few TMP sources. OK, that's what I thought it would be.
The representation is acceptable if I only need to deal with one system, but if I am required to convert between systems, I need to add to the type representing a quantity some mechanism to tell me which system my quantity is a member of. As it happens I already have the so-called abstract-quantity, which encapsulates the dimension of a quantity together with a tag so that dimensionally-equivalent quantities can be distinguished for output purposes. The abstract-quantity could be extended to include information regarding the unit system, or the abstract-quantity could be an SI-abstract-quantity (IOW in C++ concept terms the si_abstract_quantity would be a model of AbstractQuantity). I could look into how this would work. There are always costs involved though, as I said in a previous post. The main problem is that it increases the size of the ball-park quite considerably, to put it mildly. I would be wary of adding the functionality without adding some implementation, because if I do that I can (almost) guarantee it won't fit when someone tries to implement it, and then it will tend to hang around as a sort of useless tail which no one can get rid of.
I'm going to have to think more to make any useful comments about how it could be implemented, but thank you for the description.
Given that choice of dimensional quantities, it supports scaling between different unit systems. At the moment, I think it does that scaling automagically when the values are compiled and does actual computations in SI units (Please correct me if this understanding is wrong.).
It must be wrong currently, simply because I make the assumption in PQS that there is only one unit system (the SI system), so there is no mechanism in the type to distinguish between unit-systems. (This assumes we are using the term unit-system to refer to a very similar entity of course, which we may not be).
Actually I think I am dealing with another unit system: the natural unit system. I have assumed that it uses the same dimensions as the SI system. I have therefore expressed quantities in natural units in terms of their scaling to SI units. An energy quantity expressed in electron volts for example would be expressed as an energy unit which if it had the numeric value 1 would be equivalent to 1.602177e-19 joules. The conversion factor is encoded in a compile time entity representing the unit.
So, if I divide 1 eV by 1 s, will the answer of that computation automagically be in Watts (Energy / time = power)? Put a different way, are all calculations on that 1 eV carried out by first converting it to Joules, and then acting on the value in Joules? I again need to apologize for the sloppy language, but this is what I mean by converting to SI units. I am not concerned about your handling of SI prefixes (which I think is fine) nor do I wish to force people to explicitly select the prefix when two quantities that have prefixes but are already in SI units are used in a computation (though some people would probably like to be able to force a certain prefix under some conditions). I'm more worried about things like conversions between pounds and Newtons, or between rods and fathoms, where there is more to do than keep track of a power of 10.
I would prefer for unit conversions to only happen when explicitly requested, and for mixed unit expressions without requested conversions that match the units to produce errors.
First, the definition of what a unit conversion is, is problematic. It probably needs further explanation. However some unit conversions in the PQS t1_quantity are entirely lossless, due to the use of logarithmic notation to express powers of 10. For example, the result type of 1mm / 1 millisecond technically involves a conversion; however the conversion is entirely lossless in both runtime speed and accuracy, because the calculation is actually a compile time subtraction of -3 from -3, leaving the result in units equivalent to meters per second. This process is very similar to the way one would proceed in a manual calculation. Requiring the programmer to convert millimeters to meters and milliseconds to seconds before the calculation is cruel, and any good programmer will feel the ugliness of it!
So, there are a few differences between more generic systems and what you have done, but if I understand what you have done so far, they are not as big as you seem to think they are.
OK. My concern is that I can add the theoretical ability for PQS to encompass unit-systems other than the SI, but the only way to see if it actually works is to write a reasonable sized implementation for a reasonable number of systems. What you are asking for is I know very well a non trivial expansion of the scope of the project! My solution to this is not even to attempt to go there. My goal is simply to cover the SI system and let others extend the project to more generic systems.
I agree that what I'm asking for is non-trivial. And I can't promise that the increased use that would come from this expansion will justify your added time. I'm simply looking at it from the point of view of "what should it do to be the solution for my problems in my work" and adding in some of what I know is done by others in the physical sciences.
BTW thanks for all your comments and input, and I'm sorry that I keep sounding so negative, but I have good reasons for being sceptical, some of which I hope I have explained.
regards Andy Little
And thanks for your willingness to discuss them. I know that you put hours and sweat into this library, and everyone prefers to be told how well they did instead of getting into long discussions of what could or should be changed, so I'm sure there are times when this discussion gets tiresome for you. Thanks for continuing to be an active part of the conversation and giving your insight as someone who has actually done the work to implement some of these ideas. John Phillips

At 11:24 AM 6/14/2006, you wrote:
Andy Little wrote:
I'm unclear what you mean by velocity being unitless. Do you mean that velocity is treated as a dimensionless or numeric type in the relativistic system?
It means that for the sake of computation, it carries no units and no dimensions. So, if I wanted to talk about a velocity of a particle that is moving at 70% of the speed of light, and I make the scale choice that the speed of light is 1, then the velocity is v = 0.7. There are no units to include.
Just in case there is someone out there who needs a bit more explanation as to why this would be: everywhere in the equations of relativistic physics that a velocity (V) appears, it is divided by C. Numerical simplification can be gotten by choosing units of distance and/or time so that C comes out to be 1, but it would still have units of Rd/Rt, where Rd and Rt are the chosen units of distance and time. To come out dimensionally correct, therefore, the equations would still require a division by C (a numeric but not dimensional no-op). By using V/C, a unit-less quantity, to represent the concept of velocity, the equations themselves are simplified.

The cost, by the way, is less implicit checking via dimensional analysis. You could use an angle in radians (also dimensionless) where you meant a velocity and everything would be dimensionally correct.

Topher
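A standard textbook instance of the simplification Topher describes (added for illustration; this is the usual time-dilation factor, not something quoted from the thread). With explicit units the Lorentz factor is

    gamma = 1 / sqrt(1 - (V/C)^2)

while with the scale choice C = 1 the velocity becomes the dimensionless v = V/C, and the same factor reads

    gamma = 1 / sqrt(1 - v^2)

so the particle moving at 70% of the speed of light has v = 0.7 and gamma = 1 / sqrt(1 - 0.49), approximately 1.4.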

John Phillips said: (by the date of Wed, 14 Jun 2006 11:24:53 -0400)
And thanks for your willingness to discuss them. I know that you put hours and sweat into this library, and everyone prefers to be told how well they did instead of getting into long discussions of what could or should be changed, so I'm sure there are times when this discussion gets tiresome for you. Thanks for continuing to be an active part of the conversation and giving your insight as someone who has actually done the work to implement some of these ideas.
John Phillips
How true. I sign myself under the above statement too. -- Janek Kozicki

"John Phillips" wrote
Andy Little wrote: [..]
OK. This seems to confirm your use of the term 'unitless' as equivalent to dimensionless or numeric. If the speed of light is a numeric type in the relativistic system, it leads me to wonder what math constants such as pi would mean (if anything) in this system?
Pi, and e, and all the various others mean exactly the same things they have always meant, and have exactly the same values. All I'm doing is defining relationships between dimensions and selecting scales to measure those dimensions. No matter what scale I choose for the measurement, the ratio of the circumference of a circle to the diameter doesn't change. The values of things like pi and e are, in some sense that I think we should avoid trying to describe in detail on this list, more fundamental than any unit system.
OK :-) ... I'm happy to accept that. I am not a physicist! It strikes me that what we are discussing is a conjecture. The conjecture is (something like): the rules applied to the SI system in PQS can be applied to the relativistic units system. What you say above rings alarm bells for me that the conjecture is false, because PQS has no mechanism for distinguishing dimensionless types. However, not being a physicist, I don't want to go there and I probably have it all wrong... so I'm happy not to try to prove or disprove the conjecture. I'll leave that to someone else :-) [...]
Actually I think I am dealing with another unit system... the natural unit system. I have assumed that it uses the same dimensions as the SI system, and I have therefore expressed quantities in natural units in terms of their scaling to SI units. An energy quantity given in electronvolts, for example, would be represented as an energy unit which, if it had the numeric value 1, would be equivalent to 1.602177e-19 joules. The conversion factor is encoded in a compile time entity representing the unit.
So, if I divide 1 eV by 1 s, will the answer of that computation automagically be in Watts (Energy / time = power)? Put a different way, are all calculations on that 1 eV carried out by first converting it to Joules, and then acting on the value in Joules?
Not quite. The electronvolt is regarded by PQS as a so-called incoherent (non-SI) unit. In this case the quantity is first converted to the nearest coherent unit[*]. This is achieved simply by converting to a type with a unit which represents 1e-19 joules (effectively getting rid of the ugly 1.602177 multiplier). Let's call this type the 'cleaned electronvolt'. This involves scaling the internal floating point value at runtime (either a division or a multiplication by 1.602177, which offhand I forget; I think it's a multiplication). The type of the result is then calculated by subtracting the exponent of the denominator from the exponent of the numerator. The exponent of the 'cleaned electronvolt' is still -19. The exponent of 'second' is 0, so the result will have units of 1e-19 Watts. Finally the internal floating point value of the 'cleaned electronvolt' is divided by the numeric value of the second (1) and put as the numeric value of the resulting temporary. (I think it's just 1.602177.) Unfortunately I haven't implemented the eV unit yet to check. [*] (Unfortunately my definition of coherent unit is different from the SI's definition, so I will need to rename it.)
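A sketch of the sequence just described, again with hypothetical names rather than the real PQS types: the incoherent 1.602177 multiplier is applied once at runtime, while the 1e-19 exponent stays in the type:

#include <iostream>

template <int Exp10>
struct energy_q {                 // value held in units of 10^Exp10 joules
    double value;
    explicit energy_q(double v) : value(v) {}
};

template <int Exp10>
struct power_q {                  // value held in units of 10^Exp10 Watts
    double value;
    explicit power_q(double v) : value(v) {}
};

struct seconds_q {
    double value;
    explicit seconds_q(double v) : value(v) {}
};

// Convert the incoherent electronvolt to the 'cleaned' coherent unit,
// 1e-19 J: one runtime multiplication by 1.602177.
energy_q<-19> from_eV(double ev) { return energy_q<-19>(ev * 1.602177); }

// Energy / time: the exponent passes through at compile time (the second
// carries exponent 0), leaving a single runtime division.
template <int E>
power_q<E> operator/(energy_q<E> const& e, seconds_q const& t)
{
    return power_q<E>(e.value / t.value);
}

int main()
{
    power_q<-19> p = from_eV(1.0) / seconds_q(1.0);
    std::cout << p.value << " x 1e-19 W\n";  // prints approximately 1.60218
}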
I again need to apologize for the sloppy language, but this is what I mean by converting to SI units. I am not concerned about your handling of SI prefixes (which I think is fine) nor do I wish to force people to explicitly select the prefix when two quantities that have prefixes but are already in SI units are used in a computation (though some people would probably like to be able to force a certain prefix under some conditions). I'm more worried about things like conversions between pounds and Newtons, or between rods and fathoms, where there is more to do than keep track of a power of 10.
Yes. The best advice is to use SI units, as recommended by the SI, and PQS follows this advice by favouring the SI where possible. However, one major use of PQS is helping in the countless real everyday situations where, for whatever reason, other units are being used. [...]
OK. My concern is that I can add the theoretical ability for PQS to encompass unit-systems other than the SI, but the only way to see if it actually works is to write a reasonable sized implementation for a reasonable number of systems. What you are asking for is, I know very well, a non-trivial expansion of the scope of the project! My solution is not even to attempt to go there. My goal is simply to cover the SI system and let others extend the project to more generic systems.
I agree that what I'm asking for is non-trivial. And I can't promise that the increased use that would come from this expansion will justify your added time. I'm simply looking at it from the point of view of "what should it do to be the solution for my problems in my work" and adding in some of what I know is done by others in the physical sciences.
One way to proceed would be to concentrate on solving the problem satisfactorily only in that field. It should then be possible to see if there is anything more generic behind the different solutions.
BTW thanks for all your comments and input, and I'm sorry that I keep sounding so negative, but I have good reasons for being sceptical, some of which I hope I have explained.
regards Andy Little
And thanks for your willingness to discuss them. I know that you put hours and sweat into this library, and everyone prefers to be told how well they did instead of getting into long discussions of what could or should be changed, so I'm sure there are times when this discussion gets tiresome for you. Thanks for continuing to be an active part of the conversation and giving your insight as someone who has actually done the work to implement some of these ideas.
No problem, and thanks for your time, thoughts, explanations, etc. Maybe someone with a physics background will start some research on implementing a similar scheme with unit systems other than the SI. regards Andy Little

On Thu, Jun 15, 2006 at 12:09:44PM +0100, Andy Little wrote:
OK. This seems to confirm your use of the term 'unitless' as equivalent to dimensionless or numeric. If the speed of light is a numeric type in the relativistic system, it leads me to wonder what math constants such as pi would mean (if anything) in this system?
Pi, and e, and all the various others mean exactly the same things they have always meant, and have exactly the same values. All I'm doing is defining relationships between dimensions and selecting scales to measure those dimensions. No matter what scale I choose for the measurement, the ratio of the circumference of a circle to the diameter doesn't change. The values of things like pi and e are, in some sense that I think we should avoid trying to describe in detail on this list, more fundamental than any unit system.
OK :-) ... I'm happy to accept that. I am not a physicist! It strikes me that what we are discussing is a conjecture. The conjecture is (something like): the rules applied to the SI system in PQS can be applied to the relativistic units system. What you say above rings alarm bells for me that the conjecture is false, because PQS has no mechanism for distinguishing dimensionless types. However, not being a physicist, I don't want to go there and I probably have it all wrong...
...so I'm happy not to try to prove or disprove the conjecture. I'll leave that to someone else :-)
Proving is a bit much :p. Although it can easily be made plausible... it's almost axiomatic. The "rules" are that the exponent (dimension) of each unit of any such system is the same on either side of the equals sign of any equation. This will always be true when the units of that system are independent (and they have to be, or it wasn't a unit system). A rule is also that you cannot add two quantities that have different dimensions. Relativistic or not -- those rules simply apply.

Below I write square brackets around quantities with a dimension, and nothing around normal numbers.

[m] = m * [kg] = [m_0 / sqrt(1 - ([v]/[c])^2)]

Note that '1' is dimensionless, so this shouldn't compile if ([v]/[c])^2 isn't dimensionless, and thus if [v]/[c] isn't dimensionless, and thus if the dimensions of [v] and [c] weren't the same. But they are: [v] = v * [m/s], [c] = c * [m/s] --> [v]/[c] = v/c, thus

[m] = m * [kg] = [m_0 / sqrt(1 - (v/c)^2)] = [m_0] / sqrt(1 - (v/c)^2)

And because the dimension on either side of the '=' has to be the same, we need a 'kg' to the power 1 on the right-hand side as well for this to be legal. Obviously [m_0] = m_0 * [kg], so we have [kg] = [kg]. Check.

The only thing that is important for systems that those rules are applied to is that its different units cannot be expressed in terms of each other. For example, if a system has units U and V, then there shouldn't be ANY way to write U = f(V). If that *is* possible, then you can completely get rid of U: either U or V is not a unit, and the "system" with both U and V isn't a unit system.
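Those two rules lend themselves directly to a compile-time encoding. A minimal sketch (hypothetical, not PQS) where the metre and second exponents live in the type:

#include <cmath>
#include <iostream>

template <int M, int S>           // exponents of metre and second
struct quantity {
    double value;
    explicit quantity(double v) : value(v) {}
};

// Rule: addition is only defined when the exponents match on both sides.
template <int M, int S>
quantity<M, S> operator+(quantity<M, S> a, quantity<M, S> b)
{
    return quantity<M, S>(a.value + b.value);
}

// Rule: division subtracts exponents; like dimensions give <0, 0>,
// i.e. a plain dimensionless number such as v/c.
template <int M1, int S1, int M2, int S2>
quantity<M1 - M2, S1 - S2> operator/(quantity<M1, S1> a, quantity<M2, S2> b)
{
    return quantity<M1 - M2, S1 - S2>(a.value / b.value);
}

int main()
{
    quantity<1, -1> v(0.7 * 2.99792458e8);  // a velocity in m/s
    quantity<1, -1> c(2.99792458e8);        // the speed of light in m/s
    quantity<0, 0>  beta = v / c;           // dimensionless, as in 1 - (v/c)^2
    double gamma = 1.0 / std::sqrt(1.0 - beta.value * beta.value);
    std::cout << gamma << '\n';             // approximately 1.4
    // v + beta;   // would not compile: the dimensions differ
}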
false, because PQS has no mechanism for distinguishing dimensionless types.
All the built-in types? I'd use double (or Modular_Integer etc etc), for them :) -- Carlo Wood <carlo@alinoe.com>

On Thu, Jun 15, 2006 at 03:10:00PM +0200, Carlo Wood wrote:
false, because PQS has no mechanism for distinguishing dimensionless types.
All the built-in types? I'd use double (or Modular_Integer etc etc), for them :)
Or is what you meant that you CAN'T use Modular_Integer as dimensionless part? I think that (indeed) a PQS library should support ANY external type as base field, not just doubles. -- Carlo Wood <carlo@alinoe.com>

"Carlo Wood" wrote
On Thu, Jun 15, 2006 at 03:10:00PM +0200, Carlo Wood wrote:
false, because PQS has no mechanism for distinguishing dimensionless types.
All the built-in types? I'd use double (or Modular_Integer etc etc), for them :)
Or is what you meant that you CAN'T use Modular_Integer as dimensionless part?
When the result of an Op on 2 quantities is dimensionless, the type of the result is the same as quantity_lhs::value_type Op quantity_lhs::value_type
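A sketch of that rule in modern C++ (decltype standing in for typeof; the names are hypothetical, not the PQS source):

template <typename V>
struct quantity_t { V value; };

// Per the rule above, a dimensionless result takes the type of
// lhs::value_type Op lhs::value_type.
template <typename V1, typename V2>
auto operator/(quantity_t<V1> const& a, quantity_t<V2> const& b)
    -> decltype(V1() / V1())
{
    return a.value / b.value;   // implicitly converted to the stated type
}

int main()
{
    quantity_t<float>  a = { 3.0f };
    quantity_t<double> b = { 2.0 };
    float r = a / b;            // float, because the lhs value_type is float
    (void)r;
    return 0;
}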
I think that (indeed) a PQS library should support ANY external type as base field, not just doubles.
Not any... they have to model some Concept, but I need to write out the Concept requirements properly (FWIW there is a Value_type concept, which should be *ValueType*), and the requirements on value_types in the current implementation should be relaxed if possible. regards Andy Little

Carlo Wood said: (by the date of Thu, 15 Jun 2006 15:25:06 +0200)
On Thu, Jun 15, 2006 at 03:10:00PM +0200, Carlo Wood wrote:
false, because PQS has no mechanism for distinguishing dimensionless types.
All the built-in types? I'd use double (or Modular_Integer etc etc), for them :)
Or is what you meant that you CAN'T use Modular_Integer as dimensionless part?
I think that (indeed) a PQS library should support ANY external type as base field, not just doubles.
Oh yes! The ability to use some InfinitePrecision class, or even Boost.Rational numbers, is very important. Another question is the ability to switch underlying numerical data types at runtime, which I think shouldn't be allowed - that would make everything damn too complicated. I'm thinking now about free_quantity. -- Janek Kozicki
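A sketch of that point (hypothetical quantity template; the real interface will differ): nothing in the dimension bookkeeping requires double, so boost::rational can serve as the value_type provided it models the arithmetic the Concept demands:

#include <boost/rational.hpp>
#include <iostream>

template <typename ValueType>
struct length_m {                      // a length in metres, hypothetical
    ValueType value;
    explicit length_m(ValueType v) : value(v) {}
};

int main()
{
    typedef boost::rational<long> R;
    length_m<R> a(R(1, 3));            // one third of a metre, held exactly
    length_m<R> b(R(1, 6));
    length_m<R> c(a.value + b.value);  // exactly 1/2 m, no rounding error
    std::cout << c.value << " m\n";    // prints 1/2 m
}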
participants (24)
- Andy Little
- Beth Jacobson
- Bronek Kozicki
- Carlo Wood
- Daryle Walker
- Dave Steffen
- David Walthall
- Emil Dotchevski
- Felipe Magno de Almeida
- Geoffrey Irving
- Gerhard Wesp
- Hickerson, David A
- Janek Kozicki
- John Phillips
- John Phillips
- Leland Brown
- Matthias Troyer
- Michael Fawcett
- Michael Marcin
- Noel Belcourt
- Oleg Abrosimov
- Paul A Bristow
- Sebastian Redl
- Topher Cooper