
Andy Little wrote (in an earlier post):
OK, what I was going on to say was that this is only the beginning of the complexities involved in strong type checking on physical quantities.
Yeah, and wait till you hear this one! It gets worse - though I also have a solution that works for me. Read on, if you dare :-) Warning - this post is primarily for the more mathematically inclined, though I think it brings up an important issue that anyone wanting to implement a matrix library on dimensioned quantities needs to think about.

John Phillips <phillips <at> mps.ohio-state.edu> writes:
Maybe I'm not getting it yet, but I don't think this gets you out of the woods. The problem I see is that matrices are used for many things other than just transformations. For example, a user might want to diagonalize a matrix, or find the eigenvalues, or any number of other things. If so, having units on the matrix becomes problematic, I think.
With certain restrictions, this can be done. But it brings up other problems. Consider: it's a basic tenet of matrix algebra that for a square matrix A (as long as it has an inverse), both of these are true:

    A * inv(A) = I
    inv(A) * A = I

where I is the identity matrix of the same size as A. Now suppose A is a matrix of elements having different dimensions. For instance, if A is a 2x2 covariance matrix of position and velocity values, it might have units like

    A:  m^2      m^2/s
        m^2/s    m^2/s^2

Its inverse has well-defined dimensions, with the units you might expect:

    inv(A):  1/m^2    s/m^2
             s/m^2    s^2/m^2

But if you multiply A * inv(A) or inv(A) * A, you'll get two different products, neither of which is the identity matrix!!! The results are:

    A * inv(A) =  1      0 s      (I call this "I1" below)
                  0/s    1

    inv(A) * A =  1      0/s      (I call this "I2" below)
                  0 s    1

    I =  1  0
         0  1

Of course, we all learned in high school physics that "0" and "0 seconds" are *not* the same thing, and Andy's library faithfully embodies that truth. Thus, we have three *different* matrix values above.

To make matters worse - for the same reason, neither A * I = A nor I * A = A holds. In fact, neither expression (A * I) nor (I * A) will compile, because they both involve addition of quantities with different dimensions. For example, the top-left element of A * I would be (m^2 * 1) + (m^2/s * 0), which tries to add m^2 to m^2/s. This addition would be (correctly, I think) disallowed by the library, even though in this particular case one of the addends has a numerical value of zero. To multiply A by the identity, we would need (I1 * A) or (A * I2). What a mess!

BTW, I believe John's example of Gaussian elimination may fail not only because of things like pivoting (as he points out), but even without that, because of this issue - addition or subtraction of values where one is numerically exactly zero but has the wrong dimensions.

So what's a programmer to do? In my case, I was already using something akin to "t3_quantity" for these "mixed" matrices, and in that context I came up with the following solution: I introduced a special value that a t3_quantity can hold, called an adimensional zero. Normally, a value of my t3_quantity has specific dimensions (or is specifically dimensionless), even if its numerical value happens to be zero. An adimensional zero is a special case: its dimensions are undefined, ambiguous. Semantically, this means it can be added to a value of any dimension (zero + x = x, of course, with the dimensions of the result preserved), and when multiplied by any value the result is another adimensional zero (zero * x = zero). I define a constant called "zero," and then define the identity matrix as

    I =  1     zero
         zero  1

Now (I1 == I) and (I2 == I) both evaluate to true, (A * I) and (I * A) both compile and give A as the result, etc. Problem solved! It works out quite well in all my linear algebra - but sadly, at the price of increasing the run-time overhead already required by my "t3_quantity."

Anybody got a better idea?

-- Leland
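P.S. For anyone who wants to see the idea more concretely, here is a minimal sketch of an "adimensional zero" - emphatically *not* my actual t3_quantity, just a toy where the run-time dimensions are reduced to metre/second exponents, so all the names and layout below are illustrative only:

    // Toy sketch of a run-time dimensioned quantity with an "adimensional
    // zero" value.  Dimensions are tracked as exponents of (metre, second).
    #include <cassert>
    #include <stdexcept>

    struct quantity
    {
        double value;      // numeric value
        int    m, s;       // dimension exponents: metre^m * second^s
        bool   adim_zero;  // true => the special "adimensional zero"

        quantity(double v, int m_, int s_)     // an ordinary dimensioned value
            : value(v), m(m_), s(s_), adim_zero(false) {}

        static quantity zero()                 // the adimensional zero constant
        {
            quantity q(0.0, 0, 0);
            q.adim_zero = true;
            return q;
        }
    };

    // zero + x = x (result takes x's dimensions); otherwise dimensions must match
    quantity operator+(quantity a, quantity b)
    {
        if (a.adim_zero) return b;
        if (b.adim_zero) return a;
        if (a.m != b.m || a.s != b.s)
            throw std::runtime_error("adding quantities of different dimensions");
        return quantity(a.value + b.value, a.m, a.s);
    }

    // zero * x = zero; otherwise multiply values and add dimension exponents
    quantity operator*(quantity a, quantity b)
    {
        if (a.adim_zero || b.adim_zero) return quantity::zero();
        return quantity(a.value * b.value, a.m + b.m, a.s + b.s);
    }

    // an adimensional zero compares equal to any quantity whose numeric value is zero
    bool operator==(quantity a, quantity b)
    {
        if (a.adim_zero || b.adim_zero)
            return a.value == 0.0 && b.value == 0.0;
        return a.value == b.value && a.m == b.m && a.s == b.s;
    }

    int main()
    {
        quantity x(2.5, 2, -1);                // 2.5 m^2/s
        quantity z = quantity::zero();

        assert((z + x) == x);                  // zero + x = x, dimensions kept
        assert((x * z) == quantity::zero());   // zero * x = zero
        return 0;
    }

The equality rule above (an adimensional zero compares equal to anything whose numeric value is zero) is one possible choice, and it's what lets (I1 == I) and (I2 == I) come out true; a real library would also have to decide how zero behaves under subtraction, division, and so on.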