
"Leland Brown"wrote
Andy Little <andy <at> servocomm.freeserve.co.uk> writes:
[...]
For anyone who scratches their head as to what t3_quantity would be good for, here's one answer! It can be used with existing linear algebra libraries with a minimum of effort. (It would still be good to use t3_quantity only when necessary, however, because of the run-time penalty.)
OK. That is the sort of functionality I saw for the t3_quantity. In a simple implementation it would need to carry its dimension and unit information around at runtime though AFAICS, which would be quite an overhead in terms of size and speed.
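As a rough illustration of that overhead (this is only a sketch of one possible layout, not the actual t3_quantity), a run-time-checked quantity has to store its dimension exponents alongside the value, and every arithmetic operation has to combine or check them:

// Sketch of a run-time-dimensioned quantity (illustrative only).
struct rt_quantity
{
    double value;
    signed char dim[3];   // exponents of length, mass, time
};

rt_quantity operator*(rt_quantity const & a, rt_quantity const & b)
{
    rt_quantity r;
    r.value = a.value * b.value;
    for (int i = 0; i != 3; ++i)
        r.dim[i] = a.dim[i] + b.dim[i];   // extra work on every multiply
    return r;
}

Even this minimal version doubles the storage of a plain double (sizeof goes from 8 to 16 once padding is included) and adds several integer operations per multiply; addition would also need a run-time check that the exponents match.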
blas::vec3<float>  a(1, 0, 0);
blas::vec3<double> b(0, 1, 0);
blas::vec3<double> result = cross_product(a, b);  // returns vec3<double>!
template <typename LHS, typename RHS>
typename boost::result_of_plus<LHS, RHS>::type
operator +(const LHS &lhs, const RHS &rhs)
{
    typename boost::result_of_plus<LHS, RHS>::type nrv(lhs);
    nrv += rhs;
    return nrv;
}
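For context, a minimal sketch of how such a trait might be written (result_of_plus is not an existing Boost component as far as I know; the form below is just a guess, and the names are purely illustrative). The primary template is left undefined and the allowed combinations are supplied as specializations, which a quantity or vector library could then extend for its own types:

// Illustrative sketch of a result_of_plus-style trait.
template <typename LHS, typename RHS>
struct result_of_plus;                               // no general definition

template <typename T>
struct result_of_plus<T, T> { typedef T type; };     // same type on both sides

// explicit promotions for built-in types...
template <> struct result_of_plus<float, double> { typedef double type; };
template <> struct result_of_plus<double, float> { typedef double type; };

// ...and a quantity or vector library adds its own specializations, e.g.
// result_of_plus< vec3<float>, vec3<double> > yielding vec3<double>.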
In my work I implemented something very similar to this - but only for vectors with elements of homogeneous dimensions (e.g., a length vector, a velocity vector). When I have a vector of mixed quantities (length, velocity, time, dimensionless - together in one vector), I end up going to something more like t3_quantity. (FYI, this occurs in the context of least-squares estimation of model parameters, where the parameters are of various dimensions.)
Having only investigated transform matrices, I had hoped that the integrity of quantities could be maintained in most cases, but I haven't done extensive experiments. In the case of vectors, I have only used vectors where all elements are one type of quantity; the vectors are used to represent position, direction and so on in 3 dimensions. A container that holds different quantities I would consider to be a tuple. But I stress I am not an expert. [...]
The problem is that this is quite a heavy modification to ask of a numeric matrix library, especially when most users will be using it for numeric values.
Yes, unfortunately, that's true. It will make it very hard to integrate any dimensional analysis library with any existing matrix library.
The point of all this is that the price of strong type checking (i.e. using PQS rather than numerics) may be discarding all your current libraries, which are based on the assumption of numerics. That is quite a high price!
Again, unfortunately true. (But not because of bad PQS implementation! AFAICS the situation would be the same with any strongly-typed dimensional analysis library.)
The question then is: when are the benefits of strong type checking justified (so use a quantity type), and when aren't they (so use a float type)? That would be a good question to answer in the PQS docs AFAICS, but not a trivial one.
I don't know any solution to this problem (except to exit the strong type checking), but basically there it is.
You can exit the strong type checking, or you can pay the performance penalty for t3_quantity. In effect what I ended up doing was both - by putting a compile-time switch in my t3_quantity to turn off the dimensional analysis. Then I get the best of both worlds - I live with the performance penalty for the sake of the dimensions-checking (and enforced documentation of units) until my formulas are debugged, and then I get the benefit of the speed in my production code - all while using a matrix library that doesn't care about any of this. (And BTW, it did find several bugs in my computations by flagging dimensions problems!)
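One way such a switch might look (a minimal sketch with invented names, not Leland's actual code): the scalar type used by the formulas is chosen once, at compile time, so the same source builds either with checking or as plain numerics:

#include <cassert>

#ifdef CHECK_DIMENSIONS
// Debug builds: carry a dimension tag and check it on every addition.
struct scalar
{
    double value;
    int    dim;                 // crude dimension encoding, for illustration
};
inline scalar operator+(scalar a, scalar b)
{
    assert(a.dim == b.dim);     // mismatched dimensions are caught here
    scalar r = { a.value + b.value, a.dim };
    return r;
}
#else
// Production builds: the same name is just a double - no overhead,
// and the matrix library sees ordinary numerics.
typedef double scalar;
#endif

The computations are written against "scalar" throughout, so flipping the macro is the only change between the checked build and the fast build. (Leland's version presumably keeps the t3_quantity interface and just compiles the checking away, but the effect is the same.)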
That sounds like an interesting usage. I would guess that the only problem, apart from slow performance, would be that the t3_quantity would use a lot of space compared with a float, which would have an impact if used in some situations.

regards
Andy Little