
AMDG

Noah Roberts <roberts.noah <at> gmail.com> writes:
Steven Watanabe wrote:
I may be wrong, but in order to avoid any excess loss of precision you have to store a set of all the base units and track the actual unit at runtime, involving a merge with every multiply or divide. This kind of overhead is not always acceptable.
I don't understand what you are saying.
Suppose that you store the actual value and the conversion factor:

    struct quantity {
        double conversion_factor;
        double value;
    };

Now multiplying two quantities requires two multiplications. Alternately, you convert to SI before doing the multiplication and return a different type. Either way you introduce extra operations, thus reducing the precision of the result. The only way I can think of to enable runtime units without losing precision is:

    struct base_unit {
        double conversion_factor;
    };

    struct unit_impl {
        typedef boost::rational<int> exponent_t;
        // keep this sorted to allow a merge
        std::vector<std::pair<base_unit*, exponent_t> > impl;
        double conversion_factor;
    };

    static std::set<unit_impl*> all_units;

    typedef boost::shared_ptr<unit_impl> unit;

    struct quantity {
        unit u;
        double value;
    };

Now all unit multiplications add the exponents of identical dimensions. Every time you create a new unit you look it up to see whether an identical unit has already been created; if so, you return a pointer to the existing one. This is so that you can explicitly set the conversion factors for complex units and thus get maybe another bit of precision.
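For illustration only, a minimal compilable sketch of the merge described above could look like the following. It is not taken from the proposal itself: it uses plain int exponents instead of boost::rational, stores unit_impl by value rather than through a shared pointer, and omits the all_units lookup, so it only shows how multiplying two quantities adds the exponents of identical base units while performing a single multiplication on the stored values.

    #include <algorithm>
    #include <iostream>
    #include <utility>
    #include <vector>

    // A base dimension with its conversion factor to the corresponding SI unit.
    struct base_unit {
        double conversion_factor;
    };

    // Sorted (base_unit*, exponent) pairs plus the unit's overall conversion factor.
    struct unit_impl {
        typedef int exponent_t;  // boost::rational<int> in the original sketch
        std::vector<std::pair<base_unit*, exponent_t> > impl;  // kept sorted by base_unit*
        double conversion_factor;
    };

    // Merge two sorted exponent lists, adding the exponents of identical base
    // units and multiplying the conversion factors; dimensions that cancel are dropped.
    unit_impl multiply(const unit_impl& lhs, const unit_impl& rhs) {
        unit_impl result;
        result.conversion_factor = lhs.conversion_factor * rhs.conversion_factor;
        std::size_t i = 0, j = 0;
        while (i < lhs.impl.size() && j < rhs.impl.size()) {
            if (lhs.impl[i].first < rhs.impl[j].first) {
                result.impl.push_back(lhs.impl[i++]);
            } else if (rhs.impl[j].first < lhs.impl[i].first) {
                result.impl.push_back(rhs.impl[j++]);
            } else {
                int e = lhs.impl[i].second + rhs.impl[j].second;
                if (e != 0)
                    result.impl.push_back(std::make_pair(lhs.impl[i].first, e));
                ++i; ++j;
            }
        }
        for (; i < lhs.impl.size(); ++i) result.impl.push_back(lhs.impl[i]);
        for (; j < rhs.impl.size(); ++j) result.impl.push_back(rhs.impl[j]);
        return result;
    }

    struct quantity {
        unit_impl u;  // by value here; the original sketch shares unit_impl via a pointer
        double value;
    };

    // The single multiplication of the stored values is the only operation
    // applied to the numbers themselves, so no extra rounding is introduced.
    quantity operator*(const quantity& a, const quantity& b) {
        quantity r;
        r.u = multiply(a.u, b.u);
        r.value = a.value * b.value;
        return r;
    }

    int main() {
        base_unit metre = { 1.0 };
        base_unit second = { 1.0 };

        unit_impl metres;                                   // m^1
        metres.impl.push_back(std::make_pair(&metre, 1));
        metres.conversion_factor = 1.0;

        unit_impl per_second;                               // s^-1
        per_second.impl.push_back(std::make_pair(&second, -1));
        per_second.conversion_factor = 1.0;

        quantity length = { metres, 3.0 };
        quantity rate = { per_second, 2.0 };
        quantity speed = length * rate;                     // 6.0 in m s^-1
        std::cout << speed.value << "\n";
    }

A real implementation would presumably use std::less<base_unit*> for a guaranteed strict ordering of the pairs and would intern the resulting unit_impl in all_units so that a hand-tuned conversion factor can be reused.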
I've never seen that stated. I don't believe it either. If conversions are not the point of the library, then why does a very significant portion of the library deal with units and conversions?
A lot of code is dedicated to conversions because they are rather difficult to implement given the current representation. Most of it is in detail/conversion_impl.hpp, which is highly repetitious. If I ever get around to simplifying it, the portion of the library dealing with conversions will appear much smaller.

In Christ,
Steven Watanabe