
Practically speaking, when developing such a technically minded piece of software, approximately 30% of the development effort goes into the computation kernel, whereas the remaining 70% goes to the overall infrastructure, business layer, presentation layer (batch, GUI, API, ...), interoperability, productizing, etc. It is sad to say, but such is life.
A Boost library isn't mandated to completely solve all aspects of a given problem domain for all potential users. I could easily have written a comparably negative review of, for example, GIL because it doesn't meet my specific needs. However, it appears to be fine for the purposes that many users intend to use it for. Similarly, one could argue that the Boost quaternion/octonion library is too restricted - why not implement complete Clifford algebras instead? And, as has been noted, that library also does not provide transparent support for 3D graphics applications, which are the most likely candidates for real-world use of quaternions.

In any case, like most Boost authors, I wrote a library that was useful for my applications - in research, I spend basically all my time implementing and testing new equations and have little or no use for GUI features. I certainly am not arguing that what you're asking for is not useful - but if you wanted to submit a runtime unit library to Boost, should I reject it because it incurs runtime overhead that is not acceptable to me? At this point, unfortunately, a unit library either does or does not incur this runtime overhead - for some users it is acceptable and for others it is not. I would be delighted to see you put forth your library as a complementary contribution to Boost and help ensure interoperability. I don't really understand why this particular topic appears to have become a zero-sum game...
to implement. Once you are sure what you are doing, you go and implement it (using plain dimensionless floating-point arithmetic). That has worked well for over 50 years now.
The Mars Climate Orbiter team would disagree with this assessment... http://www.space.com/news/orbiter_error_990930.html
apply some fancy transforms beforehand to basically make all numbers dimensionless, but then you don't need a unit library anyway. Similar situations occur in all kinds of fluid dynamics, finite difference, finite element, and dynamic simulation codes and the like.
Ensuring that an equation is dimensionless is sufficient in all cases...
the units _really_ match in every possible case. And there are lots of possible cases. If they do not match, a conversion is usually acceptable, as it does not happen over and over again. We should also talk about serialisation aspects here. Language-dependent output to files (as suggested for future development in the documentation) is an absolute no-go: otherwise a file saved by one engineer cannot be read by his colleague, simply because they do not have the same language setting.
The current submission fully supports Boost.Serialization, which provides language-independent input and output of units and quantities.
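For example, something along these lines should round-trip a quantity through a text archive (a sketch only; header, namespace and constant names follow my reading of the code and may differ in detail):

#include <fstream>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/units/quantity.hpp>
#include <boost/units/io.hpp>              // assumed location of the serialize() overloads
#include <boost/units/systems/si.hpp>

using namespace boost::units;

int main()
{
    quantity<si::length> d = 2.0 * si::meters;

    {
        std::ofstream ofs("length.txt");
        boost::archive::text_oarchive oa(ofs);
        oa << d;                            // locale/language independent
    }

    quantity<si::length> d2;
    {
        std::ifstream ifs("length.txt");
        boost::archive::text_iarchive ia(ifs);
        ia >> d2;                           // d2 == 2.0 m
    }
    return 0;
}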
Furthermore, many times you do not know, while writing the software, what kind of quantity you are actually handling. This is not so much the case for computation kernels, but more for backend storage that abstractly hands the results of one computation over to some other computation, to postprocessing tools, or the like. Nevertheless, correct unit handling is indispensable.
If you are going to work in a fixed internal unit system, you can use explicit unit conversion through the quantity constructor to handle user input.
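For example, to accept lengths in centimeters from the user while computing internally in SI (again a sketch; system and constant names are assumed from the documentation):

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si.hpp>
#include <boost/units/systems/cgs.hpp>

using namespace boost::units;

quantity<si::length> read_length_in_cm(double user_value)
{
    quantity<cgs::length> input = user_value * cgs::centimeters;
    return quantity<si::length>(input);   // explicit conversion cgs -> SI, done once at the boundary
}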
distinguishable somehow, but the author does not tell us what type the product (2*meter)*(5*Newton) should actually have, or how a compiler could tell the difference.
2 is a scalar value and 5 is a scalar value, so 10*Newton*meter is an energy. Torque is a pseudovector, so two value types whose product forms a pseudovector would result in a torque. No library can substitute for a complete understanding of the problem domain.
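In code the compiler deduces the dimension of the product, e.g. (a sketch, with SI system names assumed):

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si.hpp>

using namespace boost::units;

quantity<si::length> d = 2.0 * si::meters;
quantity<si::force>  F = 5.0 * si::newtons;
quantity<si::energy> W = d * F;   // 10 J; dimensionally N*m, interpreted here as work
// a torque would be distinguished by its (pseudovector) value type, not by a different unit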
The author states that the intent is obviously F = 2 m kg s^(-2). In contrast to the author, I personally would naturally expect something like F = 2.0 [N].
It would be relatively easy to generate specializations for unit output that address the named derived units in the SI system. However, in general a unit may be expressed in many ways if you allow a non-orthogonal basis, so there is no general solution that can unambiguously assign a set of fundamental and derived units to it. You could possibly reduce it to the minimal combination of fundamental and derived units, but this may not give the user what they expect, either...
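The shape such a specialization could take, with an illustrative trait name that is not part of the submission:

#include <string>
#include <boost/units/systems/si.hpp>

template<class Unit> struct unit_symbol;                // primary template left undefined

template<> struct unit_symbol<boost::units::si::energy>
{
    static std::string value() { return "J"; }          // instead of "m^2 kg s^-2"
};

template<> struct unit_symbol<boost::units::si::force>
{
    static std::string value() { return "N"; }
};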
I have yet to see any stringent need for rational exponents. They rather obscure the physical meaning of certain terms, imho.
Some people need them - noise spectral densities, for example, are conventionally quoted in volts per root hertz, i.e. V/Hz^(1/2).
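A sketch of how such a quantity could be formed; the root<>/static_rational machinery here follows my reading of the documentation and may differ in detail:

#include <boost/units/quantity.hpp>
#include <boost/units/pow.hpp>
#include <boost/units/static_rational.hpp>
#include <boost/units/systems/si.hpp>

using namespace boost::units;

// unit with a fractional exponent: V * Hz^(-1/2)
typedef root_typeof_helper<si::frequency, static_rational<2> >::type root_hertz;
typedef divide_typeof_helper<si::electric_potential, root_hertz>::type noise_density_unit;

// amplifier input noise density, e.g. 4 nV per root-hertz
quantity<noise_density_unit> density =
    (4.0e-9 * si::volts) / root<2>(1.0 * si::hertz);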
To define all possible conversions between n unit systems, you'll need on the order of O(n^2) helper structs. I may be overlooking some more elaborate automatism, though. However, we are likely to face a larger number of unit systems, as already the conversion of "cm" to "m" requires different systems.
There is no way around this if you want to avoid having a common unit system through which all conversions occur. The reasons why that is a bad idea have been exhaustively discussed in previous conversations on this topic on the mailing list. Furthermore, we may want to allow implicit conversions to go in only one direction, so two specializations are needed for each pair of units.
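The shape of one such pair-wise specialization, with illustrative names (the actual trait in the submission may look different):

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si.hpp>
#include <boost/units/systems/cgs.hpp>

using namespace boost::units;

template<class From, class To> struct convert_quantity;    // no general case on purpose

// cm -> m only; the reverse direction would need its own specialization
template<> struct convert_quantity<cgs::length, si::length>
{
    static quantity<si::length> apply(const quantity<cgs::length>& q)
    {
        return (q.value() * 1.0e-2) * si::meters;
    }
};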
The handling of temperatures (affine units in general) is correct but not very practical.
I'd be happy to hear of a more practical solution that preserves the zero runtime overhead of the library.
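One possible zero-overhead scheme, sketched here independently of the submission and with purely illustrative names: distinguish absolute temperature points from temperature differences at the type level, so the affine offset is applied exactly once, at conversion time.

struct celsius_tag {};
struct kelvin_tag  {};

template<class Tag> struct temperature_point { double value; };   // absolute (affine) temperature
template<class Tag> struct temperature_delta { double value; };   // temperature difference

// point - point = delta; adding two points is (correctly) not defined
template<class Tag>
temperature_delta<Tag> operator-(temperature_point<Tag> a, temperature_point<Tag> b)
{
    temperature_delta<Tag> d = { a.value - b.value };
    return d;
}

// the affine offset is applied once, at the conversion boundary
inline temperature_point<kelvin_tag> to_kelvin(temperature_point<celsius_tag> t)
{
    temperature_point<kelvin_tag> k = { t.value + 273.15 };
    return k;
}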
From a technical standpoint, this library is probably good. My objections are at the semantic level.
Thanks for your input. Matthias