
What is your evaluation of the design?
For a first iteration it's good. There is room for improvement but I think use needs to guide that. However, I feel that the library should use MPL standard operators instead of creating new ones. This has been discussed and deemed impossible but it is not.

What is your evaluation of the implementation?
It seems to be slower to compile than competitors by quite a margin. On the other hand, this library is more generic and can handle arbitrary dimensions.

What is your evaluation of the documentation?
More instruction on what constitutes a unit and quantity so that a developer may override these concepts and use them with the library.

What is your evaluation of the potential usefulness of the library?
It would be more useful with runtime conversions. I don't believe static conversions will be that useful.

Did you try to use the library?
I played with it, I did not use it in any real project.

With what compiler?
MSVC 8 and g++

Did you have any problems?
With optimizations on full the library performed almost as well as a double. Next to 0 overhead but not quite. Without these optimizations it performed quite poorly, which could make debugging difficult. I could not find enough optimizations with the Microsoft compiler to make this library usable; 0 overhead was definitely not 0 overhead. Compilation overhead could be a bear for large projects.

How much effort did you put into your evaluation?
About 10 hours in which I tested various performance issues and tried to use it in various ways I need it to work. It performs moderately well but unless I can find the correct switches for the MS compiler it won't be useful to me. It also doesn't do everything I need and even after digging through the implementation I am not sure it will be useful or easy to even extend it.

Are you knowledgeable about the problem domain?
I do not work in a real time project where precision is of utmost importance; in fact we often round values. I work in fluid flow analysis, which is true scientific computing in the sense that it is a set of "close enough" guesses to predict the real world with a fairly high degree of certainty. This needs to be fast, so can't pay the expense of constant conversions, but doesn't have the same needs as some of the issues that have come up in discussion.

Do you think the library should be accepted as a Boost library?
I do not believe it meets the need of a wide enough audience, no.

Hi, Noah Roberts wrote:
Do you think the library should be accepted as a Boost library?
I do not believe it meets the need of a wide enough audience, no.
I still have to evaluate this library thoroughly, but I'm pretty sure that this kind of functionality pleases quite a large audience. Most people I know don't use compile-time unit checks right now, but not because they decided they aren't needed; they either don't know about the possibility or don't know a good implementation. If a unit library were included in Boost, this would be a major step towards unit-safe code for many of them, because there would be no good reason left not to write unit-safe code. People were able to work with raw pointers, too, but smart_ptr and pointer_container are just so much better than "being careful" that this practice seems to be diminishing. I believe that the general need for this type of library is large enough to justify inclusion into Boost. Malte

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
What is your evaluation of the design?
For a first iteration it's good. There is room for improvement but I think use needs to guide that.
However, I feel that the library should use MPL standard operators instead of creating new ones. This has been discussed and deemed impossible but it is not.
Since that time I have made a few changes which make it trivial to switch to mpl::plus etc. I agree that that would be a good thing.
What is your evaluation of the implementation?
It seems to be slower to compile than competitors by quite a margin. On the other hand, this library is more generic and can handle arbitrary dimensions.
What is your evaluation of the documentation?
More instruction on what constitutes a unit and quantity so that a developer may override these concepts and use them with the library.
Well, quantities are not used by any other portion of the library, so overriding it won't do you much good. On the other hand, I think that we should try to support usage such as

    template<class Dimensions>
    class runtime_unit {
        //...
    };

    quantity<runtime_unit<length_type> > x = 3.28 * SI::meters;

This would mean that the quantity would store the unit (perhaps as a base class to get the EBCO) and do the arithmetic on the unit and the value_type at the same time. It would also mean not defining the quantity operators to require unit<...>. All easy changes.
What is your evaluation of the potential usefulness of the library?
It would be more useful with runtime conversions. I don't believe static conversions will be that useful.
Sorry for being so dense, but I still don't really understand the case for runtime conversions. Do you think you can provide me with a concrete case that I cannot solve easily using the current library?
Did you try to use the library?
I played with it, I did not use it in any real project.
With what compiler?
MSVC 8 and g++
Did you have any problems?
With optimizations on full the library performed almost as well as a double. Next to 0 overhead but not quite. Without these optimizations it performed quite poorly, which could make debugging difficult. I could not find enough optimizations with the Microsoft compiler to make this library usable; 0 overhead was definitely not 0 overhead. Compilation overhead could be a bear for large projects.
I think that I can speed up compilation a bit (maybe 2x but probably not more than that), but it would make the code very, very ugly (much worse than it is already), so I didn't think that it was worthwhile.
How much effort did you put into your evaluation?
About 10 hours in which I tested various performance issues and tried to use it in various ways I need it to work. It performs moderately well but unless I can find the correct switches for the MS compiler it won't be useful to me. It also doesn't do everything I need and even after digging through the implementation I am not sure it will be useful or easy to even extend it.
Are you knowledgeable about the problem domain?
I do not work in a real time project where precision is of utmost importance; in fact we often round values. I work in fluid flow analysis, which is true scientific computing in the sense that it is a set of "close enough" guesses to predict the real world with a fairly high degree of certainty. This needs to be fast, so can't pay the expense of constant conversions, but doesn't have the same needs as some of the issues that have come up in discussion.
The library is powerful enough to make conversion to a fixed system from some system that is known at runtime easy, *provided that the set of possible systems is known at compile time* (though not necessarily in a single monolithic definition). If this is not what you need, would you mind elaborating?
Do you think the library should be accepted as a Boost library?
I do not believe it meets the need of a wide enough audience, no.
Thanks for your comments. In Christ, Steven Watanabe

Since that time I have made a few changes which make it trivial to switch to mpl::plus etc. I agree that that would be a good thing.
It would be good, but, ultimately, this is an implementation detail that should have no impact on the library's end users...probably not something that needs to be a top priority...
I think that I can speed up compilation a bit (maybe 2x but probably not more than that), but it would make the code very, very ugly (much worse than it is already), so I didn't think that it was worthwhile.
I would rather not make the code totally incomprehensible unless there is a very clear and compelling reason to. It takes less than 30 seconds for me to recompile all 22 example programs on my machine, which is quite acceptable... Matthias

AMDG Matthias Schabel <boost <at> schabel-family.org> writes:
Since that time I have made a few changes which make it trivial to switch to mpl::plus etc. I agree that that would be a good thing.
It would be good, but, ultimately, this is an implementation detail that should have no impact on the library's end users...probably not something that needs to be a top priority...
Aren't users allowed to manipulate dimension lists directly?
I would rather not make the code totally incomprehensible unless there is a very clear and compelling reason to. It takes less than 30 seconds for me to recompile all 22 example programs on my machine, which is quite acceptable...
Exactly my conclusion. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Matthias Schabel <boost <at> schabel-family.org> writes:
Since that time I have made a few changes which make it trivial to switch to mpl::plus etc. I agree that that would be a good thing.
It would be good, but, ultimately, this is an implementation detail that should have no impact on the library's end users...probably not something that needs to be a top priority...
Aren't users allowed to manipulate dimension lists directly?
Exactly. One would expect it to follow conventions. Also see the static_rational, which I think needs its own review and should be added to the math library.
I would rather not make the code totally incomprehensible unless there is a very clear and compelling reason to. It takes less than 30 seconds for me to recompile all 22 example programs on my machine, which is quite acceptable...
Exactly my conclusion.
I did not time it but I did compare it relative to both my own home brewed quantity class and to doubles. Our software can take a long time to compile already and this particular implementation was at least 10x as slow to compile as doubles. This could have significant impact on its utility as well; it might be entirely impractical to use in very large projects that do a lot of computations.

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
Exactly. One would expect it to follow conventions. Also see the static_rational, which I think needs its own review and be added to the math library.
I don't know about that. static_rational has a special constraint that doesn't apply to mpl integral constants: equal values must have the same type.
I did not time it but I did compare it relative to both my own home brewed quantity class and to doubles.
<snip>
Ok. After the interface is settled I'll see how much I can optimize without damaging the code. In Christ, Steven Watanabe

Steven - are you OK with the changes I've committed to the sandbox? I'm still not 100% happy with the angle syntax, but it isn't obvious to me how to improve it... I'm also a little concerned about the runtime optimization issues raised - I'll take a look at the assembly generated by gcc. I was particularly surprised by the Intel compiler performance. I believe Intel bought the KAI C++ compiler that was well known to be extremely good at optimization in the presence of template code (in particular expression templates)... Matthias

AMDG Matthias Schabel <boost <at> schabel-family.org> writes:
Steven - are you OK with the changes I've committed to the sandbox? I'm still not 100% happy with the angle syntax, but it isn't obvious to me how to improve it...
You're misusing dimensionless_quantity.... Did you run the tests and examples?
I'm also a little concerned about the runtime optimization issues raised - I'll take a look at the assembly generated by gcc. I was particularly surprised by the Intel compiler performance. I believe Intel bought the KAI C++ compiler that was well known to be extremely good at optimization in the presence of template code (in particular expression templates)...
We could try comparing it to a simple wrapper. If that is slower than a double, then there is nothing we can do to improve performance. In Christ, Steven Watanabe

Steven - are you OK with the changes I've committed to the sandbox? I'm still not 100% happy with the angle syntax, but it isn't obvious to me how to improve it...
You're misusing dimensionless_quantity....
Not sure what you mean - I need to preserve system information in dimensionless quantities to make the trig functions properly invertible: that is, acos(cos(theta)) == theta should hold true...
Did you run the tests and examples?
The only one that needed changing was test_dimensionless_quantity because mutating value() had not been removed and there were a number of tests relying on that. I'll remove them and put together a test on quantity_cast/from_value when the syntax is stable...
I'm also a little concerned about the runtime optimization issues raised - I'll take a look at the assembly generated by gcc. I was particularly surprised by the Intel compiler performance. I believe Intel bought the KAI C++ compiler that was well known to be extremely good at optimization in the presence of template code (in particular expression templates)...
We could try comparing it to a simple wrapper. If that is slower than a double, then there is nothing we can do to improve performance.
I agree; I'll look into seeing how difficult it would be to have a non-dimension checking quantity wrapper class that allows us to preserve syntax but doesn't bother to do metaprogramming. Conversions will clearly be a problem in this case... Matthias

AMDG Matthias Schabel <boost <at> schabel-family.org> writes:
You're misusing dimensionless_quantity....
Not sure what you mean - I need to preserve system information in dimensionless quantities to make the trig functions properly invertible: that is, acos(cos(theta)) == theta should hold true...
    /// cos of theta in radians
    template<class Y>
    typename dimensionless_quantity<*angle::radian*,Y>::type
    cos(const quantity<angle::radian,Y>& theta)
    {
        return std::cos(theta.value());
    }

    /// utility class to simplify construction of dimensionless quantities
    template<*class System*,class Y>
    struct dimensionless_quantity
    {
        typedef quantity<typename dimensionless_unit<System>::type,Y> type;
    };

In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Noah Roberts <roberts.noah <at> gmail.com> writes:
What is your evaluation of the potential usefulness of the library?
It would be more useful with runtime conversions. I don't believe static conversions will be that useful.
Sorry for being so dense but, I still don't really understand the case for runtime conversions. Do you think you can provide me with a concrete case that I cannot solve easily using the current library?
Since any problem can be solved under almost any circumstance, no. The problem I see is that to use this library I have to replicate the unit system for runtime. Instead of having support in the library, I have to find a way to interact with it. Since this is the more important aspect for me, and I believe for many others, and providing the dimensional safety is rather trivial, attempting to wrap this library to provide runtime conversion support seems like too much work. The problem simply is that most software needs to do conversions at runtime; that set far outweighs the set that uses units but does not need this. Since runtime support can be implemented with negligible impact on performance, I am unconvinced of the utility of a library that only deals with conversions at compile time.
The library is powerful enough to make conversion to a fixed system from some system that is known at runtime easy, *provided that the set of possible systems is known at compile time* (though not necessarily in a single monolithic definition). If this is not what you need, would you mind elaborating?
There is one use case that we have in our products that is not covered by that: user-defined units. These are specified as some conversion to an arbitrary base we set up. There would need to be room for this. In fact I can see many situations in which the user should be able to enter arbitrary units. We only allow two such in our main product, but I do know of a recipe book software that allows any unit to be entered; when examining the generality of this library, such programs should certainly be included in the set of those that would find a unit library useful. Scientific and real-time computing is not a wide enough view when looking at Boost inclusion, in my opinion. Your idea of allowing the unit override might solve many of my objections. I would have to review that feature if it came. When I was looking at this library it did not seem at all straightforward to do so.

AMDG Noah Roberts <roberts.noah <at> gmail.com> writes:
Steven Watanabe wrote:
Do you think you can provide me with a concrete case that I cannot solve easily using the current library?
Since any problem can be solved under almost any circumstance, no.
Key word: easily. Never mind; what you say below answers my question.
<snip>
There is one use case that we have in our products that is not covered by that. User defined units. These are specified as some conversion to an arbitrary base we set up.
Ah. So to do such conversions you need:

    template<class Quantity>
    class unit_converter {
        typedef typename get_unit<Quantity>::type Unit;
        typedef typename get_dimensions<Unit>::type Dimensions;
        typedef typename Quantity::value_type value_type;
    public:
        typedef Quantity result_type;
        typedef const value_type& argument_type;
        typedef result_type function_type(const value_type&);

        // convert from another unit in the same dimension
        template<class System>
        unit_converter(const unit<Dimensions, System>&) :
            converter(static_cast<function_type*>(&convert_impl<System>)) {}

        // convert by an arbitrary runtime factor
        unit_converter(const value_type& factor) :
            converter(lambda::bind(from_value(), lambda::_1 * factor)) {}

        // convert by an arbitrary runtime factor and offset
        unit_converter(const value_type& factor, const value_type& offset) :
            converter(lambda::bind(from_value(), lambda::_1 * factor + offset)) {}

        result_type operator()(const value_type& v) const {
            return converter(v);
        }
    private:
        boost::function<function_type> converter;

        template<class OtherSystem>
        static result_type convert_impl(const value_type& v) {
            return result_type(v * unit<Dimensions, OtherSystem>());
        }

        struct from_value {
            template<class T>
            struct sig {
                typedef result_type type;
            };
            result_type operator()(const value_type& v) const {
                return result_type::from_value(v);
            }
        };
    };

Adding the reverse conversion roughly doubles the size. Again, if this doesn't fit your needs it may take even more code.
Your idea of a allowing the unit override might solve may of my objections. I would have to review that feature as it came. When I was looking at this library it did not seem at all straight forward to do so.
Allowing it is pretty straightforward. I admit that it probably doesn't look like it unless you know the internals well. In Christ, Steven Watanabe
participants (4)
- Malte Clasen
- Matthias Schabel
- Noah Roberts
- Steven Watanabe