
5 Apr 2007, 4 p.m.
As has been mentioned earlier in this thread, compile-time dimensional analysis is extremely useful. However, many common compilers have trouble fully optimizing these wrappers away, and the correctness verification runs on every compile, taking a measurable amount of time. Is there a configuration of this library that could ensure drop-in compatibility with raw floating-point types? I'm thinking you could control dimensional analysis with a preprocessor switch, much as many people do for concept checking now. What would this take? Would disallowing all implicit and explicit conversions be sufficient?

Thanks,
Michael Marcin