
On Thursday 02 July 2009 08:25:11 Neal Becker wrote:
> I completely agree with the sentiment (explicit is better than implicit),
> but I would (and have) done it differently:
> N @ N -> N.
> If you want different, you cast the operands to the wider field first.
Too hard to use correctly, in my experience. For example, during DSP algorithm development, I have a function that computes something:

    template <typename A = double, typename B = double,
              typename C = double, typename R = double>
    R compute_cmag(A a, B b, C c)
    {
        return R( (a*(b+c)) * (a*(b+c)) ); // grouping necessary for hardware model
    }

After algorithm development, during fixed-point conversion (for implementation on a chip), I find that the following suffices:

    A = s[2,-3]                                7 bits
    B = s[4,-2]                                8 bits
    C = s[4,-3]                                9 bits
    R = u[7,2] with saturation and truncation  6 bits

where s[a,b] is a signed type holding bits for 2^a, 2^(a-1), ..., 2^b, and u[a,b] is the corresponding unsigned type.

In generic code of the form above, I do not want to compute the largest type required for all intermediate computations, since the compiler can do it for me. In fact, I do not even want to know whether the intermediate precision is larger than what can be represented by fixed-point numbers, since I expect the compiler to give me an error if the intermediate precision becomes too large. There are likely to be hundreds of such computations in a model for, say, a demodulator, and I definitely do not want to go through every single one of them trying to compute maximum intermediate precisions (unless the compiler tells me to do so).

Regards,
Ravi