
Many thanks for all the replies to my proposal.

First, I should say that I have deliberately not tried to offer anything beyond what the built-in integer and floating-point types offer, except for being fixed-point. The result is a "lean" implementation; I would like to think that this is the "Boost way". Ideally, features like overflow detection could be added in an orthogonal way, for example by providing something other than a built-in integer type as the implementation type.

I'd also like to mention that I have been using this type to implement a 2D set<point> container using space-filling curves. That's also something that I could perhaps contribute, but it is a much larger piece of work and the effort to make it good enough for Boost is hard for me to justify.

Now to address some of the comments.

Multiply and divide:

Michael Marcin wrote:
How do you decide what type to promote to for fixed-point multiply and divides?
My multiply function looks like this:

    template <int XWB, int XFB, int YWB, int YFB>
    pbe::fixed<XWB+YWB, XFB+YFB>
    operator*(pbe::fixed<XWB,XFB> x, pbe::fixed<YWB,YFB> y)
    {
      pbe::fixed<XWB+YWB, XFB+YFB> r;
      r.val = x.val * y.val;
      return r;
    }

Think of it like this:

    XXXX.XX * YY.YYY = XXXXYY.XXYYY

So the result has enough bits in all cases. (Hmm, I'm never sure how sign bits affect this.)

Divide looks like this:

    template <int XWB, int XFB, int YWB, int YFB>
    pbe::fixed<XWB+YFB, XFB+YWB>
    operator/(pbe::fixed<XWB,XFB> x, pbe::fixed<YWB,YFB> y)
    {
      pbe::fixed<XWB+YFB, XFB+YWB> r;
      r.val = x.val;
      r.val <<= (YWB+YFB);
      r.val /= y.val;
      return r;
    }

I think this is the right thing to do, but the case of recurring fractions makes it harder for me to reason about than multiplication.

Unfortunately, although I'm happy with the types, I'm not happy with the arithmetic. For example, if I multiply together a pair of fixed<15,16>s, the result will be a fixed<30,32>, and the processor needs to do a 32x32->64-bit multiply. With the code shown above it just does a 32x32->32-bit multiply and gets the answer wrong. I could (and should) convert one operand to the result type first, which implies a 64x32->64-bit multiply. I have tried this with gcc for ARM, and it calls a library routine (__muldi3) to do a full 64x64->64-bit multiply.

The fundamental point is that there is a gap in ordinary integer maths in C++: there is no way to efficiently multiply two 32-bit values and get a 64-bit result. So maybe what we need is a "better integers" library that can do this sort of thing; fixed<> could then use it as its implementation type.

Conversions:

Michael Marcin wrote:
[implicit conversions] are way too dangerous. An explicit syntax is necessary.
I think I agree, but of course the built-in types have implicit conversions (albeit with a few warnings). Is some useful compromise possible using #warning or conditionally-enabled conversions?

Overflow detection:

Michael Marcin wrote:
optional overflow/underflow detection

Neal Becker wrote:

Programmable overflow/underflow behavior, probably specified as a template parameter:
* ignore overflow
* throw on overflow
* limit on overflow

Michael Marcin wrote:

assert on overflow
Right, this is clearly important in some applications, though not, as it happens, in mine. Some thoughts:

* The built-in types don't have any overflow detection, and I'm not aware of any existing library that offers it. If an "overflowing integer" library existed, fixed<> could use it as an implementation type and get overflow detection for free.

* The run-time overhead of overflow detection is quite large. If your processor has condition codes and you write assembler to access them, or if your fixed type has space for guard bits, it might add one check instruction per arithmetic instruction, halving the speed. In other cases the tests are even more complex.

* In many cases an expression clearly can't overflow, e.g. a = a/b when b >= 1. So in an expression like a = (b*c)/d you might want to overflow-check the result of the multiplication but not the division, especially if you care about avoiding unnecessary overhead. This suggests that indicating the overflow-detection behaviour in the type might not be sufficiently fine-grained; something like a = check(b*c)/d might be better. Or perhaps doing checks only in assignments is a useful strategy; how about overloading something for "assign with check"?

* Are number-of-bits limits sufficient, or do applications need to be able to specify arbitrary upper and lower bounds? (For example, latitude/longitude.)

Floating point:

Michael Marcin wrote:
Looking through the code Phil linked to it looks like it's using a lot of floating-point arithmetic.
For the case of "fixed op float", I convert the fixed operand to a float and do the operation in floating point. I don't think there's a sensible alternative: if I were to convert the float to fixed, how many bits would I give it? (For addition and subtraction I suppose it would be reasonable to convert to the same size as the other operand, but not for multiplication or division.) Apart from that, what floating point have you spotted?

Michael Marcin wrote:
fixed-point math routines for the common standard library math functions.
Yes, that would be good to have for many applications, though I currently don't need it. This is well outside my expertise; would someone like to contribute something?

Boost.proto:

Maurizio Vitale wrote:
boost::proto for building expression templates involving differnt classes of fixed-point [also] to compute overflow and quantization correction
Well, using Proto could make the library much more powerful, but also more complex and less obviously zero-overhead. I'd like to see how far it's possible to get without going down this road. Any observations would be much appreciated.

Evaluation:

John Femiani wrote:
... examples of the generated machine code
Yes, absolutely. I have done some very quick investigations and I seem to be getting the expected zero overhead with g++.

Regards,

Phil.
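P.S. To make the widening-multiply point above concrete, here is a minimal stand-alone sketch (not the library's code): casting one operand to a 64-bit type before multiplying yields the full product that the plain 32x32->32 multiply loses.

```cpp
#include <cstdint>

// Sketch only: widening one operand makes the compiler produce the full
// 64-bit product instead of a truncated 32-bit one.
std::int64_t widening_mul(std::int32_t x, std::int32_t y)
{
    return static_cast<std::int64_t>(x) * y;   // 32x32 -> 64
}
```

Whether the target can do this in a single instruction (ARM's SMULL, for instance) is then up to the compiler.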
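P.S. On the explicit-conversion point: one possible compromise is simply an explicit constructor, sketched below with a hypothetical fixed16 stand-in (not the proposed pbe::fixed<>).

```cpp
#include <cstdint>

// Hypothetical stand-in with 16 fraction bits; the explicit constructor
// rejects implicit conversions from floating point.
struct fixed16 {
    std::int32_t val;   // raw scaled value
    explicit fixed16(double d)
        : val(static_cast<std::int32_t>(d * 65536.0)) {}
};

// fixed16 a = 1.5;   // error: implicit conversion not allowed
// fixed16 b(1.5);    // OK: the conversion is spelled out
```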
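P.S. On the a = check(b*c)/d idea: a sketch of what one checked operation on the raw representation might look like. The name checked_add is hypothetical, and the code relies on the gcc/clang __builtin_add_overflow intrinsic; portable code would need explicit range tests instead.

```cpp
#include <cstdint>
#include <stdexcept>

// Sketch: one checked addition on the raw value; throws instead of wrapping.
std::int32_t checked_add(std::int32_t a, std::int32_t b)
{
    std::int32_t r;
    if (__builtin_add_overflow(a, b, &r))    // gcc/clang intrinsic
        throw std::overflow_error("fixed-point add overflowed");
    return r;
}
```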
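P.S. The fixed-to-float conversion I describe for "fixed op float" is essentially just undoing the scaling; as a sketch (layout assumed: 32-bit raw value, FB fraction bits):

```cpp
#include <cstdint>

// Sketch: convert a raw fixed-point value with FB fraction bits to double.
template <int FB>
double fixed_to_double(std::int32_t raw)
{
    return static_cast<double>(raw) / (std::int64_t(1) << FB);
}
```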