Fixed point arithmetic - any interest?

Dear All,

I have recently written some code for fixed point arithmetic which I would be happy to contribute to Boost, if it is considered worthy. Please let me know if this is something that you would use, especially if you would be able to help with testing on platforms other than Linux and other tasks necessary for Boostification. Here's a very quick summary:

You declare a fixed-point variable by specifying the number of whole-number bits and the number of fraction bits, e.g.

  fixed<8,23> x;

This has a range of about -256 to +256 with a precision of 1/(2^23), and occupies 32 bits. boost::integer is used to choose an implementation type with sufficient bits.

The usual arithmetic operators are implemented with some help from boost::operators, and should have no overhead; note that overflow is not detected (as with the built-in integral types).

Implicit conversions are provided to and from the built-in integral and floating-point types (well, actually I have only added the ones that I have needed; the others should only need copy&paste). Implicit conversion between fixed types of different sizes is also possible. I'm starting to think that perhaps I should make some of these conversions explicit: I worry that code using this might be doing an "invisible" conversion via a floating-point type that I didn't want. Any thoughts about this would be appreciated.

Operations between fixed-point values of different sizes are possible, and return a fixed-point value with sufficient bits to store the result. For example:

  fixed<10,4> a;
  fixed<15,2> b;
  fixed<15,4> c;
  c = a + b;

...except that actually one more bit is needed in the maximum-value case. There are no doubt more opportunities for refinement in this area.

You can see the code here:

http://svn.chezphil.org/libpbe/trunk/include/fixed.hh

Thanks for your attention.

Phil.
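To give a flavour of the interface before you look at the real code, here is a much-simplified sketch of the general shape (illustrative only -- it is not the code at the URL above, and it omits most of the operators and conversions):

  #include <boost/integer.hpp>
  #include <boost/operators.hpp>

  template <int WB, int FB>                  // whole-number (magnitude) bits, fraction bits
  class fixed : boost::addable< fixed<WB,FB> > {
  public:
    // boost::int_t chooses the smallest built-in signed type with at least
    // WB+FB+1 bits (the +1 is the sign bit).
    typedef typename boost::int_t<WB + FB + 1>::least value_type;

    fixed() : val(0) {}
    fixed(double d) : val(static_cast<value_type>(d * (value_type(1) << FB))) {}
    operator double() const { return double(val) / (value_type(1) << FB); }

    // boost::addable supplies operator+ in terms of this
    fixed& operator+=(fixed other) { val += other.val; return *this; }

    value_type val;                          // raw value, scaled by 2^FB
  };

  fixed<8,23> x;   // roughly -256..+256, resolution 2^-23, 32 bits of storage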

Phil Endecott wrote:
Dear All,
I have recently written some code for fixed point arithmetic which I would be happy to contribute to Boost, if it is considered worthy. Please let me know if this is something that you would use, especially if you would be able to help with testing on platforms other than Linux and other tasks necessary for Boostification. Here's a very quick summary:
You declare a fixed-point variable by specifying the number of whole-number bits and the number of fraction bits, e.g.
fixed<8,23> x;
I have a very similar library that I've been developing/using for work for some months. I believe the number of whole-number bits is commonly called magnitude bits.
This has a range of about -256 to +256 with a precision of 1/(2^23), and occupies 32 bits. boost::integer is used to choose an implementation type with sufficient bits.
Interesting. How do you decide what type to promote to for fixed-point multiplies and divides? AFAIK Boost.Integer doesn't support 64-bit types, which forced me to roll my own solution.
The usual arithmetic operators are implemented with some help from boost::operators, and should have no overhead; note that overflow is not detected (like the built-in integral types).
Platforms where binary fixed-point math is useful unfortunately tend to use compilers that are a little dated. In particular, EBO (the empty base optimization) is missed by RVCT 2.2 in some cases, which makes using Boost.Operators a no-go for this type of project IMO.
Implicit conversions are provided to and from the built-in integral and floating-point types (well, actually I have only added the ones that I have needed; the others should only need copy&paste). Implicit conversion between fixed types of different sizes is also possible. I'm starting to think that perhaps I should make some of these conversions explicit: I worry that code using this might be doing an "invisible" conversion via a floating point type that I didn't want. Any thoughts about this would be appreciated.
These are way too dangerous. An explicit syntax is necessary.
Operations between fixed-point values of different sizes are possible, and return a fixed-point value with sufficient bits to store the result. For example:
fixed<10,4> a; fixed<15,2> b; fixed<15,4> c; c = a + b;
...except that actually one more bit is needed in the maximum-value case. There are no doubt more opportunities for refinement in this area.
Interesting. The only mixed-type arithmetic I support currently is multiplication through a very cheesy expression template that delays the calculation until it is assigned to a result type.
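The gist of that delayed multiply (a simplified illustration of the technique only, not my actual code) is a small proxy that keeps the wide raw product and shifts just once, when it is assigned to a concrete result format:

  #include <cstdint>

  template <int FB>                          // fraction bits; 32-bit storage assumed here
  struct fx { std::int32_t raw; };

  template <int FA, int FB>
  struct fx_product {                        // full 64-bit raw product, scaled by 2^(FA+FB)
    std::int64_t raw;
    template <int FR>
    operator fx<FR>() const {                // convert to the requested format with one shift
      fx<FR> r;
      r.raw = static_cast<std::int32_t>(raw >> (FA + FB - FR));  // assumes FR <= FA+FB
      return r;
    }
  };

  template <int FA, int FB>
  fx_product<FA, FB> operator*(fx<FA> a, fx<FB> b) {
    fx_product<FA, FB> p;
    p.raw = static_cast<std::int64_t>(a.raw) * b.raw;   // widen before multiplying
    return p;
  }

  // usage: fx<16> a; fx<8> b; fx<12> c; ...; c = a * b;  // one shift, no intermediate rounding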
You can see the code here:
I'll check it out.

I think a Boost worthy implementation (which mine certainly isn't in its present form) would require at least:
- optional overflow/underflow detection
- fixed-point math routines for the common standard library math functions

Michael Marcin wrote:
Phil Endecott wrote: [...]
I think a Boost worthy implementation (which mine certainly isn't in its present form) would require at least:
- optional overflow/underflow detection
Programmable overflow/underflow behavior, probably specified as a template parameter:
* ignore overflow
* throw on overflow
* limit on overflow

Neal Becker wrote:
Michael Marcin wrote:
Phil Endecott wrote: [...]
I think a Boost worthy implementation (which mine certainly isn't in its present form) would require at least:
- optional overflow/underflow detection
Programmable overflow/underflow behavior, probably specified as a template parameter:
* ignore overflow
* throw on overflow
* limit on overflow
I'd also add assert on overflow, which I would have as default in debug builds and ignore as default in release builds. The question is how does one do this cleanly? Bear in mind that, at least for my purposes, it is necessary that sizeof(fixed<16,16>) == sizeof(GLfixed).

I'd also like to multiply and divide fixed-point numbers without using a larger intermediate type when possible. I'd also like to be able to store 16-bit fixed-point types as 32-bit ints for performance or as 16-bit shorts for size, but I don't have any idea how to specify this cleanly.

Looking through the code Phil linked to, it looks like it's using a lot of floating-point arithmetic. Isn't fixed-point math generally used where floating-point doesn't offer enough precision or where an FPU doesn't exist? I don't have an FPU, so this is not too useful for me.

I've uploaded my current fixed-point code:

www.mikemarcin.com/misc/cross_fixed.zip

Please have a look through it.

Thanks,
Michael Marcin

"Michael Marcin" <mmarcin@method-solutions.com> writes:
I'd also add assert on overflow, which I would have as default in debug builds and ignore as default in release builds. The question is how does one do this cleanly? Bear in mind that, at least for my purposes, it is necessary that sizeof(fixed<16,16>) == sizeof(GLfixed).
I'm working on a very similar library, also using boost::operators to minimize the behaviours that have to be specified, and (at the 4th rewrite) boost::proto for building expression templates involving different classes of fixed-point (I'll have limited precision, < 64 bit, unlimited fixed precision and variable precision). I'm also using proto, with a different evaluation context, to compute overflow and quantization correction (SystemC is rather fancy in what it allows there; if you want to see overflow modes you've never seen before, go there).

Overflow and quantization handling are user specifiable, with no impact on the object size (e.g. for me a fixed<16,16> will be 4 bytes). You can get an idea of how to do it by looking at the conversion library by Fernando Cacciola, included in Boost.

Best regards,
Maurizio
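The trick, in a deliberately stripped-down sketch (this is not my library, just an illustration of why the policy costs no storage), is that the policy is a stateless type whose behaviour lives entirely in static functions:

  #include <cstdint>
  #include <limits>
  #include <stdexcept>

  struct ignore_overflow {
    static std::int32_t fit(std::int64_t v) { return static_cast<std::int32_t>(v); }
  };

  struct throw_on_overflow {
    static std::int32_t fit(std::int64_t v) {
      if (v > std::numeric_limits<std::int32_t>::max() ||
          v < std::numeric_limits<std::int32_t>::min())
        throw std::overflow_error("fixed-point overflow");
      return static_cast<std::int32_t>(v);
    }
  };

  struct saturate_on_overflow {
    static std::int32_t fit(std::int64_t v) {
      if (v > std::numeric_limits<std::int32_t>::max()) return std::numeric_limits<std::int32_t>::max();
      if (v < std::numeric_limits<std::int32_t>::min()) return std::numeric_limits<std::int32_t>::min();
      return static_cast<std::int32_t>(v);
    }
  };

  template <int WB, int FB, class Overflow = ignore_overflow>
  struct fixed {
    std::int32_t val;                        // the policy contributes no data members
    friend fixed operator+(fixed a, fixed b) {
      fixed r;
      r.val = Overflow::fit(std::int64_t(a.val) + b.val);  // wide intermediate goes through the policy
      return r;
    }
  };

  // sizeof(fixed<16,16, throw_on_overflow>) == sizeof(std::int32_t)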

Phil Endecott wrote:
Dear All,
I have recently written some code for fixed point arithmetic which I would be happy to contribute to Boost, if it is considered worthy.
I'm interested. You may not know that there was a fixed decimal proposal that was rejected... 5 years ago now?

http://f1.grp.yahoofs.com/v1/IM0mRnFUa0mZSoJEp3yqTReGrwCdpzLigMsOuJuijMvLOeF...

You'll probably want to try and look at the review comments and see why it was rejected.

Michael Marcin wrote:
AFAIK Boost.Integer doesn't support 64-bit types which forced me to roll my own solution.
That's incorrect. There is support for 64-bit types. The documentation, unfortunately, doesn't say so -- you have to read the header. I use boost::int64_t and boost::uint64_t in date-time -- works on every platform boost is ported to AFAIK. Jeff

Jeff Garland wrote:
Phil Endecott wrote:
Dear All,
I have recently written some code for fixed point arithmetic which I would be happy to contribute to Boost, if it is considered worthy.
I'm interested. You may not know that there was a fixed decimal proposal that was rejected... 5 years ago now?
http://f1.grp.yahoofs.com/v1/IM0mRnFUa0mZSoJEp3yqTReGrwCdpzLigMsOuJuijMvLOeF...
You'll probably want to try and look at the review comments and see why it was rejected.
I'll try to find the review, but a cursory glance suggests this was supposed to be for something like BCD, à la http://en.wikipedia.org/wiki/Binary-coded_decimal

What I'm referring to is binary scaling, à la http://en.wikipedia.org/wiki/Binary_scaling

Maybe there is too much confusion using the term fixed-point? Although fixed is such a nice class name to use for a drop-in replacement for float on machines that lack an FPU... I can't really think of anything I'd accept in its place. Maybe boost::binary_scaling::fixed and boost::binary_decimal::fixed could coexist?
Michael Marcin wrote:
AFAIK Boost.Integer doesn't support 64-bit types which forced me to roll my own solution.
That's incorrect. There is support for 64-bit types. The documentation, unfortunately, doesn't say so -- you have to read the header. I use boost::int64_t and boost::uint64_t in date-time -- works on every platform boost is ported to AFAIK.
Oh? Okay, I'll take a look again. Thanks, Michael Marcin

I was wondering if this is still being actively developed? If so, when will it be submitted for review?

Richard Day

Richard Day wrote:
I was wondering if this is still being actively developed? If so, when will it be submitted for review?
Here's what Julio had to say in March on the subject:

http://lists.boost.org/Archives/boost/2007/03/118196.php

I know he did a little work since then...

HTH,
Jeff

Jeff Garland wrote:
Richard Day wrote:
I was wondering if this is still being actively developed? If so, when will it be submitted for review?
Here's what Julio had to say in March on the subject:
http://lists.boost.org/Archives/boost/2007/03/118196.php
I know he did a little work since then...
HTH,
Jeff
Thanks for the info. Richard Day

On 4/18/07, Jeff Garland <jeff@crystalclearsoftware.com> wrote:
Michael Marcin wrote:
AFAIK Boost.Integer doesn't support 64-bit types which forced me to roll my own solution.
That's incorrect. There is support for 64-bit types. The documentation, unfortunately, doesn't say so -- you have to read the header. I use boost::int64_t and boost::uint64_t in date-time -- works on every platform boost is ported to AFAIK.
Boost does have 64-bit types in cstdint.hpp, but boost.integer doesn't support them yet for uint_t, int_max_value_t, and such:

http://boost.cvs.sourceforge.net/boost/boost/boost/integer.hpp?view=markup

There are patches floating around for it, though: http://tinyurl.com/2gq3xt

~ Scott McMurray

Many thanks for all the replies to my proposal.

First I should say that I have deliberately not tried to offer anything beyond what the built-in integer and floating-point types offer, except for being fixed-point. The result is a "lean" implementation. I would like to think that this is the "boost way". Ideally, features like overflow detection could be added in an orthogonal way, for example by providing something other than a built-in integer type as the implementation type.

I'd also like to mention that I have been using this type to implement a 2D set<point> container using space filling curves. That's also something that I could perhaps contribute, but it is a much larger piece of work and the effort to make it good enough for Boost is hard for me to justify.

Now to address some of the comments.

Multiply and divide:

Michael Marcin wrote:
How do you decide what type to promote to for fixed-point multiply and divides?
My multiply function looks like this:

  template <int XWB, int XFB, int YWB, int YFB>
  pbe::fixed<XWB+YWB,XFB+YFB> operator*(pbe::fixed<XWB,XFB> x, pbe::fixed<YWB,YFB> y) {
    pbe::fixed<XWB+YWB,XFB+YFB> r;
    r.val = x.val * y.val;
    return r;
  }

Think of it like this:

  XXXX.XX * YY.YYY = XXXXYY.XXYYY

So the result has enough bits in all cases. (Hmm, I'm never sure about how sign bits affect this.)

Divide looks like this:

  template <int XWB, int XFB, int YWB, int YFB>
  pbe::fixed<XWB+YFB,XFB+YWB> operator/(pbe::fixed<XWB,XFB> x, pbe::fixed<YWB,YFB> y) {
    pbe::fixed<XWB+YFB,XFB+YWB> r;
    r.val = x.val;
    r.val <<= (YWB+YFB);
    r.val /= y.val;
    return r;
  }

I think this is the right thing to do, but the case of recurring fractions makes it more difficult for me to reason about than multiplication.

Unfortunately, although I'm happy with the types, I'm not happy with the arithmetic. For example, if I multiply together a pair of fixed<15,16>s the result will be a fixed<30,32>, and the processor needs to do a 32x32->64-bit multiply. With the code shown above it just does a 32x32->32-bit multiply and gets the answer wrong. I could (should) convert one operand to the result type first, which implies a 64x32->64-bit multiply. I have tried this with gcc for ARM and it calls a library routine (muldi3) to do a 64x64->64-bit multiply.

The fundamental point here is that there is an issue with ordinary integer maths in C++, i.e. there isn't a way to efficiently multiply together two 32-bit values giving a 64-bit result. So maybe what we need is a "better integers" library that can do this sort of thing; fixed<> could then use this as its implementation type. (A sketch of the 32x32->64 idiom I'm talking about is in the P.S. at the end of this message.)

Conversions:

Michael Marcin wrote:
[implicit conversions] are way too dangerous. An explicit syntax is necessary.
I think I agree, but of course the built-in types have implicit conversions (albeit with a few warnings). Is some useful compromise possible using #warning or conditionally-enabled conversions?

Overflow detection:

Michael Marcin wrote:
optional overflow/underflow detection

Neal Becker wrote:

Programmable overflow/underflow behavior, probably specified as a template parameter:
* ignore overflow
* throw on overflow
* limit on overflow

Michael Marcin wrote:

assert on overflow
Right, this is an important one, as it's clearly important in some applications. Not in mine, as it happens. Some thoughts:

The built-in types don't have any overflow detection, and I'm not aware of any existing library that offers this. If an "overflowing integer" library existed, fixed<> could use it as an implementation type and get overflow detection for free.

The run-time overhead of overflow detection is quite large. If your processor has condition codes and you write assembler to access them, or if your fixed type has space for guard bits, it might add one check instruction per arithmetic instruction, halving the speed. In other cases the tests are even more complex.

In many cases, expressions clearly can't overflow, e.g. a=a/b when b>=1. So in an expression like a=(b*c)/d you might want to overflow-check the result of the multiplication but not the division, especially if you care about avoiding unnecessary overhead. This suggests that indicating the overflow-detection behaviour in the type might not be sufficiently fine-grain. Something like a = check(b*c)/d might be better. Or, perhaps doing checks only in assignments is a useful strategy; how about overloading something for "assign with check"?

Are number-of-bits limits sufficient, or do applications need to be able to specify arbitrary upper and lower bounds? (For example, latitude/longitude.)

Floating point:

Michael Marcin wrote:
Looking through the code Phil linked to it looks like it's using a lot of floating-point arithmetic.
For the case of "fixed op float", I convert the fixed to a float and do the operation in floating-point. I don't think there's a sensible alternative to that; if I were to convert the float to fixed, how many bits would I give it? (For addition/subtraction I suppose it would be reasonable to convert to the same size as the other operand, but not for multiplication/division.) Apart from that, what floating point have you spotted?

Michael Marcin wrote:
fixed-point math routines for the common standard library math functions.
Yes, that would be good to have for many applications, though I currently don't need it. This is well outside my expertise; would someone like to contribute something?

Boost.proto:

Maurizio Vitale wrote:
boost::proto for building expression templates involving differnt classes of fixed-point [also] to compute overflow and quantization correction
Well, using proto could make it much more powerful but also more complex and less obviously zero-overhead. I think I'd like to see how far it's possible to get without going down this road. Any observations would be much appreciated.

Evaluation:

John Femiani wrote:
... examples of the generated machine code
Yes, absolutely. I have done some very quick investigations and I seem to be getting the expected zero overhead with g++.

Regards, Phil.
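P.S. Here is a sketch of the 32x32->64 multiply idiom mentioned above (illustrative only, not code from my library). Whether the compiler turns it into a single widening instruction (smull on ARM, a widening imul on x86) rather than a call to a 64x64 helper depends on the compiler and target, which is exactly the problem I described:

  #include <cstdint>

  // 32x32 -> 64 widening multiply.  Both inputs really are 32-bit values, so a
  // compiler that recognises the pattern can use one widening multiply
  // instruction instead of a full 64x64 multiply.
  inline std::int64_t widening_mul(std::int32_t a, std::int32_t b) {
    return static_cast<std::int64_t>(a) * b;   // b is promoted; its value is unchanged
  }

  // e.g. for fixed<15,16> * fixed<15,16> -> fixed<30,32>:
  //   result.val = widening_mul(x.val, y.val);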

On Apr 19, 2007, at 3:25 PM, Phil Endecott wrote:
I'd also like to mention that I have been using this type to implement a 2D set<point> container using space filling curves.
This is highly interesting to me. I suppose you're doing this for nearest neighbors of some kind. You must be aware of Timothy Chan's paper "Closest-point problems simplified on the RAM", in Proc. 13th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 472-473, 2002.

I imagine (this was my first thought when reading Chan's paper) that everything can be wrapped into a functor that computes the order along the curve. Is this the approach you're implementing? I'd be interested to hear your thoughts / experience / esp. your setup (why you are interested in this in the first place). You can reply to the list if you think it's of general interest, or to me directly.

Re: the original topic of this thread, I really don't see the point of fixed-point arithmetic for this, though; it seems to me point coordinates could be computed in floating point, then rounded/scaled to integers for inserting into the set<point>. In other words, the fixed-point coordinates are likely simply representational, not used in computations. Unless you're dealing with moving points, and even then...

Btw, in Dr Dobb's I saw an article recently which I really didn't understand. Seemed like the author rediscovered quad-trees. Seems like a functor for comparing along space-filling curves would also make a great paper for Dr Dobb's / CUJ.

Cheers,
-- Herve Bronnimann

Hervé Brönnimann wrote:
On Apr 19, 2007, at 3:25 PM, Phil Endecott wrote:
I'd also like to mention that I have been using this type to implement a 2D set<point> container using space filling curves.
This is highly interesting to me. I suppose you're doing this for nearest neighbors of some kind.
Actually I'm currently just using it to find the subset of a set of points that lie within a rectangle. At some point in the future I may need to do clustering, at which point nearest-neighbour would become useful.

Some sort of tree (R-Tree, whatever) might sound like a more obvious solution to my current problem. I decided to try space filling curves instead because they let me exploit the standard containers (e.g. std::map); I almost "adapt" the standard 1D container to 2D (or potentially higher dimensions).
You must be aware of Timothy Chan's paper "Closest-point problems simplified on the RAM", in Proc. 13th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 472-473, 2002.
I have not found a freely-downloadable version of that.
I imagine (this was my first thought when reading Chan's paper) that everything can be wrapped into a functor that computes the order along the curve. Is this the approach you're implementing?
My current implementation uses the Z-curve, which simply interleaves the bits from each coordinate. I have done this by writing an interleaved_pair<T> class which has some similarity to std::pair<T,T>. I can then implement point_map<point<COORD_T>,V> as std::map<interleaved_pair<COORD_T>,V>. So no, there isn't a functor; it could be expanded in that way, but what benefit does the functor's state have?

Other curves are more expensive to compute but more efficient in use. I have done some quick experiments with a freely-available C implementation of the Hilbert curve, but I don't yet have a large enough test data set to quantify whether it is better overall than my simple Z-curve implementation.
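For anyone unfamiliar with the Z-curve, the ordering boils down to something like this (a simplified illustration of the bit interleaving, not my interleaved_pair code):

  #include <cstdint>

  // Interleave the bits of x and y (x in the even bit positions, y in the odd
  // ones) to form a 64-bit Morton / Z-order key.  Comparing keys with the
  // ordinary integer operator< then orders points along the Z-curve, which is
  // what lets a plain std::map or std::set act as the 2D container.
  inline std::uint64_t z_order_key(std::uint32_t x, std::uint32_t y) {
    std::uint64_t key = 0;
    for (int i = 0; i < 32; ++i) {
      key |= (std::uint64_t((x >> i) & 1) << (2 * i))
           | (std::uint64_t((y >> i) & 1) << (2 * i + 1));
    }
    return key;
  }

  // A real implementation would spread the bits with a few shift-and-mask
  // steps rather than a loop, but the ordering is the same.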
I'd be interested to hear your thoughts / experience / esp. your setup (why you are interested in this in the first place).
I'm making a pan/zoom user interface to view GPS tracks superimposed on a base map.
You can reply to the list if you think it's of general interest, or to me directly.
Re: the original topic of this thread, I really don't see the point of fixed-point arithmetic for this, though, it seems to me point coordinates could be computed in floating point, then rounded/scaled to integers for inserting into the set<point>. Iow, the fixed-point coordinates are likely simply representational, not used in computations.
One difference between a 32-bit fixed-point value and a 32-bit floating-point value is that in the latter case you "waste" 8 bits for the exponent. So for latitude/longitude GPS data you get 256 (or maybe 128) times better precision in the same memory space by using fixed point. IIRC the difference is between centimetres and metres of resolution. In my application, having found the set of points that are currently visible, their positions are converted to integer screen positions for display; at this point it is certainly more efficient to not convert to and from floating point again.
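To put rough numbers on that (a back-of-the-envelope illustration, not measurements from my application):

  #include <cmath>
  #include <cstdio>

  int main() {
    // Near +/-180 degrees a 32-bit float (24-bit significand) has a step of
    // 2^-16 degrees, whereas a fixed-point longitude with 23 fraction bits
    // has a uniform step of 2^-23 degrees -- 128 times finer.
    const double metres_per_degree = 111320.0;           // roughly, at the equator
    double float_step = std::ldexp(1.0, -16);
    double fixed_step = std::ldexp(1.0, -23);
    std::printf("float: %g deg ~ %.2f m\n", float_step, float_step * metres_per_degree);
    std::printf("fixed: %g deg ~ %.3f m\n", fixed_step, fixed_step * metres_per_degree);
  }

Which is where the metres-versus-centimetres figures above come from.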
Unless you're dealing with moving points, and even then...
Btw, in Dr Dobb's I saw an article recently which I really didn't understand. Seemed like the author rediscovered quad-trees.
I have had a quick look and can't find it.
Seems like a functor for comparing along space-filling curves would also make a great paper for Dr Dobb's / CUJ. Cheers, -- Herve Bronnimann
Regards, Phil.

On Apr 22, 2007, at 5:47 PM, Phil Endecott wrote:
This is highly interesting to me. I suppose you're doing this for nearest neighbors of some kind.
Actually I'm currently just using it to find the subset of a set of points that lie within a rectangle. At some point in the future I may need to do clustering, at which point nearest-neighbour would become useful.
I see. For range searching, kd-trees would be the best in linear storage.
Some sort of tree (R-Tree, whatever) might sound like a more obvious solution to my current problem. I decided to try space filling curves instead because they let me exploit the standard containers (e.g. std::map); I almost "adapt" the standard 1D container to 2D (or potentially higher dimensions).
I had a student do in-place static and dynamic kd-trees: stored inside a vector, the ordering is key to the organisation of the kd-tree. The code and documentation are available along with the rest at:

http://photon.poly.edu/~hbr/publi/ilya-thesis/index.html
(http://photon.poly.edu/~hbr/publi/ilya-thesis/doc/geometry/static__kd__tree_8hpp.html)
(http://photon.poly.edu/~hbr/publi/ilya-thesis/doc/geometry/dynamic__kd__tree_8hpp.html)

It's unfinished, as I've come to believe that all theses are (*sigh*). The thesis itself is accessible at http://photon.poly.edu/~hbr/publi/ilya-thesis/thesis.pdf
You must be aware of Timothy Chan's paper "Closest-point problems simplified on the RAM", in Proc. 13th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 472-473, 2002.
I have not found a freely-downloadable version of that.
On Timothy's web page: http://www.cs.uwaterloo.ca/~tmchan/pub.html#ram

I must warn you: Timothy's one of the smartest people around, and the paper is deceptively short at two pages; it could easily fill eight. But all the code is given in pseudo-code and you don't even need to understand it to program it (although it's nicer to understand, of course). I've not tried to code it myself, but
One difference between a 32-bit fixed-point value and a 32-bit floating-point value is that in the latter case you "waste" 8 bits for the exponent. So for latitude/longitude GPS data you get 256 (or maybe 128) times better precision in the same memory space by using fixed point. IIRC the difference is between centimetres and metres of resolution. In my application, having found the set of points that are currently visible, their positions are converted to integer screen positions for display; at this point it is certainly more efficient to not convert to and from floating point again.
Excellent point. Thanks. -- HB

Operations between fixed-point values of different sizes are possible,
and return a fixed-point value with sufficient bits to store the result. For example:
fixed<10,4> a; fixed<15,2> b; fixed<15,4> c; c = a + b;
...except that actually one more bit is needed in the maximum-value case. There are no doubt more opportunities for refinement in this area.
I for one am very interested, provided that they can be parameterized for overflow and rounding policies, and that the max-value case is handled so that operations are lossless until the machine limit is reached (64 bit or 32 bit) or until an assignment to a smaller type is made.

I also would like signed and unsigned versions, and some examples of the generated machine code or performance metrics showing how the generated code compares to hand-coded fixed point or integer arithmetic.

I personally was thinking that the effective way to use fixed point arithmetic would end up involving expression templates (Proto?) or something and a zillion hand-coded specializations for whole expressions. In particular I am interested in using this kind of thing with GIL so that I can multiply pixels better.

John
participants (9)
- Hervé Brönnimann
- Jeff Garland
- John Femiani
- Maurizio Vitale
- me22
- Michael Marcin
- Neal Becker
- Phil Endecott
- Richard Day