
I just wrote a quick test with the following loop, using some random initialization values:
for (int i = 0; i < iterations; ++i) { sum += a * b; a += b; }
I used MSVC++ to compile.
I set iterations to a billion. When I used ints I got around 800 ticks. When I used floats I got around 3700 ticks.
That is almost a factor of five. For my purposes, that's an optimization I am willing to sacrifice a lot for.
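
For anyone who wants to reproduce this, here is a minimal sketch of that kind of timing harness. The original post doesn't say how the ticks were measured, so I'm assuming std::chrono here, and the starting values (3 and 7) are just placeholders; the volatile sink is only there to keep the optimizer from throwing the loop away.

#include <chrono>
#include <cstdio>

// Time the multiply-accumulate loop for a given element type T.
template <typename T>
static double run(int iterations, T a, T b)
{
    T sum = 0;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) { sum += a * b; a += b; }
    auto stop = std::chrono::steady_clock::now();

    volatile T sink = sum;   // keep sum "used" so the loop is not optimized away
    (void)sink;
    return std::chrono::duration<double>(stop - start).count();
}

int main()
{
    const int iterations = 1000000000;   // a billion, as in the test above
    std::printf("int:   %.3f s\n", run<int>(iterations, 3, 7));
    std::printf("float: %.3f s\n", run<float>(iterations, 3.0f, 7.0f));
    return 0;
}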
I just did the same test, with SSE2 enabled:

int:   0.916 seconds, result 1321730048
float: 1.398 seconds, result 4.5036e+015

In this case, floats ran at about 70% of the speed of ints, but they also came reasonably close to the correct result. The ints were faster but gave the wrong result (the sum overflows a 32-bit int). Furthermore, this was plain ints, not fixed-point. Would an actual fixed-point type make up the 30% difference? Perhaps, but it would still give the wrong result.

I didn't mean to claim that fixed-point is "always slower than floating point". Especially for toy cases like this, ints will be faster (if wrong) because there is nothing else for the CPU to do while the FPU is working. Add in some fetches or stores and I think you will get a far different result.

In any case, I merely intended to make the point that one good reason for using fixed-point over floating point is accuracy, not necessarily speed.

Regards,
Christian.
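
For what it's worth, since fixed-point keeps coming up: a fixed-point multiply is just an integer multiply followed by a shift to restore the radix point, which is why in a tight loop like this it should cost roughly the same as the plain-int version. Here is a minimal 16.16 sketch; the format and the helper names are my own choices for illustration, not something from this thread, and in a billion-iteration test like the one above a 32-bit accumulator would still overflow just like the ints did.

#include <cstdint>

typedef int32_t fixed;   // 16.16 fixed-point: 16 integer bits, 16 fraction bits

static fixed to_fixed(float x) { return (fixed)(x * 65536.0f); }
static float to_float(fixed x) { return x / 65536.0f; }

// Multiply: widen to 64 bits so the intermediate product cannot overflow,
// then shift right by 16 to drop the extra fraction bits.
static fixed fixed_mul(fixed a, fixed b)
{
    return (fixed)(((int64_t)a * b) >> 16);
}

// The loop body from the test, rewritten for fixed-point:
//   sum += fixed_mul(a, b); a += b;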