
On Thursday, 03 August 2006 at 13:37 +0200, Johan Råde wrote:
> The following function does bit testing, without advance knowledge of which bit is the sign bit. The function figures out on its own, at compile time, which bit is the sign bit. Hence you do not have to worry about IEEE 754 and endianness.
Actually, you do have to worry about IEEE 754. Without it, you have no guarantee that the number representation has the correct properties. For example, with floating-point numbers that rely on a two's-complement representation of the mantissa, you will get that +7 is negative (since at least one of its bits will be covered by sign_mask).
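To make the IEEE 754 case concrete, here is a small check of the bit patterns involved (a sketch only; it uses memcpy for the inspection so it stays well defined, and it assumes a 32-bit unsigned int):

#include <cstdio>
#include <cstring>

int main()
{
    const float one = 1, neg_one = -1;
    unsigned int one_bits, neg_one_bits;   // assumes 32-bit unsigned int
    std::memcpy(&one_bits, &one, sizeof one_bits);
    std::memcpy(&neg_one_bits, &neg_one, sizeof neg_one_bits);
    // On an IEEE 754 single-precision target this prints
    // 3f800000 bf800000 80000000: the XOR is exactly the sign bit,
    // which is why the trick works there and only there.
    std::printf("%08x %08x %08x\n", one_bits, neg_one_bits,
                one_bits ^ neg_one_bits);
    return 0;
}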
> The only thing that is required is that sizeof(float) == sizeof(int).
> bool signbit(float x)
> {
>     const float one = 1;
>     const float neg_one = -1;
>     const int sign_mask = reinterpret_cast<const int&>(one)
>                         ^ reinterpret_cast<const int&>(neg_one);
>     return reinterpret_cast<int&>(x) & sign_mask;
> }
You are invoking undefined behavior here: you are accessing a float through a reference to an int. As a consequence, GCC produces code that reads uninitialized memory (since one and neg_one are optimized away) and returns random values. You have to use char* pointers and memcpy so that there is no aliasing issue.

Best regards,

Guillaume
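P.S. A memcpy-based variant might look like the following sketch (it is only an illustration of that approach, and it still assumes sizeof(float) == sizeof(int)):

#include <cstring>

bool signbit(float x)
{
    const float one = 1;
    const float neg_one = -1;
    int one_bits, neg_one_bits, x_bits;
    // memcpy copies the object representations byte by byte,
    // so there is no aliasing violation here.
    std::memcpy(&one_bits, &one, sizeof(int));
    std::memcpy(&neg_one_bits, &neg_one, sizeof(int));
    std::memcpy(&x_bits, &x, sizeof(int));
    const int sign_mask = one_bits ^ neg_one_bits;
    return (x_bits & sign_mask) != 0;
}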