
I hadn't heard of this problem before, but I'm glad to know about it now. Do you know whether zeroing the remaining bits solves the problem? If so, specializing for FP versus integer types would allow handling the difference.
There are two problems I can think of off the top of my head. The first I've already described in an earlier post; it concerns the "long double" data type on different compilers. The second concerns NaN values. I haven't encountered this myself (and don't have the IEEE standard handy, so I can't check whether it's even a valid concern), but I can imagine that a byte-swapped floating-point number could turn out to be a NaN. There are several valid NaN representations, and it doesn't seem unreasonable that a CPU could "normalize" all NaNs down to a single one. Swapping that normalized NaN back would then yield a completely different number. On a similar note, CPUs handle denormalized numbers in different ways, sometimes simply flushing them to zero, which could break things in exactly the same way. Honestly, it's hard to say how to prevent this, since it's so architecture- and compiler-dependent, other than with a large collection of test cases.
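For what it's worth, one way to sidestep both the NaN and the denormal hazards would be to never let the swapped bit pattern exist as a floating-point object at all: do the reversal on raw bytes, and only ever memcpy a valid, native-order value into or out of the float. A rough sketch of that idea (the names here are mine, not anything from the library):

    #include <algorithm>
    #include <cstring>

    // Sketch only: the byte-reversed pattern of a valid float may itself
    // be a NaN or a denormal, so we never construct a float holding it.
    // The only float the FPU ever sees is the original, valid value.

    void store_reversed(float value, unsigned char* out)
    {
        unsigned char bytes[sizeof(float)];
        std::memcpy(bytes, &value, sizeof(float));        // raw bits, no FP load
        std::reverse_copy(bytes, bytes + sizeof(float), out);
    }

    float load_reversed(const unsigned char* in)
    {
        unsigned char bytes[sizeof(float)];
        std::reverse_copy(in, in + sizeof(float), bytes); // back to native order
        float value;
        std::memcpy(&value, bytes, sizeof(float));
        return value;
    }

That still wouldn't help with "long double", of course, whose size and layout already differ between compilers, so the earlier point stands.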
I agree that FP support should be disabled where it can't be handled correctly, but I'd rather find the correct solution and use it for FP types.
Yes, agreed.

Tom