
Again, I think the possibilities outstrip reality, and reality is the only thing that counts here... as long as each architecture has a specific #define that can be detected at compile time. The worst that can happen is that a particular architecture isn't supported, in which case a #error can be issued.
I imagine architectures are internally consistent, but something like this would help if per-size detection were ever necessary:
#if defined(__i386__)
#  define BOOST_BYTEORDER_16 12
#  define BOOST_BYTEORDER_32 1234
#  define BOOST_BYTEORDER_64 12345678
#elif defined(__sparc__)
#  define BOOST_BYTEORDER_16 21
#  define BOOST_BYTEORDER_32 4321
#  define BOOST_BYTEORDER_64 87654321
#elif defined(__dinosaur__)
#  define BOOST_BYTEORDER_16 21
#  define BOOST_BYTEORDER_32 4321
#  define BOOST_BYTEORDER_64 87654321
#else
#  error Byte ordering undetected
#endif
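
Just to illustrate how such per-size macros might be consumed (a minimal sketch, not part of the proposal; to_big_endian is a name I made up, and it assumes the macros above are defined), something along these lines would let the byte swap be selected entirely at compile time:

#include <cstdint>

// Convert a host-order 32-bit value to big-endian (network) order,
// picking the right branch from the hypothetical BOOST_BYTEORDER_32.
inline std::uint32_t to_big_endian(std::uint32_t x)
{
#if BOOST_BYTEORDER_32 == 4321
    return x;                        // host is already big-endian
#elif BOOST_BYTEORDER_32 == 1234
    return (x >> 24)                 // host is little-endian: swap the four bytes
         | ((x >> 8)  & 0x0000ff00)
         | ((x << 8)  & 0x00ff0000)
         | (x << 24);
#else
#   error Unsupported 32-bit byte ordering
#endif
}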
It gets worse. IA-64 supports switching endianness at runtime. :-) See e.g. http://developer.intel.com/design/itanium/manuals/245317.pdf section 4.4: "Accesses to memory quantities larger than a byte may be done in a big-endian or little-endian fashion. The byte order for all memory access instructions is determined by UM in the User Mask register." The user mask can be changed at a user-level program's discretion, so the byte order isn't a function of the OS platform used either. I suppose the least incorrect thing to do here would be to define it according to the default for the particular OS/compiler used.

/Mattias
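
[Editorial sketch, not from the original mail: since the compile-time macros can only reflect the platform default, a library could additionally offer a runtime check that inspects how an integer is actually laid out in memory. The function name is invented for illustration.]

#include <cstdint>
#include <cstring>

// Runtime endianness probe: store a known 32-bit value and look at
// which byte ends up at the lowest address.
inline bool runtime_is_little_endian()
{
    const std::uint32_t probe = 0x01020304;
    unsigned char bytes[sizeof probe];
    std::memcpy(bytes, &probe, sizeof probe);
    return bytes[0] == 0x04;         // least significant byte stored first
}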