
On Thu, Sep 8, 2011 at 12:44 PM, Phil Endecott <spam_from_boost_dev@chezphil.org> wrote:
> Beman Dawes wrote:
>> I'm not sure how widespread differing endianness between ints and floats is across the various platforms.
>> The Wikipedia article says it's uncommon, IIRC, but we need some real data on that.
> I think it's vanishingly rare and can be ignored (as can any non-IEEE 754 formats).
Let's hope so! Perhaps the only real impact of such historical curiosities is a need to be sure there are test cases that would fail if we ran into one of them. But normal test cases probably do that anyhow.
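For what it's worth, here is a minimal sketch of the kind of check I have in mind -- nothing from Boost.Endian, just a standalone sanity test that assumes IEEE 754 floats and would fail on a platform where int and float byte order disagree:

    #include <cassert>
    #include <cstdint>
    #include <cstring>

    // True if a 32-bit integer is stored least-significant byte first.
    bool int_is_little_endian()
    {
        std::uint32_t i = 1;
        unsigned char b[sizeof i];
        std::memcpy(b, &i, sizeof i);
        return b[0] == 1;
    }

    // True if an IEEE 754 float is stored least-significant byte first.
    // 1.0f is 0x3F800000, so the 0x3F (sign/exponent) byte comes last
    // when the float layout is little-endian.
    bool float_is_little_endian()
    {
        float f = 1.0f;
        unsigned char b[sizeof f];
        std::memcpy(b, &f, sizeof f);
        return b[3] == 0x3F;
    }

    int main()
    {
        // A platform with mixed int/float byte order would trip this.
        assert(int_is_little_endian() == float_is_little_endian());
    }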
> The only case I'm aware of is the format of doubles on the original (20-year-old) ARM FPA chip. This would store the bytes of a double as 45670123, i.e. the bytes are little-endian within the words (like ints) but the two words are ordered big-endian.
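Just to spell out that 45670123 layout for anyone following along, here is a rough sketch (the helper is hypothetical, not from any library, and it assumes a little-endian IEEE 754 host) of how those bytes map back to an ordinary double:

    #include <cstring>

    // Rebuild a double from the old ARM FPA byte order described above.
    // Within each 32-bit word the bytes are already little-endian, but the
    // high word is stored first, so converting to a plain little-endian
    // layout is just a swap of the two 4-byte words.
    double double_from_fpa_bytes(const unsigned char fpa[8])
    {
        unsigned char le[8];
        std::memcpy(le,     fpa + 4, 4);  // low word (bytes 0-3) was stored second
        std::memcpy(le + 4, fpa,     4);  // high word (bytes 4-7) was stored first
        double d;
        std::memcpy(&d, le, sizeof d);
        return d;
    }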
I worked briefly in the early 1980s on a 16-bit system that stored 32-bit integers as two 16-bit words in big-endian order, but the bytes within the words in little-endian order. :-)
> <anecdote>This chip was designed by 3 guys, and I shared an office with 2 of them at the time. One came from Acorn who had been using little-endian ARM chips for years, and the other came from Apple (who were about to put an ARM chip in the first Newton) and had a 68000 big-endian background. So perhaps it's not surprising that it got muddled. (The third guy designed the divide-and-square-root unit, nocturnally.) I doubt that more than a few hundred of these chips were ever made, but the format was a bit more widespread because the chip could be emulated by the OS through illegal instruction traps.</anecdote>
:-) --Beman