
Kim Barrett <kab@irobot.com> writes:
At 11:52 AM -0800 2/11/06, Robert Ramey wrote:
David Abrahams wrote:
IMO the size_type change should be considered a bugfix, as it was not possible to portably serialize collections of over 4G objects.
Strictly speaking, of any size.
Right. An unsigned int isn't required to be more than 16 bits, and there *are* compilers where it is a 16-bit type.
And changing the type of the count from "unsigned int" to std::size_t would actually be worse, in a practical sense. The representation size (how many bits of data will appear in the archive) must be the same on all platforms.
sizeof(unsigned int) is commonly 4 on 32-bit platforms
sizeof(unsigned int) is (I think) commonly 4 on 64-bit platforms
sizeof(std::size_t) is commonly 4 on 32-bit platforms
sizeof(std::size_t) is commonly 8 on 64-bit platforms
(I'm knowingly and intentionally ignoring DSPs and the like in the above.)
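For anyone who wants to check their own toolchain, a trivial probe makes the point; the output is platform-dependent, which is exactly the problem if the raw bytes of a std::size_t were what went into the archive:

    #include <cstddef>
    #include <iostream>

    int main() {
        // Neither value is guaranteed by the standard; they vary by platform.
        std::cout << "sizeof(unsigned int) = " << sizeof(unsigned int) << '\n'
                  << "sizeof(std::size_t)  = " << sizeof(std::size_t)  << '\n';
    }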
The count should be some type that has a fixed (up to byte-order issues) representation, i.e. something like uint64_t.
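A rough sketch of what a fixed-width count could look like, using a hypothetical save_count() helper rather than the actual Boost.Serialization interface, and ignoring byte order:

    #include <cstddef>
    #include <cstdint>
    #include <ostream>

    // Hypothetical helper: always emits exactly 8 bytes for the count,
    // no matter what sizeof(std::size_t) is on the writing platform.
    void save_count(std::ostream& os, std::size_t count) {
        std::uint64_t wide = count;   // widen to a fixed-size type
        // Byte order would still need to be fixed (or recorded in the
        // archive header) for the result to be portable; omitted here.
        os.write(reinterpret_cast<const char*>(&wide), sizeof(wide));
    }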
Or a variable-length representation, which is what I *think* Matthias chose.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
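For readers unfamiliar with the idea, a variable-length count can be encoded seven bits per byte with the high bit as a continuation flag (LEB128-style). The sketch below only illustrates the general technique and is not a claim about what Matthias actually implemented:

    #include <cstdint>
    #include <vector>

    std::vector<unsigned char> encode_count(std::uint64_t count) {
        std::vector<unsigned char> out;
        do {
            unsigned char byte = static_cast<unsigned char>(count & 0x7F);
            count >>= 7;
            if (count != 0)
                byte |= 0x80;   // high bit set: more bytes follow
            out.push_back(byte);
        } while (count != 0);
        return out;             // 1 byte for small counts, at most 10 for 64-bit
    }

With this kind of encoding, small collections cost a single byte in the archive while counts beyond 4G remain representable.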