
"Peter Dimov" <pdimov@mmltd.net> writes:
> David Abrahams wrote:
>
>> "Peter Dimov" <pdimov@mmltd.net> writes:
>>
>>> The status quo is that the size of the container is consistently
>>> written or read as an unsigned int, is it not?
>>
>> I think so, though I could be mistaken.
>>
>>> Consider the simplistic example:
>>> void f( unsigned int );  // #1
>>> void f( unsigned long ); // #2
>>>
>>> void g( std::vector<int> & v )
>>> {
>>>     unsigned int n1 = v.size();
>>>     f( n1 ); // #1
>>>
>>>     size_t n2 = v.size();
>>>     f( n2 ); // ???
>>> }
>>
>> Sure. So how does this relate to serialization?
>
> Consider an archive where unsigned int and unsigned long have
> different internal representations. When a value of type size_t is
> written on platform A, where size_t == unsigned int, platform B,
> where size_t == unsigned long, won't be able to read the file.
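
To make that incompatibility concrete, here is a minimal sketch of the
failure mode, assuming size_t is either unsigned int or unsigned long
on the platforms in question, and assuming a hypothetical archive that
tags the two types differently (toy_archive and save_vector are
made-up names for illustration, not the serialization library's API):

#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical archive with distinct encodings for unsigned int and
// unsigned long: a one-byte type tag followed by the value bytes, so
// the two overloads produce different byte streams for the same number.
struct toy_archive
{
    std::FILE * f;

    void save( unsigned int x )   // tag 0x01 + 4 value bytes
    {
        unsigned char tag = 0x01;
        std::uint32_t v = x;
        std::fwrite( &tag, 1, 1, f );
        std::fwrite( &v, sizeof v, 1, f );
    }

    void save( unsigned long x )  // tag 0x02 + 8 value bytes
    {
        unsigned char tag = 0x02;
        std::uint64_t v = x;
        std::fwrite( &tag, 1, 1, f );
        std::fwrite( &v, sizeof v, 1, f );
    }
};

void save_vector( toy_archive & ar, std::vector<int> const & v )
{
    // Overload resolution picks #1 or #2 depending on what size_t is
    // on the writing platform, so a file written on platform A starts
    // with a tag that platform B does not expect for a container size.
    ar.save( v.size() );
}

Under this sketch the same logical count yields a 5-byte record on
platform A and a 9-byte record on platform B, so neither can read the
other's file.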

Sure, but I don't see what that has to do with the ambiguity in
overload resolution you're pointing at above. I don't think anyone is
suggesting that we use size_t; int has the same problem, after all.

I thought Matthias was using a variable-length representation, but on
inspection it looks like he's just using a "strong typedef" around
std::size_t, which should work adequately for the purposes we're
discussing.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
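
For reference, a "strong typedef" around std::size_t of the kind
mentioned above might look roughly like the following. This is a
hand-rolled sketch; the name collection_size_type is illustrative and
may not match what the library actually uses, and Boost's
BOOST_STRONG_TYPEDEF macro generates a more complete version of the
same idea:

#include <cstddef>

// Minimal strong typedef around std::size_t (illustrative only).
// Because it is a distinct type, an archive can give it its own,
// fixed serialization format, independent of whether size_t is
// unsigned int or unsigned long on the writing platform.
class collection_size_type
{
public:
    explicit collection_size_type( std::size_t v = 0 ) : value_( v ) {}

    operator std::size_t() const { return value_; }

private:
    std::size_t value_;
};

An archive can then supply an overload such as
save( collection_size_type ) that always writes the count in one
agreed-upon width, so container sizes round-trip between the two
platforms regardless of how size_t is defined on either of them.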