
Nikolay Mladenov wrote:
Considering your portability context (mine is even narrower):
1. The binary_archives already serialize boost::serialization::collection_size_type as unsigned int (uint32_t would still be better).
2. All this code will be unnecessary if the string serialization is not using "plain longs".
Okay... I understand the complaint about the size of strings not being handled like the size of everything else, and I agree that it would be nice to be consistent. Because, well, consistency is nice. I was also thrown by the special handling for std::string and had to spend some time in the debugger, serializing things to disk and going through the archives with hexdump. :/ C'est la vie.

But since the binary_archives don't make any claim about portability, it would seem that you should serialize all sizes as std::size_t (often 'plain long'), even if the std library's containers' size_types aren't consistent (which I wasn't aware of until Robert pointed it out; still haven't checked). I'd have to have a look: with a plain binary_archive, is it not possible to save a std::vector with more than std::numeric_limits<uint32_t>::max() elements on a platform where std::vector<T>::size_type is uint64_t?

If you buy that, then you still need to construct a portable_binary_archive and handle container_size_type consistently across architectures (e.g. convert to uint64_t before/after storing/loading, and check for overflow on platforms where the in-memory container_size_type is smaller than the on-disk container_size_type). Then the only remaining difference is that one currently requires an extra override for std::string.
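Roughly this kind of thing is what I mean by "handle container_size_type consistently". This is only a sketch, using plain iostreams instead of the actual archive classes, and the function names are made up for illustration; it just shows the widen-to-uint64_t on save and the overflow check on load:

  // Sketch only: not the Boost.Serialization API, just the conversion
  // and overflow check described above, using plain iostreams.
  #include <cstdint>
  #include <istream>
  #include <limits>
  #include <ostream>
  #include <stdexcept>

  // Always store container sizes as 64-bit little-endian on disk,
  // regardless of what std::size_t is on the saving platform.
  void save_collection_size(std::ostream& os, std::size_t n)
  {
      uint64_t wide = n;                          // widen to the on-disk width
      unsigned char bytes[8];
      for (int i = 0; i < 8; ++i)                 // fixed byte order, independent
          bytes[i] = static_cast<unsigned char>(wide >> (8 * i));  // of host endianness
      os.write(reinterpret_cast<const char*>(bytes), sizeof(bytes));
  }

  std::size_t load_collection_size(std::istream& is)
  {
      unsigned char bytes[8];
      is.read(reinterpret_cast<char*>(bytes), sizeof(bytes));
      uint64_t wide = 0;
      for (int i = 0; i < 8; ++i)
          wide |= static_cast<uint64_t>(bytes[i]) << (8 * i);
      // Refuse to silently truncate when the in-memory size type is
      // narrower than the on-disk one (e.g. a 32-bit platform reading
      // an archive written on a 64-bit platform).
      if (wide > std::numeric_limits<std::size_t>::max())
          throw std::overflow_error(
              "archived collection size exceeds this platform's std::size_t");
      return static_cast<std::size_t>(wide);
  }

-t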