
Up until yesterday, I had accepted this as the last word on the topic and had resolved to make a strong type for collection sizes - or to encourage Matthias to do so - depending on what was convenient. Then I came upon the fact that each STL collection C has its own predefined size type, C::size_type. I found this quite by accident while reviewing the SGI STL documentation. It never occurred to me to look there, as it would never have occurred to me that different collections might have different types for their size. Now that I know it is there, what implications does it have? I expect that C::size_type is usually or always implemented as a typedef - NOT a strong type - so using it might be problematic.

I just thought I would throw that into the pot. I don't have a strongly held opinion on this particular subject. But I'm bumping the archive implementation version from 3 to 4 with this release, and collections will have a tiny bit of conditional code to handle older archives in any case, so now would be a convenient time to make changes.
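For example - I would expect something like the following to compile on typical implementations, which would mean size_type is indistinguishable from std::size_t as far as overload resolution is concerned:

    #include <vector>
    #include <cstddef>
    #include <boost/static_assert.hpp>
    #include <boost/type_traits/is_same.hpp>

    // if this compiles, vector's size_type is the very same type as
    // std::size_t - so nothing can overload or specialize on it
    BOOST_STATIC_ASSERT((boost::is_same<
        std::vector<int>::size_type,
        std::size_t
    >::value));

Robert Ramey

Matthias Troyer wrote: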
Dear Robert, dear all,
Let me try to stop the explosive growth of this thread by summarizing the problems. I think Robert's strong typedef proposal is the best solution, and I will argue why.
The first problem with the current state is that it does not allow collections with more than 4G elements to be serialized. Another serious problem is that it does not allow an archive to treat a collection size differently from the integral type used to represent it. That feature is useful for portable binary archives, and is absolutely essential for serialization using MPI archives, where we need to treat size types differently from integers (I don't want to go into details here since that would only distract from the discussion).
The need to distinguish size types from integers in some archives rules out choosing any plain integer type to represent sizes. Furthermore, there will never be a consensus as to which integral type is best: if I want to store a 4G+ collection, I will vote for a 64-bit integer type, while if I want to serialize millions of short containers, I would hate to waste the memory needed for 64-bit size types.
Fortunately there is an elegant solution: use a "strong typedef" to distinguish container sizes from an unsigned int (or std::size_t), and let the archive decide how to represent it, just as Robert suggests:
On Feb 12, 2006, at 10:42 PM, Robert Ramey wrote:
Even if a strong type is used, it is neither necessary nor desirable to add it to every archive.
The procedure would be:

create a header boost/collection_size.hpp which would contain something like:

    #include <cstddef>
    #include <boost/strong_typedef.hpp>
    #include <boost/serialization/level.hpp>

    namespace boost {
        // now we have a distinct collection size type
        BOOST_STRONG_TYPEDEF(std::size_t, collection_size_t)
    }

    // no versioning or class information, for efficiency reasons
    BOOST_CLASS_IMPLEMENTATION(
        boost::collection_size_t,
        boost::serialization::object_serializable
    )
This will work with all existing archives, and the serialize function:
    template<class Archive>
    void serialize(Archive & ar, collection_size_t & t, const unsigned int /* version */){
        ar & t; // if it is converted automatically to size_t
        // or
        ar & static_cast<std::size_t &>(t); // if not converted automatically
    }
is, in my experience, not actually needed. I have implemented this solution, and it passes all regression tests.
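For reference, the strong typedef macro generates roughly the following - a distinct struct convertible to and from its underlying type, so it participates in overload resolution as its own type (a simplified sketch; the real macro also generates assignment and comparison operators):

    struct collection_size_t {
        std::size_t t;
        collection_size_t() : t(0) {}
        explicit collection_size_t(const std::size_t t_) : t(t_) {}
        operator const std::size_t & () const { return t; }
        operator std::size_t & () { return t; }
    };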
With this solution, existing archives will continue to work, and any programmer who wants or needs to serialize size types differently from std::size_t can overload the serialization of collection_size_t in their archive. Thus everybody's wishes can be granted, and I think we should go for it, as we had already discussed last November.
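As a sketch of what such an overload could look like - the toy_oarchive class below is hypothetical, purely to illustrate the mechanism (a real archive would hook collection_size_t through its save_override customization point instead):

    #include <cstddef>
    #include <iostream>
    #include <boost/strong_typedef.hpp>

    BOOST_STRONG_TYPEDEF(std::size_t, collection_size_t)

    // toy output "archive" - not a real Boost archive
    class toy_oarchive {
    public:
        // default: store the raw representation
        template<class T>
        toy_oarchive & operator&(const T & t){
            std::cout << "raw: " << t << "\n";
            return *this;
        }
        // sizes get their own, e.g. variable-length, encoding
        toy_oarchive & operator&(const collection_size_t & t){
            std::cout << "compact size: " << std::size_t(t) << "\n";
            return *this;
        }
    };

    int main(){
        toy_oarchive ar;
        collection_size_t n(3);
        ar & n;   // picks the collection_size_t overload
        ar & 42;  // picks the generic overload
        return 0;
    }

The point is simply that, once collection_size_t is a distinct type, overload resolution lets the archive pick a special representation for sizes without affecting any other std::size_t value.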
Matthias