
Just to chime in, the modifications I came up with for size_t are very similar. One must be able to treat size_t separately; #ifdefs and overloads of save() in the derived archive type won't cut it. I gather that this isn't big news.

These portability testing mods might come in handy if you're going to make these modifications. Robert, do you want this stuff? I'm worried that the integration could start to become a hassle. It's a lot of trivial changes to test modules, Jamfile tweaks, a modified tmpnam() and a modified remove(), the use of boost::random instead of std::rand() so that A is the same on all architectures, reseeding of the RNGs in certain places, and the ubiquitous class "A" being either a portable one or a non-portable one depending on which archive you're testing.

If you don't specify the "I want to test portability" flag, the tests run as they do now, except that they're unit tests, not test_main() tests. The portability bit comes in when you specify --serialization-testdata-dir=/some/where. I've changed tmpnam() to the macro TESTFILE("unit_test_name"), which returns /some/where/platform/version/compiler/archivetype.unit_test_name (for instance /path/to/Mac_OS/103300/GNU_C__version_4.0.0/portable_binary_archive.hpp.variant_A), and remove(), now finish("unit_test_name"), is a no-op when testing portability (sketched in the P.S. below). This lets you afterwards run a little utility that walks the filesystem at /some/where and compares the checksums of the corresponding archivetype.unit_test_names. So one just points /some/where at a network disk, or writes a little script to do a remote copy, and then runs the comparison.

My hunch is that a checksum won't be a good comparison for XML and text archives, due to variances in the underlying implementations of << for primitive types (I've only been hammering on a portable binary archive), but one could easily use a whitespace-ignorant "diff" or something. That isn't ideal either, as whitespace differences could still conceivably trip things up, but fixing it properly would require extensive modifications to every unit test, and I wasn't going to make those if there was a reasonable chance the changes wouldn't be used.

Another problem is that it isn't easy to plug in your own archive type: one must add files to libs/serialization/test and hack around with the Jamfiles. It needs a better interface.

Matthias, I'm curious what your testing strategy has been, how automated it is, and whether you see such a scheme as being useful...

-t
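P.S. For concreteness, the TESTFILE/finish pair works roughly like this. This is a sketch only: the testdata_dir global is stand-in plumbing for the command-line flag, the mangling of spaces in the macro values is elided, and TESTFILE is shown as a function rather than a macro.

    #include <cstdio>
    #include <string>
    #include <boost/config.hpp>        // BOOST_PLATFORM, BOOST_COMPILER
    #include <boost/version.hpp>       // BOOST_VERSION, e.g. 103300
    #include <boost/lexical_cast.hpp>

    extern std::string testdata_dir;   // set from --serialization-testdata-dir, else empty

    std::string testfile(std::string const& unit_test_name) {
        if (testdata_dir.empty())
            return std::tmpnam(0);     // plain run: throwaway temp file, as before
        // portability run: a stable, platform-tagged path
        return testdata_dir + "/" + BOOST_PLATFORM
             + "/" + boost::lexical_cast<std::string>(BOOST_VERSION)
             + "/" + BOOST_COMPILER    // spaces etc. mangled to '_' in the real thing
             + "/" + unit_test_name;
    }

    void finish(std::string const& filename) {
        if (testdata_dir.empty())
            std::remove(filename.c_str());  // clean up, as the tests do today
        // otherwise keep the archive on disk for the later checksum comparison
    }

On Tue, Oct 11, 2005 at 09:57:04AM +0200, Matthias Troyer wrote: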
There are actually only very few modifications:
1. boost/archive/detail/oserializer.hpp and iserializer.hpp require modifications for the serialization of C arrays of fixed length. In my version, the class save_array_type is modified to dispatch to save_array when fast array serialization is possible (see the sketch after point 2). The underlying problem is that oserializer.hpp itself implements the serialization of a type (the C array!). The optimal solution would be to move the array serialization to a separate header, boost/serialization/array.hpp, as is done for all C++ classes.

2. boost/serialization/vector.hpp is also modified to dispatch to save_array and load_array where possible. I don't think that this is a problem?
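In both cases the dispatch is plain tag dispatching on a trait. Here is a sketch of the save side, assuming the has_fast_array_serialization trait is an MPL boolean; the free-function names are illustrative, not the actual patch:

    #include <cstddef>
    #include <boost/mpl/bool.hpp>

    // conservative default; archives that support bulk transfers for T
    // specialize this to boost::mpl::true_
    template<class Archive, class T>
    struct has_fast_array_serialization : boost::mpl::false_ {};

    // bulk path: the archive writes the whole block in one call
    template<class Archive, class T>
    void save_array_dispatch(Archive& ar, T const* p, std::size_t n,
                             boost::mpl::true_) {
        ar.save_array(p, n);
    }

    // fallback: element-wise, as the library does today
    template<class Archive, class T>
    void save_array_dispatch(Archive& ar, T const* p, std::size_t n,
                             boost::mpl::false_) {
        while (n--)
            ar << *p++;
    }

    template<class Archive, class T, std::size_t N>
    void save_c_array(Archive& ar, T const (&t)[N]) {
        save_array_dispatch(ar, &t[0], N,
            typename has_fast_array_serialization<Archive, T>::type());
    }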
3. I had to introduce a new strong typedef in basic_archive.hpp:
BOOST_STRONG_TYPEDEF(std::size_t, container_size_type)
BOOST_CLASS_IMPLEMENTATION(boost::archive::container_size_type, primitive_type)
I remember that you suggested in the past that this should be done anyway. One reason is that using unsigned int for the size of a container, as you do now, will not work on platforms with a 32-bit int and a 64-bit std::size_t: the size of a container can exceed 2^32. I don't want to always serialize std::size_t as whatever integer type the specific implementation chooses either, since that again would not be portable. By introducing a strong typedef, the archive implementation can decide how to serialize the size of a container.
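For illustration, a portable binary archive could then treat container sizes specially while leaving ordinary integer members alone (portable_binary_oarchive and save_portable_integer are hypothetical names, not part of the patch):

    #include <boost/cstdint.hpp>
    #include <boost/archive/basic_archive.hpp>  // container_size_type, per point 3

    class portable_binary_oarchive /* : public the usual archive bases */ {
    public:
        // container sizes get a fixed on-disk width, independent of the
        // platform's size_t
        void save(boost::archive::container_size_type const& s) {
            boost::uint64_t v = s;
            save_portable_integer(v);
        }
    private:
        void save_portable_integer(boost::uint64_t v);  // e.g. a fixed-endian encoding
    };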
The further modifications to the library in
boost/serialization/collections_load_imp.hpp
boost/serialization/collections_save_imp.hpp
boost/serialization/vector.hpp
were to change the collection serialization to use container_size_type.
I don't think that you will object to this.
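In essence the change is the following (a sketch, not the exact diff; the real headers carry more machinery):

    #include <boost/archive/basic_archive.hpp>  // container_size_type
    #include <boost/serialization/nvp.hpp>

    template<class Archive, class Container>
    void save_collection(Archive& ar, Container const& s) {
        // was: unsigned int count = s.size();
        boost::archive::container_size_type count(s.size());
        ar << BOOST_SERIALIZATION_NVP(count);
        for (typename Container::const_iterator it = s.begin();
                it != s.end(); ++it)
            ar << boost::serialization::make_nvp("item", *it);
    }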
There is actually another, hidden reason for this strong typedef: efficient MPI serialization without the need to copy into a buffer requires that I can distinguish between the special types used to describe the data structure (class id, object id, pointers, container sizes, ...) and plain data members.
Next I have made a few changes to the archive implementations, the only important one of which is:
4. boost/archive/basic_binary_[io]archive.hpp serializes container_size_type as an unsigned int, as has been done until now. It might be better to bump the file version and serialize it as std::size_t.
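In code, the two options look roughly like this (a sketch inside the binary oarchive; save_binary is the primitive the binary archives already provide):

    void save(boost::archive::container_size_type const& t) {
        // as done till now, readable by existing archives:
        unsigned int v = t;         // silently truncates sizes >= 2^32
        save_binary(&v, sizeof v);
        // after a file-version bump one could instead write the full width:
        //   std::size_t w = t;
        //   save_binary(&w, sizeof w);
        // though raw size_t still differs between 32- and 64-bit platforms
        // (binary archives are non-portable anyway).
    }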
All the other changes modify the binary archives and the polymorphic archive to support fast array serialization. In contrast to the points above, this part is optional: instead, we could provide fast_binary_[io]archive and fast_polymorphic_[io]archive, which differ from their normal versions only by supporting fast array serialization. I could live with that as well, although in my opinion it makes more sense to just add the save_array/load_array features to the existing archives.
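For the binary archives, the whole feature boils down to one bulk call (sketch; where the member lives is illustrative):

    template<class T>
    void save_array(T const* address, std::size_t count) {
        // one save_binary instead of 'count' individual saves; valid only for
        // bitwise-serializable T, which the has_fast_array_serialization
        // trait is meant to guarantee
        save_binary(address, count * sizeof(T));
    }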
Of all the points above, I believe that you will not have anything against points 3 and 4, since, if I remember correctly, you proposed something similar in the past.
Issue 2 should also be noncontroversial, so the main discussion should be on issue 1: how to improve the design of [io]serializer so that the implementation of array serialization can move into a separate header.
Matthias