
Matthias Troyer wrote:
On Nov 24, 2005, at 5:24 PM, Peter Dimov wrote:
David Abrahams wrote:
To be fair, I haven't done the analysis: are you sure your approach doesn't lead to an MxN problem (for M archives and N types that need to be serialized)?
Yes, it does, in theory. In reality it isn't that bad. For each of the M archives, the archive author has already added the necessary overloads for every "fundamental" type that supports optimized array operations. This leaves a smaller number n of user-defined types (n rather than N, since the fundamentals are already covered), times M.
In addition, even if the author of a UDT hasn't provided an overload for a particular archive A, the user can add it himself. The m*n count for a particular codebase is bounded, and the overloads are typically one-liners.
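
For concreteness, a minimal sketch of what such a one-liner looks like; the archive class and the save_array hook are illustrative stand-ins, not actual Boost.Serialization names:

    #include <cstddef>
    #include <cstdio>

    // Hypothetical binary archive exposing an optimized raw-block
    // primitive; a stand-in for a real archive type.
    struct fast_binary_oarchive
    {
        std::FILE* f;
        void save_binary(void const* p, std::size_t bytes)
        {
            std::fwrite(p, 1, bytes, f);
        }
    };

    // A user-defined type whose arrays we want written as one block.
    struct point { double x, y, z; };

    // The "one-liner" overload: for this particular (archive, type)
    // pair, route array serialization through the optimized primitive.
    inline void save_array(fast_binary_oarchive& ar, point const* p, std::size_t n)
    {
        ar.save_binary(p, n * sizeof(point));
    }

Every such overload has the same shape, which is why the m*n count stays manageable in practice.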
What if the number n is infinite (e.g. all possible structs consisting only of fundamental types), which is what Robert calls "bitwise serializable"?
Structs aren't bitwise serializable in general because of padding/packing/alignment. Archives that do not have a documented external format and just fwrite whatever happens to be in memory at the time aren't really archives; they are a very specific subset with limited uses (interprocess communication on the same machine, with the same compiler and the same version) that should not shape the design. ("Archive" implies persistence, and relying on a specific memory layout is not a way to achieve it.)

If you do have such an archive, you can add a single overload SFINAE'd on is_bitwise_serializable instead of separate overloads for every type. That turns this specific 4*inf problem into a 4+inf problem (don't forget that you still need the inf specializations of is_bitwise_serializable, one per type).
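
A sketch of that shape, reusing the same stand-in archive as above; the trait is hand-rolled here and the dispatch uses std::enable_if for brevity (the 2005-era equivalent would be boost::enable_if):

    #include <cstddef>
    #include <cstdio>
    #include <type_traits>

    struct fast_binary_oarchive
    {
        std::FILE* f;
        void save_binary(void const* p, std::size_t bytes)
        {
            std::fwrite(p, 1, bytes, f);
        }
    };

    // Defaults to false: every type that really is safe to copy
    // byte-for-byte must opt in explicitly; these opt-ins are the
    // "inf" specializations in 4+inf.
    template<class T> struct is_bitwise_serializable : std::false_type {};

    struct point { double x, y, z; };
    template<> struct is_bitwise_serializable<point> : std::true_type {};

    // One SFINAE'd overload per archive replaces one overload per
    // (archive, type) pair: 4*inf becomes 4+inf.
    template<class T>
    typename std::enable_if<is_bitwise_serializable<T>::value>::type
    save_array(fast_binary_oarchive& ar, T const* p, std::size_t n)
    {
        ar.save_binary(p, n * sizeof(T));
    }

The overload simply drops out of resolution for types that haven't opted in, so types without the trait still go through the ordinary member-by-member path.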