
On Nov 24, 2005, at 5:24 PM, Peter Dimov wrote:
David Abrahams wrote:
To be fair, I haven't done the analysis: are you sure your approach doesn't lead to an MxN problem (for M archives and N types that need to be serialized)?
Yes, it does, in theory. In practice it isn't that bad. For each of the M archives, the archive author has already added the necessary overloads for every "fundamental" type that supports optimized array operations. That leaves some number n of user-defined types (n being smaller than N, since the fundamental types are already covered), times M.
In addition, even if the author of a UDT hasn't provided an overload for a particular archive A, the user can add it himself. The m*n number of overloads for a particular codebase is bounded, and the overloads are typically one-liners.
What if the number n is infinite (e.g. all possible structs consisting only of fundamental types), which is what Robert calls "bitwise serializable"?

Matthias