
Matthias Troyer wrote:
On Oct 9, 2005, at 6:44 PM, Robert Ramey wrote:
I only took a very quick look at the diff file. I have a couple of questions:
It looks like, for certain types (C++ arrays, vector<int>, etc.), we want to use binary_save/load to exploit the fact that in certain situations we can assume the storage is contiguous.
Exactly.
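(For concreteness, here is a minimal sketch of the two code paths being contrasted, assuming a std::vector<int> and a Boost binary output archive. save_binary is part of the archive saving interface, but the functions below and their exact form are illustrative only, not library code.)

#include <cstddef>
#include <vector>
#include <boost/archive/binary_oarchive.hpp>

// What default element-wise serialization amounts to: one save per element.
void save_elementwise(boost::archive::binary_oarchive & oa, const std::vector<int> & v)
{
    const std::size_t count = v.size();
    oa << count;
    for(std::size_t i = 0; i < count; ++i)
        oa << v[i];
}

// What contiguous storage makes possible: a single block write.
void save_contiguous(boost::archive::binary_oarchive & oa, const std::vector<int> & v)
{
    const std::size_t count = v.size();
    oa << count;
    if(count)
        oa.save_binary(&v[0], count * sizeof(int));
}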
Note that there is an example in the package - demo_fast_archive - which does exactly this for C++ arrays. It could easily be extended to cover any other desired types. I believe that using this as a basis would achieve all you desire and more, with a much smaller investment of effort. It would also not require changing the serialization library in any way.
This would lead to code duplication since we would need to overload the serialization of
array
vector
multi_array
valarray
Blitz array
ublas dense vectors
ublas dense matrices
MTL vectors
MTL matrices
...
I don't think this would lead to code duplication. Since the save/load functions are implemented as templates, code is only emitted for those overloads actually invoked by the user program.
not only for demo_fast_archive, but for all archives that need such an optimization. Archives that immediately come to mind are:
binary archives (as in your example)
all possible portable binary archives
MPI serialization
PVM serialization
all possible polymorphic archives
...
Thus we have the problem that M types of data structures can benefit from fast array serialization in N types of archives. Instead of providing MxN overloads in the serialization library, I propose to introduce just one traits class, and to implement just M overloads for the serialization and N implementations of save_array/load_array.
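(A hedged sketch of what that split might look like; apart from save_array itself, the names below are invented for illustration, and a real patch would have to hook into the library's dispatch machinery. The point is only that each container overload tests a single trait, and each archive that can do better specializes the trait and supplies save_array.)

#include <cstddef>
#include <vector>
#include <boost/mpl/bool.hpp>

// The one traits class: false by default, specialized once per (archive, type)
// combination that supports the fast path.
template<class Archive, class T>
struct has_fast_array_serialization : boost::mpl::false_ {};

// Hypothetical archive providing save_array(pointer, count) declares support for int.
class fast_binary_oarchive;   // defined elsewhere for this sketch
template<>
struct has_fast_array_serialization<fast_binary_oarchive, int> : boost::mpl::true_ {};

// One overload per container type (written M times), dispatching on the trait.
template<class Archive, class T>
void save_vector(Archive & ar, const std::vector<T> & v, boost::mpl::true_)
{
    const std::size_t count = v.size();
    ar << count;
    if(count)
        ar.save_array(&v[0], count);   // archive-provided fast path
}

template<class Archive, class T>
void save_vector(Archive & ar, const std::vector<T> & v, boost::mpl::false_)
{
    const std::size_t count = v.size();
    ar << count;
    for(std::size_t i = 0; i < count; ++i)
        ar << v[i];                    // generic element-wise fallback
}

template<class Archive, class T>
void save_vector(Archive & ar, const std::vector<T> & v)
{
    save_vector(ar, v, has_fast_array_serialization<Archive, T>());
}

With this structure, adding a new container costs one overload and adding a new archive costs one save_array implementation plus trait specializations, which is where the M+N figure comes from.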
By making and "archive wrapper" similar to the one in demo_fast_archive one can make special provisions for M special data types. These will then automatically be applicable to all existing archives including the polymorphic versions. Note that I realise that demo_fast_archive uses a class rather than a template. I did this to make the example more clear. But this could have been easily be recast as a template.
Your example is just M=1 (array) and N=1 (binary archive). If I understand you correctly, what you propose requires MxN overloads. With minor extensions to the serialization library, the same result can be achieved with a coding effort of only M+N.
I didn't mean to suggest that demo_fast_archive be used as is. My intention is to show that any existing archive can be extended with overloads for specific types; there is no need to alter the core of the library to do this. Archive classes "know" about specific types in only a few very special cases: NVP and some types used internally by the archive implementations. A key goal of the serialization library has been to maintain this, so as to avoid MxN issues in the library itself. I believe:
a) that by using derivation similar to demo_fast_archive, you can achieve all the goals you desire without modifying the library itself;
b) that this approach will require the smallest amount of additional coding effort;
c) that the result will be applicable to current and future archives without any further coding changes.
Robert Ramey