
Quote from Robert Ramey:
Hmmm - synchronicity here. 1.43 includes two new case studies. One is a simple, lightweight archive class meant to be used for debug logging. It only handles output and doesn't follow derivation paths for polymorphic base classes. Best part is
I'll have a look, thank you.
I still use something like this, just to avoid the construction overhead of a Boost.Serialization archive when it's not needed.
I'm not sure I understand the motivation for this - but then I don't have to. Note that the implementation of the serialization library relies on template metaprogramming to generate code ONLY for those features actually invoked. So I'm not convinced of the utility of the above approach. Perhaps there's "too much" overhead in the construction of an archive - but someone would have to make a case for this assertion.
In libraries under construction (STLdb, Persistent, and maybe even STM) we (ab)use serialization for copying, cloning, comparing, ... individual objects. So in a lot of cases an archive is constructed, used to serialize exactly ONE object, and then destructed, because archives cannot be reused as their state cannot be reset. (And even if their state could be reset, there would be thread-safety issues.) Consider for example:

    /// creates a deep copy of t
    template<class T>
    T copy(T const &t) {
        {
            memory_oarchive ar;
            ar << t;
        }
        T tmp;
        {
            memory_iarchive ar;
            ar >> tmp;
        }
        return tmp;
    }

copy() instantiated with a very simple type that doesn't serialize pointers results in almost no code, almost as if operator= was used, in case a deep copy doesn't differ from a shallow copy for that type. Almost any archive construction overhead is "too much" overhead here.

Stefan