
But of course you can just zip the XML output, and a binary_nvp_archive is a lot more work than just factoring the tags and indentation out of xml_archive...
You could just store the stuff using binary serialization. Remember, you can always make a small program which reads the binary_?archive and creates the equivalent XML one. If storage space is an issue, I would consider:
a) use Jonathan Turkanis' stream library to make a matched pair of compressed input and output streams.
b) use these streams with a binary archive to create the output file.
c) make a small program which de-serializes these files, perhaps selects the "interesting" part, and re-serializes it to something "readable" like XML.
d) pipe the XML output to your favorite viewer.
I guess I'm basically a lazy person.
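Something like this sketch, say (untested; it assumes a serializable my_particle class, and the gzip filter needs zlib linked in):

    #include <fstream>
    #include <boost/archive/binary_oarchive.hpp>
    #include <boost/iostreams/filtering_stream.hpp>
    #include <boost/iostreams/filter/gzip.hpp>

    // steps a) and b): write a gzip-compressed binary archive.
    // declaration order matters - the archive is destroyed first,
    // then the filtering stream flushes the compressor into the file.
    void save_compressed(const my_particle& p, const char* filename)
    {
        std::ofstream file(filename, std::ios::binary);
        boost::iostreams::filtering_ostream out;
        out.push(boost::iostreams::gzip_compressor());
        out.push(file);
        boost::archive::binary_oarchive oa(out);
        oa << p;
    }

The matching input side is the mirror image: a filtering_istream with a gzip_decompressor pushed in front of an ifstream, feeding a binary_iarchive.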
Anyway, the purpose isn't visualization after the program has run; it's more like
pretty(log_stream) << my_particle;
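One way pretty() might fall out (a sketch; the helper name and the NVP label are made up here) is as a thin wrapper that routes the object through an xml_oarchive:

    #include <ostream>
    #include <boost/archive/xml_oarchive.hpp>
    #include <boost/serialization/nvp.hpp>

    // hypothetical helper: dump any serializable object to a log
    // stream as indented XML
    template <class T>
    void pretty_print(std::ostream& log_stream, const T& t)
    {
        boost::archive::xml_oarchive oa(log_stream);
        oa << boost::serialization::make_nvp("object", t);
    }

Each call emits a complete XML document (header and all), so it's for eyeballing one object at a time rather than a continuous log.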
The random_iarchive is intended as a tool to be used in this process: for instance, I won't sleep well until I have seen a terabyte's worth of events get serialized in one run.... The tests have to be *big*, stressful, lots of data.
I'm thinking just the opposite, that I'll sleep well UNTIL someone tries that.
We have a similar testing infrastructure that I've thrown together... We're a "make" shop... I wasn't sold on bjam. And running classes through all archive types, automatically, is obviously the only way to do it: I put together a few macros to accomplish this in code rather than in a bunch of build-system mechanics. One macro creates tests for one class through all archives. Not sure if they would integrate with Boost.Test so easily, though, and Boost.Test is surely more robust in various ways in case of failure. I can post 'em if you're curious.
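They're nothing fancy - the shape is roughly this (a sketch, not the real code; the helper and macro names are invented, and a real test would fill src with data and compare it against dst):

    #include <sstream>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/archive/text_iarchive.hpp>
    #include <boost/archive/binary_oarchive.hpp>
    #include <boost/archive/binary_iarchive.hpp>
    #include <boost/archive/xml_oarchive.hpp>
    #include <boost/archive/xml_iarchive.hpp>
    #include <boost/serialization/nvp.hpp>

    // round-trip a T through one output/input archive pair
    template <class OArchive, class IArchive, class T>
    void test_roundtrip()
    {
        std::stringstream ss(std::ios::in | std::ios::out | std::ios::binary);
        {
            OArchive oa(ss);
            T src;
            oa << boost::serialization::make_nvp("value", src);
        }
        {
            IArchive ia(ss);
            T dst;
            ia >> boost::serialization::make_nvp("value", dst);
        }
    }

    // one macro, one class, all archive flavors
    #define TEST_ALL_ARCHIVES(T)                                \
        test_roundtrip<boost::archive::text_oarchive,           \
                       boost::archive::text_iarchive, T>();     \
        test_roundtrip<boost::archive::binary_oarchive,         \
                       boost::archive::binary_iarchive, T>();   \
        test_roundtrip<boost::archive::xml_oarchive,            \
                       boost::archive::xml_iarchive, T>();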
I've accommodated myself to bjam and Boost.Test. I've got complaints about both, but I don't want to make an issue of them because it seems that the authors are working hard to address them and I don't want to annoy them. Just for the record, my complaints are:
a) bjam - I just can't understand it.
b) test - changes are made to the development tree which break programs that rely on the test code. Then the corrections aren't promptly made.
Boost.Test is the bedrock of the whole Boost foundation. For Boost.Test the priority has to be:
i) correctness across all platforms Boost supports
ii) backward compatibility
iii) new features
Currently I believe there are a couple of small issues with the test system:
i) it seems we have problems building the test library with SunPro 5.
ii) the last time I ran tests on Borland compilers in release mode, the programs failed in the test library.
iii) the test library won't build with Comeau due to an issue with libcomo and va_arg.
I should note that I've noticed improvements re CW and others here, so I suppose these kinds of questions are being addressed. It may seem that I'm holding the test library to a higher standard than others - I suppose that's true. It's only because it has to be. Maybe we should run the boost tests with the previous version of the library! (Hmmm - that might actually be a good idea.)
Since serialization requires these classes to be registered, it seemed to me there might be a way to do this. But maybe it's all overkill.
I was wondering how to accomplish it. I am in, say,
template <typename T> void random_iarchive::load_override(vector<shared_ptr<T> >), with T = Base.
My random_iarchive has had Base and several types Derived registered with it already. Because I know what Base is (from T), I can easily populate the vector with shared_ptr<Base>, but in order to populate it with Base and a variety of Derived classes, I have to somehow ask the archive what possibilities are registered and choose one... Forgive me if I'm way off base. The whole business of type registration in the archives is still pretty opaque to me, and my gut says that this is either impossible or overkill.
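To make that concrete, here's roughly what I can do today (a sketch; random_size() and the surrounding archive plumbing are elided):

    #include <cstddef>
    #include <vector>
    #include <boost/shared_ptr.hpp>

    // inside random_iarchive: fill a vector of shared_ptr<T> with
    // randomly generated contents
    template <class T>
    void load_override(std::vector<boost::shared_ptr<T> >& v)
    {
        std::size_t n = random_size();   // hypothetical helper
        v.resize(n);
        for (std::size_t i = 0; i < n; ++i)
            v[i].reset(new T);   // always the static type T (= Base);
                                 // picking one of the registered Derived
                                 // types here is the part I can't do
    }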
LOL - the whole business of type registration IS pretty opaque. It's also one hell of a lot harder than it seems at first glance. At least for me. I'm not sure, but BOOST_CLASS_EXPORT might work better for you than explicit registration. It MIGHT be possible to access all the exported types and generate a test automatically, but I haven't looked into this. If it were me, I would just write a test which explicitly tries all the known derived types. I don't like the idea of randomness in tests.
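For reference, the export macro registers a derived type under an external name so it can be serialized through a base pointer without any explicit register_type() calls. Roughly (class bodies elided):

    #include <boost/serialization/export.hpp>

    class Base { /* polymorphic, serializable */ };
    class Derived : public Base { /* ... */ };

    // registers Derived under the string "Derived" so a Base*
    // that really points at a Derived round-trips correctly
    BOOST_CLASS_EXPORT(Derived)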
Actually, now that you mention the memoization_archive, it would be ideal if there were an archive that could do a deep *comparison*, thus eliminating the need to write all those operator==()s. I had thought about this and deemed it impossible, but if you're talking about deep copy... then you've got a real full-of-data workout canned in a function for an arbitrary serializable user class:
    // for each archive flavor A in { xml, text, binary }:
    MyHugeClass src, dst;
    random_iarchive >> src;             // src now swollen with data
    A_oarchive oa(somewhere) << src;
    A_iarchive ia(somewhere) >> dst;
    comparison_archive ca(src) << dst;  // or however that looks
From your one serialize(archive) method, you get XML/text/binary I/O, comparison, and copy.
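A deep copy already falls out the same way today, by round-tripping through an in-memory archive (a sketch; it pays for an intermediate buffer, which is exactly what a dedicated copying archive would avoid):

    #include <sstream>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/archive/text_iarchive.hpp>

    // deep copy via serialization; the archive's pointer tracking
    // preserves the shape of the object graph
    template <class T>
    T deep_copy(const T& src)
    {
        std::stringstream ss;
        {
            boost::archive::text_oarchive oa(ss);
            oa << src;
        }
        T dst;
        boost::archive::text_iarchive ia(ss);
        ia >> dst;
        return dst;
    }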
I envisioned the "memoization" archive as just an adaptor which tweaks the handling of pointers to implement deep copy. This might be useful for storing/recovering data state without creating a bunch of new objects. Its 99% easy then runs into a "small" issue with objects of a derived class serialized through the base class. There it stands for now. Your the first consider "comparison" archive. of course the issues are the same. There also exists the possibility of free implementation of deep copies and comparisons by overriding serialize functions without storing the data at all. If one considers the serialize functions as a "reflection" of the class which it corresponds to then the whole topic spins off the track of serialization. Some day things might look like: class A data about class A /// serializable members automatically generated deep copy, deep compare, and serialization. Food for thought Robert Ramey