On Thursday 17 September 2009 15:10:11, Peter Soetens wrote:
Hi,
I'm trying to find out whether boost::serialization can be used in real-time applications to stream data over a fifo to another process. It is mandatory that no memory allocations happen during serialization. I tested this with a std::vector<double> of 10 elements in combination with the boost::iostreams library.
I'm guessing that the 2 allocations in the serialization path come from a temporary std::string object, when writing the 'serialization::archive' string into the archive.
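For reference, the test described above presumably looks roughly like the following sketch; the binary archive and the back_insert_device sink over a pre-reserved std::vector<char> are my assumptions about the setup, not the original code:

// Rough reconstruction of the test described above (assumed setup).
#include <vector>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/iostreams/device/back_inserter.hpp>
#include <boost/iostreams/stream.hpp>
#include <boost/serialization/vector.hpp>

int main()
{
    namespace io = boost::iostreams;

    std::vector<char> buffer;
    buffer.reserve(1024);   // reserve up front so the sink does not reallocate
                            // while serializing

    io::stream<io::back_insert_device<std::vector<char> > > sink(buffer);

    std::vector<double> data(10, 3.14);

    boost::archive::binary_oarchive oa(sink);  // writes the
                                               // 'serialization::archive' header
    oa << data;
    sink.flush();
    return 0;
}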
You can suppress that by passing no_header to the archive. However, that won't solve the whole problem: the archives internally use STL containers for type registration and object tracking (and maybe other things, I don't know all the details). So even if you can avoid those 2 allocations for a vector<double>, there is no way to avoid allocations in the general case, for arbitrary types, until boost.serialization accepts a custom allocator, or a traits class that handles the registration. That code is statically linked right now, so if you wanted to implement that you'd also have to refactor boost.serialization.

Also note that one archive can only be used for one serialization process - I'm guessing one object in your case. If you serialize 2 objects into 1 archive, they can only be read back in that order from the stream. So you'd have to take the allocations of archive construction into account, too.

The only simple way I see right now is writing your own archive that doesn't derive from boost.serialization's common_?archive, but that comes close to implementing a new serialization system.
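Just to make the no_header flag and the per-object archive construction concrete, a minimal sketch (the sink setup is the same assumed one as above, not your original code):

#include <vector>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/iostreams/device/back_inserter.hpp>
#include <boost/iostreams/stream.hpp>
#include <boost/serialization/vector.hpp>

namespace io = boost::iostreams;

void write_sample(std::vector<char>& out, const std::vector<double>& sample)
{
    io::stream<io::back_insert_device<std::vector<char> > > sink(out);

    // no_header suppresses the 'serialization::archive' signature string,
    // but the archive still allocates internally for its tracking and
    // registration tables.
    boost::archive::binary_oarchive oa(sink, boost::archive::no_header);
    oa << sample;
    sink.flush();
}   // one archive per serialized object, so the allocations made during
    // archive construction are paid on every call

Keep in mind that the reading side then has to construct its binary_iarchive with no_header as well, otherwise it will try to parse a signature that isn't there.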