
From: On Behalf Of Jeff Flinn Sent: Tuesday, December 11, 2012 19:56
You can avoid some of the above multiple allocations, copies and traversals of data by composing the appropriate combination of boost::iostreams filters and sinks/sources.
I'm not sure what you mean here. I've got one file_source and one multichar_input_filter to watch the progress, but the slowdown doesn't seem to come from there: when I deactivate the filter, it takes just as long as before.
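
(For reference, a byte-counting filter of the kind described here can be composed with a file_source roughly as in the sketch below. This is not the code from this thread; the file name and total byte count are placeholders.)

// A minimal sketch of a progress-watching multichar_input_filter
// composed with a file_source via a filtering_istream.
#include <boost/iostreams/concepts.hpp>
#include <boost/iostreams/device/file.hpp>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/operations.hpp>
#include <cstddef>
#include <iostream>

class progress_filter : public boost::iostreams::multichar_input_filter {
public:
    explicit progress_filter(std::size_t total) : total_(total), read_(0) {}

    template<typename Source>
    std::streamsize read(Source& src, char* s, std::streamsize n)
    {
        // Forward the read to the underlying source and count the bytes.
        std::streamsize result = boost::iostreams::read(src, s, n);
        if (result > 0) {
            read_ += static_cast<std::size_t>(result);
            std::cerr << "read " << read_ << " / " << total_ << " bytes\r";
        }
        return result;  // -1 signals end of stream
    }

private:
    std::size_t total_;
    std::size_t read_;
};

int main()
{
    namespace io = boost::iostreams;
    io::filtering_istream in;
    in.push(progress_filter(/*total=*/0));    // total size is illustrative
    in.push(io::file_source("archive.xml"));  // placeholder file name
    // 'in' is a std::istream and can be handed directly to an xml_iarchive.
}
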
Using the text archive would certainly reduce the overall archive size, avoid the need for base64 conversion and simplify parsing during de-serialization.
I like the XML archive as it makes the file easy to edit if needed, which is not true of the text archive. If I wanted something uneditable I'd rather use the binary archive, which serializes my matrix very fast.
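
(For comparison, only the archive type changes between the three flavours; the serialization code itself stays the same. A minimal sketch, with a placeholder string standing in for the real matrix data:)

#include <boost/archive/xml_oarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/nvp.hpp>
#include <fstream>
#include <string>

int main()
{
    std::string matrix_str = "...";  // stands in for the real matrix data

    {   // XML: human-editable, but slowest to parse back
        std::ofstream os("matrix.xml");
        boost::archive::xml_oarchive ar(os);
        ar & BOOST_SERIALIZATION_NVP(matrix_str);
    }
    {   // text: smaller and faster, not meant for hand editing
        std::ofstream os("matrix.txt");
        boost::archive::text_oarchive ar(os);
        ar & BOOST_SERIALIZATION_NVP(matrix_str);  // name is ignored here
    }
    {   // binary: fastest and most compact; open the stream in binary mode
        std::ofstream os("matrix.bin", std::ios::binary);
        boost::archive::binary_oarchive ar(os);
        ar & BOOST_SERIALIZATION_NVP(matrix_str);  // name is ignored here
    }
}
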
Profiling the operation should help otherwise.
I'm not sure how to achieve that. On Linux I use gprof, but unfortunately I'm working on Windows. Is there any free profiling tool I can use on Windows? In any case, when I step through the serialization code, all the time is actually spent in the serialization line: ar & BOOST_SERIALIZATION_NVP(matrix_str); and my filter reports that almost all the data have been read from the file. If I break in my debugger, I end up in the middle of Boost.Spirit functions. It seems to me that it has something to do with the parsing...
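
(One profiler-free way to confirm this is to time the single statement in isolation. A minimal sketch, assuming the archive holds a std::string named matrix_str; the file name is a placeholder:)

#include <boost/archive/xml_iarchive.hpp>
#include <boost/serialization/nvp.hpp>
#include <ctime>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream is("matrix.xml");  // placeholder file name
    boost::archive::xml_iarchive ar(is);

    std::string matrix_str;
    std::clock_t start = std::clock();
    ar & BOOST_SERIALIZATION_NVP(matrix_str);  // the line in question
    std::clock_t end = std::clock();

    // If this dominates the total runtime, the cost is in the XML parsing.
    std::cout << "deserialization took "
              << double(end - start) / CLOCKS_PER_SEC << " s\n";
}
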