
At 8:32 PM -0700 8/19/05, Robert Ramey wrote:
Kim Barrett wrote:
(As noted in another thread, I've run across some performance issues, which might be addressed by resetting and reusing archives, rather than constructing new archives all the time.)
So, we have received only anecdotal data on where such performance bottlenecks might be.
Reset and reuse demonstrably helps; the question was why. And Alex Besogonov <cyberax@elewise.com> wrote (15-Aug-2005):
I've managed to pinpoint the bottleneck: it's the containers' reallocation of dynamic storage. Each time serialization is performed, at least one dynamic memory allocation is necessary for each container.
If containers are reused, these allocations are performed only once, because STL containers don't deallocate their underlying storage in their clear()/reset()/... methods.
That seems like more than anecdotal data...
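To make that concrete, here is a minimal sketch (plain STL, nothing Boost-specific) of the behavior Alex is pointing at: a cleared container keeps its capacity, so a reused container allocates only on the first pass, while a freshly constructed one has to allocate every time.

    #include <cstdio>
    #include <vector>

    // Minimal illustration: clear() drops the elements but, in practice,
    // keeps the allocated capacity, so only the first pass allocates.
    int main() {
        std::vector<int> reused;
        for (int pass = 0; pass < 3; ++pass) {
            reused.clear();                  // size -> 0, capacity retained
            for (int i = 0; i < 1000; ++i)
                reused.push_back(i);         // no new allocation after pass 0
            std::printf("pass %d: capacity %zu\n", pass, reused.capacity());
        }
    }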
I'm pretty sure that intertwining runtime DLL loading with all of our clients of the serialization library is just not going to fly
Note that as presently implemented, an archive used for marshalling something like an IPC transaction would be a short operation: open an archive with a string stream, serialize, close the archive, send the string over the IPC connection.
And as noted previously, that approach of creating a new archive for each marshalling operation has a significant performance impact.
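For concreteness, that per-transaction pattern looks roughly like the sketch below (ipc_send is a hypothetical stand-in for our actual transport); every call pays for constructing and tearing down both the string stream and the archive, on top of the serialization itself.

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/serialization/string.hpp>

    // Hypothetical stand-in for whatever IPC transport is actually used.
    void ipc_send(const std::string &payload) {
        std::cout << "sending " << payload.size() << " bytes\n";
    }

    // Fresh stream and archive per marshalling operation, discarded afterwards.
    template <class T>
    void marshal_and_send(const T &transaction) {
        std::ostringstream os;
        boost::archive::text_oarchive oa(os);  // archive header written on construction
        oa << transaction;                     // serialize the transaction
        ipc_send(os.str());                    // hand the marshalled text to IPC
    }

    int main() {
        marshal_and_send(std::string("example transaction payload"));
    }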
My previously suggested solution of deriving from an existing archive class and adding to it would work very well in this case, and would be indistinguishable from a solution which built threading in at a lower level.
Even without the performance implications of creating new archives, this would not be acceptable in our system. These IPC transactions are part of a robot control system. We can withstand some fine-grained latencies due to lock contention for synchronized data structures. Turning off the whole IPC system for however long it takes to load a DLL is something else entirely.