
Hi.
It depends. How long does it take to make a copy of the vector?
Hmm. Let's say it takes 35 seconds, but this method consumes a lot of memory, eventually tripling memory consumption (when thread2.job and thread3.job run in parallel). Are there other, more resource-efficient alternatives?
If you are adding records at a maximal rate of 6 per minute, then for copying the data vector to take 35 seconds, either your process must have been running a really long time, your data elements must be really large, or you must be running in a really slow environment (embedded?). Note, though, that this same copy operation may be triggered by a simple append, and you most likely do not want that, so perhaps such a large vector is not the data structure you need in the first place.

How about splitting your data structure into a list of vectors (something like a deque)? Then you know you will only be appending to the last vector in the sequence, and there is no risk of preceding ones having their iterators invalidated while still in use. There are still many corner cases to cover here depending on your exact needs, and the implementation will be more complex than a simple 'copy the vector when needed' solution, but it will avoid having to hold all your data two or three times in memory. Some corner cases that come to mind (see the sketch after the signature):

* Short locks when adding a new vector while iterating through your existing vector list. Note that this operation must not invalidate existing vector list iterators, or you have the same problem you initially started with.
* Whether you will need multiple locks or at most a single one.
* What to do when you need to append data to the data structure and the last vector in the data structure is still in use - most likely add a new vector to the data structure.
* Whether and when to merge multiple vectors into a single one.
* How to cope with updates to existing records - possibly add a separate mutex to guard read & update operations to allow them to run in parallel.
* If you need efficient index-based random access to your data, you will need more external bookkeeping, which can get really hairy.

The whole task seems like a barrel of fun. :-)

Hope this helps.

Best regards,
Jurko Gospodnetić
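P.S. To make the chunking idea concrete, here is a minimal, untested C++ sketch under stated assumptions: the `Record` type, the `ChunkedStore` name, and the fixed chunk capacity are all made up for illustration, and it only covers the append path and reading completed chunks, not the merging, update, or random-access corner cases from the list above.

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <iterator>
#include <list>
#include <mutex>
#include <vector>

// Hypothetical record type standing in for the real data elements.
struct Record {
    int payload;
};

// Sketch of the "list of vectors" idea: records live in fixed-capacity
// chunks, new records go only into the last chunk, and adding a chunk
// never invalidates iterators into the earlier, completed ones.
class ChunkedStore {
public:
    explicit ChunkedStore(std::size_t chunkCapacity)
        : chunkCapacity_(chunkCapacity)
    {
        addChunk();
    }

    // Append under a short lock; the lock protects only the cheap
    // bookkeeping, never a full copy of the data.
    void append(const Record& r)
    {
        std::lock_guard<std::mutex> guard(mutex_);
        if (chunks_.back().size() == chunkCapacity_)
            addChunk();                  // O(1), earlier chunks untouched
        chunks_.back().push_back(r);     // capacity reserved: no reallocation
    }

    // Visit every record in the chunks that are already full. The
    // boundary iterator is taken under the lock; the completed chunks
    // themselves are never modified again, so iterating them is safe.
    void forEachCompleted(const std::function<void(const Record&)>& f) const
    {
        std::list<std::vector<Record>>::const_iterator last;
        {
            std::lock_guard<std::mutex> guard(mutex_);
            last = std::prev(chunks_.end()); // last chunk may still grow
        }
        for (auto it = chunks_.begin(); it != last; ++it)
            for (const Record& rec : *it)
                f(rec);
    }

private:
    void addChunk()
    {
        chunks_.emplace_back();
        chunks_.back().reserve(chunkCapacity_);
    }

    const std::size_t chunkCapacity_;
    mutable std::mutex mutex_;
    std::list<std::vector<Record>> chunks_;
};

int main()
{
    ChunkedStore store(4); // tiny chunks, just for the demo
    for (int i = 0; i < 10; ++i)
        store.append(Record{i});
    store.forEachCompleted([](const Record& rec) {
        std::cout << rec.payload << ' ';
    });
    std::cout << '\n'; // prints 0 1 2 3 4 5 6 7 (two completed chunks)
}
```

The key property being exploited is that std::list never invalidates iterators to existing nodes when a new node is appended, and each chunk reserves its full capacity up front, so a completed chunk is never reallocated while readers iterate over it. Only the short lock around the bookkeeping is needed, so memory use stays at roughly one copy of the data instead of two or three.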