On 16 October 2016 at 08:36, Michael Marcin wrote:
> It doesn't have the wasted duplicated information that a tuple of std::vectors has.
In your example, which I think is flawed as an example, you create a 20+ GB data structure in memory (I hope I got the maths right); I realised this after a std::bad_alloc was thrown. Some duplicated info in the std::vector(s) seems rather irrelevant by comparison.

This data structure optimises one operation (calculating the average) at the cost of pessimising almost every other operation: relocation when a vector needs to grow, insert, delete, push_back etc. are all done four times in your example, while iterating over the records lands you right in cache-miss heaven. Keeping a running average while you read/input/delete the data gives you the average in O(1) at any time (rough sketch below).

Consider using a pector (https://github.com/aguinet/pector) instead of a vector.

You state that the example is a toy example. To me the example shows that iterating over a vector of smaller objects (PODs) is faster than iterating over a vector of larger objects, duh. The real use case might be more interesting, maybe you can describe it.

degski
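
P.S. A rough sketch of what I mean by a running average; the RunningMean name and the double accumulator are just my choices, nothing taken from your code:

    #include <cstddef>

    // Keeps the mean up to date as values come and go, so average() is O(1) at any point.
    class RunningMean {
        std::size_t n = 0;
        double mean = 0.0;
    public:
        void push(double x) noexcept {            // add a value
            ++n;
            mean += (x - mean) / static_cast<double>(n);
        }
        void pop(double x) noexcept {             // remove a previously pushed value
            if (n > 1) { mean = (mean * n - x) / static_cast<double>(n - 1); --n; }
            else       { n = 0; mean = 0.0; }
        }
        double average() const noexcept { return mean; }   // 0.0 if empty
        std::size_t size() const noexcept { return n; }
    };

Push each value as you read it, pop when you delete a record, and the average is always just a member read, no 20+ GB structure needed.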