
Michael Marcin wrote:
> I like being able to depend on the order of private members. It allows me to nicely encapsulate my data while using SIMD operations internally.

You use SIMD operations on non-uniform data? What kind?
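If I had to guess, it would be something along these lines (a sketch only; the vec4 class and the SSE intrinsics are my assumption), where four contiguous private floats double as a single SSE register:

#include <cstdio>
#include <xmmintrin.h>   // SSE intrinsics

class vec4 {
public:
    vec4(float x, float y, float z, float w) : x_(x), y_(y), z_(z), w_(w) {}

    // Relies on x_, y_, z_, w_ being contiguous and in declaration order,
    // so all four can be loaded into one SSE register at once.
    void scale(float s) {
        __m128 v = _mm_loadu_ps(&x_);
        v = _mm_mul_ps(v, _mm_set1_ps(s));
        _mm_storeu_ps(&x_, v);
    }

    void print() const { std::printf("%f %f %f %f\n", x_, y_, z_, w_); }

private:
    float x_, y_, z_, w_;   // reordering these would break scale()
};

int main() {
    vec4 v(1.f, 2.f, 3.f, 4.f);
    v.scale(2.f);
    v.print();
}

Code like that does depend on the members staying exactly where they were declared, so I can see why reordering would worry you.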
> I can imagine a cache performance degradation: the compiler reorders my members to make the class smaller, but the constructor must still initialize them in the order they appear in the class definition, so it accesses memory in a scattered pattern instead of at ascending addresses, as it would if the members were laid out in the order in which they were declared.

Possible, but not very likely on today's desktop systems, given typical class sizes. An L1 cache line on x86 usually holds 64 bytes - enough for 16 integers or pointers, and still enough for 8 pointers on x86-64. Thus, any object not exceeding this size always fits into a single cache line, and there is no penalty for accessing that line in random order. And let's not forget that it might be _because_ of the reordering that the object has shrunk to 64 bytes in the first place. On the other hand, if the compiler sees that the object IS larger, it could simply not reorder the members.
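To illustrate the "shrunk by reordering" point, here is a sketch; the exact sizes assume a typical 64-bit ABI with 8-byte doubles and 64-byte cache lines, so treat the numbers as indicative rather than guaranteed:

#include <cstdio>

// Members in the order they might naturally be declared: each char is
// followed by a double, so every pair gets padded out to 16 bytes.
struct declared_order {
    char c0; double d0;
    char c1; double d1;
    char c2; double d2;
    char c3; double d3;
    char c4; double d4;
    char c5; double d5;
    char c6; double d6;
};  // typically 7 * 16 = 112 bytes: two cache lines

// The same members sorted by alignment: the padding collapses to the tail.
struct reordered {
    double d0, d1, d2, d3, d4, d5, d6;
    char   c0, c1, c2, c3, c4, c5, c6;
};  // typically 7 * 8 + 7 = 63, padded to 64 bytes: one cache line

int main() {
    std::printf("declared: %zu bytes, reordered: %zu bytes\n",
                sizeof(declared_order), sizeof(reordered));
}

In a case like this, the reordered object is the one that fits in a single line, so the "random" initialization order costs nothing, while the declaration-order layout spans two lines to begin with.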
I find the argument about binary compatibility more convincing. It would indeed be tricky to work out good, predictable rules.

Sebastian Redl