
Actually, I can report from experience that hand-rolling my own stack-based vector sped up my code by at least an order of magnitude, so the benefit is not theoretical.

My situation was that I was working with vectors that were relatively small and whose upper bound I knew at compile time. Most of the computation was just bitwise operations such as XORs, so when I profiled the code I found that the majority of the time was spent allocating and freeing memory for std::vector. Even when I explicitly reserved memory (so that std::vector would only allocate once) there was still significant overhead. Replacing std::vector with a stack-based vector eliminated that overhead entirely.

In fact, it was exactly this performance improvement that inspired me to start this thread in the first place.
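To give a concrete idea of what I mean, here is a rough sketch (illustrative only, not my actual code; the element type and capacity in the usage snippet are made up, and it only handles trivially copyable element types -- a production-quality version would use aligned raw storage and placement new instead of default-constructing the whole array):

#include <cstddef>
#include <cassert>

// Fixed-capacity vector that lives entirely on the stack.
// N is the compile-time upper bound; no heap allocation ever occurs.
template <typename T, std::size_t N>
class stack_vector
{
    T data_[N];           // elements are default-constructed up front
    std::size_t size_;

public:
    stack_vector() : size_(0) {}

    void push_back(T const& value)
    {
        assert(size_ < N);   // the caller guarantees the bound
        data_[size_++] = value;
    }

    void clear() { size_ = 0; }
    std::size_t size() const { return size_; }

    T&       operator[](std::size_t i)       { return data_[i]; }
    T const& operator[](std::size_t i) const { return data_[i]; }

    T*       begin()       { return data_; }
    T*       end()         { return data_ + size_; }
    T const* begin() const { return data_; }
    T const* end()   const { return data_ + size_; }
};

With something like this, the hot loop never touches the heap, e.g.:

    stack_vector<unsigned, 64> a, b;
    // ... fill a and b to the same size ...
    for (std::size_t i = 0; i < a.size(); ++i)
        a[i] ^= b[i];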
Cheers,
Greg

On 1/22/11 11:52 PM, Emil Dotchevski wrote:
On Sat, Jan 22, 2011 at 11:13 PM, Stephan T. Lavavej <stl@exchange.microsoft.com> wrote:
The As If Rule always applies, but I can confidently say that nobody's compiler and Standard Library implementation conspires in such a way.

Right, so it remains theoretical.
My point was that the benefits of a stack-based vector type are also theoretical. Practically speaking, if std::vector is causing performance problems, I don't see myself thinking "ok, I need a stack-based vector." In that case, it makes more sense to me to throw all abstraction out and get down to the metal.
Emil Dotchevski
Reverge Studios, Inc.
http://www.revergestudios.com/reblog/index.php?n=ReCode