
Bruno Martínez <br1@internet.com.uy> writes:
On Thu, 18 Aug 2005 17:42:26 -0300, David Abrahams <dave@boost-consulting.com> wrote:
Bruno Martínez <br1@internet.com.uy> writes:
It probably doesn't make sense to have a language like that on a speed basis. If I don't misunderstand the efficiency appendix of your book, even at<0, myvec> is O(N) because of the similar instantiations the compiler has to go through.
What led you to believe that?
The book. It has to search for the "memoization record" in a list, even if re-instantiating the template would have been simple.
Okay, let's be precise. *Technically*, at<0, myvec> is O(X) -- where X is the number of prior instantiations of at<i,v> -- on most existing compilers, but note that v doesn't have to be myvec nor does X have any relation to size<myvec>::value.
The difference is that this work is not interpreted.
<blink> Sorry, you lost me.
Nothing new. The search for the memoization record is faster because it's built in, and it doesn't go through the process of template instantiation.
Okay, that's one way to look at it. The reasons it is faster are not so important for the purposes of this analysis, so I prefer to say that it can be very difficult to observe this X because the constant is so low compared to the cost of instantiating a single template.
Metrowerks seems to be bringing the elements accessed to the front of the list, even.
I think GCC does it too.
Even without some weird O(N)->O(1) cases, memoization has a software engineering impact. As it's said in the book, it's easier to avoid blobs, for example, because there's no speed hit.
The book says that?
Sorry for putting words in your mouth.
I guess you're suggesting that instantiating N templates with 1 nested typedef might be presumed to be slower than instantiating 1 template with N nested typedefs?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com