
Bruno Martínez <br1@internet.com.uy> writes:
I made a test with gcc in which I traverse a list in the forward direction but naively use at to reach the current position each time. There are four variants: homemade_slow uses the first drop definition, homemade_fast uses the second, and mpl_vector and mpl_list use those sequence classes. The times, in seconds, were:
homemade_slow  41
homemade_fast   0.7
mpl_vector      4
mpl_list       11
Memoization has lots of weird effects like this. If you amortize your "fast" access over touching all the elements of the list, you clearly beat mpl::vector. But what happens when you only need to look at the last 5 elements of a 100-element sequence? I wonder how likely that is in practice.
I've been thinking these past few days about other functions that, like drop, could be improved in a similar way, but they have eluded me. I didn't find any mention of similar techniques in the MPL book, so I wonder if I'm making any sense.
You're making plenty of sense; it's something I've thought about quite a bit. But I never quite realized that you can make it "okay" to use O(N) operations as though they were O(1). Maybe this demands a rethink of metaprogram efficiency. If the MPL algorithms and iterators could be reworked to take better advantage of memoization, it might be that list becomes the most efficient data structure overall, again. It would also be very interesting to see what happens to an algorithm like sort when you take forced memoization into account.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com