
on Tue Sep 02 2008, "Giovanni Piero Deretta" <gpderetta-AT-gmail.com> wrote:
>> Great, but what are you doing with these stacks? Normally, the answer would be something like "I'm applying the X algorithm to a range of elements that represent Y but have been adapted to look like Z"
> Usually it is a matter of converting a text document into a point in a feature vector space, as a preprocessing stage:
e.g., for the purposes of search? Just trying to get a feel for what you're describing.
> Most pipelines start with tokenization, normalization, filtering and hashing, with a 'take' operation at the end.
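>
> Concretely, the shape is something like this (a minimal sketch with Boost.Iterator adaptors; normalize/keep/hash_token are simplified stand-ins for my real stages, and the final loop is the 'take'):
>
>     #include <boost/iterator/transform_iterator.hpp>
>     #include <boost/iterator/filter_iterator.hpp>
>     #include <boost/functional/hash.hpp>
>     #include <cctype>
>     #include <string>
>     #include <vector>
>
>     std::string normalize(std::string const& s)   // stand-in: lowercase
>     {
>         std::string r(s);
>         for (std::size_t i = 0; i < r.size(); ++i)
>             r[i] = (char)std::tolower((unsigned char)r[i]);
>         return r;
>     }
>     bool keep(std::string const& s) { return s.size() > 2; }  // stand-in filter
>     std::size_t hash_token(std::string const& s) { return boost::hash<std::string>()(s); }
>
>     std::vector<std::size_t> first_n_features(std::vector<std::string> const& tokens,
>                                               std::size_t n)
>     {
>         typedef std::vector<std::string>::const_iterator base;
>         typedef boost::transform_iterator<std::string (*)(std::string const&), base> norm;
>         typedef boost::filter_iterator<bool (*)(std::string const&), norm> filt;
>         typedef boost::transform_iterator<std::size_t (*)(std::string const&), filt> hashed;
>
>         norm nb(tokens.begin(), &normalize), ne(tokens.end(), &normalize);
>         filt fb(&keep, nb, ne), fe(&keep, ne, ne);
>         hashed hb(fb, &hash_token), he(fe, &hash_token);
>
>         std::vector<std::size_t> out;             // the 'take': nothing upstream
>         for (; hb != he && out.size() < n; ++hb)  // runs until we pull from here
>             out.push_back(*hb);
>         return out;
>     }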
By 'take' you mean a destructive read?
> The resulting range is usually sorted (which implies breaking the laziness),
Yep.
> unique-ed and augmented with score information (another map).
> I very often need to compute the set union and intersection of pairs of these ranges. [I do not yet have a lazy adaptor for this (actually I do, but it is kind of experimental).]
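>
> On the materialized form (sorted by feature id, unique-ed), the eager version is just the standard algorithm; a lazy adaptor would do the same merge-walk without producing the output vector:
>
>     #include <algorithm>
>     #include <cstddef>
>     #include <iterator>
>     #include <utility>
>     #include <vector>
>
>     typedef std::pair<std::size_t, double> scored;   // (feature hash, score)
>
>     bool by_id(scored const& a, scored const& b) { return a.first < b.first; }
>
>     // Inputs sorted by feature id and unique-ed; elements of the result
>     // are taken from the left range (that is set_intersection's rule).
>     std::vector<scored> intersect(std::vector<scored> const& a,
>                                   std::vector<scored> const& b)
>     {
>         std::vector<scored> out;
>         std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
>                               std::back_inserter(out), &by_id);
>         return out;
>     }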
Okay.
> Most of the uses of lazy ranges are in relatively non-performance-critical parts of the application, so I do not aim for absolute zero abstraction overhead.
Okay, so... is it worth breaking up the iterator abstraction for this purpose?
> A nice thing about lazy ranges is that they are useful for controlling the peak memory usage of the application (in fact, if you are careful, you can do very little dynamic memory allocation).
Yes.
> I'm interested in using dynamic iterators in the near future for code decoupling
You mean, like, any_iterator?
> and it would be a pity if these ranges couldn't fit in a small object optimization buffer.
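>
> Roughly what I have in mind, stripped down (not real code; the buffer size here is an arbitrary guess):
>
>     #include <boost/aligned_storage.hpp>
>     #include <cstddef>
>     #include <new>
>
>     // Forward, read-only, non-copyable: just enough to show where the
>     // buffer size question bites.
>     template <class Value>
>     class any_forward_iterator
>     {
>         struct base
>         {
>             virtual ~base() {}
>             virtual Value deref() const = 0;
>             virtual void increment() = 0;
>         };
>         template <class It>
>         struct holder : base
>         {
>             It it;
>             explicit holder(It i) : it(i) {}
>             Value deref() const { return *it; }
>             void increment() { ++it; }
>         };
>
>         enum { sbo_size = 4 * sizeof(void*) };   // arbitrary guess
>         boost::aligned_storage<sbo_size> buf_;
>         base* impl_;
>
>         bool stored_inline() const { return (void const*)impl_ == buf_.address(); }
>
>     public:
>         template <class It>
>         any_forward_iterator(It it)
>         {
>             if (sizeof(holder<It>) <= sbo_size)
>                 impl_ = new (buf_.address()) holder<It>(it);  // fits: no allocation
>             else
>                 impl_ = new holder<It>(it);                   // spills to the heap
>         }
>         ~any_forward_iterator()
>         {
>             if (stored_inline()) impl_->~base();
>             else delete impl_;
>         }
>         Value operator*() const { return impl_->deref(); }
>         any_forward_iterator& operator++() { impl_->increment(); return *this; }
>
>     private:
>         any_forward_iterator(any_forward_iterator const&);            // not copyable
>         any_forward_iterator& operator=(any_forward_iterator const&); // in this sketch
>     };
>
> With a typical filter-over-transform-over-vector composition, sizeof(holder<It>) is already a handful of pointers plus the function objects, so the buffer size matters.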
Do you know how big that target ("small object optimization buffer") is?

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com