
on Mon Sep 01 2008, Arno Schödl <aschoedl-AT-think-cell.com> wrote:
>>> If we subscribe to the rule that ranges must stay valid for their iterators to be valid,
>> I don't. I do subscribe to the rule that generic code cannot afford to destroy an arbitrary range while its iterators are still in use.
> Isn't that what I said?
No, but it might be what you meant.
There may be iterators that work without their ranges, but in general they don't.
>>> The adapted_range::iterator can use the common data stored in the range, while the adapted_iterator stores the common data itself. Both could even be derived from the same source code.
>> Yeah, that's still a lot of coding effort.
> I think you could write it generically, a la iterator_facade/adaptor, so it is a one-time fixed cost.
I'm not sure if it's worth the effort.
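For reference, here is a rough sketch of the two layouts under discussion, using Arno's names adapted_range and adapted_iterator as placeholders (an illustration only, not existing Boost code):

    // The range holds the common data (predicate, underlying end) once and
    // hands out lean iterators that refer back to it...
    template <class It, class Pred>
    struct adapted_range
    {
        It   first;
        It   last;
        Pred pred;                         // common data stored once, in the range

        struct iterator
        {
            It                   pos;
            adapted_range const* parent;   // borrows pred/last from the range
            // increment would skip elements for which parent->pred(*pos) is false
        };
    };

    // ...versus a self-contained iterator that carries the common data itself
    // and therefore works with no range object at all.
    template <class It, class Pred>
    struct adapted_iterator
    {
        It   pos;
        It   last;                         // common data duplicated in every iterator
        Pred pred;
        // increment would skip elements for which pred(*pos) is false
    };

A begin/end pair of adapted_iterators stores last and pred twice; a begin/end pair of adapted_range::iterators stores only two positions and two pointers back to the range.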
> Do you then still need a factored iterator?
You need to be able to take two adapted iterators and turn them into a range. Do you want that range to contain redundant data? I don't.
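Continuing the hypothetical sketch above, turning a begin/end pair of self-contained iterators back into a range would factor the shared state out again, keeping one copy and discarding the duplicates:

    // Hypothetical helper, not Boost code: build a range from two fat
    // iterators, keeping a single copy of the shared state.
    template <class It, class Pred>
    adapted_range<It, Pred>
    make_adapted_range(adapted_iterator<It, Pred> const& b,
                       adapted_iterator<It, Pred> const& e)
    {
        adapted_range<It, Pred> r = { b.pos, e.pos, b.pred };  // e's copies are dropped
        return r;
    }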
> Or do you want to avoid people having to use the range abstraction?
I certainly don't want to force it on them.
> Ok, now I understand. The debate is about primacy of ranges or iterators.
I'm not sure I agree.
> You propose that iterators stay the primary vehicle
Let's say, I propose that they are first-class citizens that work without an underlying Range object.

    char* buffer = malloc(lotsabytes);

How am I going to operate on that if there needs to be an underlying range object? There isn't one; the iterators stand alone. (Ranges can be first-class citizens too if you like).
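For what it's worth, a minimal sketch of that point: the standard algorithms already operate on a bare pair of pointers, with no range object anywhere (error handling omitted):

    #include <algorithm>
    #include <cstdlib>

    void example(std::size_t lotsabytes)
    {
        char* buffer = static_cast<char*>(std::malloc(lotsabytes));
        std::fill(buffer, buffer + lotsabytes, '\0');            // the two pointers alone delimit the sequence
        char* p = std::find(buffer, buffer + lotsabytes, 'x');   // likewise
        (void)p;
        std::free(buffer);
    }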
> and to convert them to/from ranges by stripping the common information.
As an optimization.
> But that would mean that there is no "lean" iterator; all iterators would contain the redundant information.
I think int* is plenty lean. I don't know what you mean by "lean iterator," actually. Either you store the common information in the iterator itself, or the iterator holds pointers or references to it; either way there will be redundant bits in a pair of filter_iterators.
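To make those redundant bits concrete, a small sketch with boost::filter_iterator (is_even is just a placeholder predicate):

    #include <boost/iterator/filter_iterator.hpp>

    struct is_even
    {
        bool operator()(int i) const { return i % 2 == 0; }
    };

    void example(int* first, int* last)
    {
        typedef boost::filter_iterator<is_even, int*> filter_it;
        filter_it b(is_even(), first, last);  // stores the predicate and `last`
        filter_it e(is_even(), last,  last);  // stores the predicate and `last` again
        // Together the begin/end pair holds two copies of the predicate and
        // two copies of `last` -- the redundancy in question.
        (void)b; (void)e;
    }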
> When stacking N difference_ranges, the size difference between "fat" and "lean" iterators is a factor of 2^N. Thus in fully generic code, where you don't know anything about the stacking depth, even generating a single fat iterator carries a potentially exponential penalty.
You've lost me. Maybe you'd better spell out what you mean by "fat" and "lean," and by "generating a fat iterator."
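For what it's worth, one plausible reading of the 2^N figure (a guess, not something spelled out above): a "fat" adapted iterator carries two iterators of the sequence it adapts, so every additional layer of adaptation doubles the number of base iterators stored.

    // Hypothetical sketch of that blow-up: a fat adapted iterator carrying
    // two iterators of the underlying sequence.
    template <class It>
    struct fat_adapted_iterator
    {
        It pos;   // current position in the underlying sequence
        It last;  // underlying end, needed e.g. to know when to stop skipping
    };

    // Stacking doubles the stored base iterators at every level:
    //   fat_adapted_iterator<int*>                          -- 2 pointers
    //   fat_adapted_iterator< fat_adapted_iterator<int*> >  -- 4 pointers
    //   ...N levels deep                                    -- 2^N pointers
    // whereas a "lean" iterator that borrows the shared state from its range
    // grows only linearly (roughly one extra pointer per level).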
> This fact makes me think that range is not merely a fancy name for a pair of iterators but a concept in its own right between containers and iterators. Generic algorithms must be written on the basis of ranges rather than iterators or take a significant performance hit.
Have you actually measured anything yet? I think you're being at least very hasty. If you do something that slows down operation on pairs of pointers, you won't be doing anyone favors.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com