
Jeffrey Lee Hellrung, Jr. wrote:
Repeatedly adapted ranges might have a **very** large size. For example, on Mac OS X 10.7 with gcc-4.7:
std::vector<int> v;
sizeof(v | uniqued)                               --> 32
sizeof(v | uniqued | uniqued)                     --> 80
sizeof(v | uniqued | uniqued | uniqued)           --> 176
sizeof(v | uniqued | uniqued | uniqued | uniqued) --> 368

// OvenToBoost's taken adaptor
sizeof(v | taken(1))                                  --> 48
sizeof(v | taken(1) | taken(1))                       --> 112
sizeof(v | taken(1) | taken(1) | taken(1))            --> 240
sizeof(v | taken(1) | taken(1) | taken(1) | taken(1)) --> 496

sizeof(v | strided(1))                                        --> 80
sizeof(v | strided(1) | strided(1))                           --> 272
sizeof(v | strided(1) | strided(1) | strided(1))              --> 848
sizeof(v | strided(1) | strided(1) | strided(1) | strided(1)) --> 2576
Wow. I might investigate what this looks like with the alternative range adaptor implementation I've been suggesting (i.e., a lazy implementation rather than the iterator-based implementation presently in Boost.Range).
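For reference, numbers like those above can be reproduced with a short, self-contained program along the following lines. This is only a sketch: uniqued and strided are stock Boost.Range adaptors, taken comes from OvenToBoost and is omitted here, and the exact figures depend on the platform, compiler, and Boost version.

#include <iostream>
#include <vector>
#include <boost/range/adaptor/strided.hpp>
#include <boost/range/adaptor/uniqued.hpp>

int main()
{
    using namespace boost::adaptors;
    std::vector<int> v;

    // sizeof's operand is unevaluated, so no adapted range is actually built;
    // only the size of each adapted-range type is reported.
    std::cout << sizeof(v | uniqued)                     << '\n'
              << sizeof(v | uniqued | uniqued)           << '\n'
              << sizeof(v | uniqued | uniqued | uniqued) << '\n'
              << sizeof(v | strided(1))                  << '\n'
              << sizeof(v | strided(1) | strided(1))     << '\n';
}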
Range forwarders (e.g. uniqued, taken(1), strided(1), etc.) are generally lightweight, so the lazy implementation should also be lightweight.

As for moved_range, avoiding caching the adapted range greatly reduces its size. Indeed, whether or not to cache the adapted range was one of my major concerns in implementing moved_range. moved_range stores a moved container and a fusion vector of range forwarders. Without caching an adapted range, the container has to be adapted each time begin or end is called, so a single pair of begin/end calls requires adapting two pairs of begin/end iterators. That seemed inefficient, so I chose to cache an adapted range in moved_range. However, I cannot predict how badly this size problem affects performance, because my knowledge of copy elision is scarce...

Regards,
Michel
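For illustration, here is a highly simplified sketch of the two designs discussed above. It is not OvenToBoost's actual moved_range: the class and member names are hypothetical, a single strided adaptor stands in for the fusion vector of range forwarders, and C++14 return type deduction is used for brevity.

#include <iostream>
#include <utility>
#include <vector>
#include <boost/range/adaptor/strided.hpp>

namespace sketch {

using boost::adaptors::strided;

// Design A (no caching): store only the moved container and the adaptor
// argument, and re-adapt on every begin()/end() call. The object stays
// small, but a single pair of begin()/end() calls performs two adaptations
// of the underlying container.
class lazy_moved_range
{
    std::vector<int> container_;
    int step_;

public:
    lazy_moved_range(std::vector<int>&& c, int step)
        : container_(std::move(c)), step_(step) {}

    // The temporary strided range dies immediately, but its iterators only
    // hold copies of iterators into container_, so they remain valid.
    auto begin() const { return (container_ | strided(step_)).begin(); }
    auto end()   const { return (container_ | strided(step_)).end(); }
};

// Design B (caching, the choice described above): adapt once in the
// constructor and store the adapted range next to the moved container,
// trading object size for cheap begin()/end().
class cached_moved_range
{
    std::vector<int> container_;
    decltype(std::declval<std::vector<int>&>() | strided(1)) adapted_;

public:
    cached_moved_range(std::vector<int>&& c, int step)
        : container_(std::move(c)), adapted_(container_ | strided(step)) {}

    // Copying would leave adapted_ pointing into the source's container;
    // the real implementation must handle this, the sketch simply forbids it.
    cached_moved_range(cached_moved_range const&) = delete;
    cached_moved_range& operator=(cached_moved_range const&) = delete;

    auto begin() const { return adapted_.begin(); }
    auto end()   const { return adapted_.end(); }
};

} // namespace sketch

int main()
{
    std::vector<int> v = {1, 2, 3, 4, 5, 6};

    sketch::lazy_moved_range   lazy(std::vector<int>(v), 2);
    sketch::cached_moved_range cached(std::vector<int>(v), 2);

    // The cached variant is larger by roughly one adapted range.
    std::cout << "sizeof(lazy)   = " << sizeof(lazy)   << '\n'
              << "sizeof(cached) = " << sizeof(cached) << '\n';

    for (int x : lazy)   std::cout << x << ' ';   // 1 3 5
    std::cout << '\n';
    for (int x : cached) std::cout << x << ' ';   // 1 3 5
    std::cout << '\n';
}

The cached variant carries the adapted range on top of the container, which is where the extra size comes from; the lazy variant stays small but pays one adaptation per begin()/end() call.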