
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Nevin Liber
On 14 March 2013 12:20, Andy Jost wrote:
In some cases, particularly when compile times are more critical than execution times, it would be preferable to let the caller choose the set implementation without losing the advantages of separate compilation.
Do you have any benchmarks to show this isn't in the noise?
What isn't in the noise? Compile times?
It seems the implementation of this would be a straightforward application of type erasure. It seems too easy, in fact, which makes me wonder whether I'm missing something.
*Every* operation becomes slow. Algorithms become really slow, if callable at all.
I don't see how this is justified. The virtual call certainly adds overhead compared to a normal inlined call, but not a huge amount. Algorithms should become slower by a small constant factor, not unusable.
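To make the proposal concrete, here is a minimal C++11 sketch of the kind of type erasure being discussed: an abstract interface for "a set of int" plus a templated adapter that forwards to any concrete set. The names AnyIntSet and IntSetAdapter are hypothetical, not anything from the original post.

    #include <cstddef>
    #include <set>
    #include <unordered_set>

    // Abstract interface: the operations an algorithm needs from "a set of int".
    struct AnyIntSet {
        virtual ~AnyIntSet() {}
        virtual bool insert(int x) = 0;           // true if newly inserted
        virtual bool contains(int x) const = 0;
        virtual std::size_t size() const = 0;
    };

    // Adapter: erases the concrete set type (std::set, std::unordered_set,
    // a set with a custom allocator, ...) behind the interface above.
    template <class Set>
    struct IntSetAdapter : AnyIntSet {
        Set impl;
        bool insert(int x) { return impl.insert(x).second; }
        bool contains(int x) const { return impl.count(x) != 0; }
        std::size_t size() const { return impl.size(); }
    };

An algorithm written against AnyIntSet compiles once and pays one indirect call per operation, which is exactly the "small constant factor" under debate.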
For instance, if you had a "wrapper" that models a sequence container, what iterator category do you pick for it?
The iterator category would exactly match that of the underlying container. The goal is *not* to abstract away the differences between different containers; it is to abstract away the differences between different implementations of the same container. So std::set, tr1::unordered_set, and std::set<...,MyAllocator> can be used interchangeably, but not std::set and std::vector.
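One way to preserve the category, sketched here under assumption: the erased iterator takes the category as a template parameter, so a wrapper over a given container re-exports that container's own tag instead of degrading to a lowest common denominator. Everything below (any_iterator, concept_t, model_t) is hypothetical illustration; copying is omitted for brevity, and a full version would also implement operator-- when Category is bidirectional or stronger.

    #include <cstddef>
    #include <iterator>
    #include <memory>

    template <class Value, class Category>
    class any_iterator {
        // Internal interface for the erased operations.
        struct concept_t {
            virtual ~concept_t() {}
            virtual void next() = 0;
            virtual const Value& get() const = 0;
            virtual bool equals(const concept_t& other) const = 0;
        };
        // Holds a concrete iterator and forwards to it.
        template <class It>
        struct model_t : concept_t {
            It it;
            explicit model_t(It i) : it(i) {}
            void next() { ++it; }
            const Value& get() const { return *it; }
            bool equals(const concept_t& other) const {
                // Assumes both sides wrap the same concrete iterator type.
                return it == static_cast<const model_t&>(other).it;
            }
        };
        std::unique_ptr<concept_t> self_;
    public:
        typedef Category       iterator_category;  // matches the wrapped container
        typedef Value          value_type;
        typedef std::ptrdiff_t difference_type;
        typedef const Value*   pointer;
        typedef const Value&   reference;

        template <class It>
        any_iterator(It it) : self_(new model_t<It>(it)) {}

        any_iterator& operator++() { self_->next(); return *this; }
        const Value& operator*() const { return self_->get(); }

        friend bool operator==(const any_iterator& a, const any_iterator& b) {
            return a.self_->equals(*b.self_);
        }
        friend bool operator!=(const any_iterator& a, const any_iterator& b) {
            return !(a == b);
        }
    };

    // Usage (hypothetical): a wrapper over std::set could expose
    //   any_iterator<int, std::bidirectional_iterator_tag>
    // while one over tr1::unordered_set exposes
    //   any_iterator<int, std::forward_iterator_tag>.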
In any case, is this a good idea?
I can't think of a case I've ever had for choosing the container at run time.
That's not the point. The aim is to compile the algorithm only once. As a real-world example (not too far from my own scenario), say your project takes five minutes on 20 CPUs to compile, and the algorithm in question consumes less than 0.0001% of the overall execution time. Wouldn't you take a 10x hit in algorithm performance to improve the compile time?

-Andy
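To illustrate the compile-once claim, here is a hedged sketch of the split across translation units, reusing the hypothetical AnyIntSet / IntSetAdapter from the earlier sketch; the file names and the count_present function are assumptions for illustration only.

    #include <cstddef>
    #include <set>
    #include <unordered_set>
    #include <vector>

    // algo.h -- the algorithm is declared against the erased interface only.
    std::size_t count_present(const AnyIntSet& s, const std::vector<int>& keys);

    // algo.cpp -- compiled exactly once, however many set types callers use.
    std::size_t count_present(const AnyIntSet& s, const std::vector<int>& keys) {
        std::size_t n = 0;
        for (int k : keys)
            if (s.contains(k)) ++n;   // one virtual call per lookup
        return n;
    }

    // caller.cpp -- switching set implementations never recompiles algo.cpp.
    int main() {
        IntSetAdapter<std::set<int>> ordered;
        IntSetAdapter<std::unordered_set<int>> hashed;
        ordered.insert(7);
        hashed.insert(7);
        std::vector<int> keys;
        keys.push_back(7);
        keys.push_back(8);
        return count_present(ordered, keys) == count_present(hashed, keys) ? 0 : 1;
    }

Only caller.cpp names concrete set types; algo.cpp is compiled once and its object code is reused regardless of which implementation each caller picks, which is the separate-compilation advantage being weighed against the per-operation virtual call.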