
On 24.09.2011, at 01:30, Dave Abrahams wrote:

> on Fri Sep 23 2011, Sebastian Redl <sebastian.redl-AT-getdesigned.at> wrote:
>
>> On 22.09.2011 17:57, Dave Abrahams wrote:
>>
>>> on Thu Sep 22 2011, Sebastian Redl <sebastian.redl-AT-getdesigned.at> wrote:
>>>
>>>> Overload resolution is supposed to be linear in the number of
>>>> overloads.
>>>
>>> According to whom?
>>
>> The C++ standard has a footnote that outlines a linear algorithm for
>> overload resolution. Clang follows this algorithm, and I suspect
>> pretty much every other compiler does as well. Therefore, if
>> resolution is superlinear, it's a bug.
> I know this is a technicality, but...
>
> The fact that an efficient algorithm for X exists does not mean every
> author of code that does X has promised to use that algorithm. It
> also doesn't guarantee that factors not considered in the description
> of the algorithm (name mangling, generation of debug info, etc.)
> won't make it inefficient in practice.
True, although I would still believe that this generally means there's
room for improvement. All general guidelines aside, though: if Clang
is superlinear in overload resolution, I want to know about it.
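Concretely, the kind of test case I have in mind is sketched below. In
practice the overloads would be script-generated; three of them stand
in for N here:

    // overload_stress.cpp -- sketch: N overloads of f, one call site.
    // Time `clang++ -fsyntax-only overload_stress.cpp` for growing N;
    // if resolution is linear, compile time should be too (parsing
    // the N declarations is itself linear).

    template <int N> struct Tag {};

    // Each overload differs only in its tag type, so the single call
    // below sees all N functions as candidates.
    void f(Tag<0>) {}
    void f(Tag<1>) {}
    void f(Tag<2>) {}
    // ... and so on up to Tag<N-1>

    int main() {
        f(Tag<1>{}); // one call, N candidates to rank
    }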

>> In general, all algorithms in a compiler should be linear, or worst
>> case n*log(n). Any quadratic or worse algorithm is pretty much a
>> bug.
>
> I'd like to think so, too, but I'm not sure all implementors would
> agree with you.
I can't speak for any other compilers, but I'm pretty sure Ted and
Doug would agree with me about the main compilation pass of Clang. We
make exceptions for emitting errors, and of course for the static
analyzer, whose path-sensitive checks are inherently exponential.
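Inherently, because a path-sensitive check distinguishes execution
paths, and n independent branches already give 2^n of them. A trivial
sketch:

    // Three independent branches => 2^3 = 8 feasible paths. A checker
    // that tracks the exact value of x must explore each of them, and
    // 2^n of them in general -- not a bug, just the nature of path
    // sensitivity.
    void g(bool a, bool b, bool c) {
        int x = 0;
        if (a) x += 1;
        if (b) x += 2;
        if (c) x += 4;
        // x can be any of 0..7 here, one value per path
    }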

> I'm sure that even with Clang one can demonstrate metaprograms whose
> compilation in principle ought to be linear but are slower than that
> in practice.

And I would love to see such metaprograms and figure out why that is.
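If anyone wants a starting point, below is a sketch of the kind of
program I would expect to show the effect you describe (my
construction, not something anyone has measured): the instantiation
count is linear in the depth, but each use<T> drags a one-level-deeper
nested type into its mangled name, so the total symbol output -- and
any debug info derived from it -- grows superlinearly.

    // mangle_growth.cpp (compile with -std=c++11 -c): linearly many
    // instantiations, but use<T>'s mangled name grows with the
    // nesting depth of T, so total symbol bytes grow superlinearly.

    template <typename T> struct Wrap { T t; };

    template <int N> struct Deep {
        using type = Wrap<typename Deep<N - 1>::type>;
    };
    template <> struct Deep<0> { using type = int; };

    template <typename T> void use(T) {} // one symbol per depth

    template <int N> void emit_all() {
        use(typename Deep<N>::type{});
        emit_all<N - 1>();
    }
    template <> void emit_all<0>() {}

    int main() {
        emit_all<200>(); // raise 200 and plot compile time vs. depth
    }

Sebastian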