
From: "Andrew Holden"
3: Look up the value in the first array to get a code block number, then look up that code block number in the second array to get a pointer.
4: Go to the pointer you just got.
The actual blocks of code are laid out right after the goto in step 4, in the same order as in the source code. Each break becomes a goto to the end of the switch. If you leave out a break, the processor really does JUST fall through to the next case. (The sketch below emulates this two-array layout.)
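To picture the two-array scheme, here is a small standalone C++ sketch, my own illustration rather than actual compiler output. The function names are made up, and function pointers stand in for the labels a real compiler would goto:

```cpp
#include <array>
#include <cstdio>

// Stand-ins for the case bodies; a compiler would emit labels and gotos,
// not function calls.
static int case_0_and_1() { return 10; }  // two case values sharing one body
static int case_2()       { return 20; }
static int default_case() { return -1; }

static int dispatch(unsigned v) {
    // First array: case value -> code block number. Values 0 and 1 share
    // block 0; anything out of range maps to the default block (2).
    static constexpr unsigned char block_of[] = {0, 0, 1};
    // Second array: code block number -> pointer to the block.
    static constexpr std::array<int (*)(), 3> blocks = {
        case_0_and_1, case_2, default_case};
    unsigned b = v < 3 ? block_of[v] : 2;
    return blocks[b]();  // step 4: "go to the pointer you just got"
}

int main() {
    for (unsigned v = 0; v < 5; ++v)
        std::printf("switch(%u) -> %d\n", v, dispatch(v));
}
```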
All in all, if you have a monster switch statement, the compiler-generated code is probably about as fast as you can get. The dispatch takes less than a dozen machine instructions, and it runs in constant time.
From: "James Jones"
Actually, this is not particularly good. For one thing, it's not constant time in general: a direct-indexed jump table only works when the case values are dense, and for sparse values the compiler falls back to a comparison chain or a binary search, which is O(n) or O(log n) in the number of cases. For many switch use cases this is no big deal, but it certainly isn't unheard of to have a switch with dozens (or more) of cases, each of which does almost nothing (like return a value). In that case, the lookup can actually dominate your runtime.

There's a theorem in hash theory (proved by Komlós, one of my old profs) that says you can always generate a perfect hash function if you know in advance the values you need to hash. That is, you can generate a hash function that requires exactly one lookup in every case. But I don't know of any compiler that actually does this. Since all the cases are known at compile time, this should be a beneficial optimization for large, sparse switches (if the generated hash function is sufficiently cheap). Maybe this could be done using the MPL? I don't know TMP well enough to know where to begin on something like this. See http://en.wikipedia.org/wiki/Perfect_hash_function for more detail.

- James Jones, Webmaster / Data Architect
Administrative Data Mgmt.
375 Raritan Center Pkwy, Suite A
Edison, NJ 08837
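To make the perfect-hash idea concrete, here is a toy C++ sketch; it is my own illustration, not what any compiler or the FKS construction actually does, and all names in it are made up. It brute-forces an odd multiply-shift multiplier that is collision-free on the known case values, so dispatch costs one multiply, one shift, one load, and one compare:

```cpp
#include <cstdint>
#include <cstdio>
#include <random>
#include <unordered_set>
#include <vector>

// Toy sketch: find an odd multiplier `a` such that
//   h(k) = (a * k) >> (64 - bits)
// is collision-free on the known case values, i.e. a perfect hash into a
// table of 2^bits slots. Loops until a random candidate works; real
// generators (gperf, the FKS construction) are far more sophisticated.
static uint64_t find_multiplier(const std::vector<uint32_t>& keys,
                                unsigned bits) {
    const unsigned shift = 64 - bits;
    std::mt19937_64 rng(42);
    for (;;) {
        uint64_t a = rng() | 1;  // random odd candidate
        std::unordered_set<uint64_t> seen;
        bool perfect = true;
        for (uint32_t k : keys) {
            if (!seen.insert((a * uint64_t(k)) >> shift).second) {
                perfect = false;  // collision: try another multiplier
                break;
            }
        }
        if (perfect) return a;
    }
}

int main() {
    // Sparse case values that a direct-indexed jump table handles badly.
    const std::vector<uint32_t> cases = {3, 17, 256, 4096, 65537};
    const unsigned bits = 3;  // 8 slots for 5 keys
    const uint64_t a = find_multiplier(cases, bits);

    // Store each slot's key so dispatch can reject inputs that hash into
    // an occupied slot but are not real cases (0xFFFFFFFF marks "empty").
    std::vector<uint32_t> slot_key(1u << bits, 0xFFFFFFFFu);
    for (uint32_t k : cases) slot_key[(a * uint64_t(k)) >> (64 - bits)] = k;

    const uint32_t probes[] = {17, 256, 99};
    for (uint32_t v : probes) {
        uint64_t h = (a * uint64_t(v)) >> (64 - bits);
        bool hit = slot_key[h] == v;  // one multiply, shift, load, compare
        std::printf("%u -> slot %llu (%s)\n", v, (unsigned long long)h,
                    hit ? "case" : "default");
    }
}
```

Note the final comparison: inputs that are not case values still hash somewhere, so the table keeps each slot's key in order to route everything else to the default case.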