
Jeremy Maitin-Shepard wrote:
On 03/10/2011 04:06 PM, Simonson, Lucanus J wrote:
I can attest that it is generally not compatible with generation of optimal code. Others may disagree, but I've had some unique experience with trying to create a C++ abstraction on vector instruction sets. Beyond the fact that I couldn't do some necessary AST transformations in the ETs of our DSEL for vector operations, because there is no way to identify whether two arguments to an expression are the same variable, only the same type, there were many, many other problems that caused the reality to fall short of our expectations and the promising results we saw early on.

Frankly, if you want fast SSE-accelerated code you should do it by hand in assembler. For infinite-precision integers in particular you need access to the carry bit generated by vector arithmetic and to other instruction-level features; just replacing int with int_128_t isn't going to get you very far. You need to hand-unroll loops and generally put a lot of thought into what you are doing at the instruction level to get the best implementation, and everything needs to be driven by benchmark results. Even intrinsics don't give you the amount of control needed to get maximum performance, because the compiler can't run benchmarks to guide its choices the way you can. (Well, the Intel compiler actually can, to some extent...)

Instead of writing assembler we should provide points of customization to allow the user to override all the algorithms with their own implementation. That could be gmp, their own hand-coded assembly, or some high-level code that has been optimized through Intel parallel building blocks, for example, which accomplishes with vector type abstraction in C++ what ETs cannot, through use of a JIT compilation step.
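[For illustration only: a minimal sketch of the kind of customization point being described, assuming a hypothetical add_kernel template that a user could specialize with a hand-tuned limb-addition routine (assembly, intrinsics, or a GMP-backed call). The names here are invented for this example and are not from any actual library.]

    #include <cstdint>
    #include <cstddef>

    // Hypothetical customization point: the library's inner limb-addition
    // kernel is a template that users may specialize for their platform
    // (hand-written assembly, GMP's mpn layer, vendor intrinsics, ...).
    template <class Tag>
    struct add_kernel
    {
        // Portable default: propagate the carry by hand through 64-bit limbs.
        static std::uint64_t add_n(std::uint64_t*       r,
                                   const std::uint64_t* a,
                                   const std::uint64_t* b,
                                   std::size_t          n)
        {
            std::uint64_t carry = 0;
            for (std::size_t i = 0; i < n; ++i) {
                std::uint64_t s = a[i] + carry;
                carry  = (s < carry);   // carry out of a[i] + carry
                s     += b[i];
                carry += (s < b[i]);    // carry out of s + b[i]
                r[i]   = s;
            }
            return carry;               // carry out of the top limb
        }
    };

    // A user with a faster routine specializes add_kernel for their own tag
    // and instantiates the big-integer type with that tag.

[Whether such a hook can actually recover assembler-level performance is exactly what is being debated here; the sketch only shows where the seam would go.]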
Relying on "customization points" to customize performance, rather than behavior, runs counter to the whole point of a library. The idea, after all, is to provide a fast arbitrary-precision integer library, not merely to provide an interface and then ask the user to write the implementation. Editing the source code itself is the appropriate customization point for optimizing the implementation. Any such performance optimizations should simply be submitted back and included in the official version of the library, rather than forcing all users to individually waste their time reimplementing them. If possible, a compile-time/preprocessor option to replace the existing C++ implementation with calls to GMP, for instance, would seem to be quite useful and also compatible with Boost licensing.
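[Again purely illustrative: a sketch of what such a compile-time switch might look like, assuming a made-up configuration macro MY_BIGINT_USE_GMP and a made-up add_limbs wrapper; mpn_add_n is GMP's real low-level routine.]

    #include <cstddef>

    #if defined(MY_BIGINT_USE_GMP)       // hypothetical configuration macro
    #  include <gmp.h>
       typedef mp_limb_t limb_t;
    #else
       typedef unsigned long limb_t;
    #endif

    // One call site, two implementations chosen at preprocessing time.
    inline limb_t add_limbs(limb_t* r, const limb_t* a, const limb_t* b,
                            std::size_t n)
    {
    #if defined(MY_BIGINT_USE_GMP)
        // GMP's mpn layer returns the carry out of the most significant limb.
        return mpn_add_n(r, a, b, n);
    #else
        limb_t carry = 0;
        for (std::size_t i = 0; i < n; ++i) {
            limb_t s = a[i] + carry;
            carry  = (s < carry);
            s     += b[i];
            carry += (s < b[i]);
            r[i]   = s;
        }
        return carry;
    #endif
    }

[The same preprocessor seam could select any other backend, which is essentially the generalization suggested below.]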
I don't believe it is practical for a Boost library to try to compete with GMP on performance, and there are good reasons why the people who do manage it wouldn't contribute their code back to us. I agree that a compile-time option to use GMP is a good thing, but if we do that we might as well generalize it so that people who have an implementation compatible with their commercial license can use that instead. LGPL is not for everyone.

Regards,
Luke