Boost.SIMD review

Hi Boost Dev,

I'm the core maintainer of Pythran [0], a Python-to-native-code compiler focused on numeric computations. For more than 3 years we have been using Boost.SIMD [1] as a generic backend for SIMD code generation, without much worry. Basically, our compiler generates C++ code that includes Boost.SIMD calls (a rough sketch of such code is appended below).

The big pluses for us are:

+ Performance: the generated code runs at the same speed as code written with intrinsics;
+ Target abstraction, especially concerning the vector size and the various arithmetic operations;
+ A complete, vectorized libm API, which is a big plus when porting existing code (or when matching an existing API, as we do);
+ Header only, which actually makes it easier for us to ship;
+ Low requirements (it does not force us to ship a large part of Boost).

Compilation time is sometimes a worry, but it has decreased over the years. We don't use complex data movement like shuffle or pack, so I don't have any feedback on that.

All in all, if that ever matters, I would be happy to have it integrated in Boost!

++ Serge

[0] https://pythonhosted.org/pythran/
[1] https://github.com/NumScale/boost.simd
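Appendix (editor's illustration): a minimal sketch of the kind of loop such a generator could emit with Boost.SIMD's pack<T> interface. The function name is made up, and the header paths, the pointer constructor of pack, and the store/exp spellings are assumptions based on the library's documented API; they may differ between versions.

  #include <cstddef>
  #include <boost/simd/pack.hpp>
  #include <boost/simd/function/store.hpp>
  #include <boost/simd/function/exp.hpp>

  namespace bs = boost::simd;

  // Compute y[i] = exp(x[i]) + 1 over n elements; n is assumed to be a
  // multiple of the pack width to keep the sketch short (real generated
  // code would also handle the scalar remainder).
  void vexp_plus_one(float const* x, float* y, std::size_t n)
  {
    using pack_t = bs::pack<float>;             // width chosen for the target
    std::size_t const w = pack_t::static_size;  // e.g. 4 on SSE, 8 on AVX
    for (std::size_t i = 0; i < n; i += w)
    {
      pack_t v(&x[i]);                            // load w contiguous floats
      bs::store(bs::exp(v) + pack_t(1.f), &y[i]); // vectorized exp, then store
    }
  }

The same source compiles for any supported target, with the vector width following pack_t::static_size, which is the "target abstraction" point above.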