
On Sun, Mar 3, 2013 at 9:15 PM, Ioannis Papadopoulos <ipapadop@cse.tamu.edu> wrote:
How does it compare to VexCL (https://github.com/ddemidov/vexcl), Bolt (http://developer.amd.com/tools/heterogeneous-computing/amd-accelerated-paral...) and Thrust (https://developer.nvidia.com/thrust)?
VexCL is an expression-template-based linear-algebra library for OpenCL. Its aims and scope are a bit different from those of the Boost.Compute library: VexCL is closer in nature to the Eigen library, while Boost.Compute is closer to the C++ standard library. I don't feel that Boost.Compute really fills the same role as VexCL; in fact, VexCL could be built on top of Boost.Compute.

Bolt is an AMD-specific C++ wrapper around the OpenCL API which extends the C99-based OpenCL language to support C++ features (most notably templates). It is similar to NVIDIA's Thrust library and shares the same shortcoming: a lack of portability.

Thrust implements a C++ STL-like API for GPUs and CPUs. It is built with multiple backends: NVIDIA GPUs use the CUDA backend, and multi-core CPUs can use the Intel TBB or OpenMP backends. However, Thrust will not work with AMD graphics cards or other lesser-known accelerators. I feel Boost.Compute is superior in that it uses the vendor-neutral OpenCL library to achieve portability across all types of compute devices.
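To give a flavor of the STL-like interface, sorting a vector of floats on a device looks roughly like this (an untested sketch; the header and function names are taken from the current Boost.Compute sources and may shift slightly as the library evolves):

    #include <cstdlib>
    #include <vector>
    #include <algorithm>
    #include <boost/compute.hpp>

    namespace compute = boost::compute;

    int main()
    {
        // get the default compute device and set up a context and queue for it
        compute::device device = compute::system::default_device();
        compute::context ctx(device);
        compute::command_queue queue(ctx, device);

        // generate some random data on the host
        std::vector<float> host_vector(1000000);
        std::generate(host_vector.begin(), host_vector.end(), rand);

        // create a vector on the device and copy the host data to it
        compute::vector<float> device_vector(host_vector.size(), ctx);
        compute::copy(host_vector.begin(), host_vector.end(),
                      device_vector.begin(), queue);

        // sort on the device, then copy the sorted data back to the host
        compute::sort(device_vector.begin(), device_vector.end(), queue);
        compute::copy(device_vector.begin(), device_vector.end(),
                      host_vector.begin(), queue);

        return 0;
    }

The same code runs unchanged on any device with an OpenCL implementation, which is the portability point above.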
A comparison would be nice. Moreover, why not piggy-back on the libraries that are already available (and they probably have better optimizations in place) and simply write a nice wrapper around them (and maybe, crazy idea, allow a single codebase to use both AMD and nVidia GPUs at the same time)?
Boost.Compute does allow you to use both AMD and nVidia GPUs at the same time from a single codebase. In fact, you can also throw in your multi-core CPU, a Xeon Phi accelerator card, and even a PlayStation 3. Not such a crazy idea after all ;-).
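Concretely, you just enumerate every OpenCL device the system reports and give each one its own context and queue; something along these lines (untested sketch, same caveat as above about the exact API):

    #include <iostream>
    #include <boost/compute.hpp>

    namespace compute = boost::compute;

    int main()
    {
        // walk every OpenCL device on every platform -- AMD and nVidia GPUs,
        // multi-core CPUs, accelerators -- all visible from one binary
        for(const compute::device &device : compute::system::devices()){
            compute::context ctx(device);
            compute::command_queue queue(ctx, device);

            std::cout << "found device: " << device.name() << std::endl;

            // each device gets its own context and queue and can run the
            // same algorithm calls shown in the sort example earlier
        }

        return 0;
    }

-kyle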