Boost.Compute v0.1 Released
I'm proud to announce the initial release (version 0.1) of
Boost.Compute! It is available on GitHub [1] and instructions for
using the library can be found in the documentation [2].
Boost.Compute is a GPGPU and parallel-programming library based on
OpenCL. It provides an STL-like API and implements many common
containers (e.g. vector&lt;T&gt;, array&lt;T, N&gt;) as well as many common
algorithms (e.g. sort(), accumulate(), transform()). A full list can
be found in the header reference [3]. I hope to propose Boost.Compute
for review in the next few months, but for now I'm looking for more
wide-spread testing and feedback from the Boost community (please note
the FAQ [4] and design rationale [5], where I hope to have answered
some common questions).

Thanks, Kyle

[1] https://github.com/kylelutz/compute
[2] http://kylelutz.github.io/compute/
[3] http://kylelutz.github.io/compute/compute/reference.html
[4] http://kylelutz.github.io/compute/boost_compute/faq.html
[5] http://kylelutz.github.io/compute/boost_compute/design.html
Awesome!
On Sun, Mar 16, 2014 at 5:03 PM, Kyle Lutz wrote:
I'm proud to announce the initial release (version 0.1) of Boost.Compute! It is available on GitHub [1] and instructions for using the library can be found in the documentation [2].
--
-Matt Calabrese
On 17.03.14 01:11, Matt Calabrese wrote:
Awesome!
On Sun, Mar 16, 2014 at 5:03 PM, Kyle Lutz wrote: I'm proud to announce the initial release (version 0.1) of Boost.Compute! It is available on GitHub [1] and instructions for using the library can be found in the documentation [2].
Yes, this is most interesting! Best Regards, beet
On Sun, Mar 16, 2014 at 7:03 PM, Kyle Lutz wrote:
Boost.Compute is a GPGPU and parallel-programming library based on OpenCL. It provides an STL-like API and implements many common containers (e.g. vector&lt;T&gt;, array&lt;T, N&gt;) as well as many common algorithms (e.g. sort(), accumulate(), transform()). A full list can be found in the header reference [3].
Besides the OpenGL niceness and STLishness, is there a reason to prefer Boost.Compute over alternatives targeted at OpenCL/CUDA numerics? There has already been much work in this space [1], including VexCL [2], which many of my collaborators like. While there is an answer in the FAQ [3], it seems dodgy, as VexCL clearly could not presently be built on Boost.Compute, since the latter does not support CUDA while the former does.

- Rhys

[1] http://arxiv.org/abs/1212.6326
[2] https://github.com/ddemidov/vexcl
[3] http://kylelutz.github.io/compute/boost_compute/faq.html
On Sun, Mar 16, 2014 at 6:10 PM, Rhys Ulerich wrote:
Besides the OpenGL niceness and STLishness, is there a reason to prefer Boost.Compute over alternatives targeted at OpenCL/CUDA numerics? There has already been much work in this space [1], including VexCL [2], which many of my collaborators like. While there is an answer in the FAQ [3], it seems dodgy, as VexCL clearly could not presently be built on Boost.Compute, since the latter does not support CUDA while the former does.
Good point. That FAQ entry was written before VexCL added its CUDA back-end (which occurred relatively recently). Boost.Compute and VexCL have different aims and scopes. Boost.Compute is more similar to the C++ STL while VexCL is more similar to a linear algebra library like Eigen. Also see this StackOverflow question [1] entitled "Differences between VexCL, Thrust, and Boost.Compute". -kyle [1] http://stackoverflow.com/questions/20154179/differences-between-vexcl-thrust...
Hi Kyle
Good point. That FAQ entry was written before VexCL added its CUDA back-end (which occurred relatively recently). Boost.Compute and VexCL have different aims and scopes. Boost.Compute is more similar to the C++ STL while VexCL is more similar to a linear algebra library like Eigen. Also see this StackOverflow question [1] entitled "Differences between VexCL, Thrust, and Boost.Compute".
[1] http://stackoverflow.com/questions/20154179/differences-between-vexcl-thrust...
Thank you for the information. - Rhys
Hi,
I am the developer of VexCL :).
On Mon, Mar 17, 2014 at 7:43 PM, Rhys Ulerich wrote:
Hi Kyle
Thank you for the information.
I have updated the answer on StackOverflow following Gonzalo's comment above about the unavailability of easy interaction with user-defined functors and lambdas. I'll duplicate it here for convenience:

Update: @gnzlbg commented that there is no support for C++ functors and lambdas in OpenCL-based libraries. And indeed, OpenCL is based on C99 and is compiled from sources stored in strings at runtime, so there is no easy way to fully interact with C++ classes. But to be fair, OpenCL-based libraries do support user-defined functions and even lambdas to some extent.

- Boost.Compute provides its own implementation of simple lambdas [1] (based on Boost.Proto), and allows interaction with user-defined structs through the BOOST_COMPUTE_ADAPT_STRUCT [2] and BOOST_COMPUTE_CLOSURE [3] macros.
- VexCL provides a linear-algebra-like DSL (also based on Boost.Proto), and also supports conversion of generic C++ algorithms and functors [4] (and even Boost.Phoenix lambdas) to OpenCL functions (with restrictions).
- I believe AMD's Bolt does support user-defined functors through its "C++ for OpenCL" extension magic.

Having said that, CUDA-based libraries (and maybe C++ AMP) have an obvious advantage of an actual compile-time compiler (can you even say that?), so the integration with user code can be much tighter. Another point is that when you have an AMD GPU (which generally provides more performance per dollar), the more advanced CUDA compiler has zero advantages :).

I do believe that there is a place for a library (such as Boost.Compute) that would provide a set of standard accelerated algorithms. I missed such a library a few times while implementing VexCL.

[1] http://kylelutz.github.io/compute/boost_compute/advanced_topics.html#boost_c...
[2] http://kylelutz.github.io/compute/BOOST_COMPUTE_ADAPT_STRUCT.html
[3] http://kylelutz.github.io/compute/BOOST_COMPUTE_CLOSURE.html
[4] https://github.com/ddemidov/vexcl#converting-generic-c-algorithms-to-opencl
But Kyle, before you propose Boost.Compute for inclusion into Boost (I think you really should do that!), you should make sure that the provided algorithms perform on par with the other libraries (e.g. Thrust) on the same hardware (I did not compare the performance, so this could be the case already). -- Cheers, Denis
Is it possible to use ordinary C++ functions/functors or C++11 lambdas with Boost.Compute? [1]

Unfortunately no. OpenCL relies on having C99 source code available at run-time in order to execute code on the GPU. Thus compiled C++ functions or C++11 lambdas cannot simply be passed to the OpenCL environment to be executed on the GPU.

This is the reason why I wrote the Boost.Compute lambda library. Basically it takes C++ lambda expressions (e.g. _1 * sqrt(_1) + 4) and transforms them into C99 source code fragments (e.g. "input[i] * sqrt(input[i]) + 4") which are then passed to the Boost.Compute STL-style algorithms for execution. While not perfect, it allows the user to write code closer to C++ that can still be executed through OpenCL.

Also check out the BOOST_COMPUTE_FUNCTION() macro which allows OpenCL functions to be defined inline with C++ code. An example can be found in the monte_carlo example code.

[1] http://kylelutz.github.io/compute/boost_compute/faq.html#boost_compute.faq.i...
I find this to be a serious (killer) downside. Are there any examples/FAQ about launching kernels that call member functions, or about using Boost.Compute within class hierarchies? Even a minimal example where the data is a class member (e.g. a vector) and the kernel uses one or two member functions would be very much appreciated. The std proposal for a parallel algorithms library, TBB, C++AMP, and Thrust seem to be a better fit for a "C++ interface to multi-core CPU and GPGPU computing platforms" than any OpenCL-based library I've seen (OpenCL, Bolt, VexCL, Boost.Compute). OpenCL and C++ seem not to be made for each other. OpenCL is just FUBAR without extra compiler support like C++AMP or OpenACC.

On Monday, March 17, 2014 1:03:57 AM UTC+1, Kyle Lutz wrote:
I'm proud to announce the initial release (version 0.1) of Boost.Compute! It is available on GitHub [1] and instructions for using the library can be found in the documentation [2].
Boost.Compute is a GPGPU and parallel-programming library based on OpenCL. It provides an STL-like API and implements many common containers (e.g. vector&lt;T&gt;, array&lt;T, N&gt;) as well as many common algorithms (e.g. sort(), accumulate(), transform()). A full list can be found in the header reference [3]. I hope to propose Boost.Compute for review in the next few months, but for now I'm looking for more wide-spread testing and feedback from the Boost community (please note the FAQ [4] and design rationale [5], where I hope to have answered some common questions).
Thanks, Kyle
[1] https://github.com/kylelutz/compute
[2] http://kylelutz.github.io/compute/
[3] http://kylelutz.github.io/compute/compute/reference.html
[4] http://kylelutz.github.io/compute/boost_compute/faq.html
[5] http://kylelutz.github.io/compute/boost_compute/design.html

_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users
Hello Kyle,
On 17 March 2014 00:03, Kyle Lutz wrote:
I'm proud to announce the initial release (version 0.1) of Boost.Compute! It is available on GitHub [1] and instructions for using the library can be found in the documentation [2].
Boost.Compute is a GPGPU and parallel-programming library based on OpenCL. It provides an STL-like API and implements many common containers (e.g. vector&lt;T&gt;, array&lt;T, N&gt;) as well as many common algorithms (e.g. sort(), accumulate(), transform()). A full list can be found in the header reference [3]. I hope to propose Boost.Compute for review in the next few months, but for now I'm looking for more wide-spread testing and feedback from the Boost community (please note the FAQ [4] and design rationale [5], where I hope to have answered some common questions).
Thanks, Kyle
[1] https://github.com/kylelutz/compute
[2] http://kylelutz.github.io/compute/
[3] http://kylelutz.github.io/compute/compute/reference.html
[4] http://kylelutz.github.io/compute/boost_compute/faq.html
[5] http://kylelutz.github.io/compute/boost_compute/design.html
I am looking forward to trying this out. I have a couple of questions:

- How do the algorithms compare performance-wise with similar CUDA libraries? I remember trying Boost.Compute in the early days and IIRC there was quite a performance gap. Would it be possible to add a performance section to the documentation?
- Are you planning any support for multi-device computations? In my experience, available memory can be quite a bottleneck on GPUs, and having support for multi-device computations (i.e., multiple GPUs but also GPU/CPU hybrids) would be quite handy.

I am happy to see this kind of work happening in OpenCL and Boost land, and I really like the STL-like design of the library. Cheers, Francesco.
participants (7)
- beet
- Denis Demidov
- Francesco Biscani
- Gonzalo BG
- Kyle Lutz
- Matt Calabrese
- Rhys Ulerich