Re: [boost] [compute] GPGPU Library - Request For Feedback

On 03.03.2013 2:25, Kyle Lutz wrote:
The library is built around the OpenCL framework which allows it to be portable across many types of devices (GPUs, CPUs, and accelerator cards) from many different vendors (NVIDIA, Intel, AMD).
It looks like this library is some kind of abstraction around OpenCL. But as far as I can see, it does not offer a way to abstract away from OpenCL kernel syntax in a general way: https://github.com/kylelutz/compute/blob/master/example/monte_carlo.cpp . I.e. not just simple things like boost::compute::sqrt, or simple lambda expressions like "_1 * 3 - 4".
A kernel description can be abstracted in several ways:
1) An approach similar to Boost.Phoenix - generate OpenCL kernel code from a gathered expression tree.
2) An approach similar to the TaskGraph library: http://www3.imperial.ac.uk/pls/portallive/docs/1/45421696.PDF - describe the kernel in terms of special function calls, macros, expression templates and so on. The actual kernel is generated when that description code is "executed". Here is a small demo: http://ideone.com/qQ4Pvo (check the output at the bottom); a sketch in the same spirit is shown below.
--
Evgeny Panasyuk
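[Editorial note: the following is a minimal, hypothetical sketch of the TaskGraph-style idea described in point 2 above: the kernel "description" is ordinary C++ code which, when run on the host, emits OpenCL C source. Every name in it (kernel_builder, var, arg, assign) is invented purely for illustration and is not part of the proposed library or of TaskGraph itself.]

#include <iostream>
#include <sstream>
#include <string>

// a tiny expression wrapper: operators build up OpenCL C text
struct var {
    std::string expr;
    var operator*(int c) const { return var{"(" + expr + " * " + std::to_string(c) + ")"}; }
    var operator-(int c) const { return var{"(" + expr + " - " + std::to_string(c) + ")"}; }
};

// collects statements; "executing" the description records generated source
struct kernel_builder {
    std::ostringstream src;
    var arg(const std::string& name) { return var{name}; }
    void assign(const std::string& dst, const var& v) {
        src << "    " << dst << " = " << v.expr << ";\n";
    }
    std::string kernel(const std::string& name) const {
        return "__kernel void " + name + "(__global const int* in, __global int* out)\n"
               "{\n    int i = get_global_id(0);\n" + src.str() + "}\n";
    }
};

int main() {
    kernel_builder kb;
    var x = kb.arg("in[i]");
    kb.assign("out[i]", x * 3 - 4);        // description code runs on the host...
    std::cout << kb.kernel("transform_3x_minus_4");  // ...and yields OpenCL C source
}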

On Thu, Mar 7, 2013 at 2:18 PM, Evgeny Panasyuk wrote:
It looks like this library is some kind of abstraction around OpenCL. But as far as I can see, it does not offer a way to abstract away from OpenCL kernel syntax in a general way: https://github.com/kylelutz/compute/blob/master/example/monte_carlo.cpp . I.e. not just simple things like boost::compute::sqrt, or simple lambda expressions like "_1 * 3 - 4".
A kernel description can be abstracted in several ways:
1) An approach similar to Boost.Phoenix - generate OpenCL kernel code from a gathered expression tree.
2) An approach similar to the TaskGraph library: http://www3.imperial.ac.uk/pls/portallive/docs/1/45421696.PDF - describe the kernel in terms of special function calls, macros, expression templates and so on. The actual kernel is generated when that description code is "executed". Here is a small demo: http://ideone.com/qQ4Pvo (check the output at the bottom).
This is the direction I would like to go with the library. However, coming up with a nice, general solution for specifying kernel code in C++ is quite tricky. If you have example code for a potential API I'd love to take a look.

For now, the only two exposed ways of using custom functions are directly specifying the OpenCL code (as in the monte_carlo example) or using the lambda expression framework. In the short-term future I am also looking at allowing bind()-like functions to compose multiple built-in functions along with literal values. Internally there is also the meta_kernel class, a hybrid of C++ code and raw OpenCL C code strings, which the algorithms use to implement generic kernels. This may one day be cleaned up and promoted to the public API, but for now it is an implementation detail.

I have been planning on making more use of Boost.Phoenix as you suggested. Thanks for posting the TaskGraph paper. It looks very interesting and I will take a closer look when I get some spare time.

Cheers, Kyle
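[Editorial note: to make the lambda path mentioned above concrete, here is a minimal sketch assuming the header layout and names found in the kylelutz/compute repository (boost::compute::lambda::_1, transform(), vector<>, copy()); it is an illustration under those assumptions, not an excerpt from the library's documentation.]

#include <iostream>
#include <vector>

#include <boost/compute/device.hpp>
#include <boost/compute/context.hpp>
#include <boost/compute/command_queue.hpp>
#include <boost/compute/system.hpp>
#include <boost/compute/algorithm/copy.hpp>
#include <boost/compute/algorithm/transform.hpp>
#include <boost/compute/container/vector.hpp>
#include <boost/compute/lambda.hpp>

namespace compute = boost::compute;

int main()
{
    // set up a device, context and queue
    compute::device gpu = compute::system::default_device();
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    std::vector<int> host = { 1, 2, 3, 4 };

    // copy the input to the device
    compute::vector<int> device_vec(host.size(), ctx);
    compute::copy(host.begin(), host.end(), device_vec.begin(), queue);

    // the expression tree "_1 * 3 - 4" is translated to OpenCL C by the library
    using compute::lambda::_1;
    compute::transform(device_vec.begin(), device_vec.end(),
                       device_vec.begin(), _1 * 3 - 4, queue);

    // copy the result back to the host
    compute::copy(device_vec.begin(), device_vec.end(), host.begin(), queue);

    for (int x : host) std::cout << x << " ";   // expected: -1 2 5 8
    std::cout << std::endl;
}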
participants (2)
- Evgeny Panasyuk
- Kyle Lutz