On 23 Dec 2014 at 11:21, Kyle Lutz wrote:
For instance, we can dynamically build programs at run-time by combining algorithmic skeletons (such as reduce or scan) with custom user-defined reduction functions, and produce kernels optimized for the actual platform that executes the code (which may in fact be dramatically different hardware from where Boost.Compute itself was compiled). It also allows us to automatically tune algorithm parameters for the actual hardware present at run-time (and to execute current algorithms as efficiently as possible on future hardware platforms by re-tuning and scaling up parameters, all without any recompilation). It also allows us to generate fully specialized kernels at run-time based on dynamic input or user configuration (imagine user-created filter pipelines in Photoshop or custom database queries in PGSQL).
Back when I was planning something very like Compute some years ago, I was going to build a C++-metaprogramming-based clang AST manipulator. The idea was to use libclang to hold the OpenCL kernels as in-memory ASTs, and to write C++ which, when executed, transformed those ASTs, rather like Boost.Python. clang, if I remember correctly, has a full-fat OpenCL-to-LLVM compiler, and better still one that works as expected in gdb. I figured it should be possible to integrate a debugger frontend for that, so you could breakpoint and debug your C++-as-OpenCL nicely. It's a much bigger project than yours, of course, and one rendered a bit obsolete by the rise of C++ AMP. Still, food for thought. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/