
Hi,
This is a call for interest in a GPU computing library for Boost.
This is very interesting to me.
* C++ library for general-purpose computing on GPUs/Accelerators
* Based on OpenCL (Open Computing Language)
Is it possible to have various backends, so that you could use CUDA with NVIDIA GPUs, or something completely different on high-performance clusters that distributes the code and uses e.g. MPI for communication, or whatever else might come in handy? As far as I understand, Thrust is designed that way, and there are a few backends available.
Furthermore, a lambda expression library was written using Boost.Proto which allows for mathematical expressions to be defined at the call site of an algorithm and then be executed on the GPU. For example, to multiply each element in a vector by the square root of itself and then add four:
transform(v.begin(), v.end(), v.begin(), _1 * sqrt(_1) + 4);
Is there a way to use C++11 lambdas here? I think that would make the library feel native in the future as well. On the other hand, I have no idea how that could be possible at all.
// transfer the values to the device
device_vector = host_vector;
Is there a way to do some kind of streaming here as well? Instead of moving all the data to the GPU and getting the result back in one go, I am thinking of a stream that I could use to transfer data to the GPU, have it do some calculations, and read the results back from another stream while the GPU is already working on the next chunk of data.

Christof

--
okunah gmbh - Software nach Maß
Zugspitzstraße 211, 86165 Augsburg
www.okunah.de, cd@okunah.de

Commercial register: Augsburg HRB 21896
Managing director: Christof Donat
UStID: DE 248 815 055