> > That's excellent (sorry I have not seen this in the docs before).
> > However I think it's not a good idea to create your own futures. This
> > does not scale well nor does it compose with std::future or
> > boost::future. Is there a way to use whatever futures (or more
> > specifically, threading implementation) the user decides to use?
>
> The boost::compute::future class wraps a cl_event object which is used to
> monitor the progress of a compute kernel. I'm not sure how to achieve this
> with std::future or Boost.Thread's future (or even which to choose for the
> API). Any pointers (or example code) would be greatly appreciated.
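One possible direction, purely as a sketch (this is not an existing Boost.Compute or standard facility, and the helper name is invented here): adapt the cl_event to a std::future<void> by waiting on it from a task.

    #include <future>
    #include <CL/cl.h>

    // Hypothetical adapter: turn a cl_event into a std::future<void>.
    // With std::launch::deferred, the clWaitForEvents call runs when the
    // caller invokes wait() or get() on the returned future.
    std::future<void> make_std_future(cl_event event)
    {
        clRetainEvent(event); // keep the event alive until the wait finishes
        return std::async(std::launch::deferred, [event]() {
            clWaitForEvents(1, &event);
            clReleaseEvent(event);
        });
    }

This only gives blocking wait semantics; continuations and composition with other futures would still need a richer interface.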
Yes, that's exactly the problem. It is nothing specific to your particular
library but a general issue to solve. In the library we develop (HPX,
https://github.com/STEllAR-GROUP/hpx/) we face the same issue of having to
rely on our own synchronization primitives, which makes it impossible to
reuse std::future (or boost::future). Any suggestion on how to create
specialization/customization points allowing to make boost::future
universally applicable would be most appreciated.
Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu
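As a purely hypothetical illustration of what such a customization point could look like (none of these names exist in Boost, HPX, or Boost.Compute), the blocking behaviour of a future could be delegated to a user-supplied waiter policy:

    #include <utility>

    // Hypothetical: a future whose wait() defers to a pluggable Waiter,
    // e.g. one that calls clWaitForEvents, or one built on HPX's own
    // synchronization primitives.
    template <typename T, typename Waiter>
    class basic_future
    {
    public:
        explicit basic_future(Waiter w) : waiter_(std::move(w)) {}

        void wait() { waiter_.wait(); } // delegate to the backend
        // get(), then(), etc. would be layered on top of wait()

    private:
        Waiter waiter_;
    };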
On 3/4/2013 8:47 PM, Kyle Lutz wrote:
> On Sun, Mar 3, 2013 at 9:15 PM, Ioannis Papadopoulos
> <ipapadop(a)cse.tamu.edu> wrote:
>> A comparison would be nice. Moreover, why not piggy-back on the libraries
>> that are already available (and they probably have better optimizations in
>> place) and simply write a nice wrapper around them (and maybe, crazy idea,
>> allow a single codebase to use both AMD and nVidia GPUs at the same time)?
>
> Boost.Compute does allow you to use both AMD and nVidia GPUs at the
> same time with the same codebase. In fact you can also throw in your
> multi-core CPU, Xeon Phi accelerator card and even a Playstation 3.
> Not such a crazy idea after all ;-).
>
> -kyle
>
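For illustration, a minimal example of the multi-vendor point above (a sketch assuming the Boost.Compute headers are available; boost::compute::system::devices() enumerates devices across all installed OpenCL platforms):

    #include <iostream>
    #include <boost/compute/system.hpp>

    int main()
    {
        // Lists every OpenCL device visible on the machine, regardless
        // of vendor: AMD and nVidia GPUs, multi-core CPUs, accelerators.
        for (const auto &device : boost::compute::system::devices()) {
            std::cout << device.name() << std::endl;
        }
        return 0;
    }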
Thanks for the comparison.
The only issue I see with Boost.Compute is that it will have trouble
supporting the well-known architectures well. Basically, it takes all
the optimizations that have been researched and developed for maximum
efficiency and throws them out of the window.
For example, there is a wealth of CUDA algorithms highly optimized for
nVidia GPUs. These will have to be reimplemented in OpenCL. And tuned
(ouch); possibly for each device (ouch x 2). I see this as a massive
task for a single person or a small group of people who are doing it in
their spare time.
However, if Boost.Compute implements something similar to
Boost.Multiprecision's multi-backend approach, then it can use Thrust,
Bolt, or whatever else is available underneath, and only fall back to
the OpenCL code when nothing else is available (or the user explicitly
requires it).
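For illustration only, a sketch of what such a multi-backend layer might look like; every name, tag, and configuration macro below is invented for the example and none of them exist in Boost.Compute:

    #include <vector>

    // Hypothetical backend tags.
    struct opencl_backend {};
    struct thrust_backend {};

    template <typename T>
    void sort(std::vector<T> &v, opencl_backend)
    {
        // generic OpenCL fallback path
    }

    template <typename T>
    void sort(std::vector<T> &v, thrust_backend)
    {
        // forward to thrust::sort on CUDA devices
    }

    // Hypothetical configuration macro selecting the default backend
    // at compile time.
    #if defined(BOOST_COMPUTE_USE_THRUST)
    typedef thrust_backend default_backend;
    #else
    typedef opencl_backend default_backend;
    #endif

    template <typename T>
    void sort(std::vector<T> &v)
    {
        sort(v, default_backend());
    }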
The way I'd see something with the title Boost.Compute is as an
algorithm selection library - you have multiple backends, chosen
through automatic configuration at compile time and, at run time,
based on the type and size of your input data.
Starting with multiple backends would be a good first step.
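Again as a sketch only (the threshold and function names are invented for the example), the run-time part could be as simple as a size cut-off:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Illustrative host path: plain std::sort.
    template <typename T>
    void sort_on_host(std::vector<T> &v)
    {
        std::sort(v.begin(), v.end());
    }

    // Placeholder for a device path (OpenCL, Thrust, Bolt, ...).
    template <typename T>
    void sort_on_device(std::vector<T> &v)
    {
        std::sort(v.begin(), v.end()); // stand-in for real device code
    }

    // Choose an implementation at run time based on the input size:
    // small inputs are not worth the host-device transfer cost.
    template <typename T>
    void adaptive_sort(std::vector<T> &v)
    {
        const std::size_t device_threshold = 1 << 16; // hypothetical tuning value
        if (v.size() < device_threshold)
            sort_on_host(v);
        else
            sort_on_device(v);
    }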
----- Original Message -----
From: "Rob Stewart" <robertstewart(a)comcast.net>
To: <boost(a)lists.boost.org>
Sent: Wednesday, March 06, 2013 12:12 AM
Subject: Re: [boost] License of endian and limits in Boost detail
> On Mar 5, 2013, at 2:29 AM, Philip Bennefall <philip(a)blastbay.com> wrote:
> > * Permission to use, copy, modify, distribute and sell this software
> > * and its documentation for any purpose is hereby granted without fee,
> > * provided that the above copyright notice appear in all copies and
> > * that both that copyright notice and this permission notice appear
> > * in supporting documentation.
> >
> > This looks to me like it enforces inclusion of the above text in object
> > code distributions,
>
> IANAL, but I read that as only requiring the copyright notice in copies of
> the source and in the documentation, not in the binaries.
Do you mean the end user documentation accompanying binaries (e.g. the
documentation of a derivative work)? That is the part I want to avoid.
Philip Bennefall
> ___
> Rob