
I think if you look at the source of SSEPlus from AMD, they do it at runtime: they use CPUID to determine the supported SSE level and dispatch to the matching intrinsics accordingly.

I do not have a perfect use case, but in gaming applications it can be beneficial to detect at run time whether a GPU is present, so as to decide whether to use it. Games are mostly compiled once and shipped to the end user.

Personally, I use CPUID at compile time: my build script executes CPUID and sets my SSE_SUPPORT_LEVEL and GPU_SUPPORT macros accordingly, and those values are then used. But that does not take the end user's machine into consideration; I use it only for my experiments, and the binaries run on my machine alone. Of course, if other people need it, it would have to be Boostified.

On Tue, Dec 11, 2012 at 9:08 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 11/12/12 16:18, Paul A. Bristow wrote:
But I'm not sure whether selection of the instruction set would not be better done by the compiler at compile time rather than at runtime. This should give the best possible performance (at the expense of shipping different .exe versions). Or is this more useful for choosing the right version of the .exe?
That is indeed the only correct way to do it, though you may do so at the object level rather than the executable level.
You still need, however, a mechanism to choose which version of your function/exe to call...
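To make the two halves of this concrete — runtime CPU detection plus a mechanism to pick which version of a function gets called — here is a minimal sketch. It is not SSEPlus's actual API; the function names are hypothetical, the "SSE4.2" kernel is deliberately kept scalar so the sketch compiles anywhere, and the detection uses GCC/Clang's `__builtin_cpu_supports` builtin (which wraps CPUID) rather than raw CPUID assembly:

```cpp
// Hypothetical kernels: two implementations of the same operation.
// A real library would compile dot_sse42 with SSE4.2 intrinsics in a
// separate translation unit; it is kept scalar here for portability.
static double dot_generic(const double* a, const double* b, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += a[i] * b[i];
    return s;
}

static double dot_sse42(const double* a, const double* b, int n) {
    return dot_generic(a, b, n);  // stand-in for an intrinsics version
}

// Dispatch mechanism: a function pointer that initially points at a
// resolver. The first call queries the CPU once and rebinds the pointer
// to the best available implementation; later calls jump straight there.
static double dot_resolver(const double* a, const double* b, int n);
static double (*dot)(const double*, const double*, int) = dot_resolver;

static double dot_resolver(const double* a, const double* b, int n) {
#if defined(__GNUC__)
    // __builtin_cpu_supports wraps CPUID on GCC/Clang.
    dot = __builtin_cpu_supports("sse4.2") ? dot_sse42 : dot_generic;
#else
    dot = dot_generic;  // no detection available; fall back
#endif
    return dot(a, b, n);
}
```

Callers simply invoke `dot(a, b, n)`; the one-time resolution cost is paid on the first call, which is essentially the per-object-file variant of the per-executable selection discussed above (GNU toolchains offer the same pattern natively as IFUNC resolvers).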