
On Wed, Apr 02, 2008 at 12:21:43PM -0500, Stephen Nuchia wrote:
> In my experience, there are very few instances where parallelism can be usefully concealed behind a library interface. OpenMP has very high overhead and will help only for very long-running functions -- at least under Microsoft's compiler on x86 and x64 platforms. Algorithms that are likely to be applied to very large datasets could have the OMP pragmas inserted optionally, but they would need to be protected by #ifdef logic (or given distinct names) because otherwise the overhead
Does #ifdef around #pragma omp actually work? At least "#define PARALLEL #pragma omp" followed by using PARALLEL does not work.
> will destroy programs that make more frequent calls on smaller datasets.
> I must also repeat the age-old, time-tested capital-T Truth about optimization: if you do something that is not suggested by and validated against careful analysis of realistic use cases, you are wasting your time. I strongly advise you not to start hacking without solid data.
The compression algorithms (zip, ...) (part of the streams library?) would be a very good candidate. I once tested a parallel bzip2 implementation and it scales really well.

Jens