
On 4/14/08, John Maddock <john@johnmaddock.co.uk> wrote:
Anup Mandvariya wrote:
Hi All, I have some queries related to the boost.math library, as stated below:
a) Is it possible to write many of the functions in the boost.math library (such as the beta and gamma functions) so that they can be parallelized irrespective of the parallelization mechanism (OpenMP, MPI, etc.) and the environment (a multicore machine or a cluster)?
I don't know; you would need to devise a parallelisation API that's independent of the underlying mechanism, and given that OpenMP uses #pragmas while MPI requires explicit message-passing code, I'm not sure that's really feasible. The alternative would be lots of #if..#else logic, I guess :-(
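For what it's worth, here's a minimal sketch of what that #if..#else approach might look like, assuming a hypothetical parallel_for helper (not anything in Boost.Math):

#include <cstddef>

template <class F>
void parallel_for(std::size_t n, F f)
{
#if defined(_OPENMP)
    // Shared-memory path: OpenMP splits the iterations across threads.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(n); ++i)
        f(static_cast<std::size_t>(i));
#else
    // Serial fallback; a distributed (MPI) build would need a further
    // branch here with explicit scatter/gather code.
    for (std::size_t i = 0; i < n; ++i)
        f(i);
#endif
}

Each backend you wanted to support would add another branch of that kind, which is exactly the maintenance burden I mean.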
b) What is the possibility of extending the boost.math libraries (particularly the beta and gamma functions) into generic libraries using generic programming techniques?
I'm not sure I understand what you mean - they are intended to be generic already, and work with any type that satisfies the requirements here:
http://svn.boost.org/svn/boost/trunk/libs/math/doc/sf_and_dist/html/math_too...
So for example I already use them with NTL::RR and also have experimental versions which work with Boost.Interval and/or mpfr_class.
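For instance, here's a minimal sketch of a user-side template that stays generic over the real type (the log_beta helper is my own illustration, not part of the library):

#include <boost/math/special_functions/gamma.hpp>
#include <iostream>

template <class Real>
Real log_beta(Real a, Real b)
{
    // log B(a, b) = lgamma(a) + lgamma(b) - lgamma(a + b), for a, b > 0.
    return boost::math::lgamma(a) + boost::math::lgamma(b)
         - boost::math::lgamma(a + b);
}

int main()
{
    std::cout << log_beta(2.5, 3.5) << '\n';   // instantiated for double
    std::cout << log_beta(2.5f, 3.5f) << '\n'; // instantiated for float
}

The same template instantiates unchanged for any type meeting the requirements linked above, which is what lets it work with NTL::RR and the like.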
Is this what you meant?
HTH, John.
Thanks John. My second query was whether it is possible to generalize these libraries so that they can be parallelized both for shared-memory and for distributed-memory environments?
--
Regards,
Anup Mandvariya
+919985330660
"Truth Must Have No Compromise"