
MapReduce is a programming model from Google designed for scalable data processing. Google's implementation targets scalability across many thousands of commodity machines, but the idiom is also valuable on multi-core processors, where it can partition processing to make efficient use of the available CPUs while avoiding the complexities of multi-threaded development.
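To make the idiom concrete, here is a minimal sketch of map/reduce-style partitioning on a multi-core machine, using std::async to spread the map phase across hardware threads. The word-count task and the chunking scheme are illustrative assumptions, not the library's actual API.

```cpp
// Minimal multi-core map/reduce sketch (illustrative, not the library's API).
#include <algorithm>
#include <cstddef>
#include <future>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

using Counts = std::map<std::string, std::size_t>;

// Map phase: count words in one chunk of input, independently of all others.
Counts map_chunk(const std::vector<std::string>& lines) {
    Counts local;
    for (const auto& line : lines) {
        std::istringstream in(line);
        std::string word;
        while (in >> word) ++local[word];
    }
    return local;
}

// Reduce phase: merge the per-chunk counts into a single result.
Counts reduce(std::vector<Counts> partials) {
    Counts total;
    for (auto& p : partials)
        for (auto& [word, cnt] : p) total[word] += cnt;
    return total;
}

int main() {
    std::vector<std::string> input = {
        "the quick brown fox", "jumps over the lazy dog",
        "the dog barks", "the fox runs"};

    // Partition the input into roughly one chunk per hardware thread.
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (input.size() + n - 1) / n;

    std::vector<std::future<Counts>> futures;
    for (std::size_t i = 0; i < input.size(); i += chunk) {
        std::vector<std::string> part(
            input.begin() + i,
            input.begin() + std::min(input.size(), i + chunk));
        futures.push_back(std::async(std::launch::async, map_chunk, std::move(part)));
    }

    std::vector<Counts> partials;
    for (auto& f : futures) partials.push_back(f.get());

    for (auto& [word, count] : reduce(std::move(partials)))
        std::cout << word << ": " << count << '\n';
}
```

Each chunk is mapped independently, so the only synchronization point is collecting the futures before the final reduce; this independence is what lets the idiom sidestep most of the usual multi-threading complexity.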
Broad scope comment: MapReduce is merely parallel map and fold operations, which is quite limited in expressivity compared to higher-level parallel patterns like parallel DP or algorithmic skeletons. See:

* Cole, M., Algorithmic Skeletons: Structured Management of Parallel Computation, MIT Press, 1989.
* Skillicorn, D. B., Architecture-Independent Parallel Computation, IEEE Computer, 1990, 23, 38-50.
* Cole, M., Algorithmic Skeletons, in Research Directions in Parallel Functional Programming, Springer, 1999.
* Sérot, J., Ginhac, D., Skeletons for parallel image processing: an overview of the SKIPPER project, Parallel Computing, 2002, 28, 1685-1708.

for some seminal bibliography Google "forgot" to mention, and

* Falcou, J., Sérot, J., Formal semantics applied to the implementation of a skeleton-based parallel programming library, ParCo 2007, Aachen, Sept. 2007.

for some recent advances using Boost.

Actual comment: Do you have actual performance data for a non-trivial task? More precisely, I know from experience that this kind of implementation may suffer from some C++-induced overhead. How does it compare to hand-written pthread or boost::thread code? I'm interested to see how this performs.
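To illustrate the "parallel map and fold" observation above, a minimal sketch using the C++17 standard library: std::transform_reduce fuses the map (a unary transform) and the fold (a binary reduction) into one parallel pass. The sum-of-squares task is my illustrative choice, not taken from the article or the comment.

```cpp
// MapReduce as a parallel map followed by a fold (illustrative sketch).
#include <execution>  // C++17 parallel execution policies
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3, 4, 5};

    // Conceptually: fold(+, 0, map(square, v)), run as one parallel pass.
    // Note: parallel execution policies may require linking a backend
    // (e.g. TBB) depending on the standard library implementation.
    long sum = std::transform_reduce(
        std::execution::par,
        v.begin(), v.end(),
        0L,                                  // fold seed
        std::plus<>{},                       // fold (reduce) operation
        [](int x) { return long(x) * x; });  // map operation

    std::cout << sum << '\n';  // prints 55
}
```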