
Dean Michael Berris wrote:
I personally have dealt with two types of parallelization: parallelization at a high level (dealing with High Performance Computing using something like MPI for distributed message-passing parallel computation across machines) and parallelization at a low level (talking about SSE and auto-vectorization).
MPI is by no means high-level; it's low-level in that you have to say explicitly which tasks execute where and with whom they communicate. Threads, for example, are much higher-level than that: they get scheduled dynamically, trying to make the best use of the hardware as it is loaded, or to optimize for some other factor, depending on the scheduler. The difference between MPI and SIMD, however, is not low-level vs. high-level: it's task-parallel vs. data-parallel.
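To make the contrast concrete, here is a minimal sketch (assuming an MPI implementation such as Open MPI is available; the values and the rank-0/rank-1 split are purely illustrative). The first part is task-parallel in the MPI sense: the programmer names the destination and source ranks by hand. The plain loop at the end is data-parallel: the same operation over every element, which a vectorizing compiler can map onto SSE without the programmer naming any task at all.

#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Task-parallel, MPI style: the programmer decides explicitly that
    // rank 0 produces a value and rank 1 consumes it, and wires up the
    // matching send/receive pair by hand.
    if (rank == 0) {
        int value = 42;
        MPI_Send(&value, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value = 0;
        MPI_Recv(&value, 1, MPI_INT, /*source=*/0, /*tag=*/0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1 received %d\n", value);
    }

    // Data-parallel, SIMD style: one operation applied uniformly to all
    // elements; no "where does this run" or "who talks to whom" appears
    // in the source at all.
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];
    for (int i = 0; i < 8; ++i)
        c[i] = a[i] + b[i];

    if (rank == 0)
        std::printf("c[0] = %f\n", c[0]);

    MPI_Finalize();
    return 0;
}

Built and run in the usual way (e.g. mpicxx example.cpp -o example && mpirun -np 2 ./example), the example needs at least two processes for the send/receive pair to match; the point is only that the placement and communication are spelled out by the programmer, whereas the loop leaves the data-parallel mapping to the compiler.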