Thanks, your code works well.
I also got the MPI_ERR_TRUNCATE error.
I worked around it in the following way. If I use:
master:
world.irecv(resultSourceRank, upStreamTaskTag, myResultTaskPackage[iRank][taskCounterT3]);
I got this error because I had declared "TaskPackage myResultTaskPackage".
It seems that a two-dimensional array cannot be used to receive my user-defined class from a worker, which sends a TaskPackage to the master.
So I changed it to a 2-D int array to receive the results, and it works well.
But I still want to find out how to store the results in a data structure of type TaskPackage, because an int array can only carry integers, which is too limited.
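Something like the sketch below is what I am after (the taskId/values fields and the tag value are only placeholders, since my real TaskPackage is not shown here): giving the class a serialize() member is what lets Boost.MPI send and receive it directly, so the master can collect results into vectors of TaskPackage instead of an int 2-D array.

#include <boost/mpi.hpp>
#include <boost/serialization/vector.hpp>
#include <vector>

struct TaskPackage {
    int taskId;                  // placeholder field
    std::vector<double> values;  // placeholder field

    // serialize() is what lets Boost.MPI ship this class between ranks
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & taskId;
        ar & values;
    }
};

int main(int argc, char* argv[]) {
    boost::mpi::environment env(argc, argv);
    boost::mpi::communicator world;
    const int upStreamTaskTag = 1;  // assumed tag value

    if (world.rank() == 0) {
        // one vector of TaskPackage per worker replaces the int 2-D array
        std::vector<std::vector<TaskPackage> > myResultTaskPackage(world.size());
        std::vector<boost::mpi::request> reqs;
        for (int iRank = 1; iRank < world.size(); ++iRank) {
            myResultTaskPackage[iRank].resize(1);  // one expected result per worker here
            reqs.push_back(world.irecv(iRank, upStreamTaskTag,
                                       myResultTaskPackage[iRank][0]));
        }
        boost::mpi::wait_all(reqs.begin(), reqs.end());
    } else {
        TaskPackage result;
        result.taskId = world.rank();  // dummy payload
        world.send(0, upStreamTaskTag, result);
    }
    return 0;
}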
What I want to do is this:
the master stores the results from each worker and, after collecting all of them, combines them into the final result.
But if the number of tasks cannot be divided evenly by the number of workers, each worker may get a different number of tasks.
For example, with 11 tasks and 3 workers:
aveTaskNumPerNode = (11 - 11 % 3) / 3 = 3
leftTaskNum = 11 % 3 = 2 = Z
The master distributes each of the leftover tasks to workers 1 through Z (Z < totalNumWorkers).
So worker 1 gets 4 tasks, worker 2 gets 4 tasks, and worker 3 gets 3 tasks.
The master tries to distribute tasks evenly so that the difference between the workers' workloads is minimized.
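A small sketch of that split (the variable names follow my description above; the printout is only for illustration):

#include <iostream>
#include <vector>

int main() {
    const int totalTasks = 11;
    const int totalNumWorkers = 3;

    const int aveTaskNumPerNode = totalTasks / totalNumWorkers;  // (11 - 11 % 3) / 3 = 3
    const int leftTaskNum = totalTasks % totalNumWorkers;        // 11 % 3 = 2 = Z

    std::vector<int> tasksPerWorker(totalNumWorkers, aveTaskNumPerNode);
    for (int w = 0; w < leftTaskNum; ++w)
        tasksPerWorker[w] += 1;  // workers 1..Z each take one leftover task

    for (int w = 0; w < totalNumWorkers; ++w)
        std::cout << "worker " << (w + 1) << ": "
                  << tasksPerWorker[w] << " tasks\n";
    // prints: worker 1: 4, worker 2: 4, worker 3: 3
    return 0;
}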
I noticed that your code "hello1.cpp" also uses std::vector.
Date: Tue, 29 Jun 2010 18:24:53 +0200
From: riccardo.murri@gmail.com
To: boost-users@lists.boost.org
Subject: Re: [Boost-users] boostMPI asychronous communication
Hi Jack,
please find attached a sample program that does a master/worker exchange, along the lines of the pseudocode you posted yesterday.
Caveat: I have not been able to have the master wait on *all* messages sent to all workers in one single call -- I get MPI_ERR_TRUNCATE if the number of messages sent to a single worker is > 1. I'm afraid I'm not experienced enough with OpenMPI or Boost.MPI to debug it any further.
Instead the code sends a message to each worker, waits for the reply, then sends another batch, etc. -- this is akin to having a barrier after each round of computation.
Best regards, Riccardo
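For reference, the batch-per-round pattern described in the quoted message looks roughly like the sketch below (a sketch only: the tags, the number of rounds, and the plain-int task/result payloads are all made up here). Each round the master posts one isend and one irecv per worker and then waits on the whole batch before starting the next round, which acts like a barrier after each round of computation.

#include <boost/mpi.hpp>
#include <vector>

int main(int argc, char* argv[]) {
    boost::mpi::environment env(argc, argv);
    boost::mpi::communicator world;
    const int taskTag = 0, resultTag = 1;  // assumed tag values
    const int numRounds = 4;               // assumed number of rounds

    if (world.rank() == 0) {
        const int numWorkers = world.size() - 1;
        for (int round = 0; round < numRounds; ++round) {
            // one task and one reply per worker per round
            std::vector<int> tasks(numWorkers), results(numWorkers);
            std::vector<boost::mpi::request> reqs;
            for (int w = 1; w <= numWorkers; ++w) {
                tasks[w - 1] = round * numWorkers + w;  // dummy payload
                reqs.push_back(world.isend(w, taskTag, tasks[w - 1]));
                reqs.push_back(world.irecv(w, resultTag, results[w - 1]));
            }
            // waiting on the whole batch acts like a barrier between rounds
            boost::mpi::wait_all(reqs.begin(), reqs.end());
        }
    } else {
        for (int round = 0; round < numRounds; ++round) {
            int task;
            world.recv(0, taskTag, task);
            world.send(0, resultTag, task * 2);  // dummy "computation"
        }
    }
    return 0;
}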