Boost.MPI asynchronous communication

Dear All,

How can I do asynchronous communication among nodes with Boost.MPI or OpenMPI on a cluster? I need to set up a kind of asynchronous communication protocol so that message senders and receivers can communicate asynchronously without losing any messages between them. I do not want to use blocking MPI routines, because the processors could be doing other operations while they wait for new messages to arrive. On boost.org I have not found MPI routines that support this kind of asynchronous communication. Any help is appreciated.

Thanks, Jack, June 27 2010

Hi Jack,
On Mon, Jun 28, 2010 at 5:21 AM, Jack Bryan wrote:
How can I do asynchronous communication among nodes with Boost.MPI or OpenMPI on a cluster?
You likely want to use boost::mpi::communicator::{isend,irecv,iprobe}; the test_*/wait_* routines in boost/mpi/nonblocking.hpp can also be useful. Regards, Riccardo
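For reference, a minimal sketch of that nonblocking pattern (illustrative only; the string payload and tag value are made up, not from the thread):

    #include <boost/mpi.hpp>
    #include <string>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[]) {
        mpi::environment env(argc, argv);
        mpi::communicator world;
        const int tag = 0;                      // illustrative tag value
        if (world.rank() == 0) {
            std::string msg = "hello";
            mpi::request sreq = world.isend(1, tag, msg);
            // ... do other work while the send is in flight ...
            sreq.wait();                        // complete the send
        } else if (world.rank() == 1) {
            std::string msg;
            mpi::request rreq = world.irecv(0, tag, msg);
            // ... do other work; poll with rreq.test() if desired ...
            rreq.wait();                        // block only when the message is needed
        }
        return 0;
    }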

Thanks, I know that:

    MPI_Irecv(); do other work; MPI_Wait();

But my message receiver is much slower than the sender. While the receiver is doing its local work, the sender has already sent out its messages; at that moment the receiver is busy with its local work and cannot post MPI_Irecv to get the messages from the senders. Any help is appreciated. Jack

Hi Jack,
On Mon, Jun 28, 2010 at 4:00 PM, Jack Bryan wrote:
MPI_Irecv(); do other work; MPI_Wait(); But my message receiver is much slower than the sender. While the receiver is doing its local work, the sender has already sent out its messages; at that moment the receiver is busy with its local work and cannot post MPI_Irecv to get the messages from the senders.
If you know what messages the receiver is going to receive, you can post your irecv() *before* starting the compute-intensive loop.

If you can't post the irecv() before the busy loop, MPI will buffer messages for you, up to some implementation-defined limit: "Send of all modes [...] can be started whether a matching receive has been posted or not [...] If the call causes some system resources to be exhausted, then it will fail and return an error code." (MPI 2.1 spec, sec. 3.7, page 48 of the printed edition)

You might be able to get better help on an MPI-specific forum.

Best regards, Riccardo
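A minimal sketch of the first suggestion, posting the irecv() before the compute-intensive phase (the int payload and the elided local work are placeholders):

    #include <boost/mpi.hpp>

    namespace mpi = boost::mpi;

    void receiver(mpi::communicator& world, int sender, int tag) {
        int payload = 0;                        // placeholder message type
        // Post the receive up front, so MPI can match the incoming
        // message even while this rank is busy computing.
        mpi::request rreq = world.irecv(sender, tag, payload);

        // ... compute-intensive local work goes here ...

        rreq.wait();                            // by now the message has likely arrived
        // use payload
    }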

Thanks, I just posted irecv before isend.

    master node: request = irecv(); do its local work; isend(message to worker nodes); wait(request).
    worker node: while (still have new tasks) { recv(message); do its local work; isend(result message to master); }

If there is only one task per worker, it works. But if there are 2 tasks per worker, the master cannot get the result from the worker: the master waits forever for the results from the workers. It seems the master misses the results when there is more than one task for a worker to run. Any help is appreciated. Jack

Hi Jack,
On Mon, Jun 28, 2010 at 5:18 PM, Jack Bryan wrote:
I just posted irecv before isend. master node: request = irecv(); do its local work; isend(message to worker nodes); wait(request). worker node: while (still have new tasks) { recv(message); do its local work; isend(result message to master); } If there is only one task per worker, it works. But if there are 2 tasks per worker, the master cannot get the results from the worker.
Smells of deadlock. Maybe the master is waiting for a message coming from the wrong worker? We might need to delve into details; can you please post a minimal Boost.MPI program exhibiting this "blocking" behavior? Best regards, Riccardo

Thanks. This is the main part of my code, which may have a deadlock.

Master:

    for (iRank = 0; iRank < availableRank; iRank++) {
      destRank = iRank + 1;
      for (taski = 1; taski <= TaskNumPerRank; taski++) {
        resultSourceRank = destRank;
        recvReqs[taskCounterT2] = world.irecv(resultSourceRank, upStreamTaskTag,
                                              resultTaskPackageT2[iRank][taskCounterT3]);
        reqs = world.isend(destRank, taskTag, myTaskPackage);
        ++taskCounterT2;
      }

    // taskTotalNum = availableRank * TaskNumPerRank
    // right now, availableRank = 1, TaskNumPerRank = 2
    mpi::wait_all(recvReqs, recvReqs + taskTotalNum);

Worker:

    while (1) {
      world.recv(managerRank, downStreamTaskTag, resultTaskPackageW);
      // do its local work on the received task
      destRank = masterRank;
      reqs = world.isend(destRank, taskTag, myTaskPackage);
      if (recv end signal) break;
    }

Any help is appreciated. Jack

Hello Jack,
On Mon, Jun 28, 2010 at 7:46 PM, Jack Bryan wrote:
This is the main part of my code, which may have a deadlock.
1. I can't see where the outer for-loop in master is closed; is the wait_all() part of that loop? (I assume it does not.) Can you send a minimal program that I can feed to a compiler and test? This could help.

2. Are you sure there is no tag mismatch between master and worker?

    master: world.isend(destRank, taskTag, myTaskPackage);
    worker: world.recv(managerRank, downStreamTaskTag, resultTaskPackageW);

Unless master::taskTag == worker::downStreamTaskTag, the recv() will wait forever. Similarly, the following requires that master::upStreamTaskTag == worker::taskTag:

    master: ... = world.irecv(resultSourceRank, upStreamTaskTag, ...);
    worker: world.isend(destRank, taskTag, myTaskPackage); // destRank == masterRank

3. Do the source/destination ranks match? The master waits for messages from destinations 1..availableRank (inclusive range), and the worker waits for a message from "masterRank" (is this 0?).

4. Does the master work if you replace the main loop with the following?

    Master:

    for (iRank = 0; iRank < availableRank; iRank++) {
      destRank = iRank + 1;
      for (taski = 1; taski <= TaskNumPerRank; taski++) {
        // XXX: the following code does not contain any reference to
        // "taski": it is sending "TaskNumPerRank" copies of the
        // same message ...
        reqs = world.isend(destRank, taskTag, myTaskPackage);
      };
    };
    // I assume the outer loop does *not* include the wait_all()

    // expect a message from each task
    int n = 0;
    while (n < taskTotalNum) {
      mpi::status status = world.probe();
      world.recv(status.source(), status.tag(),
                 resultTaskPackageT2[status.source()][taskCounterT3]);
      ++n;
    };

Best regards, Riccardo

Thanks for your reply.

I have checked the tags; master and worker tags match.

The deadlock happens in the case of 2 tasks scheduled on one processor. If there is only one task on one processor, there is no deadlock; it works well.

The master is responsible for scheduling tasks to workers, which need to run the assigned tasks and feed the results back to the master. If I assign one task to each worker, it works well. But when I increase the number of tasks to 2 on a worker node, it deadlocks. The master only schedules 2 tasks to one worker in order to simplify the analysis of the potential deadlock. The worker can receive the 2 tasks and run them, but the master cannot get the results from the worker.

The main idea:

master (node 0):

    counter = 0;
    totalTaskNum = 2;
    while (counter < totalTaskNum) {
      TaskPackage myTaskPackage(world);
      world.isend(node1, downStreamTaskTag, myTaskPackage);
      recvReqs[counter] = world.irecv(node1, upStreamTaskTag, taskResultPackage[counter]);
      counter++;
    }
    mpi::wait_all(recvReqs, recvReqs + totalTaskNum);

worker (node 1):

    while (1) {
      TaskPackage workerTaskPackage(world);
      world.recv(node0, downStreamTaskTag, workerTaskPackage);
      // do its local work
      world.isend(node0, upStreamTaskTag, workerTaskPackage);
      if (no new task) break;
    }

My code has many classes; I am trying to find out how to cut the main part out of it. Any help is appreciated. Thanks, Jack

The isend() also returns a request object that you need to call wait on.

Matthias
Sent from my iPad
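Applied to the master pseudocode above, that means keeping the isend() requests and passing them to wait_all() together with the receives. A sketch (names and tags follow the thread; int stands in for TaskPackage, whose definition is not shown):

    #include <boost/mpi.hpp>
    #include <vector>

    namespace mpi = boost::mpi;

    void master(mpi::communicator& world, int node1, int downStreamTaskTag,
                int upStreamTaskTag, int totalTaskNum) {
        // int stands in for TaskPackage; buffers must outlive wait_all()
        std::vector<int> myTaskPackage(totalTaskNum, 0);
        std::vector<int> taskResultPackage(totalTaskNum, 0);
        std::vector<mpi::request> reqs;         // sends *and* receives
        for (int counter = 0; counter < totalTaskNum; ++counter) {
            reqs.push_back(world.isend(node1, downStreamTaskTag,
                                       myTaskPackage[counter]));
            reqs.push_back(world.irecv(node1, upStreamTaskTag,
                                       taskResultPackage[counter]));
        }
        mpi::wait_all(reqs.begin(), reqs.end()); // completes every request
    }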

Thanks, I have replaced the main loop with your code, but it still deadlocks. Any help is appreciated. Jack, June 28 2010

Hi Jack,

please find attached a sample program that does a master/worker exchange, along the lines of the pseudocode you posted yesterday.

Caveat: I have not been able to have the master wait on *all* messages sent to all workers in one single call -- I get MPI_ERR_TRUNCATE if the number of messages sent to a single worker is > 1. I'm afraid I'm not experienced enough with OpenMPI or Boost.MPI to debug it any further.

Instead the code sends a message to each worker, waits for the reply, then sends another batch, etc. -- this is akin to having a barrier after each round of computation.

Best regards, Riccardo
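The attachment itself is not reproduced in the archive, but a sketch of the batched exchange just described might look like this (the int payloads, tag parameters, and function name are assumptions, not the actual attached code):

    #include <boost/mpi.hpp>
    #include <vector>

    namespace mpi = boost::mpi;

    void master_rounds(mpi::communicator& world, int numWorkers,
                       int roundsPerWorker, int downTag, int upTag) {
        std::vector<int> task(numWorkers, 0);   // outgoing payloads, one per worker
        std::vector<int> result(numWorkers, 0); // incoming payloads, one per worker
        for (int round = 0; round < roundsPerWorker; ++round) {
            std::vector<mpi::request> reqs;
            for (int w = 1; w <= numWorkers; ++w) {
                reqs.push_back(world.isend(w, downTag, task[w - 1]));
                reqs.push_back(world.irecv(w, upTag, result[w - 1]));
            }
            // Complete the whole round before starting the next batch --
            // effectively a barrier after each round of computation.
            mpi::wait_all(reqs.begin(), reqs.end());
        }
    }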

Thanks,
Your code works well.
I also got the MPI_ERR_TRUNCATE error.
I solved it in this way. If I use, on the master:

    world.irecv(resultSourceRank, upStreamTaskTag, myResultTaskPackage[iRank][taskCounterT3]);
I got this error because I declared myResultTaskPackage as a 2-dimensional array of TaskPackage.
It seems that a 2-dimensional array cannot be used to receive my user-defined class package from the worker, which sends a TaskPackage to the master.
So I changed it to an int 2-D array to get the result, and it works well.
But I still want to find out how to store the result in a data structure of type TaskPackage, because int data can only carry integers. Too limited.
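One way to do that is to make TaskPackage serializable with Boost.Serialization, so Boost.MPI can send and receive it directly. A sketch (the fields are invented, since the real TaskPackage layout is not shown in the thread):

    #include <boost/mpi.hpp>
    #include <boost/serialization/access.hpp>
    #include <boost/serialization/vector.hpp>
    #include <vector>

    class TaskPackage {
    public:
        TaskPackage() = default;
        int taskId = 0;                         // illustrative fields
        std::vector<double> payload;
    private:
        friend class boost::serialization::access;
        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/) {
            ar & taskId;
            ar & payload;
        }
    };

    // With this in place, the master can irecv() into containers of
    // TaskPackage (e.g. std::vector<std::vector<TaskPackage> >)
    // instead of a plain int array.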
What I want to do is:
The master stores the results from each worker and then combines them to form the final result, after collecting all the results from the workers.
But if the master's number of tasks cannot be divided evenly by the number of workers, each worker may get a different number of tasks.
If we have 11 tasks and 3 workers:

    aveTaskNumPerNode = (11 - 11%3) / 3 = 3
    leftTaskNum = 11%3 = 2 = Z

The master distributes each of the leftover tasks to workers 1 through Z (Z < totalNumWorkers). For example: worker 1: 4 tasks, worker 2: 4 tasks, worker 3: 3 tasks. The master tries to distribute the tasks evenly, so that the difference between the workloads of the workers is minimized.
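A small sketch of that distribution rule (the function name is made up): the first leftTaskNum workers each take one extra task, so for 11 tasks and 3 workers the counts come out 4, 4, 3:

    #include <vector>

    // Returns how many tasks each of nWorkers gets when nTasks does not
    // divide evenly: the first (nTasks % nWorkers) workers get one extra.
    std::vector<int> tasksPerWorker(int nTasks, int nWorkers) {
        const int base = nTasks / nWorkers;     // aveTaskNumPerNode
        const int left = nTasks % nWorkers;     // leftTaskNum ("Z")
        std::vector<int> counts(nWorkers, base);
        for (int w = 0; w < left; ++w)
            counts[w] += 1;                     // workers 1..Z take one extra
        return counts;                          // e.g. {4, 4, 3} for (11, 3)
    }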
I found that your code "hello1.cpp" also uses the " std::vector
participants (3)
- Jack Bryan
- Matthias Troyer
- Riccardo Murri