Thanks.
This is the main part of my code, which may have a deadlock.
Master:
for (iRank = 0; iRank < availableRank; iRank++)
{
    destRank = iRank + 1;
    for (taski = 1; taski <= TaskNumPerRank; taski++)
    {
        resultSourceRank = destRank;
        // post the receive for this task's result before sending the task out
        recvReqs[taskCounterT2] = world.irecv(resultSourceRank, upStreamTaskTag,
                                              resultTaskPackageT2[iRank][taskCounterT3]);
        reqs = world.isend(destRank, taskTag, myTaskPackage);
        ++taskCounterT2;
    }
}
// taskTotalNum = availableRank * TaskNumPerRank
// right now, availableRank = 1, TaskNumPerRank = 2
mpi::wait_all(recvReqs, recvReqs + taskTotalNum);
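In case it is clearer, here is a compilable sketch of roughly what the master loop does. Task, outbox, sendReqs, masterPostTasks and the int payload are placeholder names I am introducing only for this sketch; the real code sends serialized task objects and stores results in a 2-D array.

    #include <boost/mpi.hpp>
    #include <vector>
    namespace mpi = boost::mpi;

    typedef int Task;   // placeholder for the real task/result package type

    // Post one non-blocking receive and one non-blocking send per task, then
    // wait for all of them. outbox/sendReqs keep the outgoing payloads and the
    // send requests alive until the sends have completed.
    void masterPostTasks(mpi::communicator& world, int availableRank,
                         int TaskNumPerRank, int taskTag, int upStreamTaskTag)
    {
        const int taskTotalNum = availableRank * TaskNumPerRank;
        std::vector<Task> results(taskTotalNum);
        std::vector<Task> outbox(taskTotalNum);
        std::vector<mpi::request> recvReqs(taskTotalNum);
        std::vector<mpi::request> sendReqs(taskTotalNum);

        int k = 0;
        for (int destRank = 1; destRank <= availableRank; ++destRank) {
            for (int t = 0; t < TaskNumPerRank; ++t, ++k) {
                outbox[k] = k;   // dummy payload standing in for myTaskPackage
                recvReqs[k] = world.irecv(destRank, upStreamTaskTag, results[k]);
                sendReqs[k] = world.isend(destRank, taskTag, outbox[k]);
            }
        }
        mpi::wait_all(recvReqs.begin(), recvReqs.end());
        mpi::wait_all(sendReqs.begin(), sendReqs.end());   // also complete the sends
    }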
-----------------------------------------------
Worker:
while (1)
{
    world.recv(managerRank, downStreamTaskTag, resultTaskPackageW);
    // ... do the local work on the received task ...
    destRank = masterRank;
    reqs = world.isend(destRank, taskTag, myTaskPackage);
    if (/* received the end signal */)
        break;
}
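And here is a sketch of what I intend the worker loop to do, assuming the end signal comes from the master on its own tag. workerLoop, endTag and Task are placeholder names for this sketch only, and I use the same pair of tags as in the master sketch above; in the real code the worker receives with downStreamTaskTag.

    #include <boost/mpi.hpp>
    namespace mpi = boost::mpi;

    typedef int Task;   // placeholder for the real task/result package type

    // Receive tasks from the master until an end signal arrives, doing the
    // local work and returning each result with a blocking send.
    void workerLoop(mpi::communicator& world, int masterRank,
                    int taskTag, int upStreamTaskTag, int endTag)
    {
        while (true) {
            // look at the next message from the master before receiving it
            mpi::status s = world.probe(masterRank, mpi::any_tag);
            if (s.tag() == endTag) {                 // end signal: consume it and stop
                int dummy;
                world.recv(masterRank, endTag, dummy);
                break;
            }
            Task task;
            world.recv(masterRank, taskTag, task);
            Task result = task;                      // stand-in for the real local work
            world.send(masterRank, upStreamTaskTag, result);
        }
    }

In the sketch I use a blocking send for the result instead of the isend in my real worker, so the result buffer does not have to outlive the loop iteration; I am not sure whether that difference matters for the deadlock.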
Any help is appreciated.
Jack
> Date: Mon, 28 Jun 2010 17:29:19 +0200
> From: riccardo.murri@gmail.com
> To: boost-users@lists.boost.org
> Subject: Re: [Boost-users] boostMPI asychronous communication
>
> Hi Jack,
>
> On Mon, Jun 28, 2010 at 5:18 PM, Jack Bryan <dtustudy68@hotmail.com> wrote:
> > I just posted irecv before isend.
> > master node:
> > request=irecv();
> > do its local work;
> > isend(message to worker nodes);
> > wait(request).
> > worker node:
> > while(still have new task ){
> > recv(message);
> > do its local work;
> > isend(result message to master)
> > }
> > if there is only one task to worker, it works.
> > But, if there are 2 tasks to workers, master cannot get the result from
> > worker.
>
> Smells of deadlock. Maybe the master is waiting for a message coming
> from the wrong worker?
>
> We might need to delve into details; can you please post a minimal
> Boost.MPI program exhibiting this "blocking" behavior?
>
> Best regards,
> Riccardo
> _______________________________________________
> Boost-users mailing list
> Boost-users@lists.boost.org
> http://lists.boost.org/mailman/listinfo.cgi/boost-users