Re: [Boost-users] mpi non-blocking communication

(1) You must initialize MPI with MPI_Init_thread() instead of MPI_Init(). The boost::mpi::communicator ctor uses MPI_Init(), so you must run the initialization yourself and *then* create the communicator object. For instance:

    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
    // ...
    mpi::communicator world;
Riccardo,

I have the following code in my main:

    int main(int argc, char* argv[])
    {
      boost::mpi::environment env(argc, argv);
      const int mpi_thread_support =
          MPI::Init_thread(argc, argv, MPI_THREAD_SERIALIZED);
      if (mpi_thread_support != MPI_THREAD_SERIALIZED) {
        std::cerr << "MPI implementation does not support threads" << std::endl;
        return 1;
      }
      boost::mpi::communicator world;
      ...
    }

This prints an error however:

    Calling MPI_Init or MPI_Init_thread twice is erroneous.

How does (1) avoid the mpi::communicator constructor calling MPI_Init twice? Perhaps it's the environment constructor that calls MPI_Init?

rds,

Hi Hicham,

On Mon, Dec 6, 2010 at 11:58 AM, Hicham Mouline <hicham@mouline.org> wrote:
(1) You must initialize MPI with MPI_Init_thread() instead of MPI_Init(). The boost::mpi::communicator ctor uses MPI_Init(), so you must run the initialization yourself and *then* create the communicator object. For instance:

    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
    // ...
    mpi::communicator world;
I have the following code in my main:
    int main(int argc, char* argv[])
    {
      boost::mpi::environment env(argc, argv);
      const int mpi_thread_support =
          MPI::Init_thread(argc, argv, MPI_THREAD_SERIALIZED);
      if (mpi_thread_support != MPI_THREAD_SERIALIZED) {
        std::cerr << "MPI implementation does not support threads" << std::endl;
        return 1;
      }
      boost::mpi::communicator world;
      ...
    }
This prints an error however:
Calling MPI_Init or MPI_Init_thread twice is erroneous.
How does (1) avoid the mpi::communicator constructor calling MPI_Init twice?
Perhaps it's the environment constructor that calls MPI_INIT?
Yes, indeed. There's a mistake in the pseudo-code I sent in my reply to Philipp: one must call MPI_Init_thread() before instantiating boost::mpi::environment. So the correct init order is:

    int provided;
    const int required = MPI_THREAD_SERIALIZED;
    MPI_Init_thread(&argc, &argv, required, &provided);
    if (required > provided) {
      // warning: MPI impl does not support the requested threading model
    }
    mpi::environment env(argc, argv);
    mpi::communicator world;

Thanks for the correction!

Best regards,
Riccardo

-----Original Message-----
From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Riccardo Murri
Sent: 06 December 2010 22:32
To: boost-users@lists.boost.org
Subject: Re: [Boost-users] mpi non-blocking communication
Yes, indeed. There's a mistake in the pseudo-code I sent in my reply to Philipp: one must call MPI_Init_thread() before instantiating boost::mpi::environment. So the correct init order is:
    int provided;
    const int required = MPI_THREAD_SERIALIZED;
    MPI_Init_thread(&argc, &argv, required, &provided);
    if (required > provided) {
      // warning: MPI impl does not support the requested threading model
    }
    mpi::environment env(argc, argv);
    mpi::communicator world;
Thanks for the correction!
Best regards,
Riccardo
On a related note, I posted a question to the openmpi forum. MPI_INIT is identical to MPI_INIT_THREAD(MPI_THREAD_SINGLE), but I couldn't see the utility of having MPI_THREAD_FUNNELED.

I mean: if I mpirun a process, and then a thread that is not the main thread calls MPI_INIT only by the usual mpi::environment construction, would it work? If not, then I would see the value of MPI_THREAD_FUNNELED.

regards,

Hi Hicham,

On Tue, Dec 7, 2010 at 12:10 AM, Hicham Mouline <hicham@mouline.org> wrote:
On a related note, I posted a question to the openmpi forum. MPI_INIT is identical to MPI_INIT_THREAD(MPI_THREAD_SINGLE), but I couldn't see the utility of having MPI_THREAD_FUNNELED.
MPI_THREAD_SINGLE = no threads at all.

MPI_THREAD_FUNNELED = you can have threads, but only the *main* thread (see below) issues MPI calls; you can think of this as: your application is multi-threaded, but it is single-threaded when it comes to MPI interaction.

MPI_THREAD_SERIALIZED = your application uses threads, and they can issue independent MPI calls; however, you synchronize your threads so that no two MPI calls happen concurrently (i.e., the MPI implementation may be non-reentrant).

MPI_THREAD_MULTIPLE = your application uses threads, and they can call MPI functions concurrently.

IIRC, both OpenMPI and MPICH support up to MPI_THREAD_MULTIPLE (if compiled with threading enabled).
I mean: if I mpirun a process, and then a thread that is not the main thread calls MPI_INIT only by the usual mpi::environment construction, would it work?
By definition, the "main thread" is the one that calls MPI_Init or MPI_Init_thread.

Hope this helps,
Riccardo

-----Original Message-----
From: boost-users-bounces@lists.boost.org [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Riccardo Murri
Sent: 07 December 2010 08:14
To: boost-users@lists.boost.org
Subject: Re: [Boost-users] mpi non-blocking communication
Hi Hicham,
On Tue, Dec 7, 2010 at 12:10 AM, Hicham Mouline <hicham@mouline.org> wrote:
On a related note, I posted a question to the openmpi forum. MPI_INIT is identical to MPI_INIT_THREAD(MPI_THREAD_SINGLE), but I couldn't see the utility of having MPI_THREAD_FUNNELED.
MPI_THREAD_SINGLE = no threads at all.
MPI_THREAD_FUNNELED = you can have threads, but only the *main* thread (see below) issues MPI calls; you can think of this as: your application is multi-threaded, but it is single-threaded when it comes to MPI interaction.
Thanks Riccardo,

I am just curious as to the value of distinguishing between these two. Is it just informative? I mean, as you said, it is as if the application is single-threaded when it comes to MPI interaction. What value is there in telling the MPI implementation that we have other threads, but that they will not call it? I guess I'm stuck on this point.

regards,

Hi Hicham,

On Tue, Dec 7, 2010 at 9:37 AM, Hicham Mouline <hicham@mouline.org> wrote:
MPI_THREAD_SINGLE = no threads at all.
MPI_THREAD_FUNNELED = you can have threads, but only the *main* thread (see below) issues MPI calls; you can think of this as: your application is multi-threaded, but it is single-threaded when it comes to MPI interaction.
I am just curious as to the value of distinguishing between these 2. Is it just informative?
The distinction could be important to the MPI implementation, in that it may have to take some further steps if threads are used *at all*, e.g., using the re-entrant versions of C library calls. (Whether it actually makes any difference really depends on the MPI library.)

More info in the "Rationale" paragraph at:
http://www.mpi-forum.org/docs/mpi22-report/node260.htm#Node260

Cheers,
Riccardo
participants (2)
- Hicham Mouline
- Riccardo Murri