Re: [Boost-users] Hybrid parallelism, no more + mpi+serialization, many questions

Here's how I'd probably do this using a pull model: <snip> Brian
Hi Brian,
Thanks for the solution. I think I'll start with just single-threaded MPI processes.
Can I use MPI to split the communicators following my tree's shape? Does MPI allow splitting the world communicator, then splitting the result again, and again...?
regards,
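
PS: something like the following is what I have in mind (an untested sketch with Boost.MPI; the colourings are just placeholders for the levels of my tree):

#include <boost/mpi.hpp>
#include <iostream>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    // First level: split the world communicator into groups,
    // e.g. according to the first level of the tree.
    int level1_colour = world.rank() % 2;          // placeholder colouring
    mpi::communicator level1 = world.split(level1_colour);

    // Second level: split each level-1 communicator again,
    // following the next level of the tree.
    int level2_colour = level1.rank() % 2;         // placeholder colouring
    mpi::communicator level2 = level1.split(level2_colour);

    std::cout << "world rank " << world.rank()
              << " -> level1 rank " << level1.rank()
              << " -> level2 rank " << level2.rank() << std::endl;
    return 0;
}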

I'm not positive exactly what you mean. Can you explain in more detail?
Maybe you're referring to spawning? You can spawn processes, but I'm
very inexperienced with that. It sounds, however, like you'll always
have at least millions of tasks. For ease of programming, I'd
recommend just using mpirun to launch as many processes as you can
afford, and pass tasks to them when they request work to do. Without
understanding your problem in more detail, that's the best I can
recommend. (For example, is it easy to just "jump" to a starting point
in your tree, or do you have to traverse from the start?)
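In code, the pull model I'm describing is roughly this (an untested
sketch with Boost.MPI; the tags and the int "tasks" are placeholders
for your real work items and results):

#include <boost/mpi.hpp>

namespace mpi = boost::mpi;

// Placeholder message tags for the pull protocol.
enum Tags { TAG_REQUEST = 0, TAG_TASK = 1, TAG_STOP = 2 };

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    if (world.rank() == 0) {                  // master: hand out tasks on demand
        int next_task = 0, stopped = 0;
        const int num_tasks = 1000000;        // placeholder task count
        while (stopped < world.size() - 1) {
            int dummy;
            mpi::status s = world.recv(mpi::any_source, TAG_REQUEST, dummy);
            if (next_task < num_tasks)
                world.send(s.source(), TAG_TASK, next_task++);
            else {
                world.send(s.source(), TAG_STOP, 0);
                ++stopped;
            }
        }
    } else {                                  // workers: pull work until told to stop
        for (;;) {
            world.send(0, TAG_REQUEST, 0);    // ask the master for work
            int task;
            mpi::status s = world.recv(0, mpi::any_tag, task);
            if (s.tag() == TAG_STOP) break;
            // ... process 'task' here, send a result back if needed ...
        }
    }
    return 0;
}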
Brian
On Fri, Nov 19, 2010 at 7:14 AM, Hicham Mouline wrote:
Here's how I'd probably do this using a pull model: <snip> Brian
Hi Brian,
Thanks for the solution. I think I'll start with just single-threaded MPI processes.
Can I use MPI to split the communicators following my tree's shape? Does MPI allow splitting the world communicator, then splitting the result again, and again...?
regards,

Hello,
Following a couple of threads I posted here about MPI, I wish to thank Brian B, Dave A, Matthias T, James S and Riccardo M for the clarifications. I have reached the following:
1. I am using the Open MPI implementation.
2. Open MPI doesn't allow mixing Windows and Linux boxes in the same MPI communicator.
3. I have a GUI application. It is better that the GUI code/libs not be part of the MPI processes at all.
4. Therefore, the GUI runs in a separate process (a 1st executable) and the MPI processes run a 2nd executable that performs the calculations.
5. The GUI (on Windows) would control a master MPI process, which would then dispatch tasks to the "slave" MPI processes. Master and slaves run the same executable (all on Linux). This is cleaner.
The GUI application will need to send, to the master MPI process, the static data that is loaded once and that the calculations run on. The GUI will probably need to launch the MPI runtime (i.e. some rsh of 'mpirun -np x -host .... <calculator image>') and then connect to the MPI master. I will need some custom code for this, and here it seems I will have some duplication: the data types the GUI sends to the master are the same types the master sends to the slaves. I will also need serialization, and I guess some library to send messages over the network (from the GUI to the MPI master), boost::asio?
The objective is to keep the MPI processes clean of the GUI code. Is there a simpler way?
I've asked the same question on the OMPI forum as well: http://www.open-mpi.org/community/lists/users/2010/11/14866.php
I appreciate any advice you can give,
regards,
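
PS: to make the question concrete, this is roughly what I picture for the GUI -> master transfer (an untested sketch; StaticData, the master address and the framing are placeholders), using Boost.Serialization for the payload and Boost.Asio for the transport:

#include <boost/archive/text_oarchive.hpp>
#include <boost/asio.hpp>
#include <boost/cstdint.hpp>
#include <boost/serialization/vector.hpp>
#include <sstream>
#include <string>
#include <vector>

// Placeholder for the static data the calculations run on; the same type
// (and the same serialize() member) would be reused by the MPI processes.
struct StaticData {
    std::vector<double> values;
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & values;
    }
};

// GUI side: serialize the data to a text archive and push it to the master.
void send_to_master(const StaticData& data,
                    const std::string& master_ip, unsigned short port)
{
    std::ostringstream oss;
    boost::archive::text_oarchive oa(oss);
    oa << data;
    const std::string payload = oss.str();

    boost::asio::io_service io;
    boost::asio::ip::tcp::socket socket(io);
    socket.connect(boost::asio::ip::tcp::endpoint(
        boost::asio::ip::address::from_string(master_ip), port));

    // Naive framing: 4-byte payload size, then the archive text
    // (endianness and error handling are omitted in this sketch).
    boost::uint32_t size = static_cast<boost::uint32_t>(payload.size());
    boost::asio::write(socket, boost::asio::buffer(&size, sizeof(size)));
    boost::asio::write(socket, boost::asio::buffer(payload));
}

Since Boost.MPI itself uses the same serialize() member, the master could then pass the deserialized StaticData on to the slaves (e.g. with boost::mpi::broadcast) without duplicating any serialization code, which is the duplication I'd like to avoid.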

On 22 Nov 2010, at 22:59, Hicham Mouline wrote:
<snip>
Hi,
you can pack it into a text archive using Boost.Serialization and send that text archive via files (or maybe Boost.Asio, but I'm not an expert there) from the GUI to the MPI master.
Matthias
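
PS: in code, the file route looks roughly like this (an untested sketch; it works for any type with a Boost.Serialization serialize() member, for example a StaticData struct like the one sketched earlier in the thread):

#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <fstream>

// GUI side: write any Boost.Serialization-enabled type to a text archive file
// (on a shared filesystem, or copied over to the cluster afterwards).
template <class T>
void save_archive(const T& data, const char* filename)
{
    std::ofstream ofs(filename);
    boost::archive::text_oarchive oa(ofs);
    oa << data;
}

// MPI master side: read the archive back before dispatching work to the slaves.
template <class T>
void load_archive(T& data, const char* filename)
{
    std::ifstream ifs(filename);
    boost::archive::text_iarchive ia(ifs);
    ia >> data;
}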
participants (3)
- Brian Budge
- Hicham Mouline
- Matthias Troyer