BGL - parallel/distributed

Does anybody know how to use the functions in graph/parallel and graph/distributed? I couldn't find any references to parallel/distributed processing in the graph library's documentation, though such functionality obviously exists. Is it the same implementation as in the Parallel BGL project (http://www.osl.iu.edu/research/pbgl/)? I must say that the graph library's documentation desperately needs an update.

For boost 1.40 you can find it here:
http://www.boost.org/doc/libs/1_40_0/libs/graph_parallel/doc/html/index.html
I am not sure what the differences from 1.41 are.
-- Mathieu

OK, I tried the simplest possible example, but I just can't figure out
where the problem is. I am on Windows with MPICH2 installed and all the
Boost libraries correctly compiled with MPI, but if I do something like
#include
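
(The rest of the example did not survive in the archive. As an assumed reconstruction rather than the original code, the kind of declaration in question, using a distributed adjacency_list over an mpi_process_group, would look roughly like:)

    #include <boost/graph/use_mpi.hpp>
    #include <boost/graph/distributed/mpi_process_group.hpp>
    #include <boost/graph/distributed/adjacency_list.hpp>

    using boost::graph::distributed::mpi_process_group;

    // Vertices are spread across the MPI processes via the distributedS selector.
    typedef boost::adjacency_list<
        boost::vecS,
        boost::distributedS<mpi_process_group, boost::vecS>,
        boost::undirectedS> Graph;

followed by constructing the graph, e.g. Graph g(4), in main(). As the replies below explain, one thing is still missing from this sketch.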
-- Dipl.-Ing. Ondrej Sluciak Room CG-04-06 Vienna University of Technology, Austria Institute of Communications and Radio-Frequency Engineering Gusshausstrasse 25-29/389 http://www.nt.tuwien.ac.at

You are right, that was what I was missing. Thank you. Anyway, now I can compile and run it, but the program always exits at the line Graph g(4) with exit code 0x1. It quits in the initialization of

    mpi_process_group::impl::impl(std::size_t num_headers, std::size_t buffer_sz,
                                  communicator_type parent_comm)
      : comm(parent_comm, boost::mpi::comm_duplicate),
        oob_reply_comm(parent_comm, boost::mpi::comm_duplicate),
        allocated_tags(boost::mpi::environment::max_tag())

more precisely in communicator::communicator(const ...) in libs/mpi/src/communicator.cpp, on the line

    BOOST_MPI_CHECK_RESULT(MPI_Comm_dup, (comm, &newcomm));

It doesn't crash or anything, it just exits "normally" with exit code 1. Do you have any idea what is wrong?

Jeremiah Willcock wrote:
-- Dipl.-Ing. Ondrej Sluciak Room CG-04-06 Vienna University of Technology, Austria Institute of Communications and Radio-Frequency Engineering Gusshausstrasse 25-29/389 http://www.nt.tuwien.ac.at

To be even more precise, it exits inside the MPI library function MPI_Comm_dup. On Linux I guess you have to have MPI running, but I don't know how that works on Windows with MPICH2.

Jeremiah Willcock wrote:
-- Dipl.-Ing. Ondrej Sluciak Room CG-04-06 Vienna University of Technology, Austria Institute of Communications and Radio-Frequency Engineering Gusshausstrasse 25-29/389 http://www.nt.tuwien.ac.at

You never initialized the Boost.MPI environment. See the tests, but you're looking for something like:

    mpi::environment env(argc, argv);

This basically functions like MPI_Init() plus some Boost.MPI initialization. Comm_dup fails because the communicator you're trying to dup doesn't exist.

-Nick
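
For reference, a minimal sketch putting this fix together with the declaration from earlier in the thread (assumed to match the Boost 1.40 graph_parallel headers; a sketch, not a verified example):

    #include <boost/graph/use_mpi.hpp>
    #include <boost/mpi.hpp>
    #include <boost/graph/distributed/mpi_process_group.hpp>
    #include <boost/graph/distributed/adjacency_list.hpp>

    using boost::graph::distributed::mpi_process_group;

    typedef boost::adjacency_list<
        boost::vecS,
        boost::distributedS<mpi_process_group, boost::vecS>,
        boost::undirectedS> Graph;

    int main(int argc, char* argv[])
    {
      // Initialize MPI before any distributed graph is built;
      // this wraps MPI_Init()/MPI_Finalize() plus Boost.MPI setup.
      boost::mpi::environment env(argc, argv);

      Graph g(4);  // mpi_process_group can now duplicate MPI_COMM_WORLD
      return 0;
    }

It would typically be linked against the boost_graph_parallel, boost_mpi and boost_serialization libraries and launched under mpiexec (e.g. mpiexec -n 2 example.exe with MPICH2).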

Thank you very much. I thought that the graph constructor would also take care of initializing MPI (if it has the distributedS selector). A working example in the graph_parallel documentation would certainly help a lot. Now everything seems to work for me. Thank you once more.

Nick Edmonds wrote:
-- Dipl.-Ing. Ondrej Sluciak Room CG-04-06 Vienna University of Technology, Austria Institute of Communications and Radio-Frequency Engineering Gusshausstrasse 25-29/389 http://www.nt.tuwien.ac.at
participants (4)
- Jeremiah Willcock
- Mathieu Malaterre
- Nick Edmonds
- Ondrej Sluciak