
Hi,

On Friday, 10. December 2010 17:06:34 Mattia Lambertini wrote:
Perhaps you could attach the code of a minimal working example?
Of course. I can give you a link to the example I have tested [0], and here below is the graph I used (METIS format):
5 9 1 3 1 2 2 4 1 5 2 2 7 4 3 5 1 1 1 2 1
[0] http://www.boost.org/doc/libs/1_44_0/libs/graph_parallel/example/breadth_first_search.cpp
To compile the example you have to link:
boost_graph_parallel-mt boost_mpi-mt boost_system-mt
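On a typical installation the build might look like the following; the compiler wrapper name, the -mt library suffixes, and any include/library paths all depend on how Boost and MPI were set up, so treat this as a sketch rather than a recipe:

```shell
# Compile the PBGL example with the MPI compiler wrapper and link the
# three libraries named above (names/suffixes vary by Boost build).
mpic++ breadth_first_search.cpp -o bfs_example \
    -lboost_graph_parallel-mt -lboost_mpi-mt -lboost_system-mt

# Run distributed across e.g. two processes; how the example takes its
# input (file vs. stdin) is as in the linked source.
mpirun -np 2 ./bfs_example
```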
Thanks for your time.
Thank you for the link and the details.

There is something in the documentation at http://www.osl.iu.edu/research/pbgl/documentation/breadth_first_search.html that makes me believe the observed behavior is actually normal. If you look at "Making Visitors Safe for Distributed BFS", point 3 basically says that the "best" value needs to be stored for each vertex. Looking at your results, this seems to be the case: the distances are all smaller than or equal to the distances in the single-process case.

The only thing that still confuses me is that there is a second node with label "0" in the distributed execution result. This looks as if the BFS algorithm started there again. Because that node seems to be local to a different process than the original start node, this could indeed be the case, but I am not sure.

Unfortunately, I am not familiar with METIS at all, so is there a way to provide the underlying graph in a more intuitive format? An actual image would be nice. Or again, the dot output for Graphviz? That would make it easier to understand how the graph is actually partitioned among the processes, and perhaps one could then understand why the "better" (smaller) values are set.

Again, I have no distributed Boost here, nor MPI, so I can't test it on my own, sorry.

Best,
Cedric