
Hi, we're interested in how to optimize BGL on multicore in order to add our standard Bellman-Ford algorithms for optical networks. How does current Boost work on multicore? Is BGL thread-safe? Which parts would need to be optimized? Cheers, Giorgio. -- I want to be the ray of sunshine that wakes you every day, to make you breathe and live in me. "Favola" - Modà.

Giorgio Zoppi <giorgio.zoppi <at> gmail.com> writes:
we're interested in how to optimize BGL on multicore in order to add our standard Bellman-Ford algorithms for optical networks. How does current Boost work on multicore? Is BGL thread-safe? Which parts would need to be optimized?
I believe what you're interested in is the Parallel BGL. See http://www.boost.org/doc/libs/release/libs/graph_parallel/doc/html/index.htm... and http://osl.iu.edu/research/pbgl/

2010/5/30 Adam Merz <adammerz@hotmail.com>:
I believe what you're interested in is the Parallel BGL. See http://www.boost.org/doc/libs/release/libs/graph_parallel/doc/html/index.htm... and http://osl.iu.edu/research/pbgl/
I skimmed the docs, but we're better suited to a multicore solution than a distributed-process one. I can't find anything about that in the documentation. Any hints?

On Sun, 30 May 2010, Giorgio Zoppi wrote:
I skimmed the docs, but we're better suited to a multicore solution than a distributed-process one. I can't find anything about that in the documentation. Any hints?
We are in the process of adding shared-memory/hybrid parallelism to PBGL, but that is still at a very early stage. You can run multiple MPI processes on the same machine (on different cores) and they will communicate through shared memory. One issue that comes up is that memory, not CPU performance, is often the limitation for graph algorithms; in that case, multiple cores don't really help performance much. Are you running more expensive algorithms that aren't memory-intensive? If so, there might be simple things you can do with the existing BGL algorithms to make them run with shared memory (for example, adding OpenMP pragmas). -- Jeremiah Willcock

2010/5/30 Jeremiah Willcock <jewillco@osl.iu.edu>:
[...] If so, there might be simple things you can do with the existing BGL algorithms to make them run with shared memory (for example, adding OpenMP pragmas).
So if the graph is stored as a sparse matrix, you could provide a more compact implementation and modify (parallelize) your algorithms.

On Sun, 30 May 2010, Giorgio Zoppi wrote:
So if the graph is stored as a sparse matrix, you could provide a more compact implementation and modify (parallelize) your algorithms.
I don't understand what you're saying. What kinds of algorithms are you running on your graphs that you would like to parallelize? -- Jeremiah Willcock

Hi, I have a small question regarding Boost.Log: would it be possible to 'register' different output files for different loggers?

BOOST_LOG(get_normal_log()) << msg;  // goes to 'normal.log'
BOOST_LOG(get_special_log()) << msg; // goes to 'special.log'

I could not find how to do this by reading the manual or the review mails, but I might have missed the obvious. Thanks in advance, Dirk

On 05/31/2010 04:34 PM, Dirk Griffioen wrote:
I have a small question regarding Boost.Log: would it be possible to 'register' different output files for different loggers?
Filtering is the key. If each of your loggers has an attribute with a unique value, you can write filters that pass records from particular loggers to particular sinks. That way you can create several file sinks, each writing the records from its respective logger. Alternatively, you can use the multi-file sink [1]. In that case you only need one sink, which distributes records to different files whose names are based on the attribute values attached to the record. [1] <http://boost-log.sourceforge.net/libs/log/doc/html/log/detailed/sink_backends.html#log.detailed.sink_backends.text_multifile>
participants (5)
- Adam Merz
- Andrey Semashev
- Dirk Griffioen
- Giorgio Zoppi
- Jeremiah Willcock