Scalability test over BGL
Hi,

I'm performing a scalability test on the BGL; that is, I'm checking how big a graph can be constructed using the BGL. For the moment I'm considering complete graphs. I'm able to construct a complete graph with 3000 vertices, but beyond that it crashes.

Has anybody done a scalability test on the BGL? Kindly help. This can be treated as a benchmark for the BGL, and then we can improve on it in the near future.

Following is my code:
#include <cstdlib>
#include <iostream>
#include <vector>
#include <fstream>
#include <string>
[...]
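The rest of the listing was cut off in the archive. A minimal sketch of such a complete-graph construction test might look like the following; the graph type (a vecS/vecS/undirectedS adjacency_list) and the default size are assumptions, since the original code is lost:

#include <cstdlib>
#include <iostream>
#include <boost/graph/adjacency_list.hpp>

int main(int argc, char* argv[])
{
    // Number of vertices; the thread reports crashes above 3000.
    const std::size_t n = (argc > 1) ? std::atoi(argv[1]) : 3000;

    typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                  boost::undirectedS> Graph;
    Graph g(n);

    // A complete graph on n vertices has n*(n-1)/2 undirected edges.
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j)
            boost::add_edge(i, j, g);

    std::cout << boost::num_vertices(g) << " vertices, "
              << boost::num_edges(g) << " edges" << std::endl;
    return 0;
}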
> I'm able to construct a complete graph with 3000 vertices, but beyond that it crashes. [...]
-- snipp
Hi,

I don't have any problems running your code on a 64-bit machine until the limits of main memory (roughly 11k vertices on 16 GB of RAM), using g++ on GNU/Linux. I don't see why there should be any scaling issues with the BGL; I have used the BGL with several million vertices and O(n) edges without any problems. A more memory-friendly graph structure is the compressed sparse row graph, if you're dealing with static graphs.

cheers
moritz
--
Moritz Hilger
Combinatorial Optimization & Graph Algorithms
TU Berlin
+49 30 314-25773
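For reference, a minimal sketch of the compressed sparse row graph Moritz mentions; the path-shaped edge set and the size are illustrative, not from the thread:

#include <cstddef>
#include <utility>
#include <vector>
#include <boost/graph/compressed_sparse_row_graph.hpp>

int main()
{
    const std::size_t n = 1000000;

    // A CSR graph is built once from a prepared edge list and is then
    // immutable, which is what makes it so memory-friendly.
    std::vector<std::pair<std::size_t, std::size_t> > edges;
    for (std::size_t i = 0; i + 1 < n; ++i)
        edges.push_back(std::make_pair(i, i + 1)); // O(n) edges: a path

    // The loop above already emits edges sorted by source vertex,
    // so the edges_are_sorted constructor tag can be used.
    typedef boost::compressed_sparse_row_graph<boost::directedS> Graph;
    Graph g(boost::edges_are_sorted, edges.begin(), edges.end(), n);
    return 0;
}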
Hi,

After increasing the RAM size to 4 GB I could get results for 5000 vertices: BFS and DFS finish in 16 and 14 seconds respectively. I am currently testing with 7000 vertices. But in my case the number of edges is O(V^2), where V is the number of vertices. I am finding the threshold values for all of these, which gives a complete picture of how big a graph can be constructed in a given amount of memory and how long the various algorithms take to compute their results.

Do you have any analysis data I could compare against? Also let me know if any other important parameters should be considered when checking scalability. Right now I am considering complete graphs, so once it works fine for those it will work for all other cases. Kindly help.

Thanks
Abhishek Vyas
Tata Consultancy Services
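A sketch of how such a timing run could be set up; the adjacency_list type and the use of C++11 <chrono> are assumptions, since the thread does not show the measurement code:

#include <chrono>
#include <iostream>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/breadth_first_search.hpp>
#include <boost/graph/depth_first_search.hpp>

int main()
{
    typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                  boost::undirectedS> Graph;
    const std::size_t n = 5000;
    Graph g(n);
    for (std::size_t i = 0; i < n; ++i)          // complete graph:
        for (std::size_t j = i + 1; j < n; ++j)  // n*(n-1)/2 edges
            boost::add_edge(i, j, g);

    // Run both traversals with no-op visitors and time them.
    auto t0 = std::chrono::steady_clock::now();
    boost::breadth_first_search(g, boost::vertex(0, g),
        boost::visitor(boost::default_bfs_visitor()));
    auto t1 = std::chrono::steady_clock::now();
    boost::depth_first_search(g,
        boost::visitor(boost::default_dfs_visitor()));
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "BFS: " << std::chrono::duration<double>(t1 - t0).count()
              << " s, DFS: " << std::chrono::duration<double>(t2 - t1).count()
              << " s" << std::endl;
    return 0;
}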
On Oct 16, 2007, at 7:19 AM, abhishek.v@tcs.com wrote:
> After increasing the RAM size to 4 GB I could get results for 5000 vertices: BFS and DFS finish in 16 and 14 seconds respectively. I am currently testing with 7000 vertices. But in my case the number of edges is O(V^2), where V is the number of vertices. [...]
An adjacency list is not the right data structure for a graph with O(V^2) edges. Have you considered using the adjacency_matrix?
http://www.boost.org/libs/graph/doc/adjacency_matrix.html
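A minimal sketch of the adjacency_matrix Doug points to; the size here is illustrative:

#include <boost/graph/adjacency_matrix.hpp>

int main()
{
    const std::size_t n = 5000;

    // adjacency_matrix allocates its O(V^2) storage up front, so a
    // complete graph costs no memory beyond the matrix itself, and
    // add_edge is O(1).
    boost::adjacency_matrix<boost::undirectedS> g(n);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j)
            boost::add_edge(i, j, g);
    return 0;
}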
> Do you have any analysis data I could compare against? Also let me know if any other important parameters should be considered when checking scalability. [...]
We've dealt with graphs with tens of millions of edges before with the BGL. The most important point is to pick the appropriate graph representation. By the way, an undirected adjacency list requires far more memory than either a directed or a bidirectional adjacency list, because edges actually end up being stored in three places: the source vertex, the target vertex, and a separate "edges" list that allows faster iteration over the set of edges in the graph.

- Doug
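As an illustration of Doug's point, the directedness is just the third template argument of adjacency_list, so switching representations is a one-line change:

#include <boost/graph/adjacency_list.hpp>

// Each undirected edge is stored three times (source list, target list,
// and the global edge list); a directed edge is stored once, and a
// bidirectional edge twice (out-edge list plus in-edge list).
typedef boost::adjacency_list<boost::vecS, boost::vecS,
                              boost::directedS>      DirectedGraph;
typedef boost::adjacency_list<boost::vecS, boost::vecS,
                              boost::bidirectionalS> BidirectionalGraph;
typedef boost::adjacency_list<boost::vecS, boost::vecS,
                              boost::undirectedS>    UndirectedGraph;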
participants (3)
- abhishek.v@tcs.com
- Doug Gregor
- Moritz Hilger