
I'd think a hashed dictionary would be needed for such a large data set. There is a caveat, though: whether or not a good hash function can be created. The good thing is that it's easy to experiment with different STL associative containers, as long as you have SGI's STL, a derivative, or a good alternative. In SGI, the container is hash_map. Martin Okrslar wrote:
Dear Boost users,
for an application in computational biology I am intending to use the BGL to compute strongly connected components of a graph with about 600,000 nodes.
In the BGL book on page 103 it is mentioned that "The choice to use std::map to implement the property map is rather inefficient in this case...". Since my graph is large, I'm a bit worried about this statement.
Could anybody please give me a hint as to what other data structure I might use here to make it efficient?
All the best, Martin
--
-----------------------------------------------------------------------------
Martin Okrslar
MPI for Molecular Genetics, Computational Molecular Biology
Ihnestrasse 73, D-14195 Berlin
phone: ++ 49 + 30 / 8413-1166   Fax: ++ 49 + 30 / 8413-1152
email: okrslar@molgen.mpg.de    URL: http://cmb.molgen.mpg.de
-----------------------------------------------------------------------------