Hi. According to the documentation, most Boost Graph Library algorithms come with guaranteed complexity bounds. For instance, DFS and BFS run in O(V + E) time, Bellman-Ford in O(VE), and so on. As far as I know, the time bounds stated on those pages are generally the best theoretical values found in the literature. However, one thing is still unclear to me. Most of these algorithms achieve their bounds only when implemented in a fairly specialized manner. The BGL is NOT like that: it is a very generic library, and it uses property maps in a lot of its algorithms. My question is whether or not these property maps affect the complexity bounds.

For example: in the textbook implementation of Bellman-Ford (which achieves O(VE)), weights are attached directly to the edges. In Boost's implementation they are accessed through a property map. Depending on how that map is backed, a constant-time operation (edge->get_weight(), for example) could turn into a logarithmic one (weight_map[edge] on a tree-based map, for example). I must say that I didn't take a careful look at the algorithm implementations, but it just seems odd to me that the same complexity bounds can still be achieved with those property maps. Can anyone help me understand that?
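To make this concrete, here is a minimal sketch of the two scenarios I have in mind. The graph and the weights are made-up examples, and the O(log E) scenario in the trailing comment is my own assumption about what an associative exterior map would cost, not something taken from the BGL docs:

#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/bellman_ford_shortest_paths.hpp>
#include <limits>
#include <vector>

int main()
{
    using namespace boost;

    // Scenario 1: weights stored as an interior edge property. Here
    // get(weight, e) just reads a field stored in the edge record, so
    // each access is O(1) and the O(VE) bound should be preserved.
    typedef adjacency_list<vecS, vecS, directedS, no_property,
                           property<edge_weight_t, int> > Graph;

    Graph g(4);
    add_edge(0, 1,  2, g);
    add_edge(1, 2, -1, g);
    add_edge(0, 2,  4, g);
    add_edge(2, 3,  3, g);

    property_map<Graph, edge_weight_t>::type weight = get(edge_weight, g);

    std::vector<int> distance(num_vertices(g),
                              (std::numeric_limits<int>::max)());
    distance[0] = 0;  // source vertex

    bool no_negative_cycle = bellman_ford_shortest_paths(
        g, num_vertices(g),
        weight_map(weight).distance_map(&distance[0]));

    // Scenario 2 (the one I'm worried about): if the weight map were
    // instead an associative_property_map over a std::map keyed by
    // edge descriptors, every get(weight_map, e) would cost O(log E),
    // and a naive analysis would give O(VE log E) rather than O(VE).

    return no_negative_cycle ? 0 : 1;
}

Thanks.

-- Leandro Terra C. Melo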