
Provisional Dissonance:
What I was looking for, really, was dialog on existing design strategies. It sounds like your issues with property maps mirror my confusion with the library design, though. While I can fully appreciate the desire to write generic code, it feels like BGL has pushed the complexity of creating and enforcing coupling between the generic graph/node/visitor/algorithm/etc onto the client. [...]
A graph algorithm must be able to associate information with the elements of the graph. For instance, depth-first search needs to know whether it has already discovered a specific node (and more). In the graph utilities we implemented as part of a school project in Java, we basically relied on the hashCode() and equals() methods that every Java Object provides. The algorithm templates then internally use HashMaps for associating information with node and edge objects. This means that each time an algorithm needs to access some internally required information on a node, a hash table lookup is performed. Assuming a good hash function, hash table lookups are O(1), but they are still slower than a direct array access or than accessing a member of a structure. The design employed by BGL makes it possible to provide faster lookups than what is possible with hash tables. The cost is, as you have noticed, tougher requirements on the client. [...]
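To illustrate, here is a minimal C++ sketch of the hash-table approach described above (not BGL code; the Node type and count_reachable function are invented for this example). The algorithm keeps its per-node "discovered" flag in an external hash table keyed by node identity, so every query and update costs one expected-O(1) hash lookup, analogous to the Java HashMap scheme:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical node type; the algorithm knows nothing about its
// layout, so per-node state lives outside the node.
struct Node {
    std::string name;
    std::vector<Node*> neighbors;
};

// Counts the nodes reachable from `start`, tracking visited state
// in an unordered_map used as an external property map.
int count_reachable(Node* start) {
    std::unordered_map<Node*, bool> discovered;  // external state
    std::vector<Node*> stack{start};
    int count = 0;
    while (!stack.empty()) {
        Node* n = stack.back();
        stack.pop_back();
        if (discovered[n]) continue;  // hash lookup per visit
        discovered[n] = true;         // and another per update
        ++count;
        for (Node* m : n->neighbors) stack.push_back(m);
    }
    return count;
}
```

The point is that the map lookups on every visit are pure overhead relative to what a direct array or member access would cost.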
If, instead, it is a synthesized construct for communication between a graph and the algorithm operating over it, why should the client be responsible for its maintenance instead of the framework? What are the advantages of this approach?
The advantage is that the client can choose how nodes are mapped to indices. In particular, the client can provide a significantly faster mapping method. For example, the color of a node could be stored in the node itself, which would make accessing the color property a very fast O(1) operation, easily an order of magnitude faster than a hash table lookup.
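As a contrast to the hash-table version, here is a sketch of the faster layout just described (again invented names, not BGL's actual property-map API): the color lives directly in the node, so reading and writing it is a plain member access with no hashing at all:

```cpp
#include <vector>

// Standard DFS coloring: White = undiscovered, Gray = on the
// stack, Black = finished.
enum class Color { White, Gray, Black };

// Hypothetical node that stores its own algorithm state.
struct Node {
    Color color = Color::White;   // interior property
    std::vector<Node*> neighbors;
};

// Depth-first visit that marks nodes via the interior color
// member; returns how many nodes it newly colored.
int dfs_mark(Node* start) {
    if (start->color != Color::White) return 0;
    start->color = Color::Gray;   // direct member write, O(1)
    int count = 1;
    for (Node* m : start->neighbors) count += dfs_mark(m);
    start->color = Color::Black;
    return count;
}
```

The trade-off is coupling: the node type now has to know about the algorithm's state, which is exactly the kind of requirement BGL pushes onto the client in exchange for speed.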
What are the disadvantages of simplifying things for the client?
The minimal information needed by graph algorithms, assuming the best asymptotic complexity guarantees are desired, is that the client provide hashing and equality comparison functions for nodes and edges. The disadvantage of requiring only hashing and equality comparison from the client is that there are sometimes significantly more efficient ways to associate information with nodes, and the opportunity to take advantage of those more efficient ways is lost. It should also be possible for a graph algorithm to internally convert a graph to an isomorphic copy that allows essentially optimal accesses (it should be doable in O(V+E) time and space in most cases). However, the time and space required by the conversion would probably make it useless except in graph algorithms whose complexity is significantly higher than O(V+E).
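The internal conversion mentioned above could look something like the following sketch (Graph, IndexedGraph and to_indexed are hypothetical names; it assumes every node that appears as a neighbor also appears as a key). One O(V+E) pass pays the hash lookups up front: it numbers the nodes and rewrites the edge lists in terms of indices, after which the algorithm can keep all per-node state in plain vectors:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Input graph: adjacency lists keyed by arbitrary node names.
using Graph = std::unordered_map<std::string, std::vector<std::string>>;

// Isomorphic copy with nodes renumbered 0..V-1.
struct IndexedGraph {
    std::vector<std::string> names;           // index -> original node
    std::vector<std::vector<int>> adjacency;  // edges as indices
};

IndexedGraph to_indexed(const Graph& g) {
    IndexedGraph out;
    std::unordered_map<std::string, int> index;
    for (const auto& [name, nbrs] : g) {      // number the nodes: O(V)
        index[name] = static_cast<int>(out.names.size());
        out.names.push_back(name);
    }
    out.adjacency.resize(out.names.size());
    for (const auto& [name, nbrs] : g)        // rewrite the edges: O(E)
        for (const auto& n : nbrs)
            out.adjacency[index[name]].push_back(index[n]);
    return out;
}
```

After the conversion, per-node information can be held in a std::vector indexed by node number, so the hashing cost is paid once rather than on every access; as noted, that only pays off when the algorithm does asymptotically more work than the conversion itself.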
This is the type of design issue that I intended to question with my initial post.
Does this answer your question?

Regards, Vesa Karvonen