[Graph] Using a bit-based adjacency matrix.
Dear All,
Does anybody have experience with the above? That is, representing the graph as a bit-based adjacency matrix and then using it together with Boost.Graph? I'm thinking that Boost.Graph will not exploit this representation optimally. Am I correct?
Thanks in advance
-Thorsten
On Tue, Feb 24, 2009 at 12:13 PM, Thorsten Ottosen <thorsten.ottosen@dezide.com> wrote:
Dear All,
Does anybody have experience with the above? That is, representing the graph as a bit-based adjacency matrix and then using it together with Boost.Graph?
Not specifically. Building a bitmap-based adjacency matrix shouldn't actually be too difficult, if you're considering it as a space optimization of the adjacency matrix. Just overload the same set of functions for the adjacency_matrix.
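As a rough illustration of that suggestion (not code from Boost.Graph), a bit-packed adjacency structure with a few BGL-style free functions overloaded for it might look like the sketch below. The type bit_adjacency_matrix and everything in it is made up for the example; a real model of the BGL graph concepts would also need a graph_traits specialization, vertex and edge iterators, and the rest of the concept requirements.

#include <boost/dynamic_bitset.hpp>
#include <cstddef>
#include <utility>
#include <vector>

struct bit_adjacency_matrix
{
    explicit bit_adjacency_matrix(std::size_t n)
      : rows(n, boost::dynamic_bitset<>(n)) {}

    // rows[u][v] is 1 iff the edge (u, v) is present.
    std::vector<boost::dynamic_bitset<> > rows;
};

typedef std::size_t vertex_t;
typedef std::pair<vertex_t, vertex_t> edge_t;

// BGL-style free functions overloaded for the new graph type.
inline std::pair<edge_t, bool>
add_edge(vertex_t u, vertex_t v, bit_adjacency_matrix& g)
{
    bool was_new = !g.rows[u].test(v);
    g.rows[u].set(v);
    return std::make_pair(edge_t(u, v), was_new);
}

inline std::pair<edge_t, bool>
edge(vertex_t u, vertex_t v, const bit_adjacency_matrix& g)
{
    return std::make_pair(edge_t(u, v), g.rows[u].test(v));
}

inline std::size_t num_vertices(const bit_adjacency_matrix& g)
{
    return g.rows.size();
}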
I'm thinking that Boost.Graph will not exploit this representation optimally. Am I correct?
Maybe, maybe not - it depends on your meaning. The efficiency of BGL algorithms depends on the efficiency of the overloads you provide. Andrew Sutton andrew.n.sutton@gmail.com
Andrew Sutton wrote:
On Tue, Feb 24, 2009 at 12:13 PM, Thorsten Ottosen <thorsten.ottosen@dezide.com> wrote:
Dear All,
Does anybody have experience with the above? That is, representing the graph as a bit-based adjacency matrix and then using it together with Boost.Graph?
Not specifically. Building a bitmap-based adjacency matrix shouldn't actually be too difficult, if you're considering it as a space optimization of the adjacency matrix. Just overload the same set of functions for the adjacency_matrix.
Yes, it is a space optimization, but it is also an optimization in other ways, because we can sometimes make linear operations happen in O(1) time.
I'm thinking that Boost.Graph will not exploit this representation optimally. Am I correct?
Maybe, maybe not - it depends on your meaning. The efficiency of BGL algorithms depends on the efficiency of the overloads you provide.
I'm not sure, but I think it would require a new graph concept to take advantage of it. -Thorsten
Not specifically. Building a bitmap-based adjacency matrix shouldn't actually be too difficult, if you're considering it as a space optimization of the adjacency matrix. Just overload the same set of functions for the adjacency_matrix.
Yes, it is a space optimization, but it is also an optimization in other ways, because we can sometimes make linear operations happen in O(1) time.
Really? Which ones?
I'm not sure, but I think it would require a new graph concept to take advantage of it.
That seems unlikely. As far as I know, there aren't many algorithms that dispatch on different graph kinds - except for directionality, but even that's quite rare. As the library stands, the graph concepts are primarily descriptive. If there's a substantial difference between the adjacency bitmap and the adjacency matrix, it would be worthwhile to define a refinement. Andrew Sutton andrew.n.sutton@gmail.com
Andrew Sutton wrote:
Not specifically. Building a bitmap-based adjacency matrix shouldn't actually be too difficult, if you're considering it as a space optimization of the adjacency matrix. Just overload the same set of functions for the adjacency_matrix.
Yes, it is a space optimization, but it is also an optimization in other ways, because we can sometimes make linear operations happen in O(1) time.
Really? Which ones?
A simple example would be to compute the common neighbours of A and B. This is simply adjbitset(A) & adjbitset(B) in O(1) time. No iteration is needed. I think there will be many other examples where the big-O characteristics are different when you look into it.
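For concreteness, a minimal sketch of that operation, assuming each vertex's adjacency row is stored as a boost::dynamic_bitset<>; the function below is illustrative only, and adj_a/adj_b stand for what the post calls adjbitset(A) and adjbitset(B):

#include <boost/dynamic_bitset.hpp>

// Common neighbours of A and B: one word-parallel AND over the two
// adjacency rows (both rows must have the same size).  Bit v is set
// in the result iff v is adjacent to both A and B.
boost::dynamic_bitset<>
common_neighbours(const boost::dynamic_bitset<>& adj_a,
                  const boost::dynamic_bitset<>& adj_b)
{
    return adj_a & adj_b;
}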
I'm not sure, but I think it would require a new graph concept to take advantage of it.
That seems unlikely. As far as I know, there aren't many algorithms that dispatch on different graph kinds - except for directionality, but even that's quite rare. As the library stands, the graph concepts are primarily descriptive.
If there's a substantial difference between the adjacency bitmap and the adjacency matrix, it would be worthwhile to define a refinement.
Ok. -Thorsten
AMDG Thorsten Ottosen wrote:
Andrew Sutton wrote:
Not specifically. Building a bitmap-based adjacency matrix shouldn't actually be too difficult, if you're considering it as a space optimization of the adjacency matrix. Just overload the same set of functions for the adjacency_matrix.
Yes, it is a space optimization, but it is also an optimization in other ways, because we can sometimes make linear operations happen in O(1) time.
Really? Which ones?
A simple example would be to compute the common neighbours of A and B. This is simply
adjbitset(A) & adjbitset(B)
in O(1) time. No iteration is needed. I think there will be many other examples where the big-O characteristics are different when you look into it.
Since when can you compute the bitwise and of an arbitrary size bitset in constant time? In Christ, Steven Watanabe
Steven Watanabe wrote:
AMDG
Thorsten Ottosen wrote:
Andrew Sutton wrote:
Not specifically. Building a bitmap-based adjacency matrix shouldn't actually be too difficult, if you're considering it as a space optimization of the adjacency matrix. Just overload the same set of functions for the adjacency_matrix.
Yes, it is a space optimization, but it is also an optimization in other ways, because we can sometimes make linear operations happen in O(1) time.
Really? Which ones?
A simple example would be to compute the common neighbours of A and B. This is simply
adjbitset(A) & adjbitset(B)
in O(1) time. No iteration is needed. I think there will be many other examples where the big-O characteristics are different when you look into it.
Since when can you compute the bitwise and of an arbitrary size bitset in constant time?
Well, you can if the bitset size is less than or equal to the word size. For larger sizes, the O(n/W) complexity is also significantly better. -Thorsten
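To make the n/W point concrete, here is roughly what the AND costs at the word level; the code is illustrative only and assumes both rows are packed into 64-bit words of equal length:

#include <cstddef>
#include <vector>

typedef unsigned long long word_t;   // one machine word, W = 64 bits here

// ANDing two packed rows of n bits touches roughly n / W words,
// not n individual adjacency entries.
std::vector<word_t> and_rows(const std::vector<word_t>& a,
                             const std::vector<word_t>& b)
{
    std::vector<word_t> out(a.size());
    for (std::size_t i = 0; i != a.size(); ++i)
        out[i] = a[i] & b[i];
    return out;
}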
AMDG Thorsten Ottosen wrote:
Steven Watanabe wrote:
Since when can you compute the bitwise and of an arbitrary size bitset in constant time?
Well, you can if the bitset size is less than or equal to the word size.
That has nothing to do with big-O. Big-O describes an algorithm's asymptotic behavior as the input becomes large.
For larger sizes, the O(n/W) complexity is also significantly better.
Last time I checked, O(constant * n) = O(n). Yes, it's a lot faster, but it's faster by a constant factor. In Christ, Steven Watanabe
Steven Watanabe wrote:
AMDG
Thorsten Ottosen wrote:
Steven Watanabe wrote:
Since when can you compute the bitwise and of an arbitrary size bitset in constant time?
Well, you can if the bitset size is less than or equal to the word size.
That has nothing to do with big-O. Big-O describes an algorithm's asymptotic behavior as the input becomes large.
Right. Anyway, it doesn't change the fact that it is a lot faster, often exchanging linear complexity for constant complexity (without talking about big-O complexity). -Thorsten
participants (3)
- Andrew Sutton
- Steven Watanabe
- Thorsten Ottosen