
On Sat, Apr 24, 2004 at 07:13:25PM +0000, Joaquin M Lopez Munoz wrote:
0,0,1 0,0,2 0,1,0 0,5,4 1,0,0 1,0,2 1,2,3 ...
Given the lexicographical order, it is possible to efficiently search for incomplete keys where only the first values are given:
mc.equal_range(make_tuple(1,0)) yields 1,0,0 1,0,2
mc.equal_range(make_tuple(0)) yields 0,0,1 0,0,2 0,1,0 0,5,4
and so on. Accordingly, comparison operators between composite_key results and tuples are overloaded so that the following holds:
composite_key</* as before */> ck;

ck(record(1,2,3))<=make_tuple(1,2); // yields true
make_tuple(1,2)<=ck(record(1,2,3)); // yields true
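For concreteness, a partial-key equal_range like the ones above looks roughly as follows with Boost.MultiIndex. This is only a sketch: the record struct with int members a, b, c and the index definition are assumptions reconstructed around the quoted snippets.

  #include <iostream>
  #include <utility>
  #include <boost/multi_index_container.hpp>
  #include <boost/multi_index/ordered_index.hpp>
  #include <boost/multi_index/composite_key.hpp>
  #include <boost/multi_index/member.hpp>
  #include <boost/tuple/tuple.hpp>

  using namespace boost::multi_index;

  struct record
  {
    record(int a_, int b_, int c_): a(a_), b(b_), c(c_) {}
    int a, b, c;
  };

  // ordered index whose key is the (a,b,c) triple, compared lexicographically
  typedef boost::multi_index_container<
    record,
    indexed_by<
      ordered_non_unique<
        composite_key<
          record,
          member<record, int, &record::a>,
          member<record, int, &record::b>,
          member<record, int, &record::c>
        >
      >
    >
  > record_set;

  int main()
  {
    record_set mc;
    mc.insert(record(0,0,1)); mc.insert(record(0,0,2));
    mc.insert(record(0,1,0)); mc.insert(record(0,5,4));
    mc.insert(record(1,0,0)); mc.insert(record(1,0,2));
    mc.insert(record(1,2,3));

    // partial key: only the first two components are specified
    std::pair<record_set::iterator, record_set::iterator> p =
      mc.equal_range(boost::make_tuple(1,0));
    for(; p.first != p.second; ++p.first)
      std::cout << p.first->a << "," << p.first->b << ","
                << p.first->c << "\n";   // prints 1,0,0 and 1,0,2
  }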
So far, this is all IMHO perfectly consistent, but the following design decision is not obvious to me: given the above, what should be the result of
ck(record(1,2,3))==make_tuple(1,2); // true or false?
As neither of these two objects compares greater than the other, they are at least *equivalent*, but maybe allowing operator== to return true is too much.
It sounds to me like you have a partial ordering or a strict weak ordering, but not a total ordering.

  http://www.sgi.com/tech/stl/StrictWeakOrdering.html

That is, these two objects:

  ck(record(1,2,3))
  make_tuple(1,2)

are in the same equivalence class, but they are not equal.

My hunch is that if you call these two objects "k" and "t" (key and tuple), then you should have

  k < t    false
  t < k    false
  t == k   does not compile

That's just my quick hunch based on what I've quoted above; I haven't considered what consequences it has for the library, but that's my intuitive notion of how tuples as "partial keys" ought to work.

-- 
-Brian McNamara (lorgon@cc.gatech.edu)
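To make those suggested semantics concrete, here is a small standalone sketch (in C++17 for brevity; the helper names prefix_less and prefix_equal are hypothetical, not library API): ordering compares only the common prefix, so a full key and a shorter tuple can be equivalent without being equal, while equality is refused at compile time when the lengths differ.

  #include <algorithm>
  #include <cstddef>
  #include <iostream>
  #include <tuple>
  #include <type_traits>

  // Compare two tuples lexicographically over their common prefix only.
  // If the common prefix ties, neither is "less": the two values are
  // equivalent under this strict weak ordering even if their lengths differ.
  template<std::size_t I = 0, typename A, typename B>
  bool prefix_less(const A& a, const B& b)
  {
    constexpr std::size_t N =
      std::min(std::tuple_size_v<A>, std::tuple_size_v<B>);
    if constexpr (I == N) {
      return false;                       // common prefix exhausted: not less
    } else {
      if (std::get<I>(a) < std::get<I>(b)) return true;
      if (std::get<I>(b) < std::get<I>(a)) return false;
      return prefix_less<I + 1>(a, b);    // tie on this component, keep going
    }
  }

  // Equality is offered only when both sides have the same number of
  // components; for a genuine partial key the expression does not compile.
  template<typename A, typename B,
           typename = std::enable_if_t<
             std::tuple_size_v<A> == std::tuple_size_v<B>>>
  bool prefix_equal(const A& a, const B& b)
  {
    return a == b;
  }

  int main()
  {
    auto k = std::make_tuple(1, 2, 3);    // stands in for ck(record(1,2,3))
    auto t = std::make_tuple(1, 2);       // the partial key

    std::cout << prefix_less(k, t) << "\n";   // 0: k < t is false
    std::cout << prefix_less(t, k) << "\n";   // 0: t < k is false
    // k and t are therefore equivalent, yet the next line would be rejected
    // at compile time because the lengths differ:
    // prefix_equal(k, t);

    std::cout << prefix_equal(k, std::make_tuple(1, 2, 3)) << "\n";  // 1
  }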