
Thomas,

Everything you say is obviously true. The counter_tree cannot be as fast as a vector in algorithms such as sort or binary search. I am only saying that random-access algorithms can run on the vector_tree with good results. If you have a hundred million elements and want to sort them in order to search them, the vector_tree is obviously not the best option; even std::set is slow if you compare it with a vector, the sort function, and a binary search. But the number of elements used in the majority of programs is usually not that big, and the times with thousands or even a few million elements are decent.

The main utility of the vector_tree is when you must insert or delete many elements at central positions. It is just one more option, with different features; if you consider it useful, use it. That is all.

Sincerely yours,
Francisco Tapia
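To put rough numbers on the tradeoff Francisco describes, here is a small back-of-the-envelope sketch (an editorial illustration, not a measurement of the countertree code): one insertion at the middle of a std::vector shifts about N/2 elements, while a balanced tree-based sequence touches about log2(N) nodes.

// Per-insert cost of inserting at the middle of a contiguous vector
// (shifts ~N/2 elements) versus a balanced tree-based sequence
// (touches ~log2(N) nodes). Illustration only.
#include <cmath>
#include <cstdio>

int main() {
    for (double n : {1e3, 1e6, 1e8}) {
        std::printf("N = %.0f: vector shifts ~%.0f elements, tree visits ~%.0f nodes\n",
                    n, n / 2.0, std::log2(n));
    }
}

At N = 1e6 that is roughly 500,000 shifted elements per middle insert for the vector against about 20 node visits for the tree, which is the regime Francisco has in mind.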
On 12 April 2012 20:30, Thomas Klimpel <Thomas.Klimpel@synopsys.com> wrote:

Dave Abrahams wrote:
On Thu Apr 12 2012, Francisco José Tapia <fjtapia-AT-gmail.com> wrote:
The iterators are random access, and you can subtract two iterators in O(log N) time to know the number of nodes between them (even with the end() and rend() iterators).
If by "are random access" you mean they satisfy the random access iterator requirements, then this is a contradiction. Iterator subtraction is required to be O(1) for random access iterators.
If the container can store at most 2^64 elements, then log N never exceeds 64, so it can be hard to distinguish O(1) from O(log N). However, if each of those O(log N) steps implies cache misses and swapping, then O(log N) may really be different from O(1) in practice.
Regards, Thomas
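For readers wondering how a tree-based sequence can subtract iterators in O(log N) at all, here is a generic order-statistic sketch (not taken from the countertree sources): every node stores the size of its subtree, and an iterator's position is recovered by walking from its node up to the root.

// Generic order-statistic sketch: each node stores its subtree size, so the
// left-to-right position (rank) of any node is computable in O(height),
// i.e. O(log N) for a balanced tree. Iterator subtraction b - a then
// reduces to rank_of(b) - rank_of(a).
#include <cstddef>
#include <iostream>

struct node {
    node* parent = nullptr;
    node* left   = nullptr;
    node* right  = nullptr;
    std::size_t count = 1;   // number of nodes in the subtree rooted here
};

std::size_t rank_of(const node* n) {
    std::size_t r = n->left ? n->left->count : 0;  // nodes before n in its own subtree
    while (n->parent) {
        if (n == n->parent->right)                 // everything left of the parent,
            r += (n->parent->left ? n->parent->left->count : 0) + 1;  // plus the parent itself
        n = n->parent;
    }
    return r;
}

long distance(const node* a, const node* b) {
    return static_cast<long>(rank_of(b)) - static_cast<long>(rank_of(a));
}

int main() {
    node a, b, c;                  // hand-built tree: b is the root,
    b.left = &a;  b.right = &c;    // a its left child, c its right child
    b.count = 3;
    a.parent = &b;  c.parent = &b;
    std::cout << distance(&a, &c) << '\n';  // prints 2
}

Each call to rank_of walks one root path, so the subtraction costs two O(log N) traversals rather than the O(1) pointer arithmetic a contiguous vector offers, which is exactly the gap Dave and Thomas are discussing.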