
On Wed, Feb 20, 2013 at 3:12 AM, Jeffrey Lee Hellrung, Jr. <jeffrey.hellrung@gmail.com> wrote:
On Tue, Feb 19, 2013 at 2:04 AM, Vadim Stadnik <vadimstdk@gmail.com> wrote:
On Tue, Feb 19, 2013 at 4:17 PM, Nathan Ridge <zeratul976@hotmail.com> wrote:
...
This was discussed upthread. To summarize, the standard library makes the design choice of not providing certain operations if they cannot be implemented with a certain asymptotic complexity (for example, subtraction of two iterators if it cannot be implemented in constant time), and Boost is trying to be consistent with this design choice.
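A minimal sketch of that design choice (illustrative only, assuming nothing beyond the standard containers): iterator subtraction is provided only where it can be constant time, while std::distance remains available everywhere at whatever cost the iterator category implies.

    #include <iterator>
    #include <list>
    #include <vector>

    int main()
    {
        std::vector<int> v{1, 2, 3, 4};
        std::list<int>   l{1, 2, 3, 4};

        // Random access iterators: operator- exists because it is O(1).
        auto dv = v.end() - v.begin();

        // Bidirectional iterators: operator- is not provided, because the
        // only possible implementation would be linear.
        // auto bad = l.end() - l.begin();   // does not compile

        // std::distance is still available, but costs O(N) for std::list.
        auto dl = std::distance(l.begin(), l.end());

        return static_cast<int>(dv + dl);
    }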
I agree that consistency of the library design is important, but the case is not completely trivial. IMO, this design principle is only a theoretical one. In practice, it is often unhelpful and counterproductive, and because of it STL and C++ miss out on a big selling point.
Okay, let's see why...
An end user (who pays for and uses a software product) is interested in the best running time of an algorithm, which is determined not only by the costs of specific operations but also by their frequencies. A relatively small number of inefficient operations does not affect the total computational cost of an algorithm.
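As a rough back-of-the-envelope illustration of this point (a toy cost model of my own, not a measurement): if an algorithm performs N constant-cost operations and only about log N operations that each cost O(log N), the expensive operations contribute a vanishing fraction of the total work.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        for (double n : {1e3, 1e6, 1e9}) {
            double cheap     = n;                               // N calls at O(1) each
            double expensive = std::log2(n) * std::log2(n);     // ~log N calls at O(log N) each
            std::printf("N = %.0e: cheap work %.3g, expensive work %.3g (%.4f%% of total)\n",
                        n, cheap, expensive, 100.0 * expensive / (cheap + expensive));
        }
    }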
Sometimes, yes, I agree.
Also, this design principle effectively rules out advanced and/or augmented data structures that are beneficial for many complex algorithms. For example, single-element insert() and erase() functions with O(log N) cost are not allowed by the C++ standard, whereas the significantly less efficient O(N) operations of std::vector are allowed.
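To make the asymmetry concrete (a sketch of my own, not code from the thread): the standard blesses std::vector, whose single-element insert is O(N), so a loop of N front insertions costs O(N^2) in total, while a balanced tree augmented with subtree sizes could do each insert in O(log N) yet cannot be a conforming random-access sequence.

    #include <cstddef>
    #include <vector>

    std::vector<int> build_front_inserted(std::size_t n)
    {
        std::vector<int> v;
        for (std::size_t i = 0; i < n; ++i)
            v.insert(v.begin(), static_cast<int>(i)); // shifts every element: O(N) per call
        return v;                                     // total work: O(N^2)
    }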
I'm not really sure what to make of this paragraph. The standard makes some guarantees for std:: data structures, but you're free to provide your own augmented data structures with whatever complexity guarantees make sense, and I guarantee you wouldn't be violating any laws.
This is a somewhat non-standard interpretation of the C++ standard :). Augmented data structures have recently been discussed a number of times by the Boost community, and some of them are waiting for a review. There is a problem with recognizing the value of augmented data structures for STL: formally, a subscript operator, an at() function and iterators that support std::distance() with O(log N) computational complexity are currently non-standard features, and thus many C++ professionals are at least cautious about new data structures.
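For readers unfamiliar with the technique, here is a minimal sketch (my own simplification, not a proposed library) of the augmentation being discussed: each node stores its subtree size, which makes index-based access an O(log N) walk from the root of a balanced tree.

    #include <cstddef>

    struct Node
    {
        int         value;
        std::size_t subtree_size;   // 1 + sizes of both child subtrees
        Node*       left;
        Node*       right;
    };

    inline std::size_t size_of(const Node* n) { return n ? n->subtree_size : 0; }

    // Returns the i-th node (0-based) in symmetric order in O(height) steps,
    // i.e. O(log N) if the tree is kept balanced.
    const Node* select(const Node* root, std::size_t i)
    {
        while (root) {
            std::size_t left_size = size_of(root->left);
            if (i < left_size) {
                root = root->left;
            } else if (i == left_size) {
                return root;
            } else {
                i -= left_size + 1;
                root = root->right;
            }
        }
        return nullptr;             // index out of range
    }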
I think that the current standard specifications for the computational complexities of operations are to some degree obsolete. They create unnecessary restrictions that will be either amended or removed completely in future versions of STL.
Strongly disagree, and you haven't provided any examples of these "unnecessary restrictions" (or you aren't being precise in what you mean). Much code and many algorithms rely on the constant-time random access of std::vector for their own complexity bounds; are you suggesting the standard should drop this complexity guarantee?
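A small example of the kind of dependence meant here (illustrative, using only standard components): std::lower_bound promises O(log N) comparisons, but its overall cost also depends on how fast the iterators move, which is exactly what the iterator-category guarantees encode.

    #include <algorithm>
    #include <set>
    #include <vector>

    void example()
    {
        std::vector<int> v{1, 3, 5, 7, 9};
        std::set<int>    s{1, 3, 5, 7, 9};

        // Random access iterators: halving the range is O(1), so the whole
        // call is O(log N).
        auto it1 = std::lower_bound(v.begin(), v.end(), 7);

        // Bidirectional iterators: still O(log N) comparisons, but the
        // iterator stepping inside the algorithm is linear, so the call is
        // O(N) overall (std::set::lower_bound should be used instead).
        auto it2 = std::lower_bound(s.begin(), s.end(), 7);

        (void)it1; (void)it2;
    }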
I can re-formulate the issue with the STL design. After many years of fixing numerous performance bottlenecks caused first of all by the linear-cost update operations of std::vector, since it is the default STL container, I have come to the following conclusion. A right representation of mathematical concepts through right interfaces is more useful than the specification of computational complexities of single operations, provided a library offers a wide choice of interchangeable containers. Design decisions based on the computational complexity of a single operation are simply not practical: at the design time of a library it is not possible to know the frequencies of all specific operations in all user algorithms. The total computational cost of a user algorithm is the parameter that matters more in real life.

The solutions to the current inefficiencies of STL are available through advanced data structures, but they are non-standard due to the specification of computational complexities for single operations. Something has to be changed in STL to take full advantage of these solutions.

Regards,
Vadim Stadnik