
I'll comment on a few parts of the thread here: David Abrahams wrote:
Of course backward compatibility is desirable, but maintaining it rigidly prevents forward progress. Judicious breakage does not necessarily mean that the library is "experimental" in any meaningful way. Nobody really has a problem with the stability of shared_ptr, for example, and yet look at its list of breaking changes over the years:
http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/compatibility.htm

Breaking changes are certainly sometimes required, and this type of page in the boost docs is an ideal example of what *should* be done when breaking changes are needed. Most of these changes are additions rather than changes, they are well documented, and they actually make sense.

The problems with the changes in boost.range (based on my review of the changes last night - so I might be wrong) are that they are breaking changes, they are not documented at all, and they don't really make sense. It is not possible to detect a singular range in the current version of the library (at least in release code), despite an issingular function existing. This basically means that a default constructed range cannot be detected to have been default constructed. This in turn makes it hard to use a range and a container in the same template code - which is the source of the problem here (see the sketch below). Being able to use a range just like a container and vice versa is one of the strongest use cases for range IMHO.
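To make that last point concrete, here is a minimal sketch of the kind of template code I mean (my own illustration, not code from the library or this thread; has_elements is a hypothetical helper):

    #include <vector>
    #include <boost/range/iterator_range.hpp>

    // A generic helper intended to accept containers and ranges alike.
    template <class Sequence>
    bool has_elements(const Sequence& s)
    {
        // Fine for a std::vector, which is always in a usable state.
        // For a default-constructed (singular) iterator_range there is,
        // as far as I can tell, no way in release code to test for the
        // singular state first, so this call cannot be made safe.
        return !s.empty();
    }

    int main()
    {
        std::vector<int> v;                                   // empty but valid
        boost::iterator_range<std::vector<int>::iterator> r;  // singular

        has_elements(v);  // well defined
        has_elements(r);  // no way to detect that r was default constructed
    }

With a container the check is always well defined; with a default constructed range the template has no reliable way to discover that the range was never assigned.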
David Abrahams wrote:
So what happens to the iterators library when it's time for the rewrite I've been planning? I intend, as much as possible, to maintain compatibility with the old interfaces, of course. Is that still a "stable" library?
Simple. Because the iterators library is presumably in the stable "core" part of the boost library, the new version gets produced, internally tested, and prepared for release, including detailed documentation of any breaking changes. The tests include testing it with any dependencies in core and general boost (to continue with my old, perhaps poor, naming standard). Once testing is complete, it goes out in the next boost core beta for an extended beta test; at the end of which, if accepted, it goes into the next release of boost.

I don't think anyone would want to completely disallow breaking changes; we are all used to them, so long as they are documented and released well. The main idea of having 2 separate release cycles is simply that new code can come into boost very quickly (which is desirable), and in a way that companies can pick it up much faster. But at the same time, old stable code has a much more cautious review timescale, in keeping with the fact that much of this code is used by huge numbers of C++ developers around the world.

When I've been a library writer in the past, I've taken complaints about breaking changes to my code as a sort of compliment. It means people use my code, and like it enough to complain when it changes in an unexpected way.

David Abrahams wrote:
Which versions of the "stable" library would the "new" libraries be required to work with?
This is a pretty simple one to answer: the latest version of the core library. The way the release cycle would work is simply that the fast-changing library (which new code would go into) would update as regularly as boost does at present. When it was decided to up-rev the core library, both core and general would up-rev together to maintain compatibility; then the general library (and any new libraries) would be required to work with the new core. I've used this type of development methodology myself in the past and it has been very successful IMHO.

Dave