
On Mon, Nov 24, 2008 at 9:28 AM, David Abrahams <dave@boostpro.com> wrote:
> on Sun Nov 23 2008, "vicente.botet" <vicente.botet-AT-wanadoo.fr> wrote:
>> Daniel Walker has expressed this better: "Once accepted, the tests should be a verification that the library does what the community voted on. The tests are a verification of quality. At that point, I think it might be a good idea to quarantine the tests, take them out of the author's hands, and put them under the stewardship of a benevolent dictator of Boost as a whole, so that they can be used to assure that the library does what the community voted on."
>> If we need to change a test while we evolve a library, this is a symptom that the interface has changed, and just as the test is broken, user code can be broken. If we forbid these test changes, we are able to identify breaking changes.
> I'm sorry, but I just don't think anything like this is going to work. Among other things, I think it will be a huge pain for existing library authors (suppose I want to _add_ something to a test?) and will deter people from contributing to Boost, and I don't think you're going to get a positive consensus on it among existing contributors. This seems like an overreaction to one person's failure at disciplined management of library evolution.
I see your point, and I agree that policies should not make it difficult for authors to test their code. However, submitting a patch is not so difficult. As for contributing to Boost, people are also deterred when they take the time to understand a library and submit improvements to it, only to have the rug pulled out from under them. This is actually my personal experience with Boost.Range. The Range concepts used to include an empty(r) function that, I believe, addressed the issue of empty ranges independently of iterator_range. But the whole thing falls on its face when the function is removed from the concept definition for no apparent reason. Why should I use the new Range concepts, let alone contribute to the library, if these are not even the concepts that were released after review and acceptance into Boost? I mean, I'm not throwing in the towel; I'm just expressing my frustration, not only as a user but as a contributor.
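
To make this concrete, here is a minimal sketch of the sort of generic code I mean, assuming a free function boost::empty(r) along the lines of what the reviewed concept provided (the exact header and spelling are from memory, not a reference):

// Generic code written against a Range concept that includes empty(r).
// NOTE: the header and the boost::empty spelling are my recollection of
// the reviewed Boost.Range interface; treat them as assumptions.
#include <boost/range/empty.hpp>
#include <vector>

template <class Range>
bool has_work(const Range& r)
{
    // If empty() is later dropped from the concept, this function, and any
    // user code written in the same style, stops compiling. That is exactly
    // the breaking change a frozen, reviewed test suite would catch.
    return !boost::empty(r);
}

int main()
{
    std::vector<int> v(3, 0);
    return has_work(v) ? 0 : 1;
}
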
> Rather than set up systems that will decrease agility, increase coupling, and give contributors the sense that the Boost community doesn't trust them to do what's right, suppose we set up a mailing list to which all the test checkins are posted? Then anyone who wants to monitor the evolution of a library's tests can subscribe to that list.
I think this is a good idea, and in principle I could support it. However, I must point out that it brings up another issue: individuals in the Boost community have some responsibility for monitoring Boost development. If I had been paying closer attention, I could have protested the changes in the Range concepts a long time ago. Unfortunately, it's not easy to participate in all the mail traffic on this list, especially when you have other demands on your attention in life. This is why I'd like to automate quality control of testing, to the extent that after the community reviews and votes on the unit tests as part of library acceptance, they can expect their votes won't one day be so easily nullified. I don't know that my suggestion would have even been enough to catch the problem in Boost.Range, but it might help in the future.

Daniel Walker