
on Fri Aug 03 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:

> David Abrahams wrote:
>
>> on Thu Aug 02 2007, Vladimir Prus <ghost-AT-cs.msu.su> wrote:
>>
>>> there's one bit where our process is not broken; it's nonexistent.
>>> An important aspect of Boost is that we have lots of automated
>>> tests, on lots of different configurations, and there's the goal of
>>> no regressions. This is a very strict goal.
>>>
>>> At the same time we don't have any equally strict, or even
>>> written-down, bug-triage-and-developer-pinging process.
>>
>> I agree that we could improve in that area, but it doesn't have much
>> to do with our long release cycle.
>
> I think it's one of the primary problems causing the long release
> cycle.

Interesting.

>> The bugs that held up our last release showed up in our regression
>> tests, not in our bug tracker.
>
> This difference is not important --

Okay... well, developer pinging can and should be automated. We already have a "strict" mechanism for it in place. Maybe it could be better; I don't know. What kind of bug triage process do you think we should have?

> regressions are trivially convertible into bugs in a bug tracker; and
> clearly regressions must be tracked somehow.

I guess the problem is that such conversions are hard to effectively automate. One mistake in a library could turn into 50 test failures; if they all look the same, you'd probably only want one ticket.
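
To illustrate the kind of grouping I mean, here's a minimal sketch in
Python. The failure-record format and the "signature" heuristic are
invented for illustration; they're not what our regression reports
actually contain:

    #!/usr/bin/env python
    # Hypothetical sketch: collapse regression failures into one
    # candidate ticket per underlying cause, keyed on a failure
    # "signature".  One broken header tends to produce the same
    # first error line across many toolsets.

    from collections import defaultdict

    def signature(failure):
        # Key on (library, first line of output) -- an invented
        # heuristic, purely for illustration.
        first_line = failure["output"].strip().splitlines()[0]
        return (failure["library"], first_line)

    def group_failures(failures):
        groups = defaultdict(list)
        for f in failures:
            groups[signature(f)].append(f)
        return groups

    if __name__ == "__main__":
        failures = [  # made-up records, not real report data
            {"library": "foo", "toolset": "gcc-4.1", "test": "t1",
             "output": "error: no member named 'bar'"},
            {"library": "foo", "toolset": "msvc-8.0", "test": "t2",
             "output": "error: no member named 'bar'"},
            {"library": "baz", "toolset": "gcc-4.1", "test": "t3",
             "output": "error: expected ';'"},
        ]
        for sig, fs in group_failures(failures).items():
            print("one ticket for %r covering %d failures"
                  % (sig, len(fs)))

Fifty same-looking failures would collapse into one candidate ticket
here; the hard part, of course, is choosing a signature that neither
merges distinct bugs nor splits one bug into many tickets.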

> And the long release cycle is a direct result of:
>
> 1. Wanting zero regressions.
> 2. Library authors sometimes not being available, and there being no
>    pinging process.

There certainly is a pinging process. Don't you get the "there are bugs in one or more libraries you maintain" messages?
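
For concreteness, the ping is roughly this shape -- a hedged sketch
only, with the maintainer map, the open-ticket list, and the delivery
step all stubbed out for illustration:

    #!/usr/bin/env python
    # Hypothetical sketch of the "there are bugs in one or more
    # libraries you maintain" ping.  A real run would read the
    # maintainers file and query the tracker; here both are stubs.

    from collections import defaultdict

    MAINTAINERS = {  # library -> maintainer address (made up)
        "foo": "foo.author@example.org",
        "baz": "baz.author@example.org",
    }

    OPEN_TICKETS = [  # (library, ticket id, summary) -- made up
        ("foo", 1234, "regression on gcc-4.1"),
        ("baz", 1101, "link error on msvc-8.0"),
    ]

    def pings():
        by_owner = defaultdict(list)
        for library, ticket, summary in OPEN_TICKETS:
            owner = MAINTAINERS.get(library)
            if owner:
                by_owner[owner].append((library, ticket, summary))
        for owner, items in sorted(by_owner.items()):
            lines = ["There are bugs in one or more libraries you"
                     " maintain:", ""]
            for library, ticket, summary in items:
                lines.append("  #%d [%s] %s"
                             % (ticket, library, summary))
            yield owner, "\n".join(lines)

    if __name__ == "__main__":
        # A real version would hand each message to smtplib on a
        # cron schedule; here we just print them.
        for owner, body in pings():
            print("To: %s\n%s\n" % (owner, body))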

> 3. Having no time window for fixing.

Do you mean http://article.gmane.org/gmane.comp.lib.boost.devel/158259 ? If not, what is a "time window" and how would a time window help?

> So we end up with a regression, and all we know is that there's a
> regression. We do not know whether the issue is being worked on,
> whether the library author will have time only in N days, or whether
> the library author needs help from a platform expert (who will have
> time only in N days). The library author, in turn, might have little
> motivation to fix a single regression on an obscure platform if he
> feels that there are 100 other regressions that are not being worked
> on.

It's a plausible scenario that probably happens sometimes, but do you have any evidence at all that it's at the heart of the long release cycle?

> We actually had examples of such proactive release management in the
> past, and it worked well, but it's clearly time-consuming. So one
> possible solution is to
>
> 1. Document the process.

I guess that would be helpful, since you seem to have some ideas that
differ from what we've been doing, but haven't been specific about
them.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

The Astoria Seminar ==> http://www.astoriaseminar.com