
On 1:59 PM, David Abrahams wrote:
At Sun, 31 Oct 2010 00:22:25 +0530, Arindam Mukherjee wrote:

> I am one of those hopefuls who responded on the thread that proposed the idea of volunteers. I have always wanted to understand and contribute to the Boost libraries, because I felt it would give me an insight into the design and implementation of Boost (and perhaps the C++ standard libraries themselves) to an extent that I lack today. And I am certain greater participation can only mean good things, provided we have answers to the following questions (or at least know where to start in trying to answer them):
>
> a. What are the concrete criteria for admitting a volunteer - where do you set the bar? These must be verifiable, objective criteria.

I don't think we can really come up with objective criteria. Each library maintainer has his own set of values and his own style, and, at least if the maintainers are going to be involved in the decision, contributions mustn't clash too badly with that style and set of values. Therefore, criteria for accepting contributions, if not contributors, will be subjective to some extent.
I agree. I think a volunteer's own motivation will carry him farther than anything else. It will start out largely as self-study: studying a library's documentation and regression tests to understand it. Hopefully there would be two or three such volunteers per library, and they could ask questions of each other. Learning how to (a) identify a spurious ticket and diplomatically dispose of it, or (b) adapt it into a legitimate regression test or an extension to an existing test, possibly with (c) a minimal-impact patch... that alone will sharpen the volunteers' skills a lot, and get the attention of the library's maintainer(s) in terms of mentoring.
> b. Do we have a process in place which makes the induction of volunteers easy - how easily can a new recruit get down to the business of fixing bugs? Part of it depends on the bar you set in (a), and part depends on the process you set. For example, the volunteers at the least need to know the bug-fixing process that is in place today, including tools, reviews, etc. How quickly can this knowledge be imparted?
I think self-study will rule the day here, too. Where the most instruction is needed is in building and running the regression tests in isolation. My method might be a bit unorthodox: hack run.py, then regression.py, and operate things just like a regression test, but without uploading the data. (More detail later.) Anyone can add comments to a ticket, though I think a clearer explanation of some things, like the severities 'showstopper' and 'regression', would be helpful. But navigating a ticket is one way to get to know it.
> c. As somebody already mentioned, to what extent can you provide mentoring, and who does it?
> d. Finally, would someone assign tickets to volunteers? I feel this would be a better idea than letting people pick and choose when the volunteers start off. The process could be eased off as a volunteer spends more time with the code base and therefore becomes more familiar with it.
If one volunteer has more advanced experience, he could assign tickets. If a maintainer has just stepped out in front of a bus, though, there may not be anyone to do this.
> I am sure the questions are easy to ask, and there are logistical hurdles to take into account in trying to answer any of them.
The bane of Boost's quality is thinking someone else is taking care of it.