
On 1:59 PM, Dean Michael Berris wrote:
> On Sat, Dec 18, 2010 at 1:15 AM, Jim Bell <Jim@jc-bell.com> wrote:
>> On 1:59 PM, Dean Michael Berris wrote:
>>> On Mon, Nov 29, 2010 at 9:32 PM, Jim Bell <Jim@jc-bell.com> wrote:
>>>> I'd like someone to walk through a case study right here on the list. (Or a few people!) I'll take this up at a later time.
>>
>> It's a later time. ;-)
>
> Yes, alright. So just to see if I get the context right, a case study on what to do with an MIA maintainer is needed.
Again, thanks for the thoughtful reply. But I'm thinking of something much less ambitious (and more measurable). How about this question: as of this moment, we have X patches already sitting in Trac. What's the average quality of those patches? (If a library has diverged since a patch was submitted, don't count it; that's not fair to the patch.) See where I'm going? If 97% of the existing patches would be accepted, how much apprehension should you have about the next one? (Or if only 40% would be...) Or: if a Boost-trusted developer not intimately familiar with the library can determine in two minutes (or ten? twenty?) that a patch is valid, with 97% accuracy, what would that mean? And if the patch directly addressed a regression failure, how would that change things?
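
To make the arithmetic concrete, here is a minimal sketch of the measurement being proposed. Nothing here is Boost infrastructure: the file patch_reviews.csv and its columns are made-up stand-ins for however the sampled Trac patches get recorded, and the "rule of succession" estimate is just one standard way to turn "k of n sampled patches were acceptable" into a hedged statement about the next patch.

    # Sketch: estimate patch quality from a hand-reviewed sample of Trac patches.
    # Assumes a hypothetical CSV export with one row per patch and a "verdict"
    # column holding 'accept', 'reject', or 'diverged' (library has moved past
    # the patch -- excluded from the count, as suggested above).

    import csv

    def load_verdicts(path):
        """Read the per-patch review verdicts from the (assumed) CSV export."""
        with open(path, newline="") as f:
            return [row["verdict"] for row in csv.DictReader(f)]

    def acceptance_stats(verdicts):
        # Don't count patches the library has diverged from.
        counted = [v for v in verdicts if v != "diverged"]
        if not counted:
            raise SystemExit("no countable patches in the sample")
        accepted = sum(1 for v in counted if v == "accept")
        n = len(counted)
        rate = accepted / n
        # Laplace's rule of succession: with a uniform prior, the probability
        # that the *next* patch is acceptable, given `accepted` of `n` so far.
        next_ok = (accepted + 1) / (n + 2)
        return n, rate, next_ok

    if __name__ == "__main__":
        n, rate, next_ok = acceptance_stats(load_verdicts("patch_reviews.csv"))
        print(f"{n} patches counted, {rate:.0%} judged acceptable")
        print(f"estimated chance the next patch is acceptable: {next_ok:.0%}")

For a feel of the numbers: with 29 of 30 sampled patches judged acceptable (97%), the rule-of-succession estimate for the next one is 30/32, about 94%; with 12 of 30 (40%), it drops to 13/32, about 41%.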