Bug-fix volunteers: risks, downsides?

To round out the discussion... What risks or downsides would there be to recruiting a little legion of bug-fix volunteers and turning them loose on the tickets and/or regression matrix?

On 29/10/10 15:28, Jim Bell wrote:
To round out the discussion...
What risks or downsides would there be to recruiting a little legion of bug-fix volunteers and turning them loose on the tickets and/or regression matrix?
Boost is complex and sets very high requirements. Volunteers risk that they may not be able to cut the mustard (having some GSoC experiences in mind). Thus, the potential for dropped and unfinished tasks may be high. The Boost team may find it challenging to find resources to mentor during the process. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Charter Member of OSGeo, http://osgeo.org Member of ACCU, http://accu.org

On Fri, Oct 29, 2010 at 7:49 PM, Mateusz Loskot <mateusz@loskot.net> wrote:
On 29/10/10 15:28, Jim Bell wrote:
To round out the discussion...
What risks or downsides would there be to recruiting a little legion of bug-fix volunteers and turning them loose on the tickets and/or regression matrix?
Boost is complex and sets very high requirements. Volunteers risk that they may not be able to cut the mustard (having some GSoC experiences in mind). Thus, the potential for dropped and unfinished tasks may be high. The Boost team may find it challenging to find resources to mentor during the process.
While this is a very real risk, the other side of the coin is that, to my perception at least, Boost needs to do something to maintain or increase participation levels. The 'old guard' have done a magnificent job, and their commitment has frankly been awesome, but they can't be expected to crew the ship for ever. The barriers to entry to competence as a Boost developer are indeed substantial, and probably rising, but unless new blood is somehow 'inducted' I fear the future for Boost is limited. - Rob.

On Sat, Oct 30, 2010 at 11:09 PM, Robert Jones <robertgbjones@gmail.com> wrote:
On Fri, Oct 29, 2010 at 7:49 PM, Mateusz Loskot <mateusz@loskot.net> wrote:
On 29/10/10 15:28, Jim Bell wrote:
To round out the discussion...
What risks or downsides would there be to recruiting a little legion of bug-fix volunteers and turning them loose on the tickets and/or regression matrix?
Boost is complex and sets very high requirements. ... unfinished tasks may be high. The Boost team may find it challenging to ...
increase participation levels. The 'old guard' have done a magnificent ... unless new blood is somehow 'inducted' I fear the future for Boost is limited.
I am one of those hopefuls who responded on the thread that proposed the idea for volunteers. I have always wanted to understand and contribute to the Boost libraries because I felt that it would give me an insight into the design and implementation of Boost (and perhaps the C++ standard libraries themselves) to an extent that I lack today. And I am certain greater participation can only mean a good thing, provided we have answers to the following questions (or at least know where to start in trying to answer):
a. What are the concrete criteria for admitting a volunteer - where do you set the bar. These must be verifiable objective criteria.
b. Do we have a process in place which makes the induction of volunteers easy - how easily can a new recruit get down to the business of fixing the bugs? Part of it depends on the bar you set in (a) and part of it depends on the process you set. For example, the volunteers at the least need to know the bug-fixing process that is in place today including tools, reviews, etc. How quickly can this knowledge be imparted.
c. As somebody already mentioned, to what extent can you provide mentoring and who does it.
d. Finally, would someone assign tickets to volunteers - I feel this would be a better idea than letting people pick and choose when the volunteers start off. The process could get eased off as a volunteer spends more time with the code base and therefore gets more familiar.
I am sure the questions are easy to ask and there are logistical hurdles to take into account in trying to answer any of these questions.
Arindam

At Sun, 31 Oct 2010 00:22:25 +0530, Arindam Mukherjee wrote:
I am one of those hopefuls who responded on the thread that proposed the idea for volunteers. I have always wanted to understand and contribute to the Boost libraries because I felt that it would give me an insight into the design and implementation of Boost (and perhaps the C++ standard libraries themselves) to an extent that I lack today. And I am certain greater participation can only mean a good thing, provided we have answers to the following questions (or at least know where to start in trying to answer):
a. What are the concrete criteria for admitting a volunteer - where do you set the bar. These must be verifiable objective criteria.
I don't think we can really come up with objective criteria. Each library maintainer has his own set of values and his own style, and—at least if the maintainers are going to be involved in the decision—contributions mustn't clash too badly with that style and set of values. Therefore, criteria for accepting contributions, if not contributors, will be, to some extent, subjective.
b. Do we have a process in place which makes the induction of volunteers easy - how easily can a new recruit get down to the business of fixing the bugs? Part of it depends on the bar you set in (a) and part of it depends on the process you set. For example, the volunteers at the least need to know the bug-fixing process that is in place today including tools, reviews, etc. How quickly can this knowledge be imparted.
c. As somebody already mentioned, to what extent can you provide mentoring and who does it.
d. Finally, would someone assign tickets to volunteers - I feel this would be a better idea than letting people pick and choose when the volunteers start off. The process could get eased off as a volunteer spends more time with the code base and therefore gets more familiar.
I am sure the questions are easy to ask and there are logistical hurdles to take into account in trying to answer any of these questions.
Can you suggest some answers, even as straw men? We need a place to start. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

As a general comment.. I think having a pool of bug-fixers is a wonderful idea. And it is, frankly, one of the requirements of Boost staying afloat. On 10/30/2010 3:05 PM, David Abrahams wrote:
At Sun, 31 Oct 2010 00:22:25 +0530, Arindam Mukherjee wrote:
I am one of those hopefuls who responded on the thread that proposed the idea for volunteers. I have always wanted to understand and contribute to the Boost libraries because I felt that it would give me an insight into the design and implementation of Boost (and perhaps the C++ standard libraries themselves) to an extent that I lack today. And I am certain greater participation can only mean a good thing, provided we have answers to the following questions (or at least know where to start in trying to answer):
a. What are the concrete criteria for admitting a volunteer - where do you set the bar. These must be verifiable objective criteria.
I don't think we can really come up with objective criteria. Each library maintainer has his own set of values and his own style, and—at least if the maintainers are going to be involved in the decision—contributions mustn't clash too badly with that style and set of values. Therefore, criteria for accepting contributions, if not contributors, will be, to some extent, subjective.
I agree mostly ;-) Like regular reviews for library submissions, and also for GSoC students, I think we can use similar criteria for what makes a good volunteer. I.e. we can have a vetting process for volunteers and their contributions.
IIRC for libraries we only really require that people test their code on two toolsets locally. And of course we do reviews of the library as a whole before initial inclusion.
The GSoC guidelines are a bit more fluid.. I expect some demonstrable knowledge of the problem domain and of C++. I have seen other participant organizations require some form of contribution to the project before considering a student. And something like that I would think is possible in this case.
So here's an initial set of criteria / process for this:
For volunteers to get SVN write access:
1. Must submit some minimum number of patches to Trac.
2. Some minimum number of patches to existing tickets must be accepted, reviewed, applied, and tested. I.e. a new volunteer would turn bug tickets into patch tickets to get this started.
3. A single patch must be reviewed by some minimum number of existing contributors. And either blessed or not for application.
4. Patches must be locally tested on some minimal number of toolsets. Either multiple toolsets on one operating system, or preferably multiple toolsets on multiple OSs. It would be up to the reviewers to decide if the tested toolsets are sufficient in the context of the particular patch.
Any regular maintainer, including existing volunteers, can help with the above. The hope being that we can start small and have this grow itself without increasing the burden on current contributors too much. Some possible numbers for the above: (1) five submitted patches, (2) three applied patches, (3) two reviewers for a patch, with a high preference for the library maintainer to be a reviewer but not required, and (4) two toolsets.
After volunteers have write access we would want to still monitor their patches so we would want to keep the review of their patches. So perhaps after some number of closed tickets we would remove the review portion with the expectation that they would be responsible enough at this time to seek out reviews for non-trivial, or non-controversial, patches.
b. Do we have a process in place which makes the induction of volunteers easy - how easily can a new recruit get down to the business of fixing the bugs? Part of it depends on the bar you set in (a) and part of it depends on the process you set. For example, the volunteers at the least need to know the bug-fixing process that is in place today including tools, reviews, etc. How quickly can this knowledge be imparted.
Well, as you might have guessed there really isn't a process for bug fixing other than what is required for release management. Essentially it's mostly up to the individual maintainers to deal with it as they like. Hence my suggestion above for the process :-) As far as tools go though, we do have a rather fixed set of requirements for the testing part of this. And it's fairly easy to explain.
c. As somebody already mentioned, to what extent can you provide mentoring and who does it.
I think we can at minimum have the same set of contributors that we have for GSoC help out with the mentoring of this as they (a) tend to have the desire to help out, and (b) they tend to be the most broadly knowledgeable of Boost Libraries. Which I think roughly means 15 or so people to mentor at the start.
d. Finally, would someone assign tickets to volunteers - I feel this would be a better idea than letting people pick and choose when the volunteers start off. The process could get eased off as a volunteer spends more time with the code base and therefore gets more familiar.
Assigning tickets might be a hard task for contributors to do initially as it might take a considerable amount of time to actively find tickets. And also would make it harder on volunteers as the particular domain might be out of their realm. Perhaps it might be better to have contributors mark tickets as candidates for volunteers to take on. Immediately what comes to mind are tickets for platforms that maintainers don't usually have access to as good candidates for this ticket pool.
I am sure the questions are easy to ask and there are logistical hurdles to take into account in trying to answer any of these questions.
Can you suggest some answers, even as straw men? We need a place to start.
Hopefully the above is a good place to start. Note, I wrote the above with the background of spending a few release cycles years ago doing nothing but fixing test failures. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

At Sat, 30 Oct 2010 22:15:30 -0500, Rene Rivera wrote:
As a general comment.. I think having a pool of bug-fixers is a wonderful idea. And it is, frankly, one of the requirements of Boost staying afloat.
On 10/30/2010 3:05 PM, David Abrahams wrote:
At Sun, 31 Oct 2010 00:22:25 +0530, Arindam Mukherjee wrote:
I am one of those hopefuls who responded on the thread that proposed the idea for volunteers. I have always wanted to understand and contribute to the Boost libraries because I felt that it would give me an insight into the design and implementation of Boost (and perhaps the C++ standard libraries themselves) to an extent that I lack today. And I am certain greater participation can only mean a good thing, provided we have answers to the following questions (or at least know where to start in trying to answer):
a. What are the concrete criteria for admitting a volunteer - where do you set the bar. These must be verifiable objective criteria.
I don't think we can really come up with objective criteria. Each library maintainer has his own set of values and his own style, and—at least if the maintainers are going to be involved in the decision—contributions mustn't clash too badly with that style and set of values. Therefore, criteria for accepting contributions, if not contributors, will be, to some extent, subjective.
I agree mostly ;-) Like regular reviews for library submissions, and also for GSoC students, I think we can use similar criteria for what makes a good volunteer. I.e. we can have a vetting process for volunteers and their contributions.
IIRC for libraries we only really require that people test their code on two toolsets locally. And of course we do reviews of the library as a whole before initial inclusion.
The GSoC guidelines are a bit more fluid.. I expect some demonstrable knowledge of the problem domain and of C++. I have seen other participant organizations require some form of contribution to the project before considering a student.
That's a good idea; would weed out lots of the non-serious submissions.
And something like that I would think is possible in this case.
So here's an initial set of criteria / process for this:
For volunteers to get SVN write access:
1. Must submit some minimum number of patches to Trac.
2. Some minimum number of patches to existing tickets must be accepted, reviewed, applied, and tested. I.e. a new volunteer would turn bug tickets into patch tickets to get this started.
3. A single patch must be reviewed by some minimum number of existing contributors. And either blessed or not for application.
4. Patches must be locally tested on some minimal number of toolsets. Either multiple toolsets on one operating system, or preferably multiple toolsets on multiple OSs. It would be up to the reviewers to decide if the tested toolsets are sufficient in the context of the particular patch.
Any regular maintainer, including existing volunteers, can help with the above. The hope being that we can start small and have this grow itself without increasing the burden on current contributors too much. Some possible numbers for the above: (1) five submitted patches, (2) three applied patches, (3) two reviewers for a patch, with a high preference for the library maintainer to be a reviewer but not required, and (4) two toolsets.
After volunteers have write access we would want to still monitor their patches so we would want to keep the review of their patches. So perhaps after some number of closed tickets we would remove the review portion with the expectation that they would be responsible enough at this time to seek out reviews for non-trivial, or non-controversial, patches.
This all sounds great to me.
c. As somebody already mentioned, to what extent can you provide mentoring and who does it.
I think we can at minimum have the same set of contributors that we have for GSoC help out with the mentoring of this as they (a) tend to have the desire to help out, and (b) they tend to be the most broadly knowledgeable of Boost Libraries. Which I think roughly means 15 or so people to mentor at the start.
Wow, that's a huge number, compared to my expectation! That would be wonderful.
d. Finally, would someone assign tickets to volunteers - I feel this would be a better idea than letting people pick and choose when the volunteers start off. The process could get eased off as a volunteer spends more time with the code base and therefore gets more familiar.
Assigning tickets might be a hard task for contributors to do initially as it might take a considerable amount of time to actively find tickets. And also would make it harder on volunteers as the particular domain might be out of their realm. Perhaps it might be better to have contributors mark tickets as candidates for volunteers to take on. Immediately what comes to mind are tickets for platforms that maintainers don't usually have access to as good candidates for this ticket pool.
Not sure who you're referring to as "contributors" in this section. Could you clarify?
I am sure the questions are easy to ask and there are logistical hurdles to take into account in trying to answer any of these questions.
Can you suggest some answers, even as straw men? We need a place to start.
Hopefully the above is a good place to start. Note, I wrote the above with the background of spending a few release cycles years ago doing nothing but fixing test failures.
I like it!! Can you (co-)implement it? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 1:59 PM, David Abrahams wrote:
At Sun, 31 Oct 2010 00:22:25 +0530, Arindam Mukherjee wrote:
I am one of those hopefuls who responded on the thread that proposed the idea for volunteers. I have always wanted to understand and contribute to the Boost libraries because I felt that it would give me an insight into the design and implementation of Boost (and perhaps the C++ standard libraries themselves) to an extent that I lack today. And I am certain greater participation can only mean a good thing, provided we have answers to the following questions (or at least know where to start in trying to answer):
a. What are the concrete criteria for admitting a volunteer - where do you set the bar. These must be verifiable objective criteria.
I don't think we can really come up with objective criteria. Each library maintainer has his own set of values and his own style, and—at least if the maintainers are going to be involved in the decision—contributions mustn't clash too badly with that style and set of values. Therefore, criteria for accepting contributions, if not contributors, will be, to some extent, subjective.
I agree. I think a volunteer's own motivation will carry him farther than anything. I think it will start out largely self-study: studying a library's documentation and regression tests to understand it. Hopefully there would be two or three such volunteers per library, and they can ask questions of each other. Learning how to (a) identify a spurious Ticket and diplomatically dispose of it, or (b) adapt it into a legitimate regression test or extension to an existing test, possibly with (c) a minimal-impact patch ... that alone will sharpen the volunteers' skills a lot, and get the attention of the library's maintainer(s) in terms of mentoring.
b. Do we have a process in place which makes the induction of volunteers easy - how easily can a new recruit get down to the business of fixing the bugs? Part of it depends on the bar you set in (a) and part of it depends on the process you set. For example, the volunteers at the least need to know the bug-fixing process that is in place today including tools, reviews, etc. How quickly can this knowledge be imparted.
I think self-study will rule the day here, too. Where the most instruction is needed is in building and running the regression tests in isolation. My method might be a bit unorthodox: hack run.py, then regression.py, and operate things just like a regression test, but without uploading data. (More detail later.) Anyone can add comments to a ticket, though I think a clearer explanation of some things like severities 'showstopper' and 'regression' would be helpful. But navigating a ticket is one way to get to know it.
c. As somebody already mentioned, to what extent can you provide mentoring and who does it.
d. Finally, would someone assign tickets to volunteers - I feel this would be a better idea than letting people pick and choose when the volunteers start off. The process could get eased off as a volunteer spends more time with the code base and therefore gets more familiar.
If one volunteer has more advanced experience, he could assign tickets. If a maintainer has just stepped out in front of a bus, though, there may not be anyone to do this.
I am sure the questions are easy to ask and there are logistical hurdles to take into account in trying to answer any of these questions.
The bane of Boost's quality is thinking someone else is taking care of it.

On Sun, Oct 31, 2010 at 9:30 PM, Jim Bell <Jim@jc-bell.com> wrote:
Where the most instruction is needed is in building and running the regression tests in isolation. My method might be a bit unorthodox: hack run.py, then regression.py, and operate things just like a regression test, but without uploading data. (More detail later.)
That sounds much harder than simply running "bjam" in the library's test/ subdirectory. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 1:59 PM, Dave Abrahams wrote:
On Sun, Oct 31, 2010 at 9:30 PM, Jim Bell <Jim@jc-bell.com> wrote:
Where the most instruction is needed is in building and running the regression tests in isolation. My method might be a bit unorthodox: hack run.py, then regression.py, and operate things just like a regression test, but without uploading data. (More detail later.)
That sounds much harder than simply running "bjam" in the library's test/ subdirectory.
Thanks! That does sound much easier. I didn't see that anywhere in the docs.

At Tue, 02 Nov 2010 16:16:37 -0500, Jim Bell wrote:
On 1:59 PM, Dave Abrahams wrote:
On Sun, Oct 31, 2010 at 9:30 PM, Jim Bell <Jim@jc-bell.com> wrote:
Where the most instruction is needed is in building and running the regression tests in isolation. My method might be a bit unorthodox: hack run.py, then regression.py, and operate things just like a regression test, but without uploading data. (More detail later.)
That sounds much harder than simply running "bjam" in the library's test/ subdirectory.
Thanks! That does sound much easier. I didn't see that anywhere in the docs.
That's a cryin' shame. Everyone seems to think they need to generate HTML, etc. etc. in order to run the tests, when all you need is bjam. If you use bjam, though, make sure you capture the output somewhere so you can analyze the errors among all the other output. IDEs and/or Emacs work really well for this. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
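For example, from the top of a Boost checkout it can be as little as the following (the library name and toolset here are only placeholders; use whichever library and compiler you actually have set up):

    cd libs/filesystem/test          # any library's test/ subdirectory
    bjam toolset=gcc 2>&1 | tee bjam.log

Everything you need is then in that one log; just search it for the failing targets.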

On 1:59 PM, David Abrahams wrote:
At Tue, 02 Nov 2010 16:16:37 -0500, Jim Bell wrote:
On 1:59 PM, Dave Abrahams wrote:
On Sun, Oct 31, 2010 at 9:30 PM, Jim Bell <Jim@jc-bell.com> wrote:
Where the most instruction is needed is in building and running the regression tests in isolation. My method might be a bit unorthodox: hack run.py, then regression.py, and operate things just like a regression test, but without uploading data. (More detail later.)
That sounds much harder than simply running "bjam" in the library's test/ subdirectory.
Thanks! That does sound much easier. I didn't see that anywhere in the docs.
That's a cryin' shame. Everyone seems to think they need to generate HTML, etc. etc. in order to run the tests, when all you need is bjam.
Could we add a section to <http://www.boost.org/development/running_regression_tests.html>, describing running tests locally? (Or at least a link to the page describing this?) That's the only documentation I've found (though I could easily have missed something). I've burned a few hours on this. (And, more importantly, perceived it to be a difficult thing to do in isolation.) Will all command lines (i.e., compiler/linker flags) be identical to the real regression tests?

On 11/3/2010 11:34 AM, Jim Bell wrote:
On 1:59 PM, David Abrahams wrote:
At Tue, 02 Nov 2010 16:16:37 -0500, Jim Bell wrote:
On 1:59 PM, Dave Abrahams wrote:
On Sun, Oct 31, 2010 at 9:30 PM, Jim Bell<Jim@jc-bell.com> wrote:
Where the most instruction is needed is in building and running the regression tests in isolation. My method might be a bit unorthodox: hack run.py, then regression.py, and operate things just like a regression test, but without uploading data. (More detail later.)
That sounds much harder than simply running "bjam" in the library's test/ subdirectory.
Thanks! That does sound much easier. I didn't see that anywhere in the docs.
That's a cryin' shame. Everyone seems to think they need to generate HTML, etc. etc. in order to run the tests, when all you need is bjam.
Could we add a section to <http://www.boost.org/development/running_regression_tests.html>, describing running tests locally? (Or at least a link to the page describing this?) That's the only documentation I've found (though I could easily have missed something).
There's also some brief mention of testing in the library author's test policy documentation <http://www.boost.org/development/test.html>. But it doesn't really mention the usual "run bjam in your test dir" procedure. So we do really need to add something about that.
I've burned a few hours on this. (And, more importantly, perceived it to be a difficult thing to do in isolation.)
Will all command lines (i.e., compiler/linker flags) be identical to the real regression tests?
They are semantically the same. I.e. there's no difference from running bjam in the individual lib test dirs vs. running it in the boost-root/status dir, which runs all of the tests and which is what the testing system does. The latter just runs the individual ones in sequence. Of course there will be platform and setup differences depending on what the individual testers have done to create their system and toolsets. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail
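In other words, something like this (the library name is purely illustrative):

    cd boost-root/libs/regex/test && bjam    # the tests for a single library
    cd boost-root/status && bjam             # the full run that the testing system performs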

Jim Bell wrote:
On 1:59 PM, David Abrahams wrote:
At Tue, 02 Nov 2010 16:16:37 -0500, Jim Bell wrote:
On 1:59 PM, Dave Abrahams wrote:
On Sun, Oct 31, 2010 at 9:30 PM, Jim Bell <Jim@jc-bell.com> wrote:
Where the most instruction is needed is in building and running the regression tests in isolation. My method might be a bit unorthodox: hack run.py, then regression.py, and operate things just like a regression test, but without uploading data. (More detail later.)
That sounds much harder than simply running "bjam" in the library's test/ subdirectory.
Thanks! That does sound much easier. I didn't see that anywhere in the docs.
That's a cryin' shame. Everyone seems to think they need to generate HTML, etc. etc. in order to run the tests, when all you need is bjam.
Could we add a section to <http://www.boost.org/development/running_regression_tests.html>, describing running tests locally? (Or at least a link to the page describing this?) That's the only documentation I've found (though I could easily have missed something).
Perfect time to insert a plug for my personal method. To test a specific library (e.g. serialization):
a) cd to ../libs/serialization/test
b) ../../../tools/regression/src/library_test.sh (or ..\..\..\library_test.bat)
And you will be rewarded with an HTML table in the ../libs/serialization/test directory which has all your test results for all platforms and combinations. Each time you rerun the tests, the column is updated. Each time you run tests with a new set of attributes (compiler, os, variant, etc.) you get a new column added to the table. This is my standard method for dealing with this on my own machine.
Robert Ramey
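Typed out from the top of a Boost checkout, with serialization standing in for whichever library you care about, that amounts to roughly:

    cd libs/serialization/test
    ../../../tools/regression/src/library_test.sh
    # the HTML results table is then written into this same test directory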
I've burned a few hours on this. (And, more importantly, perceived it to be a difficult thing to do in isolation.)
Will all command lines (i.e., compiler/linker flags) be identical to the real regression tests?

On 1:59 PM, Robert Ramey wrote:
Jim Bell wrote:
On 1:59 PM, David Abrahams wrote:
At Tue, 02 Nov 2010 16:16:37 -0500, Jim Bell wrote:
[...] Thanks! That does sound much easier. I didn't see that anywhere in the docs.
That's a cryin' shame. Everyone seems to think they need to generate HTML, etc. etc. in order to run the tests, when all you need is bjam. [...]
Perfect time to insert a plug for my personal method. To test a specific library (e.g. serialization):
a) cd to ../libs/serialization/test
b) ../../../tools/regression/src/library_test.sh (or ..\..\..\library_test.bat)
And you will be rewarded with an HTML table in the ../libs/serialization/test directory which has all your test results for all platforms and combinations. Each time you rerun the tests, the column is updated. Each time you run tests with a new set of attributes (compiler, os, variant, etc.) you get a new column added to the table.
You know, I came across this not long ago, but it had a few problems too...
* You have to manually (albeit trivially) hack tools/regression/build/Jamroot.jam to get library_status to build.
* Doc discrepancy: output files are in the <compiler>/release dir, not dist/bin
* Doc discrepancy: library_test.bat (and .sh) are in a different dir.
* library_status was hanging or running away on an empty bjam.log (can't find this in my notes so I don't remember the exact symptoms--sorry!)
* I was trying to use it with boost built with a different '--build-dir' argument (a directory outside boost's root), and it didn't play well. I looked at --locate-root but couldn't bridge the gap. (The '--echo' parameter was very helpful.)
* Win32 seemed less supported than linux (though I can run bash under win32, many can't).
So I decided it wasn't maintained. And that bjam was, indeed, not meant to be run without help. I'd love to see this work for the view it gives, but I don't mind looking through a boost log either if I'm looking for a specific test result.

Jim Bell wrote:
On 1:59 PM, Robert Ramey wrote:
Jim Bell wrote:
On 1:59 PM, David Abrahams wrote:
At Tue, 02 Nov 2010 16:16:37 -0500, Jim Bell wrote:
[...] Thanks! That does sound much easier. I didn't see that anywhere in the docs.
That's a cryin' shame. Everyone seems to think they need to generate HTML, etc. etc. in order to run the tests, when all you need is bjam. [...]
Perfect time to insert a plug for my personal method. To test a specific library (e.g. serialization):
a) cd to ../libs/serialization/test
b) ../../../tools/regression/src/library_test.sh (or ..\..\..\library_test.bat)
And you will be rewarded with an HTML table in the ../libs/serialization/test directory which has all your test results for all platforms and combinations. Each time you rerun the tests, the column is updated. Each time you run tests with a new set of attributes (compiler, os, variant, etc.) you get a new column added to the table.
You know, I came across this not long ago, but it had a few problems too...
* You have to manually (albeit trivially) hack tools/regression/build/Jamroot.jam to get library_status to build.
I think things got changed a little from my original submission in order to deal with some other issues. You might want to update the jamfile.
* Doc discrepancy: output files are in the <compiler>/release dir, not dist/bin
* Doc discrepancy: library_test.bat (and .sh) are in a different dir.
I don't remember making any documentation for it - but if there is, feel free to correct it or ask me to do it.
* library_status was hanging or running away on an empty bjam.log (can't find this in my notes so I don't remember the exact symptoms--sorry!)
Hmmm - haven't seen this myself.
* I was trying to use it with boost built with a different '--build-dir' argument (a directory outside boost's root), and it didn't play well. I looked at --locate-root but couldn't bridge the gap. (The '--echo' parameter was very helpful.)
I likely didn't test it with scenarios other than the ones I use - so all bets are off.
* Win32 seemed less supported than linux (though I can run bash under win32, many can't).
library_test.bat should work - though I don't use it. It's a very simple script if someone wants/needs to fix it. I use it on win32 only. I test using cygwin/gcc-4.3.4. library_test.sh/bat works under standard windows.
So I decided it wasn't maintained.
Hmmm - I've been using it "forever" with little problem. Admittedly I built it long ago, but since it hasn't bugged me - I just haven't touched it.
And that bjam was, indeed, not meant to be run without help.
I'd love to see this work for the view it gives, but I don't mind looking through a boost log either if I'm looking for a specific test result.
I notice that the table now builds with some extraneous columns now that the tests include some DLLs. I could fix it but it's easier to ignore than to really fix. Robert Ramey

On 1:59 PM, Robert Ramey wrote:
[...] I don't remember making any documentation for it - but if there is, feel free to correct it or ask me to do it.
boost/tools/regression/doc/index.html This is a very cool tool, and my remarks are only meant to be constructive. (And indicate the intimidation factor in running regressions in isolation).

At Thu, 04 Nov 2010 11:47:26 -0500, Jim Bell wrote:
On 1:59 PM, Robert Ramey wrote:
[...] I don't remember making any documentation for it - but if there is, feel free to correct it or ask me to do it.
boost/tools/regression/doc/index.html
This is a very cool tool, and my remarks are only meant to be constructive. (And indicate the intimidation factor in running regressions in isolation).
And, just so my point doesn't get lost, unless you really can't live without an HTML table, all you need to do is go into the library's test/ subdirectory and run bjam. Successes and failures should be clearly visible in the output. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Thu, Nov 4, 2010 at 1:34 AM, Jim Bell <Jim@jc-bell.com> wrote:
Could we add a section to <http://www.boost.org/development/running_regression_tests.html>, describing running tests locally? (Or at least a link to the page describing this?) That's the only documentation I've found (though I could easily have missed something).
Would you like commit access to the website so you can do it yourself? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 11/3/2010 5:48 PM, Dave Abrahams wrote:
On Thu, Nov 4, 2010 at 1:34 AM, Jim Bell <Jim@jc-bell.com> wrote:
Could we add a section to <http://www.boost.org/development/running_regression_tests.html>, describing running tests locally? (Or at least a link to the page describing this?) That's the only documentation I've found (though I could easily have missed something).
Would you like commit access to the website so you can do it yourself?
Just hand the bull the keys to the china shop? Are you crazy?!? I'm still not sure what my involvement can/ought to be. I'll get back to you on that.

"Jim Bell" <Jim@JC-Bell.com> wrote in message news:4CD18F27.8020803@JC-Bell.com...
On 1:59 PM, Dave Abrahams wrote:
That sounds much harder than simply running "bjam" in the library's test/ subdirectory.
Thanks! That does sound much easier. I didn't see that anywhere in the docs.
That's a cryin' shame. Everyone seems to think they need to generate HTML, etc. etc. in order to run the tests, when all you need is bjam.
Could we add a section to <http://www.boost.org/development/running_regression_tests.html>, describing running tests locally? (Or at least a link to the page describing this?) That's the only documentation I've found (though I could easily have missed something).
I've burned a few hours on this. (And, more importantly, perceived it to be a difficult thing to do in isolation.)
+1 I also went with the procedure described on the web (for testing my new Boost.Function implementation) and as a result totally spammed the server(s)... A 'do not upload results to server' option for the run.py script would be quite useful (and the ability to see the HTML formatted/coloured results, w/o uploading, even more ;) ps. apologies if this was already discussed/solved...I just noticed this being discussed... -- "What Huxley teaches is that in the age of advanced technology, spiritual devastation is more likely to come from an enemy with a smiling face than from one whose countenance exudes suspicion and hate." Neil Postman

At Sat, 30 Oct 2010 18:39:28 +0100, Robert Jones wrote:
On Fri, Oct 29, 2010 at 7:49 PM, Mateusz Loskot <mateusz@loskot.net> wrote:
On 29/10/10 15:28, Jim Bell wrote:
To round out the discussion...
What risks or downsides would there be to recruiting a little legion of bug-fix volunteers and turning them loose on the tickets and/or regression matrix?
Boost is complex and sets very high requirements. Volunteers risk that they may not be able to cut the mustard (having some GSoC experiences in mind). Thus, the potential for dropped and unfinished tasks may be high. The Boost team may find it challenging to find resources to mentor during the process.
While this is a very real risk, the other side of the coin is that, to my perception at least, Boost needs to do something to maintain or increase participation levels. The 'old guard' have done a magnificent job, and their commitment has frankly been awesome, but they can't be expected to crew the ship for ever. The barriers to entry to competence as a Boost developer are indeed substantial, and probably rising, but unless new blood is somehow 'inducted' I fear the future for Boost is limited.
+1 -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 29 October 2010 15:28, Jim Bell <Jim@jc-bell.com> wrote:
To round out the discussion...
What risks or downsides would there be to recruiting a little legion of bug-fix volunteers and turning them loose on the tickets and/or regression matrix?
Often fixing bugs on major compilers can break them on less common ones, and sometimes these breaks aren't noticed (I, for example, run testers for clang, which are sometimes broken). I don't know the best way of doing this, but ideally bug fixes by volunteers would need to meet a "no compiler is more broken than it was before" requirement. This would obviously slow down how quickly bug fixes could be applied, because there would need to be a cycle of the testers to see which new problems have been introduced. Chris
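As a rough sketch of what checking more than one compiler locally can look like (the library and toolset names are only examples; substitute whatever testers you can run yourself):

    cd libs/array/test
    bjam toolset=gcc   2>&1 | tee before-gcc.log
    bjam toolset=clang 2>&1 | tee before-clang.log
    # apply the candidate patch, rerun both, and compare the new logs against the old ones

Only if neither failure list grows would the fix go in.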

On 1:59 PM, Chris Jefferson wrote:
On 29 October 2010 15:28, Jim Bell <Jim@jc-bell.com> wrote:
To round out the discussion...
What risks or downsides would there be to recruiting a little legion of bug-fix volunteers and turning them loose on the tickets and/or regression matrix?
Often fixing bugs on major compilers can break them on less common ones, and sometimes these breaks aren't noticed (I, for example, run testers for clang, which are sometimes broken). I don't know the best way of doing this, but ideally bug fixes by volunteers would need to meet a "no compiler is more broken than it was before" requirement. This would obviously slow down how quickly bug fixes could be applied, because there would need to be a cycle of the testers to see which new problems have been introduced.
Good point, particularly for fixing regression tests. When attempting a regression fix, its initial state across all platforms needs to be noted and compared to its final state.
participants (10)
- Arindam Mukherjee
- Chris Jefferson
- Dave Abrahams
- David Abrahams
- Domagoj Saric
- Jim Bell
- Mateusz Loskot
- Rene Rivera
- Robert Jones
- Robert Ramey