Regression report for Releases?

Do we have the results of the 1.32 regression tests somewhere? I'm curious what the actual final release results were, but I can't find them anywhere. I thought at one time we maintained them on the website? Anyway, it would be really nice to snapshot the final test web pages and include them in the release package. That way everyone could simply look at the results and get an idea of how their compiler/platform might work with that release -- of course that should be on the website too. Jeff

Jeff Garland writes:
Do we have the results of the 1.32 regression tests somewhere?
http://www.meta-comm.com/engineering/boost-regression/1_32_0/developer/summa... (http://tinyurl.com/6ysmd)
This was the intention all along, but in the rush of the release it never got carried out.
FWIW, the above page is referenced from 1.32 release notes, although it probably could be emphasized more. -- Aleksey Gurtovoy MetaCommunications Engineering

On Tue, 01 Mar 2005 21:32:46 -0600, Aleksey Gurtovoy wrote
Jeff Garland writes:
Do we have the results of the 1.32 regression tests somewhere?
http://www.meta-comm.com/engineering/boost-regression/1_32_0/developer/summa...
Thanks!
Fair enough -- I was almost afraid to ask since I hate to make the release process any harder than it is now.
FWIW, the above page is referenced from 1.32 release notes, although it probably could be emphasized more.
You're right. I think a link right under regression tests would be nice. If we do something like this we might want to name the folder 'latest_release' so that we don't have to change links to it every time we do a release. Jeff

On Wed, 2 Mar 2005 09:02:34 +0000 (UTC), David Abrahams wrote
Somebody has to do the dirty work ;-). Why don't you edit that step into the release process page?
Done -- the text there is obviously provisional as I'm not sure of the best way to get all the results pages off the meta-comm site short of a web spider or asking for help... Jeff

"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
I don't quite understand why anything like that would be needed... but then I'm on the plane and can't see the message you're replying to, so I might be missing something. I thought I was just suggesting you add a step to our official "steps for the release manager" page. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Sat, 19 Mar 2005 20:01:41 -0500, David Abrahams wrote
Yes, I already added something to the release_mgr_checklist.html. The problem is that the steps on that page are very detailed -- exact commands and such to be entered. Since the regression results aren't a single web page but rather an interlinked set of pages, they need to be gathered as a group and put into the overall Boost web-site structure. And since the pages the release manager needs to gather aren't on SourceForge, the exact commands to follow aren't obvious -- hence the provisional nature of the step details... Jeff

Jeff Garland writes:
They are also available in archive form: http://www.meta-comm.com/engineering/boost-regression/cvs-head.zip -- Aleksey Gurtovoy MetaCommunications Engineering

At 01:01 AM 3/2/2005, Jeff Garland wrote:
On Tue, 01 Mar 2005 21:32:46 -0600, Aleksey Gurtovoy wrote
That points up a general problem; the release manager's workload is too high. Perhaps we should consider a team approach to release management. One person could concentrate on regression tests, another on patch management, a third on webmaster activities. That way when it comes to putting the actual release together, the release manager won't be as frazzled. --Beman

Beman Dawes wrote:
A great idea! IMO it's too much for one person even just to oversee the regression tests. (Sometimes I have problems even with my few results... ;-) ) It's still a miracle to me how Aleksey managed to do so many (different) things for the last release without losing his head. Stefan

"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
Agreed, it's too hard, but that shouldn't stop us from talking about what we would be doing in an ideal world. Accordingly:
- A health report for the latest release should always be available on the website.
  - Regressions from the previous release are nice to know but less important. I realize we show both in one report, but this may help us adjust our emphasis or coloring (maybe it's already perfect in the user report; I don't know).
- A health report for the current state of the repository should always be available on the website.
  - Regressions from the previous release are crucial to know also.
- When we branch for a release, we absolutely must track the release branch, but we also should be continuing to display the health of the trunk.
- We ought to have a system for automatically notifying anyone who checks in a regression, and displaying information about the change responsible for the regression on the status page.
- There should be a way for a developer to request testing of a particular branch/set of revisions.
- There should be enough computing power to handle all these tests in a timely fashion.
We also need to discuss how the main trunk will be treated. Gennadiy has suggested in the past that checking in breaking changes to the trunk is a perfectly legitimate technique for test-driven development. I agree in principle, but that idea seems to generate a lot of friction with other developers trying to stabilize their test results. The ability to request testing of a branch might go a long way toward eliminating that sort of problem. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Sat, 19 Mar 2005 20:12:55 -0500, David Abrahams wrote
Yes, it is there -- in the middle of the front page, but even so I missed it. So I think the link belongs under 'regression tests' called something like 'current release'.
In fact, I think from the user perspective the question really goes something like: "I'm using Intel 8.1 on Windows and Linux and I want to use Python, smart_ptr, and serialization -- can I expect these libraries to work with release xyz?" And several variations on that theme. So in an "ideal world" scenario I would have a web form where the user could enter her needs and a script would filter the regression results down to the set of interest for the user.
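As a rough sketch of the kind of filtering such a script might do -- the flat results file, its record layout, and the library/toolset names below are all invented for illustration (the real reports are generated from XML, so some export step would be assumed):

    // Hypothetical sketch: filter a flat regression-results export down to the
    // libraries and toolsets a particular user cares about.
    // Assumed record format, one result per line: <library> <toolset> <result>
    #include <fstream>
    #include <iostream>
    #include <set>
    #include <sstream>
    #include <string>

    int main()
    {
        // What the user typed into the (hypothetical) web form:
        std::set<std::string> libraries = { "python", "smart_ptr", "serialization" };
        std::set<std::string> toolsets  = { "intel-8.1-windows", "intel-8.1-linux" };

        std::ifstream in("regression_results.txt");   // invented file name
        std::string line;
        while (std::getline(in, line))
        {
            std::istringstream record(line);
            std::string library, toolset, result;
            if (!(record >> library >> toolset >> result))
                continue;                              // skip malformed lines
            if (libraries.count(library) && toolsets.count(toolset))
                std::cout << library << " / " << toolset << ": " << result << "\n";
        }
    }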
Agree -- I think the big blocker here is expanding the set of regression testers during this period. Another factor is that the set of compilers/platforms tested between releases is not really stable. It has been growing, so we now have 'new results' that can't really be 'diffed' against the last release. For example, we now have a regression tester for Solaris, which we didn't have for 1.32. I'm not sure that's obvious from the way we display results.
Do we even have a way of tracking the check-ins? That might be a good first step. I notice that SourceForge seems to be generating some sort of email when I check in, but I don't know of a way to subscribe to the changelist.
- There should be a way for a developer to request testing of a particular branch/set of revisions
I'd put this high on my list. Without it there is no practical way for developers to regression test on a branch, which means that using branches for development isn't really that practical.
- There should be enough computing power to handle all these tests in a timely fashion.
Guess it depends on what you consider timely -- 1 minute, 1 hour, 1 day? We are somewhere in the two-day range now. From the developer perspective, the ideal world would be 'right now': I've got these changes I'm working on, I've tested on my core compilers, and I'm ready to see the results for other compilers. It seems like most testers run one regression test per day, while others run several. So depending on when you check in, it takes up to a couple of days to really see the results for all platforms/compilers. The only way I see us getting closer to the ideal is more machines really dedicated to just Boost testing...
I agree with him about wanting to use the compiler to find breakage, but the problem is that his particular library is one that many libraries depend on. As a result, it really needs to stay stable during the release period to ensure that we don't have 2 days of downtime while something Boost-wide is broken. So I really think we need to start thinking about a dependency analysis of Boost and an added freeze date for 'core libraries' that need to stay stable during the release process. Developers will have to finish what they want in the release earlier. This could certainly be relaxed if branch testing were available, since a developer could be much more sure of avoiding mainline breakage...
The ability to request testing of a branch might go a long way toward eliminating that sort of problem.
Agree completely. Jeff

Jeff Garland wrote:
On Sat, 19 Mar 2005 20:12:55 -0500, David Abrahams wrote
"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
Yes we do. Dave and I, long ago, set up those emails SF sends so we could get Buildbot to work. So if we tie CVS changes to the individual builds we can tell who and what breaks a build. Even though I'm still working on the Buildbot setup, here's a sneak peek... http://build.redshift-software.com:9990/
Or going to an active system like Buildbot. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

On Sun, 20 Mar 2005 13:53:48 -0600, Rene Rivera wrote
Very cool! No doubt buildbot will be a great asset -- when do you think it will be ready to 'go production'?
I don't think the existence of Buildbot solves all of our resource issues. I would expect only a limited number of the current regression testers will be able to install and use Buildbot -- I'm certain there will be firewall and other issues for some that just stop this from happening. Plus, if it takes 5 hours to run a Boost build you will still have a long delay before you find out if something is broken. Most developers would like to see a library-focused rebuild, which for most libraries could happen in minutes. As an example, since almost nothing in Boost depends on date-time it's very hard for me to break all of Boost. So rerunning all of the Boost regression tests for a date-time check-in is mostly a waste of resources. We've also had several previous discussions on other things that are pushing up the need for additional resources, including: Boost is just plain getting bigger, the need for non-debug regression tests, double testing for DLL and non-DLL versions of linked libs, etc. Jeff

"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
If we get a testing farm from OSL I'm sure we can get a lot of slaves there.
Plus if it takes 5 hours to run a Boost build you will still have a long delay before you find out if something is broken.
We simply have to get incremental testing working.
Boost.Build does dependency analysis; there's no reason to re-run everything from scratch. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Sun, 20 Mar 2005 18:22:53 -0500, David Abrahams wrote
Yes, this helps tremendously, but as soon as there is a check-in to boost.test or boost.config you're still back to basically a full rebuild. So if you have one machine there will be some backup behind these full builds. Also, you probably want to periodically rebuild everything anyway, just because I have yet to meet a perfect dependency checker...
It seems to be broken at the moment, but I agree that most of the time this will do the job. Still, if there were library-level selection, that would be better for those changes where the developer knows of a library he wants to test first. Not complaining about either of these, just aiming for the ideal world ;-) Jeff

On Sun, 20 Mar 2005 22:19:07 -0500, David Abrahams wrote
No, I mean straight-up changes to files. I've resorted to frequently using bjam -a after I checked in a change that broke something. After I tracked it down I realized that an incremental bjam failed to force a test rebuild and run even though it should have -- thus I missed the error before I checked in. On the other side of the coin, I see stuff rebuild that I think should not be impacted by a change. I haven't spent the time to be sure that an unneeded dependency hasn't crept in, but it seems unlikely. Honestly, I don't understand how this started happening because I haven't rebuilt bjam in ages... Jeff

"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
I don't know what you mean by that. If it's header files you were changing, then discovering that they are relevant to the build depends on Boost.Build's ability to detect that they are #included (possibly indirectly) in something that is named in a Jamfile. That could be thwarted by #include SOME_MACRO(...), of which we have many examples in Boost. Fixing that correctly might involve invoking the compiler to do just a preprocessing phase, capturing the output, and analyzing the results before shipping them off to the second phase of compilation. Some compilers like g++ can dump dependencies themselves during ordinary compilation; that would be much faster. If you were changing source files listed directly in a Jamfile and there was no recompilation, or the headers were #included directly, then we have a deeper problem.
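For concreteness, a minimal sketch of the pattern being described -- the macro name below is invented, but the shape is the same as Boost.Preprocessor-style computed includes. A textual header scan finds the first include and misses the second, while a compiler-generated dependency dump (e.g. g++ -MM) sees both:

    // The scanner can find this dependency by looking at the source text:
    #include <string>

    // This one only exists after macro expansion, so a purely textual scanner
    // misses it. The macro name is hypothetical; it expands to <vector> here
    // just so the snippet compiles, but in Boost it would typically be a
    // config/detail header chosen at preprocessing time.
    #define COMPUTED_HEADER() <vector>
    #include COMPUTED_HEADER()

    int main()
    {
        std::vector<std::string> v;   // uses the computed include
        v.push_back("ok");
    }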
As I said, dependency analysis is currently incomplete.
On the other side of the coin, I see stuff rebuild that I think should not be impacted by a change.
I've never seen that, except inasmuch as dependency analysis is also conservative. So it can't currently detect that a change like

    #ifdef __GNUC__   // <== change made here
    #endif

shouldn't cause MSVC to recompile anything. Likewise, changes in comments aren't distinguished.
The build system is composed of a collection of _interpreted_ .jam source files that control its behavior. Rebuilding bjam needn't have anything to do with it. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Mon, 21 Mar 2005 03:42:53 -0500, David Abrahams wrote
What I was trying to say was I changed a header file that is unaffected by a macro and obviously included...see below, however...
Now that I've slept on it, that's probably what happened. A while back we did add some changes that conditionally include entire files based on a macro. But it's of the form

    #ifdef SOME_MACRO
    #include file1
    #else
    #include file2
    #endif

which didn't immediately match the pattern above in my brain ;-)
No, haven't seen that.... Jeff

"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
No, that's a different story. In that case BB is supposed to be conservative, and act as though both file1 and file2 were included. If you can reproduce it, I'd like to see that. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Jeff Garland wrote:
Do you know when it started happening? I know that at one point before the 1.32 release there was a change that broke the regex that's used for the scanning in BBv1. Somehow the tabs in the regex got translated to spaces, which made for a very picky scanner :-( PS. Let that be a warning to all: if you see real TABS in the BB .jam files, *don't* "fix" them. They are usually intentional. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

On Mon, 21 Mar 2005 21:09:09 -0600, Rene Rivera wrote
That would have been about the time -- and there's a decent chance I haven't updated that part of my tree. I tend not to perturb the rest of Boost in my maintenance tree. I'll update and let you know if my behavior is better... Thx, Jeff

David Abrahams writes:
From http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Boost.Testing:
* Incremental testing is not reliable:
  * Tests marked as expected-to-fail are rerun. There is no point in rerunning tests if the library is marked as unusable or the test is marked as expected to fail on a particular toolset. BBv1 running in testing mode should accept a list of the tests which are disabled.
  * Obsolete tests (tests which do not exist any more) are still included in the test results. Tests which have been removed still have their test results in the component directories.
  * Jamfiles/rule files are not included as dependencies.
  * bjam doesn't track dependencies if they were included as #include MACRO.
-- Aleksey Gurtovoy MetaCommunications Engineering

Jeff Garland wrote:
My goal is to have it doing Linux regressions (gcc-release) by next week. I'm taking it carefully, and hence slowly, as it's crucial to reduce the chances of the test system breaking. So I make some changes and let the thing run for a day to see if anything strange happens. After it's running on my limited setup we can talk about expanding to other brave testers out there :-)
Nothing can solve every problem, unfortunately.
Proxies can solve most firewall problems, so I wouldn't worry too much about that. As for the requirements of running Buildbot itself, they are equivalent to those of using the current regression.py script. But yes there will be issues just getting the setup working.
At minimum, with Buildbot you can see the build log live. So if the build you triggered only builds a small part of the overall Boost, then you'll get to see results almost immediately. Obviously I'm making two assumptions here: that we can resolve some of the incremental testing problems, and that your changes don't cause a Boost-wide rebuild the way changing something like type_traits would.
Definitely. I made the suggestion earlier that we should break up the testing so that some testers can devote resources to only testing subsets of Boost: http://permalink.gmane.org/gmane.comp.lib.boost.testing/392 (I know it's a long post.. The scalability section is what I'm referring to.) -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

On Mon, 21 Mar 2005 20:47:56 -0600, Rene Rivera wrote
Great!
Proxies can solve most firewall problems, so I wouldn't worry too much about that.
Ok, I'll accept that it might work if the slave goes outbound to the master to connect. I don't really understand the architecture, but don't worry about explaining it...I'll read the docs when I get a free moment.
No problem on the length -- somehow I missed this mail completely. Obviously you and I agree on the need to split things up. I (with others) have suggested before that a big help would be splitting out 'dll' versus 'static' testing. I've also suggested we consider standardizing different testing levels -- 'basic' vs. 'exhaustive', etc. I won't rediscuss it all, but I think there are other things we can do to improve testing scalability... http://lists.boost.org/MailArchives/boost/msg64471.php Jeff ps: sorry, it's a long thread with lots of back and forth ;-)

"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
I think it should also be available from an obvious link when you go to get information on or download the current release.
Okay, yeah; that would be an improvement.
This is part of why I think BuildBot is a good idea.
It probably isn't.
CVS?
We can set up a mailing list for it to send to, if you want to see those. But I don't think that would solve the problem by itself.
Yeah, I mean something very close to "right now."
Those irons are in the fire now; see my "Testing Farm" thread.
If we could initiate tests on a branch by request we wouldn't have this problem; he could run all those tests before merging.
I'd really like to avoid that.
Yup.
-- Dave Abrahams Boost Consulting www.boost-consulting.com

On Sun, 20 Mar 2005 18:19:33 -0500, David Abrahams wrote
I think it should also be available from an obvious link when you go to get information on or download the current release.
Sure.
I don't know of a CVS command that would give me all the changes to the repository in the last 12 hours (there may be one). So if something breaks and I didn't check anything in, I might want to find out who made the change and fire off a heads-up email to them. So I think the mailing list might have some utility. This might be particularly relevant to the folks running regressions, because if they see a library start to fail it would be much more obvious who's to blame...
Yes, but there will still be some limit. If we are 3 days from release and the test farm is humming away on both mainline and the release branch, we probably won't have infinite bandwidth to test developer branches. But I agree totally that regression testing on developer branches is a huge and powerful feature...
Why, what does it hurt? Core library developers just need to adjust their timeline thinking. Think of it this way: all releases have a certain cycle to them, and stabilizing the core is one phase. So the release timeline looks something like:
  -45 days: core library freeze
  -30 days: last new library added
  -15 days: branch for release
  -2 days: all code frozen (doc changes ok)
Back in the ideal world, I'd like to see Boost releases become really inexpensive in terms of time. In combination with some of the other things we are discussing, we should be able to normalize the process to the point where the length of the release timeline is very short -- I'd say 15 days or less. We have a long way to go to get there though... Jeff

"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
cvs diff -D "12 hours ago" -D now
Not infinite; just "enough."
Extra overhead; more to manage. I'd rather have an automated system that "just works" (yeah, right ;->)
Back in the ideal world, I'd like to see Boost releases become really inexpensive in terms of time.
Agreed.
Yep. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Sun, 20 Mar 2005 22:17:00 -0500, David Abrahams wrote
Thx, that's handy to know...
Well, I don't see any overhead or any more to manage. We don't have to keep running some tool. We pretty much know what the core libs are. Make it policy, tell the developers when they need to freeze, and trust they will observe it. If they don't and break the build, well, they will face the wrath of the rest of us. I don't see why we couldn't make some reasonable agreement like this... Jeff

"Jeff Garland" <jeff@crystalclearsoftware.com> writes:
Well, according to your own post, there would be "a dependency analysis of Boost and an added freeze date for 'core libraries' that need to stay stable during the release process," "core library developers just need to adjust their timeline thinking", and there's an additional step in the release process called "core library freeze."
We could. I thought we sorta did make such a tacit agreement. But it's always nicer when processes allow us to avoid such restrictions. Well, you may have a point anyway; any change that's likely to cause all tests to rebuild should be minimized late in the release cycle. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
But Buildbot can't solve it all on its own. I'm sure that many Boost users would like to help out with resources, but they only have a small amount of CPU time to give. So running tests for 5-8 hours is preventing us from getting resources we could otherwise have. Letting people test smaller parts of Boost is one way to entice those people to donate such resources.
There's already a mail list it's sending to, and one can subscribe to it. But no, it would not solve the problem. Being able to see the check-ins doesn't help if you can't match them to test failures.
Yes. And this is something that could be done with Buildbot. Currently there's a very simple "request a build" page, which could be expanded to allow specifying the branch and library to rebuild. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

Jeff Garland writes:
Same for a developer, actually, who usually wants to track only a couple of libraries. The problem with such a scheme is that it by definition bumps up the requirements on the reports' hosting site. Right now the pages are nothing but plain HTML. Ideally, to handle the kind of dynamic requests you describe in real time we'd need a database backend, and that severely limits where the reports can be hosted. If we as a group decide that the benefits of having something like this outweigh the downside, it can be pulled off quite easily.
Suggestions are welcome. -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy wrote:
I'm not sure having such a set of dynamic result pages would help. It's nice to be able to answer the question "I'm using _x_ compiler and _y_ and _z_ libraries, does it work?". But that can just as easily be answered with reports that are limited to the platform and compiler. I've been in the situation of checking the results to see if what I'm using (smart_ptr, regex, spirit, threads, etc.) works for Linux+GCC-3.2.3. The user report I would rather have seen is one that lists all the results for that one platform. I think the big result grid and the results-by-library view only really help the library developers. So if we are wishing for things to happen, I'd say change the user reports so that they present results for a single toolset individually. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

BTW, I was only suggesting this for the release results, not the daily builds. For the release, the idea would be to post-process the XML data into a pre-processed data set that could be coupled with a simple form written in something like PHP to take the query and display the results. That way we wouldn't be bumping up the hosting requirements, but we would be improving the user experience. I'm certain this wouldn't be too hard to do...
On Mon, 21 Mar 2005 21:33:43 -0600, Rene Rivera wrote
Sure that would help, but as I recall I was envisioning an 'ideal world' ;-) Jeff

Jeff Garland wrote:
In an ideal world all compilers would be standards-compliant, always produce optimal code, do it in no time flat, give specific error messages understood by all, automatically suggest ways to improve your code... Oh, ahh, what a wonderful dream I was having :-) I'll take any world that improves things -- truly I would! -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq

Rene Rivera writes:
I'm not sure how typical it is, though. For instance, I know that, as Boost users, here at Meta we care about more than one platform and more than one compiler on each platform, and I'm sure that we are not the only multi-platform folks out there.
I think that the big result grid, and the results by library only really help the library developers.
One advantage of a "big grid" is that when things are going well (and when we release, they are :), it inspires significant confidence in the quality and portability of the libraries. For instance, as a user, I find this one very inspiring: http://www.meta-comm.com/engineering/boost-regression/1_32_0/developer/summa...
I'd hate to lose a user-oriented "big picture", but if we are targeting primarily _release_ user reports, we can have both. -- Aleksey Gurtovoy MetaCommunications Engineering

On Tue, 22 Mar 2005 08:46:07 -0600, Aleksey Gurtovoy wrote
No doubt.
http://www.meta-comm.com/engineering/boost-regression/1_32_0/developer/summa... I find this view very misleading. It glosses over the fact that the developer has indicated some parts of a library may not be available for a particular compiler/platform. As an example, it might lead you to believe that all features of date_time are available on gcc 2.95.3, which just isn't the case.
Agree -- I just need to find time to write my little script ;-) Jeff

Jeff Garland writes:
Well, yes, it's a developer view. My main point still applies.
This would have been shown in the user report; it's just that, as I said before, the user reports are currently in flux and we don't have the corresponding picture which I could post a link to. -- Aleksey Gurtovoy MetaCommunications Engineering

David Abrahams writes:
Meaning a user-oriented report showing whether she can use a specific library on a specific platform, or something else?
- Regressions from the previous release are nice to know but less important.
I disagree. They are crucial for current Boost users, in particular in deciding whether or not to upgrade.
I realize we show both in one report, but this may help us adjust our emphasis or coloring
? We haven't yet established that we _want_ to adjust our emphasis.
(maybe it's already perfect in the user report; I don't know)
I'm sure it's not perfect (and the user reports are currently in flux), but it's our current understanding that the needs of developers and users are different enough to warrant different emphasis/coloring/etc.
- A health report for the current state of the repository should always be available on the website.
I submit that a "health report" without regressions/explicit markup information is useless. What's your use case for it?
- Regressions from the previous release are crucial to know also
Yes.
Right now it's impractical, but maybe with enough resource donations this is going to change.
Agreed 100%.
- There should be a way for a developer to request testing of a particular branch/set of revisions
This can easily get out of control, though. How do we ensure that our resources aren't all used up testing something on some branch, and that the main trunk still gets tested on a regular basis?
- There should be enough computing power to handle all these tests in a timely fashion.
Right, and some mechanism to make sure that when that's not the case, the mainstream testing gets priority. -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
Yes.
Okay, I buy that.
Chill, my friend. I only meant, "should we decide it's necessary."
Agreed.
None.
Prioritization, lots of computing resources, and incremental rebuilds.
Sure. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Aleksey Gurtovoy wrote:
David Abrahams writes:
The easiest way, assuming we already have the resources, is to reserve certain testers for different types of builds: have a core of test setups dedicated to mainline testing, while others are dedicated to on-demand branch testing.
I think the key here is that we need to manage the testing resources themselves. We need to decide what the priorities are for the various testing chores we have and reflect that in the availability of the resources. At minimum, if we want to have mainline, release-branch, and on-demand testing, then it means replicating resources for those three cases. I guess it would be possible for one resource to do both mainline and release-branch testing. But on-demand testing is something that conflicts directly with other types of testing. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq
participants (7)
- Aleksey Gurtovoy
- Beman Dawes
- David Abrahams
- Jeff Garland
- Peter Dimov
- Rene Rivera
- Stefan Slapeta