On 08 Oct 2015, at 22:06, Edward Diener
wrote: On 10/8/2015 1:46 PM, Bjørn Roald wrote:
On 04 Oct 2015, at 14:49, Raffi Enficiaud
wrote: On 04/10/15 13:38, John Maddock wrote:
On 04/10/2015 12:09, Bjorn Reese wrote:
As many others have said, Boost.Test is "special" in that the majority of Boost's tests depend on it. Even breakages in develop are extremely painful in that they effectively halt progress for any Boost library which uses Test for testing.
This sort of problem has been discussed before on this list without any real progress. I think a solution is needed that gives Boost tools maintainers (Boost.Test is also a tool) services similar to those that library maintainers enjoy. Such a solution could also provide better test services for all Boost developers, and possibly for other projects. One possible way forward, a test_request service at boost.org/test_request, is outlined below.
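To make the idea concrete, here is a purely hypothetical sketch of what a submitted test request might contain. The endpoint does not exist, and every field name below is an illustrative assumption, not part of any current boost.org API:

```python
import json

# Hypothetical test request payload for a boost.org/test_request service.
# All field names and values here are invented examples for illustration.
test_request = {
    "repository": "boostorg/test",        # repository under boostorg to test
    "ref": "topic/my-fix-branch",         # branch or commit to check out (hypothetical name)
    "libraries": ["test", "math"],        # limit testing to these libraries
    "toolsets": ["gcc-5", "msvc-14.0"],   # compilers the developer wants covered
    "requester": "developer@example.org", # identity, for access control and reporting
}

# The request could be serialised to JSON for transport to the service.
payload = json.dumps(test_request, indent=2)
print(payload)
```

The key point the sketch illustrates is scoping: a request names specific libraries and toolsets, so runners do not rebuild and retest all of Boost for every change.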
I would like thoughts on how useful and feasible such a service would be. These are some questions I would like to have answered:
- Will library maintainers use a boost.org/test_request service?
- How valuable would it be, compared to merging to develop and waiting for the current test reports?
- How much of a challenge would it be to get test runners (new and old) on board?
- How feasible is it to set up a service as outlined below by modifying the current Boost regression testing system?
- What alternatives exist that provide the same, or better, value to the community, hopefully with less effort? E.g. can Jenkins or other such test dashboards/frameworks easily be configured to provide the flexibility and features needed here?
[removed most of the message; see original post]
I think what you have written is extremely valuable, but you may underrate the need for individual developers to be able to test their library, or any other Boost library, on their local machine using a testing environment that is part of Boost. Currently that testing tool is usually either Boost.Test or lightweight test.
Thanks. I am not trying to underrate local testing. Local testing must be simple and should in general be performed before using a test request service or other remote testing, just as today. The practical question is on how many target platforms Boost developers can feasibly test locally at a reasonable cost in hardware, software licenses, and time. Local testing should clearly be performed at least on the development platform, preferably with more than one compiler. Ideally the tools for local testing (test libraries, frameworks, dashboards, reporting, virtualisation) would be improved to the point where remote testing is not needed at all, but that is probably not feasible. Hence the need to support remote testing for developers in a flexible and efficient way: it fills the holes that local testing cannot or will not fill; it does not replace local testing.
I am not against testing tools outside of Boost in general, but they would have to be coordinated in such a way that any end-user of a Boost library has easy access to them.
I am not sure I follow, but I see this sort of service as being as much, or as little, a part of Boost as the current regression test runners and report generators are. Some sort of access control limiting it to changes in boost.org repositories and to Boost developers is likely needed, if for nothing else then to ease the security concerns of the test runners. Clearly this could be expanded into something more general, beyond the scope of the boost.org repositories, but the main issues with that are the computing resources at the test runners and the willingness of the test runners to serve a more general cause. That complicates things too much, I think, at least as a starting goal; it is probably a job for some other organisation, possibly using the same or similar tools. Boost users should use the local tools at their disposal to test that Boost works on their target platform, as they do today. Nothing changes there; they most likely have the computing resources to do that, and if they do not, it is not Boost's mission to fix that. The test programs that come with Boost are available to them to make this simple, and hopefully those will improve as well, since they will be exercised heavily by test requests.
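The access control and resource concerns above could be sketched as a simple server-side admission check. Everything here, the limits, the function name, the rules, is an invented illustration of the kind of policy being discussed, not an existing mechanism:

```python
# Hypothetical admission check for incoming test requests.
# The limit value and the policy rules are assumptions for illustration.
MAX_TOOLSETS = 5  # cap fan-out so one request cannot exhaust runner time

def accept_request(repository: str, toolsets: list) -> bool:
    """Return True if a hypothetical test request service would accept this request."""
    if not repository.startswith("boostorg/"):
        return False  # restrict the service to boost.org repositories
    if len(toolsets) > MAX_TOOLSETS:
        return False  # reject requests that fan out to too many runners
    return True

print(accept_request("boostorg/test", ["gcc-5"]))  # True
print(accept_request("other/repo", ["gcc-5"]))     # False
```

Such a gate addresses both points at once: it eases the security concerns of the runners by scoping the service to boost.org repositories, and it protects the runners' limited computing resources from overly broad requests.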
I don't think we can reduce testing Boost libraries solely to test runners which are part of some online service.
Agreed.
Nonetheless I would welcome online testing services which would automate the regression testing of Boost libraries on the appropriate branches (currently 'develop' and 'master'). This would remove the onus of testing and resources from individual testers and would provide a much wider range of operating systems/compilers/versions than we currently have.
That is the idea: basically putting control of "what to test" remotely in the hands of the individual developer, not only in the bot managing the boost.org develop and master branches. The challenge, I believe, is that the added flexibility will reduce structure and can easily exhaust the test runner resources unless it is under some sort of control. It may be too easy to post test requests that invoke compilation and testing of parts of Boost you do not need to test, and on more runners than you need.
— Bjørn