On Wed, Oct 25, 2017 at 3:36 PM, Vinnie Falco via Boost <boost@lists.boost.org> wrote:
On Wed, Oct 25, 2017 at 12:12 PM, James E. King, III via Boost wrote:
Doesn't this defeat the purpose of using AppVeyor and Travis, or do you have additional tests that take too long for CI builds to be useful? If you keep some tests to yourself, the process of vetting new code changes from outside becomes unmaintainable.
My motto is "Trust in God, but always run the tests locally." There are security considerations here, so I make sure my workflow always brings external submissions through my own machine, where I can inspect them with a debugger and my own two eyes. I am strongly opposed to any "farm to market" system, since that could lead to laziness.
Sounds like "write once, debug everywhere". :)

This development philosophy makes sense to me for a new project, but it does not scale into maintenance mode, when other people are typically brought in. For example, if the Community Maintenance Team (CMT) picked up a project that had no CI builds and no code-coverage measurement on its unit tests, and did not have the secret formula the maintainer was using to vet code changes, it would have to reinvent the wheel. If, however, the original author sets up a robust CI environment with code-coverage metrics, the CMT would understand, at a minimum, what is expected of a quality submission. This can be further improved with static code analysis (cppcheck, Coverity Scan) as well as -fsanitize builds and valgrind-type builds.

What I said previously is not intended as a replacement for a thorough code review. Someone should always inspect every line of changed code for security, practicality, and performance issues. But if we automate as much of our baseline quality requirements as possible, the entire codebase becomes easier to maintain going forward.

We are certainly aligned on delivering a quality product, whatever the method. I think everyone would agree that is good for any project.

- Jim
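[Editor's note: as a minimal sketch of the kind of -fsanitize build mentioned above (file name and code are hypothetical, not from the thread), AddressSanitizer turns a silent out-of-bounds read into a hard failure that a CI job can catch automatically:]

    // sanitizer_demo.cpp -- hypothetical example, not from the original thread.
    // Build with:
    //   g++ -std=c++17 -g -O1 -fsanitize=address,undefined sanitizer_demo.cpp -o sanitizer_demo
    // Running ./sanitizer_demo makes AddressSanitizer report the out-of-bounds
    // read below and exit non-zero, so a CI job fails instead of passing silently.
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v(3, 0);
        int sum = 0;
        for (std::size_t i = 0; i <= v.size(); ++i)  // off-by-one: touches v[3]
            sum += v[i];                             // ASan flags this access
        std::cout << sum << '\n';
        return 0;
    }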