
At 12:35 PM 2/12/2004, Martin Wille wrote:
Beman Dawes wrote:
I don't think smaller tests are a good idea. I'm asking for "more comprehensive" tests.
Say a library now covers 100 test cases spread out among five test programs. I'd like to see those five programs refactored into one program, still covering the 100 test cases. Or even adding more test cases. A single program would cut the overhead, including the human overhead.
There are at least three drawbacks to this approach:
1. "Something is wrong" is all the information you get from a failing test program. In particular, you will likely see only one of several problems in it; the next problem becomes visible only after the first has been fixed.
That's only correct for new, immature code. Many Boost libraries are now mature. They pass all tests, except in exceptional circumstances.
2. Some tests are known to fail for certain compilers. If those tests are merged with other tests, we'll lose information about those other tests.
Most compilers are now passing close to 100% of all tests. Hopefully, with the next round of compiler updates, they will be passing every test. Granularity on 100% passes brings no benefits.
3. Compile time may become very long for large test programs or heavy template usage. For example, we once had to split a test into three (Spirit's switch_p tests) to keep testing feasible.
It is hard to know the overall effect without accurate timings. My personal belief is that on average the total time will drop. But we need timings to know for sure. --Beman