
I think the important question here is which mode we want.
One mode is where you have a feature test as a .cpp file that tries to use "important" functions from a library, and then you declare a Boost.Build metatarget that builds that .cpp and links it to the library.
What is the "important" part? I've noticed that in many cases, when you try to do "better" checks that include a piece of real code and test full compilability, you end up with a test that fails for 101 unrelated reasons. So unless you really have a very strict specification of what you need, just test that the library can be linked or that the header can be included, or even that it simply exists. Sometimes simpler (at least by default) is just better.
In this case, if you modify the .cpp, the regular dependency checking will rebuild the test. The test will also be repeated if the system headers for the library change -- a particularly nifty example is including "whatever/version.hpp" and making some decisions based on the version. Some changes to the way libraries are linked would also allow the test to be rebuilt when the system library itself is modified.
I'm not aware of any build system that does this. Even autoconf would not try to "refind" something after it has changed. Also, in autotools there is a separation between the configuration part (configure.in) and the declaration part (Makefile.am), so once you change configure.in, all configuration steps are rerun.
Another mode is where a configure check tries to compile or link something and caches the result. One can remove the cache and rerun all the checks, but otherwise the system assumes nothing has changed.
Actually, both autotools and CMake cache results, as it is infeasible to reconfigure the system after every change.
I, personally, am not very happy with the second mode as implemented in CMake.
The problem with CMake is that it remembers positive checks, which may sometimes be incorrect, while all negative ones it reruns each time. I'm not happy with it either, but I don't think the suggested approach is feasible either. For example: bjam checks whether a library can be linked; it can't, so this is marked. I then install the library and run the build again: bjam remembers the "missing" link and does not recheck it. On the other hand, you don't want to check all dependencies every time you run bjam, as the tests may take a huge amount of time. So basically CMake's and auto*'s approaches are quite similar, even though CMake's cache is a little more "preserving", for good and bad.
However, it might be that the best mode depends on the nature of the test, and we need to support both. Comments?
No, I don't think you need to support all possible ways; just do it one way that works, well documented and consistent.
----------------
A small thing I noticed: many build systems, including BBv2, are very oriented toward making the "build" part simple and straightforward. And indeed, in all of them (OK, maybe not autotools) you can build a simple project with 2-3 lines of code. What most of them are missing is a way to configure the system easily, to perform fast and easy environment checks. In this respect autoconf did a very, very good job, and CMake is quite good as well...

My $0.02,
Artyom