
On 28/09/11 09:47, Christopher Jefferson wrote:
On 28 Sep 2011, at 04:42, Ben Robinson wrote:
On Tue, Sep 27, 2011 at 11:09 AM, Dave Abrahams <dave@boostpro.com> wrote:
On Tue Sep 27 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Now if you can come up with another approach to test these expectations, I'd be happy to listen.

We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.
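The thread doesn't show MyComponent's definition; a minimal sketch of the kind of constraint under discussion, assuming a static_assert-based prohibition (the trait and message here are illustrative), might look like:

    #include <type_traits>

    // Hypothetical component that statically rejects integral type
    // parameters; only the expectation "MyComponent<int> is
    // prohibited" comes from the thread.
    template <typename T>
    struct MyComponent {
        static_assert(!std::is_integral<T>::value,
                      "MyComponent<T>: integral T is prohibited");
    };

    MyComponent<double> ok;  // accepted by the compiler
    // MyComponent<int> bad; // must fail to compile -- verifying that
    //                       // it does is what needs test-system support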
Can you elaborate on this in greater detail? Currently, if I want to prove that a static assertion fails, my admittedly cumbersome technique is to uncomment the test for that condition, compile to produce the error, then re-comment the test out. This becomes very tedious for large numbers of regression tests.

Because we have quite a lot of these kinds of tests, we have added the following to our private tester code:
If you wrap code in:
    #ifdef DM_FAILING_CODE_UNIQUEID
    ...
    #endif

(where you can change UNIQUEID to different values)
(We find these by grepping. The #ifdef also makes it more likely we'll notice a mis-spelling of the macro: if the name is wrong, the expected compile failure never happens and the tester reports it.)
Then the tester does:
compiling file.cc on its own should pass; compiling with each of the DM_FAILING_CODE_UNIQUEIDs turned on should fail.
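A minimal sketch of a test file using this scheme (the file name, the component, and the macro suffixes are illustrative, not from the thread):

    // compile_fail_test.cc
    //
    // The tester compiles this file once with no macros defined, which
    // must succeed, and then once per DM_FAILING_CODE_* macro, e.g.
    //     c++ -c -DDM_FAILING_CODE_INT_PARAM compile_fail_test.cc
    // each of which must fail.

    #include <type_traits>

    template <typename T>
    struct MyComponent {
        static_assert(!std::is_integral<T>::value,
                      "integral type parameters are prohibited");
    };

    MyComponent<double> fine;  // always compiled; must be accepted

    #ifdef DM_FAILING_CODE_INT_PARAM
    MyComponent<int> rejected_int;   // compiled only when the macro is
                                     // defined; an error is expected
    #endif

    #ifdef DM_FAILING_CODE_BOOL_PARAM
    MyComponent<bool> rejected_bool; // a second, independent expected
                                     // failure in the same file
    #endif

Several expected failures can thus live in one source file, which is the compactness being argued for.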
Hi,

this doesn't solve the issue: even if you have fewer files, you will still have just as many tests.
I don't know if such a thing would be interesting to Boost. It would seem much simpler, while still allowing one to write compact compile-time tests, rather than needing many, many files.
Boost authors are free to organize their tests using this technique, but I don't think it makes analysis any more convenient when a failure occurs.

Best,
Vicente