Unit test capability for meta-programs: feedback request

This submission would make it possible to write a complete set of unit tests for meta-programs, covering both the positive, compiling statements and the negative, non-compiling statements. These tests all compile, and the negative tests can throw an exception instead of failing to compile. The submission source is available at https://github.com/icaretaker/Metatest. The provided examples are based on the factorial metafunction from Chapter 8 of "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy - http://www.boostpro.com/mplbook/. Chapter 8 explains the rationale behind the BOOST_MPL_ASSERT_* macros. This submission complements those macros by allowing regression unit tests to be written, ensuring the user will encounter the MPL-formatted compiler error if the library types are incorrectly instantiated.

Currently, many meta-programs are written using one of the BOOST_MPL_ASSERT_* macros. Unit tests for these meta-programs must of course compile, so unit tests for them can only verify that valid types produce the correct run-time behavior. This submission provides the following four complementary macros:

BOOST_METATEST(pred)
BOOST_METATEST_NOT(pred)
BOOST_METATEST_RELATION(x, rel, y)
BOOST_METATEST_MSG(cond, msg, types)

Meta-program authors can use these macros to assert predicates/relations/conditions exactly as done with BOOST_MPL_ASSERT_*. In shipped library code, these four macros forward directly to their BOOST_MPL_ASSERT_* counterparts, since the BOOST_MPL_ASSERT_* macros do an excellent job of providing maximally informative compiler error messages. In unit test code (where BOOST_METATEST_RUNTIME must be defined), these macros will instead instantiate an object which throws a runtime exception if the predicate/relation/condition fails.
Meta-program authors can now write unit tests like the following:

BOOST_AUTO_TEST_CASE(factorials_should_not_be_negative) {
    BOOST_CHECK_THROW(factorial<mpl::int_<-1> > instance, metatest_exception);
}

As you can see, this unit test validates that the factorial meta-function would in fact fail to compile. All exceptions are subclassed from boost::metatest_exception. If the user would instead like to supply their own exception types for individual assertions, one of the following four macros can be used instead:

BOOST_METATEST_EXP(pred, exp)
BOOST_METATEST_NOT_EXP(pred, exp)
BOOST_METATEST_RELATION_EXP(x, rel, y, exp)
BOOST_METATEST_MSG_EXP(cond, msg, types, exp)

Another feature of this library is that it is independent of any unit testing framework, provided the framework can catch exceptions by type. The provided examples show how this is done with Boost.Test. The submission files also show how to use try/catch directly, should your unit test framework not have such a capability.

I look forward to your feedback.

Thank you,
Ben Robinson, Ph.D.
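Since the submission's internals are only described in prose above, a compilable sketch may make the dual-mode behavior concrete. Everything below (METATEST, metatest_checker, the lazily recursing factorial) is hypothetical stand-in code, not the actual Metatest source; it only mirrors the described behavior: a hard compile-time error in shipped builds, a throwing member in unit-test builds.

```cpp
#include <cassert>
#include <exception>
#include <type_traits>

// Hypothetical stand-in for boost::metatest_exception.
struct metatest_exception : std::exception {
    const char* what() const noexcept { return "metatest assertion failed"; }
};

#define BOOST_METATEST_RUNTIME  // a unit-test build would define this

#ifdef BOOST_METATEST_RUNTIME
// Test builds: the macro declares a member whose constructor throws
// when the predicate is false, instead of breaking the build.
template <bool Pass>
struct metatest_checker {
    metatest_checker() { if (!Pass) throw metatest_exception(); }
};
#define METATEST(...) metatest_checker<(__VA_ARGS__::value)> metatest_member_
#else
// Shipped builds: a hard compile-time error, like BOOST_MPL_ASSERT.
#define METATEST(...) static_assert((__VA_ARGS__::value), "metatest failed")
#endif

template <int N> struct fact_step;  // defined below

// factorial<N> guarded by the dual-mode assertion.
template <int N>
struct factorial {
    METATEST(std::integral_constant<bool, (N >= 0)>);
    // Lazy branch selection keeps factorial<-1> from recursing forever:
    // fact_step<N> is only instantiated when N > 0.
    typedef typename std::conditional<(N <= 0),
        std::integral_constant<long, 1>,
        fact_step<N> >::type selected;
    static const long value = selected::value;
};

template <int N>
struct fact_step : std::integral_constant<long, N * factorial<N - 1>::value> {};
```

Constructing `factorial<4>` succeeds, while constructing `factorial<-1>` throws instead of killing the build, which is the behavior the submission describes for its negative tests.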

on Tue Sep 27 2011, Ben Robinson <icaretaker-AT-gmail.com> wrote:
This submission would make it possible to write a complete set of unit tests for meta-programs, to test both the positive, compiling statements, and the negative, non-compiling statements. These tests will all compile, and the negative tests can throw an exception instead of failing to compile.
I think you're over-promising here. You can't turn all non-compiling code into compiling code by wrapping it in something. Some errors are hard errors and there's nothing you can do about it without modifying the code. If I write a metaprogram component that is designed to generate a compilation error if it is mis-used, and then I write a test to prove that it does generate that compilation error, and you can somehow turn that into a test that throws without modifying my component... you are a god. [You may still be a god, but not because you solved this problem ;-)]

I think what you mean to say is that it tests both positive and negative assertions without responding to failed assertions by generating a compilation error.

Personally I am very uncomfortable with the use of exceptions to deal with failed assertions, and in a test that's just a bunch of compile-time assertions, I don't see *any* advantage whatsoever in using exceptions.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
on Tue Sep 27 2011, Ben Robinson <icaretaker-AT-gmail.com> wrote:
This submission would make it possible to write a complete set of unit tests for meta-programs, to test both the positive, compiling statements, and the negative, non-compiling statements. These tests will all compile, and the negative tests can throw an exception instead of failing to compile.
I think you're over-promising here. You can't turn all non-compiling code into compiling code by wrapping it in something. Some errors are
He can. He essentially ifdefs it out. If I understand correctly, in a few words the idea comes to this. Imagine you develop a template component:

template<...>
class MyComponent {
    BOOST_MPL_ASSERT((condition));
    ...
};

Here "condition" is supposed to check at compile time some condition the template arguments are expected to satisfy - in other words, a concept. Imagine now that you had written this class like this:

template<...>
class MyComponent {
    typedef condition concept;
    ...
};

Now this version does not check anything at compile time, BUT somewhere else in the code you can write SOME_ASSERT((MyComponent::concept)). This assert can be compile-time OR runtime - which is not really important, but in a test environment we usually want a runtime error. We can even wrap it using a macro like this:

#ifdef UNITEST
#define TESTABLE_ASSERT( cond ) typedef cond concept;
#else
#define TESTABLE_ASSERT( cond ) BOOST_MPL_ASSERT((cond))
#endif

and use it in your class:

template<...>
class MyComponent {
    TESTABLE_ASSERT((condition))
    ...
};

In regular builds this is identical to BOOST_MPL_ASSERT. In a test environment the check is delayed. Why do we want to delay? See below.
Personally I am very uncomfortable with the use of exceptions to deal with failed assertions, and in a test that's just a bunch of compile-time assertions, I don't see *any* advantage whatsoever in using exceptions.
This is the usual deal with unit tests: you want to test all the expectations. Imagine you expect that your component does not work with int, meaning MyComponent<int> should fail to compile. How can you record and test this expectation? In the original version - no way to do this. Your only option is to put some test statements into the test module and comment them out.

Now imagine that you or someone else changes the implementation of the component and suddenly MyComponent<int> compiles. Your original expectation is broken, and yet your test module does not notify you about it. It goes unnoticed until very late in the project, when some third party mistakenly starts using MyComponent<int> and gets incorrect runtime numbers, only because they were supposed to use MyComponent<double> instead.

With the approach above these expectations are testable. You define the macro UNITEST, use TESTABLE_ASSERT in your development, and that's it. Now if you can come up with another approach to test these expectations I'd be happy to listen.

That said, I am not sure this whole deal deserves a separate set of macros (it maybe deserves a page in the docs explaining the approach), but frankly I did not look at what the OP proposed in detail - there might be something more there.

Regards,
Gennadiy
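Gennadiy's TESTABLE_ASSERT idea can be sketched in self-contained form. This is an approximation using standard C++11 type traits in place of MPL: the nested typedef is renamed concept_check (since "concept" is now a keyword), UNITEST is his macro name, and the floating-point requirement on MyComponent is an illustrative choice, not from his post.

```cpp
#include <cassert>
#include <type_traits>

#define UNITEST  // a test build would define this

#ifdef UNITEST
// Test builds: record the condition as a nested typedef; check it later.
#define TESTABLE_ASSERT(cond) typedef cond concept_check
#else
// Regular builds: hard compile-time error, as with BOOST_MPL_ASSERT.
#define TESTABLE_ASSERT(cond) static_assert(cond::value, #cond)
#endif

// A component meant to work only with floating-point types.
template <class T>
struct MyComponent {
    TESTABLE_ASSERT(std::is_floating_point<T>);
    T twice(T x) const { return x + x; }
};
```

With UNITEST defined, MyComponent<int> instantiates cleanly, and the delayed check `MyComponent<int>::concept_check::value` reports false at runtime, which is exactly the expectation a unit test wants to record.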

on Tue Sep 27 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
on Tue Sep 27 2011, Ben Robinson <icaretaker-AT-gmail.com> wrote:
This submission would make it possible to write a complete set of unit tests for meta-programs, to test both the positive, compiling statements, and the negative, non-compiling statements. These tests will all compile, and the negative tests can throw an exception instead of failing to compile.
I think you're over-promising here. You can't turn all non-compiling code into compiling code by wrapping it in something. Some errors are
He can. He essentially ifdefs it out.
Well, then it can't test anything, can it?
If I understand correctly, in a few words an idea comes to this:
Imagine you develop template component:
template<...> class MyComponent { BOOST_MPL_ASSERT((condition)) ...
};
These cases are easy to deal with, and are not what I'm talking about. If there's a compile-time bool to work with, one can turn that into anything one wants (compile-time error, runtime assert, etc.). Sometimes, however, there's no compile-time bool to work with. If I am the author of std::pair and I write:

// test that pair doesn't somehow convert unrelated types
// into values that can be used for construction
std::pair<int,int> x("foo", "bar");

I expect that test to fail compilation. There's no useful assertion you can do that will turn it into a runtime error.
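Dave's first observation - that a compile-time bool, once you have one, can be surfaced however you like - can be sketched as follows; is_small is a made-up condition for illustration only.

```cpp
#include <cassert>

// Any compile-time bool (here a hypothetical size condition)...
template <class T>
struct is_small {
    static const bool value = sizeof(T) <= 4;
};

// ...can be surfaced as a hard compile-time error (the MPL assert style):
static_assert(is_small<char>::value, "char must fit in 4 bytes");

// ...or surfaced at runtime, where a unit test framework can record it:
inline bool char_is_small() { return is_small<char>::value; }
```

His second point is the limiting case: for something like the std::pair construction above there is no such bool to extract, so no wrapper can defer the failure to runtime.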
Personally I am very uncomfortable with the use of exceptions to deal with failed assertions, and in a test that's just a bunch of compile-time assertions, I don't see *any* advantage whatsoever in using exceptions.
This is usual deal with unit tests: you want to test all the expectations.
Yes
Imagine you expect that your component does not work with int. Meaning MyComponent<int> should fail to compile.
Yes. This is the kind of case I'm talking about.
How can you record and test this expectation? In original version - no way to do this.
Hm? Original version of what?
Your only option is to put into test module some test statements and comment them out.
We do it today by having "expected compilation failure" (compile-fail) tests (a more robust system would test the contents of the error message, but that's another thing).
Now imagine that you or someone else changes implementation of the component and suddenly MyComponent<int> compiles. Your original expectation is broken. And yet your test module does not notify about it.
Not mine; I build a compile-fail test.
With the approach above these expectations are testable. You define macro UNITEST, use TESTABLE_ASSERT in your development and that's it.
I'm sorry, I read what you wrote above but don't see anything in there that would make this work.
Now if you can come up with another approach to test these expectations I'd be happy to listen.
We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
Sometimes, however, there's no compile-time bool to work with.
If I am the author of std::pair and I write:
// test that pair doesn't somehow convert unrelated types
// into values that can be used for construction
std::pair<int,int> x("foo", "bar");
I expect that test to fail compilation. There's no useful assertion you can do that will turn it into a runtime error.
Yes. There are always some implied/implicit expectations. And I also do not see any way to make it into a testable concept, since it belongs to a member function, unless we can come up with a means to attach a concept to the member function. In that case we would have checked:

std::pair<int,int>::pair<std::string,std::string>::concept
Personally I am very uncomfortable with the use of exceptions to deal with failed assertions, and in a test that's just a bunch of compile-time assertions, I don't see *any* advantage whatsoever in using exceptions.
This is usual deal with unit tests: you want to test all the expectations.
Yes
Imagine you expect that your component does not work with int. Meaning MyComponent<int> should fail to compile.
Yes. This is the kind of case I'm talking about.
How can you record and test this expectation? In original version - no way to do this.
Hm? Original version of what?
Of the MyComponent implementation in my first reply.
Your only option is to put into test module some test statements and comment them out.
We do it today by having "expected compilation failure" (compile-fail) tests (a more robust system would test the contents of the error message, but that's another thing).
Yes. There is this option, but it's hardly robust. You can't be sure it fails to compile for the reason you expect, and checking against compiler output is frankly madness. Not only is it different for different compilers, it also tends to change with every modification of the component implementation.
Now imagine that you or someone else changes implementation of the component and suddenly MyComponent<int> compiles. Your original expectation is broken. And yet your test module does not notify about it.
Not mine; I build a compile-fail test.
In practice there are few people who rely on these. Having a compilable, runtime-reportable and robust alternative would be of use for everyone else.
With the approach above these expectations are testable. You define macro UNITEST, use TESTABLE_ASSERT in your development and that's it.
I'm sorry, I read what you wrote above but don't see anything in there that would make this work.
In test code you'd define UNITEST on top and write something like:

BOOST_CHECK( !MyComponent<int>::concept::value )

This failure will be reported at runtime.
Now if you can come up with another approach to test these expectations I'd be happy to listen.
We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.
Again, very few users have a testing system smart enough even to recognize "expected compile failure" tests. And I personally would not use it if I can help it.

Gennadiy

on Tue Sep 27 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Sometimes, however, there's no compile-time bool to work with.
If I am the author of std::pair and I write:
// test that pair doesn't somehow convert unrelated types // into values that can be used for construction std::pair<int,int> x("foo", "bar");
I expect that test to fail compilation. There's no useful assertion you can do that will turn it into a runtime error.
Yes. There are always some implied/implicit expectations. And I also do not see any way to make it into a testable concept
My point is only this: Ben implied that he can turn every legitimate compile-fail test into a runtime test. He can't. Unless you are still arguing that he can, we have nothing further to argue about.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Dave Abrahams <dave <at> boostpro.com> writes:
My point is only this:
Ben implied that he can turn every legitimate compile-fail test into a runtime test. He can't.
I pretty much asked the same question in some other thread and got the same conclusion.
Unless you are still arguing that he can, we have nothing further to argue about.
The only point I was trying to argue is that there might be some valid use cases for runtime-testable compile-time concepts for the purposes of unit testing.

Gennadiy

on Tue Sep 27 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
My point is only this:
Ben implied that he can turn every legitimate compile-fail test into a runtime test. He can't.
I pretty much asked the same question in some other thread and got the same conclusion.
Unless you are still arguing that he can, we have nothing further to argue about.
The only point I was trying to argue is that there might be some valid use cases for runtime testable compile time concepts for the purposes of unit testing.
Like I said, we have nothing to argue about :-)

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Le 27/09/11 21:18, Gennadiy Rozental a écrit :
Dave Abrahams<dave<at> boostpro.com> writes:
Sometimes, however, there's no compile-time bool to work with.
If I am the author of std::pair and I write:
// test that pair doesn't somehow convert unrelated types // into values that can be used for construction std::pair<int,int> x("foo", "bar");
I expect that test to fail compilation. There's no useful assertion you can do that will turn it into a runtime error.

Yes. There are always some implied/implicit expectations. And I also do not see any way to make it into a testable concept, since it belongs to a member function, unless we can come up with a means to attach a concept to the member function. In that case we would have checked:

std::pair<int,int>::pair<std::string,std::string>::concept

Personally I am very uncomfortable with the use of exceptions to deal with failed assertions, and in a test that's just a bunch of compile-time assertions, I don't see *any* advantage whatsoever in using exceptions.

This is the usual deal with unit tests: you want to test all the expectations.

Yes

Imagine you expect that your component does not work with int. Meaning MyComponent<int> should fail to compile.

Yes. This is the kind of case I'm talking about.

How can you record and test this expectation? In the original version - no way to do this.

Hm? Original version of what?

Of the MyComponent implementation in my first reply.

Your only option is to put into the test module some test statements and comment them out.

We do it today by having "expected compilation failure" (compile-fail) tests (a more robust system would test the contents of the error message, but that's another thing).

Yes. There is this option, but it's hardly robust. You can't be sure it fails to compile for the reason you expect, and checking against compiler output is frankly madness. Not only is it different for different compilers, it also tends to change with every modification of the component implementation.

Now imagine that you or someone else changes the implementation of the component and suddenly MyComponent<int> compiles. Your original expectation is broken. And yet your test module does not notify you about it.

Not mine; I build a compile-fail test.

In practice there are few people who rely on these.

A lot of Boost authors rely on this technique.

Having a compilable, runtime-reportable and robust alternative would be of use for everyone else.

I recognize the utility of such a system; what I don't like is the complexity it introduces in the implementation. If someone finds a way to report these compile failures at runtime without needing to refactor the intended implementation too much, I will be the first to adopt it.

Now if you can come up with another approach to test these expectations I'd be happy to listen.

We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.

Again, very few users have a testing system smart enough even to recognize "expected compile failure" tests. And I personally would not use it if I can help it.
Having a build system that checks for failure of compilation is quite easy; it doesn't need anything particularly smart. In order to be more confident that the compile failure corresponds to the expected one, you could start with an archetype of the concept and test that it compiles. Then you can remove the requirements one at a time and check that the program no longer compiles.

Best,
Vicente
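Vicente's archetype suggestion can be sketched concretely. Component and archetype below are hypothetical names for illustration; the point is that the archetype supplies exactly the documented requirements, so the positive test compiling shows the component demands no more than documented, and deleting one requirement yields the matching compile-fail test.

```cpp
#include <cassert>
#include <cstddef>

// A hypothetical component whose only documented requirement on T is a
// const size() member returning something convertible to std::size_t.
template <class T>
struct Component {
    std::size_t measure(const T& t) const { return t.size(); }
};

// The archetype supplies exactly the required operation and nothing else.
// If Component<archetype> compiles and runs, the component demands no more
// than the documented requirement; removing size() below must make this
// file fail to compile, which is the expected-failure half of the test.
struct archetype {
    std::size_t size() const { return 3; }  // the single requirement
};
```

Running this with each requirement removed in turn (and expecting compilation failure) is the build-system side of the technique Vicente describes.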

On Tue, Sep 27, 2011 at 11:09 AM, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Sep 27 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Now if you can come up with another approach to test these expectations I'd be happy to listen.
We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.
Can you elaborate on this in greater detail? Currently, if I want to prove a static assertion fails, my admittedly cumbersome technique is to uncomment the test for that condition, compile to produce the error, then re-comment out the test. This becomes very tedious for large numbers of regression tests.

Thank you,
Ben Robinson, Ph.D.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Ben Robinson <icaretaker <at> gmail.com> writes:
Can you elaborate on this in greater detail? Currently, if I want to prove a static assertion fails, my admittedly cumbersome technique is to uncomment the test for that condition, compile to produce the error, then re-comment out the test. This becomes very tedious for large numbers of regression tests.
The idea is to commit the failing-to-compile code and make the test system treat its compilation failure as the PASS condition.

Gennadiy

on Tue Sep 27 2011, Ben Robinson <icaretaker-AT-gmail.com> wrote:
on Tue Sep 27 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Now if you can come up with another approach to test these expectations I'd be happy to listen.
We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.
Can you elaborate on this in greater detail? Currently, if I want to prove a static assertion fails, my admittedly cumbersome technique is to uncomment the test for that condition, compile to produce the error, then re-comment out the test. This becomes very tedious for large numbers of regression tests.
I wouldn't advocate the technique for testing that a static assertion fails; what you're proposing in this thread is much better suited to that use case.

However, if you need to prove that something fails compilation (an altogether different problem), you can do it by having the testing system invert the result of running the compiler. For robustness you should also make sure that everything compiles successfully if you disable the specific trigger for the failure, and maybe check the error messages.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

On Tue, Sep 27, 2011 at 11:30 PM, Dave Abrahams <dave@boostpro.com> wrote:
We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.
I wouldn't advocate the technique for testing that a static assertion fails; what you're proposing in this thread is much better suited to that use case.
Excellent! Now I suppose I need to justify my implementation choices, as well as demonstrate how the technique can be used through better examples. I hope that once this technique is digested, the community will discover surprising new ways to make use of it.

However, if you need to prove that something fails compilation (an altogether different problem), you can do it by having the testing system invert the result of running the compiler. For robustness you should also make sure that everything compiles successfully if you disable the specific trigger for the failure, and maybe check the error messages.
Agreed; I have not added this capability to my current GNU Make build system, but I certainly will. Thanks to everybody who explained this approach to proving compilation failures.

Thank you,
Ben Robinson, Ph.D.

On 28 Sep 2011, at 04:42, Ben Robinson wrote:
On Tue, Sep 27, 2011 at 11:09 AM, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Sep 27 2011, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
Dave Abrahams <dave <at> boostpro.com> writes:
Now if you can come up with another approach to test these expectations I'd be happy to listen.
We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.
Can you elaborate on this in greater detail? Currently, if I want to prove a static assertion fails, my admittedly cumbersome technique is to uncomment the test for that condition, compile to produce the error, then re-comment out the test. This becomes very tedious for large numbers of regression tests.
Because we have quite a lot of these kinds of tests, we have added the following to our private tester code. You wrap code in:

#ifdef DM_FAILING_CODE_UNIQUEID
...
#endif

(where you can change UNIQUEID to different values, and which we find by grepping). Then the tester does: compiling file.cc should pass; compiling with each of the DM_FAILING_CODE_UNIQUEIDs turned on should fail. This makes it possible to write tests like:

#include <utility>

int main(void) {
    std::pair<int, int> p;
    std::pair<void*, void*> q;
#ifdef DM_FAILING_CODE_ASSIGN
    p = q;
#endif
#ifdef DM_FAILING_CODE_EQUALS
    p == q;
#endif
}

I don't know if such a thing would be interesting to Boost. It would seem much simpler, while still allowing one to write compact compile-time tests, rather than needing many, many files. The only problem we have had is that it is easy to miss a mis-spelt DM_FAILING_CODE, so we are thinking about changing to #ifndef instead, to make such mistakes more likely to be caught :)

Chris

Le 28/09/11 09:47, Christopher Jefferson a écrit :
On 28 Sep 2011, at 04:42, Ben Robinson wrote:
On Tue, Sep 27, 2011 at 11:09 AM, Dave Abrahams<dave@boostpro.com> wrote:
on Tue Sep 27 2011, Gennadiy Rozental<rogeeff-AT-gmail.com> wrote:
Dave Abrahams<dave<at> boostpro.com> writes:
Now if you can come up with another approach to test these expectations I'd be happy to listen. We already have an approach; it requires integration with the test system. Yes, it's imperfect, but it does do the kind of testing needed to see that MyComponent<int> is prohibited.
Can you elaborate on this in greater detail? Currently, if I want to prove a static assertion fails, my admittedly cumbersome technique is to uncomment the test for that condition, compile to produce the error, then re-comment out the test. This becomes very tedious for large numbers of regression tests.

Because we have quite a lot of these kinds of tests, we have added to our private tester code the following:
If you wrap code in:
#ifdef DM_FAILING_CODE_UNIQUEID
...
#endif

(where you can change UNIQUEID to different values)
(which we find by grepping. The ifndef is to make it more likely we'll notice a mis-spelling of the macro)
Then the tester does:
compiling file.cc should pass; compiling with each of the DM_FAILING_CODE_UNIQUEIDs turned on should fail.

Hi,

this doesn't solve the issue, as even if you have fewer files you will have just as many tests.
I don't know if such a thing would be interesting to boost. It would seem much simpler, while still allowing one to write compact compile-time tests, rather than needing many, many files.
Boost authors are free to organize their tests using this technique, but I don't think it makes it more convenient to analyze a failure when one occurs.

Best,
Vicente

On Tue, Sep 27, 2011 at 7:59 AM, Dave Abrahams <dave@boostpro.com> wrote:
on Tue Sep 27 2011, Ben Robinson <icaretaker-AT-gmail.com> wrote:
... I think you're over-promising here. You can't turn all non-compiling code into compiling code by wrapping it in something.
I clearly overstated what I was trying to accomplish. Let me try again: this submission would make it possible to write unit tests for static assertions in meta-programs, to verify that static assertions will pass when they should, and also to verify that static assertions will fail when they should. These tests will all compile, and the non-passing static assertions will communicate the detected failure to a test framework at run-time, instead of failing to compile.

If ... you can somehow turn that into a test that throws without modifying my component... you are a god. [You may still be a god, but not because you solved this problem ;-)]

I am not a god, but try telling that to my 6-year-old triplets and 18-month-old baby. :)

I think what you mean to say is that it tests both positive and negative assertions without responding to failed assertions by generating a compilation error.
Correct.
Personally I am very uncomfortable with the use of exceptions to deal with failed assertions, and in a test that's just a bunch of compile-time assertions, I don't see *any* advantage whatsoever in using exceptions.
Exceptions are simply an implementation detail I am using instead of the compile error. One benefit of writing unit tests which show something failing when it should is regression analysis. Rather than discussing this point in generalities, I will provide more examples of its usage, and hopefully I can better demonstrate the usefulness that way.

Thank you,
Ben Robinson, Ph.D.

on Tue Sep 27 2011, Ben Robinson <icaretaker-AT-gmail.com> wrote:
Personally I am very uncomfortable with the use of exceptions to deal with failed assertions, and in a test that's just a bunch of compile-time assertions, I don't see *any* advantage whatsoever in using exceptions.
Exceptions are simply an implementation detail I am using instead of the compile error.
Yes; I think it would be better to use a different mechanism.
One benefit of writing unit tests which show something failing when it should, is for regression analysis.
Of course!
Rather than discussing this point in generalities, I will provide more examples of its usage, and hopefully I can better demonstrate the usefulness that way.
Good; looking forward to it.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Hi Ben,
This submission would make it possible to write a complete set of unit tests for meta-programs, to test both the positive, compiling statements, and the negative, non-compiling statements. These tests will all compile, and the negative tests can throw an exception instead of failing to compile.
By negative tests do you mean compile-time predicates that return false?
The submission source is available at https://github.com/icaretaker/Metatest. The provided examples are based on the factorial metafunction from Chapter 8 of "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy - http://www.boostpro.com/mplbook/. Chapter 8 explains the rationale behind the BOOST_MPL_ASSERT_* macros. This submission complements these macros by allowing the writing of regression unit tests, to ensure the user will encounter the MPL-formatted compiler error if the library types are incorrectly instantiated.
Your factorial example (example_boosttest_factorial.cpp) uses a compile-time assertion to check the validity of the argument of factorial. In my understanding of what you wrote (and how your example uses it), you intend to throw a runtime exception from the default constructor of the factorial class.

What a metafunction such as factorial could do for handling errors is throw a "compile-time exception". This is something I've implemented in another library (metamonad) in my mpllibs repository. It is a bunch of tools simulating exceptions at compile-time: one can "throw" and "catch" them, and the "exception" that is "thrown" can contain a class describing the problem itself in a meaningful way. These simulated exceptions can be propagated up the template metafunction call-chain. The fact that an "exception" is propagated out of a metaprogram doesn't break the compilation on its own; it generates errors only when the metaprogrammer tries to use it as if it were the real result. When the metafunction is used in a compile-time predicate and an exception is propagated out, metatest can display it or add it to Boost.Test (including the meaningful description of the problem) using pretty-printing. When a metaprogram is used in real code and an "exception" is propagated out of it, it will most likely not be usable in the context the metaprogrammer is trying to use it in, so it generates a compilation error. What I've found useful in such situations is compiling the problematic metaprogram on its own and displaying it with the pretty-printing facility of metatest.

Simulated exceptions are implemented using monads. A drawback of this is that every metafunction in the call-chain has to be prepared for propagating exceptions explicitly. However, metamonad provides a template class (try_) adding this to existing metafunctions.
It can be used the following way:

  template </* args */>
  struct metafunction_not_prepared_for_exception_propagation :
    /* body */
  {};

  template </* args */>
  struct metafunction_prepared_for_exception_propagation :
    metatest::try_< /* body */ >
  {};

Your example:

  template <class N>
  struct factorial :
    mpl::eval_if<
      mpl::less_equal<N, mpl::int_<0> >,
      mpl::int_<1>,
      mpl::times<N, factorial<typename mpl::prior<N>::type> >
    >::type
  {
    BOOST_METATEST((mpl::greater_equal<N, mpl::int_<0> >));
  };

  BOOST_AUTO_TEST_CASE(failing_negative_homemade_framework)
  {
    bool caught = false;
    try {
      factorial<mpl::int_<-1> > factneg1;
    } catch (metatest_exception & ee) {
      caught = true;
    }
    BOOST_CHECK_EQUAL(caught, true);
  }

could be implemented using metamonad's compile-time exceptions the following way:

  // class describing the problem
  struct negative_factorial_argument {};

  // adding pretty-printing support
  MPLLIBS_DEFINE_TO_STREAM_FOR_TYPE(
    negative_factorial_argument,
    "The factorial metafunction has been called with a negative argument."
  )

  template <class N>
  struct factorial :
    mpl::eval_if<
      typename mpl::greater_equal<N, mpl::int_<0> >::type,
      mpl::eval_if<
        mpl::less_equal<N, mpl::int_<0> >,
        mpl::int_<1>,
        mpl::times<N, factorial<typename mpl::prior<N>::type> >
      >,
      metamonad::throw_<negative_factorial_argument>
    >
  {};

  template <class NullaryMetafunction>
  struct no_throw :
    metamonad::do_try<
      NullaryMetafunction,
      metamonad::do_return<mpl::true_>
    >
  {};

  MPLLIBS_DEFINE_TO_STREAM_FOR_TEMPLATE(1, no_throw, "no_throw")

  BOOST_AUTO_TEST_CASE(failing_negative_homemade_framework)
  {
    metatest::meta_check<
      no_throw<factorial<mpl::int_<-1> > >
    >(MPLLIBS_HERE);
  }
No runtime exceptions are used - meta_check passes the pretty-printed "compile-time exception" to a runtime unit testing framework (Boost.Test). The implementation of these template functions is simple, and one can easily write similar functions to support other unit testing frameworks.
BOOST_METATEST(pred)
BOOST_METATEST_NOT(pred)
BOOST_METATEST_RELATION(x, rel, y)
BOOST_METATEST_MSG(cond, msg, types)
If you use macros to do assertions and your predicates contain syntax errors, the error messages will point to the macro call, not to the exact location of the error. By using template functions for assertions (meta_warn, meta_check, meta_require in my metatest implementation - see my other mail, "metatest interface update...") you don't hide the real location of the code from the compiler. What is the benefit of having the BOOST_METATEST_RELATION and BOOST_METATEST_NOT macros? For example, why is using BOOST_METATEST_NOT(pred) better than using BOOST_METATEST(lazy_not<pred>)? Regards, Abel

Le 27/09/11 08:43, Ben Robinson a écrit :
This submission would make it possible to write a complete set of unit tests for meta-programs, to test both the positive, compiling statements, and the negative, non-compiling statements. These tests will all compile, and the negative tests can throw an exception instead of failing to compile. ... I look forward to your feedback. Thank you,
Hi, I'm starting to understand what you want to achieve. Your example is quite simple and shows that in general there will be a redundancy in the tests: once to avoid the invalid instantiation, and again to assert the valid condition (even if in your example you have used the test to end the recursion and give a valid value). If I have understood correctly, the result of the meta-function should be something that behaves as a valid result when the condition is not satisfied. I'm not sure this is always possible, as Dave has already said, and even if it is possible in common cases, it will not always be simple. The documentation should show more concrete cases. Other uses of meta-asserts could use intermediary results. Avoiding compilation failure in these cases will result in additional complexity in the meta-function implementation. In order to see whether your approach can be used in regular meta-programs, it would be great if you could show how more complex examples can be written with and without your macros. Best, Vicente

on Tue Sep 27 2011, "Vicente J. Botet Escriba" <vicente.botet-AT-wanadoo.fr> wrote:
In order to see whether your approach can be used in regular meta-programs, it would be great if you could show how more complex examples can be written with and without your macros.
IIUC, this is not a facility that can effectively be used "in" metaprograms. It's just a way to check that metaprograms give certain results as part of a _separate_ test fixture. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Tue, Sep 27, 2011 at 2:26 PM, Vicente J. Botet Escriba < vicente.botet@wanadoo.fr> wrote:
<snip>
In order to see if your approach can be used in regular meta-programs it
would be great if you could show how more complex examples can be written with and without your macros.
I am finding it difficult to disagree that showing more examples would be a good thing... So I will. :) Please stand by... Ben Robinson, Ph.D.
Best, Vicente

I have created an example of using the proposed assertion macros, by incorporating them into Boost.Units and then creating a unit test which was previously not possible to write. With the current Boost.Units, the following unit test can be written:

  BOOST_AUTO_TEST_CASE(passing_assignment_compatible_units)
  {
    BOOST_CHECK_NO_THROW(
      quantity<length> L = 2.0 * meters;
    );
  }

And now, after incorporating BOOST_METATEST_MSG into Boost.Units, it is possible to write the following unit test:

  BOOST_AUTO_TEST_CASE(failing_assignment_incompatible_units)
  {
    BOOST_CHECK_THROW(
      quantity<length> L = 2.0 * meters * meters;,
      metatest_exception
    );
  }

The required modification to Boost.Units was to add the following templated constructor to boost::units::quantity:

  template <class rhs_type>
  quantity(const rhs_type& source) : val_()
  {
    BOOST_METATEST_MSG(false,
                       INVALID_CONVERSION_BETWEEN_INCOMPATIBLE_UNITS,
                       (this_type, rhs_type));
  }

In the unmodified library, the lack of this constructor is what produces the compiler error when an incorrect assignment requiring this constructor is made. This makes an interesting point: I have written additional code to *explicitly* express the intended failure consequence of invoking this constructor, instead of relying *implicitly* on the lack of such a definition. A significant benefit of this change is that the compiler error message produced is now controlled by the BOOST_MPL_ASSERT_MSG macro, instead of being a generic compiler-specific message. I don't think I need to justify to this group why BOOST_MPL_ASSERT_MSG is preferred to a generic compiler error. :)) The source for this example has been uploaded to the Metatest git repository at https://github.com/icaretaker/Metatest, in the example_boost_unit subdirectory. Thank you all again in advance for your feedback, Ben Robinson, Ph.D.

AMDG On 09/27/2011 11:06 PM, Ben Robinson wrote:
The required modification to Boost.Units was to add the following templated constructor to boost::units::quantity:
template <class rhs_type>
quantity(const rhs_type& source) : val_()
{
    BOOST_METATEST_MSG(false,
                       INVALID_CONVERSION_BETWEEN_INCOMPATIBLE_UNITS,
                       (this_type, rhs_type));
}
In the unmodified library, the lack of this constructor is what produces the compiler error when an incorrect assignment requiring this constructor is made. This makes an interesting point: I have written additional code to *explicitly* express the intended failure consequence of invoking this constructor, instead of relying *implicitly* on the lack of such a definition.
Unfortunately, this changes the behavior of the library. is_convertible<Q1, Q2> will return true if the units are unrelated. It can also change the result of overload resolution. In Christ, Steven Watanabe

Hi all - I'm a new addition to the list. I joined because I wanted to get some feedback on whether the Boost community is interested in accepting some patches from another project. The Phusion Passenger project is currently shipping a bundled version of Boost with some custom modifications to suit their purposes. I think their changes might benefit the wider Boost community, and I'd like to see if the community agrees with me. Passenger uses the MIT License, which I believe is compatible with Boost's submission requirements. So, I'll start off with a description of the changes and if there's interest, I can post the actual patches for additional comment. The changes are: 1. Adding an optional stack_size parameter to thread::start_thread() This is useful in Passenger's case where they want to reduce the VM size without requiring the user to hassle with ulimit settings on Linux. Passenger spawns many threads rather than using a thread pool for performance reasons. This change is, in its current form, platform-specific, but I'm working on correcting that, hopefully without a ton of ifdefs. 2. Adding backtrace and system_error_code support. This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack. This change is relatively straightforward and seems like it would benefit a number of users. Please let me know if I should post patches for one or both of these changes. ---Brett.

On Sep 28, 2011, at 8:48 AM, Brett Lentz wrote:
Hi all -
I'm a new addition to the list. I joined because I wanted to get some feedback on whether the Boost community is interested in accepting some patches from another project.
[snip]
2. Adding backtrace and system_error_code support.
This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack.
This change is relatively straightforward and seems like it would benefit a number of users.
This would be of great interest to me - but not there. In boost::exception, instead. Making it work cross-platform is the trick. ;-) -- Marshall Marshall Clow Idio Software <mailto:mclow.lists@gmail.com> A.D. 1517: Martin Luther nails his 95 Theses to the church door and is promptly moderated down to (-1, Flamebait). -- Yu Suzuki

2. Adding backtrace and system_error_code support.
This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack.
This change is relatively straightforward and seems like it would benefit a number of users.
This would be of great interest to me - but not there. In boost::exception, instead.
Making it work cross-platform is the trick. ;-)
Indeed; nonetheless I'm sure there would be interest in this. Can I suggest you file a couple of Trac tickets with patches at svn.boost.org, making sure issue (1) is assigned to the thread lib, and issue (2) to Boost.Exception. Cheers, John.

on Wed Sep 28 2011, Marshall Clow <mclow.lists-AT-gmail.com> wrote:
On Sep 28, 2011, at 8:48 AM, Brett Lentz wrote:
Hi all -
I'm a new addition to the list. I joined because I wanted to get some feedback on whether the Boost community is interested in accepting some patches from another project.
[snip]
2. Adding backtrace and system_error_code support.
This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack.
This change is relatively straightforward and seems like it would benefit a number of users.
This would be of great interest to me - but not there. In boost::exception, instead.
Making it work cross-platform is the trick. ;-)
As long as it's in principle port-ABLE it doesn't need to be fully port-ED. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 9/28/11 8:48 AM, Brett Lentz wrote:
1. Adding an optional stack_size parameter to thread::start_thread()
This is useful in Passenger's case where they want to reduce the VM size without requiring the user to hassle with ulimit settings on Linux. Passenger spawns many threads rather than using a thread pool for performance reasons.
This change is, in its current form, platform-specific, but I'm working on correcting that, hopefully without a ton of ifdefs.
I would be interested in this capability but would want to see your proposed changes. We have a patch that we apply for our own purposes of adjusting the stack size but would be interested in a more general solution. Tim

On 09/28/2011 01:02 PM, Tim Moore wrote:
> On 9/28/11 8:48 AM, Brett Lentz wrote:
>>
>> 1. Adding an optional stack_size parameter to thread::start_thread()
>>
>> This is useful in Passenger's case where they want to reduce the VM size
>> without requiring the user to hassle with ulimit settings on Linux.
>> Passenger spawns many threads rather than using a thread pool for
>> performance reasons.
>>
>> This change is, in its current form, platform-specific, but I'm working on
>> correcting that, hopefully without a ton of ifdefs.
>
> I would be interested in this capability but would want to see your
> proposed changes. We have a patch that we apply for our own purposes of
> adjusting the stack size but would be interested in a more general
> solution.
>
> Tim

Here's the patch against Boost 1.44. This is straight from the phusion repo. The only addition I've made is the comment about its platform-specific nature.

---Brett.

diff --git a/boost/thread/detail/thread.hpp b/boost/thread/detail/thread.hpp
index 26224ba..3db4b88 100644
--- a/boost/thread/detail/thread.hpp
+++ b/boost/thread/detail/thread.hpp
@@ -117,8 +117,6 @@ namespace boost
         detail::thread_data_ptr thread_info;
 
-        void start_thread();
-
         explicit thread(detail::thread_data_ptr data);
 
         detail::thread_data_ptr get_thread_info BOOST_PREVENT_MACRO_SUBSTITUTION () const;
@@ -147,12 +145,22 @@ namespace boost
 #endif
         struct dummy;
+
+    protected:
+        template <class F>
+        void set_thread_main_function(F f)
+        {
+            thread_info = make_thread_info(f);
+        }
+
+        void start_thread(unsigned int stack_size = 0);
+
     public:
 #if BOOST_WORKAROUND(__SUNPRO_CC, < 0x5100)
         thread(const volatile thread&);
 #endif
         thread();
-        ~thread();
+        virtual ~thread();
 
 #ifndef BOOST_NO_RVALUE_REFERENCES
 #ifdef BOOST_MSVC
@@ -164,10 +172,10 @@ namespace boost
         }
 #else
         template <class F>
-        thread(F&& f):
+        thread(F&& f, unsigned int stack_size = 0):
            thread_info(make_thread_info(static_cast<F&&>(f)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
 #endif
@@ -191,25 +199,25 @@ namespace boost
 #else
 #ifdef BOOST_NO_SFINAE
         template <class F>
-        explicit thread(F f):
+        explicit thread(F f, unsigned int stack_size = 0):
            thread_info(make_thread_info(f))
         {
-            start_thread();
+            start_thread(stack_size);
         }
 #else
         template <class F>
-        explicit thread(F f,typename disable_if<boost::is_convertible<F&,detail::thread_move_t<F> >, dummy* >::type=0):
+        explicit thread(F f,typename disable_if<boost::is_convertible<F&,detail::thread_move_t<F> >, dummy* >::type=0, unsigned int stack_size = 0):
            thread_info(make_thread_info(f))
         {
-            start_thread();
+            start_thread(stack_size);
         }
 #endif
         template <class F>
-        explicit thread(detail::thread_move_t<F> f):
+        explicit thread(detail::thread_move_t<F> f, unsigned int stack_size = 0):
            thread_info(make_thread_info(f))
         {
-            start_thread();
+            start_thread(stack_size);
         }
 
         thread(detail::thread_move_t<thread> x)
@@ -246,65 +254,65 @@ namespace boost
 #endif
         template <class F,class A1>
-        thread(F f,A1 a1):
+        thread(F f,A1 a1, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
         template <class F,class A1,class A2>
-        thread(F f,A1 a1,A2 a2):
+        thread(F f,A1 a1,A2 a2, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1,a2)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
         template <class F,class A1,class A2,class A3>
-        thread(F f,A1 a1,A2 a2,A3 a3):
+        thread(F f,A1 a1,A2 a2,A3 a3, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1,a2,a3)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
         template <class F,class A1,class A2,class A3,class A4>
-        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4):
+        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1,a2,a3,a4)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
         template <class F,class A1,class A2,class A3,class A4,class A5>
-        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5):
+        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1,a2,a3,a4,a5)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
         template <class F,class A1,class A2,class A3,class A4,class A5,class A6>
-        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5,A6 a6):
+        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5,A6 a6, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1,a2,a3,a4,a5,a6)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
         template <class F,class A1,class A2,class A3,class A4,class A5,class A6,class A7>
-        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5,A6 a6,A7 a7):
+        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5,A6 a6,A7 a7, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1,a2,a3,a4,a5,a6,a7)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
         template <class F,class A1,class A2,class A3,class A4,class A5,class A6,class A7,class A8>
-        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5,A6 a6,A7 a7,A8 a8):
+        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5,A6 a6,A7 a7,A8 a8, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1,a2,a3,a4,a5,a6,a7,a8)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
         template <class F,class A1,class A2,class A3,class A4,class A5,class A6,class A7,class A8,class A9>
-        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5,A6 a6,A7 a7,A8 a8,A9 a9):
+        thread(F f,A1 a1,A2 a2,A3 a3,A4 a4,A5 a5,A6 a6,A7 a7,A8 a8,A9 a9, unsigned int stack_size = 0):
            thread_info(make_thread_info(boost::bind(boost::type<void>(),f,a1,a2,a3,a4,a5,a6,a7,a8,a9)))
         {
-            start_thread();
+            start_thread(stack_size);
         }
 
         void swap(thread& x)
diff --git a/libs/thread/src/pthread/thread.cpp b/libs/thread/src/pthread/thread.cpp
index 4ff40a9..7571c9a 100644
--- a/libs/thread/src/pthread/thread.cpp
+++ b/libs/thread/src/pthread/thread.cpp
@@ -180,10 +180,25 @@ namespace boost
     thread::thread()
     {}
 
-    void thread::start_thread()
+    void thread::start_thread(unsigned int stack_size)
     {
+        /* FIXME: Linux-only. Breaks Win32. */
         thread_info->self=thread_info;
-        int const res = pthread_create(&thread_info->thread_handle, 0, &thread_proxy, thread_info.get());
+        pthread_attr_t attr;
+        int res = pthread_attr_init(&attr);
+        if (res != 0) {
+            throw thread_resource_error();
+        }
+        if (stack_size > 0) {
+            res = pthread_attr_setstacksize(&attr, stack_size);
+            if (res != 0) {
+                pthread_attr_destroy(&attr);
+                throw thread_resource_error();
+            }
+        }
+
+        res = pthread_create(&thread_info->thread_handle, &attr, &thread_proxy, thread_info.get());
+        pthread_attr_destroy(&attr);
         if (res != 0)
         {
             thread_info->self.reset();

----- Original Message -----
From: "Brett Lentz" <blentz@redhat.com>
To: <boost@lists.boost.org>
Sent: Wednesday, September 28, 2011 5:48 PM
Subject: [boost] Gauging interest in patch submissions

> Hi all - I'm a new addition to the list. I joined because I wanted to get
> some feedback on whether the Boost community is interested in accepting
> some patches from another project. The Phusion Passenger project is
> currently shipping a bundled version of Boost with some custom
> modifications to suit their purposes. I think their changes might benefit
> the wider Boost community, and I'd like to see if the community agrees
> with me. Passenger uses the MIT License, which I believe is compatible
> with Boost's submission requirements.

Small note: if I understand the MIT license correctly, it requires that you include the license text in binaries, which, if I'm not mistaken, is something the Boost license requirements explicitly mention as not being desirable. Anyone please correct me if I am wrong on this.

> So, I'll start off with a description of the changes and if there's
> interest, I can post the actual patches for additional comment. The
> changes are:
>
> 1. Adding an optional stack_size parameter to thread::start_thread()
>
> This is useful in Passenger's case where they want to reduce the VM size
> without requiring the user to hassle with ulimit settings on Linux.
> Passenger spawns many threads rather than using a thread pool for
> performance reasons.
>
> This change is, in its current form, platform-specific, but I'm working
> on correcting that, hopefully without a ton of ifdefs.

A big yes to that. I use a thread pool in several cases and lowering the stack size would save me a lot of space. Thumbs up.

> 2. Adding backtrace and system_error_code support.
>
> This adds additional exception information to boost::thread_exception,
> boost::thread_resource_error, and boost::thread_interrupted that allows
> Passenger to dump a full backtrace all the way up its stack.
>
> This change is relatively straightforward and seems like it would benefit
> a number of users.

I would certainly find this valuable, especially when debugging larger applications. I do wonder whether it should be specific to thread exceptions, though. As another poster said, perhaps in the Boost exception class itself?

> Please let me know if I should post patches for one or both of these
> changes.
>
> ---Brett.

On Wed, Sep 28, 2011 at 8:48 AM, Brett Lentz <blentz@redhat.com> wrote:
2. Adding backtrace and system_error_code support.
This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack.
+1 I'll just add that if the error code and stack trace are implemented as boost::error_info stored in boost::exception, then they will appear automatically in the output of boost::diagnostic_information() together with the rest of the error_infos. Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode

On Wed, Sep 28, 2011 at 8:48 AM, Brett Lentz <blentz@redhat.com> wrote:
2. Adding backtrace and system_error_code support.
This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack.
My only concern with this is in a case like this, for example:

  while (/* reading lines from file, parsing integers, otherwise ignoring */)
  {
      try
      {
          myInts.push_back(boost::lexical_cast<int>(currentLineStr));
      }
      catch (const boost::bad_lexical_cast&) // (has stack trace)
      {}
  }

Will the stack trace make such an operation noticeably slower? I know it's easy to retort with "don't use exceptions for flow control" but I believe my concern is valid, even if it's a bit of a contrived situation. Perhaps we'd like to make it opt-in, like BOOST_THROW_EXCEPTION_TRACED or something. GMan, Nick Gorski

On Wed, Sep 28, 2011 at 1:49 PM, GMan <gmannickg@gmail.com> wrote:
On Wed, Sep 28, 2011 at 8:48 AM, Brett Lentz <blentz@redhat.com> wrote:
2. Adding backtrace and system_error_code support.
This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack.
My only concern with this is in a case like this, for example:
while (/* reading lines from file, parsing integers, otherwise ignoring */)
{
    try
    {
        myInts.push_back(boost::lexical_cast<int>(currentLineStr));
    }
    catch (const boost::bad_lexical_cast&) // (has stack trace)
    {}
}
Will the stack trace make such an operation noticeably slower?
No, it won't affect the speed of a catch by reference.
I know it's easy to retort with "don't use exceptions for flow control" but I believe my concern is valid, even if it's a bit contrived of a situation.
Your concern might be valid, but it shouldn't be based on a belief. Profile your code and see if the speed of exception handling has a noticeable effect on performance. Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode

On Wed, Sep 28, 2011 at 2:07 PM, Emil Dotchevski <emildotchevski@gmail.com>wrote:
No, it won't affect the speed of a catch by reference.
Sorry, that was an unfortunate comment placement on my part. I meant to say the exception object itself contains stack trace data, which in turn implies additional time to construct the exception object. Exception handling, the language feature, wasn't the concern. Rather, it was that we should be careful about automatically adding extra time (stack traces) to exception handling application-wide. Brett addresses that below. -- GMan, Nick Gorski

On 09/28/2011 04:49 PM, GMan wrote:
On Wed, Sep 28, 2011 at 8:48 AM, Brett Lentz <blentz@redhat.com> wrote:
2. Adding backtrace and system_error_code support.
This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack.
[...snipped...]
Perhaps we'd like to make it opt-in, like BOOST_THROW_EXCEPTION_TRACED or something.
That's exactly the case with this set of patches. There's a set of TRACE_POINT macros that users add to mark the areas where they want to see backtraces, and also a DISABLE_BACKTRACES macro to turn this feature on/off globally. It also respects the NDEBUG macro, so as not to impact performance when debugging is disabled.
I'm still working on cleaning up this patchset, but if you'd like to see the documentation directly from the passenger sources, you can find it here: https://github.com/FooBarWidget/passenger/blob/master/ext/oxt/backtrace.hpp The forked boost is here: https://github.com/FooBarWidget/passenger/tree/master/ext/boost ---Brett.

On Wed, Sep 28, 2011 at 3:19 PM, Brett Lentz <blentz@redhat.com> wrote:
On 09/28/2011 04:49 PM, GMan wrote:
On Wed, Sep 28, 2011 at 8:48 AM, Brett Lentz <blentz@redhat.com> wrote:
2. Adding backtrace and system_error_code support.
This adds additional exception information to boost::thread_exception, boost::thread_resource_error, and boost::thread_interrupted that allows Passenger to dump a full backtrace all the way up its stack.
[...snipped...]
Perhaps we'd like to make it opt-in, like BOOST_THROW_EXCEPTION_TRACED or something.
That's exactly the case with this set of patches. There's a set of TRACE_POINT macros that users add to mark the areas where they want to see backtraces, and also a DISABLE_BACKTRACES macro to turn this feature on/off globally.
Can't this be done non-intrusively, in a platform-specific manner? I know it's difficult and tricky, but stack traces can often be helpful in tracking errors. Having the user register trace points explicitly makes the system not as helpful. :( Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode

Brett Lentz wrote:
1. Adding an optional stack_size parameter to thread::start_thread()
This has been suggested a few times including once by me; see e.g. http://search.gmane.org/search.php?group=gmane.comp.lib.boost.devel&query=stack+size Since those previous discussions, though, threads have been added to the C++ standard, and the thread class in the standard doesn't include this. It is conceivable that Boost.Thread might continue to evolve and gain more features [maybe Anthony will comment], but I think it's more likely that people will use std::thread instead and we will have to learn to live with fixed stack sizes. (Or write our own thread classes, which is simple enough if we're not concerned with portability.) Regards, Phil.

On Wed, Sep 28, 2011 at 7:22 AM, Steven Watanabe <watanabesj@gmail.com>wrote:
AMDG
Unfortunately, this changes the behavior of the library. is_convertible<Q1, Q2> will return true if the units are unrelated. It can also change the result of overload resolution.
Well, I certainly don't want to break the library! It was not a wise choice to change the implementation of a published library to demonstrate the use of METATEST. I will work on other examples in published libraries over the weekend. I did, however, hopefully at least demonstrate the idea that, should a complex library contain a METATEST assertion, it could be unit tested to prove the assertion holds, and fails, exactly when expected.
As far as is_convertible goes, the following unit test is failing using a completely unmodified Boost distribution:

  BOOST_AUTO_TEST_CASE(failing_convertible_units)
  {
    bool result = is_convertible<length, energy>::value;
    BOOST_CHECK_EQUAL(result, false);
  }

Since length and energy are incompatible units, shouldn't is_convertible report false? I could certainly learn much more about both is_convertible and Boost.Units. As far as units::quantity goes, I think that overload resolution could possibly be managed inside the added conversion constructor, with a meta-function testing whether the rhs_type would have been implicitly convertible to this_type, and a METATEST on that condition. I am curious about your opinion on this, since you certainly know more about this library than I do. Thank you, Ben Robinson, Ph.D.

I have created a new example to demonstrate BOOST_TBD_METATEST_ASSERT_* in action. The example is a safe_cast free function which can only be used in place of static_cast if there is zero possibility that the value will be changed by the conversion. As you can see in examples/safecast/boosttest_safecast.cpp (available at https://github.com/icaretaker/Metatest), there is a complete set of unit tests written for the function, which provides full regression testing for that function without requiring a build system that inverts the result of a failed compile. Here is one such test:

  BOOST_AUTO_TEST_CASE(rhs_signed8)
  {
    uint8_t  unsigned8  = 0;
    uint16_t unsigned16 = 0;
    uint32_t unsigned32 = 0;
    int8_t   signed8    = 0;
    int16_t  signed16   = 0;
    int32_t  signed32   = 0;

    BOOST_CHECK_THROW( unsigned8  = safe_cast<uint8_t>(signed8);,  metatest_exception );
    BOOST_CHECK_THROW( unsigned16 = safe_cast<uint16_t>(signed8);, metatest_exception );
    BOOST_CHECK_THROW( unsigned32 = safe_cast<uint32_t>(signed8);, metatest_exception );
    BOOST_CHECK_NO_THROW( signed8  = safe_cast<int8_t>(signed8);  );
    BOOST_CHECK_NO_THROW( signed16 = safe_cast<int16_t>(signed8); );
    BOOST_CHECK_NO_THROW( signed32 = safe_cast<int32_t>(signed8); );
  }

A potential improvement to safe_cast would be to allow casts from integers to floating point numbers, provided the number of bits in the mantissa is sufficient to represent the integer. This additional capability can now be developed under the full protection of unit tests for the existing capability, and new unit tests can be written for the new feature. In the opinion of the Boost community, does this new capability stand on its own as useful, such that inclusion into Boost would be warranted? Thank you, Ben Robinson, Ph.D.

Ábel Sinkovics and Boost Community, I have renamed my unit test library for static assertions to "MetaAssert", to avoid a name conflict with Ábel Sinkovics's MetaTest library. Our two libraries provide complementary and related, but non-overlapping functionality. The new github repository for MetaAssert is located here: https://github.com/icaretaker/MetaAssert. This submission would make it possible to write unit tests for static assertions embedded in meta-programs. Any unit test framework can be used to verify these static assertions will pass when they should, and fail when they should. These tests will all compile, and the non-passing static assertions will communicate the detected failure to any test framework at run-time, instead of failing to compile. Thank you, Ben Robinson, Ph.D.

Ben Robinson wrote 2011-10-15 06:55:
Ábel Sinkovics and Boost Community,
I have renamed my unit test library for static assertions to "MetaAssert", to avoid a name conflict with Ábel Sinkovics's MetaTest library. Our two libraries provide complementary and related, but non-overlapping functionality.
The new github repository for MetaAssert is located here: https://github.com/icaretaker/MetaAssert. This submission would make it possible to write unit tests for static assertions embedded in meta-programs. Any unit test framework can be used to verify these static assertions will pass when they should, and fail when they should. These tests will all compile, and the non-passing static assertions will communicate the detected failure to any test framework at run-time, instead of failing to compile.
Thank you,
Ben Robinson, Ph.D.
This lib I will certainly use when I get around to starting my own small metaprogramming projects. One thing, though: suppose I am developing lib B that uses another lib A which already uses your MetaAssert; then I think I would like to have exceptions thrown only from my own library. That way, if I make a mistake in my lib that triggers a meta-assert in lib A, I will know about it immediately (at compilation); otherwise the exception might be mistaken for an expected exception from my own lib. Would it be reasonable to add a tag or similar to each meta-assert so that the choice between compile-time error and runtime exception can be made per library? Cheers, Leif Linderstam
participants (16)
-
Ben Robinson
-
Brett Lentz
-
Christopher Jefferson
-
Dave Abrahams
-
Emil Dotchevski
-
Gennadiy Rozental
-
GMan
-
John Maddock
-
Leif Linderstam
-
Marshall Clow
-
Phil Endecott
-
Philip Bennefall
-
Steven Watanabe
-
Tim Moore
-
Vicente J. Botet Escriba
-
Ábel Sinkovics