[filesystem][program_options][serialization][test] from comp.lang.c++.moderated

It seems as though the OP may not be totally crazy, so we should evaluate his comments and see if the issues can be (or have been) addressed. The article is of course bad publicity for Boost, and I'd like to demonstrate responsiveness if possible. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:u1wv4avcl.fsf@boost-consulting.com...
It seems as though the OP may not be totally crazy, so we should evaluate his comments and see if the issues can be (or have been) addressed. The article is of course bad publicity for Boost, and I'd like to demonstrate responsiveness if possible.
One of Boost's roles is to do research and development; the other is to establish existing practice. The R&D role inevitably results in the need to break an existing interface in the interest of providing a better one; however, that is inevitably not going to go down well with existing users. Establishing existing practice demands a high level of stability. Perhaps there needs to be a clear policy on the procedure to follow to modify the interface of a library once it is in the Boost distribution. That might involve the library author putting up a clear statement of their intentions well before committing the mods, allowing existing users adequate time and simple facilities to respond. That probably won't satisfy those who don't read such notices, but at least it can be pointed out in the documentation for each library that such a facility exists, together with some general justification for why Boost has to reserve the right to break interfaces, given its R&D role. FWIW regards Andy Little

One of Boost's roles is to do research and development; the other is to establish existing practice. The R&D role inevitably results in the need to break an existing interface in the interest of providing a better one; however, that is inevitably not going to go down well with existing users. Establishing existing practice demands a high level of stability. Perhaps there needs to be a clear policy on the procedure to follow to modify the interface ...
There has been talk about splitting Boost into more manageable chunks for a long time, and that seems to be an ideal split line:
stable: interface will never change; only bug fixes in future
experimental: interface may change in a subsequent release
Requirements for stable could be: no changes in at least 6 months and author(s) having no intention to make changes. If an author wants to make backwards-incompatible changes to a stable library, they make it a new library, with a new name (even if just tacking "2" onto the end of the existing name). This should of course happen very rarely. Having a stable core will give developers more confidence in using Boost for important projects; I think it is essential. Darren

Darren Cook <darren@dcook.org> writes:
One of Boost's roles is to do research and development; the other is to establish existing practice. The R&D role inevitably results in the need to break an existing interface in the interest of providing a better one; however, that is inevitably not going to go down well with existing users. Establishing existing practice demands a high level of stability. Perhaps there needs to be a clear policy on the procedure to follow to modify the interface ...
There has been talk about splitting Boost into more manageable chunks for a long time, and that seems to be an ideal split line:
stable: interface will never change; only bug fixes in future
experimental: interface may change in a subsequent release
Requirements for stable could be: no changes in at least 6 months and author(s) having no intention to make changes.
We'd have to release a lot more often to make that meaningful. Anyway, it seems like "no backwards-incompatible changes" is a more reasonable definition of "stable." I don't much like the term "experimental." By the time a Boost library is released it should be in a state where backwards-incompatible changes are avoided with extreme prejudice and it has, in principle, been shown to have an interface worthy of preserving. IMO, the review process _ought_ to (and does, for the most part) weed out libraries that are truly "experimental."
If an author wants to make backwards-incompatible changes to a stable library they make it a new library, with a new name (even if just tacking "2" onto the end of the existing name). This should of course happen very rarely.
A note from my experience: I completely rewrote Boost.Python at one point and gave it a totally new and improved interface. It was announced well in advance (in fact the files coexisted with the old version in Boost CVS HEAD for a long time during development) and the result was called Boost.Python v2. The old version of Boost.Python was archived at that point and the archive was in the Boost distributions for a few releases before finally being pulled. I had no complaints about this change, IIRC.
Having a stable core will give developers more confidence in using boost for important projects; I think it is essential.
I agree with that; no question about it. I think a few more degrees of classification than you've outlined could be useful:
core vs. optional
I hate "optional" but can't think of a better term right now. This is the distinction between, e.g., type traits and serialization.
fluidity
"frozen" == Only bugfixes, in principle. A library can move out of "frozen" to stable at the whim of its maintainer.
"stable" == Only bugfixes and extensions.
"active" == backwards-incompatible changes avoided with extreme prejudice, and made only after one release of deprecation. A strong bias is given towards preserving availability of old interfaces under the same name, even if they've been superseded by newer ones.
"fluid" == whether we should have such a category is open to debate.
header-only vs. compiled
... -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
I agree with that; no question about it. I think a few more degrees of classification than you've outlined could be useful:
core vs. optional
I hate "optional" but can't think of a better term right now. This is the distinction between, e.g., type traits and serialization
It's hard to draw the line between 'core' and 'optional' IMO. I guess 'core' libraries would be libraries that are heavily used by other Boost libraries, whereas 'optional' libraries are leaves in the library dependency graph. Therefore I think it is just important to have a clearly documented library dependency graph. This would allow people using one specific Boost library to know what other Boost libraries they also need. It will also allow them to know if the library they want to use relies on 'still experimental' Boost libraries, etc. And finally, library authors should be cautious about adding another dependency to their library. This way people can evaluate in advance the 'stability' of the Boost library they are interested in. toon

Toon Knapen <toon.knapen@fft.be> writes:
David Abrahams wrote:
I agree with that; no question about it. I think a few more degrees of classification than you've outlined could be useful:
core vs. optional
I hate "optional" but can't think of a better term right now. This is the distinction between, e.g., type traits and serialization
It's hard to draw the line between 'core' and 'optional' IMO. I guess 'core' libraries would be libraries that are heavily used by other Boost libraries, whereas 'optional' libraries are leaves in the library dependency graph.
Roughly speaking, yes. Although I would mark Boost.Python as optional even though Boost.Graph and Boost.MultiArray have subparts that depend on it for Python bindings.
Therefore I think it is just important to have a clearly documented library dependency graph.
Yeah, that would indeed be great. I'm not sure how to accomplish it, though. -- Dave Abrahams Boost Consulting www.boost-consulting.com
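To make the idea concrete, here is one plausible way such a graph could be derived mechanically: scan each library's headers for #include <boost/...> directives and record which library each one points at. This is only a sketch, not an existing Boost tool; it assumes headers are laid out as <root>/<library>/..., and it lumps top-level headers in under their own file names.

// dep_scan.cpp - a rough sketch (not an official Boost tool): derive a
// library-dependency list by scanning headers for boost includes.
// Assumes headers live under <root>/<library>/...; compile as C++17.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <map>
#include <regex>
#include <set>
#include <string>

namespace fs = std::filesystem;

int main(int argc, char* argv[])
{
    if (argc != 2) {
        std::cerr << "usage: dep_scan <path-to-boost-header-root>\n";
        return 1;
    }
    const fs::path root(argv[1]);
    // Matches e.g. #include <boost/serialization/vector.hpp> and captures
    // the first path component ("serialization") as the library name.
    const std::regex inc(R"(#\s*include\s*[<"]boost/([^/>"]+))");
    std::map<std::string, std::set<std::string>> deps;

    for (const auto& entry : fs::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file() || entry.path().extension() != ".hpp")
            continue;
        // The first directory component under the root names the library.
        const std::string lib = fs::relative(entry.path(), root).begin()->string();
        std::ifstream in(entry.path());
        std::string line;
        std::smatch m;
        while (std::getline(in, line))
            if (std::regex_search(line, m, inc) && m[1] != lib)
                deps[lib].insert(m[1]);
    }
    for (const auto& [lib, used] : deps) {
        std::cout << lib << ':';
        for (const auto& d : used)
            std::cout << ' ' << d;
        std::cout << '\n';
    }
}

Run against a Boost header tree, this prints one line per library listing the libraries it includes; computing the transitive closure or finding the leaves would be straightforward to add on top.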

"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjj8n1b.fsf@boost-consulting.com...
core vs. optional
I hate "optional" but can't think of a better term right now. This is the distinction between, e.g., type traits and serialization
Maybe "feature"? Gennadiy

On 5/9/06, Gennadiy Rozental <gennadiy.rozental@thomson.com> wrote:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjj8n1b.fsf@boost-consulting.com...
core vs. optional
I hate "optional" but can't think of a better term right now. This is the distinction between, e.g., type traits and serialization
Maybe "feature"? Gennadiy
What about "extensions"? Wouldn't it be nice to have a per-library deployment tool where all the Boost libraries are listed? You could select the ones you want to use, and the needed libraries get installed (including dependencies, downloading the latest version of each one, and taking care of building the needed parts). Some kind of installation history would be maintained, so when you execute the deployment tool again, you can ask for updates to the installed ones, or add new libraries to "your" Boost distribution. This is the way Kubuntu manages the installation of new apps and I have found it very intuitive and useful. The app could be written in Qt so it can run on each OS. Regards Matias

"Matias Capeletto" <matias.capeletto@gmail.com> writes:
The app could be written in Qt so it can run on each OS.
I await your submission with breathless anticipation ;-) -- Dave Abrahams Boost Consulting www.boost-consulting.com

On 5/9/06, David Abrahams <dave@boost-consulting.com> wrote:
"Matias Capeletto" <matias.capeletto@gmail.com> writes:
The app could be written in Qt so it can run on each OS.
I await your submission with breathless anticipation ;-)
I don't know a lot about Qt and direct Internet deployment. I know that it can be done because I use it every day... I want to learn Qt, and I would like to do it along the way... but with my college thesis and, hopefully, my SoC project, I will not find the time. Maybe after that... Regards Matias

Matias Capeletto wrote:
The app could be written in Qt so it can run on each OS.
There are potential license issues that need to get taken care of if Qt is used (Qt is a lot less free than Boost). Regards, m

Martin Wille:
Matias Capeletto wrote:
The app could be written in Qt so it can run on each OS.
There are potential license issues that need to get taken care of if Qt is used (Qt is a lot less free than Boost).
I have read that Qt can be used to develop open source projects. In this case the deployment tool is only for that, deployment, so I think it won't compromise the Boost license. Anyway, it would be right to ask them first to avoid trouble. If it can't be done in Qt, bad luck... Regards Matias

Matias Capeletto wrote:
Martin Wille:
Matias Capeletto wrote:
The app could be written in Qt so it can run on each OS.
(Boost has a wider range of target platforms than Qt, btw)
There are potential license issues that need to get taken care of if Qt is used (Qt is a lot less free than Boost).
I have read that Qt can be used to develop open source projects. In this case the deployment tool is only for that, deployment, so I think it won't compromise the Boost license. Anyway, it would be right to ask them first to avoid trouble. If it can't be done in Qt, bad luck...
"Can be used to develop open source projects" implies GPL, in this case. I'd rather not see stuff under the GPL in Boost:
* there are several different interpretations of how far the viral nature of the GPL extends.
* there are several different versions of the GPL, and there will be even more of them.
* users of Boost include companies that have trouble using software that uses different licenses for different parts of the software (especially if the GPL is one of the licenses).
The best options I can see are: getting a special license for Qt (likely won't happen), not using Qt, or developing the deployment tool completely separately from Boost (separate repository, separate deployment). I do not want to discourage you. I like the idea of having an easy-to-use installer. Regards, m

Matias Capeletto:
I have read that Qt can be used to develop open source projects. In this case the deployment tool is only for that, deployment, so I think it won't compromise the Boost license. Anyway, it would be right to ask them first to avoid trouble. If it can't be done in Qt, bad luck...
There are always other ways to go. You can use wxWidgets for that as well. Regards Tom

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjj8n1b.fsf@boost-consulting.com...
core vs. optional
I hate "optional" but can't think of a better term right now. This is the distinction between, e.g., type traits and serialization
Maybe "feature"?
Not bad. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjj8n1b.fsf@boost-consulting.com...
core vs. optional
I hate "optional" but can't think of a better term right now. This is the distinction between, e.g., type traits and serialization
Maybe "feature"?
Not bad.
core vs. derivative? Is that the dimension you're thinking of? Jeff

David Abrahams wrote:
4) test (please don't get me started)
for any reasonable degree of portability (takes over main(), maps OS signals to C++ exceptions, continues to run after failing assert()s, etc.)
FWIW, I don't use Boost.Test for the same reasons. Regards, m

David Abrahams wrote:
Martin Wille writes:
David Abrahams wrote:
4) test (please don't get me started)
Excuse me, but I did not write that; please be more careful with your attributions.
Sorry, I got confused by the fact that the message you forwarded was not quoted by you. I'll try to do better next time. Regards, m

"Martin Wille" <mw8329@yahoo.com.au> wrote in message news:44603D05.5080401@yahoo.com.au...
Wil Evers wrote:
4) test (please don't get me started)
for any reasonable degree of portability (takes over main(), maps OS signals to C++ exceptions, continues to run after failing assert()s, etc.)
FWIW, I don't use Boost.Test for the same reasons.
Could you be more elaborate? Because none of the above seems fair. I don't know about any portability issues related to main() in a static library (and it's optional nowadays anyway). Signal catching is optional and doesn't constitute a portability issue per se; whether to use it is up to you.
- Lots of construction/destruction ordering issues - major changes here between boost-1.32 and boost-1.33.1
A lot was changed between these releases, but I don't know about any construction issues. Gennadiy

Gennadiy Rozental wrote:
"Martin Wille" <mw8329@yahoo.com.au> wrote in message news:44603D05.5080401@yahoo.com.au...
Wil Evers wrote:
4) test (please don't get me started)
for any reasonable degree of portability (takes over main(), maps OS signals to C++ exceptions, continues to run after failing assert()s, etc.) FWIW, I don't use Boost.Test for the same reasons.
Could you be more elaborate? Because none of the above seems fair. I don't know about any portability issues related to main() in a static library (and it's optional nowadays anyway). Signal catching is optional and doesn't constitute a portability issue per se; whether to use it is up to you.
At least for a long time, the signal catching apparently was the default behaviour of Boost.Test. The signal handling is broken because it exploits undefined behaviour. It caused trouble for several of the tests in Boost. See http://lists.boost.org/Archives/boost/2004/07/67298.php or http://lists.boost.org/Archives/boost/2003/10/55496.php I didn't write or quote that part (since I didn't run into these issues):
- Lots of construction/destruction ordering issues - major changes here between boost-1.32 and boost-1.33.1
Regards, m
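For readers unfamiliar with the issue: the pattern in question maps a hardware signal to a C++ exception from inside the signal handler. A minimal sketch of that general pattern follows; it is illustrative only (not Boost.Test's actual code) and deliberately exhibits the undefined behaviour Martin describes.

// Illustrative sketch only: throwing from an asynchronous signal handler
// is undefined behaviour in standard C++. On some platform/compiler
// combinations this appears to work; on others it deadlocks or aborts,
// which is exactly how a test run can stall.
#include <csignal>
#include <cstdio>
#include <stdexcept>

void on_segv(int)
{
    throw std::runtime_error("SIGSEGV");  // UB: handlers may not throw
}

int main()
{
    std::signal(SIGSEGV, on_segv);
    try {
        volatile int* p = nullptr;
        *p = 42;  // provoke the signal
    } catch (const std::exception& e) {
        std::printf("mapped to exception: %s\n", e.what());
    }
    return 0;
}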

"Martin Wille" <mw8329@yahoo.com.au> wrote in message news:44608D80.2060508@yahoo.com.au...
Gennadiy Rozental wrote:
"Martin Wille" <mw8329@yahoo.com.au> wrote in message news:44603D05.5080401@yahoo.com.au...
Wil Evers wrote:
4) test (please don't get me started)
for any reasonable degree of portability (takes over main(), maps OS signals to C++ exceptions, continues to run after failing assert()s, etc.) FWIW, I don't use Boost.Test for the same reasons.
Could you be more elaborate? Because none of the above seems fair. I don't know about any portability issues related to main() in a static library (and it's optional nowadays anyway). Signal catching is optional and doesn't constitute a portability issue per se; whether to use it is up to you.
At least for a long time, the signal catching apparently was the default behaviour of Boost.Test. The signal handling is broken because it exploits undefined behaviour. It caused trouble for several of the tests in Boost.
See http://lists.boost.org/Archives/boost/2004/07/67298.php or http://lists.boost.org/Archives/boost/2003/10/55496.php
I'm familiar with these issues and the potential UB. But Boost.Test provides both alternatives, so you couldn't say that it is broken. Every user is free to make a choice: either catch system signals but be exposed to potential UB, or don't and lose reporting in case of a crash. As for which alternative is selected as the default, my view on this is:
1. In my experience catching signals did not cause any deadlocks in 99% of cases. IMO proper reporting prevails.
2. Opting to ignore system signals will cause halts in regression testing in some cases (unless we change the setup explicitly).
3. This default was selected from the very beginning. I would be wary of changing it (the same as changing an interface, IMO).
4. I suspect that selecting the other alternative would cause more unhappy users than this one (someone is always unhappy - a fact of life ;).
Gennadiy

Gennadiy Rozental wrote:
1. In my experience catching signals did not cause any deadlocks in 99% of cases. IMO proper reporting prevails.
The problem is that the deadlocks only happen if a test raises a signal. This is usually not the case. In automated regression testing (like Boost's), at some point there will be such a deadlock, and testing stalls until someone realizes that a test is deadlocked. Then that person will spend quite some time trying to find out what happened.
2. Opting to ignore system signals will cause halts in regression testing in some cases (unless we change the setup explicitly).
No, it's the other way round. The UB causes halts in regression testing. I experienced dozens of these incidents. As I said, I wasted days of CPU and human time on this problem. How does not mapping a signal to exceptions and letting the process die instead cause halts in regression testing? Regards, m

2. Opting to ignore system signals will cause halts in regression testing in some cases (unless we change the setup explicitly).
No, it's the other way round. The UB causes halts in regression testing. I experienced dozens of these incidents. As I said, I wasted days of CPU and human time on this problem.
How does not mapping a signal to exceptions and letting the process die instead cause halts in regression testing?
Because some compilers would show a dialog window, for example. Unfortunately there is no silver bullet here. One will have to deal with stalling regression tests one way or another. Which case has fewer incidents is an open question. Gennadiy
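For what it's worth, the dialog problem itself can be attacked at the process level on Windows. The sketch below uses generic Win32 calls; it is an assumption about how a test runner might suppress the fault dialog, not something Boost.Test is documented in this thread to do, and CRT assert dialogs would need separate handling.

// Sketch: ask Windows not to pop up the fault dialog for this process, so
// an unhandled crash terminates the test instead of stalling on a dialog.
#ifdef _WIN32
#include <windows.h>

void disable_crash_dialogs()
{
    // Preserve whatever error-mode bits are already set, then add the two
    // that suppress the critical-error and GP-fault message boxes.
    SetErrorMode(SetErrorMode(0) | SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX);
}
#endif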

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
2. Opting to ignore system signals will cause halts in regression testing in some cases (unless we change the setup explicitly).
No, it's the other way round. The UB causes halts in regression testing. I experienced dozens of these incidents. As I said, I wasted days of CPU and human time on this problem.
How does not mapping a signal to exceptions and letting the process die instead cause halts in regression testing?
Because some compilers would show a dialog window, for example. Unfortunately there is no silver bullet here. One will have to deal with stalling regression tests one way or another. Which case has fewer incidents is an open question.
In the absence of other data, it seems to me that Martin's report should be given more weight. P.S. Could you please use proper attributions so I don't have to go back in time to see who you're responding to? Thanks. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjenbyp.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
2. Opting to ignore system signals will cause halts in regression testing in some cases (unless we change the setup explicitly).
No, it's the other way round. The UB causes halts in regression testing. I experienced dozens of these incidents. As I said, I wasted days of CPU and human time on this problem.
How does not mapping a signal to exceptions and letting the process die instead cause halts in regression testing?
Because some compilers would show a dialog window, for example. Unfortunately there is no silver bullet here. One will have to deal with stalling regression tests one way or another. Which case has fewer incidents is an open question.
In the absence of other data, it seems to me that Martin's report should be given more weight.
What do you mean by "absence of other data"? I know for sure that several NT compilers will produce a dialog window. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjenbyp.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
2. Opting to ignore system signals will cause halts in regression testing in some cases (unless we change the setup explicitly).
No, it's the other way round. The UB causes halts in regression testing. I experienced dozens of these incidents. As I said, I wasted days of CPU and human time on this problem.
How does not mapping a signal to exceptions and letting the process die instead cause halts in regression testing?
Because some compilers would show a dialog window, for example. Unfortunately there is no silver bullet here. One will have to deal with stalling regression tests one way or another. Which case has fewer incidents is an open question.
In the absence of other data, it seems to me that Martin's report should be given more weight.
What do you mean by "absence of other data"? I know for sure that several NT compilers will produce a dialog window.
Hmm, maybe I misunderstood the argument. Isn't there a way of encoding this information in the library and allowing tests to specify a default mode, e.g.:
"By default, I am being run as part of an automated test suite and should not stall the process"
or
"By default I am being run by hand..."
maybe this mode specification thing is even unnecessary, I don't know. But if you know which platforms and compilers will benefit from mapping signals, it seems to me you should only do it there. -- Dave Abrahams Boost Consulting www.boost-consulting.com
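A sketch of what such a mode specification might look like, using invented names (no such API exists in Boost.Test; this only illustrates the proposal):

// Invented names, for illustration only: a test declares how it is usually
// run, and an environment variable set by the regression harness can
// override that declared default.
#include <cstdlib>
#include <cstring>

enum class run_mode { automated, interactive };

// Returns true if signals should be mapped to exceptions for this run.
bool catch_signals(run_mode declared_default)
{
    if (const char* env = std::getenv("TEST_RUN_MODE")) {
        if (std::strcmp(env, "automated") == 0)
            return false;  // let the process die; the harness records the failure
        if (std::strcmp(env, "interactive") == 0)
            return true;   // map signals to exceptions for a readable report
    }
    return declared_default == run_mode::interactive;
}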

"David Abrahams" <dave@boost-consulting.com> wrote in message news:uody2kgqv.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjenbyp.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
2. Opting to ignore system signals will cause halts in regression testing in some cases (unless we change the setup explicitly).
No, it's the other way round. The UB causes halts in regression testing. I experienced dozens of these incidents. As I said, I wasted days of CPU and human time on this problem.
How does not mapping a signal to exceptions and letting the process die instead cause halts in regression testing?
Because some compilers would show a dialog window, for example. Unfortunately there is no silver bullet here. One will have to deal with stalling regression tests one way or another. Which case has fewer incidents is an open question.
In the absence of other data, it seems to me that Martin's report should be given more weight.
What do you mean by "absence of other data"? I know for sure that several NT compilers will produce a dialog window.
Hmm, maybe I misunderstood the argument. Isn't there a way of encoding this information in the library and allowing tests to specify a default mode, e.g.:
"By default, I am being run as part of an automated test suite and should not stall the process"
or
"By default I am being run by hand..."
I don't know about the default (how do you expect the library to figure out whether it's being run from a regression run or by hand?), but users can specify how they want the library to behave using either a CLA or an environment variable (or a config file, starting next release).
maybe this mode specification thing is even unnecessary, I don't know. But if you know which platforms and compilers will benefit from mapping signals, it seems to me you should only do it there.
IMO all users on all platforms could benefit from signal catching (both in regression runs and runs by hand). At the same time, the same users need to understand the possibility (small, IMO) of a hung test in case of severe memory corruption. Now you propose to have different default behavior on different platforms. I would strongly oppose such inconsistency (and don't forget the other reasons mentioned in this thread to keep the current default). Regression tool maintainers (and/or library developers) need to decide what they want to do on a case-by-case (tool-by-tool, or even better, test/tool-by-test/tool) level. And unfortunately in some cases, whatever you choose, you will still be exposed to the possibility of a hung run (from either a deadlock or a dialog message). Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:uody2kgqv.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjenbyp.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
Martin wrote:
No, it's the other way round. The UB causes halts in regression testing. I experienced dozens of these incidents. As I said, I wasted days of CPU and human time on this problem.
How does not mapping a signal to exceptions and letting the process die instead cause halts in regression testing?
Because some compilers would show a dialog window, for example. Unfortunately there is no silver bullet here. One will have to deal with stalling regression tests one way or another. Which case has fewer incidents is an open question.
In the absence of other data, it seems to me that Martin's report should be given more weight.
What do you mean by "absence of other data"? I know for sure that several NT compilers will produce a dialog window.
Hmm, maybe I misunderstood the argument. Isn't there a way of encoding this information in the library and allowing tests to specify a default mode, e.g.:
"By default, I am being run as part of an automated test suite and should not stall the process"
or
"By default I am being run by hand..."
I don't know about the default (how do you expect the library to figure out whether it's being run from a regression run or by hand?), but users can specify how they want the library to behave using either a CLA or an environment variable (or a config file, starting next release).
I'm not sure whether it's the right kind of specification, though. Is it? What I'm looking for is a specification that says, "run the test so that a regression test is least likely to halt, whatever that means on this particular system/compiler," not "run the test with signal handlers that throw exceptions."
maybe this mode specification thing is even unnecessary, I don't know. But if you know which platforms and compilers will benefit from mapping signals, it seems to me you should only do it there.
IMO all users on all platforms could benefit from signal catching (both in regression runs and runs by hand).
Clearly Martin was not benefitting.
At the same time, the same users need to understand the possibility (small, IMO) of a hung test in case of severe memory corruption.
This was apparently not a small probability for Martin. He said it happened dozens of times.
Now you propose to have different default behavior on different platforms. I would strongly oppose such inconsistency (and don't forget the other reasons mentioned in this thread to keep the current default). Regression tool maintainers (and/or library developers) need to decide what they want to do on a case-by-case (tool-by-tool, or even better, test/tool-by-test/tool) level.
I fundamentally disagree that that is the ideal. It's one of the major goals of Boost.Build and the testing framework built around it to centralize expertise about platforms, tests, compilers, etc., so a library developer does _not_ need to be an expert in every platform, compiler, etc. that his library may run under. One should be able to use high level abstractions to describe what needs to be accomplished, and allow the knowledge of platform-specifics embedded in the framework to take care of the details. So far, we've been pretty successful. However, if Boost.Test doesn't cooperate with that approach, it will either undermine the effort, or we will have to stop using it in the Boost regression tests.
And unfortunately in some cases, whatever you choose, you will still be exposed to the possibility of a hung run (from either a deadlock or a dialog message).
There's always a chance, just as sometimes you may need to learn specific arcanities of a given compiler in order to build a library there. The goal is to minimize those occurrences. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjdhqcg.fsf@boost-consulting.com...
I don't know about the default (how do you expect the library to figure out whether it's being run from a regression run or by hand?), but users can specify how they want the library to behave using either a CLA or an environment variable (or a config file, starting next release).
I'm not sure whether it's the right kind of specification, though. Is it? What I'm looking for is a specification that says, "run the test so that a regression test is least likely to halt, whatever that means on this particular system/compiler," not "run the test with signal handlers that throw exceptions."
This kind of specification makes sense for the regression tool's options decision. At the library level, which is used by the regression tool, one needs to provide options for both alternatives. Enforcing different defaults at the library level on different compilers is completely unacceptable IMO. Library behavior should be consistent.
maybe this mode specification thing is even unnecessary, I don't know. But if you know which platforms and compilers will benefit from mapping signals, it seems to me you should only do it there.
IMO all users on all platforms could benefit from signal catching (both in regression runs and runs by hand).
Clearly Martin was not benefitting.
In this particular case. In some other cases he will (IMO)
At the same time, the same users need to understand the possibility (small, IMO) of a hung test in case of severe memory corruption.
This was apparently not a small probability for Martin. He said it happened dozens of times.
Maybe for the same particular test. I personally never had any problems with it. Did you?
Now you propose to have different default behavior on different platforms. I would strongly oppose such inconsistency (and don't forget the other reasons mentioned in this thread to keep the current default). Regression tool maintainers (and/or library developers) need to decide what they want to do on a case-by-case (tool-by-tool, or even better, test/tool-by-test/tool) level.
I fundamentally disagree that that is the ideal. It's one of the major goals of Boost.Build and the testing framework built around it to centralize expertise about platforms, tests, compilers, etc., so a library developer does _not_ need to be an expert in every platform, compiler, etc. that his library may run under. One should be able to use high level abstractions to describe what needs to be accomplished, and allow the knowledge of platform-specifics embedded in the framework to take care of the details. So far, we've been pretty successful. However, if Boost.Test doesn't cooperate with that approach, it will either undermine the effort, or we will have to stop using it in the Boost regression tests.
Library developers don't need to know anything in most cases. I would assume that it's the regression tool developer/maintainer's responsibility in most cases. What you propose:
1. Makes library behavior inconsistent for the regular user
2. Makes some users that actually prefer the existing default unhappy
3. Will change the default that is used currently
4. Causes new users not to get the best from the library until they learn the advanced options
5. Doesn't present an ultimate solution for regression testing: whatever default is chosen for any platform, it still may stall.
6. Could be implemented at the regression tool/library Jamfile level without causing 1, 2, 3, 4
7. Is trying to solve a problem for very few users by causing different problems for many others
If you are so adamant that signals should not be caught, why don't you just set this option for all tests automatically in Boost.Build? Let's see how many complaints you get. Gennadiy

Gennadiy Rozental wrote:
This was apparently not a small probability for Martin. He said it happened dozens of times.
Maybe for the same particular test. I personally never had any problems with it. Did you?
No, definitely not for a single test. Many of Spirit's tests were affected (that's one of the reasons why Spirit doesn't use Boost.Test anymore). Several other Boost libraries' tests were also affected (e.g. Program Options, Regex). I just gave up reporting the problems. Regards, m

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:ufyjdhqcg.fsf@boost-consulting.com...
I don't know about the default (how do you expect the library to figure out whether it's being run from a regression run or by hand?), but users can specify how they want the library to behave using either a CLA or an environment variable (or a config file, starting next release).
I'm not sure whether it's the right kind of specification, though. Is it? What I'm looking for is a specification that says, "run the test so that a regression test is least likely to halt, whatever that means on this particular system/compiler," not "run the test with signal handlers that throw exceptions."
This kind of specification makes sense for the regression tool's options decision.
I don't know what "regression tool options decision" means.
At the library level, which is used by the regression tool, one needs to provide options for both alternatives.
Probably.
Enforcing different defaults at the library level on different compilers is completely unacceptable IMO. Library behavior should be consistent.
Sure, but it all depends on what you mean by "behavior." You measure behavior by the answer to "do the signal handlers throw exceptions?" Other people measure behavior by the answer to the question, "does the test terminate without user intervention."
maybe this mode specification thing is even unnecessary, I don't know. But if you know which platforms and compilers will benefit from mapping signals, it seems to me you should only do it there.
IMO all users on all platforms could benefit from signal catching (both in regression runs and runs by hand).
Clearly Martin was not benefitting.
In this particular case. In some other cases he will (IMO)
Not if he gives up on the library before he ever gets to those cases.
At the same time, the same users need to understand the possibility (small, IMO) of a hung test in case of severe memory corruption.
This was apparently not a small probability for Martin. He said it happened dozens of times.
Maybe for the same particular test. I personally never had any problems with it. Did you?
Of course not. But I mostly avoid the test library. Its default behaviors are well-suited to a relatively large investment in learning, framework, and automation. For me, barriers to writing tests have to be extremely low, because I just have too many other things to think about.
Now you propose to have different default behavior on different platforms. I would strongly oppose such inconsistency (and don't forget the other reasons mentioned in this thread to keep the current default). Regression tool maintainers (and/or library developers) need to decide what they want to do on a case-by-case (tool-by-tool, or even better, test/tool-by-test/tool) level.
I fundamentally disagree that that is the ideal. It's one of the major goals of Boost.Build and the testing framework built around it to centralize expertise about platforms, tests, compilers, etc., so a library developer does _not_ need to be an expert in every platform, compiler, etc. that his library may run under. One should be able to use high level abstractions to describe what needs to be accomplished, and allow the knowledge of platform-specifics embedded in the framework to take care of the details. So far, we've been pretty successful. However, if Boost.Test doesn't cooperate with that approach, it will either undermine the effort, or we will have to stop using it in the Boost regression tests.
Library developers don't need to know anything in most cases. I would assume that it's the regression tool developer/maintainer's responsibility in most cases.
Okay; does Boost.Test give the regression tools developer/maintainer the means to control this option?
What you propose:
1. Makes library behavior inconsistent for the regular user
Depends what you're measuring.
2. Makes some users that actually prefer the existing default unhappy
3. Will change the default that is used currently
4. Causes new users not to get the best from the library until they learn the advanced options
Depends how you measure "best."
5. Doesn't present an ultimate solution for regression testing: whatever default is chosen for any platform, it still may stall.
6. Could be implemented at the regression tool/library Jamfile level without causing 1, 2, 3, 4
How so?
How so?
7. Is trying to solve a problem for very few users by causing different problems for many others
The way things are going, your clients among Boost library developers (who may need this feature) will continue to dwindle.
If you are so adamant that signals should not be caught,
I am not. I am adamant that the library design and its integration with the Boost testing system should be responsive to the needs of Boost regression testing, and especially responsive to reports of showstopper bugs. It's neither good for Boost.Test nor good for Boost as a whole if libraries like Spirit end up feeling compelled to dump the use of Boost.Test.
why don't you just set this option for all tests automatically in Boost.Build?
Because I'm already overwhelmed with other responsibilities, and don't possess the domain knowledge to do it. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:uodxr6mzi.fsf@boost-consulting.com...
maybe this mode specification thing is even unnecessary, I don't know. But if you know which platforms and compilers will benefit from mapping signals, it seems to me you should only do it there.
IMO all users on all platforms could benefit from signal catching (both in regression runs and runs by hand).
Clearly Martin was not benefitting.
In this particular case. In some other cases he will (IMO)
Not if he gives up on the library before he ever gets to those cases.
I don't see how I could help it without breaking consistency in library behavior.
At the same time, the same users need to understand the possibility (small, IMO) of a hung test in case of severe memory corruption.
This was apparently not a small probability for Martin. He said it happened dozens of times.
Maybe for the same particular test. I personally never had any problems with it. Did you?
Of course not. But I mostly avoid the test library. Its default behaviors are well-suited to a relatively large investment in learning, framework, and automation.
I don't think it's true. The only real problem is lack of proper documentation.
For me, barriers to writing tests have to be extremely low, because I just have too many other things to think about.
I worked very hard on usability over the last couple of releases. I don't see how it could be made easier for users at the moment.
Now you propose to have different default behavior on different platforms. I would strongly oppose such inconsistency (and don't forget the other reasons mentioned in this thread to keep the current default). Regression tool maintainers (and/or library developers) need to decide what they want to do on a case-by-case (tool-by-tool, or even better, test/tool-by-test/tool) level.
I fundamentally disagree that that is the ideal. It's one of the major goals of Boost.Build and the testing framework built around it to centralize expertise about platforms, tests, compilers, etc., so a library developer does _not_ need to be an expert in every platform, compiler, etc. that his library may run under. One should be able to use high level abstractions to describe what needs to be accomplished, and allow the knowledge of platform-specifics embedded in the framework to take care of the details. So far, we've been pretty successful. However, if Boost.Test doesn't cooperate with that approach, it will either undermine the effort, or we will have to stop using it in the Boost regression tests.
Library developers don't need to know anything in most cases. I would assume that it's the regression tool developer/maintainer's responsibility in most cases.
Okay; does Boost.Test give the regression tools developer/maintainer the means to control this option?
Yes it does. Using CLA and environment variables. Next release it will support config files as well.
What you propose:
1. Makes library behavior inconsistent for the regular user
Depends what you're measuring.
I measure library behavior in regular (non-extreme) circumstances.
2. Makes some users that actually prefer the existing default unhappy
3. Will change the default that is used currently
4. Causes new users not to get the best from the library until they learn the advanced options
Depends how you measure "best."
5. Doesn't present an ultimate solution for regression testing: whatever default is chosen for any platform, it still may stall.
6. Could be implemented at the regression tool/library Jamfile level without causing 1, 2, 3, 4
How so?
Specify additional CLAs for running tests in the appropriate tool files (or testing.jam).
7. Is trying to solve a problem for very few users by causing different problems for many others
The way things are going, your clients among Boost library developers (who may need this feature) will continue to dwindle.
Funny thing is that you don't propose a solution that would address this need (simply because there is none, IMO)
If you are so adamant that signals should not be caught,
I am not. I am adamant that the library design and its integration with the Boost testing system should be responsive to the needs of Boost regression testing,
I am responsive. I don't think that Boost.Test is the one that needs to be fixed.
and especially responsive to reports of showstopper bugs. It's neither good for Boost.Test nor good for Boost as a whole if libraries like Spirit end up feeling compelled to dump the use of Boost.Test.
I agree. Why don't we at least try to consider what I propose?
why don't you just set this option for all tests automatically in Boost.Build?
Because I'm already overwhelmed with other responsibilities, and don't possess the domain knowledge to do it.
I don't mean you personally. I mean Boost.Build developers. Gennadiy
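For concreteness, the per-run control referred to above looks roughly like this; the option and variable spellings are taken from later Boost.Test documentation and should be treated as an assumption, not something this thread confirms:

my_test --catch_system_errors=no
BOOST_TEST_CATCH_SYSTEM_ERRORS=no ./my_test

Either form tells the framework not to trap signals, so a crashing test dies on its own rather than risking the deadlock discussed above.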

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:uodxr6mzi.fsf@boost-consulting.com...
maybe this mode specification thing is even unnecessary, I don't know. But if you know which platforms and compilers will benefit from mapping signals, it seems to me you should only do it there.
IMO all users on all platforms could benefit from signal catching (both in regression runs and runs by hand).
Clearly Martin was not benefitting.
In this particular case. In some other cases he will (IMO)
Not if he gives up on the library before he ever gets to those cases.
I don't see how I could help it without breaking consistency in library behavior.
Sometimes a break with consistency is a pragmatic and appropriate thing to do.
At the same time, the same users need to understand the possibility (small, IMO) of a hung test in case of severe memory corruption.
This was apparently not a small probability for Martin. He said it happened dozens of times.
Maybe for the same particular test. I personally never had any problems with it. Did you?
Of course not. But I mostly avoid the test library. Its default behaviors are well-suited to a relatively large investment in learning, framework, and automation.
I don't think it's true. The only real problem is lack of proper documentation.
Without proper documentation it requires at least a large investment in learning. And even if the perception that a large investment in framework and automation is required is wrong, without proper documentation such perceptions will persist.
For me, barriers to writing tests have to be extremely low, because I just have too many other things to think about.
I worked very hard on usability over the last couple of releases. I don't see how it could be made easier for users at the moment.
I think that job isn't finished yet. I keep hearing from people who try to use the test library and are frustrated (I'm sorry, I'm not naming my sources).
Library developers don't need to know anything in most cases. I would assume that it's the regression tool developer/maintainer's responsibility in most cases.
Okay; does Boost.Test give the regression tools developer/maintainer the means to control this option?
Yes it does. Using CLA and environment variables. Next release it will support config files as well.
Then we should be able to set the options conditionally in the Boost testing framework, to reduce the likelihood of hanging. That's good news.
and especially responsive to reports of showstopper bugs. It's neither good for Boost.Test nor good for Boost as a whole if libraries like Spirit end up feeling compelled to dump the use of Boost.Test.
I agree. Why don't we at least try to consider what I propose?
Had to understand it first :)
why don't you just set this option for all tests automatically in Boost.Build?
Because I'm already overwhelmed with other responsibilities, and don't possess the domain knowledge to do it.
I don't mean you personally. I mean Boost.Build developers.
I think, to the extent to which an interaction with Boost.Test behavior causes the problem, the information about how to set the options should be maintained with Boost.Test. Also, it should be included in the Boost.Test documentation, because Boost won't be the only project with the same issues. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:uhd3a68v2.fsf@boost-consulting.com...
Of course not. But I mostly avoid the test library. Its default behaviors are well-suited to a relatively large investment in learning, framework, and automation.
I don't think it's true. The only real problem is lack of proper documentation.
Without proper documentation it requires at least a large investment in learning. And even if the perception that a large investment in framework and automation is required is wrong, without proper documentation such perceptions will persist.
I agree with all points.
For me, barriers to writing tests have to be extremely low, because I just have too many other things to think about.
I worked very hard on usability over the last couple of releases. I don't see how it could be made easier for users at the moment.
I think that job isn't finished yet. I keep hearing from people who try to use the test library and are frustrated (I'm sorry, I'm not naming my sources).
Could you at least present reasons for their frustration?
Library developers don't need to know anything in most cases. I would assume that it's the regression tool developer/maintainer's responsibility in most cases.
Okay; does Boost.Test give the regression tools developer/maintainer the means to control this option?
Yes it does. Using CLA and environment variables. Next release it will support config files as well.
Then we should be able to set the options conditionally in the Boost testing framework, to reduce the likelihood of hanging. That's good news.
That's what I am trying to say: the best solution IMO is to set it up in either the tool or testing.jam
why don't you just set this option for all tests automatically in Boost.Build?
Because I'm already overwhelmed with other responsibilities, and don't possess the domain knowledge to do it.
I don't mean you personally. I mean Boost.Build developers.
I think, to the extent to which an interaction with Boost.Test behavior causes the problem, the information about how to set the options should be maintained with Boost.Test. Also, it should be included in the Boost.Test documentation, because Boost won't be the only project with the same issues.
Yes, this needs to be covered in the docs. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
For me, barriers to writing tests have to be extremely low, because I just have too many other things to think about.
I worked very hard on usability over the last couple of releases. I don't see how it could be made easier for users at the moment.
I think that job isn't finished yet. I keep hearing from people who try to use the test library and are frustrated (I'm sorry, I'm not naming my sources).
Could you at least present reasons for their frustration?
For example:
anonymous: why do i need a separate .lib?
anonymous: <sigh>
anonymous: my needs are very simple
anonymous: boost.test is too complicated
anonymous: the minimal test framework met my needs
Library developers don't need to know anything in most cases. I would assume that it's the regression tool developer/maintainer's responsibility in most cases.
Okay; does Boost.Test give the regression tools developer/maintainer the means to control this option?
Yes it does. Using CLA and environment variables. Next release it will support config files as well.
Then we should be able to set the options conditionally in the Boost testing framework, to reduce the likelihood of hanging. That's good news.
That's what I am trying to say: the best solution IMO is to set it up in either the tool or testing.jam
Great, but...
why don't you just set this option for all tests automatically in Boost.Build?
Because I'm already overwhelmed with other responsibilities, and don't possess the domain knowledge to do it.
I don't mean you personally. I mean Boost.Build developers.
I think, to the extent to which an interaction with Boost.Test behavior causes the problem, the information about how to set the options should be maintained with Boost.Test. Also, it should be included in the Boost.Test documentation, because Boost won't be the only project with the same issues.
Yes, this needs to be covered in the docs.
...given that fact, it seems obvious to me that the test library maintainer should take responsibility for maintaining that setting in the build/test framework. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:uzmh2488y.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
For me, barriers to writing tests have to be extremely low, because I just have too many other things to think about.
I worked very hard on usability over the last couple of releases. I don't see how it could be made easier for users at the moment.
I think that job isn't finished yet. I keep hearing from people who try to use the test library and are frustrated (I'm sorry, I'm not naming my sources).
Could you at least present reasons for their frustration?
For example:
anonymous: why do i need a separate .lib?
It's optional. You could always use the "included" variant.
anonymous: <sigh>
?
anonymous: my needs are very simple
anonymous: boost.test is too complicated
This is not a reason. This is an outcome. What is complicated?
anonymous: the minimal test framework met my needs
The Unit Test Framework does the same with less typing:

#include <boost/test/minimal.hpp>

int test_main( int /*argc*/, char* /*argv*/[] )
{
    int i = 1;
    BOOST_CHECK( i == 1 );
    return 0;
}

vs.

#define BOOST_TEST_MODULE my_test
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE( test_main )
{
    int i = 1;
    BOOST_CHECK( i == 1 );
}

So this point is invalid here IMO.
That's what I am trying to say is the best solution IMO - set it up in either tool or testing.jam
Great, but...
I think, to the extent to which an interaction with Boost.Test behavior causes the problem, the information about how to set the options should be maintained with Boost.Test. Also, it should be included in the Boost.Test documentation, because Boost won't be the only project with the same issues.
Yes, this needs to be covered in the docs.
...given that fact, it seems obvious to me that the test library maintainer should take responsibility for maintaining that setting in the build/test framework.
Unfortunately I don't know Boost.Build well enough to be able to do this. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
So this point is invalid here IMO.
Invalid or not, people (experienced Boosters) are still having this reaction. Saying it's invalid doesn't fix anything. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
...given that fact, it seems obvious to me that the test library maintainer should take responsibility for maintaining that setting in the build/test framework.
Unfortunately I don't know Boost.Build well enough to be able to do this.
Once you develop the list of settings-vs.-platforms/compilers, within a couple posts to the boost-build list you should have all the info you need. This is not complicated. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:umzd13b6o.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
...given that fact, it seems obvious to me that the test library maintainer should take responsibility for maintaining that setting in the build/test framework.
Unfortunately I don't know Boost.Build well enough to be able to do this.
Once you develop the list of settings-vs.-platforms/compilers, within a couple posts to the boost-build list you should have all the info you need. This is not complicated.
First of all, I don't have this list. And second, if I had time to invest in Boost at the moment, I would rather spend it on docs than dig through Boost.Build. Gennadiy

"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:umzd13b6o.fsf@boost-consulting.com...
"Gennadiy Rozental" <gennadiy.rozental@thomson.com> writes:
...given that fact, it seems obvious to me that the test library maintainer should take responsibility for maintaining that setting in the build/test framework.
Unfortunately I don't know Boost.Build well enough to be able to do this.
Once you develop the list of settings-vs.-platforms/compilers, within a couple posts to the boost-build list you should have all the info you need. This is not complicated.
First of all, I don't have this list. And second, if I had time to invest in Boost at the moment, I would rather spend it on docs than dig through Boost.Build.
Oh, well. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Sat, 27 May 2006 19:35:03 -0400 "Gennadiy Rozental" <gennadiy.rozental@thomson.com> wrote:
For me, barriers to writing tests have to be extremely low, because I just have too many other things to think about.
I worked very hard on usability over the last couple of releases. I don't see how it could be made easier for users at the moment.
I'll agree with this... the number of tests that I write has gone up dramatically since moving to Boost.Test. I have been gone for a while, and am only quickly glancing at messages (over 1000 messages to get through). Thus, I have not read all these posts in detail, and I'm confused why Spirit decided not to use it... However, as far as usage goes, the framework is very simple to use...
participants (10)
- Andy Little
- Darren Cook
- David Abrahams
- Gennadiy Rozental
- Jeff Flinn
- Jody Hagins
- Martin Wille
- Matias Capeletto
- Tomás Pecholt
- Toon Knapen