
On 19/12/2014 12:56, Neal Becker wrote:
oswin krause wrote:
On 19.12.2014 00:00, Semen Trygubenko / Семен Тригубенко wrote:
On the positive side, our tests are now much more robust to that sort of change. :)
We had the same issue a few years ago when we switched to boost::random: almost every test broke. It turned out that testing for specific values, or against only a small number of samples (the only kinds of test affected by this sort of change), can mask a lot of bugs: even when individual values look correct, confidence intervals or measured variances can still be off. So for us a more robust test also meant a better test, one that actually discovered bugs.
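For illustration, a minimal hypothetical sketch of what such a statistics-based test might look like (names, seed and tolerances are made up, this is not our actual test code):

// Hypothetical sketch: check sample statistics of a normal(0,1) generator
// against confidence bounds instead of comparing individual values exactly.
#include <cassert>
#include <cmath>
#include <random>

int main()
{
    std::mt19937 rng(42);                        // fixed seed for reproducibility
    std::normal_distribution<double> dist(0.0, 1.0);

    const std::size_t n = 100000;
    double sum = 0.0, sum_sq = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double x = dist(rng);
        sum += x;
        sum_sq += x * x;
    }
    const double mean = sum / n;
    const double variance = sum_sq / n - mean * mean;

    // Standard error of the mean is sigma/sqrt(n); allow roughly 4 sigma,
    // so the test passes regardless of which generation algorithm is used.
    assert(std::fabs(mean) < 4.0 / std::sqrt(double(n)));
    assert(std::fabs(variance - 1.0) < 0.05);    // coarse bound on the variance
    return 0;
}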
But yeah: such a change should be part of the change-log.
Not sufficient! Any such change should be discussed with the boost user community first. Boost should have a policy that any breaking changes are discussed on boost-dev.
I second that: first the proposal should be discussed, and if implemented it should be announced very loudly in the change-logs.

For what it's worth, I believe it would even have been possible to offer different variate-generation algorithms (for any distribution) without breaking current behaviour: an extra template argument on the distribution itself, with the current algorithm as the default type. That way we'd have (almost) the best of both worlds (see the sketch in the P.S. below). Again, a discussion with the community beforehand would be useful.

As to simply making the tests robust: from my own real-world work I can come up with cases where this is not possible out of the box, e.g. when obtaining a non-small sample takes a very long time (> weeks; sorry, I'm not waiting that long to test boost), or when there is no way around truly comparing values for [floating-point near-] equality.

cheers,
Thomas
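P.S. A rough sketch of the extra-template-argument idea (hypothetical names, simplified to a normal distribution; not the actual Boost.Random interface):

// Hypothetical sketch only: the algorithm used to generate variates is an
// extra template parameter, and the current algorithm stays the default,
// so existing code keeps its exact output.
#include <cmath>
#include <random>

struct box_muller_method {                 // stand-in for the "current" algorithm
    template <class Engine>
    static double generate(Engine& eng, double mean, double sigma) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        const double two_pi = 6.283185307179586;
        const double r = std::sqrt(-2.0 * std::log(1.0 - u(eng)));
        return mean + sigma * r * std::cos(two_pi * u(eng));
    }
};

struct ziggurat_method;                    // a different algorithm could be added later

template <class RealType = double, class Method = box_muller_method>
class my_normal_distribution {
public:
    explicit my_normal_distribution(RealType mean = 0, RealType sigma = 1)
        : mean_(mean), sigma_(sigma) {}

    template <class Engine>
    RealType operator()(Engine& eng) const {
        return Method::generate(eng, mean_, sigma_);
    }

private:
    RealType mean_;
    RealType sigma_;
};

// Existing code keeps its values:  my_normal_distribution<> d1;
// New behaviour is opt-in:         my_normal_distribution<double, ziggurat_method> d2;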