
2015-02-08 0:15 GMT+04:00 Dmitry Moskalchuk
However, I've realized that the matrix of variants for thorough testing becomes really big. Right now there are nine runners for Android, and even that number is in fact quite limited. They differ by target ABI (three ARM variants, x86 and x86_64) and by Android version (API level 19 - Android 4.4 - and API level 21 - Android 5.0). However, it makes sense to also test on Android 4.0, 4.1, 4.2 and 4.3, since their market share is still large (see https://developer.android.com/about/dashboards/index.html). Add to that the MIPS target ABI (not yet included in the testing), and multiply by two to run the tests with both default settings and with -std=c++11 - and you get a really big total number of runners.
The MIPS platform is not well tested; running tests on MIPS and ARM is highly desirable. The Android API level is not really essential for Boost - almost all Boost libraries work well on Android 2.3.3. Having the minimal (2.3 or higher) and, optionally, the maximal API levels covered in tests seems more than enough. C++11/C++14 may enable some code paths that are not exercised with C++98; just enable those on modern compilers (GCC 4.9+, Clang 3.5+). A few more notes:
* Hard/soft float may be essential for Boost.Math and compiler testing.
* Some embedded developers compile code without RTTI and without exceptions. Testing this use case could be useful.
* Some of the tests in Thread, Atomic and ASIO only make sense if the host is multicore. Running those tests on single-core hosts may not be really valuable.
I'm asking for advice from the community. It's not a big problem for us to run all these tests in all variants, but I'm unsure whether such a wide table of runners will be acceptable to the Boost community. I'm afraid it will look like a flood. We also publish Android-only results on https://boost.crystax.net/master/developer/summary.html and we'll definitely display the results from all runners there. Please let me know if the same approach would work for http://www.boost.org/development/tests/master/developer/summary.html or whether I should somehow limit the runners uploaded to the Boost FTP.
I think that more is better. There are a lot of regression testers right now, but I have no feeling of drowning in data: just scroll to the yellow "fail" label and investigate the issue.

--
Best regards,
Antony Polukhin