On 09/02/15 13:19, Antony Polukhin wrote:
The MIPS platform is not well tested. Running tests on MIPS and ARM is highly desirable.
As of now, we run Android tests on ARM (three ABIs: armeabi, armeabi-v7a and armeabi-v7a-hard), x86 and x86_64 targets. We'll add MIPS to the list of active architectures as soon as we fix the MIPS-specific issues that currently prevent us from running even simple C++ code using GNU libstdc++. We'll also run tests on ARM64 (AArch64) as soon as Google releases an ARM64 emulator, or as soon as we find another way (for example, a dedicated Nexus 9 tablet constantly plugged into the CI server).
The Android API level is not really essential for Boost; almost all Boost libraries work well on Android 2.3.3. Having the minimal (2.3 or higher) and, optionally, the maximal API levels covered in tests seems more than enough.
In fact, it's important to test with all more-or-less current API levels. Android libc is very unstable and differs from one Android version to another. We've tried to minimize such differences in CrystaX NDK, but without thorough testing we can't guarantee it works on all Android versions. Please note that Boost regression testing is not just testing of Boost itself; it's testing of CrystaX NDK too. We already have a set of automatic tests covering many things to ensure behavior is POSIX-compatible, but having the Boost tests pass with all API levels will give us even more assurance that there are no problems.
C++11/C++14 may enable code paths that are not exercised under C++98. Just enable those modes on modern compilers (GCC 4.9+, Clang 3.5+).
A few more notes:
* Hard/soft float settings may be essential for Boost.Math and for compiler testing.
* Some embedded developers compile code without RTTI and without exceptions. Testing this use case could be useful.
* Some of the tests in Thread, Atomic and ASIO only make sense if the host is multicore. Running them on single-core hosts may not be really valuable.
This makes sense to me. Thank you for pointing it out.
I think that more is better. There are a lot of regression testers right now, but I have no feeling of drowning in data: just scroll to the yellow "fail" label and investigate the issue.
Great! It definitely makes sense to me. I just don't want to violate the community's rules and make people discontented. As long as having many runners in the results table is OK for you all, it's OK for me too. -- Dmitry Moskalchuk