
So if the code of your algorithm is complicated because it does clever things for speed, or is parallelized, it is useful to validate it against a simple implementation that is much less likely to be wrong (see the sketch just below).
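Here's a minimal sketch of the idea - the pairwise summation, naive loop, and tolerance are toy stand-ins of my own, not anything from Boost.Math. A "clever" routine is cross-checked against a loop so plain it's hard to get wrong:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

// "Clever" implementation: recursive pairwise summation, standing in
// for whatever optimized or parallel code you actually want to test.
double pairwise_sum(const double* p, std::size_t n)
{
    if (n <= 8) {
        double s = 0;
        for (std::size_t i = 0; i < n; ++i) s += p[i];
        return s;
    }
    std::size_t half = n / 2;
    return pairwise_sum(p, half) + pairwise_sum(p + half, n - half);
}

// "Brain dead" reference: a plain left-to-right loop that is hard
// to get wrong, even if it is slower or less accurate.
double naive_sum(const double* p, std::size_t n)
{
    double s = 0;
    for (std::size_t i = 0; i < n; ++i) s += p[i];
    return s;
}

int main()
{
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> dist(-1.0, 1.0);

    for (int trial = 0; trial < 1000; ++trial) {
        std::vector<double> v(1000);
        double absum = 0;
        for (double& x : v) { x = dist(gen); absum += std::fabs(x); }

        double fast = pairwise_sum(v.data(), v.size());
        double ref  = naive_sum(v.data(), v.size());

        // The two summation orders legitimately round differently;
        // n * epsilon * sum(|x|) comfortably bounds that difference.
        if (std::fabs(fast - ref) > 1e-12 * absum) {
            std::printf("mismatch in trial %d: %.17g vs %.17g\n",
                        trial, fast, ref);
            return 1;
        }
    }
    std::printf("all trials agree within tolerance\n");
    return 0;
}
```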
What we tried to do with Boost.Math was:

* Generate test values from "known goods" where available - Mathematica or its online sibling functions.wolfram.com are good choices.

* Generate high-precision test values for random testing using code we wrote ourselves, but which uses a "brain dead" implementation copied directly from the definition of the function, with enough digits of precision to overcome any cancellation errors that may result. It doesn't matter how long this takes to run, as long as it gets there eventually... (there's a sketch of this after the list).

* Then double-check that the test cases actually cover all of the branches in the "real" code - sometimes it can be a real challenge to find cases that take a particular branch! (Also sketched below.)

Even with the above, there is a real issue with functions taking many parameters: it's basically *impossible* to fully cover the entire domain of the function with your test cases, given that the complexity grows exponentially with the number of parameters.
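To make the second bullet concrete, here's a minimal sketch of such a "brain dead" reference: erf computed straight from its Maclaurin series at 100 decimal digits using Boost.Multiprecision. The digit count and stopping tolerance are illustrative choices of mine, not what Boost.Math actually ships:

```cpp
#include <iomanip>
#include <iostream>
#include <boost/math/constants/constants.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>

// Work at ~100 decimal digits so cancellation in the alternating
// series cannot contaminate the double-precision digits we care about.
using big = boost::multiprecision::cpp_dec_float_100;

// "Brain dead" erf straight from the defining series:
//   erf(x) = (2/sqrt(pi)) * sum_k (-1)^k x^(2k+1) / (k! (2k+1)).
// Horribly slow for large x, but copied directly off the definition,
// so it's easy to convince yourself it's right.
big erf_reference(const big& x)
{
    big sum = 0;
    big term = x;      // k = 0: x^1 / 0!, sign positive
    const big x2 = x * x;
    for (unsigned k = 0; k < 10000; ++k) {
        sum += term / (2 * k + 1);
        // Advance (-1)^k x^(2k+1) / k! to the next k.
        term *= -x2 / (k + 1);
        // Stop once the terms are far below our working precision.
        if (abs(term) < abs(sum) * big("1e-90"))
            break;
    }
    return sum * 2 / boost::math::constants::root_pi<big>();
}

int main()
{
    // Emit 50 digits: far more than a double-precision test table needs.
    std::cout << std::setprecision(50)
              << erf_reference(big("0.5")) << '\n';
}
```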
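And for the third bullet, the kind of thing that needs deliberately targeted test points rather than purely random ones - a toy expm1-style routine (hypothetical threshold, purely for illustration) where random testing can easily miss the narrow small-argument branch:

```cpp
#include <cmath>
#include <cstdio>

// Toy branchy code in the style of real special-function
// implementations: a series for small arguments (where computing
// exp(x) - 1 directly would cancel badly) and the direct formula
// elsewhere.
double my_expm1(double x)
{
    if (std::fabs(x) < 1e-4) {
        // Small-x branch: truncated Maclaurin series of exp(x) - 1,
        // i.e. x + x^2/2 + x^3/6 + x^4/24 in Horner form.
        return x * (1 + x / 2 * (1 + x / 3 * (1 + x / 4)));
    }
    // Large-x branch: a completely different code path.
    return std::exp(x) - 1;
}

int main()
{
    // Deliberately pick points on *both* sides of the switch-over,
    // including just inside and just outside the threshold where an
    // approximation is usually at its weakest.
    const double pts[] = { 1e-8, 9.9e-5, 1.01e-4, 0.5, 10.0 };
    for (double x : pts)
        std::printf("my_expm1(%.3g) = %.17g (std::expm1: %.17g)\n",
                    x, my_expm1(x), std::expm1(x));
    return 0;
}
```

HTH, John.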