NDEBUG affects floating point math?
I'm running into some seriously strange numerical-accuracy problems when computing matrix products, and they depend on whether or not NDEBUG is defined when I compile. I wrote a small piece of C code that calculates a symmetric-matrix by row-matrix product by accessing memory directly (which, by the way, is about 1000% faster than ublas's generated code with gcc 3.2.3 and -O3; why is ublas so slow?). When I compile with -DNDEBUG the results differ by around +/- 1e-16, but when it's not defined the results match almost perfectly. Is something strange going on with FPU control words or types or anything? I'm using type double, by the way. If more info is needed I can post a test case, but I thought I'd make a quick inquiry first to see whether there is a fast answer.

-- Stephen
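For reference, here is a minimal sketch of the kind of loop I mean; the function name, the full row-major storage of the symmetric matrix, and the toy sizes are just placeholders, not my actual test case:

#include <stdio.h>

/* C = A * B, where A is an n x n symmetric matrix stored in full,
 * row-major order and B is an n x m row-major matrix.  The symmetry
 * of A is assumed but not otherwise exploited in this sketch. */
static void sym_times_dense(int n, int m, const double *A,
                            const double *B, double *C)
{
    int i, j, k;
    for (i = 0; i < n; ++i) {
        for (j = 0; j < m; ++j) {
            double sum = 0.0;
            for (k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * m + j];
            C[i * m + j] = sum;
        }
    }
}

int main(void)
{
    /* tiny 2x2 example just to exercise the loop */
    double A[] = { 2.0, 1.0,
                   1.0, 3.0 };
    double B[] = { 5.0, 7.0,
                   11.0, 13.0 };
    double C[4];

    sym_times_dense(2, 2, A, B, C);
    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
    return 0;
}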