
Brandon,
- did you look at the tests? I don't think we need a "place to start" because we already have many tests on this, in the distribution and beyond. What I mean is that we have already started and are on our way. But thanks for pointing this out.
I have looked at test_overlay.hpp, which seems to contain around 15 tests of very small polygons. My use cases require robust behavior on polygonal regions (with holes) that come from CAD for airports, cities, etc., with many operations performed sequentially and progressively on the outputs of previous iterations. So my point is that the tests we have seem inadequate for proving more than a rudimentary level of robustness/correctness. Perhaps I missed more tests?
We have more tests, and most of them test very small polygons. What I test with those is different cases/configurations (see your point below): the configurations where small polygons (preferably triangles, sometimes larger) are formed, in cases important for the implemented algorithm. The more detailed tests exercise the phases independently; they are rougher, so I didn't include them in the formal distribution. However, they can be provided.
Batteries of tests will never cover all cases in geometry because there are infinitely many input configurations.
I mentioned this because you (or Luke) mentioned batteries of tests... I agree, the total number of configurations is infinite. The number of cases that are distinct for the algorithm should be finite.
I think large numbers of tests on reasonably constructed random inputs do inspire more confidence that things work, and are a good way to uncover problems. Real-world use cases are probably the best, but these are much harder to implement at the sizes needed to really hammer the algorithm. I'll see if I can come up with some data to help with this.
Your test data is very welcome.
- it is still not clear to me if you refer to "algorithmic robustness" (handling all cases) or "100% numeric stability"
In this thread I have been talking about how it performs using native FP types, i.e. what I can expect from using a double. Clearly this is where the majority of your users will come in. I don't expect that it will work in all cases; as we all know, it probably cannot. My own experience with FP boolean operations has shown them to be flawed on models of only moderate complexity (which is why I hesitate to recommend them for Boost). Perhaps your algorithms fare better than the ones I've used (I hope!). At this point, from the tests you have, I don't know that we can be certain it works even when using something like GMP. We won't know until we test and find something that breaks using double but then works with GMP.

I would define algorithmic robustness by this condition: when something works with a GMP type but fails using a native FP type. I don't really say 100% numeric stability anywhere, do I? I think you never have this with native FP types. My assumption there is that numeric stability is defined as how a number's representation changes with use in arithmetic.
I have cases where float goes wrong and (long) double does not. I don't yet have a case where double goes wrong, but that should be feasible with some trial and error. Then I can test this against GMP types. Only (as you mentioned) I have to do some search/replace in segment-intersection (actually it is somewhat more than search/replace, of course) to have them there as well. However, if float goes wrong and double goes right, GMP should go right as well...

Barend