
On Tue, Nov 17, 2009 at 12:23 PM, Phil Endecott <spam_from_boost_dev@chezphil.org> wrote:
Jonathan Franklin wrote:
On Tue, Nov 17, 2009 at 6:35 AM, Brandon Kohn <blkohn@hotmail.com> wrote:
My own view is that something that only works part of the time really doesn't work.
I'm sure that's great for what you do, and it is valid in many use cases.
The visualization software that I work on, on the other hand, prefers speed over making sure that point p1 is provably inside or outside the box. If it's *that* close, then it doesn't matter.
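For concreteness, the kind of fast, tolerance-based test I mean might look like the sketch below. The Point and Box types and the contains_approx name are purely illustrative, not GGL's API:

struct Point { double x, y; };
struct Box   { Point min, max; };

// Fast point-in-box test with an epsilon of slack: points within eps of
// the boundary are reported as inside. Callers who prefer speed accept
// that the answer for such borderline points is essentially arbitrary.
bool contains_approx(const Box& b, const Point& p, double eps = 1e-9)
{
    return p.x >= b.min.x - eps && p.x <= b.max.x + eps
        && p.y >= b.min.y - eps && p.y <= b.max.y + eps;
}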
Hi Jonathan,
Do please have a look at the paper that I linked to before, and in particular the figure at the top of the second page:
http://www.mpi-inf.mpg.de/~kettner/pub/nonrobust_cgta_06.pdf
Two points:
1. Failures do not occur only in cases where things are very close; a problematic input may cause the output to be grossly wrong.
2. The failure may not even be a wrong answer; the program might instead segfault or loop forever.
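For a concrete picture of point 1, here is a small C++ sketch along the lines of the paper's experiment: it fixes q and r, moves p across a 32x32 grid of adjacent doubles, and prints the sign that a naive double-precision orientation predicate reports. With exact arithmetic the grid would show a clean split into '+', '0', and '-' regions; with doubles it generally does not.

#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Naive floating-point orientation predicate: positive means a left
// turn p -> q -> r, negative a right turn, zero collinear.
double orient2d(Point p, Point q, Point r)
{
    return (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
}

int main()
{
    // The paper's setup: q and r on the line y = x, p near (0.5, 0.5).
    Point q{12.0, 12.0}, r{24.0, 24.0};
    double py = 0.5;
    for (int i = 0; i < 32; ++i) {
        double px = 0.5;
        for (int j = 0; j < 32; ++j) {
            double d = orient2d(Point{px, py}, q, r);
            std::putchar(d > 0 ? '+' : (d < 0 ? '-' : '0'));
            px = std::nextafter(px, 1.0);  // step one ulp in x
        }
        std::putchar('\n');
        py = std::nextafter(py, 1.0);      // step one ulp in y
    }
}

Feeding such an inconsistently-signed predicate into an incremental convex hull or triangulation is exactly how the paper drives those algorithms into grossly wrong output or non-termination.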
This discussion raises some important implementation considerations, but for the purposes of this review, the question should simply be: "Are we requiring that the implementation be provably 100% robust and 100% numerically stable before acceptance?" If the answer to that is yes, then I think we are holding GGL to a *much* higher standard than previously accepted libraries (I'm not talking about any specific library). Note that even CGAL, a highly regarded library, failed that paper's tests.

I believe the interfaces and concepts should be what is discussed. Implementation bugs and optimizations can come later if the interfaces are well designed.

I'm hoping to do a full review within a day or two, but I just wanted to add my two cents to this discussion.

--Michael Fawcett