
Hi Barend,
Hi Fernando,
It deeply worries me to find that the GGL proposal hasn't been updated in this regard, because instrumenting the proper techniques has a significant impact on the overall design. Robustness can't be treated as an implementation detail to be dealt with later on (and I'm sorry if I didn't make this point clear when I warned about it before).
I know you warned us before, and if I'm right you offered to help us with that.
Right... it's just that I don't know *how* to help you yet. It's like I don't know where to start. At the very least I need you to fully understand the problem and really see why your current approach is just fundamentally wrong. So, can you read the link I posted very, very carefully and come back to me when you can identify for yourself what exactly is wrong with GGL? Then we can talk about what it should do instead.
I had not forgotten that, although it was more than a year ago.
Wow... that long already? :)
In my opinion we've left this issue open because there were many issues to handle, driven by requests from the list. Concepts, taking any geometry, coordinate systems, dimensions, ranges: we've handled many. But not all, and in that sense you are right. I'm still optimistic that we can handle this issue as well, even now.
OK, but keep in mind that it would affect your design in the same fundamental way concepts did.
You are right, it has been unchanged indeed (as I mentioned).
If I were to review the library today I would have to strongly vote for rejection because of this.
That phase is not yet there... it is in preview until we publish it for review.
Fair enough.
The library proposed by Lucanus, OTOH, at least acknowledges the problem appropriately and approaches a solution in the right direction (there are still some important design-influencing considerations, but they are not showstoppers).
You've probably found this in the mails and not in the library itself.
If "this" is my assesment of the robustness of Luke's library, no. I know that from what I privately discussed with him.
It has no circles so it is a bit unfair to compare the approaches like this.
No it isn't, because I picked the incircle test only because it is a textbook example. But I've looked at the source code of most of the library and it is just totally missing the right approach.
It focuses on integer arithmetic and 45/90-angled polygons, so it is different, more or less complementary.
Then you are missing an important point, so let me try to explain it. Please read the following very carefully.

It is well known that algebraic operations on integers are exact *IFF THE RESULT DOES NOT OVERFLOW*. By extension, division of integer rationals is exact (since there is no real division at all), but then again, provided there is no overflow.

Many years ago it was commonly believed that operations on geometries with integer rational coordinates were just incidentally robust because of the integer properties stated above. And that belief proved itself correct in a large number of real-world cases, so much so in fact that it was common for geometric libraries, and even algorithm research, to focus only on integer rational coordinates as a fast and simple means to achieve robustness.

However, that recipe has a fundamental and very practical limitation: integer overflow. There are always non-degenerate input cases where the algorithm fails, and there are algorithms which fail very fast because integer rationals quickly overflow if you cascade too many multiplications. If robustness is defined, as it is or should be in our context, as the capability of obtaining correct results for *all* of the non-degenerate input, instead of some (however large), then rationals over plain machine integers are simply not the *right* solution, since in reality that only reduces the probability of failure in the same way epsilon-geometry or fixed-point arithmetic (which also fails on overflow) does. Machine integer rationals are, as a general technique for achieving robust geometric computations, much better than floating point, but still just not good enough. In fact, some algorithms work "very well" with machine integer rationals because they stay far enough away from overflow, but some other algorithms fail even more quickly and destructively than with machine floating point because, on the contrary, they overflow in a snap (while floats have a much, much wider range, so floating-point overflow is very rare in geometric computing).

One obvious solution is to use software-based number types. Bigints, for example, are easy to understand, implement and optimize, so bigrationals were once considered perfect drop-in replacements for machine integers in robust geometric algorithms. This would be ideal since the library code could just pretend that the underlying computational model were the Real RAM. However, those algorithms which "explode" the bit-size requirement due to large cascaded multiplications totally ruin the performance of adaptive unlimited-precision integer rationals. Similarly, using unlimited-precision floating-point numbers, or those which keep the expression trees, etc., is just too slow in practice to be a choice for industrial applications.

Simply using a special number type does produce correct results, but it is still not fast enough (by far) to be used in industry. So, one of the keys of the *modern* approach to robust geometric computing is to compute using native types as far as possible and switch to specialized types only when needed. With integer types this can be done easily, since the only source of errors is overflow, and it is entirely possible to know in advance whether an operation will overflow or not.
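To make the overflow arithmetic concrete, here is a minimal sketch of the classic 2D orientation predicate over 32-bit integer coordinates. This is my own illustrative code, not code from GGL or from Luke's library, and __int128 is a GCC/Clang extension standing in for any sufficiently wide (or multiprecision) integer type:

#include <cstdint>

// Sign of the cross product (q - p) x (r - p):
// +1 = left turn, -1 = right turn, 0 = collinear.
// Bit counting tells us *in advance* that no overflow can occur:
// each difference needs at most 33 bits, each product at most 65,
// and the final difference at most 66, so a 128-bit integer always
// evaluates the determinant exactly for any int32_t input.
int orientation(std::int32_t px, std::int32_t py,
                std::int32_t qx, std::int32_t qy,
                std::int32_t rx, std::int32_t ry)
{
    __int128 const ax = std::int64_t(qx) - px;  // exact in 64 bits
    __int128 const ay = std::int64_t(qy) - py;
    __int128 const bx = std::int64_t(rx) - px;
    __int128 const by = std::int64_t(ry) - py;

    __int128 const det = ax * by - ay * bx;     // exact in 128 bits

    return det > 0 ? 1 : (det < 0 ? -1 : 0);
}

Note that the same bookkeeping condemns a plain int64_t evaluation of this determinant, and it gets worse fast: the orientation determinant is degree two in the coordinates, so it needs roughly 2b bits for b-bit input, while the incircle determinant, being degree four, needs roughly 4b. That is exactly how cascaded multiplications "explode" the bit-size requirement, and why the width of the evaluation type has to be chosen per operation, statically, rather than fixed once for the whole library.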
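For floating point, the same compute-native-first-then-escalate principle takes the form of a filtered predicate: evaluate in double, bound the rounding error, and switch to an exact evaluation only when the sign is in doubt. Again a minimal sketch of mine, not library code; the error constant follows Shewchuk's well-known analysis (made slightly more conservative here), and the exact fallback is left as a parameter because it can be any exact number type (I'll come back to this in the response to Luke):

#include <cmath>
#include <limits>

// Filtered 2D orientation: the double evaluation is trusted only when
// its magnitude provably exceeds the worst-case rounding error.
template <class ExactFallback>
int filtered_orientation(double px, double py,
                         double qx, double qy,
                         double rx, double ry,
                         ExactFallback exact)   // exact(px,...,ry) -> int
{
    double const detleft  = (qx - px) * (ry - py);
    double const detright = (qy - py) * (rx - px);
    double const det      = detleft - detright;

    // Static error bound in the style of Shewchuk's orient2d filter.
    double const eps = std::numeric_limits<double>::epsilon();
    double const errbound =
        (3.0 + 16.0 * eps) * eps * (std::fabs(detleft) + std::fabs(detright));

    if (det >  errbound) return  1;  // sign certain despite rounding
    if (det < -errbound) return -1;
    return exact(px, py, qx, qy, rx, ry);  // in doubt: specialized type
}

The point for the library design is that the escalation path is a type-level decision: the predicate has to be parameterized on the exact fallback (or on a traits class that selects it), which is why robustness reaches into the overall design instead of being a detail you can bolt on later.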
And in C++ it is possible to abstract all that away using the right design (in dynamic languages the virtual machine could also do the same totally under the hood, but I don't know of any that does it).

*THAT* is what Luke's library does, and that's the reason I said it approached robustness in the right direction. It wasn't just the fact that it uses whatever integer coordinates or restricted forms of geometries (which isn't the case anymore).

In the response to Luke I'll sketch the approach for floating point robustness, which follows the same principle, and how that affects the overall design of a library.

Best

--
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com