
Hi Luke,
Your algorithm is: for each red monotonic section, inspect all blue monotonic sections for bounding-box overlap; if overlap exists, inspect the segments of the monotonic-section pair for line intersections and report said intersections. It has O(n^2) runtime complexity with respect to monotonic sections. In the worst case, the number of monotonic sections is equivalent to the number of edges.
Sure. As I said, there is room for improvement there. But it is not necessary. See below.
My library is not relevant to the discussion.
You mentioned it in phrases like "My own implementation...", "I was hoping to upgrade my algorithm...", etc. But I agree, let's leave it out.
I don't understand what you mean by "not convenient". That I wanted to go to sleep?
My concern about performance is that your benchmarking may not expose performance problems. Your claims about performance are based upon one benchmark (with several minor variations of one of the two input geometries), from which you generalize about performance. The algorithm has a fatal weakness to
to ... everything? We've had this discussion several times before, off-list and on-list. You asked me to create a more intersection-dense environment and I did. It *is* testing pathological cases. It *is* testing the diagonals you mentioned. You asked me to deliver cases and pictures where things went wrong in another library and I did. You asked me if I could create tests for 90-degree and 45-degree polygons and I didn't, because I said you could create those tests yourself, as they belong to your environment; but I didn't receive them. You object to our interior-ring approach and I will test it, but not now. You asked for more documentation and I provided it. All the things I do and write are met with words like "fear" and "fatal", but nothing more than such words. So I think it's useless to continue like that.

Anyway, the worst case tested:
- 19 minutes (GGL, best)
- 73 minutes (second best)
- 20 hours (worst)

The last one is widely used within the GIS world and has variants in Java, C++ and C#. It's 60 times slower in pathological cases and ~10 times slower in more normal cases, so I think that potential Boost users can safely use our library without taking the risk that it is too slow. Full info: http://trac.osgeo.org/ggl/wiki/IntersectionMore

The tested polygons have 10000 vertices and have the form of a star whose points are located horizontally, vertically and diagonally across the input. A factor more, 100000 points, didn't fit in computer memory with any of the libraries tested. It *is* a pathological case.

The test suite is open source and open for anyone to reproduce the results or to extend it. The input can be chosen; it is a runtime parameter.

And I repeat: if the first phase, which is quadratic with respect to sections and so quadratic in worst cases, even then takes too long, there is room for improvement. As said in our paper (http://www.boostcon.com/site-media/var/sphene/sphwiki/attachment/2009/05/05/...), the Priority R-tree spatial index, optimal in worst-case scenarios, might be useful. So you can repeat that our implementation is fatally weak, but it is not, on the contrary. That our tests are wrong and grains of salt, but they are not, on the contrary.
Here too we're happy to provide it. If it does not satisfy your needs, and especially if changes would enhance interoperability, we're certainly willing to adapt and fine-tune things. Actually, I'm also glad to hear positive feedback about the way we designed it for linestring.
I think fixing the polygon traits to be like the linestring/ring traits would improve both consistency and genericity of the interfaces and should be required before the library is accepted into boost.
Our concepts are completely clear and sound, documented, and have examples. We conform to Boost.Range and, for mutable access, to the std library. I mentioned adaptation to increase interoperability with a recently accepted library, your library. Objecting to acceptance in return for such an offer, adapting to a library that was accepted one hour after the announcement of the review of ours, is quite surprising.

Regards, Barend