
8 Apr 2010, 8:22 a.m.
Phil Endecott wrote:
I think that having a test data set in mind in advance would make this much more concrete to reason about. Does anyone have any suggestions?
Some people regard using an algorithm with really bad complexity as a kind of bug, no? So the question would be how this "bug" can be exposed, in case it really is a bug. One suggestion in this direction would be a denial-of-service attack: feed the algorithm an input crafted to trigger its worst case. But in my own experience, such "bugs" tend to cause damage in more subtle and unexpected ways.

Perhaps somebody can answer the opposite question for me: what is the advantage of having subtle bugs in your algorithms that only wait to bite you when you least expect it?

Regards,
Thomas
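
[Editor's note: to make Phil's request for a concrete test data set tangible, here is a minimal sketch of the denial-of-service idea. It is not code from this thread; the quicksort and all names are hypothetical. A quicksort that always picks the first element as its pivot sorts random data in O(n log n) but degrades to O(n^2) on already-sorted input, so the sorted vector below is exactly the kind of adversarial data set that exposes such a "bug" as a measurable slowdown.]

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Quicksort that always takes the first element as pivot -- the
// classic hidden worst case: O(n log n) on random data, O(n^2)
// when the input is already sorted (every partition is maximally
// unbalanced).
void naive_quicksort(std::vector<int>& v, std::size_t lo, std::size_t hi)
{
    while (lo < hi) {
        const int pivot = v[lo];
        std::size_t i = lo;
        // Lomuto partition: move everything smaller than the pivot left.
        for (std::size_t j = lo + 1; j <= hi; ++j)
            if (v[j] < pivot)
                std::swap(v[++i], v[j]);
        std::swap(v[lo], v[i]);  // put the pivot in its final place
        if (i > lo)
            naive_quicksort(v, lo, i - 1);  // recurse into the left part
        lo = i + 1;                         // iterate on the right part
    }
}

double sort_ms(std::vector<int> v)  // by value: each call sorts a fresh copy
{
    const auto t0 = std::chrono::steady_clock::now();
    if (!v.empty())
        naive_quicksort(v, 0, v.size() - 1);
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main()
{
    const std::size_t n = 30000;

    std::vector<int> random_input(n);
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 1000000);
    std::generate(random_input.begin(), random_input.end(),
                  [&] { return dist(rng); });

    std::vector<int> sorted_input(n);  // the adversarial "attack" data set
    std::iota(sorted_input.begin(), sorted_input.end(), 0);

    std::cout << "random input: " << sort_ms(random_input) << " ms\n"
              << "sorted input: " << sort_ms(sorted_input) << " ms\n";
}

Run side by side, the sorted input should take dramatically longer than the random one at the same size (quadratic versus n log n work); that timing gap is the concrete, measurable symptom a test data set of this kind is meant to produce.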