
--- David Abrahams <dave@boost-consulting.com> wrote:
Oh, I find it perfectly easy to grasp, at least in a general sense. You're using an iterative method and you want to know when your solution is "good enough". You've implemented a heuristic for deciding. Right?
Yes! Although it is more a multi-trial/trial-and-error method than an iterative method.
What I don't understand in this is why the use of Python makes a crucial difference in this heuristic.
Because I could never have done what I did in the same amount of time if I had tried it in C++. Python is simply unbeatable when it comes to programmer efficiency. It might be intimately connected to how I work as a scientist, though. Initially I often have only a vague idea of what will work and I have to try many different approaches. C++ is too rigid to support this kind of "scientific research" efficiently.

But Python is not just good for rapid prototyping and testing of ideas. Here comes the 90-10 rule again: in the end there are many algorithms that I don't even have to reimplement in C++. So I never have to spend the time typing in twice the amount of code, working around compiler bugs as soon as I start using new features of the language or new libraries, and I don't have to wait another 15 minutes for everything to be rebuilt after changing just two lines in a core part of the system.

To back this up with an example: yesterday our whole ~100k-line C++ package (plus Boost) worked fine everywhere except for a SEGFAULT on one platform out of 13. It took me three hours to find out that changing boost::python::extract<int> to boost::python::extract<long> fixed the problem. This was in code that I had been using for years; the only thing I did was move it wholesale to a different extension module. There is no obvious explanation for the SEGFAULT because the sizes of int and long are exactly the same on that platform (RH 7.3, gcc 2.96). Something like this happens to me all the time, and over the years the hours add up to months' worth of time. Frustrations like this *never* happen in Python.

Only a couple of decades or so ago people thought it was worth writing machine code in the interest of runtime efficiency, and the people using compiled code were thought of as the unmanly guys. These days I see myself one level up: compiled code vs. Python today is like machine code vs. compiled code yesterday... I guess I could go on and on about this and I'll soon be banned from this list. :-)

To bring this back to where we started: if I have to go through the pain of "dropping down" to C++ I don't care about the extra loop that I might have to write. What I would love to have, though, is a generic way to return large arrays from functions without the copy, or small ones without the dynamic memory allocation overhead. Hence my proposal of array<MemoryModel, IndexingModel>, where I can plug in the memory model that is best for a given situation (a rough sketch is at the end of this mail). As far as I can tell the world has been very busy with IndexingModels, calling them all kinds of things including Storage, but MemoryModels as parameters have been largely neglected. To me it seems that once this problem is solved many things will fall nicely into place. Instead of arguing about the best indexing model, just plug in your own: inherit from an existing one and customize as necessary, freely combine with different memory models as needed, or, even better, use array<MemoryModel, IndexingModel, AlgebraModel> with different algebras as well. This would be a tool that I could see increasing my efficiency in using C++ -- when I have no choice. :-)

Just my $10. I realize other people have other goals.

Ralf
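
P.S.: To make the idea a bit more concrete, here is a minimal sketch of what I mean by plugging in a MemoryModel and an IndexingModel. All names are made up purely for illustration; this is not an existing library interface, just one way the pieces could fit together.

#include <cstddef>
#include <vector>

// MemoryModel 1: owns a heap-allocated buffer (a real implementation
// could be reference-counted to return large arrays without a copy).
template <typename T>
struct heap_storage {
    typedef T value_type;
    explicit heap_storage(std::size_t n) : data_(n) {}
    T& operator[](std::size_t i) { return data_[i]; }
    std::vector<T> data_;
};

// MemoryModel 2: fixed capacity, no dynamic memory allocation at all.
template <typename T, std::size_t MaxSize>
struct stack_storage {
    typedef T value_type;
    explicit stack_storage(std::size_t) {}
    T& operator[](std::size_t i) { return data_[i]; }
    T data_[MaxSize];
};

// IndexingModel: a trivial two-dimensional, row-major grid.
struct grid_2d {
    grid_2d(std::size_t nr, std::size_t nc) : nr_(nr), nc_(nc) {}
    std::size_t size_1d() const { return nr_ * nc_; }
    std::size_t operator()(std::size_t r, std::size_t c) const {
        return r * nc_ + c;
    }
    std::size_t nr_, nc_;
};

// The array itself just composes the two policies.
template <typename MemoryModel, typename IndexingModel>
struct array {
    typedef typename MemoryModel::value_type value_type;

    explicit array(IndexingModel const& ix)
        : indexing(ix), memory(ix.size_1d()) {}

    value_type& operator()(std::size_t r, std::size_t c) {
        return memory[indexing(r, c)];
    }

    IndexingModel indexing;
    MemoryModel memory;
};

// Same indexing model, different memory models:
//   array<heap_storage<double>, grid_2d> a(grid_2d(100, 100));
//   array<stack_storage<double, 9>, grid_2d> b(grid_2d(3, 3));
//   a(0, 0) = 1.0;
//   b(2, 2) = 2.0;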