
"Ralf W. Grosse-Kunstleve" <rwgk@yahoo.com> writes:
--- David Abrahams <dave@boost-consulting.com> wrote:
You are hitting the nail on the head. Experience with my own reference-counted array type tells me that the issue of immutability/constness needs extreme care. My own solution has the glaring shortcoming of not providing a const-reference-counted array type (excuses available on request :-). But having said that, I found it a lot worse not to be able to write a function that returns a new array without incurring a deep copy.
It seems quite a few compilers implement one or another form of RVO.l
Sorry, I don't know what RVO.l is. Is it portable? Does code exploiting RVO.l to achieve move semantics look intuitive?
Sorry, the "l" was a typo. I meant Return Value Optimization, where values can be returned from functions without actually invoking the copy ctor.
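For concreteness, a minimal sketch of what RVO buys you (added for illustration; the Array type here is hypothetical and stands in for any expensive-to-copy value):

    #include <cstddef>
    #include <vector>

    // Hypothetical type that is expensive to copy.
    struct Array {
        std::vector<double> data;
    };

    Array make_array(std::size_t n) {
        Array result;
        result.data.resize(n);
        return result;  // NRVO: the compiler may construct 'result'
                        // directly in the caller, eliding the copy
    }

    int main() {
        Array a = make_array(1000000);  // no deep copy when RVO applies
    }

Note that the elision is permitted rather than guaranteed, which is why "quite a few compilers" implement it in one form or another.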
I guess you haven't read the MTL concept breakdown:
Right. When I started with my array library (two years ago) it was clear that MTL was too specialized for my purposes.
- Shape
- Storage (I think this is what you're calling memory management)
- Orientation (row-major, column-major, diagonal-major...)
Is "Orientation" a specialization of "Shape". Couldn't both be viewed as "Indexing models?"
No, shape is separate. No point in me trying to explain here when you can google for a good explanation from the MTL authors, though ;-)
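To make the distinction concrete (a sketch in generic terms, not the MTL authors' exact definitions): orientation only changes how a 2-D index maps onto linear storage, whereas shape describes which elements exist at all (dense, banded, triangular, ...):

    #include <cstddef>

    // Same dense shape, two orientations: only the index map differs.
    std::size_t row_major(std::size_t i, std::size_t j, std::size_t ncols) {
        return i * ncols + j;   // row_major(1, 2, 4) == 6
    }

    std::size_t col_major(std::size_t i, std::size_t j, std::size_t nrows) {
        return j * nrows + i;   // col_major(1, 2, 3) == 7
    }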
1. I found the following memory management models for arrays to be essential in practical applications (i.e. I implemented them all and make heavy use of each; see the sketch after the list):
- stack, fixed size
- stack, variable size, fixed capacity
- heap, variable capacity, one owner (e.g. std::vector)
- heap, variable capacity, shared ownership
- reference to any of these (i.e. "no management")
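In today's C++ these models map roughly onto familiar types (an after-the-fact illustration; std::array, std::make_shared, and Boost's static_vector postdate this thread):

    #include <array>
    #include <memory>
    #include <vector>

    int main() {
        std::array<double, 10> a1{};     // stack, fixed size
        // stack, variable size, fixed capacity: no std type, but
        // boost::container::static_vector<double, 10> fills this slot
        std::vector<double> a3(10);      // heap, variable capacity, one owner
        auto a4 = std::make_shared<std::vector<double>>(10);
                                         // heap, shared ownership
        double* a5 = a3.data();          // reference only ("no management")
        (void)a1; (void)a4; (void)a5;    // silence unused warnings
    }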
That's storage.
Right. Which of these are you going to support?
Whatever you like. There will be well-formed concepts, so you can build any storage policy you like if it isn't directly supplied.
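A rough sketch of what a pluggable storage policy could look like (hypothetical interface, not the actual MTL design):

    #include <cstddef>
    #include <vector>

    // Hypothetical storage policy: anything providing data() and size().
    struct HeapStorage {
        explicit HeapStorage(std::size_t n) : v(n) {}
        double*     data()       { return v.data(); }
        std::size_t size() const { return v.size(); }
        std::vector<double> v;
    };

    template <class Storage>
    class array {
    public:
        explicit array(std::size_t n) : s(n) {}
        double& operator[](std::size_t i) { return s.data()[i]; }
        std::size_t size() const { return s.size(); }
    private:
        Storage s;  // swap in any storage model meeting the concept
    };

    int main() {
        array<HeapStorage> x(100);  // user-selected storage policy
        x[0] = 1.0;
    }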
4. Array algebras (unary and binary operators, functions) are a third property that is ideally separated from memory models and, if applicable, indexing models. This could lead to:
HilbertSpace, BanachSpace, VectorSpace, LinearAlgebra, RModule, Field, Ring, AbelianGroup, LinearOperator, TransposableLinearOperator were the algebraic concepts we had in mind. I don't think these things should necessarily be part of the array type though.
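One way to keep the algebra out of the array type itself (a sketch, not the planned MTL design; a named function is used instead of an operator so the template doesn't match every type):

    #include <cstddef>
    #include <vector>

    // Algebra as a free function layered over any indexable, sized
    // type; the array type stays a pure storage/indexing abstraction.
    // Assumes A is constructible from a size and supports operator[].
    template <class A>
    A add(const A& x, const A& y) {
        A r(x.size());
        for (std::size_t i = 0; i < x.size(); ++i)
            r[i] = x[i] + y[i];
        return r;
    }

    int main() {
        std::vector<double> v1{1, 2}, v2{3, 4};
        std::vector<double> v3 = add(v1, v2);  // any conforming type works
        (void)v3;
    }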
Does my generic array design look too daunting?
Not necessarily... but I doubt I'm going to implement your design. We have specific needs ;-).
Do we have to wait for another language before array libraries can be implemented in such a generic way?
I don't know what you mean. We do intend to make the MTL rewrite very generic.
5. Expression templates are an optimization on top of all this and, in my practical experience, the least important of all the points mentioned here. In all likelihood, the 5-10% of the code that matters for runtime performance has to be hand-optimized anyway, usually distorting the original, intuitive code beyond recognition.
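For readers who haven't seen the technique, a minimal expression-template sketch (illustrative only, not MTL's code): the '+' builds a lightweight expression object, and assignment evaluates it in one fused loop with no temporary vectors.

    #include <cstddef>
    #include <vector>

    template <class L, class R>
    struct Sum {
        const L& l;
        const R& r;
        double operator[](std::size_t i) const { return l[i] + r[i]; }
        std::size_t size() const { return l.size(); }
    };

    struct Vec {
        std::vector<double> v;
        explicit Vec(std::size_t n) : v(n) {}
        double  operator[](std::size_t i) const { return v[i]; }
        double& operator[](std::size_t i)       { return v[i]; }
        std::size_t size() const { return v.size(); }

        template <class E>
        Vec& operator=(const E& e) {   // one fused loop per assignment
            for (std::size_t i = 0; i < size(); ++i) v[i] = e[i];
            return *this;
        }
    };

    // A real library would constrain this to its own types.
    template <class L, class R>
    Sum<L, R> operator+(const L& l, const R& r) { return {l, r}; }

    int main() {
        Vec a(3), b(3), c(3), x(3);
        x = a + b + c;   // for (i) x[i] = a[i] + b[i] + c[i];
    }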
The aim is to make the library smart enough to do those optimizations correctly so you don't have to do it by hand.
A case of "premature optimization"?
No, one of the main _goals_ of this project (from a CS/research point-of-view) is to demonstrate that it can be done.
I am still waiting for an example of real-world code where the critical section is so large that the reduction in code size and the gain in maintainability from expression templates would justify the cost of implementing the ET library and the increased compile times.
I think iterative solvers are a good example. But expression templates shouldn't be hard to implement given modern TMP knowledge and I can see no reason they need to induce long compile times either.
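For instance, the inner loop of a conjugate-gradient solver is dominated by whole-array expressions, exactly the kind of statements ETs can fuse; a sketch using std::valarray for the notation:

    #include <valarray>

    // One conjugate-gradient-style update step. An ET library turns
    // each statement into a single loop; a naive operator library
    // would allocate a temporary for every '*' and '+'.
    void cg_update(std::valarray<double>& x, std::valarray<double>& r,
                   std::valarray<double>& p, const std::valarray<double>& q,
                   double alpha, double beta) {
        x += alpha * p;
        r -= alpha * q;
        p  = r + beta * p;
    }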
Maybe I am just too spoiled by Python. :-) I am constantly taking a performance hit of about two orders of magnitude, writing only the critical code in C++, and it turns out to be very practical because the old 90-10 rule is true!
Of course it is. If you're taking a performance hit of about two orders of magnitude in your linear algebra code, and you can still afford it, you're simply not in a domain where it matters all that much; other computations are swamping those costs.
Sometimes I even win out compared to brute force number crunching approaches because the very high abstraction level of Python allows me to push bookkeeping and high-level decision making to a point where I know how to skip much of the brute force work.
I'll have to take your word for it. It seems to me that if you can skip the brute force work in Python, you can do it in C++, too.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com