
double length_squared = Z*transpose(Z); is, for example, an entirely valid operation that reduces to length_squared = 3^2+4^2 and has the type of an abstract_symmetric_matrix<2,0>, which is identically castable to a scalar.

I think it's theoretically possible, since one can compute any computable value through template metaprogramming - why not? But I have doubts about the benefits of all that.
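To make the idea concrete, here is a minimal sketch of how such an expression could collapse to a scalar. The types are hypothetical, invented for illustration (they are not the abstract_symmetric_matrix above): a row-by-column product returns a 1x1 result whose conversion operator lets it initialise a plain double.

#include <cstddef>
#include <iostream>

// A fixed-size row vector (hypothetical type, for illustration only).
template <std::size_t N>
struct row_vector {
    double data[N];
};

// A fixed-size column vector, produced by transpose().
template <std::size_t N>
struct column_vector {
    double data[N];
};

template <std::size_t N>
column_vector<N> transpose(const row_vector<N>& v) {
    column_vector<N> c{};
    for (std::size_t i = 0; i < N; ++i) c.data[i] = v.data[i];
    return c;
}

// A 1x1 result that is "identically castable" to a scalar.
struct one_by_one {
    double value;
    operator double() const { return value; }  // implicit conversion to double
};

// row * column: the inner product, packaged as a 1x1 result.
template <std::size_t N>
one_by_one operator*(const row_vector<N>& a, const column_vector<N>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < N; ++i) sum += a.data[i] * b.data[i];
    return one_by_one{sum};
}

int main() {
    row_vector<2> Z{{3.0, 4.0}};
    double length_squared = Z * transpose(Z);  // reduces to 3*3 + 4*4
    std::cout << length_squared << '\n';       // prints 25
}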
Sure. Just because we can doesn't mean we should. Then again, when you look at some of Boost, 'because we can' could well be the battle cry! The profits, once the minefield of development is negotiated, may well be surprising and unexpected. It's now been demonstrated that various C++ techniques and deep abstraction not only don't impact performance but can yield excellent performance and vastly improve expressibility. I think, and am sure you agree, expressibility is the number one concern. Being able to write code that is clear and concise under the domain of interest (e.g. linear algebra) is compelling; if it's implemented in a suitable manner, high performance will come for free!
The potential gains are significant. The parting shot is that if one can see through the layers of abstraction there is a real possibility of building a phenomenally efficient linear algebra library that is practically self-aware!

Actually, if I got the point right, an implementation might do just what you wrote: it doesn't hold the resulting matrix, but rather an expression with some rules for computing the elements of the resulting matrix.
Indeed. Oh, expression templates just got re-invented! ;-) -ed
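For readers who haven't met them, here is a minimal expression-template sketch, illustrative only and not drawn from any particular library: operator+ does not compute a result, it returns a lightweight object that records its operands and the rule for producing element i; the single evaluation loop runs only when the expression is assigned to a concrete vector.

#include <cstddef>
#include <initializer_list>
#include <iostream>
#include <vector>

// Expression node representing the not-yet-computed sum of two operands.
// It stores references and a rule for producing element i on demand.
template <typename L, typename R>
struct add_expr {
    const L& lhs;
    const R& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

// Any two indexable operands combine into an expression; nothing is computed here.
template <typename L, typename R>
add_expr<L, R> operator+(const L& lhs, const R& rhs) {
    return add_expr<L, R>{lhs, rhs};
}

// A concrete vector that can be assigned from any expression.
struct vec {
    std::vector<double> data;
    explicit vec(std::size_t n) : data(n) {}
    vec(std::initializer_list<double> init) : data(init) {}

    double  operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i)       { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Evaluation happens here, element by element, with no temporary vectors.
    template <typename Expr>
    vec& operator=(const Expr& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

int main() {
    vec a{1.0, 2.0}, b{3.0, 4.0}, c{5.0, 6.0};
    vec result(2);
    result = a + b + c;  // builds an expression tree; one loop at assignment
    std::cout << result[0] << ' ' << result[1] << '\n';  // prints 9 12
}

The point is exactly the one made above: no intermediate vectors are ever materialised; the "result" is just a recipe until its elements are asked for.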