
On Jul 6, 2004, at 9:51 AM, Peter Schmitteckert (boost) wrote:
On Tuesday 06 July 2004 03:06, Jeremy Graham Siek wrote:
A *linear algebra* algorithm will never need to be generalized in the fashion you advocate below. That's because matrices are about systems of equations. A system of equations has a certain number of unknowns and a certain number of equations. Once the system is represented as a matrix, the number of rows corresponds to the number of equations, and the number of columns corresponds to the number of unknowns. That's it. There are no more numbers to talk about.
Here I have to disagree. But the problem lies in the fact that I'm a theoretical physicist who learned numerics by doing. For me, matrices are far more than just objects used to solve linear sets of equations. Matrices are representations of algebras; they can be combined (e.g. by tensor products), and one can compute general functions of matrices, such as the matrix exponential.
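For concreteness, here is a minimal sketch of those two operations (the Kronecker/tensor product and a matrix exponential) in plain C++. The names and the dense row-major representation are illustrative choices of mine, not taken from any particular library, and the truncated Taylor series is only adequate for small, well-scaled matrices:

    #include <cstddef>
    #include <vector>

    // Dense square matrix stored row-major; n is the dimension.
    using Matrix = std::vector<double>;

    Matrix matmul(const Matrix& A, const Matrix& B, std::size_t n) {
        Matrix C(n * n, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t k = 0; k < n; ++k)
                for (std::size_t j = 0; j < n; ++j)
                    C[i * n + j] += A[i * n + k] * B[k * n + j];
        return C;
    }

    // Kronecker (tensor) product of an n x n and an m x m matrix.
    Matrix kron(const Matrix& A, std::size_t n, const Matrix& B, std::size_t m) {
        Matrix C(n * m * n * m, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                for (std::size_t k = 0; k < m; ++k)
                    for (std::size_t l = 0; l < m; ++l)
                        C[(i * m + k) * (n * m) + (j * m + l)] = A[i * n + j] * B[k * m + l];
        return C;
    }

    // Matrix exponential via a truncated Taylor series.
    Matrix expm(const Matrix& A, std::size_t n, int terms = 20) {
        Matrix result(n * n, 0.0), term(n * n, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            result[i * n + i] = term[i * n + i] = 1.0;    // start from the identity
        for (int k = 1; k <= terms; ++k) {
            term = matmul(term, A, n);
            for (double& x : term) x /= k;                // term = A^k / k!
            for (std::size_t i = 0; i < n * n; ++i) result[i] += term[i];
        }
        return result;
    }

A production code would use scaling-and-squaring or a Pade approximant for the exponential, but the point stands: these are natural operations on matrices that have nothing to do with solving a linear system.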
In my work, vectors have double indices, i.e. a matrix acting on them has four indices, but it is still a matrix; the double index is just an implementation feature, since the vectors are represented by dyadic products of other vectors. You can now argue whether this is still a vector, but in physics it is called a vector.
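Such an object might look like the following sketch (again, the type and function names are mine, purely for illustration). The storage is naturally two-dimensional, yet every operation a generic solver needs treats it as a single vector:

    #include <cstddef>
    #include <vector>

    // A "vector" whose natural index is a pair (i, j): the state lives in
    // a tensor-product space, so storage is an n x m array, but it is only
    // ever added, scaled, and dotted -- i.e. used as a vector.
    struct DyadicVector {
        std::size_t n, m;            // dimensions of the two factor spaces
        std::vector<double> data;    // flattened n x m coefficients
        DyadicVector(std::size_t n_, std::size_t m_)
            : n(n_), m(m_), data(n_ * m_, 0.0) {}
        double& operator()(std::size_t i, std::size_t j) { return data[i * m + j]; }
    };

    // The vector-space operations never mention the double index.
    void axpy(double a, const DyadicVector& x, DyadicVector& y) {
        for (std::size_t k = 0; k < x.data.size(); ++k) y.data[k] += a * x.data[k];
    }

    double dot(const DyadicVector& x, const DyadicVector& y) {
        double s = 0.0;
        for (std::size_t k = 0; k < x.data.size(); ++k) s += x.data[k] * y.data[k];
        return s;
    }

Note that a subscript with a single index is exactly what this type does not offer.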
As another theoretical physicist I want to disagree with this definition. I would call the object with four indices a linear operator, but not a matrix. Matrices, for me, are representations of linear operators with two indices. You do, however, point out an important requirement for generic algorithms on vector spaces: they should not require that a vector can be accessed with operator[] and a single subscript, or that one can construct a vector by passing just the size to the constructor. These overly narrow requirements of the Iterative Template Library (ITL) caused us to introduce the "vector space" concept in our Iterative Eigenvalue Template Library (IETL).

Matthias
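To sketch what such a concept buys (this is an illustration of the idea, not the actual IETL interface; all names below are my own), here is a generic power iteration that never subscripts a vector and never calls a size constructor. It obtains vectors from the vector-space object and otherwise uses only whole-vector operations, so double-indexed vectors like the ones above could model the concept without any change to the algorithm:

    #include <cmath>
    #include <cstddef>
    #include <cstdlib>
    #include <utility>
    #include <vector>

    // --- a toy model of the concept: ordinary dense vectors -------------
    struct DenseSpace {
        using vector_type = std::vector<double>;
        using scalar_type = double;
        std::size_t dim;
    };

    DenseSpace::vector_type new_vector(const DenseSpace& vs) {
        return DenseSpace::vector_type(vs.dim, 0.0);
    }
    void fill_random(std::vector<double>& x) {
        for (double& xi : x) xi = std::rand() / double(RAND_MAX);
    }
    double dot(const std::vector<double>& x, const std::vector<double>& y) {
        double s = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) s += x[i] * y[i];
        return s;
    }
    double norm(const std::vector<double>& x) { return std::sqrt(dot(x, x)); }
    void scale(std::vector<double>& x, double a) { for (double& xi : x) xi *= a; }

    // --- the generic algorithm: no operator[], no size constructor ------
    template <class VS, class MultiplyOp>
    typename VS::scalar_type power_iteration(const VS& vs, MultiplyOp A, int sweeps) {
        auto x = new_vector(vs);          // the space hands out vectors
        auto y = new_vector(vs);
        fill_random(x);
        scale(x, 1.0 / norm(x));
        typename VS::scalar_type lambda = 0;
        for (int s = 0; s < sweeps; ++s) {
            A(x, y);                      // y = A x; A is an opaque operator
            lambda = dot(x, y);           // Rayleigh quotient estimate
            scale(y, 1.0 / norm(y));
            std::swap(x, y);
        }
        return lambda;
    }

The toy DenseSpace exists only to make the sketch compile; any type providing new_vector, dot, norm, scale, and fill_random with these meanings would do.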