
Hi,

On Tuesday 06 July 2004 03:06, Jeremy Graham Siek wrote:
> A *linear algebra* algorithm will never need to be generalized in the fashion you advocate below. That's because matrices are about systems of equations. A system of equations has a certain number of unknowns and a certain number of equations. Once the system is represented as a matrix, the number of rows corresponds to the number of equations, and the number of columns corresponds to the number of unknowns. That's it. There are no more numbers to talk about.
Here I have to disagree, though the disagreement may stem from the fact that I'm a theoretical physicist who learned numerics by doing. For me, matrices are far more than just objects used to solve linear systems of equations. Matrices are representations of algebras; they can be fused (e.g. via tensor products), and one can evaluate general functions of matrices, such as the matrix exponential. In my work, vectors carry double indices, i.e. a matrix has four indices, but it is still a matrix; the double index is just an implementation feature, since the vectors are represented as dyadic products of other vectors. You can argue whether such an object is still a vector, but in physics it is called one.
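To make the "four-index matrix" point concrete, here is a small NumPy sketch (illustrative, not from the thread): a four-index operator acting on a double-indexed "vector" is the same thing as an ordinary matrix acting on the flattened index pairs.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# A "vector" with a double index: the dyadic (outer) product of two vectors.
u = rng.standard_normal(n)
v = rng.standard_normal(n)
dyad = np.outer(u, v)                      # shape (3, 3)

# A linear operator on such vectors carries four indices.
op = rng.standard_normal((n, n, n, n))

# Acting with the operator index-by-index...
direct = np.einsum('ijkl,kl->ij', op, dyad)

# ...agrees with flattening each index pair into a single index and doing
# an ordinary matrix-vector product: the four-index object is still a matrix.
as_matrix = (op.reshape(n * n, n * n) @ dyad.reshape(n * n)).reshape(n, n)

print(np.allclose(direct, as_matrix))  # True
```

Because the flattened form is an ordinary `(n*n, n*n)` matrix, matrix functions such as the exponential apply to it unchanged.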
> Another way to look at this is to focus on the mathematical concept *Linear Operator*. A linear operator is something that you can multiply with a vector of a certain size to get another vector of a certain size. The "number of columns" of the linear operator is the required size of the input vector, and the "number of rows" of the linear operator is the required size of the output vector.
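The row/column reading above can be sketched in a few lines of NumPy (illustrative values, not from the thread): a 2x3 operator consumes size-3 vectors and produces size-2 vectors.

```python
import numpy as np

# A linear operator taking vectors of size 3 to vectors of size 2:
# "number of columns" = required input size,
# "number of rows"    = produced output size.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])   # shape (2, 3)

x = np.array([1.0, 1.0, 1.0])     # input vector, size 3
y = A @ x                         # output vector, size 2

print(y)        # [3. 4.]
print(y.shape)  # (2,)
```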
I agree with you if you restrict matrices to linear algebra; that's a fine definition. But I'm used to using the term 'matrix' in a more general sense.

Best wishes,
Peter