
I couldn't see a specific reason for multiplying every row of A with every column of B. Is this just an arbitrary convention in the definition of matrix multiplication?

Instead, why don't we simply multiply row #1 of matrix A with row #1 of matrix B? That would make multiplication easier, without all the confusion about what we are going to multiply with what. In this case, the product of A (m×n) and B (k×n) would be P (m×k).

I know a lot of things would change in today's mathematics if we defined multiplication this way. But would that approach cause any problems in the future?
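For concreteness, the proposed operation can be sketched in plain Python (a hypothetical `row_row` helper using nested lists; the names and sample values are illustrative only):

```python
def row_row(A, B):
    """Proposed product: pair row i of A with row k of B.
    A is m x n, B is k x n; the result is m x k."""
    return [[sum(a * b for a, b in zip(row_a, row_b)) for row_b in B]
            for row_a in A]

A = [[1, 2], [3, 4], [5, 6]]   # 3 x 2
B = [[7, 8], [9, 10]]          # 2 x 2 (same number of columns as A)
P = row_row(A, B)              # 3 x 2
```

Note that this is exactly the ordinary product A·Bᵀ under the usual definition, and that, unlike the usual product, it is not associative in general.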

  • Multiplication of matrices is the composition of linear maps. A matrix is not just a lump of numbers; it represents a linear map. That doesn't work if you multiply row-by-row. – Patrick Stevens Apr 10 '16 at 09:09
  • Yep. The point is that defining it this way is useful and gives lots of properties; defining it in another (easier?) way would not. – Ant Apr 10 '16 at 09:12
  • Is it really impossible to define all of those properties with such multiplication? –  Apr 10 '16 at 09:15
  • @PatrickStevens It's obvious that some things depending on the current way of multiplication won't work if we change the definition of multiplication. The question is, can't they be re-created? –  Apr 10 '16 at 09:21
  • Every linear map from $\mathbb{R}^n$ to $\mathbb{R}^n$ can be represented by an $n\times n$ matrix. Then we check that rotations, symmetries, and isometries that fix the point $0$ are linear transformations. We also observe that if a matrix is orthonormal then it is easy to invert, that the orthonormal matrices are exactly those preserving the $|.|^2$ norm, and that every system of linear equations reduces to a matrix/vector equation. Finally, the multiplication of matrices corresponds to the composition of linear maps, and is the most natural operation on matrices given all of the above. – reuns Apr 10 '16 at 09:30
  • @m1clee Regular matrix multiplication is an extremely useful and natural concept. If you redefined multiplication as you suggest, we would still need to invent a new symbol for the original matrix multiplication, and the redefined multiplication would go largely unused (as is the case for the Hadamard product). What do you actually gain by redefining it this way? – Erick Wong Apr 10 '16 at 19:49
  • I was considering the confusion about matrix multiplication that I have from time to time. I'd only gain simplicity and intuition there. –  Apr 10 '16 at 19:56

2 Answers


Suppose you have $x,y,z$ defined in terms of $p,q$, let's say, $$\eqalign{x&=3p+4q\cr y&=5p-2q\cr z&=-p+7q\cr}$$ and you have $p,q$ defined in terms of $a,b,c,d$, let's say, $$\eqalign{p&=4a-3b-c+2d\cr q&=9a+5b-6c+3d\cr}$$ and you want to express $x,y,z$ in terms of $a,b,c,d$. Well, all you need to do is extract the matrices of coefficients, and multiply them: $$\pmatrix{3&4\cr5&-2\cr-1&7\cr}\pmatrix{4&-3&-1&2\cr9&5&-6&3\cr}$$ The product will give you the coefficients in the expressions for $x,y,z$ in terms of $a,b,c,d$.
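As a sanity check, the substitution above can be carried out numerically. This is a minimal sketch in plain Python (nested lists, a hand-rolled `matmul` helper; not a library API), verifying that the row-by-column product reproduces the coefficients obtained by direct substitution:

```python
def matmul(A, B):
    """Standard row-by-column matrix product."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

A = [[3, 4], [5, -2], [-1, 7]]        # x, y, z in terms of p, q
B = [[4, -3, -1, 2], [9, 5, -6, 3]]   # p, q in terms of a, b, c, d

P = matmul(A, B)
# Direct substitution: x = 3p + 4q = 3(4a - 3b - c + 2d) + 4(9a + 5b - 6c + 3d)
#                        = 48a + 11b - 27c + 18d, matching row 1 of P.
print(P[0])  # [48, 11, -27, 18]
```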

See also Why, historically, do we multiply matrices as we do?

Gerry Myerson

If we consider matrices simply as tables of numbers, then we can define many different binary operations that we might call ''multiplications'', using this name only to distinguish the operation from addition (defined as the sum of corresponding elements). Obviously, different definitions give the ''multiplication'' different properties, and some can be useful in one context but not in another.

As an example, the Hadamard product of two matrices (defined as the product of the corresponding elements) is associative, distributive and also commutative, but it can be defined only for matrices of the same dimensions, and (as far as I know) it is used in computer graphics.

The Kronecker product is another possible kind of multiplication; it has useful properties and important applications, being related to the tensor product of linear transformations.
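Both products are easy to sketch in plain Python (hand-rolled helpers on lists of lists, small 2×2 examples chosen for illustration):

```python
def hadamard(A, B):
    """Entrywise (Hadamard) product; requires identical dimensions."""
    return [[a * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

def kronecker(A, B):
    """Kronecker product: each entry A[i][j] scales a full copy of B."""
    return [[A[i][j] * B[p][q]
             for j in range(len(A[0])) for q in range(len(B[0]))]
            for i in range(len(A)) for p in range(len(B))]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]

hadamard(A, B)   # [[0, 10], [18, 28]]
kronecker(A, B)  # 4 x 4 block matrix: [1*B, 2*B; 3*B, 4*B]
```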

The usual row-by-column product has the advantage that it represents the action of linear transformations between vector spaces and captures all the properties of those transformations (linearity, associativity, non-commutativity, existence of a neutral element and of non-invertible elements). There is some amount of convention in the definition, in the sense that we can choose rows on the left and columns on the right (as usual) or vice versa, but these two alternatives give isomorphic structures.
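The "product represents composition" point can be checked numerically. A sketch in plain Python (hypothetical `matmul` and `apply` helpers; the rotation/scaling matrices are just sample maps), verifying that applying A·B to a vector agrees with applying B and then A:

```python
def matmul(A, B):
    """Standard row-by-column matrix product."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

def apply(A, v):
    """Apply the linear map represented by A to a column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[0, -1], [1, 0]]   # rotation by 90 degrees
B = [[2, 0], [0, 3]]    # scaling by 2 and 3
v = [1, 1]

# Composing the maps agrees with applying the matrix product:
assert apply(matmul(A, B), v) == apply(A, apply(B, v))
```

Note that swapping the order, matmul(B, A), generally gives a different map: this is exactly the non-commutativity of composition.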

Emilio Novati
  • The Hadamard and Kronecker references have been highly useful for me. Thank you. –  Apr 10 '16 at 19:52