
I wanted to ask a question about the geometric interpretation of multiplying matrices of different dimensions.

I'm currently studying an introductory course in matrices for my Quantum Physics courses in Chemistry and am using 3Blue1Brown's course linked here.

In chapter 3 of his videos, he shows how to multiply a matrix by a vector and interprets this geometrically.

In chapter 4 he extends this to multiplying matrices together. Below is his example:

$$\begin{bmatrix} 0 & 2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & -2 \\ 1 & 0 \end{bmatrix}$$

This part makes sense. Reading right to left, and taking $\underline{i}$ and $\underline{j}$ as the unit vectors, the right-hand matrix shows where $\underline{i}$ and $\underline{j}$ initially land: at the coordinates $(1,1)$ and $(-2,0)$ respectively. The left-hand matrix then transforms that result, so calculating where the new $\underline{i}$ and $\underline{j}$ vectors land can be done as such:

$$\begin{bmatrix} 0 & 2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 1 \begin{bmatrix} 0 \\ 1 \end{bmatrix} + 1 \begin{bmatrix} 2 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$

which becomes the first (left) column of the final composition matrix, and likewise for the second column:

$$\begin{bmatrix} 0 & 2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} -2 \\ 0 \end{bmatrix} = -2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} + 0 \begin{bmatrix} 2 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ -2 \end{bmatrix}$$

Hence the final composition matrix is: $$\begin{bmatrix} 2 & 0 \\ 1 & -2 \end{bmatrix}$$
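The column-by-column computation above can be checked numerically; here is a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Left-hand and right-hand matrices from the example above
M = np.array([[0, 2],
              [1, 0]])
N = np.array([[1, -2],
              [1,  0]])

# Each column of the product is M applied to the corresponding
# column of N, i.e. where each unit vector lands after both maps
col_i = M @ N[:, 0]   # where i-hat ends up
col_j = M @ N[:, 1]   # where j-hat ends up

composition = np.column_stack([col_i, col_j])
print(composition)    # agrees with M @ N
```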

The idea behind it, as explained geometrically, relies on scaling and taking linear combinations of vectors.

Now I see how this applies to matrices that are equal in dimension. Later videos extend the idea to three dimensions.

However, one question I faced today was to find the resulting composition matrix of this:

$$\begin{bmatrix} 2 & 2 \\ 0 & -2 \\ -6 & 3 \end{bmatrix} \begin{bmatrix} -6 & 2 \\ -4 & -2 \end{bmatrix}$$

I initially thought of using the method above, but quickly found myself confused as I was dealing with multiplying a $3 \times 2$ matrix by a $2 \times 2$ matrix.

I was given the "rule" by my professor that states:

If the number of columns of the first matrix equals the number of rows of the second, multiply each row of the first matrix by each column of the second, pairing up corresponding entries.

However, I disliked this "algorithmic" approach, and I spent hours trying to work out whether there is a geometric interpretation similar to the one I attempted to explain above and the one given in the linked videos, where matrices are understood as transformations and linear combinations of the unit vectors.
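For what it's worth, the dimension condition in that rule can be demonstrated numerically; here is a minimal sketch, assuming NumPy, using the matrices from the question:

```python
import numpy as np

A = np.array([[ 2,  2],
              [ 0, -2],
              [-6,  3]])   # 3 x 2
B = np.array([[-6,  2],
              [-4, -2]])   # 2 x 2

# A @ B is defined: A has 2 columns and B has 2 rows
print((A @ B).shape)       # (3, 2)

# The reverse order is undefined: B has 2 columns but A has 3 rows
try:
    B @ A
except ValueError as err:
    print("B @ A is not defined:", err)
```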

Is there a possible geometric interpretation that applies to multiplying matrices that are not of the same dimensions?

My question is not a duplicate: the other question referred to explains matrix multiplication using dot products and does not ask for a geometric analysis, whereas my question is specifically about whether a geometric view is possible.

I apologise if the answer is trivial but I come from a chemical field so this subject is new to me!

vik1245
  • @shogun I explained why my question is not a duplicate as I have not asked how matrix multiplication can be explained via dot products. – vik1245 Oct 14 '19 at 01:53
  • One view is that $\begin{bmatrix} a_1 & a_2 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = b_1 a_1 + b_2 a_2$, where $a_1$ and $a_2$ are the columns of a matrix and $b_1,b_2$ are the entries of a vector. The number of components in the vectors $a_1,a_2$ is irrelevant. – Ian Oct 14 '19 at 01:55
  • In your example of a $3 \times 2$ times a $2 \times 2$, the first matrix (on the right) transforms the basis vectors in $\Bbb{R}^2$ as you described above. Then the second matrix sends the basis vector $(1,0) \in \Bbb{R}^2$ to the vector $(2,0,-6) \in \Bbb{R}^3$, and it sends $(0,1)$ to $(2,-2,3)$. In short, matrix multiplication is just composition of linear maps, and the columns of the matrices tell you where they send the basis vectors. – Nick Oct 14 '19 at 02:12
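The composition-of-maps view in the comment above can be verified numerically; here is a minimal sketch, assuming NumPy, where each column of the product is the left matrix applied to a column of the right matrix:

```python
import numpy as np

# The 3x2 matrix: a linear map from R^2 to R^3
A = np.array([[ 2,  2],
              [ 0, -2],
              [-6,  3]])
# The 2x2 matrix: a linear map from R^2 to R^2
B = np.array([[-6,  2],
              [-4, -2]])

# Column k of A @ B is A applied to column k of B,
# i.e. where the k-th basis vector lands after both maps
col_i = A @ B[:, 0]   # image of i-hat: first B, then A
col_j = A @ B[:, 1]   # image of j-hat

product = np.column_stack([col_i, col_j])
print(product)        # agrees with A @ B
```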

0 Answers