
The question was given in the early chapters of Linear Algebra by Hoffman & Kunze, so I am trying to give a proof using only the tools presented so far: mainly row reduction and knowledge of matrix multiplication, row-reduced echelon forms, row equivalence, and linear independence.

I attempted a proof as follows:

Consider $A$ as a collection (not sure if this would be the ideal expression) of $1 \times n$ row vectors, and $B$ as a collection of $n \times 1$ column vectors. Then we have that:

$$ A=\begin{bmatrix} r_1 \\ \vdots \\ r_m \end{bmatrix},\ B=\begin{bmatrix} c_1 & \cdots & c_m \end{bmatrix}. $$ Thus it follows that: $$ AB =\begin{bmatrix} r_1\cdot c_1 & \cdots & r_1\cdot c_m \\ \vdots & \ddots & \vdots \\ r_m\cdot c_1 & \cdots & r_m \cdot c_m \end{bmatrix} $$ Clearly, by inspection, the rows are linearly dependent.

Since the rows of $AB$ are linearly dependent, it naturally follows that the reduced row echelon form of $AB$ contains zero rows. Hence, $AB$ is not invertible.
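
For concreteness, here is a small numerical check of this setup (a sketch in Python with numpy; the specific matrices are my own example, not part of the exercise):

```python
import numpy as np

# Small example with m = 3 > n = 2 (matrices chosen arbitrarily).
A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])      # 3 x 2: rows r_1, r_2, r_3
B = np.array([[1., 0., 1.],
              [0., 1., 1.]])  # 2 x 3: columns c_1, c_2, c_3

AB = A @ B                    # (AB)_{ij} = r_i . c_j

print(AB)
print(np.linalg.matrix_rank(AB))  # 2 < 3, so the rows are linearly dependent
```

Note that in this example no two rows of $AB$ are scalar multiples of one another, even though the three rows are linearly dependent.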

Would this be a mathematically sufficient proof?

  • no. think about the rank of these matrices or the null space of $B$. – user251257 Aug 04 '15 at 07:50
  • @user251257 Although I am aware of the notion of rank and its definition, I am trying to do a proof that is sufficient with only the knowledge of row reduced echelon matrices, elementary row operations, and matrix multiplication. I will edit that in to the original post. –  Aug 04 '15 at 07:53
  • the problem is the word clearly. not row echelon form. how do you see that the rows are linearly dependent? – user251257 Aug 04 '15 at 07:56
  • One reason to learn about linear maps: solve these problems without any effort in one line. In my opinion, matrices are really the main source of confusion in linear algebra. Perhaps one should ignore books that put a great emphasis on matrices before treating linear maps, because this makes everything more complicated than it is. – Martin Brandenburg Aug 04 '15 at 07:57
  • @user251257 If you take any two arbitrary rows from $AB$, are they not scalar multiples of one another? –  Aug 04 '15 at 07:58
  • @River not necessarily – user251257 Aug 04 '15 at 07:58
  • @user251257 Having given it some thought, you are completely correct. I confused dot product distribution with scalar distribution. –  Aug 04 '15 at 08:00
  • @MartinBrandenburg I wonder how you compute the rank of a linear map abstractly? – user251257 Aug 04 '15 at 08:00
  • I am not saying that it is a good idea to avoid matrices completely. What I'm saying is that linear maps are more fundamental and many concepts (also the rank and determinant for instance) can be understood better from a more abstract point of view. Of course, matrices are useful for computations. – Martin Brandenburg Aug 04 '15 at 08:01
  • However this can be done here. – Augustin Aug 04 '15 at 08:04

4 Answers


Here's a proof that relies on matrix multiplication and determinants.

We can adjoin $m-n$ columns of zeros to $A$ and $m-n$ rows of zeros to $B$ to form $m\times m$ matrices $A', B'$. This won't affect the product $AB$, meaning $A'B'=AB$.

Then we'll have something like

$$ \left[\begin{array}{c|c} A & 0 \end{array} \right] \left[\begin{array}{c} B\\ \hline 0 \end{array}\right]=A'B'$$

Since $A'$ has a column of zeros, $\det A'=0$, so $\det(A'B')=\det A'\det B'=0$.

But since $AB=A'B'$, we have $\det{AB}=\det{A'B'}=0$, which means that $AB$ is not invertible.
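
A quick numerical sanity check of this padding argument (a sketch in Python with numpy; the sizes and random matrices are arbitrary choices, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3                                   # n < m, as in the problem

A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# Pad A with m - n zero columns and B with m - n zero rows.
A_pad = np.hstack([A, np.zeros((m, m - n))])  # A' : m x m
B_pad = np.vstack([B, np.zeros((m - n, m))])  # B' : m x m

assert np.allclose(A_pad @ B_pad, A @ B)      # A'B' = AB

# det A' = 0 (zero column), hence det(AB) = det A' * det B' = 0.
print(np.linalg.det(A_pad))                   # 0.0
print(np.linalg.det(A @ B))                   # ~ 0 up to floating-point error
```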

coldnumber
  • 3,721

As has been mentioned in the comments, your approach does not make for a complete and proper proof. You may proceed as follows:

We have, for example, the standard inequality $\operatorname{rank}(AB) \leq \min(\operatorname{rank}(A), \operatorname{rank}(B))$. This you could try to prove using the tools you already know.

In your problem we have $\operatorname{rank}(A)\leq n$ and $\operatorname{rank}(B)\leq n$, which implies $\operatorname{rank}(AB) \leq n$. But $AB$ is $m \times m$ with $n<m$, so $AB$ cannot have full rank, which in turn means that it is not invertible.
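
A quick numerical illustration of the rank inequality (a numpy sketch; the sizes and random matrices are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 4                                   # n < m

A = rng.standard_normal((m, n))               # rank(A) <= n
B = rng.standard_normal((n, m))               # rank(B) <= n

rank_A = np.linalg.matrix_rank(A)
rank_B = np.linalg.matrix_rank(B)
rank_AB = np.linalg.matrix_rank(A @ B)

print(rank_AB <= min(rank_A, rank_B))         # True
print(rank_AB, "<", m)                        # at most n = 4 < 6, so AB is singular
```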

Math-fun
  • 9,507

I have a short proof for this.

Assume $AB = C$ and that $C$ is invertible.

Now we know that to convert any matrix into its reduced row echelon form, we multiply it by a series of elementary matrices. Let the cumulative product of these elementary matrices be $P$ (an $m \times m$ matrix), so that

$$PA = R, \quad \text{where } R = \operatorname{rref}(A)$$

$$\implies PAB = RB = PC.$$

Now consider $R$: it has more rows than columns, so there can be at most $n$ pivots, and $n < m$. Hence there must be a row with no pivot, i.e. a zero row.

As a result, $RB$ (an $m \times m$ matrix) has at least one zero row. This implies that $RB = PC$ is not invertible, but $P$ is invertible (being a product of elementary matrices), so $C$ cannot be invertible. This contradicts our assumption that $C$ is invertible.
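
This can be checked mechanically with sympy (a sketch with an arbitrary $4 \times 2$ example; the matrix $P$ is recovered by row-reducing $[A \mid I]$, an extra trick not spelled out in the answer):

```python
import sympy as sp

m, n = 4, 2                                    # n < m
A = sp.Matrix(m, n, [1, 2, 3, 4, 5, 6, 7, 8])  # arbitrary 4 x 2 example
B = sp.Matrix(n, m, [1, 0, 1, 2, 0, 1, 1, 3])  # arbitrary 2 x 4 example

# Row-reduce [A | I]; the left block is R = rref(A) and the right block
# is an invertible P (product of elementary matrices) with P*A = R.
aug, _ = A.row_join(sp.eye(m)).rref()
R, P = aug[:, :n], aug[:, n:]

assert P * A == R
assert R.row(m - 1) == sp.zeros(1, n)          # at most n pivots, so R has a zero row

print((P * A * B).row(m - 1))                  # R*B = P*(A*B) has a zero row
print((A * B).det())                           # 0, so A*B is not invertible
```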

P.S.: I am new to writing proofs, so I don't have a lot of experience putting sentences into symbols!

  • I didn't understand how could you conclude your initial assumption (C is invertible) is wrong from proving RB is not invertible and P is invertible. – Jacob Martina Oct 25 '23 at 18:01
  • @JacobMartina A square matrix $C = C_1C_2C_3\cdots C_n$ is invertible if and only if each square matrix in the product is invertible (Hoffman and Kunze, Theorem 13, Second Corollary). $P$ and $C$ are square matrices by definition. We know $RB = PC$ is not invertible. Therefore one of $P$ or $C$ must not be invertible, since if they were both invertible their product would be too. $P$ is invertible by definition. Therefore it must be the case that $C$ is not invertible. – Eric Scrivner Feb 15 '24 at 22:02

Here's a proof involving linear transformations. Let $F$ be a field.

We are given that $A$ is an $m\times n$ matrix. This means that $A$ is the matrix of some linear transformation $U: F^n \to F^m$ with respect to some bases of $F^n$ and $F^m$.

Similarly, $B$ being an $n\times m$ matrix means that $B$ is the matrix of some linear transformation $T: F^m \to F^n$ with respect to some bases of $F^m$ and $F^n$.

Now, consider the linear transformation $UT: F^m \to F^m$. We prove that $UT$ is not invertible, which will in turn imply that the matrix of $UT$ (with respect to some basis of $F^m$), namely $AB$, is not invertible.

The important thing to notice is the given hypothesis that $n < m$.

Consider a basis $\mathcal{B}$ for $F^m$, given by $$\mathcal{B} = \{\alpha_1, \alpha_2, \ldots, \alpha_m\}$$ Next, consider the set $\{T(\alpha_1), T(\alpha_2), \ldots, T(\alpha_m)\}$. This set is linearly dependent, since its $m$ vectors all lie in $F^n$, which has dimension $n < m$.

$$\implies b_1T(\alpha_1) + b_2T(\alpha_2) + \ldots + b_mT(\alpha_m) = 0, \quad \text{where } b_1, b_2, \dots, b_m \in F \text{ are not all } 0. \tag{1}$$

$$\implies b_1\alpha_1 + b_2\alpha_2 + \ldots + b_m\alpha_m \neq 0 \tag{2}$$ (as $\{\alpha_1, \alpha_2, \ldots, \alpha_m\}$ is a linearly independent set, being a basis).

Now, $$UT(b_1\alpha_1 + b_2\alpha_2 + \ldots + b_m\alpha_m) = U\bigl(T(b_1\alpha_1 + b_2\alpha_2 + \ldots + b_m\alpha_m)\bigr) = U\bigl(b_1T(\alpha_1)+b_2T(\alpha_2)+\ldots + b_mT(\alpha_m)\bigr) = U(0) = 0$$ by $(1)$.

Thus, $UT$ sends the nonzero vector in $(2)$ to $0$, so $UT$ has a nontrivial null space, which in turn implies that $UT$ is not injective. Thus, $UT$ is not invertible. $\blacksquare$
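
Numerically, the nonzero vector killed by $T$ can be exhibited directly (a numpy sketch; the matrices are random and the null-space vector is obtained from an SVD, which is just a convenient computational tool, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3                        # n < m

A = rng.standard_normal((m, n))    # matrix of U : F^n -> F^m
B = rng.standard_normal((n, m))    # matrix of T : F^m -> F^n

# B has m > n columns, so T has a nontrivial null space; the last
# right-singular vector of B lies in it.
_, _, Vt = np.linalg.svd(B)
v = Vt[-1]                         # unit vector with B v = 0 (up to rounding)

print(np.allclose(B @ v, 0))       # True: T(v) = 0
print(np.allclose(A @ (B @ v), 0)) # True: UT(v) = 0 with v nonzero, so UT is not injective
```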

Rajdeep
  • 164