
I was reading the Wikipedia page on eigenvalues and eigenvectors, and I found this statement given as a fundamental theorem of linear algebra.

$A\mathbf{x}=\mathbf{0}$ has a non-zero solution $\mathbf{x}$ iff $\det(A)=0$.

I know how to prove the left-to-right direction: assuming $\det(A)\neq 0$, the matrix $A$ is invertible, so the only solution of $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=A^{-1}\mathbf{0}=\mathbf{0}$, contradicting the assumption that $A\mathbf{x}=\mathbf{0}$ has a non-zero solution $\mathbf{x}$. Therefore, $A\mathbf{x}=\mathbf{0}$ has a non-zero solution $\implies$ $\det(A)=0$.

Can anybody show me how to prove the other direction? $$\det(A)=0 \implies A\mathbf{x}=\mathbf{0} \;\;\text{has a non-zero solution}$$

Mike Pierce
MIMIGA
  • What definition do you use for $\det(A)$? – Jack Sep 28 '15 at 16:31
  • @Jack's question is quite good. You seem to be using the fact that "A matrix has an inverse (or, is invertible) if and only if its determinant is nonzero". Are things like this on the table as well? – pjs36 Sep 28 '15 at 16:36
  • @Jack $\det(\cdot)$ stands for the determinant of a matrix. Also, I think the definition of a singular matrix $A$ is just $\det(A)=0$. – MIMIGA Sep 28 '15 at 17:07

3 Answers


The idea for the reverse direction is as follows. Since $\det A=0$, some row of $A$ can be written as a linear combination of the other rows. If that is true, then (supposing $A$ is $n\times n$) $\mathrm{rank}\,A<n$. In other words, the linear transformation $T:V\to V$ given by $x\mapsto Ax$ (where $V$ is the underlying $n$-dimensional vector space) is not onto. By rank–nullity, $\mathrm{Ker}\,T\neq 0$, meaning there is a vector other than zero being sent to $0$, i.e. $Ax=0$ has a nontrivial solution.
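To spell out that final step (a one-line sketch, with $T$, $V$, and $n$ as above): $$ \mathrm{rank}(T)+\dim\mathrm{Ker}(T)=\dim V=n \quad\text{and}\quad \mathrm{rank}(T)<n \;\implies\; \dim\mathrm{Ker}(T)\geq 1, $$ so $\mathrm{Ker}(T)$ contains a nonzero vector.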

There is a more hands-on way to prove this too. Using the fact that some row is a linear combination of the other rows, you can do a series of basis transformations to obtain $B=UAU^{-1}$ ($U$ being the basis transformation), such that $B$ has a row equal to zero. Then $By=0$ is really a system of at most $n-1$ equations in $n$ unknowns, so it has a nontrivial solution $y$. Consequently define $x=U^{-1}y$; since $AU^{-1}=U^{-1}B$, we get $Ax = AU^{-1}y = U^{-1}By = 0$, and $x$ is nontrivial.
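For a concrete sketch of the row-dependence idea (the matrix here is my own illustrative choice): take $$ A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}, \qquad \det A = 1\cdot 4 - 2\cdot 2 = 0. $$ The second row is twice the first, and indeed $x = \langle 2, -1 \rangle$ is a nontrivial solution of $Ax=0$.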

Hamed
    This is a famous theorem: $\mathrm{rank}(T) + \mathrm{null}(T)=\dim V$ ($\mathrm{null}(T)=\dim[\mathrm{Ker}(T)]$) if $T: V\to W$. https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem – Hamed Sep 28 '15 at 16:51

$\det(A)=0 \iff \det(A-0\cdot I)=0 \iff \text{$0$ is an eigenvalue of $A$}$, iff there's a vector $x\neq0$ such that $Ax=0x=0$.
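For example (a worked instance, chosen just for illustration): with $$ A=\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \det(A-\lambda I) = (1-\lambda)^2 - 1 = \lambda(\lambda-2), $$ the characteristic polynomial has the root $\lambda=0$, and an eigenvector for $\lambda=0$, such as $x=\langle 1,-1\rangle$, is exactly a nonzero solution of $Ax=0$.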

Michael Hoppe
    I think this solution buries the tough part inside the claim that $\det(A-0\cdot I) = 0$ implies that $0$ is an eigenvalue of $A$. – Mike Pierce Sep 19 '19 at 02:34
  • The fact that $\det(A - kI) = 0$ if $k$ is an eigenvalue comes from the fact that we start with $Ax = kx$ for some non-zero $x$. Sounds like a circular argument. – stoic-santiago Jan 14 '20 at 08:31
  • Usually "eigenvalue" is defined before "eigenvector": the eigenvalues are the zeroes of the characteristic polynomial. – Michael Hoppe Jan 14 '20 at 12:22

Here's a different proof of this statement that works over an arbitrary commutative unital integral domain $R$. One direction is rather uninspiring though, and the other direction reduces to the case over fields.

For our $n\times n$ matrix $A$ with entries in $R$, let $A^\alpha$ denote the classical adjoint of $A$. A nifty property of the classical adjoint is that $A^\alpha A = \det(A)I_n$. So if $A\mathbf{x}=0$ has a nonzero solution, we get that $$ A\mathbf{x}=0 \quad\implies\quad A^\alpha A\mathbf{x}=0 \quad\implies\quad \det(A)I_n \mathbf{x}=0, $$ and since $\mathbf{x}$ has some nonzero entry $x_i$ and $R$ has no zero divisors, $\det(A)\,x_i=0$ forces $\det(A)=0$.
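As a quick sketch of the adjoint identity in the $2\times 2$ case (a generic matrix, just for illustration): $$ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad A^\alpha = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \qquad A^\alpha A = \begin{pmatrix} ad-bc & 0 \\ 0 & ad-bc \end{pmatrix} = \det(A)\,I_2. $$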

For the other direction, suppose that $\det(A)=0$. Let $\mathrm{Fr}(R)$ denote the field of fractions of $R$. Since $R$ is a domain, $\mathrm{Fr}(R)$ contains a copy of $R$, and we can think of the entries of $A$ as living in $\mathrm{Fr}(R)$. Since this result is true over fields, over $\mathrm{Fr}(R)$ the equation $A\mathbf{x}=0$ has a nonzero solution $$ \mathbf{x} = \left\langle \frac{a_1}{b_1}, \dotsc, \frac{a_n}{b_n} \right\rangle $$ with $a_i,b_i \in R$. Let $\beta$ be the product $b_1\dotsb b_n$. Then since $\mathbf{x}$ is a solution to $A\mathbf{x}=0$, $\beta\mathbf{x}$ will be a solution too; it is still nonzero because $\beta\neq0$ and $R$ has no zero divisors, and its entries live in $R$, giving us a nontrivial solution in $R$.
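A small sketch of the denominator-clearing over $R=\mathbb{Z}$ (my own example): $$ A = \begin{pmatrix} 2 & 4 \\ 3 & 6 \end{pmatrix}, \qquad \det(A)=0, \qquad \mathbf{x} = \left\langle 1, -\frac{1}{2} \right\rangle $$ solves $A\mathbf{x}=0$ over $\mathrm{Fr}(\mathbb{Z})=\mathbb{Q}$, and scaling by $\beta = 1\cdot 2 = 2$ gives the integer solution $\langle 2, -1 \rangle$.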

This theorem is not true, though, if we aren't working over an integral domain.
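For instance (a minimal example of my own): over $R=\mathbb{Z}/4\mathbb{Z}$, the $1\times1$ matrix $A = (2)$ has $\det(A) = 2 \neq 0$, yet $A\mathbf{x}=0$ has the nonzero solution $\mathbf{x} = \langle 2 \rangle$, since $2\cdot 2 = 4 = 0$ in $\mathbb{Z}/4\mathbb{Z}$.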

Mike Pierce
    Here's a generalization that is true over arbitrary commutative rings: https://artofproblemsolving.com/community/c7h124137 – darij grinberg Aug 10 '18 at 22:49