
Usually the inverse of a square $n \times n$ matrix $A$ is defined as a matrix $A'$ such that:

$A \cdot A' = A' \cdot A = E$

where $E$ is the identity matrix.

From this definition one proves uniqueness, but the proof relies essentially on the fact that $A'$ is both a right and a left inverse.

But what if... we define right and left inverse matrices separately? Can we then prove that:

(1) the right inverse is unique (when it exists)
(2) the left inverse is unique (when it exists)
(3) the right inverse equals the left one

I mean the usual definition seems too strong to me. Why is the inverse introduced this way? Is it because if the inverse is introduced the way I mention, these three statements cannot be proven?
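
Just to fix ideas, here is a quick numerical check of the two-sided definition over the reals (a minimal sketch using NumPy; the particular matrix is an arbitrary choice):

```python
import numpy as np

# An arbitrary invertible 3x3 matrix over the reals, just for illustration.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

A_prime = np.linalg.inv(A)   # candidate inverse A'
E = np.eye(3)

print(np.allclose(A @ A_prime, E))   # True: A' is a right inverse of A
print(np.allclose(A_prime @ A, E))   # True: the same A' is also a left inverse
```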

peter.petrov
  • Related question: https://math.stackexchange.com/q/3852/40119 – littleO Dec 05 '20 at 10:12
  • @littleO I looked at this answer and I think I understand it well: https://math.stackexchange.com/q/3800863 So... basically we explicitly show (construct) that matrix C and then it follows that the right inverse equals C and the left one also equals C. Correct? So... it means one can prove (1), (2), (3) if the two concepts of left and right inverse are introduced separately, is that right? I think my book does the same later on in the text. It just doesn't mention that these equalities there are called Laplace expansions. Good to know they are called that way. – peter.petrov Dec 05 '20 at 10:39
  • It's a little important to say what your matrices are over. Presumably, a field (the reals? the rationals? the complex numbers? other?) Over more general rings, weird things can happen. – Arturo Magidin Dec 09 '20 at 23:20
  • @ArturoMagidin My question was basic. The matrices are over some field yes, e.g. over the reals. On a side note, what weird things? Which might influence the answer, I guess? – peter.petrov Dec 10 '20 at 00:01
  • @peter.petrov: over general rings, you could have matrices that are not square, but are invertible (e.g., a $1\times 2$ matrix $A$ and a $2\times 1$ matrix $B$ with $BA$ the $2\times 2$ identity and $AB$ the $1\times 1$ identity), which is impossible over a field. And you could have square matrices that have inverses on one side but not the other; this also cannot happen over a field (or more generally, over a domain). In any case, the “accepted” answer has a serious gap. – Arturo Magidin Dec 10 '20 at 00:07
  • @ArturoMagidin Aha... So in some way even though unwillingly my question does have some deeper meaning. I get it, thanks. Yeah, I was feeling like the definition I read was a bit too strong. – peter.petrov Dec 10 '20 at 00:08
  • It has well-known answers in general rings (away from matrices). See my answer. In general rings, it is standard to define “left-invertible” and “left inverse”, “right invertible” and “right inverse”, and “invertible” and “two-sided inverse”, followed by a proof that if an element is left- and right-invertible then it is invertible and its left and right inverses coincide. In fact, it can be done at the level of monoids, not even rings. – Arturo Magidin Dec 10 '20 at 00:09
  • @ArturoMagidin I will read it in some depth definitely. Thank you. – peter.petrov Dec 10 '20 at 00:10
  • (I edited and added something to the previous comment before you commented...) – Arturo Magidin Dec 10 '20 at 00:10
  • @ArturoMagidin Right... Thanks. Yes, that's how I felt the theory should have been developed... even over a field like the reals. This definition that I read was just not minimalistic enough, that was my feeling. – peter.petrov Dec 10 '20 at 00:12

3 Answers


Suppose $A$ and $B$ are $n \times n$ matrices for which $AB = I$ holds, where $I$ is the $n \times n$ identity matrix. Now we will try to solve

$BX = I$

for an unknown $n \times n$ matrix $X$. We (left-)multiply both sides by $A$ to get

$ABX = AI$

which gives us

$IX = A$

so $X = A$ (because $IX = X$) and therefore $BA = I$. So we have just shown that $AB = I$ implies

$AB = BA = I$.

So as you can see, it is not necessary to introduce a second definition for the inverse, because it leads to the same thing. Instead of $B$ we write $A^{-1}$.

A solution to $BX = I$ exists because it can be shown that $B$ is a bijective mapping ($x \mapsto Bx$) from $\mathbb{R}^{n}$ to $\mathbb{R}^{n}$.
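
For what it's worth, here is a quick numerical sanity check of the argument over the reals (only an illustration, not a proof; it uses NumPy, and np.linalg.solve presupposes that a solution exists, which is exactly the point discussed in the comments below):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))          # a random square matrix (almost surely invertible)
B = np.linalg.inv(A)                 # so AB = I by construction

# Solve BX = I for X, as in the argument above.
X = np.linalg.solve(B, np.eye(n))

print(np.allclose(X, A))             # True: the solution is A
print(np.allclose(B @ A, np.eye(n))) # True: hence BA = I as well
```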

Elmex80s
  • Does this proof really work? Because it looks very simple and obvious if indeed correct. – peter.petrov Dec 05 '20 at 17:57
  • I think this really works... Thanks a lot. Not sure why this proof is not mentioned more often. – peter.petrov Dec 05 '20 at 17:59
  • I worked through your argument one more time but... this only proves that if $X$ exists such that $BX=I$ then $X=A$. It does not prove that this $X$ really exists (given that $AB=I$). I think the explicit proof is better - by just showing explicitly what $A^{-1}$ really is (that matrix with the cofactors). Never mind, I get it all now. – peter.petrov Dec 09 '20 at 15:38
  • You have that $AB=I$, and you are trying to prove that $BA=I$. There's no way apart from constructing explicitly the $A^{-1}$ matrix and then showing that it's both a left and a right inverse (as done here: https://math.stackexchange.com/a/3800863/116591). I am accepting your answer anyway, thanks for the help. – peter.petrov Dec 09 '20 at 15:55
  • You cannot cancel matrix $B$ like this from both sides of the equality. For matrices, if we have that $AB=A$, this does not imply $B=I$ (or $A=0$, where $0$ is the zero matrix). Never mind, leave it, I am OK now, I got it. – peter.petrov Dec 09 '20 at 16:52
  • But that's the thing: yes, $X=I$ is a solution, but you are not sure that $X=I$ must hold, i.e. $X=I$ is not implied. And since $X$ denotes $BA$ you are not sure if $BA=I$. You see the flaw? – peter.petrov Dec 09 '20 at 17:22
  • No... you are missing the point, there may be more than one $X$ such that $B=XB$. Hence it does not follow that $X=I$. Write your whole proof in full and you will see the flaw. – peter.petrov Dec 09 '20 at 19:08
  • No, you didn't show $X=I$. You have a flaw there believe it or not. – peter.petrov Dec 09 '20 at 19:11
  • I told you. You want to prove $BA=I$. You denote $X=BA$. Then you show that $B=XB$ (this is fine, this is correct). But from this last equality you cannot conclude that $X=I$. – peter.petrov Dec 09 '20 at 19:13
  • There are non-commutative rings with elements $A$, $B$ such that $AB=I$ but $BA \ne I$ - see here for example. So you should ask yourself why your proof does not apply in that case. – Derek Holt Dec 09 '20 at 19:23
  • @Elmex80s Let me delete my entire question :) Again... thanks for the help! – peter.petrov Dec 09 '20 at 20:57
  • @peter.petrov You cannot delete a question that has an upvoted answer, or has been accepted. And Elmex80s, do not vandalize your own post. – amWhy Dec 09 '20 at 22:28
  • As far as I can tell, you have shown the following: We assume that $AB=I$. If $BX=I$ has a solution, then $X=A$. (But we still need to show that $BX=I$ has a solution.) I'd guess adding this clarification - instead of deleting the answer - would be completely sufficient. But if you still think that it is really needed to delete your answer, you have to ask @peter.petrov to unaccept it first. – Martin Sleziak Dec 09 '20 at 22:30
  • See update. ___ – Elmex80s Dec 10 '20 at 09:05

The proposed, accepted answer has a bug. I mean, the conclusion $(AB=I)\Rightarrow (BA=I)$ is true for $n\times n$ matrices, but the proof is wrong.

Suppose that $AB=I$.

IF there exists $X$ so that $BX=I$ -- which you don't know --

THEN you can conclude that $ABX=AI$, whence $X=A$.

But maybe $X$ does not exist. In fact, the proposed proof never used the fact that the matrices are square $n\times n$, which is crucial.

Example. Suppose $A=\begin{pmatrix}1&0&0\\0&1&0\end{pmatrix}$ and $B=A^t=\begin{pmatrix}1&0\\ 0&1\\ 0&0\end{pmatrix}$

Then $AB=I$; more precisely, $AB=I_2$, the $2\times 2$ identity.

IF there existed $X$ such that $BX=I$, more precisely $BX=I_3$, the $3\times 3$ identity, then one could conclude that $X=A$. BUT in fact $BA=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix}$.
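
A quick NumPy check of this counterexample (the code is only an illustration, not part of the argument):

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 0.]])           # 2x3
B = A.T                                 # 3x2, the transpose

print(np.allclose(A @ B, np.eye(2)))    # True:  AB = I_2
print(np.allclose(B @ A, np.eye(3)))    # False: BA is not I_3
print(B @ A)                            # diag(1, 1, 0)
```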

It is still true, for $n\times n$ matrices, that $AB=I$ if and only if $BA=I$, but this fact does not have a two-line proof. (See for instance this question.)

Note: the above example can be adapted to "square" matrices of infinite size. That is to say, endomorphisms of infinite-dimensional vector spaces that have a left inverse need not have a right inverse.

Finally, to give a complete answer to the original question, observe that any matrix $C=\begin{pmatrix}1&0\\ 0&1\\ x&y\end{pmatrix}$ is a right inverse of $A$. (And so $C^t$ is a left inverse of $B$.) This shows that right and left inverses are not unique outside the realm of $n\times n$ matrices.
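
Again only as an illustration, a small NumPy sketch showing that several choices of $x, y$ all give right inverses of $A$, so the right inverse is indeed not unique here:

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 0.]])

def C(x, y):
    # the family of right inverses described above
    return np.array([[1., 0.],
                     [0., 1.],
                     [x,  y ]])

for x, y in [(0.0, 0.0), (5.0, -3.0), (2.5, 7.0)]:
    print(np.allclose(A @ C(x, y), np.eye(2)))  # True for every choice of x, y
```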

So yes, the fact that for $n \times n$ matrices a right inverse is also a left inverse is crucial.

user126154
  • "but this fact has not a two-lines proof" Yeah, exactly my point. I get it much better now. – peter.petrov Dec 10 '20 at 00:15
  • I saw the question and the answer you were referring to. I even have a comment below that answer. That answer is what made me reread the proof from Elmex80s, and made me suspect there's some flaw in it. Thanks for your answer, it had a few interesting points. – peter.petrov Dec 10 '20 at 00:39

So... square matrices over a field are special: one can prove that if a square matrix has a left inverse, then it also has a right inverse. And once you know that, it is easy to show that the left and right inverse must be equal. This holds in general for groups/rings: if $x$ is an element, and there exist $a,b$ such that $ax=1$ and $xb=1$, then $a=b$: $$a = a1 = a(xb) = (ax)b = 1b = b.$$

(Use capital letters and the identity for matrices).
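
Spelled out with matrices: if $CA = I$ (a left inverse) and $AB = I$ (a right inverse), then
$$C = CI = C(AB) = (CA)B = IB = B.$$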

Now, if $A$ is a square matrix over a field, and there exists a matrix $B$ such that $AB=I$, then interpret $A$ and $B$ as linear transformations. Then $AB$ is bijective, hence $A$ is surjective; but that means that $A$ is full rank, hence by the Dimension Theorem has trivial nullity, hence $A$ is also injective. Since $A$ is therefore a bijection, it has a left inverse $C$ (which is also linear, and thus corresponds to a matrix), and thus $A$ has a two-sided inverse.

Similarly, if $A$ has a left inverse $C$ with $CA=I$, then $A$ is one-to-one, hence full rank and so surjective, so it has a right inverse and the argument proceeds as before.

In arbitrary rings you can have elements that have a left inverse but no right inverse; in such cases, you will have multiple left inverses but no right inverse. For if $x$ is an element such that there exists $a$ with $ax=1$, but $xb\neq 1$ for all $b$, then $xa\neq 1$, so $xa-1\neq 0$. Then $(xa-1)x = xax-x = x-x = 0$, so then $(a+xa-1)x = ax+(xa-1)x = 1$, but $a+xa-1\neq a$. Thus, $x$ has at least two left inverses. A symmetric argument shows that if $x$ has a right inverse but no left inverse, then it has at least two right inverses.

And yes, it is possible to have rings in which some elements have left inverses but no right inverses. Consider the vector space $\mathbb{R}[x]$ of all polynomials with coefficients in $\mathbb{R}$, and the ring of all linear transformations from $\mathbb{R}[x]$ to itself. The linear transformation $T(p(x)) = xp(x)$ is one-to-one but not onto, so it has a left inverse but no right inverse. In fact, it has infinitely many left inverses. Similarly, the linear transformation $T(p(x)) = p'(x)$ (the derivative) is onto, but not one-to-one, so it has a right inverse (in fact, infinitely many), but no left inverse.
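
To make the first example concrete, here is a small Python sketch (the coefficient-list representation and the helper names T, S1, S2 are mine, chosen only for illustration): T is multiplication by $x$, and S1, S2 are two different left inverses of T, which agree on the image of T but differ elsewhere.

```python
# Polynomials are represented by coefficient lists [a0, a1, a2, ...].

def T(p):
    # multiplication by x: shifts every coefficient up by one degree
    return [0.0] + list(p)

def S1(q):
    # one left inverse of T: drop the constant term
    return list(q[1:])

def S2(q):
    # another left inverse of T: drop the constant term, then add it to the
    # new constant coefficient; on the image of T (constant term 0) this is
    # the same as S1, but on general polynomials it is not
    r = list(q[1:])
    if r:
        r[0] += q[0]
    else:
        r = [q[0]]
    return r

p = [3.0, 1.0, 2.0]                    # 3 + x + 2x^2
print(S1(T(p)) == p, S2(T(p)) == p)    # True True: both are left inverses of T
print(S1([1.0, 0.0]), S2([1.0, 0.0]))  # [0.0] vs [1.0]: S1 and S2 differ

# T has no right inverse: every output of T has constant term 0, so T(S(1))
# can never equal the constant polynomial 1, no matter how S is chosen.
```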

Arturo Magidin
  • I updated my answer. Still somehow a bit surprised. I am always used to 'crawl' my way to a solution by valid algebraic manipulation and never had to worry about the existence of a solution. In fact the only reason I am convinced there was a flaw in my proof was because of the counterexample with the infinite vector space. How does this work for differential equations? Can you just go for a numerical solution without knowing anything about existence? – Elmex80s Dec 10 '20 at 16:23
  • @Elmex80s: What you did was find necessary conditions for a solution to exist; if a solution exists, then blah. If all steps were reversible, then the necessary condition you find at the end is also sufficient. But if not all steps are reversible, then you are in trouble. In your solution, the step from $BX=I$ to $ABX=AI$ is not reversible in general (you need $A$ to be left-cancellable, and you do not know that in general), so you cannot guarantee that your necessary condition is also sufficient. – Arturo Magidin Dec 10 '20 at 16:27
  • @Elmex80s: An analogue would be to try to solve $x^2=-1$ in the reals, by first squaring to get $x^4=1$, then solving this to get $x=1$ and $x=-1$. If a real solution existed, then it would be $1$ or $-1$; but the step from $x^2=-1$ to $x^4=1$ cannot be reversed, so the necessary condition is not sufficient. – Arturo Magidin Dec 10 '20 at 16:30
  • Thanks. So when the steps are reversible, existence of the solution follows from the answer, for example $2 + x = 5$. When they are not, I have to be much more cautious? Strange I do not know this, I got my degrees from respectable universities. – Elmex80s Dec 10 '20 at 16:48
  • @Elmex80s: Surely you’ve encountered equations where solving them produces “extraneous solutions”, or in differential equations you get equations where you have to “manually” find special solutions (usually constant solutions), which you excluded when you did cancellation, division, squaring, or the like. You’ve seen the phenomenon, you just never had it highlighted. – Arturo Magidin Dec 10 '20 at 16:49