6

We know that $AA^{-1} = I$ and $A^{-1}A = I$, but is there a proof for the commutative property here? Or is this just the definition of invertibility?

user26857
  • 52,094
  • The definition of invertibility implies this. Given $A$, if there is $B$ such that $AB=I$ and $BA=I$, we say that $A$ is invertible and we call $B=A^{-1}$. – Euler88 ... Aug 01 '15 at 22:26
  • 2
    Are you asking: If we know $AA^{-1} = I$, does it follow that $A^{-1}A = I$? – pjs36 Aug 01 '15 at 22:38
  • 1
    I think he is asking what @pjs36 implies. Even if he isn't, it is interesting information to be addressed here. – Aloizio Macedo Aug 01 '15 at 22:39
  • 1
    For a square matrix, the existence of a left inverse or right inverse implies that the matrix is invertible, since if $AB=I$, then $A=IA=(AB)A=A(BA) \implies BA=I$ – rationalis Aug 01 '15 at 22:42
  • @rationalis: How do we know $A(BA)=A \Rightarrow BA=I$ without knowing that $BA=I$ already? The intuitively straightforward way to get from $A(BA)=A$ to $BA=I$ is to multiply on the left by $B$, whence we are assuming the conclusion. – Will R Aug 01 '15 at 22:44
  • 2
    @rationalis: That assumes you can prove that $AC=A$ implies $C=I$. Doing so before we know $A$ has a left inverse is tricky -- and cannot be done while staying at the level of abstraction you're working on here (since it's not true for general linear transformations). – hmakholm left over Monica Aug 01 '15 at 22:44
  • Isn't this asking the same as this? – leo Aug 02 '15 at 00:31

3 Answers

7

Commutativity is part of the definition of the inverse, but it is justified by the following fact about monoids: if an element $a$ of a monoid $M$ has a right inverse $b$ and a left inverse $c$, that is, $ab=e$ and $ca=e$ (where $e$ is the neutral element of $M$), then $b=c$; in other words, $a$ has an inverse.

This follows very simply from the associativity of the monoid law: $$b= eb=(ca)b=c(ab)=ce=c.$$
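To see this identity at work in the monoid of $n\times n$ matrices under multiplication, here is a minimal NumPy sketch (my own illustration, not part of the original answer): it computes a right inverse and a left inverse of a random matrix separately and checks that they coincide. The size $n=4$ and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))           # a random matrix is almost surely invertible

# Right inverse b: solve A @ b = I
b = np.linalg.solve(A, np.eye(n))

# Left inverse c: solve c @ A = I, i.e. A.T @ c.T = I
c = np.linalg.solve(A.T, np.eye(n)).T

# The monoid argument b = eb = (ca)b = c(ab) = ce = c says these must agree.
assert np.allclose(b, c)
```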

zzchan
  • 184
Bernard
  • 175,478
  • 3
    Yes, but the monoid of square matrices has the additional property that every left-invertible matrix is also right-invertible and vice versa. That is not true for monoids in general. – hmakholm left over Monica Aug 01 '15 at 22:47
  • You're right, and that is linked to finite dimension, but it is not exactly what the O.P.'s question is about. – Bernard Aug 02 '15 at 00:01
4

Forget about linearity for the moment. If $X$ and $Y$ are sets and $f : X \rightarrow Y$ is some function that is injective, then there exists a function $g : f(X)\rightarrow X$ such that $$ g(f(x))=x,\;\;\; x\in X. $$ Even though $f$ may not be surjective, you can apply $f$ to both sides of the above in order to obtain: $$ f(g(f(x)))=f(x) \\ (f\circ g)(f(x))=f(x) \\ (f\circ g)(y) = y,\;\;\; y \in f(X). $$ So it's a simple trick to see that $g : f(X)\rightarrow X$ and $f : X\rightarrow f(X)$ are inverses. Consequently, if $f$ is injective and surjective, then $g\circ f = id_{X}$ forces $f\circ g = id_{Y}$, where $id_{X}$ and $id_{Y}$ are the identity maps on $X$, $Y$, respectively.
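As a concrete (and entirely made-up) finite example of this, the following Python sketch builds an injective but non-surjective $f$ between two small sets and checks that $g\circ f$ is the identity on $X$ while $f\circ g$ is the identity only on $f(X)$, not on all of $Y$:

```python
# Hypothetical finite example: f : X -> Y is injective but not surjective.
X = [0, 1, 2]
Y = ['a', 'b', 'c', 'd']

f = {0: 'a', 1: 'c', 2: 'd'}            # injective; 'b' is not in the image
g = {y: x for x, y in f.items()}        # defined only on f(X) = {'a', 'c', 'd'}

# g(f(x)) = x for every x in X
assert all(g[f[x]] == x for x in X)

# f(g(y)) = y for every y in f(X), but g is not even defined at 'b'
image = set(f.values())
assert all(f[g[y]] == y for y in image)
assert 'b' not in g
```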

For a linear function $L : X\rightarrow X$ on a finite-dimensional linear space $X$, you have the unusual property that $L$ is surjective iff it is injective. That is a consequence of the rank-nullity theorem, and is peculiar to linear maps on finite-dimensional spaces (i.e., it is not true on infinite-dimensional linear spaces). Therefore, if $L : X\rightarrow X$ is injective, then $f(x) = Lx$ as above has an inverse $g$ that is defined everywhere on $X$ (because $L$ is then also surjective, so $f(X)=X$), which forces $(f\circ g)(y)=y$ for all $y \in X$. In other words, if $M$ is a matrix such that $ML=I$ on the finite-dimensional linear space $X$, then it automatically holds that $LM=I$.
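The following NumPy sketch (again my own addition, with arbitrarily chosen sizes) contrasts the two situations: for a square matrix a left inverse is automatically a right inverse, while for a tall injective matrix the left inverse given by the Moore-Penrose pseudoinverse is not a right inverse.

```python
import numpy as np

rng = np.random.default_rng(1)

# Square case: a left inverse M of L is automatically a right inverse.
L = rng.standard_normal((4, 4))            # almost surely invertible
M = np.linalg.solve(L.T, np.eye(4)).T      # solves M @ L = I
assert np.allclose(M @ L, np.eye(4))
assert np.allclose(L @ M, np.eye(4))       # ...and L @ M = I as well

# Tall case: T maps R^3 into R^5, injective but not surjective.
T = rng.standard_normal((5, 3))
G = np.linalg.pinv(T)                      # G @ T = I_3, a left inverse
assert np.allclose(G @ T, np.eye(3))
assert not np.allclose(T @ G, np.eye(5))   # T @ G is only a projection onto range(T)
```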

Disintegrating By Parts
  • 87,459
3

If $A$ and $B$ are square matrices in $\mathbb R^{n\times n}$ such that $AB=I$, then we can prove that $BA=I$ too.

One way to see this is to consider the $n$ column vectors $B\mathbf e_1, B\mathbf e_2, \ldots, B\mathbf e_n$, where the $\mathbf e_i$s are the standard basis vectors of $\mathbb R^n$. The $B\mathbf e_i$s must be linearly independent (because if some linear combination of them is zero, multiplying it from the left by $A$ shows that the same linear combination of the $\mathbf e_i$s is zero, so all the coefficients are zero), and any linearly independent set of $n$ vectors is a basis for $\mathbb R^n$.

Now consider an arbitrary column vector $Y\in\mathbb R^n$. We can write $Y$ as a linear combination of the $B\mathbf e_i$s (because they form a basis). Let $X$ be the same linear combination of the $\mathbf e_i$s; by linearity we have $BX=Y$. We can now calculate $$ (BA)Y=(BA)(BX)=B(AB)X=BIX=BX=Y. $$ In other words, left multiplication by $BA$ is the identity, and the only matrix with that property is $I$, so $BA=I$.
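As a numerical replay of this argument (my addition; the dimension and random seed are arbitrary), the sketch below constructs square $A$ and $B$ with $AB=I$, checks that the columns $B\mathbf e_i$ are linearly independent, verifies $(BA)Y=Y$ for a random $Y$, and finally confirms $BA=I$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))          # almost surely invertible
B = np.linalg.solve(A, np.eye(n))        # constructed so that A @ B = I
assert np.allclose(A @ B, np.eye(n))

# The columns B e_i are linearly independent: B has full rank n.
assert np.linalg.matrix_rank(B) == n

# For an arbitrary Y, solve B @ X = Y and check that (B @ A) @ Y = Y.
Y = rng.standard_normal(n)
X = np.linalg.solve(B, Y)
assert np.allclose((B @ A) @ (B @ X), B @ X)
assert np.allclose((B @ A) @ Y, Y)

# Left multiplication by B @ A fixes every vector, so B @ A = I.
assert np.allclose(B @ A, np.eye(n))
```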