
If $A$ is a square matrix whose coefficients are scalars in a field $\Bbb K$, then there exists a linear function $f:\Bbb K^n\rightarrow\Bbb K^n$ such that $$ f(e_j):=A_j $$ where $A_j$ is the $j$-th column of $A$, for any $j=1,...,n$.

Then if $I$ is the identity matrix and $A$ and $B$ are two square matrices such that $AB=I$, there exist two linear functions $f,g:\Bbb K^n\rightarrow\Bbb K^n$ such that $I$ is the matrix of the composition $(f\circ g)$, and this means that $(f\circ g)$ is the identity on $\Bbb K^n$. So to prove the statement of the question I have to prove that $(g\circ f)$ is the identity too. How can I prove this? Could someone help me, please?
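For concreteness, the column correspondence described above can be sketched in a few lines of pure Python (the helper names `mat_vec`, `mat_mul` and the example matrices are mine, purely illustrative): $f(e_j)$ is the $j$-th column of $A$, and the matrix of $f\circ g$ is the product $AB$.

```python
def mat_vec(A, x):
    """Apply the matrix A (given as a list of rows) to the vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def mat_mul(A, B):
    """Matrix product AB for square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]

e = [[1, 0], [0, 1]]          # standard basis vectors e_1, e_2
# f(e_j) is the j-th column of A, as in the question:
for j in range(2):
    assert mat_vec(A, e[j]) == [A[i][j] for i in range(2)]

# The matrix of f∘g (first apply B, then A) is the product AB:
x = [5, -7]
assert mat_vec(A, mat_vec(B, x)) == mat_vec(mat_mul(A, B), x)
```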

  • @lulu Unfortunately I don't understand those answers. – Antonio Maria Di Mauro Aug 22 '20 at 17:42
  • Then you should ask a more precise question... Since you asked a duplicate question, any answer you get is going to closely resemble those. – lulu Aug 22 '20 at 17:43
  • Worth stressing: the desired claim is false in infinite dimensions. For instance: Let $V$ be the vector space of real sequences $(a_1,a_2, \cdots)$. Let $B:V\to V$ be the map $(a_1,a_2, \cdots)\mapsto (0,a_1, a_2, \cdots)$ and let $A:V\to V$ be the map $(a_1, a_2, \cdots)\mapsto (a_2, a_3, a_4, \cdots)$. Then $AB=I$ but $BA\neq I$. – lulu Aug 22 '20 at 17:46
  • here is yet another duplicate. – lulu Aug 22 '20 at 17:47
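lulu's infinite-dimensional counterexample can be checked directly. Here is a sketch (the representation and names are mine) that models a sequence as a function from a 0-based index to a value, with `B` the right shift and `A` the left shift:

```python
def B(a):
    """Right shift: (a0, a1, ...) -> (0, a0, a1, ...)."""
    return lambda n: 0 if n == 0 else a(n - 1)

def A(a):
    """Left shift: (a0, a1, ...) -> (a1, a2, ...)."""
    return lambda n: a(n + 1)

a = lambda n: n + 1           # the sequence (1, 2, 3, ...)

# AB = I: shifting right then left recovers the sequence.
assert all(A(B(a))(n) == a(n) for n in range(10))

# BA != I: shifting left then right destroys the first entry.
assert B(A(a))(0) == 0
assert a(0) == 1              # so (BA)(a) differs from a at index 0
```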

4 Answers


$f\circ g = \text{id}$ implies $g$ is injective (if a composition is injective then the "inner" function is always injective). Since $g:\Bbb{K}^n\to \Bbb{K}^n$ is a map between vector spaces of the same dimension, the rank-nullity theorem implies $g$ is also bijective. Thus, $f = g^{-1}$.

Another way to phrase the argument is to say that since $f\circ g = \text{id}$, then $f$ is surjective (again if a composition is surjective, then the "outer" function is surjective). Again, rank-nullity implies $f$ is bijective, so that again $g=f^{-1}$.
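A quick numerical sanity check of this answer's conclusion (the matrices below are my own example, not from the answer): for square matrices, a one-sided inverse is automatically two-sided.

```python
def mat_mul(A, B):
    """Matrix product AB for square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

I = [[1, 0], [0, 1]]
A = [[2, 1], [1, 1]]
B = [[1, -1], [-1, 2]]

assert mat_mul(A, B) == I    # AB = I ...
assert mat_mul(B, A) == I    # ... and then BA = I as well
```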

peek-a-boo
  • Okay, I like your proof. However, I don't understand the final statement: why does rank-nullity imply that $f$ is bijective? Could you explain, please? – Antonio Maria Di Mauro Aug 22 '20 at 17:56
  • Or yet another way: if $AB = I$ then taking determinant on both sides shows that $\det A \cdot \det B = 1 \neq 0$, so that $\det A$ and $\det B$ are non-zero, hence $A$ and $B$ are invertible matrices. – peek-a-boo Aug 22 '20 at 17:57
  • @AntonioMariaDiMauro What does the rank-nullity theorem say? – peek-a-boo Aug 22 '20 at 17:57
  • I don't know this theorem: I studied linear algebra from an Italian book, so I probably know it under another name, sorry. – Antonio Maria Di Mauro Aug 22 '20 at 17:58
  • Perhaps I understood! This theorem says that if $v_1,...,v_n$ are vectors of $V$ such that $V=\big<v_1,...,v_n\big>$ and the dimension of $V$ is $n$, then those vectors form a basis, right? – Antonio Maria Di Mauro Aug 22 '20 at 18:03
  • @AntonioMariaDiMauro It says that if $V,W$ are finite-dimensional vector spaces over a field and $T:V\to W$ is linear, then $\dim \ker T + \dim \text{image}(T) = \dim V$. In particular, if $\dim V = \dim W$ and $T$ is injective, then $\ker T = \{0\}$, so the above equation shows $\dim \text{image}(T) = \dim W$. In other words, $\text{image}(T) = W$ (because if a subspace has the same dimension as the whole space, then it equals the whole space). Hence $T$ is surjective (and injective + surjective implies bijective). – peek-a-boo Aug 22 '20 at 18:03
  • Okay, I knew this theorem, but Italian professors don't call it by any particular name. – Antonio Maria Di Mauro Aug 22 '20 at 18:04
  • @AntonioMariaDiMauro Always keep in mind the following corollary of the rank-nullity theorem. – peek-a-boo Aug 22 '20 at 18:06
  • Okay. Anyway, thanks so much for your assistance!!! – Antonio Maria Di Mauro Aug 22 '20 at 18:08
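The rank-nullity statement from the comments can also be checked numerically. Here is a small pure-Python sketch (the `rank` helper and the matrix $T$ are mine, purely illustrative) verifying $\dim\ker T + \dim\text{image}(T) = \dim V$ for a concrete singular map:

```python
from fractions import Fraction

def rank(M):
    """Row-reduce a copy of M over the rationals and count the pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mat_vec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

T = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]        # 3x3, visibly singular (row 2 = 2 * row 1)
assert rank(T) == 2                           # dim image(T) = 2
assert mat_vec(T, [1, 1, -1]) == [0, 0, 0]    # dim ker T = 3 - 2 = 1, spanned by (1, 1, -1)
```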

Let $E = {\Bbb K}^n$. If $AB=I$ on $E$, then both $A$ and $B$ must have full rank, because the dimension is finite. We have $(BA)^2=B(AB)A=BA$, so $BA$ restricted to the image of $BA$ is the identity; but this image is all of $E$ (since $A$ and $B$ have full rank), which allows us to conclude that $BA=I$.
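This mechanism can be illustrated numerically (the matrices below are mine, not from the answer): $BA$ is always idempotent when $AB=I$, and $BA=I$ is exactly what the square, full-rank case forces. A rectangular pair shows what goes wrong when the image of $BA$ is too small.

```python
def mat_mul(A, B):
    """Matrix product AB for compatible rectangular matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Rectangular warning case: AB = I_1 but BA is a proper idempotent, not I_2.
A = [[1, 0]]
B = [[1], [0]]
assert mat_mul(A, B) == [[1]]
P = mat_mul(B, A)
assert mat_mul(P, P) == P          # (BA)^2 = BA always holds when AB = I
assert P != [[1, 0], [0, 1]]       # ... yet BA != I here: its image is too small

# Square case: full rank forces image(BA) = E, hence BA = I.
A = [[2, 1], [1, 1]]
B = [[1, -1], [-1, 2]]
assert mat_mul(A, B) == [[1, 0], [0, 1]]
assert mat_mul(B, A) == [[1, 0], [0, 1]]
```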

H. H. Rugh

Note the similarity of block matrices: $\begin{pmatrix}I & -A \\ 0 & I \end{pmatrix}\begin{pmatrix}AB & 0 \\ B & 0 \end{pmatrix}\begin{pmatrix}I & A \\ 0 & I \end{pmatrix} = \begin{pmatrix}0 & 0 \\ B & BA \end{pmatrix}$.

It follows that $AB$ and $BA$ have the same non-zero eigenvalues, counting multiplicities. In particular, if $AB=I$, then the characteristic polynomial of $BA$ is $(t-1)^n$, so $0$ is not an eigenvalue of $BA$ and $BA$ is invertible. Since $(BA)^2=B(AB)A=BA$, multiplying by $(BA)^{-1}$ gives $BA=I$.
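The block similarity above can be verified exactly; here is a sketch with $2\times 2$ blocks of my own choosing (the `block4` helper assembles a $4\times 4$ matrix from four $2\times 2$ blocks):

```python
def mat_mul(A, B):
    """Matrix product AB for square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def block4(M11, M12, M21, M22):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    top = [M11[i] + M12[i] for i in range(2)]
    bot = [M21[i] + M22[i] for i in range(2)]
    return top + bot

I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
negA = [[-x for x in row] for row in A]

S     = block4(I, negA, Z, I)                 # the conjugating matrix
S_inv = block4(I, A, Z, I)                    # its inverse
M     = block4(mat_mul(A, B), Z, B, Z)

lhs = mat_mul(mat_mul(S, M), S_inv)
rhs = block4(Z, Z, B, mat_mul(B, A))
assert lhs == rhs                              # the similarity identity holds
```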


Theorem:

If $A$ and $B$ are two square matrices such that $A B=I$, then $B A=I$.

Proof:

By applying the properties of determinants of square matrices, we get that

$\det A\cdot\det B=\det(A B)=\det I=1\ne0$.

It follows that $\;\det A\ne0\;$ and $\;\det B\ne0\;$.

Now we consider the matrix

$C=\frac{1}{\det A}\cdot\text{adj}(A)$

where $\;\text{adj}(A)\;$ is the adjugate matrix of $A$, that is, the transpose of the cofactor matrix of $A$:

$\text{adj}(A)= \begin{pmatrix} A_{1,1} & A_{2,1} & \cdots & A_{n,1} \\ A_{1,2} & A_{2,2} & \cdots & A_{n,2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1,n} & A_{2,n} & \cdots & A_{n,n} \end{pmatrix}$

where $\;A_{i,j}\;$ is the cofactor of the element $a_{i,j}$ of the matrix $A$.

So $\;A_{i,j}=(-1)^{i+j}\det M_{i,j}$

where $\;M_{i,j}\;$ is the submatrix of $A$ formed by deleting the $i^{th}$ row and the $j^{th}$ column.

We are going to use the Laplace expansions, which are the following identities:

$a_{i,1}A_{j,1}+a_{i,2}A_{j,2}+\ldots+a_{i,n}A_{j,n}= \begin{cases} \det A\;,\quad\text{ if } i=j\\ 0\;,\quad\quad\;\,\text{ if } i\ne j \end{cases}$

$A_{1,i}a_{1,j}+A_{2,i}a_{2,j} +\ldots+A_{n,i}a_{n,j}= \begin{cases} \det A\;,\quad\text{ if } i=j\\ 0\;,\quad\quad\;\,\text{ if } i\ne j \end{cases}$

By applying Laplace expansions, we get that

$A C = C A = I$.

Since by hypothesis $\;A B =I\;,\;$ we get $\;C(A B)= C I\;,\;$ hence $\;(C A)B=C,\;$ therefore $\;I B = C\;$ and so $\;B=C$.

Consequently,

$B A = C A = I\;,$

so we have proved that

$B A = I$.
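This proof is constructive: $C=\frac{1}{\det A}\,\text{adj}(A)$ can be computed directly by cofactor expansion. Here is a sketch in exact rational arithmetic (the helper names and the test matrix are mine, purely illustrative):

```python
from fractions import Fraction

def minor(M, i, j):
    """Submatrix of M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(n))

def adjugate(M):
    """Transpose of the cofactor matrix: entry (j, i) is (-1)^(i+j) det M_{i,j}."""
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, i, j)) for i in range(n)] for j in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [1, 1, 1], [0, 1, 3]]
d = det(A)
C = [[Fraction(x, d) for x in row] for row in adjugate(A)]   # C = adj(A) / det(A)

I = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
assert mat_mul(A, C) == I and mat_mul(C, A) == I             # AC = CA = I
```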

Angelo