4

Is there an elementary explanation of why the row-rank of a matrix equals its column-rank (without using adjoint maps or lots of technical computations)? What is the geometric intuition behind this?

Peter Franek
  • 11,522
  • 2
    This question has been asked before here: http://math.stackexchange.com/questions/2315/is-the-rank-of-a-matrix-the-same-of-its-transpose-if-yes-how-can-i-prove-it – Mr.Fry May 17 '14 at 05:46
  • It's not the same question. Of course, I've seen many proofs, but I wonder why all of them are so technical and whether there is some kind of natural geometric duality hidden in that. Anyway, thanks for the link. – Peter Franek May 17 '14 at 05:54
  • Try and do a few examples of fairly small matrices, or row reduce using some CAS such as Mathematica, and see if you can convince yourself why. If you do not prefer such technical proofs, see if you can come up with something more pleasing; now that's math. If you do, share :). – Mr.Fry May 17 '14 at 05:57
  • Do you know the so-called Rank theorem, which isn't too technical? – user139388 May 17 '14 at 06:03
  • 1
    It seems to me that you don't understand me, but maybe this wasn't the right place to ask and the question is too philosophical :) I've taught linear algebra for many years; it seems to me that the statement is too simple and I somehow don't like most of the standard proofs. – Peter Franek May 17 '14 at 06:07
  • I agree, that was not clear from the question and only obliquely discernible from your first comment. – Eric Stucky May 17 '14 at 07:31

4 Answers

4

Since you seem to want a "geometrical" proof: the transpose of a matrix $A \in M_{m,n}(\mathbb{R})$ corresponds to the adjoint of a morphism $u : \mathbb{R}^n \to \mathbb{R}^m$ for the standard inner products on $\mathbb{R}^n$ and $\mathbb{R}^m$. In other words: $$\langle Ax, y \rangle = \langle x, A^T y \rangle$$

Adjoint morphisms have various properties; for example, $\operatorname{ker}(A^T) = \operatorname{im}(A)^\perp$ is supplementary to $\operatorname{im}(A)$ in $\mathbb{R}^m$. Applying the rank-nullity theorem to $A^T$ then yields $\operatorname{rk}(A) = \operatorname{rk}(A^T)$.
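
To spell out the dimension count (a short verification using only the identity above): for $y \in \mathbb{R}^m$, $$A^T y = 0 \iff \langle x, A^T y \rangle = 0 \text{ for all } x \in \mathbb{R}^n \iff \langle Ax, y \rangle = 0 \text{ for all } x \in \mathbb{R}^n \iff y \in \operatorname{im}(A)^\perp,$$ and rank-nullity applied to $A^T : \mathbb{R}^m \to \mathbb{R}^n$ gives $$\operatorname{rk}(A^T) = m - \dim \operatorname{ker}(A^T) = m - \dim \operatorname{im}(A)^\perp = m - (m - \operatorname{rk}(A)) = \operatorname{rk}(A).$$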

Najib Idrissi
  • 54,185
  • 2
    Note that this argument requires a positive definite inner product (for the "supplementary" property), so it only works in characteristic$~0$. But all that matters is the codimension of the image of $A$, so this limitation could be avoided by a slightly different formulation. – Marc van Leeuwen May 18 '14 at 08:11
  • @MarcvanLeeuwen is it true if entries of matrix are neither real nor complex? – Meet Patel May 01 '23 at 11:26
  • @MeetPatel Yes, see my answer, which does not make any assumption about the base field. I don't think I have even used anything that relies on its commutativity, so the argument should basically work over (non-commutative) division rings as well. – Marc van Leeuwen May 02 '23 at 08:34
3

I don't know if you consider working with vectors and linear forms too technical, but some form of duality seems inevitable if you want a geometric interpretation of the transpose.

Given an $n\times m$ matrix$~M$, let it operate on column vectors of size $m$ by left multiplication as usual; the rank is the dimension of the image of this linear map$~f$ (this is the column rank, since images are linear combinations of the columns of$~M$). If instead one takes the operation$~g$ of right multiplication by$~M$ on row vectors of size$~n$, then the matrix of$~g$ is the transpose $M^\top$, and we must argue that the dimension of the image of this map (which is the row rank of$~M$, as images by$~g$ are linear combinations of the rows of$~M$) is the same as the column rank of$~M$.

If $r$ is the dimension of the image space$~W\subseteq K^n$ of$~f$, then we can find $r$ vectors $b_1,\ldots,b_r\in K^m$ such that $f(b_1),\ldots,f(b_r)$ form a basis of $W$. The coordinate functions for this basis form $r$ linearly independent linear forms $W\to K$, which can be extended to linear forms $\alpha_1,\ldots,\alpha_r$ defined on$~K^n$ (row vectors). Then by construction $\alpha_i(f(b_j))=\delta_{i,j}$ for $i,j\in\{1,\ldots,r\}$; moreover $r$ is the largest value for which this can be achieved (for any choice of the $\alpha_i$ and $b_j$). The linear forms $x\mapsto\alpha_i(f(x))$ on $K^m$ are the images$~g(\alpha_i)$, so we have also found column vectors $b_1,\ldots,b_r$ whose evaluations, as functions on the image space of$~g$, form a maximal linearly independent set of linear forms on that space; therefore $r$ is also the rank of$~g$.

In algebraic terms, if $A$ is the matrix with rows $\alpha_1,\ldots,\alpha_r$, and $B$ is the matrix with columns $b_1,\ldots,b_r$, then $AMB=I_r$ and this is the largest identity matrix that can be so obtained. Transposing the equation we see that the maximum size is the same as for $M^\top$.
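
For instance, take $K=\mathbb{R}$, $n=2$, $m=3$ and $M=\begin{pmatrix}1&0&1\\0&1&1\end{pmatrix}$, so $r=2$. Choosing $b_1=e_1$, $b_2=e_2$ (column vectors of size $3$) and $\alpha_1=(1\;0)$, $\alpha_2=(0\;1)$ (row vectors of size $2$) gives $A=I_2$ and $B=\begin{pmatrix}1&0\\0&1\\0&0\end{pmatrix}$, and indeed $$AMB=I_2, \qquad B^\top M^\top A^\top=I_2,$$ exhibiting rank$~2$ for $M^\top$ as well.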

2

If you row reduce a matrix $A$ to RREF, the number of pivots (leading ones) is the rank. On the other hand, the rank theorem tells you that the column vectors of the original matrix corresponding to those pivots form a basis of the column space of the matrix. That is, the number of linearly independent columns is equal to the number of pivots.

But the columns of $A$ are the rows of $A^\top$, so the number of linearly independent columns of $A$ is the number of pivots you get by bringing $A^\top$ to RREF, and hence the rank of $A^\top$.

So $\operatorname{rank}(A) = \operatorname{rank}(A^\top)$.
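
A quick example: for $A = \begin{pmatrix}1&2&1\\2&4&3\end{pmatrix}$ the RREF is $\begin{pmatrix}1&2&0\\0&0&1\end{pmatrix}$, with pivots in columns $1$ and $3$; so columns $1$ and $3$ of $A$ form a basis of the column space and $\operatorname{rank}(A)=2$. Bringing $A^\top = \begin{pmatrix}1&2\\2&4\\1&3\end{pmatrix}$ to RREF gives $\begin{pmatrix}1&0\\0&1\\0&0\end{pmatrix}$, again with two pivots, so $\operatorname{rank}(A^\top)=2$ as well.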

user139388
  • 3,449
1

The row-rank is equal to the dimension of the subspace spanned by the row vectors. If you apply Gauss elimination, you will see that the number of linearly independent vectors remains the same after transposition.
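
One way to make this precise: a row operation replaces a matrix $A$ by $EA$ with $E$ invertible, and for the columns $a_1,\ldots,a_k$ of $A$ one has $$\sum_i c_i a_i = 0 \iff \sum_i c_i\,(E a_i) = 0,$$ so the linear relations among the columns, and in particular the number of linearly independent columns, are unchanged; the number of linearly independent rows obviously does not change either.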

355durch113
  • 1,570
  • For this to make sense, you need to know that a row-operation doesn't change the dimension of the column-space. Is that completely elementary? Something like "after a row-operation, the column space is the same as before, just written in a different basis"? – Peter Franek May 18 '14 at 10:40
  • Would it be enough to show that the number of linearly independent vectors does not change after a row operation? Because that would be pretty simple. (But I'm a first-semester student, so don't put too much weight on what I say.) – 355durch113 May 18 '14 at 22:38
  • I think that what you used implicitly in your first answer is that a row operation (in Gauss el.) changes neither the number of independent rows (trivial) nor the number of independent columns (almost trivial). I just wanted to remark this. – Peter Franek May 19 '14 at 06:25