
The question asked us to prove the path-connectedness of $GL(n,\mathbb{C})$, and while there are solutions on the internet, I wanted to visualize what a path between two matrices looks like geometrically, while getting a "feel" for such a path. This is what I've come up with:

For an order-$n$ complex matrix, we fix $n^2 - 1$ coordinates and move the remaining coordinate to the corresponding value in $I_n$. For example, if the matrix given is $$\begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix},$$ then the first step will be to find a path to the matrix $$\begin{pmatrix} 1 & a_{12}\\ a_{21} & a_{22} \end{pmatrix}.$$ This path lies inside $GL(n,\mathbb{C})$: the only problematic point is $a_{11}=\frac{a_{12} a_{21}}{a_{22}}$, which can be circumvented thanks to the path-connectedness of the punctured complex plane.

We continue the process until we eventually arrive at $I_n$ after at most $n^2$ steps. Concatenating such a path with the reverse of the analogous path for any other invertible matrix then connects any two matrices in $GL(n,\mathbb{C})$.
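To get a concrete "feel" for this, here is a small numerical sketch (using numpy; the helper name and the shape of the detour are just ad hoc choices of mine) of the $2\times 2$ case of the idea. It assumes the generic situation $a_{22} \ne 0$ and an invertible target matrix:

```python
import numpy as np

def move_first_entry(A, target=1.0, steps=200):
    """Move A[0,0] to `target`, keeping det != 0 along the way (generic 2x2 case).

    With the other three entries fixed, det([[z, a12], [a21, a22]]) = z*a22 - a12*a21
    vanishes at the single point z0 = a12*a21/a22 (assuming a22 != 0), which we dodge
    with a small detour if it happens to lie on the straight segment.
    """
    a12, a21, a22 = A[0, 1], A[1, 0], A[1, 1]
    z_start, z_end = complex(A[0, 0]), complex(target)
    z0 = a12 * a21 / a22                       # the one problematic value of a11

    t = np.linspace(0.0, 1.0, steps)
    zs = (1 - t) * z_start + t * z_end         # straight segment in C
    s = (z0 - z_start) / (z_end - z_start)
    if abs(s.imag) < 1e-12 and 0 <= s.real <= 1:              # z0 lies on the segment
        zs = zs + 0.1j * np.sin(np.pi * t) * (z_end - z_start)  # bulge around it

    path = []
    for z in zs:
        B = A.astype(complex)
        B[0, 0] = z
        # stays in GL(2, C) -- note this needs the target matrix itself to be invertible
        assert abs(np.linalg.det(B)) > 1e-12
        path.append(B)
    return path

A = np.array([[3.0, 2.0], [6.0, 5.0]])         # invertible; the bad value is z0 = 2.4
print(move_first_entry(A)[-1])                 # ~ [[1, 2], [6, 5]], reached without det = 0
```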

I wanted to see if I am missing something here. Any feedback would be appreciated.

  • I'm not sure this is quite a full proof. You seem to have given a sketch of part of the proof in the case $n = 2$. Is it completely obvious how to turn this into a full proof of the general case? In general the determinant is a much more complicated polynomial - really what you're trying to prove is exactly "the set where the determinant is zero can be circumvented", so I think it requires some justification. (PS: be careful about matrices like $\begin{pmatrix}0&1\\1&1\end{pmatrix}$). – Izaak van Dongen Nov 22 '23 at 13:53

2 Answers


This is a nice idea, but it would require a lot of work to make it a proper proof. Even in the simplest case $n = 2$ one has to consider a number of special cases and sometimes "move" more than one entry simultaneously. See Izaak van Dongen's comment: it does not make sense to look for a path from the matrix $\begin{pmatrix}0 & 1 \\ 1 & 1 \end{pmatrix}$ to the matrix $\begin{pmatrix}1 & 1 \\ 1 & 1 \end{pmatrix}$.

Therefore I doubt that your approach, if done properly, will give a transparent proof.

However, your idea works nicely for any upper triangular matrix $X \in GL(n,\mathbb C)$. The determinant of $X$ is the product of its diagonal entries $x_{jj}$, which must therefore all be non-zero. For $j \ne k$ let $\alpha_{jk}$ be the linear path in $\mathbb C$ from $x_{jk}$ to $0$ (i.e. $\alpha_{jk}(t) = (1-t)x_{jk}$). For $j = k$ we can find paths $\alpha_{jj}$ in $\mathbb C \setminus \{0\}$ from $x_{jj}$ to $1$: if $x_{jj} \notin (-\infty, 0)$, simply take the linear path; otherwise take $\alpha_{jj}(t) = (1-2t)x_{jj} + 2ti$ for $t \le 1/2$ and $\alpha_{jj}(t) = (2-2t)i + 2t - 1$ for $t \ge 1/2$. Then $$\alpha(t) = (\alpha_{jk}(t))$$ is a path in $GL(n,\mathbb C)$ connecting $X$ and the unit matrix $I$. Indeed, all $\alpha(t)$ are upper triangular matrices with non-zero diagonal entries, thus $\det \alpha(t) \ne 0$ for all $t$.
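As a purely numerical sanity check (using numpy; the function names are ad hoc, not part of the argument), here is the construction in code: it evaluates the entrywise paths $\alpha_{jk}$ above and verifies that the determinant never vanishes along the way.

```python
import numpy as np

def alpha(x, j, k, t):
    """Entrywise path alpha_{jk}(t) described above, evaluated at time t."""
    if j != k:                                  # off-diagonal: straight line to 0
        return (1 - t) * x
    if not (x.real < 0 and x.imag == 0):        # diagonal, not a negative real
        return (1 - t) * x + t * 1.0            # straight line to 1 avoids 0 (x != 0 since X is invertible)
    if t <= 0.5:                                # diagonal entry on the negative real axis: go via i
        return (1 - 2 * t) * x + 2 * t * 1j
    return (2 - 2 * t) * 1j + (2 * t - 1)

def path_to_identity(X, steps=100):
    """Path t -> (alpha_{jk}(t)) from an upper triangular X to the identity."""
    n = X.shape[0]
    for t in np.linspace(0.0, 1.0, steps):
        A = np.array([[alpha(X[j, k], j, k, t) for k in range(n)]
                      for j in range(n)])
        assert abs(np.linalg.det(A)) > 1e-12    # stays in GL(n, C)
        yield A

X = np.array([[-2.0 + 0j, 5.0, 1.0],
              [0.0, 3.0 - 1j, 7.0],
              [0.0, 0.0, 0.5j]])
*_, last = path_to_identity(X)
print(np.round(last, 10))                       # the identity matrix
```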

Thus it remains to show that each $A \in GL(n,\mathbb C)$ admits a path in $GL(n,\mathbb C)$ which starts at $A$ and ends at some upper triangular matrix $X \in GL(n,\mathbb C)$.

To do this, I suggest modifying the approach given in Path from an invertible matrix $A$ to a diagonal matrix $E=(e_{ij})$ with $e_{11}=\mbox{sgn}(\det (A))$ for $GL(n,\mathbb R)$.

Let $r_1,\dots,r_n$ denote the rows of $A \in GL(n,\mathbb C)$. We introduce the following transformation:

(1) For $i \ne j$ and $m \in\mathbb{C}$ replace $r_i$ by $r_i + m\cdot r_j$, written as $A \mapsto A(i,j;m)$.

Three applications of (1) give us the transformation

(2) For $i \ne j$ simultaneously replace $r_i$ by $r_j$ and $r_j$ by $-r_i$ (which exchanges two rows up to a sign): $$ A \mapsto A' = A(i,j;1) \mapsto A'' = A'(j,i;-1) \mapsto A''' = A''(i,j;1) .$$ To see what $A'''$ is, note that all three transformations modify only rows $i$ and $j$. We have $$(r_i, r_j) \mapsto (r_i + r_j, r_j) \mapsto (r_i + r_j, -r_i) \mapsto (r_j, -r_i) .$$

Since the determinant is multilinear as a function of the $n$ rows of a matrix and vanishes whenever two rows coincide, we have $$\det A(i,j;m) = \det A + m \cdot \det\big(A \text{ with } r_i \text{ replaced by } r_j\big) = \det A \text{ for all } m \in \mathbb C .$$ Thus transformation (1) does not change the determinant, and therefore neither does transformation (2).

Moreover, transformation (1) can be realized by a path in $GL(n,\mathbb{C})$. Simply take $$u(t) = (1-t)A + tA(i,j;m) = A(i,j;tm) .$$ Then $u(0) = A$, $u(1) = A(i,j;m)$ and $\det(u(t)) = \det(A) \ne 0$ for all $t$.

The same is therefore true for transformation (2).
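Here is a small numerical illustration of the last two paragraphs (numpy, with ad hoc helper names): the path realizing (1) is $u(t) = A(i,j;tm)$, and concatenating three such paths realizes (2). The example is precisely the matrix $\begin{pmatrix}0&1\\1&1\end{pmatrix}$ from the comment above.

```python
import numpy as np

def row_op(A, i, j, m):
    """Transformation (1): A(i, j; m), i.e. replace row i by r_i + m * r_j."""
    B = A.astype(complex)
    B[i] += m * B[j]
    return B

def path_of_row_op(A, i, j, m, steps=50):
    """The path u(t) = (1-t) A + t A(i, j; m) = A(i, j; t m)."""
    return [(1 - t) * A + t * row_op(A, i, j, m)
            for t in np.linspace(0.0, 1.0, steps)]

A = np.array([[0.0, 1.0], [1.0, 1.0]], dtype=complex)

# Transformation (2): swap rows 0 and 1 up to a sign, as three steps of (1).
A1 = row_op(A, 0, 1, 1)                         # (r0, r1) -> (r0 + r1, r1)
A2 = row_op(A1, 1, 0, -1)                       # -> (r0 + r1, -r0)
path = (path_of_row_op(A, 0, 1, 1)
        + path_of_row_op(A1, 1, 0, -1)
        + path_of_row_op(A2, 0, 1, 1))          # -> (r1, -r0)

dets = [np.linalg.det(B) for B in path]
print(np.round(path[-1], 10))                   # [[1, 1], [0, -1]]: rows swapped up to sign
print(np.allclose(dets, np.linalg.det(A)))      # determinant constant along the path: True
```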

It is well known that Gaussian elimination using a sequence of transformations (1) and (2) transforms each matrix $A$ into a matrix $E$ in row echelon form. Concatenating the paths realizing the individual transformations gives a path in $GL(n,\mathbb{C})$ from $A$ to $E$.

Since all square matrices in row echelon form are upper triangular matrices, we are done.
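For concreteness, here is a short numpy sketch of this elimination step (ad hoc helper name; a numerical illustration rather than part of the proof): it reduces an invertible $A$ to upper triangular form using only transformations (1) and (2).

```python
import numpy as np

def to_upper_triangular(A):
    """Reduce A to upper triangular form using only transformations (1) and (2).

    Each step is either A(i, j; m) or the signed row swap, both of which are
    realizable by paths in GL(n, C) as shown above.
    """
    B = A.astype(complex)
    n = B.shape[0]
    for col in range(n):
        if abs(B[col, col]) < 1e-12:                    # need a pivot: use (2)
            # A is invertible, so some entry below the diagonal in this column is non-zero
            pivot = col + np.argmax(abs(B[col:, col]))
            B[[col, pivot]] = B[[pivot, col]]
            B[pivot] *= -1                              # swap the two rows up to a sign
        for row in range(col + 1, n):                   # clear below the pivot: use (1)
            B[row] -= (B[row, col] / B[col, col]) * B[col]
    return B

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [3.0, 0.0, 1j]])
U = to_upper_triangular(A)
print(np.round(U, 10))                                  # upper triangular
print(np.isclose(np.linalg.det(U), np.linalg.det(A)))   # determinant unchanged: True
```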

Paul Frost
  • Nice! An alternative way to conclude once you know "there is a path between any two upper-triangular matrices" is to write $A = PBP^{-1}$ for an invertible upper-triangular $B$ with no assertions yet about a path from $A$ to $B$ (e.g. by "triangularisability of complex matrices" or "existence of JNFs"). Then a path from $B$ to the identity $I$ can be conjugated to give a path from $A$ to $PIP^{-1} = I$. (I'm also fond of a similar argument using diagonalisable matrices, appealing to density and local path-connectedness of open subsets of $\Bbb R^{n^2}$... for no particular good reason :)) – Izaak van Dongen Nov 23 '23 at 10:52
  • @IzaakvanDongen I think your comment would be worth "upgrading" to an additional answer. – Paul Frost Nov 23 '23 at 11:03
  • Thank you! I think this answer and Marc van Leeuwen's elaboration here already say pretty much everything I would say in an answer (except the diagonalisability part but that's really just strictly less elegant than the triangularisability argument), so I will probably leave it for now. – Izaak van Dongen Nov 23 '23 at 12:24

The idea of the argument is that if we change one entry at a time, we can always find a path that doesn't go through a non-invertible matrix: as a function of the one entry we are changing, $\det(A)$ is a polynomial (of degree at most $1$, by cofactor expansion along that entry) which is not identically zero, since it is non-zero at the starting value, and hence has only finitely many zeros. Since $\mathbb C$ minus a finite number of points is path connected, such a path exists.

The problem is that this requires the starting matrix and the ending matrix (where one entry gets changed to a 0 or 1 to fit the identity matrix) both to be invertible. If the ending matrix isn't invertible, you can't use it as a part of the eventual path to the identity matrix.

However, there are two ways to repair this that I can think of. The first is to factor your matrix into a product of matrices for each of which you can find a path, then exploit the fact that matrix multiplication is continuous, so a path for each factor leads to a path for the product. For example, if you had a factorization into elementary matrices (coming from row reduction), it would suffice to be able to find paths for every elementary matrix. Or if you had an LU factorization (lower triangular times upper triangular), then it suffices to note that your idea won't run into problems for triangular matrices.
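Here is a tiny numpy illustration of the "path for each factor gives a path for the product" point (the example factors and the linear paths are ad hoc choices; linear interpolation works here because the diagonal entries never cross $0$), using $\det(B(t)C(t)) = \det B(t)\det C(t) \ne 0$:

```python
import numpy as np

def product_path(path_B, path_C):
    """Pointwise product of two matrix paths: it still avoids det = 0,
    because det(B(t) C(t)) = det(B(t)) * det(C(t))."""
    return [B @ C for B, C in zip(path_B, path_C)]

t = np.linspace(0.0, 1.0, 100)
L0 = np.array([[1.0, 0.0], [2.0, 3.0]])      # lower triangular factor
U0 = np.array([[2.0, 5.0], [0.0, 1.0]])      # upper triangular factor
path_L = [(1 - s) * L0 + s * np.eye(2) for s in t]   # linear paths to I; the diagonals
path_U = [(1 - s) * U0 + s * np.eye(2) for s in t]   # stay positive, so det never vanishes

path = product_path(path_L, path_U)          # a path from L0 @ U0 to the identity
print(all(abs(np.linalg.det(M)) > 1e-12 for M in path))   # True
```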

The other way to repair it is to let yourself get close to, but not actually touch, problem points along your path. Suppose that changing $a_{11}$ to $1$ gave a non-invertible matrix. Then instead, find a path to $1+\epsilon$ for some very small $\epsilon$. After $n^2$ steps, if you aren't actually at $I_n$, then no entry is more than $\epsilon$ away. Now, the fact that $\det(A)$ is a polynomial function in the entries of $A$ shows that the determinant function is continuous, so there is some $\epsilon>0$ such that if none of the entries of $A$ are more than $\epsilon$ away from the corresponding entry of $I_n$, then $\det(A)$ is no more than $1/2$ away from $1$ in the complex plane. From this point, your method is guaranteed to work.
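Here is a quick numerical spot-check of that continuity claim (numpy, with $n$ and $\epsilon$ chosen arbitrarily); it is only an illustration, not a substitute for the continuity argument:

```python
import numpy as np

n, eps = 3, 0.05
rng = np.random.default_rng(0)

# sample matrices whose entries are within eps of the corresponding entries of I_n
worst = 0.0
for _ in range(10000):
    noise = eps * (rng.uniform(-1, 1, (n, n)) + 1j * rng.uniform(-1, 1, (n, n)))
    A = np.eye(n) + noise
    worst = max(worst, abs(np.linalg.det(A) - 1))
print(worst)   # well below 1/2 for this eps, so det stays away from 0 near I_n
```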

Aaron