
Let $$S := \left\{ {\bf X} \in \Bbb R^{4 \times 4} \mid \operatorname{rank} ({\bf X}) \leq 2 \right\}$$ Is $S$ closed with respect to the usual Euclidean metric?


I think so. My reasoning is the following:

Let $({\bf A}_n)$ be a sequence in $S$ converging to a matrix $\bf A$. We observe that the row operations performed in order to get the row echelon form are continuous. Let ${\bf A}'_n$ denote the row echelon form of ${\bf A}_n$. Since the rank is preserved, there are at most two nonzero rows in ${\bf A}'_n$. Let ${\bf A}'_n = \phi({\bf A}_n)$ for some continuous function $\phi$. Since $({\bf A}_n)$ converges, ${\bf A}'_n$ must converge as well. Since ${\bf A}'_n$ converges, its corresponding entries also converge, so the limiting matrix will have at most $2$ nonzero rows, or equivalently, rank at most $2$. The limit matrix must be the row echelon form of ${\bf A}$, so ${\bf A}$ is in $S$.

Is there any problem with the above argument?

Eloon_Mask_P
  • How do you prove your claim that "the row operations performed in order to get the row echelon form are continuous"? –  May 23 '23 at 06:31
  • Could you also clarify exactly what metric you are referring to? I'm not sure there is something like a standard Euclidean metric on the set of matrices. – pauwelvde May 23 '23 at 07:00
  • Hint: all minors of size 3 vanish for a matrix with rank at most 2 – Didier May 23 '23 at 07:42
  • @StinkingBishop I'm not sure but my reasoning would be this: The steps transforming $A_n$ to $A_n'$ are all linear, so the entire transformation is linear and thus continuous since we are in a finite-dimensional space. But the linear map taking $A_n$ to $A_n'$ is possibly a different one for each $n$, so we can't assume that there is one continuous function $\phi$ that maps $A_n$ to $A_n'$. But then again it's probably trivial to construct a linear map between any two (non-zero) vector space elements, so I guess this argument is not of much value. – chrysante May 23 '23 at 07:58
  • @StinkingBishop Row operations use just addition, multiplication by scalars, and swapping of entries, which are easily seen to be continuous functions on $\mathbb{R}^{16}$. – Eloon_Mask_P May 23 '23 at 09:39
  • @pauwelvde You may think of the collection of $n \times n$ matrices as the Euclidean space $\mathbb{R}^{n^2}$ with the usual metric. – Eloon_Mask_P May 23 '23 at 09:41
  • @daw No, I want to check the validity of my argument. – Eloon_Mask_P May 23 '23 at 10:53
  • @3f183201 For proof-verification questions, please use the proper tag – Didier May 23 '23 at 13:09
  • @Didier You may see the original question; there was a solution-verification tag, but someone removed it later, I guess. – Eloon_Mask_P May 23 '23 at 14:01
  • @3f183201 Right, sorry about that! – Didier May 23 '23 at 14:24
  • @3f183201 Sorry for not getting back to you earlier. I see you've got some answers already. The bottom line is that you are not using just elementary operations, you also need to choose which operations to use, depending on whether a particular element in the matrix is nonzero. Imagine the situation where the top left corner converges to zero: $$(A_n)_{11}=\begin{cases}\frac{1}{n}&n\text{ even}\\ 0&\text{otherwise}\end{cases}$$ In even $A_n$ you multiply the $1$st row by $n$, in odd ones you may swap the first row with something, or do something entirely different... –  May 25 '23 at 22:50
  • (Cont'd) In effect I think you are making the same error as someone who, from knowing that $f(x)=0$ and $g(x)=1$ are continuous functions, concludes that $$h(x)=\begin{cases}f(x)&x=0\\ g(x)&x\ne 0\end{cases}$$ is also continuous, which obviously does not follow. –  May 25 '23 at 22:53

2 Answers

I do not think that your proof is correct. You claim that there is a "universal sequence of row operations" transforming all matrices into echelon form. But it seems to me that we need different sequences of row operations for different matrices. Only if you could prove that there actually exists a universal sequence of row operations would you get a continuous function $\phi$ assigning to each matrix $A$ a matrix $\phi(A)$ in echelon form. I doubt that, but maybe I am wrong.

So what can be done? I think Didier's comment is an essential ingredient. We know that a matrix $A$ has rank $\le 2$ iff all $3 \times 3$ submatrices $A^{i,j}$, obtained from $A$ by eliminating row $i$ and column $j$, have vanishing determinant. Now consider a sequence $(A_n)$ of matrices with rank $\le 2$ which converges to a matrix $A$. Consider all sequences $(A^{i,j}_n)$ of $3 \times 3$ submatrices. They converge to the submatrices $A^{i,j}$ of $A$. Since the determinant is continuous, all $A^{i,j}$ have vanishing determinant, which means that $A$ has rank $\le 2$.
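
In other words (a compact way to package the same argument): writing ${\det}^{i,j}(A) := \det\big(A^{i,j}\big)$, each ${\det}^{i,j}$ is a polynomial in the $16$ entries of $A$ and hence continuous, and
$$S=\bigcap_{i=1}^{4}\bigcap_{j=1}^{4}\left({\det}^{i,j}\right)^{-1}\big(\{0\}\big)$$
is a finite intersection of preimages of the closed set $\{0\}$ under continuous maps, hence closed.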

Paul Frost

I'm going to address only your posted question, regarding whether there is a problem with your argument.

Yes, there is a big problem.

It is true that any individual row operation is a continuous operation on the space of $4 \times 4$ matrices, i.e. a continuous function $\mathbb R^{16} \to \mathbb R^{16}$. This is true for more-or-less the reason you explained in the comments. The way I would say it is that an individual row operation is just left-multiplication by an individual elementary matrix $E$: $EA$ is the result of applying the given row operation to $A$, and when $E$ is fixed, $EA$ is a continuous function of $A$.
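
For instance, the row operation "add $c$ times row $1$ to row $2$" on a $4 \times 4$ matrix corresponds to
$$E=\begin{pmatrix}1&0&0&0\\ c&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix},$$
and every entry of $EA$ is a linear, hence continuous, function of the entries of $A$.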

And it is still true that if one fixes one particular sequence of row operations, then that fixed sequence represents a continuous function on the space of $4 \times 4$ matrices. By fixing the sequence of row operations, one is fixing a sequence of elementary matrices $E_1,...,E_k$ to multiply on the left by: $E_1 A$ is the result of applying the first row operation, $E_2 E_1 A$ is the result of applying the second row operation, and so on. Taking the product of that sequence of elementary matrices $M = E_k \cdots E_1$, one gets a fixed invertible matrix to multiply on the left by: $MA$ is the row echelon form of $A$, and $MA$ is a continuous function of $A$.

One way that this is summarized in linear algebra is by the "$LU$-factorization theorem": for every matrix $A$ there is a factorization $A=LU$ where $U$ is the echelon form of $A$, and $L$ is a square, invertible matrix, and $M=L^{-1}$ is the product of the sequence of elementary matrices that are used in the row reduction process.
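
As a small $2 \times 2$ illustration of this: for $A=\begin{pmatrix}2&1\\4&5\end{pmatrix}$, the single row operation $R_2 \mapsto R_2 - 2R_1$ corresponds to $E=\begin{pmatrix}1&0\\-2&1\end{pmatrix}$, giving
$$U=EA=\begin{pmatrix}2&1\\0&3\end{pmatrix},\qquad L=E^{-1}=\begin{pmatrix}1&0\\2&1\end{pmatrix},\qquad LU=\begin{pmatrix}1&0\\2&1\end{pmatrix}\begin{pmatrix}2&1\\0&3\end{pmatrix}=A.$$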

But here's the big problem. Suppose that I take two different matrices ${\bf A}_m$ and ${\bf A}_n$ in your sequence. There is no reason at all to expect their row operation sequences to be the same. In fact, it is certainly possible that every single matrix has a row operation sequence different from the others. One can express this in the language of the $LU$ factorization theorem: we do get factorizations ${\bf A}_n = L_n U_n$, and hence $M_n {\bf A}_n = U_n$, where $M_n = L_n^{-1}$ is the product of the elementary matrices used to get ${\bf A}_n$ into row echelon form. But it is possible that every single $M_n$ is different from the others.

Now, IF we knew that all of the $M_n$'s were the same then, perhaps, your argument could be made to work.

But that's a big IF. More likely, the $M_n$'s are not all the same, and your argument fails.
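
To make this concrete (in the spirit of the example in the comments above), consider the rank-$2$ matrices
$${\bf A}_n=\begin{pmatrix}\tfrac1n&1&0&0\\ 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}\longrightarrow{\bf A}=\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}.$$
Pivoting on the $(1,1)$ entry of ${\bf A}_n$ requires the operation $R_2 \mapsto R_2 - nR_1$, so $M_n$ contains the entry $-n$ and the sequence $(M_n)$ does not converge, whereas reducing the limit $\bf A$ begins with a row swap. So no single continuous $\phi$ is at work here (even though, in this particular example, $\bf A$ happens to lie in $S$ anyway).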

Lee Mosher