
I am wondering how to prove that a complex-valued random matrix $\mathbf{A} \in \mathbb{C}^{M \times N}$ has full rank, i.e., $\textrm{rank}(\mathbf{A}) = \min\left\{M,N\right\}$, with probability $1$, where the entries $a_{m,n} = [\mathbf{A}]_{m,n}$ are independent complex Gaussian random variables, i.e., $a_{m,n} \sim \mathcal{CN}\left(\mu, \sigma^2\right)$.

Thank you very much in advance

user550103

2 Answers


Assume that $M \leq N$, so that $\mathbf{A}$ consists of $M$ rows, each a vector in $\mathbb{C}^N$.

Fact 1: Let $M$ and $N$ be two positive integers with $M \leq N$. Next, for some $j \leq M$, let $v_1,\ldots, v_{j-1}$ be arbitrary vectors in $\mathbb{C}^N$. Now let $v_{j}$ be a vector picked from $\mathbb{C}^N$, independently of $v_1,\ldots, v_{j-1}$, according to some continuous distribution (i.e., a distribution under which the probability that $v_{j}$ falls into any subset of $\mathbb{C}^N$ of measure $0$ is $0$; such subsets include every subspace of $\mathbb{C}^N$ of dimension $M-1$ or less, since $M - 1 < N$). Then the probability $P_{j}$ that $v_{j}$ is linearly dependent on $v_1,\ldots, v_{j-1}$ is $0$.

Check that you understand Fact 1 and can verify it for yourself.
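Fact 1 can also be illustrated numerically. Below is a minimal sketch (assuming NumPy; this is an empirical check with floating-point rank tolerance, not a proof): rows are drawn one at a time, and each new row should be linearly independent of the previous ones, so the rank grows by exactly one at each step.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 6  # M <= N, rows drawn from C^N

# Draw M complex Gaussian rows one at a time and check that each new
# row increases the rank of the stacked matrix by exactly one.
rows = []
for j in range(1, M + 1):
    v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    rows.append(v)
    rank = np.linalg.matrix_rank(np.vstack(rows))
    assert rank == j, f"row {j} was linearly dependent"

print("final rank:", np.linalg.matrix_rank(np.vstack(rows)))
```

With probability $1$ the assertion never fires, in agreement with Fact 1.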

So build ${\bf{A}}$ by picking the first row $v_1$, then the second row $v_2$, and so on: for each $j=2, \ldots, M$, pick the $j$-th row $v_j$ of ${\bf{A}}$. If each coordinate of each such $v_j$ is picked according to the Gaussian distribution, independently of the other coordinates and independently of $v_1,\ldots, v_{j-1}$, then $v_j$ is indeed picked according to a continuous distribution on $\mathbb{C}^N$. So by Fact 1 and the union bound, the probability that ${\bf{A}}$ has full row rank (for the case where $M \leq N$) is at least $1 - \sum_{j=1}^M P_j$, where $P_j$ is as in Fact 1. But each $P_j$ is $0$. So, for the case where $M \leq N$, the probability that ${\bf{A}}$ has full row rank is $1$, which is to say the probability that ${\bf{A}}$ does not have full row rank is $0$.

Showing that ${\bf{A}}$ has full column rank for the case where $M \geq N$ can be handled analogously.
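The whole argument can be checked empirically for both the wide and tall cases. A sketch (assuming NumPy; the helper name `random_complex_gaussian` is illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_complex_gaussian(M, N, mu=0.0, sigma=1.0):
    # Entries i.i.d. CN(mu, sigma^2): independent real and imaginary
    # parts, each with variance sigma^2 / 2 (mu taken real here).
    re = rng.normal(mu, sigma / np.sqrt(2), size=(M, N))
    im = rng.normal(0.0, sigma / np.sqrt(2), size=(M, N))
    return re + 1j * im

# Wide (M <= N, full row rank), tall (M >= N, full column rank), and
# square cases: every draw should have rank min(M, N).
for M, N in [(3, 5), (5, 3), (4, 4)]:
    A = random_complex_gaussian(M, N)
    assert np.linalg.matrix_rank(A) == min(M, N)

print("all draws had full rank")
```

The numerical rank test uses a tolerance, so this is evidence rather than proof, but a rank-deficient draw is a probability-zero event.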

Mike
  • Thank you very much for your enlightenment. – user550103 May 01 '18 at 16:40
  • Glad I could help! If you still aren't sure of Fact 1, then here is another way of looking at it: Let $y_{j-1}$ be a vector orthogonal to $v_1,\ldots, v_{j-1}$ and let $p$ be an integer s.t. the value $(y_{j-1})_p$ of the $p$-th coordinate of $y_{j-1}$ is nonzero. Then $y^T_{j-1}v_j$ must be $0$ for $v_j$ to be linearly dependent on $v_1,\ldots, v_{j-1}$. ... – Mike May 01 '18 at 16:52
  • .... To show that the probability $P_j$ of this is $0$, pick every coordinate of $v_j$ other than the $p$-th coordinate $(v_j)_p$ first. Then pick $(v_{j})_p$. Then, as $(y_{j-1})_p$ is nonzero, the only way $y^T_{j-1}v_j$ can be $0$ (given the other coordinates of $v_{j}$) is if the $p$-th coordinate $(v_{j})_p$ of $v_j$ equals exactly one value $c \in \mathbb{C}$ determined by those other coordinates. This event has probability $0$, so $P_{j} = 0$. Then finish as above – Mike May 01 '18 at 16:59
  • Thanks, I was just thinking along these lines that one can probably show that the correlation $\mathbb{E}\left[ \mathbf{v}_i^* \mathbf{v}_j\right] = 0$ for $i \neq j$. So, does that make sense? – user550103 May 01 '18 at 17:11
  • But that is in expected sense!? but not necessarily in probability $1$ sense!? or I am confusing myself. – user550103 May 01 '18 at 17:14
  • That isn't the argument though. Don't think expected value or correlation. The coordinate $(v_j)_p$ is picked according to a continuous distribution on $\mathbb{C}$ [i.e., $(v_j)_p$ is picked according to a continuous distribution no matter the values of the other coordinates], so the probability that $(v_j)_p$ lands on any one point $c$ in $\mathbb{C}$ [no matter what the value of $c$ is] is $0$. – Mike May 01 '18 at 17:16
  • Now, I must admit that I am totally off-track. Sorry, let me clear up the baseline first. When you say "coordinate", do you mean an element of the vector, say $v_j[p]$? – user550103 May 01 '18 at 17:55
  • By $(v_j)_p$ I mean the $p$-th entry in the vector $v_j$, where $p$ is an integer in $\{1,\ldots, N\}$, as we are choosing $v_j$ from $\mathbb{C}^N$ – Mike May 01 '18 at 18:50
  • Thanks. Then the main essence of the argument is that the probability that, say, the row vector $v_j$ is linearly dependent on the other vectors is $0$. Thus, the random matrix (where each element is drawn from a continuous distribution) will essentially have full rank. Correct? – user550103 May 01 '18 at 19:33
  • Correct. (7 more characters) – Mike May 01 '18 at 19:34
  • Thank you so much. – user550103 May 01 '18 at 19:38

Let $\mathcal M$ be the set of all $r \times r$ submatrices of $A$, where $r=\min(M,N)$. Matrix $A$ is full-rank if and only if $\det(B) \ne 0$ for at least one submatrix $B\in \mathcal M$.

Consider $P = \sum_{B\in \mathcal M} |\det(B)|^2$. Viewed as a function of the real and imaginary parts of the entries of $A$, this is a polynomial. This polynomial is not identically zero: for instance, it is positive whenever some $r \times r$ submatrix of $A$ equals the identity.

Since the zero set of a nonzero polynomial has Lebesgue measure $0$, $A$ is full-rank except on a set of Lebesgue measure $0$ (with respect to the real and imaginary parts of the entries of $A$). Thus if the distribution of $A$ is absolutely continuous with respect to the Lebesgue measure, $\mathbb P(A \text{ is not full rank}) = \mathbb P(P=0) = 0$.
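For small matrices the quantity $P$ can be evaluated directly. A sketch (assuming NumPy; the variable names are illustrative, not from the answer):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
M, N = 3, 5
r = min(M, N)

# A draw with i.i.d. complex Gaussian entries.
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# P = sum of |det(B)|^2 over all r-by-r submatrices B of A.
# P > 0 iff some submatrix is nonsingular, i.e. iff A has full rank.
P = sum(
    abs(np.linalg.det(A[np.ix_(ri, ci)])) ** 2
    for ri in combinations(range(M), r)
    for ci in combinations(range(N), r)
)
assert P > 0  # a Gaussian draw is full rank with probability 1
print("P =", P)
```

Here `np.ix_` selects the row and column index sets of each submatrix; for a Gaussian draw, $P > 0$ holds with probability $1$.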

jlewk
  • Why do you have the square in $\det(B)^2$? –  Apr 01 '20 at 11:46
  • If a sum of squares is $0$, then each term in the sum must be zero (i.e., all submatrices $B$ will have $\det(B)=0$). Without the square, the sum could be zero with some terms being nonzero. – jlewk Apr 02 '20 at 16:38