
With an $N \times 1$ vector $\boldsymbol{v}$, the matrix $\boldsymbol{R} = \boldsymbol{v} \, \boldsymbol{v}^\mathrm{H}$ has rank 1, so it has only one nonzero eigenvalue, $\lambda = \mathrm{Tr}(\boldsymbol{v} \,\boldsymbol{v}^\mathrm{H}) = \|\boldsymbol{v}\|^2$. See e.g. eigenvalues and eigenvectors of $vv^T$.
For the nonzero eigenvalue $\lambda$, the corresponding eigenvector can be found through: $$(\boldsymbol{v} \, \boldsymbol{v}^\mathrm{H} - \lambda \boldsymbol{I}) \boldsymbol{x} = \boldsymbol{0} \tag{1}.$$ For the $N-1$ zero eigenvalues, the corresponding eigenvectors can be found through: $$\boldsymbol{v} \, \boldsymbol{v}^\mathrm{H} \boldsymbol{x} = \boldsymbol{0} \tag{2}.$$
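
As a sanity check, here is a minimal numpy sketch of this eigenstructure (the complex $\boldsymbol{v}$ below is synthetic, since in practice I only have $\boldsymbol{R}$):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # synthetic complex v
R = np.outer(v, v.conj())                                 # R = v v^H, rank 1

# R is Hermitian, so eigh applies; eigenvalues come back in ascending order
eigvals, eigvecs = np.linalg.eigh(R)
print(eigvals)                 # N-1 values ~ 0, one value ~ ||v||^2
print(np.linalg.norm(v) ** 2)  # matches the single nonzero eigenvalue
```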

I have two questions:
#1 Can we develop equations (1) and (2) further? I don't know the entries of $\boldsymbol{v}$; I only have the matrix $\boldsymbol{R}$.
#2 In a research article (https://ieeexplore.ieee.org/document/8282171), they have such a matrix and they say:

Since the rank of $\boldsymbol{R}$ is 1, the vector $\boldsymbol{v}$ and the eigenvectors of $\boldsymbol{R}$ satisfy $$\boldsymbol{x}_g^\mathrm{H} \boldsymbol{v} = 0, \tag{3}$$ where $\boldsymbol{x}_g$ is the $g$th eigenvector corresponding to the $g$th eigenvalue ($g = 2, \ldots, N$). Note that the eigenvalues of $\boldsymbol{R}$ are arranged in descending order. Therefore, an estimate of the vector, $\hat{\boldsymbol{v}}$, is given by $$\hat{\boldsymbol{v}} = \alpha \, \boldsymbol{x}_1, \tag{4}$$ where $\boldsymbol{x}_1 = [x_{1,1} \;\; x_{1,2} \;\; \cdots \;\; x_{1,N}]^\mathrm{T}$ is the eigenvector corresponding to the largest eigenvalue of $\boldsymbol{R}$ and $\alpha = 1 / x_{1,1}$.

I don't understand how to arrive at equations (3) and (4). Any clue?

Note: In the paper, they used $\lambda$ instead of $\alpha$. I changed it to avoid confusion because I don't think it represents the nonzero eigenvalue, but I may be wrong.
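
For what it's worth, equations (3) and (4) do hold numerically. Here is a small sketch with a synthetic $\boldsymbol{v}$; note that $\hat{\boldsymbol{v}} = \boldsymbol{x}_1 / x_{1,1}$ equals $\boldsymbol{v} / v_1$, so it coincides with $\boldsymbol{v}$ itself only if the paper normalizes $v_1 = 1$ (my assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # synthetic complex v
R = np.outer(v, v.conj())

eigvals, X = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]      # descending order, as in the paper
X = X[:, order]

# Eq. (3): x_g^H v ~ 0 for g = 2, ..., N
print(np.abs(X[:, 1:].conj().T @ v))

# Eq. (4): alpha = 1 / x_{1,1}; the estimate equals v / v_1
v_hat = X[:, 0] / X[0, 0]
print(np.allclose(v_hat, v / v[0]))    # True
```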

James
  • If you don't know $v$, you can always find it through the straightforward recipe for finding eigenvectors: solve $\det[R-\lambda I]=0$ for $\lambda$, and solve $(R-\lambda I)v=0$ (your eq. (1)) for $v$. Equation (3) means that the remaining eigenvectors of $R$ are orthogonal to $v$, which is proved in the accepted answer of the very same post you link. For this specific case, note that the first column of $R$ is $R_1=\bar{v}_1 v$, and in particular that $R_{11}=|v_1|^2$, so I don't understand why they don't immediately recover $v$ (up to a phase) by constructing $v=\frac{1}{\sqrt{R_{11}}}R_1$; see the sketch after these comments. – G Frazao Oct 11 '22 at 15:46
  • To find a $v$ such that $R = vv^H$, it suffices to take $v = cw$ where $w$ is any non-zero column of $R$ and $c > 0$ is chosen so that $c^2 ww^H = R$. – Ben Grossmann Oct 11 '22 at 17:51
  • @G Frazao, thanks for pointing that out; now I understand Eq. (3). In fact, I misinterpreted the problem: $\boldsymbol{v}$ is available, but it contains both signal and noise, and the signal part is estimated using the eigenvectors of the covariance matrix. – James Oct 17 '22 at 15:03
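
Following up on the two comments above, a minimal sketch of the column-based recipe (synthetic $\boldsymbol{v}$ again; note that $\boldsymbol{R}$ only determines $\boldsymbol{v}$ up to a unit-modulus phase factor, so that is all this recovers):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # synthetic complex v
R = np.outer(v, v.conj())

# First column: R[:, 0] = conj(v_1) * v, and R[0, 0] = |v_1|^2, so dividing
# by sqrt(R_{11}) recovers v up to the phase of conj(v_1)
w = R[:, 0] / np.sqrt(R[0, 0].real)
print(np.allclose(np.outer(w, w.conj()), R))  # True: w w^H = R
```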
