With an $N \times 1$ vector $\boldsymbol{v}$, the matrix $\boldsymbol{R} = \boldsymbol{v} \, \boldsymbol{v}^\mathrm{H}$ has rank 1; therefore there is only one nonzero eigenvalue, equal to $\lambda = \mathrm{Tr}(\boldsymbol{v} \,\boldsymbol{v}^\mathrm{H}) = \|\boldsymbol{v}\|^2$. See, e.g., the question on eigenvalues and eigenvectors of $vv^T$.
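(The reasoning, as I understand it: from the definition of $\boldsymbol{R}$,
$$\boldsymbol{R}\,\boldsymbol{v} = \boldsymbol{v}\,(\boldsymbol{v}^\mathrm{H}\boldsymbol{v}) = \|\boldsymbol{v}\|^2\,\boldsymbol{v},$$
so $\boldsymbol{v}$ itself is an eigenvector with eigenvalue $\|\boldsymbol{v}\|^2$, while any $\boldsymbol{x}$ orthogonal to $\boldsymbol{v}$ gives $\boldsymbol{R}\,\boldsymbol{x} = \boldsymbol{v}\,(\boldsymbol{v}^\mathrm{H}\boldsymbol{x}) = \boldsymbol{0}$.)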
For the nonzero eigenvalue $\lambda$, the corresponding eigenvector can be found through:
$$(\boldsymbol{v} \, \boldsymbol{v}^\mathrm{H} - \lambda \boldsymbol{I}) \boldsymbol{x} = \boldsymbol{0} \tag{1}.$$
For the $N-1$ zero eigenvalues, the corresponding eigenvectors can be found through:
$$\boldsymbol{v} \, \boldsymbol{v}^\mathrm{H} \boldsymbol{x} = \boldsymbol{0} \tag{2}.$$
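To make the setup concrete, here is a minimal numpy sketch (the vector $\boldsymbol{v}$ is a random example I made up, not from the paper) that builds $\boldsymbol{R}$ and checks the eigenstructure described above:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Made-up complex vector and its rank-1 matrix R = v v^H.
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
R = np.outer(v, v.conj())

# R is Hermitian, so eigh applies; it returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(R)

print(np.round(eigvals, 6))        # N-1 eigenvalues are (numerically) zero
print(np.linalg.norm(v) ** 2)      # the single nonzero eigenvalue equals ||v||^2

# Eigenvectors of the zero eigenvalues satisfy (2): they are orthogonal to v.
print(np.round(eigvecs[:, :N - 1].conj().T @ v, 6))
```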
I have two questions:
#1 Can equations (1) and (2) be developed further? I don't know the values of $\boldsymbol{v}$; I only have the matrix $\boldsymbol{R}$.
#2 In a research article (https://ieeexplore.ieee.org/document/8282171), the authors work with such a matrix and state:
Since the rank of $\boldsymbol{R}$ is 1, the vector $\boldsymbol{v}$ and the eigenvector of $\boldsymbol{R}$ satisfy $$\boldsymbol{x}_g^\mathrm{H} \boldsymbol{v} = \mathbf{0}, \tag{3}$$ where $\boldsymbol{x}_g$ is the $g$th eigenvector corresponding to the $g$th eigenvalue ($g = 2, ..., N$). Note that the eigenvalues of $\boldsymbol{R}$ are arranged in descending order. Therefore, an estimate of the vector, $\hat{\boldsymbol{v}}$, is given by $$\hat{\boldsymbol{v}} = \alpha \, \boldsymbol{x}_1, \tag{4}$$ where $\boldsymbol{x}_1 = [x_{1,1} \;\; x_{1,2} \;\; \cdots \;\; x_{1,N}]^\mathrm{T}$ is the eigenvector corresponding to the largest eigenvalue of $\boldsymbol{R}$ and $\alpha = 1 / x_{1,1}$.
I don't understand how to arrive at equations (3) and (4). Any clue?
Note: the paper uses $\lambda$ where I wrote $\alpha$. I changed it to avoid confusion, because I don't think it represents the nonzero eigenvalue, but I may be wrong.
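For what it is worth, this is how I read (3) and (4) numerically. I assume (this is my guess, not stated in the text I quoted) that the true $\boldsymbol{v}$ has first entry equal to $1$, which would explain the choice $\alpha = 1/x_{1,1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

# Made-up ground truth, scaled so that its first entry is 1 (my assumption).
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v = v / v[0]
R = np.outer(v, v.conj())

eigvals, eigvecs = np.linalg.eigh(R)   # ascending order, so the largest eigenvalue is last
x1 = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue

# Eq. (3): the eigenvectors of the zero eigenvalues are orthogonal to v.
print(np.round(eigvecs[:, :-1].conj().T @ v, 6))

# Eq. (4): rescale x1 so its first entry is 1; this should reproduce v exactly.
alpha = 1 / x1[0]
v_hat = alpha * x1
print(np.round(v_hat - v, 6))
```

If that reading is correct, (4) just says that $\boldsymbol{x}_1$ spans the same one-dimensional subspace as $\boldsymbol{v}$, and $\alpha$ removes the eigenvector's arbitrary scale and phase, but I would still like to see how (3) leads to it formally.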