Hi, I am trying to find a rank-1 matrix within the affine space formed by linear combinations of known matrices:
$$ \boldsymbol{A} = \boldsymbol{A_0} + \sum_{m = 1}^{M} x_m \boldsymbol{B}_m $$
where $\boldsymbol{A_0}$ and $\boldsymbol{B}_m, \forall m$, are Hermitian matrices of size $N\times N$, and $\boldsymbol{x} = [x_1, \ldots, x_M]^T\in\mathbb{R}^{M\times 1}$ is a real vector. I have read many related papers and posts, and the best way I know of so far to handle this problem is NNM, i.e., Nuclear Norm Minimization (related post here), which is formulated as:
$$ \min_{\boldsymbol{x}} \left\|\boldsymbol{A_0} + \sum_{m = 1}^{M} x_m \boldsymbol{B}_m \right\|_{*} $$
I used CVX to solve this convex problem, and it works very well. However, it takes too much time when $M$ and $N$ grow larger, e.g., $N = M = 64$ or more. In the scenario I am considering, running time matters, so I need a faster algorithm; some performance degradation is acceptable in exchange.
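For concreteness, here is a minimal sketch of this baseline in CVXPY (I actually used CVX in MATLAB; the function name `nnm_baseline` is made up, and I am assuming CVXPY's `normNuc` accepts the complex Hermitian case — with real symmetric matrices it runs as-is):

```python
import cvxpy as cp
import numpy as np

def nnm_baseline(A0, B_list):
    """Nuclear norm minimization over the real coefficients x (sketch)."""
    M = len(B_list)
    x = cp.Variable(M)  # real coefficient vector
    A = A0 + sum(x[m] * B_list[m] for m in range(M))  # affine in x
    prob = cp.Problem(cp.Minimize(cp.normNuc(A)))
    prob.solve()  # generic conic solver; this is the slow part for large N, M
    return x.value
```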
What I have tried to solve this problem faster:
- subgradient descent (see here for the subgradient of the nuclear norm); a sketch is given below, after this list;
- proximal minimization (see here for the proximal operator of the nuclear norm);
- reformulating the NNM as an SDP and solving it with ADMM (NNM as SDP; ADMM for SDP).
All three methods achieve performance similar to CVX's, but they still take too much time; the main reason is that an SVD of an $N\times N$ matrix is computed in every iteration.
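For reference, a minimal NumPy sketch of the subgradient-descent variant (the name `nnm_subgradient` and the diminishing step size are illustrative, not tuned). Note the full SVD on every pass, which is exactly the cost I want to avoid:

```python
import numpy as np

def nnm_subgradient(A0, B, x0, steps=500, lr=1e-2):
    """Subgradient descent on f(x) = ||A0 + sum_m x_m B[m]||_* (sketch).

    A0 : (N, N) Hermitian ndarray
    B  : (M, N, N) stack of Hermitian ndarrays
    x0 : (M,) initial real coefficient vector
    """
    x = x0.copy()
    for k in range(steps):
        A = A0 + np.tensordot(x, B, axes=1)   # affine map A(x)
        U, s, Vh = np.linalg.svd(A)           # the per-iteration bottleneck
        G = U @ Vh                            # a subgradient of ||.||_* at A(x)
        # chain rule: df/dx_m = Re<B_m, G> = Re tr(B_m^H G)
        grad = np.real(np.einsum('mij,ij->m', B.conj(), G))
        x = x - (lr / np.sqrt(k + 1)) * grad  # diminishing step size
    return x
```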
My questions:
- Is there a faster approach to solving the NNM problem that avoids the SVD entirely, or at least avoids computing it in every iteration? Some performance degradation is acceptable in exchange for speed.
- Alternatively, are there methods other than NNM that are better suited to minimizing the rank of $\boldsymbol{A}$ when $N$ and $M$ are large?
Use `{\bf A}` for bold instead. Also, I would use $n$ instead of $N$. Note that the scalars usually are to the left of the matrices rather than to the right. Also, I would use `B_0` instead of `A_0`. Note that each entry of $\bf A$ is affine in $\bf x$. – Rodrigo de Azevedo Jan 06 '23 at 11:14