
Hi, I am trying to find a rank-1 matrix within an affine family of known matrices:

$$ \boldsymbol{A} = \boldsymbol{A_0} + \sum_{m = 1}^{M} x_m \boldsymbol{B}_m $$

where $\boldsymbol{A_0}$ and $\boldsymbol{B}_m, \forall m$, are Hermitian matrices of size $N\times N$, and $\boldsymbol{x} = [x_1, \ldots, x_M]^T\in\mathbb{R}^{M\times 1}$ is a real vector. I have read many papers and posts related to this, and the best way I know so far to handle this problem is nuclear norm minimization (NNM, related post here), which is formulated as:

$$ \min_{\boldsymbol{x}} \left\|\boldsymbol{A_0} + \sum_{m = 1}^{M} x_m \boldsymbol{B}_m \right\|_{*} $$

I used CVX to solve this convex problem, and it works very well. However, it takes too much time when $M$ and $N$ grow larger, e.g., $N = M = 64$ or more. In the scenario I am considering, time complexity is quite important, so I need faster algorithms to solve this problem, and some performance degradation is acceptable in exchange.
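For concreteness, here is a minimal NumPy sketch of the objective being minimized: the affine map $\boldsymbol{A}(\boldsymbol{x})$ and its nuclear norm (sum of singular values). The sizes and the random Hermitian instances are assumptions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 4  # sizes are assumptions for illustration

def random_hermitian(n, rng):
    # Random Hermitian matrix: symmetrize a complex Gaussian matrix.
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (Z + Z.conj().T) / 2

A0 = random_hermitian(N, rng)
B = np.stack([random_hermitian(N, rng) for _ in range(M)])  # (M, N, N)

def A_of_x(x):
    """Affine matrix map A(x) = A0 + sum_m x_m B_m."""
    return A0 + np.tensordot(x, B, axes=1)

def nuclear_norm(A):
    """Sum of singular values: the NNM objective."""
    return np.linalg.svd(A, compute_uv=False).sum()

x = rng.standard_normal(M)
obj = nuclear_norm(A_of_x(x))
```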

What I have tried to solve this problem faster:

  1. subgradient descent (see here for the subgradient of the nuclear norm).
  2. proximal minimization (see here for the proximal operator of the nuclear norm).
  3. reformulating the NNM as an SDP and solving it with ADMM (NNM as SDP; ADMM for SDP).
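To make the per-iteration cost concrete, here is a sketch of the two SVD-based primitives that methods 1 and 2 rely on: singular value thresholding (the proximal operator of the nuclear norm) and a subgradient of the objective with respect to $\boldsymbol{x}$ via the chain rule. Both are standard constructions, not the exact code I used.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: prox of tau*||.||_* at Y.
    Shrinks every singular value by tau (clipping at zero).
    One full SVD per call -- the step dominating the runtime."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def nuclear_subgrad_x(A, B):
    """A subgradient of ||A(x)||_* w.r.t. x, with B of shape (M, N, N):
    G = U V^H is a subgradient in A; chain rule gives g_m = Re tr(B_m^H G)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    G = U @ Vh
    return np.real(np.einsum('mij,ij->m', B.conj(), G))
```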

These three methods achieve performance similar to CVX but still consume too much time; the main reason is that an SVD of an $N\times N$ matrix is computed in every iteration.
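One standard way to cheapen that step, since the target here is rank 1 and thresholding kills all small singular values anyway, is to compute only the top few singular triplets with a partial SVD (e.g. SciPy's ARPACK-based `svds`, or a randomized SVD). This is a hedged sketch, not the author's method; `k` is a tuning assumption, e.g. a small multiple of the target rank.

```python
import numpy as np
from scipy.sparse.linalg import svds

def svt_truncated(Y, tau, k):
    """Approximate singular value thresholding from only the top-k
    singular triplets. Exact whenever at most k singular values of Y
    exceed tau; much cheaper than a full SVD when k << N."""
    U, s, Vh = svds(Y, k=k)  # partial SVD: k largest singular triplets
    return (U * np.maximum(s - tau, 0.0)) @ Vh
```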

My questions:

  1. Is there a faster approach to solving the NNM problem without computing an SVD, or at least not in every iteration? Some performance degradation is acceptable in exchange for speed.
  2. Or are there better methods than NNM to minimize the rank of $\boldsymbol{A}$ for large $N$ and $M$?
Zieac
  • I would use {\bf A} for bold instead. Also, I would use $n$ instead of $N$. Note that scalars are usually written to the left of the matrices rather than to the right. Also, I would use B_0 instead of A_0. Note that each entry of $\bf A$ is affine in $\bf x$. – Rodrigo de Azevedo Jan 06 '23 at 11:14
  • @RodrigodeAzevedo Thank you for your kind response and suggestions! I have read through the slides, but most of them are already familiar to me. It seems the best I can get is something like proximal minimization, and there are no cheap yet good substitutes for singular value thresholding, which is taking up most of the time. – Zieac Jan 10 '23 at 16:59
  • Are you acquainted with Maryam Fazel's work? – Rodrigo de Azevedo Jan 14 '23 at 11:04
  • @RodrigodeAzevedo I have read some of her famous papers on the nuclear norm and log-det relaxations of rank minimization problems, but I did not notice any fast implementations for solving them. I will go check later, or could you kindly point me to a paper on that? Much appreciated if it is not too much trouble. – Zieac Jan 16 '23 at 03:48

0 Answers