While the singular value decomposition of a matrix is very general, the standard factorization of a matrix $A$ into two singular vector matrices $U$ and $V$ and a singular value matrix $\Lambda$ is not unique, in that there are often multiple choices for those matrices that all yield the original matrix $A$. What set of additional conventions/constraints/normalizations is sufficient to ensure that the decomposition is unique?
-
Essentially it just isn't, and it's silly to try to force it except possibly in $\mathbb{R}^2$ where you might want to enforce some kind of sign-of-determinant convention (i.e. make $U$ and $V$ both have positive determinant if $A$ has positive determinant and choose a particular one of the two to have negative determinant when $A$ has negative determinant). But except in that one case, you have at least $2^n$ choices (in the real case) and there is not really a way to nicely distinguish between them in advance. – Ian Jun 23 '18 at 02:49
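As a concrete illustration of the $2^n$ sign choices (a minimal NumPy sketch, with an arbitrarily chosen matrix and sign pattern): flipping the sign of any column of $U$ together with the matching column of $V$ reproduces $A$ exactly, so the routine's particular choice carries no information about $A$ itself.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
U, s, Vt = np.linalg.svd(A)

signs = np.diag([1, -1, 1])        # one of the 2^3 arbitrary sign choices
U2, Vt2 = U @ signs, signs @ Vt    # flip a column of U and the matching row of V^T

# The two sign choices yield the same product, hence the same A.
print(np.allclose(U @ np.diag(s) @ Vt, U2 @ np.diag(s) @ Vt2))  # True
```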
-
What I am looking for is a set of constraining rules to make sure all different SVD routines get the same answer. One of those rules (an obvious one) is that the singular values must be ordered in some specific way, typically in decreasing order of size. But that is not enough - it needs to be made more robust in some way to handle situations involving repeated singular values. And then there is a whole set of questions about making the singular vector matrices unique. – John Polcari Jun 23 '18 at 13:19
-
And WRT being silly, each of us may have his/her own opinion about what is more or less useful, but the question of whether a transformation is unique (or how non-unique it is) ends up being front and center when it comes to things such as transforming multivariate densities into densities of associated singular values and singular vectors. That is my prime motivation here. – John Polcari Jun 23 '18 at 13:24
-
Perhaps "hopeless" would be better than "silly". The problem is that even in the best case scenario of distinct singular values and a real space, you still have $2^n$ completely arbitrary sign choices to make. Above two, maybe three dimensions, there is no good way to choose a convention. And then it gets worse when singular values collide or the space becomes complex. – Ian Jun 23 '18 at 15:26
-
@Ian An example of how you could reduce the $2^n$ sign choices very simply: just declare that you always pick all positives, so maybe not so hopeless after all. What is harder is making sure everyone considers the same specific set of sign choices. I would argue that the precise identification of such structural issues is at the very heart of mathematics and its extension. – John Polcari Jun 23 '18 at 15:34
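One concrete reading of "pick all positives" is sketched below; the specific rule (make the largest-magnitude entry of each left singular vector positive and absorb the flipped sign into the matching right singular vector) is an assumption here, similar in spirit to scikit-learn's `svd_flip` helper, not something fixed by the thread.

```python
import numpy as np

def canonical_signs(U, Vt):
    """Deterministic sign convention: largest-magnitude entry of each column of U is positive."""
    idx = np.argmax(np.abs(U), axis=0)                 # row of the largest entry per column
    signs = np.sign(U[idx, np.arange(U.shape[1])])
    return U * signs, Vt * signs[:, None]              # absorb each flip into the matching row of V^T

A = np.random.default_rng(3).standard_normal((3, 3))
U, s, Vt = np.linalg.svd(A)
U_c, Vt_c = canonical_signs(U, Vt)
print(np.allclose(U_c @ np.diag(s) @ Vt_c, A))         # True: the product is unchanged
```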
-
The singular vectors are not given to you by a routine that will make "pick all positives" an unambiguous prescription. The routine itself makes those choices behind the scenes, and for the most part the choices it makes are arbitrary (and more importantly discontinuous). In postprocessing you can change them, but why? What matters in the SVD is not really the singular vectors but the singular subspaces, which are unambiguously defined. The same is true for eigendecomposition. – Ian Jun 23 '18 at 15:39
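A small NumPy check of this point (the sign pattern below stands in for whatever arbitrary choice another routine might make): the rank-1 projectors built from the singular vectors, which represent the singular subspaces, are invariant under any such choice.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
U, s, Vt = np.linalg.svd(A)

flips = np.diag([1, -1, -1, 1])          # an arbitrary sign choice another routine might make
U2 = U @ flips

for i in range(4):
    P1 = np.outer(U[:, i], U[:, i])      # projector onto the i-th left singular subspace
    P2 = np.outer(U2[:, i], U2[:, i])
    assert np.allclose(P1, P2)           # the subspace is the same either way
print("projectors agree")
```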
-
Because if you are using SVD as an analytic tool rather than a numerical manipulation, it becomes important. – John Polcari Jun 23 '18 at 15:44
-
No, it is precisely from the analytic angle that these issues do not matter. Analytically the only meaningful things are the singular subspaces, not their representation through singular vectors. Take a look at Trefethen and Bau, *Numerical Linear Algebra*, to see this done well. – Ian Jun 23 '18 at 15:46
-
Will do - my base reference on this would be the classic Golub and Van Loan. One place they matter tremendously in analytic use is when you have to actually enumerate something like spaces of singular values and singular vectors - so you can assign actual integral limits - so you can then actually evaluate the integrals. – John Polcari Jun 23 '18 at 15:54
-
PS - to the moderators, would love to move this discussion to chat, but web site won't allow me to... – John Polcari Jun 23 '18 at 16:10
-
@Ian - see related question for more concrete example of what I am after. – John Polcari Jun 23 '18 at 17:56
1 Answer
Given the lack of answers to date, I offer what I believe to be a possible set of conventions that makes the SVD a unique transformation, primarily to demonstrate the question's feasibility.
In general, the singular values must be treated as complex values (or signed reals), with the traditional singular values being the magnitudes of these values.
The magnitudes of the singular values must be ordered in some specified manner, traditionally in order of decreasing magnitude.
In the case of repeated singular values, there must be a method of uniquely resolving ties for purposes of ordering. My approach to accomplishing this would be to define the SVD in the case of an $N \times N$ square matrix through the limiting form $$\underline{\overline{\mathbf{U}}}\,\underline{\overline{\mathbf{\Lambda}}}\,\underline{\overline{\mathbf{V}}}^{+} = \mathop{\lim}\limits_{\varepsilon \to 0} \left( \underline{\overline{\mathbf{X}}} + \varepsilon \begin{bmatrix} N & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{bmatrix} \right)$$ (that is, the singular vector and singular value matrices are defined as the limits of those of the perturbed matrix, whose singular values are distinct for small $\varepsilon$), with analogous limiting forms in the case of rectangular matrices. [I must include the caveat that I have not exhaustively explored this approach, so there may be issues with its rigor.]
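A rough NumPy sketch of this limiting form (the fully degenerate test matrix and the sequence of $\varepsilon$ values below are illustrative choices): perturbing by $\varepsilon \, \mathrm{diag}(N, \dots, 1)$ splits repeated singular values, so the ordering becomes unambiguous for every small $\varepsilon > 0$.

```python
import numpy as np

N = 3
X = np.eye(N)                        # all singular values equal to 1 (fully degenerate)
D = np.diag(np.arange(N, 0, -1))     # diag(N, ..., 1)

for eps in (1e-2, 1e-4, 1e-6):
    U, s, Vt = np.linalg.svd(X + eps * D)
    print(eps, s)                    # singular values 1 + N*eps > ... > 1 + eps, now distinct
```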
The singular vector matrices on both sides must take the form of this factorization, with the phase matrices $\underline{\overline{\mathbf{\Phi}}}_L$ and $\underline{\overline{\mathbf{\Phi}}}_R$ being the identity. For any more arbitrary choice of singular vector matrices, it is the product $\underline{\overline{\mathbf{\Phi}}}_L \underline{\overline{\mathbf{\Phi}}}_R^{+}$ [appropriately truncated in the case of rectangular matrices] that provides the phase terms that make the singular values complex.
For tall matrices ($M \times N$ with $M > N$), all DOF vectors $\underline{\mathbf{w}}_{Li}$ associated with columns $i = N+1, \dots, M$ of the left hand singular vector matrix must be truncated so that only the first $N$ DOF are nonzero. For wide matrices ($M < N$), an equivalent requirement exists for the right hand singular vector matrix, and for rank deficient matrices (rank $R < \min(M,N)$), an equivalent requirement exists for both singular vector matrices.
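A small NumPy illustration of why the trailing columns need a convention at all (the QR-based completion below is just one arbitrary choice, not the truncation rule proposed above): for $M > N$ those columns multiply zero rows of $\Sigma$, so any orthonormal completion reproduces $A$.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 5, 3
A = rng.standard_normal((M, N))
U, s, Vt = np.linalg.svd(A, full_matrices=True)

Sigma = np.zeros((M, N))
Sigma[:N, :N] = np.diag(s)

# Replace the trailing M - N columns of U with a different orthonormal completion.
Q, _ = np.linalg.qr(np.hstack([U[:, :N], rng.standard_normal((M, M - N))]))
U_alt = np.hstack([U[:, :N], Q[:, N:]])

print(np.allclose(U @ Sigma @ Vt, A), np.allclose(U_alt @ Sigma @ Vt, A))  # True True
```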
