
Let $A$ be a traceless $2\times 2$ complex matrix. Its SVD reads $A=UDV^\dagger$, or in dyadic notation, $$A=s_1 u_1 v_1^\dagger+s_2 u_2 v_2^\dagger,$$ with $\langle u_i,u_j\rangle=\langle v_i,v_j\rangle=\delta_{ij}$ and $s_i\ge0$. The left and right singular vectors of $A$ are $(u_1,u_2)$ and $(v_1,v_2)$, respectively, and its singular values are $s_1,s_2$.

The trace condition $\operatorname{Tr}(A)=0$ translates, in terms of its SVD, into $$s_1\langle v_1,u_1\rangle+s_2\langle v_2,u_2\rangle=0.$$

However, numerically, I find that the stronger condition $\langle u_1,v_1\rangle=\langle u_2,v_2\rangle=0$ holds. In words, the left and right singular vectors corresponding to the same singular values are always orthogonal. You can use the following Mathematica snippet to verify it directly:

With[{mat = # - Tr[#]/2 IdentityMatrix@2 & @ RandomComplex[{-1 - I, 1 + I}, {2, 2}]},
    SingularValueDecomposition@mat //  Dot[ConjugateTranspose@#[[1]], #[[3]]] & // Chop // MatrixForm
]

This snippet generates a random complex matrix $A$ by sampling the real and imaginary parts of its entries from the uniform distribution on $[-1,1]$, and then subtracting $\operatorname{Tr}(A) I/2$ to make it traceless. The output is the matrix of inner products $\langle u_i,v_j\rangle$ between the left and right singular vectors, and the zeros on the diagonal correspond to the orthogonality that is the subject of this question.
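For those without Mathematica, here is a rough NumPy equivalent of the check (assuming `numpy` is available; the function names are mine):

```python
import numpy as np

def random_traceless(seed=None):
    """Sample a 2x2 complex matrix with real and imaginary parts of the
    entries uniform in [-1, 1], then subtract Tr(A)/2 * I so the result
    is traceless (mirroring the Mathematica snippet above)."""
    rng = np.random.default_rng(seed)
    m = rng.uniform(-1, 1, (2, 2)) + 1j * rng.uniform(-1, 1, (2, 2))
    return m - np.trace(m) / 2 * np.eye(2)

def overlap_matrix(a):
    """Return U^dagger V, whose (i, j) entry is <u_i, v_j>.
    numpy.linalg.svd returns V^dagger as its third output,
    so V = vh.conj().T."""
    u, _, vh = np.linalg.svd(a)
    return u.conj().T @ vh.conj().T

a = random_traceless(seed=0)
print(np.round(overlap_matrix(a), 10))  # the diagonal should vanish
```

A random matrix is non-normal with probability one, so the diagonal entries come out numerically zero on essentially every run.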

At the same time, this is clearly false for many matrices, in particular normal ones, for which $u_i=v_i$. Still, is there a way to see why left and right singular vectors are "often" orthogonal for traceless matrices?

glS

4 Answers


If $A=USV^\ast$ is a singular value decomposition of a non-normal traceless $2\times2$ matrix $A$, then $V^\ast U$ must possess a zero diagonal.

Write $-\det(A)$ in polar form as $de^{i\theta}$. By dividing $A$ by $e^{i\theta/2}$ and by a change of orthonormal basis, we may assume that $-\det(A)=d\ge0$ and $V=I$. We want to show that $U$ has a zero diagonal.

Since $A$ has zero trace, the Cayley–Hamilton theorem gives $A^2=-\det(A)I=dI$. Therefore $USUS=dI$.

If $A$ is singular, then $SUS=0$. Since $A$ is not normal (in particular $A\ne0$), it has exactly one positive singular value, so $S=\operatorname{diag}(s,0)$ for some $s>0$. The equality $SUS=0$ thus implies that $u_{11}=0$. As $U$ is unitary, $u_{22}$ must also be zero. Hence $U$ has a zero diagonal.

If $A$ is nonsingular, then $d>0$. From $USUS=dI$, we get $USU=dS^{-1}$, i.e. $(USU^\ast)U^2 = \left(dS^{-1}\right)(I)$; both sides are polar decompositions (positive definite factor times unitary factor) of the same nonsingular matrix. By the uniqueness of polar decompositions of nonsingular matrices, we have $U^2=I$. As $U\ne\pm I$ (otherwise $A=\pm S$ is normal), the spectrum of $U$ must be equal to $\{1,-1\}$. Hence the trace of $U$ is zero. If the diagonal of $U$ is nonzero, since $A=US$ also has zero trace, $S$ must be a scalar matrix and $A=US$ is normal, which is a contradiction. Therefore $U$ has a zero diagonal.

user1551
  • I guess this wasn't as straightforward as I thought! One question: you say "since $A$ is not normal $S=\operatorname{diag}(s,0)$". Why is normality relevant here? If $A$ is singular then it must have a zero singular value, and unless $A=0$ we must have one positive singular value. Also, if I understand you correctly, at the beginning you are essentially replacing $A$ with $AV (\det U)^{-1/2}$, correct? – glS Jun 04 '20 at 18:31
  • regarding the last paragraph, I don't quite understand how you reach $U^2=I$. Of what matrix are you considering the polar decomposition of? – glS Jun 04 '20 at 18:41
  • @glS Let $P_1=USU^\ast$, $U_1=U^2$, $P_2=dS^{-1}$ and $U_2=I$. Then both $P_1U_1$ and $P_2U_2$ are polar decompositions of the matrix they are both equal to (i.e. $USU$ or $dS^{-1}$). – user1551 Jun 04 '20 at 19:04
  • @glS (Sorry, I've missed your first comment.) The zero matrix is normal. If $A$ is singular but not normal, it must possess exactly one positive singular value. – user1551 Jun 04 '20 at 21:06
  • thank you for the clarifications. One other thing. You write that zero trace implies $A^2=d I$. How do you get this? Zero trace implies the eigenvalues are $\pm i\sqrt{d}$, and so I would have said that $A^2=-d I$. Take as an example $A=\begin{pmatrix}i & 0\\ 1 & -i\end{pmatrix}$, which has $d=1$ and $A^2=-I$ – glS Jun 05 '20 at 08:02
  • @glS It's a typo. $\det(A)$ should read $-\det(A)$. That $A^2=-\det(A)I$ follows from the Cayley–Hamilton theorem. – user1551 Jun 05 '20 at 10:09

We first show that every traceless $2\times2$ complex matrix $A$ has a singular value decomposition $USV^\ast$ such that $V^\ast U$ has a zero diagonal. Then we show that if $A$ is also non-normal, $V^\ast U$ has a zero diagonal in every SVD of $A$.

Let $W\pmatrix{|\lambda|e^{i\theta}&-be^{i(\theta+\delta)}\\ 0&-|\lambda|e^{i\theta}}W^\ast$ (where $W$ is unitary and $b\ge0$) be a Schur triangulation of $A$ and let $B=\pmatrix{b&|\lambda|\\ |\lambda|&0}$. Since $B$ is real symmetric, it admits an orthogonal diagonalisation $Q^T\Lambda Q$ over $\mathbb R$ and we may write $\Lambda=SD$ where $S$ is a nonnegative diagonal matrix and $D$ is a diagonal matrix whose diagonal entries are equal to $\pm1$. Therefore

\begin{aligned} A &=e^{i\theta}W\pmatrix{|\lambda|&-be^{i\delta}\\ 0&-|\lambda|}W^\ast\\ &=e^{i\theta}W\pmatrix{e^{i\delta}&0\\ 0&1} \pmatrix{b&|\lambda|\\ |\lambda|&0} \pmatrix{0&-1\\ e^{-i\delta}&0}W^\ast\\ &=\left(e^{i\theta}W\pmatrix{e^{i\delta}&0\\ 0&1}Q^T\right) S \left(DQ\pmatrix{0&-1\\ e^{-i\delta}&0}W^\ast\right)\\ &=USV^\ast \end{aligned} is a singular value decomposition of $A$ and $$ V^\ast U =\left(DQ\pmatrix{0&-1\\ e^{-i\delta}&0}W^\ast\right)\left(e^{i\theta}W\pmatrix{e^{i\delta}&0\\ 0&1}Q^T\right) =e^{i\theta}DQ\pmatrix{0&-1\\ 1&0}Q^T $$ has a zero diagonal.

Note that the above applies even when $A$ is normal. E.g. the matrix $A=\operatorname{diag}(1,-1)$ has a singular value decomposition $A=USV^\ast=(Q)(I)(Q^TA)$ where $Q=\frac{1}{\sqrt{2}}\pmatrix{1&-1\\ 1&1}$. Thus $V^\ast U=Q^TAQ$ has a zero diagonal in this case.
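As a quick numerical sanity check of this example (plain Python; the helper names are mine):

```python
import math

def matmul2(x, y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(x):
    return [[x[j][i] for j in range(2)] for i in range(2)]

s = 1 / math.sqrt(2)
Q = [[s, -s], [s, s]]   # the orthogonal matrix from the example
A = [[1, 0], [0, -1]]   # normal and traceless

# For the SVD A = (Q)(I)(Q^T A): U = Q and V^* = Q^T A,
# so V^* U = Q^T A Q.
VstarU = matmul2(matmul2(transpose2(Q), A), Q)
print(VstarU)  # diagonal entries are (numerically) zero
```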

However, if $A$ is not normal, it must have two different singular values and hence its singular spaces are one-dimensional. Therefore, the fact that $V^\ast U$ has a zero diagonal in one SVD of $A$ implies that $V^\ast U$ has a zero diagonal in every SVD of $A$.

user1551
  • so, stumbling back on this one, I find myself confused by the first sentence. Is there a typo there? You're showing that $V^* U$ must have zero diagonal in general, and then in the specific case of non-normal matrices? – glS May 27 '22 at 10:20
  • @glS What I meant was that SVDs are not unique in general. While a traceless $2\times2$ matrix $A$ always possesses an SVD for which $V^\ast U$ is hollow, it may also possess other SVDs in which $V^\ast U$ is not hollow. In the example $A$ in my answer, $V^\ast U$ is hollow for the SVD $(Q)(I)(Q^TA)$ but it isn't hollow for the SVD $(A)(I)(I)$. However, when $A$ is $2\times2$, traceless and not normal, $V^\ast U$ must be hollow for all SVDs. – user1551 May 28 '22 at 04:09

We know that $A^2=-d I$ where $d\equiv\det(A)$. This follows from the Cayley–Hamilton theorem: $\operatorname{tr}(A)=0$ implies $\lambda_\pm=\pm\sqrt{-\det A}$, and thus $(A-\sqrt{-\det A}\,I)(A+\sqrt{-\det A}\,I)=A^2+\det(A)I=0$.
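For instance, a direct check with the traceless matrix $\begin{pmatrix}i&0\\1&-i\end{pmatrix}$ discussed in the comments above (plain Python; helper names are mine):

```python
def matmul2(x, y):
    """Multiply two 2x2 complex matrices given as nested lists."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(x):
    return x[0][0] * x[1][1] - x[0][1] * x[1][0]

A = [[1j, 0], [1, -1j]]        # traceless: tr(A) = i - i = 0
d = det2(A)                    # det(A) = 1
Asq = matmul2(A, A)
# Cayley-Hamilton for traceless A: A^2 = -det(A) I, here -I
print(Asq, -d)
```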

Moreover, write the SVD of $A$ as $A=USV^\dagger$. We thus have

$$(USV^\dagger)^2=-d I\Longleftrightarrow S W = - d W^\dagger S^{-1},\tag1$$ where $W\equiv V^\dagger U$ is unitary.

We now observe that for all matrices $\lvert\det(A)\rvert=\prod_k s_k$, and thus in particular $\lvert d\rvert = s_1 s_2$ if $s_i\equiv S_{ii}$ are the singular values. Therefore the components of $dS^{-1}$ in its diagonal representation are $$\frac{d}{s_1}=s_{2} e^{i\phi}, \quad \frac{d}{s_2}=s_{1} e^{i\phi}, \quad\text{ where }\quad d=|d| e^{i\phi}. $$ Explicitly, (1) thus reads, in the eigenbasis of $S$, $$\newcommand{\bs}[1]{\boldsymbol{#1}} s_1 \bs e_1 \bs w_1^\dagger + s_2 \bs e_2 \bs w_2^\dagger = -e^{i\phi}(s_2 \bs w_1 \bs e_1^\dagger + s_1 \bs w_2 \bs e_2^\dagger). $$ Defining the auxiliary vectors $\tilde{\bs w_i}\equiv -e^{i\phi}\bs w_i$, we thus have $$\newcommand{\bs}[1]{\boldsymbol{#1}} s_1 \bs e_1 \bs w_1^\dagger + s_2 \bs e_2 \bs w_2^\dagger = s_2 \tilde{\bs w_1} \bs e_1^\dagger + s_1 \tilde{\bs w_2} \bs e_2^\dagger.\tag2 $$ But $\langle \tilde{\bs w_i},\tilde{\bs w_j}\rangle=\langle \bs w_i,\bs w_j\rangle=\langle \bs e_i,\bs e_j\rangle=\delta_{ij}$, thus LHS and RHS are SVDs of the same matrix. The uniqueness of the SVD therefore tells us that one of the two following cases must hold:

  1. $s_1=s_2$. This can only happen for normal matrices, as it implies that $A^\dagger A=AA^\dagger= s_1^2 I$.

  2. $s_1\neq s_2$. In this case for (2) to be possible we must have $\bs e_1\bs w_1^\dagger=\tilde{\bs w_2}\bs e_2^\dagger=-e^{i\phi}\bs w_2\bs e_2^\dagger$, and thus $\bs w_2=\bs e_1$ and $\bs w_1=\bs e_2$ up to a phase.

glS

I later found out that this result can be seen as a direct consequence of the fact that any traceless matrix has an orthonormal basis with respect to which its diagonal is zero. In other words, for any $A$ such that $\operatorname{tr}(A)=0$, there's an orthonormal basis $\mathbf u_k$ such that $\langle \mathbf u_k,A\mathbf u_k\rangle=0$ for all $k$. This is discussed e.g. in Is every square traceless matrix unitarily similar to a zero-diagonal matrix? and Is there a similarity transformation rendering all diagonal elements of a matrix equal?.

To see why that result implies the one at hand, note that if $A$ is $2\times2$ and has zero diagonal with respect to the basis $\mathbf u_k$, then $\langle \mathbf u_1,A\mathbf u_1\rangle=\langle\mathbf u_2,A\mathbf u_2\rangle=0$, and therefore we must have $A\mathbf u_1\propto \mathbf u_2$ and $A\mathbf u_2\propto \mathbf u_1$. But this means that $\langle A\mathbf u_1,A\mathbf u_2\rangle=0$, i.e. $\{\mathbf u_1,\mathbf u_2\}$ are principal components in the (or a) singular value decomposition of $A$. In other words, $A$ can be written as the SVD $$A = \alpha \mathbf u_1 \mathbf u_2^\dagger + \beta \mathbf u_2 \mathbf u_1^\dagger,$$ for some singular values $\alpha,\beta\ge0$.

So in summary, if $A$ has zero diagonal with respect to the basis $\mathbf u_1,\mathbf u_2$, then $\{\mathbf u_1,\mathbf u_2\}$ and $\{\mathbf u_2,\mathbf u_1\}$ are left and right singular vectors of $A$, respectively.
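As a minimal illustration in the standard basis (plain Python; the zero-diagonal matrix is a hypothetical example of mine):

```python
# A hypothetical zero-diagonal 2x2 matrix in the standard basis e1, e2:
# its action swaps the basis directions, up to complex scalings.
A = [[0, 2 + 1j], [0.5j, 0]]

Ae1 = [A[0][0], A[1][0]]   # = (0, 0.5j), proportional to e2
Ae2 = [A[0][1], A[1][1]]   # = (2+1j, 0), proportional to e1

# Hence the images of the basis vectors are orthogonal:
inner = Ae1[0].conjugate() * Ae2[0] + Ae1[1].conjugate() * Ae2[1]
print(inner)  # 0

# and the singular values are the moduli of the off-diagonal entries:
svals = sorted([abs(A[0][1]), abs(A[1][0])])
```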

glS