
I have a $d\times d$ positive definite matrix $A$ and want to obtain its eigenvalues $\{\lambda_i\}$ from the list of moments $\{m_i\}$: $$\{m_i\}=\left\{\frac{1}{d}\operatorname{Tr}(A),\frac{1}{d}\operatorname{Tr}(A^2),\ldots,\frac{1}{d}\operatorname{Tr}(A^d)\right\}$$

Since $m_k$ is the average eigenvalue of $A^k$, we can write it as an expectation over $\lambda$ drawn from $p(\lambda)$, the density of $A$'s eigenvalues:

$$\frac{1}{d}\operatorname{Tr}A^k =E_{\lambda}\left[\lambda^k\right]=\int \mathrm{d}\lambda\, p(\lambda)\, e^{k\log \lambda}$$

which looks like it could be massaged into a Laplace transform, $\int \mathrm{d}\mu\ e^{-k \mu}$, via the substitution $\mu=-\log\lambda$.

Is there a practical way to do the reverse mapping? I.e., to use the Laplace transform / inverse Laplace transform to go between $\lambda$ and the moments of an $A$ with the following eigenvalues?

$$\{\lambda_i\}=\left\{1,\frac{1}{2},\frac{1}{3},\ldots,\frac{1}{100}\right\}$$
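For concreteness, the moment list for that spectrum can be generated directly; a quick pure-Python sketch with exact rationals (the helper name `moment` is mine):

```python
from fractions import Fraction

d = 100
# the spectrum in question: 1, 1/2, 1/3, ..., 1/100
lams = [Fraction(1, j) for j in range(1, d + 1)]

def moment(k):
    """m_k = (1/d) Tr(A^k) = (1/d) * sum_i lambda_i^k, computed exactly."""
    return sum(lam ** k for lam in lams) / d

# m_1 is the harmonic number H_100 divided by 100
print(float(moment(1)), float(moment(2)))
```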

3 Answers


It’s not in the form of a single, clean formula, but you can thrash out the Newton–Girard identities as explained here. Edit: I recently learned it *is* in the form of a single, "clean" formula:

$$\det(A)=(-1)^n\cdot\sum_{m_1+2m_2+3m_3+\cdots+nm_n=n\\\,\,\quad\quad\quad m_\bullet\ge0}\prod_{k=1}^n\frac{(-1)^{m_k}\cdot\operatorname{Tr}(A^k)^{m_k}}{k^{m_k}\cdot m_k!}$$

More generally, the $j$th coefficient of $\chi_A(t)=t^n+a_1t^{n-1}+\cdots+a_{n-1}t+a_n$ is given by: $$a_j=\sum_{m_1+2m_2+3m_3+\cdots+jm_j=j\\\,\,\quad\quad\quad m_\bullet\ge0}\prod_{k=1}^j\frac{(-1)^{m_k}\cdot\operatorname{Tr}(A^k)^{m_k}}{k^{m_k}\cdot m_k!}$$

This follows from a formula on that same Wikipedia page that I somehow missed on the first dozen viewings... Wikipedia doesn't provide a proof (unless I remain blind) so here's a sketch:

For a fixed set of letters $\lambda_1,\lambda_2,\cdots,\lambda_i$, let $p_k$ and $e_k$ denote the $k$th power sum $\sum_j\lambda_j^k$ and the $k$th elementary symmetric function, respectively.

Observe that $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}p_kt^k=\sum_{j=1}^i\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}(\lambda_jt)^k=\sum_{j=1}^i\ln(1+\lambda_jt)$ for sufficiently small real $t$. Exponentiating and using $\exp(a+b)=\exp(a)\exp(b)$, we see fairly clearly that: $$\exp\left(\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}p_kt^k\right)=\prod_{j=1}^i(1+\lambda_jt)=\sum_{k=0}^\infty e_kt^k$$ as an equality of analytic functions for $t$ in a neighbourhood of zero (here $e_0=1$ and $e_k=0$ for $k>i$).

We may then compare coefficients after expressing $\exp(\sum\cdots)=\exp(p_1t-p_2t^2/2+\cdots)=\exp(p_1t)\exp(-p_2t^2/2)\cdots=(1+p_1t+\frac{p_1^2t^2}{2!}+\cdots)(1-\frac{p_2t^2}{2}+\frac{p_2^2t^4}{2^2\cdot2!}+\cdots)\cdots$ and expanding the product: the coefficient of $t^j$ on the left is $\sum\prod_{k=1}^j\frac{(-1)^{(k+1)m_k}\cdot p_k^{m_k}}{k^{m_k}\cdot m_k!}$, the sum running over the same index set as below. Since $\sum_k km_k=j$ forces $\prod_k(-1)^{(k+1)m_k}=(-1)^{j+\sum_km_k}$, this rearranges to: $$(-1)^je_j=\sum_{m_1+2m_2+3m_3+\cdots+jm_j=j\\\,\,\quad\quad\quad m_\bullet\ge0}\prod_{k=1}^j\frac{(-1)^{m_k}\cdot p_k^{m_k}}{k^{m_k}\cdot m_k!}$$ as desired.

You can justify the slightly iffy "$\cdots$" and the infinite product expansion by noting that to extract a coefficient we need only look up to $O(t^j)$ and take a finite expansion plus an error term of higher order. Then, provably, the higher-order term - some analytic function - will not contribute to the desired coefficient. We only need to work with finite expansions.
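The partition-sum formula lends itself to a direct numerical sanity check. A rough pure-Python sketch (helper names are mine), compared against a hand-computed characteristic polynomial:

```python
from math import factorial

def partitions_weighted(j):
    """Yield all (m_1, ..., m_j) with m_1 + 2*m_2 + ... + j*m_j == j, m_k >= 0."""
    def rec(k, remaining):
        if k > j:
            if remaining == 0:
                yield ()
            return
        for m in range(remaining // k + 1):
            for rest in rec(k + 1, remaining - k * m):
                yield (m,) + rest
    return rec(1, j)

def coeff_from_traces(traces, j):
    """a_j of chi_A(t) = t^n + a_1 t^{n-1} + ..., given traces[k] = Tr(A^k)."""
    total = 0.0
    for ms in partitions_weighted(j):
        term = 1.0
        for k, m in enumerate(ms, start=1):
            term *= (-1) ** m * traces[k] ** m / (k ** m * factorial(m))
        total += term
    return total

# A = [[2, 1], [1, 3]] has Tr(A) = 5, Tr(A^2) = 15 and chi_A(t) = t^2 - 5t + 5
tr = {1: 5.0, 2: 15.0}
print(coeff_from_traces(tr, 1), coeff_from_traces(tr, 2))  # -5.0 5.0
```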

(End of edit - the original post continues.)


Knowledge of the first $d$ power sums of $d$ constants together with these identities allows you to deduce the elementary symmetric polynomial values - for example, with $d=5$ you could infer the quantity: $\lambda_1\lambda_2\lambda_3\lambda_4+\lambda_1\lambda_2\lambda_3\lambda_5+\lambda_1\lambda_2\lambda_4\lambda_5+\lambda_1\lambda_3\lambda_4\lambda_5+\lambda_2\lambda_3\lambda_4\lambda_5$.

Ring a bell? These are the expressions involved in Vieta’s identities. So knowledge of all power sums - the traces - allows you to construct the coefficients of the characteristic polynomial. If you can find the roots of that polynomial to satisfactory accuracy, you can get your eigenvalues back.

A simple example with $d=2$: if you know $\operatorname{Tr}(A)=\lambda_1+\lambda_2=5$ and $\operatorname{Tr}(A^2)=\lambda_1^2+\lambda_2^2=19$, then you can deduce $\lambda_1\lambda_2=\frac12\left((\lambda_1+\lambda_2)^2-(\lambda_1^2+\lambda_2^2)\right)=\frac12(25-19)=3$, and thus that the eigenvalues are the roots of: $$x^2-5x+3=0$$ from which you can get their exact values.
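The whole trace-to-eigenvalue pipeline for this example can be sketched in a few lines of Python; the recurrence is the standard Newton–Girard one, and the helper names are mine:

```python
from math import sqrt

def elementary_from_power_sums(p):
    """Newton-Girard: e_1..e_d from power sums p[1..d] (p[0] is unused)."""
    d = len(p) - 1
    e = [1.0] + [0.0] * d
    for k in range(1, d + 1):
        # k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i
        e[k] = sum((-1) ** (i - 1) * e[k - i] * p[i] for i in range(1, k + 1)) / k
    return e

# the d = 2 example above: Tr(A) = 5, Tr(A^2) = 19
e = elementary_from_power_sums([None, 5.0, 19.0])
print(e[1], e[2])  # 5.0 3.0 -> characteristic polynomial x^2 - 5x + 3

# solve the quadratic for the eigenvalues
disc = sqrt(e[1] ** 2 - 4 * e[2])
eigs = ((e[1] + disc) / 2, (e[1] - disc) / 2)
print(eigs)
```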

I learned this in the context of basic character theory: it showed me that studying the traces of matrices actually gives you a surprising amount of information, more than is gained from studying the determinant alone. Indeed, the Newton–Girard identities allow the determinant of an $n\times n$ matrix $M$ to be determined from the traces of $M,M^2,\cdots,M^n$. But knowing the determinant wouldn’t tell you the trace: what if one of the eigenvalues were zero? Then the trace could be anything as the other eigenvalues vary, but the determinant would always be zero.

FShrike
  • Interesting... I'm wondering if that's equivalent to an inverse discrete Laplace transform in some way. I.e., if I have a closed-form expression for $m_i$ that also works for non-integer $i$, I can get $\lambda_i$ from the inverse Laplace transform of $m$ (related example worked through here) – Yaroslav Bulatov Mar 19 '23 at 23:34
  • Unless you can deduce roots of polynomials from (inverse) Laplace transforms, I don’t see why these methods are related – FShrike Mar 19 '23 at 23:43

You can also use the explicit formula (see the bottom of the Wikipedia article dealing with the Faddeev-LeVerrier method) giving the coefficients:

$$c_{n-m}={\frac {(-1)^{m}}{m!}}{\begin{vmatrix}\operatorname{tr} A&m-1&0&\cdots&0\\ \operatorname{tr} A^{2}&\operatorname{tr} A&m-2&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ \operatorname{tr} A^{m-1}&\operatorname{tr} A^{m-2}&\cdots&\operatorname{tr} A&1\\ \operatorname{tr} A^{m}&\operatorname{tr} A^{m-1}&\cdots&\operatorname{tr} A^{2}&\operatorname{tr} A\end{vmatrix}}$$

of the characteristic polynomial of $A$ written in the form:

$$\displaystyle p_{A}(\lambda )\equiv \det(\lambda I_{n}-A)=\sum _{k=0}^{n}c_{k}\lambda ^{k}$$

Caution: usually, $p_{A}(\lambda )= \det(A-\lambda I_{n}).$

Once you have the characteristic polynomial of $A$, you have access to its eigenvalues.

Particular cases :

  • if $m=1$, one retrieves $c_{n-1}=-\operatorname {tr} A.$

  • if $m=2$, one retrieves $c_{n-2}=\frac12\left((\operatorname {tr} A)^2 - \operatorname {tr} (A^2)\right)$, which bears some resemblance to a variance formula.
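Here is a rough pure-Python rendering of that determinant formula (helper names are mine; a naive Laplace-expansion determinant is fine for the small $m\times m$ matrices involved), checked against a hand-computed characteristic polynomial:

```python
from fractions import Fraction
from math import factorial

def det(B):
    """Determinant by Laplace expansion along the first row (fine for small m)."""
    if len(B) == 1:
        return B[0][0]
    total = 0
    for j in range(len(B)):
        minor = [row[:j] + row[j + 1:] for row in B[1:]]
        total += (-1) ** j * B[0][j] * det(minor)
    return total

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_powers(A, m):
    """{1: tr(A), 2: tr(A^2), ..., m: tr(A^m)}."""
    n, P, out = len(A), A, {}
    for p in range(1, m + 1):
        out[p] = sum(P[i][i] for i in range(n))
        P = mat_mul(P, A)
    return out

def c_coeff(A, m):
    """c_{n-m} of det(lambda*I - A) via the trace determinant formula."""
    tr = trace_powers(A, m)
    # row i: tr(A^i), tr(A^{i-1}), ..., tr(A), then m-i on the superdiagonal
    B = [[tr[i - j + 1] if j <= i else (m - i if j == i + 1 else 0)
          for j in range(1, m + 1)] for i in range(1, m + 1)]
    return Fraction((-1) ** m * det(B), factorial(m))

A = [[2, 1], [1, 3]]                 # chi_A(lambda) = lambda^2 - 5*lambda + 5
print(c_coeff(A, 1), c_coeff(A, 2))  # -5 5
```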

Jean Marie
  • Interesting... this seems impractical for large $d$ because of numeric issues. I'm wondering if having values $m(k)=\operatorname{Tr}(A^k)$ for non-integer values of $k$ would make it practical. If we had a formula for $m(k)$, then a formula for $\lambda_i$ would come out of the inverse Laplace transform; it's interesting that there isn't a practical way to do the discrete version – Yaroslav Bulatov Mar 20 '23 at 15:05
  • I am conscious that this expression is interesting mainly for theoretical reasons. I am interested in the connection you are attempting to make with the Laplace transform or something equivalent. – Jean Marie Mar 20 '23 at 15:08

It is perfectly possible to retrieve the eigenvalues from the information you have, although it is not going to work for big values of $d$: one can always construct the characteristic polynomial of the matrix in question and then seek its roots numerically.

Let $M_p:=\operatorname{Tr}[A^p]$ and then define:

\begin{eqnarray} a_p &:=& \left. \frac{1}{p!} \frac{d^p}{d z^p} e^{\sum\limits_{q=1}^\infty \frac{(-1)^{q-1}}{q} \cdot M_q z^q } \right|_{z=0} \\ &=& \left( \begin{array}{c} 1 \\ M_1 \\ \frac{1}{2} \left(M_1^2-M_2\right) \\ \frac{1}{6} \left(M_1^3-3 M_2 M_1+2 M_3\right) \\ \frac{1}{24} \left(M_1^4-6 M_2 M_1^2+8 M_3 M_1+3 M_2^2-6 M_4\right) \\ \frac{1}{120} \left(M_1^5-10 M_2 M_1^3+20 M_3 M_1^2+15 M_2^2 M_1-30 M_4 M_1-20 M_2 M_3+24 M_5\right) \\ \cdots \\ \end{array} \right) \end{eqnarray} for $p=0,1,2,\cdots $.

Then the polynomial $\sum\limits_{p=0}^d a_p z^p$ has roots equal to $(-1/\lambda_k)_{k=1}^d$.

As for your example, I took $d=10$; you can see the coefficients of the polynomial in question, along with the negated inverses of the roots, in the code snippet below:

(* work with the first Pmax - 1 = 10 moments *)
Pmax = 11;
(* symbolic coefficients a_p for p = 0, ..., Pmax - 1 *)
aas = Table[
    1/p! D[Exp[Sum[(-1)^(q - 1)/q M[q] z^q, {q, 1, Pmax}]], {z, p}],
    {p, 0, Pmax - 1}] /. z :> 0;
MatrixForm[Take[aas, 6] /. M[p_] :> Subscript[M, p]]
(* substitute the power sums of the eigenvalues 1, 1/2, ..., 1/10 *)
aas = aas /. M[p_] -> Sum[1/j^p, {j, 1, Pmax - 1}]
f[x] = aas.(x)^Range[0, Pmax - 1];
(* the negated inverses of the roots recover the eigenvalues *)
-1/x /. NSolve[f[x] == 0, x]
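For anyone without Mathematica, the same computation can be sketched in pure Python with exact rationals, here with $d=4$ to keep it readable (helper names are mine; `series_exp` implements the usual $g'=f'g$ power-series recurrence for the exponential):

```python
from fractions import Fraction as F

def series_exp(b, d):
    """Coefficients g_0..g_d of exp(sum_{q>=1} b[q] z^q), via p*g_p = sum_q q*b_q*g_{p-q}."""
    g = [F(1)] + [F(0)] * d
    for p in range(1, d + 1):
        g[p] = sum(q * b[q] * g[p - q] for q in range(1, p + 1)) / p
    return g

d = 4
lams = [F(1, j) for j in range(1, d + 1)]  # eigenvalues 1, 1/2, 1/3, 1/4
M = [None] + [sum(lam ** q for lam in lams) for q in range(1, d + 1)]  # M_q = Tr(A^q)
b = [None] + [F((-1) ** (q - 1), q) * M[q] for q in range(1, d + 1)]
a = series_exp(b, d)  # the a_p of this answer, computed exactly

# each z = -1/lambda_k is an exact root of sum_p a_p z^p
for lam in lams:
    z = -1 / lam
    assert sum(a[p] * z ** p for p in range(d + 1)) == 0
print(a)
```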


Przemo
  • The expressions you give are exactly those obtained by expanding the Faddeev-LeVerrier determinants of my answer. – Jean Marie Jan 12 '24 at 08:11