
Let $N \ge 2$ be an integer and let $\vec{A}:= \left( A_j \right)_{j=1}^N \in {\mathbb R}^N_+$. Consider the following integral:

\begin{equation} {\mathfrak I}_N(\vec{A}) := \int\limits_{{\mathbb R}_{\ge}^N} \prod\limits_{1 \le i < j \le N} (\lambda_i - \lambda_j) \cdot e^{-\vec{A} \cdot \vec{\lambda}} \cdot d^N \vec{\lambda} \tag{1} \end{equation}

Here ${\mathbb R}_{\ge}^N := \left\{\left. (\lambda_j)_{j=1}^N \right| \infty > \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N \ge 0\right\}$ is the Weyl chamber.

This integral is of interest in Random Matrix Theory, in particular in computing moments and/or correlation functions of the Wishart distribution.
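For orientation, the smallest case can be done by hand: for $N=2$ the iterated integral gives $1/({\bar A}_1^2 {\bar A}_2)$, where ${\bar A}_1 := A_1$, ${\bar A}_2 := A_1 + A_2$. A quick cross-check in SymPy (Python rather than the Mathematica used below; the symbol names are mine):

```python
import sympy as sp

# Iterated integration of (1) for N = 2 (SymPy sketch; symbols are mine).
A1, A2 = sp.symbols('A1 A2', positive=True)
l1, l2 = sp.symbols('l1 l2', positive=True)

inner = sp.integrate((l1 - l2)*sp.exp(-A1*l1), (l1, l2, sp.oo))  # = exp(-A1*l2)/A1**2
I2 = sp.integrate(inner*sp.exp(-A2*l2), (l2, 0, sp.oo))

# I2 equals 1/(A1^2 (A1 + A2)), i.e. 1/(barA_1^2 barA_2)
print(sp.simplify(I2 - 1/(A1**2*(A1 + A2))))
```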

Even a glance at the formula above shows that computing this integral is not conceptually hard. There are at least two ways to tackle it: first, by performing elementary integrations over consecutive $\lambda$'s, from the one with the smallest index to the one with the biggest index; or second, by expanding the Vandermonde determinant in the integrand and integrating term by term. I chose the first approach and, using Mathematica, guessed the following result:

\begin{equation} {\mathfrak I}_N(\vec{A}) = \sum\limits_{\substack{l_1+l_2+ \cdots + l_N = \binom{N}{3} \\ l_1 \ge 0, \cdots, l_N \ge 0}} {\mathfrak A}^{(N)}_{l_1,l_2,\cdots,l_N} \frac{\prod\limits_{\xi=1}^N ({\bar A}_\xi)^{l_\xi} }{ \prod\limits_{\xi=1}^N ({\bar A}_\xi)^{\xi N -\xi^2+1}} \tag{2} \end{equation} where ${\bar A}_j := A_1+\cdots+A_j$ for $j=1,\cdots, N$ and the ${\mathfrak A}^{(N)}_{l_1,l_2,\cdots,l_N}$ are positive integers that do not depend on $\vec{A}$. We have worked out those coefficients for $N=2,3,4,5$, as you can see below:

(*Direct iterated integration over the Weyl chamber*)
Clear[lmb]; Clear[A]; NN = 6;
FF = Product[lmb[i] - lmb[j], {i, 1, NN}, {j, i + 1, NN}] Exp[-Sum[A[j] lmb[j], {j, 1, NN}]];
Do[
  FF = Integrate[FF, {lmb[j], If[j == NN, 0, lmb[j + 1]], Infinity},
    Assumptions -> And @@ Table[A[j] > 0, {j, 1, NN}]];
  PrintTemporary["j=", j, " done"];
  , {j, 1, NN}];
FF

(Denominator[ress] /. A[j_] :> Subscript[A, j]) // MatrixForm
(Table[Expand[(Numerator[ress[[nn]]] /. A[j_] :> A[j] - If[j == 1, 0, A[j - 1]])], {nn, 1, Length[ress]}] /. A[j_] :> Subscript[A, j]) // MatrixForm

(image: tables of the coefficients ${\mathfrak A}^{(N)}_{l_1,\cdots,l_N}$ for $N=2,3,4,5$)
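As an independent check of the conjectured form $(2)$, the $N=3$ integral can be evaluated directly; it comes out as $2({\bar A}_1+{\bar A}_2)/({\bar A}_1^3 {\bar A}_2^3 {\bar A}_3)$, i.e. denominator exponents $\xi N - \xi^2 + 1 = (3,3,1)$, numerator of total degree $\binom{3}{3}=1$, and positive integer coefficients. A SymPy sketch (Python; symbols are mine):

```python
import sympy as sp

# Direct iterated integration of (1) for N = 3, cross-checking form (2).
A1, A2, A3 = sp.symbols('A1 A2 A3', positive=True)
l1, l2, l3 = sp.symbols('l1 l2 l3', positive=True)

f = (l1 - l2)*(l1 - l3)*(l2 - l3)*sp.exp(-A1*l1 - A2*l2 - A3*l3)
f = sp.integrate(f, (l1, l2, sp.oo))      # lambda_1 from lambda_2 to oo
f = sp.integrate(f, (l2, l3, sp.oo))      # lambda_2 from lambda_3 to oo
I3 = sp.integrate(f, (l3, 0, sp.oo))      # lambda_3 from 0 to oo

# Form (2) at N = 3: exponents (3, 3, 1), coefficients frak A = (2, 2).
B1, B2, B3 = A1, A1 + A2, A1 + A2 + A3    # partial sums bar A_j
expected = 2*(B1 + B2)/(B1**3 * B2**3 * B3)
print(sp.simplify(I3 - expected))
```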

It was impossible to go beyond $N=5$: the sheer number of terms overwhelmed Mathematica.

In view of this, my question is the following. Would it be possible to find a smarter way of computing this integral? In particular, could we establish some recurrence relations in $N$ to carry out the computations?


  • Naive question. Did you try the change of variables based on introducing non-negative slack variables $(t_1, t_2, \ldots) \geq 0$ so that $\lambda _n = t_1, \lambda _{n-1} = t_1 + t_2,$ etc ? Then the region of integration is simply $t_j\geq0$ for all $j$ (Positive orthant) – MathFont Apr 17 '23 at 15:01
  • And the polynomial factor $P(t) =\Pi_{i<j} ( t_i+ \ldots + t_j)$ is just what you get by applying the associated partial derivative operator $p(D_b)$ to $e^{\vec t \cdot \vec B}$ where $\vec B \cdot t = \vec A \cdot \lambda$ is what you get from the change of variables. – MathFont Apr 17 '23 at 15:10
  • In other words, you just have to differentiate an elementary exponential integral identity. – MathFont Apr 17 '23 at 15:17
  • @MathWonk Alright, maybe it was naive of me to ask it but I am not an expert in RMT. Thank you for helping me. So you are saying the result simply is $(-1)^{\binom{N}{2}} \prod \limits_{1 \le i < j \le N} \left( \partial_{{\bar A}_{i}} + \cdots + \partial_{{\bar A}_{j-1}} \right) \cdot \prod\limits_{j=1}^N 1/{\bar A}_j $. I checked the cases $N=2,3$ and they match. – Przemo Apr 17 '23 at 15:59
  • The final details probably involve some nice combinatorial ideas. I'll be curious to know how that plays out. Good luck! – MathFont Apr 17 '23 at 18:01
  • "maybe it was naive of me to ask it": you have misinterpreted the first comment of @MathWonk: he/she meant "naive question from him/her"... – Jean Marie May 03 '23 at 18:46

1 Answer


Owing to the excellent advice of the user MathWonk I can now write the full answer to my question.

Using the change of variables given in the comments above, one finds a closed-form expression: the action of a differential operator on the product of the inverses of the quantities $({\bar A}_j)_{j=1}^N$. From it, the recurrence in question reads:

\begin{equation} {\mathfrak I}_N(\vec{A}) = (-1)^{N-1} \left[ \prod\limits_{i=1}^{N-1} \left( \partial_{{\bar A}_i} + \cdots + \partial_{{\bar A}_{N-1}} \right) \cdot {\mathfrak I}_{N-1} \left( A_1, \cdots, A_{N-1} \right) \right] \cdot \frac{1}{{\bar A}_N} \tag{3} \end{equation} subject to ${\mathfrak I}_2(\vec{A}) := 1/({\bar A}_1^2 {\bar A}_2) $.
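The recurrence $(3)$ takes only a few lines to implement. Here is a sketch in SymPy (Python rather than Mathematica; the symbols $B_j$ stand for ${\bar A}_j$ and `frakI` is my own name):

```python
import sympy as sp

def frakI(N):
    """I_N as a function of B1..BN (B_j = bar A_j), built from recurrence (3)."""
    B = sp.symbols(f'B1:{N + 1}', positive=True)
    expr = 1/(B[0]**2 * B[1])                    # base case I_2 = 1/(B1^2 B2)
    for n in range(3, N + 1):
        for i in range(1, n):                    # apply (d/dB_i + ... + d/dB_{n-1})
            expr = sum(sp.diff(expr, B[k]) for k in range(i - 1, n - 1))
        expr = sp.together((-1)**(n - 1) * expr / B[n - 1])
    return expr

# N = 3 reproduces the direct integration: 2 (B1 + B2) / (B1^3 B2^3 B3)
print(sp.factor(frakI(3)))
```

The homogeneity degree of `frakI(N)` is $-\left(\binom{N}{2}+N\right)$, which is a useful sanity check for larger $N$.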

Now we are going to show that the solution of the recurrence $(3)$ is equation $(2)$. Before doing so, let us note that the differential operator in $(3)$ can always be written as ${\hat O}_{N-1} := \prod\limits_{i=1}^{N-1} \left( \partial_{{\bar A}_i} + \cdots + \partial_{{\bar A}_{N-1}} \right) = \sum\limits_{d_1=0}^1 \sum\limits_{d_2=0}^2 \cdots \sum\limits_{d_{N-2}=0}^{N-2} \sum\limits_{d_{N-1}=1}^{N-1} {\mathfrak N}^{(N-1)}_{d_1,\cdots,d_{N-1}} \partial^{(d_1)}_{{\bar A}_1} \partial^{(d_2)}_{{\bar A}_2}\cdots \partial^{(d_{N-1})}_{{\bar A}_{N-1}}$, where the sum is restricted to $d_1+\cdots+d_{N-1}=N-1$. The coefficients ${\mathfrak N}^{(N-1)}_{\cdots}$ are easily generated from the observation ${\hat O}_N = (\partial_1 + \cdots + \partial_N) \left. {\hat O}_{N-1} \right|_{i \rightarrow i+1}$, where the last factor denotes the operator ${\hat O}_{N-1}$ with all subscripts increased by one.
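The coefficients ${\mathfrak N}^{(N)}_{d_1,\cdots,d_N}$ can be generated by expanding the product as an ordinary polynomial in commuting variables $x_i$ standing in for $\partial_{{\bar A}_i}$ (again a SymPy sketch; `Ncoeffs` is my own name):

```python
import sympy as sp

def Ncoeffs(N):
    # O_N = prod_{i=1}^{N} (x_i + ... + x_N); the coefficient of
    # x1^d1 ... xN^dN is frak N^{(N)}_{d1,...,dN}.
    x = sp.symbols(f'x1:{N + 1}')
    poly = sp.Poly(sp.expand(sp.prod(sum(x[i:]) for i in range(N))), *x)
    return dict(poly.terms())

# For N = 3: O_3 = (x1+x2+x3)(x2+x3)x3; every monomial has total degree 3
print(Ncoeffs(3))
```

Setting all $x_i = 1$ shows that the coefficients of ${\hat O}_N$ sum to $N!$, which is a handy consistency check.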

Therefore we have:

\begin{eqnarray} {\mathfrak I}_N(\vec{A}) &=& (-1)^{N-1} \sum\limits_{\substack{d_1+\cdots+d_{N-1}=N-1 \\ 0 \le d_p \le p, \; d_{N-1} \ge 1}} {\mathfrak N}^{(N-1)}_{d_1,\cdots,d_{N-1}} \, \partial^{(d_1)}_{{\bar A}_1} \cdots \partial^{(d_{N-1})}_{{\bar A}_{N-1}} \left( \sum\limits_{l_1+\cdots+l_{N-1} = \binom{N-1}{3}} {\mathfrak A}^{(N-1)}_{l_1,\cdots,l_{N-1}} \prod\limits_{p=1}^{N-1} \frac{1}{({\bar A}_p)^{p(N-1)-p^2+1-l_p}} \right) \cdot \frac{1}{{\bar A}_N} \\ &=& \sum\limits_{\substack{d_1+\cdots+d_{N-1}=N-1 \\ 0 \le d_p \le p, \; d_{N-1} \ge 1}} {\mathfrak N}^{(N-1)}_{d_1,\cdots,d_{N-1}} \sum\limits_{l_1+\cdots+l_{N-1} = \binom{N-1}{3}} {\mathfrak A}^{(N-1)}_{l_1,\cdots,l_{N-1}} \prod\limits_{p=1}^{N-1} \frac{\left(p(N-1)-p^2+1-l_p\right)^{(d_p)}}{({\bar A}_p)^{p N-p^2+1-l_p-p+d_p}} \cdot \frac{1}{{\bar A}_N} \tag{4} \end{eqnarray} Here $x^{(d)} := x(x+1)\cdots(x+d-1)$ is the rising factorial, and the factors $(-1)^{d_p}$ produced by the differentiations have cancelled against the prefactor, since $(-1)^{N-1}\prod_{p}(-1)^{d_p} = (-1)^{2(N-1)} = 1$. Now we relabel the $l$-indices in $(4)$ in an obvious way, i.e. ${\bar l}_p := l_p + p - d_p$ for $p=1,\cdots,N-1$ and ${\bar l}_N := 0$, so that the exponent of ${\bar A}_p$ becomes $pN-p^2+1-{\bar l}_p$. Then $\sum\limits_{p=1}^N {\bar l}_p = \sum\limits_{p=1}^{N-1} l_p + \binom{N}{2} - \sum\limits_{p=1}^{N-1} d_p = \sum\limits_{p=1}^{N-1} l_p + \binom{N}{2} - (N-1) = \sum\limits_{p=1}^{N-1} l_p + \binom{N-1}{2} = \binom{N-1}{3} + \binom{N-1}{2} = \binom{N}{3}$. We conclude that ${\mathfrak I}_N$ has exactly the functional form $(2)$. Proof completed.

Below we show a code snippet that generates the expressions in question for $N=3,\cdots,7$ and verifies two things: first, that the results always have the same functional form as in $(2)$, and second, that they match the results obtained with the direct method above. Here we go:

(*Collection of results for N=2,3,4,5*)
myListGen[ll_List, NN_Integer] :=
  Module[{newll, mult, tmp0, tmp, count, lltmp, ll1},
   newll = ConstantArray[0, {Length[ll] (NN + 1)}]; count = 1;
   Do[
    mult = ll[[which, 1]]; tmp0 = Join[{0}, ll[[which, 2]]];
    Do[
     tmp = tmp0; tmp[[j]] += 1;
     newll[[count]] = {mult, tmp};
     count++;
     , {j, 1, NN + 1}];
    , {which, 1, Length[ll]}];
   (*Remove duplicates*)
   lltmp = Tally[newll]; ll1 = ConstantArray[0, {Length[lltmp]}];
   Do[
    If[lltmp[[which, 2]] == 1, ll1[[which]] = lltmp[[which, 1]],
     ll1[[which]] = {lltmp[[which, 1, 1]] lltmp[[which, 2]], lltmp[[which, 1, 2]]}];
    , {which, 1, Length[lltmp]}];
   ll1];

Clear[i]; ll = {1/(A[1]^2 A[2])};
ress = Simplify[ress /. A[j_] :> A[j] - If[j == 1, 0, A[j - 1]]];
Do[
  t0 = TimeUsed[];
  ll0 = {{1, {1}}};
  Do[ll0 = myListGen[ll0, mm];, {mm, 1, NN - 2}];
  Print["NN,Length[ll0]=", {NN, Length[ll0]}];
  lenb20 = Floor[Length[ll0]/20]; S = 0; mcount = 0;
  Do[
   S += ll0[[which, 1]] With[{mtmp = DeleteCases[ll0[[which, 2]], 0]},
        D[#, Evaluate[Sequence @@ Select[Table[{A[j], ll0[[which, 2, j]]}, {j, 1, NN - 1}], #[[2]] != 0 &]]]] & /@ {Expand[ress[[NN - 2]]]};
   If[Mod[which, Max[lenb20, 1]] == 0, PrintTemporary[5 (mcount++), " percent done."]];
   , {which, 1, Length[ll0]}];
  mres = (-1)^(NN - 1)*S;
  mres = (mres 1/A[NN]) // Together; t0 = TimeUsed[] - t0;
  t1 = TimeUsed[];
  If[NN <= 5, mres1 = ress[[NN - 1]], mres1 = 0];
  If[NN <= 5 && ! (Simplify[mres - mres1] === 0), Print["Error"]; Break[]];
  ll = Join[ll, {mres}]; If[NN > 5, ress = Join[ress, {ll[[-1]]}]];
  t1 = TimeUsed[] - t1;
  PrintTemporary["NN=", NN, " done in =", {t0, t1}, " secs."];
  , {NN, 3, 7}];
Denominator[ll] // MatrixForm
Table[And @@ ((Total[Exponent[#, Table[A[j], {j, 1, NN}]]] == Binomial[NN, 3]) & /@ List @@ Expand[Numerator[ll[[NN - 1]]]]), {NN, 2, 7}]

(image: the denominators for $N=2,\cdots,7$ and a list of True values confirming the functional form $(2)$)

Update:

This is just a note to emphasize that the results derived here are very useful. The quantity ${\mathfrak I}_N(\vec{A})$ is nothing other than a generating function of various spectral moments and/or spectral correlation coefficients of a sample correlation matrix (a Wishart matrix). Those quantities can be obtained by differentiating the generating function appropriately. For example, if the underlying correlation matrix (the oracle matrix) is the identity matrix then the spectral moments read:

\begin{eqnarray} m_p &=& \frac{{\mathfrak P}_{N,T}}{N \cdot T^p} \int\limits_{{\mathbb R}_{\ge}^N} \left( \sum\limits_{j=1}^N \lambda_j^p \right) \cdot \prod\limits_{j=1}^N \lambda_j^{\frac{T-N-1}{2}} \cdot \prod\limits_{1 \le i < j \le N} (\lambda_i - \lambda_j) \cdot e^{-\frac{1}{2} \sum\limits_{j=1}^N \lambda_j} d^N\lambda \\ &=& (-1)^{N \frac{T-N-1}{2}+p} \frac{{\mathfrak P}_{N,T}}{N \cdot T^p} \sum\limits_{j=1}^N \left(\prod\limits_{\xi=1}^N \partial^{ \frac{T-N-1}{2} + \delta_{\xi,j} \cdot p}_{A_\xi} \right) \left. {\mathfrak I}_N(\vec{A}) \right|_{\vec{A}=\frac{1}{2}(1,1,\cdots,1)} \tag{5} \end{eqnarray} The sign factor accounts for each derivative $\partial_{A_\xi}$ pulling down a factor $-\lambda_\xi$.

where $p=0,1,2,\cdots$.
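As a quick numerical check of $(5)$, here is a SymPy sketch (Python, with my own helper names) for $N=2$, $T=5$, so ${\mathfrak a} = (T-N-1)/2 = 1$; it uses the known ${\mathfrak I}_2 = 1/({\bar A}_1^2 {\bar A}_2)$ and, like the Mathematica code below, takes the absolute value instead of tracking the overall sign. It returns $m_0=1$, $m_1=1$, $m_2=8/5 = N/T+1+1/T$:

```python
import sympy as sp

def mGamma(N, x):
    # multivariate gamma function Gamma_N(x)
    return sp.pi**sp.Rational(N*(N - 1), 4)*sp.prod(
        sp.gamma(x - sp.Rational(j, 2)) for j in range(N))

def Pfct(N, T):
    # normalization frak P_{N,T}
    return sp.pi**sp.Rational(N**2, 2)*2**sp.Rational(-N*T, 2)/(
        mGamma(N, sp.Rational(N, 2))*mGamma(N, sp.Rational(T, 2)))

A1, A2 = sp.symbols('A1 A2', positive=True)
I2 = 1/(A1**2*(A1 + A2))              # frak I_2 in terms of A1, A2

N, T = 2, 5
a = (T - N - 1)//2                    # frak a = 1

def moment(p):
    half = {A1: sp.Rational(1, 2), A2: sp.Rational(1, 2)}
    total = sum(sp.Abs(sp.diff(I2, A1, a + p*(j == 1), A2, a + p*(j == 2)).subs(half))
                for j in (1, 2))
    return sp.simplify(Pfct(N, T)/(N*T**p)*total)

print([moment(p) for p in (0, 1, 2)])  # [1, 1, 8/5]
```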

To illustrate that this really works, we took $N=2,3,4$ and $T=2 \zeta+1+N$ for $\zeta=1,\cdots,5$, and for each value of $(N,T)$ we computed the first three spectral moments, i.e. $p=0,1,2$, using the last equation in $(5)$. The results are given below, with rows labelled by $N$ and columns by $T$. Here we go:

mGamma[NN_, x_] := 
  Pi^(NN (NN - 1)/4) Product[Gamma[(2 x - j)/2], {j, 0, NN - 1}];
Pfct[NN_, T_] := 
 Pi^(NN^2/2) 2^(-NN T/2)/(mGamma[NN, NN/2] mGamma[NN, T/2])

Table[With[{T = 2 Zeta + 1 + NN},
   Abs[Pfct[NN, T]/(NN T^P) Sum[(D[ll[[NN - 1]] /. A[j_] :> Sum[A[xi], {xi, 1, j}],
        Evaluate[Sequence @@ Table[{A[xi], Zeta + If[xi == j, P, 0]}, {xi, 1, NN}]]] /. A[eta_] :> 1/2), {j, 1, NN}]]],
  {NN, 2, 4}, {Zeta, 1, 5}, {P, 0, 2}] // MatrixForm

(image: table of the first three spectral moments for $N=2,3,4$ and $T=2\zeta+1+N$, $\zeta=1,\cdots,5$)

The reader can check against

Repetowicz, Przemysław; Richmond, Peter, Statistical inference of multivariate distribution parameters for non-Gaussian distributed time series, Acta Phys. Pol. B 36, No. 9, 2785-2796 (2005). ZBL1372.91126.

that the results are correct. Indeed, $m_1 = 1$ and $m_2 = N/T + 1 + 1/T$, as expected.

Update 1:

This is to illustrate how to calculate the spectral moments when the oracle matrix $\Sigma$ is not the identity. Here we only give the result for $N=3$. Without loss of generality we take $\Sigma = \operatorname{diag}(s_1,s_2,s_3)$ and define (see my answer to this question for details) three auxiliary quantities as follows:


\begin{eqnarray} {\mathfrak A}_1(\vec{\alpha}) &=& \frac{1}{4} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1\right)+\frac{1}{8} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1-2 \alpha _2\right)+\frac{1}{8} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1+2 \alpha _2\right)+\frac{1}{4} \left(\frac{1}{s_2}-\frac{2}{s_3}+\frac{1}{s_1}\right) \cos \left(2 \alpha _2\right)+\frac{1}{4} \left(\frac{1}{s_2}+\frac{2}{s_3}+\frac{1}{s_1}\right)\\ {\mathfrak A}_2(\vec{\alpha}) &=& \frac{1}{8} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \sin \left(2 \alpha _1+\alpha _2-2 \alpha _3\right)+\frac{1}{8} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \sin \left(2 \alpha _1-\alpha _2+2 \alpha _3\right)+\frac{1}{8} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \sin \left(2 \alpha _1-\alpha _2-2 \alpha _3\right)+\frac{1}{8} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \sin \left(2 \alpha _1+\alpha _2+2 \alpha _3\right)+\frac{1}{32} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1-2 \alpha _2-2 \alpha _3\right)+\frac{1}{32} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1+2 \alpha _2-2 \alpha _3\right)+\frac{1}{32} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1-2 \alpha _2+2 \alpha _3\right)+\frac{1}{32} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1+2 \alpha _2+2 \alpha _3\right)+\frac{1}{8} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1\right)+\frac{1}{16} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1-2 \alpha _2\right)+\frac{1}{16} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1+2 \alpha _2\right)+\frac{3}{16} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1-2 \alpha _3\right)+\frac{3}{16} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1+2 \alpha _3\right)+\frac{1}{16} \left(\frac{1}{s_2}-\frac{2}{s_3}+\frac{1}{s_1}\right) \cos \left(2 \alpha _2-2 \alpha _3\right)+\frac{1}{8} 
\left(\frac{1}{s_2}-\frac{2}{s_3}+\frac{1}{s_1}\right) \cos \left(2 \alpha _3\right)+\frac{1}{16} \left(\frac{1}{s_2}-\frac{2}{s_3}+\frac{1}{s_1}\right) \cos \left(2 \alpha _2+2 \alpha _3\right)+\frac{1}{8} \left(-\frac{1}{s_2}+\frac{2}{s_3}-\frac{1}{s_1}\right) \cos \left(2 \alpha _2\right)+\frac{1}{8} \left(\frac{3}{s_2}+\frac{2}{s_3}+\frac{3}{s_1}\right)\\ {\mathfrak A}_3(\vec{\alpha}) &=& \frac{1}{8} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \sin \left(2 \alpha _1-\alpha _2-2 \alpha _3\right)+\frac{1}{8} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \sin \left(2 \alpha _1+\alpha _2+2 \alpha _3\right)+\frac{1}{8} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \sin \left(2 \alpha _1+\alpha _2-2 \alpha _3\right)+\frac{1}{8} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \sin \left(2 \alpha _1-\alpha _2+2 \alpha _3\right)+\frac{3}{16} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1-2 \alpha _3\right)+\frac{3}{16} \left(\frac{1}{s_1}-\frac{1}{s_2}\right) \cos \left(2 \alpha _1+2 \alpha _3\right)+\frac{1}{8} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1\right)+\frac{1}{16} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1-2 \alpha _2\right)+\frac{1}{16} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1+2 \alpha _2\right)+\frac{1}{32} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1-2 \alpha _2-2 \alpha _3\right)+\frac{1}{32} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1+2 \alpha _2-2 \alpha _3\right)+\frac{1}{32} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1-2 \alpha _2+2 \alpha _3\right)+\frac{1}{32} \left(\frac{1}{s_2}-\frac{1}{s_1}\right) \cos \left(2 \alpha _1+2 \alpha _2+2 \alpha _3\right)+\frac{1}{8} \left(-\frac{1}{s_2}+\frac{2}{s_3}-\frac{1}{s_1}\right) \cos \left(2 \alpha _2\right)+\frac{1}{16} \left(-\frac{1}{s_2}+\frac{2}{s_3}-\frac{1}{s_1}\right) \cos \left(2 \alpha _2-2 \alpha _3\right)+\frac{1}{8} \left(-\frac{1}{s_2}+\frac{2}{s_3}-\frac{1}{s_1}\right) \cos 
\left(2 \alpha _3\right)+\frac{1}{16} \left(-\frac{1}{s_2}+\frac{2}{s_3}-\frac{1}{s_1}\right) \cos \left(2 \alpha _2+2 \alpha _3\right)+\frac{1}{8} \left(\frac{3}{s_2}+\frac{2}{s_3}+\frac{3}{s_1}\right) \end{eqnarray}

We also define an average over a cube as follows:

\begin{equation} \left< \cdots \right> := \int\limits_{[0,2\pi]^3} \cdots |\cos(\alpha_2)| d\alpha_1 d\alpha_2 d\alpha_3 \end{equation}

We also define an auxiliary constant:

\begin{eqnarray} {\mathfrak C}^{({\mathfrak a},p)}_{l_1,l_2,l_3} := (3 {\mathfrak a} + p - l_2-l_3)! (l_2-l_1+l_3+1)! (l_1+1) \cdot \left( \frac{({\mathfrak a}+p)!}{(l_2-l_1)! ({\mathfrak a}+p-l_2)!} \frac{{\mathfrak a}!}{l_3! ({\mathfrak a}-l_3)!} 1_{0 \le l_3 \le {\mathfrak a} } + \frac{({\mathfrak a})!}{(l_2-l_1)! ({\mathfrak a}-l_2)!} \frac{({\mathfrak a}+p)!}{l_3! ({\mathfrak a}+p-l_3)!} 1_{0 \le l_1 \le l_2 \le {\mathfrak a} } + \frac{({\mathfrak a})!}{(l_2-l_1)! ({\mathfrak a}-l_2)!} \frac{({\mathfrak a})!}{l_3! ({\mathfrak a}-l_3)!} 1_{0 \le l_3 \le {\mathfrak a} } 1_{0 \le l_1 \le l_2 \le {\mathfrak a} } \right) \end{eqnarray}

where ${\mathfrak a} := (T-N-1)/2$.


Then the spectral moments of the sample correlation matrix read:

\begin{eqnarray} m_p &=& \frac{1}{4 (2\pi)^2 \cdot \det(\Sigma)^{\frac{T}{2}} } \frac{{\mathfrak P}_{N,T}}{N \cdot T^p} \sum\limits_{j=1}^N \left< \left(\prod\limits_{\xi=1}^N \partial^{ \frac{T-N-1}{2} + \delta_{\xi,j} \cdot p}_{A_\xi} \right) \left. {\mathfrak I}_N(\vec{A}) \right|_{\vec{A}=\frac{1}{2}({\mathfrak A}_1(\vec{\alpha}),{\mathfrak A}_2(\vec{\alpha}),{\mathfrak A}_3(\vec{\alpha}))} \right> \\ &=& \frac{1}{4 (2\pi)^2 \cdot \det(\Sigma)^{\frac{T}{2}} } \frac{{\mathfrak P}_{N,T}}{N \cdot T^p} (-1)^{3 {\mathfrak a}+p} \, 2^{6 + 3 {\mathfrak a}+p} \sum\limits_{0 \le l_1 \le l_2 \le {\mathfrak a}+p} \; \sum\limits_{l_3=0}^{{\mathfrak a}+p} {\mathfrak C}^{({\mathfrak a},p)}_{l_1,l_2,l_3} \left< \left. \frac{\left(A_1+A_2\right) \left(l_1+2\right)+A_1 \left(-l_1+l_2+l_3+2\right) }{A_1^{l_1+3} \left(A_1+A_2\right)^{-l_1+l_2+l_3+3} \left(A_1+A_2+A_3\right)^{3 {\mathfrak a}+p-l_2-l_3+1}} \right|_{A_j\rightarrow {\mathfrak A}_j(\vec{\alpha})} \right> \tag{6} \end{eqnarray}

We confirm, as always, formula $(6)$ by comparing it with the exact result obtained using Isserlis' theorem. We took $(N,T)=(3,8)$ and $p=0,1,2,3$, and we get the following:

(*Definitions. All self contained.*)

Clear[m2]; a =.; s =.; mGamma[NN_, x_] := Pi^(NN (NN - 1)/4) Product[Gamma[(2 x - j)/2], {j, 0, NN - 1}]; Pfct[NN_, T_] := Pi^(NN^2/2) 2^(-NN T/2)/(mGamma[NN, NN/2] mGamma[NN, T/2]); m2[NN_Integer, T_Integer, p_Integer, s_List, a_List] := With[{aa = (T - NN - 1)/2, AA1 = 1/4 Cos[2 a[[1]]] (1/s[[1]] - 1/s[[2]]) + 1/8 Cos[2 a[[1]] - 2 a[[2]]] (1/s[[1]] - 1/s[[2]]) + 1/8 Cos[2 a[[1]] + 2 a[[2]]] (1/s[[1]] - 1/s[[2]]) + 1/4 Cos[2 a[[2]]] (1/s[[1]] + 1/s[[2]] - 2/s[[3]]) + 1/4 (1/s[[1]] + 1/s[[2]] + 2/s[[3]]), AA2 = 1/32 Cos[ 2 a[[1]] - 2 a[[2]] - 2 a[[3]]] (1/s[[1]] - 1/s[[2]]) + 1/32 Cos[2 a[[1]] + 2 a[[2]] - 2 a[[3]]] (1/s[[1]] - 1/s[[2]]) + 1/32 Cos[2 a[[1]] - 2 a[[2]] + 2 a[[3]]] (1/s[[1]] - 1/s[[2]]) + 1/32 Cos[2 a[[1]] + 2 a[[2]] + 2 a[[3]]] (1/s[[1]] - 1/s[[2]]) + 1/8 Cos[2 a[[1]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/16 Cos[2 a[[1]] - 2 a[[2]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/16 Cos[2 a[[1]] + 2 a[[2]]] (-(1/s[[1]]) + 1/s[[2]]) + 3/16 Cos[2 a[[1]] - 2 a[[3]]] (-(1/s[[1]]) + 1/s[[2]]) + 3/16 Cos[2 a[[1]] + 2 a[[3]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/16 Cos[2 a[[2]] - 2 a[[3]]] (1/s[[1]] + 1/s[[2]] - 2/s[[3]]) + 1/8 Cos[2 a[[3]]] (1/s[[1]] + 1/s[[2]] - 2/s[[3]]) + 1/16 Cos[2 a[[2]] + 2 a[[3]]] (1/s[[1]] + 1/s[[2]] - 2/s[[3]]) + 1/8 Cos[2 a[[2]]] (-(1/s[[1]]) - 1/s[[2]] + 2/s[[3]]) + 1/8 (3/s[[1]] + 3/s[[2]] + 2/s[[3]]) + 1/8 (-(1/s[[1]]) + 1/s[[2]]) Sin[2 a[[1]] - a[[2]] - 2 a[[3]]] + 1/8 (1/s[[1]] - 1/s[[2]]) Sin[2 a[[1]] + a[[2]] - 2 a[[3]]] + 1/8 (1/s[[1]] - 1/s[[2]]) Sin[2 a[[1]] - a[[2]] + 2 a[[3]]] + 1/8 (-(1/s[[1]]) + 1/s[[2]]) Sin[2 a[[1]] + a[[2]] + 2 a[[3]]], AA3 = 3/16 Cos[2 a[[1]] - 2 a[[3]]] (1/s[[1]] - 1/s[[2]]) + 3/16 Cos[2 a[[1]] + 2 a[[3]]] (1/s[[1]] - 1/s[[2]]) + 1/8 Cos[2 a[[1]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/16 Cos[2 a[[1]] - 2 a[[2]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/16 Cos[2 a[[1]] + 2 a[[2]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/32 Cos[ 2 a[[1]] - 2 a[[2]] - 2 a[[3]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/32 Cos[ 2 a[[1]] + 2 a[[2]] - 2 a[[3]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/32 
Cos[ 2 a[[1]] - 2 a[[2]] + 2 a[[3]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/32 Cos[ 2 a[[1]] + 2 a[[2]] + 2 a[[3]]] (-(1/s[[1]]) + 1/s[[2]]) + 1/8 Cos[2 a[[2]]] (-(1/s[[1]]) - 1/s[[2]] + 2/s[[3]]) + 1/16 Cos[ 2 a[[2]] - 2 a[[3]]] (-(1/s[[1]]) - 1/s[[2]] + 2/s[[3]]) + 1/8 Cos[2 a[[3]]] (-(1/s[[1]]) - 1/s[[2]] + 2/s[[3]]) + 1/16 Cos[ 2 a[[2]] + 2 a[[3]]] (-(1/s[[1]]) - 1/s[[2]] + 2/s[[3]]) + 1/8 (3/s[[1]] + 3/s[[2]] + 2/s[[3]]) + 1/8 (1/s[[1]] - 1/s[[2]]) Sin[2 a[[1]] - a[[2]] - 2 a[[3]]] + 1/8 (-(1/s[[1]]) + 1/s[[2]]) Sin[2 a[[1]] + a[[2]] - 2 a[[3]]] + 1/8 (-(1/s[[1]]) + 1/s[[2]]) Sin[2 a[[1]] - a[[2]] + 2 a[[3]]] + 1/8 (1/s[[1]] - 1/s[[2]]) Sin[2 a[[1]] + a[[2]] + 2 a[[3]]]}, Pfct[NN, T]/(NN T^p) (-1)^(3 aa + p) 2^( 6 + 3 aa + p) (Sum[(aa + p)!/((l2 - l1)! (aa + p - l2)!) aa!/( l3! (aa - l3)!) (3 aa + p - l2 - l3)! (l2 - l1 + l3 + 1)! (l1 + 1) (((l2 - l1 + l3 + 2) AA1 + (l1 + 2) (AA1 + AA2))/(AA1^( l1 + 3) (AA1 + AA2)^(l2 - l1 + l3 + 3) (AA1 + AA2 + AA3)^( 3 aa + p - l2 - l3 + 1))), {l1, 0, aa + p}, {l2, l1, aa + p}, {l3, 0, aa}] + Sum[(aa)!/((l2 - l1)! (aa - l2)!) (aa + p)!/( l3! (aa + p - l3)!) (3 aa + p - l2 - l3)! (l2 - l1 + l3 + 1)! (l1 + 1) (((l2 - l1 + l3 + 2) AA1 + (l1 + 2) (AA1 + AA2))/(AA1^( l1 + 3) (AA1 + AA2)^(l2 - l1 + l3 + 3) (AA1 + AA2 + AA3)^( 3 aa + p - l2 - l3 + 1))), {l1, 0, aa}, {l2, l1, aa}, {l3, 0, aa + p}] + Sum[(aa)!/((l2 - l1)! (aa - l2)!) (aa)!/( l3! (aa - l3)!) (3 aa + p - l2 - l3)! (l2 - l1 + l3 + 1)! (l1 + 1) (((l2 - l1 + l3 + 2) AA1 + (l1 + 2) (AA1 + AA2))/(AA1^( l1 + 3) (AA1 + AA2)^(l2 - l1 + l3 + 3) (AA1 + AA2 + AA3)^( 3 aa + p - l2 - l3 + 1))), {l1, 0, aa}, {l2, l1, aa}, {l3, 0, aa}]) ]

{NN, T} = {3, 8}; a1 =.; a2 =.; a3 =.; rr =.; W =.; Clear[M]; ss = RandomReal[{1/2, 3}, 2, WorkingPrecision -> 20]; s = {ss[[1]], ss[[1]], ss[[2]]}; Table[NIntegrate[ m2[NN, T, p, s, {a1, a2, a3}] Abs[Cos[a2]], {a1, 0, 2 Pi}, {a2, 0, 2 Pi}, {a3, 0, 2 Pi}], {p, 0, 3}]/(4 (2 Pi)^2 (Times @@ s)^(T/2)) ({1, M[1], (rr^1 M[1]^2 + M[2] (1 + W)/W), rr^2 M[1]^3 + (3 rr (1 + W) M[1] M[2])/ W + (((4 + 3 W + W^2) M[3])/W^2)} /. rr :> NN/T /. W :> T /. M[p_] :> Mean[s^p]) // N

(image: numerical output comparing the formula above with the Isserlis'-theorem result for $(N,T)=(3,8)$ and $p=0,\cdots,3$)

  • Bravo! Glad to see that these things are useful to somebody. P.S. It might be interesting to see if combinatorialists have a name for the multivariable polynomials $P(\ldots)$ that are generated by this process. – MathFont Apr 18 '23 at 20:34
  • I assume that you verified that the numerical coefficients are correct, as well as the monomial powers? (It was hard for me to read and interpret any of the data you posted.) – MathFont Apr 18 '23 at 20:39
  • @MathWonk: The quantities I am looking to compute are the spectral moments of the correlation matrix, i.e. $m_p:= 1/N Tr[ c^p]$ where $c := 1/T {\tilde C}\cdot X \cdot X^T \cdot {\tilde C}^T$ is the sample correlation matrix and $C:= {\tilde C}\cdot {\tilde C}^T$ is the underlying correlation matrix. Here $X$ is a random matrix of dimensions $N\times T$ with iid entries being Gaussian distributed with mean zero and variance one and ${\tilde C}$ is a non-random matrix of dimensions $N\times N$. Computing those quantities is important in statistics, e.g. for separating signal from noise. – Przemo Apr 19 '23 at 09:20
  • @MathWonk: Now, those quantities have been computed before under the simplifying assumption that $N$ and $T$ go to infinity and $N/T$ is fixed. If both $N$ and $T$ are finite then one can expand the trace in the definition of the moment and then apply the Isserlis' theorem https://en.wikipedia.org/wiki/Isserlis%27_theorem . The results are $(m_p)_{p=1}^3=\left\{M_1,M_1^2 r+\frac{M_2 (T+1)}{T},M_1^3 r^2+\frac{3 M_2 M_1 r (T+1)}{T}+\frac{M_3 \left(T^2+3 T+4\right)}{T^2}\right\}$. Here $M_p:= 1/N Tr[C^p]$ are spectral moments of the oracle matrix and $r:=N/T$. – Przemo Apr 19 '23 at 09:28
  • @MathWonk It turns out there is an alternative way of computing those moments, a way which makes use of the integral we just computed. This hinges on the result for the joint probability density function of eigenvalues for Wishart matrices (see Theorem 7.1.2 "Wishart Distribution" section 7.1.1 page 445 in: John Harnad, Random Matrices, Random Processes and Integrable Systems, Springer 201 for example). This very quantity involves the famous Itzykson-Zuber-Harish-Chandra integral that has not been evaluated yet. The quantities ${\mathfrak A}_j(\vec{\alpha})$ are related to this integral. – Przemo Apr 19 '23 at 09:37
  • @MathWonk What I would like to do now is to simplify the expression (5) and possibly come up with a way of computing the integral over the angles because numerical computations converge very slowly. – Przemo Apr 19 '23 at 09:42
  • @MathWonk For more information on the Itzykson-Zuber-Harish-Chandra integral see this comprehensive blog post by Terence Tao https://terrytao.wordpress.com/2013/02/08/the-harish-chandra-itzykson-zuber-integral-formula/ . – Przemo Apr 19 '23 at 09:45
  • FYI. I sent you my contact info, (to your yahoo acct), should you wish to continue discussing the angle problem. – MathFont Apr 20 '23 at 20:04