
There is a theorem by Ribando ("Measuring Solid Angles beyond Dimension Three", Discrete Comput. Geom. 36:479–487, 2006; https://link.springer.com/content/pdf/10.1007/s00454-006-1253-4.pdf?pdf=button):

Let $\Omega \subseteq \Bbb{R}^n$ be a solid angle spanned by unit vectors $\lbrace v_1 , \dots , v_n \rbrace$, let $V$ be the matrix whose $i$th column is $v_i$, and let $\alpha_{ij} = v_i \cdot v_j$. Let $T_{\alpha}$ be the following infinite multivariable Taylor series: $$T_{\alpha} = \frac{\det V}{(4 \pi)^{n/2}} \sum_{a \in \Bbb{N}^{\binom{n}{2}}} \left[ \frac{(-2)^{\sum_{i<j} a_{ij}}}{\prod_{i<j} a_{ij}!} \prod_{i} \Gamma\!\left( \frac{1 + \sum_{m \neq i} a_{im}}{2} \right) \right] \alpha^{a}$$ The series $T_{\alpha}$ agrees with the normalized measure of the solid angle $\Omega$ whenever $T_{\alpha}$ converges.

My question is how one can find the $a$ values, and what exactly this part means: $\sum_{a \in \Bbb{N}^{\binom{n}{2}}}$? Is it, in the end, an infinite sum over all possible $a \in \Bbb{N}^{\binom{n}{2}}$? Thanks.

  • Here's my read of it. Note first the multi-index notation $\alpha^a := \prod_{i<j} \alpha_{ij}^{a_{ij}}$. In analogy with the symmetric $\alpha_{ij}$, which are trivial on the diagonal (viz. $\alpha_{ii} = 1$), the matrices $a$ included in this sum are only allowed $\binom{n}{2}$ degrees of freedom (one per pair $i < j$), so the set thereof is denoted $\Bbb N^{\binom{n}{2}}$. But this is analogous to $\Bbb N^{n\times n}$, not to any set of the form $\Bbb N^k$. – J.G. Dec 28 '22 at 16:29
  • @J.G. Thanks for your response. Yes, I know that $a$ is not of just any size; for example, for $n = 4$ the size of the set is $6$. But I still don't know how we can determine each element of $a$. – A. R. Dec 28 '22 at 17:01

1 Answer


Let's consider a simpler expression to gain some intuition.

$$e^{x+y} = \left(e^x\right)\left( e^y \right)\approx \left(\sum_{k =0}^{N-1} \frac{x^k}{k!} \right)\left(\sum_{l =0}^{N-1} \frac{y^l}{l!} \right)$$

This contains terms $x^k y^l$ with various combinations of $k$ and $l$, and the summation runs over those terms; in this case you get $N^2$ of them. In the limit $N \to \infty$, the pairs $(k, l)$ take all possible values in $\mathbb{N}^2$ and the sum equals the exponential.
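
To make this concrete, here is a minimal R sketch (the names x, y, N are mine) that builds the truncated double sum and compares it with the exact value:

x = 0.3; y = 0.7; N = 10
kl = expand.grid(k = 0:(N-1), l = 0:(N-1))    # all N^2 index pairs (k, l)
approx = sum(x^kl$k / factorial(kl$k) * y^kl$l / factorial(kl$l))
approx - exp(x + y)                           # truncation error, tiny for N = 10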


In your application you get products of the upper- (or equivalently lower-) triangular terms $\alpha_{ij}$. For instance, when $\alpha$ is $4 \times 4$, you get sums of products of ${4 \choose 2} = 6$ factors like $$\boldsymbol{\alpha}^{\boldsymbol{a}} = \alpha_{12}^{a_{12}} \alpha_{13}^{a_{13}} \alpha_{14}^{a_{14}} \alpha_{23}^{a_{23}} \alpha_{24}^{a_{24}} \alpha_{34}^{a_{34}}$$

The $a_{ij}$ will each range from $0$ to $N-1$, giving $N^{n \choose 2}$ combinations (and in the limit you get all values in the space $\mathbb{N}^{n \choose 2}$).
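
As an aside, such an index grid is easy to enumerate mechanically in R, e.g. with expand.grid (variable names are mine):

N = 3; n = 4
grid = expand.grid(rep(list(0:(N-1)), choose(n, 2)))  # one column per pair i < j
nrow(grid)   # N^choose(n,2) = 3^6 = 729 rows, each row one exponent vector a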

In this answer you can see code that computes this for the first few terms. You can use that code to get a better idea of which terms are being computed. Eventually the code computes products like

# aj: vector of the choose(n,2) exponents a_ij; Maj: symmetric matrix with
# Maj[i,j] = a_ij and zero diagonal; alpha: dot products alpha_ij, same pair order as aj
prod1 = prod(factorial(aj))                   # prod_{i<j} a_ij!
term_i = sapply(1:dim, FUN = function(i) {gamma( 0.5 * (sum(Maj[i,-i])+1) )})  # Gamma((1 + sum_{m != i} a_im)/2)
prod2 = prod(term_i)                          # product of the Gamma factors over i
prod3 = prod(alpha^aj)                        # multi-index power alpha^a
pow = (-2)^sum(aj)                            # (-2)^(sum_{i<j} a_ij)
pow/prod1*prod2*prod3                         # one term of the series

And it does this for $N^{n \choose 2}$ different values of the vector $a_j$ (which contains the powers).

In the case of $N = 3$ and $n = 4$, the $a_j$ will look like the $3^6 = 729$ rows of the matrix below:

        [,1] [,2] [,3] [,4] [,5] [,6]
   [1,]    0    0    0    0    0    0
   [2,]    1    0    0    0    0    0
   [3,]    2    0    0    0    0    0
   [4,]    0    1    0    0    0    0
   [5,]    1    1    0    0    0    0
   [6,]    2    1    0    0    0    0
   [7,]    0    2    0    0    0    0
   [8,]    1    2    0    0    0    0
   [9,]    2    2    0    0    0    0
  [10,]    0    0    1    0    0    0
  [11,]    1    0    1    0    0    0
  [12,]    2    0    1    0    0    0
  ...
 [729,]    2    2    2    2    2    2
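
Putting the pieces together, here is a self-contained R sketch of how the whole truncated series could be accumulated; the example vectors are my own choice, and for arbitrary vectors the series may converge slowly or not at all, so treat it only as an illustration of the index set and the per-term formula:

set.seed(1)
V = matrix(rnorm(16), 4, 4)
V = sweep(V, 2, sqrt(colSums(V^2)), "/")      # normalize columns to unit vectors
G = t(V) %*% V                                # Gram matrix, G[i,j] = alpha_ij
alpha = G[lower.tri(G)]                       # pair order (1,2),(1,3),(1,4),(2,3),(2,4),(3,4)

N = 3; dim = 4
A = as.matrix(expand.grid(rep(list(0:(N-1)), choose(dim, 2))))  # 729 exponent vectors
total = 0
for (r in 1:nrow(A)) {
  aj = A[r, ]
  Maj = matrix(0, dim, dim)
  Maj[lower.tri(Maj)] = aj                    # same pair ordering as alpha
  Maj = Maj + t(Maj)
  term_i = sapply(1:dim, function(i) gamma(0.5 * (sum(Maj[i,-i]) + 1)))
  total = total + (-2)^sum(aj) / prod(factorial(aj)) * prod(term_i) * prod(alpha^aj)
}
T_alpha = abs(det(V)) / (4*pi)^(dim/2) * total   # abs() since column orientation is arbitrary here
T_alpha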
  • Many thanks for your response. So these $a_{ik}$ are not certain and definite values; they can be anything between $0$ and $N$. But how can you determine $N$, as small as possible, to guarantee convergence? – A. R. Dec 29 '22 at 11:56
  • I wouldn't call it 'not certain and definite' (but maybe this is just semantics; I guess we understand it the same way). The values are fixed, like $0, 1, 2, 3$, etc. The values $a_{ik}$ are the indices in a summation, just like in the one-dimensional Taylor series. Instead of a single sum $\sum_{a=0}^N$ you get multiple sums $\sum_{a_{12}=0}^N \sum_{a_{13}=0}^N \sum_{a_{23}=0}^N \cdots$ – Sextus Empiricus Dec 29 '22 at 12:04
  • To determine the convergence rate and how far to increase $N$, I would keep increasing $N$ until the change drops below a certain limit (a minimal sketch of this stopping rule appears after these comments). I am not sure whether that is close to the generally accepted method. If you want to make a lot of use of this formula, then you can probably use smarter ways to determine convergence or the size of the error. – Sextus Empiricus Dec 29 '22 at 12:11
  • Okay, I see. I was wondering if we could determine $N$ as a function of other known parameters, with a certain error order for convergence; then we could claim to have an approximate closed-form probability for the positive orthant for any arbitrary covariance and dimension. – A. R. Dec 29 '22 at 12:21
  • @Sextus Empiricus: Based on the paper, if the matrix $M(1, -|\alpha_{i,j}|)$ is positive semi-definite, then the probability always converges. Do you think that if the matrix $A$, or actually the covariance matrix, is PSD, then $M$ is always PSD and convergence is guaranteed? – A. R. Jan 02 '23 at 12:51
  • @AminRadbord I don't remember so well how I did that previous code, but I believe the only difference between the matrices was the eigenvalues, while the eigenvectors are the same. Shouldn't that preserve the positive semidefinite property? – Sextus Empiricus Jan 02 '23 at 14:14
  • @Sextus Empiricus: Unfortunately not. Here is a counterexample: $C = \begin{bmatrix} 2.0067 & 0.6704 & 3.7026 & -2.6689 \\ 0.6704 & 1.2844 & 1.2768 & 0.0333 \\ 3.7026 & 1.2768 & 10.6756 & -6.5839 \\ -2.6689 & 0.0333 & -6.5839 & 5.4901 \end{bmatrix}$. Then $A = C^{-1} = \begin{bmatrix} 3.1884 & -2.1665 & 0.4504 & 2.1032 \\ -2.1665 & 2.9861 & -1.0231 & -2.2983 \\ 0.4504 & -1.0231 & 0.7630 & 1.1402 \\ 2.1032 & -2.2983 & 1.1402 & 2.5859 \end{bmatrix}$. – A. R. Jan 02 '23 at 23:36
  • Yes, I realised later that what I did in the code was only changing from the covariance matrix to the matrix $A$. We can scale the variables, which leaves the orthant probabilities unchanged. So when we solve the problem for any covariance matrix, we first convert it to an equivalent problem with a covariance matrix whose diagonal equals $1$. – Sextus Empiricus Jan 02 '23 at 23:38
  • I didn't study the article very intensively, so I will have to look again at what the difference is between the matrices $M$ and $A$, and whether there really is no general convergence. I find it difficult to imagine why convergence would not happen, as in the univariate case. – Sextus Empiricus Jan 02 '23 at 23:40
  • @Sextus Empiricus: Continuing my last comment: then we have $A' = \operatorname{diag}(\operatorname{diag}(A))^{-1/2} \, A \, \operatorname{diag}(\operatorname{diag}(A))^{-1/2} = \begin{bmatrix} 1.0000 & -0.7021 & 0.2888 & 0.7325 \\ -0.7021 & 1.0000 & -0.6778 & -0.8271 \\ 0.2888 & -0.6778 & 1.0000 & 0.8117 \\ 0.7325 & -0.8271 & 0.8117 & 1.0000 \end{bmatrix}$, and then the matrix $M = \begin{bmatrix} 1.0000 & -0.7021 & -0.2888 & -0.7325 \\ -0.7021 & 1.0000 & -0.6778 & -0.8271 \\ -0.2888 & -0.6778 & 1.0000 & -0.8117 \\ -0.7325 & -0.8271 & -0.8117 & 1.0000 \end{bmatrix}$, which is not PSD. – A. R. Jan 02 '23 at 23:48
  • @Sextus Empiricus: $M$ is like $A$, with this difference: the diagonal terms are kept fixed, and each off-diagonal term is replaced by minus its absolute value. – A. R. Jan 02 '23 at 23:54
  • @Sextus Empiricus: If you try this counterexample $A'$, then you get Prob_simu = 0.0403 and Prob_theory = -3.4963e+08. – A. R. Jan 03 '23 at 00:01
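
Regarding the stopping rule mentioned in the comments above, here is a minimal R sketch (series_value is a hypothetical function returning the truncated sum for a given cutoff $N$, e.g. built from the loop shown earlier):

# series_value: hypothetical function returning the truncated sum for cutoff N
approx_until_stable = function(series_value, tol = 1e-8, N_max = 10) {
  prev = series_value(1)
  for (N in 2:N_max) {
    cur = series_value(N)
    if (abs(cur - prev) < tol) return(cur)   # change below tolerance: stop
    prev = cur
  }
  warning("no apparent convergence up to N_max")
  prev
}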