
Given a $2 \times 2$ matrix

$$A(k) = \begin{pmatrix} a_{1,1}(k) & a_{1,2}(k)\\ a_{2,1}(k) & a_{2,2}(k) \end{pmatrix}$$

where the $a_{m,n}(k) \sim \mathcal{N}(0,\,\sigma^{2})$ are independent Gaussian random variables, consider the product:

$$P(N) = \prod_{k=1}^N A(k)$$

If we define

$$d := \lim_{N\to\infty} \det\left( P(N) \right)$$

is it possible to prove that $d \lt \infty$?

  • Are you stuck at some step? – user3658307 Aug 01 '17 at 15:25
  • @user3658307: I only did some simulations with large $N$ $(N=10^4)$ and in some cases the result is finite – Riccardo.Alestra Aug 01 '17 at 15:28
  • In some cases it was infinite? All the $a_{i,j}(k)$ are independent right? – user3658307 Aug 01 '17 at 15:41
  • @Riccardo.Alestra: The determinant can only become infinite if at least one entry of $P$ is infinite, so your question boils down to whether the entries of $P$ are bounded. – MrYouMath Aug 01 '17 at 15:41
  • @user3658307: yes they are independent – Riccardo.Alestra Aug 01 '17 at 15:43
  • @MrYouMath: can you prove your statement? – Riccardo.Alestra Aug 01 '17 at 15:43
  • @Riccardo.Alestra: What do you mean by prove? The determinant of $P$ is given by $p_{11}p_{22}-p_{12}p_{21}$. It is clear that if exactly one of these values diverges, then the determinant of $P$ diverges. In the case in which several entries of $P$ diverge, it is more complicated. So, if only one entry diverges, this is sufficient for the determinant to diverge; if all entries are bounded, this is sufficient for a finite determinant. Also, note that the determinant of a product is the product of the determinants. – MrYouMath Aug 01 '17 at 15:47
  • @MrYouMath: OK. This could be a proof – Riccardo.Alestra Aug 01 '17 at 15:49
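The kind of check mentioned in the comments — multiplying out many random factors and inspecting the determinant — can be sketched in a few lines of pure Python (an illustrative sketch, not part of the original question; it also verifies numerically that the determinant is multiplicative, as noted in the comments):

```python
import math
import random

def rand_matrix(rng, sigma):
    # 2x2 matrix with i.i.d. N(0, sigma^2) entries
    return [[rng.gauss(0.0, sigma) for _ in range(2)] for _ in range(2)]

def matmul2(A, B):
    # product of two 2x2 matrices
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

rng = random.Random(0)
sigma, N = 0.5, 10   # modest N keeps the determinant well away from over/underflow

P = [[1.0, 0.0], [0.0, 1.0]]   # running product, initialised to the identity
det_prod = 1.0                 # running product of the per-factor determinants
for _ in range(N):
    A = rand_matrix(rng, sigma)
    P = matmul2(P, A)
    det_prod *= det2(A)

# det is multiplicative, so both quantities agree up to rounding
print(det2(P), det_prod)
```

For much larger $N$ (such as the $N=10^4$ mentioned above) the entries of $P$ grow or shrink exponentially, so a direct float computation over- or underflows; tracking the per-factor determinants instead sidesteps that.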

1 Answer


Let's define the following: $$ d_N = \det\left(\prod_{k=1}^N A(k)\right) = \prod_{k=1}^N \det(A(k))=\prod_{k=1}^N \big(a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k)\big) $$ where the $a_{ij}(k)\sim\mathcal{N}(0,\sigma^2)$ are independent. Since each factor has mean zero and the factors are independent, the expected value is $$ \mathbb{E}[d_N]=0=:\mu_N $$

Next, define $$ \hat{S}_1(k) = \hat{a}_{11}(k)\hat{a}_{22}(k),\qquad \hat{S}_2(k) = \hat{a}_{21}(k)\hat{a}_{12}(k) $$ where $\hat{a}_{ij}(k)\sim\mathcal{N}(0,1)$. So the variance is written: \begin{align} \mathbb{V}[d_N] &= \mathbb{E}[d_N^2]= \mathbb{E}\left[ \prod_{k=1}^N (a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k))^2 \right]\\ &=\prod_{k=1}^N \mathbb{E}\left[ (a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k))^2 \right]\\ &= \prod_{k=1}^N \mathbb{E}\left[ (\sigma^2\hat{S}_1(k)-\sigma^2\hat{S}_2(k))^2 \right]\\ &= \sigma^{4N}\prod_{k=1}^N \mathbb{E}\left[ (\hat{S}_1(k)-\hat{S}_2(k))^2 \right] \end{align}

Now for some random variable algebra. Each $\hat{S}_i(k)$ is a product of two independent standard normals; by the identity $XY = \frac{1}{4}\big[(X+Y)^2-(X-Y)^2\big]$ it equals one half of a difference of two independent $\chi^2_1$ variables, so it is not chi-squared itself but follows the normal-product distribution, with $\mathbb{E}[\hat{S}_i(k)]=0$ and $\mathbb{V}[\hat{S}_i(k)]=1$.
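A quick Monte Carlo sanity check of the single-factor moments (a hedged sketch; with $\sigma=1$ the sample mean should sit near $0$ and the second moment of $\det A(k)$ near $2\sigma^4=2$):

```python
import random

def det_sample(rng, sigma):
    # determinant of one 2x2 matrix with i.i.d. N(0, sigma^2) entries
    a11, a12, a21, a22 = (rng.gauss(0.0, sigma) for _ in range(4))
    return a11 * a22 - a12 * a21

rng = random.Random(1)
sigma, M = 1.0, 200_000
samples = [det_sample(rng, sigma) for _ in range(M)]

mean = sum(samples) / M
second_moment = sum(x * x for x in samples) / M
print(mean, second_moment)   # mean ≈ 0, second moment ≈ 2 * sigma**4
```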
Next, define $\mathfrak{D}_k= \hat{S}_1(k) - \hat{S}_2(k)$. Collecting the four $\chi^2_1$ pieces into independent pairs, $\mathfrak{D}_k$ is $\frac{1}{2}$ times a difference of two independent $\chi^2_2$ variables, i.e. a difference of two independent $\Gamma(1,1)$ variables. Differences of independent gamma variables (e.g. here, or A note on gamma difference distributions by Bernhard Klar) follow a variance-gamma (generalized Laplace) distribution: $$ \mathfrak{D}_k\sim\Gamma_\mathcal{V}(\mu,\alpha,\beta,\lambda,\gamma)=\Gamma_\mathcal{V}\left(0,1,0,1,1\right) $$ which here is in fact the standard Laplace distribution. This also tells us that: \begin{align} \mathbb{E}[\mathfrak{D}_k] &= \mu + \frac{2\beta\lambda}{\gamma^2} = 0 \\ \mathbb{V}[\mathfrak{D}_k] &= \frac{2\lambda}{\gamma^2}\left( 1 + \frac{2\beta^2}{\gamma^2} \right) = 2 \end{align} So now we can complete the computation: $$ \mathbb{V}[d_N]=\sigma^{4N}\prod_{k=1}^N \mathbb{E}\left[ \mathfrak{D}_k^2 \right] = \sigma^{4N}\prod_{k=1}^N (\mathbb{V}[\mathfrak{D}_k] + \mathbb{E}[\mathfrak{D}_k]^2) = 2^N\sigma^{4N} =: \varsigma^2_N $$ Anyway, woah, so that is potentially a very large variance, depending heavily on the value of $\sigma$. Ignoring the limit, we can bound our target using the Chebyshev inequality: $$ P(|d_N-\mu_N|\geq \varsigma_N\kappa) \leq \frac{1}{\kappa^2}\;\;\;\implies\;\;\; P(|d_N|\geq 2^{N/2}\sigma^{2N}\kappa) \leq \frac{1}{\kappa^2}$$ Maybe there is a better concentration inequality.
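The closed form $\mathbb{V}[d_N]=(2\sigma^4)^N$ can be checked by direct simulation (a hedged sketch; with $N=3$ and $\sigma=1$ the variance estimate should land near $2^3=8$):

```python
import random

def single_det(rng, sigma):
    # det of one 2x2 matrix with i.i.d. N(0, sigma^2) entries
    a11, a12, a21, a22 = (rng.gauss(0.0, sigma) for _ in range(4))
    return a11 * a22 - a12 * a21

rng = random.Random(2)
sigma, N, M = 1.0, 3, 200_000

# d_N is a product of N independent single-matrix determinants,
# so we can sample it without forming any matrix products
acc = 0.0
for _ in range(M):
    d = 1.0
    for _ in range(N):
        d *= single_det(rng, sigma)
    acc += d * d
var_est = acc / M   # E[d_N] = 0, so the mean of d_N^2 estimates the variance
print(var_est)      # ≈ (2 * sigma**4)**N = 8
```

The estimator has heavy tails (sixth moments of the factors enter), so expect noticeable scatter around 8 at this sample size.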

But, if we denote $\hat{\sigma} = 2^{1/4}\sigma$, so that $\varsigma^2_N = \hat{\sigma}^{4N}$, then at least what this tells us is: if $\hat{\sigma}<1$, the variance collapses geometrically in $N$, and by Chebyshev's inequality the probability that $d_N$ stays away from zero is essentially zero for large $N$.
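To illustrate the sub-critical regime numerically (a hedged sketch; $\sigma=0.5$ puts $\hat{\sigma}$ well below $1$), one can watch a single realization of $d_N$ collapse:

```python
import random

def single_det(rng, sigma):
    # det of one 2x2 matrix with i.i.d. N(0, sigma^2) entries;
    # det(P(N)) is the product of N such independent factors
    a11, a12, a21, a22 = (rng.gauss(0.0, sigma) for _ in range(4))
    return a11 * a22 - a12 * a21

rng = random.Random(3)
sigma, N = 0.5, 200   # sub-critical choice of sigma

d = 1.0
for _ in range(N):
    d *= single_det(rng, sigma)
print(abs(d))         # a minuscule positive number: the determinant collapses toward 0
```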

user3658307