
Let $$ M_1=\left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \\ \end{array} \right), \qquad M_2=\left( \begin{array}{cc} 1 & 0 \\ 1 & 1 \\ \end{array} \right),\qquad v = \left( \begin{array}{c} 1 \\ 1 \\ \end{array} \right) $$

and consider the $n$-fold random product, for $n \in \mathbb N$. More precisely, let

$$\Psi_n := \log \| M_{i_1}M_{i_2}\cdots M_{i_n}v \|_{L^1},$$ where the $i_k$ are i.i.d., each equal to $1$ or $2$ with probability $1/2$.

Weighting each of the $2^n$ equally likely outcomes by $2^{-n}$, in this question we regard the law of $\Psi_n$ as a compactly supported probability measure on $\mathbb R$ (i.e. a weighted sum of Dirac deltas).
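The law of $\Psi_n$ is easy to sample numerically. Below is a minimal Python sketch (NumPy assumed; the function name `psi_sample` is my own). The matrices are applied one at a time on the left, which reverses the order of multiplication, but since the $i_k$ are i.i.d. the distribution is unchanged:

```python
import numpy as np

M1 = np.array([[1.0, 1.0], [0.0, 1.0]])
M2 = np.array([[1.0, 0.0], [1.0, 1.0]])
v = np.array([1.0, 1.0])

def psi_sample(n, rng):
    """Draw one sample of Psi_n = log || M_{i_1} ... M_{i_n} v ||_{L^1}.

    The matrices are applied one at a time; since the i_k are i.i.d.,
    the order of multiplication does not change the distribution.
    """
    w = v.copy()
    for _ in range(n):
        w = (M1 if rng.random() < 0.5 else M2) @ w
    return np.log(w.sum())  # all entries stay positive, so ||w||_1 = sum(w)

rng = np.random.default_rng(0)
samples = np.array([psi_sample(50, rng) for _ in range(10_000)])
```

Each of the $2^{50}$ index sequences is equally likely, so the empirical histogram of `samples` approximates the measure just described.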

Central Limit Theorem

For probability measures $\mu_n$ and $\mu$, here we say

$$\mu_n\stackrel{\text{dist.}}\longrightarrow \mu$$

if, for each $-\infty < a \leq b < \infty$,

$$\mu_n\big([a,b]\big) \to \mu([a,b]) .$$

Under certain hypotheses on the $M_i$, one would expect, for generic $v$, that $\Psi_n$ satisfies a kind of Central Limit Theorem:

I.e., that there exist constants $\lambda_\ast > 0$ and $\sigma >0$ such that

$$ \frac{\Psi_n - n \lambda_\ast}{\sigma\sqrt n} \stackrel{\text{dist.}}\longrightarrow N(0,1) \qquad\text{as }n \to \infty,$$

where $N(0,1)$ is the standard normal distribution.
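For orientation, the constant $\lambda_\ast$ (the top Lyapunov exponent) and $\sigma$ can be estimated crudely by Monte Carlo. A sketch, assuming NumPy; the estimator names `lam_hat` and `sigma_hat` are mine, not canonical:

```python
import numpy as np

M1 = np.array([[1.0, 1.0], [0.0, 1.0]])
M2 = np.array([[1.0, 0.0], [1.0, 1.0]])
v = np.array([1.0, 1.0])

def psi_sample(n, rng):
    """One sample of Psi_n (entries stay positive, so ||.||_1 is the sum)."""
    w = v.copy()
    for _ in range(n):
        w = (M1 if rng.random() < 0.5 else M2) @ w
    return np.log(w.sum())

rng = np.random.default_rng(1)
n, trials = 100, 2_000
samples = np.array([psi_sample(n, rng) for _ in range(trials)])

lam_hat = samples.mean() / n              # crude estimate of lambda_*
sigma_hat = samples.std() / np.sqrt(n)    # crude estimate of sigma
z = (samples - n * lam_hat) / (sigma_hat * np.sqrt(n))  # candidate CLT rescaling
```

If the CLT above held, the histogram of `z` should look approximately standard normal; its skewness is one diagnostic for the behaviour reported in the next section.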

Reality

Calculating some histograms in Mathematica, the distribution of $\Psi_n$ (suitably normalized) does not seem to be converging to a Gaussian, but rather to some kind of skewed distribution: see the pictures below.

Questions:

  • What can $\Psi_n$ be converging to, if anything?
  • How would one prove that $\Psi_n$ is not converging to a Gaussian?

Histograms

Good Boy
  • Please make clear what should converge to what. What is $\Psi_n?$ (A function defined on which interval? How is it defined?) What is $v$? Is $v=[1\ 0]^T$ generic (enough), or should one also give a sense to "generic"?! Why should we expect some central limit theorem to work? And does the question want full proofs for exactly claimed limit distributions, or are experimental speculations that may work enough? – dan_fulea Dec 08 '20 at 12:29
  • (The experiment is very interesting, but as a question there are too many unknowns left to the answerer. Please invest some more detail; the effort will be rewarded, and the question will be upvoted many times!) – dan_fulea Dec 08 '20 at 12:32
  • Thank you for your comments, Dan. How does it look now? – Good Boy Dec 10 '20 at 16:06
  • Hm, maybe little bit of context? Why did you encounter this question? – Paresseux Nguyen Dec 15 '20 at 14:56
  • Seems like this has the answers you seek: https://arxiv.org/pdf/1603.09086.pdf (I haven't checked the "strong irreducibility" assumption, but "irreducibility" is satisfied, according to my definition at least, and this seems to be enough judging by the comment after Example 4.15; it also seems to satisfy the "unbounded in PGL" assumption by inspection of the nth power of either Jordan block.) –  Dec 15 '20 at 19:43
  • Oh, and it's worth noting the choice of matrix norm is not relevant here (by equivalence of norms on finite dimensional spaces). –  Dec 15 '20 at 19:56
  • Thanks Peter. The question in that case, if you think a CLT applies, is how would one go about proving strong irreducibility? – Good Boy Dec 16 '20 at 11:56

1 Answer


This is a partial answer, but given the lack of other answers hopefully it is helpful.

  1. When you say that $\Psi_{n}$ should converge to normal, be careful: the CLT would say only that $$ Z_{n} = \frac{\frac{\Psi_{n}}{n} - \lambda_{\ast}}{\sigma/\sqrt{n}} $$ converges to a normal distribution, and even that is only true if $\frac{1}{n}\Psi_{n}$ behaves sufficiently like a sample mean ($\Psi_{n}$ is not a sum of i.i.d. terms, so this is not obvious).
  2. The CLT depends on the original sampling distribution. For instance, it does not apply if the sampling distribution is Cauchy. More generally, if the sampling distribution comes from a family of non-Gaussian stable distributions, the CLT does not apply (I assume there are other ways for the CLT to fail, but I find this a particularly illuminating case).
  3. Lastly, as pointed out by Peter Morfe in the comments, the paper "Central Limit Theorems for Linear Groups" (Benoist and Quint 2016) seems to answer this problem.
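To illustrate the Cauchy case in point 2: the sample mean of $n$ i.i.d. standard Cauchy variables is again standard Cauchy, so it never concentrates, and no CLT rescaling can work. A quick NumPy sketch (the chosen sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# The average of n i.i.d. standard Cauchy variables is again standard
# Cauchy (the Cauchy law is stable), so sample means do not concentrate.
n, reps = 1_000, 2_000
means = rng.standard_cauchy(size=(reps, n)).mean(axis=1)

# The interquartile range of a standard Cauchy is exactly 2 (quartiles at
# -1 and +1); here it does not shrink like 1/sqrt(n), as the CLT would give.
iqr = np.percentile(means, 75) - np.percentile(means, 25)
```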
Jacob Maibach