2

Suppose we have an ergodic Markov chain on a finite state space $X=\{1,2, \ldots, r\}$ with transition matrix $A$ and stationary measure $\pi$, which we also take to be the initial distribution. I want to find the following limit:

$$\lim_{n \to \infty} \frac{\log P\{\omega_{i} \neq 1 : i=0,1,2, \ldots, n \}}{n}$$

I'm honestly quite lost on how to go about this; it seems like the law of large numbers for Markov chains should be relevant.

user135520
  • 2,137
  • Is there a typo in the question? maybe you meant to sum up some probabilities? Anyway, you might find https://math.stackexchange.com/questions/155839/on-ces%c3%a0ro-convergence-if-x-n-to-x-then-z-n-fracx-1-dots-x-nn helpful – E-A Feb 05 '21 at 07:55
  • Given the assumptions that the chain is ergodic and stationary (and on a finite state space, no less), it should be clear that the probability of a state never being visited is zero. So in fact the numerator in your expression has limit zero with probability one - which of course implies the same when divided by $n$. – Math1000 Feb 05 '21 at 14:50
  • 1
    It's not clear (to me) what you mean by the event $\{\omega_i \neq 1: i=0,1,2, \ldots,n\}$. Is it to be $$ \bigcap_{i=0}^n \{\omega: \omega_i \neq 1\} $$ or something else? – John Dawkins Feb 05 '21 at 15:34
  • 1
    My guess is you are missing a log in the numerator. If you include a log, the limit can be computed using the spectral decomposition of the chain. – Yuval Peres Feb 05 '21 at 17:15
  • @YuvalPeres, thank you, yes, I forgot the log in the numerator. Does the spectral decomposition here apply to the transition matrix? – user135520 Feb 05 '21 at 20:42
  • 1
    @JohnDawkins, The way I read it was the probability that a sample path $\omega$ doesn't visit state $1$, so I believe that is the intersection of sets you wrote. – user135520 Feb 05 '21 at 20:51
  • 1
    Yes, to analyze high powers of the transition matrix diagonalize it if possible, otherwise consider the https://en.wikipedia.org/wiki/Jordan_normal_form – Yuval Peres Feb 05 '21 at 23:27
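To illustrate the diagonalization suggested in the last comment, here is a minimal sketch (the 3-state matrix `P` is a made-up example, not from the question): computing a high power of an ergodic transition matrix from its eigendecomposition, after which every row of $P^n$ is close to the stationary distribution.

```python
import numpy as np

# A hypothetical 3-state ergodic chain (not from the question).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Diagonalize: P = V diag(w) V^{-1}. This P has distinct eigenvalues
# 1, 0.3, 0.2, so it is diagonalizable; otherwise use the Jordan form.
w, V = np.linalg.eig(P)

# High power via the decomposition: P^n = V diag(w^n) V^{-1}.
n = 50
Pn = (V @ np.diag(w**n) @ np.linalg.inv(V)).real

# Every eigenvalue except 1 has modulus < 1, so each row of P^n
# converges to the stationary distribution.
print(Pn)
```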

1 Answer

2

Let $M$ be the substochastic matrix obtained from the transition matrix $P$ by erasing the first row and column, and let $\lambda(M)<1$ denote its Perron eigenvalue [1]. The probability in the numerator of the question is $\pi M^n {\bf 1}$ where ${\bf 1}$ is the all ones vector. Thus $$\lim_n \frac {\log(\pi M^n {\bf 1})}{n} =\log \lambda(M)<0 \,. $$

[1] https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem
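As a numerical sanity check (the 3-state chain below is a made-up example, not part of the original answer), one can verify that $\frac{1}{n}\log(\pi M^n \mathbf{1})$ is already close to $\log \lambda(M)$ for moderate $n$:

```python
import numpy as np

# Hypothetical 3-state ergodic chain; state 1 is the state to be avoided.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Substochastic matrix M: erase the first row and column of P.
M = P[1:, 1:]
lam = max(abs(np.linalg.eigvals(M)))  # Perron eigenvalue, < 1

# Numerator pi M^n 1, with pi restricted to the surviving states {2, ..., r}.
n = 200
numerator = pi[1:] @ np.linalg.matrix_power(M, n) @ np.ones(M.shape[0])

# The two printed values nearly agree.
print(np.log(numerator) / n, np.log(lam))
```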

Yuval Peres
  • 21,955
  • 1
    Thank you Professor! I follow now, $\pi$ is the left eigenvector for $\lambda$ whose coefficients sum to one. – user135520 Feb 07 '21 at 22:18
  • 1
    $\pi$ need not be a left eigenvector of $M$, but $\pi \ge c \cdot v$ where $v$ is the left Perron eigenvector and $c$ is a scalar. That suffices; see the Perron–Frobenius theorem. – Yuval Peres Feb 08 '21 at 00:31
  • I see. $v$ is positive and guaranteed by the Perron Frobenius Theorem. But how does the inequality suffice in showing ${\pi}M^{n}\textbf{1} \sim \lambda(M)^{n}$. Sorry for the persistent bugging. – user135520 Feb 08 '21 at 17:21
  • 2
    The inequality $\pi \ge cv$ suffices because we are looking at the logarithm. Recall also Gelfand's formula for the spectral radius, which applies in complete generality: $|M^k|^{1/k} \to \lambda(M)$ as $k \to \infty$. – Yuval Peres Feb 09 '21 at 01:12
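Gelfand's formula from the last comment is easy to check numerically; in this sketch `M` is a made-up $2\times 2$ substochastic matrix (any square matrix and any matrix norm would do).

```python
import numpy as np

# Gelfand's formula: ||M^k||^{1/k} -> spectral radius of M, for any matrix norm.
M = np.array([[0.6, 0.2],
              [0.3, 0.4]])            # a made-up substochastic matrix
rho = max(abs(np.linalg.eigvals(M)))  # spectral radius lambda(M)

# k-th roots of the operator (2-)norm of M^k for increasing k.
norms = [np.linalg.norm(np.linalg.matrix_power(M, k), 2) ** (1.0 / k)
         for k in (1, 10, 100)]

# The norm roots approach rho as k grows.
print(norms, rho)
```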