
A person climbs an infinite ladder. At each jump, the person moves up one step with probability $1-p$, or slips and falls all the way to the bottom with probability $p$.

a) Represent the person's height on the ladder as a Markov chain, show that the stationary distribution is geometric, and find its parameter.

b) What is the long-run proportion of time for which the person is on the first step above the bottom of the ladder?

c) If the person has just fallen to the bottom, on average how many jumps will it take before he reaches step $k$?

So far I have the following:

Let $X_n$ be the step number at time $n$. Because $P(X_{n+1} = i+1 \mid X_n = i) = 1-p$ and $P(X_{n+1}=0 \mid X_n = i)=p$, we obtain the following system of equations:

$\pi_0 = p[\pi_0 + \pi_1 + \pi_2 + \cdots] = p \cdot 1$, $\pi_1 = (1-p)\pi_0$, $\pi_2 = (1-p)\pi_1$, $\dots$, and thus we find the pattern $\pi_n = p(1-p)^n$. However, I'm not sure whether this is correct, or how to proceed with the subsequent parts, especially given the infinite state space.

  • You have an error: $\sum_{k=1}^\infty \pi_k$ is probably not $1$, because that is $1-\pi_0$. But then your initial equation now only involves $\pi_0$, so you can solve it for $\pi_0$, which gives the initial condition you need to solve the recurrence to compute $\pi_k$ for $k \geq 1$. – Ian Feb 23 '17 at 00:31
  • Anyway, once you have the stationary distribution, that immediately answers #2 (proving this requires some "ergodicity" argument, i.e. you need to show that the time average converges to the stationary distribution, but this goes through under rather general assumptions). #3 is a fairly standard renewal theory problem (compute the mean time to hit a particular state by conditioning on the first step and using the total expectation formula). – Ian Feb 23 '17 at 00:34
  • @Ian I'm confused as to why $\sum_{k=1}^{\infty} \pi_k=1-\pi_0$ changes the initial condition I solved for, because we then have $\pi_0 = p[\pi_0 + \sum_{k=1}^{\infty} \pi_k] = p\pi_0 +p(1-\pi_0) \Rightarrow \pi_0=p$ –  Feb 23 '17 at 00:51
  • Sorry, I made a silly error, you are correct, $\pi_0=p \sum_{k=0}^\infty \pi_k=p$ and for $k \geq 1$ you have $\pi_k=(1-p)\pi_{k-1}$, so that gives you the correct distribution. (Your actual mistake was ignoring $\pi_0$ in the unedited version). – Ian Feb 23 '17 at 01:18

1 Answer


Part a.

From elementary theory, we know that for the stationary distribution $\pi$, $$\pi P = \pi.$$ From this, we can derive the relation $$\pi_k = (1-p)\pi_{k-1}, \qquad k \geq 1.$$

Additionally, we have that $$p\sum_{i=0}^\infty\pi_i=\pi_0 \Longrightarrow \pi_0 = \frac{p}{1-p}\sum_{i=1}^\infty\pi_i.$$

By the above result and normalizing, we get that $$\sum_{i=0}^\infty\pi_i = \frac{p}{1-p}\sum_{i=1}^\infty\pi_i + \sum_{i=1}^\infty\pi_i = 1 \Longrightarrow \sum_{i=1}^\infty\pi_i = 1-p.$$ Hence, $$\pi_0 = p.$$

Therefore, we have that $$\pi_1 = (1-p)\pi_0 = (1-p)p,$$ and generally $$\pi_k = (1-p)^kp.$$

We note this is a geometric distribution with parameter $p$.
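
As a quick numerical sanity check (an added sketch, not part of the original argument), we can truncate the chain at a large level $N$ and confirm that $\pi_k = p(1-p)^k$ satisfies $\pi P = \pi$ up to truncation error; numpy and the values $p = 0.3$, $N = 200$ below are illustrative choices only.

```python
# Added sketch: verify pi_k = p(1-p)^k is stationary for the truncated chain.
import numpy as np

p = 0.3       # slip probability (illustrative choice)
N = 200       # truncation level; the mass beyond N is ~(1-p)^N, i.e. negligible

# Transition matrix on states 0..N: from any state, fall to 0 w.p. p,
# climb one step w.p. 1-p; the climb is clipped at the truncation boundary N
# only so that every row still sums to 1.
P = np.zeros((N + 1, N + 1))
P[:, 0] = p
for i in range(N):
    P[i, i + 1] = 1 - p
P[N, N] = 1 - p

pi = p * (1 - p) ** np.arange(N + 1)   # candidate stationary distribution
print(np.max(np.abs(pi @ P - pi)))     # ~0, up to the tiny truncation error
```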

Part b.

This is of course just $\pi_1 = p(1-p).$

Part c.

This is simply the expected number of jumps until $k$ consecutive successes, each occurring with probability $1-p$. This is a fairly standard problem; see, e.g., Expected Number of Coin Tosses to Get Five Consecutive Heads
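
For reference, here is an added sketch of that standard computation (the hitting-time notation $u_i$ is mine, not from the linked question): write $u_i$ for the expected number of jumps needed to reach step $k$ starting from step $i$. Conditioning on the first jump gives
$$u_i = 1 + p\,u_0 + (1-p)\,u_{i+1}, \qquad 0 \le i < k, \qquad u_k = 0,$$
and solving this recursion yields
$$u_0 = \frac{1-(1-p)^k}{p\,(1-p)^k} = \frac{(1-p)^{-k}-1}{p}.$$
As a sanity check, $k=1$ gives $u_0 = \frac{1}{1-p}$, the mean number of attempts until a single success of probability $1-p$.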

==============================================

EDIT: The derivation below is wrong. I misread the problem as slipping down one rung rather than falling all the way to the bottom. Revised version above.

Part a.

From elementary theory, we know that for the stationary distribution $\pi$, $$\pi P = \pi.$$ From this, we can derive the relation $$(1-p)\pi_k = p\pi_{k+1}, \quad k \geq 0.$$

Thus, we have $$\pi_1 = \frac{1-p}{p}\pi_0.$$ We can then show (by induction if you want to be formal) that $$\pi_k = \left(\frac{1-p}{p}\right)^k \pi_0.$$

And since $\pi$ is a probability distribution we know that $$\sum_{k=0}^\infty\pi_k = 1.$$ By substituting in the above answer, we then have that $$\sum_{k=0}^\infty \pi_k = \sum_{k=0}^\infty\left(\frac{1-p}{p}\right)^k \pi_0 = \frac{\pi_0}{1-\frac{1-p}{p}} = 1 \Longrightarrow \pi_0 = 1 - \frac{1-p}{p}.$$

We note the above sum converges only when $p>0.5$; otherwise the chain is not ergodic (this should be very intuitive: otherwise we drift off to infinity). Thus, we can substitute this back above to find $\pi_k$, i.e., $$\pi_k = \left(\frac{1-p}{p}\right)^k \pi_0 = \left(\frac{1-p}{p}\right)^k \left(1-\frac{1-p}{p}\right).$$ We note this is a geometric distribution with parameter $1 - \frac{1-p}{p} = \frac{2p-1}{p}.$

David
  • this is very helpful – I do wonder how we can deduce $p\pi_k=(1-p)\pi_{k+1}$. In my work stated above, I have that $\pi_k = (1-p)\pi_{k-1}$ –  Feb 23 '17 at 01:24
  • I think this is incorrect. Note that the $k$th column of $P$ is either identically equal to $p$ if $k=0$, or else it is $0$ except for being $1-p$ at the $k-1$ position. Thus for any $q \in \ell^1$, $(qP)_j=\begin{cases} p \sum_{i=0}^\infty q_i & j=0 \\ (1-p)q_{j-1} & j \neq 0 \end{cases}$, as in the OP. In particular, the time to return to $0$ is just a geometric random variable, because you instantly collapse all the way to $0$ rather than just falling one level. – Ian Feb 23 '17 at 01:28
  • In particular, to kill a fly with a sledgehammer, the identity is a Foster-Lyapunov function, because the mean change in the state at a given state $n$ is $(1-p)-np$, which is negative for $n>\frac{1-p}{p}$, and this holds off the compact set $\left\{ 0,1,\dots,\left\lfloor \frac{1-p}{p} \right\rfloor \right\}$. So the chain is indeed ergodic. – Ian Feb 23 '17 at 01:32
  • @Ian I misread...I thought it said slips and falls one rung. Let me fix the thing...What is funny is by the time I got to the third part, I immediately was thinking about the actual problem... – David Feb 23 '17 at 01:35
  • @Ian For part c), how can I use the stationary distribution to solve for the mean hitting time of step $k$ starting at step 0? I know the mean/expected return time to $k$ is given by $m_k = \frac{1}{\pi_k} = \frac{1}{(1-p)^kp}$ –  Feb 23 '17 at 11:22
  • @zhou8910 You can't use the stationary distribution to do it, however it can be done without using the geometric distribution for the hitting time. You just solve the recursion $((P-I)u)(i)=-1$ for $i \neq j$ and $u(j)=0$. In your situation that boils down to $pu_0+(1-p)u_{i+1}-u_i=-1$ for $i=0,1,\dots,j-1$ and $u(j)=0$ (a numeric sketch of this recursion appears after these comments). – Ian Feb 23 '17 at 12:17
  • @zhou8910 To be a bit more specific, the stationary distribution is related to the mean recurrence time, but not the mean time to hit one state starting from a different state. – Ian Feb 23 '17 at 12:24
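
Following up on the recursion described in the comments above, here is an added numeric sketch (not from the thread): it solves $pu_0+(1-p)u_{i+1}-u_i=-1$ for $i=0,\dots,k-1$ with $u_k=0$ as a small linear system and compares the result with a brute-force simulation. numpy, the example values $p=0.3$, $k=5$, and the helper name jumps_to_k are assumptions made purely for illustration.

```python
# Added sketch: solve the hitting-time recursion and cross-check by simulation.
import numpy as np

p, k = 0.3, 5                      # illustrative values only

# Linear system A u = b for the unknowns u_0, ..., u_{k-1}, where
# u_i - p*u_0 - (1-p)*u_{i+1} = 1 and u_k = 0.
A = np.zeros((k, k))
b = np.ones(k)
for i in range(k):
    A[i, i] += 1.0                 # the u_i term
    A[i, 0] -= p                   # the p*u_0 term
    if i + 1 < k:
        A[i, i + 1] -= 1 - p       # the (1-p)*u_{i+1} term (u_k = 0 drops out)
u = np.linalg.solve(A, b)

# Monte Carlo estimate of the mean number of jumps from the bottom to step k.
rng = np.random.default_rng(0)
def jumps_to_k(k, p, rng):
    step = jumps = 0
    while step < k:
        jumps += 1
        step = step + 1 if rng.random() > p else 0   # up w.p. 1-p, else fall to 0
    return jumps

sim = np.mean([jumps_to_k(k, p, rng) for _ in range(100_000)])
print(u[0], sim)                   # the two numbers should roughly agree
```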