
I am working on the following problem (which I couldn't find on the website so far):

Show that a one-dimensional symmetric random walk starting from the origin visits the point $1$ with probability $1$.

My attempt so far is an adaptation of the recurrence proof from Probability: An Introduction (2nd edition) by Grimmett and Welsh, p. 170.

Let $S_n$ be the random variable representing the position of the walk (on the $x$-axis) at time $n$, and let $X_n$ be the random variable representing the step made at time $n$. Of course, $X_i=\pm1$ and

$$ P(X_i=1) = P(X_i=-1)=\frac{1}{2} \mbox{ as it is symmetric} $$

Hence, we can write $S_n = X_1 + \dots + X_n$.
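(For concreteness, here is a minimal Python sketch of this setup; the function name `simulate_walk` and the use of Python's `random` module are just illustrative choices.)

```python
import random

def simulate_walk(n_steps, seed=None):
    """Return S_0, S_1, ..., S_n for a symmetric +/-1 random walk started at the origin."""
    rng = random.Random(seed)
    s, path = 0, [0]
    for _ in range(n_steps):
        s += rng.choice((1, -1))  # X_i = +1 or X_i = -1, each with probability 1/2
        path.append(s)
    return path

print(simulate_walk(10, seed=42))
```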

To be at point $1$ we must have made an odd number of moves ($m+1$ to the right and $m$ to the left), so we cannot be at point $1$ after an even number of moves. Hence,

$$ P(S_{2m}=1) = 0, \quad m\geq0 \tag{1} $$

and

$$ P(S_{2m+1} = 1) = \binom{2m+1}{m}\frac{1}{2^{2m+1}}, \quad m\geq0. \tag{2} $$

Now let $A_n = \left\{ S_n = 1\right\}$ be the event that the walk visits point $1$ at time $n$, and let

$$ B_n = \left\{ S_n = 1,\ S_k\neq1 \mbox{ for } 1\leq k\leq n-1 \right\} $$

be the event that the first visit of the walk to point $1$ occurs at time $n$. If $A_n$ occurs, then exactly one of $B_1, \dots , B_n$ occurs, giving

$$ P(A_n) = \sum_{k=1}^{n}P(A_n\cap B_k). $$

Now, $A_n\cap B_k$ is the event that the walk passes through $1$ for the first time at time $k$ and is at $1$ again another $n-k$ steps later. Hence

$$ P(A_n\cap B_k) = P(B_k)P(A_{n-k}), \quad \mbox{for } 2\leq k \leq n, \tag{3} $$

since transitions in disjoint intervals of time are independent of each other.
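As a quick sanity check of $(1)$ and $(2)$, one can compare the formula against a brute-force enumeration of all $\pm1$ step sequences (a Python sketch; feasible only for small $m$):

```python
from itertools import product
from math import comb

for m in range(5):
    n = 2 * m + 1
    # exhaustive enumeration of all 2^n step sequences of length n
    exact = sum(1 for steps in product((1, -1), repeat=n) if sum(steps) == 1) / 2 ** n
    formula = comb(n, m) / 2 ** n  # equation (2): C(2m+1, m) / 2^(2m+1)
    print(n, exact, formula)       # the two columns should agree
```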

(I have doubts about whether the boundaries in equation $(3)$ are correct, i.e. I am not sure about the $n\geq 2$.)

We write $f_n = P(B_n)$ and $u_n = P(A_n)$. Hence, from the above equations we get that

$$ u_n = \sum_{k=2}^{n} f_{k}u_{n-k}, \quad \mbox{for } n=1,2,\dots $$

We know the $u_i$'s from $(1)$ and $(2)$, and we want to find the $f_k$'s. Since the sum above is a convolution, we can use probability generating functions:

$$ U(s) = \sum_{n=0}^{\infty}u_ns^n, \qquad F(s) = \sum_{n=0}^{\infty}f_ns^n. $$

Noting that $u_0 = 0$ and $f_0 = 0$, we have from $(3)$ that

$$ \sum_{n=2}^{\infty}u_ns^n = F(s)U(s). $$

Hence $U(s) - \frac{1}{2}s = F(s)U(s)$, so

$$ F(s) = 1 - \frac{1}{2sU(s)}. $$

One can find (not so easily) that

$$ U(s) = \frac{1-\sqrt{1-s^2}}{s\sqrt{1-s^2}}, \quad |s|<1, $$

and hence

$$ F(s) = 1-\frac{s\sqrt{1-s^2}}{2s-2s\sqrt{1-s^2}}, \quad |s|<1. $$
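To gain confidence in the closed form for $U(s)$, one can compare it numerically with a truncated version of the series $\sum_n u_n s^n$, using $(1)$ and $(2)$ for the coefficients (a Python sketch; the truncation point is an arbitrary choice):

```python
from math import comb, sqrt

def U_series(s, max_m=1000):
    # partial sum of sum_n u_n s^n, with u_{2m+1} = C(2m+1, m) / 2^(2m+1) and u_{2m} = 0
    return sum(comb(2 * m + 1, m) / 2 ** (2 * m + 1) * s ** (2 * m + 1)
               for m in range(max_m + 1))

def U_closed(s):
    return (1 - sqrt(1 - s * s)) / (s * sqrt(1 - s * s))

for s in (0.1, 0.5, 0.9):
    print(s, U_series(s), U_closed(s))  # the two values should agree closely
```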

We get the probability we are interested in by taking the limit as $s\to 1$ in the expression for $F(s)$, which yields $1$.
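Numerically, plugging values of $s$ close to $1$ into the expression for $F(s)$ above is consistent with this limit (a quick sketch):

```python
from math import sqrt

def F(s):
    r = sqrt(1 - s * s)
    return 1 - s * r / (2 * s - 2 * s * r)  # F(s) as written above

for s in (0.9, 0.99, 0.999, 0.999999):
    print(s, F(s))  # the values approach 1 as s -> 1
```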

I am not sure whether I manipulated the summation indices correctly. I would appreciate comments on this proof, as well as an alternative proof if anybody knows one.

  • An alternative proof would be to prove that the time of first return to $0$ is finite. That proves that $0$ is a recurrent state (when viewing the random walk as a Markov chain). Since $0$ communicates with $1$, $1$ is recurrent as well. This little reasoning shows that even if you start at $2^{10000}$, you will reach $1$ someday with probability $1$. – nicomezi Aug 14 '18 at 12:13
  • This might be useful: https://math.stackexchange.com/questions/536/proving-that-1-and-2-d-simple-symmetric-random-walks-return-to-the-origin-with – tortue Aug 14 '18 at 13:56

2 Answers


Here is a martingale argument. Note that by continuity from below,
\begin{align*} \{T_1< \infty\} = \bigcup_{n=1}^{\infty}\{T_1 < T_{-n}\} \Rightarrow P(T_1 < \infty) = \lim_n P(T_1 < T_{-n}), \end{align*}
where $T_{-n} \doteq \inf\{m: S_m = -n\}$. Letting $T = T_1 \wedge T_{-n}$, we have by the Optional Sampling Theorem that
\begin{align*} 0 = ES_0 = ES_{T \wedge m}. \end{align*}
Also note that $P(T < \infty) = 1$, since $P(T \geq m(1+n)) \leq (1-2^{-(1+n)})^m$: every sequence of $n+1$ flips has probability $2^{-(n+1)}$ of being all heads, in which case the walk escapes the interval $(-n,1)$, so if the time of escape exceeds $m(n+1)$, we must have failed to obtain $n+1$ heads in a row $m$ times in a row. We now have
\begin{align*} E\left(\frac{T}{n+1}\right) \leq \sum_{m=0}^{\infty} P(T \geq m(n+1)) < \infty, \end{align*}
since the series is dominated by a geometric series. Hence $ET < \infty$, so $T$ is finite with probability $1$, and therefore $S_{T \wedge m} \to S_T$ almost surely. Since $S_{T \wedge m}$ is bounded between $-n$ and $1$, we have by the Dominated Convergence Theorem that
\begin{align*} 0 = \lim_{m \to\infty} ES_{T \wedge m} = ES_T = P(T_1 < T_{-n})\cdot 1 + (1-P(T_1 < T_{-n})) \cdot (-n). \end{align*}
Rearranging gives
\begin{align*} P(T_1 < T_{-n}) = \frac{n}{n+1}. \end{align*}
Hence, $P(T_1 < \infty) = \lim_{n\to\infty} P(T_{1} < T_{-n}) = 1$.
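A small Monte Carlo experiment is consistent with $P(T_1 < T_{-n}) = \frac{n}{n+1}$ (a Python sketch; the sample size and the values of $n$ are arbitrary choices):

```python
import random

def hits_one_first(n, rng):
    """Run the walk from 0 until it hits 1 or -n; return True if it hits 1 first."""
    s = 0
    while -n < s < 1:
        s += rng.choice((1, -1))
    return s == 1

rng = random.Random(0)
trials = 20000
for n in (1, 2, 5, 10):
    estimate = sum(hits_one_first(n, rng) for _ in range(trials)) / trials
    print(n, estimate, n / (n + 1))  # empirical frequency vs. n/(n+1)
```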

Daniel Xiang

Here is an argument that may not be completely rigorous. We know that the probability of a simple random walk revisiting 0 is 1. This means that somewhere in the sequence there must be another visit to 0 after the initial one. Starting from that point, the walk is again a simple random walk, so it will return to 0 once more. Continuing this process, we conclude that a simple random walk revisits 0 infinitely many times. We also know that the probability of moving from 0 to 1 in a single step is 1/2, which is larger than 0. Since the walk is at 0 infinitely many times, and the steps taken immediately after these visits are independent, there must be a time when it steps to 1. This is like tossing a coin infinitely many times: eventually there has to be a head. Thus the problem is proved.
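A simulation illustrates the idea: the fraction of walks that have visited 1 within $N$ steps creeps toward 1 as the horizon $N$ grows (a Python sketch; the horizons and sample size are arbitrary choices):

```python
import random

def visits_one_within(n_steps, rng):
    """Return True if the walk started at 0 hits 1 within n_steps steps."""
    s = 0
    for _ in range(n_steps):
        s += rng.choice((1, -1))
        if s == 1:
            return True
    return False

rng = random.Random(0)
trials = 5000
for horizon in (10, 100, 1000, 10000):
    frac = sum(visits_one_within(horizon, rng) for _ in range(trials)) / trials
    print(horizon, frac)  # increases toward 1 as the horizon grows
```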

dada