
In proving that a simple symmetric 2-d random walk a.s. returns to the origin, proofs generally start by showing (*) that the expected number of returns to the origin is infinite, and then invoke the following lemma:

Lemma 1: If the expected number of returns to the origin is infinite, then the probability of return to the origin is $1$.

The proof of Lemma 1, as given for example in Durrett, Probability: Theory and Examples, p. 164, is what confuses me: whatever proof works must break down for the following non-symmetric, position- and path-dependent 2-d lattice random walk $W$:

  • Each step is a unit step in one of the four directions $+x, -x, +y, -y$, chosen uniformly at random, unless step $1$ has been taken and was $+x$.

  • If step $1$ has been taken and was $+x$, then all subsequent steps are $+x$.

$W$ clearly has a probability of return to the origin that is $\leq \frac34$: with probability $\frac14$ the first step is $+x$, after which the walk moves in the $+x$ direction forever and never returns. Yet (by the same reasoning that shows (*) for a simple symmetric 2-d walk) the expected number of returns to the origin is infinite. So that seems to violate Lemma 1.
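A quick simulation sketch makes both claims concrete (the horizon and trial counts are arbitrary choices; the empirical return frequency converges to $\frac34$ only slowly from below, because the symmetric walk's return time is heavy-tailed, while the mean number of returns keeps growing, logarithmically, with the step budget):

```python
import random

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def returns_of_W(n_steps, rng):
    """Run W for n_steps steps; count how many times it revisits (0, 0)."""
    dx, dy = rng.choice(DIRS)        # step 1: uniform over the four directions
    forced = (dx, dy) == (1, 0)      # if step 1 was +x, all later steps are +x
    x, y = dx, dy
    hits = 0
    for _ in range(n_steps - 1):
        dx, dy = (1, 0) if forced else rng.choice(DIRS)
        x, y = x + dx, y + dy
        if (x, y) == (0, 0):
            hits += 1
    return hits

rng = random.Random(0)
for horizon in (1_000, 10_000):
    trials = 500
    counts = [returns_of_W(horizon, rng) for _ in range(trials)]
    p_return = sum(c > 0 for c in counts) / trials   # fraction that ever return
    mean_hits = sum(counts) / trials                 # average number of returns
    print(f"horizon {horizon}: P(return) ~ {p_return:.2f}, mean returns ~ {mean_hits:.1f}")
```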

What am I missing here?

Mark Fischler
    The Markov property. –  Mar 14 '16 at 18:59
  • That does not work. The example $W$ is a Markov process on the space $Z^2 \times Z_2$ where the $Z_2$ component is $1$ if you are forced to go $+x$ forever, and $0$ if not. Whatever property you offer as a proof has to have a reason for not applying to $W$. – Mark Fischler Mar 14 '16 at 20:33
  • Sorry, it is more accurate to say that you are missing time homogeneity. For a random walk, its behaviour starting at $(1,0)$ doesn't depend on when this occurs. If I have understood you correctly, your process will behave differently depending on whether you visit $(1,0)$ at time $n=1$ or at $n>1$. –  Mar 14 '16 at 20:47

1 Answer


Your question is a very natural one, but what you are missing is the (strong) Markov property. Roughly speaking, this says that each time the process returns to its starting position, it starts over, independently of its past.

If $N$ denotes the total number of visits to the initial position (including the time zero visit), then the Markov property guarantees that $N$ has a geometric distribution with parameter $e$, where $e$ is the probability of escape, never to return.
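Spelled out: each time the walk sits at its starting position, it escapes for good with probability $e$, independently of everything that came before (this is the strong Markov step), so
$$\mathbb{P}(N=k)=(1-e)^{k-1}\,e,\qquad k=1,2,\dots$$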

In particular, if $e>0$, then $\mathbb{E}(N)={1\over e}<\infty.$
Therefore if $\mathbb{E}(N)=\infty$, we must have $e=0.$
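Concretely, for the walk $W$ in the question (reading its definition as stated), $N$ fails to be geometric, and its law can be computed directly: if the first step is $+x$ (probability $\frac14$), the walk never returns and $N=1$; otherwise (probability $\frac34$), the walk is simple symmetric from step $2$ onward, so it returns to the origin a.s. and keeps returning, giving $N=\infty$. Hence
$$\mathbb{P}(N=1)=\tfrac14,\qquad\mathbb{P}(N=\infty)=\tfrac34,$$
so $\mathbb{E}(N)=\infty$ while the return probability is exactly $\frac34$. The restart argument fails because $W$ after a return to the origin is not a probabilistic copy of $W$ at time $0$ (step $1$ has already been taken): $W$ is not time-homogeneous, exactly as the comments point out.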