
I was wondering about the probability that an asymmetric random walk will ever come back to where it started. I'm trying to take a cue from this answer (which solves the symmetric case). Suppose the walk starts at $0$ and goes forward one step with probability $p$ and backwards one step with probability $1-p$. Further, let $P$ be the probability it will ever come back to $0$, $P_1$ the probability it'll ever hit $0$ if it starts at $1$, and $P_{-1}$ the probability it'll hit $0$ starting at $-1$. We have:

$$P=p P_1+(1-p)P_{-1} \tag{1}$$

In general, $P_x$ is the probability of hitting $0$ starting at $x$. We get:

$$P_1 = pP_2+(1-p)$$

But to reach $0$ from $2$, the walk must first get back to $1$ (probability $P_1$, by translation invariance of the walk) and then repeat the feat from $1$, so $P_2 = P_1^2$ and

$$P_1 = p P_1^2+(1-p)$$

Solving this quadratic equation gives us that either $P_1 = 1$ or $P_1=\frac{1-p}{p}$.
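(For completeness, the factorization behind these two roots:)

$$pP_1^2-P_1+(1-p)=\big(P_1-1\big)\big(pP_1-(1-p)\big)=0.$$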

Similarly we get $P_{-1} = 1$ or $P_{-1} = \frac{p}{1-p}$.

We're interested in plugging these into equation (1) to get $P$, and there seem to be four ways to do this. If we choose $P_1 = P_{-1} = 1$, we get that $P$ must be $1$. If we choose $P_1 = \frac{1-p}{p}$ and $P_{-1} = \frac{p}{1-p}$, we again get $P=1$. The other two combinations produce expressions that aren't guaranteed to be less than $1$ ($2p$ and $2(1-p)$). This seems to suggest that the walk will always return to where it started.

But then we know that when $p=1$, the walk will never return ($P=0$). The only way to get this result is to choose $P_1 = \frac{1-p}{p}$ and $P_{-1}=1$ (resulting in $P=2(1-p)$). So how do I know, for a given value of $p$, which combination of solutions to choose?

All this suggests $P=\min(2p,2(1-p))$. But I can't think of a solid argument for this.
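Here is a quick Monte Carlo sanity check (only a sketch: the function name `return_prob_mc`, the trial count and the step cap are arbitrary choices of mine, and stopping each walk after `max_steps` steps biases the estimate slightly downward, most visibly near $p=\tfrac12$). It does agree with the guess $\min(2p, 2(1-p))$:

```python
import random

def return_prob_mc(p, trials=10_000, max_steps=2_000):
    """Estimate the probability that the walk ever returns to 0.

    A walk that has not returned within max_steps is counted as never
    returning, so this slightly underestimates the true probability.
    """
    returns = 0
    for _ in range(trials):
        pos = 1 if random.random() < p else -1      # first step away from 0
        for _ in range(max_steps):
            pos += 1 if random.random() < p else -1
            if pos == 0:
                returns += 1
                break
    return returns / trials

for p in (0.5, 0.6, 0.8, 0.95):
    print(f"p={p}: simulated {return_prob_mc(p):.3f}  vs  min(2p, 2(1-p)) = {min(2*p, 2*(1-p)):.3f}")
```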

Rohit Pandey

1 Answer


My suggestion is to make this a countable-state Markov chain on $\mathbb N$ with state $0$ set to be an absorbing state. What follows is essentially a martingale argument, albeit with minimal machinery. Let $\mathbf P$ be the transition matrix for this Markov chain, so
- $p_{0,0}=1$
- $p_{1,0}=1-p$ and $p_{1,2}=p$
- $p_{2,1}=1-p$ and $p_{2,3}=p$, and so on.
This simplifies the chain; any calculations the OP wants for the negative states follow from the same argument with $p$ and $1-p$ interchanged.

lemma:
For any positive vector $\mathbf x$ with $x_0:=1$, we have $\mathbf {Px}=\mathbf x\implies p_{i,0}^{(n)} \leq x_i$ for every $i$ and every $n$.
In words: a positive fixed-point vector (with zeroth component $=1$) gives an upper bound on the probability of being absorbed by the $n$th iteration of the chain, for arbitrary $n$.

proof:
$$p_{i,0}^{(n)}= p_{i,0}^{(n)}\cdot x_0 \leq \sum_{j=0}^\infty p_{i,j}^{(n)}\cdot x_j = x_i$$
since $\sum_{j=0}^\infty p_{i,j}^{(n)}\cdot x_j$ is the $i$th component of $\mathbf P^n\mathbf x =\mathbf x$.

main argument:
There is nothing to do for $i=0$, so consider $i \in \big\{1,2,3, \dots\big\}$.
The OP has essentially found two fixed points satisfying the lemma's hypotheses: $\mathbf {P1}=\mathbf 1$, and $\mathbf {Py}=\mathbf y$ with $y_i:= \big(\frac{1-p}{p}\big)^i$, for $p\in (0,1)$.
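(Purely as a numerical aside, and only a sketch: one can truncate the chain to finitely many states and confirm both fixed points on the interior rows. The helper name `transition_matrix`, the truncation level `n_states`, and the absorbing top state, standing in for "escaped to $+\infty$", are artifacts of working with a finite matrix, not part of the argument.)

```python
import numpy as np

def transition_matrix(p, n_states=200):
    """Truncated version of P: state 0 absorbing, top state absorbing
    (a stand-in for 'escaped to +infinity'), interior states step
    down with probability 1-p and up with probability p."""
    P = np.zeros((n_states, n_states))
    P[0, 0] = 1.0
    P[-1, -1] = 1.0
    for i in range(1, n_states - 1):
        P[i, i - 1] = 1 - p
        P[i, i + 1] = p
    return P

p = 0.7
P = transition_matrix(p)
i = np.arange(P.shape[0])
ones = np.ones(P.shape[0])
y = ((1 - p) / p) ** i

# The fixed-point relation x_i = (1-p) x_{i-1} + p x_{i+1} should hold
# on the interior rows 1 .. n-2 for both candidate vectors:
print(np.allclose((P @ ones)[1:-1], ones[1:-1]))   # True
print(np.allclose((P @ y)[1:-1], y[1:-1]))         # True
```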

Now $p_{i,0}^{(1)}\leq p_{i,0}^{(2)}\leq p_{i,0}^{(3)}\leq p_{i,0}^{(4)}\leq \cdots$ (state $0$ is absorbing, so once absorbed the chain stays absorbed), i.e. $p_{i,0}^{(n)}$ forms a monotone non-decreasing sequence in $n$, bounded above by $\min\big(1, y_i\big)$ by the lemma. Hence by monotone convergence $\lim_{n\to \infty} p_{i,0}^{(n)} = L_i \leq \min\big(1, y_i\big)$. We argue this bound is met with equality as follows:

(i) By the Markov property, $L_i=L_1^i$: to be absorbed at $0$ from state $i$, the chain must successively pass through $i-1, i-2, \dots, 1$, and each of these passages happens with probability $L_1$.
(ii) Since $\lim_{n\to\infty} \mathbf P^n=\lim_{n\to\infty} \mathbf P^{n+1}=\mathbf P\big(\lim_{n\to\infty} \mathbf P^{n}\big)$, the $L_i$, which form the zeroth column of $\lim_{n\to\infty} \mathbf P^{n}$, must satisfy $L_i = (1-p)\cdot L_{i-1} + p \cdot L_{i+1}$; put differently, with $x_i:= L_i$ we have $\mathbf {Px}=\mathbf x$.
(iii) The two items above in combination imply $\mathbf x \in\big\{\mathbf 1,\mathbf y\big\}$, i.e. this is the OP's statement

> either $P_1 = 1$ or $P_1=\frac{1-p}{p}$

Since $L_0=x_0=1$ and $L_1=x_1\in \big\{1,\frac{1-p}{p}\big\}$, item (i) tells us the vector $\mathbf x$ is completely determined by $L_1$, and the bound $L_i \leq \min\big(1, y_i\big)$ forces the choice: for $p>\frac12$ we have $y_1<1$, so $L_1=1$ is ruled out and $\mathbf x=\mathbf y$; for $p<\frac12$ we have $y_i>1$ for $i\geq 1$, so $\mathbf x=\mathbf y$ would make the probabilities $L_i$ exceed $1$, hence $\mathbf x=\mathbf 1$; for $p=\frac12$ the two coincide. In every case $L_i = x_i=\min\big(1, y_i\big)$. In the OP's notation $P_1 = \min\big(1,\frac{1-p}{p}\big)$ and, swapping $p$ with $1-p$, $P_{-1} = \min\big(1,\frac{p}{1-p}\big)$, so equation (1) gives exactly $P=\min\big(2p,2(1-p)\big)$.
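(Continuing the numerical sketch above, with the same `numpy` import and `transition_matrix` helper: iterating the truncated chain shows the zeroth column of $\mathbf P^n$ settling down to $\min(1,y_i)$. For $p=\tfrac12$ the truncation keeps the value slightly below $1$.)

```python
# p_{i,0}^{(n)} is the (i,0) entry of P^n; take n large via repeated squaring.
for p in (0.3, 0.5, 0.7):
    P = transition_matrix(p)
    Pn = np.linalg.matrix_power(P, 1 << 20)
    y1 = (1 - p) / p
    print(f"p={p}: p_(1,0)^(n) ~ {Pn[1, 0]:.4f},  min(1, y_1) = {min(1.0, y1):.4f}")
```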

user8675309