I am working on the following problem (which I couldn't find on the website so far):
Show that a one-dimensional symmetric random walk starting from the origin visits the point $1$ with probability $1$.
My attempt so far is an adaptation of the recurrence proof from Probability: An Introduction (2nd edition) by Grimmett and Welsh, p. 170.
Let $S_n$ be the random variable giving the position of the walk (on the $x$-axis) at time $n$, and let $X_n$ be the random variable giving the step taken at time $n$. Each $X_i=\pm1$ and
$$ P(X_i=1) = P(X_i=-1)=\frac{1}{2} \mbox{ as it is symmetric} $$
Hence, we can write $S_n = X_1 + \dots + X_n$.
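(Not part of the proof, but the setup is easy to simulate as a sanity check; `walk` is a helper name of my own choosing, and this is just a sketch in Python.)

```python
import random

random.seed(0)  # make the run deterministic

def walk(n):
    # one sample of S_n = X_1 + ... + X_n with P(X_i = 1) = P(X_i = -1) = 1/2
    return sum(random.choice((-1, 1)) for _ in range(n))

samples = [walk(51) for _ in range(10000)]

# S_51 always has odd parity, and its sample mean should be near 0 by symmetry
assert all(s % 2 == 1 for s in samples)
print(sum(samples) / len(samples))
```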
To be at point $1$, the walk must have taken an odd number of steps ($m+1$ to the right and $m$ to the left for some $m\geq0$), so it cannot be at point $1$ after an even number of steps. Hence,
$$ P(S_{2m}=1) = 0, \mbox{ } m\geq0 \tag{1} $$ and $$ P(S_{2m+1} = 1) = \binom{2m+1}{m}\frac{1}{2^{2m+1}}, \mbox{ } m\geq0 \tag{2} $$

Now let $A_n = \left \{ S_n = 1\right \}$ be the event that the walk visits point $1$ at time $n$, and let $$ B_n = \left\{ S_n = 1,\ S_k\neq1 \mbox{ for } 1\leq k\leq n-1 \right\} $$ be the event that the first visit of the walk to point $1$ occurs at time $n$. If $A_n$ occurs, then exactly one of $B_1, \dots , B_n$ occurs, giving $$ P(A_n) = \sum_{k=1}^{n}P(A_n\cap B_k). $$ Now, $A_n\cap B_k$ is the event that the walk visits $1$ for the first time at time $k$ and is at $1$ again $n-k$ steps later. Hence $$ P(A_n\cap B_k) = P(B_k)P(A_{n-k}), \mbox{ for } 2\leq k \leq n \tag{3} $$
I have doubts that the boundaries in the equation above are correct, i.e. I am not sure whether $k$ should start at $2$ (so that $n\geq 2$).
since transitions in disjoint intervals of time are independent of each other. We write $f_n = P(B_n)$ and $u_n = P(A_n)$. Hence, from the above equations we get that: $$ u_n = \sum_{k=2}^{n} f_{k}u_{n-k}, \mbox{ for } n=1,2,\dots $$
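(As a sanity check on the values $u_n$ given by $(1)$ and $(2)$, and not part of the argument, the exact distribution of $S_n$ can be computed by dynamic programming; `step_distribution` is a name I am introducing here, a sketch rather than anything canonical.)

```python
from fractions import Fraction
from math import comb

def step_distribution(n):
    # dist[pos] = P(S_n = pos) for the symmetric walk started at 0,
    # computed exactly by propagating the distribution one step at a time
    dist = {0: Fraction(1)}
    for _ in range(n):
        new = {}
        for pos, p in dist.items():
            half = p / 2
            new[pos + 1] = new.get(pos + 1, Fraction(0)) + half
            new[pos - 1] = new.get(pos - 1, Fraction(0)) + half
        dist = new
    return dist

for m in range(6):
    even = step_distribution(2 * m)
    assert even.get(1, Fraction(0)) == 0                        # checks (1)
    odd = step_distribution(2 * m + 1)
    assert odd[1] == Fraction(comb(2 * m + 1, m), 2 ** (2 * m + 1))  # checks (2)
```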
We know the $u_n$ from $(1)$ and $(2)$ and we want to find the $f_k$. Since the sum above is a convolution, we can use generating functions. Define $$ U(s) = \sum_{n=0}^{\infty}u_ns^n, \qquad F(s) = \sum_{n=0}^{\infty}f_ns^n. $$ Noting that $u_0 = 0$ and $f_0 = 0$, we have from $(3)$ that $$ \sum_{n=2}^{\infty}u_ns^n = F(s)U(s). $$ Hence $U(s) - \frac{1}{2}s = F(s)U(s)$, so $$ F(s) = 1 - \frac{s}{2U(s)}. $$ One can work out (not so easily) that $$ U(s) = \frac{1-\sqrt{1-s^2}}{s\sqrt{1-s^2}}, \quad |s|<1, $$ and hence $$ F(s) = 1-\frac{s^2\sqrt{1-s^2}}{2\left(1-\sqrt{1-s^2}\right)}, \quad |s|<1. $$
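(The closed form for $U(s)$ can at least be checked numerically against the truncated series built from the $u_n$ of $(2)$; again this is only a sanity check, with function names of my own choosing.)

```python
from math import comb, sqrt

def U_series(s, terms=200):
    # truncated sum of u_n s^n with u_{2m+1} = C(2m+1, m) / 2^(2m+1)
    return sum(comb(2 * m + 1, m) / 2 ** (2 * m + 1) * s ** (2 * m + 1)
               for m in range(terms))

def U_closed(s):
    # the claimed closed form, valid for |s| < 1
    return (1 - sqrt(1 - s * s)) / (s * sqrt(1 - s * s))

for s in (0.1, 0.5, 0.9):
    assert abs(U_series(s) - U_closed(s)) < 1e-9
```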
The probability we are interested in is $\sum_n f_n = \lim_{s\to 1^-}F(s)$ (by Abel's theorem), and taking this limit in the expression above yields $1$.
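(Independently of the generating-function algebra, the conclusion itself can be checked numerically by propagating the walk killed at its first visit to $1$ and accumulating the first-passage probability mass; `first_passage_mass` is my own name for this sketch. The mass approaches $1$ only slowly, at rate $O(1/\sqrt{n})$.)

```python
def first_passage_mass(n_max):
    # alive[pos] = P(walk is at pos at the current time and has not yet visited 1)
    alive = {0: 1.0}
    hit = 0.0  # accumulated probability that the first visit to 1 has occurred
    for _ in range(n_max):
        new = {}
        for pos, p in alive.items():
            for nxt in (pos - 1, pos + 1):
                if nxt == 1:
                    hit += p / 2  # the first visit to 1 happens on this step
                else:
                    new[nxt] = new.get(nxt, 0.0) + p / 2
        alive = new
    return hit

# after 1001 steps the accumulated first-passage mass is already close to 1
print(first_passage_mass(1001))
```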
I am not sure whether I manipulated the summation indices correctly. I would appreciate comments on this proof, as well as an alternative proof if anybody knows one.