
I was just thinking about brain teasers the other day and came up with this one that turned out to be more difficult than I was anticipating.

Suppose we play a game where we start with a bit string of $l$ zeroes (in my example $l=8$),

00000000

At each step of the game, we choose a bit at random from the bit string and flip it, like

01000000

The game terminates when we return to our initial state of all zeroes. My question is this: What is the (closed-form) expected value of the length of the game as a function of $l$?


My (incomplete) approach:

First, I found the answer for verification purposes by programming the game in Python with a couple of different values of $l$, which gave me the hypothesis that the expected value should be $2^l$. A formal proof that this is the case is what I'm seeking.
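For reference, a minimal sketch of the kind of simulation described (the function name `simulate_game` is illustrative, not the original code):

```python
import random

def simulate_game(l, rng):
    """Play one game: flip uniformly random bits until all bits are 0 again."""
    state = [0] * l
    steps = 0
    while True:
        i = rng.randrange(l)   # pick a bit uniformly at random
        state[i] ^= 1          # flip it
        steps += 1
        if not any(state):     # back to all zeroes: game over
            return steps

rng = random.Random(0)
trials = 50_000
mean = sum(simulate_game(3, rng) for _ in range(trials)) / trials
# the sample mean hovers near 2**3 = 8
```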

I started by conceptualizing the bit string differently, as a string of integers from $0$ to $l-1$ with a dividing line between them. On the LHS of the line would be the indices of 1 in the bit string and the RHS would be indices of 0 in the bit string. This would encode a state like this: $$ \texttt{10111000}\qquad \Longrightarrow \qquad \texttt{0234 | 1567} $$ Then, each iteration of the game is simply choosing one of these numbers at random, and putting it on the other side of the dividing line. If we call the set of integers on the LHS $L$ and those on the RHS $R$, then the probability that the line moves left is $|L|\;/\;l$ and the probability that it moves right is $|R|\;/\;l$.
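In code, that encoding is just a partition of the indices by bit value; for the example state:

```python
s = "10111000"
L = [i for i, b in enumerate(s) if b == "1"]  # indices holding a 1
R = [i for i, b in enumerate(s) if b == "0"]  # indices holding a 0
# L = [0, 2, 3, 4] and R = [1, 5, 6, 7], i.e. "0234 | 1567"
```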

In this sense, we can model the bit-flipping process as a one-dimensional random walk on $|L|$, terminating whenever the walker reaches the left end, i.e. $|L| = 0$. This is why I consider the process to be a martingale, even though the probability that the walker moves one direction or the other changes depending on where it is. That said, I don't actually know anything about martingales other than that they are essentially random walks with a terminating condition (like gambling).
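Following that reduction, the walk on $k = |L|$ is a birth-death chain (step down with probability $k/l$, up with probability $(l-k)/l$), and its first-step equations can be solved exactly. A sketch of that computation (my own, using exact rational arithmetic):

```python
from fractions import Fraction

def expected_return_time(l):
    """Expected game length for a bit string of length l.

    t holds the expected time to step from k ones down to k - 1 ones;
    from k = l every flip moves down, so t_l = 1, and for k < l the
    first-step equation gives t_k = l/k + ((l - k)/k) * t_{k+1}.
    """
    t = Fraction(1)
    for k in range(l - 1, 0, -1):
        t = Fraction(l, k) + Fraction(l - k, k) * t
    return 1 + t  # the first step always goes from 0 ones to 1 one

# matches the simulated hypothesis: the expected length is 2**l
assert all(expected_return_time(l) == 2**l for l in range(1, 12))
```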

I'm just not sure how to continue from here. Can anyone solve this problem?

1 Answer


To answer this question, let's borrow a bit from finite-state Markov Chain theory: the key idea is the connection between $\mathbb E_x[T_x]$ (that is, the expected time for a walk started at $x$ to return to $x$) and the stationary distribution $\pi(x)$ on the states of the random walk. Specifically, that connection is:

$$\pi(x) = \frac{1}{\mathbb E_x[T_x]}$$

For a proof, see, e.g., Theorem 5.5.11 of Durrett's Probability: Theory and Examples. (Note: This theorem isn't a trivial statement.)

This is very useful here, because it means all we have to do is find a stationary distribution. I claim that the uniform distribution on all the states works: every state has exactly $l$ neighbors, each reached with probability $1/l$, so by symmetry the transition matrix is doubly stochastic and the uniform distribution is invariant. Since this distribution has $\pi(x) = \frac{1}{2^l}$ for all $2^l$ of the states, your result is indeed correct.

  • Thanks for pointing out my mistake with the name "martingale"; I appreciate the correction. I've reflected your point in an edit to the title of the question – dsillman2000 Feb 25 '22 at 17:46
  • Could you explain how the distribution is stationary? In other words, could you clarify your justification for the claim that the uniform distribution on all the states is stationary? I would imagine, for instance, that 11111111 is less likely to be reached than 10000000, considering our starting point at 00000000 and our termination condition – dsillman2000 Feb 26 '22 at 03:49
  • It certainly is -- but a stationary distribution is one that is invariant under the action of a single step. Put another way: start with a walker whose position is uniformly distributed among all the states, then allow it to take one step; its new position will still be uniformly distributed among all the states. – Aaron Montgomery Feb 26 '22 at 05:47
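That one-step invariance can be checked numerically for small $l$; a minimal sketch (the state indexing is my own):

```python
from itertools import product

l = 4
states = list(product((0, 1), repeat=l))          # all 2**l bit strings
index = {s: i for i, s in enumerate(states)}

def one_step(dist):
    """Push a distribution over states through one random bit flip."""
    new = [0.0] * len(states)
    for s, p in zip(states, dist):
        for i in range(l):
            t = s[:i] + (1 - s[i],) + s[i + 1:]   # flip bit i
            new[index[t]] += p / l
    return new

uniform = [1 / len(states)] * len(states)
after = one_step(uniform)
# still uniform after one step, so the uniform distribution is stationary
assert all(abs(p - 1 / len(states)) < 1e-12 for p in after)
```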