
The problem asks how to simulate an event of probability $1/e$ using a fair coin, and for the expected number of tosses per simulation.

It seems related to the CLT, but I don't know how to link a normal $N(np,np(1-p))$ with parameter $n$ to the quantity $1/e$.

Thank you!

  • The simulation itself does not require the central limit theorem. Focus on that first. – lulu Jul 22 '21 at 14:55
  • Side note: nor do you need CLT for the expected value. Well, I suppose it might depend on the nature of the simulation. But for the usual methods, the expected value just drops out. – lulu Jul 22 '21 at 15:02
  • I think there are multiple ways to do this, but the first one that comes to mind is "use the fair coin to generate bits of a uniform random number on $(0,1)$ and then return your outcome once you know whether the number will be in $[0,e^{-1})$ or not". This works because after $n$ flips have been done, you will know the number to within an error of $2^{-(n+1)}$. For example, after one flip, you know the number is going to be in either $[0.75-0.25,0.75+0.25]$ or $[0.25-0.25,0.25+0.25]$ depending on which way the flip went. – Ian Jul 22 '21 at 15:07
  • @Ian Thanks Ian~ This approach makes sense. Using binary numbers is indeed beyond my thinking process. – WWSS Jul 22 '21 at 15:30
  • @lulu Maybe Ian's approach is in line with your suggestion? – WWSS Jul 22 '21 at 15:30
  • That is the standard solution, yes. And for that, note that the expected number is independent of which binary string you are choosing. – lulu Jul 22 '21 at 15:35
  • A hint for computing the expectation: at any given stage, classify whether one, both, or neither of the possibilities for the next flip will end in the process terminating. – Ian Jul 22 '21 at 17:47
  • Thanks lulu and Ian. I will read more about this standard approach. As for the expected number of tosses: given $e^{-1} \approx 0.37$ and the error estimate $2^{-(n+1)}$, does the expected number depend on the target accuracy? – WWSS Jul 22 '21 at 18:12
  • There's no accuracy issue unless you stop early. Instead you just keep going until your $n$th random number is more than $2^{-(n+1)}$ away from $e^{-1}$. At that point you know which side the "final" random number (which would need infinitely many bits to generate) will be on. – Ian Jul 22 '21 at 18:16
  • @SandroS Can you post an answer with our hints? – Ian Jul 23 '21 at 14:11

1 Answer


With the help from Ian and lulu, I will try to answer my own question. After searching for this idea, I learned it is called random bits: the tosses $X_i$, $i=1,\dots,n$, are treated as the binary digits of a uniform random number $\sum_{i=1}^n X_i /2^i$. For example, if the target probability were $1/2^2$ instead of $1/e$, one could toss the coin $n=2$ times; the four equally likely outcomes of $\sum_{i=1}^2 X_i /2^i$ are $\{0, 0.25, 0.50, 0.75\}$, and the event "the number lies in $[0, 1/4)$" has probability exactly $1/4$, using exactly $2$ tosses. So the question reduces to comparing $\sum_{i=1}^n X_i /2^i$ with $1/e$.
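To illustrate the $1/2^2$ warm-up, here is a quick enumeration in Python (just a sketch; the thread itself contains no code):

```python
import itertools

# Enumerate all outcomes (X1, X2) of two fair coin tosses and compute
# sum X_i / 2^i for each; the four values are equally likely, and
# exactly one of them falls in [0, 1/4).
values = sorted(sum(b / 2 ** i for i, b in enumerate(bits, start=1))
                for bits in itertools.product([0, 1], repeat=2))
print(values)  # [0.0, 0.25, 0.5, 0.75]
```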

As Ian pointed out, the generating process returns the outcome as soon as we know whether the number will lie in $[0,e^{-1})$ or $[e^{-1},1]$. This works because after $n$ flips the final number is pinned down to an interval of length $2^{-n}$: adding the remaining bits moves the partial sum by at most $\sum_{i>n} 2^{-i} = 2^{-n}$, i.e. the number stays within $2^{-(n+1)}$ of the interval's midpoint.

In terms of the expected number of tosses: after each flip, the interval of possible values is halved, and exactly one of the two halves still contains $1/e$ (it never sits on a boundary, since $1/e$ is irrational while the endpoints are dyadic). So the process stops at each flip with probability $1/2$, the number of tosses $N$ is geometric, and $E[N]=\sum_{k\ge 1} k\,2^{-k}=2$. Please correct me if I am wrong. Thanks~
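The stopping rule and the expected toss count can both be checked empirically. Below is a minimal sketch (the function name `simulate_inv_e` is my own, not from the thread): it maintains the interval $[lo, hi)$ of values the uniform number can still take, flips until that interval no longer straddles $1/e$, and returns the outcome together with the toss count.

```python
import math
import random

def simulate_inv_e(rng=random):
    """Simulate an event of probability 1/e with fair coin flips.

    Returns (success, tosses): success is True iff the uniform number
    being built bit by bit lands in [0, 1/e); we stop as soon as the
    interval of possible final values no longer straddles 1/e.
    """
    target = math.exp(-1)
    lo, hi = 0.0, 1.0
    tosses = 0
    while lo < target < hi:      # interval still straddles 1/e
        tosses += 1
        mid = (lo + hi) / 2
        if rng.getrandbits(1):   # bit 1: number is in the upper half
            lo = mid
        else:                    # bit 0: number is in the lower half
            hi = mid
    return hi <= target, tosses

random.seed(1)
trials = 200_000
results = [simulate_inv_e() for _ in range(trials)]
print(sum(s for s, _ in results) / trials)  # success frequency, near 0.368
print(sum(t for _, t in results) / trials)  # average tosses, near 2
```

With 200,000 trials the success frequency should land near $e^{-1}\approx 0.368$ and the average toss count near $2$, matching the calculation above.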

WWSS