Let $f_{p, \mu_c}(s, n)$ be the probability of never exceeding an average of $\mu_c$, given that the sum is $s$ after $n$ iterations. The answer is then $f_{p, \mu_c}(0, 0)$. The average after $n$ iterations is $\frac{s}{n}$, so clearly $$f_{p, \mu_c}(s, n)=0$$ if $s/n > \mu_c$, i.e. if $s-\mu_c n > 0$.
If this condition is not satisfied, then $$f_{p, \mu_c}(s, n) = pf_{p, \mu_c}(s+1, n+1) + (1-p)f_{p, \mu_c}(s, n+1)$$
since there is a probability $p$ that $s$ increases by $1$ and a probability $1-p$ that $s$ stays the same.
This can be rewritten with one variable in the argument instead of two. Let $c = s-\mu_c n$. Then $$P_{p, \mu_c}(c) = pP_{p, \mu_c}(c+1-\mu_c) + (1-p)P_{p, \mu_c}(c-\mu_c)$$
The following Python code implements this recursion to calculate the probability. You can decrease `eps` to make it more accurate. With `eps = 1e-8`, $P(1/2, 3/4) \approx 0.4565$ is obtained.
def g(p, mu_c, comp=0, mult=1, eps=1e-6):
    # comp = s - mu_c*n, so the running average has exceeded mu_c exactly when comp > 0
    # mult is the probability of the current path; once it falls below eps the recursion
    # stops and mult is returned, i.e. the rest of that branch is counted as a success
    if comp > 0:
        return 0
    elif mult < eps:
        return mult
    else:
        return g(p, mu_c, comp + 1 - mu_c, p*mult, eps) + g(p, mu_c, comp - mu_c, (1 - p)*mult, eps)
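For example, the run quoted above corresponds to a call like the following (the exact digits depend on `eps`, and small values of `eps` make the recursion noticeably slower):

print(g(0.5, 0.75, eps=1e-8))  # roughly 0.4565, per the value reported above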
Edit: I just thought of another approach. Let $L$ be a sequence of length $n$ of $0$s and $1$s. Let $$m(L) = \max_{k=1...n} \frac{1}{k} \sum_{i=1}^k L_i$$ be the maximum average value of the first $k$ elements. For example $m(\{0, 1, 0, 0, 1\}) = \frac{1}{2}$ choosing $k = 2$.
The answer is then $$\lim_{n \to \infty} \sum_{L_n, m(L) \le \mu_c} p^{\sum L} (1-p)^{n-\sum L}$$ where the sum is over all $L$ of length $n$ such that $m(L) \le \mu_c$ and $\sum L$ is the number of $1$s in $L$.
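As a sanity check on this formulation, here is a small brute-force sketch of the inner sum for a fixed $n$ (the helper names `m` and `finite_n_prob` are mine); since the events are nested in $n$, the finite-$n$ value should decrease toward the limit as $n$ grows.

from itertools import product

def m(L):
    # maximum average over all nonempty prefixes of L
    return max(sum(L[:k]) / k for k in range(1, len(L) + 1))

def finite_n_prob(p, mu_c, n):
    # sum of p^(number of 1s) * (1-p)^(number of 0s) over all
    # 0/1 sequences L of length n with m(L) <= mu_c
    total = 0
    for L in product((0, 1), repeat=n):
        if m(L) <= mu_c:
            ones = sum(L)
            total += p**ones * (1 - p)**(n - ones)
    return total

print(finite_n_prob(0.5, 0.75, 12))  # should be a bit above the limiting value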
When $p = 1/2$, this is easier to compute, as it can be simplified to $$\lim_{n \to \infty} (1/2)^{n} \underbrace{\sum_{L_n,\, m(L) \le \mu_c}1}_{S_n}$$
With $\mu_c = 3/4$, as in the example in the question, it seems that $S_n = 2S_{n-1}$ unless $n \equiv 1 \pmod 4$; writing $n = 4m+1$ in that case, $S_n = 2S_{n-1} - \frac{1}{3m+1}\binom{4m}{m}$. Using this, I get that $$P_{1/2, 3/4} = 1 - \frac{1}{2} \, {}_3F_2\!\left(\frac{1}{4}, \frac{1}{2}, \frac{3}{4}; \frac{2}{3}, \frac{4}{3}; \frac{16}{27}\right) \approx 0.456$$ The hypergeometric function is equal to A086253. However, I am not $100\%$ sure about the recurrence for $S_n$, so this is just an educated guess for now. In a similar way, I guess that $$P_{1/2, 2/3} = \frac{-1 + \sqrt{5}}{4}$$
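A quick numerical check of this conjecture (using `mpmath` for the ${}_3F_2$ is my own choice; the first line just unrolls the recurrence above into the partial sums of $1 - \sum_{m \ge 0} (1/2)^{4m+1} \frac{1}{3m+1}\binom{4m}{m}$):

from math import comb
from mpmath import hyper

# partial sum obtained from S_n = 2*S_{n-1} with the correction term at n = 4m+1
print(1 - sum(0.5**(4*m + 1) * comb(4*m, m) / (3*m + 1) for m in range(20)))

# the same value via the hypergeometric expression
print(1 - 0.5 * hyper([0.25, 0.5, 0.75], [2/3, 4/3], 16/27))

# both should come out near 0.4565, matching the recursive estimate above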
Edit 2: Let $M$ be the sorted list of the locations of the $1$s in $L$, and let $k$ be its length. Let $$h(M) = \max_{i=1...k} \frac{i}{M_i}$$ Then this is equivalent to $$\lim_{n \to \infty} \sum_{k=0}^{n} p^{k} (1-p)^{n-k} \sum_{M \in M_{k, n},\, h(M) \le \mu_c} 1$$ where $M_{k, n}$ is the set of all $\binom{n}{k}$ size-$k$ combinations of $1...n$. For example, $M_{2, 3} = \{\{1, 2 \}, \{1, 3\}, \{2, 3\}\}$. Finding the distribution of $h(M)$ over all $M$ in $M_{k, n}$ is now sufficient for the answer. The Python code for this is below.
from itertools import combinations

def h(M):
    # maximum of i / M_i over the (sorted) positions of the 1s; 0 for the empty set
    if len(M) == 0:
        return 0
    return max(i / M[i - 1] for i in range(1, len(M) + 1))

def findProb(p, mu_c, n=10):
    # sum over k and over all size-k subsets M of {1, ..., n} with h(M) <= mu_c
    prob = 0
    for k in range(n + 1):
        t = sum(1 for M in combinations(range(1, n + 1), k) if h(M) <= mu_c)
        prob += p**k * (1 - p)**(n - k) * t
    return prob
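For instance (small $n$ only, since the counts run over all $2^n$ subsets in total):

print(findProb(0.5, 0.75, n=14))  # decreases toward the limiting value (about 0.4565) as n grows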