
We know from this answer that for $0\leq n \leq k$, $$ \int_0^1 r^n(1-r)^{k-n}\,dr = \frac{1}{(k+1)\dbinom k n}. $$

In my case, $n$ and $k$ are integers.
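Since $n$ and $k$ are integers, the identity can be verified exactly: expand $(1-r)^{k-n}$ by the binomial theorem and integrate term by term, which yields a finite rational sum. A minimal sketch in Python (the function name is mine, not from the linked answer):

```python
from fractions import Fraction
from math import comb

def beta_integral(n, k):
    # Exact value of ∫_0^1 r^n (1-r)^(k-n) dr, computed by expanding
    # (1-r)^(k-n) binomially and integrating each power of r.
    m = k - n
    return sum(Fraction((-1) ** j * comb(m, j), n + j + 1) for j in range(m + 1))

# Check against 1 / ((k+1) * C(k, n)) for all 0 <= n <= k <= 7.
for k in range(8):
    for n in range(k + 1):
        assert beta_integral(n, k) == Fraction(1, (k + 1) * comb(k, n))
```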

But what is

$$ \int_{\varepsilon}^1 r^n(1-r)^{k-n}\,dr $$

for small $\varepsilon > 0$?

This question comes from the comments to the accepted answer at https://math.stackexchange.com/a/903516/66307 .

  • Approximately $\int_0^1 r^n(1-r)^{k-n}\,dr = \frac{1}{(k+1)\dbinom k n}$. – Pedro Aug 24 '14 at 15:55
  • @PedroTamaroff I added an explanation for where this comes from. I assume it is different enough to make the mean finite in that linked question. Does this make more sense? –  Aug 24 '14 at 16:00
  • I was joking. I take it you want to estimate the value of the integral asymptotically as $\varepsilon \to 0$? – Pedro Aug 24 '14 at 16:04
  • @PedroTamaroff Michael Hardy's answer does a better job of explaining the question than I did. The only problem with it is that I am worried the bound he gives might be very loose. –  Aug 24 '14 at 18:13
  • @Travis See my comment above. –  Aug 24 '14 at 18:14

3 Answers


Using a CAS, the following result was obtained: $$\int_{a}^1 r^n(1-r)^{k-n}\,dr=\frac{\Gamma (n+1) \Gamma (k-n+1)}{\Gamma (k+2)}-\frac{a^{n+1} \, _2F_1(n+1,n-k;n+2;a)}{n+1}$$ You can expand the hypergeometric function as a Taylor series and then get as an approximation $$\frac{a^n \left(-\frac{\Gamma (k+2) a}{n+1}+\frac{(k-n) \Gamma (k+2) a^2}{n+2}+O\left(a^3\right)\right)+\Gamma (n+1) \Gamma (k-n+1)}{\Gamma (k+2)}$$

Added later

$$\int_{a}^1 r^n(1-r)^{k-n}\,dr=\frac{1}{(k+1)\dbinom k n}-B_a(n+1,k-n+1)$$ Expanded as a series built at $a=0$, $$B_a(n+1,k-n+1)=a^{n+1} \left(\frac{1}{n+1}+\frac{(n-k) a}{n+2}+\frac{(n-k) (n-k+1) a^2}{2 (n+3)}+O\left(a^3\right)\right)$$
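For integer $n\le k$ both displayed forms can be cross-checked numerically: the incomplete beta function $B_a(n+1,k-n+1)=\int_0^a r^n(1-r)^{k-n}\,dr$ has an exact finite binomial expansion, and the truncated series above should agree with it up to the omitted $O(a^3)$ tail. A quick sketch (function names are mine, not from the answer):

```python
from math import comb

def incomplete_beta(a, n, k):
    # Exact B_a(n+1, k-n+1) = ∫_0^a r^n (1-r)^(k-n) dr, via the
    # binomial expansion of (1-r)^(k-n), integrated term by term.
    m = k - n
    return sum(comb(m, j) * (-1) ** j * a ** (n + j + 1) / (n + j + 1)
               for j in range(m + 1))

def series_approx(a, n, k):
    # First three terms of the series at a = 0 given in the answer.
    return a ** (n + 1) * (1 / (n + 1)
                           + (n - k) * a / (n + 2)
                           + (n - k) * (n - k + 1) * a ** 2 / (2 * (n + 3)))

a, n, k = 0.01, 2, 5
print(incomplete_beta(a, n, k), series_approx(a, n, k))  # nearly identical for small a
```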

  • What does $O(a^3)$ mean? In computer science the O notation requires a variable which is going to infinity. But $a$ is bounded by $1$. –  Aug 26 '14 at 08:05
  • $O(a^n)$ (the big O notation) describes the limiting behavior of a function when the argument tends towards a particular value – Claude Leibovici Aug 26 '14 at 08:19
Lucian
  • Thank you. I should have specified, $n$ and $k$ are integers and we are in your case 2. –  Aug 25 '14 at 05:23

In the earlier question we considered a random variable $P$ uniformly distributed on $[0,1]$ and a sequence $X_1,X_2,X_3,\ldots$ of random variables that are conditionally independent given $P$; conditional on $P$, each $X_i$ equals $1$ with probability $P$ and $0$ otherwise. We then considered the random variable $K=\min\{k\in\{1,2,3,\ldots\} : X_1+\cdots+X_k=n\}$. The probability distribution of $K$ was found to be $\Pr(K=k)=n/(k(k+1))$ for $k=n,n+1,n+2,\ldots$, and it followed that $\mathbb E(K)=\infty$.
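These two facts are easy to sanity-check: $n/(k(k+1)) = n\bigl(\frac1k-\frac1{k+1}\bigr)$ telescopes, so $\sum_{k\ge n}\Pr(K=k)=1$, while the partial sums of $k\Pr(K=k)=n/(k+1)$ grow like the harmonic series, which is why $\mathbb E(K)=\infty$. A small numerical illustration (a sketch; the variable names are mine):

```python
n, N = 3, 10**5

# Telescoping: sum_{k=n}^{N-1} n/(k(k+1)) = 1 - n/N, so total mass -> 1.
mass = sum(n / (k * (k + 1)) for k in range(n, N))

# Partial sums of the mean: sum_{k=n}^{N-1} k * n/(k(k+1)) = n * sum 1/(k+1),
# which grows like n*log(N), so E(K) diverges.
partial_mean = sum(n / (k + 1) for k in range(n, N))

print(mass, partial_mean)
```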

The question was posed in comments, whether $\mathbb E(K)$ would be finite if $P$ had been distributed uniformly on $[\varepsilon,1]$ for some $\varepsilon>0$.

I will answer that question here. It doesn't fully answer the question as posed above, but perhaps it is what is actually of interest.

So suppose $P$ is distributed uniformly on $[\varepsilon,1]$ and $0<\varepsilon\le 1$. Then $$ \mathbb E(K) = \mathbb E(\mathbb E(K\mid P)) \overset{(1)}{\le} \mathbb E(K\mid P=\varepsilon) \overset{(2)} = \frac n \varepsilon. $$ The equality $(2)$ is well known. I will leave the proofs of $(1)$ and $(2)$ as an exercise for now. Maybe more later$\ldots\ldots$.
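On how loose the bound is: granting equality $(2)$, the tower rule gives the exact value $\mathbb E(K)=\frac{1}{1-\varepsilon}\int_\varepsilon^1 \frac np\,dp=\frac{n\ln(1/\varepsilon)}{1-\varepsilon}$, which grows only logarithmically in $1/\varepsilon$, so the bound $n/\varepsilon$ is very loose for small $\varepsilon$. A numerical sketch (the function name is mine):

```python
from math import log

def expected_K(n, eps):
    # Exact E(K) for P uniform on [eps, 1], assuming E(K | P = p) = n/p
    # (the negative binomial mean, equality (2) above):
    # E(K) = (1/(1-eps)) * ∫_eps^1 (n/p) dp = n * log(1/eps) / (1 - eps).
    return n * log(1 / eps) / (1 - eps)

n, eps = 3, 0.01
print(expected_K(n, eps), n / eps)  # exact value vs. the bound n/eps
```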

  • Well that is another lovely answer but this time full of mysteries :) Details for the missing parts would be great! It would also be great to understand how loose that bound is. –  Aug 24 '14 at 16:53
  • For $(2)$, google "negative binomial distribution". – Michael Hardy Aug 24 '14 at 16:56