3

$X_1,X_2,\ldots,X_n$ are i.i.d. with $X_i\sim U\left(0,1\right)$. Prove that $Y_1=\min\left\{X_1,X_2,\ldots,X_n\right\}$ converges to $Y=0$ in $p$-th mean for every $p\ge1$.

My try and where I got stuck:
$$\lim _{n\to \infty }E\left[\left|Y_1-Y\right|^p\right]=\lim _{n\to \infty }E\left[\left|Y_1\right|^p\right]=\lim _{n\to \infty }E\left[\left(\min\left\{X_1,X_2,\ldots,X_n\right\}\right)^p\right]\\ \le \lim _{n\to \infty }E\left[X_i^p\right]=\lim _{n\to \infty }\int _0^1 x^p\,dx=\frac{1}{p+1}\ne 0\quad\text{for every }i\in \left\{1,2,\ldots,n\right\}$$
That is my problem: I don't see how to get $0$ here.

Amir
  • 4,305
  • I just showed how you can obtain the result. You need to use some existing results. – Amir Feb 29 '24 at 11:34

3 Answers

2

If $X_i\sim U\left(0,1\right)$, $i=1, \dots, n$, are independent, we have the following property:

$$Y_1=\min\left\{X_1,X_2,\ldots,X_n\right\} \sim \text{Beta}(a=1,b=n).$$
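
For completeness, this follows from the survival function of the minimum: for $0\le y\le 1$,

$$P(Y_1>y)=\prod_{i=1}^{n}P(X_i>y)=(1-y)^{n},\qquad\text{so}\qquad f_{Y_1}(y)=n(1-y)^{n-1},$$

which is exactly the $\text{Beta}(a=1,b=n)$ density.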

Hence, as the expectation of any power of a beta distribution is known (see here), we have

$$ \mathbb E\left[\left|Y_1-0\right|^p\right]=\mathbb E\left[Y_1^p\right]= \frac{\Gamma(a+b)\Gamma(a+p)}{\Gamma(a)\Gamma(a+b+p)}= \frac{\Gamma(1+n)\Gamma(1+p)}{\Gamma(1)\Gamma(1+n+p)}= \frac{n!\,\Gamma(1+p)}{(n+p)((n-1)+p)\cdots(1+p)\,\Gamma(1+p)}=\color{blue}{\frac{n!}{(n+p)((n-1)+p)\cdots(1+p)}}. \tag{1} $$

For $p=1$, it becomes $\frac{1}{n+1}$, which tends to $0$ as $n \to \infty$.

For $p>1$, write the last term in (1) as a product and note that $\frac{k}{k+p}\le\frac{k}{k+1}$ when $p\ge1$:

$$\frac{n!}{(n+p)((n-1)+p)\cdots(1+p)} = \prod_{k=1}^{n}\frac{k}{k+p} \le \prod_{k=1}^{n}\frac{k}{k+1} = \frac{1}{n+1}.$$

Hence the last term in (1) tends to zero as $n \to \infty$. This yields the desired result:

$$\lim_{n \to \infty } \mathbb E\left[\left|Y_1-0\right|^p\right]=0$$

for any $p \ge 1$.
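
As a quick numerical sanity check (not part of the proof), here is a minimal sketch comparing a crude Monte Carlo estimate of $\mathbb E[Y_1^p]$ with the closed form in (1). It assumes NumPy is available; the values of $p$, $n$, and the sample sizes are arbitrary illustrative choices.

```python
import numpy as np

def closed_form(n, p):
    """E[Y_1^p] for the min of n iid U(0,1) variables: prod_{k=1}^n k/(k+p)."""
    k = np.arange(1, n + 1)
    return float(np.prod(k / (k + p)))

def monte_carlo(n, p, reps=20_000, seed=0):
    """Crude Monte Carlo estimate of E[(min of n iid uniforms)^p]."""
    rng = np.random.default_rng(seed)
    mins = rng.random((reps, n)).min(axis=1)
    return float(np.mean(mins ** p))

for p in (1.0, 2.5):
    for n in (1, 5, 50, 200):
        print(f"p={p}, n={n}: closed form {closed_form(n, p):.6f}, "
              f"MC {monte_carlo(n, p):.6f}")
```

Both columns agree to a few decimals and shrink toward $0$ as $n$ grows, as (1) predicts.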

Amir
  • 4,305
  • Oh damn, I don't think we are allowed to use Beta functions, first time I've ever seen it :\ ....... for $p=1$ you wrote it tends to $1$ as $n$ goes to $\infty$, is it not $0$? ...... But damn, that proof is so formal, I will try to understand it and write it up too :) I just looked a little (and will look more) at the beta and gamma functions. – Analysis_Complex_Study Feb 29 '24 at 11:55
  • 1
    @Analysis_Complex_Study I just fixed the typo. I also improved the question. You may note that $Y_1$, $Y_{1:n}$, or $X_{(1)}$ are commonly used to denote the minimum order statistic. – Amir Feb 29 '24 at 12:27
  • 1
    First of all, thanks for improving the question!! Secondly: first time seeing it too :\ (third-year student and never heard of it :( ). And thanks, I understood your proof very well and wrote it down in my notes :) (it may help with future questions; it looks like the beta function appears many times). – Analysis_Complex_Study Feb 29 '24 at 12:32
  • 1
    @Analysis_Complex_Study I think one should know about elementary distributions like the Beta and Gamma distributions (and hence the beta and gamma functions and their properties) before learning about convergence of random variables. So my advice would be to learn those and try to understand this answer before going into measure-theoretic details. Also, you can use the formula that for a positive random variable $E(X^{p})=\int_{0}^{\infty}pt^{p-1}P(X>t)\,dt$ to arrive at the beta integral. Note that $P(Y_{n}>t)=(1-t)^{n}$. – Mr.Gandalf Sauron Feb 29 '24 at 12:46
  • 1
    So even if you don't want to compute the integral, you can apply the dominated convergence theorem to the integral $\int_{0}^{1}pt^{p-1}(1-t)^{n}\,dt$ to get that it goes to $0$. Note that the integrand is bounded by $p$ for all $n$. I have edited this detail into my answer @Analysis_Complex_Study – Mr.Gandalf Sauron Feb 29 '24 at 12:49
  • @Amir One can simply apply DCT to the integral $E(|Y_{n}|^{p})=\int_{0}^{1}pt^{p-1}(1-t)^{n}\,dt$ if one does not want to rely on identities for the Beta function. – Mr.Gandalf Sauron Feb 29 '24 at 12:58
  • @Mr.GandalfSauron That actually is a nice way; if I had an unsolvable integral, it could be good :) Will check it out and write it up also. – Analysis_Complex_Study Feb 29 '24 at 12:59
  • 1
    @Analysis_Complex_Study For example, if you are asked to compute the limit $\lim_{n\to\infty}\int_{0}^{1}e^{-nx^{2}}\sin(\cos^{2}(x)\tan(x)e^{200x})\,dx$, the best way is to do it by DCT, or there's something also called the Weierstrass test. – Mr.Gandalf Sauron Feb 29 '24 at 13:04
  • 2
    @Mr.GandalfSauron Using the hint given in one of your comments, one can show that the minimum order statistic of any population distribution $X$ with $P(0 \le X \le U)=1$ for some $U>0$ and $P(X>\epsilon)<1$ for every $\epsilon>0$ converges to zero in the $L_p$ norm. – Amir Feb 29 '24 at 13:05
  • 1
    @Amir Yes, that's true. That would again result from a simple application of DCT with the formula $\int_{0}^{U}pt^{p-1}P(X>t)^{n}\,dt$, as the integrand goes to $0$ and is bounded by $pU^{p-1}$. – Mr.Gandalf Sauron Feb 29 '24 at 13:08
  • @Mr.GandalfSauron what can we say when there is no finite $U$? In that case, when does $Y_1$ converge to $0$? – Amir Feb 29 '24 at 23:52
  • @Amir Like I said, in a non-negative iid setting you'll have convergence in probability $Y_{(1),n}\to 0$, since $P(Y_{(1),n}>\epsilon)=P(X_{1}>\epsilon)^{n}\to 0$ whenever $P(X_{1}>\epsilon)<1$ for each $\epsilon>0$ (e.g. if $X_{1}$ is an exponential variate). And you always have $Y_{(1),n}\leq X_{1}$ for each $n$. Hence if $X_{1}$ has a $p$-th moment, then you'll always have $L^{p}$ convergence due to uniform integrability or the Dominated Convergence Theorem, whatever you like. So much more is true in greater generality. – Mr.Gandalf Sauron Mar 01 '24 at 08:23
  • @Mr.GandalfSauron Thank you! – Amir Mar 01 '24 at 08:30
1

Well, here's a short way to do this, provided you know about the Dominated Convergence Theorem or, more generally, uniform integrability.

So firstly, you show convergence in probability.

So $P(|Y_{n}|\geq\epsilon)=P\left(|X_{1}|\geq\epsilon,\ldots,|X_{n}|\geq\epsilon\right)\stackrel{\text{iid}}{=}P(|X_{1}|\geq\epsilon)^{n}=(1-\epsilon)^{n}$,

which goes to $0$ as $n\to\infty$ ($Y_{n}$ denotes the minimum order statistic).

And $|Y_{n}|\leq 1$ for all $n$. Hence, using the Dominated Convergence Theorem, you have that $Y_{n}\xrightarrow{L^{p}}0$.
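
To see that first step concretely, here is a tiny simulation (an illustration only; it assumes NumPy, and $\epsilon$, the values of $n$, and the sample size are arbitrary choices) comparing the empirical $P(|Y_{n}|\geq\epsilon)$ with $(1-\epsilon)^{n}$:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.05
reps = 50_000

for n in (10, 50, 100):
    # Empirical P(Y_n >= eps), where Y_n is the minimum of n iid U(0,1) draws.
    mins = rng.random((reps, n)).min(axis=1)
    empirical = float(np.mean(mins >= eps))
    theoretical = (1 - eps) ** n
    print(f"n={n}: empirical={empirical:.4f}, (1-eps)^n={theoretical:.4f}")
```

Both quantities decay geometrically in $n$, which is the convergence in probability.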

The uniform integrability theorem basically says that if $X_{n}\xrightarrow{P} X$ and $X_{n}$ is uniformly integrable then $X_{n}\xrightarrow{L^{1}}X$ and vice-versa. You can see my short proof here. So basically, you can even repeat my proof from there and conclude convergence in expectation for any $p$.

In fact, you can generalize this even further. Let $X_{1}$ be non-negative with a finite $p$-th moment, and suppose $P(X_{1}>\epsilon)<1$ for each $\epsilon>0$. Then you have $Y_{n}\xrightarrow{P}0$ by the same reasoning. Also note that $|Y_{n}|\leq |X_{1}|$, so if $X_{1}$ has a $p$-th moment, $|Y_{n}|^{p}$ is uniformly integrable and you can conclude using DCT or uniform integrability.

Here's another way to do it without explicitly computing integrals.

For a positive random variable $X$, you have $E(X^{p})=\int_{0}^{\infty}pt^{p-1}P(X>t)\,dt$. Now note that $P(Y_{n}>t)=(1-t)^{n}$ for $0\leq t\leq 1$ and $0$ for $t>1$.

Hence you have to show that $\int_{0}^{1}t^{p-1}(1-t)^{n}\,dt\to 0$ as $n\to\infty$ but this is easy by the Dominated Convergence Theorem as the integrand is dominated by $1$ and converges pointwise to $0$.
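
If you want to watch that integral vanish numerically, the following sketch (assuming SciPy is available; the choices of $p$ and $n$ are arbitrary) evaluates $\int_{0}^{1}p\,t^{p-1}(1-t)^{n}\,dt$ directly:

```python
from scipy.integrate import quad

p = 2.0  # any p >= 1 behaves the same way

for n in (1, 10, 100, 1000):
    # E[Y_n^p] = integral_0^1 p * t^(p-1) * (1-t)^n dt
    value, _err = quad(lambda t, n=n: p * t ** (p - 1) * (1 - t) ** n, 0, 1)
    print(f"n={n}: integral = {value:.3e}")
```

The printed values decrease toward $0$, matching the Dominated Convergence argument.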

  • Actually, we have not learned this either :\ we were just given the definition and told to prove that question :\ (not really related to anything we were taught this semester). But I will also write this down, since it's a good way to understand some other stuff and other ideas if needed :) (always good to keep an open mind). I just do not understand why the fact that $|Y_n|$ is bounded means that $Y_n$ is uniformly integrable? On Wikipedia it talks about probability and expectation (the $L^1$ norm); I don't see any relation to the absolute value. Thanks!! – Analysis_Complex_Study Feb 29 '24 at 12:26
  • 1
    @Analysis_Complex_Study So do you know about DCT? The DCT condition of being dominated by an integrable random variable is actually what is generalized by uniform integrability. So if $|X_{n}|\leq Z$ then $E(|X_{n}|\mathbf{1}_{\{|X_{n}|>M\}})\leq E(Z\mathbf{1}_{\{Z>M\}})\xrightarrow{M\to\infty}0$. This is just because $\{|X_{n}|>M\}\subseteq\{Z>M\}$ and $Z$ is integrable, which is equivalent to the RHS going to $0$ as $M\to\infty$. You can take $Z=1$ in this case of the problem. – Mr.Gandalf Sauron Feb 29 '24 at 12:30
  • 1
    @Analysis_Complex_Study Also, you can just read about the uniform integrability theorem from my answer; I have provided a very compact proof of it in the linked answer. Also try to prove the reverse direction: if $X_{n}$ converges in $L^{1}$ to something (i.e., if $X_{n}$ is Cauchy in $L^{1}$), then $X_{n}$ is uniformly integrable. Here is the best summary of uniform integrability on the internet. It's a very, very useful and strong tool that often reduces a lot of work. – Mr.Gandalf Sauron Feb 29 '24 at 12:32
  • We also did not learn DCT. Regarding what you wrote in the first comment: I see, I will research that subject a bit, and look at your proof at the second link (I saw it's related) :) Thank you very much!! – Analysis_Complex_Study Feb 29 '24 at 12:36
0

Your first inequality is not sharp enough to get the result. You will need a better estimate of $E\left[\left(\min\left\{X_1,X_2,\ldots,X_n\right\}\right)^p\right]$.

hunter
  • 29,847
  • hmm, what better estimate of it exists? I don't have anything else to do with the $\min$ of it, sadly. The only thing I know is what I wrote, since probabilistically it's true. – Analysis_Complex_Study Feb 29 '24 at 10:28
  • Try computing it directly from the definition of expectation! This is a standard exercise when $p=1$, where you should get $1/(n+1)$ (in your estimate, you're just getting $1/2$, which is obviously not going to 0 as $n$ increases). I don't know the answer when $ p \neq 1$ -- maybe you can directly compute the expectation or maybe you can use a tighter estimate. – hunter Feb 29 '24 at 10:30
  • Oh, we need it for $p\ge 1$, forgot to mention, I will edit. – Analysis_Complex_Study Feb 29 '24 at 10:32
  • $\frac 1 {n+1}$, really? There is a dependence on $n$ here in the expectation? Weird. I will try. – Analysis_Complex_Study Feb 29 '24 at 10:34
  • 2
    @Analysis_Complex_Study For $p\geq 1$, it might help to note $x^p \leq x$ when $0 \leq x \leq 1$. – Brian Moehring Feb 29 '24 at 10:38
  • @Analysis_Complex_Study yes there has to be a dependence on $n$ or else there's no way that limit is going to zero. Intuitively, if you pick lots of random numbers, the minimum is going to be pretty small since you'll get lucky once or twice. – hunter Feb 29 '24 at 10:38
  • @BrianMoehring Oh, I see, I can do it for $p=1$ and that covers all other $p\ge1$, since the $p=1$ case is larger; if it goes to $0$ for $p=1$, so does the case $p\ge1$. I will try to do it!! – Analysis_Complex_Study Feb 29 '24 at 10:41
  • @hunter Yeah, that is why I was confused: there has to be, but I could not get it. I will try to do it now and update :) – Analysis_Complex_Study Feb 29 '24 at 10:41
  • 1
    $\frac{1}{n+1}=E\left[\min\left\{X_1,X_2,\ldots,X_n\right\}\right]\ge E\left[\left(\min\left\{X_1,X_2,\ldots,X_n\right\}\right)^p\right]$ I did it correctly, right? I reached that expectation, and now the limit is indeed $0$ (the direct integral is worked out below). – Analysis_Complex_Study Feb 29 '24 at 11:04
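
For reference, the direct computation behind that $\frac{1}{n+1}$, using the density $f_{Y_1}(y)=n(1-y)^{n-1}$ of the minimum:

$$E\left[\min\{X_1,\ldots,X_n\}\right]=\int_0^1 y\,n(1-y)^{n-1}\,dy \overset{u=1-y}{=} \int_0^1 (1-u)\,n\,u^{n-1}\,du = 1-\frac{n}{n+1}=\frac{1}{n+1},$$

and since $0\le Y_1\le 1$ and $p\ge 1$, $E[Y_1^p]\le E[Y_1]=\frac{1}{n+1}\to 0$.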