
In my statistics book I encountered the task of calculating the maximum and minimum of $n$ independent random variables, uniformly distributed over $(0,1)$. From the definition of the density function I get that

$f_{\mathbb{X}}(x) = \frac{1}{b-a}$, with $a=0$, $b=1$,

which gives

$f_{\mathbb{X}}(x) = \frac{1}{1-0} = 1$

The distribution function then becomes, simply, $x$. To get the maximum of the $n$ variables (let's call this $\mathbb{Z}$) I get

$F_{\mathbb{Z}}(x) = \prod_{i=1}^{n} F_{\mathbb{X}_i}(x) = x^n$

For the minimum ($\mathbb{Y}$) I get

$F_{\mathbb{Y}}(x) = 1-\prod_{i=1}^{n} \left(1- F_{\mathbb{X}_i}(x)\right) = 1-(1-x)^n$

So, to get the expected values I have two choices (which are really the same thing): integrate the density functions over $(0,1)$, or just take $F(1)-F(0)$. Either way, I get that

$E(\mathbb{Z}) = F_{\mathbb{Z}}(1)-F_{\mathbb{Z}}(0) = 1^n - 0^n = 1$

$E(\mathbb{Y}) = F_{\mathbb{Y}}(1)-F_{\mathbb{Y}}(0) = (1-(1-1)^n) - (1-(1-0)^n) = (1-0)-(1-1) = 1$

My book disagrees, claiming that the expected values are

$E(\mathbb{Z}) = {n \over {n+1}}$

$E(\mathbb{Y}) = {1\over n}$

Since I can't see how this is true, I'd simply like to know where I went wrong and what I should have done.

  • I disagree with your book, solely because if you've written the fractions correctly from there, it seems that you would expect the maximum value to be closer to $1$ ($\frac{1}{n+1}$ away) than the minimum value is from $0$ ($\frac{1}{n}$ away). This seems strange from a symmetry standpoint. – Arthur Dec 31 '13 at 13:57
  • see related link: http://math.stackexchange.com/questions/150586/expected-value-of-max-of-iid-variables – Eleven-Eleven Dec 31 '13 at 14:02
  • Thanks you both! I will take a look at the link provided! – EscalatedQuickly Dec 31 '13 at 14:04

3 Answers


$f_Z(x)=nx^{n-1}$ then $EZ=\int_0^1 x nx^{n-1}dx=n\int_0^1x^ndx=n\left(\frac{x^{n+1}}{n+1}|_0^1\right)=\frac{n}{n+1}$

$f_Y(x)=n(1-x)^{n-1}$ then $EY=\int_0^1 x n(1-x)^{n-1}dx=n\int_0^1x(1-x)^{n-1}dx$

$=-\int_0^1xd(1-x)^{n}=-x(1-x)^n|_0^1+\int_0^1(1-x)^ndx=-\int_0^1(1-x)^nd(1-x)$

$=-\frac{(1-x)^{n+1}}{n+1}|_0^1=\frac{1}{n+1}$
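As a quick sanity check on these two closed forms, here is a small Monte Carlo sketch (not part of the original answer; it assumes NumPy is available):

```python
# Monte Carlo check of E[Z] = n/(n+1) and E[Y] = 1/(n+1)
# for the max and min of n iid Uniform(0,1) variables.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
samples = rng.random((trials, n))  # each row: n iid Uniform(0,1) draws

est_max = samples.max(axis=1).mean()  # estimate of E[Z]
est_min = samples.min(axis=1).mean()  # estimate of E[Y]

print(f"E[Z] ~ {est_max:.4f}, exact {n / (n + 1):.4f}")
print(f"E[Y] ~ {est_min:.4f}, exact {1 / (n + 1):.4f}")
```

With 200,000 trials the estimates typically agree with $n/(n+1)$ and $1/(n+1)$ to two or three decimal places.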

kmitov

For the expected value, you need to first differentiate the CDF, so you should have $f_Z(x)=nx^{n-1}$. Now take the expected value and integrate over $(0,1)$: $$E(\mathbb{Z})=\int_0^1{x\,nx^{n-1}\,dx}=\int_0^1{nx^n\,dx}=\left.\frac{nx^{n+1}}{n+1}\right|_0^1=\frac{n}{n+1}$$
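To see this numerically for a particular $n$, one can integrate $x\,f_Z(x) = nx^n$ directly; an illustrative sketch, assuming SciPy is available (here $n=4$ is an arbitrary choice):

```python
# Numerically integrate x * f_Z(x) = n * x^n over (0, 1)
# and compare against the closed form n/(n+1).
from scipy.integrate import quad

n = 4
expected, _ = quad(lambda x: x * n * x ** (n - 1), 0, 1)
print(expected, n / (n + 1))  # both approximately 0.8
```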


The expected value of a continuous random variable $X$ is given by $$ {\rm E}[X] = \int_{x = -\infty}^{\infty} x f_X(x) \, dx,$$ or when the support is nonnegative, $$ {\rm E}[X] = \int_{x=0}^\infty (1-F_X(x)) \, dx = \int_{x=0}^\infty S_X(x) \, dx.$$ Your computation does not make sense. The expected value is not a difference of CDFs.

Since the first and last order statistics $Y = \min(X_1, \ldots, X_n)$ and $Z = \max(X_1, \ldots, X_n)$ are taken over a set of iid $X_i \sim {\rm Uniform}(0,1)$ variables, then we can use your computation of the CDF to find the expected values directly: $${\rm E}[Z] = \int_{z=0}^\infty (1-F_Z(z)) \, dz = \int_{z=0}^1 (1 - z^n) \, dz= \frac{n}{n+1},$$ as claimed. Similarly, for $Y$, we obtain $${\rm E}[Y] = \int_{y=0}^\infty (1 - F_Y(y)) \, dy = \int_{y=0}^1 (1-y)^n \, dy = \frac{1}{n+1}.$$ Heuristically, this result should not be surprising. If we have $n$ iid uniform(0,1) observations, we might informally expect them to partition the interval $[0,1]$ into $n+1$ equal subintervals; the maximum observation occurs at $n/(n+1)$, and the minimum occurs at $1/(n+1)$. Of course, this is not a rigorous argument by any means, but it is consistent with the formal calculation.
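The two survival-function integrals above can also be checked symbolically for general $n$; a sketch assuming SymPy is available:

```python
# Symbolically evaluate E[Z] = int_0^1 (1 - z^n) dz
# and E[Y] = int_0^1 (1 - z)^n dz for a general positive n.
import sympy as sp

z = sp.symbols('z')
n = sp.symbols('n', positive=True)

EZ = sp.integrate(1 - z ** n, (z, 0, 1))
EY = sp.integrate((1 - z) ** n, (z, 0, 1))

print(sp.simplify(EZ))  # n/(n + 1)
print(sp.simplify(EY))  # 1/(n + 1)
```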

heropup