
Let $X_k$ ($1 \leqslant k \leqslant n$) be independent random variables with finite means.

I know that $\frac{1}{n}\sum_{k=1}^n E(X_k)=\frac{E(X_1)+ \cdots + E(X_n) }{n} \leqslant E(X_1)+ \cdots + E(X_n)$,

and $\max(X_1,\ldots,X_n) \leqslant X_1+ \cdots + X_n \Longrightarrow E(\max(X_1,\ldots,X_n)) \leqslant E(X_1+ \cdots + X_n)= E(X_1)+ \cdots + E(X_n)$.

How does it follow that $\frac{1}{n}\sum_{k=1}^n E(X_k) \leqslant E(\max(X_1, \ldots ,X_n))$?

tcxrp
    Multiplying one side by $\frac 1n$ changes the inequality. Incidentally, (A) you do not need independence and (B) $\max(X_1,\ldots,X_n) \leqslant X_1+ \cdots + X_n$ is not always true if $X_i$ can take negative values – Henry Oct 14 '21 at 21:53

3 Answers


Independence doesn't play any role here. Note that for $n$ real numbers, $$ \frac{x_1+\cdots+x_n}{n}\le \frac{n\max\{x_1,\ldots,x_n\}}{n}=\max\{x_1,\ldots,x_n\}. $$ The result then follows by monotonicity and linearity of expectation.
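
Written out, that last step is just this answer's argument made explicit: $$ \frac{1}{n}\sum_{k=1}^{n}E(X_k)=E\!\left[\frac{1}{n}\sum_{k=1}^{n}X_k\right]\le E\left[\max\{X_1,\ldots,X_n\}\right], $$ where the equality is linearity of expectation and the inequality is monotonicity of expectation applied to the pointwise bound above.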


Also, the inequality $$ \max\{x_1,\ldots,x_n\}\le x_1+\cdots+x_n $$ is guaranteed only for nonnegative $x_1,\ldots,x_n$ and can fail otherwise: take, for example, $x_1=0$ and $x_2=\cdots =x_n=-1$, so the maximum is $0$ while the sum is $-(n-1)$.


$\quad E[\max(X_1, \ldots ,X_n)]$

$\quad \geq \max\left(E[X_1],\ldots, E[X_n]\right)$, since $\max(X_1,\ldots,X_n) \geq X_k$ pointwise, hence $E[\max(X_1,\ldots,X_n)] \geq E[X_k]$ for every $k$

$\quad = \frac{1}{n}\left(n\cdot\max\left(E[X_1],\ldots, E[X_n]\right)\right)$

$\quad = \frac{1}{n}\sum\limits_{k=1}^{n}\max\left(E[X_1],\ldots, E[X_n]\right)$

$\quad \geq \frac{1}{n}\sum\limits_{k=1}^{n}E[X_k]$, since $\max\left(E[X_1],\ldots, E[X_n]\right) \geq E[X_k]$, $\forall{k}$
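
As a concrete instance of this chain (the numbers here are just an illustration): with $n=2$, $E[X_1]=1$, and $E[X_2]=3$, $$\frac{E[X_1]+E[X_2]}{2}=2\;\le\; 3=\max\left(E[X_1],E[X_2]\right)\;\le\; E[\max(X_1,X_2)].$$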

Sandipan Dey

In general, $\frac{a_1 + \cdots + a_n}{n} \le \max(a_1, \ldots, a_n)$ (i.e., the mean is never more than the maximum). Replacing the $a_i$ with random variables $X_i$ and taking expectations of both sides preserves the inequality, since expectation is monotone; linearity of expectation then gives $\frac{1}{n}\sum_{k=1}^n E[X_k] \le E[\max(X_1,\ldots,X_n)]$.
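
If a numerical sanity check helps, here is a minimal Monte Carlo sketch; the normal distributions, their means, and the sample size are arbitrary illustrative choices, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice (not from the question): X_k ~ Normal(mu_k, 1).
# Any distributions with finite means would work.
mu = np.array([-1.0, 0.5, 2.0])
n_samples = 100_000

# samples[i, k] is the i-th draw of X_k
samples = rng.normal(loc=mu, scale=1.0, size=(n_samples, len(mu)))

mean_of_means = samples.mean(axis=0).mean()  # estimates (1/n) * sum_k E[X_k]
expected_max = samples.max(axis=1).mean()    # estimates E[max(X_1, ..., X_n)]

print(f"(1/n) * sum E[X_k]  ~= {mean_of_means:.4f}")
print(f"E[max(X_1,...,X_n)] ~= {expected_max:.4f}")
assert mean_of_means <= expected_max
```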

angryavian