1

Assume that $X$ is integrable in probability space $(\Omega, \mathscr F, \mathbb P)$, i.e. $X \in \mathscr L^1 (\Omega, \mathscr F, \mathbb P)$.

  1. What does it mean if a random variable is (surely/almost surely) greater than or equal to its expected value, i.e. $X \ge E[X]$? I think this means $X$ is, at least almost surely, constant. (I'm not sure $X$ is surely constant even if $X \ge E[X]$ holds surely.) How do we prove this, though? What I've done so far:

    • 1.1. I can prove this for $X$ indicator and nonnegative simple (and nonnegative discrete).
    • 1.2. I didn't bother trying the nonnegative integrable and general integrable cases because I'm hoping for some simple proof I might've overlooked...
    • 1.3. ...like proving that $P(X > E[X]) = 0$ directly, maybe by considering $E[X\mathbb{1}_A]$ where $A=\{X > E[X]\}$ or something.
    • 1.4. If the standard machine is really the way to go about this, then I'm stuck: for nonnegative integrable, probably the monotone convergence theorem, but I'm not really sure how. Since we're still in the nonnegative case, I'm guessing we'll get $X=0$. For general integrable: OK, this part I remember is actually not just simple but easy, so I must really be overlooking something.
  2. Does the same conclusion in (1) (I mean whatever is the correct conclusion and not necessarily what I have stated) hold if $X$ is instead (surely/almost surely) less than its expected value? $X \le E[X]$

  3. Elementary/basic probability theory: If $X$ is a continuous random variable, then how do we show it is impossible that $X \ge E[X]$ surely (and also $X \le E[X]$ surely) (and also almost surely, but you know, it's still elementary/basic)? (I guess ignore this part if you can answer the above without measure theory.)

  4. If answering any of the above is easier when we assume $X$ is square integrable, then please tell me how (e.g. somehow we can say $Var[X]=0$).

BCLC
  • 13,459
  • A random variable can be almost surely greater than (or smaller than) the expected value, but only if the probability space is heavily skewed. For example, take a lottery with $1{,}000{,}000$ tickets. Each ticket costs 1 dollar, and the winner receives $1{,}000{,}000$ dollars. When you buy a ticket, the expected value is zero. However, there is a $99.9999$ percent chance that you will lose 1 dollar. – M. Wind Aug 20 '21 at 02:18
  • 2
    @M.Wind : What you say is true if you construe "almost surely" as meaning the probability is close to $1.$ But it is standard in probability theory to use that phrase to mean the probability is $1.$ Thus the difference between "almost surely" and "surely" would be that with "almost surely", there may be some outcomes for which the "almost sure" event does not happen, but the measure of the set of such outcomes is zero. – Michael Hardy Aug 20 '21 at 02:27
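M. Wind's lottery numbers can be sanity-checked with a quick sketch (a hypothetical illustration, using exact rational arithmetic to avoid floating-point noise):

```python
from fractions import Fraction

# Lottery from the comment above: 1,000,000 tickets at 1 dollar each,
# one winner receives 1,000,000 dollars.
n_tickets = 1_000_000
ticket_price = 1
prize = 1_000_000

p_win = Fraction(1, n_tickets)
# Net payoff: prize - price if you win, -price otherwise.
expected_value = p_win * (prize - ticket_price) + (1 - p_win) * (-ticket_price)
p_lose = 1 - p_win

print(expected_value)   # 0  (the expected value is exactly zero)
print(float(p_lose))    # 0.999999, i.e. a 99.9999% chance of losing 1 dollar
```

This is exactly the "heavily skewed" situation described: the probability of being below the mean is close to 1, but not equal to 1, which is why (as Michael Hardy notes) it does not qualify as "almost surely" in the measure-theoretic sense.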

2 Answers

1

This can be shown through measure theory. Let's use your $A=\{X>\mu\}$. Then on $A$, $X-\mu$ is strictly positive. On its complement $A^c$ we have $X\le\mu$, which combined with the assumption $X\ge\mu$ (almost surely) gives $X=\mu$, so $X-\mu$ is zero almost surely on $A^c$.

Hence:

$$\mu=E[X]= \mu+ \int_A (X-\mu)\,dP + \int_{A^c} (X-\mu)\,dP = \mu + \int_A (X-\mu)\,dP.$$

Now note that the final term has to be zero, while the integrand is strictly positive on $A$. Hence $A$ has to be a null set.

You can try to follow a similar proof strategy for the case of a continuous random variable with a pdf.
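The cancellation in the argument can be illustrated numerically. A minimal sketch with an assumed toy distribution (a Bernoulli-type variable, $P(X=0)=0.3$, $P(X=1)=0.7$): any $X$ that is not a.s. constant must put some mass strictly below its mean, so the strictly positive piece of $\int (X-\mu)\,dP$ over $\{X>\mu\}$ is balanced by a strictly negative piece below $\mu$.

```python
# Hypothetical two-point distribution: P(X=0)=0.3, P(X=1)=0.7.
values = [0.0, 1.0]
probs = [0.3, 0.7]

mu = sum(v * p for v, p in zip(values, probs))  # E[X] = 0.7

# Mass of A = {X > mu} and the integral of (X - mu) over A:
mass_above = sum(p for v, p in zip(values, probs) if v > mu)
int_above = sum((v - mu) * p for v, p in zip(values, probs) if v > mu)
# Integral of (X - mu) over the complement {X <= mu}:
int_below = sum((v - mu) * p for v, p in zip(values, probs) if v <= mu)

# The two pieces cancel (E[X - mu] = 0): roughly +0.21 and -0.21 here.
print(mass_above, int_above, int_below)
```

So if "$X \ge E[X]$ a.s." held for this $X$, the negative piece would be forbidden, and the positive piece over $A$ would contradict $E[X-\mu]=0$ unless $P(A)=0$, exactly as in the answer above.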

Dasherman
  • 4,206
  • thanks Dasherman! i knew i didn't need to go through standard machine! anyway, so whether our assumption is surely or almost surely, the conclusion is only almost surely? – BCLC Aug 20 '21 at 04:18
  • 1
    Yes, as an exercise you can try to come up with an example where the assumption holds surely, but the conclusion only almost surely. Although in probability theory we rarely deal with surely anyway, because that requires knowing the details of the probability space and you usually want to stay away from that. – Dasherman Aug 20 '21 at 04:24
  • ayt just double checking about the almost surely. thanks – BCLC Aug 20 '21 at 04:26
1

We show that

If $X\in L_1(\Omega,\mathscr{F},\mathbb{P})$ and $X\geq \mathbb{E}[X]$ $\mathbb{P}$-a.s. then $X=\mathbb{E}[X]$ $\mathbb{P}$-a.s.

This will be a consequence of the following observation:

Suppose $\mathbb{P}[A]>0$ and $g$ is a measurable function that is strictly positive on $A$; then $\mathbb{E}[g\mathbb{1}_A]=\int_A g\,d\mathbb{P}>0$.

To see this, notice that $\{\omega\in A:g(\omega)>0\}=\bigcup^\infty_{n=1}\{\omega\in A: g(\omega)>\frac1n\}$. Since $\mathbb{P}\big[\{\omega\in A: g(\omega)>0\}\big]=\mathbb{P}[A]>0$, continuity of $\mathbb{P}$ from below gives an $n_0\in\mathbb{N}$ such that $\mathbb{P}\big[\{\omega\in A: g(\omega)>\frac{1}{n_0}\}\big]>0$. Then, by the Markov–Chebyshev inequality, $$\int_Ag\,d\mathbb{P}\geq \int_{A\cap\{g>\tfrac{1}{n_0}\}}g\,d\mathbb{P}\geq \frac{1}{n_0}\,\mathbb{P}\big[A\cap\{g>\tfrac{1}{n_0}\}\big]>0.\qquad\Box$$
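The observation can be sanity-checked on a finite probability space. A small sketch (the space, $g$, and $A$ below are all assumed for illustration): even though $g$ takes small values on $A$, some level set $\{g>\frac{1}{n_0}\}$ already carries positive mass, and that alone bounds the integral away from zero.

```python
# Finite toy space: Omega = {1,...,10}, uniform probability 0.1 each.
points = list(range(1, 11))
prob = {w: 0.1 for w in points}
A = {w for w in points if w % 2 == 0}   # P(A) = 0.5 > 0
g = {w: 1 / w for w in points}          # g strictly positive, in particular on A

integral_over_A = sum(g[w] * prob[w] for w in A)

# Find an n0 with P(A ∩ {g > 1/n0}) > 0, as in the proof:
n0 = next(n for n in range(1, 100)
          if sum(prob[w] for w in A if g[w] > 1 / n) > 0)
lower_bound = (1 / n0) * sum(prob[w] for w in A if g[w] > 1 / n0)

print(integral_over_A > lower_bound > 0)  # True
```

Here $n_0=3$ works (since $g(2)=\tfrac12>\tfrac13$ and $\mathbb{P}[\{2\}]>0$), giving a strictly positive lower bound on $\int_A g\,d\mathbb{P}$.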

To conclude, suppose $A=\{X> \mathbb{E}[X]\}$ has positive measure. Define $g(\omega)=(X(\omega)-\mathbb{E}[X])\mathbb{1}_A(\omega)$. Notice that $\{g>0\}=A$ and, by assumption, $\mathbb{P}[A]>0$. Then $$0<\int_Ag\,d\mathbb{P}=\int_A(X-\mathbb{E}[X])\,d\mathbb{P}\leq \int_\Omega(X-\mathbb{E}[X])\,d\mathbb{P}=0,$$ where the middle inequality uses $X-\mathbb{E}[X]\geq 0$ $\mathbb{P}$-a.s. This is a contradiction. Hence $\mathbb{P}[A]=0$, and so $X\leq \mathbb{E}[X]$ $\mathbb{P}$-a.s. This, along with the assumption that $X\geq\mathbb{E}[X]$ $\mathbb{P}$-a.s., implies that $X=\mathbb{E}[X]$ $\mathbb{P}$-a.s.


  • A similar conclusion follows if one assumes that $X\leq \mathbb{E}[X]$ $\mathbb{P}$-a.s.
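As a footnote on the OP's question 4: for square-integrable $X$ the conclusion can be phrased via $Var[X]=0$, since $Var[X]=\mathbb{E}[(X-\mathbb{E}[X])^2]=0$ is exactly the statement $X=\mathbb{E}[X]$ a.s. (apply the observation above with $g=(X-\mathbb{E}[X])^2$). A quick sketch with an assumed two-point distribution showing that a non-constant $X$ necessarily has strictly positive variance:

```python
# Hypothetical non-constant X: P(X=-1) = P(X=2) = 0.5.
values = [-1.0, 2.0]
probs = [0.5, 0.5]

mu = sum(v * p for v, p in zip(values, probs))             # E[X] = 0.5
var = sum((v - mu) ** 2 * p for v, p in zip(values, probs))  # Var[X] = 2.25

print(mu, var)  # 0.5 2.25 -- Var[X] > 0, so X cannot equal E[X] a.s.
```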
Mittens
  • 39,145
  • thanks but what's the difference with Dasherman 's answer ? seems like just more details that i actually did know and understand already – BCLC Aug 21 '21 at 11:28
  • 1
    @BCLC: My only goal was to stress the key part: that if $g>0$ on a set $A$ of positive measure, then $\int_A g>0$. The rest is rather simple. If you understand that already, brilliant! – Mittens Aug 21 '21 at 11:31
  • thanks but like hell it's simple to think how to use it XD – BCLC Aug 21 '21 at 11:52