I am reading a result that for a nonnegative random variable $X$ on $(\Omega, \mathcal{F}, P)$,
$EX = (P \times \lambda)\{(\omega,x): 0 \leq x \leq X(\omega)\}$,
where $\lambda$ is the Lebesgue measure.
What is the intuition behind this?
Well, suppose $\Omega$ consists of finitely many elements $\omega_1, \dots, \omega_n$ and $X$ is a nonnegative random variable. Then $$EX = X(\omega_1)P(\omega_1) + \dots + X(\omega_n)P(\omega_n) = \sum_{i=1}^n P(\omega_i)\int_0^{X(\omega_i)} d\lambda = (P\times\lambda)\{(\omega,x): 0 \leq x \leq X(\omega)\},$$ i.e. the expectation is the $(P\times\lambda)$-measure of the region under the graph of $X$.
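The general case is the same picture; here is a sketch, assuming one is willing to apply Tonelli's theorem to the nonnegative integrand $\mathbf{1}_{\{0 \leq x \leq X(\omega)\}}$:
$$(P\times\lambda)\{(\omega,x): 0 \leq x \leq X(\omega)\} = \int_\Omega \int_{[0,\infty)} \mathbf{1}_{\{x \leq X(\omega)\}}\, d\lambda(x)\, dP(\omega) = \int_\Omega X(\omega)\, dP(\omega) = EX.$$
So the identity says that $EX$ is the area under the graph of $X$, measured with the product measure $P\times\lambda$.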
This answer might be helpful: Intuition behind using complementary CDF to compute expectation for nonnegative random variables