For the non-negative r.v. Y, why is the following true:
$E[Y] = \int_{0}^{\infty}P\left \{ Y>x \right \}{d}x$
Integration by parts. The expectation equals $\int_0^\infty x\,dF(x)$, where $F$ is the CDF, so $P(Y>x)=1-F(x)$. Integrating by parts gives your formula; the boundary term vanishes because $x\,(1-F(x))\to 0$ as $x\to\infty$ whenever $E[Y]<\infty$.
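Explicitly, writing $x\,dF(x)=-x\,d(1-F(x))$, the computation is (a one-line sketch, assuming $E[Y]<\infty$):
\begin{equation}
E[Y]=\int_0^\infty x\,dF(x)=\Big[-x\,(1-F(x))\Big]_0^\infty+\int_0^\infty (1-F(x))\,dx=\int_0^\infty P\{Y>x\}\,dx,
\end{equation}
where the bracketed term vanishes at both endpoints since $x\,(1-F(x))\le\int_x^\infty t\,dF(t)\to 0$ when $E[Y]<\infty$.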
Instead of working on a probability space, let's work on a general measure space $(X,\mu)$ with a nonnegative measurable function $f$.
Define the set \begin{equation} R(f):=\{(x,y)\in X\times\mathbb{R}: 0\le y<f(x)\}, \end{equation} which is "the region below the graph".
Under reasonable conditions one shows that \begin{equation} (\mu\times\lambda) (R(f))= \int f\,d\mu, \end{equation} where $\mu\times\lambda$ is the product measure on $X\times\mathbb{R}$ and $\lambda$ is Lebesgue measure; this says that the area of the region below the graph is the integral of the function.
On the other hand, computing the same measure by Tonelli's theorem gives \begin{equation} (\mu\times\lambda) (R(f))=\int_\mathbb{R}\int_X\chi_{R(f)}(x,y)\,d\mu(x)\,dy=\int_{0}^{+\infty}\int_X\chi_{\{f>y\}}(x)\,d\mu(x)\,dy, \end{equation} which gives the formula you are looking for once you realize \begin{equation} \int_X\chi_{\{f>y\}}(x)\,d\mu(x)=\mu(\{x\in X: f(x)>y\}). \end{equation}
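Not part of the argument, but here is a quick Monte Carlo sanity check of the identity $E[Y]=\int_0^\infty P\{Y>x\}\,dx$ (a sketch only: the Exponential(1) example, sample size, grid, and integration cutoff are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonnegative r.v.: Exponential(1), so E[Y] = 1 and P(Y > x) = exp(-x).
samples = np.sort(rng.exponential(scale=1.0, size=1_000_000))
n = samples.size

# Left-hand side: the sample mean estimates E[Y].
lhs = samples.mean()

# Right-hand side: integrate the empirical tail P(Y > x) over [0, 20];
# the tail beyond 20 is negligible for this distribution.
xs = np.linspace(0.0, 20.0, 2001)
tail = 1.0 - np.searchsorted(samples, xs, side="right") / n  # empirical P(Y > x)
rhs = np.trapz(tail, xs)

print(f"E[Y] by sample mean:      {lhs:.4f}")
print(f"integral of P(Y > x) dx:  {rhs:.4f}")  # both should be close to 1
```

The two printed numbers agree up to Monte Carlo and discretization error, which is exactly what the layer-cake identity predicts.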
I would like to supplement the elegant answers given by Igor and Hui with a more naive approach, which I hope will focus attention on the switch in the order of summation that is the technical key to this result. Apologies for any deficiencies in rigor and clarity of exposition.
Suppose we have a discrete random variable $X$ taking the values $\{x_i\}_{i=1,2,3,\dots}$, where $$0 \le x_1 \lt x_2 \lt x_3 \lt \dots$$ and the probability of the $i^{th}$ value is $p_i$. The expectation is therefore $$ E(X) = \sum_{i=1}^{\infty} x_ip_i. $$

Now define the related tail probabilities $P_i$ where $$P_i=P(X \ge x_i), $$ so that $$ \begin{aligned} P_1 & = & p_1+ & p_2+ p_3+p_4+\dots & = 1 \\ P_2 & = & & p_2+ p_3+p_4+\dots & = 1-p_1 \end{aligned} $$ and therefore, conversely, $$ p_1 = P_1-P_2, \qquad p_2 = P_2-P_3, \qquad \dots $$ and in general $$p_n = P_n-P_{n+1}, $$ so that the expectation may also be written as $$ E(X) = x_1(P_1-P_2)+x_2(P_2-P_3)+x_3(P_3-P_4)+ \dots $$ which, by a simple regrouping (an Abel summation, valid here since all terms are nonnegative), becomes $$ E(X) = x_1P_1+(x_2-x_1)P_2 + (x_3-x_2)P_3+ \dots $$

If we now introduce $$ \Delta_{x_1} = x_1 - 0, \qquad \Delta_{x_2} = x_2-x_1, \qquad \Delta_{x_3} = x_3-x_2, $$ and in general $$\Delta_{x_n}= x_n - x_{n-1}, $$ we can write $$ E(X) = \sum_{i=1}^{\infty} P_i\Delta_{x_i}. $$ It should be clear how to pass from this to the integral form of the desired result.
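For anyone who wants to see the regrouping in action, here is a small numerical check of the discrete identity $\sum_i x_i p_i = \sum_i P_i\,\Delta_{x_i}$ (a sketch; the four-point distribution is my own choice, not from the answer above):

```python
import numpy as np

# A discrete distribution on increasing nonnegative values x_1 < x_2 < ...
x = np.array([0.5, 1.0, 2.5, 4.0])
p = np.array([0.1, 0.4, 0.3, 0.2])   # probabilities p_i, summing to 1

# Direct expectation: sum of x_i * p_i.
direct = np.dot(x, p)

# Tail-sum form: P_i = P(X >= x_i) and Delta_i = x_i - x_{i-1}, with x_0 = 0.
P = np.cumsum(p[::-1])[::-1]         # P_i = p_i + p_{i+1} + ...
delta = np.diff(x, prepend=0.0)      # x_1 - 0, x_2 - x_1, ...
tail_sum = np.dot(P, delta)

print(direct, tail_sum)              # both equal 2.0
```

Both expressions evaluate to the same number, mirroring the regrouping step line by line: `P` is the column sums of the triangular array of the $p_i$, and `delta` is the row widths.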