6

This is from Actuarial Mathematics for Life Contingent Risks, 2nd ed., by Dickson et al. Some definitions (not directly from the book):

Definitions/Notation. $T_x$ is defined to be the future lifetime of a life aged $x \geq 0$. We also define the cumulative distribution function of $T_x$, denoted either $F_{T_x}$ or $F_x$, as $$F_{T_x}(t) = F_{x}(t) = \mathbb{P}\{T_x \leq t\}\text{.}$$ The survival function of $T_x$, denoted $S_x$, is defined as $$S_{x}(t) = 1 - F_{x}(t)\text{.}$$ It should also make sense that $T_x$ takes on only nonnegative values; i.e., $T_x \geq 0$. So, of course, $$\mathbb{E}\left[T_x\right] = \int\limits_{0}^{\infty}tf_{x}(t)\text{ d}t$$ where $f_{x}$ is the probability density function of $T_x$.
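(Not from the book, just for concreteness:) here is a minimal numerical sketch under a hypothetical exponential lifetime model $S_x(t) = e^{-\mu t}$ with an assumed constant force of mortality $\mu = 0.05$, checking that $S_x = 1 - F_x$ and approximating $\mathbb{E}[T_x]$ from the density:

```python
import math

mu = 0.05  # assumed constant force of mortality (hypothetical model)

def F(t):   # cdf of T_x under the exponential model
    return 1.0 - math.exp(-mu * t)

def S(t):   # survival function S_x(t) = 1 - F_x(t)
    return math.exp(-mu * t)

def f(t):   # density f_x(t) = F_x'(t)
    return mu * math.exp(-mu * t)

def trapezoid(g, a, b, n=100_000):
    # simple trapezoid rule for \int_a^b g(t) dt
    h = (b - a) / n
    return h * (g(a) / 2 + sum(g(a + i * h) for i in range(1, n)) + g(b) / 2)

# E[T_x] = \int_0^\infty t f_x(t) dt; truncate at t = 400, where the tail is negligible
mean = trapezoid(lambda t: t * f(t), 0.0, 400.0)
print(mean)  # ≈ 1/mu = 20, the known exponential mean
```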

Throughout this textbook, it is assumed that $S_{x}$ is differentiable for all $t > 0$. The text also makes the following assumptions:

Assumption 2.2: $\lim_{t \to \infty}tS_{x}(t) = 0$

Assumption 2.3: $\lim_{t \to \infty}t^2S_{x}(t) = 0$

"These last two assumptions ensure that the mean and variance of the distribution of $T_x$ exist."

Now here's the main question: why is this true? I can no longer find where I asked this before, but I recall that only the converse actually holds (i.e., the implication as the authors state it is false), though I was never able to find a justification for why.

I also know for a fact that IF $\mathbb{E}[T_x]$ exists that $$\mathbb{E}[T_x] = \int_{0}^{\infty}S_{x}(t) \text{ d}t\text{,}$$ but this is of course, not helpful, since it assumes that $\mathbb{E}[T_x]$ exists to begin with.
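For what it's worth, here is a quick numerical sanity check of that identity (again under a hypothetical exponential model, not from the book): both $\int_0^\infty t f_x(t)\,\text{d}t$ and $\int_0^\infty S_x(t)\,\text{d}t$ come out to $1/\mu$:

```python
import math

mu = 0.05  # assumed constant force of mortality (hypothetical model)

def S(t):
    return math.exp(-mu * t)

def trapezoid(g, a, b, n=100_000):
    h = (b - a) / n
    return h * (g(a) / 2 + sum(g(a + i * h) for i in range(1, n)) + g(b) / 2)

# both integrals truncated at t = 400, where the exponential tail is negligible
mean_from_density  = trapezoid(lambda t: t * mu * math.exp(-mu * t), 0.0, 400.0)
mean_from_survival = trapezoid(S, 0.0, 400.0)
print(mean_from_density, mean_from_survival)  # both ≈ 1/mu = 20
```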

FYI: I am mentioning this in case we need tools from measure-theoretic probability to solve this question. Unfortunately, I don't know the topic very well.

Clarinetist
  • 19,519
  • 1
    Of course, simply positing that the existence of a 200 year old person is essentially impossible, or whatever other upper bound you like, establishes the existence of the mean and variance in an obvious way. – user24142 Feb 17 '15 at 19:44

2 Answers

6

The conditions given by the OP are not sufficient (as suspected by the OP in the question).

A well-known formula in probability theory states that for nonnegative random variables $Y$: $$ E[Y] = \int_0^\infty P(Y>y) dy , $$ see Integral of CDF equals expected value. This formula holds even if one of the sides above takes the value $+ \infty$.
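As a quick sanity check of this tail-sum formula (my addition, not part of the original argument), take $Y$ uniform on $\{1,\dots,6\}$, where the integral reduces to the finite sum $\sum_{y=0}^{5} P(Y>y)$:

```python
from fractions import Fraction

# Y uniform on {1, ..., 6} (a fair die)
probs = {k: Fraction(1, 6) for k in range(1, 7)}

mean = sum(k * p for k, p in probs.items())           # E[Y]
tail = sum(sum(p for k, p in probs.items() if k > y)  # sum_y P(Y > y)
           for y in range(6))

print(mean, tail)  # both equal 7/2
```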

Now we may use that formula to express the mean of the nonnegative random variable $T_x$: $$ E[T_x] = \int_0^\infty S_x(t) dt.$$ So the mean is finite iff the integral on the right-hand side is finite. Let $c= 2\log 2$. Then the example $S_x(t) = c\big((t+2) \log (t+2)\big)^{-1}$ for $t\in [0,\infty)$ (note that $S_x(0) = 1$ and $S_x$ is differentiable on $[0,\infty)$) shows: \begin{align} \lim_{t\to \infty} tS_x(t) & = c\lim_{t \to \infty} \frac{t}{(t+2)\log (t+2)} = 0, \\ \int_0^\infty S_x(t)\,dt & = c\int_0^\infty \big((t+2) \log (t+2)\big)^{-1} dt = c\int_2^\infty (s \log s)^{-1} ds = c\big[\log (\log s)\big]^\infty_2 = \infty, \end{align} where we used the substitution $s=t+2$ in the second integral. So we see that Assumption 2.2 is not sufficient for the existence of the mean.
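To see the counterexample numerically (my addition): $tS_x(t)$ decreases toward $0$, while the exact antiderivative $c\log\log(t+2)$ of $S_x$ grows without bound:

```python
import math

c = 2 * math.log(2.0)

def S(t):
    # S_x(t) = c / ((t+2) log(t+2)); note S_x(0) = 1
    return c / ((t + 2.0) * math.log(t + 2.0))

def integral_to(T):
    # exact: \int_0^T S_x(t) dt = c [log(log(t+2))]_0^T
    return c * (math.log(math.log(T + 2.0)) - math.log(math.log(2.0)))

tails = [t * S(t) for t in (1e2, 1e4, 1e6)]        # Assumption 2.2: t S_x(t) -> 0
partials = [integral_to(T) for T in (1e2, 1e6, 1e12, 1e100)]
print(tails)     # slowly decreasing toward 0
print(partials)  # growing without bound: E[T_x] = infinity
```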

The situation is similar for the variance, using the function \begin{equation} S_x(t) = c'\big((t+2)^2 \log (t+2) \big)^{-1}, \quad t \in [0,\infty), \end{equation} with $c' = 4 \log 2$ (so that $S_x(0) = 1$). It is known that the variance exists iff the second moment exists. For the second moment, $$ E[T_x^2] = \int_0^\infty P(T_x^2>t)\, dt = \int_0^\infty S_x(\sqrt{t})\,dt.$$ Almost the same calculation as above shows that Assumption 2.3 is not sufficient for the existence of the variance: \begin{align} \lim_{t\to \infty} t^2S_x(t) & = c'\lim_{t \to \infty} \frac{t^2}{(t+2)^2\log (t+2)} = 0, \\ \int_0^\infty S_x(\sqrt{t})\,dt & = c'\int_0^\infty \big((\sqrt{t}+2)^2 \log (\sqrt{t}+2)\big)^{-1} dt = c'\int_4^\infty \frac{\sqrt{s}-2}{\sqrt{s}}\big(s \log \sqrt{s}\big)^{-1} ds \\ & \geq \frac{c'}{2}\int_{16}^\infty \big(s \log \sqrt{s}\big)^{-1} ds = c'\big[\log (\log s)\big]_{16}^\infty = \infty. \end{align} We used the substitution $\sqrt{s} = \sqrt{t} +2$ (so $dt = \frac{\sqrt{s}-2}{\sqrt{s}}\,ds$), then the bound $\frac{\sqrt{s}-2}{\sqrt{s}} \geq \frac{1}{2}$ for $s \geq 16$, and finally $\log \sqrt{s} = \frac{1}{2}\log s$.
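Again a numerical illustration (my addition): after substituting $u=\sqrt{t}$, the second moment is $\int_0^\infty 2u\,S_x(u)\,du$, and its partial integrals keep growing even though $t^2S_x(t)\to 0$:

```python
import math

cp = 4 * math.log(2.0)

def S(t):
    # S_x(t) = c' / ((t+2)^2 log(t+2)); note S_x(0) = 1
    return cp / ((t + 2.0) ** 2 * math.log(t + 2.0))

def partial_second_moment(U, n=500_000):
    # trapezoid rule for \int_0^U 2 u S_x(u) du  (-> E[T_x^2] as U -> infinity)
    h = U / n
    g = lambda u: 2.0 * u * S(u)
    return h * (g(0.0) / 2 + sum(g(i * h) for i in range(1, n)) + g(U) / 2)

tails = [t * t * S(t) for t in (1e2, 1e4, 1e6)]    # Assumption 2.3: t^2 S_x(t) -> 0
partials = [partial_second_moment(U) for U in (1e2, 1e4, 1e6)]
print(tails)     # slowly decreasing toward 0
print(partials)  # growing without bound: E[T_x^2] = infinity
```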

Thomas Rippl
  • 1,274
  • 9
  • 13
  • Your examples of $S_x(t)$ are not differentiable at $t=2$. But I think this can easily be fixed; all that is needed is a smaller value of $c$ (or $c'$) and a function that is $1$ at $t=0$ and that has the same value and derivative as $c(t\log t)^{-1}$ (or $c'(t^2\log t)^{-1}$) at $t=2$. The end result is the same. – David K Feb 17 '15 at 14:28
  • @thomas - Could you change the answer so that $c$ and $c^{\prime}$ are chosen such that $S_x$ is differentiable? If your proof is correct, I will grant you the bounty. – Clarinetist Feb 17 '15 at 16:08
  • @thomas - Thank you for your changes! – Clarinetist Feb 17 '15 at 18:24
  • @Clarinetist: you are welcome. The older version had $S_x(t) = c(t \log t)^{-1}$ for $t\geq 2$ and $S_x(t)=1$ for $t \in [0,2)$, which was not differentiable at $t=2$. – Thomas Rippl Feb 18 '15 at 07:06
2

I will give a general rule relating the tail behavior to the existence of certain moments of a random variable. This rule could then be used to refine assumptions 2.2 and 2.3.

First, note that your IF claim is fine; in fact, the equality you mentioned holds even when the mean does not exist (both sides are then $+\infty$): \begin{align} \mathbb{E}\left[T_x\right] &= \int_{0}^{\infty}t\, dF_x(t)\\ &=\int_{0}^{\infty}\int_{0}^{\infty}1_{t>y}\,dy\,dF_x(t)\\ &=\int_{0}^{\infty}\int_{0}^{\infty}1_{t>y}\,dF_x(t)\,dy\\ &=\int_{0}^{\infty}Pr(T_x>y)\,dy\\ &=\int_{0}^{\infty}S_x(t)\,dt. \end{align} I am allowed to change the order of integration by Tonelli's theorem, since the integrand is nonnegative.

In general, one can establish that if $t^aS_x(t)\rightarrow 0$ for some $a>0$, then $E|T_x|^b<\infty$ for all $0<b<a$. To see this, note that (using integration by parts) \begin{align} \int_{0}^{n}t^b\, dF_x(t)&=-n^bPr(T_x>n)+\int_{0}^{n}bt^{b-1}S_x(t)\,dt. \end{align} Now, since $t^aS_x(t)\rightarrow 0$, for any $\epsilon >0$ we can choose $N=N(\epsilon)$ such that $Pr(T_x>t)<\frac{\epsilon}{t^a}$ for all $t>N$. In particular, $n^bPr(T_x>n)<\epsilon\, n^{b-a} \rightarrow 0$ as $n\to\infty$, because $b<a$. Thus, letting $n\to\infty$, \begin{align} \int_{0}^{\infty}t^b\, dF_x(t)&=\int_{0}^{N}bt^{b-1}S_x(t)\,dt+\int_{N}^{\infty}bt^{b-1}S_x(t)\,dt\\ &\leq \int_{0}^{N}bt^{b-1}\,dt + \int_{N}^{\infty}bt^{b-1}\frac{\epsilon}{t^a}\,dt\\ &<\infty, \end{align} where the first integral is finite because $S_x \leq 1$ (it equals $N^b$), and the second because $b-1-a<-1$.
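A numerical illustration of this rule (my addition, with an assumed Pareto-type tail): take $S_x(t)=(1+t)^{-3}$. Then $t^{a}S_x(t)\to 0$ for $a=2.5$, so the rule gives $E[T_x^b]<\infty$ for all $b<2.5$; for $b=2$ the exact value is $\int_0^\infty 2t(1+t)^{-3}\,dt = 1$:

```python
def S(t):
    # assumed Pareto-type tail: S_x(t) = (1+t)^(-3)
    return (1.0 + t) ** -3

def partial_moment(b, T, n=500_000):
    # trapezoid rule for \int_0^T b t^(b-1) S_x(t) dt
    h = T / n
    g = lambda t: b * t ** (b - 1.0) * S(t)
    return h * (g(0.0) / 2 + sum(g(i * h) for i in range(1, n)) + g(T) / 2)

# partial integrals of the second moment (b = 2 < a = 2.5) stabilize near 1
second = [partial_moment(2.0, T) for T in (1e2, 1e3, 1e4)]
print(second)
```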

Math-fun
  • 9,507
  • What does the notation $Pr(S_x > t)$ mean? Is $S_x$ a random variable introduced for this integration, and if so, what is its distribution? – David K Feb 17 '15 at 13:27
  • In what way did you contradict the "if" claim? If I remember predicate logic correctly, for any propositions $P$ and $Q$ the implication $P\Rightarrow Q$ is proved, not contradicted, if $Q$ is shown to be true. – David K Feb 17 '15 at 13:32
  • @DavidK Many thanks for the comments, I corrected the arguments. – Math-fun Feb 17 '15 at 14:58
  • @Clarinetist Many thanks for the comments. I do not agree with the author, in that I think we need $t^{1+\delta}S_x(t)\rightarrow 0$ with $\delta>0$ so that the mean exist which is stronger than $t^{1+0}S_x(t)\rightarrow 0$. About the second comment, I think this should not be a problem, since we are talking about $t$ being large (i.e. $t \rightarrow \infty$) – Math-fun Feb 17 '15 at 16:51
  • @Mehdi - Ah, I see. Thank you for the clarification. – Clarinetist Feb 17 '15 at 17:01
  • @Mehdi - Sorry; my comments were completely off. Had a complete "duh" moment. I should have realized that limit was being done as $t \to \infty$, haha. – Clarinetist Feb 17 '15 at 17:16