24

Given the pdfs of $I$ and $R$ (where $I$ and $R$ are independent random variables), how does one find the cdf of $W = I^2 R$?

Where,

$$ \begin{align} f_I(i)&=6i(1-i), &0 \leq i \leq 1 \\ f_R(r)&=2r, &0 \leq r\leq 1. \end{align} $$

Did
  • 279,727
  • 6
    One approach is to use convolution: $\log(W) = 2\log(I) + \log(R)$. – Shai Covo Apr 04 '11 at 19:10
  • 1
    @Shai: How would you back transform once you have pdf or cdf of log(W)? –  Apr 04 '11 at 21:08
  • 1
    Also note that $R$ is equal in distribution to $\sqrt{U}$, where $U$ is uniform$(0,1)$. Hence $\log(R)$ is distributed as $\log(U)/2$, and in turn as $-X/2$, where $X$ is exponential$(1)$. – Shai Covo Apr 04 '11 at 21:15
  • 1
    @H_S: For $0<x<1$, $F_W (x):={\rm P}(W \leq x) = {\rm P}(\log (W) \leq \log(x)) := F_{\log(W)} (\log(x))$. – Shai Covo Apr 04 '11 at 21:22
  • @H_S: Is this homework? – Shai Covo Apr 04 '11 at 21:53
  • 3
    Another approach: By the law of total probability (conditioning on $I$), for any $0 \leq x \leq 1$, $$ {\rm P}(W \le x) = \int_0^1 {{\rm P}(I^2 R \le x|I = s)6s(1 - s)\,{\rm d}s} = \int_0^1 {{\rm P}(R \le x/s^2)6s(1 - s)\,{\rm d}s}. $$ – Shai Covo Apr 04 '11 at 21:54
  • 2
    You may also condition on $R$, getting $$ {\rm P}(W \le x) = \int_0^1 {{\rm P}(I^2 R \le x|R = s)2s\,{\rm d}s} = \int_0^1 {{\rm P}(I \leq \sqrt{x/s})2s\,{\rm d}s}. $$ – Shai Covo Apr 04 '11 at 21:55
  • So, you have at least 3 methods to check yourself... – Shai Covo Apr 04 '11 at 21:58
  • 1
    Since Shai Covo asked if this was homework (without getting a reply), I will point out that this is an end-of-chapter problem in Sheldon Ross's A First Course in Probability (Problem 6.29 in 6th edition). – Dilip Sarwate Apr 01 '12 at 13:49
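
Carrying the conditioning on $I$ one step further (a quick sketch of the approach from the comments, using $F_R(t)=t^2$ for $0\leq t\leq 1$ and the fact that $x/s^2 \geq 1$ exactly when $s \leq \sqrt{x}$): for $0 \leq x \leq 1$,

$$ F_W(x) = \int_0^{\sqrt{x}} 6s(1-s)\,{\rm d}s + \int_{\sqrt{x}}^1 \frac{x^2}{s^4}\,6s(1-s)\,{\rm d}s = \left(3x - 2x^{3/2}\right) + \left(3x^2 + 3x - 6x^{3/2}\right) = 3x^2 - 8x^{3/2} + 6x, $$

which is consistent with the density obtained in the answers below.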

3 Answers

52

The simplest and surest way to compute the distribution density or probability of a random variable is often to compute the means of functions of this random variable. In the case at hand, if one can write $\mathrm E(g(W))$ as $$ \color{blue}{\mathrm E(g(W))=\int g(w)f(w)\mathrm{d}w}, $$ for every bounded measurable function $g$, then one can be sure that $f$ is the density of the distribution of $W$. So, in a way, the functions $g$ play the role of a dummy variable and one wants the equality above to hold for every $g$.

Naturally $W=I^2R$ hence $\mathrm E(g(W))$ is a priori a double integral, but one can be sure that a change of variable will save the day. So, applying the definitions, $\mathrm E(g(W))=\mathrm E(g(I^2R))$ and $$ \mathrm E(g(I^2R))=\iint g(x^2y)\cdot[0\leqslant x\leqslant 1]\cdot6x(1-x)\cdot[0\leqslant y\leqslant 1]\cdot2y\cdot\mathrm{d}x\mathrm{d}y, $$ where, for every property $\mathfrak{A}$, Iverson bracket $[\mathfrak{A}]$ denotes $1$ if $\mathfrak{A}$ holds and $0$ otherwise.

(Begin of rant: no, I do not like to put the limits of the domain of integration on the integral signs, and yes, I prefer to use the notation $[\mathfrak{A}]$ or its cousin $\mathbb{1}_\mathfrak{A}$ because they are more systematic and, at least to me, less error prone. End of rant.)

Now, what change of variable? For one of the two new variables, we want $w=x^2y$, of course. For the other, a sensible choice (but not the only one) is $z=x$. The new domain is $0\leqslant w\leqslant z^2\leqslant 1$ and the Jacobian is given by $\mathrm{d}x\mathrm{d}y=z^{-2}\mathrm{d}w\mathrm{d}z$, hence $$ \mathrm E(g(W))=\int g(w)[0\leqslant w\leqslant 1]\left(\int [w\leqslant z^2\leqslant 1]\cdot6z(1-z)(2wz^{-2})z^{-2}\,\mathrm{d}z\right)\mathrm{d}w. $$ By identification, the density $f(w)$ is the quantity enclosed in the parentheses, that is, for every $0\leqslant w\leqslant1$, $$ f(w)=\int [w\leqslant z^2\leqslant 1]\cdot6z(1-z)(2wz^{-2})z^{-2}\,\mathrm{d}z=12w\int_{\sqrt{w}}^1 z^{-3}(1-z)\,\mathrm{d}z. $$ Finally, $$ \color{red}{f(w)=6(1-\sqrt{w})^2\cdot[0\leqslant w\leqslant1]}. $$
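
Since the question asks for the cdf rather than the density, one can integrate (a short follow-up sketch, not part of the original answer): for $0\leqslant w\leqslant 1$,

$$ F_W(w)=\int_0^w 6(1-\sqrt{t})^2\,\mathrm{d}t = 6w - 8w^{3/2} + 3w^2, $$

and $F_W(1)=6-8+3=1$ confirms that $f$ is indeed a probability density.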

Stefan Hansen
  • 25,582
  • 7
  • 59
  • 91
Did
  • 279,727
  • 8
    If ever I read a short and civil rant this one must have been it. +1 for so many reasons that I don't even try to enumerate them. – t.b. Jun 04 '11 at 17:26
  • 2
    Maybe you need to add an Iverson bracket to the last displayed equation to remind the reader that $f(w) = 0$ if $w < 0$ or $w > 1$? – Dilip Sarwate Apr 01 '12 at 13:58
  • "$ \mathrm E(g(W))=\int g(w)f(w)\mathrm{d} $ for every bounded measurable function $g$, then one can be sure that $f$ is the density of the distribution of $W$." may I ask you for free in access reference or sketch of proof for this? Thanks in advance! – Stephen Dedalus Sep 01 '14 at 12:02
  • 1
    @StephenDedalus One direction is obvious. In the other direction, note that the functions $h_t:w\mapsto\exp(itw)$ are bounded and continuous, hence if the condition holds for every bounded measurable function it holds in particular for every $h_t$; then one knows the Fourier transform of the distribution of $W$, QED. – Did Sep 01 '14 at 12:58
  • @Did: Is this requirement of f being the density of the distribution equivalent to f being Absolutely Continuous? – MSIS Sep 29 '21 at 00:45
4

The probability that $W = w$ is the probability that $I^2 R = w$, so in terms of densities:

$$f_W(w) = \int \delta(w - i^2 r) f_{I,R}(i, r) \, di \, dr$$

Independence means that $f_{I,R}(i, r) = f_I(i) f_R(r)$.

(I suggest doing the $r$ integral first; the delta-function transformation is easier that way.)

Changing to the cumulative distribution function is just integration.

$$F_W(w_0) = \int_0^{w_0} f_W(w) dw.$$

Of course, you can plug the first one into the second and do the $w$ integral first. This is nice as it handles the delta function quite easily.
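
For concreteness, here is how that $r$ integral works out (a sketch, not part of the original answer): using $\delta(w - i^2 r) = \delta(r - w/i^2)/i^2$ and the constraint $0 \le w/i^2 \le 1$, for $0 \le w \le 1$,

$$ f_W(w) = \int_{\sqrt{w}}^1 f_I(i)\, f_R\!\left(\frac{w}{i^2}\right)\frac{1}{i^2}\, di = \int_{\sqrt{w}}^1 6i(1-i)\,\frac{2w}{i^2}\,\frac{1}{i^2}\, di = 12w\int_{\sqrt{w}}^1 \frac{1-i}{i^3}\, di = 6\left(1-\sqrt{w}\right)^2. $$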

wnoise
  • 2,281
  • 1
    @H_S: Is this better? – wnoise Apr 04 '11 at 19:14
  • 1
    @fabian: Heh. They do, but you have to be extremely careful with adjusting the limits of integration that the delta function picks out. – wnoise Apr 04 '11 at 19:42
  • @fabian: Mathematica fails when doing the r integral first, but works fine for the i integral, the opposite of what I'd expect. – wnoise Apr 04 '11 at 20:24
  • 1
    @wnoise: it is usually not a good sign if the value of an integral depends on the order on which the integral is performed. Could you expand your answer (giving the final result) and maybe indicate why the special order is necessary? – Fabian Apr 04 '11 at 20:35
  • 1
    @fabian: The value really doesn't depend on the order -- Mathematica isn't exactly bug-free. It appears to be taking the wrong branch of a square root, even with Assumptions -> w > 0 && w < 1 && i > 0 && i < 1 && r > 0 && r < 1. The failure is that it gives a negative answer, even though all quantities are positive. (I had the order of the r and i backwards, so that actually is as I expect). – wnoise Apr 04 '11 at 20:49
  • 1
    In the past, I was successful in solving this variety of delta function integrals in Mathematica using one of the 'limit definitions' of the delta function. In my case, computing the integral using the Lorentzian limiting function and then taking this result and applying the delta function limit to the Lorentzian parameters. – phdmba7of12 Oct 30 '17 at 14:48
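
Independently of any CAS quirks discussed above, a quick Monte Carlo check confirms the closed form. This is a hypothetical Python sketch (not from the original thread); it uses the fact that $f_I$ is the Beta$(2,2)$ density and that $R$ is distributed as $\sqrt{U}$ with $U$ uniform, as Shai Covo noted, and compares the empirical cdf of $W=I^2R$ with $F_W(w)=6w-8w^{3/2}+3w^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# f_I(i) = 6 i (1 - i) on [0, 1] is the Beta(2, 2) density
I = rng.beta(2.0, 2.0, size=n)
# f_R(r) = 2 r on [0, 1]: R has the same distribution as sqrt(Uniform(0, 1))
R = np.sqrt(rng.uniform(size=n))
W = I**2 * R

w = np.linspace(0.05, 0.95, 19)
empirical = np.array([(W <= x).mean() for x in w])   # empirical cdf of W
closed_form = 6*w - 8*w**1.5 + 3*w**2                # cdf from f(w) = 6 (1 - sqrt(w))^2

print(np.abs(empirical - closed_form).max())         # roughly 1e-3 at this sample size
```
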
0

Here $\Theta(x)$ and $\delta(x)$ denote the Heaviside step function and the Dirac delta function, respectively.

\begin{align}
{\rm P}(W)&=\frac{{\rm d}}{{\rm d}W}\int_{0}^{W}{\rm P}(t)\,{\rm d}t
=\frac{{\rm d}}{{\rm d}W}\int_{0}^{1}6I(1 - I)\int_{0}^{1}2R\,\Theta(W - I^{2}R)\,{\rm d}R\,{\rm d}I
\\[3mm]&=
12\int_{0}^{1}I(1 - I)\int_{0}^{1}R\,\delta(W - I^{2}R)\,{\rm d}R\,{\rm d}I
\\[3mm]&=
12\int_{0}^{1}I(1 - I)\int_{0}^{1}R\,\frac{\delta(R - W/I^{2})}{I^{2}}\,{\rm d}R\,{\rm d}I
\\[3mm]&=
12\int_{0}^{1}\frac{1 - I}{I}\,\frac{W}{I^{2}}\int_{0}^{1}\delta\!\left(R - \frac{W}{I^{2}}\right)\,{\rm d}R\,{\rm d}I
\\[3mm]&=
12W\int_{0}^{1}\frac{1 - I}{I^{3}}\,\Theta\!\left(1 - \frac{W}{I^{2}}\right)\,{\rm d}I
=12W\int_{0}^{1}\frac{1 - I}{I^{3}}\,\Theta\!\left(I - \sqrt{W}\right)\,{\rm d}I
\\[3mm]&=
12W\,\Theta(1 - W)\int_{\sqrt{W}}^{1}\frac{1 - I}{I^{3}}\,{\rm d}I
=12W\,\Theta(1 - W)\,\frac{\left(1 - \sqrt{W}\right)^{2}}{2W}
\end{align}

$$\color{#00f}{\large {\rm P}(W) = \Theta(W)\,\Theta(1 - W)\,6\left(\sqrt{W} - 1\right)^{2}} $$


Felix Marin
  • 89,464