
Suppose $X$ is a normal random variable with expected value $0$ and variance $1$. How do we compute the density function of $Y = e^X$?

What I think: since the expected value is $0$ and the variance is $1$, $X$ is a standard normal variable, whose pdf is $\frac{1}{\sqrt{2\pi}}e^{-x^2 / 2}$.

Ashok
  • 1,963

2 Answers


How do we compute the density function of $\color{red}{Y=\mathrm e^X}$?

Very briefly, since this was already explained here and there on the site, an answer is: by trying to reach, for every (bounded measurable) function $u$, the identity $$ \mathrm E(u(Y))=\int u(y)f_Y(y)\mathrm dy.\tag{$\ast$} $$ The method is entirely automatic, since by definition of the distribution of $X$, $$ \mathrm E(u(Y))=\mathrm E(u(\mathrm e^X))=\int u(\mathrm e^x)f_X(x)\mathrm dx. $$ Using the change of variable $y=\mathrm e^x$ (what else?), one gets $x=\log y$ and $\mathrm dx=y^{-1}\mathrm dy$, which yields $$ \int u(\mathrm e^x)f_X(x)\mathrm dx=\int u(y)f_X(\log y)y^{-1}\mathrm dy.\tag{$**$} $$ Equating the RHS of $(*)$ and the RHS of $(**)$, this yields at once $$ \color{red}{f_Y(y)=f_X(\log y)y^{-1}} $$ for every $y>0$ (and $f_Y(y)=0$ otherwise, since $\mathrm e^X>0$ almost surely). Note that the proof uses nothing specific to the Gaussian densities, that no daunting erf functions appear then disappear, that the method is quite general and that one does not even use the exact form of the density $f_X$.
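
A quick numerical sanity check of the boxed formula, written as a sketch with NumPy and the standard-library `NormalDist` (the grid bounds, bin counts, and sample size below are arbitrary choices, not part of the answer):

```python
import numpy as np
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def f_Y(y):
    # f_Y(y) = f_X(log y) / y with f_X the standard normal pdf, y > 0
    return np.exp(-np.log(y) ** 2 / 2) / (y * np.sqrt(2 * np.pi))

# Check 1: integrating f_Y over (a, b] should give Phi(log b) - Phi(log a),
# since P(Y <= y) = P(X <= log y).
a, b = 0.01, 20.0
ys = np.linspace(a, b, 200_001)
vals = f_Y(ys)
h = ys[1] - ys[0]
numeric = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
exact = Phi(np.log(b)) - Phi(np.log(a))

# Check 2: a histogram of simulated exp(X) should track f_Y.
rng = np.random.default_rng(0)
samples = np.exp(rng.standard_normal(1_000_000))
counts, edges = np.histogram(samples, bins=100, range=(0.2, 4.0))
mids = (edges[:-1] + edges[1:]) / 2
empirical = counts / (len(samples) * np.diff(edges))

print(abs(numeric - exact))                    # essentially zero
print(np.abs(empirical - f_Y(mids)).max())     # small sampling noise
```

The same script works for any transformation $Y=g(X)$ with $g$ increasing: only the `np.log` (the inverse of $g$) and the Jacobian factor change.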

Did
  • 279,727

$y = e^x$ is a strictly increasing function of $x$ so

$$\Pr(Y \le y) = \Pr(e^X \le y) = \Pr(X \le \log_e y)$$

and you can handle the right hand side since you know the cumulative distribution function of $X$.

Then take the derivative with respect to $y$ to get the density of $Y$.

You know the derivative with respect to $x$ of $\frac{1}{2} \left(1 + \text{erf}[x/ \sqrt 2 ]\right)$ is $\frac{1}{\sqrt{2\pi}}e^{-x^2 / 2}$, so by the chain rule (using $\frac{\mathrm d}{\mathrm dy}\log_e y = \frac1y$) the derivative with respect to $y$ of $\frac{1}{2} \left(1 + \text{erf}[\log_e y / \sqrt 2 ]\right)$ is $\frac{1}{y\sqrt{2\pi}}e^{-(\log_e y)^2 / 2}$.
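
The CDF-then-differentiate route can be checked numerically: differentiate $F_Y(y)=\Phi(\log_e y)$ by a central difference and compare with the closed-form density above. This is only an illustrative sketch; the grid and step size are arbitrary choices:

```python
import numpy as np
from statistics import NormalDist

Phi = NormalDist().cdf  # Phi(x) = (1 + erf(x / sqrt(2))) / 2

def F_Y(y):
    # CDF of Y = e^X: P(Y <= y) = P(X <= log y) = Phi(log y), y > 0
    return Phi(np.log(y))

def f_Y(y):
    # closed-form derivative from the answer
    return np.exp(-np.log(y) ** 2 / 2) / (y * np.sqrt(2 * np.pi))

ys = np.linspace(0.2, 4.0, 50)
h = 1e-6
numeric = np.array([(F_Y(y + h) - F_Y(y - h)) / (2 * h) for y in ys])
print(np.abs(numeric - f_Y(ys)).max())  # agrees closely
```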

Henry
  • 157,058
  • As already mentioned, this amounts to using a path that goes from the pdf of X to the cdf of X, then from the cdf of X to the cdf of Y, and finally from the cdf of Y to the pdf of Y, while there exists a way to go directly from the pdf of X to the pdf of Y. – Did Jan 23 '12 at 08:19
  • @Henry So we first obtained the cdf of $X$, then converted it to the corresponding cdf of $Y$ by replacing $x$ with $\log y$? And after that, in order to obtain the pdf of $Y$, we take the derivative of the cdf to arrive at the pdf? So that gives us the final answer? – Probabilityman Jan 23 '12 at 20:52
  • @Probabilityman: Yes. I prefer it when it works (the integral is not always so tractable) because I can keep track of what is going on, and can deal with some non-monotone functions, but Didier Piau seems to prefer a previously described shortcut for a monotone and differentiable function $f(x)$ with density: $$ \left| \frac{1}{f'(f^{-1}(y))} \right| \cdot p_X(f^{-1}(y))$$ – Henry Jan 23 '12 at 22:13
  • Henry: When mentioning another user in a comment, it is recommended to use the @, to signal one's comment to them (except when said user is the author of the post to which said comment is attached). – Did Jan 29 '12 at 13:15