
Let $X_1\sim\operatorname{Exp}(\lambda)$ and $X_2\sim\operatorname{Exp}(\lambda)$ be two independent exponentially distributed random variables. Find the pdf of $Y = X_1-X_2$ by convolution.

My approach: integrate the product of their probability density functions, taking into account that the convolution is usually expressed as:

$$f_{Y}(y)=\int_{-\infty}^{\infty}f_{X1}(y-x)f_{X2}(x)\,\mathrm dx$$

$$f_{Y}(y)=\int \lambda e^{-\lambda(y-x)} \cdot \lambda e^{\lambda x}\,\mathrm dx$$

My initial thought was that although the convolution integral is usually written from $-\infty$ to $\infty$, for exponential distributions the limits would need to be restricted: since $f_{X_1}(y-x)$ and $f_{X_2}(x)$ must be positive, we need $y-x>0$ and $x>0$. Therefore $y-x>0 \Rightarrow x<y$, giving limits $[-\infty,y]$, and $x>0$, giving limits $[0,\infty]$.

The solution in the book for this convolution has limits of $[-\infty,y]$ and $[-\infty,0]$.

My doubt: is this convolution properly expressed, and what is the logic for one of the limits being $[-\infty,0]$ instead of $[0,\infty]$?

lber
  • 13
  • Answered at https://math.stackexchange.com/a/417333/321264. – StubbornAtom May 08 '20 at 14:59
  • Thanks for pointing that out. I think that post is different because it has different parameters while this one has the same parameter, and also because the solutions posted for that one involved limits of $[0,\infty]$. What I am asking in this post is about a book solution that I found to have limits of $[-\infty,0]$. I reframed the question to be about the convolution expression itself and also about the specific use of $[-\infty,0]$ in the limits. I hope this makes more sense and makes it different. – lber May 10 '20 at 00:56

4 Answers


If you let $X_1,X_2$ be independent r.v.s with density functions $f_{X_1}(x_1)$ and $f_{X_2}(x_2)$, then the convolution result comes from applying the law of total probability and independence. Let $Y=X_1+X_2$; then for $y$ in the range of $Y$ we have

\begin{eqnarray*} F_Y(y) &=& P(Y\leq y) \\ &=& P(X_1+X_2\leq y) \\ &=& \int_{-\infty}^{\infty} P(X_1+X_2\leq y|X_2=x_2)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} P(X_1+x_2\leq y|X_2=x_2)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} P(X_1+x_2\leq y)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} P(X_1\leq y-x_2)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty}\left(\int_{-\infty}^{y-x_2} f_{X_1}(x_1)dx_1\right)f_{X_2}(x_2)dx_2\\ \end{eqnarray*}

so you can see where the inner upper limit comes from. Then, differentiate w.r.t. $y$ to get the density $f_Y(y)$, making use of the Fundamental Theorem of Calculus:

\begin{eqnarray*} f_Y(y) &=& \frac{d}{dy}F_Y(y)\\ &=& \frac{d}{dy}\int_{-\infty}^{\infty}\left(\int_{-\infty}^{y-x_2} f_{X_1}(x_1)dx_1\right)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty}\left(\frac{d}{dy}\int_{-\infty}^{y-x_2} f_{X_1}(x_1)dx_1\right)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} f_{X_1}(y-x_2)f_{X_2}(x_2)dx_2 \end{eqnarray*}

So the range of integration is, in principle, $\mathbb{R}$. However, the densities may be zero outside of some subset of $\mathbb{R}$. In your case of two IID exponentials, $y\geq 0$, $f_{X_1}(x_1)=\lambda e^{-\lambda x_1}{\bf 1}_{\{x_1\geq 0\}}$ and $f_{X_2}(x_2)=\lambda e^{-\lambda x_2}{\bf 1}_{\{x_2\geq 0\}}$. Thus, the convolution becomes

\begin{eqnarray*} f_Y(y) &=& \int_{-\infty}^{\infty} f_{X_1}(y-x_2)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} \lambda e^{-\lambda(y-x_2)}{\bf 1}_{\{y-x_2\geq 0\}}\lambda e^{-\lambda x_2}{\bf 1}_{\{x_2\geq 0\}}dx_2\\ \end{eqnarray*}

The indicator functions show that $0\leq x_2 \leq y$. So the convolution becomes:

\begin{eqnarray*} f_Y(y) &=& \int_0^y \lambda e^{-\lambda(y-x_2)}\lambda e^{-\lambda x_2}dx_2\\ &=& \lambda^2e^{-\lambda y } \int_0^y dx_2\\ &=& \lambda^2 y e^{-\lambda y } \end{eqnarray*}

For $Y=X_1-X_2$ we repeat the above procedure: for $y$ in the range of $Y$ we have

\begin{eqnarray*} F_Y(y) &=& P(Y\leq y) \\ &=& P(X_1-X_2\leq y) \\ &=& \int_{-\infty}^{\infty} P(X_1-X_2\leq y|X_2=x_2)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} P(X_1-x_2\leq y|X_2=x_2)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} P(X_1-x_2\leq y)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} P(X_1\leq y+x_2)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty}\left(\int_{-\infty}^{y+x_2} f_{X_1}(x_1)dx_1\right)f_{X_2}(x_2)dx_2\\ \end{eqnarray*}

Differentiate w.r.t. $y$ to get $f_Y(y)$:

\begin{eqnarray*} f_Y(y) &=& \frac{d}{dy}F_Y(y)\\ &=& \frac{d}{dy}\int_{-\infty}^{\infty}\left(\int_{-\infty}^{y+x_2} f_{X_1}(x_1)dx_1\right)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty}\left(\frac{d}{dy}\int_{-\infty}^{y+x_2} f_{X_1}(x_1)dx_1\right)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} f_{X_1}(y+x_2)f_{X_2}(x_2)dx_2 \end{eqnarray*}

With $X_1$ and $X_2$ IID exponentials, $y\in (-\infty,\infty)$. Writing the densities with indicator functions gives

\begin{eqnarray*} f_Y(y) &=& \int_{-\infty}^{\infty} f_{X_1}(y+x_2)f_{X_2}(x_2)dx_2\\ &=& \int_{-\infty}^{\infty} \lambda e^{-\lambda(y+x_2)}{\bf 1}_{\{y+x_2\geq 0\}}\lambda e^{-\lambda x_2}{\bf 1}_{\{x_2\geq 0\}}dx_2\\ \end{eqnarray*}

The indicator functions show that $x_2 \geq (-y) \vee 0$, where "$\vee$" denotes the maximum. So the convolution becomes:

\begin{eqnarray*} f_Y(y) &=& \int_{-y\vee 0}^{\infty} \lambda e^{-\lambda(y+x_2)}\lambda e^{-\lambda x_2}dx_2\\ &=& \lambda^2e^{-\lambda y}\int_{-y\vee 0}^{\infty} e^{-2\lambda x_2}dx_2\\ &=& -\frac{1}{2}\lambda e^{-\lambda y} e^{-2\lambda x_2} \Big|_{-y\vee 0}^{\infty}\\ &=& \frac{1}{2}\lambda e^{-\lambda y} e^{-2\lambda(-y\vee 0)} \end{eqnarray*}

When $y<0$ we get $f_{Y}(y)=\frac{1}{2}\lambda e^{\lambda y}$. When $y\geq 0$ we get $f_Y(y)=\frac{1}{2}\lambda e^{-\lambda y}$.
Thus, $$ f_Y(y) = \frac{1}{2}\lambda e^{\lambda y}{\bf 1}_{\{y<0\}} + \frac{1}{2}\lambda e^{-\lambda y}{\bf 1}_{\{y\geq 0\}} =\frac{1}{2}\lambda e^{-\lambda |y|}$$
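As a sanity check on this closed form (my addition, not part of the derivation; a sketch in plain Python, with the rate `lam`, the seed, and the sample size chosen arbitrarily), one can simulate $Y = X_1 - X_2$ and compare the empirical CDF with the CDF obtained by integrating $f_Y(y)=\frac{1}{2}\lambda e^{-\lambda|y|}$:

```python
import random
import math

# Monte Carlo check: sample Y = X1 - X2 for independent Exp(lam) variables
# and compare the empirical CDF with the one implied by
# f_Y(y) = (lam/2) e^{-lam |y|}  (a Laplace density centered at 0).
random.seed(0)
lam = 2.0          # arbitrary rate for the check
n = 200_000        # arbitrary sample size
samples = [random.expovariate(lam) - random.expovariate(lam) for _ in range(n)]

def laplace_cdf(y, lam):
    # Integrating f_Y gives 0.5*e^{lam*y} for y < 0 and 1 - 0.5*e^{-lam*y} for y >= 0.
    if y < 0:
        return 0.5 * math.exp(lam * y)
    return 1.0 - 0.5 * math.exp(-lam * y)

for y in (-1.0, -0.3, 0.0, 0.3, 1.0):
    empirical = sum(s <= y for s in samples) / n
    assert abs(empirical - laplace_cdf(y, lam)) < 0.01
```

The simulation only corroborates the algebra; the tolerance of 0.01 is a loose bound for this sample size.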


$\exp(\lambda x)$ is unbounded and cannot describe the distribution of a random variable, so taken literally the problem statement's notation is off.

By the way, the usual $\text{pdf}$ of an exponential variable is $$\lambda e^{-\lambda x}, x\ge0.$$


Assuming standard exponential distributions, let $z:=x-y$. The domain of integration is the line $\{x-y=z\}$ restricted to the first quadrant of the $xy$-plane, i.e. $x\ge0$ and $y=x-z\ge0\implies x\ge z$.

Hence $$\text{pdf}_{X-Y}(z)=\lambda^2\int_{\max(0,z)}^\infty e^{-\lambda x}e^{-\lambda(x-z)}dx=\frac{\lambda^2}{2\lambda} e^{\lambda z}e^{-2\lambda\max(0,z)}=\frac\lambda2 e^{-\lambda|z|}.$$

Note that an even function was to be expected, since $X-Y$ and $Y-X$ have the same distribution.
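The integral above can also be checked numerically (an illustration, not part of the argument; plain Python, with an arbitrary rate `lam`, the integral approximated by a midpoint Riemann sum with a truncated upper limit):

```python
import math

# Approximate  lam^2 * ∫_{max(0,z)}^∞ e^{-lam x} e^{-lam (x - z)} dx
# with a midpoint Riemann sum and compare it with the closed form
# (lam/2) e^{-lam |z|}.
lam = 1.5  # arbitrary rate for the check

def pdf_diff_numeric(z, lam, upper=40.0, steps=200_000):
    lo = max(0.0, z)                   # the lower limit from the first-quadrant constraint
    h = (upper - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h         # midpoint rule
        total += math.exp(-lam * x) * math.exp(-lam * (x - z))
    return lam * lam * total * h

for z in (-2.0, -0.5, 0.0, 0.5, 2.0):
    exact = 0.5 * lam * math.exp(-lam * abs(z))
    assert abs(pdf_diff_numeric(z, lam) - exact) < 1e-6
```

The truncation at `upper=40.0` is safe here because the integrand decays like $e^{-2\lambda x}$.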


Let $X, Y$ be independent random variables with densities $f_{X}$ and $f_{Y}$, respectively.

Let $ Z = X- Y $ and $ S = X + Y $

then

$ f_{S}(s) = \int_{-\infty}^{\infty}f_{X}(x)f_{Y}(s-x)dx $

$ Z = X + (-Y) $

$ f_{Z}(z) = \int_{-\infty}^{\infty}f_{X}(x)f_{-Y}(z-x)dx $

$ f_{-Y}(z-x) = f_{Y}(x-z) $

$f_{Z}(z) = \int_{-\infty}^{\infty}f_{X}(x)f_{Y}(x-z)dx $

$ f_{X}(x) = f_{Y}(x) = \begin{cases} 0 \ \ \mbox{for} \ \ x < 0 \\ \lambda e^{-\lambda x} \ \ \mbox{for} \ \ x\geq 0\end{cases} $

For $z \le 0$ (so that $x - z \ge 0$ holds automatically for all $x \ge 0$):

$ f_{Z}(z) =\int_{0}^{\infty}\lambda e^{-\lambda x}\lambda e^{-\lambda(x-z)}dx=\lambda^2 e^{\lambda z} \int_{0}^{\infty}e^{-2\lambda x} dx =\lambda^2 e^{\lambda z}\left(-\frac{1}{2\lambda} e^{-2\lambda x} \right )_{0}^{\infty} = \frac{1}{2}\lambda e^{\lambda z}.$

By symmetry between $X$ and $Y$, $ f_{Z}(z) = f_{Z}(-z) $, so

$ f_{Z}(z) = \begin{cases} \frac{1}{2}\lambda e^{\lambda z} \ \ \mbox{for} \ \ z<0 \\ \frac{1}{2}\lambda e^{-\lambda z} \ \ \mbox{for} \ \ z \geq 0 \end{cases}$

$ f_{Z}(z) = \frac{\lambda}{2} e^{-\lambda |z|}.$
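A small numerical sketch (my addition; plain Python with an arbitrary rate `lam`) confirms that the density $f_Z(z)=\frac{\lambda}{2}e^{-\lambda|z|}$ is symmetric and integrates to 1:

```python
import math

# Check symmetry and total mass of f_Z(z) = (lam/2) e^{-lam |z|},
# written with the same two branches as in the derivation above.
lam = 3.0  # arbitrary rate for the check

def f_Z(z, lam):
    if z < 0:
        return 0.5 * lam * math.exp(lam * z)
    return 0.5 * lam * math.exp(-lam * z)

# Symmetry: f_Z(z) = f_Z(-z).
for z in (0.1, 1.0, 2.5):
    assert math.isclose(f_Z(z, lam), f_Z(-z, lam))

# Total mass via a midpoint Riemann sum over [-30, 30]; the tails beyond
# that interval are negligible for lam = 3.
steps = 300_000
h = 60.0 / steps
mass = sum(f_Z(-30.0 + (i + 0.5) * h, lam) for i in range(steps)) * h
assert abs(mass - 1.0) < 1e-6
```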

JCH
  • 371

If $X_1$ and $X_2$ are independent, the formula $$ f_{X_1+X_2}(y)=\int_{-\infty}^{\infty}f_{X_1}(y-x)f_{X_2}(x)\,\mathrm dx\tag1 $$ gives the density of the sum $X_1+X_2$. But if you want the density of the difference $X_1-X_2$, your book might be using this formula: $$ f_{X_1-X_2}(y)=\int_{-\infty}^{\infty}f_{X_1}(y-x)f_{X_2}(-x)\,\mathrm dx.\tag2 $$

If (2) is the formula your book is using, then plugging in $f$ in place of $f_{X_1}$ and $f_{X_2}$ gives $$ f_{X_1-X_2}(y)=\int_{-\infty}^{\infty}f(y-x)f(-x)\,\mathrm dx.\tag3$$ Here $f(t):=\lambda e^{-\lambda t}$ if $t>0$ and $f(t)=0$ otherwise. So plugging $y-x$ in place of $t$ gets us: $$ f(y-x)=\begin{cases} \lambda e^{-\lambda(y-x)}&y-x>0\ \leftrightarrow\ \color{red}{x<y}\\ 0&\text{otherwise} \end{cases}\tag4 $$ and substituting $-x$ in place of $t$ yields: $$ f(-x)=\begin{cases} \lambda e^{-\lambda(-x)}&-x>0\ \leftrightarrow\ \color{red}{x<0}\\ 0&\text{otherwise} \end{cases}\tag5 $$

Plugging (4) and (5) into (3) we get $$ f_{X_1-X_2}(y)=\int_{-\infty}^{\infty}\lambda e^{-\lambda(y-x)}I_{[-\infty,y]}(x)\,\lambda e^{\lambda x}I_{[-\infty,0]}(x)\,\mathrm dx\tag6 $$ This might explain the constraints $[-\infty,y]$ and $[-\infty,0]$ that your book uses.
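To see that the book's limits indeed recover the Laplace density, the integral in (6) can be evaluated numerically (a sketch, my addition; plain Python with `lam` arbitrary, and including the $\lambda^2$ factor contributed by the two densities). The two indicators $I_{[-\infty,y]}$ and $I_{[-\infty,0]}$ combine into a single effective upper limit of $\min(0,y)$:

```python
import math

# Evaluate  lam^2 * ∫_{-inf}^{min(0,y)} e^{-lam (y - x)} e^{lam x} dx
# (the book's limits combined) with a midpoint Riemann sum, truncating the
# lower limit, and compare with (lam/2) e^{-lam |y|}.
lam = 1.0  # arbitrary rate for the check

def book_integral(y, lam, lower=-40.0, steps=200_000):
    hi = min(0.0, y)                   # where both indicators are 1
    h = (hi - lower) / steps
    total = 0.0
    for i in range(steps):
        x = lower + (i + 0.5) * h      # midpoint rule
        total += math.exp(-lam * (y - x)) * math.exp(lam * x)
    return lam * lam * total * h

for y in (-1.5, -0.2, 0.0, 0.2, 1.5):
    exact = 0.5 * lam * math.exp(-lam * abs(y))
    assert abs(book_integral(y, lam) - exact) < 1e-6
```

The truncation at `lower=-40.0` is harmless because the integrand decays like $e^{2\lambda x}$ as $x\to-\infty$.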

grand_chat
  • 38,951