
Intuitively, I understand that if $Y$ is a constant random variable and $X$ is another random variable, then $X$ and $Y$ are independent.

However, I can't produce a formal proof, because I can't show that their joint density function is the product of a function of $x$ alone and a function of $y$ alone, and I don't see a similar method that works.

(For example, what is the density function of a constant random variable?)

Can you give me a hint towards a proof?

HeMan

4 Answers


$X$ and $Y$ are independent if and only if $P(X\in A, Y\in B)=P(X\in A)P(Y\in B)$ for all $A,B$.

Assume $Y=y$ for some $y\in\mathbb{R}$. Then $$P(X\in A, Y\in B)=\left\{\begin{array}{ll}0&\mathrm{if}\,\,y\notin B,\\ P(X\in A)&\mathrm{if}\,\,y\in B\end{array}\right.$$

But notice $P(Y\in B)=1$ if $y\in B$ and $P(Y\in B)=0$ if $y\notin B$.
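
Written out, the two cases combine as $$P(X\in A, Y\in B)=\left\{\begin{array}{ll}0=P(X\in A)\cdot 0&\mathrm{if}\,\,y\notin B,\\ P(X\in A)=P(X\in A)\cdot 1&\mathrm{if}\,\,y\in B,\end{array}\right.$$ so in either case $P(X\in A, Y\in B)=P(X\in A)P(Y\in B)$, which is exactly the defining property of independence.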

user295959

Hint: Work with the cumulative distribution functions. Show that for all $x$ and $y$ we have $\Pr(X\le x, Y\le y)=\Pr(X\le x)\Pr(Y\le y)$.

Note that if $Y=k$ with probability $1$, then $F_Y(y)=0$ if $y\lt k$, and $F_Y(y)=1$ if $y\ge k$.
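
Spelled out, with $Y=k$ almost surely: for all $x$ and $y$, $$\Pr(X\le x, Y\le y)=\left\{\begin{array}{ll}0&\mathrm{if}\,\,y\lt k,\\ \Pr(X\le x)&\mathrm{if}\,\,y\ge k,\end{array}\right.$$ and in both cases this equals $F_X(x)F_Y(y)$, since $F_Y(y)$ is $0$ or $1$ respectively.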

André Nicolas

Here is a fun proof using (introductory) measure theory. $\newcommand{\ind}{\perp\kern-5pt\perp}$

Short version

Let $(\Omega, \mathcal{F}, P)$ be a probability space, $C : \Omega \to \Psi$ a constant random variable, $X: \Omega \to \Psi$ an arbitrary random variable, and $\sigma_X$ the $\sigma$-field generated by $X$. Note that $\sigma_C=\{\emptyset, \Omega \}$. But since $\Omega$ and $\emptyset$ are independent of any other event in $\mathcal{F}$, we have $\sigma_C \ind \sigma_X$. Therefore, $C \ind X$.

Long version

(Assumes almost$^\star$ no prior knowledge of measure theory.) First, given some probability space $(\Omega, \mathcal{F}, P)$, note that $\Omega$ and $\emptyset$ are independent of any other event $A \in \mathcal{F}$. To see this, consider the definition of independent events, which is that $A \ind B$ if $P(A \cap B) = P(A)P(B)$. Now observe that $P(A \cap \Omega) = P(A) = P(A) \cdot 1 = P(A) P(\Omega)$, so $\Omega$ is independent of any event in $\mathcal{F}$. A similar argument holds for $\emptyset$.
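
The $\emptyset$ case, written out: since $A\cap\emptyset=\emptyset$, $$P(A \cap \emptyset) = P(\emptyset) = 0 = P(A)\cdot 0 = P(A)P(\emptyset),$$ so $\emptyset$ is likewise independent of every event in $\mathcal{F}$.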

Next, note that if $C : \Omega \to \Psi$ is a constant random variable, then $\sigma_C$, the $\sigma$-field generated by $C$, is trivial. In other words, $\sigma_C = \{ \emptyset, \Omega \}$. To see this, consider that the definition of the $\sigma$-field generated by a random variable $C$ is $\sigma_C := \{ C^{-1}(B): B \in \mathcal{B}(\Psi) \}$, where $\mathcal{B}(\Psi)$ are the Borel sets of $\Psi$. Then note that if $C$ takes on the constant value $c_0 \in \Psi$, then $C^{-1}(B) = \Omega$ if $c_0 \in B$, and otherwise $C^{-1}(B) = \emptyset$.
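
For a concrete illustration (my own example, taking $\Psi=\mathbb{R}$ and the constant value $c_0=5$): $$C^{-1}\big((0,1)\big)=\emptyset, \qquad C^{-1}\big((4,6)\big)=\Omega,$$ and in the same way the preimage of every Borel set is either $\emptyset$ or $\Omega$, so indeed $\sigma_C=\{\emptyset,\Omega\}$.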

Now note that $\sigma_C$ and $\sigma_X$ must be independent $\sigma$-fields for any random variable $X$. To see this, consider that two $\sigma$-fields $\mathcal{G}$ and $\mathcal{H}$ are defined to be independent if the events $G$ and $H$ are independent for any $G \in \mathcal{G}, H \in \mathcal{H}$. We want to show this holds when $\mathcal{G} = \sigma_X$ and $\mathcal{H}=\sigma_C$. But we have already determined that $\sigma_C = \{ \Omega, \emptyset \}$, and that the events $\Omega$ and $\emptyset$ are independent of all other events (including the events in $\sigma_X$). So we are done.

Finally, note that two random variables $X, Y$ are independent if the $\sigma$-fields generated by them are independent. In other words, $\sigma_X \ind \sigma_Y \implies X \ind Y$. To see this, recall the definition of $X \ind Y$, which is $\forall B, B' \in \mathcal{B}(\Psi),\ X^{-1}(B) \ind Y^{-1}(B')$. But by construction, $X^{-1}(B) \in \sigma_X$ and $Y^{-1}(B') \in \sigma_Y$, and those events are independent by assumption.

Footnotes

$\star$: The only prerequisites are the definitions of (1) a $\sigma$-field and (2) the Borel sets. The former is introductory and can be looked up. For some sense of the latter, simply consider $\mathcal{B}(\mathbb{R})$, which is the smallest $\sigma$-field that contains all the intervals.

ashman

Show a more general result: if $Y$ is a random variable taking the value $c$ with probability $1$, then it is independent of any random variable $X$. Let $k \neq c$ be a constant.

$\Pr(Y=c)=1$, so $\Pr (B \cap \{Y=c\}) = \Pr (B)$ for all events $B$. In particular, setting $B=\{ X=x \}$ gives $\Pr(X = x, Y = c) = \Pr (X=x) = \Pr (X=x) \Pr (Y=c)$.

$\Pr(Y=k)=0$, so $\Pr (B \cup \{Y=k\}) = \Pr (B)$ for all events $B$. In particular, setting $B=\{ X=x \}$ and using the inclusion-exclusion rule gives $\Pr(X = x, Y = k) = \Pr (X=x) + \Pr (Y=k) - \Pr (\{X=x\} \cup \{Y=k\}) = \Pr (X=x) + 0 - \Pr (X=x) = 0 = \Pr (X=x) \Pr(Y=k)$.
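
Combining the two cases (for discrete $X$, as the pointwise probabilities above implicitly assume): for every $x$ and every real $y$, $$\Pr(X=x, Y=y)=\Pr(X=x)\Pr(Y=y),$$ which is exactly the factorization that defines independence of discrete random variables.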