
Let $X,Y\in\mathbb R_{\ge 0}$ be random variables. If $X$ and $Y$ are independent, recall that $\mathbb E[XY]=\mathbb E[X]\mathbb E[Y]$ (using the convention $0\cdot\infty=0$, cf. Theorem 2.1.9 in Durrett's Probability: Theory and Examples, 4th Edition, 2014). This means that we have the following trichotomy (a short verification follows the list):

  1. $\mathbb E[XY]=\infty$ if and only if ($\mathbb E[X]=\infty$ and $\mathbb E[Y]>0$) or ($\mathbb E[X]>0$ and $\mathbb E[Y]=\infty$).
  2. $\mathbb E[XY]\in\mathbb R_{>0}$ if and only if $\mathbb E[X],\mathbb E[Y]\in\mathbb R_{>0}$.
  3. $\mathbb E[XY]=0$ if and only if $\mathbb E[X]=0$ or $\mathbb E[Y]=0$.

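All three cases are read off from $\mathbb E[XY]=\mathbb E[X]\,\mathbb E[Y]$ by arithmetic in $[0,\infty]$: with the convention $0\cdot\infty=0$, for $a,b\in[0,\infty]$ we have
$$ab=\infty\iff(a=\infty\text{ and }b>0)\text{ or }(a>0\text{ and }b=\infty),\qquad ab=0\iff a=0\text{ or }b=0,$$
and $ab\in\mathbb R_{>0}$ exactly in the remaining case $a,b\in\mathbb R_{>0}$.
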
I'm looking for the most general extension of this result to conditional independence, so let $X$ and $Y$ be conditionally independent given a $\sigma$-algebra $\mathcal F$. Since I want to conclude that $X$ and $Y$ are integrable (or something weaker), I do not want to assume integrability, so the existence of the conditional expectations $\mathbb E[X|\mathcal F]$, $\mathbb E[Y|\mathcal F]$ has to be justified before use. Moreover, since conditional expectations cannot take the value $\infty$, meaning $\mathbb E[XY|\mathcal F]\in\mathbb R$ almost surely whenever it exists, the result has to be modified to take this into account.

To illustrate what I mean, I provide two results, the first easy to derive and the second well-known.

  1. If we have $\mathbb E[XY]=0$, then we have $\mathbb P(XY=0)=1$, i.e. almost surely $X=0$ or $Y=0$. We cannot use $\mathbb E[X|\mathcal F]$ and $\mathbb E[Y|\mathcal F]$, but can we use something similar to derive conclusions?
  2. If we have $\mathbb E[X],\mathbb E[Y]\in\mathbb R_{\ge 0}$, then we have $\mathbb E[XY]\in\mathbb R_{\ge 0}$ (by the above), and further $\mathbb E[XY|\mathcal F]=\mathbb E[X|\mathcal F]\mathbb E[Y|\mathcal F]$ almost surely: Proposition 13 on page 137 in Probability Theory by Rao and Swift (2nd Edition, 2006) gives $\mathbb E[Y|X,\mathcal F]=\mathbb E[Y|\mathcal F]$ almost surely, which then yields $\mathbb E[XY|\mathcal F]=\mathbb E[X\,\mathbb E[Y|X,\mathcal F]|\mathcal F]=\mathbb E[X\,\mathbb E[Y|\mathcal F]|\mathcal F]=\mathbb E[X|\mathcal F]\,\mathbb E[Y|\mathcal F]$ almost surely (each step is justified in the display after this list).
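
For completeness, here is the chain of equalities from item 2, with all equalities holding almost surely:
$$\mathbb E[XY|\mathcal F]=\mathbb E\big[\mathbb E[XY|X,\mathcal F]\,\big|\,\mathcal F\big]=\mathbb E\big[X\,\mathbb E[Y|X,\mathcal F]\,\big|\,\mathcal F\big]=\mathbb E\big[X\,\mathbb E[Y|\mathcal F]\,\big|\,\mathcal F\big]=\mathbb E[X|\mathcal F]\,\mathbb E[Y|\mathcal F],$$
using, in order, the tower property, that $X$ is $\sigma(X,\mathcal F)$-measurable ("pulling out what is known"), the identity $\mathbb E[Y|X,\mathcal F]=\mathbb E[Y|\mathcal F]$ from Proposition 13, and pulling out the $\mathcal F$-measurable factor $\mathbb E[Y|\mathcal F]$.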

But I'm under the very strong impression that much more can be said here, hence the question.

Matija
  • Fun Fact: Proposition 13 only addresses non-negative random variables, for reasons beyond my grasp. I do have to point out, though, that this is the only one of four (or five) probability books that presents this result. So, no front, mad respect. – Matija Oct 14 '22 at 01:27
  • @Matija By linearity, Proposition 13 is true for integrable real-valued $X$ and $Y$. To get direct extensions, you can assume that conditional probability kernels exist. – Mason Oct 15 '22 at 02:43
  • Thanks for the hint! Yes, this is a starting point. But then we still have to deal with the fact that the conditional expectations may not exist: we do have the pointwise expectations (possibly infinite), but the possibly infinite expectations and the non-existence of the conditional expectations remain unaddressed. How come this does not pop up in the literature (also the product kernel question)? That's conditional independence 101, right? – Matija Oct 15 '22 at 07:00
  • If $X$ takes values in $[0, \infty)$, the conditional expectation $E(X \mid \mathcal{F})$ does exist by the Radon-Nikodym theorem. – Mason Oct 15 '22 at 16:28
  • Do you have a reference for me? The RN theorem only applies if $\mathbb E[\unicode{120793}_E X]<\infty$ for $E\in\mathcal F$, right? I tried to find a broad theory covering infinite conditional expectations, unsuccessfully. The most general definition I found is still only valid for $\mathbb E[X|\mathcal F]\in\mathbb R$. Example: Toss a fair coin $X\in\{0,1\}$, then choose $Y=0$ on $X=0$ and $Y=Z$ on $X=1$, where $Z\in\mathbb Z_{\ge 0}$ (independent of $X$) is not integrable. Now, we clearly have "$\mathbb E[Y|X]\in\{0,\infty\}$." – Matija Oct 16 '22 at 11:16

1 Answer


Consider a random variable $B\in\mathbb Z$ such that $\mathbb E[\unicode{120793}\{B\ge 0\}B]=\mathbb E[-\unicode{120793}\{B\le 0\}B]=\infty$. Then for $Z=\exp(B)>0$ we have $\mathbb E[Z]=\mathbb E[1/Z]=\infty$ using the convexity bound $e^x\ge x$ (so $\mathbb E[Z]\ge\mathbb E[\unicode{120793}\{B\ge 0\}B]=\infty$, and analogously for $1/Z=e^{-B}$). Let $R,S\in\{0.5,2\}$ be independent of each other and of $B$, each taking the value $0.5$ with probability $2/3$, so that $\mathbb E[R]=\mathbb E[S]=1$. Finally, let $X=RZ$ and $Y=S/Z$; then we have $\mathbb E[X]=\mathbb E[R]\mathbb E[Z]=\infty$, $\mathbb E[Y]=\mathbb E[S]\mathbb E[1/Z]=\infty$ and $\mathbb E[XY]=\mathbb E[RS]=\mathbb E[R]\mathbb E[S]=1$.
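
A quick Monte Carlo sanity check of this construction (a sketch of mine, not part of the argument, with one substitution: to keep $\exp(B)$ within floating-point range it uses $B=\pm W$ with a fair sign and $W\sim\mathrm{Exp}(1)$ instead of an integer-valued $B$, which still gives $\mathbb E[e^B]=\mathbb E[e^{-B}]=\infty$). The running mean of $XY$ settles near $1$, while the running means of $X$ and $Y$ typically keep drifting upward with the sample size:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**6

    # B = sign * W with W ~ Exp(1): E[exp(B)] = E[exp(-B)] = infinity,
    # since E[1{B >= 0} exp(B)] = (1/2) * integral of exp(w) exp(-w) dw over [0, inf).
    B = rng.choice([-1.0, 1.0], size=n) * rng.exponential(1.0, size=n)
    Z = np.exp(B)

    # R, S in {1/2, 2}, each equal to 1/2 with probability 2/3, so E[R] = E[S] = 1.
    R = rng.choice([0.5, 2.0], size=n, p=[2/3, 1/3])
    S = rng.choice([0.5, 2.0], size=n, p=[2/3, 1/3])

    X, Y = R * Z, S / Z  # X, Y > 0, conditionally independent given Z

    # Running means: E[XY] = 1 is recovered, E[X] and E[Y] are not.
    for m in (10**4, 10**5, 10**6):
        print(m, np.mean(X[:m] * Y[:m]), np.mean(X[:m]), np.mean(Y[:m]))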

So, although we have $X,Y>0$ almost surely, $X$ and $Y$ conditionally independent given $Z$, and $\mathbb E[XY]=1<\infty$, we have $\mathbb E[X]=\mathbb E[Y]=\infty$. Using kernel theory, or directly in this example, we can define the pointwise conditional expectations $\mathbb E[X|Z=z]$, $\mathbb E[Y|Z=z]$, but only $\mathbb E[XY|Z]$ exists in the standard sense, while $\mathbb E[X|Z]$ and $\mathbb E[Y|Z]$ do not. The pointwise definition does yield candidate versions of $\mathbb E[X|Z]$ and $\mathbb E[Y|Z]$ (see the computation below), but this does not change the fact that we necessarily have $\mathbb E[X]=\mathbb E[Y]=\infty$, so these candidates fail the integrability required by the standard definition.
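
Concretely, the computation behind this (implicit in the construction above): since $R$ and $S$ are independent of $Z$ with $\mathbb E[R]=\mathbb E[S]=1$,
$$\mathbb E[X|Z=z]=\mathbb E[R]\,z=z,\qquad\mathbb E[Y|Z=z]=\mathbb E[S]/z=1/z,\qquad\mathbb E[XY|Z]=\mathbb E[RS]=1\ \text{a.s.},$$
so the candidate version $Z$ of $\mathbb E[X|Z]$ is finite almost surely, yet $\mathbb E[Z]=\infty$, which is exactly why it is not a conditional expectation in the standard (integrable) sense.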

Matija