Let $X$ and $Y$ be two bounded random variables on a probability space $(\Omega,\mathcal{F},P)$, and let $G$ be a sub-$\sigma$-algebra of $\mathcal{F}$. I have to prove that: $$E[XE[Y\mid G]] = E[YE[X\mid G]]$$

As I found in this question, it is quite easy to use the "taking out what is known" property if $X$ or $Y$ is $G$-measurable. But in my case, $X$ and $Y$ are neither necessarily $G$-measurable nor independent, so I can't use those properties.

I tried to follow the steps on page 6 of this pdf, but it doesn't say why this equality holds: $$\lim_n E[Z_nX] = \lim_n E[Z_nE[X\mid G]]$$ It is supposed to follow from number 6 on the same page: $$E[XY1_A] = E[YE[X\mid G]1_A] \quad \forall A \in G$$

But I don't know how to drop the indicator function $1_A$. What I think is that the author is using this property: $$\text{if } E[XY1_A] = E[YE[X\mid G]1_A] \quad \forall A \in G \Rightarrow E[XY] = E[YE[X\mid G]]$$ but I'm not 100% sure my reasoning is correct.

  • "But I don't know how to drop the indicator function..." $\Omega \in G$ because $G$ is a $\sigma$-field. – aduh Sep 12 '16 at 00:54
  • @aduh but A is any set in $G$, not only $\Omega$. Can I freely assume it is $\Omega$ and I'm done? (it would be awesome) – Broken_Window Sep 12 '16 at 01:01
  • The equality you wrote down holds for all events in $G$, so in particular it holds for $\Omega \in G$. In other words, the implication that you conclude with is trivial because of the fact that $\Omega \in G$. – aduh Sep 12 '16 at 01:03
  • Possible duplicate of Conditional Expectation – Sep 12 '16 at 01:36

2 Answers

We have $E[XE(Y | G)] = E[E(XE(Y | G)|G)] = E[E(Y | G)E(X | G)]$.

And $E[YE(X | G)] = E[E(YE(X | G)|G)] = E[E(X | G)E(Y | G)]$.

The first equalities follow immediately from the definition of conditional expectation (as noted in my comments), and the second equalities follow from the "taking out what is known" principle that you mention.
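As a sanity check, here is a minimal numerical sketch of the identity on a hypothetical finite probability space: uniform $P$ on six points, $G$ generated by the two-block partition $\{\{0,1,2\},\{3,4,5\}\}$, with arbitrary sample values for $X$ and $Y$. On such a space, conditional expectation on $G$ is just the $P$-weighted average over each block.

```python
import numpy as np

# Hypothetical finite setup: Omega = {0,...,5} with uniform P,
# G generated by the partition {{0,1,2}, {3,4,5}}.
p = np.full(6, 1/6)                       # P({omega})
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]

X = np.array([1.0, -2.0, 3.0, 0.5, 4.0, -1.0])   # arbitrary values
Y = np.array([2.0, 0.0, -1.0, 3.0, 1.0, 5.0])

def cond_exp(f):
    """E[f | G]: on each block of the partition, the P-weighted average of f."""
    out = np.empty_like(f)
    for b in blocks:
        out[b] = np.dot(p[b], f[b]) / p[b].sum()
    return out

def E(f):
    return np.dot(p, f)

lhs = E(X * cond_exp(Y))
rhs = E(Y * cond_exp(X))
print(np.isclose(lhs, rhs))   # True
```

The same functions also let you check the intermediate step, since both sides equal `E(cond_exp(X) * cond_exp(Y))`.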

aduh

Here are two ways to explain this.

  1. Probabilistic

If $Z$ is a bounded ${\cal G}$-measurable random variable, then $$ E[ X Z ] = E [ E[X|{\cal G}] Z ].$$

Therefore, letting $Z=E[Y|{\cal G}]$, we have $$ E [ X E[Y |{\cal G}]] = E [ E[ X | {\cal G}] E [ Y |{\cal G}]]\quad (1)$$

But the same holds for $Y$. Namely, if $Z$ is bounded and ${\cal G}$-measurable, then $$ E [ Z Y ] = E [ Z E [Y |{\cal G}]]\quad(2).$$ Letting $Z = E[X |{\cal G}]$, (2) gives $$ E[ E [X|{\cal G}] Y ] = E [ E[X|{\cal G}] E[Y|{\cal G}]]\quad (3)$$ The result now follows by comparing (1) and (3).
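The key lemma used twice above can also be checked numerically. In the same hypothetical finite setup (uniform $P$ on six points, $G$ generated by a two-block partition), a random variable is $G$-measurable exactly when it is constant on each block:

```python
import numpy as np

# Sketch of the lemma: for bounded G-measurable Z, E[XZ] = E[ E[X|G] Z ].
# Hypothetical setup: Omega = {0,...,5}, uniform P, G from {{0,1,2},{3,4,5}}.
p = np.full(6, 1/6)
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]

def cond_exp(f):
    """E[f | G]: P-weighted average of f over each block."""
    out = np.empty_like(f)
    for b in blocks:
        out[b] = np.dot(p[b], f[b]) / p[b].sum()
    return out

X = np.array([1.0, -2.0, 3.0, 0.5, 4.0, -1.0])
# Z is G-measurable iff it is constant on each block of the partition:
Z = np.array([2.0, 2.0, 2.0, -1.5, -1.5, -1.5])

print(np.isclose(np.dot(p, X * Z), np.dot(p, cond_exp(X) * Z)))   # True
```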

  2. Functional Analytic

Recall that the mapping $T_{\cal G}: f \mapsto E[f|{\cal G}]$ is the orthogonal projection from $L^2 (\Omega,{\cal F},P)$ onto its (closed) subspace of ${\cal G}$-measurable random variables. The inner product in $L^2(\Omega,{\cal F},P)$ is given by

$$ (f,g) = E [ f g] .$$

Now since $T_{\cal G}$ is an orthogonal projection, it satisfies $T_{\cal G}^* = T_{\cal G}$. In particular,

$$ (T_{\cal G} f,g)= (f, T_{\cal G} g).$$

Set $f= X$ and $g=Y$, and the result follows.

(Note that what we did in the probabilistic approach was exactly to move $T_{\cal G}$ from one variable to the other, so the two approaches express the same idea.)
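On a finite space the $L^2$ picture becomes plain linear algebra, which makes the self-adjointness concrete. A sketch, in the same hypothetical setup as above: $T_{\cal G}$ is a matrix $T$, the inner product is $(f,g) = f^\top D g$ with $D = \operatorname{diag}(p)$, and self-adjointness of $T$ with respect to $(\cdot,\cdot)$ amounts to $DT$ being symmetric:

```python
import numpy as np

# Hypothetical finite setup: Omega = {0,...,5}, uniform P,
# G generated by the partition {{0,1,2},{3,4,5}}.
p = np.full(6, 1/6)
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]

# T acts block by block: every row in a block holds that block's averaging weights.
T = np.zeros((6, 6))
for b in blocks:
    T[np.ix_(b, b)] = p[b] / p[b].sum()

D = np.diag(p)                                       # (f, g) = f^T D g = E[fg]
X = np.array([1.0, -2.0, 3.0, 0.5, 4.0, -1.0])
Y = np.array([2.0, 0.0, -1.0, 3.0, 1.0, 5.0])

print(np.allclose(T @ T, T))                         # projection: T^2 = T
print(np.allclose(D @ T, (D @ T).T))                 # self-adjoint: D T symmetric
print(np.isclose((T @ X) @ D @ Y, X @ D @ (T @ Y)))  # (T f, g) = (f, T g)
```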

Fnacool