
Let $(x_n)_{n\geq 0}$ be a Markov chain indexed by $n$, and let $\mathbb E_{\mu}$ denote the expectation taken with the initial condition $x_0$ distributed as $\mu$. I can see heuristically why it should hold that

$$\mathbb E_{\mu}[f(x_n)g(x_0)]= \int \mathbb E\big[f(x_n)\mid x_0=y\big] g(y)\ d\mu(y).$$

To prove such a thing do I need to resort to the definition of expectation using the underlying probability space?

1 Answer


To prove such a thing do I need to resort to the definition of expectation using the underlying probability space?

Yes, pretty much. Essentially, this identity is a consequence of the law of total expectation, which tells you that for any random variables $X$ and $Y$ such that $X$ is integrable, we have $\mathbb E[X] = \mathbb E[\mathbb E[X\mid Y]]$.
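For instance (a toy example just to illustrate the tool, not part of your setting): if $X$ is a fair die roll and $Y=\mathbf 1_{\{X\text{ even}\}}$, then $\mathbb E[X\mid Y=1]=4$ and $\mathbb E[X\mid Y=0]=3$, so $\mathbb E[\mathbb E[X\mid Y]]=\tfrac12\cdot 4+\tfrac12\cdot 3=3.5=\mathbb E[X]$.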

Here we can apply this to $X\equiv f(x_n)g(x_0)$ and $Y\equiv x_0$, and unwind the measure-theoretic definition and properties of expectation to get the result. Here is how a rigorous argument goes:

Let $(\Omega,\mathcal F,\mathbb P)$ be the underlying probability space, and $x_0$ be a real-valued random variable (where $\mathbb R$ comes with the usual Borel $\sigma$-algebra). We have

$$\begin{align}\mathbb E_{x_0\sim\mu}\left[f(x_n)g(x_0)\right]&=\mathbb E_{x_0\sim\mu}\left[\mathbb E\big[f(x_n)g(x_0)\mid x_0\big]\right]\tag1\\ &= \int_{\omega\in\Omega}\mathbb E\big[f(x_n)g(x_0)\mid x_0\big](\omega)\ d\mathbb P(\omega)\\ &=\int_{\omega\in\Omega}\mathbb E\big[f(x_n)\mid x_0 = x_0(\omega)\big]g(x_0(\omega))\ d\mathbb P(\omega)\tag2\\ &=\int_{y\in\mathbb R}\mathbb E\big[f(x_n)\mid x_0 = y\big]g(y)\ d\mu(y),\tag 3\end{align}$$

where $(1)$ follows from the law of total expectation, $(2)$ follows from the "pull-out property" of conditional expectation together with the fact that the $\sigma(x_0)$-measurable random variable $\mathbb E\big[f(x_n)\mid x_0\big]$ can be written as a measurable function of $x_0$ evaluated at $x_0(\omega)$ (Doob–Dynkin), and $(3)$ is an application of the change of variables formula for the Lebesgue integral (remember that, by definition, $\mu$ is the pushforward measure of $\mathbb P$ under $x_0$).
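Not part of the proof, but if you want a quick numerical sanity check of the identity, here is a minimal Python sketch for a two-state chain; the transition matrix `P`, initial law `mu`, test functions `f`, `g` and horizon `n` are arbitrary choices made up for the illustration. The right-hand side uses $\mathbb E[f(x_n)\mid x_0=y]=(P^n f)(y)$, and the left-hand side is a Monte Carlo estimate of $\mathbb E_\mu[f(x_n)g(x_0)]$; the two numbers should agree up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary two-state example (all values chosen just for the illustration)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # transition matrix on {0, 1}
mu = np.array([0.25, 0.75])  # initial distribution of x_0
f = np.array([1.0, -2.0])    # f(0), f(1)
g = np.array([3.0, 0.5])     # g(0), g(1)
n = 5

# Right-hand side: sum_y E[f(x_n) | x_0 = y] g(y) mu(y), with E[f(x_n) | x_0 = y] = (P^n f)(y)
Pn = np.linalg.matrix_power(P, n)
rhs = np.sum(mu * (Pn @ f) * g)

# Left-hand side: Monte Carlo estimate of E_mu[f(x_n) g(x_0)]
num_samples = 200_000
x0 = rng.choice(2, size=num_samples, p=mu)  # sample x_0 ~ mu
x = x0.copy()
for _ in range(n):
    # advance each trajectory one step: jump to state 1 with probability P[x, 1]
    u = rng.random(num_samples)
    x = np.where(u < P[x, 1], 1, 0)
lhs = np.mean(f[x] * g[x0])

print(lhs, rhs)  # should be close, up to Monte Carlo error
```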