
I have seen the phrase "sufficiently rich probability space" used fairly often in probability theory. For example, at the bottom of the first page here.

What does the phrase "sufficiently rich" mean?

  • Essentially, it needs to have enough measurable sets and subdivide probabilities nicely enough that you can define all the random variables you need. $[0,1]$ with the Lebesgue measure is already rich enough to do many, many things. Note that just a U(0,1) random variable will more or less require something isomorphic to $[0,1]$ with the Lebesgue measure to "live" in your probability space. – Ian Feb 17 '17 at 23:37
  • @Ian That is helpful, thanks. So there is no "formal" definition as such, just a casual way of stating that "all should be good"? – Greenparker Feb 18 '17 at 03:36
  • Yeah, I don't think there is any "measure of richness"; it is just an arbitrary probability space where you can do everything that you want to do. In principle you should prove that such a probability space exists, but in practice this is tedious rather than insightful. (The one place where I've ever seen these kinds of results explicitly stated is in proving that continuous-time, continuous-space stochastic processes like Brownian motion can exist.) – Ian Feb 18 '17 at 04:59
  • Typically, you need this "richness" also if you want to have independent random variables; for instance if $X$ is a given random variable and you would like to have an independent copy $Y$ with the same distribution as $X$ ... this requires "richness" of the probability space. – saz Feb 18 '17 at 07:08
  • @saz And why does it require "richness" of the probability space? – Greenparker Feb 18 '17 at 14:30
  • @Greenparker See my answer below... – saz Feb 18 '17 at 15:02

1 Answer


First of all, "richness" of a probability space $(\Omega,\mathcal{A},\mathbb{P})$ means, essentially, that the $\sigma$-algebra $\mathcal{A}$ is "rich". There are (at least) two reasons why we need "richness" of the probability space:

Reason 1: Given a distribution $\mu$ (or a distribution function $F$) we would like to have a random variable $X: \Omega \to \mathbb{R}$ such that $X$ has distribution $\mu$, i.e. $\mathbb{P}(X \in B) = \mu(B)$ for all Borel sets $B$.

If $\Omega$ and/or $\mathcal{A}$ is too small, we cannot expect to construct such a random variable. Consider, for instance, an arbitrary set $\Omega$ endowed with the trivial $\sigma$-algebra $\mathcal{A} := \{\emptyset,\Omega\}$ and an arbitrary probability measure $\mathbb{P}$. The only measurable mappings $X: \Omega \to \mathbb{R}$ are the constant mappings, $X(\omega)=c$ for fixed $c \in \mathbb{R}$. This means that we cannot construct a random variable with normal distribution, or even with Bernoulli distribution. A similar phenomenon occurs if $\mathcal{A}$ is countable; then we cannot construct a measurable mapping whose distribution has a density with respect to Lebesgue measure.
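For a finite $\sigma$-algebra the obstruction can even be checked mechanically: $X$ is measurable iff the preimage of every Borel set lies in $\mathcal{A}$, and on a finite space every such preimage is a union of level sets $\{X = c\}$. A small Python sketch (the finite sample space and the helper name are illustrative assumptions, not part of the answer):

```python
from itertools import combinations

def is_measurable(omega, sigma_algebra, X):
    """Check measurability of X w.r.t. a finite sigma-algebra.

    On a finite space, the preimage of any Borel set is a union of
    level sets {X = c}, so it suffices to check those unions.
    """
    sigma = {frozenset(A) for A in sigma_algebra}
    level_sets = [frozenset(w for w in omega if X(w) == c)
                  for c in {X(w) for w in omega}]
    for r in range(1, len(level_sets) + 1):
        for combo in combinations(level_sets, r):
            if frozenset().union(*combo) not in sigma:
                return False
    return True

omega = {1, 2, 3, 4}
trivial = [set(), omega]  # the trivial sigma-algebra {emptyset, Omega}

print(is_measurable(omega, trivial, lambda w: 0))      # constant map: True
print(is_measurable(omega, trivial, lambda w: w % 2))  # non-constant: False
```

As expected, only the constant map survives the check, mirroring the argument that the trivial $\sigma$-algebra admits no non-degenerate random variables.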

The unit interval $\Omega:=(0,1)$ endowed with the Borel-$\sigma$-algebra and the Lebesgue measure (restricted to $(0,1)$) is "rich" in the sense that for any distribution $\mu$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ there exists a random variable $X: \Omega \to \mathbb{R}$ with distribution $\mu$.
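The construction behind this fact is the quantile (inverse-transform) method: on $\Omega = (0,1)$ with Lebesgue measure, $X(\omega) := F^{-1}(\omega)$, the generalized inverse of the distribution function $F$, has distribution $F$. A minimal Python sketch (the particular distributions chosen, exponential and Bernoulli, are just illustrations):

```python
import math
import random

def quantile_exponential(u, rate=1.0):
    # generalized inverse of F(x) = 1 - exp(-rate * x)
    return -math.log(1.0 - u) / rate

def quantile_bernoulli(u, p=0.5):
    # F^{-1}(u) = 0 for u <= 1 - p, else 1
    return 0 if u <= 1.0 - p else 1

random.seed(0)
n = 100_000
# omega drawn from Lebesgue measure on (0,1), pushed through the quantile
exp_samples = [quantile_exponential(random.random()) for _ in range(n)]
bern_samples = [quantile_bernoulli(random.random(), p=0.3) for _ in range(n)]

print(abs(sum(exp_samples) / n - 1.0) < 0.05)   # Exp(1) has mean 1
print(abs(sum(bern_samples) / n - 0.3) < 0.01)  # Bernoulli(0.3) has mean 0.3
```

The same single source of randomness, a uniform point of $(0,1)$, realizes any prescribed distribution, which is exactly the sense in which this space is "rich" for Reason 1.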

Reason 2: Given a random variable $X$ we would like to construct an independent copy $Y$ which has the same distribution as $X$ (or, more generally, a sequence $(X_j)_{j \in \mathbb{N}}$ of independent identically distributed random variables).

As the following example shows, this requires richness of the probability space. Consider $\Omega := (0,1)$ endowed with $$\mathcal{A} := \bigg\{\emptyset, \bigg(0,\frac{1}{2}\bigg), \bigg[\frac{1}{2},1\bigg),\Omega\bigg\}$$ and define a probability measure $\mathbb{P}$ by $$\mathbb{P}\bigg(\bigg[\frac{1}{2},1\bigg)\bigg) = \mathbb{P}\left( \left(0,\frac{1}{2} \right)\right) = \frac{1}{2}.$$ Then $$X(\omega) := \begin{cases} 1, & 0<\omega<1/2, \\ -1, & 1/2 \leq \omega < 1, \end{cases} \qquad \omega \in \Omega = (0,1),$$ defines a measurable mapping with $\mathbb{P}(X=1) = \mathbb{P}(X=-1) = 1/2$.

It is not difficult to see that there does not exist a random variable $Y$ which has the same distribution as $X$ and is independent of $X$. Indeed, any measurable mapping $Y: \Omega \to \mathbb{R}$ is of the form $$Y(\omega) = c_1 1_{(0,1/2)}(\omega) + c_2 1_{[1/2,1)}(\omega), \qquad \omega \in \Omega = (0,1),$$ for suitable constants $c_1, c_2 \in \mathbb{R}$. Since we want $Y$ to have the same distribution as $X$, we must have $$c_1 = 1, c_2 = -1 \qquad \text{or} \qquad c_1 = -1, c_2 = 1.$$ The first choice gives $Y=X$ (so $Y$ is obviously not independent of $X$) and the second gives $Y=-X$ (which is also not independent of $X$).

This shows that we need $\mathcal{A}$ to be rich enough in order to construct independent random variables. The way out of this problem is to use a product construction to enlarge the probability space, see this question.
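The product construction can be sketched concretely: enlarge the space to $\Omega \times \Omega$ with the product measure and set $\tilde{X}(\omega_1,\omega_2) := X(\omega_1)$ and $Y(\omega_1,\omega_2) := X(\omega_2)$; then $Y$ is an independent copy of $\tilde{X}$. A Python simulation of this for the $\pm 1$ variable from the example (the simulation itself is illustrative, not part of the answer):

```python
import random

def X(w):
    # the +/-1 random variable from the example, defined on (0,1)
    return 1 if w < 0.5 else -1

random.seed(1)
n = 100_000
counts = {(a, b): 0 for a in (1, -1) for b in (1, -1)}
for _ in range(n):
    # (w1, w2) drawn from the product (Lebesgue) measure on (0,1)^2;
    # the two coordinates give X and its independent copy Y
    w1, w2 = random.random(), random.random()
    counts[(X(w1), X(w2))] += 1

# under independence, each of the four sign combinations has probability 1/4
for pair, c in sorted(counts.items()):
    print(pair, round(c / n, 3))
```

Each empirical joint probability comes out close to $1/4 = 1/2 \cdot 1/2$, which is exactly the factorization that was impossible on the small four-set $\sigma$-algebra above.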

saz