I have seen the phrase "sufficiently rich probability space" used fairly often in probability theory. For example, at the bottom of the first page here.
What does the phrase "sufficiently rich" mean?
First of all, "richness" of a probability space $(\Omega,\mathcal{A},\mathbb{P})$ means, essentially, that the $\sigma$-algebra $\mathcal{A}$ is "rich". There are (at least) two reasons why we need "richness" of the probability space:
Reason 1: Given a distribution $\mu$ (or a distribution function $F$) we would like to have a random variable $X: \Omega \to \mathbb{R}$ such that $X$ has distribution $\mu$, i.e. $\mathbb{P}(X \in B) = \mu(B)$ for all Borel sets $B$.
If $\Omega$ and/or $\mathcal{A}$ is too small, we cannot expect to construct such a random variable. Consider, for instance, an arbitrary set $\Omega$ endowed with the trivial $\sigma$-algebra $\mathcal{A} := \{\emptyset,\Omega\}$ and an arbitrary probability measure $\mathbb{P}$. The only measurable mappings $X: \Omega \to \mathbb{R}$ are the constant mappings, $X(\omega)=c$ for some fixed $c \in \mathbb{R}$. This means that we cannot construct a random variable with a normal distribution or even with a Bernoulli distribution. A similar phenomenon occurs if $\mathcal{A}$ is countable; then we cannot construct a measurable mapping whose distribution has a density with respect to Lebesgue measure.
The unit interval $\Omega:=(0,1)$ endowed with the Borel-$\sigma$-algebra and the Lebesgue measure (restricted to $(0,1)$) is "rich" in the sense that for any distribution $\mu$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ there exists a random variable $X: \Omega \to \mathbb{R}$ with distribution $\mu$.
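The standard recipe behind this fact is the quantile (inverse-CDF) transform: if $U$ is uniform on $(0,1)$ and $F^{-1}$ is the generalized inverse of a distribution function $F$, then $X := F^{-1}(U)$ has distribution function $F$. A minimal Python sketch (the exponential distribution here is just my own choice of illustration, not tied to the linked text):

```python
import math
import random

def sample_via_quantile(inv_cdf, n=1000, seed=0):
    """Inverse-CDF transform: draw omega uniformly from (0,1)
    and return X(omega) = F^{-1}(omega), which has distribution F."""
    rng = random.Random(seed)
    return [inv_cdf(rng.random()) for _ in range(n)]

# Example: exponential distribution with rate 1, F(x) = 1 - e^{-x},
# whose quantile function is F^{-1}(u) = -log(1 - u).
samples = sample_via_quantile(lambda u: -math.log(1.0 - u))
```

This is exactly why $(0,1)$ with Lebesgue measure is "rich": one single uniform coordinate $\omega$ suffices to realize any prescribed distribution on $\mathbb{R}$.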
Reason 2: Given a random variable $X$ we would like to construct an independent copy $Y$ which has the same distribution as $X$ (or, more generally, a sequence $(X_j)_{j \in \mathbb{N}}$ of independent identically distributed random variables).
As the following example shows, this requires richness of the probability space. Consider $\Omega := (0,1)$ endowed with $$\mathcal{A} := \bigg\{\emptyset, \bigg(0,\frac{1}{2}\bigg), \bigg[\frac{1}{2},1\bigg),\Omega\bigg\}$$ and define a probability measure $\mathbb{P}$ by $$\mathbb{P}\bigg(\bigg[\frac{1}{2},1\bigg)\bigg) = \mathbb{P}\left( \left(0,\frac{1}{2} \right)\right) = \frac{1}{2}.$$ Then $$X(\omega) := \begin{cases} 1, & 0<\omega<1/2, \\ -1, & 1/2 \leq \omega < 1, \end{cases} \qquad \omega \in \Omega = (0,1),$$ defines a measurable mapping and $\mathbb{P}(X=1) = \mathbb{P}(X=-1) = 1/2$. It is not difficult to see that there does not exist a random variable $Y$ which has the same distribution as $X$ and is independent of $X$. Indeed: Any measurable mapping $Y: \Omega \to \mathbb{R}$ is of the form $$Y(\omega) = c_1 1_{(0,1/2)}(\omega) + c_2 1_{[1/2,1)}(\omega), \qquad \omega \in \Omega = (0,1),$$ for suitable constants $c_1, c_2 \in \mathbb{R}$. Since we want $Y$ to have the same distribution as $X$, we have $$c_1 = 1, c_2 = -1 \qquad \text{or} \qquad c_1 = -1, c_2 = 1.$$ The first choice gives $Y=X$ (and $Y$ is therefore obviously not independent of $X$) and the second one gives $Y=-X$ (which is also not independent of $X$).
This shows that we need $\mathcal{A}$ to be rich enough in order to construct independent random variables. The way out of this problem is to use a product construction to enlarge the probability space, see this question.
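To make the product construction concrete: replace $\Omega$ by $\Omega \times \Omega$ with the product measure, let $X$ depend only on the first coordinate and its copy $Y$ only on the second; the product measure then forces independence. A small simulation sketch of this idea (the empirical check is my own illustration, not part of the quoted argument):

```python
import random

# Product construction: work on Omega x Omega = (0,1) x (0,1) with the
# product (here: two independent uniform) measure.  X reads only the
# first coordinate, Y only the second.
def X(omega1):
    return 1 if omega1 < 0.5 else -1

def Y(omega2):
    return 1 if omega2 < 0.5 else -1

rng = random.Random(42)
pairs = [(X(rng.random()), Y(rng.random())) for _ in range(100_000)]

# Empirical check of independence: P(X=1, Y=1) should be close to
# P(X=1) * P(Y=1) = 1/4.
p11 = sum(1 for x, y in pairs if x == 1 and y == 1) / len(pairs)
```

On the original four-set $\sigma$-algebra no such $Y$ exists, as shown above; it is precisely the second coordinate of the enlarged space that supplies the "extra randomness".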