A sequence $\{X_n\}$ of random variables converges in probability to the random variable $X$ if for all $\epsilon > 0$,
$$\lim_{n\to\infty}\Pr\big(|X_n-X| > \epsilon\big) = 0$$
But why use $\epsilon > 0$ and not just take $\epsilon = 0$?
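For intuition about what the definition measures, here is a small Monte Carlo sketch. The specific sequence $X_n = X + Z_n/n$ with standard normal noise $Z_n$ is a made-up example chosen for illustration, not anything from the question:

```python
import random

random.seed(0)
N = 10_000  # sample points per n

def prob_exceeds(n, eps):
    """Estimate P(|X_n - X| > eps) where X_n = X + Z/n, Z ~ N(0, 1).

    Since X_n - X = Z/n, we only need to sample Z.
    """
    count = 0
    for _ in range(N):
        z = random.gauss(0.0, 1.0)
        if abs(z / n) > eps:
            count += 1
    return count / N

eps = 0.5
for n in (1, 2, 5, 10):
    print(n, prob_exceeds(n, eps))
```

The printed estimates shrink toward $0$ as $n$ grows, which is exactly the statement $\lim_{n\to\infty}\Pr(|X_n-X|>\epsilon)=0$ for this particular $\epsilon$.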
Let $X,X_1,X_2,\dots$ be constant random variables: $X(\omega)=x$ and $X_n(\omega)=x_n$ for each $\omega\in\Omega$.
If $x_n$ converges to $x$, then for each $\epsilon>0$ we have $|x_n-x|\le\epsilon$ for all sufficiently large $n$, so the event $\{|X_n-X|>\epsilon\}$ is eventually empty and
$$\lim_{n\to\infty}P(|X_n-X|>\epsilon)=0,$$
showing that $X_n$ converges in probability to $X$.
However, for each $n$ with $x_n\neq x$ we have $P(|X_n-X|>0)=1$. So if the definition used $\epsilon=0$, a sequence such as $x_n = x + 1/n$ would fail to converge in probability to $x$, even though it obviously should.
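The counterexample can be checked numerically. A minimal sketch, using $x_n = x + 1/n$ as a concrete choice of convergent sequence (since the random variables are constant, each probability is either $0$ or $1$):

```python
# Constant random variables: X(omega) = x and X_n(omega) = x_n = x + 1/n.
x = 2.0
x_n = [x + 1.0 / n for n in range(1, 1001)]

eps = 1e-2
# For constants, P(|X_n - X| > eps) is 1 if |x_n - x| > eps, else 0.
probs_eps = [1.0 if abs(xn - x) > eps else 0.0 for xn in x_n]
tail_eps = probs_eps[-1]   # 0.0 for large n: convergence in probability holds

# But with eps = 0 the probability is 1 for every n, since 1/n is never 0:
probs_zero = [1.0 if abs(xn - x) > 0 else 0.0 for xn in x_n]
tail_zero = probs_zero[-1]  # 1.0 for every n: the eps = 0 criterion never kicks in
```

For any fixed $\epsilon>0$ the first list is eventually all zeros, while the second list is identically $1$, matching the argument above.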
http://math.stackexchange.com/questions/206851/generalisation-of-dominated-convergence-theorem That way we can generalize some properties instead of demanding stricter assumptions. – Ilham Apr 25 '15 at 20:01

Also, if you have more questions, ask a new one since this discussion is getting overlong. And make it precise! – Ilham Apr 25 '15 at 20:10