
Given two continuous iid r.v.s $X$ and $Y$ on $(\Omega, \mathcal{F}, \mathbb{P})$, I want to show that the probability of a tie is zero, i.e. $\mathbb{P}(\{\omega \in \Omega: X(\omega) = Y(\omega)\}) = 0$. One fact is that a continuous r.v. has no atoms, i.e. $\mathbb{P}(\{X = c\}) = 0$ for all $c \in \mathbb{R}$. To tackle the problem, one might try to use this fact and write $\{\omega \in \Omega: X(\omega) = Y(\omega)\} = \bigcup_{c \in \mathbb{R}} \{\omega \in \Omega: X(\omega) = c,\; Y(\omega) = c\}$. The problem is that this is an uncountable union, and countable additivity does not let $\mathbb{P}$ pass through $\bigcup_{c \in \mathbb{R}}$.
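To see concretely that $\mathbb{P}$ cannot distribute over uncountable unions, note that for any continuous r.v. $X$,

$$1 = \mathbb{P}(\Omega) = \mathbb{P}\bigg( \bigcup_{c \in \mathbb{R}} \{X = c\} \bigg), \qquad \text{yet } \mathbb{P}(\{X = c\}) = 0 \text{ for every } c,$$

so any valid argument must avoid summing probabilities over the continuum.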

How can I resolve this problem?

2 Answers


Since $X$ and $Y$ are independent, we have

$$\mathbb{E}(h(X,Y)) = \mathbb{E} \left( \mathbb{E}(h(X,y)) \bigg|_{y=Y} \right) \tag{1}$$

for any measurable bounded function $h: \mathbb{R}^2 \to \mathbb{R}$. If we set

$$D := \{(x,y); x=y\}$$

then $(1)$ shows for $h(x,y) := 1_D(x,y)$ that

$$\mathbb{P}(X=Y)= \mathbb{E} \bigg( \underbrace{\mathbb{P}(X=y)}_{=0} \bigg|_{y=Y} \bigg)=0.$$
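Spelling out the measure-theoretic step: with $\mu_X$, $\mu_Y$ denoting the laws of $X$ and $Y$, independence means the joint law of $(X,Y)$ is the product measure $\mu_X \otimes \mu_Y$, so by Fubini's theorem

$$\mathbb{P}(X=Y) = \int_{\mathbb{R}} \int_{\mathbb{R}} 1_D(x,y) \, \mu_X(dx) \, \mu_Y(dy) = \int_{\mathbb{R}} \underbrace{\mathbb{P}(X=y)}_{=0} \, \mu_Y(dy) = 0.$$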

Remark: Note that it is crucial that $X$ and $Y$ are independent; without this assumption the assertion fails, in general, to be true (e.g. take $Y := X$ for any continuous $X$; then $\mathbb{P}(X=Y) = 1$).

saz
  • Thanks for the answer. Is there any way to show the result without resorting to expectations, sticking to more basic notions? – math_enthusiast Oct 08 '18 at 20:34
  • @math_enthusiast Well, expectations are quite a basic notion in probability theory, aren't they? Perhaps someone else will present a different idea which is easier for you to understand... – saz Oct 08 '18 at 20:50
  • No, I understood your perfect answer, but let's say I am supposed to show the result with more rudimentary notions than expectation. Thanks again. – math_enthusiast Oct 08 '18 at 23:28
  • @math_enthusiast So what kind of tools do you have at hand? Or, asked the other way round, what are the "rudimentary notions" you are allowed to use? – saz Oct 09 '18 at 02:56
  • The approach I was grappling with in the statement of the question is an example of the rudimentary tools I have in mind. If it can be proved with tools not involving integrals (expectations), that would be ideal. – math_enthusiast Oct 10 '18 at 11:03
  • @math_enthusiast Currently I don't see how to proceed that way. As NateEldredge suggested in his answer, you can invoke the uniform continuity of the distribution function, and then you will need no integrals/expectations. – saz Oct 10 '18 at 13:33

This proof is "elementary" but maybe it could be simplified some more.

Let $F$ be the cdf of $X$, which by assumption is continuous. Set $G_n(x) = F(x + 1/n) - F(x) = P(x < X \le x + 1/n)$. Note that each $G_n$ is continuous, $G_1(x) \ge G_2(x) \ge \dots$, and $\lim_{n \to \infty} G_n(x) = 0$ for every $x$. By Dini's theorem, $G_n \to 0$ uniformly on every compact interval, and the tails cause no trouble: $G_n(x) \le 1 - F(x) \to 0$ as $x \to +\infty$ and $G_n(x) \le F(x+1) \to 0$ as $x \to -\infty$, uniformly in $n$. Hence $G_n \to 0$ uniformly on $\mathbb{R}$. That is, if we fix any $\epsilon > 0$, there exists $N$ so large that $G_N(x) < \epsilon$ for all $x$.
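Alternatively, as saz notes in the comments above, Dini's theorem can be bypassed entirely: a continuous cdf is automatically uniformly continuous (it is continuous on $\mathbb{R}$ with finite limits at $\pm\infty$), hence

$$\sup_{x \in \mathbb{R}} G_n(x) \;\le\; \sup_{|u - v| \le 1/n} |F(u) - F(v)| \;\xrightarrow{\,n \to \infty\,}\; 0.$$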

Now for each $k \in \mathbb{Z}$, let $A_k$ be the event $\{k/N < X, Y \le (k+1)/N\}$. If $X=Y$, then their common value lies in $(k/N, (k+1)/N]$ for some $k$, so $\{X=Y\} \subset \bigcup_{k \in \mathbb{Z}} A_k$. Thus, by the union bound, $P(X=Y) \le \sum_{k \in \mathbb{Z}} P(A_k)$.

On the other hand, $$\begin{align*} P(A_k) &= P(\{k/N < X \le (k+1)/N \} \cap \{k/N < Y \le (k+1)/N \}) \\ &= P(k/N < X \le (k+1)/N)\cdot P(k/N < Y \le (k+1)/N) && \text{by independence} \\ &= G_N(k/N) \cdot P(k/N < Y \le (k+1)/N) \\ &\le \epsilon \, P(k/N < Y \le (k+1)/N). \end{align*}$$

Thus $P(X=Y) \le \epsilon \sum_{k \in \mathbb{Z}} P(k/N < Y \le (k+1)/N) = \epsilon \cdot 1 = \epsilon$. But $\epsilon > 0$ was arbitrary, so we conclude $P(X=Y) = 0$.
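As a quick numerical sanity check (not part of the proof, and assuming SciPy is available), one can evaluate the bound $\sum_k P(A_k) = \sum_k \big(F((k+1)/N) - F(k/N)\big)^2$ for two iid standard normals; the sum visibly shrinks as $N$ grows, squeezing $P(X=Y)$ down to $0$:

```python
# Numerical check of the bound sum_k P(A_k) = sum_k (F((k+1)/N) - F(k/N))^2
# for X, Y iid standard normal (F = norm.cdf). Not part of the proof; the
# truncation of k to [-10N, 10N) ignores only negligible normal tail mass.
from scipy.stats import norm

for N in (1, 10, 100, 1000):
    total = sum(
        (norm.cdf((k + 1) / N) - norm.cdf(k / N)) ** 2
        for k in range(-10 * N, 10 * N)
    )
    print(f"N = {N:4d}: sum_k P(A_k) ~ {total:.6f}")
```

For the standard normal the printed sums decay on the order of $1/N$, consistent with taking $\epsilon = \sup_x G_N(x)$ in the argument above.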

Nate Eldredge
  • I like your answer; just wondering if you are able to help with a problem of mine :) https://math.stackexchange.com/questions/2953842/proving-an-inequality-for-upper-right-hand-dini-derivatives – Homaniac Oct 14 '18 at 07:16