
I'm trying to find a rigorous definition of 'independence of events' and 'independence of random variables'.

I came across 2 definitions in the sources I'm studying from:

  • Definition 1: $A$ and $B$ are independent iff $\Pr(A) = \Pr(A \mid B)$.
  • Definition 2: $A$ and $B$ are independent iff $\Pr(A \cap B) = \Pr(A)\Pr(B)$.
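
As a quick sanity check (my own illustration, not taken from either source), here is a short Python sketch on a fair six-sided die with $A = \{2,4,6\}$ and $B = \{1,2,3,4\}$; here $\Pr(B) > 0$ and the two definitions agree:

```python
from fractions import Fraction

# Illustrative finite example (my own choice of events, not from the sources):
# a fair six-sided die, A = "even outcome", B = {1, 2, 3, 4}.
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {1, 2, 3, 4}

def pr(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event & omega), len(omega))

# Definition 2: Pr(A ∩ B) = Pr(A) Pr(B)
print(pr(A & B) == pr(A) * pr(B))   # True

# Definition 1: Pr(A) = Pr(A | B), meaningful here because Pr(B) > 0
print(pr(A) == pr(A & B) / pr(B))   # True
```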

I'll explain later why I think definition 2 is wrong when $\Pr(B) = 0$.

Definition 1 looks right to me, but it doesn't say whether $A$ and $B$ are independent when $\Pr(A \mid B)$ is not defined. This can happen, for instance, if $B = \emptyset$. Saying something like '$A$ and $B$ are independent iff $\Pr(A \mid B)$ is not defined or $\Pr(A) = \Pr(A \mid B)$' feels weird, but wouldn't be unacceptable to me.

How does one define the independence of 2 events? Is the independence of 2 events always defined?

To digress a bit, a characterization of when $\Pr(A \mid B)$ is defined would also be useful to me. I tried to find a rigorous definition of conditional probability and came across concepts like 'regular conditional probability' and the 'disintegration theorem', which looked promising, but I think they will take a lot of time and effort to understand. They also focus more on the 'how to define' part and less on the 'when is it defined' part.

Now I explain why I think Definition 2 is wrong. Let $[-1, 1]^2$ be a dartboard on which the dart's landing point is uniformly distributed. Let $A$ be the event that the dart lands in the circle $x^2 + y^2 \le 1$, and let $B$ be the event that it lands on the line $x = 0$. Then $\Pr(A \cap B) = \Pr(B) = 0$ and $\Pr(A) = \pi/4$, so $A$ and $B$ are independent by Definition 2. But $A$ is not guaranteed to occur (it has probability $\pi/4$), whereas if $B$ happens, then $A$ is guaranteed to occur (because $B \subseteq A$). Since the occurrence of $B$ changes the probability of $A$, I think $A$ and $B$ should not be called independent. Formally, I would write this as $\Pr(A \mid B) = 1 \neq \Pr(A)$.
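
To make this concrete numerically, here is a small Monte Carlo sketch (my own addition; since $\Pr(B) = 0$, I approximate conditioning on $B$ by conditioning on the thin strip $|x| \le \varepsilon$, which is an assumption about what 'given $B$' should mean, not something taken from the definitions above):

```python
import random

# Monte Carlo sketch of the dartboard example (my own illustration).
# The dart lands uniformly on [-1, 1]^2; A = {x^2 + y^2 <= 1}, B = {x = 0}.
# Since Pr(B) = 0, conditioning on B is approximated here by conditioning
# on the thin strip {|x| <= eps} and letting eps shrink.
random.seed(0)
n = 10**6
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

pr_A = sum(x * x + y * y <= 1 for x, y in points) / n
print("Pr(A) ~", pr_A)  # close to pi/4 ~ 0.785

for eps in (0.5, 0.1, 0.01):
    strip = [(x, y) for x, y in points if abs(x) <= eps]
    pr_A_given_strip = sum(x * x + y * y <= 1 for x, y in strip) / len(strip)
    print(f"Pr(A | |x| <= {eps}) ~ {pr_A_given_strip:.3f}")
```

The unconditional estimate stays near $\pi/4 \approx 0.785$, while the conditional estimates approach $1$ as $\varepsilon$ shrinks, which is exactly the discrepancy I am pointing at.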

  • Thanks for the good question. After thinking about it further, I have also come to accept that events that happen almost surely or almost never are, by definition (even if against intuition), independent of all events. – ryang Nov 07 '20 at 17:31

2 Answers


When mathematicians axiomatize the definition of independence, they use your Definition 2. While mathematically robust, this does not always correspond to the intuitive notion of independence. Indeed, if events occur almost surely or almost never, they are independent of themselves! This fact is key to the Kolmogorov 0-1 law.
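
To spell out the 'independent of themselves' remark with Definition 2 (a one-line check I am adding here): an event $A$ is independent of itself exactly when $$\Pr(A \cap A) = \Pr(A)\Pr(A) \iff \Pr(A) = \Pr(A)^2 \iff \Pr(A) \in \{0, 1\},$$ i.e. exactly when $A$ happens almost never or almost surely.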

  • I used to think that definition 2 was naive, used by those who disregarded 0-probability events. But a quick search of 'events independent of themselves' revealed that definition 2 is the one that's used even in the presence of 0-probability events. Although I find definition 2 inappropriate, I'm now somewhat confident that this is the standard definition. – Eklavya Sharma May 26 '20 at 18:44

What reference text are you using?

The conditional probability $$ P(A|B) = \frac{P(A\cap B)}{P(B)} $$ is not defined when $P(B) = 0$, so it is not quite right to say that $P(A|B) = 1$ when event $B$ has zero probability. (If anything, with the events you used, the conditional probability is the indeterminate form $0/0$.)
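
Writing out the indeterminate form for the events in the question, together with one heuristic that recovers the asker's intuition (this limit is an illustration of one possible convention, not a definition): $$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{0}{0}, \qquad \text{whereas} \qquad \lim_{\varepsilon \to 0^+} P\bigl(A \,\big|\, |x| \le \varepsilon\bigr) = \lim_{\varepsilon \to 0^+} \frac{\int_{-\varepsilon}^{\varepsilon} 2\sqrt{1 - x^2}\, dx}{4\varepsilon} = 1.$$ The limit matches the intuition $P(A \mid B) = 1$, but in general its value can depend on how the conditioning events shrink to $B$, so it does not by itself define $P(A \mid B)$.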

  • There isn't a single reference that I'm using. I have been reading many sources, including Wikipedia, since different texts have different amounts of rigor, ease of understanding, coverage of topics, etc. My primary reference is 'Probability and Random Processes' by Grimmett and Stirzaker. They don't define conditional probability when P(B)=0, but in their treatment of continuous random variables, they use statements like Pr(X ∈ [a, b]|Y=y). I could follow that part by intuitive understanding and it makes sense in that context, but I'm unable to fully formalize things otherwise. – Eklavya Sharma May 26 '20 at 11:19