
I found this earlier question while working on something. In a nutshell, the earlier question confirms that since you have this identity for conditional probability:

$$P(A | B)=\frac{P(A\cap B )}{P(B)}$$

you can also do this:

$$P(A | B\cap C)=\frac{P(A\cap B\cap C )}{P(B\cap C)}$$

My problem is, none of the probability tutorials or textbooks seem to mention that you can treat the intersection of two events as another event (at least, of the ones I've read). Is this the sort of thing that I should have learned from set theory? Or is there a basic probability text that mentions this kind of thing? Ideally, I'd like to find out all the conditions under which you can combine events and plug the combination into one of the basic identities. Are there any good references for that?

Thank you.

bnsmith
    Just like the sum of two numbers is another number, the intersection of two events is an event, and can be treated as such in any of the basic identities. – Mike Earnest Jan 13 '15 at 18:42

2 Answers


We could use a substitution to make this a little clearer. Let the intersection of $B$ and $C$ be $D=B\cap C$. Then we have $$ P(A\mid B\cap C)=P(A\mid D) = \frac{P(A\cap D)}{P(D)} = \frac{P[A\cap(B\cap C)]}{P(B\cap C)} $$ I don't think there is a book or reference that lists every possible arrangement of these identities. All you need is a little mathematical ingenuity, which develops with time and experience.
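(A quick numeric sanity check, not part of the answer itself: the die-roll events $A$, $B$, $C$ below are made up purely for illustration, and the script just enumerates equally likely outcomes.)

```python
from fractions import Fraction

# Hypothetical events for one roll of a fair die (purely illustrative).
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}      # "the roll is even"
B = {1, 2, 3, 4}   # "the roll is at most four"
C = {3, 4, 5, 6}   # "the roll is at least three"

def P(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event), len(omega))

D = B & C  # the intersection B ∩ C, treated as just another event

lhs = P(A & D) / P(D)            # P(A | D) via the basic identity
rhs = P(A & B & C) / P(B & C)    # P(A | B ∩ C) written out directly
print(lhs, rhs, lhs == rhs)      # 1/2 1/2 True
```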

dustin

For the technical answer: If $(\Omega, \mathcal F,\mathbb P)$ is a probability space (where $\Omega$ is the sample space, $\mathcal F$ is the set of events, and $\mathbb P$ is the probability measure), then by definition $\mathcal F$ is a $\sigma$-algebra. This means that $\varnothing\in\mathcal F$, if $E\in\mathcal F$ then $E^c\in\mathcal F$, and if $E_1,E_2,\ldots\in\mathcal F$ then $\bigcup_{i=1}^\infty E_i\in\mathcal F$. If $E_1,\ldots, E_n\in\mathcal F$ then taking $\varnothing=E_{n+1}=E_{n+2}=\cdots$ we see that $E_1\cup\cdots\cup E_n\in\mathcal F$. By De Morgan's law, if $A,B\in\mathcal F$ then $A\cap B = (A^c\cup B^c)^c\in\mathcal F$. So we can treat the intersection of two events as an event, and $\mathbb P(A\cap B)$ is well-defined.
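(A small aside: here is a toy check of the De Morgan step on concrete subsets of a finite sample space; the sets are my own example, not from the answer.)

```python
# Toy check of the De Morgan identity (A^c ∪ B^c)^c = A ∩ B inside a finite Omega.
omega = frozenset(range(1, 7))   # say, the six faces of a die
A = frozenset({2, 4, 6})
B = frozenset({1, 2, 3, 4})

def complement(E):
    """Complement of E inside omega."""
    return omega - E

# A ∩ B is the complement of A^c ∪ B^c
assert complement(complement(A) | complement(B)) == A & B
print("De Morgan check passed")
```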

However, if $\Omega$ is finite or countably infinite (for example, $\Omega=\{1,2,3,4,5,6\}$ if we are considering the outcomes of rolling a die), then we can take $\mathcal F=2^{\Omega}$, i.e. the set of all subsets of $\Omega$. If $A$ and $B$ are subsets of $\Omega$ then clearly $A\cap B$ is as well, so we can define $\mathbb P(A\cap B)$. (There are some technical problems if the sample space is uncountably infinite, e.g. the closed interval $[0,1]$, and we try to define a probability for every subset.)
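(Again just a sketch under the finite-die assumption: build $\mathcal F=2^{\Omega}$ explicitly and confirm it is closed under pairwise intersection.)

```python
from itertools import combinations

# Sketch for the finite case: Omega is one die roll, F is the full power set 2^Omega.
omega = frozenset({1, 2, 3, 4, 5, 6})

def power_set(s):
    """Return all subsets of s as frozensets."""
    elems = list(s)
    return {frozenset(c) for r in range(len(elems) + 1)
                         for c in combinations(elems, r)}

F = power_set(omega)
print(len(F))  # 2**6 = 64 events

# Closure under pairwise intersection: A ∩ B is always an event, so P(A ∩ B) makes sense.
assert all(A & B in F for A in F for B in F)
```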

Math1000
  • I may not have fully understood everything that you've written here, but I can see that you've put a lot of effort into your answer. Thanks! – bnsmith Jan 15 '15 at 15:31