
The probability of an event $A$ is given by: $$P(A) = \frac{n(A)}{n}$$ where $n$ is the total number of (equally likely) outcomes and $n(A)$ is the number of outcomes in $A$. Usually, $n$ is easy to find, and the only hurdle in computing $P(A)$ is $n(A)$.

However, suppose we are instead interested in finding $n(A)$. If we know $n$ and can find $P(A)$ by some other method (not the formula above), then we can recover $$n(A) = nP(A)$$
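For instance, one classic instance of this pattern is counting derangements: by inclusion-exclusion on the probability side, the chance that a uniformly random permutation of $n$ objects has no fixed point is $\sum_{k=0}^{n}(-1)^k/k!$, so the number of derangements is $n!\sum_{k=0}^{n}(-1)^k/k!$. A minimal Python sanity check (an illustrative sketch, not a rigorous proof):

```python
from itertools import permutations
from math import factorial

def derangements_via_probability(n: int) -> int:
    # P(A) = sum_{k=0}^{n} (-1)^k / k!  (inclusion-exclusion),
    # so n(A) = n! * P(A).
    p_a = sum((-1) ** k / factorial(k) for k in range(n + 1))
    return round(factorial(n) * p_a)

def derangements_by_counting(n: int) -> int:
    # Brute-force count of permutations with no fixed point.
    return sum(
        all(sigma[i] != i for i in range(n))
        for sigma in permutations(range(n))
    )

for n in range(8):
    assert derangements_via_probability(n) == derangements_by_counting(n)
print("n(A) = n * P(A) matches the direct count for n = 0..7")
```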

I'm looking for examples of such counting problems: problems that can be solved using probability.

D S
  • The only thing I can think of which comes close is this proof: https://math.stackexchange.com/a/2813070/177399. Instead of just looking at a single event $A$, I looked at mutually exclusive events $A_0,A_1,\dots,A_m$ whose union was certain. This means $P(A_0)+P(A_1)+\dots+P(A_m)=1$ was relatively easy to prove, while the corresponding combinatorial identity $n(A_0)+n(A_1)+\dots+n(A_m)=n$ was hard to prove. – Mike Earnest Mar 27 '23 at 16:45
  • Thanks for the response, @MikeEarnest. Maybe you can post this as an answer? – D S Mar 27 '23 at 16:49
  • This question might also be of interest: https://math.stackexchange.com/q/3344468/177399 – Mike Earnest Mar 27 '23 at 17:19

2 Answers


The method you cite requires the elements of the outcome set all to have identical probability mass (i.e., unbiased weighting) [and, of course, a finite count].

If you do have that, then you can indeed use basic algebra to get $n(A)=n\,P(A)$.


I cannot think of a case where finding the count is much harder than finding the probability. In such a setting, if you can find one, you can find the other in much the same way; they differ only by a constant factor, after all.
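To make the uniformity requirement concrete, here is a minimal sketch with a toy six-sided die (my illustration, assuming nothing beyond the point above): with uniform weights, $n\,P(A)$ recovers the count exactly; with a biased weighting, it does not.

```python
from fractions import Fraction

# Six-sided die, event A = {1, 2}; n = 6, n(A) = 2.
outcomes = [1, 2, 3, 4, 5, 6]
event_a = {1, 2}

# Uniform weighting: n * P(A) recovers the count exactly.
uniform = {o: Fraction(1, 6) for o in outcomes}
p_uniform = sum(uniform[o] for o in event_a)
print(6 * p_uniform)  # 2, equals n(A)

# Biased weighting (hypothetical): outcome 1 twice as likely as the rest.
biased = {o: (Fraction(2, 7) if o == 1 else Fraction(1, 7)) for o in outcomes}
p_biased = sum(biased[o] for o in event_a)
print(6 * p_biased)  # 18/7, no longer the count n(A) = 2
```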

Graham Kemp

There is a notion related to your idea: probabilistic proofs of combinatorial identities. Suppose you have mutually exclusive and exhaustive events $E_1,\dots,E_m$, which means that $$ P(E_1)+\dots+P(E_m)=1\tag1 $$ Multiplying both sides by $n$, you get $$ n(E_1)+\dots+n(E_m)=n,\tag2 $$ an identity where all the numbers involved are integers. Often, for integer identities where all the terms have some combinatorial meaning, it is desirable to give a bijective proof: you find two sets described by the two sides of the equation and exhibit a bijection between them. I will now give an example where $(1)$ is easy to prove, but $(2)$ is hard to prove bijectively. In my mind, probabilistic proofs are the next best thing to bijective proofs.

Imagine an urn that initially contains a single red marble and a single blue marble. You make a series of $n$ draws from this urn, where after each draw you put back the marble you drew, plus two additional marbles of the same color. Let $E_k$ be the event that you drew exactly $k$ red marbles over the course of these $n$ draws. In my answer, I show that $$ P(E_k)=\frac{\binom{2k}k\binom{2(n-k)}{n-k}}{4^n},\qquad k\in \{0,1,\dots,n\} $$ which leads to the identity $$ \sum_{k=0}^n \binom{2k}k\binom{2(n-k)}{n-k}=4^n\tag3 $$ I cannot really claim that $(3)$ is hard to prove. You can prove $(3)$ automatically with a computer, using the methods of the book A=B by Petkovsek, Wilf and Zeilberger. There is also a pretty straightforward generating function proof of $(3)$. But these proofs lack the intuitiveness of a bijective proof, and proving $(3)$ bijectively is famously hard.
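As a quick computational cross-check, here is a minimal Python sketch (the state-space recursion and the helper name `urn_distribution` are mine, for illustration): an exact recursion over urn states reproduces the closed form for $P(E_k)$, and $(3)$ checks out in exact integer arithmetic.

```python
from fractions import Fraction
from math import comb

def urn_distribution(n):
    # Exact distribution of the number of red draws in n draws from a
    # Polya urn starting with 1 red, 1 blue, adding 2 of the drawn color.
    dist = {(1, 1): Fraction(1)}  # (red marbles, blue marbles) -> probability
    for _ in range(n):
        nxt = {}
        for (r, b), p in dist.items():
            t = r + b
            nxt[(r + 2, b)] = nxt.get((r + 2, b), 0) + p * Fraction(r, t)
            nxt[(r, b + 2)] = nxt.get((r, b + 2), 0) + p * Fraction(b, t)
        dist = nxt
    # k red draws leave the urn in state (1 + 2k, 1 + 2(n - k)).
    return [dist.get((1 + 2 * k, 1 + 2 * (n - k)), Fraction(0)) for k in range(n + 1)]

n = 10
closed_form = [Fraction(comb(2 * k, k) * comb(2 * (n - k), n - k), 4 ** n)
               for k in range(n + 1)]
assert urn_distribution(n) == closed_form  # matches P(E_k)
# Identity (3): sum of the numerators equals 4^n.
assert sum(comb(2 * k, k) * comb(2 * (n - k), n - k) for k in range(n + 1)) == 4 ** n
print("closed form and identity (3) verified for n =", n)
```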

Mike Earnest