I'm a layman, and the Gambler's fallacy has bothered me for decades. I'm looking for someone to help me look at it from another perspective.
Here's why it confuses me:
- Of course, if you were to flip a coin 100 times, none of those results should affect the 101st flip. The 101st flip is 50/50.
- Statistically, if you flipped a coin 100 times, you would expect around 50 heads and 50 tails.
So here's the question... if you flipped a coin 99 times and got 50 heads and 49 tails, why are we allowed to say that the 100th flip is still 50/50? Or, put another way, if we flipped 99 coins and they were all heads, why are we not allowed to say that the 100th coin is likely to be tails?
I logically understand that it will always be a 50/50 chance, but where I get confused is when we introduce a set of coin flips (say we are going to flip 100 coins total); my mind wants to believe that the flips will be distributed 50/50 across the set.
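One way to see the independence concretely is to simulate it. Here is a quick sketch (my own, not from the thread): generate many 6-flip sequences, keep only the ones whose first 5 flips were all heads, and check what fraction of those had heads on the 6th flip. If past flips mattered, this fraction would drift away from 1/2; it doesn't.

```python
import random

random.seed(0)
trials = 200_000
streak_count = 0        # sequences whose first 5 flips were all heads
heads_after_streak = 0  # of those, how many had heads on flip 6

for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(6)]  # True = heads
    if all(flips[:5]):
        streak_count += 1
        heads_after_streak += flips[5]

frac = heads_after_streak / streak_count
print(f"P(6th flip heads | first 5 all heads) ~ {frac:.3f}")  # close to 0.5
```

Conditioning on a 5-head streak (rather than 99 heads) just keeps the simulation feasible; by independence the same argument applies to the 100th flip after any run of 99.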
I dug around this stackexchange and did find this answer which sort of helps: https://math.stackexchange.com/a/845405/1276792
His third point, I think, is the closest I've been to understanding it: that there is no expected difference between the number of heads and tails... but it's still hard for me to reconcile the fact that the 100th toss is still 50/50. Maybe just a rephrase would help me.
Mods feel free to close this question if it is not in the correct spirit of the community
JMoravitz thanks for the comment too, let's say that the coin is definitely fair. I get that every coin flip is independent, and is still 50/50, but I just have a hard time reconciling a 50/50 probability over a set of 100 flips. Actually, what kind of helped is MJD's comment that there's only an 8% chance that it's exactly 50 heads 50 tails.
– A O Jan 08 '24 at 17:27
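MJD's 8% figure can be checked directly: the probability of exactly 50 heads in 100 fair flips is the binomial probability $\binom{100}{50}/2^{100}$. A one-liner to verify it (my addition, not from the thread):

```python
from math import comb

# P(exactly 50 heads in 100 fair flips) = C(100, 50) / 2^100
p = comb(100, 50) / 2**100
print(f"{p:.4f}")  # -> 0.0796, i.e. about 8%
```

So "around 50 heads" is the right expectation, but *exactly* 50/50 is itself fairly unlikely, which is part of why the set-level intuition misleads.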