This sounds like a simple question, but here's the gist:
Given a coin flip (or some other random process with exactly two outcomes) that has a perfect $50-50$ probability of landing on heads or tails (the probability of heads is $50\%$, the probability of tails is $50\%$): if I flip the coin $10$ times, the results will usually be close to $5-5$. If I flip it $100$ times, the results will usually be close to $50-50$. The larger my sample size, the more closely the results reflect the underlying probability.
But if I flip this coin once, there's a $50-50$ chance of it landing on either heads or tails, and the next flip has exactly the same probability. This seems to imply that every possible result of, say, $20$ flips would be equally likely: $8$ heads and $12$ tails would be just as likely as $10$ heads and $10$ tails.
If this is true, why do the results of flipping a coin many times trend towards an equal split? If this isn't true, why not?