Simple question first
"When a coin is tossed infinitely many times then we would get heads and tails for half of the cases each" is a false statement.
If you can toss a coin infinitely many times, you cannot be a human because you can somehow do infinitely many things in finite time. What on earth is infinity anyway? =) Even then, assuming the mathematical notion of infinity, if you toss a mathematical coin infinitely many times, you would get (with probability $1$) infinitely many heads and infinitely many tails. Both are countable (in the mathematical sense), and so are equinumerous. Not half though, since there's no such thing as dividing an infinite set in half!
Either way, the statement is not even meaningful, let alone true or false. However, there is a meaningful statement: "When a coin is tossed $2n$ times, then we would get $n$ heads and $n$ tails when $n$ is sufficiently large." This statement is of course false, but the following is true:
When a coin is tossed $2n$ times, then it is more likely to get $n$ heads than any other specific number of heads.
This is simple to prove using the binomial distribution.
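To see it concretely, here is a quick check in Python (a sketch; the helper name `heads_pmf` is mine): among $2n = 20$ fair tosses, $k = 10$ heads is the single most likely count, even though its probability is far from certainty.

```python
from math import comb

def heads_pmf(tosses: int, k: int) -> float:
    """Probability of exactly k heads in `tosses` fair-coin tosses."""
    return comb(tosses, k) / 2 ** tosses

# For 2n = 20 tosses, every count of heads from 0 to 20 is possible,
# but k = n = 10 is the unique mode of the distribution.
probs = {k: heads_pmf(20, k) for k in range(21)}
most_likely = max(probs, key=probs.get)  # → 10
```

Note that `probs[10]` is only about 0.18, so "most likely" is very far from "will happen".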
Empirical verification (proof)?
Now we turn to your real(?) question:
What does probability signify? Can it ever be verified?
The idea of empirical verification of probability does not even really make sense. An event either happens or doesn't happen, never both or neither. So if you ask for the likelihood of an event happening, sorry to say but it's going to be either $0$ or $1$, not any other number in-between. If you find any event that seems to violate this principle, it is because the event isn't properly specified to begin with.
So what could probability mean then?
Bayesian viewpoint
One viewpoint is the Bayesian viewpoint, which is that probability is merely a mathematical tool for us to investigate things that we don't know, where we assign a real number from $0$ to $1$ to indicate our certainty of a factual assertion. Then you will find that all the basic laws of probability hold. Of course, this also means that it has nothing to do with the actual facts in the real world, because it is about our human convictions or beliefs, and not about objective truth.
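As a toy illustration of this viewpoint (the hypotheses and numbers here are entirely made up), Bayes' rule tells us how such a degree of belief should change as evidence arrives:

```python
# Toy example: we are unsure whether a coin is fair or double-headed,
# and we start with equal degrees of belief in the two hypotheses.
prior = {"fair": 0.5, "double_headed": 0.5}
p_heads = {"fair": 0.5, "double_headed": 1.0}  # likelihood of heads under each

def update_on_heads(belief):
    """One application of Bayes' rule: posterior ∝ prior × likelihood."""
    unnormalized = {h: belief[h] * p_heads[h] for h in belief}
    total = sum(unnormalized.values())
    return {h: w / total for h, w in unnormalized.items()}

belief = prior
for _ in range(3):          # observe heads three times in a row
    belief = update_on_heads(belief)
# belief["fair"] is now 1/9: three heads shifted our certainty, but a
# degree of belief is all it is -- it says nothing about the actual coin.
```

The numbers obey all the usual laws of probability, yet they describe our state of knowledge, not the coin itself.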
Existence of true randomness?
Another viewpoint is that there could be physical processes that are random in some sense. If some process is truly random, then no being can predict its outcome with 100% certainty before the outcome occurs. We might then assume that the outcome is drawn from a probability distribution, and if that is really so, with "probability distribution" as defined in mathematics, then we can prove that the process must satisfy certain properties, and we can perform empirical tests to see whether those seem to hold. In your example, if a coin toss is a truly random process (just for the sake of the argument, because being macroscopic it certainly isn't), then we have three outcomes (heads, tails, side), and the outcome comes from some probability distribution. If we further assume that successive coin tosses have outcomes that all come from exactly the same distribution and are also independent (as defined mathematically), only then can we talk about things like the law of large numbers. For simplicity we shall from now on discard the "side" outcome.
Even then, the law of large numbers merely says that the fraction $r$ of heads tends to stabilize with probability $1$ as the number $n$ of tosses increases. Specifically, given any $ε > 0$, the probability that $r$ deviates from the underlying probability of heads by more than $ε$ tends to $0$ as $n \to \infty$. So this says nothing about the particular sequence of tosses that we actually get when we toss the coin! We can go further and compute confidence intervals for each $n$, such as the range $I$ of fractions within which we expect the actual fraction $r$ to lie, with 95% certainty. Specifically we have $\mathbb{P}( r \in I ) = 0.95$. Again, this does not force $r$ to be in $I$ exactly 95% of the time, in the same way we cannot think that the coin will land heads and tails exactly half the time each! So even when we run 100 experiments, each tossing the coin $n$ times, we only expect around 95 of them to have the fraction of heads in the 95% confidence interval.
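This is easy to check by simulation (a sketch; the normal-approximation interval is my choice of how to compute $I$, and the seed is arbitrary):

```python
import random

random.seed(0)

n = 1000            # tosses per experiment
experiments = 1000  # repeat the whole experiment many times
# 95% normal-approximation interval for the fraction of heads of a fair coin:
# 0.5 ± 1.96 * sqrt(0.25 / n)
half_width = 1.96 * (0.25 / n) ** 0.5

hits = sum(
    abs(sum(random.random() < 0.5 for _ in range(n)) / n - 0.5) <= half_width
    for _ in range(experiments)
)
# hits hovers around 950 (95% of 1000), but nothing forces it to be exactly 950
```

Running this with different seeds gives different counts near 950, never a guaranteed 950, which is exactly the point.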
This also has important implications for scientific studies. Say we have 100 useless drugs, but we test each of them on 100 patients. If we merely use a 95% confidence interval (it doesn't matter how it is computed) to decide whether a drug has a significant effect, then we should in fact expect about 5 of the drugs to fall outside their respective confidence intervals, and hence, from the viewpoint of the experiment, to be considered to have a significant effect with 95% confidence! Since scientific studies almost always publish positive results and rarely negative ones (who wants to read tons and tons of experiments that didn't show any result?), we should expect to see about 5 useless drugs claimed to be scientifically proven to have a significant effect out of every 100 useless drugs! This issue is not hypothetical but is actually happening all the time (see Publication bias for details and actual cases).
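The back-of-the-envelope count is easy to reproduce (a sketch; every "drug" below is pure noise, and "significance" means the observed recovery rate falls outside a 95% normal-approximation band):

```python
import random

random.seed(1)

drugs, patients = 100, 100
base_rate = 0.5                       # recovery rate with or without the drug
half_width = 1.96 * (base_rate * (1 - base_rate) / patients) ** 0.5

false_positives = 0
for _ in range(drugs):
    # every drug is useless: patients recover at the base rate regardless
    recoveries = sum(random.random() < base_rate for _ in range(patients))
    if abs(recoveries / patients - base_rate) > half_width:
        false_positives += 1          # looks "significant" purely by chance
# false_positives comes out around 5 -- roughly 5% of 100 useless drugs
```

If only the "significant" drugs get published, the published record consists entirely of noise.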
Pseudo-randomness
Finally, another possible viewpoint is that there is no true randomness in the real world, but there is still statistical randomness. Any fixed deterministic program produces a fixed sequence of bits, and those bits are of course not at all random, but if you are unable to predict any bit from the preceding bits, then the program output is said to be statistically random, and for all intents and purposes we can consider it random. We call any program designed to produce or approximate a statistically random bit sequence a PRNG (pseudo-random number generator). If the real world is a statistically random process, then we would not be able to distinguish it from true randomness, so everything in the previous section applies.
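A minimal example (a sketch using the classic linear congruential generator constants; not a generator anyone should use for serious work):

```python
def lcg_bits(seed: int, count: int) -> list[int]:
    """A linear congruential generator (constants from Numerical Recipes).
    Entirely deterministic, yet its top bits look statistically random."""
    state, bits = seed, []
    for _ in range(count):
        state = (1664525 * state + 1013904223) % 2 ** 32
        bits.append(state >> 31)  # top bit plays the role of one coin toss
    return bits

bits = lcg_bits(seed=42, count=10_000)
# Re-running with the same seed reproduces the sequence bit for bit,
# while the fraction of 1-bits still sits near 1/2, like a fair coin.
```

The sequence is completely determined by the seed, yet without knowing the seed and the formula you would have a hard time predicting the next bit, which is all "statistically random" asks for.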
Back to the real world
Note that if the real world is deterministic, then by definition it cannot be truly random. It would also mean that if we could replicate a coin toss exactly, the outcome would always be the same. But there is a theorem in quantum mechanics that says that, if our quantum mechanical model of the world is correct, then it is absolutely impossible to replicate the exact state of any particle. Thus by extension we cannot do exactly the same thing twice, which has some implications for the common interpretation of the law of large numbers and for empirical testing at the quantum level. Note that quantum mechanics does not at all prove or disprove the existence of true randomness, for pretty much the same reasons as in the previous section, namely that it is possible for the world to statistically obey the laws of quantum mechanics but actually be deterministic!