While von Mises' frequentist approach to probability--essentially, turning the law of large numbers from a theorem to a definition--can be made formally rigorous, it suffers from practical and conceptual difficulties relative to the more common Kolmogorov axiomatization. The Stanford link summarizes some of the relevant issues for frequentist approaches to probability in general:
> Finite frequentism gives an operational definition of probability, and its problems begin there. For example, just as we want to allow that our thermometers could be ill-calibrated, and could thus give misleading measurements of temperature, so we want to allow that our ‘measurements’ of probabilities via frequencies could be misleading, as when a fair coin lands heads 9 out of 10 times. More than that, it seems to be built into the very notion of probability that such misleading results can arise. Indeed, in many cases, misleading results are guaranteed. Starting with a degenerate case: according to the finite frequentist, a coin that is never tossed, and that thus yields no actual outcomes whatsoever, lacks a probability for heads altogether; yet a coin that is never measured does not thereby lack a diameter. Perhaps even more troubling, a coin that is tossed exactly once yields a relative frequency of heads of either 0 or 1, whatever its bias....[this is an instance] of the so-called ‘problem of the single case’. ... The problem of the single case is particularly striking, but we really have a sequence of related problems: ‘the problem of the double case’, ‘the problem of the triple case’ … Every coin that is tossed exactly twice can yield only the relative frequencies $0$, $1/2$ and $1$, whatever its bias… A finite reference class of size $n$, however large $n$ is, can only produce relative frequencies at a certain level of ‘grain’, namely $1/n$. Among other things, this rules out irrational-valued probabilities; yet our best physical theories say otherwise. Furthermore, there is a sense in which any of these problems can be transformed into the problem of the single case. Suppose that we toss a coin a thousand times. We can regard this as a single trial of a thousand-tosses-of-the-coin experiment. Yet we do not want to be committed to saying that that experiment yields its actual result with probability 1.
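Both the "grain" point and the single-sample "misleading measurement" point are easy to make concrete. A small sketch of my own (Python; not from the article):

```python
from fractions import Fraction
from math import comb

# A finite reference class of size n can only realize the relative
# frequencies k/n for k = 0, 1, ..., n: the 'grain' of 1/n in the quote.
n = 2
print([str(Fraction(k, n)) for k in range(n + 1)])  # ['0', '1/2', '1']
# In particular, no finite n ever realizes an irrational-valued probability.

# A fair coin also 'misleads' with positive probability: the chance of
# exactly 9 heads in 10 tosses, by direct binomial counting.
p = Fraction(comb(10, 9), 2**10)
print(p, float(p))  # 5/512 0.009765625
```

So roughly one fair coin in a hundred, tossed ten times, will report a frequency of $9/10$.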
The article then turns to von Mises' approach in particular:
> Some frequentists (notably Venn 1876, Reichenbach 1949, and von Mises 1957 among others), partly in response to some of the problems above, have gone on to consider infinite reference classes, identifying probabilities with limiting relative frequencies of events or attributes therein. Thus, we require an infinite sequence of trials in order to define such probabilities. But what if the actual world does not provide an infinite sequence of trials of a given experiment? Indeed, that appears to be the norm, and perhaps even the rule. In that case, we are to identify probability with a hypothetical or counterfactual limiting relative frequency. ... [T]here are sequences for which the limiting relative frequency of a given attribute does not exist... Von Mises (1957) gives us a ... restriction to what he calls collectives — hypothetical infinite sequences of attributes (possible outcomes) of specified experiments that meet certain requirements. Call a place-selection an effectively specifiable method of selecting indices of members of the sequence, such that the selection or not of the index $i$ depends at most on the first $i-1$ attributes. Von Mises imposes these axioms: 1) Axiom of Convergence: the limiting relative frequency of any attribute exists. 2) Axiom of Randomness: the limiting relative frequency of each attribute in a collective $\omega$ is the same in any infinite subsequence of $\omega$ which is determined by a place selection. The probability of an attribute $A$, relative to a collective $\omega$, is then defined as the limiting relative frequency of $A$ in $\omega$.
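In the quote's notation, the definition is $P(A \mid \omega) = \lim_{n\to\infty} \frac{1}{n}\,\#\{i \le n : \omega_i = A\}$. No finite computation can verify the axioms, which quantify over all place selections in the limit, but a finite-horizon sketch of my own (a truncated pseudo-random sequence standing in for an initial segment of a collective) at least illustrates what the Axiom of Randomness asserts:

```python
import random

random.seed(0)

# A long finite initial segment standing in for a hypothetical collective:
# independent fair-coin tosses, 0 = tails, 1 = heads.
n = 1_000_000
tosses = [random.randint(0, 1) for _ in range(n)]

def rel_freq(seq):
    """Relative frequency of heads in a finite sequence."""
    return sum(seq) / len(seq)

# A place selection: whether index i is selected may depend only on the
# first i-1 attributes.  Here: select toss i iff toss i-1 came up heads.
selected = [tosses[i] for i in range(1, n) if tosses[i - 1] == 1]

print(f"frequency of heads overall:    {rel_freq(tosses):.4f}")
print(f"frequency along the selection: {rel_freq(selected):.4f}")
# Both hover near 1/2; the Axiom of Randomness asserts that, in the limit,
# every admissible place selection yields the same limiting frequency.
```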
Although von Mises' definition is attractive, in the sense that it matches our intuition of empirical frequencies as "approximations" to the true limiting probability of some event, it has some unwelcome philosophical consequences:
> Von Mises .... regards single case probabilities as nonsense: “We can say nothing about the probability of death of an individual even if we know his condition of life and health in detail. The phrase ‘probability of death’, when it refers to a single person, has no meaning at all for us” (11). Some critics believe that rather than solving the problem of the single case, this merely ignores it. And note that von Mises drastically understates the commitments of his theory: by his lights, the phrase ‘probability of death’ also has no meaning at all when it refers to a million people, or a billion, or any finite number — after all, collectives are infinite. More generally, it seems that von Mises’ theory has the unwelcome consequence that probability statements never have meaning in the real world, for apparently all sequences of attributes are finite. He introduced the notion of a collective because he believed that the regularities in the behavior of certain actual sequences of outcomes are best explained by the hypothesis that those sequences are initial segments of collectives. But this is curious: we know for any actual sequence of outcomes that they are not initial segments of collectives, since we know that they are not initial segments of infinite sequences.
Basically, finite frequentism almost always gives the "wrong" answer for a probability, insofar as it supplies one at all (it cannot when the experiment is never performed):
> [F]inite frequentism makes the connection between probabilities and frequencies too tight, as we have already observed. A fair coin that is tossed a million times is very unlikely to land heads exactly half the time; one that is tossed a million and one times is even less likely to do so! Facts about finite relative frequencies should serve as evidence, but not conclusive evidence, for the relevant probability assignments.
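Putting numbers on that claim (my computation, not the article's): for $n = 10^6$ fair tosses, $P(\text{exactly } n/2 \text{ heads}) = \binom{n}{n/2}2^{-n} \approx \sqrt{2/(\pi n)} \approx 8 \times 10^{-4}$, and for $n = 10^6 + 1$ tosses, landing heads exactly half the time is not merely less likely but impossible, $n$ being odd.

```python
from math import lgamma, exp, log, sqrt, pi

n = 1_000_000
# log of C(n, n/2) * 2^(-n), via log-gamma so we never need to
# manipulate the astronomically large integers themselves.
log_p = lgamma(n + 1) - 2 * lgamma(n / 2 + 1) - n * log(2)
print(exp(log_p))          # ~ 0.000798
print(sqrt(2 / (pi * n)))  # Stirling's approximation agrees: ~ 0.000798
```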
von Mises' infinite or hypothetical frequentism, meanwhile, is unable to tell us the probability of any event whatsoever, even if we could somehow perform an infinite sequence of experiments!
> Hypothetical frequentism fails to connect probabilities with finite frequencies. It connects them with limiting relative frequencies, of course, but again too tightly: for even in infinite sequences, the two can come apart. (A fair coin could land heads forever, even if it is highly unlikely to do so.)
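This is the almost-sure/sure distinction made explicit. Under the Kolmogorov axiomatization, with i.i.d. fair tosses $X_1, X_2, \dots$, the strong law of large numbers says

$$\Pr\left(\lim_{n\to\infty}\frac{X_1+\cdots+X_n}{n}=\tfrac{1}{2}\right)=1,$$

yet the all-heads sequence, whose limiting relative frequency of heads is $1$, is a perfectly good point of the sample space; it merely lies in a set of probability $0$. Defining probability *as* the limiting frequency, rather than proving their almost-sure agreement as a theorem, leaves no way to call an all-heads run an improbable outcome of a fair coin: such a coin would simply be classified as having $P(\text{heads}) = 1$.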
As a result, von Mises' approach to probability is useless in practice:
> [S]cience has much interest in finite frequencies, and indeed working with them is much of the business of statistics. Whether it has any interest in highly idealized, hypothetical extensions of actual sequences, and relative frequencies therein, is another matter. The applicability to rational beliefs and to rational decisions goes much the same way. Such beliefs and decisions are guided by finite frequency information, but they are not guided by information about limits of hypothetical frequencies, since one never has such information.
(Emphases mostly mine throughout.)