
I read the Wikipedia article on Zeno's Paradoxes, and it is not clear whether Zeno's paradoxes have been solved or not. That article links to this paper: Why mathematical solutions of Zeno's paradoxes miss the point: Zeno's one and many relation and Parmenides' prohibition.

Have Zeno's paradoxes really been solved?

Sensebe
  • I'd say this was a philosophical, not a mathematical, question. – Angina Seng Jun 23 '18 at 10:47
  • I'd say that Zeno's paradoxes can't be "solved" in a standard mathematical sense. Instead, the paradoxes illustrate that infinity acts differently than regular numbers and that working with infinity requires special considerations. The paradoxes hint at the rules that one must have for working with infinity so that a mathematical system can be consistent. – Michael Burr Jun 23 '18 at 10:55
  • "In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead." (Aristotle, Physics VI:9, 239b15). As Lord Shark the Unknown and Michael Burr already said, it is a philosophical question concerning the concept of infinity. In the above question, however, if you understand "never" as "not in finite time", then you see why the mathematical perspective is more reasonable than the "philosophical" one, which seems to claim that it is impossible to perform infinitely many steps in a finite time. – Paul Frost Jun 23 '18 at 11:11
  • It's important to ask what you mean by a "solution" to Zeno's paradoxes. A mathematician's answer to Zeno's reasoning would be a simple "you made a lot of unjustified claims, so your proofs are invalid". There are countless examples in mathematics where a seemingly reasonable sequence of statements leads to a ridiculous conclusion. Therefore mathematicians have come to understand that only strict reasoning, where each claim is properly justified, can be relied on. Zeno's paradoxes lack that quality, so from the mathematical viewpoint there is nothing to "solve". – Adayah Jun 23 '18 at 12:31
  • However, there probably are different viewpoints, such as philosophy, where there are no strict criteria for either a reasoning or a refutation to be valid (of course I don't claim philosophy is worse than math; it just has a different perspective on truth). From those viewpoints, Zeno's paradoxes may be real problems which demand real solutions, but it might be better to ask for them on a different site. – Adayah Jun 23 '18 at 12:31
  • The Achilles and the Tortoise "paradox" presents no problem to the modern mind. Using Galileo's simple speed-distance-time formula (developed two thousand years after Zeno), a school child today can calculate precisely where and when Achilles would overtake the Tortoise (assuming constant speeds); a numerical sketch follows after these comments. The Greek philosophers of Zeno's day apparently had no notion of how to measure the speed of an object. Decomposing the race into infinitely many, ever decreasing segments turned out to be a dead end. – Dan Christensen Jun 25 '18 at 15:49

1 Answer


Zeno's paradox has become irrelevant due to quantum mechanics. The movement of objects is only approximately described by classical mechanics; when you look at smaller and smaller time intervals and length scales, the classical picture, in which motion is supposed to be continuous, becomes increasingly inaccurate. Instead, a particle observed at position $x_1$ at time $t_1$ can end up at position $x_2$ at time $t_2$ with a probability given by a path integral, which involves integrating over all possible paths from the initial to the final state.

In classical physics the laws of motion can be derived from the Lagrangian. If a particle moves from $x_1$ at time $t_1$ to $x_2$ at time $t_2$, the actual path $x(t)$ this particle takes will minimize the functional $S[x(t)]$, defined as:

$$S[x(t)] = \int_{t_1}^{t_2}L\left(x,\frac{dx}{dt}\right)dt$$

where $L$ is the Lagrangian. According to quantum mechanics, there is no longer a well-defined path; instead, the probability of finding the particle at position $x_2$ at time $t_2$ is the modulus squared of a so-called probability amplitude $\mathcal{A}$, given by:

$$\mathcal{A} = \frac{1}{Z}\int Dx \exp\left[\frac{i}{\hbar}S[x(t)]\right]$$

Here $Z$ is a normalization constant that can be fixed by demanding that the probability that the particle ends up at any position equals 1. For macroscopic systems, whose action is large compared to $\hbar$, the dominant contribution to the path integral comes from paths that minimize the action, which are the paths classical mechanics predicts the particle follows. However, in principle all possible paths contribute to the path integral.
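A minimal numerical sketch of that last point (the free-particle setup, endpoints and numbers are illustrative choices, not part of the original argument): discretizing the action integral above for a free particle shows that the straight-line path has a smaller action than any path with an added wiggle, so it is the path around which the phases $e^{iS/\hbar}$ add up constructively.

```python
import numpy as np

# Illustrative setup: a free particle with L = (1/2) m (dx/dt)^2, travelling
# from x_1 = 0 at t_1 = 0 to x_2 = 1 at t_2 = 1.  We approximate the action
# S[x(t)] on a fine time lattice and compare the classical (straight-line)
# path with paths that have an extra wiggle but the same endpoints.
m = 1.0
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def action(x):
    """Riemann-sum approximation of S = integral of (1/2) m (dx/dt)^2 dt."""
    v = np.diff(x) / dt
    return np.sum(0.5 * m * v**2 * dt)

straight = t.copy()                        # classical path x(t) = t
print("straight line:", action(straight))  # ~0.5, the minimum

for amplitude in (0.1, 0.3):
    wiggle = amplitude * np.sin(2 * np.pi * t)   # vanishes at both endpoints
    print("wiggle", amplitude, ":", action(straight + wiggle))  # always larger
```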

So quantum mechanics does away with the whole idea that there exist definite intermediate positions. A technical issue with defining the path integral is that it is formally infinite, due to the continuous nature of the paths one is integrating over. One has to regularize the paths, e.g. by putting them on a lattice; the path integral and the normalization factor are then well defined, their ratio can be taken, and one can then take the continuum limit.
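A similarly stripped-down sketch of what "putting the paths on a lattice and then taking the continuum limit" means at the level of a single path (the toy path and Lagrangian are again my own choices, and the real regularization of course concerns the whole integral over paths, not just one path): the lattice action converges to the continuum action as the time step shrinks.

```python
import numpy as np

# Toy example: the path x(t) = sin(t) on [0, 1] with L = (1/2) (dx/dt)^2.
# Exact continuum action: (1/2) * integral of cos(t)^2 dt = (2 + sin 2) / 8.
exact = (2.0 + np.sin(2.0)) / 8.0

for n_slices in (10, 100, 1000, 10000):
    t = np.linspace(0.0, 1.0, n_slices + 1)   # time lattice, spacing 1 / n_slices
    x = np.sin(t)                              # the path sampled on the lattice
    v = np.diff(x) / np.diff(t)                # lattice velocities
    s_lattice = np.sum(0.5 * v**2 * np.diff(t))
    print(n_slices, s_lattice, "error:", abs(s_lattice - exact))
# The error shrinks as the lattice spacing goes to zero: the lattice-regularized
# quantity has a well-defined continuum limit.
```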

This regularization procedure is trivial for simple systems: in such cases the infinities just factor out and you can ignore the regularization altogether. But in quantum field theory, where instead of a single particle position $x$ we are dealing with entire field configurations that are assigned an amplitude (the so-called wavefunctional, instead of the single-particle wavefunction), the regularization procedure becomes an essential part of doing computations.

What makes the regularization procedure non-trivial is that the Lagrangian itself must be included in the limit procedure that takes one to the continuum. This process is called renormalization; in the early days of quantum field theory it was seen as an ill-defined mathematical trick, because many constants in the Lagrangian tend to infinity in that limit.

However, the modern view, which emerged in the 1970s from the work of Kenneth Wilson, is that one should take the continuum limit the other way around. Instead of taking the limit to smaller and smaller scales, one fixes a scale at which the Lagrangian is well defined and then considers the effective Lagrangian obtained by partially integrating out, from the path integral, the small-scale fluctuations that are not visible at some larger scale. One can apply this to a wide range of physical models, not just quantum field theory. In general, for any physical model one can try to obtain a rescaled version of that model by averaging over small-scale fluctuations. Such methods are known as renormalization group methods.
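As a toy example of such a renormalization-group step (the 1D Ising chain is my own illustrative choice, not a model discussed above): summing out every second spin of a nearest-neighbour Ising chain with coupling $K$ leaves a chain with half as many spins and a renormalized coupling $K' = \tfrac{1}{2}\ln\cosh 2K$. Iterating this "zooming out" drives any finite coupling towards the trivial fixed point $K = 0$, which is the renormalization-group way of saying that the 1D chain has no finite-temperature phase transition.

```python
import math

def decimate(k):
    """One real-space RG step for the 1D Ising chain: summing out every second
    spin leaves the remaining spins coupled with K' = (1/2) ln cosh(2K)."""
    return 0.5 * math.log(math.cosh(2.0 * k))

k = 2.0                      # an arbitrary starting coupling
for step in range(8):
    print(f"step {step}: K = {k:.6f}")
    k = decimate(k)
# K flows monotonically towards 0: after enough coarse-graining steps the chain
# looks like a collection of effectively independent spins.
```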

A discrete model will in general "renormalize" to a continuum model in the limit where we "zoom out" by an infinite amount. The fact that the macroscopic world, which we can only probe with finite resolution, looks like a continuum therefore tells you nothing about how Nature behaves at the smallest scales. And the fact that not defining the Lagrangian at some finite scale, and instead pretending that you can take the limit to ever smaller scales, leads to a nonsensical Lagrangian at zero length scale (which doesn't matter for practical computations) can be taken as evidence that the notion of a real continuum, as opposed to an effective one, is physically untenable.

Count Iblis