Zeno's paradox has become irrelevant in light of quantum mechanics. The motion of objects is only approximately described by classical mechanics; when you look at smaller and smaller time intervals and length scales, the classical picture, in which the motion is supposed to be continuous, becomes increasingly inaccurate. Instead, a particle observed at position $x_1$ at time $t_1$ can end up at position $x_2$ at time $t_2$ with a probability given by a path integral, which involves an integration over all possible paths from the initial to the final state.
In classical physics the laws of motion can be derived from the Lagrangian. If a particle moves from $x_1$ at time $t_1$ to $x_2$ at time $t_2$, the actual path $x(t)$ this particle takes makes stationary (typically, minimizes) the functional $S[x(t)]$, defined as:
$$S[x(t)] = \int_{t_1}^{t_2}L\left(x,\frac{dx}{dt}\right)dt$$
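The stationarity of the classical path can be checked numerically. The sketch below is my own illustration (not taken from the text), assuming a free-particle Lagrangian $L = \frac{1}{2}m\dot{x}^2$ with $m = 1$: it discretizes the action on a time grid and compares the straight-line classical path to a perturbed path with the same endpoints.

```python
import numpy as np

def action(x, dt, m=1.0):
    """Discretized action: S = sum over steps of (1/2) m (dx/dt)^2 * dt."""
    v = np.diff(x) / dt
    return np.sum(0.5 * m * v**2 * dt)

N = 100                       # number of time steps (a choice for this sketch)
t1, t2 = 0.0, 1.0
x1, x2 = 0.0, 1.0
dt = (t2 - t1) / N
t = np.linspace(t1, t2, N + 1)

# Classical path for a free particle: the straight line between the endpoints.
straight = x1 + (x2 - x1) * (t - t1) / (t2 - t1)
# Any perturbation with the same endpoints should have a larger action.
wiggle = straight + 0.1 * np.sin(2 * np.pi * t)

print(action(straight, dt))   # classical path: S = 0.5 here
print(action(wiggle, dt))     # larger action
```

The cross term between the straight path's constant velocity and the perturbation integrates to zero over a full period, so the perturbed action exceeds the classical one by a strictly positive quadratic term.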
where $L$ is the Lagrangian. According to quantum mechanics, there is no well-defined path anymore; instead, the probability of finding the particle at position $x_2$ at time $t_2$ is the modulus squared of a so-called probability amplitude $\mathcal{A}$, given by:
$$\mathcal{A} = \frac{1}{Z}\int Dx \exp\left[\frac{i}{\hbar}S[x(t)]\right]$$
Here $Z$ is a normalization constant, fixed by demanding that the probability that the particle ends up at some position equals 1. For macroscopic systems, whose action is large compared to $\hbar$, the dominant contribution to the path integral comes from paths that make the action stationary, which are exactly the paths classical mechanics predicts the particle follows. In principle, however, all possible paths contribute to the path integral.
So, quantum mechanics does away with the whole idea that there exist definite intermediate positions. A technical issue with defining the path integral is that it is formally infinite, due to the continuous nature of the paths one is integrating over. One has to regularize the paths, e.g. by putting them on a lattice; the path integral and the normalization factor are then well defined, their ratio can be taken, and one can then take the continuum limit.
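To make the lattice idea concrete, here is a small numerical sketch of my own (not a procedure from the text). In imaginary (Euclidean) time the integrand is real rather than oscillatory, and the regularized path integral for a free particle reduces to a finite number of ordinary integrals, i.e. a matrix product on a position grid. Composing $N$ short-time kernels should then reproduce the exact propagator for the full time interval; units with $m = \hbar = 1$ are assumed.

```python
import numpy as np

def kernel(x, dt):
    """Euclidean free-particle propagator K(x_i, x_j; dt) on a position grid."""
    dx = x[:, None] - x[None, :]
    return np.exp(-dx**2 / (2.0 * dt)) / np.sqrt(2.0 * np.pi * dt)

x = np.linspace(-10.0, 10.0, 801)   # position grid (finite box = a cutoff)
h = x[1] - x[0]                     # grid spacing = integration measure
N, T = 8, 1.0                       # N lattice time steps covering total time T

K = kernel(x, T / N)
# Each of the N-1 intermediate positions is integrated over: sum * h per step.
composed = K @ np.linalg.matrix_power(h * K, N - 1)

i = np.argmin(np.abs(x - 1.0))      # endpoint x_2 = 1
j = np.argmin(np.abs(x - 0.0))      # starting point x_1 = 0
exact = np.exp(-1.0 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T)
print(composed[i, j], exact)        # the two should agree to good accuracy
```

The real-time path integral has the same lattice structure, but with oscillatory kernels $\propto e^{iS/\hbar}$ that are much harder to handle numerically; the Euclidean version is the standard trick for making the regularization computable.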
This regularization procedure is trivial for simple systems: the infinities factor out and cancel in the ratio, so one can skip the regularization altogether. But in quantum field theory, where instead of a single particle position $x$ we are dealing with entire field configurations that are assigned an amplitude (the so-called wavefunctional, instead of the single-particle wavefunction), the regularization procedure becomes an essential part of doing computations.
What makes the regularization procedure non-trivial is that the Lagrangian itself must be included in the limiting procedure that takes one to the continuum. This process is called renormalization; it was seen as an ill-defined mathematical trick when it was used in the early days of quantum field theory, because many constants in the Lagrangian tend to infinity in that limit.
However, the modern view, which emerged in the 1970s from the work of Kenneth Wilson, is that one should take the continuum limit the other way around. Instead of taking the limit to smaller and smaller scales, one fixes a scale at which the Lagrangian is well defined, and then considers the effective Lagrangian obtained by integrating out of the path integral the small-scale fluctuations that are not visible at some larger scale. One can apply this to a wide range of physical models, not just quantum field theory: for any physical model, one can try to obtain a rescaled version of that model by averaging over small-scale fluctuations. Such methods are known as renormalization group methods.
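A standard textbook instance of such an averaging step, offered here as my own illustration rather than something derived in the text, is decimation of the 1D Ising chain: summing out every other spin of a chain with nearest-neighbour coupling $K$ (in units of $1/k_BT$) yields an effective chain of the same form with a new coupling $K' = \frac{1}{2}\ln\cosh(2K)$. The coupling constant, i.e. the "Lagrangian" of the model, is itself transformed at each step, which is exactly the sense in which renormalization rescales a model.

```python
import math

def decimate(K):
    """One RG step for the 1D Ising chain: integrate out alternating spins.

    The exact recursion is K' = (1/2) * ln(cosh(2K)).
    """
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.5   # some initial coupling (an arbitrary choice for this sketch)
for step in range(10):
    K = decimate(K)
    print(step, K)   # K shrinks at every step
```

Iterating the map drives $K$ toward the trivial fixed point $K = 0$, reflecting the fact that the 1D Ising chain has no finite-temperature phase transition; in higher dimensions the analogous flow has a non-trivial fixed point.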
A discrete model will in general "renormalize" to a continuum model in the limit where we "zoom out" by an infinite amount. The fact that the macroscopic world, which we can only probe with some finite resolution, looks like a continuum therefore tells you nothing about how Nature behaves at the smallest scales. And the fact that declining to define the Lagrangian at some finite scale, and instead pretending one can take the limit to the smallest scales, leads to a nonsensical Lagrangian at zero length scale (which doesn't matter for practical computations) can be taken as evidence that the notion of a real continuum, as opposed to an effective one, is physically untenable.