Suppose that we have a unit square and are interested in the distance between two opposite corners.
The Euclidean distance is $\sqrt{2}$.
The Manhattan distance is $1 + 1 = 2$.
Suppose we subdivide the square by dividing both the width and the height in half, routing the corner-to-corner path along the resulting staircase of grid edges.
The Euclidean distance remains $\sqrt{2}$.
With $n$ subdivisions per side, the staircase consists of $n$ horizontal and $n$ vertical steps of length $\frac{1}{n}$ each, so the Manhattan distance is $n \cdot \frac{1}{n} + n \cdot \frac{1}{n} = 2$.
Then $\lim_{n \to \infty} L_n = 2$, where $L_n$ is the length of the staircase path after $n$ subdivisions.
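Here's a quick numerical check of that computation (a minimal sketch in plain Python; the explicit corner construction and the name `staircase_length` are just my illustration):

```python
import math

def staircase_length(n: int) -> float:
    """Length of the staircase path from (0, 0) to (1, 1) built from
    n horizontal and n vertical steps of size 1/n."""
    pts = [(0.0, 0.0)]
    for _ in range(n):
        x, y = pts[-1]
        pts.append((x + 1.0 / n, y))            # step right
        pts.append((x + 1.0 / n, y + 1.0 / n))  # step up
    # Sum the Euclidean lengths of the individual segments.
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

for n in (1, 2, 4, 1024):
    print(f"n = {n:5d}: staircase length = {staircase_length(n):.6f}")
print(f"diagonal length = {math.sqrt(2):.6f}")
```

Every value of $n$ prints a staircase length of exactly $2.000000$, against a diagonal of $1.414214$.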
That said, if $n$ is large, I could be presented with a unit square at such high resolution that I wouldn't be able to distinguish the staircase of length $2$ from the diagonal of length $\sqrt{2}$. This seems strange to me. Why isn't $\lim_{n \to \infty} L_n = \sqrt{2}$, intuitively? I'm assuming it has to do with the fact that no matter how fine the resolution, the hypotenuse of each subdivision is always shorter than the sum of its two sides, but it's still a bit strange to me.
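I can make that intuition quantitative with a small check (plain Python again; the corner coordinates assume the staircase construction above): the farthest the staircase ever strays from the diagonal $y = x$ is $\frac{1}{n\sqrt{2}}$, which vanishes, yet in every cell the path travels $\frac{2}{n}$ where the diagonal travels only $\frac{\sqrt{2}}{n}$:

```python
import math

for n in (1, 10, 100, 10**6):
    # Outer staircase corners sit at ((k + 1)/n, k/n); their distance
    # to the line x - y = 0 is (1/n) / sqrt(2), shrinking with n.
    max_deviation = (1.0 / n) / math.sqrt(2)
    # Per cell the staircase travels 2/n but the diagonal sqrt(2)/n,
    # so the length ratio never budges.
    length_ratio = (2.0 / n) / (math.sqrt(2) / n)
    print(f"n = {n:7d}: max deviation = {max_deviation:.2e}, "
          f"length ratio = {length_ratio:.6f}")
```

The deviation shrinks to zero while the per-cell length ratio stays pinned at $\sqrt{2}$, which is exactly the tension I find strange.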
This is the core flaw behind many more elementary "fake proofs" in math, like claims that $\pi = 4$ or $\sqrt{2} = 2$: the lengths of a sequence of curves need not converge to the length of the curve they approach, even when the approach is uniform.
Some more discussion is here and elsewhere.
– PrincessEev Jun 13 '23 at 21:46
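The $\pi = 4$ claim mentioned in that comment runs on the same mechanism: circumscribe a circle of diameter $1$ with a square and fold corners inward forever; the staircase hugs the circle, but its perimeter stays $4$ while the circumference is $\pi$. A minimal sketch (plain Python; the quarter-circle staircase construction is my own illustration, not from the comment):

```python
import math

def circle_staircase_perimeter(n: int, r: float = 0.5) -> float:
    """Perimeter of an axis-aligned staircase tracing a circle of
    radius r, using n horizontal steps per quadrant."""
    total = 0.0
    for k in range(n):
        x0, x1 = r * k / n, r * (k + 1) / n
        y0 = math.sqrt(r * r - x0 * x0)  # circle height at x0
        y1 = math.sqrt(r * r - x1 * x1)  # circle height at x1
        total += (x1 - x0) + (y0 - y1)   # one right step, one down step
    return 4.0 * total  # four symmetric quadrants

for n in (1, 10, 10**4):
    print(f"n = {n:5d}: staircase perimeter = "
          f"{circle_staircase_perimeter(n):.6f}")
print(f"circumference = {math.pi:.6f}")  # pi * diameter, diameter = 1
```

The horizontal steps in each quadrant telescope to $r$ and the vertical steps to $r$, so the perimeter is $8r = 4$ at every resolution, never $\pi$.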