The standard deviation of a population $[a_1, a_2, \dots, a_N]$ is defined as follows: $$ \sigma = \sqrt{\frac{\sum_{i=1}^{N} (a_i - \mu)^2}{N}} = \frac{\sqrt{\sum_{i=1}^{N} (a_i - \mu)^2}}{\sqrt{N}} $$
The numerator is the (Euclidean) length of the following vector: $$ [a_1 - \mu, a_2 - \mu, \dots, a_N - \mu] $$ We may imagine that this vector lives in some $N$-dimensional configuration space which describes the dispersion of the population.
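As a quick sanity check, here is a small NumPy sketch (the array `a` is just made-up data) verifying that the direct definition and the "length of the deviation vector over $\sqrt{N}$" view give the same value:

```python
import numpy as np

# Made-up population data (any values would do)
a = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
N = a.size
mu = a.mean()

# Direct definition: square root of the mean squared deviation
sigma_direct = np.sqrt(((a - mu) ** 2).sum() / N)

# Geometric view: Euclidean length of the deviation vector, divided by sqrt(N)
sigma_geometric = np.linalg.norm(a - mu) / np.sqrt(N)

print(sigma_direct, sigma_geometric, np.std(a))  # all three agree (np.std uses ddof=0)
```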
As we go into higher dimensions, this vector naturally gets longer, because we are adding more coordinates without changing the values of the existing ones. So clearly we have to divide by some scaling factor to make the measure comparable across different population sizes.
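A rough simulation (assuming, purely for illustration, that the entries are drawn i.i.d. from a fixed distribution) shows that the raw length of the deviation vector grows roughly like $\sqrt{N}$, so dividing by $\sqrt{N}$ is the factor that keeps the quantity stable as the population grows:

```python
import numpy as np

rng = np.random.default_rng(0)

for N in [10, 100, 1_000, 10_000, 100_000]:
    # Draw N i.i.d. samples from a fixed distribution (standard normal here)
    a = rng.standard_normal(N)
    deviation_length = np.linalg.norm(a - a.mean())
    # The raw length keeps growing ~ sqrt(N); dividing by sqrt(N) stabilizes it near 1
    print(f"N={N:>6}  ||a - mu|| = {deviation_length:9.2f}  "
          f"||a - mu||/sqrt(N) = {deviation_length / np.sqrt(N):.3f}")
```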
Why is that scaling factor the square root of $N$ instead of some other expression? Does it have anything to do with the $\sqrt{m}$ that appears in this problem?