Say we have a sequence $x_1,x_2,\dots$ of i.i.d. random vectors in $\mathbb{R}^n$ with mean $0$ and variance $\sigma^2$, meaning $$\mathbb{E}[\|x_i\|_2^2] = \sigma^2$$ for all $i$. Then it's a pretty standard exercise that the variance of the empirical averages tends to zero: $$\mathbb{E}\left[\left\|\frac{1}{N}\sum\limits_{i=1}^{N}x_i\right\|_2^2\right] = \frac{\sigma^2}{N}$$ (expanding the square, the cross terms vanish by independence and zero mean, leaving $\frac{1}{N^2}\sum_{i=1}^{N}\mathbb{E}[\|x_i\|_2^2] = \sigma^2/N$). What happens if I replace the Euclidean norm $\|\cdot\|_2$, both in the definition of $\sigma^2$ and in the norm of the average, with something else, for instance $\|\cdot\|_p$ with $p=1$ or $p=\infty$? Can I still obtain a bound, something like
$$\mathbb{E}\left[\left\|\frac{1}{N}\sum\limits_{i=1}^{N}x_i\right\|_p^2\right] \leq K\frac{\sigma^2}{N}$$ for an appropriate constant $K$? I know this is possible by invoking the equivalence of any norm to the Euclidean norm, but that approach introduces constants that depend on the dimension $n$. My question: is it possible to obtain a similar bound on the variance of the empirical averages that is independent of the dimension?
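For what it's worth, here is a minimal Monte Carlo sketch (assuming NumPy; `empirical_ratio` is just my own helper name) probing the $p=\infty$ case with Rademacher ($\pm 1$) coordinates, for which $\sigma^2 = \mathbb{E}[\|x_i\|_\infty^2] = 1$ exactly. The estimated ratio $\mathbb{E}[\|\frac{1}{N}\sum_i x_i\|_\infty^2]/(\sigma^2/N)$ seems to grow roughly like $2\log n$, which makes me suspect the answer involves at least a logarithmic dependence on the dimension, but of course this proves nothing:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_ratio(n, N, trials=1000):
    """Monte Carlo estimate of E[||(1/N) sum_i x_i||_inf^2] / (sigma^2 / N)
    for x_i with i.i.d. Rademacher (+/-1) coordinates, where
    sigma^2 = E[||x_i||_inf^2] = 1."""
    # Coordinate-wise, the average of N i.i.d. +/-1 signs equals
    # 2*Binomial(N, 1/2)/N - 1, so we can sample the empirical averages
    # directly without materializing a (trials, N, n) array.
    avg = 2.0 * rng.binomial(N, 0.5, size=(trials, n)) / N - 1.0
    sq_inf_norms = np.max(np.abs(avg), axis=1) ** 2  # ||average||_inf^2 per trial
    return sq_inf_norms.mean() * N                   # divide by sigma^2/N = 1/N

N = 100
for n in [1, 10, 100, 1000, 10000]:
    # If a dimension-free K existed, these estimates would stay bounded in n.
    print(f"n = {n:5d}:  estimated K = {empirical_ratio(n, N):.2f}")
```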