Let $X_i$ be independent random variables, $\forall\,i \in \mathbf{n} \equiv \{0,\dots,n-1\}$, with identical expectation value $\mathbb{E}(X_i)=\mu$ and identical variance $\mathrm{Var}(X_i)=\sigma^2$.¹ Also, let $\overline{X}$ be their average, $\frac{1}{n}\sum X_i$ (where the summation over all $i\in \mathbf{n}$ is implicit, a convention I'll use throughout).
It is not hard to show, by direct (if slightly tedious) calculation, that
$$ \textstyle \mathbb{E} \left( \sum (X_i - \mu)^2 \right) - \mathbb{E} \left( \sum (X_i - \overline{X})^2 \right) = \sigma^2 $$
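(For reference, the direct calculation I have in mind can be sketched as follows: writing $X_i - \overline{X} = (X_i - \mu) - (\overline{X} - \mu)$, expanding, and using $\sum (X_i - \mu) = n(\overline{X} - \mu)$ gives
$$ \textstyle \sum (X_i - \overline{X})^2 = \sum (X_i - \mu)^2 - n (\overline{X} - \mu)^2 $$
and taking expectations, with $\mathbb{E} \left( ( \overline{X} - \mu )^2 \right) = \mathrm{Var}(\overline{X}) = \sigma^2/n$, turns the subtracted term into $n \cdot \sigma^2/n = \sigma^2$.)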
Whenever I arrive at a very "simple" result through a "tedious" derivation, as in this case, I get the strong suspicion that there has to be a more direct, and yet entirely rigorous, reasoning to reach it. Or rather, a way to view the problem that makes the result immediately "obvious".²
In this case, the best I have found goes something like this: the above result follows from the fact that, first,
$$ \textstyle \mathbb{E} \left( \sum (X_i - \mu)^2 \right) = n \sigma^2 $$
and, second, if we subtract $\overline{X}$ instead of $\mu$, we have "lost one degree of freedom", and "therefore"
$$ \textstyle \mathbb{E} \left( \sum (X_i - \overline{X})^2 \right) = (n - 1) \sigma^2 $$
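(The two expectations themselves are easy to confirm numerically. Here is a quick Monte Carlo sketch with numpy; the normal distribution and the values of $n$, $\mu$, $\sigma$ are arbitrary choices of mine, made only for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma = 5, 2.0, 3.0      # arbitrary illustrative values
reps = 200_000                  # Monte Carlo replications

# reps x n matrix of draws; any distribution with mean mu and
# variance sigma^2 would serve equally well
x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1, keepdims=True)

print(np.sum((x - mu) ** 2, axis=1).mean())    # ~ n * sigma^2 = 45
print(np.sum((x - xbar) ** 2, axis=1).mean())  # ~ (n - 1) * sigma^2 = 36
```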
I find this hand-wavy argument thoroughly unconvincing. (I doubt that those who propose it would believe it if they didn't already know the result from a more rigorous derivation.)
Is there something better?
EDIT:
Here's an example of the kind of argument I'm looking for. It still has too many gaps to be satisfactory, but at least it shows a reasoning that does not require pencil and paper: it could be delivered orally, or with crude "marks in the sand" (no algebra), and be readily understood.
First we can see that
$$ \textstyle \mathbb{E} \left( \sum (X_i - \mu)^2 \right) \geq \mathbb{E} \left( \sum (X_i - \overline{X})^2 \right) $$
Why? Because the value of $c$ that minimizes $\sum (X_i - c)^2$ is $c = \overline{X}$ (a fact for which I could give a similarly hand-wavy, not-entirely-watertight argument, though it is easy to prove by tedious computation), which means that whenever $\overline{X} \neq \mu$ we have $\sum (X_i - \mu)^2 > \sum (X_i - \overline{X})^2$.
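(The minimization fact itself has a one-line check: since $\sum (X_i - \overline{X}) = 0$, for every $c$
$$ \textstyle \sum (X_i - c)^2 = \sum (X_i - \overline{X})^2 + n (\overline{X} - c)^2 \geq \sum (X_i - \overline{X})^2 $$
with equality only when $c = \overline{X}$.)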
Now, what "drives" $\overline{X}$ away from $\mu$ (so to speak) is $\sigma^2$. So we can conclude that the difference
$$ \textstyle \mathbb{E} \left( \sum (X_i - \mu)^2 \right) - \mathbb{E} \left( \sum (X_i - \overline{X})^2 \right) \geq 0 $$
should increase monotonically as $\sigma^2$ increases...
Fair enough, but why is the difference exactly $\sigma^2$, and not, say, $\sigma^2/n$, or *gasp* $\pi \sigma^2/n$? Here my hand-waving begins to run out of steam... It is suggestive that $\mathbb{E}(\overline{X}) = \mu$ and
$$\mathrm{Var}(\overline{X}) = \frac{\sigma^2}{n} = \mathbb{E} \left( ( \overline{X} - \mathbb{E}(\overline{X}))^2 \right) = \mathbb{E} \left( ( \overline{X} - \mu )^2 \right)$$
Therefore it is tempting to surmise that, since for each $i$ we have $(X_i - \mu) - (X_i - \overline{X}) = \overline{X} - \mu$, each term $(X_i - \mu)^2 - (X_i - \overline{X})^2$ would contribute, "on average", $\mathbb{E}\left( ( \overline{X} - \mu )^2 \right) = \sigma^2/n$ to the total difference. This would require justifying the tantalizingly Pythagorean-looking equality:
$$\mathbb{E} \left( ( X_i - \mu )^2 \right) = \mathbb{E} \left( ( X_i - \overline{X})^2 + ( \overline{X} - \mu )^2 \right)$$
...though I readily concede that this is beginning to look as tedious as any algebraic computation.
(I note that in this argument I did not use the fact that in this case $\mathrm{Var}(\sum X_i)=\sum\mathrm{Var}(X_i)$, which is surely the way forward.)
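(For what it's worth, the Pythagorean-looking equality can be checked by expanding $(X_i - \mu)^2 = \left( (X_i - \overline{X}) + (\overline{X} - \mu) \right)^2$ and noting that the cross term has zero expectation:
$$ \textstyle \mathbb{E} \left( (X_i - \overline{X})(\overline{X} - \mu) \right) = \mathbb{E} \left( (X_i - \mu)(\overline{X} - \mu) \right) - \mathbb{E} \left( (\overline{X} - \mu)^2 \right) = \frac{\sigma^2}{n} - \frac{\sigma^2}{n} = 0 $$
where the first expectation uses the independence (pairwise uncorrelatedness is enough) of the $X_i$, and the second is just $\mathrm{Var}(\overline{X})$. But of course this is algebra again, not the pencil-free argument I'm after.)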
¹ The condition typically given is that the $X_i$ are independent and identically distributed, but, AFAICT, the latter condition is stronger than necessary. For that matter, as Dilip Sarwate pointed out, the independence condition is also stronger than needed: it is sufficient that $\mathrm{Var}(\sum X_i)=\sum\mathrm{Var}(X_i)$.
2 The "scare quotes" around "simple", "tedious", and "obvious" aim to convey the concession that these terms are all, of course, in the eye of the beholder. So "simplicity" is shorthand for subjective simplicity or perceived simplicity, etc. Also in the eye of the beholder is how much (perceived) tedium seems too much relative to the (perceived) simplicity. If the difference $\mathbb{E} \left( \sum (X_i - \mu)^2 \right) - \mathbb{E} \left( \sum (X_i - \overline{X})^2 \right)$ had been, say, $\sigma^2/\sqrt{\pi}$, I would not have perceived the standard algebraic derivation as particularly tedious, because $\sigma^2/\sqrt{\pi}$ does not seem to me particularly simple.