This proof is based closely on Exercise 5.6 in Chapter II of Theodore S. Chihara, An Introduction to Orthogonal Polynomials (Gordon and Breach 1978, reprinted by Dover 2011).

Given $n \geqslant 1$, and $i$ with $1 \leqslant i \leqslant n$, we prove below that there exists a polynomial $f$ such that:
- $f$ has degree at most $2n - 2$;
- $f(x) \geqslant 0$ for all $x$;
- $f(x) \geqslant 1$ for $x \leqslant x_i$;
- $f(x_1) = f(x_2) = \cdots = f(x_i) = 1$;
- $f(x_{i+1}) = f(x_{i+2}) = \cdots = f(x_n) = 0$;
- $f(x) \ne 0$ if $x \notin \{x_{i+1}, x_{i+2}, \ldots, x_n\}$.
The $n$-point Gaussian quadrature formula is exact whenever the
integrand is a polynomial of degree $2n - 1$ or less, therefore:
$$
1 + x_i = \int_{-1}^{x_i}dx < \int_{-1}^1f(x)\,dx
= \sum_{j=1}^n w_jf(x_j) = w_1 + w_2 + \cdots + w_i.
$$
(The first inequality is strict because $f(x) \geqslant 1$ for
$-1 \leqslant x \leqslant x_i$, $f(x) \geqslant 0$ elsewhere, and the
continuous function $f$ vanishes at only finitely many points.)
Applying this inequality with $n - i + 1$ in place of $i$, and using
the symmetry of the abscissae and weights (that is,
$x_{n-i+1} = -x_i$ and $w_{n-i+1} = w_i$) together with the fact that
the weights sum to $2$, we get:
\begin{align*}
1 + x_i & = 2 - (1 + x_{n-i+1}) \\
& = (w_1 + w_2 + \cdots + w_n) - (1 + x_{n-i+1}) \\
& > (w_1 + w_2 + \cdots + w_n) - (w_1 + w_2 + \cdots + w_{n-i+1}) \\
& = (w_1 + w_2 + \cdots + w_n) - (w_n + w_{n-1} + \cdots + w_i) \\
& = w_1 + w_2 + \cdots + w_{i-1}.
\end{align*}
Finally, then:
$$
w_1 + w_2 + \cdots + w_{i-1} - 1 < x_i <
w_1 + w_2 + \cdots + w_i - 1.
$$
$\square$
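The bracketing just proved is easy to check numerically (a sketch
assuming NumPy, whose `leggauss` routine returns the nodes, in
ascending order, and the weights of the $n$-point Gauss–Legendre rule
on $[-1, 1]$; the choice $n = 7$ is arbitrary):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

n = 7
x, w = leggauss(n)  # nodes x_1 < ... < x_n and weights w_1, ..., w_n
partial = np.concatenate(([0.0], np.cumsum(w)))  # partial[i] = w_1 + ... + w_i
for i in range(1, n + 1):
    # w_1 + ... + w_{i-1} - 1 < x_i < w_1 + ... + w_i - 1
    assert partial[i - 1] - 1 < x[i - 1] < partial[i] - 1
```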
It remains to prove that $f$ exists.
Definition
A continuous real-valued function $g$ on an open subset of
$\mathbb{R}$ has a simple zero at an interior point $a$ if
$g(a) = 0$ and $g'(a)$ is defined and non-zero.
Since $g(a) = 0$, the definition of $g'(a)$ yields $\delta > 0$ such
that
$$
\lvert{g(a + h) - g'(a)h}\rvert < \lvert{g'(a)h}\rvert/2
\quad (0 < \lvert{h}\rvert < \delta).
$$
It follows that if $g$ has a simple zero at $a$, then for
$0 < \lvert{h}\rvert < \delta$, the value $g(a + h)$ has the same
sign as $g'(a)h$: the same sign as $h$ when $g'(a) > 0$, and the
opposite sign when $g'(a) < 0$.
This definition of "simple zero" (it doesn't seem to be
standard, which is why I gave it) coincides with the standard
definition in the case where $g$ is a function defined by a
polynomial expression.
Lemma 1
If $a, b$ are real numbers (or $\pm\infty$), $a < b$, and the
continuous function $g: (a, b) \to \mathbb{R}$ has only finitely
many zeros $\zeta_k$, where
$a < \zeta_1 < \zeta_2 < \cdots < \zeta_r < b$, all of them simple,
then $g(x)$ has a constant sign for all $x$ in each of the intervals
$$
(a, \zeta_1), (\zeta_1, \zeta_2), \ldots, (\zeta_{r-1}, \zeta_r),
(\zeta_r, b),
$$
and this sign reverses between each pair of successive intervals.
Proof The Intermediate Value Theorem implies that $g$ cannot take
values of opposite sign at two points of the same interval
(otherwise it would have a further zero between them); and the
remark immediately following the definition above implies that the
sign reverses at each $\zeta_k$. $\square$
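Lemma 1 can be illustrated with a quick numerical check (a sketch
assuming NumPy; the cubic and the sample points are arbitrary
choices):

```python
import numpy as np

# g(x) = (x - 1)(x - 2)(x - 3) has simple zeros at 1, 2, 3.
g = np.polynomial.Polynomial.fromroots([1.0, 2.0, 3.0])
samples = [0.0, 1.5, 2.5, 4.0]          # one point in each interval
signs = [np.sign(g(t)) for t in samples]
assert signs == [-1.0, 1.0, -1.0, 1.0]  # the sign alternates
```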
Lemma 2
Let $p, q$ be non-negative integers, $M, m, a_1, a_2, \ldots, a_p,
b, c_1, c_2, \ldots, c_q$ real numbers, with $M > m$, and:
$$
a_1 < a_2 < \cdots < a_p < b < c_1 < c_2 < \cdots < c_q.
$$
Then there exists a unique polynomial $f \in \mathbb{R}[X]$ of
degree $\leqslant 2p + 2q$ such that:
\begin{gather*}
f(a_1) = f(a_2) = \cdots = f(a_p) = f(b) = M; \\
f(c_1) = f(c_2) = \cdots = f(c_q) = m; \\
f'(a_1) = f'(a_2) = \cdots = f'(a_p) =
f'(c_1) = f'(c_2) = \cdots = f'(c_q) = 0.
\end{gather*}
Moreover, this polynomial also satisfies these two conditions:
\begin{gather*}
f(x) \geqslant M \text{ for all } x \leqslant b; \\
f(x) \geqslant m \text{ for all } x \in \mathbb{R}.
\end{gather*}
Proof
Let $U$ be the vector space of all polynomials in $\mathbb{R}[X]$ of
degree $\leqslant 2p + 2q$, and let $V = \mathbb{R}^{2p + 2q + 1}$.
Define a linear map $L: U \to V$, where:
\begin{multline*}
L(f) =
(f(a_1), f(a_2), \ldots, f(a_p), f(b),
f(c_1), f(c_2), \ldots, f(c_q), \\
f'(a_1), f'(a_2), \ldots, f'(a_p),
f'(c_1), f'(c_2), \ldots, f'(c_q)).
\end{multline*}
If $L(f) = 0$, then $f$ is divisible by the polynomial:
$$
(x - a_1)^2(x - a_2)^2\cdots(x - a_p)^2(x - b)
(x - c_1)^2(x - c_2)^2\cdots(x - c_q)^2,
$$
of degree $2p + 2q + 1$, therefore $f = 0$. Therefore $L$ is
injective. Since the dimensions of $U$ and $V$ are equal (both
$2p + 2q + 1$), $L$ is also surjective. The first conclusion
(the existence and uniqueness of $f$) follows.
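The dimension-count argument can be mirrored numerically: set up the
matrix of $L$ in the monomial basis and solve $L(f) = v$ (a sketch
assuming NumPy; the values $p = 1$, $q = 2$, $M = 1$, $m = 0$ and
the points used below are hypothetical sample data):

```python
import numpy as np

# Solve L(f) = v for the 2p + 2q + 1 = 7 monomial coefficients of f,
# for sample data with p = 1, q = 2.
a = [-1.0]                # a_1
b, c = 0.0, [1.0, 2.0]    # b and c_1 < c_2 (all points hypothetical)
M, m = 1.0, 0.0

deg = 2 * len(a) + 2 * len(c)          # 2p + 2q = 6
pts_val = a + [b] + c                  # points carrying value conditions
pts_der = a + c                        # points carrying derivative conditions
rows = [[t**k for k in range(deg + 1)] for t in pts_val]
rows += [[k * t**(k - 1) if k else 0.0 for k in range(deg + 1)] for t in pts_der]
rhs = [M] * (len(a) + 1) + [m] * len(c) + [0.0] * len(pts_der)

f = np.polynomial.Polynomial(np.linalg.solve(np.array(rows), np.array(rhs)))
assert np.allclose([f(t) for t in a + [b]], M)            # f = M at the a's and b
assert np.allclose([f(t) for t in c], m)                  # f = m at the c's
assert np.allclose([f.deriv()(t) for t in pts_der], 0.0)  # f' = 0 there
```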
The derivative $f'$ is a polynomial of degree at most $2p + 2q - 1$;
unless it is identically zero (in which case $f$ is constant and the
conclusions are immediate), it has at most $2p + 2q - 1$ real zeros,
counted with multiplicity.
By construction, these zeros include the $p + q$
numbers $a_1, a_2, \ldots, a_p, c_1, c_2, \ldots, c_q$. But also, by
Rolle's theorem, there are $p + q - 1$ other zeros
$\xi_1, \xi_2, \ldots, \xi_p, \eta_1, \eta_2, \ldots, \eta_{q-1}$,
where:
$$
a_1 < \xi_1 < a_2 < \cdots < \xi_{p-1} < a_p < \xi_p < b <
c_1 < \eta_1 < c_2 < \cdots < \eta_{q-1} < c_q.
$$
These $2p + 2q - 1$ distinct numbers must therefore be a complete
list of the zeros of $f'$ (in particular, $b$ is not a zero of
$f'$), and they must all be simple zeros.
Applying Lemma 1 to $f'$ on the interval
$(-\infty, \infty) = \mathbb{R}$, we find that $f'$ changes sign at
each of its zeros, the sign remaining constant on each of the
$2p + 2q$ open intervals into which the zeros divide the real line:
the $2p + 2q - 2$ intervals having successive zeros as their
endpoints, together with the infinite intervals $(-\infty, a_1)$ and
$(c_q, \infty)$.
Because $f(b) = M > m = f(c_1)$, and no zero of $f'$ lies between
$\xi_p$ and $c_1$, the sign of $f'$ must be negative in
$(\xi_p, c_1)$. The alternation of signs then makes $f'$ positive in
every $(a_k, \xi_k)$, negative in every $(\xi_{k-1}, a_k)$ (where
$\xi_0 = -\infty$), positive in every $(c_k, \eta_k)$ (where
$\eta_q = \infty$), and negative in every $(\eta_{k-1}, c_k)$ for
$2 \leqslant k \leqslant q$. Consequently $f$ is strictly monotonic
between successive zeros of $f'$: it falls to the value $M$ at each
$a_k$, rises to a local maximum above $M$ at each $\xi_k$, falls
from $M$ at $b$ down to $m$ at $c_1$, and thereafter alternately
rises to a local maximum at each $\eta_k$ and falls back to the
value $m$ at each $c_k$. Tracking these values from left to right
shows that $f(x) \geqslant M$ for all $x \leqslant b$ and
$f(x) \geqslant m$ for all $x \in \mathbb{R}$, with equality in the
latter only at $c_1, c_2, \ldots, c_q$.
$\square$
To construct the polynomial $f$ required at the start, apply Lemma 2
with $M = 1$, $m = 0$, $p = i - 1$, $q = n - i$, $a_k = x_k$ for
$1 \leqslant k \leqslant i - 1$, $b = x_i$, and $c_k = x_{i+k}$ for
$1 \leqslant k \leqslant n - i$. The resulting polynomial has degree
at most $2p + 2q = 2n - 2$, takes the value $1$ at
$x_1, x_2, \ldots, x_i$ and $0$ at $x_{i+1}, x_{i+2}, \ldots, x_n$,
and satisfies $f(x) \geqslant 1$ for $x \leqslant x_i$ and
$f(x) \geqslant 0$ for all $x$; moreover, since $f$ is strictly
monotonic between successive zeros of $f'$, it vanishes only at
$x_{i+1}, x_{i+2}, \ldots, x_n$. This completes the proof.
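Putting the pieces together, a short numerical sketch (assuming
NumPy; the choices $n = 5$, $i = 2$ are arbitrary) constructs a
polynomial as in Lemma 2 with $M = 1$, $m = 0$ at the Gauss–Legendre
nodes, then confirms the quadrature step of the main proof:

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import leggauss

n, i = 5, 2
x, w = leggauss(n)
a, b, c = list(x[:i - 1]), x[i - 1], list(x[i:])  # p = i-1, q = n-i

# Solve the linear system of Lemma 2 in the monomial basis.
deg = 2 * len(a) + 2 * len(c)                     # 2p + 2q = 2n - 2
pts_val, pts_der = a + [b] + c, a + c
rows = [[t**k for k in range(deg + 1)] for t in pts_val]
rows += [[k * t**(k - 1) if k else 0.0 for k in range(deg + 1)] for t in pts_der]
rhs = [1.0] * i + [0.0] * (n - i) + [0.0] * len(pts_der)
f = Polynomial(np.linalg.solve(np.array(rows), np.array(rhs)))

F = f.integ()
integral = F(1.0) - F(-1.0)
quad = float(np.dot(w, f(x)))                     # = w_1 + ... + w_i
assert np.isclose(integral, quad)                 # exactness: deg f <= 2n - 2
assert 1 + x[i - 1] < quad                        # hence x_i < w_1+...+w_i - 1
xs = np.linspace(-1.0, 1.0, 2001)
assert (f(xs) > -1e-9).all()                      # f >= 0 everywhere
assert (f(xs[xs <= b]) > 1 - 1e-9).all()          # f >= 1 for x <= x_i
```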