Assuming you are working with real vector spaces, the proof of the spectral theorem for symmetric linear maps (or, equivalently, symmetric matrices) follows from these four facts:
Let $f:E \to E \ $ be a symmetric linear map. Then:
1. If $W\subseteq E$ is an invariant subspace of $f$ (i.e. $f(W)\subseteq W$), then so is its orthogonal complement $W^\perp$ (i.e. $f(W^\perp)\subseteq W^\perp$).
2. If $W\subseteq E$ is an invariant subspace of $f$, then the restriction $f|_W:W\to W$ is also symmetric.
3. Every linear map $\phi: E \to E$ (not necessarily symmetric) has an invariant subspace of dimension 1 or 2.
4. If $\dim E=2$, then $f$ diagonalizes in an orthogonal basis.
Facts (1) and (2) are easily proven using the identity
$$f(v)\cdot w=v\cdot f(w) \ \ \forall v,w\in E$$
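For instance, to see fact (1): take $w\in W^\perp$ and any $v\in W$. Since $W$ is invariant, $f(v)\in W$, hence
$$f(w)\cdot v=w\cdot f(v)=0,$$
so $f(w)\in W^\perp$. Fact (2) is even more immediate: the identity above holds in particular for all $v,w\in W$, which is exactly the statement that $f|_W$ is symmetric.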
Fact (4) follows from taking the matrix of $f$ in an orthonormal basis (which is then symmetric) and computing its characteristic polynomial.
Fact (3) requires slightly more knowledge of linear algebra: you may use the minimal polynomial of $\phi$.
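One way the argument can go (a sketch): over $\mathbb{R}$, the minimal polynomial of $\phi$ factors as a product of monic polynomials of degree 1 or 2, say $m=p_1\cdots p_k$. Since $p_2\cdots p_k$ has degree smaller than $m$, the map $(p_2\cdots p_k)(\phi)$ is not zero, so we may pick $v$ with $w:=(p_2\cdots p_k)(\phi)(v)\neq 0$; then
$$p_1(\phi)(w)=m(\phi)(v)=0.$$
If $p_1(x)=x-\lambda$, then $w$ is an eigenvector and $\operatorname{span}\{w\}$ is a 1-dimensional invariant subspace. If $p_1(x)=x^2+\beta x+\gamma$, then $\phi^2(w)=-\beta\,\phi(w)-\gamma\, w$, so $\operatorname{span}\{w,\phi(w)\}$ is an invariant subspace of dimension 1 or 2.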
Now, one can prove the spectral theorem as follows: by fact (3), there is an invariant subspace $W_1$ of $f$ of dimension 1 (i.e. just the span of an eigenvector) or of dimension 2. By fact (1), $E':=W_1^\perp$ is invariant, and by (2), $f|_{E'}: E'\to E'$ is symmetric. We may repeat this argument to obtain invariant subspaces $W_1,\dots, W_m$ of $f$, orthogonal to one another and whose sum is $E$ (assuming of course $\dim E<\infty$). Now, since each $W_i$ has dimension 1 or 2, we can use fact (4) to finally obtain an orthogonal basis of $E$ in which the matrix of $f$ is diagonal.
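Concretely, in an orthonormal basis adapted to the decomposition $E=W_1\oplus\dots\oplus W_m$, the matrix of $f$ is block diagonal,
$$\begin{pmatrix}
A_1 & & \\
 & \ddots & \\
 & & A_m
\end{pmatrix},$$
where each block $A_i$ is of size $1\times 1$ or $2\times 2$ and is symmetric (by fact (2)); fact (4) then lets us diagonalize each $2\times 2$ block inside its own $W_i$, without disturbing the others.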
Now, regarding your questions.
Why are symmetric matrices with repeated eigenvalues diagonalizable? By the argument above, of course. However, I think I understand your concerns, so let's try to make fact (4) clear, which is the one most relevant to your question. Fix an orthonormal basis of $E$ and consider the matrix of $f$ in this basis:
$$ A= \begin{pmatrix}
a & b \\
b & c
\end{pmatrix} $$
You can compute its characteristic polynomial, which has roots
$$\frac{a+c\pm \sqrt{(a-c)^2+4b^2}}{2} $$
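Spelling this out: $\det(A-\lambda\,\mathrm{Id})=\lambda^2-(a+c)\lambda+(ac-b^2)$, and its discriminant is
$$(a+c)^2-4(ac-b^2)=(a-c)^2+4b^2\geq 0,$$
so the eigenvalues are always real, and they coincide exactly when $a=c$ and $b=0$.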
If $a=c$ and $b=0$, then $A=a\cdot Id$ and every vector in $E$ is an eigenvector, so yes, you may use the Gram–Schmidt process to produce an orthogonal basis. If $a\neq c$ or $b\neq 0$, then the matrix has two distinct real eigenvalues, and the corresponding eigenvectors are automatically orthogonal (apply $f(v)\cdot w=v\cdot f(w)$ to two such eigenvectors), so there is no problem here.
Is it because we can use the Gram–Schmidt process using the eigenvectors from the repeated eigenvalues to produce an orthogonal vector? Not quite. As noted above, you also need the other facts. For general matrices, it may happen that the eigenvectors do not form a basis, because there may not be enough of them. So this fact, though true (you can always find an orthogonal basis of a subspace using Gram–Schmidt), does not suffice.
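A standard example (not symmetric, of course) is
$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$
whose only eigenvalue is $1$ (repeated), but whose eigenvectors all lie on the line spanned by $(1,0)$: no amount of Gram–Schmidt will produce a second, independent eigenvector. It is the symmetry of $f$ that rules this behaviour out.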
And why is this new vector still an eigenvector, is it because it is in the span of the vectors used in the Gram–Schmidt process? Indeed, if $v_1,\dots, v_m\in E$ are eigenvectors with the same eigenvalue $\mu$, then any nonzero linear combination of them $w=a_1v_1+\dots+a_mv_m$ is again an eigenvector:
$$f(w)=a_1f(v_1)+...+a_mf(v_m)=a_1\mu v_1+...+a_m\mu v_m=\mu w$$