Too long for a comment:
Let $k\in \mathbb{N}_{\geq 1} \cup \{ \infty, \omega \}$ and $M\in C^k([a,b], \mathbb{R}^{n\times n})$ such that $M$ is symmetric and the eigenvalues of $M(x)$ are all simple. Let $\mu_1(x)< \dots < \mu_n(x)$ be the eigenvalues of $M(x)$ and define
$$D: [a,b] \rightarrow \mathbb{R}^{n\times n}, D(x)= diag(\mu_1(x), \dots, \mu_n(x)).$$
Then $D\in C^k([a,b], \mathbb{R}^{n\times n})$.
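To have a concrete example in mind (only an illustration, not part of the statement): on $[0,1]$ the family
$$ M(x) = \begin{pmatrix} x & 1 \\ 1 & -x \end{pmatrix} $$
is symmetric with the simple eigenvalues $\mu_{1,2}(x) = \mp\sqrt{1+x^2}$, and indeed $D(x) = diag\left(-\sqrt{1+x^2}, \sqrt{1+x^2}\right)$ is even real analytic.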
This follows from the implicit function theorem. Namely, we define
$$ F: \mathbb{R}^n \times [a,b] \rightarrow \mathbb{R}^n, (\lambda, x) \mapsto (\operatorname{det}(M(x)- \lambda_j Id))_{j=1}^n. $$
As $M$ is $C^k$, so is $F$ (the determinant is a polynomial map and therefore as regular as we could hope for).
Then using
$$ \det(M(x)-\lambda_j Id) = (-1)^n \prod_{m=1}^n (\lambda_j - \mu_m(x)) $$
we get
$$ \partial_{\lambda_j} \det(M(x)-\lambda_j Id)
= (-1)^n \sum_{l=1}^n \prod_{m=1,\, m\neq l}^n (\lambda_j - \mu_m(x)). $$
Now fix $x_0 \in [a,b]$ and start from the point $(\mu_1(x_0), \dots, \mu_n(x_0), x_0)$. The Jacobian of $F(\cdot, x_0)$ at this point is
$$ D_\lambda F (\mu_1(x_0), \dots, \mu_n(x_0), x_0) = (-1)^n \, diag\left( \prod_{m=1,\, m\neq 1}^n (\mu_1(x_0)-\mu_m(x_0)), \dots, \prod_{m=1,\, m\neq n}^n (\mu_n(x_0)-\mu_m(x_0)) \right). $$
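For $n=2$, just to see the structure, this is
$$ D_\lambda F (\mu_1(x_0), \mu_2(x_0), x_0) = \begin{pmatrix} \mu_1(x_0)-\mu_2(x_0) & 0 \\ 0 & \mu_2(x_0)-\mu_1(x_0) \end{pmatrix}, $$
which is invertible precisely because $\mu_1(x_0) \neq \mu_2(x_0)$.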
As we assumed the eigenvalues to be simple, we get for all $j\in \{1, \dots, n\}$ and all $x \in [a,b]$
$$ \prod_{m=1,\, m\neq j}^n (\mu_j(x)-\mu_m(x)) \neq 0. $$
Hence, $D_\lambda F (\mu_1(x_0), \dots, \mu_n(x_0), x_0)$ is invertible and we can use the implicit function theorem, which tells us that locally $D$ is $C^k$. More precisely, the implicit function theorem gives open neighbourhoods $U$ of $(\mu_1(x_0), \dots, \mu_n(x_0))$ and $V$ of $x_0$ and a $C^k$ function $g: V \rightarrow U$ such that
$$ F(g(x), x) = 0 \quad \text{for all } x \in V. $$
Furthermore, by the uniqueness part of the implicit function theorem, together with the fact that the eigenvalues (as roots of the characteristic polynomial) depend continuously on $x$, so that $(\mu_1(x), \dots, \mu_n(x)) \in U$ for $x$ close to $x_0$, we get $g(x)=(\mu_1(x), \dots, \mu_n(x))$.
However, we can do this around every point $x_0 \in [a,b]$, and being $C^k$ is a local property, so $D\in C^k([a,b], \mathbb{R}^{n\times n})$.
We can play a similar game with the eigenvectors, using the eigenvalue equation $(M(x)-\lambda Id)\psi = 0$. You can also check it out here: http://www.janmagnus.nl/papers/JRM011.pdf
To make our notation lighter we just do it for one eigenvector at a time; wlog we do it for the first one. This time we also treat the eigenvalue as an unknown and define
$$ G: \mathbb{R}^n \times \mathbb{R} \times [a,b] \rightarrow \mathbb{R}^{n+1}, \quad G(\psi, \lambda, x) = \left((M(x) - \lambda Id)\psi, \langle \psi, \psi \rangle -1 \right).$$
As $M$ is $C^k$, so is $G$. One computes the Jacobian with respect to $(\psi, \lambda)$ to be
$$ D_{(\psi, \lambda)} G(\psi, \lambda, x) = \begin{pmatrix} M(x)- \lambda Id & -\psi \\ 2 \psi^T & 0 \end{pmatrix}.$$
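Spelled out for $n=2$, just to make the block structure explicit (using the symmetry $M_{21} = M_{12}$):
$$ D_{(\psi,\lambda)}G(\psi, \lambda, x) = \begin{pmatrix} M_{11}(x)-\lambda & M_{12}(x) & -\psi_1 \\ M_{12}(x) & M_{22}(x)-\lambda & -\psi_2 \\ 2\psi_1 & 2\psi_2 & 0 \end{pmatrix}. $$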
Now fix $x_0 \in [a,b]$ and insert a point $(\psi_0, \mu_1(x_0), x_0)$ such that $G(\psi_0, \mu_1(x_0), x_0)=0$, i.e. $\psi_0$ is a normalized eigenvector of $M(x_0)$ associated to the eigenvalue $\mu_1(x_0)$. Changing to an orthonormal basis of eigenvectors of $M(x_0)$ (which does not change the determinant), we get
$$ \det (D_{(\psi, \lambda)} G(\psi_0, \mu_1(x_0), x_0)) = \det \begin{pmatrix} diag(0, \mu_2(x_0) - \mu_1(x_0), \dots, \mu_n(x_0) - \mu_1(x_0)) & -e_1 \\ 2e_1^T & 0
\end{pmatrix} = (-1) \, (-1)^{(n+1)+1} \, 2 \, (-1)^{n+1} \det(diag(\mu_2(x_0)- \mu_1(x_0), \dots, \mu_n(x_0)-\mu_1(x_0))) = 2 \prod_{j=2}^n (\mu_j(x_0)- \mu_1(x_0)), $$
where $e_1$ is the first standard basis vector. Hence, as all the eigenvalues are simple, we get
$$ \det (D_{(\psi, \lambda)} G(\psi_0, \mu_1(x_0), x_0)) \neq 0, $$
and hence by the implicit function theorem there exist neighbourhoods $U$ of $x_0$ and $V$ of $(\psi_0, \mu_1(x_0))$ and a $C^k$-map $h = (\psi(\cdot), \lambda(\cdot)): U \rightarrow V$ such that $G(h(x), x) = 0$ for all $x \in U$. Since $\lambda(x)$ is an eigenvalue of $M(x)$ (note $\psi(x) \neq 0$), depends continuously on $x$ and equals $\mu_1(x_0)$ at $x_0$, and since the eigenvalues are simple, we get $\lambda(x) = \mu_1(x)$ near $x_0$. Hence, $\psi(x)$ is a normalized eigenvector of $M(x)$ associated to the eigenvalue $\mu_1(x)$. This only gives us a locally defined $C^k$ function. Right now I do not see how to deal with the sign: the eigenvector is normalized and, as all eigenvalues are simple, there are exactly two such normalized eigenvectors, namely $\pm\psi(x)$.
I don't know how to exclude that there are points with $M(x_1)=M(x_2)$ but $\psi(x_1)=-\psi(x_2)$.
Maybe someone smarter than me could shed some light on this.
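To make the two-fold ambiguity concrete (again only an illustration): for the constant family $M(x) \equiv \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ the eigenvalue $\mu_1 = -1$ has exactly the two normalized eigenvectors $\pm \frac{1}{\sqrt{2}}(1,-1)^T$, and nothing in the implicit function theorem itself prefers one of them over the other.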
As soon as we have that each eigenvector map $\psi_j(x)$ is $C^k$, then
$$ U(x) = \begin{pmatrix} \frac{\psi_1(x)}{\Vert \psi_1(x)\Vert} & \cdots & \frac{\psi_n(x)}{\Vert \psi_n(x)\Vert} \end{pmatrix}^T $$
will do the job (eigenvectors associated to different eigenvalues of a real symmetric matrix are orthogonal to each other, see here: Eigenvectors of real symmetric matrices are orthogonal) and will be $C^k$ as well.
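If you want to see the construction numerically, here is a small sketch using the $2\times 2$ family from the example above (only an illustration: the family $M(x)$, the grid, and the sign-alignment trick are ad-hoc choices, and `numpy.linalg.eigh` fixes each eigenvector only up to sign):

```python
import numpy as np

# Illustrative symmetric family with simple eigenvalues for every x
# (the eigenvalues are -sqrt(1+x^2) < sqrt(1+x^2)).
def M(x):
    return np.array([[x, 1.0],
                     [1.0, -x]])

xs = np.linspace(0.0, 1.0, 200)
prev = None
Us = []
for x in xs:
    w, V = np.linalg.eigh(M(x))   # eigenvalues ascending, eigenvectors as columns
    if prev is not None:
        # eigh returns each eigenvector only up to sign; align with the previous
        # grid point so the discrete family does not jump by a sign flip
        for j in range(V.shape[1]):
            if V[:, j] @ prev[:, j] < 0:
                V[:, j] *= -1.0
    prev = V
    Us.append(V.T)                # rows = normalized eigenvectors, as in U(x) above

# sanity checks at the last grid point: U(x) is orthogonal and diagonalizes M(x)
U, x = Us[-1], xs[-1]
assert np.allclose(U @ U.T, np.eye(2))
assert np.allclose(U @ M(x) @ U.T, np.diag(np.linalg.eigh(M(x))[0]))
```

Whether such a sign choice can always be upgraded to a genuinely $C^k$ choice of $U$ is exactly the question about the sign raised above.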
You might also want to check this answer: https://mathoverflow.net/questions/116123/how-to-find-define-eigenvectors-as-a-continuous-function-of-matrix. It shows that for $k=0$ the eigenvectors need not depend continuously on the parameter, and it also discusses regularity when eigenvalues may cross. I do not believe that you can make $U$ continuous for $k=0$; however, I cannot prove it.