Given a second-order ODE
$$
a(x)y''(x)+b(x)y'(x)+c(x)y(x)=0 \tag{1} \label{eq1}
$$
we can introduce a new variable $z(x)=y'(x)$ and reduce the equation to a first-order system (the same idea works for higher-order equations):
$$
\begin{align}
y'(x)-z(x)&=0 \\
a(x)z'(x)+b(x)z(x)+c(x)y(x)&=0 \\
\end{align}\tag{2} \label{eq2}
$$
We can write the system in a matrix-vector form
$$
\mathbf{A}(x)\cdot\vec{Y}'(x)+\mathbf{B}(x)\cdot\vec{Y}=0 \tag{3} \label{eq3}
$$
where
$$
\mathbf{A}(x)=\left(\begin{matrix}
1 & 0 \\
0 & a(x) \\
\end{matrix}\right) \tag{4} \label{eq4}
$$
$$
\mathbf{B}(x)=\left(\begin{matrix}
0 & -1 \\
c(x) & b(x) \\
\end{matrix}\right)\tag{5} \label{eq5}
$$
$$
\vec{Y}(x)=\left(\begin{matrix}
y(x) \\
z(x) \\
\end{matrix}\right) \tag{6} \label{eq6}
$$
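As a quick sanity check (a sketch using SymPy, not part of the derivation itself), we can verify symbolically that the matrix form $\eqref{eq3}$ with the matrices $\eqref{eq4}$–$\eqref{eq6}$ reproduces the two scalar equations of system $\eqref{eq2}$:

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = (sp.Function(s) for s in ('a', 'b', 'c'))
y, z = sp.Function('y'), sp.Function('z')

# Matrices A(x), B(x) and the vector Y(x) from Eqs. (4)-(6)
A = sp.Matrix([[1, 0], [0, a(x)]])
B = sp.Matrix([[0, -1], [c(x), b(x)]])
Y = sp.Matrix([y(x), z(x)])

# Left-hand side of the matrix form (3): A*Y' + B*Y
lhs = A * Y.diff(x) + B * Y

# It should match the two scalar equations of system (2)
system = sp.Matrix([y(x).diff(x) - z(x),
                    a(x) * z(x).diff(x) + b(x) * z(x) + c(x) * y(x)])
assert sp.simplify(lhs - system) == sp.zeros(2, 1)
```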
If $a(x)\neq 0$ in the integration domain, we can find $\mathbf{A}^{-1}$, given by
$$
\mathbf{A}^{-1}(x)=\left(\begin{matrix}
1 & 0 \\
0 & \frac{1}{a(x)} \\
\end{matrix}\right) \tag{7}\label{eq7}
$$
Using $\eqref{eq7}$ we can rewrite $\eqref{eq3}$ in the following way:
$$
\vec{Y}'(x)=-\mathbf{A}^{-1}(x)\cdot\mathbf{B}(x)\cdot\vec{Y}=-\mathbf{C}(x)\cdot\vec{Y} \tag{8} \label{eq8}
$$
where
$$
\mathbf{C}(x)=\left(\begin{matrix}
0 & -1 \\
\frac{c(x)}{a(x)} & \frac{b(x)}{a(x)} \\
\end{matrix}\right). \tag{9}\label{eq9}
$$
Notice that $\mathbf{C}$ contains the ratios of the coefficients. If $\mathbf{C}(x_1)\cdot\mathbf{C}(x_2)=\mathbf{C}(x_2)\cdot\mathbf{C}(x_1)$ for all $x_1,x_2$ in the domain of $\mathbf{C}$, the solution to $\eqref{eq8}$ has the same form as in the first-order scalar case:
$$
\vec{Y}=\mathrm{e}^{-\int \mathbf{C}(x)\,\mathrm{d}x}\cdot\vec{K} \tag{10} \label{eq10}
$$
where we use the matrix exponential and $\vec{K}$ is a constant vector. For a matrix of the form $\eqref{eq9}$ the commutativity condition forces the ratios to be constant, $\frac{c(x)}{a(x)}=k_1$ and $\frac{b(x)}{a(x)}=k_2$ with $k_1,k_2$ constants. In the general (non-commuting) case the solution is given by the Magnus expansion (see ref. 1 and ref. 2).
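In the constant-coefficient case the matrix exponential can be evaluated directly with `scipy.linalg.expm`. A quick check (a sketch; $y''+y=0$ is my choice of example, with the antiderivative of $\mathbf{C}$ taken to vanish at $0$):

```python
import numpy as np
from scipy.linalg import expm

# y'' + y = 0  →  a=1, b=0, c=1, so C = [[0, -1], [1, 0]] is constant
C = np.array([[0.0, -1.0], [1.0, 0.0]])

# With ∫C dx = C*x, the matrix-exponential solution reads
# Y(x) = expm(-C*x) · Y(0).  Take y(0)=1, y'(0)=0, i.e. y(x)=cos(x).
Y0 = np.array([1.0, 0.0])

x = 1.3
Y = expm(-C * x) @ Y0
print(Y)                       # ≈ [cos(1.3), -sin(1.3)]
assert np.allclose(Y, [np.cos(x), -np.sin(x)])
```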
Perhaps it is easier to analyze the effect of the coefficients in a first-order equation:
$$
y'=-\alpha(x)y
$$
the solution in this case is
$$
y=\kappa\mathrm{e}^{-\int \alpha(x)\mathrm{d}x}
$$
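This closed form is easy to verify numerically (a sketch with SciPy; the coefficient $\alpha(x)=2x$, giving $y=\kappa\,\mathrm{e}^{-x^2}$, is just an illustrative choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = -alpha(x)*y with alpha(x) = 2x; the closed form is
# y = kappa * exp(-∫ 2x dx) = kappa * exp(-x^2).
alpha = lambda x: 2.0 * x
sol = solve_ivp(lambda x, y: -alpha(x) * y, (0.0, 1.0), [1.0],
                rtol=1e-10, atol=1e-12)

y_numeric = sol.y[0, -1]
y_exact = np.exp(-1.0)         # kappa = y(0) = 1
assert abs(y_numeric - y_exact) < 1e-6
```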
Therefore, the solution is the exponential function composed with $-\int \alpha(x)\mathrm{d}x$. If a function is $C^{n}$ at $x_0$, its integral is $C^{n+1}$ there. For example, consider the following step-like function:
$$
\alpha(x)=\left\{\begin{matrix}
0 & x<0 \\
1 & x\geq 0 \\
\end{matrix}\right.
$$
It is discontinuous at $x = 0$; its antiderivative is a $C^0$ function (i.e. a continuous function):
$$
\int\alpha(x)\,\mathrm{d}x=\left\{\begin{matrix}
k & x<0 \\
k+x & x\geq 0 \\
\end{matrix}\right.
$$
where $k$ is a constant. Regarding the questions:
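Before turning to them, the loss of smoothness in the step example can be checked numerically (a sketch; with $\kappa=y(0)$ the solution of $y'=-\alpha(x)y$ is $y=\kappa$ for $x<0$ and $y=\kappa\,\mathrm{e}^{-x}$ for $x\geq 0$):

```python
import numpy as np

# Solution of y' = -alpha(x)*y with the step coefficient above:
# the exponent is only C^0, so y is continuous but not C^1 at x = 0.
def y(x, kappa=1.0):
    return np.where(x < 0, kappa, kappa * np.exp(-x))

h = 1e-7
left_slope  = (y(0.0) - y(-h)) / h   # derivative from the left  → 0
right_slope = (y(h) - y(0.0)) / h    # derivative from the right → -1
print(left_slope, right_slope)       # the kink: the slopes disagree
assert abs(left_slope) < 1e-6 and abs(right_slope + 1.0) < 1e-3
```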
Why, if the coefficient functions are analytic, is there a guarantee that the solution is analytic too, and vice versa?
Given the integration of the matrix $\mathbf{C}$, you need the ratios of the coefficients to be analytic, i.e. $C^{\omega}$ (locally equal to a convergent power series, a strictly stronger condition than $C^{\infty}$), in order to get an analytic solution: both integration and the exponential preserve analyticity.
They are only the coefficients of the unknown function $y(x)$ in that equation, not the function $y(x)$ itself, so how can they affect the function in such a heavy way? Given the solution $\eqref{eq10}$, we see that the coefficients fully determine the solution. Therefore, they strongly affect the resulting solution (they are part of the solution).
If for example $a(x_0)=0$, how can this determine also the form of $y(x)$ at $x_0$? In that case the ratios $\frac{b(x)}{a(x)}$ and $\frac{c(x)}{a(x)}$ blow up and the equation has a singular point at $x_0$; in general the solution fails to be analytic there (in the best case it may only be $C^n$).
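A standard illustration of such a singular point (my choice of example, checked with SymPy) is the Cauchy–Euler equation $x^2y''-2y=0$, where $a(x)=x^2$ vanishes at $x_0=0$:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Cauchy-Euler equation x^2*y'' - 2*y = 0: here a(x) = x^2 vanishes at x0 = 0
ode = sp.Eq(x**2 * y(x).diff(x, 2) - 2 * y(x), 0)
sol = sp.dsolve(ode, y(x))
print(sol)   # general solution combines an x**2 branch and a 1/x branch

# The 1/x branch is singular at x = 0: not analytic (not even bounded) there
assert sp.checkodesol(ode, sol) == (True, 0)
```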