levap's excellent answer covers the case of real symmetric matrices that the OP asks about. But I think that considering only real symmetric (or more generally Hermitian) matrices actually confuses the matter somewhat, because the fact that the eigenvectors are orthogonal muddies whether we should be thinking about $Q^T A Q$ or $Q^{-1} A Q$ (which coincide when $Q$ is orthogonal).
A general (not necessarily real-symmetric or Hermitian) finite-dimensional linear operator $A$ on a vector space $V$ over a field $\mathbb{F}$ cannot necessarily be diagonalized. (But if the field is the complex numbers, then $A$ can almost always be diagonalized, i.e. with probability 1 over most standard ensembles of random matrices.) If it can be diagonalized by a change-of-basis matrix $S$, then we have $D = S^{-1} A S$, where $D$ is diagonal. That is, the matrix $A$ is similar to a diagonal matrix, where we flank $A$ by a matrix and its inverse.
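To give a standard concrete example (my own illustration, not from the question): the non-symmetric matrix
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}, \qquad S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad S^{-1} A S = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$$
is diagonalizable, since the columns of $S$ are eigenvectors with eigenvalues $1$ and $2$. By contrast, $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ has only the eigenvalue $0$ with a one-dimensional eigenspace, so no invertible $S$ makes $S^{-1} A S$ diagonal.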
If $V$ is endowed with an inner product and $A$ happens to be self-adjoint, then the spectral theorem guarantees that this is always possible, that the eigenvalues will all be real, and that the change-of-basis matrix can be chosen to be unitary (orthogonal, in the real case). In terms of matrices, this means that if the matrix $A$ is Hermitian, then there will always be a matrix $S$ such that $S^{-1} A S$ is diagonal, and moreover $S$ can always be chosen to be unitary, so that the matrix to the left of $A$ can be equivalently thought of as $S^{-1}$ or $S^\dagger$.
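For instance (again a standard computation, not from the question), for the real symmetric matrix
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad S = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix},$$
the matrix $S$ is orthogonal and
$$S^{-1} A S = S^T A S = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}.$$
This is exactly the conflation mentioned at the top: for a Hermitian/symmetric matrix the similarity $S^{-1} A S$ and the congruence $S^T A S$ happen to agree.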
By contrast, for a general bilinear form $B$ on a finite-dimensional real vector space, "diagonalizing" means finding an invertible change-of-basis matrix $S$ such that $S^T B S = D$, where $D$ is diagonal. That is, $B$ is congruent to a diagonal matrix. Since $D$ is symmetric, it's clear that $B$ must be as well, so only symmetric real bilinear forms can be diagonalized in this sense. Since $S$ is only required to be invertible and not orthogonal, this is a much easier task, and (as levap said) can be done using only row-addition and scalar-multiplication operations, without needing to solve any polynomial equations. The diagonal entries can always be rescaled to $0$ or $\pm 1$, and Sylvester's law of inertia says that the number of each sign is independent of the choice of $S$. I'm not quite sure how things work for bilinear or sesquilinear forms over complex Hilbert spaces.
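The symmetry requirement comes from a one-line transpose computation: if $S^T B S = D$ with $S$ invertible and $D$ diagonal (hence symmetric), then
$$S^T B^T S = (S^T B S)^T = D^T = D = S^T B S,$$
and multiplying by $(S^T)^{-1}$ on the left and $S^{-1}$ on the right gives $B^T = B$.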
In terms of matrices, this means that for any real symmetric matrix $A$, there's a real invertible matrix $S$ such that $S^T A S$ is diagonal, and indeed has only the entries $0$, $1$, and $-1$ on the diagonal. (And unlike in the previous case where we used inverses, we can find $S$ and $D$ by just doing arithmetic operations on the entries of $A$.) For a non-symmetric real matrix $A$, there is no invertible $S$ at all such that $S^T A S$ is diagonal, by the transpose computation above.
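Here is a small worked example of that procedure (my own illustration). Take
$$B = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}.$$
Subtracting twice the first row from the second, and then twice the first column from the second, corresponds to the congruence
$$S = \begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}, \qquad S^T B S = \begin{pmatrix} 1 & 0 \\ 0 & -3 \end{pmatrix},$$
and rescaling the second basis vector by $1/\sqrt{3}$ turns the diagonal into $\operatorname{diag}(1, -1)$. Note that no eigenvalue problem was solved: the eigenvalues of $B$ are actually $3$ and $-1$, so the diagonal entries produced by congruence are not the eigenvalues; only the signs (the signature) are invariant, per Sylvester's law of inertia.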
Neither matrix similarity nor matrix congruence should be confused with matrix equivalence, which is much more general because $A$ can be rectangular and you can flank $A$ by two totally unrelated (but still invertible) matrices. Matrices of the same size are equivalent iff they have the same rank.
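Concretely, every real $m \times n$ matrix $A$ of rank $r$ can be brought to the rank normal form
$$P A Q = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$$
for some invertible $P$ ($m \times m$) and $Q$ ($n \times n$), which is why the rank is a complete invariant for equivalence.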