Elementary row operations don’t preserve eigenvalues, as the simple example John Hughes gave in his comment shows. If you’re going to fiddle with a matrix this way to try to simplify it, you have to do it to $\lambda I-A$, not to $A$ itself, since it’s the determinant of that matrix that you’re computing when you work out the characteristic polynomial. (Yixing’s comment gives you a way to manipulate $A$ without affecting its eigenvalues, but I’ve not found that method very practical when calculating by hand: it’s easy to make errors when applying the inverse of an elementary operation to the columns of the matrix.) Because of all the zeros in $A$, you can get a partially-factored characteristic polynomial for it pretty easily, but it’s also possible to find the eigenvalues and corresponding eigenvectors by inspection.
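To see why operating on $A$ itself goes wrong, here’s a minimal example of my own (not necessarily the one from the comment): scaling the first row of the $2\times 2$ identity matrix by $2$ gives
$$\begin{pmatrix}1&0\\0&1\end{pmatrix}\ \longrightarrow\ \begin{pmatrix}2&0\\0&1\end{pmatrix},$$
which changes the eigenvalues from $\{1,1\}$ to $\{2,1\}$.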
Observe first that the second column is a multiple of $(0,1,0)^T$. Recalling that the columns of a transformation matrix are the images of the basis vectors, we have our first pair: $(0,1,0)^T$ with eigenvalue $2$. This parallels what happens when expanding $\det(\lambda I-A)$ along the second column: you find that the characteristic polynomial must be of the form $(\lambda-2)p(\lambda)$, so you get one of the eigenvalues right away.
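For concreteness in what follows: the question’s matrix isn’t reproduced in this answer, but the observations here pin it down (this reconstruction is my assumption) as
$$A=\begin{pmatrix}2&0&1\\0&2&0\\1&0&2\end{pmatrix},\qquad A\begin{pmatrix}0\\1\\0\end{pmatrix}=2\begin{pmatrix}0\\1\\0\end{pmatrix},$$
and expanding $\det(\lambda I-A)$ along the second column gives $(\lambda-2)\bigl((\lambda-2)^2-1\bigr)$, i.e., $p(\lambda)=(\lambda-2)^2-1$.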
Turning now to the other two columns, observe that their entries in the first and last rows sum to the same value, $2+1=3$, while their entries in the middle row sum to $0$; that is, the sum of these two columns is $3(1,0,1)^T$. Adding the first and last columns is equivalent to multiplying $A$ by $(1,0,1)^T$, so we have our second pair: $(1,0,1)^T$ with eigenvalue $2+1=3$.
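Written out with the matrix above (again, assuming that reconstruction of $A$):
$$A\begin{pmatrix}1\\0\\1\end{pmatrix}=\begin{pmatrix}2+1\\0+0\\1+2\end{pmatrix}=3\begin{pmatrix}1\\0\\1\end{pmatrix}.$$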
For the last eigenvalue and eigenvector, you might notice that the difference of the first and last columns is $(1,0,-1)^T$, which is exactly the vector you’d multiply $A$ by to get this difference of columns; since $A(1,0,-1)^T=(1,0,-1)^T$, there’s the last pair, with eigenvalue $1$. However, you always get the last eigenvalue “for free” because the sum of the eigenvalues is equal to the trace: $2+3+\lambda = 6$, therefore $\lambda = 1$. So even if you don’t spot this particular linear combination of columns, you have the third eigenvalue. To find a corresponding eigenvector, you can take advantage of the symmetry of the matrix: eigenspaces of a real symmetric matrix that belong to distinct eigenvalues are orthogonal, which means that for this matrix the eigenspace of $1$ is the orthogonal complement of the sum of the other two eigenspaces. We’re working in $\mathbb R^3$, so we can compute this via a cross product: $(0,1,0)^T\times(1,0,1)^T = (1,0,-1)^T$, as found previously.
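The cross product, written out with the usual component formula:
$$\begin{pmatrix}0\\1\\0\end{pmatrix}\times\begin{pmatrix}1\\0\\1\end{pmatrix}=\begin{pmatrix}1\cdot1-0\cdot0\\0\cdot1-0\cdot1\\0\cdot0-1\cdot1\end{pmatrix}=\begin{pmatrix}1\\0\\-1\end{pmatrix};$$
and as a sanity check, the trace of the assumed $A$ above is indeed $2+2+2=6=2+3+1$.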
One other shortcut that can be used for this matrix takes advantage of a different aspect of $A$. If you delete the second row and column of $A$, you’re left with a matrix of the form $aI+b\mathbf 1$ (here $\mathbf 1$ is a square matrix of all $1$s), whose eigenvalues and eigenvectors are particularly easy to find by inspection. I won’t go into the details here because they’ve been amply covered in other questions, such as this one.
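Briefly, and still assuming the reconstruction of $A$ above: deleting the second row and column leaves $\begin{pmatrix}2&1\\1&2\end{pmatrix}=I+\mathbf 1$. For an $n\times n$ matrix $aI+b\mathbf 1$, the all-ones vector is an eigenvector with eigenvalue $a+nb$ (since $\mathbf 1$ maps it to $n$ times itself), and any vector whose entries sum to $0$ is killed by $\mathbf 1$, giving eigenvalue $a$ with multiplicity $n-1$. Here $n=2$ and $a=b=1$:
$$\begin{pmatrix}2&1\\1&2\end{pmatrix}\begin{pmatrix}1\\1\end{pmatrix}=3\begin{pmatrix}1\\1\end{pmatrix},\qquad\begin{pmatrix}2&1\\1&2\end{pmatrix}\begin{pmatrix}1\\-1\end{pmatrix}=\begin{pmatrix}1\\-1\end{pmatrix},$$
which matches the pairs found above once you reinsert a $0$ in the second coordinate.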