Here is a more mechanical, more general (but less slick) approach:
Think about the linear map $\alpha$ that the matrix $A$ represents. First put the matrix of $\alpha$ into Jordan Normal Form. It is then well known (and relatively easy to see by drawing out the matrix) that if we write the minimal polynomial as $p_\alpha(x)=\prod_{i=1}^k(x-\lambda_i)^{n_i}$, then $n_i$ is the size of the largest $\lambda_i$ block. We can also see that $\dim\big(\ker(\alpha-\lambda_i)\big)$ is the number of $\lambda_i$ blocks.
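As a sanity check (not part of the argument above), the two Jordan-form facts can be verified numerically with SymPy on a hand-built matrix that is already in Jordan form, with eigenvalue $2$ in blocks of sizes $2$ and $1$, and eigenvalue $5$ in one block of size $3$:

```python
import sympy as sp

# Jordan form: eigenvalue 2 in blocks of size 2 and 1, eigenvalue 5 in one block of size 3.
J = sp.Matrix([
    [2, 1, 0, 0, 0, 0],
    [0, 2, 0, 0, 0, 0],
    [0, 0, 2, 0, 0, 0],
    [0, 0, 0, 5, 1, 0],
    [0, 0, 0, 0, 5, 1],
    [0, 0, 0, 0, 0, 5],
])
I6 = sp.eye(6)

# Number of lambda-blocks = dim ker(J - lambda I) = 6 - rank(J - lambda I).
assert 6 - (J - 2*I6).rank() == 2   # two blocks for eigenvalue 2
assert 6 - (J - 5*I6).rank() == 1   # one block for eigenvalue 5

# Exponent in the minimal polynomial = size of the largest block:
# (x-2)^2 (x-5)^3 annihilates J, but dropping a power of (x-2) does not.
assert (J - 2*I6)**2 * (J - 5*I6)**3 == sp.zeros(6, 6)
assert (J - 2*I6) * (J - 5*I6)**3 != sp.zeros(6, 6)
```

The same dimension count (nullity of $J-\lambda I$ counts blocks, minimal-polynomial exponent matches the largest block) is exactly what the argument below applies to $A$.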
In this example, $0$ is an eigenvalue, since the columns of $A$ are not linearly independent and hence the determinant is zero. If $e_i$ is the standard basis of $\mathbb{R}^n$, then $\dim\big(\ker(\alpha-0)\big) = \dim\big(\langle e_1-e_2,\dots,e_1-e_n\rangle\big) = n-1$ and $\dim\big(\ker\big(\alpha-\tfrac{n(n-1)}{2}\big)\big)=1$. So there are $n-1$ Jordan blocks for $0$ and one block for $\frac{n(n-1)}{2}$; since these $n$ blocks together must fill an $n\times n$ matrix, every block has size $1$. Thus each root of the minimal polynomial appears only once.
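The question's matrix isn't reproduced in this answer, but purely as an illustration (an assumed example, not necessarily the original $A$), the rank-one matrix whose every column is $(0,1,\dots,n-1)^T$ has exactly the kernel and eigenvalue described above, and the dimension counts can be checked numerically:

```python
import numpy as np

n = 5
v = np.arange(n, dtype=float)   # (0, 1, ..., n-1)
A = np.outer(v, np.ones(n))     # assumed example: every column of A equals v

# dim ker(A - 0) = n - rank(A) = n - 1, since rank(A) = 1.
assert np.linalg.matrix_rank(A) == 1

# v is an eigenvector with eigenvalue 0 + 1 + ... + (n-1) = n(n-1)/2.
lam = n * (n - 1) / 2
assert np.allclose(A @ v, lam * v)

# That eigenvalue has a one-dimensional eigenspace: rank(A - lam*I) = n - 1.
assert np.linalg.matrix_rank(A - lam * np.eye(n)) == n - 1
```

The kernel vectors $e_1-e_j$ are visible directly here: since all columns of $A$ are equal, $A(e_1-e_j)=0$.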
In fact, we can in this case do something a little quicker (but in the same spirit). The sum of the eigenspaces of a linear map is always direct; here the eigenspaces have dimensions $n-1$ and $1$ as shown above, so their sum is direct of dimension $n$ and must be all of $\mathbb{R}^n$. Thus the matrix is diagonalisable, so every Jordan block has size $1$, and each root of the minimal polynomial appears only once.
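Diagonalisability is equivalent to the minimal polynomial being a product of distinct linear factors, and for the same assumed rank-one example as above (every column equal to $(0,1,\dots,n-1)^T$, which need not be the question's exact matrix) this can be confirmed directly: $x\big(x-\tfrac{n(n-1)}{2}\big)$ annihilates the matrix, while neither linear factor alone does.

```python
import numpy as np

n = 6
v = np.arange(n, dtype=float)
A = np.outer(v, np.ones(n))     # assumed rank-one example, as before
lam = n * (n - 1) / 2

# A^2 = lam * A for a rank-one matrix v @ ones^T, so x(x - lam) annihilates A...
assert np.allclose(A @ (A - lam * np.eye(n)), 0)

# ...and neither factor alone does, so x(x - lam) is the minimal polynomial:
# two distinct simple roots, matching the diagonalisability argument.
assert not np.allclose(A, 0)
assert not np.allclose(A - lam * np.eye(n), 0)
```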