An $n\times n$ matrix represents a linear operator on an $n$-dimensional vector space $V$; more precisely, it is the matrix of the operator with respect to some basis for $V$. A linear operator is diagonalizable if and only if there exists a basis for $V$ consisting of eigenvectors of the operator.
Consider the following matrix representation of the operator $T$:
$$\mathcal{M}(T)=\begin{pmatrix}
\lambda_1 & 1 & 0 & 0 \\
0 & \lambda_1 & 1& 0 \\
0 & 0 & \lambda_1 & 0\\
0 & 0 & 0 & \lambda_2
\end{pmatrix} .$$
This is not a diagonalizable matrix/operator, because there is no basis of $V$ consisting of eigenvectors of $T$. There does exist a basis of generalized eigenvectors for $T$, which gives an upper-triangular representation. In fact, the matrix above is the Jordan form of the operator, but if you don't know about that yet, don't worry.
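To see this concretely (assuming $\lambda_1\neq\lambda_2$, as the two symbols suggest), solve for the eigenvectors directly. For the eigenvalue $\lambda_1$,
$$\mathcal{M}(T)-\lambda_1 I=\begin{pmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & \lambda_2-\lambda_1
\end{pmatrix},$$
whose null space forces $v_2=v_3=v_4=0$, so every eigenvector for $\lambda_1$ is a multiple of $e_1$. A similar computation for $\lambda_2$ shows that its eigenvectors are multiples of $e_4$. That gives at most two linearly independent eigenvectors in a $4$-dimensional space, so no basis of eigenvectors exists.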
So while diagonalization can fail, every operator on a finite-dimensional complex vector space is guaranteed an upper-triangular matrix representation with respect to some basis. This is useful because, for an upper-triangular matrix, the determinant is just the product of the diagonal entries, and the eigenvalues appear on the diagonal with multiplicity.
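For instance, reading these off the upper-triangular matrix above:
$$\det \mathcal{M}(T)=\lambda_1^3\,\lambda_2,$$
and the eigenvalues are $\lambda_1$ (appearing three times on the diagonal) and $\lambda_2$ (appearing once).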
The reason we care so much about a basis consisting of eigenvectors is that under $T$, an eigenvector $v_j$ satisfies:
$$ Tv_j=\lambda_jv_j$$
where $\lambda_j$ is the corresponding eigenvalue. Because the columns of the matrix record how the basis vectors are transformed, if we have a basis of eigenvectors $v_1,v_2,\dots,v_n$, where $n=\dim V$, then $Tv_1=\lambda_1v_1,\ Tv_2=\lambda_2v_2,\ \dots,\ Tv_n=\lambda_nv_n$, which yields a matrix of the form:
$$ \begin{pmatrix}
\lambda_1 & 0 & 0 & 0 \\
0 & \lambda_2 & 0& 0 \\
0 & 0 & \ddots & 0\\
0 & 0 & 0 & \lambda_n
\end{pmatrix}
.$$
This is a diagonal representation of the operator.
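As a small concrete illustration (not tied to the matrix above): let $T$ act on $\mathbb{R}^2$ with standard-basis matrix $\begin{pmatrix}2&1\\1&2\end{pmatrix}$. Then $v_1=(1,1)$ and $v_2=(1,-1)$ are eigenvectors, with $Tv_1=3v_1$ and $Tv_2=v_2$, so with respect to the basis $v_1,v_2$ the matrix of $T$ is $\begin{pmatrix}3&0\\0&1\end{pmatrix}$.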