Exercise.
Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V)$. Prove that $T$ has the same matrix with respect to every basis of $V$ if and only if $T$ is a scalar multiple of the identity operator.
Source.
Linear Algebra Done Right, Sheldon Axler, 4th Edition, Section 3D, exercise number 19.
Where I'm stuck.
I believe I was able to prove the backward direction (see the section below for that proof). I'm having trouble proving the forward direction. Here's what I have tried:
Assuming $T$ has the same matrix with respect to every basis of $V$, that means that the entry in row $j$, column $k$ of $\mathcal{M}(T, (v_1,\ldots,v_n))$ is equal to the entry in row $j$, column $k$ of $\mathcal{M}(T, (u_1,\ldots,u_n))$, where $v_1,\ldots,v_n$ and $u_1,\ldots,u_n$ are any bases of $V$.
This fact, along with the way the entries are defined by $T$, implies
$$ Tv_k = A_{1,k}v_1 + \cdots + A_{n,k}v_n \\ Tu_k = A_{1,k}u_1 + \cdots + A_{n,k}u_n $$
I need to somehow use this to show $T = \alpha I$ for some $\alpha \in \mathbf{F}$, or equivalently, $Tv = \alpha v$ for all $v \in V$.
What's the strategy here? I've tried various algebraic manipulations, but they get messy and lead nowhere. I suspect I'm missing a key idea.
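As a numerical sanity check (not a proof, and the matrices `A`, `P` below are my own hypothetical examples), conjugating by a change-of-basis matrix shows that a non-scalar operator really does get different matrices in different bases, while a scalar multiple of the identity does not:

```python
import numpy as np

# Hypothetical operator T on R^2 that is NOT a scalar multiple of I;
# A is its matrix with respect to the standard basis.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# Change of basis: the columns of P are the new basis vectors u_1, u_2.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Matrix of T with respect to the new basis: P^{-1} A P.
B = np.linalg.inv(P) @ A @ P

print(np.allclose(A, B))  # False: the matrix of T depends on the basis

# By contrast, for S = 2I the matrix is unchanged by any change of basis:
S = 2.0 * np.eye(2)
print(np.allclose(S, np.linalg.inv(P) @ S @ P))  # True
```

This suggests where to look: assuming $T$ is not a scalar multiple of $I$ should let you build a basis in which some matrix entry changes.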
My proof for the backward direction (I'm not expecting anyone to verify it's correct, but be my guest).
We prove that if $T$ is a scalar multiple of the identity operator, then $T$ has the same matrix with respect to every basis of $V$.
Let $\alpha \in \mathbf{F}$ and let $T = \alpha I$, where $I$ is the identity operator. Let $v_1, \ldots, v_n$ be any basis of $V$. We want to show that the entries of $\mathcal{M}(T, (v_1, \ldots, v_n))$ are fixed values, that is, they do not depend on the basis.
We have that
\begin{align} \mathcal{M}(T, (v_1,\ldots,v_n)) &= \mathcal{M}(\alpha I, (v_1,\ldots,v_n)) \\ &= \alpha\mathcal{M}(I, (v_1,\ldots,v_n)) \end{align}
Letting $A_{j,k}$ denote the entries of $\mathcal{M}(I, (v_1,\ldots,v_n))$, the entries in column $k$ of the $n$-by-$n$ matrix $\alpha\mathcal{M}(I, (v_1,\ldots,v_n))$, and hence of $\mathcal{M}(T, (v_1,\ldots,v_n))$, satisfy
$$ \alpha I(v_k) = \alpha A_{1,k}v_1 + \cdots + \alpha A_{n,k}v_n \tag{1} $$
But we know that $\alpha I(v_k) = \alpha v_k$ by definition of the identity operator. This fact, along with equation $(1)$, tells us that
$$ \alpha A_{j,k} = \begin{cases} \alpha,& j=k \\ 0, & j \neq k \end{cases} \tag{2} $$
for $1 \leq j, k \leq n$. The entries in $(2)$ imply that
$$ \mathcal{M}(T, (v_1,\ldots,v_n)) = \begin{pmatrix} \alpha & & 0 \\ & \ddots \\ 0 & & \alpha \end{pmatrix} $$
That is, $\mathcal{M}(T, (v_1,\ldots,v_n))$ will always have $\alpha$ on the diagonal and $0$ everywhere else, no matter which basis is chosen.
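As a quick numerical check of this direction (again not part of the proof; the dimension, $\alpha$, and the random bases are my own choices), the matrix of $\alpha I$ computed in several randomly chosen bases is always $\alpha$ times the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 5.0
T = alpha * np.eye(3)  # T = alpha * I on a 3-dimensional space

# Try several random bases; the columns of P are the basis vectors.
for _ in range(5):
    P = rng.normal(size=(3, 3))
    if abs(np.linalg.det(P)) < 1e-8:
        continue  # columns are (numerically) dependent, not a basis
    M = np.linalg.inv(P) @ T @ P  # matrix of T with respect to this basis
    assert np.allclose(M, alpha * np.eye(3))

print("alpha*I had the same matrix in every basis tested")
```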