A complex number is often simply stated to map to a specific matrix:
$$ z = a+ib\mapsto\begin{pmatrix}a&b\\-b&a\end{pmatrix} \text{ where it is the case that } 1=e_1\mapsto\begin{pmatrix}1&0\\0&1\end{pmatrix} , i=e_2\mapsto\begin{pmatrix}0&1\\-1&0\end{pmatrix} $$
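As a quick consistency check (my own addition, not from any cited source), the matrix assigned to $i$ squares to the negative of the identity matrix, mirroring $i^2=-1$:
$$ \begin{pmatrix}0&1\\-1&0\end{pmatrix}^2= \begin{pmatrix}-1&0\\0&-1\end{pmatrix}= -\begin{pmatrix}1&0\\0&1\end{pmatrix} $$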
Questions regarding this representation of complex numbers are often met with the answer that it suffices that these matrices satisfy all the usual field axioms and the relevant properties of complex numbers, such as associativity and commutativity.
This is fine, and technically true, but it is helpful to understand the motivation for creating such representations, as well as how it might be done in general.
Paul Garrett states here that these representations may be constructed in general by considering how a complex number acts on the 'basis vectors' of $\mathbb{C}$.
$$ 1z = e_1*(e_1a+e_2b)=e_1a+e_2b= \begin{pmatrix}e_1&e_2\end{pmatrix}\begin{pmatrix}a\\b\end{pmatrix}= \begin{pmatrix}a&b\end{pmatrix}\begin{pmatrix}e_1\\e_2\end{pmatrix} \\ iz=e_2*(e_1a+e_2b)=e_2a-e_1b= \begin{pmatrix}e_1&e_2\end{pmatrix}\begin{pmatrix}-b\\a\end{pmatrix}= \begin{pmatrix}-b&a\end{pmatrix}\begin{pmatrix}e_1\\e_2\end{pmatrix} $$
For the sake of clarity, I will specify that each row-column matrix product above simply represents a linear combination of the basis $e_1$, $e_2$ with components $a$, $b$. In this notation, the operation between the matrices is ordinary matrix multiplication. I included the reversed form since both orderings produce the same linear combination.
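This action on basis vectors can also be sketched numerically. The following is a minimal illustration of my own (the function name `matrix_of` is hypothetical, not from Garrett's note): it builds the matrix of "multiply by $z$", viewed as a real-linear map on $\mathbb{C}=\operatorname{span}\{e_1,e_2\}$, by letting $z$ act on each basis vector and reading off the real and imaginary components.

```python
def matrix_of(z: complex) -> list[list[float]]:
    """Each row is the image of a basis vector under w -> z*w,
    written in coordinates (real part, imaginary part)."""
    rows = []
    for e in (1, 1j):      # basis vectors e1 = 1, e2 = i
        w = e * z          # image of the basis vector under multiplication by z
        rows.append([w.real, w.imag])
    return rows

# z = a + ib yields the rows (a, b) and (-b, a), as in the display above.
print(matrix_of(3 + 4j))   # -> [[3.0, 4.0], [-4.0, 3.0]]
```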
We may consider both equations at once by considering the tensor product:
$$ \begin{pmatrix}e_1\\e_2\end{pmatrix}\begin{pmatrix}e_1&e_2\end{pmatrix}\begin{pmatrix}a\\b\end{pmatrix}= \begin{pmatrix}e_1&e_2\\e_2&-e_1\end{pmatrix}\begin{pmatrix}a\\b\end{pmatrix}= \begin{pmatrix}e_1a+e_2b\\-e_1b+e_2a\end{pmatrix}= \begin{pmatrix}a&b\\-b&a\end{pmatrix}\begin{pmatrix}e_1\\e_2\end{pmatrix} $$
Question: What is the justification for removing explicit reference to the basis - the $\begin{pmatrix}e_1&e_2\end{pmatrix}^\intercal$ matrix - and making the identification:
$$ a+ib=\begin{pmatrix}a&b\\-b&a\end{pmatrix} $$
Below I have constructed a 'proof' to demonstrate my reasoning on the matter, but I feel that I am lacking sufficient justification for the seemingly arbitrary tensor product and commutation of the complex scalar quantity with the basis matrix.
In general, scalars commute with matrices, and the tensor (outer) product of a column matrix and a row matrix is a well-defined matrix multiplication, but I am not confident in my understanding. Below is an attempt to prove algebraically that every complex number $z$ has a unique matrix representation.
Suppose that $z\in\mathbb{C}$ is a scalar quantity and commutes with any given matrix. So potentially we could write:
$$ \begin{pmatrix}e_1\\e_2\end{pmatrix}z= \begin{pmatrix}e_1z\\e_2z\end{pmatrix}= \begin{pmatrix}ze_1\\ze_2\end{pmatrix}= z\begin{pmatrix}e_1\\e_2\end{pmatrix} $$
From above we have:
$$ z\begin{pmatrix}e_1\\e_2\end{pmatrix} =\begin{pmatrix}a&b\\-b&a\end{pmatrix}\begin{pmatrix}e_1\\e_2\end{pmatrix} $$
Then subtract the left side from the right side and factor out the basis matrix to obtain:
$$ {\bf 0} =\begin{pmatrix}0\\0\end{pmatrix} =\begin{pmatrix}a&b\\-b&a\end{pmatrix}\begin{pmatrix}e_1\\e_2\end{pmatrix} -z\begin{pmatrix}e_1\\e_2\end{pmatrix} = \left[\begin{pmatrix}a&b\\-b&a\end{pmatrix} -z\right]\begin{pmatrix}e_1\\e_2\end{pmatrix} $$
Since $\begin{pmatrix}e_1&e_2\end{pmatrix}^\intercal\ne{\bf 0}$, it must be the case that:
$$ 0=\begin{pmatrix}a&b\\-b&a\end{pmatrix} -z \Rightarrow z=\begin{pmatrix}a&b\\-b&a\end{pmatrix} $$
Obviously the above is not a completely rigorous proof, but I was trying to fill in the details on my own, since I lack resources and have not yet read the relevant books.
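Finally, as a sanity check on the identification itself (a sketch of my own, separate from the argument above), one can verify numerically that the map $a+ib\mapsto\begin{pmatrix}a&b\\-b&a\end{pmatrix}$ turns complex multiplication into matrix multiplication:

```python
# Check that M(z) M(w) = M(z*w) for sample complex numbers,
# where M is the identification a + ib <-> [[a, b], [-b, a]].

def M(z: complex) -> list[list[float]]:
    a, b = z.real, z.imag
    return [[a, b], [-b, a]]

def matmul(X, Y):
    """Plain 2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
assert matmul(M(z), M(w)) == M(z * w)        # products are preserved
assert matmul(M(1j), M(1j)) == M(-1 + 0j)    # i^2 = -1 carries over
```

This is exactly the statement that the representation is a ring homomorphism, which is the usual justification hinted at in the opening paragraphs.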