This question appears in the early chapters of *Linear Algebra* by Hoffman & Kunze, so I am trying to give a proof using only the tools introduced so far: mainly row reduction, matrix multiplication, the row-reduced echelon form, row equivalence, and linear independence.
My attempt at a proof is as follows:
Write $A$ as a stack of $m$ row vectors, each $1 \times n$, and $B$ as a list of $m$ column vectors, each $n \times 1$. Then we have:
$$ A=\begin{bmatrix} r_1 \\ \vdots \\ r_m \end{bmatrix},\qquad B=\begin{bmatrix} c_1 & \cdots & c_m \end{bmatrix}. $$ It follows that $$ AB =\begin{bmatrix} r_1\cdot c_1 & \cdots & r_1\cdot c_m \\ \vdots & \ddots & \vdots \\ r_m\cdot c_1 & \cdots & r_m \cdot c_m \end{bmatrix}. $$ Since $n < m$ (the standing hypothesis of the exercise), the $m$ rows $r_1, \dots, r_m$ of $A$ are vectors in an $n$-dimensional space, so they must be linearly dependent.
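Any such dependence carries over to the rows of $AB$, because the $i$-th row of $AB$ is exactly $r_i B$. Concretely, if $a_1 r_1 + \cdots + a_m r_m = 0$ with not all $a_i$ zero, then
$$ \sum_{i=1}^{m} a_i \,\bigl(\text{row}_i \text{ of } AB\bigr) \;=\; \sum_{i=1}^{m} a_i \,(r_i B) \;=\; \Bigl(\sum_{i=1}^{m} a_i r_i\Bigr) B \;=\; 0, $$
so the rows of $AB$ satisfy the same nontrivial relation.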
Since the rows of $AB$ are linearly dependent, its row-reduced echelon form contains a zero row, so $AB$ is not row-equivalent to the identity matrix, and hence $AB$ is not invertible.
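As a quick numerical sanity check (independent of the proof itself), one can confirm the claim on random matrices. Below is a minimal sketch assuming NumPy is available, with $m = 5$ and $n = 3$ chosen arbitrarily to satisfy $n < m$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions with n < m, as in the exercise's hypothesis.
m, n = 5, 3

for _ in range(1000):
    A = rng.standard_normal((m, n))   # A is m x n
    B = rng.standard_normal((n, m))   # B is n x m
    AB = A @ B                        # AB is m x m
    # rank(AB) <= min(rank(A), rank(B)) <= n < m, so AB must be singular.
    assert np.linalg.matrix_rank(AB) <= n

print("Every product AB had rank at most n < m, i.e. was singular.")
```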
Would this be a mathematically sufficient proof?