Have a look at this old answer of mine, the diagram in particular. This should hopefully be something familiar to you.
The idea is that we wish to describe transformations from an abstract $n$-dimensional space $V$ to an abstract $m$-dimensional space $W$ in more familiar, computable terms. If we fix a basis $\beta$ for $V$, this gives us an isomorphism between $V$ and the space $F^n$, which takes an abstract vector $v \in V$ and transforms it into the column vector $[v]_\beta \in F^n$. This turns the mysterious, abstract, possibly difficult-to-work-with space $V$ into a familiar space of column vectors. Addition in $V$ corresponds to adding these column vectors, and similarly for scalar multiplication. We can completely understand $V$ by looking only at coordinate vectors instead.
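To make this concrete (my own small example, not one from your book): take $V = P_1(\Bbb{R})$, the polynomials of degree at most $1$, with basis $\beta = \{1, x\}$. Then
$$[a + bx]_\beta = \begin{pmatrix} a \\ b \end{pmatrix},$$
so, for instance, adding $3 - 2x$ and $1 + 5x$ in $V$ corresponds to adding the column vectors $\begin{pmatrix} 3 \\ -2 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ 5 \end{pmatrix}$ in $\Bbb{R}^2$.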
Similarly, fixing a basis $\gamma$ for $W$ gives us an isomorphism $w \mapsto [w]_\gamma$ from $W$ to $F^m$. In much the same way, we can understand the abstract vector space $W$ concretely in terms of column vectors.
This also means that linear transformations from $V$ to $W$, which again can be quite abstract, can be concretely understood as linear transformations between $F^n$ and $F^m$ (once bases $\beta$ and $\gamma$ are fixed).
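Explicitly (writing $\phi_\beta : V \to F^n$ and $\phi_\gamma : W \to F^m$ for the coordinate isomorphisms above; the names $\phi_\beta, \phi_\gamma$ are just my notation), the linear map $T : V \to W$ corresponds to the linear map
$$S = \phi_\gamma \circ T \circ \phi_\beta^{-1} : F^n \to F^m,$$
i.e. $S([v]_\beta) = [T(v)]_\gamma$ for every $v \in V$.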
The nice thing is that linear transformations between $F^n$ and $F^m$ can be expressed as multiplication by unique $m \times n$ matrices, and this is what the definition is trying to establish. This step is important: we need to establish a correspondence not only between linear maps $T : V \to W$ and linear maps $S : F^n \to F^m$, but also between linear maps $T : V \to W$ and $m \times n$ matrices. Both connections are needed to make this work.
There are two directions to this: we need to show that every linear map from $F^n$ to $F^m$ can be expressed as multiplication by an $m \times n$ matrix, and that multiplication by an $m \times n$ matrix is always a linear map from $F^n$ to $F^m$. The latter is what is about to be established. Without showing that $L_A : F^n \to F^m$ is linear, all we know is that linear maps between $V$ and $W$ correspond to some $m \times n$ matrices. What if certain $m \times n$ matrices turn out to be out-of-bounds?
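(For the first direction, here is the standard argument, sketched from memory rather than from your book: given a linear map $S : F^n \to F^m$, let $A$ be the $m \times n$ matrix whose $j$th column is $S(e_j)$, where $e_1, \dots, e_n$ is the standard basis of $F^n$. Then, by linearity,
$$S(x) = S\left(\sum_{j=1}^n x_j e_j\right) = \sum_{j=1}^n x_j S(e_j) = Ax,$$
so $S = L_A$.)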
They're not. As it turns out, $L_A$ is linear, just by the standard distributivity and scalar-compatibility properties of matrix multiplication, e.g.
$$L_A(x + y) = A(x + y) = Ax + Ay = L_A(x) + L_A(y).$$
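and, spelling out the scalar part for completeness, for any scalar $c$,
$$L_A(cx) = A(cx) = c(Ax) = c\,L_A(x).$$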
Together, these two computations show that $L_A$ is always a linear map.
Here is an example to show you how this definition works. Suppose we pick an arbitrary matrix, say
$$A = \begin{pmatrix} 1 & -1 \\ 0 & 0 \\ 2 & -2\end{pmatrix}.$$
Then, $A$ is $3 \times 2$, and so $L_A$ should be a linear map from $\Bbb{R}^2$ to $\Bbb{R}^3$. By definition,
$$L_A\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 0 & 0 \\ 2 & -2\end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x - y \\ 0 \\ 2x - 2y\end{pmatrix}.$$
Hopefully you can see that this is a linear transformation, and if you were to take the standard matrix of this linear transformation, you would simply get $A$ back. You can do this with any $A$, which is what makes the correspondence work: every $m \times n$ matrix yields a linear map from $F^n$ to $F^m$, and every linear map between finite-dimensional spaces is, in coordinates, multiplication by a matrix.
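As a quick sanity check (just my own verification, not part of the definition): plugging in the standard basis vectors,
$$L_A\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \qquad L_A\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \\ -2 \end{pmatrix},$$
which are exactly the columns of $A$, as expected.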