The connection is this: a matrix consists of the coefficients of a (1,1) tensor, but it is not a tensor itself.
Suppose we are talking about a linear transformation $T$ on an $n$-dimensional vector space $V$.
Now $T$ is certainly a tensor (tensors are, after all, multilinear maps on copies of $V$ and $V^\ast$, and a linear transformation can be interpreted as the multilinear function $(\varphi, v) \mapsto \varphi(Tv)$ from $V^\ast \times V$ to $\mathbb{F}$).
Once a basis for $V$ is fixed, you can talk about the matrix $A$ of $T$ with respect to that basis. The same goes for general multilinear functions on copies of $V$ and $V^\ast$: after you have fixed a basis, you get a big array holding the function's components.
It's important not to confuse the array with the tensor. The tensor is a basis-independent entity: it's a kind of function. The array of components is just one particular representation of that function, and those components depend on the choice of basis.
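To make the distinction concrete, here is a small NumPy sketch (the map and bases are made up for illustration): the same linear map $T$ on $\mathbb{R}^2$ gets two different component arrays in two different bases, related by $A' = P^{-1} A P$, yet both arrays describe the same underlying function.

```python
import numpy as np

# Components of a linear map T on R^2, written in the standard basis.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# A different basis for R^2: the columns of P are the new basis vectors.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Components of the *same* map T in the new basis: A' = P^{-1} A P.
A_prime = np.linalg.inv(P) @ A @ P

# The arrays differ, so the component array is basis-dependent...
assert not np.allclose(A, A_prime)

# ...but both represent the same map: taking a vector v, converting it to
# new-basis coordinates, applying A', and converting back reproduces A v.
v = np.array([1.0, 2.0])
v_new = np.linalg.inv(P) @ v              # coordinates of v in the new basis
assert np.allclose(P @ (A_prime @ v_new), A @ v)
```

In this example $A'$ even comes out diagonal, since the chosen basis happens to consist of eigenvectors of $A$; the point is only that the arrays $A$ and $A'$ are different while $T$ itself never changed.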