Summary: the equation $\def\rk{\operatorname{rank}}\rk(AB)=\rk(B)$ holds, but (supposing $BA$ is defined) one can have $\rk(BA)\neq\rk(B)$. Simple example of the latter:
$$
A=\begin{pmatrix}1\\0\end{pmatrix}
,\qquad
B=\begin{pmatrix}0&1\end{pmatrix}
,\qquad
BA=(0)
,\qquad\text{(while } AB=\begin{pmatrix}0&1\\0&0\end{pmatrix}\text{).}
$$
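For readers who like to double-check numerically, here is a quick sanity check of this example with NumPy (the matrices are exactly the ones displayed above; the printed ranks are what the display claims):

```python
import numpy as np

# The 2x1 matrix A (full column rank) and the 1x2 matrix B from the display above.
A = np.array([[1],
              [0]])
B = np.array([[0, 1]])

print(np.linalg.matrix_rank(B))      # 1
print(np.linalg.matrix_rank(A @ B))  # 1  -> rank(AB) = rank(B)
print(np.linalg.matrix_rank(B @ A))  # 0  -> rank(BA) != rank(B)
```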
One does have (with the given hypotheses) $\ker(AB)=\ker(B)$, which more generally holds whenever $\ker(A)\cap\operatorname{range}(B)=\{0\}$ (the given hypotheses say that $\ker(A)=\{0\}$, which is stronger).
Saying that $A$ has full column rank is saying that the columns of $A$ are linearly independent, which means that the map $x\mapsto Ax$ is injective: since $Ax$ is a linear combination of the columns of $A$ with the entries of $x$ as coefficients, linear independence says that $Ax=0$ has only the trivial solution $x=0$. An alternative formulation: $\ker(A)=\{0\}$.
Now an injective linear map sends linearly independent sets to linearly independent sets, so an independent set of columns of $B$ gives an independent set of columns of~$AB$ at the same column indices. Moreover, any other column of $B$ is a linear combination of the columns in a maximal independent set, and applying $A$ shows the same relation among the corresponding columns of $AB$; hence a maximal independent set of columns of $B$ corresponds (at the same column indices) to a maximal independent set of columns of~$AB$, which gives $\rk(AB)=\rk(B)$. As the counterexample shows, this argument does not work when $A$ is the right factor, as in $BA$.
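If one wants a quick empirical illustration of $\rk(AB)=\rk(B)$ (not a proof, just a sketch with arbitrarily chosen shapes and random entries, which make $A$ full column rank with probability $1$):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tall random matrix has full column rank generically, so x -> Ax is injective;
# B is an arbitrary matrix of compatible shape.
A = rng.standard_normal((5, 3))
B = rng.standard_normal((3, 4))

assert np.linalg.matrix_rank(A) == 3                              # full column rank
assert np.linalg.matrix_rank(A @ B) == np.linalg.matrix_rank(B)   # rank(AB) = rank(B)
```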
As for $\ker(AB)$: clearly every $x$ with $Bx=0$ also satisfies $ABx=0$, so $\ker(B)\subseteq\ker(AB)$ without any hypothesis on $A$ (other than that $AB$ is defined). As for a condition for the reverse inclusion: if $Bx\neq0$ but nonetheless $ABx=0$, then $Bx$ is a nonzero vector in $\ker(A)$. Clearly this cannot happen when $\ker(A)=\{0\}$, and more generally it cannot happen whenever $\ker(A)\cap\operatorname{range}(B)=\{0\}$, since $Bx$ would have to be a nonzero vector in that intersection.
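Again purely as an illustration, one can check $\ker(AB)=\ker(B)$ numerically; this sketch assumes SciPy's `null_space`, which returns an orthonormal basis of the kernel as columns:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))   # full column rank generically, so ker(A) = {0}
B = rng.standard_normal((3, 4))

NB  = null_space(B)       # orthonormal basis of ker(B)
NAB = null_space(A @ B)   # orthonormal basis of ker(AB)

# The kernels have the same dimension, and every basis vector of ker(AB) is
# annihilated by B, i.e. ker(AB) is contained in ker(B); together with the
# inclusion ker(B) in ker(AB), this gives ker(AB) = ker(B).
assert np.linalg.matrix_rank(A) == 3
assert NB.shape[1] == NAB.shape[1]
assert np.allclose(B @ NAB, 0)
```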