There is indeed a way to factorize/split, but not quite as you suggest. In the process you have to restrict the action of $A$ to the image of $B$.
Setting $V= {\rm Im}\; B$, you have the formula
$$ \det(A_{|V}^T A_{|V}^{\strut}) \det (B^T B) = \det((AB)^T AB).$$
It looks somewhat similar to your suggestion, and $\det(B^T B)$ indeed appears as a factor, but $\det(A^TA)$ does not.
As noted elsewhere, in the setup you describe there is no relation between the RHS and $\det(A^T A)$.
If $B$ is not injective then neither is $AB$, so $\det(B^TB)=0$ and $\det((AB)^T AB)=0$ and the identity holds trivially. In the following I therefore assume that $B$ is injective.
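Before the proof, here is a quick numerical sanity check of the displayed identity. It is only a sketch: the dimensions are arbitrary, $B$ is a random matrix (hence almost surely injective), and a reduced QR factorization is used to produce an orthonormal basis of ${\rm Im}\, B$.

```python
# Numerical sanity check of  det(A|V^T A|V) det(B^T B) = det((AB)^T AB).
import numpy as np

rng = np.random.default_rng(0)
k, l, m = 3, 5, 4
B = rng.standard_normal((l, k))   # B : R^k -> R^l, injective almost surely
A = rng.standard_normal((m, l))   # A : R^l -> R^m

# Orthonormal basis f_1,...,f_k of V = Im B via reduced QR.
F, _ = np.linalg.qr(B)            # l x k, orthonormal columns spanning Im B
A_V = A @ F                       # matrix of A restricted to V in the basis (f_i)

lhs = np.linalg.det(A_V.T @ A_V) * np.linalg.det(B.T @ B)
rhs = np.linalg.det((A @ B).T @ (A @ B))
print(np.isclose(lhs, rhs))       # True
```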
Before giving the proof of the formula and explaining the meaning of the first factor,
let me provide an identity for the second. Let $B:{\Bbb R}^k \to {\Bbb R}^\ell$ and pick an orthonormal basis $(e_1,\dots,e_k)$ of ${\Bbb R}^k$. The image of $B$ is spanned by
the vectors $(B e_1,\dots,B e_k)$, which we have assumed to be independent. Then
one has the identity:
$$ \det(B^T B) = \det_{1\leq i,j \leq k} \left( \langle B e_i,B e_j \rangle_{\ell}\right).
$$
Here $\langle x,y \rangle_{\ell}=x^T y$ is the Euclidean scalar product of $x,y\in {\Bbb R}^\ell$. To see this formula, note that the LHS is
exactly the RHS when $(e_1,\dots,e_k)$ is the canonical basis. Passing to another orthonormal basis amounts to conjugating the Gram matrix by an orthogonal matrix (and its transpose); each has determinant $\pm 1$, so the two factors cancel and the determinant is unchanged. The claim follows.
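For what it is worth, this basis-independence can also be checked numerically; the following is just a sketch with arbitrary dimensions, where the non-canonical orthonormal basis is obtained from the QR factorization of a random matrix.

```python
# Sketch: det(B^T B) equals the Gram determinant det(<B e_i, B e_j>) for any
# orthonormal basis (e_1,...,e_k) of R^k, not just the canonical one.
import numpy as np

rng = np.random.default_rng(1)
k, l = 3, 5
B = rng.standard_normal((l, k))                    # B : R^k -> R^l

E, _ = np.linalg.qr(rng.standard_normal((k, k)))   # columns: a random orthonormal basis of R^k
gram = (B @ E).T @ (B @ E)                         # Gram matrix with entries <B e_i, B e_j>

print(np.isclose(np.linalg.det(B.T @ B), np.linalg.det(gram)))   # True
```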
Now, for $A : {\Bbb R}^\ell \to {\Bbb R}^m$ let $A_{|V}^{\strut}$ denote the restriction of $A$ to the $k$-dimensional subspace $V={\rm Im}\; B$. We obtain a matrix representation by picking an orthonormal basis $f_1,\dots,f_k$ of $V$ and writing down the action of $A$ on this basis. As above, the following expression (which defines the first factor) is independent of the choice of orthonormal basis:
$$\det(A^T_{|V} A^\strut_{|V}) = \det_{1\leq i,j \leq k} \left( \langle A f_i,A f_j \rangle_{m}\right). $$
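A minimal numerical illustration of this independence (the second orthonormal basis of $V$ is obtained by rotating the first with a random orthogonal matrix; all choices here are only for illustration):

```python
# Sketch: det(<A f_i, A f_j>) does not depend on which orthonormal basis of V = Im B is used.
import numpy as np

rng = np.random.default_rng(2)
k, l, m = 3, 5, 4
B = rng.standard_normal((l, k))
A = rng.standard_normal((m, l))

F1, _ = np.linalg.qr(B)                              # one orthonormal basis of Im B (columns)
O, _ = np.linalg.qr(rng.standard_normal((k, k)))     # random k x k orthogonal matrix
F2 = F1 @ O                                          # another orthonormal basis of the same subspace

d1 = np.linalg.det((A @ F1).T @ (A @ F1))
d2 = np.linalg.det((A @ F2).T @ (A @ F2))
print(np.isclose(d1, d2))                            # True
```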
In order to conclude the proof of the first formula,
write $B e_i = \sum_p f_p M_{pi}$. Since the $f_p$ are orthonormal, $\langle B e_i,B e_j \rangle_{\ell} = (M^T M)_{ij}$, so by the identity above $\; \det(B^TB) = \det(M^T M)= \det(M^T)\det(M)\;$, where the last step uses that $M$ (unlike $B$) is a square matrix. Moreover $AB e_i = \sum_p (A f_p) M_{pi}$, and
therefore
$$ \det((AB)^T AB) = \det_{1\leq i,j \leq k} \left( \langle A B e_i,A B e_j \rangle_{m}\right) = \det_{1\leq i,j \leq k}
\left(\sum_{p,q} M^T_{ip} \langle A f_p,A f_q \rangle_{m} M_{qj}\right).
$$
Inside the determinant we now have a product of square matrices, so the determinant factors and the expression becomes:
$$\det(M^T) \det(A_{|V}^T A_{|V}^{\strut}) \det (M) =
\det(A_{|V}^T A_{|V}^{\strut}) \det (B^T B). $$
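For completeness, a short numerical sketch of this last step: when the columns of $F$ form an orthonormal basis of $V$, the matrix $M$ above is simply $F^T B$, and both relations used in the proof can be checked directly (NumPy, arbitrary dimensions, for illustration only).

```python
# Sketch of the last step with an explicit M: writing B e_i = sum_p f_p M_{pi}
# amounts to M = F^T B when the columns of F are the orthonormal basis (f_p).
import numpy as np

rng = np.random.default_rng(3)
k, l, m = 3, 5, 4
B = rng.standard_normal((l, k))
A = rng.standard_normal((m, l))

F, _ = np.linalg.qr(B)            # columns: orthonormal basis of V = Im B
M = F.T @ B                       # k x k change-of-coordinates matrix
G = (A @ F).T @ (A @ F)           # Gram matrix with entries <A f_p, A f_q>

print(np.isclose(np.linalg.det(B.T @ B), np.linalg.det(M) ** 2))    # det(B^T B) = det(M)^2
print(np.isclose(np.linalg.det((A @ B).T @ (A @ B)),
                 np.linalg.det(M.T @ G @ M)))                        # det((AB)^T AB) = det(M^T G M)
```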