One way to show this is as follows. Make a disjoint copy $\Sigma'$ of your alphabet, and let $B'$ be the copy of $B$ written over $\Sigma'$. The language $AB'$ is regular. Intersecting it with the context-free language $\{xy' : x \in \Sigma^*,\, y' \in \Sigma'^*,\, |x| = |y'|\}$ gives the language of all words $xy'$, where $|x|=|y'|$, $x \in A$, and $y' \in B'$; this is still context-free, since context-free languages are closed under intersection with regular languages. Finally, remove the tags by applying the homomorphism that replaces each letter of $\Sigma'$ with its counterpart in $\Sigma$ (context-free languages are closed under homomorphisms as well).
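Here is a brute-force sketch of the tagging argument in Python. Everything concrete is an illustrative assumption: $\Sigma = \{a, b\}$, uppercase letters play the role of $\Sigma'$, and $A$, $B$ are arbitrary membership predicates for the two regular languages.

```python
# Uppercase letters stand in for the primed alphabet Σ'; A and B are
# membership predicates for the two regular languages (hypothetical).
SIGMA = set("ab")
PRIMED = set("AB")

def over(s, alphabet):
    return all(c in alphabet for c in s)

def in_AB_primed(w, A, B):
    """Membership in the regular language A·B' (A over Σ, B' over Σ')."""
    return any(over(w[:i], SIGMA) and over(w[i:], PRIMED)
               and A(w[:i]) and B(w[i:].lower())
               for i in range(len(w) + 1))

def in_equal_halves(w):
    """Membership in the context-free language { u v' : |u| = |v'| },
    generated by S -> sigma S tau' (all sigma in Σ, tau' in Σ') and S -> eps."""
    n = len(w)
    return n % 2 == 0 and over(w[:n // 2], SIGMA) and over(w[n // 2:], PRIMED)

def in_target(w, A, B):
    """w is in { xy : x in A, y in B, |x| = |y| } iff tagging its second
    half lands in the intersection; erasing the tags again is exactly the
    homomorphism from the argument above."""
    if len(w) % 2:
        return False
    tagged = w[:len(w) // 2] + w[len(w) // 2:].upper()
    return in_AB_primed(tagged, A, B) and in_equal_halves(tagged)
```

For instance, taking $A = a^*$ and $B = b^*$, the target language is $\{a^n b^n : n \ge 0\}$, and `in_target` accepts exactly those words.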
Another way is using grammars. Construct a DFA $D(A)$ for $A$ and a DFA $D(B^R)$ for the reverse of $B$. Construct a grammar whose nonterminals encode a pair of states $(q_a,q_b)$, where $q_a$ is a state in $D(A)$ and $q_b$ is a state in $D(B^R)$; the start symbol is the pair of initial states. There will be productions $(q_a,q_b) \to \sigma_a (q'_a,q'_b) \sigma_b$ which correspond to transitions in both automata simultaneously (note that $y$ is generated from its last letter inward, which is why we use an automaton for $B^R$ rather than for $B$), and productions $(q_a,q_b) \to \epsilon$ for all pairs of final states.
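As a concrete sketch, the product grammar can be generated mechanically from the two DFAs. The tiny example automata below (for $A = a^*$ and $B = b^*$, so that $B^R = b^*$ too) are assumptions for illustration only.

```python
# Example DFAs given as (states, transition dict, start, final states);
# missing transitions reject. With A = a* and B^R = b*, the product
# grammar should generate { a^n b^n : n >= 0 }.
statesA, dA, startA, finalA = [0], {(0, "a"): 0}, 0, {0}
statesBR, dBR, startBR, finalBR = [0], {(0, "b"): 0}, 0, {0}
alphabet = ["a", "b"]

# Nonterminals are state pairs; each rule mirrors one step of each automaton.
rules = {}
for qa in statesA:
    for qb in statesBR:
        nt = (qa, qb)
        rules[nt] = [(sa, (dA[(qa, sa)], dBR[(qb, sb)]), sb)
                     for sa in alphabet for sb in alphabet
                     if (qa, sa) in dA and (qb, sb) in dBR]
        if qa in finalA and qb in finalBR:
            rules[nt].append(None)  # the production (qa, qb) -> epsilon

def generate(nt, depth):
    """All words derivable from nt in at most `depth` rule applications."""
    words = set()
    if depth == 0:
        return words
    for rule in rules[nt]:
        if rule is None:
            words.add("")
        else:
            sa, mid, sb = rule
            words |= {sa + w + sb for w in generate(mid, depth - 1)}
    return words
```

With these example automata, `generate((startA, startBR), 4)` produces $\{\epsilon, ab, aabb, aaabbb\}$, the words $a^n b^n$ with $n \le 3$.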
Yet another way is using NPDAs. The NPDA first simulates an automaton $D(A)$ for $A$, pushing a symbol $X$ onto the stack whenever it reads a letter. It then nondeterministically transitions to simulating an automaton $D(B)$ for $B$ (only when it is at a final state of $D(A)$!), popping a symbol $X$ from the stack whenever it reads a letter. You should arrange matters so that the NPDA accepts only if it reaches a final state of $D(B)$ and, moreover, exhausts the stack. (You can detect this using a special symbol at the bottom of the stack.)
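The NPDA's behaviour can be sketched by simulating every nondeterministic choice of switch point. The DFA encoding is the same hypothetical one as before, and since the stack only ever holds copies of $X$, it is represented here just by its height.

```python
def npda_accepts(w, dA, startA, finalA, dB, startB, finalB):
    """Simulate the NPDA by trying each switch point: phase 1 runs D(A)
    while 'pushing' one X per letter (stack height = letters read); the
    switch is allowed only at a final state of D(A); phase 2 runs D(B)
    while popping one X per letter. Accept iff D(B) ends in a final
    state with the stack exhausted."""
    for split in range(len(w) + 1):
        q, ok = startA, True
        for c in w[:split]:                  # phase 1: simulate D(A), push
            if (q, c) not in dA:
                ok = False
                break
            q = dA[(q, c)]
        if not ok or q not in finalA:        # may switch only at a final state
            continue
        stack, q = split, startB
        for c in w[split:]:                  # phase 2: simulate D(B), pop
            if stack == 0 or (q, c) not in dB:
                ok = False
                break
            q, stack = dB[(q, c)], stack - 1
        if ok and q in finalB and stack == 0:
            return True
    return False
```

With the running example ($D(A)$ accepting $a^*$, $D(B)$ accepting $b^*$), this accepts exactly the words $a^n b^n$: the stack forces the two phases to read equally many letters.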
I'll let you work out the details.