Let $A$ be the matrix of $\def\A{\mathscr A}\A$ with respect to the basis $\def\B{\mathcal B}\B=[\xi_1,\ldots,\xi_n]$. Then the minimal polynomial $m_\A$ is also the monic polynomial of lowest degree such that $m_\A[A]$ is the zero matrix (of size$~n$). Moreover, since in coordinates with respect to$~\B$ each basis vector $\xi_j$ is given by$~\def\e{\mathbf e}\e_j$, a standard basis vector of $\Bbb F^n$, the condition $p[\A](\xi_j)=0$ for a polynomial$~p\in\Bbb F[\lambda]$ translates into $p[A]\cdot\e_j=0\in\Bbb F^n$, which just says that column$~j$ of the matrix $p[A]$ is zero. By definition of $m_{\xi_j}$ this happens precisely when $p$ is a (polynomial) multiple of $m_{\xi_j}$.
Now $p[A]=0$ means that all columns of $p[A]$ are zero, which by the above means that $p$ is a common multiple of $m_{\xi_1},\ldots,m_{\xi_n}$. Since $m_\A$ is the monic polynomial of lowest degree with that property, it is by definition the least common multiple of $m_{\xi_1},\ldots,m_{\xi_n}$. This is really all there is to it.
(In fact it was not necessary to use matrices, since $p[\A]=0$ just means that $p[\A](\xi_j)=0$ for all $j$, whence $\def\lcm{\operatorname{lcm}}\lcm(m_{\xi_1},\ldots,m_{\xi_n})\mid p$. But the matrix point of view makes this more visual.)
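To make this concrete, here is a small computational sketch in Python/SymPy; it is my own illustration, not part of the argument, and the helper name `vector_min_poly` as well as the example matrix are ad hoc choices of mine. It finds each $m_{\xi_j}$ as the first linear dependence in the Krylov sequence $\xi_j, A\xi_j, A^2\xi_j,\ldots$ and then takes the least common multiple:

```python
import functools
import sympy as sp

x = sp.symbols('x')

def vector_min_poly(A, v):
    """Monic m_v of least degree with m_v[A](v) = 0, for nonzero v.

    Found as the first linear dependence among v, A*v, A**2*v, ...;
    at that point the nullspace is one-dimensional and its last
    coordinate is nonzero, so normalising by it makes m_v monic.
    """
    cols = [v]
    while True:
        cols.append(A * cols[-1])
        null = sp.Matrix.hstack(*cols).nullspace()
        if null:
            c = null[0] / null[0][-1]         # normalise: monic
            return sp.Poly(list(c)[::-1], x)  # highest degree first

# Example: a Jordan block of size 2 for eigenvalue 2, plus a 1x1 block.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 2]])
basis = [sp.eye(3).col(j) for j in range(3)]
polys = [vector_min_poly(A, v) for v in basis]
# m_{e1} = x - 2,  m_{e2} = (x - 2)**2,  m_{e3} = x - 2
m_A = functools.reduce(sp.lcm, polys)  # Poly(x**2 - 4*x + 4, x)
```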
I should note that this way of computing $m_\A$ (finding all the $m_{\xi_j}$ and taking their $\lcm$) is in general quite inefficient. The reason is that the polynomials $m_{\xi_j}$ are likely to be strongly interrelated, as can be seen from the fact that each one has a degree $d_j$ that could be as high as$~n$ (and for "generic" $\A$ and $\xi_j$ one has $d_j=n$), yet their $\lcm$ still cannot have degree more than$~n$ (by the Cayley-Hamilton theorem). Since $d_1+\cdots+d_n$ is usually much larger than$~n$, one sees that the $m_{\xi_j}$ are then far from being pairwise relatively prime. What is going on is that the kernel of $m_\alpha[\A]$, for any vector$~\alpha$, not only contains $\alpha$ by construction, but is also an $\A$-invariant subspace, so it must contain all repeated images $\A^i(\alpha)$ of$~\alpha$. Those images are linearly independent for $i<d=\deg(m_\alpha)$, so the kernel has dimension at least$~d$. In order to have the relation $m_\A=\lcm(m_{\alpha_1},\ldots,m_{\alpha_k})$ for certain vectors $\alpha_1,\ldots,\alpha_k$, it suffices that the sum of those kernels fill up the whole space; with the vectors running through a basis$~\B$ this is assured, but often much fewer vectors (even a single one) will suffice.
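To see this redundancy in practice, here is a quick check reusing `vector_min_poly` from the sketch above (the matrix is again just an example of mine): every basis vector already has an annihilator of full degree $n=3$, so the sum of the degrees is $9$, yet their $\lcm$ has degree $3$, and a single vector determines $m_\A$ outright.

```python
# Reusing vector_min_poly and sympy as sp from the sketch above.
B = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]])   # cyclic permutation matrix, B**3 == eye(3)
for j in range(3):
    print(vector_min_poly(B, sp.eye(3).col(j)))   # x**3 - 1 each time
# All three m_{e_j} coincide, so d_1 + d_2 + d_3 = 9 while their lcm
# x**3 - 1 has degree 3: here one single vector already gives m_B.
```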
But one can even avoid computing any $\lcm$ at all. Having computed some $m_\alpha$, it is easy to see that the (quotient) factor $Q=m_\A/m_\alpha$ that is "missing" from the minimal polynomial is precisely the minimal polynomial of the restriction of $\A$ to the image subspace $W$ of $m_\alpha[\A]$, in other words the monic polynomial$~p$ of minimal degree such that $W\subseteq\ker(p[\A])$. (Indeed $p[\A]$ vanishes on $W=\operatorname{im}(m_\alpha[\A])$ if and only if $(p\,m_\alpha)[\A]=0$, which happens if and only if $m_\A$ divides $p\,m_\alpha$.) This leads to the following simple (at least in theory) algorithm for computing $m_\A$, in terms of a variable polynomial$~p$, initialised to $1$, whose final value gives $m_\A$, and a variable subspace$~W$, initialised to $V$, which ultimately becomes $\{0\}$:
- While $\dim(W)>0$: choose a nonzero vector $w\in W$, and compute the polynomial $m_w$; replace $p$ by the product $pm_w$, and replace $W$ by its image $m_w[\A](W)$.
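In code the loop could look as follows, still a sketch under the same SymPy conventions and reusing `vector_min_poly` from above; `poly_at_matrix` is my ad hoc Horner evaluation of a polynomial at a matrix, and $W$ is represented by a matrix whose columns form a basis of it:

```python
def poly_at_matrix(q, A):
    """Evaluate the polynomial q at the square matrix A (Horner's scheme)."""
    M = sp.zeros(A.rows)
    for c in q.all_coeffs():          # coefficients, highest degree first
        M = M * A + c * sp.eye(A.rows)
    return M

def min_poly(A):
    """The loop above: p starts at 1 and W starts as the whole space."""
    p = sp.Poly(1, x)
    W = sp.eye(A.rows)                # columns form a basis of W = V
    while W.cols > 0:                 # while dim(W) > 0
        w = W.col(0)                  # any nonzero vector of W will do
        m_w = vector_min_poly(A, w)
        p = p * m_w                   # replace p by p * m_w
        image = poly_at_matrix(m_w, A) * W        # W := m_w[A](W)
        cols = image.columnspace()    # keep an independent spanning set
        W = sp.Matrix.hstack(*cols) if cols else sp.zeros(A.rows, 0)
    return p

print(min_poly(A))   # Poly(x**2 - 4*x + 4, x) for the matrix A above
```

Representing $W$ by a column basis makes both steps of the loop body a single matrix operation: picking $w$ is taking a column, and applying $m_w[\A]$ to $W$ is one matrix product followed by pruning dependent columns.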