If you have a complete basis of right eigenvectors for $M$, say $\{v_1,v_2,\ldots,v_\ell\}$ such that $Mv_k = \lambda_k v_k$, then finding $M^n x$ would be easy if we knew the linear combination $x = \sum_{k=1}^\ell c_k v_k$:
$$ M^n x = \sum_{k=1}^\ell c_k \lambda_k^n v_k $$
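To make the payoff concrete, here is a minimal NumPy sketch (the matrix, eigenvalues, and coefficients are all made up for illustration): once the $c_k$ are known, $M^n x$ is just a scaled recombination of the eigenvectors, with no matrix powers at all.

```python
import numpy as np

# Illustrative data: eigenvalues lambda_k and right eigenvectors v_k (columns of V)
lam = np.array([2.0, 0.5, -1.0])
V = np.column_stack([[1.0, 0.0, 1.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0]])
c = np.array([3.0, -2.0, 1.0])     # coefficients of x = sum_k c_k v_k
n = 10

x = V @ c
Mn_x = V @ (c * lam**n)            # M^n x = sum_k c_k lambda_k^n v_k

# Cross-check against an explicit matrix power of M = V diag(lam) V^{-1}
M = V @ np.diag(lam) @ np.linalg.inv(V)
assert np.allclose(np.linalg.matrix_power(M, n) @ x, Mn_x)
```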
The OP asks in an edit whether also knowing the left eigenvectors will help us to get that linear combination. In the present Question the OP stipulated in the Comments that these were complete bases and that the multiplicity of each eigenvalue is $1$.
The fact is that left and right eigenvectors of distinct eigenvalues are orthogonal. This sort of thing has been shown in previous Questions here, but often the discussion is confounded by additional matrix properties (like normality) that are unnecessary. So let's give the one-line proof.
Suppose $v$ is a right eigenvector, $Mv = \lambda v$, and $u^T$ is a left eigenvector, $u^T M = \mu u^T$. If $\lambda \neq \mu$, then:
$$ \lambda (u^T v) = u^T M v = \mu (u^T v) $$
by letting $M$ act either on the right eigenvector or on the left one. Hence $(\lambda - \mu)(u^T v) = 0$, and since the first factor is nonzero, $u^T v$ must be zero.
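If you want to see this numerically, here is a small sketch (on a random matrix, purely illustrative) checking that $u_j^T v_k$ vanishes whenever the eigenvalues differ:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

lam, V = np.linalg.eig(M)    # columns of V: right eigenvectors, M v = lam v
mu, W = np.linalg.eig(M.T)   # columns of W: left eigenvectors, u^T M = mu u^T

for j in range(4):
    for k in range(4):
        if not np.isclose(mu[j], lam[k]):
            # distinct eigenvalues => u_j^T v_k = 0 (plain dot, no conjugation)
            assert abs(W[:, j] @ V[:, k]) < 1e-10
```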
Therefore if the left eigenvectors $\{u_1,u_2,\ldots,u_\ell\}$ are ordered so that $u_j^T M = \lambda_j u_j^T$ to match the order of right eigenvectors, then taking "inner products" gives the coefficients:
$$ u_j^T x = \sum_{k=1}^\ell u_j^T (c_k v_k) = c_j (u_j^T v_j) $$
because all the other terms in the summation vanish. Note also that $u_j^T v_j \neq 0$: otherwise $u_j$ would be orthogonal to every vector in the basis $\{v_1,\ldots,v_\ell\}$, forcing $u_j = 0$. Thus:
$$ c_j = \frac{u_j^T x}{u_j^T v_j} $$
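A short NumPy sketch of this recipe (the basis and vector are made up; the left eigenvectors are manufactured here by rescaling the rows of $V^{-1}$, which are always valid left eigenvectors of any matrix diagonalized by $V$, so that $u_j^T v_j \neq 1$ and the division actually matters):

```python
import numpy as np

# Columns of V: right eigenvectors v_j; columns of W: left eigenvectors u_j
# in the matching eigenvalue order.
V = np.column_stack([[1.0, 0.0, 1.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0]])
scale = np.array([2.0, -3.0, 0.5])
W = (scale[:, None] * np.linalg.inv(V)).T   # rescaled rows of V^{-1} as columns

x = np.array([1.0, 2.0, 3.0])
c = (W.T @ x) / np.sum(W * V, axis=0)       # c_j = (u_j^T x) / (u_j^T v_j)
assert np.allclose(V @ c, x)                # indeed x = sum_j c_j v_j
```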
Added:
Since this post was motivated in part by the concern that inverting the matrix $B$, whose columns form the known basis of right eigenvectors, might be difficult, it's worth noting what the existence of a basis of left eigenvectors tells us about forming that inverse.
Let $A$ be the $\ell \times \ell$ matrix whose rows are the left eigenvectors, in an eigenvalue order compatible with the right eigenvectors (the columns of $B$). In the notation used above:
$$ A = \begin{pmatrix} u_1^T \\ u_2^T \\ \vdots \\ u_\ell^T \end{pmatrix}
\; , \; B = \begin{pmatrix} v_1 & v_2 & \cdots & v_\ell \end{pmatrix} $$
As shown above, $AB$ is a diagonal matrix $D$ with entries $D_{jj} = u_j^T v_j$, which explains the comment of Robert Israel. Since $A$ and $B$ are invertible (each having rank $\ell$), $D$ is also invertible. Thus:
$$ B^{-1} = D^{-1} A$$
The computation of $B^{-1}$ is thus tractable once the left eigenvalue/eigenvector pairs are known. From this we can conclude that $M$ is diagonalized by the similarity transformation $B^{-1} M B$.
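Here is a sketch of that construction with made-up data (the call to `np.linalg.inv` appears only to manufacture $A$; in the setting of the Question, $A$ would already be known):

```python
import numpy as np

lam = np.array([2.0, 0.5, -1.0])
B = np.column_stack([[1.0, 0.0, 1.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0]])               # columns: right eigenvectors
A = np.diag([2.0, -3.0, 0.5]) @ np.linalg.inv(B)     # rows: left eigenvectors
M = B @ np.diag(lam) @ np.linalg.inv(B)

D = A @ B                                  # diagonal, D_jj = u_j^T v_j
B_inv = np.diag(1.0 / np.diag(D)) @ A      # B^{-1} = D^{-1} A

assert np.allclose(B_inv @ B, np.eye(3))
assert np.allclose(B_inv @ M @ B, np.diag(lam))   # similarity diagonalizes M
```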