The matrix under consideration (I am changing the notation for convenience) is:
$$ C = \begin{pmatrix}
c_1 & c_2 & c_3 & \ldots & c_n \\
c_1^2 & c_2^2 & c_3^2 & \ldots & c_n^2 \\
c_1^3 & c_2^3 & c_3^3 & \ldots & c_n^3 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_1^n & c_2^n & c_3^n & \ldots & c_n^n \end{pmatrix} $$
Such a matrix is invertible if and only if (a) all the $c_i$ are nonzero and (b) all the $c_i$ are distinct, since $\det C = \left(\prod_{k} c_k\right) \prod_{i<j} (c_j - c_i)$. A closely related matrix is the transpose of the Vandermonde matrix:
$$ A = \begin{pmatrix}
1 & 1 & 1 & \ldots & 1 \\
c_1 & c_2 & c_3 & \ldots & c_n \\
c_1^2 & c_2^2 & c_3^2 & \ldots & c_n^2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_1^{n-1} & c_2^{n-1} & c_3^{n-1} & \ldots & c_n^{n-1} \end{pmatrix} $$
and the two matrices are related by $C = AD$, where $D = \operatorname{diag}(c_1,c_2,\ldots,c_n)$.
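For concreteness, here is a minimal NumPy sketch (the variable names are mine) that builds both matrices for a small set of nodes and checks the relation $C = AD$ numerically:

```python
import numpy as np

# Example nodes: all nonzero and distinct, so C should be invertible.
c = np.array([0.5, 1.0, 2.0, 3.0])
n = len(c)

# A is the transposed Vandermonde matrix: A[i, j] = c_j^i, i = 0..n-1.
A = np.vander(c, increasing=True).T

# C = A @ D with D = diag(c), so C[i, j] = c_j^(i+1), i = 0..n-1.
D = np.diag(c)
C = A @ D

assert np.allclose(C, [[cj ** (i + 1) for cj in c] for i in range(n)])
print(np.linalg.cond(C))  # large but finite; Vandermonde-type matrices are notoriously ill conditioned
```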
Because the Vandermonde matrix is associated with degree-$(n-1)$ polynomial interpolation at distinct arguments $x = c_1, c_2, \ldots, c_n$, much thought has been given to solving the associated linear systems efficiently. One approach to inverting $C$ would thus be to compute $(A^T)^{-1}$ by solving the $n$ linear systems $A^T x = e_k$, one for each standard basis vector $e_k$; transposing the result gives $A^{-1}$, and then $C^{-1} = D^{-1} A^{-1}$.
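A sketch of that route, using a generic dense solver rather than the specialized algorithms discussed next (the function name is mine):

```python
import numpy as np

def invert_C_via_dual_systems(c):
    """Invert C = A @ diag(c) by solving n linear systems with A^T."""
    c = np.asarray(c, dtype=float)
    n = len(c)
    A = np.vander(c, increasing=True).T          # A[i, j] = c[j]**i
    # Solving A^T X = I yields X = (A^T)^{-1}; transposing gives A^{-1}.
    A_inv = np.linalg.solve(A.T, np.eye(n)).T
    # C = AD implies C^{-1} = D^{-1} A^{-1}: scale row i by 1/c[i].
    return A_inv / c[:, None]

c = np.array([0.5, 1.0, 2.0, 3.0])
C = np.vander(c, increasing=True).T @ np.diag(c)
print(np.allclose(invert_C_via_dual_systems(c) @ C, np.eye(4)))  # True
```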
Golub and Van Loan's book, Matrix Computations, Sec. 4.6, notes that Newton divided differences can be used to solve the interpolation problem $A^T x = b$ in $5n^2/2$ flops (Algorithm 4.6.1).
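Here is a 0-indexed Python transcription of that divided-difference scheme, written in the Björck-Pereyra style rather than copied from the book, so check it against the book's pseudocode before relying on the details:

```python
import numpy as np

def dual_vandermonde_solve(c, b):
    """Solve A^T x = b: find coefficients x of the polynomial
    p(t) = x[0] + x[1]*t + ... + x[n-1]*t**(n-1) with p(c[j]) = b[j]."""
    x = np.array(b, dtype=float)
    n = len(x)
    # Phase 1: overwrite x with the Newton-form coefficients
    # (the divided differences of the data).
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            x[i] = (x[i] - x[i - 1]) / (c[i] - c[i - k])
    # Phase 2: convert from the Newton basis to the monomial basis.
    for k in range(n - 2, -1, -1):
        for i in range(k, n - 1):
            x[i] -= c[k] * x[i + 1]
    return x

c = np.array([0.5, 1.0, 2.0, 3.0])
A = np.vander(c, increasing=True).T
b = np.array([1.0, -2.0, 0.5, 4.0])
print(np.allclose(A.T @ dual_vandermonde_solve(c, b), b))  # True
```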
They also give an equally efficient approach to solving the primal systems $Ax = b$ (Algorithm 4.6.2), which amounts to recasting the divided-difference computations as triangular matrix factorizations of $A^{-1}$. They cite Björck and Pereyra (1970), "Solution of Vandermonde Systems of Equations," Math. Comp. 24, 893-903, for a description and analysis of both algorithms.
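For completeness, here is a matching 0-indexed sketch of the primal solve $Ax = b$, again written in the Björck-Pereyra style rather than copied from the book; the two phases apply the bidiagonal (hence triangular) factors of $A^{-1}$ mentioned above:

```python
import numpy as np

def primal_vandermonde_solve(c, b):
    """Solve A x = b, where A[i, j] = c[j]**i (transposed Vandermonde)."""
    x = np.array(b, dtype=float)
    n = len(x)
    # Phase 1: apply the lower bidiagonal factors of A^{-1} to b.
    for k in range(n - 1):
        for i in range(n - 1, k, -1):
            x[i] -= c[k] * x[i - 1]
    # Phase 2: apply the upper bidiagonal factors
    # (divided-difference scalings followed by subtractions).
    for k in range(n - 2, -1, -1):
        for i in range(k + 1, n):
            x[i] /= c[i] - c[i - k - 1]
        for i in range(k, n - 1):
            x[i] -= x[i + 1]
    return x

c = np.array([0.5, 1.0, 2.0, 3.0])
A = np.vander(c, increasing=True).T
b = np.array([1.0, -2.0, 0.5, 4.0])
print(np.allclose(A @ primal_vandermonde_solve(c, b), b))  # True
```

Applying the dual routine to the $n$ standard basis vectors gives $(A^T)^{-1}$ one column at a time, and hence $C^{-1} = D^{-1}A^{-1}$, without any pivoting.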