Is inverting a matrix in the complexity class $\text{P}$?

From the running time I would say yes, $\mathcal{O}(n^3)$, but can the inverted matrix contain entries whose size is not polynomially bounded by the input?
Yes, it can be done in polynomial time, but the proof is quite subtle. It's not simply $\mathcal{O}(n^3)$ time, because Gaussian elimination involves multiplying and adding numbers, and the time to perform each of those arithmetic operations depends on how large the numbers are. For some matrices, the intermediate values can grow extremely large (exponentially many bits, in the worst case), so naive Gaussian elimination doesn't necessarily run in polynomial time.
Fortunately, there are algorithms that do run in polynomial time. They require considerably more care in both the design and the analysis of the algorithm to prove that the running time is polynomial, but it can be done. For instance, the running time of Bareiss's algorithm is roughly $\mathcal{O}(n^5 (\log n)^2)$ [the exact bound is more complicated than that, but take that as a simplification for now].
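To make the idea concrete, here is a sketch of Bareiss's fraction-free elimination in Python, used to compute a determinant exactly (the function name `bareiss_determinant` is my own; this is an illustration, not a tuned implementation). The key property is that every intermediate value is an integer whose bit-length stays polynomially bounded, in contrast to the fractions produced by naive elimination:

```python
def bareiss_determinant(a):
    """Determinant of a square integer matrix via Bareiss's
    fraction-free Gaussian elimination.

    Every intermediate entry is an integer (the divisions below are
    exact), and its bit-length is polynomially bounded in the input
    size, which is what makes the overall running time polynomial."""
    n = len(a)
    m = [row[:] for row in a]  # work on a copy
    prev = 1                   # pivot from the previous step
    sign = 1                   # tracks row swaps
    for k in range(n):
        # Pivot: ensure m[k][k] is nonzero, swapping rows if needed.
        if m[k][k] == 0:
            for r in range(k + 1, n):
                if m[r][k] != 0:
                    m[k], m[r] = m[r], m[k]
                    sign = -sign
                    break
            else:
                return 0  # singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Exact division: prev always divides this product.
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[n - 1][n - 1]
```

For example, `bareiss_determinant([[1, 2], [3, 4]])` returns `-2`, using only integer arithmetic along the way.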
For many more details, see Dick Lipton's blog entry "Forgetting Results", the question "What is the actual time complexity of Gaussian elimination?", and Wikipedia's summary.
Finally, a word of caution. The precise running time depends upon exactly what field you are working over. The above discussion applies if you are working with rational numbers. On the other hand, if, for instance, you are working over the finite field $\mathrm{GF}(2)$ (the integers modulo 2), then naive Gaussian elimination does run in $\mathcal{O}(n^3)$ time. If you don't understand what this means, you can likely ignore this last paragraph.
There is a formula for the entries of the inverse matrix which gives each entry as a ratio of two determinants: the determinant of a minor of the original matrix, and the determinant of the entire original matrix. This should help you bound the size of the entries in the inverse matrix, if you're careful, given a reasonable notion of "size" (note that even if you start with an integer matrix, the inverse could contain rational entries).
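Concretely, this is the adjugate formula (equivalently, Cramer's rule):
$$(A^{-1})_{ij} = \frac{(-1)^{i+j}\det(M_{ji})}{\det(A)},$$
where $M_{ji}$ is the matrix obtained from $A$ by deleting row $j$ and column $i$. By Hadamard's inequality, the determinant of an $n \times n$ integer matrix whose entries have bit-length at most $b$ has bit-length $\mathcal{O}(n(b + \log n))$, so both the numerator and the denominator, and hence every entry of the inverse, have polynomially bounded size.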
That said, matrix inversion is often studied from the point of view of algebraic complexity theory, in which you count basic operations regardless of magnitude. In this model, one can show that the complexity of matrix inversion is equivalent to the complexity of matrix multiplication, up to polylogarithmic factors; this reduction can perhaps also help you bound the size of the coefficients.
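One standard direction of that reduction is blockwise inversion via the Schur complement (a sketch; it applies when the top-left block $A$ and the Schur complement $S$ are both invertible):
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{pmatrix}, \qquad S = D - C A^{-1} B.$$
Inverting an $n \times n$ matrix thus reduces to two inversions of half the size plus a constant number of multiplications, which gives an inversion time of $\mathcal{O}(M(n))$ where $M(n)$ is the cost of matrix multiplication.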
Given the efficient algorithm in the algebraic complexity model, one wonders whether it implies a similarly efficient algorithm in the usual model: could it be that although the final entries are of polynomial size, the calculation involves larger ones? This is probably not the case, and even if it were, the issue could perhaps be avoided using the Chinese remainder theorem.
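As a toy illustration of the Chinese remainder idea, one can do all the arithmetic modulo several small primes (where intermediate values never grow) and reconstruct the exact integer at the end. The sketch below recovers the determinant $-2$ of $\begin{pmatrix}1&2\\3&4\end{pmatrix}$ from its residues mod 5, 7, and 11 (the helper `crt` is my own; it uses Python 3.8+'s modular inverse via `pow(x, -1, p)`):

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder reconstruction for pairwise-coprime moduli:
    returns the unique x in [0, prod(moduli)) with x = r_i (mod p_i)."""
    m = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        mp = m // p
        x += r * mp * pow(mp, -1, p)  # pow(., -1, p) is the inverse mod p
    return x % m

# Compute det([[1, 2], [3, 4]]) modulo each small prime separately;
# each modular computation involves only small numbers.
primes = [5, 7, 11]
residues = [(1 * 4 - 2 * 3) % p for p in primes]
m = prod(primes)  # 385, larger than twice the true |det|
d = crt(residues, primes)
# Map back to the symmetric range (-m/2, m/2] to recover the sign.
det = d - m if d > m // 2 else d  # -2
```

The same trick applies entrywise to the numerators and the common denominator given by the adjugate formula, provided the product of the primes exceeds the Hadamard bound on those determinants.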