This is true for all commutative rings $R$, and we can prove it via permanence of identities. The idea is to prove the claim for matrices with variable entries, and then argue that it must therefore be true in every (commutative) ring. For more information about this technique, see
Knapp's Basic Algebra (Chapter V.2) or an old blog post of mine.
Let's first consider the case of $2 \times 2$ matrices for concreteness. We'll work in the quotient ring
$$
A =
\frac{\mathbb{Z}[a_{11}, a_{12}, a_{21}, a_{22}, b_{11}, b_{12}, b_{21}, b_{22}]}
{\left(
a_{11} b_{11} + a_{12} b_{21} - 1, \;\;
a_{11} b_{12} + a_{12} b_{22}, \;\;
a_{21} b_{11} + a_{22} b_{21}, \;\;
a_{21} b_{12} + a_{22} b_{22} - 1
\right)}
$$
Notice that the polynomials we're quotienting by exactly tell us that
$$
\begin{pmatrix} a_{11} &a_{12} \\ a_{21} &a_{22} \end{pmatrix}
\begin{pmatrix} b_{11} &b_{12} \\ b_{21} &b_{22} \end{pmatrix} =
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
$$
so that this is the free ring admitting the structure of interest.
By this, we mean that for any ring $R$, for any $2 \times 2$ matrices $M$, $N$ (with coefficients in $R$) satisfying $MN = I$, there is a unique homomorphism $A \to R$ sending the $a_{ij}$ and $b_{ij}$ to the entries of $M$ and $N$.
This is useful because homomorphisms preserve (atomic) truth. If some equation $x = y$ is true in a ring $R$, then for any homomorphism $\varphi : R \to S$ we must have $\varphi(x) = \varphi(y)$ in $S$! So if we can show that
$$
\begin{pmatrix} b_{11} &b_{12} \\ b_{21} &b_{22} \end{pmatrix}
\begin{pmatrix} a_{11} &a_{12} \\ a_{21} &a_{22} \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
$$
in $A$ (by which, of course, we mean the four polynomial equations abbreviated by this matrix multiplication) then those equations will be true in $R$ for any homomorphism $A \to R$. Now we use the fact that $A$ is free in the sense above to show the claim. Concretely, once we know the claim holds in $A$, then:
- Say $MN = I$, with entries in some ring $R$
- Then there is a (unique) ring hom $\varphi : A \to R$ sending the $a_{ij}$ and $b_{ij}$ to entries of $M$ and $N$
- But we know in $A$ that $\begin{pmatrix} b_{11} &b_{12} \\ b_{21} &b_{22} \end{pmatrix} \begin{pmatrix} a_{11} &a_{12} \\ a_{21} &a_{22} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
- So since homomorphisms preserve equations, we can hit this equation with $\varphi$ to see that $NM = I$, as desired.
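As a quick sanity check (not part of the argument itself), here is the conclusion at work in the concrete ring $\mathbb{Z}/10$, which is not a field. The matrices below are my own example: $N$ is the adjugate of $M$ reduced mod $10$, which guarantees $MN = I$ since $\det M = 1$.

```python
def matmul_mod(X, Y, m):
    """Multiply two square matrices with entries reduced mod m."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % m
             for j in range(n)] for i in range(n)]

m = 10                 # Z/10 is a commutative ring that is not a field
M = [[3, 1], [5, 2]]   # det M = 1, a unit mod 10
N = [[2, 9], [5, 3]]   # adjugate of M reduced mod 10, so MN = I

I2 = [[1, 0], [0, 1]]
print(matmul_mod(M, N, m) == I2)  # True: MN = I
print(matmul_mod(N, M, m) == I2)  # True: NM = I, as the argument predicts
```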
This is an extremely flexible way to solve problems, since it lets us reduce from a complicated setting (general rings $R$) to a simple setting (integer polynomials) where we might have extra tools at our disposal.
For instance, we can show the claim is true in $A$ by reducing to the case of fields! (Edit: There was a mistake in my original answer. Thanks to Qiaochu for recognizing it, and Daniel for suggesting a fix. See the comments.) Indeed, $A$ is an integral domain: since $MN = I$ forces $\det M$ to be a unit (because $\det M \cdot \det N = 1$) and forces $N = \det(M)^{-1} \operatorname{adj}(M)$, our $A$ is the localization of $\mathbb{Z}[a_{11}, a_{12}, a_{21}, a_{22}]$ at the determinant $a_{11} a_{22} - a_{21} a_{12}$. (Through this lens, $A$ is the universal ring equipped with an invertible $2 \times 2$ matrix.) As a localization of an integral domain, $A$ is itself an integral domain, and thus it embeds into a field (its field of fractions).
Of course, we know that the desired equation is true for fields, so our equation holds after applying the embedding. But embeddings reflect atomic truth: if the images of two elements are equal, then the elements themselves were already equal. So our equation must have been true in $A$ to start with! And once we know the claim for $A$, we know the claim for all rings $R$ by the argument above. Like magic!
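For those who like to double-check such things by machine: the equation in $A$ is exactly an ideal-membership statement, so (as a supplement, not part of the argument) we can verify it with a Groebner basis. Here is a sketch using sympy; the symbol names and helper `in_ideal` are my own.

```python
import sympy as sp

a = sp.symbols('a11 a12 a21 a22')
b = sp.symbols('b11 b12 b21 b22')
gens = a + b
M = sp.Matrix(2, 2, a)
N = sp.Matrix(2, 2, b)

# The four relations we quotient by: the entries of M*N - I.
relations = [sp.expand(p) for p in M * N - sp.eye(2)]
G = sp.groebner(relations, *gens, order='grevlex')

def in_ideal(p):
    # A polynomial lies in the ideal iff its remainder modulo a
    # Groebner basis (for the same monomial order) is zero.
    return sp.reduced(sp.expand(p), G.exprs, *gens, order='grevlex')[1] == 0

# The claim in A: every entry of N*M - I is in the ideal, i.e. NM = I.
print(all(in_ideal(p) for p in N * M - sp.eye(2)))           # True

# The localization claim: N is forced to be adj(M) / det(M).
print(all(in_ideal(p) for p in M.det() * N - M.adjugate()))  # True
```

The second check witnesses the description of $A$ above: the $b_{ij}$ are forced to be the entries of $\det(M)^{-1} \operatorname{adj}(M)$.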
Of course, there's nothing special about $2 \times 2$ matrices here. For $n \times n$ matrices, we run exactly the same argument, but with $n^2$-many $a$-variables, $n^2$-many $b$-variables, and the $n^2$ polynomial equations expressing $MN = I$. The resulting ring is again the localization of $\mathbb{Z}[a_{ij}]$ at the determinant, hence again an integral domain.
Since the claim is true for $n \times n$ matrices over a field, we'll be able to pull it back to the variable matrices in $A$, and then push forward to any $n \times n$ matrices $MN = I$ in some ring $R$.
I hope this helps ^_^