
For a Euclidean domain $R$, take $A, B \in M_n(R)$. Is it true that the matrix equality $AB=I$ implies $BA=I$?

I only know how to prove this when $R$ is a field, but I cannot find a counterexample when $R$ is a Euclidean domain, which is quite a strong condition on a ring.


Sorry, I decided to change the question to make $R$ stronger: a Euclidean domain. But please feel free to add comments on the result for general commutative rings (for example, if this is true for Euclidean domains, is it still true for general commutative rings?).

taylor
  • If $R$ is an integral domain (in particular a Euclidean domain), it can be embedded in a field... – Mor A. Sep 07 '22 at 22:20
  • @MorA. And why does the inverse matrix over the field still have entries in $R$? – taylor Sep 07 '22 at 22:21
  • Inverses are unique – Mor A. Sep 07 '22 at 22:22
  • The cofactor formula $A\operatorname{adj}(A)=\operatorname{adj}(A)A=\det(A)I$ and Binet $\det(AB)=\det(A)\det(B)$ hold for all commutative rings (it isn't hard, but it isn't self-evident), therefore $AB=I$ implies that $\det(A)\in R^*$ and therefore $B=\det(A)^{-1}\operatorname{adj}(A)AB=\det(A)^{-1}\operatorname{adj}(A)$. – Sassatelli Giulio Sep 07 '22 at 22:26
  • Thanks for all and sorry for this stupid question... – taylor Sep 07 '22 at 22:30
  • A related question which might be interesting to look at: https://math.stackexchange.com/questions/3554177/in-mathbbca-11-ldots-a-nn-b-11-ldots-b-nn-is-langle-ab – Daniel Schepler Sep 07 '22 at 22:40
  • @taylor: I don't think this is a stupid question at all, fwiw! If you aren't very comfortable with the fact that the determinant makes sense over any commutative ring, it's not at all obvious how to proceed. – Qiaochu Yuan Sep 07 '22 at 23:36

2 Answers


Sassatelli Giulio's nice argument in the comments shows that this holds over any commutative ring, which answers the question neatly. The rest of this answer is just a long comment.
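The two identities that argument relies on can be sanity-checked symbolically. Here is a small sketch (not part of the original thread) using sympy's symbolic $2 \times 2$ matrices; since the entries are free symbols, the checks are genuine polynomial identities, valid over any commutative ring:

```python
from sympy import Matrix, symbols, eye, zeros

a11, a12, a21, a22, b11, b12, b21, b22 = symbols('a11 a12 a21 a22 b11 b12 b21 b22')
A = Matrix([[a11, a12], [a21, a22]])
B = Matrix([[b11, b12], [b21, b22]])

# Cofactor formula: A adj(A) = adj(A) A = det(A) I
assert (A * A.adjugate() - A.det() * eye(2)).expand() == zeros(2, 2)
assert (A.adjugate() * A - A.det() * eye(2)).expand() == zeros(2, 2)

# Binet: det(AB) = det(A) det(B)
assert ((A * B).det() - A.det() * B.det()).expand() == 0
```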


Here is a simpler argument over an integral domain $D$ which avoids the determinant: if $AB = I$ where $A, B \in M_n(D)$ then $B = A^{-1}$ over the fraction field $F = \text{Frac}(D)$, so $BA = I$ in $M_n(F)$. But since $D$ embeds into $F$ this gives $BA = I$ in $M_n(D)$.
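As a concrete instance of this argument (the matrix below is just an illustrative choice): take an integer matrix $A$ with a one-sided integer inverse $B$, invert $A$ over $\mathbb{Q}$ with sympy, and observe that $B$ agrees with $A^{-1}$, so $BA = I$ already in $M_2(\mathbb{Z})$.

```python
from sympy import Matrix, eye

# Concrete A, B in M_2(Z) with AB = I (note det(A) = 1, a unit in Z)
A = Matrix([[2, 1], [1, 1]])
B = Matrix([[1, -1], [-1, 2]])
assert A * B == eye(2)

# Over Q = Frac(Z), two-sided inverses are unique, so B must be A^{-1}...
assert B == A.inv()
# ...hence BA = I holds in M_2(Q), and therefore already in M_2(Z)
assert B * A == eye(2)
```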

Staring at this argument a bit more we can generalize it as follows. Let $R$ be a commutative ring, let $F_P = \text{Frac}(R/P)$ where $P$ is a prime ideal, and consider the reduction $\bmod P$. Then $AB = I$ implies that $B \equiv A^{-1} \bmod P$ in the sense that the image of $B$ in $M_n(F_P)$ is the inverse of the image of $A$, hence that $BA \equiv I \bmod P$. Applying this argument to all prime ideals, we conclude that $BA \equiv I \bmod N$ where $N$ is the nilradical of $R$. So we get the result for any reduced ring.
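A brute-force check of the reduced-ring case (illustrative only; the ring and matrix size are chosen small enough to enumerate): $\mathbb{Z}/6$ is reduced but not a domain, since $2 \cdot 3 = 0$, and every pair of $2 \times 2$ matrices over it with $AB = I$ also satisfies $BA = I$.

```python
from itertools import product

MOD = 6  # Z/6Z: a reduced commutative ring with zero divisors (2 * 3 = 0)
mats = list(product(range(MOD), repeat=4))  # 2x2 matrices flattened as (a, b, c, d)

pairs = 0
for a, b, c, d in mats:
    for e, f, g, h in mats:
        # Does AB = I hold for A = [[a, b], [c, d]], B = [[e, f], [g, h]]?
        if ((a * e + b * g) % MOD == 1 and (a * f + b * h) % MOD == 0
                and (c * e + d * g) % MOD == 0 and (c * f + d * h) % MOD == 1):
            # Then BA = I as well, as the mod-P / reduced-ring argument predicts
            assert (e * a + f * c) % MOD == 1 and (e * b + f * d) % MOD == 0
            assert (g * a + h * c) % MOD == 0 and (g * b + h * d) % MOD == 1
            pairs += 1

# Exactly the invertible matrices occur, once each:
# |GL_2(Z/6)| = |GL_2(F_2)| * |GL_2(F_3)| = 6 * 48 = 288
assert pairs == 288
```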

To push past this we can argue as follows. Let $A$ and $B$ be universal; that is, let them have entries $a_{ij}, b_{ij}$ over the polynomial ring $\mathbb{Z}[a_{ij}, b_{ij}]$. The condition $AB = I$ is a collection of $n^2$ polynomial identities in these $2n^2$ variables; let $J$ be the ideal they generate. We want to know whether $BA \equiv I \bmod J$; this is equivalent to showing the desired result over any commutative ring (and if it's false over some commutative ring it's false over $\mathbb{Z}[a_{ij}, b_{ij}]/J$, which would be the universal counterexample), since this setup specializes to any corresponding setup over a commutative ring $R$ via a suitable homomorphism $\mathbb{Z}[a_{ij}, b_{ij}]/J \to R$.

To prove this it suffices to show that $\mathbb{Z}[a_{ij}, b_{ij}]/J$ is reduced, or equivalently that $J$ is radical, since then we can apply the previous argument. In fact $J$ is a prime ideal, but I don't know how to show this without using the determinant. Using the determinant, we can identify $\mathbb{Z}[a_{ij}, b_{ij}]/J$ with the localization $\mathbb{Z}[a_{ij}][\det(A)^{-1}]$, which is a localization of an integral domain and hence an integral domain; the $b_{ij}$ get expressed in terms of the $a_{ij}$ and $\det(A)^{-1}$ using Cramer's rule, which is the adjugate identity Sassatelli Giulio uses. It is actually possible to discover the determinant this way, and I think this is close to the historical pattern of discovery: you can try to express the $b_{ij}$ in terms of the $a_{ij}$ by inverting $A$ over $\mathbb{Q}(a_{ij})$ (e.g. using row reduction), and if you do, the determinant will appear in the denominators.
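The "discovering the determinant" step can be watched happening with sympy (a sketch for the $2 \times 2$ case): inverting a generic matrix over the fraction field $\mathbb{Q}(a_{ij})$ produces entries with $\det(A)$ in every denominator, i.e. Cramer's rule $A^{-1} = \operatorname{adj}(A)/\det(A)$.

```python
from sympy import Matrix, symbols, simplify, zeros

a11, a12, a21, a22 = symbols('a11 a12 a21 a22')
A = Matrix([[a11, a12], [a21, a22]])

Ainv = A.inv()   # inversion over the fraction field Q(a11, ..., a22)
det = A.det()    # a11*a22 - a12*a21, which shows up in every denominator

# Cramer's rule falls out: A^{-1} = adj(A) / det(A)
assert simplify(Ainv - A.adjugate() / det) == zeros(2, 2)
```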

This localization is "the ring of functions on the universal invertible matrix"; said another way, it's the ring of functions on the general linear group regarded as an affine group scheme.

Qiaochu Yuan

This is true for all commutative rings $R$, and we can prove it via permanence of identities. The idea is to prove it for matrices with variable entries, and then argue that it must then be true in every (commutative) ring. For more information about this technique see Knapp's Basic Algebra (Chapter V.2) or an old blog post of mine.

Let's first consider the case of $2 \times 2$ matrices for concreteness. We'll work in the quotient ring

$$ A = \frac{\mathbb{Z}[a_{11}, a_{12}, a_{21}, a_{22}, b_{11}, b_{12}, b_{21}, b_{22}]}{\left( a_{11} b_{11} + a_{12} b_{21} - 1, \quad a_{11} b_{12} + a_{12} b_{22}, \quad a_{21} b_{11} + a_{22} b_{21}, \quad a_{21} b_{12} + a_{22} b_{22} - 1 \right)} $$

Notice that the polynomials we're quotienting by exactly tell us that

$$ \begin{pmatrix} a_{11} &a_{12} \\ a_{21} &a_{22} \end{pmatrix} \begin{pmatrix} b_{11} &b_{12} \\ b_{21} &b_{22} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$

so that this is the free ring admitting the structure of interest.

By this, we mean that for any ring $R$, for any $2 \times 2$ matrices $M$, $N$ (with coefficients in $R$) satisfying $MN = I$, there is a unique homomorphism $A \to R$ sending the $a_{ij}$ and $b_{ij}$ to the entries of $M$ and $N$.

This is useful because homomorphisms preserve (atomic) truth. If some equation $x = y$ is true in a ring $R$, then for any homomorphism $\varphi : R \to S$ we must have $\varphi(x) = \varphi(y)$ in $S$! So if we can show that

$$ \begin{pmatrix} b_{11} &b_{12} \\ b_{21} &b_{22} \end{pmatrix} \begin{pmatrix} a_{11} &a_{12} \\ a_{21} &a_{22} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$

in $A$ (by which, of course, we mean the four polynomial equations abbreviated by this matrix multiplication) then those equations will be true in $R$ for any homomorphism $A \to R$. Now we use the fact that $A$ is free in the sense above to show the claim. Concretely, once we know the claim holds in $A$, then:

  1. Say $MN = I$, with entries in some ring $R$
  2. Then there is a (unique) ring hom $\varphi : A \to R$ sending the $a_{ij}$ and $b_{ij}$ to entries of $M$ and $N$
  3. But we know in $A$ that $ \begin{pmatrix} b_{11} &b_{12} \\ b_{21} &b_{22} \end{pmatrix} \begin{pmatrix} a_{11} &a_{12} \\ a_{21} &a_{22} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $
  4. So since homomorphisms preserve equations, we can hit this equation with $\varphi$ to see that $NM = I$, as desired.

This is an extremely flexible way to solve problems, since it lets us reduce from a complicated setting (general rings $R$) to a simple setting (integer polynomials) where we might have extra tools at our disposal.

For instance, we can show the claim is true in $A$ by reducing to the case of fields! (Edit: There was a mistake in my original answer. Thanks to Qiaochu for recognizing it, and Daniel for suggesting a fix. See the comments.) Indeed, $A$ is an integral domain: it is the localization of $\mathbb{Z}[a_{11}, a_{12}, a_{21}, a_{22}]$ at the determinant $a_{11} a_{22} - a_{12} a_{21}$ (through this lens, $A$ is the universal ring with an invertible $2 \times 2$ matrix), and thus it embeds into a field.

Of course, we know that the desired equation is true for fields, so that our equation in $A$ is true considered after the embedding. But embeddings reflect atomic truth, so that our equation must have been true in $A$ to start with! But once we know the claim for $A$, we know the claim for all rings $R$ by the argument above! Like magic!
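The claim in $A$ can also be verified directly by a Gröbner basis computation: each entry of $BA - I$ reduces to zero modulo a Gröbner basis of the ideal $J$ generated by the entries of $AB - I$. A sketch with sympy (note sympy computes over $\mathbb{Q}$ here, which exhibits membership in $J$ with rational coefficients; the integral statement is what the embedding argument above supplies):

```python
from sympy import Matrix, symbols, eye, groebner

avars = symbols('a11 a12 a21 a22')
bvars = symbols('b11 b12 b21 b22')
A = Matrix(2, 2, list(avars))
B = Matrix(2, 2, list(bvars))

# J is generated by the four entries of AB - I
J_gens = list(A * B - eye(2))
G = groebner(J_gens, *(avars + bvars), order='grevlex')

# Every entry of BA - I reduces to 0 mod a Groebner basis of J,
# i.e. BA = I holds in the quotient ring A (with Q-coefficients)
for p in (B * A - eye(2)):
    quotients, remainder = G.reduce(p)
    assert remainder == 0
```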


Of course, there's nothing special about $2 \times 2$ matrices here. For $n \times n$ matrices, we run exactly the same argument, but with $n^2$-many $a$-variables, $n^2$-many $b$-variables, and $n^2$ polynomial equations telling us how the matrix multiplication works. This is again the localization of $\mathbb{Z}[a_{ij}]$ at the determinant, which is an integral domain.

Since the claim is true for $n \times n$ matrices over a field, we'll be able to pull it back to the variable matrices in $A$, and then push forward to any $n \times n$ matrices $MN = I$ in some ring $R$.


I hope this helps ^_^

HallaSurvivor
  • @QiaochuYuan -- Fair point, but we don't even really need that. It's enough to know that the equation is true for any integers we evaluate our polynomials on (which is true since $\mathbb{Z}$ embeds into $\mathbb{Q}$) as that tells us our polynomials are equal directly. I'll edit my answer to reflect that in a minute – HallaSurvivor Sep 07 '22 at 23:08
  • (Deleted and reposted, thought I needed to correct an error) How do you know that $A$ is an integral domain in general? I agree that this is true but it's the crux of the argument and you assert it without proof. It is certainly not true that $A$ embeds into $\mathbb{Q}(a_{ij}, b_{ij})$ as you claim; you can show that it embeds into either $\mathbb{Q}(a_{ij})$ or $\mathbb{Q}(b_{ij})$ but this requires proof. – Qiaochu Yuan Sep 07 '22 at 23:08
  • In a question/answer that I linked to, I give an isomorphism of $A$ to $\mathbb{Z}[a_{11}, \ldots, a_{22}][(a_{11} a_{22} - a_{12} a_{21})^{-1}]$ - so as a localization of an integral domain by a set not containing zero, it's again an integral domain. – Daniel Schepler Sep 07 '22 at 23:10
  • @Halla: I don't see how you can get the result this way. It's one thing to use arguments like this to prove universal identities, but what we want to prove here is a universal implication from one identity to another identity; that requires knowing that some ideal contains some other ideal, not just that some point evaluations imply some other point evaluations, which at best shows that the radical of some ideal contains the radical of some other ideal. – Qiaochu Yuan Sep 07 '22 at 23:11
  • @Daniel: yes, I know this and it's contained in my answer. But it is nowhere contained in Halla's answer and it is the crux of the entire argument! – Qiaochu Yuan Sep 07 '22 at 23:12
  • @QiaochuYuan -- hm... as I'm trying to write up a fix, I see the problem is a bit more subtle than I thought. Thanks for catching it – HallaSurvivor Sep 07 '22 at 23:24
  • See also here and its links for more examples of proofs using the universality of polynomial identities. – Bill Dubuque Dec 01 '22 at 09:08