
My question takes a little bit of preamble: it concerns a well-known and solved problem, but I am looking for a solution with a particularly nice property.

$\newcommand{\matrix}[4]{\left( \begin{array}{cc} #1 & #2 \\ #3 & #4 \end{array} \right)} \DeclareMathOperator{\lcm}{lcm}$In using Gaussian elimination to put a matrix into Smith normal form over $\mathbb{Z}$ (or, more generally, some PID), the last step is to make sure that each diagonal entry divides the next. This step reduces to the following problem (with all matrices over $\mathbb{Z}$):

  • given a diagonal matrix $M = \left( \begin{array}{cc} a & 0 \\ 0 & b \end{array} \right)$, find invertible matrices $L$, $R$ such that $$ M = L \left( \begin{array}{cc} \gcd(a,b) & 0 \\ 0 & \lcm(a,b) \end{array} \right) R $$

This is of course well-known and not hard to do. For instance, using the Euclidean algorithm to find $(x,y)$ such that $ax + by = d = \gcd(a,b)$, one can define

$$ L = \left( \begin{array}{cc} a/d & -y \\ b/d & x \end{array} \right), \quad R = \left( \begin{array}{cc} 1-yb/d & yb/d \\ -1 & 1 \end{array} \right)$$ or alternatively $$ L = \left( \begin{array}{cc} a/d & -1 \\ 1-xa/d & x \end{array} \right), \quad R = \left( \begin{array}{cc} 1-yb/d & b/d \\ -y & 1 \end{array} \right).$$
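For concreteness, here is a quick Python sanity check of the first pair; the helper functions (`extended_gcd`, `mul`) and the sample values are just for illustration, not part of the construction itself.

```python
# Quick numerical check of the first (L, R) pair above.

def extended_gcd(a, b):
    """Return (d, x, y) with a*x + b*y == d == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    d, x1, y1 = extended_gcd(b, a % b)
    return d, y1, x1 - (a // b) * y1

def mul(A, B):
    """Product of two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 12, 18
d, x, y = extended_gcd(a, b)
m = a * b // d                                 # lcm(a, b)

L = [[a // d, -y], [b // d, x]]
R = [[1 - (b // d) * y, (b // d) * y], [-1, 1]]

# M = L * diag(gcd, lcm) * R
assert mul(mul(L, [[d, 0], [0, m]]), R) == [[a, 0], [0, b]]
```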

However, in the special case where $a$ divides $b$, we know that $M$ is already as desired and so we could simply take $L = R = I$. My question is: can we find a general algebraic solution like the ones above (i.e. algebraic definitions of $L$, $R$ in terms of the integers $(x,y,a/d,b/d)$), but with the additional property that when $a$ divides $b$ (and so $x=1$, $y=0$, $a/d=1$), the solution yields $L = R = I$? Roughly: can we find an algebraic solution which only does something non-trivial if it needs to?

1 Answer


Yes. We just need to come up with a sequence of integer row and column operations that reduces the given matrix to Smith normal form and which is trivial in the special case where $a$ divides $b$. Here is one such sequence:

  1. $\begin{bmatrix}a & 0 \\ 0 & b\end{bmatrix} = E_1 \begin{bmatrix}a & 0 \\ a & b\end{bmatrix}$ for some elementary matrix $E_1$.

  2. $\begin{bmatrix}a & 0 \\ a & b\end{bmatrix} = \begin{bmatrix}r & s \\ d & b\end{bmatrix} M_2$ for some matrix $M_2$.
    (Note that $r=d=a$, $s=0$, and $M_2$ is the identity matrix in the special case.)

  3. $\begin{bmatrix}r & s \\ d & b\end{bmatrix} = E_3\begin{bmatrix}d & t \\ d & b\end{bmatrix}$ for some elementary matrix $E_3$.
    (Note that this row operation is trivial in the special case, with $t=0$.)

  4. $\begin{bmatrix}d & t \\ d & b\end{bmatrix} = E_4\begin{bmatrix}d & t \\ 0 & m\end{bmatrix}$ for some elementary matrix $E_4$.
    (In the special case, $m=b$, and this row operation is the inverse of the row operation in the first step.)

  5. $\begin{bmatrix}d & t \\ 0 & m\end{bmatrix} = \begin{bmatrix}d & 0 \\ 0 & m\end{bmatrix}E_5$ for some elementary matrix $E_5$.
    (Note that this column operation is trivial in the special case, since $t=0$.)

Computing gives $$ E_1 = \begin{bmatrix}1 & 0 \\ -1 & 1\end{bmatrix},\qquad M_2 = \begin{bmatrix}(a+by)/d & (b-bx)/d \\ -y & x\end{bmatrix},\qquad E_3 = \begin{bmatrix}1 & ax/d-1 \\ 0 & 1\end{bmatrix}, $$ $$ E_4 = \begin{bmatrix}1 & 0 \\ 1 & 1\end{bmatrix},\qquad E_5 = \begin{bmatrix}1 & b(d-a)/d^2 \\ 0 & 1\end{bmatrix}. $$ Multiplying $L=E_1E_3E_4$ and $R=E_5M_2$ gives us $$ L \;=\; \begin{bmatrix}ax/d & ax/d-1 \\ 1-ax/d & 2-ax/d\end{bmatrix}, \qquad R \;=\; \begin{bmatrix}a(d+by)/d^2 & b(d-ax)/d^2 \\ -y & x\end{bmatrix}. $$ In the case where $x=1$, $y=0$, and $d=a$, both of these matrices are the identity matrix.
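As a quick numerical check of these closed forms, here is a small Python sketch (the helper names `extended_gcd`, `mul`, `smith_factors` and the sample values are just for illustration). It verifies the factorization for a few pairs and that the special case $a \mid b$ really yields identity matrices. All divisions below are exact because $d = \gcd(a,b)$ divides both $a$ and $b$.

```python
# Numerical check of the closed forms for L and R above.

def extended_gcd(a, b):
    """Return (d, x, y) with a*x + b*y == d == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    d, x1, y1 = extended_gcd(b, a % b)
    return d, y1, x1 - (a // b) * y1

def mul(*Ms):
    """Product of 2x2 integer matrices, left to right."""
    P = [[1, 0], [0, 1]]
    for M in Ms:
        P = [[sum(P[i][k] * M[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return P

def smith_factors(a, b):
    """Return (d, m, L, R) with diag(a, b) = L * diag(d, m) * R."""
    d, x, y = extended_gcd(a, b)
    ad, bd = a // d, b // d                       # a/d and b/d are integers
    L = [[ad * x, ad * x - 1],
         [1 - ad * x, 2 - ad * x]]
    R = [[ad * (1 + bd * y), bd * (1 - ad * x)],  # = a(d+by)/d^2, b(d-ax)/d^2
         [-y, x]]
    return d, a * b // d, L, R

I = [[1, 0], [0, 1]]
for a, b in [(12, 18), (7, 5), (4, 20)]:
    d, m, L, R = smith_factors(a, b)
    assert mul(L, [[d, 0], [0, m]], R) == [[a, 0], [0, b]]
    if b % a == 0:
        # Special case a | b: this extended_gcd gives x = 1, y = 0, d = a,
        # so both factors collapse to the identity.
        assert L == I and R == I
```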

Jim Belk