
In a left-justified Pascal's triangle, take any square matrix whose left column lies along the left edge of the triangle. An example is shown in red below.

[Figure: left-justified Pascal's triangle with a square matrix along its left edge highlighted in red; the highlighted example is the $4\times4$ block with rows $(1,6,15,20)$, $(1,7,21,35)$, $(1,8,28,56)$, $(1,9,36,84)$.]

It seems that the determinant of such a matrix is always $1$.
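(A quick numerical check of this observation, as a sketch in Python with SymPy; the helper `pascal_block` and its parametrization by a starting row `r` are just my encoding of "a square matrix bordering the left edge", not anything from the question itself.)

```python
from sympy import Matrix, binomial

def pascal_block(r, n):
    """n x n block bordering the left edge of the left-justified
    triangle: rows r, r+1, ..., r+n-1, with entry (i, j) = C(r+i, j)."""
    return Matrix(n, n, lambda i, j: binomial(r + i, j))

# The determinant comes out to 1 for every size and starting row tried.
for n in range(1, 7):
    for r in range(0, 12):
        assert pascal_block(r, n).det() == 1
```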

Is there an intuitive explanation why the determinant of such a matrix is always $1$?

(Here is an example of an intuitive explanation: a scalar triple product is $0$ if the vectors are dependent, because the scalar triple product is the volume of the parallelepiped formed by the vectors.)

I know that the determinant of a matrix is the (signed) volume of the image of the unit cube under the matrix. So if the determinant is $1$, the volume is unchanged. But I don't see why a matrix from Pascal's triangle (as described above) preserves volume.

I found some articles about Pascal's triangle and determinants, but the matrices there were different from mine; they were symmetric or lower/upper triangular, so it's obvious that their determinants equal $1$.

Context: Recently I've been wondering a lot about Pascal's triangle, for example, does it contain three consecutive integers? Can it be split?

Dan

3 Answers


By definition, on the lattice $\mathbb Z_+^2$ every entry satisfies $a_{i,k}=a_{i-1,k-1}+a_{i-1,k}$, with $a_{i,k}=0$ for $k>i$ and $a_{1,1}=1$.

It follows that, by repeatedly subtracting each row from the one below it (elementary row operations, which do not change the determinant), you may transform your square matrix into an upper triangular matrix with all $1$s on the diagonal, whose determinant is $1$.
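Here is a sketch of that reduction in Python with SymPy, assuming the $4\times4$ block starting at row $6$ (the red example from the question):

```python
from sympy import Matrix, binomial

M = Matrix(4, 4, lambda i, j: binomial(6 + i, j))  # the red 4x4 block

# Repeated bottom-up row subtractions; each is an elementary row
# operation and therefore leaves the determinant unchanged.
n = M.shape[0]
for p in range(1, n):                    # after pass p, column p-1 is cleared
    for i in range(n - 1, p - 1, -1):
        M[i, :] = M[i, :] - M[i - 1, :]

print(M)        # upper triangular with 1s on the diagonal
print(M.det())  # 1
```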

Roland F

Here is another point of view, which I will illustrate with two examples:

Case $3 \times 3$

$$\left(\begin{array}{rrr}1&7&21\\1&8&28\\1&9&36\end{array}\right)= \left(\begin{array}{rrr}0&1&0\\0&0&1\\1&-3&3\end{array}\right) \left(\begin{array}{rrr}1&6&15\\1&7&21\\1&8&28\end{array}\right)$$

More generally, denoting by $R_k$ the first $3$ entries of the $k$th row of Pascal's triangle, we have:

$$\underbrace{\left(\begin{array}{r}R_{k+1}\\R_{k+2}\\R_{k+3}\end{array}\right)}_{M_{k+1}}= \underbrace{\left(\begin{array}{rrr}0&1&0\\0&0&1\\1&-3&3\end{array}\right)}_{L_3} \underbrace{\left(\begin{array}{r}R_{k}\\R_{k+1}\\R_{k+2}\end{array}\right)}_{M_{k}}$$

which is valid for any $k>1$ with the same matrix $L_3$, in which you may have recognized the coefficients of the $3$rd row of Pascal's triangle (minus the first one): $1,3,3,1$ with alternating signs.

Taking the determinant on both sides, as the determinant of $L_3$ is $1$, we get:

$$\det(M_{k+1})=\det(M_{k})$$

therefore the result follows by an immediate induction, knowing that the initial determinant is:

$$\det M_2 = \det \pmatrix{1&2&1\\1&3&3\\1&4&6} = 1$$
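(A quick machine check of this recursion and of the base case, in Python with SymPy; here $M_k$ is built from the first $3$ entries of rows $k$, $k+1$, $k+2$ of the triangle, matching the notation above.)

```python
from sympy import Matrix, binomial

def M(k):  # first 3 entries of rows k, k+1, k+2 of the triangle
    return Matrix(3, 3, lambda i, j: binomial(k + i, j))

L3 = Matrix([[0, 1, 0],
             [0, 0, 1],
             [1, -3, 3]])

for k in range(2, 12):
    assert M(k + 1) == L3 * M(k)            # the recursion M_{k+1} = L_3 M_k
assert L3.det() == 1 and M(2).det() == 1    # determinant-preserving, base case
```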

Case $4 \times 4$

$$\left(\begin{array}{rrrr}1&6&15&20\\1&7&21&35\\1&8&28&56\\1&9&36&84\end{array}\right)= \left(\begin{array}{rrrr}0&1&0&0\\0&0&1&0\\0&0&0&1\\-1&4&-6&4\end{array}\right) \left(\begin{array}{rrrr}1&5&10&10\\1&6&15&20\\1&7&21&35\\1&8&28&56\end{array}\right)$$

which can be generalized in the same way as in the first case; for any $k>2$:

$$\underbrace{\left(\begin{array}{r}R_{k+1}\\R_{k+2}\\R_{k+3}\\R_{k+4}\end{array}\right)}_{M_{k+1}}= \underbrace{\left(\begin{array}{rrrr}0&1&0&0\\0&0&1&0\\0&0&0&1\\-1&4&-6&4\end{array}\right)}_{L_4} \underbrace{\left(\begin{array}{r}R_{k}\\R_{k+1}\\R_{k+2}\\R_{k+3}\end{array}\right)}_{M_{k}}$$

Here, the coefficients $1,4,6,4,(1)$ with alternating signs are the elements of the $4$th row of Pascal's triangle.
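The same check goes through for the $4\times4$ case, and indeed for any size $n$ when the bottom row of $L_n$ holds the signed coefficients $(-1)^{n-1-i}\binom{n}{i}$. A sketch in Python with SymPy (the constructors `M` and `L` are my own naming):

```python
from sympy import Matrix, binomial

def M(k, n):  # first n entries of rows k, ..., k+n-1 of the triangle
    return Matrix(n, n, lambda i, j: binomial(k + i, j))

def L(n):     # rows shift up; last row holds signed binomial coefficients
    rows = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n - 1)]
    rows.append([(-1) ** (n - 1 - i) * binomial(n, i) for i in range(n)])
    return Matrix(rows)

for n in (3, 4, 5):
    assert L(n).det() == 1
    for k in range(3, 10):
        assert M(k + 1, n) == L(n) * M(k, n)   # M_{k+1} = L_n M_k
```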

Etc.

Of course, a rigorous treatment is possible if needed, but I think I have conveyed the main ideas with these two examples.

Remarks:

  1. The underlying reason for the alternating-sign binomial coefficients is that the inverse of any Pascal matrix reproduces its entries with alternating signs (a numerical check follows these remarks); for instance:

$$P_4=\pmatrix{ 1 &0 &0 &0 & 0\\ 1&1& 0& 0 & 0\\ 1 &2 & 1& 0 & 0\\ 1 & 3 &3 & 1& 0\\ 1 &4 & 6 &4 & 1}, \qquad P_4^{-1}=\left(\begin{array}{rrrrr}1 &0 &0 &0 & 0\\ -1&1& 0& 0 & 0\\ 1 &-2 & 1& 0 & 0\\ -1 & 3 &-3 & 1& 0\\ 1 &-4 & 6 &-4 & 1\end{array}\right)$$

  2. Matrices like the $L_k$ above are often encountered in linear algebra, in particular as companion matrices or, more generally, as "Leslie matrices".
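As a sanity check of remark 1, SymPy confirms that inverting the lower triangular Pascal matrix simply flips signs in a checkerboard pattern (a minimal sketch; `P4` is the $5\times5$ matrix shown above):

```python
from sympy import Matrix, binomial

P4 = Matrix(5, 5, lambda i, j: binomial(i, j))                    # rows 0..4
Q4 = Matrix(5, 5, lambda i, j: (-1) ** (i + j) * binomial(i, j))  # signed copy
assert P4.inv() == Q4
```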
Jean Marie

This is NOT an intuitive explanation, but it comes from the relations between binomial coefficients.

You can do it by induction. The case of matrices of size $2$ is obvious. If your matrix $M$ has size $k$, let $L_1,\ldots,L_k$ be its rows, and successively perform the row operations $L_k\leftarrow L_k-L_{k-1}, L_{k-1}\leftarrow L_{k-1}-L_{k-2},\ldots,L_2\leftarrow L_2-L_1$.

Now expand the determinant with respect to the first column. The usual relations between binomial coefficients show that you get the determinant of the submatrix of $M$ obtained by deleting the last row and the last column. Now apply your induction hypothesis.

For example, if you take your matrix in red, performing the row operations above yields the matrix $\begin{pmatrix}1 & 6 & 15& 20 \cr 0 &1 & 6 & 15 \cr 0 & 1 & 7 & 21 \cr 0 & 1 & 8 & 28 \end{pmatrix}$.

So when you expand the determinant, you get the determinant of $\begin{pmatrix}1 & 6 & 15\cr 1 & 7 & 21\cr 1 & 8 & 28\end{pmatrix}$, to which the induction hypothesis applies.
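The whole induction can be replayed mechanically; here is a sketch in Python with SymPy, taking the red $4\times4$ block from the question (rows $6$ through $9$ of the triangle) as the starting point:

```python
from sympy import Matrix, binomial

M = Matrix(4, 4, lambda i, j: binomial(6 + i, j))  # the red matrix

while M.shape[0] > 1:
    n = M.shape[0]
    # the row operations L_k <- L_k - L_{k-1}, ..., L_2 <- L_2 - L_1
    for i in range(n - 1, 0, -1):
        M[i, :] = M[i, :] - M[i - 1, :]
    # expanding along the first column (now (1, 0, ..., 0)^T) reduces the
    # determinant to that of the minor, which is the previous block with
    # its last row and last column deleted -- again a Pascal block
    M = M[1:, 1:]

assert M == Matrix([[1]])  # base case: the determinant is 1
```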

GreginGre