
Nowadays I am learning about matrices and determinants, and I am confused about one property of determinants: interchanging two rows/columns of a determinant changes the sign of the determinant.

My question is: what is the logic (reason) that a negative sign is placed outside the determinant when interchanging rows/columns, but no sign is placed outside in Gaussian elimination (or, more specifically, on a matrix)?

I don't understand the logic behind this. I Googled it a lot but found no answer. Can anybody please explain why we do this?

baxx

2 Answers


This is simple. Note that

$$\det(PA) = \det(P)\det(A).$$

If you want $P$ to swap rows $k$ and $l$, then

$$P = \begin{bmatrix} 1 \\ & \ddots \\ & & 1 \\ & & & 0 & 0 & \dots & 0 & 1 \\ & & & 0 & 1 & \dots & 0 & 0 \\ & & & \vdots & \vdots & \ddots & \vdots & \vdots \\ & & & 0 & 0 & \dots & 1 & 0 \\ & & & 1 & 0 & \dots & 0 & 0 \\ & & & & & & & & 1 \\ & & & & & & & & & \ddots \\ & & & & & & & & & & 1 \end{bmatrix}.$$

In other words, $P$ is constructed by swapping rows (or, equivalently, columns) $k$ and $l$ of the identity matrix.

Now, check that $\det(P) = -1$, and you have what you asked about.
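To see this numerically, here is a small sketch (an illustration, not part of the original answer) that builds the row-swap matrix $P$, computes determinants by plain Laplace expansion, and checks both $\det(P) = -1$ and $\det(PA) = -\det(A)$:

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def swap_matrix(n, k, l):
    """Identity matrix of size n with rows k and l interchanged."""
    p = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    p[k], p[l] = p[l], p[k]
    return p

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

P = swap_matrix(3, 0, 2)                  # swap rows 1 and 3 (0-indexed: 0 and 2)
A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
print(det(P))                             # -1
print(det(A), det(matmul(P, A)))          # 18 -18
```

The determinant function here is exponential-time and only meant for tiny examples; it exists purely to check the identity by hand-sized computation.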

Vedran Šego
  • @Vedran You are right! Now I see why we place a $-1$ when interchanging rows/columns of a matrix. But why don't we do this in Gaussian elimination? – Khizar Iqbal Oct 08 '13 at 18:14
  • I don't understand your question. We don't compute the determinant when doing the Gaussian elimination. Can you, please, explain what exactly is bothering you? – Vedran Šego Oct 08 '13 at 18:16
  • My question is that we place a negative sign when we interchange rows of a determinant, but why do we not place a negative sign when we interchange rows in Gaussian elimination, i.e., while performing row operations? I hope my question is clear now. Should I edit it or not? – Khizar Iqbal Oct 08 '13 at 18:23
  • When you do the Gaussian eliminations, you may, if you wish, change the sign of a row; it is equivalent to multiplying a corresponding linear equation with $-1$. Generally, elementary operations by which you do the Gaussian eliminations may change the determinant (but they never turn non-zero determinant to zero). So, when you just swap two rows, your determinant will change its sign, but this is not a problem, since we do not require a constant determinant when solving systems of linear equations. – Vedran Šego Oct 08 '13 at 18:35
  • I hope this is now clearer. If not, you may consider opening a new question. You can put a link to this one as a motivation, and then explain thoroughly what is confusing you. – Vedran Šego Oct 08 '13 at 18:35
  • No, there is no need to start a new question, because I got the answer. Thanks a lot, Mr. Vedran, for your reply! – Khizar Iqbal Oct 08 '13 at 19:02
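The point made in the comments above, that a row interchange during Gaussian elimination flips the determinant's sign, is exactly why determinant computation via elimination tracks the number of swaps. A minimal sketch (an illustration, not from the thread):

```python
def det_by_elimination(m):
    """Determinant via Gaussian elimination, flipping sign on each row swap."""
    a = [row[:] for row in m]   # work on a copy
    n = len(a)
    sign = 1
    for col in range(n):
        # find a nonzero pivot in this column, at or below the diagonal
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0            # whole column is zero: determinant is zero
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign        # a row interchange changes the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            # adding a multiple of one row to another leaves the determinant unchanged
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    result = sign
    for i in range(n):
        result *= a[i][i]       # det of a triangular matrix: product of the diagonal
    return result

print(det_by_elimination([[0, 1], [1, 0]]))  # -1.0 (one swap occurred)
```

When merely *solving* a linear system, the swap is harmless because it only reorders the equations; the sign bookkeeping is needed only if the determinant itself is wanted.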

I will assume that we already know the following results:

  • If two rows of a square matrix are the same, then the determinant is zero. (This can be shown by induction using Laplace's expansion. See also ProofWiki.)
  • If we have three square matrices $A$, $B$, $C$ which differ only in one row, and that row of $C$ is the sum of the corresponding rows of $A$ and $B$, then $|C|=|A|+|B|$. (This can be shown using the Leibniz formula, which many texts take as the definition of a determinant. It is also a consequence of the multilinearity of the determinant. See also ProofWiki.)

Let us denote by $\vec\alpha_1,\dots,\vec\alpha_n$ the rows of the matrix $A$.

Then we have $$ \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_i+\vec\alpha_j \\ \vec\alpha_i+\vec\alpha_j \\ \vec\alpha_n \end{vmatrix} =0 $$ since this matrix has repeated rows.

At the same time we have $$ 0=\begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_i+\vec\alpha_j \\ \vec\alpha_i+\vec\alpha_j \\ \vec\alpha_n \end{vmatrix}= \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_i+\vec\alpha_j \\ \vec\alpha_i \\ \vec\alpha_n \end{vmatrix}+ \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_i+\vec\alpha_j \\ \vec\alpha_j \\ \vec\alpha_n \end{vmatrix}= \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_i\\ \vec\alpha_i \\ \vec\alpha_n \end{vmatrix}+ \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_j \\ \vec\alpha_i \\ \vec\alpha_n \end{vmatrix}+ \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_i \\ \vec\alpha_j \\ \vec\alpha_n \end{vmatrix}+ \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_j \\ \vec\alpha_j \\ \vec\alpha_n \end{vmatrix}= \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_j \\ \vec\alpha_i \\ \vec\alpha_n \end{vmatrix}+ \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_i \\ \vec\alpha_j \\ \vec\alpha_n \end{vmatrix}$$ which implies $$\begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_j \\ \vec\alpha_i \\ \vec\alpha_n \end{vmatrix}=- \begin{vmatrix} \vec\alpha_1 \\ \vec\alpha_i \\ \vec\alpha_j \\ \vec\alpha_n \end{vmatrix}.$$
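For concreteness, here is the $2\times 2$ case of the same identity worked by hand (a small illustration, not part of the original answer):

$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc, \qquad \begin{vmatrix} c & d \\ a & b \end{vmatrix} = cb - da = -(ad - bc).$$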

  • How can you assume that $A$ has two equal rows? Does that happen for every matrix? – egarro Mar 21 '15 at 20:11
    @egarro I did not write that $A$ has two rows which are the same. But the matrix $\begin{pmatrix} \vec\alpha_1 \\ \vec\alpha_i+\vec\alpha_j \\ \vec\alpha_i+\vec\alpha_j \\ \vec\alpha_n \end{pmatrix}$, which is created from $A$ in such a way that both the $i$-th and the $j$-th rows are replaced by their sum, certainly has two equal rows. (Namely the $i$-th and the $j$-th row.) – Martin Sleziak Mar 22 '15 at 07:53
  • So can we call this matrix created from $A$ something like $A'$? – egarro Mar 22 '15 at 21:37
  • I was going through Artin's Algebra. In the first chapter while he discusses determinants he goes over this and gives quite an unclear and contrived proof that left me bereft. This is the clearest I've seen. Thank you very much for your contributions on this site. – smokeypeat Mar 03 '17 at 13:32
  • Does the same proof work for matrix columns? – Krish Sep 24 '20 at 08:34
    @Krish Yes, the same thing is true for interchanging columns. We know that $|A|=|A^T|$, and interchanging two columns corresponds to interchanging two rows of the transpose. – Martin Sleziak Sep 24 '20 at 08:57