
Could anyone please help clarify my confusion here:

I've read that a way of determining if a given matrix is positive definite is:

1) A matrix is positive definite if it’s symmetric and all its pivots are positive. Just perform elimination and examine the diagonal terms.

So taking as an example the matrix:

{1, 2}

{2, 1}

If we perform elimination (subtract 2 × row 1 from row 2) we get

{1, 2}

{0, -3}

The pivots are 1 and −3. In particular, one of the pivots is −3, and so the matrix is not positive definite.
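This pivot test is easy to run mechanically. A minimal sketch in Python with NumPy (the helper name `pivots_no_swaps` is mine, just for illustration):

```python
import numpy as np

def pivots_no_swaps(A):
    """Gaussian elimination with no row exchanges; returns the diagonal pivots.

    Raises if a zero pivot appears (only then would a swap be needed)."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    for k in range(n):
        if U[k, k] == 0:
            raise ValueError("zero pivot encountered; a row swap would be required")
        for i in range(k + 1, n):
            # subtract a multiple of the pivot row to zero out entry (i, k)
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return np.diag(U)

print(pivots_no_swaps([[1, 2], [2, 1]]))  # pivots 1 and -3: a negative pivot, so not positive definite
```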

But now take the following matrix:

{ 2, -1, 0 }

{ -1, 2, -1 }

{ 0, -1, 2 }

One can use elementary row operations to reduce this to the upper triangular form:

{2, -1, 0}

{0, -1, 2}

{0, 0, 2}

[ The operations used were:

1) Swap rows 2 and 3

2) Add 0.5 of row one to row three

3) Add 1.5 of row two to row three ]

And the minus one on the diagonal would lead one to think that the original matrix is not positive definite, but it is.
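Indeed, if this same matrix is reduced using only "add a multiple of one row to another" (no swaps), the pivots all come out positive. A quick check of that, sketched in Python with NumPy:

```python
import numpy as np

A = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])

# Eliminate below each pivot without any row swaps.
U = A.copy()
n = U.shape[0]
for k in range(n):
    for i in range(k + 1, n):
        U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]

print(np.diag(U))  # pivots 2, 3/2, 4/3 -- all positive, so A is positive definite
```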

So it is not enough just to reduce to upper triangular form? I read on Wikipedia that when reducing one should take "care to preserve the sign of its determinant".

Is this what I have failed to do in my reduction?

Thanks for any help.

gnitsuk
  • The step "swap rows 2 and 3" does not preserve the sign of the determinant. – Ben Grossmann May 06 '20 at 16:32
  • Could you please provide the link for the wikipedia page that you're talking about? – Ben Grossmann May 06 '20 at 16:32
  • https://en.wikipedia.org/wiki/Definiteness_of_a_matrix I see, so I must reduce to upper triangular without swapping an odd number of times? – gnitsuk May 06 '20 at 17:12
  • @gn Great, thanks. You might also be interested in the Cholesky decomposition. This method of checking for positivity is equivalent to attempting to find such a decomposition. – Ben Grossmann May 06 '20 at 17:16
  • $$\left( \begin{array}{rr} 1 & 0 \\ 2 & 1 \end{array} \right) \left( \begin{array}{rr} 1 & 0 \\ 0 & -3 \end{array} \right) \left( \begin{array}{rr} 1 & 2 \\ 0 & 1 \end{array} \right) = \left( \begin{array}{rr} 1 & 2 \\ 2 & 1 \end{array} \right)$$

    indefinite

    – Will Jagy May 06 '20 at 18:00
  • $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ -\frac{1}{2} & 1 & 0 \\ 0 & -\frac{2}{3} & 1 \end{array} \right) \left( \begin{array}{rrr} 2 & 0 & 0 \\ 0 & \frac{3}{2} & 0 \\ 0 & 0 & \frac{4}{3} \end{array} \right) \left( \begin{array}{rrr} 1 & -\frac{1}{2} & 0 \\ 0 & 1 & -\frac{2}{3} \\ 0 & 0 & 1 \end{array} \right) = \left( \begin{array}{rrr} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{array} \right)$$ positive definite

    – Will Jagy May 06 '20 at 18:01
  • see https://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr – Will Jagy May 06 '20 at 18:03
  • the relevant theorem is Sylvester's Law of Inertia. – Will Jagy May 06 '20 at 18:21
  • Thanks for all the replies. I'm familiar with the Cholesky decomposition; it is that topic which has led me to my query here. So, if a general matrix is positive definite, can one always reduce it using elementary row operations that preserve the sign of the determinant? So, in my 3x3 example above, could I have used such operations to obtain an upper triangular form that preserved the sign of the determinant? – gnitsuk May 06 '20 at 19:34
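As noted in the comments above, the pivot test is equivalent to attempting a Cholesky factorization, which exists exactly when a symmetric matrix is positive definite. A sketch of that check in Python with NumPy, where `np.linalg.cholesky` raises `LinAlgError` on a non-positive-definite input:

```python
import numpy as np

def is_positive_definite(A):
    """Test a symmetric matrix for positive definiteness by attempting Cholesky."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])))  # True
print(is_positive_definite(np.array([[1., 2.], [2., 1.]])))  # False
```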
