2

Let $$A = \left(\begin{array}{cc} 2&3 \\ 3&4 \end{array}\right) \in M_2(\mathbb{C})$$

Find $P$ such that $P^TAP = D$ where $D$ is a diagonal matrix.

How can I find $P$? I tried Gaussian elimination, but it does not work:$$A = \left(\begin{array}{cc|cc} 2&3&1&0\\ 3&4&0&1 \end{array}\right) \sim \left(\begin{array}{cc|cc} 2&0&-8&6\\ 0&-1/2&-3/2&1 \end{array}\right)$$

What am I doing wrong? Steps would be much appreciated.

John Keeper
  • 1,281

5 Answers

3

You need to perform simultaneous row and column operations on the left-hand side while performing only the column operations on the right-hand side. Then, when the left side becomes diagonal, the right side will be your $P$. In your case,

$$ \left(\begin{array}{cc|cc} 2&3&1&0\\ 3&4&0&1 \end{array}\right) \xrightarrow[C_2 = C_2 - \frac{3}{2}C_1]{R_2 = R_2 - \frac{3}{2}R_1} \left(\begin{array}{cc|cc} 2&0&1& -\frac{3}{2}\\ 0& -\frac{1}{2}&0&1 \end{array}\right) $$

and indeed

$$ \begin{pmatrix} 1 & 0 \\ -\frac{3}{2} & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 1 & -\frac{3}{2} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & -\frac{1}{2} \end{pmatrix}. $$
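For readers who want to automate the same procedure, here is a minimal numpy sketch (the helper name `congruence_diagonalize` is just illustrative, and it assumes no zero pivot is ever encountered, which is the case for this example):

```python
import numpy as np

def congruence_diagonalize(A):
    """Reduce a symmetric matrix by congruence: returns (D, P) with P.T @ A @ P = D."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]        # assumes a nonzero pivot A[i, i]
            A[j, :] -= m * A[i, :]       # row operation R_j -> R_j - m R_i
            A[:, j] -= m * A[:, i]       # matching column operation C_j -> C_j - m C_i
            P[:, j] -= m * P[:, i]       # record only the column operation
    return A, P

A = np.array([[2.0, 3.0], [3.0, 4.0]])
D, P = congruence_diagonalize(A)
print(D)             # [[ 2.   0. ] [ 0.  -0.5]]
print(P)             # [[ 1.  -1.5] [ 0.   1. ]]
print(P.T @ A @ P)   # equals D
```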

levap
  • 65,634
  • I'm also not sure what you mean by "Gauss" because the result of your "Gauss" is not the result of the standard Gauss elimination procedure. – levap Jun 15 '17 at 22:41
  • Your picture seems to be one of the figures from Donnie Darko, a movie (directed?) by Drew Barrymore. Your method of writing this is exactly how I was shown at http://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr Maybe you have already responded there. Apparently I was wrong to use the name Hermite, it was rather earlier. – Will Jagy Jun 15 '17 at 22:45
  • @WillJagy: Yeah, a great movie :) (the director is Richard Kelly). Regarding your post, I don't have any reference for this method. We teach it in our undergraduate linear algebra here (at HUJI) but it doesn't appear in any of the textbooks we use so we have our own notes for it. I'm not sure who introduced it and from where. – levap Jun 15 '17 at 22:48
  • That seems the general consensus, it is taught in many places but the method just appeared. Note that, coming from working with integer coefficient quadratic forms, I was accustomed to, by informed guesswork, solving $Q^T D Q = A.$ I was surprised when I saw on MSE a fully working algorithm that went in the other order ($P = Q^{-1}$) and required no cleverness. – Will Jagy Jun 15 '17 at 22:53
  • 2
    @WillJagy: Yeah. I think it is good pedagogy to teach it because it really shows that symmetric bilinear forms and self-adjoint operators are different things. One has to work hard to orthogonally diagonalize a matrix but "diagonalizing" a symmetric form is much easier. I also find it slightly annoying that there isn't any special terminology for "diagonalizing" a quadratic form which adds to the confusion between operator diagonalization and symmetric bilinear form diagonalization. I sometimes call it "diagonalization by congruence" but it is cumbersome. – levap Jun 15 '17 at 23:00
  • 1
    Evidently Drew was an "executive producer" http://www.imdb.com/name/nm0000106/?ref_=ttfc_fc_cl_t22#producer My ideas of what a producer might be come entirely from Dustin Hoffman in Wag the Dog. http://www.imdb.com/title/tt0120885/ There are plenty of questions on MSE where the OP tries to orthogonally diagonalize, even though the eigenvalues are intractable; here is one http://math.stackexchange.com/questions/395634/given-a-4-times-4-symmetric-matrix-is-there-an-efficient-way-to-find-its-eige – Will Jagy Jun 16 '17 at 00:25
1

I think Jose comes closest to solving this problem in the way that I would, but his answer doesn't expand too much on the why of it all, so I'm writing up a separate answer.

If there are an invertible $P$ and a diagonal $D$ such that $P^TAP = D$, then we say that $A$ can be diagonalized by congruence. Since $A$ is real symmetric, one can prove that a valid choice is to take $P$ to be the matrix whose columns are the normalized eigenvectors of $A$, which is an orthogonal matrix (so $P^T = P^{-1}$), and $D := \operatorname{diag}(\text{eigenvalues of } A)$. So indeed, solving this problem reduces to finding the eigenvalues and eigenvectors of $A$. There are many ways to derive the eigensystem of a matrix, but below is my solution.


For an eigenvalue $\lambda$, we must have $\det(\lambda I - A) = (\lambda - 2)(\lambda - 4) - 9 = \lambda^2 - 6 \lambda - 1 = 0$. The two solutions are $\lambda = 3 \pm \sqrt{10}$.

For $\lambda_1 = 3 + \sqrt{10}$, we have:

$$\begin{aligned} \mathbf{0} = (\lambda I-A)v &= \begin{pmatrix} 1+\sqrt{10} & -3\\ -3 & -1+\sqrt{10}\end{pmatrix}\begin{pmatrix}v_1\\ v_2 \end{pmatrix}\\ \implies v &= \begin{pmatrix} c\\ (1+\sqrt{10}) c/3\end{pmatrix} \end{aligned}$$

For $\lambda_2 = 3 - \sqrt{10}$, we have:

$$\begin{aligned} \mathbf{0} = (\lambda I-A)v &= \begin{pmatrix} 1-\sqrt{10} & -3\\ -3 & -1-\sqrt{10}\end{pmatrix}\begin{pmatrix}v_1\\ v_2 \end{pmatrix}\\ \implies v &= \begin{pmatrix} c \\ (1-\sqrt{10}) c/3\end{pmatrix} \end{aligned}$$

And while this is incredibly painful to normalize by hand, our matrices end up being as below (note that $(\sqrt{10}-1,\,3)$ is a scalar multiple of $(3,\,1+\sqrt{10})$ since $(\sqrt{10}-1)(\sqrt{10}+1) = 9$, and $\|(\sqrt{10}-1,\,3)\|^2 = (\sqrt{10}-1)^2 + 9 = 20 - 2\sqrt{10}$):

$$P = \cfrac{1}{\sqrt{20-2\sqrt{10}}}\begin{pmatrix}\sqrt{10} - 1 & 3\\ 3 & 1-\sqrt{10} \end{pmatrix},\qquad D = \begin{pmatrix}3+\sqrt{10} & 0\\ 0 & 3-\sqrt{10} \end{pmatrix}$$
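As a quick numerical sanity check (a numpy sketch; the exact algebra above is what matters):

```python
import numpy as np

A = np.array([[2.0, 3.0], [3.0, 4.0]])
s = np.sqrt(10)
P = np.array([[s - 1, 3], [3, 1 - s]]) / np.sqrt(20 - 2 * s)
D = np.diag([3 + s, 3 - s])

print(np.allclose(P.T @ P, np.eye(2)))  # True: P is orthogonal
print(np.allclose(P.T @ A @ P, D))      # True: P^T A P = D
```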

NoseKnowsAll
  • 2,011
0

I would love to hear what book you have that does this.

Meanwhile, take

$$ P = \left( \begin{array}{rr} 1 & - \frac{3}{2}\\ 0 & 1 \end{array} \right) $$

I asked for references here http://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr

I prefer to write this as a sequence of steps (if more than one is needed) which gives $P = P_1 P_2 \cdots P_r.$ Other people seemed to like the side-by-side grid that you display, so I guess there are advantages to both.
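For this particular $A$ only one step is needed, so the product has a single factor:

$$ P = P_1 = \left( \begin{array}{rr} 1 & - \frac{3}{2}\\ 0 & 1 \end{array} \right), \qquad P_1^T A P_1 = \left( \begin{array}{rr} 2 & 0\\ 0 & -\frac{1}{2} \end{array} \right). $$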

Will Jagy
  • 139,541
0

The characteristic polynomial of $A$ is $x^2-6x-1$, whose roots are $3\pm\sqrt{10}$. An example of an eigenvector with norm $1$ whose eigenvalue is $3+\sqrt{10}$ is $\frac1{\sqrt{20-2\sqrt{10}}}\bigl(\sqrt{10}-1,3\bigr)$ and an example of an eigenvector with norm $1$ whose eigenvalue is $3-\sqrt{10}$ is $\frac1{\sqrt{20-2\sqrt{10}}}\bigl(3,1-\sqrt{10}\bigr)$. So, take$$P=\frac1{\sqrt{20-2\sqrt{10}}}\begin{pmatrix}\sqrt{10}-1&3\\3&1-\sqrt{10}\end{pmatrix}$$and then $P^TAP=\left(\begin{smallmatrix}3+\sqrt{10}&0\\0&3-\sqrt{10}\end{smallmatrix}\right)$.

0

Problem

Diagonalize the matrix $$ \mathbf{A} = \left[ \begin{array}{cc} 2 & 3 \\ 3 & 4 \\ \end{array} \right] $$


Solution

Compute eigenvalues

The eigenvalues are the roots of the characteristic polynomial $$ p(\lambda) = \lambda^{2} - \lambda \text{ trace }\mathbf{A} + \det \mathbf{A} $$ The trace and determinant are $$ \text{ trace }\mathbf{A} = 6, \qquad \det \mathbf{A} = -1 $$ Therefore $$ p(\lambda) = \lambda^{2} - \lambda \text{ trace }\mathbf{A} + \det \mathbf{A} = \lambda^{2} - 6 \lambda - 1 $$ The roots are the eigenvalue spectrum $$ \lambda \left( \mathbf{A} \right) = 3 \pm \sqrt{10} $$
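Explicitly, by the quadratic formula, $$ \lambda = \frac{\text{trace }\mathbf{A} \pm \sqrt{(\text{trace }\mathbf{A})^{2} - 4 \det \mathbf{A}}}{2} = \frac{6 \pm \sqrt{40}}{2} = 3 \pm \sqrt{10} $$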

Result: $$ \mathbf{D} = \left[ \begin{array}{cc} 3+\sqrt{10} & 0 \\ 0 & 3-\sqrt{10} \\ \end{array} \right] $$

Eigenvectors

First $$ \begin{align} \left(\mathbf{A} - \lambda_{1} \mathbf{I}_{2} \right) w_{1} &= \mathbf{0} \\ \left[ \begin{array}{cc} -1-\sqrt{10} & 3 \\ 3 & 1-\sqrt{10} \\ \end{array} \right] \left[ \begin{array}{c} w_{x} \\ w_{y} \\ \end{array} \right] &= \left[ \begin{array}{c} 0 \\ 0 \\ \end{array} \right] \end{align} $$

Solution $$ w_{1} = \left[ \begin{array}{c} \frac{1}{3} \left(-1+\sqrt{10}\right) \\ 1 \\ \end{array} \right] $$

Second $$ \begin{align} \left(\mathbf{A} - \lambda_{2} \mathbf{I}_{2} \right) w_{2} &= \mathbf{0} \\ % \left[ \begin{array}{cc} -1+\sqrt{10} & 3 \\ 3 & 1+\sqrt{10} \\ \end{array} \right] % \left[ \begin{array}{c} w_{x} \\ w_{y} \\ \end{array} \right] % &= % \left[ \begin{array}{c} 0 \\ 0 \\ \end{array} \right] % \end{align} $$ Solution $$ w_{2} = \left[ \begin{array}{c} -\frac{1}{3} \left(1+\sqrt{10}\right) \\ 1 \\ \end{array} \right] $$

Diagonalization matrix

$$ \mathbf{P} = \left[ \begin{array}{cc} \frac{1}{3} \left(-1+\sqrt{10}\right) & -\frac{1}{3} \left(1+\sqrt{10} \right) \\ 1 & 1 \\ \end{array} \right], \qquad \mathbf{P}^{-1} = \frac{1}{2\sqrt{10}} \left[ \begin{array}{rr} 3 & 1+\sqrt{10} \\ -3 & -1+\sqrt{10} \\ \end{array} \right] $$


Validation

You can check that $$ \mathbf{P}^{-1} \mathbf{A} \mathbf{P} = \mathbf{D} $$ and $$ \mathbf{P} \mathbf{D} \mathbf{P}^{-1} = \mathbf{A} $$
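For instance, a quick numerical check with numpy (just a sketch):

```python
import numpy as np

A = np.array([[2.0, 3.0], [3.0, 4.0]])
s = np.sqrt(10)
P = np.array([[(-1 + s) / 3, -(1 + s) / 3],
              [1.0, 1.0]])
D = np.diag([3 + s, 3 - s])
P_inv = np.linalg.inv(P)

print(np.allclose(P_inv @ A @ P, D))  # True: P^{-1} A P = D
print(np.allclose(P @ D @ P_inv, A))  # True: P D P^{-1} = A
```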



Gaussian elimination

Solving this problem does not require Gaussian elimination. However, since you specifically asked, here is the process:

Clear column 1 $$ \left[ \begin{array}{rc} \frac{1}{2} & 0 \\ -\frac{3}{2} & 1 \\ \end{array} \right] % \left[ \begin{array}{cc|cc} 2 & 3 & 1 & 0 \\ 3 & 4 & 0 & 1 \\ \end{array} \right] = \left[ \begin{array}{cr|rc} 1 & \frac{3}{2} & \frac{1}{2} & 0 \\ 0 & -\frac{1}{2} & -\frac{3}{2} & 1 \\ \end{array} \right] $$

Clear column 2 $$ \left[ \begin{array}{cr} 1 & 3 \\ 0 & -2 \\ \end{array} \right] % \left[ \begin{array}{cr|rc} 1 & \frac{3}{2} & \frac{1}{2} & 0 \\ 0 & -\frac{1}{2} & -\frac{3}{2} & 1 \\ \end{array} \right] = \left[ \begin{array}{cc|rr} 1 & 0 & -4 & 3 \\ 0 & 1 & 3 & -2 \\ \end{array} \right] $$
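Note that the right-hand block produced by this elimination is $\mathbf{A}^{-1}$, since row-reducing $\left[\, \mathbf{A} \mid \mathbf{I} \,\right]$ to $\left[\, \mathbf{I} \mid \mathbf{A}^{-1} \,\right]$ is the standard matrix-inversion procedure: $$ \left[ \begin{array}{cc} 2 & 3 \\ 3 & 4 \\ \end{array} \right] \left[ \begin{array}{rr} -4 & 3 \\ 3 & -2 \\ \end{array} \right] = \mathbf{I}_{2} $$ So plain Gaussian elimination inverts $\mathbf{A}$; it does not produce the congruence matrix $P$ that the question asks for.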

dantopa
  • 10,342