
For a matrix $A$ we can diagonalize using the formula $$ A = X D X^{-1},$$ where $X$ is the matrix whose columns are the eigenvectors of $A$ and $D$ is the diagonal matrix of the corresponding eigenvalues.

Now consider a symmetric matrix $S$. In the case of a symmetric matrix, the eigenvector matrix $X$ is orthogonal, so we have: $$ S = X D X^{T} $$

I am practicing diagonalization problems. I have this symmetric matrix and am asked to diagonalize it: $$\begin{pmatrix} 1 & -1 & 1\\ -1 & 1 & -1\\ 1 & -1 & 1 \end{pmatrix}$$

To diagonalize this matrix, they find the eigenvectors and then normalize them, so the columns of the eigenvector matrix $X$ are orthonormal. Then they apply the diagonalization theorem.

My question: the diagonalization theorem tells us that in the case of a symmetric matrix the eigenvector matrix $X$ is orthogonal, so it seems $X$ need not be orthonormal to perform the diagonalization. Then why do they take an orthonormal eigenvector matrix $X$? Doesn't that violate the diagonalization theorem?
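(A quick numerical sketch, not part of the textbook exercise, illustrating the distinction for the matrix above: $XDX^{-1}=A$ holds for any scaling of the eigenvectors, but $XDX^{T}=A$ only when they are unit length.)

```python
import numpy as np

A = np.array([[ 1., -1.,  1.],
              [-1.,  1., -1.],
              [ 1., -1.,  1.]])

# np.linalg.eigh returns ORTHONORMAL eigenvectors for a symmetric matrix
eigvals, X = np.linalg.eigh(A)
D = np.diag(eigvals)

print(np.allclose(X @ D @ X.T, A))                  # True: columns are unit length

# Rescale the columns: still eigenvectors, but no longer unit length
X2 = X * np.array([2.0, 3.0, 0.5])
print(np.allclose(X2 @ D @ np.linalg.inv(X2), A))   # True: X D X^{-1} always works
print(np.allclose(X2 @ D @ X2.T, A))                # False: transpose != inverse now
```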

PinkyWay
  • See my related question. I think they have to be orthonormal. – PinkyWay May 29 '20 at 07:42
  • "They find the eigenvectors" is vague, because if $\vec v$ is an eigenvector, so is $a\vec v$ for any real number $a$. And having a base formed by unit vectors is convenient. – Intelligenti pauca May 29 '20 at 08:34
  • @Aretino I think you misconstrue my statement. I am saying that the eigenvectors in the matrix $X$ need not be unit vectors. But in the case of a symmetric matrix, they always normalize the eigenvectors to unit length. Why is that? Why can't we just use the eigenvectors without normalizing them? – Swakshar Deb May 29 '20 at 09:21
  • @Cheesecake for the symmetric matrix or for all the square matrices? – Swakshar Deb May 29 '20 at 09:39
  • Because having a base formed by unit vectors is more convenient. But you can do otherwise, if that disturbs you. – Intelligenti pauca May 29 '20 at 10:10
  • @Aretino Initially, I also thought so. But take this example where you want to diagonalize the matrix $$\begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix};$$ the eigenvectors are $(1,1)^{T}$, $(1,-1)^{T}$ and the eigenvalues are 9 and 1. If you use $A = XDX^{T}$, you will not get back the same matrix. You have to normalize the eigenvectors if you want to recover it. Why so? – Swakshar Deb May 29 '20 at 10:40
  • The inverse and the transpose would need to be equal in order to use them interchangeably as you have done. Which requires determinant equals $1$. Etc. – Ned May 29 '20 at 11:07
  • @Aretino "... if $\vec v$ is an eigenvector, so is $a\vec v$ for any real number $a$." For any non-zero complex number as well. – user May 29 '20 at 11:30
  • @SwaksharDeb Generally one should use $XDX^{-1}$. And here a very useful property of the orthonormal (real) basis shows up, as in this case $X^{-1}=X^T$. – user May 29 '20 at 11:34
  • @user $X^{-1} = X^{T} $ is true for any orthogonal matrix also. My question was about why we have to take orthonormal eigenvector in the aforementioned example? – Swakshar Deb May 29 '20 at 13:12
  • 1
    If the eigenvectors are not orthonormal, the transformation matrix will not be orthogonal. – user May 29 '20 at 14:38
  • I know this, what's your point? Please be cogent. – Swakshar Deb May 29 '20 at 15:38
  • 1
    The inverse of $2I$ is not its transpose $2I$, even though the columns are orthogonal. – Ned May 29 '20 at 16:09
  • If you know this, why do you write in the question "In case of a symmetric matrix $X$ matrix is orthogonal"? It is not true if the contained eigenvectors are not normalized. – user May 29 '20 at 19:24
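(A numerical check, added for illustration, of the $2\times 2$ example raised in the comments: with the unnormalized eigenvectors $(1,1)^T$ and $(1,-1)^T$, $XDX^{T}$ gives $2A$ rather than $A$, while $XDX^{-1}$ recovers $A$ regardless of scaling.)

```python
import numpy as np

A = np.array([[5., 4.],
              [4., 5.]])
X = np.array([[1.,  1.],
              [1., -1.]])          # unnormalized eigenvectors as columns
D = np.diag([9., 1.])

print(X @ D @ np.linalg.inv(X))    # [[5. 4.] [4. 5.]]   -- inverse always works
print(X @ D @ X.T)                 # [[10. 8.] [8. 10.]] -- i.e. 2A, transpose fails
Xn = X / np.sqrt(2)                # normalize the columns
print(Xn @ D @ Xn.T)               # [[5. 4.] [4. 5.]]   -- now transpose = inverse
```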

0 Answers