
For example, this answer. Let's say you have an eigenspace that spans all of your vector space $\mathbb{R}^3$. The eigenvectors are not necessarily orthogonal, and applying Gram-Schmidt could "knock them off their span," in the sense that the resulting vectors would no longer be eigenvectors. What I mean to say is that you could produce a set of vectors that form an orthonormal basis, but how does that make it an orthonormal *eigenbasis*?

Specifically I refer to the following theorem:

Let $V$ be a finite-dimensional real inner product space and let $T: V \to V$ be a linear transformation. Then $T$ has an orthonormal eigenbasis if and only if $T$ is self-adjoint.

  • Look up the Spectral Theorem. Here’s a good proof of it: https://math.mit.edu/~dav/spectral.pdf – moboDawn_φ Jun 13 '23 at 03:54
  • I have read all I have found, but have not found an explanation for how giving a basis for an eigenspace (of dimension greater than 1) still creates an "eigenbasis", when those vectors may not be eigenvectors (the other answer says they are; my question is how this can be true). – user129393192 Jun 13 '23 at 03:55
  • Your comment in my answer, and your comment here, betray a misunderstanding of what the eigenspace is. Note that by definition, the eigenspace corresponding to $\lambda$ is the set of vectors $v$ such that $T(v)=\lambda v$. In particular, any nonzero vector in the eigenspace corresponding to $\lambda$ is necessarily and automatically an eigenvector corresponding to $\lambda$. Moreover, Gram-Schmidt never "knocks [vectors] off their span." It is specifically defined so that the first $k$ vectors it produces have the same span as the first $k$ vectors you started with. – Arturo Magidin Jun 13 '23 at 04:03
  • I simply meant that if you have a vector and you orthogonalize it with respect to another vector, it won't necessarily lie along the same line. Moreover, I had always thought that when you have an eigenspace of dimension 2 (e.g., vectors on the x- and y-axes are scaled by 1), you can't say that all vectors in their span are also scaled by 1... – user129393192 Jun 13 '23 at 04:06
  • @user129393192 Are you asking why we do not get generalized eigenvectors? I.e. why symmetric matrices are diagonalizable? – Severin Schraven Jun 13 '23 at 04:08
  • No. Thank you. Your comment in the other answer was exactly my misunderstanding. Your simple explanation was perfect. @SeverinSchraven – user129393192 Jun 13 '23 at 04:10
  • Then what you "always thought" is just plain wrong. And if $x$ and $y$ are both eigenvectors corresponding to the same eigenvalue $\lambda$, then for every vector in their span we have $T(\alpha x+\beta y) =\alpha T(x)+\beta T(y)=\alpha\lambda x + \beta\lambda y = \lambda(\alpha x+\beta y)$. Since Gram-Schmidt will produce a multiple of $x$ and then a linear combination of $x$ and $y$ that is nonzero, it will produce eigenvectors corresponding to $\lambda$. – Arturo Magidin Jun 13 '23 at 04:11

2 Answers


I think that you may have overlooked the following fact:

Suppose that $A$ is an $n \times n$ real symmetric matrix. Suppose that $Au = \lambda u$ and $Av = \mu v$, in which $u$, $v \in \mathbb{R}^{n}$ and $\lambda$, $\mu$ are distinct real numbers. Then $u^{\mathrm{T}} v = 0$.
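
For concreteness, here is a quick numeric check of this fact. This is a minimal sketch: the matrix and the scaling constants below are arbitrary choices of mine for illustration, not part of the argument.

```python
import numpy as np

# An illustrative symmetric matrix with two distinct eigenvalues.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Hand-picked eigenvectors: [1, -1] for lambda = 1 and [1, 1] for mu = 3,
# rescaled by arbitrary nonzero constants to stress that the fact holds
# for *any* eigenvectors, not just unit ones.
u = 3.0 * np.array([1.0, -1.0])
v = -2.0 * np.array([1.0, 1.0])

print(np.allclose(A @ u, 1.0 * u))   # True: Au = 1*u
print(np.allclose(A @ v, 3.0 * v))   # True: Av = 3*v
print(np.isclose(u @ v, 0.0))        # True: u^T v = 0
```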

There is another fact:

Suppose that $A$ is an $n \times n$ matrix. Suppose that $k$ is a number. Suppose that $u_1$, $u_2$, $\dots$, $u_m$ are $n \times 1$ matrices such that $$ Au_i = \color{red}{k} u_i. $$ Suppose that $c_1$, $c_2$, $\dots$, $c_m$ are numbers. If $u = c_1 u_1 + c_2 u_2 + \dots + c_m u_m$ is nonzero, then $Au = \color{red}{k} u$, which means that $u$ is an eigenvector of $A$.
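
This second fact is just linearity of matrix multiplication. A minimal numeric check, using a diagonal matrix I chose purely for illustration:

```python
import numpy as np

# Illustrative matrix with a two-dimensional eigenspace: eigenvalue 1
# has eigenspace span{e1, e2}, eigenvalue 5 has eigenspace span{e3}.
A = np.diag([1.0, 1.0, 5.0])

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])

# An arbitrary nonzero combination of eigenvectors for the same eigenvalue.
u = 2.0 * u1 - 7.0 * u2
print(np.allclose(A @ u, 1.0 * u))   # True: u is again an eigenvector
```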


Okay. Take any $n \times n$ real symmetric matrix $A$. Suppose that $\lambda$ is an eigenvalue of $A$, and that you have found a maximal linearly independent set of real solutions to $AX = \lambda X$: $$ u_1, u_2, \dots, u_m. $$ By Gram-Schmidt, there exist real numbers $c_{i,j}$, with each $c_{i,i} \neq 0$, such that $$ \begin{aligned} & v_1 = c_{1,1} u_1, \\ & v_2 = c_{1,2} u_1 + c_{2,2} u_2, \\ & \cdots \cdots \cdots \cdots, \\ & v_m = c_{1,m} u_1 + c_{2,m} u_2 + \dots + c_{m,m} u_m, \end{aligned} $$ and $$ v_i^{\mathrm{T}} v_j = \begin{cases} 1, & i = j; \\ 0, & i \neq j. \end{cases} $$ Each $v_i$ is a nonzero linear combination of the $u_k$, so by the second fact that I have listed above, $v_1$, $v_2$, $\dots$, $v_m$ are still eigenvectors of $A$.
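
Here is a sketch of this step in numpy, assuming we may use np.linalg.qr to carry out the Gram-Schmidt change of basis; the example matrix is the same illustrative diagonal one as above.

```python
import numpy as np

# Same illustrative matrix: eigenvalue 1 with eigenspace span{e1, e2}.
A = np.diag([1.0, 1.0, 5.0])

# A deliberately non-orthogonal basis u1, u2 of that eigenspace.
U = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0]])

# np.linalg.qr performs the same triangular change of basis as
# Gram-Schmidt: column i of Q is a combination of u_1, ..., u_i.
Q, _ = np.linalg.qr(U)

for i in range(Q.shape[1]):
    print(np.allclose(A @ Q[:, i], 1.0 * Q[:, i]))  # True: still eigenvectors
print(np.allclose(Q.T @ Q, np.eye(2)))              # True: orthonormal
```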

Suppose that $\mu$ is another eigenvalue of $A$, and that you have found $p$ real orthonormal solutions to $AX = \mu X$: $$ w_1, w_2, \dots, w_p. $$ By the first fact that I have listed above, each $w_i$ must be orthogonal to each $v_j$; you do not have to apply Gram-Schmidt to the list $$ v_1, v_2, \dots, v_m, w_1, w_2, \dots, w_p, $$ since these vectors are already orthonormal.
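
Putting the two lists together, still with the illustrative diagonal matrix, one can check both claims numerically:

```python
import numpy as np

A = np.diag([1.0, 1.0, 5.0])   # same illustrative matrix as above

# Orthonormal eigenvectors for lambda = 1 and for mu = 5.
V = np.column_stack([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
W = np.column_stack([[0.0, 0.0, 1.0]])

B = np.hstack([V, W])                    # the combined list: v's, then w's
print(np.allclose(B.T @ B, np.eye(3)))   # True: already orthonormal
print(np.allclose(A @ B, B @ np.diag([1.0, 1.0, 5.0])))  # True: an eigenbasis
```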


I hope that what I have said is helpful.

Juliamisto

If you apply Gram-Schmidt to a collection of vectors all belonging to the same eigenspace, you will get an orthonormal basis for their span, which is contained in that eigenspace. In particular, the orthonormal basis consists of eigenvectors. To see why, note that any nonzero linear combination of eigenvectors (all corresponding to the same eigenvalue) is again an eigenvector.

Since you know that there is a basis of eigenvectors, you just need to apply the above once for each eigenspace.
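
A sketch of this procedure in numpy; the function name, the null-space-via-SVD step, and the tolerance are my own illustrative choices, not something prescribed by the answer.

```python
import numpy as np

def orthonormal_eigenbasis(A, tol=1e-10):
    """Sketch: orthonormalize a basis of each eigenspace of a real
    symmetric A separately, then concatenate the results."""
    n = A.shape[0]
    lam = np.linalg.eigvalsh(A)           # real eigenvalues, ascending
    distinct = []
    for l in lam:                         # group numerically equal ones
        if not distinct or abs(l - distinct[-1]) > tol:
            distinct.append(l)
    blocks = []
    for l in distinct:
        # Basis of ker(A - l*I): rows of Vt whose singular value vanishes.
        _, s, Vt = np.linalg.svd(A - l * np.eye(n))
        basis = Vt[s < tol].T
        Q, _ = np.linalg.qr(basis)        # Gram-Schmidt within one eigenspace
        blocks.append(Q)
    return np.hstack(blocks)

# Example: symmetric matrix whose eigenvalue 3 is repeated.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
B = orthonormal_eigenbasis(A)
print(np.allclose(B.T @ B, np.eye(3)))    # True: an orthonormal eigenbasis
```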

Jamie Radcliffe
  • So if you have an eigenspace $E_\lambda(T)$ of dimension 2, with eigenvectors $v, w \in E_\lambda(T)$ that you produce by taking $\ker(T - \lambda\,\mathrm{id}_V)$, you say that any other vector in that eigenspace is an eigenvector? I do not see this, solely because the set of eigenvectors does not form a subspace. For example, if the eigenvectors are along the x- and y-axes, then all of $\mathbb{R}^2$ is the eigenspace, but vectors picked not directly on the x- or y-axis are not eigenvectors. Is this reasoning incorrect? – user129393192 Jun 13 '23 at 03:59
  • @user129393192 If $Av=\lambda v, Aw=\lambda w$, then $$A(v+w)=Av+Aw=\lambda(v+w).$$ So, yes, linear combinations of eigenvectors are eigenvectors (unless they add up to the zero vector). – Severin Schraven Jun 13 '23 at 04:02
  • @user129393192 The only reason why the set of eigenvectors corresponding to $\lambda$ does not form a subspace is that you are missing the zero vector. The nullspace of $T-\lambda I$ is necessarily a subspace, being a nullspace, and every nonzero vector in that nullspace is an eigenvector corresponding to $\lambda$. – Arturo Magidin Jun 13 '23 at 04:05