I know a matrix has the same eigenvalues as its transpose, but is there a way to see this in line with Axler (i.e. without bringing in determinants)?
-
Why? What's wrong with bringing in determinants? – Hosam Hajeer Nov 30 '21 at 00:26
-
1@Potato "Once determinants have been banished to the end of the book, a new route opens to the main goal of linear algebra— understanding the structure of linear operators." - from Linear algebra done right. – plop Nov 30 '21 at 00:30
-
Axler doesn't use them, and I'm using his textbook, so I just want to reconcile this result with it. – Jnrn Nov 30 '21 at 00:30
-
What properties of the transpose are you allowed to assume? – Dan Nov 30 '21 at 00:35
-
@Dan Any of the ones in that list are fine – Jnrn Nov 30 '21 at 00:50
-
@Jnrn: Well, I assume you mean to exclude #5 (which brings in determinants) and #9 (which is what you want to prove). – Dan Nov 30 '21 at 00:53
-
Related. – PinkyWay Nov 30 '21 at 09:58
-
In support of the Answers posted below, the general result that row rank = column rank can be useful here. – hardmath Dec 04 '21 at 17:58
2 Answers
The eigenvalues of $M$ are the numbers $\lambda$ such that $M - \lambda I$ has non-trivial kernel, i.e. the numbers for which the square matrix $M-\lambda I$ cannot be inverted. But $$(M-\lambda I)^T = M^T - \lambda I^T = M^T-\lambda I$$
and so if $M - \lambda I$ is not invertible, then $(M-\lambda I)^T = M^T - \lambda I$ is not invertible either, since a square matrix is invertible if and only if its transpose is (thanks to Marc for the suggestion!). Hence $\lambda$ is also an eigenvalue of $M^T$, as we wanted to show.
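
Not part of the original answer, but here is a quick numerical sanity check of this argument: a minimal sketch using numpy and a made-up non-symmetric $3\times 3$ matrix, verifying that $M$ and $M^T$ have the same spectrum and that $M - \lambda I$ and its transpose become singular together.

```python
import numpy as np

# Any square matrix will do; this one is deliberately non-symmetric.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 1.0]])

# The two spectra agree (sorted so the comparison is order-independent).
eig_M  = np.sort_complex(np.linalg.eigvals(M))
eig_MT = np.sort_complex(np.linalg.eigvals(M.T))
print(np.allclose(eig_M, eig_MT))  # True

# M - lambda*I is singular exactly when its transpose is:
lam = eig_M[0]
shifted = M - lam * np.eye(3)
print(np.linalg.matrix_rank(shifted),
      np.linalg.matrix_rank(shifted.T))  # equal ranks, both < 3
```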

-
I have difficulty seeing how exactly your third sentence uses the second one (maybe because I don't know if you define a cokernel using the transpose matrix, or using the dual vector space, or as a quotient by the image subspace). Wouldn't it be easier to say that $M-\lambda I$ is invertible if and only if its transpose is (which happens if and only if either has trivial kernel)? – Marc van Leeuwen Nov 30 '21 at 09:40
-
That's a good suggestion. I'll update the answer to reflect it. I'm implicitly using a bunch of identifications that I can just skip if I do it that way. – A. Thomas Yerger Nov 30 '21 at 19:17
The eigenvalues of a square matrix $A$ are the roots of its minimal polynomial. Since for any polynomial $P\in K[X]$ the matrix $P[A^\top]$ is the transpose of $P[A]$, and one of these is zero if and only if the other is, the matrix $A$ and its transpose $A^\top$ have the same minimal polynomial. Therefore they have the same eigenvalues.
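
As an illustration of the key identity $P[A^\top] = P[A]^\top$, here is a small numerical sketch (numpy, with an arbitrary sample matrix and polynomial of my choosing, not from the answer itself):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
I = np.eye(2)

def p(X):
    # An arbitrary sample polynomial: p(X) = X^2 - 3X + 2I
    return X @ X - 3 * X + 2 * I

# Evaluating the polynomial commutes with transposing:
print(np.allclose(p(A.T), p(A).T))  # True
```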
Or even simpler: $\lambda$ is an eigenvalue of $A$ if and only if $A-\lambda I$ fails to be invertible, which happens if and only if its transpose $A^\top-\lambda I$ fails to be invertible (because clearly if some square matrix $M$ has an inverse, the transpose of that inverse is an inverse of $M^\top$, and vice versa).
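
Again as a hedged sketch rather than part of the answer: the parenthetical claim that the transpose of the inverse inverts the transpose is easy to check numerically with an arbitrary invertible matrix.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [7.0, 4.0]])  # invertible; any invertible matrix works

# (M^{-1})^T is the inverse of M^T:
print(np.allclose(np.linalg.inv(M.T), np.linalg.inv(M).T))  # True
```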
