I know that every eigenvalue of $A^k$ is an eigenvalue of $A$ raised to the power $k$. The proof that I saw for this proposition is based on the Jordan normal form. Is there any other proof idea that is not based on the Jordan normal form?
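As a quick numerical sanity check of the statement (just an illustration with NumPy on a random matrix, not a proof):

```python
import numpy as np

# Sanity check: every eigenvalue of A^k should be (numerically) the k-th
# power of some eigenvalue of A.  The random matrix and k are arbitrary choices.
rng = np.random.default_rng(0)
n, k = 5, 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

eig_A = np.linalg.eigvals(A)
eig_Ak = np.linalg.eigvals(np.linalg.matrix_power(A, k))

for mu in eig_Ak:
    # distance from mu to the nearest lambda^k, with lambda an eigenvalue of A
    print(np.min(np.abs(eig_A**k - mu)))   # each of these should be ~ 0 (floating-point noise)
```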
-
The converse is straightforward. This direction, not so much. As an illustrative example, $\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}$, considered as a real operator on $\Bbb{R}^2$, has no eigenvalues, but its square is $-I$, which has one (repeated) eigenvalue. This is something specific to algebraically closed fields like $\Bbb{C}$, and it tells me that there'll probably have to be some application of the fundamental theorem of algebra at some point. – Theo Bendit Aug 11 '21 at 22:25
-
@RobertShore That's going the wrong way; your (now deleted) comment showed that an eigenvector of $A$ is an eigenvector of $A^k$. The reverse is more delicate, because it's not entirely apparent why no eigenvalues can be created by exponentiation (which is particularly subtle because eigenvectors can be created, but their associated eigenvalues are still of the form $\lambda^k$). – Ian Aug 11 '21 at 22:28
-
https://math.stackexchange.com/questions/1333720/characteristic-polynomial-of-ak and https://math.stackexchange.com/questions/241764/eigenvalues-and-power-of-a-matrix – Asinomás Aug 11 '21 at 22:35
2 Answers
I have an answer, but it may not be particularly fulfilling. I will have to make an assumption. This assumption is a fundamental result, and buries the application of the fundamental theorem of algebra. Specifically, I assume:
Every operator on a finite-dimensional non-trivial complex vector space admits an eigenvector.
This result underpins the construction of the Jordan normal form, but we can shortcut much of the construction of the JNF by using this result directly. I don't think we'll get simpler than this.
Suppose $A$ is an operator on a finite-dimensional complex vector space $V$, and $\lambda$ is an eigenvalue of $A^k$. Let $W = \ker(A^k - \lambda I)$, i.e. the eigenspace of $A^k$ with respect to $\lambda$. Note that eigenspaces are by definition non-trivial, as eigenvectors are defined to be non-zero vectors in these spaces. I claim that $W$ is invariant with respect to $A$.
Suppose $v \in W$. I wish to show that $Av \in W$. We have $$(A^k - \lambda I)Av = A(A^k - \lambda I)v = A0 = 0,$$ since polynomials of $A$ commute. Thus, $Av \in W$, so $W$ is indeed invariant.
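As a small aside, this invariance is easy to see numerically; here is a sketch with NumPy (the random matrix and the particular eigenvalue are arbitrary illustrative choices):

```python
import numpy as np

# Numerical illustration of A(W) ⊆ W, where W = ker(A^k - lam*I).
rng = np.random.default_rng(1)
n, k = 5, 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Ak = np.linalg.matrix_power(A, k)

lams, vecs = np.linalg.eig(Ak)
lam, w = lams[0], vecs[:, 0]        # an eigenvalue of A^k and a vector spanning (part of) W

# If A maps W into W, then (A^k - lam*I) should also annihilate A w:
residual = (Ak - lam * np.eye(n)) @ (A @ w)
print(np.linalg.norm(residual))     # ~ 0 up to floating-point error
```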
Define $B = A|_W$. Then $B$ is an operator on the finite-dimensional, non-trivial complex vector space $W$. By our assumption, there exists some eigenvector $v \in W$ corresponding to an eigenvalue $\mu$. So, $$Bv = \mu v \implies Av = \mu v \implies A^k v = \mu^k v.$$ But $v \in W = \ker (A^k - \lambda I)$, so $A^k v = \lambda v$, thus (given $v \neq 0$), $$\lambda = \mu^k.$$ So the eigenvalue $\lambda$ is indeed a $k$th power of an eigenvalue $\mu$ of $A$ (well, of $B$, but we still have $Bv = Av = \mu v$). This completes the proof.
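To see the argument in action, take the rotation matrix from the comments, now viewed over $\Bbb{C}$ (a worked instance only; here $W$ happens to be all of $\Bbb{C}^2$, so $B = A$):

```python
import numpy as np

# The rotation matrix from the comments, now regarded as a complex matrix.
A = np.array([[0, -1],
              [1,  0]], dtype=complex)
k = 2
lam = -1.0                           # A^2 = -I, so lam = -1 is an eigenvalue of A^2,
                                     # and W = ker(A^2 - lam*I) is all of C^2, B = A.

mus = np.linalg.eigvals(A)           # eigenvalues of B = A: +i and -i
print(mus)                           # [0.+1.j  0.-1.j] (up to ordering and rounding)
print(mus**k)                        # both are (numerically) -1 = lam, i.e. mu^k = lam
```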

-
I think in your assumption you intended to write eigenvector instead of eigenvalue. I agree with you that we need this fundamental result, see also my closely related answer – Vincent Aug 11 '21 at 22:55
-
@Vincent I intended to write it as I did, but it doesn't matter. Eigenvalues exist if and only if eigenvectors do. You can't have one without the other. I tend to prefer talking about eigenvalues because a single eigenvalue corresponds to multiple eigenvectors, and there's no danger of muddling eigenvectors with generalised eigenvectors, etc. – Theo Bendit Aug 11 '21 at 22:56
-
Well, it depends on how you define eigenvalue. If it is 'just' a root of the characteristic polynomial you can have eigenvalues without eigenvectors, e.g. in the example you gave in the comments. But in the end I think that the definition of eigenvalue in terms of eigenvectors is more natural and beautiful and if we stick with that you are absolutely right of course. – Vincent Aug 11 '21 at 22:58
-
@Tomer I don't see where I've written that. It probably should be $Av = \mu v$ if I've written it somewhere. – Theo Bendit Aug 12 '21 at 02:08
-
@Tomer Because of the highlighted assumption. $B$ has an eigenvalue/eigenvector pair, as it is an operator on the non-trivial complex vector space $W$, meaning that there exists some $\mu \in \mathbb{C}$ and $v \in W$ (non-zero) such that $Bv = \mu v$. But, $B$ is just the restriction of $A$ to $W$, so $Av$ and $Bv$ are the same thing. – Theo Bendit Aug 12 '21 at 06:05
As Theo says in the comments, we need to assume that we are working over an algebraically closed field, so that to every eigenvalue there is also an eigenvector.
EDIT: we actually need two closely related results
- For every eigenvalue there is also an eigenvector (we use this below to find an eigenvector $v$ belonging to the eigenvalue $\mu$ of $A^k$)
- For every linear map from a space to itself there is an eigenvector for that map. (We use this to find the eigenvector $\sum \alpha_i A^iv$ for $A$ below.)
Now which of these two statements is somewhat obvious and which requires algebraic closedness of the field depends on your definition of eigenvalue:
If you say that an eigenvalue is a root of the characteristic polynomial, then statement 1 requires algebraic closure: in general only eigenvalues that lie in the field over which the vector space is defined come with eigenvectors. (And when defining an eigenvalue this way, it lives in general in some extension of the field.) Statement 2 then follows from statement 1 and the obvious fact that every polynomial has some root in an algebraically closed field.
If you say that an eigenvalue is the number appearing in the definition of eigenvector, then statement 1 is trivial, but statement 2 still requires an algebraically closed field, because the claim that every operator even has an eigenvalue is not obvious in this case. What we need is that every root of the characteristic polynomial that lies in the field over which the vector space is defined is actually an eigenvalue, in the sense of having a corresponding eigenvector. Assuming this for now, it is easy to see the point of requiring algebraic closure: in that case all roots of the characteristic polynomial lie in the field we are working over.
So there is one loose end here: in both cases we need the following result.
Let a linear map $A$ act on a vector space $V$ defined over a field $K$, and let $\lambda$ be a root of the characteristic polynomial of $A$ that actually lies in the field $K$ (rather than 'just' in some field extension). Then $A$ has a non-zero eigenvector with eigenvalue $\lambda$.
The proof of this result looks very much like one part of Theo's answer:
Since $\lambda \in K$, the map $A - \lambda I$ is also a well-defined linear operator on $V$, and since $\lambda$ is a root of the characteristic polynomial of $A$, its determinant is $0$. Now by the magic of determinants this means it has a non-trivial kernel, and it is easy to see that any non-zero element in the kernel of $A - \lambda I$ is an eigenvector of $A$ with eigenvalue $\lambda$.
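Here is a tiny numerical illustration of this determinant argument (the matrix and the root $\lambda$ are just an example over $K = \Bbb{R}$):

```python
import numpy as np

# Example over K = R: lam = 3 is a root of the characteristic polynomial of A
# (A is upper triangular, so its diagonal entries are exactly those roots).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 3.0

M = A - lam * np.eye(2)
print(np.linalg.det(M))              # 0: M is singular, so ker(M) is non-trivial

# A non-zero kernel element, read off from the SVD (right-singular vector
# belonging to the zero singular value):
_, s, Vh = np.linalg.svd(M)
v = Vh[-1]
print(A @ v - lam * v)               # ~ 0, so v is an eigenvector with eigenvalue lam
```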
END of EDIT
Let $v$ be an eigenvector for $A^k$ with eigenvalue $\mu$ and let's look at the subspace $V = \operatorname{span}(v, Av, A^2v, \ldots)$ of the (possibly) bigger space on which $A$ acts. Obviously $V$ is at most $k$-dimensional, but there is a reason that I write it as the span of an infinite set of vectors. This reason is that it helps you realize that $A$ maps $V$ to itself, and as a result it contains an eigenvector of $A$, with eigenvalue $\lambda$, say.
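To get a feel for such a subspace $V$, here is a small concrete example (the cyclic shift matrix is just an illustrative choice, not part of the argument):

```python
import numpy as np

# Cyclic shift on C^3: P e1 = e2, P e2 = e3, P e3 = e1, hence P^3 = I.
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)
k = 3
v = np.array([1, 0, 0], dtype=complex)   # eigenvector of P^3 = I with mu = 1

# The span of v, Pv, P^2 v, P^3 v, ... stops growing after P^{k-1} v:
cols = np.column_stack([v, P @ v, P @ P @ v, np.linalg.matrix_power(P, 3) @ v])
print(np.linalg.matrix_rank(cols))       # 3 = k, even though 4 vectors were stacked

# Eigenvalues of P (acting on V = C^3 here) are the cube roots of unity;
# each of them raised to the power k gives back mu = 1.
lams = np.linalg.eigvals(P)
print(np.round(lams**k, 10))             # all 1 (up to rounding)
```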
I will assume that $V$ is actually $k$-dimensional for simplicity: if it is lower-dimensional, I think this implies that there is a smaller number $k'$ such that $v$ was already an eigenvector of $A^{k'}$, and hence we can pretend that we have already solved that case somewhere in the past. (You can make this formal by stating that we are working with induction on $k$.)
Okay, so under the simplifying assumption we are in the weird situation that for some scalars $\alpha_0, \ldots, \alpha_{k-1}$, not all zero, we have that
$$A(\alpha_0 v + \alpha_1 Av + \ldots + \alpha_{k-1}A^{k-1}v) = \lambda(\alpha_0 v + \alpha_1 Av + \ldots + \alpha_{k-1}A^{k-1}v)$$
Now since $v$ is an eigenvector for $A^k$ we can re-express the left-hand side as
$$\alpha_0 Av + \alpha_1 A^2v + \ldots + \alpha_{k-2}A^{k-1}v + \alpha_{k-1}\mu v$$
so that the full equality gives many small 'scalar' equalities
$\alpha_0 = \lambda \alpha_1; \; \alpha_1 = \lambda \alpha_2; \; \ldots; \; \alpha_{k-2} = \lambda \alpha_{k-1}$ and $\mu \alpha_{k-1} = \lambda \alpha_0$.
(This step assumes that $v, Av$, etc are linearly independent, which is equivalent to the simplifying assumption of $\dim V = k$ I made above.)
Now systematically substituting these small scalar equalities into each other we eliminate the $\alpha$'s one by one and end up with
$$\mu = \lambda^k$$
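For concreteness, here is the elimination written out for $k = 3$ (a worked instance of the substitution, not an extra assumption): the scalar equalities are $\alpha_0 = \lambda\alpha_1$, $\alpha_1 = \lambda\alpha_2$ and $\mu\alpha_2 = \lambda\alpha_0$. Substituting the second into the first gives $\alpha_0 = \lambda^2\alpha_2$, and feeding that into the last one gives $$\mu\alpha_2 = \lambda\alpha_0 = \lambda^3\alpha_2.$$ Since the chain forces $\alpha_1$ and $\alpha_0$ to vanish whenever $\alpha_2$ does, and not all $\alpha_i$ are zero, we have $\alpha_2 \neq 0$; cancelling it gives $\mu = \lambda^3$, in line with the general formula.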

-
Why, if $A$ maps $V$ to itself, must $V$ contain an eigenvector of $A$? – Tomer Aug 12 '21 at 03:30
-
@Tomer Right, this relates to the issue I was discussing with Theo below the other answer; I'll edit the beginning of my answer in the same spirit to explain this. – Vincent Aug 12 '21 at 10:33