1

We know the following proposition is true; the proof, together with an explicit description of the eigenvectors of $\operatorname{ad}x$, is here.

Let $x\in \operatorname{gl}(n,F)$ be diagonalizable with $n$ eigenvalues $a_1,\ldots,a_n$ in $F$. The eigenvalues of $\operatorname{ad}x$, where $\operatorname{ad}x(y):=[x,y]=xy-yx$, are precisely the $n^2$ scalars $a_i-a_j$ ($1\leq i,j\leq n$).

What is the result if $x$ is not diagonalizable? We know $x$ can always be brought into Jordan canonical form. I have worked out the $2\times2$ and $3\times3$ Jordan-form cases, and I would like to know the general solution.
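As a numeric sanity check of what the answers below establish (a NumPy sketch; the particular matrix $x$ and the Kronecker-product formula for $\operatorname{ad}x$ are my own illustrative choices), one can verify for a non-diagonalizable $x$ that the spectrum of $\operatorname{ad}x$ is still contained in the set of differences $a_i-a_j$:

```python
import numpy as np

# Illustrative check (not a proof): for a non-diagonalizable x, the spectrum
# of ad x is contained in the set of differences a_i - a_j.
def ad_matrix(x):
    """Matrix of ad x : y -> xy - yx on gl(n), in the column-stacking
    convention vec(xy) = kron(I, x) vec(y), vec(yx) = kron(x.T, I) vec(y)."""
    n = x.shape[0]
    I = np.eye(n)
    return np.kron(I, x) - np.kron(x.T, I)

# x = J_3(2) + J_1(5): a 3x3 Jordan block for eigenvalue 2 and a 1x1 block for 5
x = np.array([[2., 1, 0, 0],
              [0, 2, 1, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 5]])
a = [2., 2., 2., 5.]               # eigenvalues of x, with multiplicity

# The product of (ad x - (a_i - a_j)I) over all n^2 pairs annihilates ad x,
# so every eigenvalue of ad x is one of the differences a_i - a_j.
A, P = ad_matrix(x), np.eye(16)
for ai in a:
    for aj in a:
        P = P @ (A - (ai - aj) * np.eye(16))
print(np.max(np.abs(P)) < 1e-6)    # True
```

(Computing the eigenvalues of the $16\times16$ matrix directly is numerically delicate for nilpotent parts, so the annihilating-product check is used instead.)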

Hans
  • 9,804

2 Answers

2

The same conclusion holds.

Using the same argument as in the linked post: if $xv=\lambda v$ and $x^t w = \mu w$, where $v$, $w$ are nonzero (eigenvectors), then $vw^t$ is an eigenvector of $\operatorname{ad}(x)$. Hence $\lambda -\mu$ is an eigenvalue of $\operatorname{ad}(x)$.
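This rank-one construction is easy to check numerically; a small sketch (the random matrix and seed are arbitrary choices of mine):

```python
import numpy as np

# Numeric illustration of the rank-one construction: if x v = lam v and
# x^T w = mu w, then ad(x)(v w^T) = (lam - mu) v w^T.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))

lam_all, V = np.linalg.eig(x)      # eigenpairs of x
mu_all, W = np.linalg.eig(x.T)     # eigenpairs of x^T
lam, v = lam_all[0], V[:, :1]      # v as a 4x1 column
mu, w = mu_all[0], W[:, :1]        # w as a 4x1 column

E = v @ w.T                        # the rank-one matrix v w^T
print(np.allclose(x @ E - E @ x, (lam - mu) * E))  # True
```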

Now we only need to show that if $x$ is nilpotent, then $\operatorname{ad}(x)$ has only the eigenvalue $0$; in other words, $\operatorname{ad}(x)$ is nilpotent (this simple fact is used in the proof of Engel's theorem in Lie algebra theory):

$L_x(y):=xy$ and $R_x(y):=yx$ are both nilpotent and commute; therefore their difference $L_x-R_x=\operatorname{ad}(x)$ is also nilpotent.
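Concretely, if $x^n=0$, then every term of the binomial expansion of $(L_x-R_x)^{2n-1}$ applied to $y$ has the form $x^k y x^{2n-1-k}$ with $k\ge n$ or $2n-1-k\ge n$, hence vanishes. A quick numeric illustration (the Jordan-block $x$ and random $y$ are my own choices):

```python
import numpy as np

# Numeric illustration that ad(x) is nilpotent for nilpotent x: if x^n = 0,
# then ad(x)^(2n-1) = 0, since every term x^k y x^(2n-1-k) of the binomial
# expansion has k >= n or 2n-1-k >= n.
n = 4
x = np.diag(np.ones(n - 1), k=1)   # a single nilpotent Jordan block, x^4 = 0

rng = np.random.default_rng(1)
y = rng.standard_normal((n, n))    # an arbitrary test matrix
for _ in range(2 * n - 1):         # apply ad(x) 2n-1 = 7 times
    y = x @ y - y @ x
print(np.allclose(y, 0))           # True
```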


To be slightly more rigorous, let $v_1, \dotsc, v_l$ (resp. $w_1, \dotsc, w_l$) be linearly independent sets of generalized eigenvectors of $x$ (resp. $x^t$). We claim that the $v_iw_j^t$ form a linearly independent set of generalized eigenvectors of $\operatorname{ad}(x)$; the linear independence follows from that of the $v_i$'s and the $w_j$'s.

And if $(x-\lambda_i I)^mv_i=0$ and $(x^t-\lambda_j I)^n w_j=0$, then we have $$\big(L_x-R_x - (\lambda_i-\lambda_j)I\big)^{m+n} (v_iw_j^t) = \big((L_x-\lambda_i I) - (R_x-\lambda_j I)\big)^{m+n}(v_iw_j^t).$$

By the binomial theorem (note that $L_x-\lambda_i I$ and $R_x-\lambda_j I$ commute), each term in the expansion of $\big((L_x-\lambda_i I) - (R_x-\lambda_j I)\big)^{m+n}$ contains either $(L_x-\lambda_i I)^p$ with $p\ge m$ or $(R_x-\lambda_j I)^q$ with $q\ge n$ as a factor. In the first case $$(L_x-\lambda_i I)^p(v_iw_j^t)=\big((x-\lambda_i I)^p v_i\big)w_j^t=O,$$ while in the latter $$(R_x-\lambda_j I)^q(v_iw_j^t)=v_i\big((x^t-\lambda_j I)^q w_j\big)^t=O.$$

Therefore $v_iw_j^t$ is a generalized eigenvector of $\operatorname{ad}(x)$ (corresponding to eigenvalue $\lambda_i-\lambda_j$).
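The claim above can be checked numerically on a concrete Jordan-form example (the matrix $x$ and the vectors $v$, $w$ below are my own illustrative choices):

```python
import numpy as np

# Numeric check of the generalized-eigenvector claim for
# x = J_3(2) + J_2(5) in Jordan form.
x = np.array([[2., 1, 0, 0, 0],
              [0, 2, 1, 0, 0],
              [0, 0, 2, 0, 0],
              [0, 0, 0, 5, 1],
              [0, 0, 0, 0, 5]])
I5 = np.eye(5)

v = I5[:, 2]                       # (x - 2I)^3 v = 0, so m = 3, lambda_i = 2
w = I5[:, 3]                       # (x^T - 5I)^2 w = 0, so n = 2, lambda_j = 5
m, n, lam, mu = 3, 2, 2.0, 5.0

y = np.outer(v, w)                 # v w^T
for _ in range(m + n):             # apply (ad x - (lam - mu) I) m+n times
    y = (x @ y - y @ x) - (lam - mu) * y
print(np.allclose(y, 0))           # True: v w^T is a generalized eigenvector
```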


To answer the question from the comments: without loss of generality, assume $\lambda_i=\lambda_j=0$.

Suppose $x^m v = 0$ with $x^iv\ne0$ for $i=1, 2, \dotsc, m-1$, and $w^tx^n=0$ with $w^tx^i\ne0$ for $i=1, 2, \dotsc, n-1$. Then in the expansion of $(L_x-R_x)^{m+n-2}$ applied to $vw^t$, the only possibly nonzero term is $$(-1)^{n-1}{ {m+n-2}\choose {m-1} } L_x^{m-1}R_x^{n-1}(vw^t) = (-1)^{n-1}{ {m+n-2}\choose {m-1} }(x^{m-1}v)(w^tx^{n-1}).$$

If this term is zero, then since $x^{m-1}v$ and $w^tx^{n-1}$ are nonzero, their product $(x^{m-1}v)(w^tx^{n-1})$ is a nonzero matrix, so the scalar ${ {m+n-2}\choose {m-1} }$ must vanish; hence the base field has positive characteristic. Conversely, if the base field has characteristic $0$, then $p=m+n-1$ is the minimal integer such that $\operatorname{ad}(x)^p (vw^t)=0$.
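Over $\mathbb R$ (characteristic $0$), both the single surviving term and the minimality can be checked numerically on a nilpotent Jordan block (the sizes $m=3$, $n=2$ are my own illustrative choices):

```python
import numpy as np
from math import comb

# Characteristic-0 check on a 4x4 nilpotent Jordan block:
# ad(x)^(m+n-2)(v w^T) equals the single surviving binomial term
# (-1)^(n-1) C(m+n-2, m-1) (x^(m-1) v)(w^T x^(n-1)) and is nonzero,
# while ad(x)^(m+n-1)(v w^T) = 0.
N = 4
x = np.diag(np.ones(N - 1), k=1)   # nilpotent shift: x e_k = e_{k-1}
e = np.eye(N)
v, w = e[:, 2], e[:, 2]            # x^3 v = 0, x^2 v != 0; w^T x^2 = 0, w^T x != 0
m, n = 3, 2

y = np.outer(v, w)
for _ in range(m + n - 2):         # apply ad(x) m+n-2 = 3 times
    y = x @ y - y @ x

# the predicted single surviving term
pred = (-1) ** (n - 1) * comb(m + n - 2, m - 1) * np.outer(
    np.linalg.matrix_power(x, m - 1) @ v,
    np.linalg.matrix_power(x.T, n - 1) @ w)
print(np.allclose(y, pred) and not np.allclose(y, 0))  # True
print(np.allclose(x @ y - y @ x, 0))                   # True: index is m+n-1
```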

Just a user
  • 14,899
  • +1 Thank you for your answer. I already knew the second paragraph which is obvious. The nilpotent case was what I was after, thus the Jordan canonical form in the question. I made a mistake in my earlier computation for the $3\times3$ case. That gave me a strange result leading to this question. It is making sense after having corrected the mistake. Question for you: can you prove $p=m+n$ is the least $p$ so that $\big(\text{ad} x-(\lambda_i-\lambda_j)\big)^p v_iw^t_j=0$ if $m, n$ are for $L_x, R_x$ respectively? – Hans Mar 27 '24 at 06:39
  • 1
    Interesting question. We should aim to show $p=m+n-1$ which is clearly sufficient. This is true when the characteristic of the base field is $0$, and false otherwise. Details have been added to the answer. – Just a user Mar 27 '24 at 13:30
  • Why does $x^{m - 1}v \ne 0$ and $w^t x^{n - 1} \ne 0$ imply that $(x^{m - 1}v)(w^t x^{n - 1}) \ne 0$? Why does $x^m v = 0$ and $x^{m - 1}v \ne 0$ imply $w^t x^{n - 1} \ne 0$? (Did you mean the latter $n$ to be the same as the dimension of the underlying space?) – LSpice Mar 27 '24 at 13:36
  • If you have a column vector $a\not=0$ and a row vector $b\not=0$, then the matrix $ab$ is not zero. (This also follows from the more general fact that if $v_1, \cdots, v_n$ are linearly independent and $w_1, \cdots, w_m$ are linearly independent, then $\{v_iw_j^t\}_{i, j}$ is linearly independent, which was used elsewhere in the solution as well.) And yeah, I borrowed the usage of $n$ from your comment as the minimal exponent, but shouldn't use $n$ as both the dimension and the minimal exponent. – Just a user Mar 27 '24 at 13:45
  • Re, oh, right, I had the other order of multiplication in mind. (By the way, the use of $n$ was in a comment from the asker, not me.) – LSpice Mar 27 '24 at 13:51
  • 1
    @LSpice oh, sorry for the mistake. Thank you for the editing. – Just a user Mar 27 '24 at 14:01
  • Oh, darn. I made a silly mistake again when considering the nilpotent index of $\operatorname{ad}x$. I worried about the cancellation of the terms when $p<m+n-1$, overlooking the special property that there is only one non-zero term when $p=m+n-1$. I took the liberty of fixing the notational confusion, pointed out by @LSpice, of using the same letter for the dimension and the nilpotent index. – Hans Mar 27 '24 at 18:29
  • Here is a somewhat intriguing question. $(L-R)^p vw^t=0\implies (L-R)^{p+1} vw^t=0$. Given the nilpotent index $m+n-1$, we have $(\operatorname{ad}x)^pvw^t\neq0$ for all $p\le m+n-2$. Is there a way to see this non-vanishing result from the non-cancellation of the binomial expansion terms? – Hans Mar 27 '24 at 21:55
2

@Justauser has given a great answer that is the best way to think about this from a hands-on perspective, but there is also a machinery-heavy perspective from which the fact is unsurprising: $\operatorname{ad}$ is an algebraic representation, and algebraic representations carry semisimple (respectively, nilpotent) elements to semisimple (respectively, nilpotent) elements. Indeed, this is why the Jordan decomposition in an algebraic Lie algebra can be defined intrinsically, as opposed to an abstract Lie algebra, where we would want to consider every element of $\operatorname{Lie}(\mathbb R)$ nilpotent and every element of $\operatorname{Lie}(\mathbb R^\times)$ semisimple, but the non-algebraic isomorphism $\operatorname{Lie}(\exp) : \operatorname{Lie}(\mathbb R) \to \operatorname{Lie}(\mathbb R^\times)$ forces us to give up on morphisms respecting this notion. The relevant result in full generality is Theorem 4.4 of Borel - Linear algebraic groups; the special case in which you are interested is Proposition 1.24 of version 2.00 of Milne - Lie algebras, algebraic groups, and Lie groups.

LSpice
  • 2,687