
I'm trying to prove that the Lie algebra $sl(2,R)$ is not isomorphic to $su(2)$ by showing that any Lie algebra isomorphism between them would have determinant $0$, hence a contradiction. However, my computation gives the ridiculous result that such an isomorphism always exists. Please help me check what's wrong here.

I take the usual basis of $sl(2,R)$ such that $$[X_1,X_2]=X_2$$ $$[X_1,X_3]=-X_3$$ $$[X_2,X_3]=2X_1$$ and the usual basis of $su(2)$ such that $$[Y_i,Y_j]=\epsilon_{ijk}Y_k.$$ Now suppose $sl(2,R)$ is isomorphic to $su(2)$. Then there must exist a basis of $sl(2,R)$, which can be written as $$X'_1=a_1X_1+b_1X_2+c_1X_3$$ $$X'_2=a_2X_1+b_2X_2+c_2X_3$$ $$X'_3=a_3X_1+b_3X_2+c_3X_3,$$ satisfying $$[X'_i,X'_j]=\epsilon_{ijk}X'_k.$$

We then have $$[a_1X_1+b_1X_2+c_1X_3,a_2X_1+b_2X_2+c_2X_3]\\=a_1b_2[X_1,X_2]+a_1c_2[X_1,X_3]+b_1a_2[X_2,X_1]+b_1c_2[X_2,X_3]+c_1a_2[X_3,X_1]+c_1b_2[X_3,X_2]\\=a_1b_2X_2+a_1c_2(-X_3)+b_1a_2(-X_2)+b_1c_2(2X_1)+c_1a_2(X_3)+c_1b_2(-2X_1)\\=(2b_1c_2-2c_1b_2)X_1+(a_1b_2-b_1a_2)X_2+(c_1a_2-a_1c_2)X_3,$$ which implies $$a_3=2b_1c_2-2b_2c_1$$ $$b_3=a_1b_2-a_2b_1$$ $$c_3=a_2c_1-a_1c_2$$
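(As a sanity check on this expansion, independent of the actual argument, here is a small sympy sketch; it assumes the standard $2\times2$ realization $X_1=\frac12\operatorname{diag}(1,-1)$, $X_2=E_{12}$, $X_3=E_{21}$, which satisfies the three bracket relations above.)

```python
from sympy import Matrix, Rational, symbols, zeros

# Standard 2x2 realization of the sl(2,R) basis used above:
# [X1,X2] = X2, [X1,X3] = -X3, [X2,X3] = 2*X1.
X1 = Matrix([[Rational(1, 2), 0], [0, -Rational(1, 2)]])
X2 = Matrix([[0, 1], [0, 0]])
X3 = Matrix([[0, 0], [1, 0]])

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A * B - B * A

assert comm(X1, X2) == X2 and comm(X1, X3) == -X3 and comm(X2, X3) == 2 * X1

a1, b1, c1, a2, b2, c2 = symbols('a1 b1 c1 a2 b2 c2')
Xp1 = a1 * X1 + b1 * X2 + c1 * X3
Xp2 = a2 * X1 + b2 * X2 + c2 * X3

# The hand-computed expansion of [X'_1, X'_2]:
claimed = (2*b1*c2 - 2*c1*b2) * X1 + (a1*b2 - b1*a2) * X2 + (c1*a2 - a1*c2) * X3
assert (comm(Xp1, Xp2) - claimed).expand() == zeros(2, 2)
print("bracket expansion confirmed")
```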

The other $6$ conditions are obtained by cyclically permuting $(1,2,3)$.

By the above relations, I finally get $$\det(A)=\frac{1}{2}a_1^2+2b_1c_1= \frac{1}{2}a_2^2+2b_2c_2= \frac{1}{2}a_3^2+2b_3c_3\\=\frac{1}{2}a_1^2+\frac{1}{2}a_2^2+\frac{1}{2}a_3^2=b_1c_1+b_2c_2+b_3c_3 $$ by all possible cofactor expansions, where $A$ is the coefficient matrix $\pmatrix{a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3}$. This result does not seem to force $\det(A)=0$, and I'm not sure what is going wrong. I have checked my calculations more than $10$ times to make sure nothing is off. Please help me, really thanks!
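(For reference, here is a small sympy check of the first of these equalities: substituting the three relations that come from $[X'_2,X'_3]=X'_1$ makes $\det(A)$ and $\frac{1}{2}a_1^2+2b_1c_1$ literally equal as polynomials in the remaining variables; the other equalities follow in the same way by the cyclic symmetry.)

```python
from sympy import Matrix, Rational, expand, symbols

a1, b1, c1, a2, b2, c2, a3, b3, c3 = symbols('a1 b1 c1 a2 b2 c2 a3 b3 c3')
A = Matrix([[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]])

# Relations coming from [X'_2, X'_3] = X'_1 (cyclic images of the displayed ones):
#   a1 = 2*(b2*c3 - b3*c2),  b1 = a2*b3 - a3*b2,  c1 = a3*c2 - a2*c3
row1 = {a1: 2*(b2*c3 - b3*c2), b1: a2*b3 - a3*b2, c1: a3*c2 - a2*c3}

lhs = A.det().subs(row1)
rhs = (Rational(1, 2) * a1**2 + 2 * b1 * c1).subs(row1)
assert expand(lhs - rhs) == 0
print("det(A) = a1^2/2 + 2*b1*c1 holds on every solution of the nine equations")
```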

  • The two Lie algebras have isomorphic complexifications (you may have heard of ladder operators in a course on quantum mechanics). This means that there exists a solution with complex values for $a_i, b_i, c_i$, $i=1,2,3$. Therefore the argument absolutely needs to show that no real solutions can exist. I doubt the determinant will suffice for this reason. – Jyrki Lahtonen Feb 07 '21 at 22:32
  • Hmmm, so I should actually argue that the above equalities have no real solutions? I've tried for a while but have no clue so far. BTW, how are ladder operators related to complexifications? @JyrkiLahtonen – GK1202 Feb 07 '21 at 22:53
  • Related: https://math.stackexchange.com/q/1388415/96384, https://math.stackexchange.com/q/1466917/96384, and other duplicates. – Torsten Schoeneberg Feb 07 '21 at 23:14
  • Maybe this is already known to you but: if you really need to prove this statement (in any way) then you could show that $\mathfrak{su}(2)$ and $\mathfrak{sl}(2,\Bbb{R})$ have non-equivalent Killing Forms. – Alekos Robotis Feb 08 '21 at 00:29
  • Wait...doesn't the sign-flip argument actually give a contradiction? Since we would have det($A$)=$-$det($A$) in this case. @TorstenSchoeneberg – GK1202 Feb 08 '21 at 00:35
  • @AlekosRobotis Thank you for your kind suggestion, however, I haven’t studied Killing form yet. – GK1202 Feb 08 '21 at 00:35
  • (Deleted earlier comment about sign flip which is actually wrong as it would not preserve the bracket relations.) But: A complex solution to the original equations is $a_1=i, b_2=-b_3=1/2, c_2=c_3=i/2$ (all others $=0$). Maybe you can cross-check your calculations with that. – Torsten Schoeneberg Feb 08 '21 at 02:17
  • But why should I care whether it preserves the Lie bracket? Doesn't the determinant satisfy this property naturally? I use those conditions to derive a formula for det($A$) which gives det($A$)=$-$det($A$), hence det($A$)=$0$; I don't see where my logic goes wrong. @TorstenSchoeneberg – GK1202 Feb 08 '21 at 05:28
  • Happy to explain my own previous mistake: Given one solution, multiplying everything through with $-1$ (i.e. mapping $X'_i \mapsto X''_i := -X'_i$) changes e.g. the relation $[X'_1, X'_2]=X'_3$ to $[X''_1, X''_2] =\color{red}{-}X''_3$, so one cannot use the earlier computations and does not get a contradiction. Besides, I thought I just gave an example where obviously $det(A) \neq -det(A)$. – Torsten Schoeneberg Feb 08 '21 at 05:39
  • Thank you for your explanation. Hmmm...things got stuck again...I verified my calculation again based on your suggestions, and nothing seems wrong. I'm really not sure where to go now :( – GK1202 Feb 08 '21 at 06:05
  • @GK1202 Have you really written down all equations in the nine variables coming from $A([x,y])=[A(x),A(y)]$? For example, the zero map ($A=0$, i.e., all nine variables equal to zero) should satisfy all equations except for $\det(A)\neq 0$. – Dietrich Burde Feb 08 '21 at 12:54
  • And sorry again, in the complex solution I wrote I messed up rows and columns. It should be $b_2=-c_2=1/2$ and $b_3=c_3=i/2$. – Torsten Schoeneberg Feb 08 '21 at 15:47
  • As I wrote, you can see that $A=0$ satisfies all conditions. I have calculated this several times, please trust me. The nine conditions have already been written in the question, just above my formula for det($A$). @DietrichBurde – GK1202 Feb 08 '21 at 18:51
  • Since you arrive at a contradiction, there must be a mistake in your equations. My equations look different at first sight, but it is not easy to see this immediately. My equations give an isomorphism over $\Bbb C$, but $\det(A)=0$ over $\Bbb R$, after eliminating several variables and doing case distinctions. I trust my computations, too. – Dietrich Burde Feb 08 '21 at 19:16
  • I have added a messy expansion of one Lie bracket relation. If my calculation of this is correct, then I really don't know what's wrong here. Since you say your computation gives the right result, would it be possible to type up your formula for det($A$)? This question really makes me crazy. But whatever, thanks for your patience so far. @DietrichBurde – GK1202 Feb 08 '21 at 20:46
  • The simplest solution I know uses computation of signature of the Killing form. – Moishe Kohan Feb 08 '21 at 23:12
  • @DietrichBurde: I'm pretty sure that there is a solution e.g. over $\mathbb Q_3$ even though $x^2+y^2 \neq 0$ for all non-trivial $(x,y)$. It suffices to have $x^2+y^2=-1$ or equivalently non-trivial solutions to $x^2+y^2+z^2=0$ in the ground field. See answer. – Torsten Schoeneberg Feb 20 '21 at 05:16
  • @TorstenSchoeneberg Yes, very nice. I definitely obtained a contradiction over the real numbers, but I don't remember the actual equations I had. It was a relatively simple computation, and I checked it again by computer. I also obtained several complex solutions. – Dietrich Burde Feb 20 '21 at 11:57

1 Answer


Your calculations are entirely correct. They give no contradiction at all. They are just not quite finished, because over many fields, including $\mathbb R$, one can show with further calculations that your set of nine equations in nine variables has no solutions except the trivial one $a_i=b_i=c_i=0$, which of course we should exclude via the tenth condition $$\det(\pmatrix{a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3}) \neq 0.$$

Indeed, assume our variables $a_i,b_i,c_i$ take values in a field $K$ ($\mathrm{char}(K)\neq 2$) where there exist $x \in K, y \in K^*$ such that $x^2+y^2=-1$. Then

$$\pmatrix{a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3} =\pmatrix{-1/y&x/2y&x/2y\\0&1/2&-1/2\\-x/y&-1/2y&-1/2y} $$

is a solution to the system. In particular if $K$ contains an element $i$ with $i^2=-1$ (like $K= \mathbb C$), we can choose $x=0, y=i$ and have the solution

$$\pmatrix{a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3} =\pmatrix{i&0&0\\0&1/2&-1/2\\0&i/2&i/2} .$$
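If you want to cross-check this family against the nine equations by machine, here is a small sympy sketch; it parametrizes $y=\sqrt{-1-x^2}$ purely so that the constraint $x^2+y^2=-1$ is built in, and it uses the standard $2\times 2$ matrix realization of the $X_i$ (an assumption of the check only, not of the argument).

```python
from sympy import Matrix, Rational, simplify, sqrt, symbols, zeros

x = symbols('x')
y = sqrt(-1 - x**2)            # builds in the constraint x**2 + y**2 == -1

# 2x2 realization of the X_i: [X1,X2] = X2, [X1,X3] = -X3, [X2,X3] = 2*X1
X = [Matrix([[Rational(1, 2), 0], [0, -Rational(1, 2)]]),
     Matrix([[0, 1], [0, 0]]),
     Matrix([[0, 0], [1, 0]])]

def comm(A, B):
    return A * B - B * A

# the claimed solution (rows = coefficients of X'_1, X'_2, X'_3)
A = Matrix([[-1/y,   x/(2*y),         x/(2*y)],
            [0,      Rational(1, 2), -Rational(1, 2)],
            [-x/y,  -1/(2*y),        -1/(2*y)]])

Xp = [A[i, 0]*X[0] + A[i, 1]*X[1] + A[i, 2]*X[2] for i in range(3)]

# check [X'_i, X'_j] = eps_{ijk} X'_k on the three cyclic pairs
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert (comm(Xp[i], Xp[j]) - Xp[k]).applyfunc(simplify) == zeros(2, 2)

print(simplify(A.det()))       # -1/2, in particular the basis change is invertible
```

(The determinant comes out as $-\tfrac12$, matching the formula $\frac12a_1^2+2b_1c_1 = \frac{1+x^2}{2y^2}$ from the question.)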


So the existence or non-existence of solutions depends on the arithmetic of the field $K$ (which otherwise can be any field of characteristic $\neq 2$). The precise statement is:

Theorem: For a non-trivial solution to the equations to exist, it is necessary and sufficient that $-1$ is a sum of two squares in $K$.

(This condition is sometimes phrased as "the level (French: niveau, German: Stufe) of $K$ is $\le 2$". It trivially includes the case that $-1$ is itself a square in $K$, but is strictly weaker than that: e.g. in all $p$-adic fields $\mathbb Q_p$ with $p \equiv 3$ mod $4$, one can write $-1$ as sum of two squares, but it is no square itself.)

Well, the examples above showed sufficiency, and what we want to see now is the "necessary" part (which obviously excludes any solutions for $K=\mathbb R$ or $\mathbb Q$). The existing solutions above, plus considerations of $ad$-eigenvalues of elements in the Lie algebra, guided me first to the following lemmata which exclude some special cases. They can quickly be derived from what you have. One just has to know what to look for.

Lemma 1: Assume there exists a non-trivial solution $\pmatrix{a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3}$ with some $b_i=0$ or $c_i=0$. Then $a_i^2=-1$.

To prove this e.g. for $b_1=0$, insert the equations for $b_2, b_3$ into the one for $a_1$ and compare with $$b_2c_2+b_3c_3 = \frac{1}{2}a_1^2 + \underbrace{b_1c_1}_0$$ via your equations for the determinant; this gives $a_1 = -a_1^3$. Since $a_1=b_1=0$ would force the trivial solution, we have $a_1 \neq 0$ and hence $a_1^2=-1$.

(Conceptual reason: If $b_i$ or $c_i$ is zero, the non-zero $ad$-eigenvalues of $a_iX_1+b_iX_2+c_iX_3$ are just $\pm a_i$ (in general they are $\pm\sqrt{a_i^2+4b_ic_i}$). But the non-zero $ad$-eigenvalues of all our $Y_i$ are $\pm\sqrt{-1}$.) $\qquad \square$

Likewise, one can derive (but we will not use it later)

Lemma 1': Assume there exists a non-trivial solution $\pmatrix{a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3}$ with some $a_i=0$. Then for $\{j,k\} = \{1,2,3\}\setminus\{i\}$, we have $a_j^2+a_k^2=-1$.

To prove this e.g. for $a_1=0$, insert the equations for $b_2, b_3$ into the one for $b_1$, which is $\neq 0$ (if it were $0$, Lemma 1 would give $a_1^2=-1$, contradicting $a_1=0$).

(Conceptual reason: Now we compute $\pm\sqrt{-1}$, the $ad$-eigenvalues of $a_iX_1+b_iX_2+c_iX_3$, as $\pm 2\sqrt{b_ic_i}$. Via the determinant equations, the square of this is $=a_j^2+a_k^2$.) $\qquad \square$

One would hope that with enough effort, one can derive a similar contradiction in general; however, I have not been successful with that. So instead I tried to reduce to these special cases and got

Lemma 2: Let $m, n \in K$ with $m^2+n^2=1$. Then $$Y_1' := mY_1+nY_2, \qquad Y'_2:= nY_1-mY_2, \qquad Y'_3 := -Y_3$$ also satisfy $[Y'_i, Y'_j] = \epsilon_{ijk}Y'_k$, or in other words $$\pmatrix{a'_1&b'_1&c'_1\\a'_2&b'_2&c'_2\\a'_3&b'_3&c'_3} := \pmatrix{ma_1+na_2&mb_1+nb_2&mc_1+nc_2\\na_1-ma_2&nb_1-mb_2&nc_1-mc_2\\-a_3&-b_3&-c_3}$$ is another solution to your equations.

Proof: Direct check. I derived this by just starting with $Y'_1 := mY_1+nY_2$ and computing what restrictions we need to keep the relations. In hindsight, over $K=\mathbb R$, this is just choosing an angle $\theta$, setting $m=\cos(\theta), n=\sin(\theta)$, and applying one of the simplest non-trivial isometries $\pmatrix{\cos(\theta)&\sin(\theta)&0\\ \sin(\theta)&-\cos(\theta)&0\\0&0&-1} \in SO(3)$ to the basis $Y_1,Y_2, Y_3$. However, to deal with far more general fields than $\mathbb R$, the above purely algebraic formulation will be handy.
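(A quick machine cross-check of the bracket claim in Lemma 2, at the level of the $Y_i$; the coefficient-matrix reformulation then follows by linearity. The sketch below assumes the standard realization $Y_j=-\tfrac{i}{2}\sigma_j$ by Pauli matrices and parametrizes $n=\sqrt{1-m^2}$ so that $m^2+n^2=1$ is built in.)

```python
from sympy import I, Matrix, simplify, sqrt, symbols, zeros

# Pauli matrices; Y_j = -(i/2)*sigma_j satisfies [Y_i, Y_j] = eps_{ijk} Y_k
sigma = [Matrix([[0, 1], [1, 0]]),
         Matrix([[0, -I], [I, 0]]),
         Matrix([[1, 0], [0, -1]])]
Y = [-I / 2 * s for s in sigma]

def comm(A, B):
    return A * B - B * A

# sanity check of the su(2) relations themselves
assert comm(Y[0], Y[1]) == Y[2] and comm(Y[1], Y[2]) == Y[0] and comm(Y[2], Y[0]) == Y[1]

m = symbols('m')
n = sqrt(1 - m**2)             # builds in the constraint m**2 + n**2 == 1
Yp = [m*Y[0] + n*Y[1], n*Y[0] - m*Y[1], -Y[2]]

# the rotated basis satisfies the same relations
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert (comm(Yp[i], Yp[j]) - Yp[k]).applyfunc(simplify) == zeros(2, 2)
print("Lemma 2 bracket relations confirmed")
```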

But first let's quickly prove

Proposition: For $K= \mathbb R$, there are no non-trivial solutions.

Proof: Assume there is one. Because $-1$ is not a square in $\mathbb R$, Lemma 1 tells us that all $b_i \neq 0$ (and all $c_i \neq 0$). Then apply Lemma 2 with $m := \dfrac{-b_2}{\sqrt{b_1^2+b_2^2}}, n := \dfrac{b_1}{\sqrt{b_1^2+b_2^2}}$, which gives us a new solution, this time with $b'_1 =0$ but $b'_3 \neq 0$; by Lemma 1 this would force $(a'_1)^2=-1$, which is impossible in $\mathbb R$. QED.

The same idea, but with a little more work, finishes the

Proof of the Theorem: So suppose $-1$ is not a sum of two squares in $K$, but assume there is a non-trivial solution. Again, Lemma 1 (together with the assumption on $K$) gives that all $b_i \neq 0$. We further have $b_1^2+b_2^2 \neq 0$ (otherwise $-1 = \left(\dfrac{b_1}{b_2}\right)^2$). Consider $L := K(\sqrt{b_1^2+b_2^2})$, which is either $K$ itself (as it was for $K=\mathbb R$; then the proof finishes as above) or a quadratic extension of $K$.

I claim that in this case, $-1$ is still not a square in $L$. For if it were, then there would exist $d,e \in K$ such that $-1= (d+e\sqrt{b_1^2+b_2^2})^2 = d^2 + (eb_1)^2 +(eb_2)^2 +2de\sqrt{b_1^2+b_2^2}$. Since the last summand must be $=0$, either $d$ or $e$ must be $=0$, and in both cases we would have written $-1$ as a sum of two squares in $K$, which we had assumed not to be the case. (Actually, one can even show that if $-1$ is a sum of three squares, then it can already be written as sum of two squares, but we don't need that here.)

But now our assumed non-trivial solution is one over $L$ as well, and we can apply Lemma 2 over that field with its elements $m := \dfrac{-b_2}{\sqrt{b_1^2+b_2^2}}, n := \dfrac{b_1}{\sqrt{b_1^2+b_2^2}}$, giving a new solution in $L$ with $b'_1=0$, which via Lemma 1 contradicts what we just established, that $-1$ is not a square in $L$. QED.


For a conceptual reason why solutions of quadratic forms over $K$ pop up here, compare e.g. https://math.stackexchange.com/a/3863613/96384: The relations describing the $X_i$, i.e. what we call $\mathfrak{sl}_2(K)$, correspond to the split quaternion algebra $\left(\dfrac{1,1}{K}\right) \simeq M_2(K)$; the relations describing the $Y_i$, i.e. what we call $\mathfrak{su}_2$, correspond to the quaternion algebra $\left(\dfrac{-1,-1}{K}\right)$; then the question is equivalent to whether the quaternion algebra $\left(\dfrac{-1,-1}{K}\right)$ is split, which is equivalent to whether $-1$ is a norm of the field extension $K(\sqrt{-1})\vert K$, which is equivalent to whether the quadratic form $X^2+Y^2+Z^2$ has a non-trivial zero in $K^3$, which is equivalent to the level of $K$ being $\le 2$.

Finally, to link this to the easy argument via the Killing form suggested by some people in the comments: the Killing form of $\mathfrak{sl}_2(K)$ is isotropic (and hence, up to equivalence and scaling, the unique non-degenerate isotropic form in three variables over $K$), whereas the Killing form of $\mathfrak{su}_2$ as described is, up to a scalar factor, $-X^2-Y^2-Z^2$; and again we see where the condition on $K$ comes from.