Exercise 1.1
(i) Show that $[v, 0] = 0 = [0, v]$ for all $v \in L$.
Solution
Since the Lie bracket is bilinear, we have

$$[v, 0] = [v, 0 + 0] = [v, 0] + [v, 0]$$

and

$$[0, v] = [0 + 0, v] = [0, v] + [0, v]$$

Subtracting $[v, 0]$ from the first equation and $[0, v]$ from the second gives the desired result. $\square$


(ii) Suppose that $x, y \in L$ satisfy $[x, y] \neq 0$. Show that $x$ and $y$ are linearly independent over $F$.
Solution
We shall prove the contrapositive: assume that $x$ and $y$ are linearly dependent over $F$ and prove that $[x, y] = 0$. If either of $x, y$ is zero then $[x, y] = 0$ by part (i), so we may take $y = kx$ for some $k \in F$. Then

$$[x, y] = [x, kx] = k[x, x] = 0,$$

since $[x, x] = 0$ by property (L1). $\square$


Exercise 1.2
Let $F = \mathbf{R}$. The vector product $(x, y) \mapsto x \wedge y$ defines the structure of a Lie algebra on $\mathbf{R}^3$. We denote this Lie algebra by $\mathbf{R}^3_\wedge$. Explicitly, if $x = (x_1, x_2, x_3)$ and $y = (y_1, y_2, y_3)$ then

$$x \wedge y = (x_2y_3 - x_3y_2, x_3y_1 - x_1y_3, x_1y_2 - x_2y_1).$$

Convince yourself that $\wedge$ is bilinear. Then check that the Jacobi identity holds. Hint: If $x \cdot y$ denotes the dot product of the vectors $x, y \in \mathbf{R}^3$, then

$$x \wedge (y \wedge z) = (x \cdot z)y - (x \cdot y)z \quad \text{for all } x, y, z \in \mathbf{R}^3.$$

Solution
Let $a, b \in \mathbf{R}$ and $x, y, z \in \mathbf{R}^3$. To show that $\wedge$ is bilinear we must show that

$$(a x + b y) \wedge z = a (x \wedge z) + b (y \wedge z), \\ z \wedge (a x + b y) = a (z \wedge x) + b (z \wedge y).$$

So

$$\begin{aligned} (a x + b y) \wedge z &= (ax_1 + by_1, ax_2 + by_2, ax_3 + by_3) \wedge (z_1, z_2, z_3) \\ &= \big((ax_2 + by_2)z_3 - (ax_3 + by_3)z_2, \\ &\quad (ax_3 + by_3)z_1 - (ax_1 + by_1)z_3, \\ &\quad (ax_1 + by_1)z_2 - (ax_2 + by_2)z_1\big) \\ &= \big(a(x_2z_3 - x_3z_2) + b(y_2z_3 - y_3z_2), \\ &\quad a(x_3z_1 - x_1z_3) + b(y_3z_1 - y_1z_3), \\ &\quad a(x_1z_2 - x_2z_1) + b(y_1z_2 - y_2z_1)\big) \\ &= a(x_2z_3 - x_3z_2, x_3z_1 - x_1z_3, x_1z_2 - x_2z_1) \\ &\quad + b(y_2z_3 - y_3z_2, y_3z_1 - y_1z_3, y_1z_2 - y_2z_1) \\ &= a (x \wedge z) + b (y \wedge z). \end{aligned}$$

To show that $z \wedge (a x + b y) = a (z \wedge x) + b (z \wedge y)$ we could repeat the above calculation with the arguments reversed, but there is a shortcut. Calculate that

$$\begin{aligned} y \wedge x &= (y_2x_3 - y_3x_2, y_3x_1 - y_1x_3, y_1x_2 - y_2x_1) \\ &= - (y_3x_2 - y_2x_3, y_1x_3 - y_3x_1, y_2x_1 - y_1x_2) \\ &= - (x_2y_3 - x_3y_2, x_3y_1 - x_1y_3, x_1y_2 - x_2y_1) \\ &= - x \wedge y. \end{aligned}$$

So

$$z \wedge (a x + b y) = - ((a x + b y) \wedge z) = -(a (x \wedge z) + b (y \wedge z)) = a(z \wedge x) + b(z \wedge y).$$

Finally we must show that the Jacobi identity holds for $\wedge$. This is easy using the given identity:

$$x \wedge (y \wedge z) + y \wedge (z \wedge x) + z \wedge (x \wedge y) \\ = ((x \cdot z)y - (x \cdot y)z) + ((y \cdot x)z - (y \cdot z)x) + ((z \cdot y)x - (z \cdot x)y) = 0,$$

where everything cancels because the dot product is symmetric. $\quad \square$
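As a sanity check on the algebra above (not a proof, just a numerical sketch using NumPy's built-in cross product), we can test the Jacobi identity on random vectors:

```python
import numpy as np

def jacobi_defect(x, y, z):
    """x∧(y∧z) + y∧(z∧x) + z∧(x∧y); by the Jacobi identity this is 0."""
    return (np.cross(x, np.cross(y, z))
            + np.cross(y, np.cross(z, x))
            + np.cross(z, np.cross(x, y)))

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))  # three random vectors in R^3
print(np.allclose(jacobi_defect(x, y, z), 0))  # True
```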


Exercise 1.3
Suppose that $V$ is a finite-dimensional vector space over $F$. Write $\mathsf{gl}(V)$ for the set of all linear maps from $V$ to $V$. This is again a vector space over $F$, and it becomes a Lie algebra, known as the general linear algebra, if we define the Lie bracket $[-, -]$ by

$$[x, y] \coloneqq x \circ y - y \circ x \quad \text{for } x, y \in \mathsf{gl}(V),$$

where \circ denotes the composition of maps. Check that the Jacobi identity holds.
Solution
Calculate that

$$\begin{aligned} &\phantom{=} [x, [y, z]] + [y, [z, x]] + [z, [x, y]] \\ &= x \circ [y, z] - [y, z] \circ x \\ &\quad + y \circ [z, x] - [z, x] \circ y \\ &\quad + z \circ [x, y] - [x, y] \circ z \\ &= x \circ (y \circ z - z \circ y) - (y \circ z - z \circ y) \circ x \\ &\quad + y \circ (z \circ x - x \circ z) - (z \circ x - x \circ z) \circ y \\ &\quad + z \circ (x \circ y - y \circ x) - (x \circ y - y \circ x) \circ z \\ &= x \circ y \circ z - x \circ z \circ y - y \circ z \circ x + z \circ y \circ x \\ &\quad + y \circ z \circ x - y \circ x \circ z - z \circ x \circ y + x \circ z \circ y \\ &\quad + z \circ x \circ y - z \circ y \circ x - x \circ y \circ z + y \circ x \circ z \\ &= 0. \end{aligned}$$

It seems that all the terms cancelled out nicely! I see why this exercise is famous as one that every mathematician should do at least once in their life 😃 \square
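The cancellation can also be watched numerically; here is a small NumPy sketch that checks the Jacobi identity for the commutator on random matrices (a model of $\mathsf{gl}(V)$ for $V = \mathbf{R}^4$, with composition realized as matrix product):

```python
import numpy as np

def bracket(x, y):
    """The commutator [x, y] = x∘y - y∘x, with composition as matrix product."""
    return x @ y - y @ x

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 4, 4))  # three random 4x4 matrices

jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
print(np.allclose(jacobi, 0))  # True
```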


Unnamed Exercise
Write $\mathsf{gl}(n, F)$ for the vector space of all $n \times n$ matrices over $F$ with the Lie bracket defined by

$$[x, y] \coloneqq xy - yx,$$

where $xy$ is the usual product of the matrices $x$ and $y$.
As a vector space, $\mathsf{gl}(n, F)$ has a basis consisting of the matrix units $e_{ij}$ for $1 \leq i, j \leq n$. Here $e_{ij}$ is the $n \times n$ matrix which has 1 in the $ij$-th position and all other entries 0. We leave it as an exercise to check that

$$[e_{ij}, e_{kl}] = \delta_{jk}e_{il} - \delta_{il}e_{kj},$$

where $\delta$ is the Kronecker delta, defined by $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise.
Solution

$$[e_{ij}, e_{kl}] = e_{ij}e_{kl} - e_{kl}e_{ij}.$$

Using the formula for matrix multiplication at an index $1 \leq a, b \leq n$ we get

$$(e_{ij}e_{kl})_{ab} = \sum_{x = 1}^{n} (e_{ij})_{ax} (e_{kl})_{xb}.$$

Whenever $x \neq j$ we have $(e_{ij})_{ax} = 0$ by definition, so this reduces to

$$(e_{ij}e_{kl})_{ab} = (e_{ij})_{aj} (e_{kl})_{jb} = \delta_{ai} \delta_{kj} \delta_{lb}.$$

Hence we have

$$e_{ij}e_{kl} = \delta_{jk} e_{il},$$

as by definition $(e_{il})_{ab} = \delta_{ai} \delta_{lb}$. Swapping the roles of the indices gives $e_{kl}e_{ij} = \delta_{li}e_{kj}$, and combining the two results finishes the proof. $\square$
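A brute-force check of the formula for small $n$ (a sketch only; the helper names `e` and `d` are ours):

```python
import numpy as np

n = 3

def e(i, j):
    """The matrix unit e_ij (1-indexed, matching the text)."""
    m = np.zeros((n, n), dtype=int)
    m[i - 1, j - 1] = 1
    return m

def d(a, b):
    """Kronecker delta."""
    return 1 if a == b else 0

idx = range(1, n + 1)
ok = all(
    np.array_equal(e(i, j) @ e(k, l) - e(k, l) @ e(i, j),
                   d(j, k) * e(i, l) - d(l, i) * e(k, j))
    for i in idx for j in idx for k in idx for l in idx
)
print(ok)  # True
```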


Exercise 1.4
Check the following assertions:
Let $\mathsf{b}(n, F)$ be the upper triangular matrices in $\mathsf{gl}(n, F)$. (A matrix $x$ is said to be upper triangular if $x_{ij} = 0$ whenever $i > j$.) This is a Lie algebra with the same Lie bracket as $\mathsf{gl}(n, F)$.

Similarly, let $\mathsf{n}(n, F)$ be the strictly upper triangular matrices in $\mathsf{gl}(n, F)$. (A matrix $x$ is said to be strictly upper triangular if $x_{ij} = 0$ whenever $i \geq j$.) Again this is a Lie algebra with the same Lie bracket as $\mathsf{gl}(n, F)$.
Solution
Recall from linear algebra that the upper triangular matrices and the strictly upper triangular matrices are subspaces of the vector space of all $n \times n$ matrices. To see that $\mathsf{b}(n, F)$ is closed under the bracket, let $x, y$ be upper triangular and consider

$$(xy)_{ij} = \sum_{k=1}^n x_{ik} y_{kj}.$$

A summand can only be non-zero when $i \leq k$ and $k \leq j$, so $(xy)_{ij} = 0$ whenever $i > j$; that is, the product of two upper triangular matrices is upper triangular. Hence $[x, y] = xy - yx$ is upper triangular. The same argument with strict inequalities ($i < k$ and $k < j$ force $i < j$) shows that $\mathsf{n}(n, F)$ is closed under the bracket as well. Finally, the required properties of the bracket, namely bilinearity together with

$$[x, x] = 0 \quad \text{for all } x \in L, \tag{L1}$$

$$[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 \quad \text{for all } x, y, z \in L, \tag{L2}$$

hold in $\mathsf{b}(n, F)$ and $\mathsf{n}(n, F)$ because they are inherited from $\mathsf{gl}(n, F)$. $\quad \square$
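Here is a quick numerical illustration of the closure argument (a sketch only; it checks random instances, not the general claim):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
bracket = lambda x, y: x @ y - y @ x

# random elements of b(n, R) and n(n, R)
x_b = np.triu(rng.standard_normal((n, n)))
y_b = np.triu(rng.standard_normal((n, n)))
x_n = np.triu(rng.standard_normal((n, n)), k=1)
y_n = np.triu(rng.standard_normal((n, n)), k=1)

# the bracket of upper triangular matrices is upper triangular ...
print(np.allclose(np.tril(bracket(x_b, y_b), k=-1), 0))  # True
# ... and the bracket of strictly upper triangulars is strictly upper triangular
print(np.allclose(np.tril(bracket(x_n, y_n), k=0), 0))   # True
```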


Exercise 1.5
Find $Z(L)$ when $L = \mathsf{sl}(2, F)$. You should find that the answer depends on the characteristic of $F$.
Solution
First let's just plug in the definition:

$$Z(\mathsf{sl}(2, F)) = \{ x \in \mathsf{sl}(2, F) : xy - yx = 0 \text{ for all } y \in \mathsf{sl}(2, F) \}.$$

Now since $x, y \in \mathsf{sl}(2, F)$ we know that their traces are zero, so

$$x = \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & -x_{11} \end{pmatrix}, \qquad y = \begin{pmatrix} y_{11} & y_{12} \\ y_{21} & -y_{11} \end{pmatrix},$$

$$xy = \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & -x_{11} \end{pmatrix} \begin{pmatrix} y_{11} & y_{12} \\ y_{21} & -y_{11} \end{pmatrix} = \begin{pmatrix} x_{11}y_{11} + x_{12}y_{21} & x_{11}y_{12} - x_{12}y_{11} \\ x_{21}y_{11} - x_{11}y_{21} & x_{21}y_{12} + x_{11}y_{11} \end{pmatrix},$$

$$yx = \begin{pmatrix} y_{11} & y_{12} \\ y_{21} & -y_{11} \end{pmatrix} \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & -x_{11} \end{pmatrix} = \begin{pmatrix} y_{11}x_{11} + y_{12}x_{21} & y_{11}x_{12} - y_{12}x_{11} \\ y_{21}x_{11} - y_{11}x_{21} & y_{21}x_{12} + y_{11}x_{11} \end{pmatrix},$$

$$xy - yx = \begin{pmatrix} x_{12}y_{21} - y_{12}x_{21} & 2x_{11}y_{12} - 2x_{12}y_{11} \\ 2x_{21}y_{11} - 2x_{11}y_{21} & x_{21}y_{12} - y_{21}x_{12} \end{pmatrix}.$$

So when is $xy - yx = 0$? Can we come up with some generators for this set? Keep in mind that we are only picking $x$; $y$ can be anything.

$$x_{12}y_{21} = y_{12}x_{21}, \\ 2x_{11}y_{12} = 2x_{12}y_{11}, \\ 2x_{21}y_{11} = 2x_{11}y_{21}, \\ x_{21}y_{12} = y_{21}x_{12}.$$

Let's look at an example. Let $F$ be the field of two elements, $F_2$. Then the only requirements for the center become

$$x_{12}y_{21} = y_{12}x_{21}, \\ x_{21}y_{12} = y_{21}x_{12}.$$

So consider the matrix

$$x = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

This matrix satisfies the equations above. To show explicitly that it is in the center, we compute

$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} y_{11} & y_{12} \\ y_{21} & -y_{11} \end{pmatrix} = \begin{pmatrix} y_{11} & y_{12} \\ -y_{21} & y_{11} \end{pmatrix},$$

$$\begin{pmatrix} y_{11} & y_{12} \\ y_{21} & -y_{11} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} y_{11} & -y_{12} \\ y_{21} & y_{11} \end{pmatrix},$$

$$xy - yx = \begin{pmatrix} y_{11} & y_{12} \\ -y_{21} & y_{11} \end{pmatrix} - \begin{pmatrix} y_{11} & -y_{12} \\ y_{21} & y_{11} \end{pmatrix} = \begin{pmatrix} 0 & y_{12} + y_{12} \\ -(y_{21} + y_{21}) & 0 \end{pmatrix}.$$

Since $\lambda + \lambda = 0$ for all $\lambda \in F_2$, in $F_2$ we clearly have $xy - yx = 0$, so $x \in Z(\mathsf{sl}(2, F_2))$. In fact, in characteristic 2 the remaining equations (taking $y$ with $y_{21} = 1, y_{12} = 0$ and then $y_{12} = 1, y_{21} = 0$) force $x_{12} = x_{21} = 0$, so the center consists exactly of the scalar multiples of this $x$. Of course, we always have $0 \in Z(\mathsf{sl}(2, F))$ for all fields $F$, and when the characteristic of $F$ is not 2, the four equations force $x_{11} = x_{12} = x_{21} = 0$, so $0$ is the only element. $\quad \square$
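Because $\mathsf{sl}(2, F_2)$ has only $2^3 = 8$ elements, the center can also be found by brute force. The sketch below encodes $2 \times 2$ matrices over $F_2$ as nested tuples (all helper names are made up for this example):

```python
from itertools import product

P = 2  # working over F_2

def mat_mul(x, y):
    """2x2 matrix product with entries reduced mod P."""
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) % P
                       for j in range(2)) for i in range(2))

def mat_sub(x, y):
    """2x2 matrix difference with entries reduced mod P."""
    return tuple(tuple((x[i][j] - y[i][j]) % P for j in range(2)) for i in range(2))

ZERO = ((0, 0), (0, 0))

# all trace-zero 2x2 matrices over F_2
sl2 = [((a, b), (c, (-a) % P)) for a, b, c in product(range(P), repeat=3)]

center = [x for x in sl2
          if all(mat_sub(mat_mul(x, y), mat_mul(y, x)) == ZERO for y in sl2)]
print(center)  # the zero matrix and h = diag(1, -1), which is diag(1, 1) over F_2
```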


Exercise 1.6
Show that if $\varphi : L_1 \to L_2$ is a homomorphism, then the kernel of $\varphi$, $\ker \varphi$, is an ideal of $L_1$, and the image of $\varphi$, $\operatorname{im} \varphi$, is a Lie subalgebra of $L_2$.
Solution
First let's show that $\ker \varphi$ is an ideal of $L_1$. The definition of the kernel gives us

$$\ker \varphi = \{ x \in L_1 : \varphi(x) = 0 \}.$$

We wish to show that $\ker \varphi$ is an ideal of $L_1$.
First we must show that it is a subspace of $L_1$. This amounts to showing that $\ker \varphi$ contains the zero vector and is closed under both vector addition and scalar multiplication. Since $\varphi$ is linear we have $\varphi(0) = 0$, so $\ker \varphi$ contains the zero vector. Let $x, y \in \ker \varphi$ and $c \in F$ (where $F$ is the common field of scalars shared by $L_1$ and $L_2$). Then $\varphi(x + y) = \varphi(x) + \varphi(y) = 0$ and $\varphi(cx) = c\varphi(x) = 0$. We conclude that $\ker \varphi$ is a subspace of $L_1$.
Let $x \in L_1$ and $y \in \ker \varphi$. To show that $\ker \varphi$ is an ideal of $L_1$, we need to show that $[x, y] \in \ker \varphi$. Calculate that

$$\varphi([x, y]) = [\varphi(x), \varphi(y)] = [\varphi(x), 0] = 0. \quad \square$$

Next we show that $\operatorname{im} \varphi$ is a Lie subalgebra of $L_2$. Recall that

$$\operatorname{im} \varphi = \{ y \in L_2 : y = \varphi(x) \text{ for some } x \in L_1 \}.$$

To prove that $\operatorname{im} \varphi$ is a Lie subalgebra of $L_2$, we first need to show that $\operatorname{im} \varphi$ is a vector subspace of $L_2$. Since $\varphi(0) = 0$, we have $0 \in \operatorname{im} \varphi$. If $y_1 = \varphi(x_1)$ and $y_2 = \varphi(x_2)$ with $x_1, x_2 \in L_1$ and $c \in F$, then $y_1 + y_2 = \varphi(x_1) + \varphi(x_2) = \varphi(x_1 + x_2)$ and $cy_1 = c\varphi(x_1) = \varphi(cx_1)$, so $\operatorname{im} \varphi$ is closed under vector addition and scalar multiplication. We conclude that $\operatorname{im} \varphi$ is a vector subspace of $L_2$.
To show that $\operatorname{im} \varphi$ is a Lie subalgebra of $L_2$, it remains to show that

$$[y_1, y_2] \in \operatorname{im} \varphi \qquad \text{for all } y_1, y_2 \in \operatorname{im} \varphi.$$

So let $y_1 = \varphi(x_1)$ and $y_2 = \varphi(x_2)$. Calculate that

$$[y_1, y_2] = [\varphi(x_1), \varphi(x_2)] = \varphi([x_1, x_2]) \in \operatorname{im} \varphi. \qquad \square$$


Exercise 1.7
Let $L$ be a Lie algebra. Show that the Lie bracket is associative, that is, $[x, [y, z]] = [[x, y], z]$ for all $x, y, z \in L$, if and only if for all $a, b \in L$ the commutator $[a, b]$ lies in $Z(L)$.
Solution
First note that

$$[a, b] \in Z(L) \text{ for all } a, b \in L \iff [[a, b], c] = 0 \text{ for all } a, b, c \in L.$$

If $[[a, b], c] = 0$ for all $a, b, c \in L$, then also $[x, [y, z]] = -[[y, z], x] = 0$, so

$$[x, [y, z]] = 0 = [[x, y], z]$$

and the bracket is associative. Conversely, suppose the bracket is associative. Then

$$[x, [y, z]] = [[x, y], z] = -[z, [x, y]],$$

and substituting this into the Jacobi identity $[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0$ gives

$$[y, [z, x]] = 0 \quad \text{for all } x, y, z \in L.$$

Since $x, y, z$ were arbitrary, every commutator $[z, x]$ lies in $Z(L)$. (Note that this argument works in every characteristic, whereas cancelling a factor of $2$ would fail when $1 + 1 = 0$.) $\quad \square$


Exercise 1.8
Let $D$ and $E$ be derivations of an algebra $A$.
(i) Show that $[D, E] = D \circ E - E \circ D$ is also a derivation.
Solution
Let $a, b \in A$. By the definition of a derivation we have

$$D(ab) = aD(b) + D(a)b, \\ E(ab) = aE(b) + E(a)b,$$

so

$$[D, E](ab) = D(E(ab)) - E(D(ab)) = D(aE(b) + E(a)b) - E(aD(b) + D(a)b) \\ = D(aE(b)) + D(E(a)b) - E(aD(b)) - E(D(a)b).$$

We need to show that

$$[D, E](ab) = a[D, E](b) + [D, E](a)b = a(D(E(b)) - E(D(b))) + (D(E(a)) - E(D(a)))b.$$

The result follows from expanding each term with the Leibniz rule, for example $D(aE(b)) = aD(E(b)) + D(a)E(b)$, and cancelling the mixed terms such as $D(a)E(b)$ and $E(a)D(b)$, which each appear once with either sign. $\quad \square$


(ii) Show that $D \circ E$ need not be a derivation.
Solution
By definition, $D \circ E$ being a derivation means that for all $a, b \in A$ we have

$$(D \circ E)(ab) = a(D \circ E)(b) + (D \circ E)(a)b \\ \iff D(E(ab)) = aD(E(b)) + D(E(a))b \\ \iff D(aE(b) + E(a)b) = aD(E(b)) + D(E(a))b \\ \iff D(aE(b)) + D(E(a)b) = aD(E(b)) + D(E(a))b \\ \iff aD(E(b)) + D(a)E(b) + E(a)D(b) + D(E(a))b = aD(E(b)) + D(E(a))b \\ \iff D(a)E(b) + E(a)D(b) = 0.$$

Let $D$ be ordinary differentiation in the associative algebra $A = C^{\infty}(\mathbf{R})$ of infinitely differentiable functions $f, g : \mathbf{R} \to \mathbf{R}$, where the product $fg$ is given by pointwise multiplication: $(fg)(x) = f(x)g(x)$. Then, taking $E = D$, the composition $D \circ D$ is a derivation if and only if for every $a, b \in A$,

$$D(a)D(b) + D(a)D(b) = 0.$$

Take $a, b$ to be the identity function, that is, $a(x) = x$ and $b(x) = x$ for all $x \in \mathbf{R}$. Then

$$D(a)D(b) + D(a)D(b) = 1 \cdot 1 + 1 \cdot 1 = 2 \neq 0.$$

Indeed, we can check explicitly that

$$(D \circ D)(a^2) = D(D(a^2)) = D(2a) = 2.$$

If $D \circ D$ were a derivation we would have

$$(D \circ D)(aa) = a(D \circ D)(a) + (D \circ D)(a)a = 0,$$

and of course in $\mathbf{R}$ we have $0 \neq 2$, so this counterexample shows that the composition of two derivations is not necessarily a derivation. $\quad \square$
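The counterexample can be made concrete by modelling polynomial functions as coefficient lists (a self-contained sketch; `D`, `mul`, and `ev` are hypothetical helpers, with `D` playing the role of ordinary differentiation):

```python
def D(p):
    """Differentiate a polynomial given as coefficients [c0, c1, c2, ...]."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def mul(p, q):
    """Pointwise (polynomial) product."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def ev(p, t):
    """Evaluate a coefficient list at the point t."""
    return sum(c * t ** i for i, c in enumerate(p))

a = b = [0, 1]  # the identity function a(x) = x

lhs = ev(D(D(mul(a, b))), 1.0)  # (D∘D)(ab) at x = 1, i.e. d^2/dx^2 of x^2
rhs = ev(a, 1.0) * ev(D(D(b)), 1.0) + ev(D(D(a)), 1.0) * ev(b, 1.0)  # Leibniz RHS
print(lhs, rhs)  # 2.0 0.0
```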


Exercise 1.9
Let $L_1$ and $L_2$ be Lie algebras. Show that $L_1$ is isomorphic to $L_2$ if and only if there is a basis $B_1$ of $L_1$ and a basis $B_2$ of $L_2$ such that the structure constants of $L_1$ with respect to $B_1$ are equal to the structure constants of $L_2$ with respect to $B_2$.
Solution
$(\implies)$ Let $f : L_1 \to L_2$ be an isomorphism, and let $x_1, x_2, \ldots, x_n$ be a basis of $L_1$. (Structure constants are defined with respect to a basis, so we are implicitly working with finite-dimensional Lie algebras here.) Since $f$ is in particular a vector space isomorphism, $f(x_1), \ldots, f(x_n)$ is a basis of $L_2$. Now $L_1$ has structure constants $a_{ij}^k$ such that

$$[x_i, x_j] = \sum_{k=1}^n a_{ij}^k x_k, \\[4px] f([x_i, x_j]) = f\left(\sum_{k=1}^n a_{ij}^k x_k\right), \\[4px] [f(x_i), f(x_j)] = \sum_{k=1}^n f(a_{ij}^k x_k) = \sum_{k=1}^n a_{ij}^k f(x_k),$$

so the structure constants of $L_2$ with respect to the basis $(f(x_1), \ldots, f(x_n))$ are again $a_{ij}^k$. $\quad \square$

$(\impliedby)$ Let $B_1 = (x_1, \ldots, x_n)$ be a basis of $L_1$ and let $B_2 = (y_1, \ldots, y_n)$ be a basis of $L_2$, with a shared set of structure constants $a_{ij}^k$. Let $f$ be the unique linear map satisfying

$$f(x_i) = y_i.$$

Of course $f$ is an isomorphism of vector spaces; what we need to show is that $f$ is also an isomorphism of Lie algebras, that is, that $f$ commutes with $[-, -]$, meaning that for all $a, b \in L_1$

$$f([a, b]) = [f(a), f(b)].$$

Use our basis to write

$$a = \lambda_1 x_1 + \cdots + \lambda_n x_n, \\ b = \mu_1 x_1 + \cdots + \mu_n x_n.$$

Calculate that

$$f([a, b]) = f([\lambda_1 x_1 + \cdots + \lambda_n x_n, \mu_1 x_1 + \cdots + \mu_n x_n]) \\[4px] = f\left(\left[\lambda_1 x_1, \sum_{i=1}^n \mu_i x_i \right] + \cdots + \left[\lambda_n x_n, \sum_{i=1}^n \mu_i x_i \right] \right) \\[8px] = f\left(\sum_{i=1}^n \left[ \lambda_i x_i, \sum_{j=1}^n \mu_j x_j \right] \right) = f\left(\sum_{i=1}^n \sum_{j=1}^n [\lambda_i x_i, \mu_j x_j]\right) \\[8px] = f\left(\sum_{i=1}^n \sum_{j=1}^n \lambda_i \mu_j [x_i, x_j]\right) = \sum_{i=1}^n \sum_{j=1}^n \lambda_i \mu_j f([x_i, x_j]).$$

Recall that by the definition of the structure constants

$$[x_i, x_j] = \sum_{k=1}^n a_{ij}^k x_k.$$

Applying $f$ to each $[x_i, x_j]$ and using $f(x_k) = y_k$, we get

$$f([a, b]) = \sum_{i=1}^n \sum_{j=1}^n \lambda_i \mu_j \sum_{k=1}^n a_{ij}^k y_k.$$

Separately calculate that

$$[f(a), f(b)] = \left[f\left(\sum_{i=1}^n \lambda_i x_i \right), f\left(\sum_{j=1}^n \mu_j x_j\right)\right] = \left[ \sum_{i=1}^n \lambda_i f(x_i), \sum_{j=1}^n \mu_j f(x_j) \right] \\[8px] = \sum_{i=1}^n \sum_{j=1}^n [\lambda_i f(x_i), \mu_j f(x_j)] = \sum_{i=1}^n \sum_{j=1}^n \lambda_i \mu_j [f(x_i), f(x_j)].$$

By the definition of ff we get

$$[f(a), f(b)] = \sum_{i=1}^n \sum_{j=1}^n \lambda_i \mu_j [y_i, y_j].$$

By hypothesis $B_1 = (x_1, \dots, x_n)$ and $B_2 = (y_1, \dots, y_n)$ share the same structure constants $a_{ij}^k$, so

$$[f(a), f(b)] = \sum_{i=1}^n \sum_{j=1}^n \lambda_i \mu_j \sum_{k=1}^n a_{ij}^k y_k.$$

Comparing the two expressions, we conclude that for each $a, b \in L_1$ we have

$$f([a, b]) = [f(a), f(b)]. \quad \square$$


Exercise 1.10
Let $L$ be a Lie algebra with a basis $(x_1, \dots, x_n)$. What conditions does the Jacobi identity impose on the structure constants $a_{ij}^k$?
Solution
Recall that the Jacobi identity states that for all $x, y, z \in L$ we have

$$[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0,$$

and that, for each $i, j \in \{1, \dots, n\}$, the structure constants satisfy the equation

$$[x_i, x_j] = \sum_{k=1}^n a_{ij}^k x_k.$$

Take $x = x_i$, $y = x_j$, $z = x_k$. Then

$$0 = [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = [x_i, [x_j, x_k]] + [x_j, [x_k, x_i]] + [x_k, [x_i, x_j]] \\[8px] = \left[x_i, \sum_{l=1}^n a_{jk}^l x_l \right] + \left[x_j, \sum_{l=1}^n a_{ki}^l x_l \right] + \left[x_k, \sum_{l=1}^n a_{ij}^l x_l \right] \\ = \sum_{l=1}^n a_{jk}^l [x_i, x_l] + \sum_{l=1}^n a_{ki}^l [x_j, x_l] + \sum_{l=1}^n a_{ij}^l [x_k, x_l] \\ = \sum_{l=1}^n \left( a_{jk}^l \sum_{m=1}^n a_{il}^m x_m + a_{ki}^l \sum_{m=1}^n a_{jl}^m x_m + a_{ij}^l \sum_{m=1}^n a_{kl}^m x_m \right) \\ = \sum_{m=1}^n \sum_{l=1}^n \left( a_{jk}^l a_{il}^m + a_{ki}^l a_{jl}^m + a_{ij}^l a_{kl}^m \right) x_m.$$

Since $(x_1, \dots, x_n)$ forms a basis, the $x_m$ are linearly independent, so for all $i, j, k$ and each $m \in \{1, \dots, n\}$ we must have

$$\sum_{l=1}^n \left( a_{jk}^l a_{il}^m + a_{ki}^l a_{jl}^m + a_{ij}^l a_{kl}^m \right) = 0.$$
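As a sanity check, one can verify this condition for the structure constants of $\mathbf{R}^3_\wedge$ from Exercise 1.2, which with respect to the standard basis are the Levi-Civita symbols $a_{ij}^k = \varepsilon_{ijk}$ (a numerical sketch; the helper `eps` is ours):

```python
def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    return (j - i) * (k - i) * (k - j) // 2

n = 3
a = [[[eps(i, j, k) for k in range(n)] for j in range(n)] for i in range(n)]

# the condition derived above, for every i, j, k, m
ok = all(
    sum(a[j][k][l] * a[i][l][m] + a[k][i][l] * a[j][l][m] + a[i][j][l] * a[k][l][m]
        for l in range(n)) == 0
    for i in range(n) for j in range(n) for k in range(n) for m in range(n)
)
print(ok)  # True
```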


Exercise 1.11
Let L1L_1 and L2L_2 be two abelian Lie algebras. Show that L1L_1 and L2L_2 are isomorphic if and only if they have the same dimension.
Solution
(    )(\implies) If L1L_1 and L2L_2 are isomorphic as Lie algebras than they must be isomorphic as vector spaces     \iff they have the same dimension
(    )(\impliedby) If L1L_1 and L2L_2 are the same dimension then we get a vector space isomorphism f:L1L2f: L_1 \to L_2. To show that this is a Lie algebra isomorphism, we need to show that for each a,bL1a, b \in L_1 we have

f([a,b])=[f(a),f(b)]f([a,b]) = [f(a),f(b)]

Since L1L_1 is abelian we have

f([a,b])=f(0)=0,f([a,b]) = f(0) = 0,

with the last equality coming from the fact that ff is a vector space isomorphism. L2L_2 is abelian as well, so

[f(a),f(b)]=0.[f(a), f(b)] = 0.

We conclude that f([a,b])=[f(a),f(b)]f([a,b]) = [f(a),f(b)] for each a,bL1a, b \in L_1, as desired. \quad \square


1.12. $\dag$ Find the structure constants of $\mathsf{sl}(2, F)$ with respect to the basis given by the matrices

$$e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \> f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \> h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

Solution

First calculate that

$$a_{ef}^e e + a_{ef}^f f + a_{ef}^h h = [e, f] = ef - fe \\[4px] = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \\[4px] = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = 0e + 0f + 1h.$$

Work similarly for the other combinations, avoiding duplicate work by recalling that $[a, b] = -[b, a]$. For $e$ and $h$ we get

$$a_{eh}^e e + a_{eh}^f f + a_{eh}^h h = [e, h] = eh - he \\[4px] = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} - \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \\ = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = -2e + 0f + 0h.$$

For $f$ and $h$ we get

$$a_{fh}^e e + a_{fh}^f f + a_{fh}^h h = [f, h] = fh - hf \\[4px] = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} - \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \\[4px] = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 2 & 0 \end{pmatrix} = 0e + 2f + 0h.$$
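The three brackets are easy to confirm numerically (a quick NumPy sketch):

```python
import numpy as np

e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

bracket = lambda x, y: x @ y - y @ x

print(np.array_equal(bracket(e, f), h))       # True
print(np.array_equal(bracket(e, h), -2 * e))  # True
print(np.array_equal(bracket(f, h), 2 * f))   # True
```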


1.13. Prove that $\mathsf{sl}(2, \mathbf{C})$ has no non-trivial ideals.

Solution
Let $A$ be a non-zero ideal of $\mathsf{sl}(2, \mathbf{C})$. By definition, for each $a \in A$ and $b \in \mathsf{sl}(2, \mathbf{C})$ we have

$$[a, b] \in A.$$

Using the basis from 1.12, write

$$a = a_e e + a_f f + a_h h$$

for some $a_e, a_f, a_h \in \mathbf{C}$. Further, since we supposed that $A \neq 0$, we can take $a \neq 0$, so $a_e, a_f, a_h$ are not all zero.

Calculate that

$$[a, e] = [a_e e + a_f f + a_h h, e] = a_e [e, e] + a_f [f, e] + a_h [h, e] = -a_f h + 2a_h e, \\ [a, f] = [a_e e + a_f f + a_h h, f] = a_e [e, f] + a_f [f, f] + a_h [h, f] = a_e h - 2a_h f, \\ [a, h] = [a_e e + a_f f + a_h h, h] = a_e [e, h] + a_f [f, h] + a_h [h, h] = -2a_e e + 2a_f f.$$

All three of these lie in $A$, as does any further bracket of them with basis elements. In particular,

$$[[a, e], e] = -a_f [h, e] + 2a_h [e, e] = -2a_f e \in A, \\ [[a, f], f] = a_e [h, f] - 2a_h [f, f] = -2a_e f \in A.$$

We now consider cases.

If $a_f \neq 0$, then $e \in A$, so $h = [e, f] \in A$, and then $[f, h] = 2f \in A$ gives $f \in A$.

If $a_e \neq 0$, then $f \in A$, so $h = -[f, e] \in A$, and then $[e, h] = -2e \in A$ gives $e \in A$.

If $a_e = a_f = 0$, then $a = a_h h$ with $a_h \neq 0$ (since $a \neq 0$), so $h \in A$; then $[e, h] = -2e \in A$ and $[f, h] = 2f \in A$ give $e, f \in A$.

In every case $e, f, h \in A$, and since $\{e, f, h\}$ is a basis of $\mathsf{sl}(2, \mathbf{C})$ we conclude $A = \mathsf{sl}(2, \mathbf{C})$.

We have now shown that any non-zero ideal of $\mathsf{sl}(2, \mathbf{C})$ is equal to $\mathsf{sl}(2, \mathbf{C})$ itself. Therefore $\mathsf{sl}(2, \mathbf{C})$ has no non-trivial ideals. $\quad \square$


1.14. $\dag$ Let $L$ be the 3-dimensional complex Lie algebra with basis $(x, y, z)$ and Lie bracket defined by

$$[x, y] = z, \> [y, z] = x, \> [z, x] = y.$$

(Here $L$ is the "complexification" of the 3-dimensional real Lie algebra $\mathbf{R}^3_\wedge$.)

(i) Show that $L$ is isomorphic to the Lie subalgebra of $\mathsf{gl}(3, \mathbf{C})$ consisting of all $3 \times 3$ antisymmetric matrices with entries in $\mathbf{C}$.

(ii) Find an explicit isomorphism $\mathsf{sl}(2, \mathbf{C}) \cong L$.

Solution

To solve (i) we can make use of Exercise 1.9, which tells us that it is sufficient to find a basis $\{x, y, z\}$ of the $3 \times 3$ antisymmetric matrices with entries in $\mathbf{C}$ such that

$$[x, y] = z, \> [y, z] = x, \> [z, x] = y.$$

Let

$$x = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \> y = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \> z = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix};$$

then $\{x, y, z\}$ is a basis for the $3 \times 3$ antisymmetric matrices because they are clearly linearly independent and

$$\begin{pmatrix} 0 & a & -b \\ -a & 0 & c \\ b & -c & 0 \end{pmatrix} = ax + by + cz,$$

so it is sufficient to show that

$$[x, y] = z, \> [y, z] = x, \> [z, x] = y.$$

Calculate that

$$\begin{aligned} [x, y] &= xy - yx \\ &= \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \\ &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} = z, \\ [y, z] &= yz - zy \\ &= \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \\ &= \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = x, \\ [z, x] &= zx - xz \\ &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} \\ &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} = y. \end{aligned}$$

This completes (i). Recall that (ii) asks for an explicit isomorphism $\varphi : \mathsf{sl}(2, \mathbf{C}) \to L$. In other words, $\varphi$ has to be a linear bijective map that respects the Lie bracket, so for each $u, v \in \mathsf{sl}(2, \mathbf{C})$

$$[\varphi(u), \varphi(v)] = \varphi([u, v]).$$

Recall from 1.12 that

\left\{ e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \> f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \> h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \right\}

is a basis for \mathsf{sl}(2, \mathbf{C}) and that

[e,f] = h, \> [e, h] = -2e, \> [f,h] = 2f,

Since \{e, f, h\} is a basis and both \varphi and the brackets are (bi)linear, it is sufficient to verify the bracket relations on basis elements:

[\varphi(e), \varphi(f)] = \varphi([e, f]) = \varphi(h), \\ [\varphi(e), \varphi(h)] = \varphi([e,h]) = -2 \varphi(e), \\ [\varphi(f), \varphi(h)] = \varphi([f,h]) = 2 \varphi(f).

The key insight comes from the following calculation

[ix + z, ix - z] = [ix, ix - z] + [z, ix - z] \\ = [ix, ix] - [ix, z] + [z, ix] - [z,z] \\ = -2i [x,z] = 2i[z,x] = 2iy.

Defining \varphi on the basis \{e, f, h\} and extending linearly gives a well-defined linear map, and since \dim L = 3 it is a bijection provided the three images are linearly independent. So take

\varphi(e) = ix + z, \> \varphi(f) = ix - z, \> \varphi(h) = 2iy.

These images are linearly independent: since 2ix = \varphi(e) + \varphi(f), \> 2z = \varphi(e) - \varphi(f) and 2iy = \varphi(h), the matrices x, y, z lie in their span, so the images form a basis of L. The previous calculation verified that

[\varphi(e), \varphi(f)] = \varphi(h),

so one equation down, two to go. Calculate that

[\varphi(e), \varphi(h)] = [ix + z, 2iy] = [ix, 2iy] + [z, 2iy] \\ = -2[x,y] + 2i [z,y] = -2ix - 2z = -2 \varphi(e).

One more to go!

[\varphi(f), \varphi(h)] = [ix - z, 2iy] = [ix, 2iy] - [z, 2iy] \\ = -2[x,y] - 2i [z,y] = 2ix - 2z = 2(ix - z) = 2 \varphi(f). \quad \square
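The three relations can also be verified numerically, using the matrices x, y, z from part (i) and complex arithmetic. A small sanity-check sketch with numpy:

```python
import numpy as np

x = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=complex)
y = np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]], dtype=complex)
z = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]], dtype=complex)

def bracket(a, b):
    """Commutator bracket [a, b] = ab - ba."""
    return a @ b - b @ a

# the images of e, f, h under phi
phi_e = 1j * x + z
phi_f = 1j * x - z
phi_h = 2j * y

# the images satisfy the same relations as e, f, h in sl(2, C)
assert np.allclose(bracket(phi_e, phi_f), phi_h)
assert np.allclose(bracket(phi_e, phi_h), -2 * phi_e)
assert np.allclose(bracket(phi_f, phi_h), 2 * phi_f)
```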


1.15. \quad Let S be an n \times n matrix with entries in a field F. Define

\mathsf{gl}_S (n, F) = \{ x \in \mathsf{gl}(n, F): x^tS = -Sx \}.

(\text{i}) \> Show that \mathsf{gl}_S(n, F) is a Lie subalgebra of \mathsf{gl}(n, F).
(\text{ii}) \> Find \mathsf{gl}_S(2, \mathbf{R}) if S = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
(\text{iii}) \> Does there exist a matrix S such that \mathsf{gl}_S(2,\mathbf{R}) is equal to the set of all diagonal matrices in \mathsf{gl}(2,\mathbf{R})?
(\text{iv}) \> Find a matrix S such that \mathsf{gl}_S (3,\mathbf{R}) is isomorphic to the Lie algebra \mathbf{R}^3_\land defined in \S 1.2, Example 1.
Hint: Part \text{(i)} of Exercise 1.14 is relevant.

Solution

\text{(i)} \> In order to show that \mathsf{gl}_S(n,F) is a Lie subalgebra of \mathsf{gl}(n,F), we must first show that it is a vector subspace.
Let \lambda \in F and a, b \in \mathsf{gl}_S(n,F). It suffices to show that a + \lambda b \in \mathsf{gl}_S(n,F).
By the definition of \mathsf{gl}_S(n,F) we have

\tag{1} a^tS = -Sa, \> b^tS = -Sb.

Since matrix transposition is linear

(a + \lambda b)^t = a^t + \lambda b^t.

Multiplying on the right by S gives

(a + \lambda b)^tS = (a^t + \lambda b^t)S = a^tS + \lambda b^t S.

Plugging in (1),

= -Sa - \lambda Sb = -S(a + \lambda b).

This shows that \mathsf{gl}_S(n,F) is a vector subspace of \mathsf{gl}(n,F). To show that it is a Lie subalgebra, it remains to show that [a,b] \in \mathsf{gl}_S(n,F) for all a, b \in \mathsf{gl}_S(n,F). Calculate that

\begin{aligned} [a,b]^t S &= (ab - ba)^t S = (ab)^tS - (ba)^tS \\ &= b^ta^tS - a^tb^tS = b^t(-Sa) - a^t(-Sb) \\ &= -(b^tS)a + (a^tS)b = -(-Sb)a + (-Sa)b \\ &= Sba - Sab = -S(ab - ba) = -S[a,b]. \end{aligned} \quad \square
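As a concrete spot check of the closure just proved, one can pick a particular S (the antisymmetric choice below is mine, not from the exercise) together with two elements of \mathsf{gl}_S, and verify that their commutator again satisfies the defining condition:

```python
import numpy as np

# a sample antisymmetric S; the exercise allows any S
S = np.array([[0, 1], [-1, 0]])

def in_gl_S(m):
    """Check the defining condition m^t S = -S m."""
    return (m.T @ S == -S @ m).all()

a = np.array([[1, 0], [0, -1]])
b = np.array([[0, 1], [0, 0]])
assert in_gl_S(a) and in_gl_S(b)

# closure under the commutator, as proved above
c = a @ b - b @ a
assert in_gl_S(c)
```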

\text{(ii)} \> Let S = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}; we need to find \mathsf{gl}_S(2,\mathbf{R}). Let x = \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix}. Calculate that

x^tS = -Sx \\[4px] \iff \begin{pmatrix} x_{11} & x_{21} \\ x_{12} & x_{22}\end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = - \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} \\[4px] \iff \begin{pmatrix} 0 & x_{11} \\ 0 & x_{12} \end{pmatrix} = - \begin{pmatrix} x_{21} & x_{22} \\ 0 & 0 \end{pmatrix} \\[4px] \iff \begin{pmatrix} x_{21} & x_{11} + x_{22} \\ 0 & x_{12} \end{pmatrix} = 0

So we can conclude that \mathsf{gl}_S(2,\mathbf{R}) = \text{span}\left\{\begin{pmatrix}1 & 0 \\ 0 & -1\end{pmatrix}\right\}. \quad \square
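A quick numerical spot check of this conclusion (a sanity check, not a proof):

```python
import numpy as np

S = np.array([[0, 1], [0, 0]])

# diag(1, -1) satisfies the defining condition x^t S = -S x ...
d = np.array([[1, 0], [0, -1]])
assert (d.T @ S == -S @ d).all()

# ... while a matrix outside the span, e.g. E_11, does not
m = np.array([[1, 0], [0, 0]])
assert not (m.T @ S == -S @ m).all()
```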

\text{(iii)} \> We must either find a matrix S such that \mathsf{gl}_S(2,\mathbf{R}) = \text{span}\left(\left\{\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right\}\right), or prove that no such S exists. Writing out the condition entrywise:

x^tS = -Sx \\[4px] \begin{pmatrix} x_{11} & x_{21} \\ x_{12} & x_{22} \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} = - \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} \\[4px] \begin{pmatrix} x_{11}S_{11} + x_{21}S_{21} & x_{11}S_{12} + x_{21}S_{22} \\ x_{12}S_{11} + x_{22}S_{21} & x_{12}S_{12} + x_{22}S_{22} \end{pmatrix} = - \begin{pmatrix} S_{11}x_{11} + S_{12}x_{21} & S_{11}x_{12} + S_{12}x_{22} \\ S_{21}x_{11} + S_{22}x_{21} & S_{21}x_{12} + S_{22}x_{22} \end{pmatrix} \\[4px] x_{11}S_{11} + x_{21}S_{21} + S_{11}x_{11} + S_{12}x_{21} = 0 \\ x_{11}S_{12} + x_{21}S_{22} + S_{11}x_{12} + S_{12}x_{22} = 0 \\ x_{12}S_{11} + x_{22}S_{21} + S_{21}x_{11} + S_{22}x_{21} = 0 \\ x_{12}S_{12} + x_{22}S_{22} + S_{21}x_{12} + S_{22}x_{22} = 0

We need to choose S so that the solutions of this system are exactly the matrices with

x_{12} = x_{21} = 0,

The last equation can be rewritten as

(S_{12} + S_{21})x_{12} + 2S_{22}x_{22} = 0,

And the first as

(S_{12} + S_{21})x_{21} + 2S_{11}x_{11} = 0.

Since every diagonal matrix (x_{12} = x_{21} = 0, with x_{11}, x_{22} arbitrary) must satisfy these equations, we must have

S_{11} = S_{22} = 0.

Plugging this in, the four equations become

x_{21}(S_{21} + S_{12}) = 0 \\ x_{11}S_{12} + x_{22}S_{12} = 0 \\ x_{22}S_{21} + x_{11}S_{21} = 0 \\ x_{12}(S_{12} + S_{21}) = 0

The second and third equations require S_{12}(x_{11} + x_{22}) = 0 and S_{21}(x_{11} + x_{22}) = 0 for arbitrary x_{11}, x_{22}, which forces S_{12} = S_{21} = 0. But then S = 0 and \mathsf{gl}_S(2,\mathbf{R}) is all of \mathsf{gl}(2,\mathbf{R}), not just the diagonal matrices. So no such matrix S exists. \quad \square

\text{(iv)} \quad We need to find S such that \mathsf{gl}_S(3,\mathbf{R}) is isomorphic to \mathbf{R}^3_\land. By part \text{(i)} of Exercise 1.14 we know that \mathbf{R}^3_\land is isomorphic to the Lie algebra of antisymmetric 3 \times 3 matrices. So we need to find S such that

x^tS = -Sx \iff x^t = -x.

Of course S = I solves this, since then the condition x^tS = -Sx reads exactly x^t = -x. \quad \square
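With S = I the subalgebra is the antisymmetric matrices, and the isomorphism with \mathbf{R}^3_\land from Exercise 1.14(i) is the usual "hat" map. A numerical sketch (the name `hat` is my own) checking that this map sends the vector product to the commutator:

```python
import numpy as np

def hat(a):
    """Map a vector in R^3 to the corresponding antisymmetric matrix."""
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([-2.0, 0.5, 4.0])

# hat(u) lies in gl_I(3, R): the condition x^t I = -I x says hat(u) is antisymmetric
assert np.allclose(hat(u).T, -hat(u))

# the hat map intertwines the vector product and the commutator
assert np.allclose(hat(u) @ hat(v) - hat(v) @ hat(u), hat(np.cross(u, v)))
```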


1.16. \dag \quad Show, by giving an example, that if F is a field of characteristic 2, there are algebras over F which satisfy (\text{L1'}) and (\text{L2}) but are not Lie algebras.

Solution

Recall that

\tag{L1'} [x,y] = -[y,x] \quad \text{for all} \enspace x,y \in L.

\tag{L2} [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0 \quad \text{for all} \enspace x,y,z \in L.

So let us find an algebra over F in which both of the above hold but \text{(L1)}

\tag{L1} [x,x] = 0 \quad \text{for all} \enspace x \in L

does not.

Consider the two-dimensional vector space V over F with basis \{u, w\}, and equip it with the bilinear bracket determined on basis elements by

[u,u] = w, \quad [u,w] = [w,u] = [w,w] = 0.

Since [u,u] = w \neq 0, property \text{(L1)} fails, so V is not a Lie algebra.

To verify \text{(L1')}, let x = \lambda_1 u + \lambda_2 w and y = \mu_1 u + \mu_2 w. By bilinearity,

[x,y] = \lambda_1 \mu_1 [u,u] = \lambda_1 \mu_1 w = \mu_1 \lambda_1 w = [y,x],

so the bracket is symmetric, and since \text{char } F = 2 we have -1 = 1 in F, hence

[x,y] = [y,x] = -[y,x],

therefore \text{(L1')} holds.

To verify \text{(L2)}, note that by the above every bracket lies in \text{span}\{w\}, while [x, w] = \lambda_1 [u,w] + \lambda_2 [w,w] = 0 for every x \in V. Hence each term of the Jacobi sum is a bracket against a multiple of w and therefore vanishes:

[x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0 + 0 + 0 = 0,

so \text{(L2)} holds as well. \enspace \square
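One way to gain confidence in a characteristic-2 example is to brute-force the axioms over \mathbf{F}_2. The sketch below checks the two-dimensional algebra with symmetric bracket determined by [u,u] = w and all other basis brackets zero; elements are coefficient pairs (a, b) representing au + bw, with arithmetic mod 2:

```python
from itertools import product

# V = F_2^2; the pair (a, b) represents a*u + b*w, coefficients mod 2
def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def bracket(x, y):
    # symmetric bilinear bracket with [u,u] = w, all other basis brackets 0
    return (0, (x[0] * y[0]) % 2)

V = [(a, b) for a in (0, 1) for b in (0, 1)]
u = (1, 0)

# (L1) fails: [u,u] = w != 0
assert bracket(u, u) == (0, 1)

# (L1'): [x,y] = -[y,x]; in characteristic 2 this is the same as [x,y] = [y,x]
for x, y in product(V, repeat=2):
    assert bracket(x, y) == bracket(y, x)

# (L2): the Jacobi identity holds for every triple
for x, y, z in product(V, repeat=3):
    jac = add(add(bracket(x, bracket(y, z)),
                  bracket(y, bracket(z, x))),
              bracket(z, bracket(x, y)))
    assert jac == (0, 0)
```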


1.17. \quad Let V be an n-dimensional complex vector space and let L = \mathsf{gl}(V). Suppose that x \in L is diagonalisable, with eigenvalues \lambda_1, \dots, \lambda_n. Show that \text{ad } x \in \mathsf{gl}(L) is also diagonalisable, and that its eigenvalues are \lambda_i - \lambda_j for 1 \leq i, j \leq n.
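This is not a proof, but the claim is easy to test numerically for a sample diagonalisable x: build the matrix of \text{ad } x in the basis of elementary matrices E_{ij} and compare its spectrum with \{\lambda_i - \lambda_j\} (the eigenvalue list below is an arbitrary choice):

```python
import numpy as np

lam = np.array([1.0, 2.0, 5.0])   # sample eigenvalues, chosen arbitrarily
n = len(lam)
x = np.diag(lam)                  # a diagonalisable x, here already diagonal

# matrix of ad x on gl(n) ~ C^(n^2), columns indexed by the basis E_ij (row-major)
ad = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        ad[:, i * n + j] = (x @ E - E @ x).flatten()

eigs = np.sort(np.linalg.eigvals(ad).real)
expected = np.sort([li - lj for li in lam for lj in lam])
assert np.allclose(eigs, expected)
```

Indeed, for diagonal x one sees directly that \text{ad } x sends E_{ij} to (\lambda_i - \lambda_j) E_{ij}, which is the heart of the exercise.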


1.18. \quad Let L be a Lie algebra. We saw in \S 1.6, Example 1.2(2) that the maps \text{ad } x: L \to L for x \in L are derivations of L; these are known as inner derivations. Show that if \text{IDer } L is the set of inner derivations of L, then \text{IDer } L is an ideal of \text{Der } L.


1.19. \quad Let A be an algebra and let \delta: A \to A be a derivation. Prove that \delta satisfies the Leibniz rule

\delta^n(xy) = \sum_{r = 0}^n \binom{n}{r} \delta^r(x) \delta^{n-r}(y) \quad \text{for all } x,y \in A.
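As a sanity check, the Leibniz rule can be tested in the polynomial algebra F[t] with \delta the formal derivative, which is a derivation; the particular polynomials below are arbitrary choices:

```python
from math import comb

def pmul(p, q):
    """Product of polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def pdiff(p):
    """Formal derivative: a derivation of the polynomial algebra."""
    return [i * a for i, a in enumerate(p)][1:] or [0]

def dn(p, n):
    """n-th iterate of the derivation."""
    for _ in range(n):
        p = pdiff(p)
    return p

def padd(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(m)]

def trim(p):
    """Drop trailing zero coefficients before comparing."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

x = [1, 2, 0, 3]   # 1 + 2t + 3t^3
y = [0, 1, 1]      # t + t^2
n = 4

lhs = dn(pmul(x, y), n)
rhs = [0]
for r in range(n + 1):
    term = pmul(dn(x, r), dn(y, n - r))
    rhs = padd(rhs, [comb(n, r) * c for c in term])

assert trim(lhs) == trim(rhs)
```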
