FRONTIERS IN PHYSICS
SECOND EDITION

LIE ALGEBRAS IN PARTICLE PHYSICS
From Isospin to Unified Theories

Howard Georgi

Lie Algebras in Particle Physics
Second Edition
Howard Georgi

Westview Press
Advanced Book Program
A Member of the Perseus Books Group
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book and Perseus Books was aware of a trademark claim, the designations have been printed in initial capital letters.
Library of Congress Catalog Card Number: 99-64878
ISBN: 0-7382-0233-9

Copyright © 1999 by Howard Georgi
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America.
Westview Press is a Member of the Perseus Books Group
To Herman and Mrs. G
Preface to the Revised Edition
Lie Algebras in Particle Physics has been a very successful book. I have long resisted the temptation to produce a revised edition. I do so finally, because I find that there is so much new material that should be included, and so many things that I would like to say slightly differently. On the other hand, one of the good things about the first edition was that it did not do too much. The material could be dealt with in a one semester course by students with good preparation in quantum mechanics. In an attempt to preserve this advantage while including new material, I have flagged some sections that can be left out in a first reading. The titles of these sections begin with an asterisk, as do the problems that refer to them.
I may be prejudiced, but I think that this material is wonderful fun to teach, and to learn. I use this as a text for what is formally a graduate class, but it is taken successfully by many advanced undergrads at Harvard. The important prerequisite is a good background in quantum mechanics and linear algebra.
It has been over five years since I first began to revise this material and typeset it in LaTeX. Between then and now, many, many students have used the evolving manuscript as a text. I am grateful to many of them for suggestions of many kinds, from typos to grammar to pedagogy.
As always, I am enormously grateful to my family for putting up with me for all this time. I am also grateful for their help with my inspirational epilogue.
Howard Georgi
Cambridge, MA
May, 1999
Contents

Why Group Theory?

1 Finite Groups
  1.1 Groups and representations
  1.2 Example - Z3
  1.3 The regular representation
  1.4 Irreducible representations
  1.5 Transformation groups
  1.6 Application: parity in quantum mechanics
  1.7 Example: S3
  1.8 Example: addition of integers
  1.9 Useful theorems
  1.10 Subgroups
  1.11 Schur's lemma
  1.12 * Orthogonality relations
  1.13 Characters
  1.14 Eigenstates
  1.15 Tensor products
  1.16 Example of tensor products
  1.17 * Finding the normal modes
  1.18 * Symmetries of 2n+1-gons
  1.19 Permutation group on n objects
  1.20 Conjugacy classes
  1.21 Young tableaux
  1.22 Example - our old friend S3
  1.23 Another example - S4
  1.24 * Young tableaux and representations of Sn

2 Lie Groups
  2.1 Generators
  2.2 Lie algebras
  2.3 The Jacobi identity
  2.4 The adjoint representation
  2.5 Simple algebras and groups
  2.6 States and operators
  2.7 Fun with exponentials

3 SU(2)
  3.1 J3 eigenstates
  3.2 Raising and lowering operators
  3.3 The standard notation
  3.4 Tensor products
  3.5 J3 values add

4 Tensor Operators
  4.1 Orbital angular momentum
  4.2 Using tensor operators
  4.3 The Wigner-Eckart theorem
  4.4 Example
  4.5 * Making tensor operators
  4.6 Products of operators

5 Isospin
  5.1 Charge independence
  5.2 Creation operators
  5.3 Number operators
  5.4 Isospin generators
  5.5 Symmetry of tensor products
  5.6 The deuteron
  5.7 Superselection rules
  5.8 Other particles
  5.9 Approximate isospin symmetry
  5.10 Perturbation theory

6 Roots and Weights
  6.1 Weights
  6.2 More on the adjoint representation
  6.3 Roots
  6.4 Raising and lowering
  6.5 Lots of SU(2)s
  6.6 Watch carefully - this is important!

7 SU(3)
  7.1 The Gell-Mann matrices
  7.2 Weights and roots of SU(3)

8 Simple Roots
  8.1 Positive weights
  8.2 Simple roots
  8.3 Constructing the algebra
  8.4 Dynkin diagrams
  8.5 Example: G2
  8.6 The roots of G2
  8.7 The Cartan matrix
  8.8 Finding all the roots
  8.9 The SU(2)s
  8.10 Constructing the G2 algebra
  8.11 Another example: the algebra C3
  8.12 Fundamental weights
  8.13 The trace of a generator

9 More SU(3)
  9.1 Fundamental representations of SU(3)
  9.2 Constructing the states
  9.3 The Weyl group
  9.4 Complex conjugation
  9.5 Examples of other representations

10 Tensor Methods
  10.1 Lower and upper indices
  10.2 Tensor components and wave functions
  10.3 Irreducible representations and symmetry
  10.4 Invariant tensors
  10.5 Clebsch-Gordan decomposition
  10.6 Triality
  10.7 Matrix elements and operators
  10.8 Normalization
  10.9 Tensor operators
  10.10 The dimension of (n, m)
  10.11 * The weights of (n, m)
  10.12 Generalization of Wigner-Eckart
  10.13 * Tensors for SU(2)
  10.14 * Clebsch-Gordan coefficients from tensors
  10.15 * Spin s1 + s2 - 1
  10.16 * Spin s1 + s2 - k

11 Hypercharge and Strangeness
  11.1 The eight-fold way
  11.2 The Gell-Mann Okubo formula
  11.3 Hadron resonances
  11.4 Quarks

12 Young Tableaux
  12.1 Raising the indices
  12.2 Clebsch-Gordan decomposition
  12.3 SU(3) → SU(2) × U(1)

13 SU(N)
  13.1 Generalized Gell-Mann matrices
  13.2 SU(N) tensors
  13.3 Dimensions
  13.4 Complex representations
  13.5 SU(N) ⊗ SU(M) ∈ SU(N + M)

14 3-D Harmonic Oscillator
  14.1 Raising and lowering operators
  14.2 Angular momentum
  14.3 A more complicated example

15 SU(6) and the Quark Model
  15.1 Including the spin
  15.2 SU(N) ⊗ SU(M) ∈ SU(NM)
  15.3 The baryon states
  15.4 Magnetic moments

16 Color
  16.1 Colored quarks
  16.2 Quantum Chromodynamics
  16.3 Heavy quarks
  16.4 Flavor SU(4) is useless!

17 Constituent Quarks
  17.1 The nonrelativistic limit

18 Unified Theories and SU(5)
  18.1 Grand unification
  18.2 Parity violation, helicity and handedness
  18.3 Spontaneously broken symmetry
  18.4 Physics of spontaneous symmetry breaking
  18.5 Is the Higgs real?
  18.6 Unification and SU(5)
  18.7 Breaking SU(5)
  18.8 Proton decay

19 The Classical Groups
  19.1 The SO(2n) algebras
  19.2 The SO(2n + 1) algebras
  19.3 The Sp(2n) algebras
  19.4 Quaternions

20 The Classification Theorem
  20.1 Π-systems
  20.2 Regular subalgebras
  20.3 Other subalgebras

21 SO(2n + 1) and Spinors
  21.1 Fundamental weight of SO(2n + 1)
  21.2 Real and pseudo-real
  21.3 Real representations
  21.4 Pseudo-real representations
  21.5 R is an invariant tensor
  21.6 The explicit form for R

22 SO(2n + 2) Spinors
  22.1 Fundamental weights of SO(2n + 2)

23 SU(n) ⊂ SO(2n)
  23.1 Clifford algebras
  23.2 Γm and R as invariant tensors
  23.3 Products of Γs
  23.4 Self-duality
  23.5 Example: SO(10)
  23.6 The SU(n) subalgebra

24 SO(10)
  24.1 SO(10) and SU(4) × SU(2) × SU(2)
  24.2 * Spontaneous breaking of SO(10)
  24.3 * Breaking SO(10) → SU(5)
  24.4 * Breaking SO(10) → SU(3) × SU(2) × U(1)
  24.5 * Breaking SO(10) → SU(3) × U(1)
  24.6 * Lepton number as a fourth color

25 Automorphisms
  25.1 Outer automorphisms
  25.2 Fun with SO(8)

26 Sp(2n)
  26.1 Weights of SU(n)
  26.2 Tensors for Sp(2n)

27 Odds and Ends
  27.1 Exceptional algebras and octonions
  27.2 E6 unification
  27.3 Breaking E6
  27.4 SU(3) × SU(3) × SU(3) unification
  27.5 Anomalies

Epilogue

Index
Why Group Theory?
Group theory is the study of symmetry. It is an incredible labor-saving device. It allows us to say interesting, sometimes very detailed things about physical systems even when we don't understand exactly what the systems are! When I was a teenager, I read an essay by Sir Arthur Stanley Eddington on the Theory of Groups and a quotation from it has stuck with me for over 30 years:¹
We need a super-mathematics in which the operations are as unknown as the quantities they operate on, and a super-mathematician who does not know what he is doing when he performs these operations. Such a super-mathematics is the Theory of Groups.
In this book, I will try to convince you that Eddington had things a little bit wrong, at least as far as physics is concerned. A lot of what physicists use to extract information from symmetry is not the groups themselves, but group representations. You will see exactly what this means in more detail as you read on. What I hope you will take away from this book is enough about the theory of groups and Lie algebras and their representations to use group representations as labor-saving tools, particularly in the study of quantum mechanics.
The basic approach will be to alternate between mathematics and physics, and to approach each problem from several different angles. I hope that you will learn that by using several techniques at once, you can work problems more efficiently, and also understand each of the techniques more deeply.
¹In The World of Mathematics, Ed. by James R. Newman, Simon & Schuster, New York, 1956.
Chapter 1
Finite Groups
We will begin with an introduction to finite group theory. This is not intended to be a self-contained treatment of this enormous and beautiful subject. We will concentrate on a few simple facts that are useful in understanding the compact Lie algebras. We will introduce a lot of definitions, sometimes proving things, but often relying on the reader to prove them.
1.1 Groups and representations
A Group, G, is a set with a rule for assigning to every (ordered) pair of elements, a third element, satisfying:

(1.A.1) If f, g ∈ G then h = fg ∈ G.

(1.A.2) For f, g, h ∈ G, f(gh) = (fg)h.

(1.A.3) There is an identity element, e, such that for all f ∈ G, ef = fe = f.

(1.A.4) Every element f ∈ G has an inverse, f⁻¹, such that f f⁻¹ = f⁻¹ f = e.

Thus a group is a multiplication table specifying g1 g2 ∀ g1, g2 ∈ G. If the group elements are discrete, we can write the multiplication table in the form

       |  e   |  g1   |  g2   | ...
   e   |  e   |  g1   |  g2   | ...
   g1  |  g1  |  g1g1 |  g1g2 | ...          (1.1)
   g2  |  g2  |  g2g1 |  g2g2 | ...
2
A Representation of G is a mapping, D, of the elements of G onto a set of linear operators with the following properties:

(1.B.1) D(e) = 1, where 1 is the identity operator in the space on which the linear operators act.

(1.B.2) D(g1)D(g2) = D(g1g2), in other words the group multiplication law is mapped onto the natural multiplication in the linear space on which the linear operators act.
1.2 Example - Z3
A group is finite if it has a finite number of elements. Otherwise it is infinite. The number of elements in a finite group G is called the order of G. Here is a finite group of order 3.
     | e | a | b
   e | e | a | b
   a | a | b | e          (1.2)
   b | b | e | a

This is Z3, the cyclic group of order 3. Notice that every row and column of the multiplication table contains each element of the group exactly once. This must be the case because the inverse exists.

An Abelian group is one in which the multiplication law is commutative

g1 g2 = g2 g1 .          (1.3)

Evidently, Z3 is Abelian. The following is a representation of Z3

D(e) = 1 ,   D(a) = e^{2πi/3} ,   D(b) = e^{4πi/3}          (1.4)
The dimension of a representation is the dimension of the space on which it acts - the representation (1.4) is 1 dimensional.
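The defining property (1.B.2) is easy to verify for (1.4) by brute force. Here is a short Python sketch (not from the book; the variable names are mine) that checks D(g1)D(g2) = D(g1g2) against the Z3 multiplication table (1.2):

```python
import cmath

# The 1-dimensional representation (1.4) of Z3.
D = {
    "e": 1,
    "a": cmath.exp(2j * cmath.pi / 3),
    "b": cmath.exp(4j * cmath.pi / 3),
}

# Multiplication table (1.2): (row, column) -> product.
table = {
    ("e", "e"): "e", ("e", "a"): "a", ("e", "b"): "b",
    ("a", "e"): "a", ("a", "a"): "b", ("a", "b"): "e",
    ("b", "e"): "b", ("b", "a"): "e", ("b", "b"): "a",
}

# Property (1.B.2): D(g1) D(g2) = D(g1 g2) for every ordered pair.
is_rep = all(
    abs(D[g1] * D[g2] - D[table[(g1, g2)]]) < 1e-12
    for (g1, g2) in table
)
```

Because the group is Abelian, complex numbers (1×1 matrices) suffice here; we will see shortly that non-Abelian groups force us to use genuine matrices.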
1.3 The regular representation
Here's another representation of Z3
        ( 1 0 0 )          ( 0 0 1 )          ( 0 1 0 )
D(e) =  ( 0 1 0 )   D(a) = ( 1 0 0 )   D(b) = ( 0 0 1 )          (1.5)
        ( 0 0 1 )          ( 0 1 0 )          ( 1 0 0 )

This representation was constructed directly from the multiplication table by the following trick. Take the group elements themselves to form an orthonormal basis for a vector space, |e⟩, |a⟩, and |b⟩. Now define

D(g1)|g2⟩ = |g1 g2⟩          (1.6)
The reader should show that this is a representation. It is called the regular
representation. Evidently, the dimension of the regular representation is the
order of the group. The matrices of (1.5) are then constructed as follows.
|e1⟩ = |e⟩ ,   |e2⟩ = |a⟩ ,   |e3⟩ = |b⟩          (1.7)

[D(g)]ij = ⟨ei|D(g)|ej⟩          (1.8)

The matrices are the matrix elements of the linear operators. (1.8) is a simple, but very general and very important way of going back and forth from operators to matrices. This works for any representation, not just the regular representation. We will use it constantly. The basic idea here is just the insertion of a complete set of intermediate states. The matrix corresponding to a product of operators is the matrix product of the matrices corresponding to the operators -

[D(g1 g2)]ij = [D(g1) D(g2)]ij
             = ⟨ei|D(g1) D(g2)|ej⟩
             = Σ_k ⟨ei|D(g1)|ek⟩ ⟨ek|D(g2)|ej⟩          (1.9)
             = Σ_k [D(g1)]ik [D(g2)]kj
Note that the construction of the regular representation is completely general for any finite group. For any finite group, we can define a vector space in which the basis vectors are labeled by the group elements. Then (1.6) defines the regular representation. We will see the regular representation of various groups in this chapter.
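The construction is mechanical enough to automate. The following Python sketch (mine, not the book's) builds the regular representation of any finite group straight from its multiplication table, using (1.6) and (1.8), and checks the homomorphism property; run on Z3 it reproduces the matrices (1.5):

```python
import numpy as np

# Z3 multiplication table; any finite group's table would work the same way.
elements = ["e", "a", "b"]
prod = {("e", "e"): "e", ("e", "a"): "a", ("e", "b"): "b",
        ("a", "e"): "a", ("a", "a"): "b", ("a", "b"): "e",
        ("b", "e"): "b", ("b", "a"): "e", ("b", "b"): "a"}

def regular_rep(g):
    """Matrix of D(g) in the basis |e_i> = |g_i>, via (1.6) and (1.8):
    [D(g)]_ij = 1 if g g_j = g_i, else 0."""
    n = len(elements)
    M = np.zeros((n, n))
    for j, gj in enumerate(elements):
        i = elements.index(prod[(g, gj)])
        M[i, j] = 1.0
    return M

# D(g1) D(g2) = D(g1 g2) for every pair, as required by (1.B.2).
homomorphism_ok = all(
    np.allclose(regular_rep(g1) @ regular_rep(g2),
                regular_rep(prod[(g1, g2)]))
    for g1 in elements for g2 in elements
)
```

Note that each regular-representation matrix is a permutation matrix, a direct reflection of the fact that every row of the multiplication table contains each element exactly once.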
1.4 Irreducible representations
What makes the idea of group representations so powerful is the fact that they live in linear spaces. And the wonderful thing about linear spaces is we are free to choose to represent the states in a more convenient way by making a linear transformation. As long as the transformation is invertible, the new states are just as good as the old. Such a transformation on the states produces a similarity transformation on the linear operators, so that we can always make a new representation of the form
D(g) → D'(g) = S⁻¹ D(g) S          (1.10)
Because of the form of the similarity transformation, the new set of operators has the same multiplication rules as the old one, so D' is a representation if D is. D' and D are said to be equivalent representations because they differ just by a trivial choice of basis.
Unitary operators (O such that O† = O⁻¹) are particularly important. A representation is unitary if all the D(g)s are unitary. Both the representations we have discussed so far are unitary. It will turn out that all representations of finite groups are equivalent to unitary representations (we'll prove this later - it is easy and neat).
A representation is reducible if it has an invariant subspace, which means that the action of any D(g) on any vector in the subspace is still in
the subspace. In terms of a projection operator P onto the subspace this con-
dition can be written as
P D(g) P = D(g) P   ∀g ∈ G          (1.11)
For example, the regular representation of Z3 (1.5) has an invariant subspace projected on by

          ( 1 1 1 )
P = 1/3   ( 1 1 1 )          (1.12)
          ( 1 1 1 )

because D(g)P = P ∀g. The restriction of the representation to the invariant subspace is itself a representation. In this case, it is the trivial representation for which D(g) = 1 (the trivial representation, D(g) = 1, is always a representation - every group has one).
A representation is irreducible if it is not reducible.
A representation is completely reducible if it is equivalent to a representation whose matrix elements have the following form:

        ( D1(g)   0     0    ... )
D(g) =  (   0   D2(g)   0    ... )          (1.13)
        (   0     0   D3(g)  ... )
        (  ...   ...   ...   ... )

where Dj(g) is irreducible ∀j. This is called block diagonal form.

A representation in block diagonal form is said to be the direct sum of the subrepresentations, Dj(g),

D1 ⊕ D2 ⊕ · · ·          (1.14)
In transforming a representation to block diagonal form, we are decom-
posing the original representation into a direct sum of its irreducible components. Thus another way of defining complete reducibility is to say that a completely reducible representation can be decomposed into a direct sum of irreducible representations. This is an important idea. We will use it often.
We will show later that any representation of a finite group is completely reducible. For example, for (1.5), take
           ( 1   1    1  )
S = 1/√3   ( 1   ω²   ω  )          (1.15)
           ( 1   ω    ω² )

where

ω = e^{2πi/3}          (1.16)

then

        ( 1 0 0 )           ( 1 0 0  )           ( 1 0  0 )
D'(e) = ( 0 1 0 )   D'(a) = ( 0 ω 0  )   D'(b) = ( 0 ω² 0 )          (1.17)
        ( 0 0 1 )           ( 0 0 ω² )           ( 0 0  ω )
1.5 Transformation groups
There is a natural multiplication law for transformations of a physical system. If g1 and g2 are two transformations, g1 g2 means first do g2 and then do g1.
Note that it is purely convention whether we define our composition law to be right to left, as we have done, or left to right. Either gives a perfectly consistent definition of a transformation group.
If this transformation is a symmetry of a quantum mechanical system, then the transformation takes the Hilbert space into an equivalent one. Then for each group element g, there is a unitary operator D(g) that maps the Hilbert space into an equivalent one. These unitary operators form a representation of the transformation group because the transformed quantum states represent the transformed physical system. Thus for any set of symmetries, there is a representation of the symmetry group on the Hilbert space - we say that the Hilbert space transforms according to some representation of the group. Furthermore, because the transformed states have the same energy as
the originals, D(g) commutes with the Hamiltonian, [D(g), H] = 0. As we
will see in more detail later, this means that we can always choose the energy eigenstates to transform like irreducible representations of the group. It is useful to think about this in a simple example.
1.6 Application: parity in quantum mechanics
Parity is the operation of reflection in a mirror. Reflecting twice gets you back to where you started. If p is a group element representing the parity reflection, this means that p² = e. Thus this is a transformation that together with the identity transformation (that is, doing nothing) forms a very simple group, with the following multiplication law:

     | e | p
   e | e | p          (1.18)
   p | p | e
This group is called Z2. For this group there are only two irreducible representations, the trivial one in which D(p) = 1 and one in which D(e) = 1
and D (p) = -1. Any representation is completely reducible. In particular,
that means that the Hilbert space of any parity invariant system can be decomposed into states that behave like irreducible representations, that is, on which D(p) is either 1 or -1. Furthermore, because D(p) commutes with the Hamiltonian, D(p) and H can be simultaneously diagonalized. That is, we can assign each energy eigenstate a definite value of D(p). The energy
eigenstates on which D(p) = 1 are said to transform according to the trivial
representation. Those on which D(p) = -1 transform according to the other
representation. This should be familiar from nonrelativistic quantum me-
chanics in one dimension. There you know that a particle in a potential that is
symmetric about x = 0 has energy eigenfunctions that are either symmetric
under x → -x (corresponding to the trivial representation), or antisymmetric
(the representation with D(p) = -1).
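This one-dimensional picture can be demonstrated numerically. The Python sketch below (my illustration, not the book's; the grid size and the choice V(x) = x² are arbitrary) discretizes a symmetric-well Hamiltonian, checks that the parity operator commutes with it, and finds that the low-lying eigenfunctions alternate between the two irreducible representations of Z2:

```python
import numpy as np

# Discretize H = p^2/2 + V(x) on a grid symmetric about x = 0,
# with a parity-symmetric potential V(x) = x^2 (hbar = m = 1).
n = 201
x = np.linspace(-5, 5, n)
dx = x[1] - x[0]
V = np.diag(x**2)
# Second-difference approximation to the kinetic term -(1/2) d^2/dx^2.
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
H = T + V

# D(p) reverses the grid: (D(p) psi)(x) = psi(-x).
Dp = np.fliplr(np.eye(n))

# [D(p), H] = 0 because V(-x) = V(x).
commutes = np.allclose(Dp @ H, H @ Dp)

# Each nondegenerate eigenfunction satisfies D(p) psi = +/- psi,
# so <psi|D(p)|psi> is +1 (symmetric) or -1 (antisymmetric).
vals, vecs = np.linalg.eigh(H)
parities = [float(vecs[:, k] @ Dp @ vecs[:, k]) for k in range(4)]
```

The computed parities come out +1, -1, +1, -1 for the four lowest states, the familiar even/odd alternation of a symmetric well.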
1.7 Example: S3
The permutation group (or symmetric group) on 3 objects is called S3. Besides the identity e, its elements are

a1 = (1,2,3)   a2 = (3,2,1)
a3 = (1,2)   a4 = (2,3)   a5 = (3,1)          (1.19)

The notation means that a1 is a cyclic permutation of the things in positions 1, 2 and 3; a2 is the inverse, anticyclic permutation; a3 interchanges the objects in positions 1 and 2; and so on. The multiplication law is then determined by the transformation rule that g1 g2 means first do g2 and then do g1. It is

      | e  | a1 | a2 | a3 | a4 | a5
   e  | e  | a1 | a2 | a3 | a4 | a5
   a1 | a1 | a2 | e  | a5 | a3 | a4
   a2 | a2 | e  | a1 | a4 | a5 | a3          (1.20)
   a3 | a3 | a4 | a5 | e  | a1 | a2
   a4 | a4 | a5 | a3 | a2 | e  | a1
   a5 | a5 | a3 | a4 | a1 | a2 | e
We could equally well define it to mean first do g1 and then do g2. These two rules define different multiplication tables, but they are related to one another by simple relabeling of the elements, so they give the same group. There is another possibility of confusion here between whether we are permuting the objects in positions 1, 2 and 3, or simply treating 1, 2 and 3 as names for the three objects. Again these two give different multiplication tables, but only up to trivial renamings. The first is a little more physical, so we will use that. The permutation group is another example of a transformation group on a physical system.
S3 is non-Abelian because the group multiplication law is not commuta-
tive. We will see that it is the lack of commutativity that makes group theory so interesting.
Here is a unitary irreducible representation of S3

         ( 1  0 )           ( -1/2  -√3/2 )           ( -1/2    √3/2 )
D(e)  =  ( 0  1 ) , D(a1) = (  √3/2  -1/2 ) , D(a2) = ( -√3/2   -1/2 ) ,

         ( -1  0 )          (  1/2   √3/2 )           (  1/2   -√3/2 )          (1.21)
D(a3) =  (  0  1 ) , D(a4) = ( √3/2  -1/2 ) , D(a5) = ( -√3/2   -1/2 )
The interesting thing is that the irreducible unitary representation is more than 1 dimensional. It is necessary that at least some of the representations of a non-Abelian group must be matrices rather than numbers. Only matrices can reproduce the non-Abelian multiplication law. Not all the operators in the representation can be diagonalized simultaneously. It is this that is responsible for a lot of the power of the theory of group representations.
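That the six 2×2 matrices of (1.21) really reproduce the multiplication table (1.20), and that each is unitary (here, real orthogonal), can be verified directly. This Python sketch is mine, not the book's:

```python
import numpy as np

s = np.sqrt(3) / 2
# The matrices of the 2-dimensional representation (1.21).
D = {
    "e":  np.array([[1., 0.], [0., 1.]]),
    "a1": np.array([[-0.5, -s], [s, -0.5]]),
    "a2": np.array([[-0.5, s], [-s, -0.5]]),
    "a3": np.array([[-1., 0.], [0., 1.]]),
    "a4": np.array([[0.5, s], [s, -0.5]]),
    "a5": np.array([[0.5, -s], [-s, -0.5]]),
}

# Multiplication table (1.20): entry at (row g1, column g2) is g1 g2.
names = ["e", "a1", "a2", "a3", "a4", "a5"]
rows = [
    ["e", "a1", "a2", "a3", "a4", "a5"],
    ["a1", "a2", "e", "a5", "a3", "a4"],
    ["a2", "e", "a1", "a4", "a5", "a3"],
    ["a3", "a4", "a5", "e", "a1", "a2"],
    ["a4", "a5", "a3", "a2", "e", "a1"],
    ["a5", "a3", "a4", "a1", "a2", "e"],
]
table = {(names[i], names[j]): rows[i][j]
         for i in range(6) for j in range(6)}

rep_ok = all(np.allclose(D[g1] @ D[g2], D[table[(g1, g2)]])
             for (g1, g2) in table)
unitary_ok = all(np.allclose(M @ M.T, np.eye(2)) for M in D.values())
```

Geometrically, a1 and a2 are rotations by ±120° and a3, a4, a5 are reflections, which is why no change of basis can diagonalize all six at once.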
1.8 Example: addition of integers
The integers form an infinite group under addition.
xy = x+y
(1.22)
This is rather unimaginatively called the additive group of the integers. Since this group is infinite, we can't write down the multiplication table, but the rule above specifies it completely.
Here is a representation:

D(x) = ( 1  x )          (1.23)
       ( 0  1 )

This representation is reducible, but you can show that it is not completely reducible and it is not equivalent to a unitary representation. It is reducible because

D(x) P = P          (1.24)

where

P = ( 1  0 )          (1.25)
    ( 0  0 )

However,

D(x) (1 − P) ≠ (1 − P)          (1.26)
so it is not completely reducible.
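A quick Python check of (1.23)-(1.26) (an illustration of mine, with arbitrary sample integers):

```python
import numpy as np

def D(x):
    """The representation (1.23) of the additive group of the integers."""
    return np.array([[1., float(x)], [0., 1.]])

# The group law (1.22) is respected: D(x) D(y) = D(x + y).
additive_ok = np.allclose(D(2) @ D(3), D(5))

P = np.array([[1., 0.], [0., 0.]])   # the projector (1.25)
Q = np.eye(2) - P                    # projector onto the complement

reducible = np.allclose(D(7) @ P, P)           # (1.24): invariant subspace
not_completely = not np.allclose(D(7) @ Q, Q)  # (1.26): complement is not
```

The complementary subspace fails to be invariant for every nonzero x, which is exactly what blocks the decomposition into a direct sum.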
The additive group of the integers is infinite, because, obviously, there are an infinite number of integers. For a finite group, all reducible representations are completely reducible, because all representations are equivalent to unitary representations.
1.9 Useful theorems
Theorem 1.1 Every representation of a finite group is equivalent to a unitary representation.
Proof: Suppose D(g) is a representation of a finite group G. Construct the operator

S = Σ_{g∈G} D(g)† D(g)          (1.27)

S is hermitian and positive semidefinite. Thus it can be diagonalized and its eigenvalues are non-negative:

S = U⁻¹ d U          (1.28)

where d is diagonal

d = ( d1   0  ... )
    (  0  d2  ... )          (1.29)
    ( ... ... ... )

where dj ≥ 0 ∀j. Because of the group property, all of the dj s are actually positive. Proof - suppose one of the dj s is zero. Then there is a vector λ such that Sλ = 0. But then

0 = λ† S λ = Σ_{g∈G} ||D(g)λ||² .          (1.30)

Thus D(g)λ must vanish for all g, which is impossible, since D(e) = 1. Therefore, we can construct a square-root of S that is hermitian and invertible

X = S^{1/2} = U⁻¹ ( √d1   0   ... )
                  (  0   √d2  ... )  U          (1.31)
                  ( ...  ...  ... )

X is invertible, because none of the dj s are zero. We can now define

D'(g) = X D(g) X⁻¹          (1.32)
Now, somewhat amazingly, this representation is unitary!

D'(g)† D'(g) = X⁻¹ D(g)† X X D(g) X⁻¹ = X⁻¹ D(g)† S D(g) X⁻¹          (1.33)

but

D(g)† S D(g) = D(g)† ( Σ_{h∈G} D(h)† D(h) ) D(g)
             = Σ_{h∈G} D(hg)† D(hg)          (1.34)
             = Σ_{h∈G} D(h)† D(h) = S = X²

where the last line follows because hg runs over all elements of G when h does. Thus D'(g)† D'(g) = X⁻¹ X² X⁻¹ = 1. QED.

We saw in the representation (1.23) of the additive group of the integers
an example of a reducible but not completely reducible representation. The way it works is that there is a P that projects onto an invariant subspace, but (1 - P) does not. This is impossible for a unitary representation, and thus
representations of finite groups are always completely reducible. Let's prove
it.
Theorem 1.2 Every representation of a finite group is completely reducible.
Proof: By the previous theorem, it is sufficient to consider unitary representations. If the representation is irreducible, we are finished, because it is already in block diagonal form. If it is reducible, then ∃ a projector P such that P D(g) P = D(g) P ∀g ∈ G. This is the condition that P project onto an invariant subspace. Taking the adjoint gives P D(g)† P = P D(g)† ∀g ∈ G. But because D(g) is unitary, D(g)† = D(g)⁻¹ = D(g⁻¹), and thus, since g⁻¹ runs over all of G when g does, P D(g) P = P D(g) ∀g ∈ G. But this implies that (1 − P) D(g) (1 − P) = D(g) (1 − P) ∀g ∈ G, and thus 1 − P projects onto an invariant subspace. Thus we can keep going by induction and eventually completely reduce the representation.
1.10 Subgroups
A group H whose elements are all elements of a group G is called a subgroup of G. The identity, and the group G are trivial subgroups of G. But many groups have nontrivial subgroups (which just means some subgroup other
than G or e) as well. For example, the permutation group, S3, has a Z3 subgroup formed by the elements {e, a1, a2}.
We can use a subgroup to divide up the elements of the group into subsets called cosets. A right-coset of the subgroup H in the group G is a set of
elements formed by the action of the elements of H on the left on a given
element of G, that is all elements of the form Hg for some fixed g. You can define left-cosets as well.
For example, {a3, a4, a5} is a coset of Z3 in S3 in (1.20) above. The
number of elements in each coset is the order of H. Every element of G must belong to one and only one coset. Thus for finite groups, the order of a subgroup H must be a factor of order of G. It is also sometimes useful to think about the coset-space, G / H defined by regarding each coset as a single element of the space.
A subgroup H of G is called an invariant or normal subgroup if for every g ∈ G

g H = H g          (1.35)

which is (we hope) an obvious short-hand for the following: for every g ∈ G and h1 ∈ H there exists an h2 ∈ H such that h1 g = g h2, or g h2 g⁻¹ = h1. The trivial subgroups e and G are invariant for any group. It is less obvious but also true of the subgroup Z3 of S3 in (1.20) (you can see this by direct computation or notice that the elements of Z3 are those permutations that involve an even number of interchanges). However, the set {e, a4} is a subgroup of S3 which is not invariant: a5 {e, a4} = {a5, a2} while {e, a4} a5 = {a5, a1}.

If H is invariant, then we can regard the coset space as a group. The multiplication law in G gives the natural multiplication law on the cosets, Hg:

H g1 H g2 = H g1 g2          (1.36)

But if H is invariant, g1 H g1⁻¹ = H, so the product of elements in two cosets is in the coset represented by the product of the elements. In this case, the coset space, G/H, is called the factor group of G by H.

What is the factor group S3/Z3? The answer is Z2.
The center of a group G is the set of all elements of G that commute
with all elements of G. The center is always an Abelian, invariant subgroup
of G. However, it may be trivial, consisting only of the identity, or of the
whole group.
There is one other concept, related to the idea of an invariant subgroup,
that will be useful. Notice that the condition for a subgroup to be invariant can be rewritten as

g H g⁻¹ = H   ∀g ∈ G          (1.37)

This suggests that we consider sets, rather than subgroups, satisfying the same condition,

g C g⁻¹ = C   ∀g ∈ G          (1.38)

Such sets are called conjugacy classes. We will see later that there is a one-to-one correspondence between them and irreducible representations. A subgroup that is a union of conjugacy classes is invariant.

Example - The conjugacy classes of S3 are {e}, {a1, a2} and {a3, a4, a5}.

The mapping

g1 → g⁻¹ g1 g          (1.39)

for a fixed g is also interesting. It is called an inner automorphism. An isomorphism is a one-to-one mapping of one group onto another that preserves the multiplication law. An automorphism is a one-to-one mapping of a group onto itself that preserves the multiplication law. It is easy to see that (1.39) is an automorphism. Because g⁻¹ g1 g g⁻¹ g2 g = g⁻¹ g1 g2 g, it preserves the multiplication law. Since g⁻¹ g1 g = g⁻¹ g2 g implies g1 = g2, it is one to one. An automorphism of the form (1.39), where g is a group element, is called an inner automorphism. An outer automorphism is one that cannot be written as conjugation by any group element g.
1.11 Schur's lemma

Theorem 1.3 If D1(g) A = A D2(g) ∀g ∈ G, where D1 and D2 are inequivalent, irreducible representations, then A = 0.
Proof: This is part of Schur's lemma. First suppose that there is a vector |µ⟩ such that A|µ⟩ = 0. Then there is a non-zero projector, P, onto the subspace that annihilates A on the right. But this subspace is invariant with respect to the representation D2, because

A D2(g) P = D1(g) A P = 0   ∀g ∈ G          (1.40)

But because D2 is irreducible, P must project onto the whole space, and A must vanish. If A annihilates one state, it must annihilate them all. A similar argument shows that A vanishes if there is a ⟨ν| which annihilates A.

If no vector annihilates A on either side, then it must be an invertible square matrix. It must be square because, for example, if the number of rows were larger than the number of columns, then the rows could not be a complete set of states, and there would be a vector that annihilates A on the
right. A square matrix is invertible unless its determinant vanishes. But if the determinant vanishes, then the set of homogeneous linear equations

A|µ⟩ = 0          (1.41)

has a nontrivial solution, which again means that there is a vector that annihilates A. But if A is square and invertible, then

D2(g) = A⁻¹ D1(g) A          (1.42)

so D1 and D2 are equivalent, contrary to assumption. QED.

The more important half of Schur's lemma applies to the situation where D1 and D2 above are equivalent representations. In this case, we might as well take D1 = D2 = D, because we can do so by a simple change of basis. The other half of Schur's lemma is the following.

Theorem 1.4 If D(g) A = A D(g) ∀g ∈ G, where D is a finite dimensional irreducible representation, then A ∝ I.

In words, if a matrix commutes with all the elements of a finite dimensional irreducible representation, it is proportional to the identity.

Proof: Note that here the restriction to a finite dimensional representation is important. We use the fact that any finite dimensional matrix has at least one eigenvalue, because the characteristic equation det(A − λI) = 0 has at least one root, and then we can solve the homogeneous linear equations for the components of the eigenvector |µ⟩. But then D(g)(A − λI) = (A − λI) D(g) ∀g ∈ G and (A − λI)|µ⟩ = 0. Thus the same argument we used in the proof of the previous theorem implies (A − λI) = 0. QED.
A consequence of Schur's lemma is that the form of the basis states of an irreducible representation is essentially unique. We can rewrite theorem 1.4 as the statement

A⁻¹ D(g) A = D(g) ∀g ∈ G  ⟹  A ∝ I
(1.43)

for any irreducible representation D. This means that once the form of D is fixed, there is no further freedom to make nontrivial similarity transformations on the states. The only unitary transformation you can make is to multiply all the states by the same phase factor.
In quantum mechanics, Schur's lemma has very strong consequences for the matrix elements of any operator, O, corresponding to an observable that is invariant under the symmetry transformations. This is because the matrix elements ⟨a, j, x| O |b, k, y⟩ behave like the A operator in (1.40). To see this,
let's consider the complete reduction of the Hilbert space in more detail. The symmetry group gets mapped into a unitary representation
g → D(g)   ∀g ∈ G
(1.44)
where D is the (in general very reducible) unitary representation of G that
acts on the entire Hilbert space of the quantum mechanical system. But if the
representation is completely reducible, we know that we can choose a basis
in which D has block diagonal form with each block corresponding to some
unitary irreducible representation of G. We can write the orthonormal basis
states as
|a, j, x⟩
(1.45)

satisfying

⟨a, j, x | b, k, y⟩ = δab δjk δxy
(1.46)
where a labels the irreducible representation, j = 1 to na labels the state
within the representation, and x represents whatever other physical parameters there are.
Implicit in this treatment is an important assumption that we will almost always make without talking about it. We assume that we have chosen a basis in which every occurrence of each irreducible representation a is described by the same set of unitary representation matrices, Da(g). In other words, for each irreducible representation, we choose a canonical form, and use it exclusively.
In this special basis, the matrix elements of D (g) are
⟨a, j, x| D(g) |b, k, y⟩ = δab δxy [Da(g)]jk
(1.47)
This is just a rewriting of (1.13) with explicit indices rather than as a matrix.
We can now check that our treatment makes sense by writing the representa-
tion D in this basis by inserting a complete set of intermediate states on both
sides:
1 = Σ_{a,j,x} |a, j, x⟩⟨a, j, x|
(1.48)
Then we can write
D(g) = Σ_{a,j,x} Σ_{b,k,y} |a, j, x⟩ ⟨a, j, x| D(g) |b, k, y⟩ ⟨b, k, y|

     = Σ_{a,j,x} Σ_{b,k,y} |a, j, x⟩ δab δxy [Da(g)]jk ⟨b, k, y|

     = Σ_{a,j,k,x} |a, j, x⟩ [Da(g)]jk ⟨a, k, x|
(1.49)
This is another way of writing a representation that is in block diagonal form. Note that if a particular irreducible representation appears only once in D, then we don't actually need the x variable to label its states. But typically, in the full quantum mechanical Hilbert space, each irreducible representation will appear many times, and then the physical x variable distinguishes states that have the same symmetry properties, but different physics. The important fact, however, is that the dependence on the physics in (1.47) is rather trivial - only that the states are orthonormal - all the group theory is independent of x and y.
Under the symmetry transformation, since the states transform like
|μ⟩ → D(g) |μ⟩   ⟨μ| → ⟨μ| D(g)†
(1.50)
operators transform like
O → D(g) O D(g)†
( 1.51)
in order that all matrix elements remain unchanged. Thus an invariant observable satisfies

O → D(g) O D(g)† = O
(1.52)

which implies that O commutes with D(g):

[O, D(g)] = 0   ∀g ∈ G.
(1.53)
Then we can constrain the matrix element
⟨a, j, x| O |b, k, y⟩
(1.54)
by arguing as follows:
0 = ⟨a, j, x| [O, D(g)] |b, k, y⟩

  = Σ_{k'} ⟨a, j, x| O |b, k', y⟩ ⟨b, k', y| D(g) |b, k, y⟩

  - Σ_{j'} ⟨a, j, x| D(g) |a, j', x⟩ ⟨a, j', x| O |b, k, y⟩
(1.55)
Now we use (1.47), which exhibits the fact that the matrix elements of D(g)
have only trivial dependence on the physics, to write
0 = ⟨a, j, x| [O, D(g)] |b, k, y⟩

  = Σ_{k'} ⟨a, j, x| O |b, k', y⟩ [Db(g)]k'k

  - Σ_{j'} [Da(g)]jj' ⟨a, j', x| O |b, k, y⟩
(1.56)
Thus the matrix element (1.54) satisfies the hypotheses of Schur's lemma. It must vanish if a ≠ b. It must be proportional to the identity (in indices, that is δjk) for a = b. However, the symmetry doesn't tell us anything about the dependence on the physical parameters, x and y. Thus we can write

⟨a, j, x| O |b, k, y⟩ = fa(x, y) δab δjk
(1.57)
The importance of this is that the physics is all contained in the function
fa(x, y) - all the dependence on the group theory labels is completely fixed
by the symmetry. As we will see, this can be very powerful. This is a simple example of the Wigner-Eckart theorem, which we will discuss in much more generality later.
1.12 * Orthogonality relations
The same kind of summation over the group elements that we used in the proof of theorem 1.1, can be used together with Schur's lemma to show some more remarkable properties of the irreducible representations. Consider the following linear operator (written as a "dyadic")
A_{jl} = Σ_{g∈G} Da(g⁻¹) |a, j⟩⟨b, l| Db(g)
(1.58)
where Da and Db are finite dimensional irreducible representations of G.
Now look at
Da(g1) A_{jl} = Σ_{g∈G} Da(g1) Da(g⁻¹) |a, j⟩⟨b, l| Db(g)
(1.59)

= Σ_{g∈G} Da(g1 g⁻¹) |a, j⟩⟨b, l| Db(g)
(1.60)

= Σ_{g∈G} Da((g g1⁻¹)⁻¹) |a, j⟩⟨b, l| Db(g)
(1.61)

Now let g' = g g1⁻¹

= Σ_{g'∈G} Da(g'⁻¹) |a, j⟩⟨b, l| Db(g' g1)
(1.62)

= Σ_{g'∈G} Da(g'⁻¹) |a, j⟩⟨b, l| Db(g') Db(g1) = A_{jl} Db(g1)
(1.63)
Now Schur's lemma (theorems 1.3 and 1.4) implies A_{jl} = 0 if Da and Db are different, and further that if they are the same (remember that we have chosen a canonical form for each representation, so equivalent representations are written in exactly the same way) A_{jl} ∝ I. Thus we can write

A_{jl} = Σ_{g∈G} Da(g⁻¹) |a, j⟩⟨b, l| Db(g) = δab λ_{jl} I
(1.64)
To compute λ_{jl}, compute the trace of A_{jl} (in the Hilbert space, not the indices) in two different ways. We can write

Tr A_{jl} = δab λ_{jl} Tr I = δab λ_{jl} na
(1.65)

where na is the dimension of Da. But we can also use the cyclic property of the trace and the fact that A_{jl} ∝ δab to write

Tr A_{jl} = δab Σ_{g∈G} ⟨a, l| Da(g) Da(g⁻¹) |a, j⟩ = N δab δjl
(1.66)

where N is the order of the group. Thus λ_{jl} = N δjl / na and we have shown

Σ_{g∈G} Da(g⁻¹) |a, j⟩⟨b, l| Db(g) = (N/na) δab δjl I
(1.67)
Taking the matrix elements of these relations yields orthogonality relations for the matrix elements of irreducible representations.
(na/N) Σ_{g∈G} [Da(g⁻¹)]kj [Db(g)]lm = δab δjl δkm
(1.68)

For unitary irreducible representations, we can write

(na/N) Σ_{g∈G} [Da(g)]*jk [Db(g)]lm = δab δjl δkm
(1.69)

so that with proper normalization, the matrix elements of the inequivalent unitary irreducible representations

√(na/N) [Da(g)]jk
(1.70)
are orthonormal functions of the group elements, g. Because the matrix elements are orthonormal, they must be linearly independent. We can also show
that they are a complete set of functions of g, in the sense that an arbitrary
function of g can be expanded in them. An arbitrary function of g can be writ-
ten in terms of a bra vector in the space on which the regular representation
acts:
F(g) = ⟨F|g⟩ = ⟨F| DR(g) |e⟩
(I.71)
where
⟨F| = Σ_{g'∈G} F(g') ⟨g'|
g'EG
(1.72)
and DR is the regular representation. Thus an arbitrary F(g) can be written
as a linear combination of the matrix elements of the regular representation.
F(g) = Σ_{g'∈G} F(g') ⟨g'| DR(g) |e⟩ = Σ_{g'∈G} F(g') [DR(g)]g'e
(1.73)
But since DR is completely reducible, this can be rewritten as a linear com-
bination of the matrix elements of the irreducible representations. Note that while this shows that the matrix elements of the inequivalent irreducible representations are complete, it doesn't tell us how to actually find what they are. The orthogonality relations are the same. They are useful only once we actually know explicitly what the representation look like. Putting these results together, we have proved
Theorem 1.5 The matrix elements of the unitary, irreducible representations of G are a complete orthonormal set for the vector space of the regular representation, or alternatively, for functions of g ∈ G.
An immediate corollary is a result that is rather amazing:
Σ_a na² = N
(1.74)

- the order of the group N is the sum of the squares of the dimensions na of the irreducible representations, just because this is the number of components of the matrix elements of the irreducible representations. You can check that this works for all the examples we have seen.
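The orthogonality relation (1.69) and the sum rule (1.74) can be checked directly for S3. This sketch is not from the text; it assumes Python with numpy and uses the explicit irreducible representations of S3 (trivial, sign, and the 2-dimensional one from (1.21)).

```python
import numpy as np

# The three inequivalent irreducible representations of S3, in the element
# order e, a1, a2, a3, a4, a5 (a1, a2 the 3-cycles; a3, a4, a5 the
# interchanges).  All matrices are real, so complex conjugation is trivial.
s = np.sqrt(3) / 2
D0 = [np.array([[1.0]]) for _ in range(6)]                       # trivial
D1 = [np.array([[float(x)]]) for x in (1, 1, 1, -1, -1, -1)]     # "sign"
D2 = [np.array(m) for m in (
    [[1.0, 0], [0, 1]], [[-0.5, -s], [s, -0.5]], [[-0.5, s], [-s, -0.5]],
    [[-1.0, 0], [0, 1]], [[0.5, s], [s, -0.5]], [[0.5, -s], [-s, -0.5]])]
reps = [D0, D1, D2]
N = 6

def inner(Da, Db, j, k, l, m):
    """(na/N) sum_g [Da(g)]*_jk [Db(g)]_lm, the left side of (1.69)."""
    na = Da[0].shape[0]
    return (na / N) * sum(Da[g][j, k] * Db[g][l, m] for g in range(N))

# (1.69): the result is 1 when (Da,j,k) = (Db,l,m), and 0 otherwise.
ok = all(
    abs(inner(Da, Db, j, k, l, m)
        - (1.0 if (a == b and j == l and k == m) else 0.0)) < 1e-9
    for a, Da in enumerate(reps) for b, Db in enumerate(reps)
    for j in range(Da[0].shape[0]) for k in range(Da[0].shape[0])
    for l in range(Db[0].shape[0]) for m in range(Db[0].shape[0]))

# (1.74): the order of the group is the sum of squares of the dimensions.
sum_of_squares = sum(D[0].shape[0] ** 2 for D in reps)   # 1 + 1 + 4 = 6
```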
Example: Fourier series - the cyclic group ZN with elements aj for j = 0 to N - 1 (with a0 = e)

aj ak = a(j+k) mod N
(1.75)

The irreducible representations of ZN are

Dn(aj) = e^{2πinj/N}
(1.76)
20
CHAPTER 1. FINITE GROUPS
all 1-dimensional.¹ Thus (1.69) gives

(1/N) Σ_{j=0}^{N-1} e^{-2πin'j/N} e^{2πinj/N} = δn'n
(1.77)

which is the fundamental relation for Fourier series.
1.13 Characters
The characters χD(g) of a group representation D are the traces of the linear operators of the representation, or of their matrix elements:

χD(g) = Tr D(g) = Σ_i [D(g)]ii
(1.78)
The advantage of the characters is that because of the cyclic property of the trace, Tr(AB) = Tr(BA), they are unchanged by similarity transformations; thus all equivalent representations have the same characters. The characters are also different for each inequivalent irreducible representation, Da - in fact, they are orthonormal up to an overall factor of N. To see this, just sum (1.69) over j = k and l = m:

(1/N) Σ_{g∈G} Σ_{j=k} Σ_{l=m} [Da(g)]*jk [Db(g)]lm = (1/na) δab Σ_{j=k} Σ_{l=m} δjl δkm = δab

or

(1/N) Σ_{g∈G} χDa(g)* χDb(g) = δab
(1.79)
Since the characters of different irreducible representations are orthogonal, they are different.

The characters are constant on conjugacy classes because

χD(h g h⁻¹) = Tr (D(h) D(g) D(h)⁻¹) = Tr D(g) = χD(g)
(1.80)
It is less obvious, but also true that the characters are a complete basis for functions that are constant on the conjugacy classes and we can see this by
explicit calculation. Suppose that F(g1) is such a function. We already know
¹We will prove below that finite Abelian groups have only 1-dimensional irreducible representations.
that F(g1) can be expanded in terms of the matrix elements of the irreducible representations -

F(g1) = Σ_{a,j,k} c^a_{jk} [Da(g1)]jk
(1.81)

but since F is constant on conjugacy classes, we can write it as

F(g1) = (1/N) Σ_{g∈G} F(g g1 g⁻¹)
(1.82)

and thus

F(g1) = (1/N) Σ_{a,j,k} Σ_{g∈G} Σ_{l,m} c^a_{jk} [Da(g⁻¹)]jl [Da(g1)]lm [Da(g)]mk
(1.83)

But now we can do the sum over g explicitly using the orthogonality relation, (1.68):

F(g1) = Σ_{a,j,l} (1/na) c^a_{jj} [Da(g1)]ll
(1.84)

or

F(g1) = Σ_a f_a χa(g1)   where   f_a = (1/na) Σ_j c^a_{jj}
(1.85)
This was straightforward to get from the orthogonality relation, but it has an important consequence. The characters, χa(g), of the independent irreducible representations form a complete, orthonormal basis set for the functions that are constant on conjugacy classes. Thus the number of irreducible representations is equal to the number of conjugacy classes. We will use this frequently.

This also implies that there is an orthogonality condition for a sum over representations. To see this, label the conjugacy classes by an integer α, and let kα be the number of elements in the conjugacy class α. Then define the matrix V with matrix elements

V_{aα} = √(kα/N) χa(gα)
(1.86)

where gα is any element of the conjugacy class α. Then the orthogonality relation (1.79) can be written as V V† = 1. But V is a square matrix, so it is unitary, and thus we also have V†V = 1, or

Σ_a χa(gα)* χa(gβ) = (N/kα) δαβ
(1.87)
Consequences: Let D be any representation (not necessarily irreducible). In
its completely reduced form, it will contain each of the irreducible representations some integer number of times, m_a^D. We can compute m_a^D simply by using the orthogonality relation for the characters (1.79):

(1/N) Σ_{g∈G} χDa(g)* χD(g) = m_a^D
(1.88)

The point is that D is a direct sum

D = ⊕_a (Da ⊕ ··· ⊕ Da)   [m_a^D times]
(1.89)

For example, consider the regular representation. Its characters are

χR(e) = N   χR(g) = 0 for g ≠ e
(1.90)

Thus

m_a^R = χa(e) = na
(1.91)

Each irreducible representation appears in the regular representation a number of times equal to its dimension. Note that this is consistent with (1.74).
Note also that m_a^D is uniquely determined, independent of the basis.

Example: Back to S3 once more. Let's determine the characters without thinking about the 2-dimensional representation explicitly, but knowing the conjugacy classes, {e}, {a1, a2} and {a3, a4, a5}. It is easiest to start with the one representation we know every group has - the trivial representation, D0, for which D0(g) = 1 for all g. This representation has characters χ0(g) = 1. Note that this is properly normalized. It follows from the condition Σ_a na² = N that the other two representations have dimensions 1 and 2.

It is almost equally easy to write down the characters for the other 1-dimensional representation. In general, when there is an invariant subgroup H of G, there are representations of G that are constant on H, forming a representation of the factor group, G/H. In this case, the factor group is Z2, with nontrivial representation taking the value 1 on H = {e, a1, a2} and -1 on {a3, a4, a5}. We know that for the 2-dimensional representation, χ2(e) = n2 = 2, thus so
far the character table looks like
      {e}  {a1,a2}  {a3,a4,a5}
D0     1      1         1
D1     1      1        -1
D2     2      ?         ?
(1.92)
But then we can fill in the last two entries using orthogonality. We could actually have just used orthogonality without even knowing about the second representation, but using the Z2 makes the algebra trivial.
      {e}  {a1,a2}  {a3,a4,a5}
D0     1      1         1
D1     1      1        -1
D2     2     -1         0
(1.93)
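The completed table can be checked against both orthogonality conditions, (1.79) for the rows and (1.87) for the columns. This is a sketch, not from the text; it assumes Python and uses exact arithmetic with fractions.

```python
from fractions import Fraction

# The completed S3 character table (1.93): rows are D0, D1, D2; columns are
# the conjugacy classes {e}, {a1,a2}, {a3,a4,a5} with sizes 1, 2, 3.
chars = [
    [1, 1, 1],    # D0
    [1, 1, -1],   # D1
    [2, -1, 0],   # D2
]
k = [1, 2, 3]     # class sizes
N = 6             # order of the group

def row_inner(a, b):
    """(1.79): (1/N) sum_alpha k_alpha chi_a(g_alpha)* chi_b(g_alpha)."""
    return sum(Fraction(k[al], N) * chars[a][al] * chars[b][al]
               for al in range(3))

def col_inner(al, be):
    """(1.87): sum_a chi_a(g_alpha)* chi_a(g_beta)."""
    return sum(chars[a][al] * chars[a][be] for a in range(3))

rows_ok = all(row_inner(a, b) == (1 if a == b else 0)
              for a in range(3) for b in range(3))
cols_ok = all(col_inner(al, be) == (Fraction(N, k[al]) if al == be else 0)
              for al in range(3) for be in range(3))
```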
We can use the characters not just to find out how many irreducible representations appear in a particular reducible one, but actually to explicitly decompose the reducible representation into its irreducible components. It is easy to see that if D is an arbitrary representation, the sum

Pa = (na/N) Σ_{g∈G} χDa(g)* D(g)
(1.94)

is a projection operator onto the subspace that transforms under the representation a. To see this, note that if we set j = k and sum in the orthogonality relation (1.69), we find

(na/N) Σ_{g∈G} χDa(g)* [Db(g)]lm = δab δlm
(1.95)
Thus when D is written in block diagonal form, the sum in (1.95) gives 1 on the subspaces that transform like Da and 0 on all the rest - thus it is the projection operator as promised. The point, however, is that (1.94) gives us the projection operator in the original basis. We did not have to know how to transform to block diagonal form. An example may help to clarify this.

Example - S3 again
Here's a three dimensional representation of S3
D3(e) =  ( 1 0 0 )    D3(a1) = ( 0 0 1 )    D3(a2) = ( 0 1 0 )
         ( 0 1 0 )             ( 1 0 0 )             ( 0 0 1 )
         ( 0 0 1 )             ( 0 1 0 )             ( 1 0 0 )

D3(a3) = ( 0 1 0 )    D3(a4) = ( 1 0 0 )    D3(a5) = ( 0 0 1 )
         ( 1 0 0 )             ( 0 0 1 )             ( 0 1 0 )
         ( 0 0 1 )             ( 0 1 0 )             ( 1 0 0 )
(1.96)
24
CHAPTER I. FINITE GROUPS
More precisely, as usual when we write down a set of matrices to represent linear operators, these are matrices which have the same matrix elements, that is

⟨k| D3(g) |j⟩ = [D3(g)]kj
(1.97)

One could use a different symbol to represent the operators and the matrices, but it's always easy to figure out which is which from the context. The important point is that the way this acts on the states, |j⟩, is by matrix multiplication on the right, because we can insert a complete set of intermediate states

D3(g) |j⟩ = Σ_k |k⟩ ⟨k| D3(g) |j⟩ = Σ_k |k⟩ [D3(g)]kj
(1.98)
This particular representation is an important one because it is the defining representation for the group - it actually implements the permutations on the states. For example

D3(a1) |1⟩ = Σ_k |k⟩ [D3(a1)]k1 = |2⟩

D3(a1) |2⟩ = Σ_k |k⟩ [D3(a1)]k2 = |3⟩

D3(a1) |3⟩ = Σ_k |k⟩ [D3(a1)]k3 = |1⟩
(1.99)
thus this implements the cyclic transformation (1,2,3), or 1 → 2 → 3 → 1. Now if we construct the projection operators, we find

P0 = (1/3) ( 1 1 1 )
           ( 1 1 1 )
           ( 1 1 1 )
(1.100)

P1 = 0
(1.101)

P2 = (1/3) (  2 -1 -1 )
           ( -1  2 -1 )
           ( -1 -1  2 )
(1.102)

This makes good sense. P0 projects onto the invariant combination (|1⟩ + |2⟩ + |3⟩)/√3, which transforms trivially, while P2 projects onto the two dimensional subspace spanned by the differences of pairs of components, |1⟩ - |2⟩, etc, which transforms according to D2.

This construction shows that the representation D3 decomposes into a direct sum of the irreducible representations,

D3 = D0 ⊕ D2
(1.103)
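The whole decomposition can be reproduced by machine. This sketch is not from the text; it assumes Python with numpy and encodes the permutation matrices of (1.96) together with the S3 characters.

```python
import numpy as np

# The defining 3-dimensional representation of S3 as permutation matrices,
# in the order e, a1, a2, a3, a4, a5, and the characters of the three
# irreducible representations evaluated on those six elements.
D3 = [np.array(m, dtype=float) for m in (
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[0, 0, 1], [1, 0, 0], [0, 1, 0]],
    [[0, 1, 0], [0, 0, 1], [1, 0, 0]],
    [[0, 1, 0], [1, 0, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 1], [0, 1, 0], [1, 0, 0]],
)]
chi = {0: [1, 1, 1, 1, 1, 1], 1: [1, 1, 1, -1, -1, -1], 2: [2, -1, -1, 0, 0, 0]}
dim = {0: 1, 1: 1, 2: 2}
N = 6

# Projection operators (1.94): P_a = (n_a/N) sum_g chi_a(g)* D(g).
P = {a: (dim[a] / N) * sum(c * D for c, D in zip(chi[a], D3)) for a in chi}

# Tr P_a = (dimension of the subspace transforming like D_a).
ranks = {a: round(np.trace(P[a])) for a in P}
```

P0 has rank 1 (the symmetric combination), P1 vanishes, and P2 has rank 2, reproducing D3 = D0 ⊕ D2.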
1.14 Eigenstates
In quantum mechanics, we are often interested in the eigenstates of an invariant hermitian operator, in particular the Hamiltonian, H. We can always take these eigenstates to transform according to irreducible representations of the symmetry group. To prove this, note that we can divide up the Hilbert space into subspaces with different eigenvalues of H. Each subspace furnishes a representation of the symmetry group because D (g), the group represen-
tation on the full Hilbert space, cannot change the H eigenvalue (because
[D(g), H] = 0). But then we can completely reduce the representation in
each subspace. A related fact is that if some irreducible representation appears only once
in the Hilbert space, then the states in that representation must be eigenstates
of H (and any other invariant operator). This is true because H |a, j, x⟩ must be in the same irreducible representation, thus

H |a, j, x⟩ = Σ_y c_y |a, j, y⟩
(1.104)

and if x and y take only one value, then |a, j, x⟩ is an eigenstate.
This is sufficiently important to say again in the form of a theorem:
Theorem 1.6 If a hermitian operator, H, commutes with all the elements, D(g), of a representation of the group G, then you can choose the eigenstates of H to transform according to irreducible representations of G. If an irreducible representation appears only once in the Hilbert space, every state in the irreducible representation is an eigenstate of H with the same eigenvalue.
Notice that for Abelian groups, this procedure of choosing the H eigenstates to transform under irreducible representations is analogous to simultaneously diagonalizing H and D(g). For example, for the group Z2 associated with parity, it is the statement that we can always choose the H eigenstates to be either symmetric or antisymmetric.

In the case of parity, the linear operator representing parity is hermitian, so we know that it can be diagonalized. But in general, while we have shown that operators representing finite group elements can be chosen to be unitary, they will not be hermitian. Nevertheless, we can show that for an Abelian group that commutes with H, the group elements can be simultaneously diagonalized along with H. The reason is the following theorem:
Theorem 1.7 All of the irreducible representations of a finite Abelian group are 1-dimensional.
One proof of this follows from our discussion of conjugacy classes and from (1.74). For an Abelian group, conjugation does nothing, because g g' g⁻¹ = g' for all g and g'. Therefore, each element is in a conjugacy class all by itself. Because there is one irreducible representation for each conjugacy class, the number of irreducible representations is equal to the order of the group. Then the only way to satisfy (1.74) is to have all of the na's equal to one. This proves the theorem, and it means that decomposing a representation of an Abelian group into its irreducible representations amounts to just diagonalizing all the representation matrices for all the group elements.
For a non-Abelian group, we cannot simultaneously diagonalize all of the D(g)s, but the procedure of completely reducing the representation on each subspace of constant H is the next best thing.

A classical problem which is quite analogous to the problem of diagonalizing the Hamiltonian in quantum mechanics is the problem of finding the normal modes of small oscillations of a mechanical system about a point of stable equilibrium. Here, the square of the angular frequency is the eigenvalue of the M⁻¹K matrix and the normal modes are the eigenvectors of M⁻¹K. In the next three sections, we will work out an example.
1.15 Tensor products
We have seen that we can take reducible representations apart into direct sums of smaller representations. We can also put representations together into larger representations. Suppose that D1 is an m dimensional representation acting on a space with basis vectors |j⟩ for j = 1 to m and D2 is an n dimensional representation acting on a space with basis vectors |x⟩ for x = 1 to n. We can make an m·n dimensional space called the tensor product space by taking basis vectors labeled by both j and x in an ordered pair |j, x⟩. Then when j goes from 1 to m and x goes from 1 to n, the ordered pair (j, x) runs over m·n different combinations. On this large space, we can define a new representation called the tensor product representation D1 ⊗ D2 by multiplying the two smaller representations. More precisely, the matrix elements of D_{D1⊗D2}(g) are products of those of D1(g) and D2(g):

[D_{D1⊗D2}(g)]_{jx,ky} = [D1(g)]jk [D2(g)]xy
(1.105)

It is easy to see that this defines a representation of G. In general, however, it will not be an irreducible representation. One of our favorite pastimes in what follows will be to decompose reducible tensor product representations into irreducible representations.
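In the ordered basis |j, x⟩, equation (1.105) is exactly the Kronecker product of the two matrices. This sketch is not from the text; it assumes Python with numpy, and uses the two S3 representations D3 and D2 that appear later in the chapter to check that the tensor product is again a representation (the product of any two of the six matrices lands back in the set).

```python
import numpy as np

# The 3-dimensional defining representation D3 and the 2-dimensional
# representation D2 of S3, in the element order e, a1, a2, a3, a4, a5.
s = np.sqrt(3) / 2
D3 = [np.array(m, dtype=float) for m in (
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[0, 0, 1], [1, 0, 0], [0, 1, 0]],
    [[0, 1, 0], [0, 0, 1], [1, 0, 0]], [[0, 1, 0], [1, 0, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 0, 1], [0, 1, 0]], [[0, 0, 1], [0, 1, 0], [1, 0, 0]])]
D2 = [np.array(m) for m in (
    [[1.0, 0], [0, 1]], [[-0.5, -s], [s, -0.5]], [[-0.5, s], [-s, -0.5]],
    [[-1.0, 0], [0, 1]], [[0.5, s], [s, -0.5]], [[0.5, -s], [-s, -0.5]])]

# (1.105) in the basis |j,x> ordered (1,1),(1,2),(2,1),...: np.kron.
D6 = [np.kron(A, B) for A, B in zip(D3, D2)]

# Closure: D6(g1) D6(g2) must equal D6(g1 g2) for some group element g1 g2.
is_representation = all(
    any(np.allclose(X @ Y, Z) for Z in D6) for X in D6 for Y in D6)
```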
1.16 Example of tensor products
Consider the following physics problem. Three blocks are connected by springs in a triangle as shown
(1.106)
Suppose that these are free to slide on a frictionless surface. What can we say about the normal modes of this system? The point is that there is an S3 symmetry of the system, and we can learn a lot about the system by using the symmetry and applying theorem 1.6. The system has 6 degrees of freedom, described by the x and y coordinates of the three blocks:
( x1 y1 x2 y2 x3 y3 )
(1.107)
This has the structure of a tensor product - the 6 dimensional space is a product of a 3 dimensional space of the blocks, and the 2 dimensional space of the x and y coordinates. We can think of these coordinates as having two indices. It is three two-dimensional vectors r_j; each of the vector indices has two components. So we can write the components as r_{jμ} where j labels the mass and runs from 1 to 3 and μ labels the x or y component and runs from 1 to 2, with the connection

( x1 y1 x2 y2 x3 y3 ) = ( r11 r12 r21 r22 r31 r32 )
(1.108)
The 3 dimensional space transforms under S3 by the representation D3. The 2 dimensional space transforms by the representation D2 below:

D2(e) =  (   1      0   )    D2(a1) = ( -1/2  -√3/2 )    D2(a2) = ( -1/2   √3/2 )
         (   0      1   )             ( √3/2   -1/2 )             ( -√3/2  -1/2 )

D2(a3) = (  -1      0   )    D2(a4) = (  1/2   √3/2 )    D2(a5) = (  1/2  -√3/2 )
         (   0      1   )             ( √3/2   -1/2 )             ( -√3/2  -1/2 )
(1.109)
28
CHAPTER I. FINITE GROUPS
This is the same as (1.21). Then, using (1.105), the 6 dimensional representation of the coordinates is simply the product of these two representations:

[D6(g)]_{jμ,kν} = [D3(g)]jk [D2(g)]μν
(1.110)
Thus, for example,

         (    0      0      0      0    -1/2  -√3/2 )
         (    0      0      0      0    √3/2   -1/2 )
D6(a1) = ( -1/2  -√3/2      0      0      0      0  )
         ( √3/2   -1/2      0      0      0      0  )
         (    0      0   -1/2  -√3/2      0      0  )
         (    0      0   √3/2   -1/2      0      0  )
(1.111)
This has the structure of 3 copies of D2 (a1) in place of the 1's in D3 (a1).
The other generators are similar in structure.
Because the system has the S3 symmetry, the normal modes of the system must transform under definite irreducible representations of the symmetry. Thus if we construct the projectors onto these representations, we will have gone some way towards finding the normal modes. In particular, if an irreducible representation appears only once, it must be a normal mode by theorem 1.6. If a representation appears more than once, then we need some additional information to determine the modes.
We can easily determine how many times each irreducible representation appears in D6 by finding the characters of D6 and using the orthogonality relations. To find the characters of D6, we use an important general result. The character of the tensor product of two representations is the product of the characters of the factors. This follows immediately from the definition of the tensor product and the trace:

χ_{D1⊗D2}(g) = χ_{D1}(g) χ_{D2}(g)
(1.112)

So in this case,

χ6(g) = Σ_{j,μ} [D6(g)]_{jμ,jμ} = Σ_j [D3(g)]jj Σ_μ [D2(g)]μμ = χ3(g) χ2(g)
(1.113)
so that the product is as shown in the table below:

      {e}  {a1,a2}  {a3,a4,a5}
D3     3      0         1
D2     2     -1         0
D6     6      0         0
(1.114)
This is the same as the characters of the regular representation, thus this representation is equivalent to the regular representation, and contains D0 and D1 once and D2 twice.
Note that (1.113) is an example of a simple but important general relation, which we might as well dignify by calling it a theorem -

Theorem 1.8 The characters of a tensor product representation are the products of the characters of the factors.

With these tools, we can use group theory to find the normal modes of the system.
1.17 * Finding the normal modes
The projectors onto D0 and D1 will be 1 dimensional. P0 is

P0 = (1/6) Σ_{g∈G} χ0(g)* D6(g)

   (   1/4    √3/12   -1/4    √3/12    0   -√3/6 )
   (  √3/12    1/12  -√3/12    1/12    0    -1/6 )
 = (  -1/4   -√3/12    1/4   -√3/12    0    √3/6 )
   (  √3/12    1/12  -√3/12    1/12    0    -1/6 )
   (    0       0       0       0      0      0  )
   ( -√3/6    -1/6    √3/6    -1/6     0     1/3 )
(1.115)

which projects onto the single normalized vector

(1/√3) ( -√3/2  -1/2  √3/2  -1/2  0  1 )
(1.116)
corresponding to the motion
(1.117)
the so-called "breathing mode" in which the triangle grows and shrinks while retaining its shape.
P1 is

P1 = (1/6) Σ_{g∈G} χ1(g)* D6(g)

   (   1/12  -√3/12    1/12   √3/12   -1/6    0 )
   ( -√3/12    1/4   -√3/12   -1/4    √3/6    0 )
 = (   1/12  -√3/12    1/12   √3/12   -1/6    0 )
   (  √3/12   -1/4    √3/12    1/4   -√3/6    0 )
   (  -1/6    √3/6    -1/6   -√3/6    1/3     0 )
   (    0       0       0       0      0      0 )
(1.118)

which projects onto the single normalized vector

(1/√3) ( -1/2  √3/2  -1/2  -√3/2  1  0 )
(1.119)
corresponding to the motion
(1.120)
the mode in which the triangle rotates - this is a normal mode with zero frequency because there is no restoring force.

Notice, again, that we found these two normal modes without putting in any physics at all except the symmetry!
Finally, P2 is

P2 = (1/3) Σ_{g∈G} χ2(g)* D6(g)

   (   2/3     0      1/6   -√3/6    1/6    √3/6 )
   (    0     2/3    √3/6    1/6   -√3/6    1/6  )
 = (   1/6   √3/6     2/3     0      1/6   -√3/6 )
   ( -√3/6    1/6      0     2/3    √3/6    1/6  )
   (   1/6  -√3/6     1/6   √3/6     2/3     0   )
   (  √3/6    1/6   -√3/6    1/6      0     2/3  )
(1.121)
As expected, this is a rank 4 projection operator (Tr P2 = 4). We need some dynamical information. Fortunately, two modes are easy to get - translations of the whole triangle.
Translations in the x direction, for example, are projected by

     ( 1/3   0   1/3   0   1/3   0 )
     (  0    0    0    0    0    0 )
Tx = ( 1/3   0   1/3   0   1/3   0 )
     (  0    0    0    0    0    0 )
     ( 1/3   0   1/3   0   1/3   0 )
     (  0    0    0    0    0    0 )
(1.122)
32
CHAPTER 1. FINITE GROUPS
and those in the y direction by

     (  0    0    0    0    0    0  )
     (  0   1/3   0   1/3   0   1/3 )
Ty = (  0    0    0    0    0    0  )
     (  0   1/3   0   1/3   0   1/3 )
     (  0    0    0    0    0    0  )
     (  0   1/3   0   1/3   0   1/3 )
(1.123)

So the nontrivial modes are projected by
               (   1/3     0    -1/6   -√3/6   -1/6    √3/6 )
               (    0     1/3   √3/6    -1/6  -√3/6   -1/6  )
P2 - Tx - Ty = (  -1/6   √3/6    1/3      0    -1/6   -√3/6 )
               ( -√3/6   -1/6     0      1/3   √3/6   -1/6  )
               (  -1/6  -√3/6   -1/6    √3/6    1/3      0  )
               (  √3/6   -1/6  -√3/6    -1/6     0      1/3 )
(1.124)
To see what the corresponding modes look like, act with this on the vector ( 0 0 0 0 0 1 ) to get

( √3/6  -1/6  -√3/6  -1/6  0  1/3 )
(1.125)
corresponding to the motion
(1.126)
Then rotating by 2π/3 gives a linearly independent mode.
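The whole projector computation of this section can be reproduced numerically. This sketch is not from the text; it assumes Python with numpy, builds D6 as a Kronecker product from the matrices of (1.96) and (1.109), and checks the traces against the decomposition D6 = D0 ⊕ D1 ⊕ 2 D2.

```python
import numpy as np

# D3 (permutation matrices) and D2 for S3, in the order e, a1, a2, a3, a4, a5.
s = np.sqrt(3) / 2
D3 = [np.array(m, dtype=float) for m in (
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[0, 0, 1], [1, 0, 0], [0, 1, 0]],
    [[0, 1, 0], [0, 0, 1], [1, 0, 0]], [[0, 1, 0], [1, 0, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 0, 1], [0, 1, 0]], [[0, 0, 1], [0, 1, 0], [1, 0, 0]])]
D2 = [np.array(m) for m in (
    [[1.0, 0], [0, 1]], [[-0.5, -s], [s, -0.5]], [[-0.5, s], [-s, -0.5]],
    [[-1.0, 0], [0, 1]], [[0.5, s], [s, -0.5]], [[0.5, -s], [-s, -0.5]])]
D6 = [np.kron(A, B) for A, B in zip(D3, D2)]   # (1.110)

chi = {0: [1, 1, 1, 1, 1, 1], 1: [1, 1, 1, -1, -1, -1], 2: [2, -1, -1, 0, 0, 0]}
dim = {0: 1, 1: 1, 2: 2}

# (1.94): P_a = (n_a/6) sum_g chi_a(g)* D6(g)
P = {a: (dim[a] / 6) * sum(c * D for c, D in zip(chi[a], D6)) for a in chi}

traces = {a: round(np.trace(P[a])) for a in P}      # expect 1, 1, 4
complete = np.allclose(P[0] + P[1] + P[2], np.eye(6))
```

Since D6 is equivalent to the regular representation, the three projectors are complete: P0 + P1 + P2 = 1.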
1.18 * Symmetries of 2n+1-gons
This is a nice simple example of a transformation group for which we can work out the characters (and actually the whole set of irreducible representations) easily. Consider a regular polygon with 2n + 1 vertices, like the 7-gon shown below.
(1.127)
The group of symmetries of the 2n+1-gon consists of the identity, the 2n

rotations by ±2πj/(2n+1) for j = 1 to n
(1.128)
and the 2n+1 reflections about lines through the center and a vertex, as shown below:
(1.129)
(1.130)
Thus the order of the group of symmetries is N = 2 × (2n + 1). There are n + 2 conjugacy classes:

1 - the identity, e;

2 - the 2n+1 reflections;

3 to n+2 - the rotations by ±2πj/(2n+1) for j = 1 to n - each value of j is a separate conjugacy class.
The way this works is that the reflections are all in the same conjugacy class because by conjugating with rotations, you can get from any one reflection to any other. The rotations are unchanged by conjugation by rotations, but a conjugation by a reflection changes the sign of the rotation, so there is
a ± pair in each conjugacy class.
Furthermore, the n conjugacy classes of rotations are equivalent under cyclic permutations and relabeling of the vertices, as shown below:
(1.131)
(l.132)
The characters look like

     e   reflections        j=1                  j=2           ···         j=n
     1        1              1                    1            ···          1
     1       -1              1                    1            ···          1
     2        0       2cos(2πm/(2n+1))    2cos(4πm/(2n+1))     ···   2cos(2nπm/(2n+1))
(1.133)
In the last line, the different values of m give the characters of the n different
2-dimensional representations.
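The character table (1.133) can be checked against the weighted orthogonality relation of section 1.13. This is a sketch, not from the text; it assumes Python and takes the 7-gon (n = 3, N = 14) as a concrete case, encoding the representations in an ad hoc way (0 for the trivial one, 1 for the sign-like one, pairs (2, m) for the 2-dimensional ones).

```python
import math

# Conjugacy classes of the 7-gon symmetry group: {e}, {reflections},
# and {rotations by +-2 pi j/7} for j = 1, 2, 3, with sizes 1, 7, 2, 2, 2.
n = 3
N = 2 * (2 * n + 1)
sizes = [1, 2 * n + 1] + [2] * n

def chi(rep, cls):
    if rep == 0:                          # trivial representation
        return 1.0
    if rep == 1:                          # -1 on the reflections
        return -1.0 if cls == 1 else 1.0
    m = rep[1]                            # 2-dimensional representation m
    if cls == 0:
        return 2.0
    if cls == 1:
        return 0.0
    j = cls - 1
    return 2 * math.cos(2 * math.pi * m * j / (2 * n + 1))

reps = [0, 1] + [(2, m) for m in range(1, n + 1)]

def inner(r1, r2):
    """(1/N) sum over classes of k_alpha chi_1* chi_2, as in (1.79)."""
    return sum(sz * chi(r1, c) * chi(r2, c) for c, sz in enumerate(sizes)) / N

orthonormal = all(
    abs(inner(r1, r2) - (1.0 if i == j else 0.0)) < 1e-9
    for i, r1 in enumerate(reps) for j, r2 in enumerate(reps))
dims_ok = 1 + 1 + 4 * n == N              # (1.74): 1 + 1 + n*2^2 = 2(2n+1)
```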
1.19 Permutation group on n objects
Any element of the permutation group on n objects, called Sn, can be written in terms of cycles, where a cycle is a cyclic permutation of a subset. We will use a notation that makes use of this, where each cycle is written as a set of numbers in parentheses, indicating the set of things that are cyclicly permuted. For example:

(1) means x1 → x1
(1372) means x1 → x3 → x7 → x2 → x1

Each element of Sn involves each integer from 1 to n in exactly one cycle.

Examples: The identity element looks like e = (1)(2)···(n) - n 1-cycles - there is only one of these.
An interchange of two elements looks like (12)(3)···(n) - a 2-cycle and n - 2 1-cycles - there are n(n - 1)/2 of these.

An arbitrary element has kj j-cycles, where

Σ_{j=1}^{n} j kj = n
(1.134)

For example, the permutation (123)(456)(78)(9) has two 3-cycles, one 2-cycle and one 1-cycle, so k1 = k2 = 1 and k3 = 2.
There is a simple (but reducible) n dimensional representation of Sn called the defining representation where the "objects" being permuted are just the basis vectors of an n dimensional vector space,

|1⟩ , |2⟩ , ··· , |n⟩
(1.135)

If the permutation takes xj to xk, the corresponding representation operator D takes |j⟩ to |k⟩, so that

D |j⟩ = |k⟩
(1.136)

and thus

[D]_{lj} = δ_{lk}
(1.137)
Each matrix in the representation has a single 1 in each row and column.
1.20 Conjugacy classes
The conjugacy classes are just the cycle structure, that is, they can be labeled by the integers kj. For example, all interchanges are in the same conjugacy class - it is enough to check that the inner automorphism g g1 g⁻¹ doesn't change the cycle structure of g1 when g is an interchange, because we can build up any permutation from interchanges. Let us see how this works in some examples. In particular, we will see that conjugating an arbitrary permutation by the interchange (12)(3)··· just interchanges 1 and 2 without changing the cycle structure.

Examples - (12)(3)(4)·(1)(23)(4)·(12)(3)(4) (note that an interchange
is its own inverse)

1234 →(12)(3)(4)→ 2134 →(1)(23)(4)→ 2314 →(12)(3)(4)→ 3214

1234 →(2)(13)(4)→ 3214
(1.138)

(12)(3)(4)·(1)(234)·(12)(3)(4)

1234 →(12)(3)(4)→ 2134 →(1)(234)→ 2341 →(12)(3)(4)→ 3241

1234 →(2)(134)→ 3241
(1.139)
If 1 and 2 are in different cycles, they just get interchanged by conjugation by (12), as promised.
The same thing happens when 1 and 2 are in the same cycle. For example

1234 →(12)(3)(4)→ 2134 →(123)(4)→ 1324 →(12)(3)(4)→ 3124

1234 →(213)(4)→ 3124
(1.140)
Again, in the same cycle this time, 1 and 2 just get interchanged. Another way of seeing this is to notice that the conjugation is analogous to a similarity transformation. In fact, in the defining, n dimensional representation of (1.135) the conjugation by the interchange (12) is just a change of basis that switches |1⟩ ↔ |2⟩. Then it is clear that conjugation
does not change the cycle structure, but simply interchanges what the permutation does to 1 and 2. Since we can put interchanges together to form an arbitrary permutation, and since by repeated conjugations by interchanges, we can get from any ordering of the integers in the given cycle structure to any other, the conjugacy classes must consist of all possible permutations with a particular cycle structure.
Now let us count the number of group elements in each conjugacy class.
Suppose a conjugacy class consists of permutations of the form of k1 1-cycles, k2 2-cycles, etc, satisfying (1.134). The number of different permutations in the conjugacy class is

n! / ∏_j (j^{kj} kj!)
(1.141)

because each permutation of the numbers 1 to n gives a permutation in the class, but cyclic order doesn't matter within a cycle -

(123) is the same as (231)
(1.142)

and order doesn't matter at all between cycles of the same length -

(12)(34) is the same as (34)(12)
(1.143)
1.21 Young tableaux
It is useful to represent each j-cycle by a column of boxes of length j, top-justified and arranged in order of decreasing j as you go to the right. The total number of boxes is n. Here is an example:

[ ][ ][ ][ ]
     (1.144)
is four 1-cycles in S4 - that is the identity element - always a conjugacy class all by itself. Here's another:
[ ][ ][ ]
[ ][ ]
[ ]
[ ]
     (1.145)

is a 4-cycle, a 3-cycle and a 1-cycle in S8. These collections of boxes are called Young tableaux. Each different tableau represents a different conjugacy class, and therefore the tableaux are in one-to-one correspondence with the irreducible representations.
1.22 Example - our old friend S3
The conjugacy classes are
[ ][ ][ ]     [ ][ ]     [ ]
              [ ]        [ ]
                         [ ]
     (1.146)

with numbers of elements

3!/3! = 1     3!/2 = 3     3!/3 = 2
     (1.147)
1.23 Another example - S4
[ ][ ][ ][ ]     [ ][ ][ ]     [ ][ ]     [ ][ ]     [ ]
                 [ ]           [ ][ ]     [ ]        [ ]
                                          [ ]        [ ]
                                                     [ ]
     (1.148)

with numbers of elements

4!/4! = 1     4!/4 = 6     4!/8 = 3     4!/3 = 8     4!/4 = 6
     (1.149)
The characters of S4 look like this (with the conjugacy classes which label the columns in the same order as in (1.148)):

      conjugacy classes
   1     1     1     1     1
   3     1    -1     0    -1
   2     0     2    -1     0
   3    -1    -1     0     1
   1    -1     1     1    -1
     (1.150)
The first row represents the trivial representation.
1.24 * Young tableaux and representations of Sn
We have seen that a Young tableau with n boxes is associated with an irreducible representation of Sn. We can actually use the tableau to explicitly construct the irreducible representation by identifying an appropriate subspace of the regular representation of Sn.
To see what the irreducible representation is, we begin by putting the integers from 1 to n in the boxes of the tableau in all possible ways. There are n! ways to do this. We then identify each assignment of integers 1 to n to the boxes with a state in the regular representation of Sn by defining a standard ordering, say from left to right and then top down (like reading words on a page) to translate from integers in the boxes to a state associated with a particular permutation. So for example
(a tableau whose boxes contain, in the standard order, 6 5 3 2 1 7 4) → |6532174⟩     (1.151)

where |6532174⟩ is the state corresponding to the permutation

1234567 → 6532174     (1.152)
Now each of the n! assignments of integers to the boxes of the tableau describes one of the n! states of the regular representation.
Next, for a particular tableau, symmetrize the corresponding state in the
numbers in each row, and antisymmetrize in the numbers in each column. For
example
[1][2] → |12⟩ + |21⟩     (1.153)

and

[1][2]
[3]     → |123⟩ + |213⟩ − |321⟩ − |231⟩     (1.154)
Now the set of states constructed in this way spans some subspace of the regular representation. We can construct the states explicitly, and we know how permutations act on these states. The subspace constructed in this way is a representation of Sn, because a permutation just corresponds to starting with a different assignment of numbers to the tableau, so acting with the permutation on any state in the subspace gives another state in the subspace. In fact, this representation is irreducible, and is the irreducible representation we say is associated with the Young tableau.
Consider the example of S3. The tableau

[ ][ ][ ]     (1.155)

gives completely symmetrized states, and so is associated with a one dimensional subspace that transforms under the trivial representation. The tableau

[ ]
[ ]
[ ]     (1.156)
gives completely antisymmetrized states, and so, again is associated with a one dimensional subspace, this time transforming under the representation in which interchanges are represented by -1. Finally
[ ][ ]
[ ]     (1.157)

gives the following states:

[1][2]
[3]     → |123⟩ + |213⟩ − |321⟩ − |231⟩     (1.158)

[3][2]
[1]     → |321⟩ + |231⟩ − |123⟩ − |213⟩     (1.159)

[2][3]
[1]     → |231⟩ + |321⟩ − |132⟩ − |312⟩     (1.160)

[1][3]
[2]     → |132⟩ + |312⟩ − |231⟩ − |321⟩     (1.161)

[3][1]
[2]     → |312⟩ + |132⟩ − |213⟩ − |123⟩     (1.162)

[2][1]
[3]     → |213⟩ + |123⟩ − |312⟩ − |132⟩     (1.163)
Note that interchanging two numbers in the same column of a tableau just
changes the sign of the state. This is generally true. Furthermore, you can
see explicitly that the sum of three states related by cyclic permutations van-
ishes. Thus the subspace is two dimensional and transforms under the two
dimensional irreducible representation of S3.
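These statements can be verified mechanically. In the sketch below (helper names are mine), each ket like |123⟩ is a basis vector of the six dimensional regular representation of S3; building the six symmetrized/antisymmetrized states and row reducing shows that they span exactly a two dimensional subspace, and that each cyclic triple sums to zero:

```python
# Sketch: the six Young-symmetrized states of the mixed S3 tableau span a
# two dimensional subspace of the regular representation.
from itertools import permutations
from fractions import Fraction

perms = [''.join(p) for p in permutations('123')]  # basis of the regular rep

def state(a, b, c):
    # state for tableau entries  a b / c :
    # symmetrize a<->b along the row, antisymmetrize a<->c down the column
    v = {p: 0 for p in perms}
    for word in (a + b + c, b + a + c):
        v[word] += 1
        swapped = word.replace(a, 'x').replace(c, a).replace('x', c)
        v[swapped] -= 1
    return [Fraction(v[p]) for p in perms]

def rank(rows):
    rows, r = [row[:] for row in rows], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

six = [state(a, b, c) for (a, b, c) in
       [('1', '2', '3'), ('3', '2', '1'), ('2', '3', '1'),
        ('1', '3', '2'), ('3', '1', '2'), ('2', '1', '3')]]
print(rank(six))  # 2
```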
It turns out that the dimension of the representation constructed in this
way is
n!/H     (1.164)
where the quantity H is the "hooks" factor for the Young tableau, computed
as follows. A hook is a line passing vertically up through the bottom of some column of boxes, making a right-hand turn in some box and passing out through the row of boxes. There is one hook for each box. Call the number of boxes the hook passes through h. Then H is the product of the hs for all hooks. We will come back to hooks when we discuss the application of Young tableaux to the representations of SU(N) in chapter XIII.
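As a check of the hooks rule, the mixed tableau of S3 has hooks of lengths 3, 1, 1, so H = 3 and (1.164) gives 3!/3 = 2, the dimension found above. A sketch of the computation (the tableau is specified by its row lengths; names are mine):

```python
# Sketch: hook lengths and the dimension formula n!/H.
from math import factorial

def hook_product(rows):
    # rows: row lengths of the tableau, top to bottom, e.g. [2, 1]
    H = 1
    for i, width in enumerate(rows):
        for j in range(width):
            arm = width - j - 1                          # boxes to the right
            leg = sum(1 for w in rows[i + 1:] if w > j)  # boxes below
            H *= arm + leg + 1
    return H

def dim_Sn(rows):
    return factorial(sum(rows)) // hook_product(rows)

print(dim_Sn([2, 1]))                  # 2  (the mixed tableau of S3)
print(dim_Sn([3]), dim_Sn([1, 1, 1]))  # 1 1  (trivial and sign reps)
```

The dimensions 1, 3, 2, 3, 1 of the S4 representations in the character table (1.150) come out of the same function applied to the five shapes with four boxes.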
This procedure for constructing the irreducible representations of Sn is
entirely mechanical (if somewhat tedious) and can be used to construct all the
representations of Sn from the Young tableaux with n boxes.
We could say much more about finite groups and their representations, but our primary subject is continuous groups, so we will leave finite groups for now. We will see, however, that the representations of the permutation groups play an important role in the representations of continuous groups. So
we will come back to Sn now and again.
Problems
1.A. Find the multiplication table for a group with three elements and prove that it is unique.
1.B. Find all essentially different possible multiplication tables for groups with four elements (which cannot be related by renaming elements).
1.C. Show that the representation (1.135) of the permutation group is reducible.
1.D. Suppose that D1 and D2 are equivalent, irreducible representations of a finite group G, such that

D2(g) = S D1(g) S⁻¹   ∀ g ∈ G
What can you say about an operator A that satisfies
1.E. Find the group of all the discrete rotations that leave a regular tetrahedron invariant by labeling the four vertices and considering the rotations as permutations on the four vertices. This defines a four dimensional representation of a group. Find the conjugacy classes and the characters of the irreducible representations of this group.
*1.F. Analyze the normal modes of the system of four blocks sliding on a frictionless plane, connected by springs as shown below:
just as we did for the triangle, but using the 8-element symmetry group of the square. Assume that the springs are rigidly attached to the masses (rather than pivoted, for example), so that the square has some rigidity.
Chapter 2
Lie Groups
Suppose our group elements g ∈ G depend smoothly on a set of continuous
parameters -
g(α)     (2.1)
What we mean by smooth is that there is some notion of closeness on the group such that if two elements are "close together" in the space of the group elements, the parameters that describe them are also close together.
2.1 Generators
Since the identity is an important element in the group, it is useful to parameterize the elements (at least those close to the identity element) in such a way that α = 0 corresponds to the identity element. Thus we assume that in some neighborhood of the identity, the group elements can be described by a function of N real parameters, αa for a = 1 to N, such that
g(α)|α=0 = e     (2.2)
Then if we find a representation of the group, the linear operators of the representation will be parameterized the same way, and
D(α)|α=0 = 1     (2.3)
Then in some neighborhood of the identity element, we can Taylor expand D(α), and if we are close enough, just keep the first term:
D(dα) = 1 + i dαa Xa + ···     (2.4)
where we have called the parameter dα to remind you that it is infinitesimal. In (2.4), a sum over repeated indices is understood (the "Einstein summation convention") and
Xa = −i ∂/∂αa D(α)|α=0     (2.5)
The Xa for a = 1 to N are called the generators of the group. If the parameterization is parsimonious (that is - all the parameters are actually needed to distinguish different group elements), the Xa will be independent.
The i is included in the definition (2.5) so that if the representation is unitary,
the Xa will be hermitian operators.
Sophus Lie showed how the generators can actually be defined in the
abstract group without mentioning representations at all. As a result of his
work, groups of this kind are called Lie groups. I am not going to talk about
them this way because I am more interested in representations than in groups,
but it is a beautiful theoretical construction that you may want to look up if
you haven't seen it.
As we go away from the identity, there is enormous freedom to param-
eterize the group elements in different ways, but we may as well choose our
parameterization so that the group multiplication law and thus the multipli-
cation law for the representation operators in the Hilbert space looks nice.
In particular, we can go away from the identity in some fixed direction by
simply raising an infinitesimal group element
D(dα) = 1 + i dαa Xa     (2.6)
to some large power. Because of the group property, this always gives another group element. This suggests defining the representation of the group elements for finite α as

e^{iαaXa} = lim_{k→∞} (1 + i αa Xa / k)^k     (2.7)
In the limit, this must go to the representation of a group element because 1 + i αa Xa / k becomes the representation of a group element in (2.4) as k becomes large. This defines a particular parameterization of the representations (sometimes called the exponential parameterization), and thus of the
becomes large. This defines a particular parameterization of the representations (sometimes called the exponential parameterization), and thus of the
group multiplication law itself. In particular, this means that we can write the group elements (at least in some neighborhood of e) in terms of the generators. That's nice, because unlike the group elements, the generators form a
vector space. They can be added together and multiplied by real numbers. In
fact, we will often use the term generator to refer to any element in the real linear space spanned by the Xas.
2.2 Lie algebras
Now in any particular direction, the group multiplication law is uncomplicated. There is a one parameter family of group elements of the form
U(λ) = e^{i λ αa Xa}     (2.8)
and the group multiplication law is simply
U(λ1) U(λ2) = U(λ1 + λ2)     (2.9)
However, if we multiply group elements generated by two different linear
combinations of generators, things are not so easy. In general,
e^{iαaXa} e^{iβaXa} ≠ e^{i(αa+βa)Xa}     (2.10)
On the other hand, because the exponentials form a representation of the group (at least if we are close to the identity), it must be true that the product is some exponential of a generator,
e^{iαaXa} e^{iβaXa} = e^{iδaXa}     (2.11)
for some δ. And because everything is smooth, we can find δa by expanding both sides and equating appropriate powers of α and β. When we do this, something interesting happens. We find that it only works if the generators form an algebra under commutation (or a commutator algebra). To see this, let's actually do it to leading nontrivial order. We can write
e^{iδaXa} = e^{iαaXa} e^{iβbXb}     (2.12)
I will now expand this, keeping terms up to second order in the parameters α and β, using the Taylor expansion of ln(1 + K), where

K = e^{iαaXa} e^{iβbXb} − 1
  = (1 + i αa Xa − (1/2)(αa Xa)² + ···) (1 + i βb Xb − (1/2)(βb Xb)² + ···) − 1
  = i αa Xa + i βa Xa − αa Xa βb Xb − (1/2)(αa Xa)² − (1/2)(βa Xa)² + ···     (2.13)
This gives

i δa Xa = K − (1/2) K² + ···
        = i αa Xa + i βa Xa − αa Xa βb Xb
          − (1/2)(αa Xa)² − (1/2)(βa Xa)²
          + (1/2)(αa Xa + βa Xa)² + ···     (2.14)
Now here is the point. The higher order terms in (2.14) are trying to cancel. If the Xs were numbers, they would cancel, because the product of the exponentials is the exponential of the sum of the exponents. They fail to cancel only because the Xs are linear operators, and don't commute with one another. Thus the extra terms beyond i αa Xa + i βa Xa in (2.14) are proportional to the commutator. Sure enough, explicit calculation in (2.14) gives
i δa Xa = K − (1/2) K² + ···
        = i αa Xa + i βa Xa − (1/2) [αa Xa, βb Xb] + ···     (2.15)
We obtained (2.15) using only the group property and smoothness, which allowed us to use the Taylor expansion. From (2.15) we can calculate δa, again in an expansion in α and β. We conclude that

[αa Xa, βb Xb] = i γc Xc     (2.16)

where the i is put in to make γ real and the ··· represent terms that have more than two factors of α or β. Since (2.16) must be true for all α and β, we must have

γc = αa βb fabc     (2.17)

for some constants fabc, thus

[Xa, Xb] = i fabc Xc     (2.18)
where

fabc = −fbac     (2.19)

because [A, B] = −[B, A]. Note that we can now write

δa = αa + βa − (1/2) γa + ···     (2.20)
so that if γ and the higher terms vanish, we would restore the equality in (2.10).
(2.18) is what is meant by the statement that the generators form an algebra under commutation. We have just shown that this follows from the group properties for Lie groups, because the Lie group elements depend smoothly on the parameters. The commutator in the algebra plays a role similar to the multiplication law for the group.
Now you might worry that if we keep expanding (2.12) beyond second order, we would need additional conditions to make sure that the group multiplication law is maintained. The remarkable thing is that we don't. The commutator relation (2.18) is enough. In fact, if you know the constants, fabc, you can reconstruct δ as accurately as you like for any α and β in some finite neighborhood of the origin! Thus the fabc are tremendously important - they summarize virtually the entire group multiplication law. The fabc are called the structure constants of the group. They can be computed in any nontrivial representation, that is unless the Xa vanish.
The commutator relation (2.18) is called the Lie algebra of the group. The Lie algebra is completely determined by the structure constants. Each group representation gives a representation of the algebra in an obvious way, and the structure constants are the same for all representations because they are fixed just by the group multiplication law and smoothness. Equivalence, reducibility and irreducibility can be transferred from the group to the algebra with no change.
Note that if there is any unitary representation of the algebra, then the fabcs are real, because if we take the adjoint of the commutator relation for hermitian Xs, we get

[Xa, Xb]† = −i f*abc Xc
          = [Xb, Xa] = i fbac Xc = −i fabc Xc     (2.21)
Since we are interested in groups which have unitary representations, we will just assume that the f abc are real.
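The expansion (2.15) can also be checked with explicit matrices. In this numerical sketch (all names are mine), e^{iA} e^{iB} is compared with exp(iA + iB − (1/2)[A, B]) for small hermitian 2×2 matrices; the mismatch falls off roughly as the cube of the overall scale, confirming that the commutator term fixes things up through second order:

```python
# Sketch: verify e^{iA} e^{iB} ~ exp(iA + iB - (1/2)[A,B]) through second order,
# using 2x2 complex matrices and a truncated power series for exp.
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y, s=1):
    return [[X[i][j] + s * Y[i][j] for j in range(2)] for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def expm(X, terms=25):
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = scale(1.0 / n, mul(term, X))
        out = add(out, term)
    return out

def norm(X):
    return max(abs(X[i][j]) for i in range(2) for j in range(2))

def bch_error(eps):
    A = scale(eps, [[0.3, 0.2 - 0.1j], [0.2 + 0.1j, -0.4]])  # hermitian
    B = scale(eps, [[-0.1, 0.5j], [-0.5j, 0.2]])             # hermitian
    comm = add(mul(A, B), mul(B, A), s=-1)                   # [A, B]
    lhs = mul(expm(scale(1j, A)), expm(scale(1j, B)))
    rhs = expm(add(scale(1j, add(A, B)), scale(-0.5, comm)))
    return norm(add(lhs, rhs, s=-1))

# halving eps should cut the residual mismatch by about 2**3 = 8,
# since the first uncanceled terms are third order in the parameters
e1, e2 = bch_error(0.2), bch_error(0.1)
print(e1 / e2)
```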
2.3 The Jacobi identity
The matrix generators also satisfy the following identity:
[Xa, [Xb, Xc]] + cyclic permutations = 0     (2.22)
called the Jacobi identity, which you can check by just expanding out the commutators. 1
The Jacobi identity can be written in a different way that is sometimes easier to use and is also instructive:

[Xa, [Xb, Xc]] = [[Xa, Xb], Xc] + [Xb, [Xa, Xc]]     (2.23)

This is a generalization of the product rule for commutation:

[Xa, Xb Xc] = [Xa, Xb] Xc + Xb [Xa, Xc]     (2.24)
The Jacobi identity is rather trivial for the Lie algebras with only finite dimensional representations that we will study in this book. But it is worth noting that in Lie's more general treatment, it makes sense in situations in which the product of generators is not even well defined.
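Since the generators here are matrices, (2.22) and (2.23) are identities you can also confirm numerically. A quick sketch with random 3×3 matrices:

```python
# Sketch: check the Jacobi identity and its rearranged form for random matrices.
import random

N = 3
random.seed(1)

def rand_mat():
    return [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(N)] for i in range(N)]

def comm(X, Y):
    return sub(mul(X, Y), mul(Y, X))

A, B, C = rand_mat(), rand_mat(), rand_mat()

# [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
pieces = [comm(A, comm(B, C)), comm(B, comm(C, A)), comm(C, comm(A, B))]
total = [[sum(m[i][j] for m in pieces) for j in range(N)] for i in range(N)]
assert all(abs(total[i][j]) < 1e-12 for i in range(N) for j in range(N))

# [A,[B,C]] = [[A,B],C] + [B,[A,C]]
lhs = comm(A, comm(B, C))
rhs = [[comm(comm(A, B), C)[i][j] + comm(B, comm(A, C))[i][j]
        for j in range(N)] for i in range(N)]
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(N) for j in range(N))
print("Jacobi identity holds")
```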
2.4 The adjoint representation
The structure constants themselves generate a representation of the algebra called the adjoint representation. If we use the algebra (2.18), we can compute
[Xa, [Xb, Xc]] = i fbcd [Xa, Xd] = − fbcd fade Xe     (2.25)

so (because the Xa are independent), (2.22) implies

fbcd fade + fabd fcde + fcad fbde = 0     (2.26)
Defining a set of matrices Ta

[Ta]bc = −i fabc     (2.27)

then (2.26) can be rewritten as

[Ta, Tb] = i fabc Tc     (2.28)
Thus the structure constants themselves furnish a representation of the algebra. This is called the adjoint representation. The dimension of a representation is the dimension of the linear space on which it acts (just as for a
1The Jacobi identity is really more subtle than this. We could have proved it directly in the abstract group, where the generators are not linear operators on a Hilbert space. Then the algebra involves a "Lie product" which is not necessarily a commutator, but nevertheless satisfies the Jacobi identity.
finite group). The dimension of the adjoint representation is just the number of independent generators, which is the number of real parameters required to describe a group element. Note that since the fabcs are real, the generators of the adjoint representation are pure imaginary.
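Looking ahead to chapter 3, SU(2) has fabc = εabc, so (2.27) can be checked concretely. This sketch (helper names are mine) builds [Ta]bc = −i εabc, verifies the algebra (2.28), and evaluates the trace scalar product of the adjoint generators, which comes out proportional to δab:

```python
# Sketch: the adjoint representation of SU(2), with f_abc = epsilon_abc.
def eps(a, b, c):
    # totally antisymmetric symbol on indices 0,1,2 with eps(0,1,2) = +1
    return ((b - a) * (c - a) * (c - b)) // 2

T = [[[-1j * eps(a, b, c) for c in range(3)] for b in range(3)]
     for a in range(3)]                      # [T_a]_bc = -i f_abc

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def comm(X, Y):
    XY, YX = mul(X, Y), mul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(3)] for i in range(3)]

# check [T_a, T_b] = i f_abc T_c
for a in range(3):
    for b in range(3):
        lhs = comm(T[a], T[b])
        rhs = [[sum(1j * eps(a, b, c) * T[c][i][j] for c in range(3))
                for j in range(3)] for i in range(3)]
        assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
                   for i in range(3) for j in range(3))

# the trace of pairs of adjoint generators: Tr(Ta Tb) = 2 delta_ab here
tr = [[sum(mul(T[a], T[b])[i][i] for i in range(3)) for b in range(3)]
      for a in range(3)]
print(abs(tr[0][0]), abs(tr[0][1]))  # 2.0 0.0
```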
We would like to have a convenient scalar product on the linear space of the generators in the adjoint representation, (2.27), to turn it into a vector space. A good one is the trace in the adjoint representation

Tr(Ta Tb)     (2.29)
This is a real symmetric matrix. We will next show that we can put it into a very simple canonical form. We can change its form by making a linear transformation on the Xa, which in turn, induces a linear transformation on the structure constants. Suppose

X′a = Lab Xb     (2.30)

then

[X′a, X′b] = i Lad Lbe fdec Xc
           = i Lad Lbe fdeg (L⁻¹)gh Lhc Xc = i Lad Lbe fdeg (L⁻¹)gc X′c     (2.31)

so²

fabc → f′abc = Lad Lbe fdeg (L⁻¹)gc     (2.32)
If we then define a new set of Tas with the transformed fs,

[T′a]bc = −i f′abc     (2.33)

or

T′a = Lad L Td L⁻¹     (2.34)

In other words, a linear transformation on the Xas induces a linear transformation on the Tas which involves both a similarity transformation and the same linear transformation on the a index that labels the generator. But in the trace the similarity transformation doesn't matter, so

Tr(T′a T′b) = Lac Lbd Tr(Tc Td)     (2.35)
²Because of the L⁻¹ in (2.32), it would make sense to treat the third index in fabc differently, and write it as an upper index - fab^c. We will not bother to do this because we are going to move very quickly to a restricted set of groups and basis sets in which Tr(Ta Tb) ∝ δab. Then only orthogonal transformations on the Xas are allowed, L⁻¹ = Lᵀ, so that all three indices are treated in the same way.
Thus we can diagonalize the trace by choosing an appropriate L (here we only need an orthogonal matrix). Suppose we have done this (and dropped the primes), so that
Tr(Ta Tb) = ka δab   (no sum)     (2.36)
We still have the freedom to rescale the generators (by making a diagonal L transformation), so for example, we could choose all the non-zero kas to have absolute value 1. But, we cannot change the sign of the kas (because L appears squared in the transformation (2.35)).
For now, we will assume that the kas are positive. This defines the class of algebras that we study in this book. They are called the compact Lie algebras. We will come back briefly below to algebras in which some kas are zero.³ And we will take

ka = λ for all a     (2.37)

for some convenient positive λ. In this basis, the structure constants are completely antisymmetric, because we can write
fabc = −i λ⁻¹ Tr([Ta, Tb] Tc)     (2.38)
which is completely antisymmetric because of the cyclic property of the trace,

Tr([Ta, Tb] Tc) = Tr(Ta Tb Tc − Tb Ta Tc)
                = Tr(Tb Tc Ta − Tc Tb Ta) = Tr([Tb, Tc] Ta)     (2.39)

which implies

fabc = fbca     (2.40)
Taken together, (2.19) and (2.40) imply the complete antisymmetry of fabc:

fabc = fbca = fcab = −fbac = −facb = −fcba     (2.41)
In this basis, the adjoint representation is unitary, because the Ta are imagi-
nary and antisymmetric, and therefore hermitian.
3Algebras in which some of the kas are negative have no nontrivial finite dimensional unitary representations. This does not mean that they are not interesting (the Lorentz group is one such), but we will not discuss them.
2.5 Simple algebras and groups
An invariant subalgebra is some set of generators which goes into itself under commutation with any element of the algebra. That is, if X is any generator in the invariant subalgebra and Y is any generator in the whole al-
gebra, [Y, X] is a generator in the invariant subalgebra. When exponentiated,
an invariant subalgebra generates an invariant subgroup. To see this note that
e^{−iY} e^{iX} e^{iY} = e^{iX′}     (2.42)

where

X′ = e^{−iY} X e^{iY}     (2.43)
   = X − i [Y, X] − (1/2) [Y, [Y, X]] + ···     (2.44)

Note that the easy way to see this is to consider

e^{−itY} X e^{itY}     (2.45)

then Taylor expand in t and set t = 1. Each derivative brings another commutator. Evidently, each of the terms in X′ is in the subalgebra, and thus e^{iX′} is in the subgroup, which is therefore invariant.
The whole algebra and 0 are trivial invariant subalgebras. An algebra
which has no nontrivial invariant subalgebra is called simple. A simple algebra generates a simple group.
The adjoint representation of a simple Lie algebra satisfying (2.37) is irreducible. To see this, assume the contrary. Then there is an invariant subspace in the adjoint representation. But the states of the adjoint representation correspond to generators, so this means that we can find a basis in which the invariant subspace is spanned by some subset of the generators, Tr for r = 1 to K. Call the rest of the generators Tx for x = K + 1 to N. Then because the rs span an invariant subspace, we must have
[Ta]xr = −i faxr = 0     (2.46)
for all a, x and r. Because of the complete antisymmetry of the structure
constants, this means that all components of f that have two rs and one x or
two xs and one r vanish. But that means that the nonzero structure constants involve either three rs or three xs, and thus the algebra falls apart into two nontrivial invariant subalgebras, and is not simple. Thus the adjoint representation of a simple Lie algebra satisfying (2.37) is irreducible.
We will often find it useful to discuss special Abelian invariant subalgebras consisting of a single generator which commutes with all the generators of the group (or of some subgroup we are interested in). We will call such an algebra a U(1) factor of the group. U(1) is the group of phase transformations. U(1) factors do not appear in the structure constants at all. These Abelian invariant subalgebras correspond to directions in the space of generators for which ka = 0 in (2.36). If Xa is a U(1) generator, fabc = 0 for all b and c. That also means that the corresponding ka is zero, so the trace scalar product does not give a norm on the space. The structure constants do not tell us anything about the U(1) subalgebras.
Algebras without Abelian invariant subalgebras are called semisimple. They are built, as we will see, by putting simple algebras together. In these algebras, every generator has a non-zero commutator with some other generator. Because of the cyclic property of the structure constants, (2.38), this also implies that every generator is a linear combination of commutators of generators. In such a case, the structure constants carry a great deal of information. We will use them to determine the entire structure of the algebra and its representations. From here on, unless explicitly stated, we will discuss semisimple algebras, and we will deal with representations by unitary operators.
2.6 States and operators
The generators of a representation (like the elements of the representations they generate) can be thought of as either linear operators or matrices, just as we saw when we were discussing representations of finite groups -
Xa |i⟩ = |j⟩ [Xa]ji     (2.47)
with the sum on j understood. As in (1.98), the states form row vectors and the matrix representing a linear operator acts on the right.
In the Hilbert space on which the representation acts, the group elements can be thought of as transformations on the states. The group element e^{iαaXa} maps or transforms the kets as follows:

|i⟩ → |i′⟩ = e^{iαaXa} |i⟩     (2.48)
Taking the adjoint shows that the corresponding bras transform as

⟨i| → ⟨i′| = ⟨i| e^{−iαaXa}     (2.49)
The ket obtained by acting on |i⟩ with an operator O is a sum of kets, and therefore must also transform as in (2.48):

O |i⟩ → e^{iαaXa} O |i⟩ = e^{iαaXa} O e^{−iαaXa} e^{iαaXa} |i⟩ = O′ |i′⟩     (2.50)
This implies that any operator O transforms as follows:
O → O′ = e^{iαaXa} O e^{−iαaXa}     (2.51)
The transformation leaves all matrix elements invariant.
The action of the algebra on these objects is related to the change in the state or operator under an infinitesimal transformation:

−i δ|i⟩ = αa Xa |i⟩     (2.52)

−i δ⟨i| = −⟨i| αa Xa     (2.53)

−i δO = [αa Xa, O]     (2.54)

Thus, corresponding to the action of the generator Xa on a ket

Xa |i⟩     (2.55)

is −Xa acting on a bra⁴

−⟨i| Xa     (2.56)

and the commutator of Xa with an operator

[Xa, O]     (2.57)
Then the invariance of a matrix element ⟨i| O |i⟩ is expressed by the fact that

⟨i| O (Xa |i⟩) + ⟨i| [Xa, O] |i⟩ − (⟨i| Xa) O |i⟩ = 0     (2.58)
2.7 Fun with exponentials
Consider the exponential

e^{iαaXa}     (2.59)
4The argument above can be summarized by saying that the minus signs in (2.56) and in the commutator in (2.57) come ultimately from the unitarity of the transformation, (2.48).
where Xa is a representation matrix. We can always define the exponential
as a power series,
e^{iαaXa} = Σ_{n=0}^{∞} (i αa Xa)^n / n!     (2.60)
However, it is useful to develop some rules for dealing with these things without expanding, like our simple rules for exponentials of commuting numbers. We have already seen that the multiplication law is not as simple as just adding the exponents. You might guess that the calculus is also more complicated. In particular,
∂/∂αb e^{iαaXa} ≠ i Xb e^{iαaXa}     (2.61)
However, it is true that
d/ds e^{i s αa Xa} = i αb Xb e^{i s αa Xa} = i e^{i s αa Xa} αb Xb     (2.62)
because αa Xa commutes with itself. This is very important, because you can often use it to derive other useful results. It is also true that
∂/∂αb e^{iαaXa} |α=0 = i Xb     (2.63)
because this can be shown directly from the expansion. It is occasionally useful to have a general expression for the derivative. Besides, it is a beautiful formula, so I will write it down and tell you how to derive it. The formula is
∂/∂αb e^{iαaXa} = ∫₀¹ ds e^{i s αa Xa} (i Xb) e^{i (1−s) αc Xc}     (2.64)
I love this relation because it is so nontrivial, yet so easy to remember. The integral just expresses the fact that the derivative may act anywhere "inside" the exponential, so the result is the average of all the places where the derivative can act. One way of deriving this is to define the exponential as a limit as in (2.7).
e^{iαaXa} = lim_{k→∞} (1 + i αa Xa / k)^k     (2.65)
and differentiate both sides - the result (2.64) is then just an exercise in defining an integral as a limit of a sum. Another way of doing it is to expand both sides and use the famous integral
∫₀¹ ds s^m (1 − s)^n = m! n! / (m + n + 1)!     (2.66)
We will see other properties of exponentials of matrices as we go along.
Problems
2.A. Find all components of the matrix e^{iαA} where
2.B. If [A, B] = B, calculate
e^{iαA} B e^{−iαA}
2.C. Carry out the expansion of δc in (2.11) and (2.12) to third order in α and β (one order beyond what is discussed in the text).
Chapter 3
SU(2)
The SU(2) algebra is familiar. 1
[Ja, Jb] = i εabc Jc     (3.1)
This is the simplest of the compact Lie algebras because εijk for i, j, k = 1 to 3 is the simplest possible completely antisymmetric object with three indices. (3.1) is equivalent (in units in which ℏ = 1) to the angular momentum algebra that you studied in quantum mechanics. In fact we will only do two things differently here. One is to label the generators by 1, 2 and 3 instead of x, y and z. This is obviously a great step forward. More important is the fact that we will not make any use of the operator JaJa. Initially, this will make the analysis slightly more complicated, but it will start us on a path that generalizes beautifully to all the other compact Lie algebras.
3.1 J3 eigenstates
Our ultimate goal is to completely reduce the Hilbert space of the world to block diagonal form. To start the process, let us think about some finite space, of dimension N, and assume that it transforms under some irreducible representation of the algebra. Then we can see what the form of the algebra tells us about the representation. Clearly, we want to diagonalize as many of the elements of the algebra as we can. In this case, since nothing commutes with anything else, we can only diagonalize one element, which we may as well take to be ]3. When we have done that, we pick out the states with the highest value of ]3 (we can always do that because we have assumed that the space
1We will see below why the name SU(2) is appropriate.
is finite dimensional). Call the highest value of J3 j. Then we have a set of states
|j, α⟩     (3.2)
where α is another label, only necessary if there is more than one state of highest J3 (of course, you know that we really don't need α because the highest state is unique, but we haven't shown that yet, so we will keep it). We can also always choose the states so that
⟨j, α|j, β⟩ = δαβ     (3.3)
3.2 Raising and lowering operators
Now, just as in introductory quantum mechanics, we define raising and lowering operators,
J± = (J1 ± i J2)/√2     (3.4)
satisfying
[J3, J±] = ±J±     (3.5)

[J+, J−] = J3     (3.6)
so they raise and lower the value of J3 on the states. If
J3 |m⟩ = m |m⟩     (3.7)

then

J3 J± |m⟩ = ([J3, J±] + J± J3) |m⟩ = (m ± 1) J± |m⟩     (3.8)
The key idea is that we can use the raising and lowering operators to construct the irreducible representations and to completely reduce reducible representations. This idea is very simple for SU(2), but it is very useful to see how it works in this simple case before we generalize it to an arbitrary compact Lie algebra.
There is no state with J3 = j+1 because we have assumed that j is the highest value of J3. Thus it must be that

J+ |j, α⟩ = 0     (3.9)

because any non-zero state would have J3 = j+1. The states obtained by acting with the lowering operator have J3 = j−1, so it makes sense to define

J− |j, α⟩ = Nj(α) |j−1, α⟩     (3.10)
where Nj(α) is a normalization factor. But we easily see that states with different α are orthogonal, because

Nj(β)* Nj(α) ⟨j−1, β|j−1, α⟩ = ⟨j, β| J+ J− |j, α⟩
    = ⟨j, β| [J+, J−] |j, α⟩ = ⟨j, β| J3 |j, α⟩
    = j ⟨j, β|j, α⟩ = j δαβ     (3.11)
Thus we can choose the states |j−1, α⟩ to be orthonormal by choosing

Nj(α) = √j = Nj     (3.12)
Then in addition to (3.10), we have

J+ |j−1, α⟩ = (1/Nj) J+ J− |j, α⟩
            = (1/Nj) [J+, J−] |j, α⟩
            = (j/Nj) |j, α⟩ = Nj |j, α⟩     (3.13)
The point is that because of the algebra, we can define the states so that the raising and lowering operators act without changing α. That is why the parameter α is eventually going to go away. Now an analogous argument shows that there are orthonormal states |j−2, α⟩ satisfying

J− |j−1, α⟩ = Nj−1 |j−2, α⟩
J+ |j−2, α⟩ = Nj−1 |j−1, α⟩     (3.14)
Continuing the process, we find a whole tower of orthonormal states, |j−k, α⟩, satisfying

J− |j−k, α⟩ = Nj−k |j−k−1, α⟩
J+ |j−k−1, α⟩ = Nj−k |j−k, α⟩     (3.15)
The Ns can be chosen to be real, and because of the algebra, they satisfy

N_{j−k}² = ⟨j−k, α| J+ J− |j−k, α⟩
         = ⟨j−k, α| [J+, J−] |j−k, α⟩ + ⟨j−k, α| J− J+ |j−k, α⟩
         = N_{j−k+1}² + j − k     (3.16)
This is a recursion relation for the Ns which is easy to solve by starting with N_j²:

N_j² = j
N_{j−1}² − N_j² = j − 1
    ···
N_{j−k}² − N_{j−k+1}² = j − k

N_{j−k}² = (k + 1) j − k(k + 1)/2 = (1/2)(k + 1)(2j − k)     (3.17)
or setting k = j - m
N_m = √((j + m)(j − m + 1)/2)     (3.18)
Because the representation is finite dimensional (by assumption - we haven't proved this) there must be some maximum number of lowering operators, ℓ, that we can apply to |j, α⟩. We must eventually come to some m = j − ℓ such that applying any more lowering operators gives 0. Then ℓ is a non-negative integer specifying the number of times we can lower the states with highest J3. Another lowering operator annihilates the state -

J− |j−ℓ, α⟩ = 0     (3.19)

But then the norm of J− |j−ℓ, α⟩ must vanish, which means that

N_{j−ℓ}² = (1/2)(ℓ + 1)(2j − ℓ) = 0     (3.20)

The factor ℓ + 1 cannot vanish, thus we must have

ℓ = 2j     (3.21)

Thus

j = ℓ/2  for some non-negative integer ℓ     (3.22)
Now we can get rid of α. It is now clear that the space breaks up into subspaces that are invariant under the algebra, one for each value of α, because the generators do not change α. Thus from our original assumption of irreducibility, there must be only one α value, so we can drop the α entirely.
Furthermore, there can be no other states, or the subspace we just constructed would be nontrivial (and invariant). Thus we have learned how the generators act on all the finite dimensional irreducible representations. In fact, though we won't prove it, there are no others - that is all representations are finite dimensional, so we know all of them.
3.3 The standard notation
We can now switch to the standard notation in which we label the states of
the irreducible representations by the highest J3 value in the representation and the J3 value: 2
|j, m⟩     (3.23)

and the matrix elements of the generators are determined by the matrix elements of J3 and the raising and lowering operators, J±:³

⟨j, m′| J3 |j, m⟩ = m δm′m
⟨j, m′| J+ |j, m⟩ = √((j + m + 1)(j − m)/2) δm′,m+1
⟨j, m′| J− |j, m⟩ = √((j + m)(j − m + 1)/2) δm′,m−1     (3.24)
These matrix elements define the spin j representation of the SU(2) algebra:

[J_a^j]_{kl} = ⟨j, j + 1 - k| J_a |j, j + 1 - l⟩    (3.25)
Here we have written the matrix elements in the conventional language where the rows and columns are labeled from 1 to 2j + 1. In this case, it is often convenient to label the rows and columns directly by their m values, which are just j + 1 - k and j + 1 - l above in (3.25). In this notation, (3.25) would read

[J_a^j]_{m'm} = ⟨j, m'| J_a |j, m⟩    (3.26)
where m and m' run from j to -j in steps of -1. We will use these interchangeably - choosing whichever is most convenient for the problem at hand.
²Well, not completely standard - in some books, including the first edition of this one, the j and m are written in the other order.
³The √2 factors are the result of our definition of the raising and lowering operators and are absent in some other treatments.
For example, for j = 1/2, this gives the spin 1/2 representation

J_1^{1/2} = (1/2) ( 0  1 )    J_2^{1/2} = (1/2) ( 0  -i )    J_3^{1/2} = (1/2) ( 1   0 )
                  ( 1  0 )                      ( i   0 )                      ( 0  -1 )    (3.27)

or

J_a^{1/2} = σ_a / 2    (3.28)

where the σ_a are the Pauli matrices, satisfying

σ_a σ_b = δ_{ab} + i ε_{abc} σ_c    (3.29)
The spin 1/2 representation is the simplest representation of SU(2). It is called the "defining" representation of SU (2), and is responsible for the name SU, which is an acronym for "Special Unitary". Exponentiating the gener-
ators of the spin 1/2 representation to get the representation of finite group elements gives matrices of the form
U = e^{i α_a σ_a / 2}    (3.30)

which are the most general 2 × 2 unitary matrices with determinant 1. The "special" in Special Unitary means that the determinant is 1, rather than an arbitrary complex number of absolute value 1.
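A quick numerical check of this claim, assuming numpy: because (α̂·σ)² = 1 for a unit vector α̂, the exponential series sums in closed form, and the result is unitary with unit determinant for any real α_a. (The closed-form identity used here is standard but is ours, not stated in the text.)

```python
import numpy as np

# exp(i alpha_a sigma_a / 2) = cos(|alpha|/2) + i sin(|alpha|/2) (alpha_hat . sigma),
# since (alpha_hat . sigma)^2 = 1; the result is in SU(2).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
alpha = np.array([0.3, -1.2, 0.7])                  # an arbitrary choice
mod = np.linalg.norm(alpha)
ndots = sum(n * s for n, s in zip(alpha / mod, sigma))
U = np.cos(mod / 2) * np.eye(2) + 1j * np.sin(mod / 2) * ndots
print(np.allclose(U @ U.conj().T, np.eye(2)))       # unitary
print(np.isclose(np.linalg.det(U), 1.0))            # determinant 1
```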
All the other irreducible representations can be constructed similarly. For example, the spin 1 representation looks like

J_1^1 = (1/√2) ( 0  1  0 )
               ( 1  0  1 )
               ( 0  1  0 )

J_2^1 = (1/√2) ( 0  -i   0 )
               ( i   0  -i )    (3.31)
               ( 0   i   0 )

J_3^1 = ( 1  0   0 )
        ( 0  0   0 )
        ( 0  0  -1 )
while the spin 3/2 representation is

J_1^{3/2} = (1/2) ( 0   √3  0   0  )
                  ( √3  0   2   0  )
                  ( 0   2   0   √3 )
                  ( 0   0   √3  0  )

J_2^{3/2} = (1/2) ( 0     -√3 i  0      0     )
                  ( √3 i   0    -2i     0     )    (3.32)
                  ( 0      2i    0     -√3 i  )
                  ( 0      0     √3 i   0     )

J_3^{3/2} = ( 3/2   0     0     0   )
            ( 0     1/2   0     0   )
            ( 0     0    -1/2   0   )
            ( 0     0     0    -3/2 )
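The matrices displayed above can be generated for any j directly from the matrix elements (3.24). Here is a short numerical sketch, assuming numpy (the helper name spin_matrices is ours, not the book's), that builds J^+, J^- and J_3 and checks the algebra in this normalization: [J^+, J^-] = J_3 and [J_1, J_2] = iJ_3.

```python
import numpy as np

def spin_matrices(j):
    """J+, J-, J3 for spin j, built from the matrix elements (3.24)
    (normalization J± = (J1 ± iJ2)/sqrt(2), so that [J+, J-] = J3)."""
    dim = int(round(2 * j)) + 1
    ms = [j - k for k in range(dim)]           # m runs from j down to -j
    J3 = np.diag(ms).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)
    for r, mp in enumerate(ms):
        for c, m in enumerate(ms):
            if np.isclose(mp, m + 1):          # <j,m+1|J+|j,m> from (3.24)
                Jp[r, c] = np.sqrt((j + m + 1) * (j - m) / 2)
    return Jp, Jp.conj().T, J3

Jp, Jm, J3 = spin_matrices(1.5)
print(np.allclose(Jp @ Jm - Jm @ Jp, J3))          # [J+, J-] = J3
J1 = (Jp + Jm) / np.sqrt(2)
J2 = (Jp - Jm) / (1j * np.sqrt(2))
print(np.allclose(J1 @ J2 - J2 @ J1, 1j * J3))     # [J1, J2] = i J3
```

Running it with j = 3/2 reproduces the explicit matrices above.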
The construction of the irreducible representations above generalizes to any compact Lie algebra, as we will see. The J3 values are called weights, and the analysis we have just done is called the highest weight construction because it starts with the unique highest weight of the representation. Note that the same construction provides a systematic procedure for bringing an arbitrary finite dimensional representation into block diagonal form. The procedure is as follows:
1. Diagonalize J3.
2. Find the states with the highest J3 value, j.
3. For each such state, explicitly construct the states of the irreducible spin j representation by applying the lowering operator to the states with highest J3.
4. Now set aside the subspace spanned by these representations, which is now in canonical form, and concentrate on the subspace orthogonal to it.
5. Take these remaining states, go to step 2 and start again with the
states with next highest J3 value. (3.33)
The end result will be the construction of a basis for the Hilbert space of the form

|j, m, α⟩    (3.34)
where m and j refer to the J_3 value and the representation as usual (as in (3.23)) and α refers to all the other observables that can be diagonalized to characterize the state. These satisfy

⟨j', m', α' | j, m, α⟩ = δ_{j'j} δ_{m'm} δ_{α'α}    (3.35)
The Kronecker δs are automatic consequences of our construction. They are also required by Schur's lemma, because the matrix elements satisfy

⟨j', m', α'| J_a |j, m, α⟩
 = [J_a^{j'}]_{m'm''} ⟨j', m'', α' | j, m, α⟩    (3.36)
 = ⟨j', m', α' | j, m'', α⟩ [J_a^j]_{m''m}

because we can insert a complete set of intermediate states on either side of J_a. Thus ⟨j', m', α' | j, m, α⟩ commutes with all the elements of an irreducible representation, and is either 0 if j ≠ j' or proportional to the identity, δ_{m'm}, if j = j'.
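The procedure (3.33) is easy to carry out numerically. Here is a sketch, assuming numpy, on the smallest interesting example: a four dimensional space carrying two spin 1/2 blocks (a preview of the tensor products of the next section). Lowering from the unique J_3 = 1 state fills out a spin 1 representation, and exactly one state, the spin 0 singlet, is left over.

```python
import numpy as np

s = np.array([[0, 1], [0, 0]]) / np.sqrt(2)    # J+ for spin 1/2, from (3.24)
I2 = np.eye(2)
Jp = np.kron(s, I2) + np.kron(I2, s)
Jm = Jp.T
J3 = np.kron(np.diag([0.5, -0.5]), I2) + np.kron(I2, np.diag([0.5, -0.5]))
# steps 1-2: J3 is already diagonal; the highest value, j = 1, occurs once
v = np.zeros(4); v[0] = 1.0                    # the J3 = 1 state
# step 3: lower twice to construct the spin 1 representation
triplet = [v]
for _ in range(2):
    w = Jm @ triplet[-1]
    triplet.append(w / np.linalg.norm(w))
# steps 4-5: project onto the orthogonal remainder; one spin 0 state is left
basis = np.array(triplet)
proj = np.eye(4) - basis.T @ basis
print(int(round(np.trace(proj))))              # dimension of the remainder
```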
3.4 Tensor products
You have probably all used the highest weight scheme, possibly without knowing it, to do what in introductory quantum mechanics is called addition of angular momentum. This occurs when we form a tensor product of two sets of states which transform under the group.⁴ This happens, in turn, whenever a system responds to the group transformation in more than one way. The classic example of this is a particle that carries both spin and orbital angular momentum. In this case, the system can be described in a space that you can think of as built of a product of two different kinds of kets,

|i, x⟩ = |i⟩ |x⟩    (3.37)

where the first ket, |i⟩, transforms under representation D_1 of the group and the second, |x⟩, under D_2. Then the product, called the tensor product,

⁴We saw an example of this in the normal modes of the triangle in our discussion of finite groups.
transforms as follows:
D(g) |i, x⟩ = |j, y⟩ [D_{1⊗2}(g)]_{jy,ix}
 = |j⟩ |y⟩ [D_1(g)]_{ji} [D_2(g)]_{yx}    (3.38)
 = (|j⟩ [D_1(g)]_{ji}) (|y⟩ [D_2(g)]_{yx})

In other words, the two kets are just transforming independently under their own representations. If we look at this near the identity, for infinitesimal α_a,

(1 + iα_a J_a) |i, x⟩ = |j, y⟩ ⟨j, y| (1 + iα_a J_a) |i, x⟩
 = |j, y⟩ (δ_{ji} δ_{yx} + iα_a [J_a^{1⊗2}]_{jy,ix})    (3.39)
 = |j, y⟩ (δ_{ji} + iα_a [J_a^1]_{ji}) (δ_{yx} + iα_a [J_a^2]_{yx})

Thus identifying first powers of α_a,

[J_a^{1⊗2}]_{jy,ix} = [J_a^1]_{ji} δ_{yx} + δ_{ji} [J_a^2]_{yx}    (3.40)
When we multiply the representations, the generators add, in the sense shown in (3.40). This is what happens with addition of angular momenta. We will often write (3.40) simply as

J_a^{1⊗2} = J_a^1 + J_a^2    (3.41)

leaving you to figure out from the context where the indices go, and ignoring the δ-functions which, after all, are just identity operators on the appropriate space. In fact, you can think of this in terms of the action of the generators as follows:

J_a (|ψ⟩ |χ⟩) = (J_a |ψ⟩) |χ⟩ + |ψ⟩ (J_a |χ⟩)    (3.42)
3.5 J3 values add
This is particularly simple for the generator J3 because we work in a basis in which J3 is diagonal. Thus the J3 values of tensor product states are just the sums of the J3 values of the factors:
J_3 (|j_1, m_1⟩ |j_2, m_2⟩) = (m_1 + m_2) (|j_1, m_1⟩ |j_2, m_2⟩)    (3.43)
This is what we would expect, classically, for addition of angular momentum, of course. But in quantum mechanics, we can only make it work for one component. We can, however, use this in the highest weight construction, (3.33).
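A quick numerical check of (3.40) and (3.43), assuming numpy: build J_3 on the tensor product of spin 1/2 and spin 1 as Kronecker products and read off its eigenvalues, which are the sums m_1 + m_2.

```python
import numpy as np

# (3.40) as Kronecker products: J3 on spin 1/2 (x) spin 1 is
# J3 x 1 + 1 x J3, and its eigenvalues are the sums m1 + m2.
J3_half = np.diag([0.5, -0.5])
J3_one = np.diag([1.0, 0.0, -1.0])
J3_prod = np.kron(J3_half, np.eye(3)) + np.kron(np.eye(2), J3_one)
print(sorted(np.diag(J3_prod).tolist()))   # m1 + m2 for all six product states
```

The multiplicities (one state each at ±3/2, two each at ±1/2) already show the decomposition into spin 3/2 ⊕ spin 1/2.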
Consider, for example, the tensor product of a spin 1/2 and spin 1 representation. The highest weight procedure (3.33) is what you would use to decompose the product space into irreducible representations. Let's do it explicitly. There is a unique highest weight state,

|3/2, 3/2⟩ = |1/2, 1/2⟩ |1, 1⟩    (3.44)
We can now construct the rest of the spin 3/2 states by applying lowering operators to both sides. For example, using (3.42),

J^- |3/2, 3/2⟩ = J^- (|1/2, 1/2⟩ |1, 1⟩)
√(3/2) |3/2, 1/2⟩ = √(1/2) |1/2, -1/2⟩ |1, 1⟩ + |1/2, 1/2⟩ |1, 0⟩    (3.45)

or

|3/2, 1/2⟩ = √(1/3) |1/2, -1/2⟩ |1, 1⟩ + √(2/3) |1/2, 1/2⟩ |1, 0⟩    (3.46)

Continuing the process gives

|3/2, -1/2⟩ = √(2/3) |1/2, -1/2⟩ |1, 0⟩ + √(1/3) |1/2, 1/2⟩ |1, -1⟩    (3.47)
|3/2, -3/2⟩ = |1/2, -1/2⟩ |1, -1⟩

Then the remaining states are orthogonal to these -

√(2/3) |1/2, -1/2⟩ |1, 1⟩ - √(1/3) |1/2, 1/2⟩ |1, 0⟩    (3.48)

and

√(1/3) |1/2, -1/2⟩ |1, 0⟩ - √(2/3) |1/2, 1/2⟩ |1, -1⟩    (3.49)

Applying the highest weight scheme to this reduced space gives

|1/2, 1/2⟩ = √(2/3) |1/2, -1/2⟩ |1, 1⟩ - √(1/3) |1/2, 1/2⟩ |1, 0⟩
|1/2, -1/2⟩ = √(1/3) |1/2, -1/2⟩ |1, 0⟩ - √(2/3) |1/2, 1/2⟩ |1, -1⟩    (3.50)

In this case, we have used up all the states, so the process terminates. Note that the signs of the spin 1/2 states were not determined when we found the states orthogonal to the spin 3/2 states, but that the relative sign is fixed because the J_3 = ±1/2 states are related by the raising and lowering operators.
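The lowering computation above is easy to mechanize. A sketch assuming numpy (the helper name lowering is ours) reproduces the coefficients of (3.46):

```python
import numpy as np

def lowering(j):
    """Matrix of J- for spin j, from (3.18): J-|j,m> = N_m |j,m-1>."""
    dim = int(round(2 * j)) + 1
    Jm = np.zeros((dim, dim))
    for k in range(dim - 1):
        m = j - k                    # row k corresponds to m = j - k
        Jm[k + 1, k] = np.sqrt((j + m) * (j - m + 1) / 2)
    return Jm

# basis ordering: |m1>|m2> with m1 in (1/2, -1/2), m2 in (1, 0, -1)
Jm = np.kron(lowering(0.5), np.eye(3)) + np.kron(np.eye(2), lowering(1.0))
v = np.zeros(6); v[0] = 1.0          # |3/2,3/2> = |1/2,1/2>|1,1>
w = Jm @ v
w /= np.linalg.norm(w)               # normalized |3/2,1/2>
# components: sqrt(2/3) on |1/2,1/2>|1,0>, sqrt(1/3) on |1/2,-1/2>|1,1>
print(np.round(w, 4).tolist())
```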
Problems
3.A. Use the highest weight decomposition, (3.33), to show that
{j} ⊗ {s} = ⊕_{l=|s-j|}^{s+j} {l}
where the EB in the summation just means that the sum is a direct sum, and { k} denotes the spin k representation of SU(2). To do this problem, you do not need to construct the precise linear combinations of states that appear in each irreducible representation, but you must at least show how the counting of states goes at each stage of the highest weight decomposition.
3.B. Calculate
where the σ_a are the Pauli matrices. Hint: write r⃗ = |r⃗| r̂.
3.C. Show explicitly that the spin 1 representation obtained by the highest weight procedure with j = 1 is equivalent to the adjoint representation, with f_{abc} = ε_{abc}, by finding the similarity transformation that implements the equivalence.
3.D. Suppose that [σ_a]_{ij} and [η_a]_{xy} are Pauli matrices in two different two dimensional spaces. In the four dimensional tensor product space, define the basis

|1⟩ = |i=1⟩ |x=1⟩
|2⟩ = |i=1⟩ |x=2⟩
|3⟩ = |i=2⟩ |x=1⟩
|4⟩ = |i=2⟩ |x=2⟩

Write out the matrix elements of σ_2 ⊗ η_1 in this basis.
3.E. We will often abbreviate the tensor product notation by leaving out the indices and the identity matrices. This makes for a very compact notation, but you must keep your wits about you to stay in the right space. In the example of problem 3.D, we could write:

[σ_a]_{ij} [η_b]_{xy} as σ_a η_b
[σ_a]_{ij} δ_{xy} as σ_a
δ_{ij} [η_b]_{xy} as η_b
δ_{ij} δ_{xy} as 1

So for example, (σ_1)(σ_2 η_1) = iσ_3 η_1 and (σ_1 η_2)(σ_1 η_3) = iη_1.
To get some practice with this notation, calculate
(a) [σ_a, σ_b η_c],
(b) Tr(σ_a {η_b, σ_c η_d}),
(c) [σ_1 η_1, σ_2 η_2],
where σ_a and η_a are independent sets of Pauli matrices and {A, B} = AB + BA is the "anticommutator."
Chapter 4
Tensor Operators
A tensor operator is a set of operators that transforms under commutation
with the generators of some Lie algebra like an irreducible representation of
the algebra. In this chapter, we will define and discuss tensor operators for
the SU(2) algebra discussed in chapter 3. A tensor operator transforming
under the spin-s representation of SU(2) consists of a set of operators, OJ
for l = 1 to 2s+l (or -s to s), such that
·
[Ja, on= o:n [J1]me.
( 4.1)
It is true, though we have not proved it, that every irreducible representation is finite dimensional and equivalent to one of the representations that we found with the highest weight construction. We can always choose all tensor operators for SU(2) to have this form.
4.1 Orbital angular momentum
Here is an example - a particle in a spherically symmetric potential. If the particle has no spin, then Ja is the orbital angular momentum operator,
J_a = L_a = ε_{abc} r_b p_c    (4.2)

The position vector is related to a tensor operator because it transforms under the adjoint representation

[J_a, r_b] = ε_{acd} [r_c p_d, r_b] = -i ε_{acd} r_c δ_{bd}
 = -i ε_{acb} r_c = r_c [J_a^{adj}]_{cb}    (4.3)
where J_a^{adj} is the adjoint representation, and we know from problem 3.C that this representation is equivalent to the standard spin 1 representation from the highest weight procedure.
4.2 Using tensor operators
Note that the transformation of the position operator in (4.3) does not have quite the right form, because the representation matrices J_a^{adj} are not in the standard form. The first step in using tensor operators is to choose the operator basis so that the conventional spin s representation appears in the commutation relation (4.1). This is not absolutely necessary, but it makes things easier, as we will see. We will discuss this process in general, and then see how it works for r_a.
Suppose that we are given a set of operators, Ω_x for x = 1 to 2s+1, that transforms according to a representation D that is equivalent to the spin-s representation of SU(2):

[J_a, Ω_x] = Ω_y [J_a^D]_{yx}    (4.4)

Since by assumption, D is equivalent to the spin-s representation, we can find a matrix S such that

S J_a^D S^{-1} = J_a^s    (4.5)

or in terms of matrix elements

[S]_{ℓy} [J_a^D]_{yx} [S^{-1}]_{xm} = [J_a^s]_{ℓm}    (4.6)
Then we define a new set of operators

O_ℓ = Ω_y [S^{-1}]_{yℓ}  for ℓ = -s to s    (4.7)

Now O_ℓ satisfies

[J_a, O_ℓ] = [J_a, Ω_y] [S^{-1}]_{yℓ}
 = Ω_z [J_a^D]_{zy} [S^{-1}]_{yℓ}
 = Ω_z [S^{-1}]_{zℓ'} [S]_{ℓ'z'} [J_a^D]_{z'y} [S^{-1}]_{yℓ}    (4.8)
 = O_{ℓ'} [J_a^s]_{ℓ'ℓ}
which is what we want. Notice that (4.8) is particularly simple for J_3, because in our standard basis in which the indices label the J_3 value, J_3^s (for any s) is a diagonal matrix

[J_3^s]_{ℓ'ℓ} = ℓ δ_{ℓ'ℓ}  for ℓ, ℓ' = -s to s.    (4.9)

Thus

[J_3, O_ℓ^s] = O_{ℓ'}^s [J_3^s]_{ℓ'ℓ} = ℓ O_ℓ^s    (4.10)
In practice, it is usually not necessary to find the matrix S explicitly. If we can find any linear combination of the Ω_x which has a definite value of J_3 (that means that it is proportional to its commutator with J_3), we can take that to be a component of O^s, and then build up all the other O^s components by applying raising and lowering operators.
For the position operator it is easiest to start by finding the operator r_0. Since [J_3, r_3] = 0, we know that r_3 has J_3 = 0 and therefore that r_3 ∝ r_0. Thus we can take

r_0 = r_3    (4.11)

Then the commutation relations for the spin 1 raising and lowering operators give the rest

[J^±, r_0] = r_{±1} = ∓(r_1 ± i r_2)/√2    (4.12)
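A small numerical check, assuming numpy: representing r_b by the component basis vector e_b, the adjoint action of J_3 from (4.3) is the matrix -iε_{3cb}, and r_3 and the combinations ∓(r_1 ± ir_2)/√2 are its eigenvectors with J_3 = 0 and ±1.

```python
import numpy as np

# From (4.3), [J_a, r_b] = r_c [J_a^adj]_{cb} with [J_a^adj]_{cb} = -i eps_{acb}.
# On component vectors (r_b <-> e_b), J3 acts by the matrix -i eps_{3cb}.
J3adj = np.array([[0, -1j, 0],
                  [1j,  0, 0],
                  [0,   0, 0]])
r0  = np.array([0, 0, 1])                   # r_3
rp1 = -np.array([1, 1j, 0]) / np.sqrt(2)    # -(r_1 + i r_2)/sqrt(2)
rm1 =  np.array([1, -1j, 0]) / np.sqrt(2)   #  (r_1 - i r_2)/sqrt(2)
for q, v in [(0, r0), (1, rp1), (-1, rm1)]:
    print(q, np.allclose(J3adj @ v, q * v))
```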
4.3 The Wigner-Eckart theorem
The interesting thing about tensor operators is how the product O_ℓ^s |j, m, α⟩ transforms.

J_a O_ℓ^s |j, m, α⟩ = [J_a, O_ℓ^s] |j, m, α⟩ + O_ℓ^s J_a |j, m, α⟩
 = O_{ℓ'}^s |j, m, α⟩ [J_a^s]_{ℓ'ℓ} + O_ℓ^s |j, m', α⟩ [J_a^j]_{m'm}    (4.13)

This is the transformation law for a tensor product of spin s and spin j, s ⊗ j. Because we are using the standard basis for the states and operators in which J_3 is diagonal, this is particularly simple for the generator J_3, for which (4.13) becomes

J_3 O_ℓ^s |j, m, α⟩ = (ℓ + m) O_ℓ^s |j, m, α⟩    (4.14)

The J_3 value of the product of a tensor operator with a state is just the sum of the J_3 values of the operator and the state.
The remarkable thing about this is that the product of the tensor operator and the ket behaves under the algebra just like the tensor product of two kets. Thus we can decompose it into irreducible representations in exactly the same way, using the highest weight procedure. That is, we note that O_s^s |j, j, α⟩ with J_3 = j + s is the highest weight state. We can lower it to construct the rest of the spin j + s representation. Then we can find the linear combination of J_3 = j + s - 1 states that is the highest weight of the spin j + s - 1 representation, and lower it to get the entire representation, and so on. In this way, we find explicit representations for the states of the irreducible components of the tensor product in terms of linear combinations of the O_ℓ^s |j, m, α⟩. You probably know, and have shown explicitly in problem 3.A, that in this decomposition, each representation from j + s to |j - s| appears exactly once. We can write the result of the highest weight analysis as follows:

Σ_ℓ O_ℓ^s |j, M - ℓ, α⟩ ⟨s, j, ℓ, M - ℓ | J, M⟩ = k_J |J, M⟩    (4.15)
Here |J, M⟩ is a normalized state that transforms like the J_3 = M component of the spin J representation and k_J is an unknown constant for each J (but does not depend on M). The coefficients ⟨s, j, ℓ, M - ℓ | J, M⟩ are determined by the highest weight construction, and can be evaluated from the tensor product of kets, where all the normalizations are known and the constants k_J are equal to 1:

Σ_ℓ |s, ℓ⟩ |j, M - ℓ⟩ ⟨s, j, ℓ, M - ℓ | J, M⟩ = |J, M⟩    (4.16)
One way to prove¹ that the coefficients can be taken to be the same in (4.15) and (4.16) is to notice that in both cases, J^+ |J, J⟩ must vanish, and that this condition determines the coefficients ⟨s, j, ℓ, J - ℓ | J, J⟩ up to a multiplicative constant. Since the transformation properties of O_ℓ^s |j, m⟩ and |s, ℓ⟩ |j, m⟩ are identical, the coefficients must be proportional. The only difference is the factor of k_J in (4.15).
We can invert (4.15) and express the original product states as linear combinations of the states with definite total spin J:

O_ℓ^s |j, m, α⟩ = Σ_{J=|j-s|}^{j+s} ⟨J, ℓ + m | s, j, ℓ, m⟩ k_J |J, ℓ + m⟩    (4.17)

¹This is probably obvious, but as we will emphasize below, the operators are different because we do not have a scalar product for them.
The coefficients ⟨J, ℓ + m | s, j, ℓ, m⟩ are thus entirely determined by the algebra, up to some choices of the phases of the states. Once we have a convention for fixing these phases, we can make tables of these coefficients once and for all, and be done with it. The notation ⟨J, ℓ + m | s, j, ℓ, m⟩ just means the coefficient of |J, ℓ + m⟩ in the product |s, ℓ⟩ |j, m⟩. These are called Clebsch-Gordan coefficients.

The Clebsch-Gordan coefficients are all group theory. The physics comes in when we reexpress the |J, ℓ + m⟩ in terms of the Hilbert space basis states |J, ℓ + m, β⟩ -

k_J |J, ℓ + m⟩ = Σ_β k_{αβ} |J, ℓ + m, β⟩    (4.18)

We have absorbed the unknown coefficients k_J into the equally unknown coefficients k_{αβ}. These depend on α, j, O^s and s, because the original products do, and on β and J, of course. But they do not depend at all on ℓ or m. We only need to know the coefficients for one value of ℓ + m. The k_{αβ} are called reduced matrix elements and denoted

k_{αβ} = ⟨J, β| O^s |j, α⟩    (4.19)

Putting all this together, we get the Wigner-Eckart theorem for matrix elements of tensor operators:

⟨J, m', β| O_ℓ^s |j, m, α⟩
 = δ_{m',ℓ+m} ⟨J, ℓ + m | s, j, ℓ, m⟩ ⟨J, β| O^s |j, α⟩    (4.20)

If we know any non-zero matrix element of a tensor operator between states of some given J, β and j, α, we can compute all the others using the algebra. This sounds pretty amazing, but all that is really going on is that we can use the raising and lowering operators to go up and down within representations using pure group theory. Thus by clever use of the raising and lowering operators, we can compute any matrix element from another. The Wigner-Eckart theorem just expresses this formally.
4.4 Example
Suppose

⟨1/2, 1/2, α| r_3 |1/2, 1/2, β⟩ = A    (4.21)

Find

⟨1/2, 1/2, α| r_1 |1/2, -1/2, β⟩ = ?    (4.22)
First, since r_0 = r_3,

⟨1/2, 1/2, α| r_0 |1/2, 1/2, β⟩ = A    (4.23)

Then we know from (4.12) that

r_1 = (r_{-1} - r_{+1})/√2    (4.24)

Thus

⟨1/2, 1/2, α| r_1 |1/2, -1/2, β⟩
 = ⟨1/2, 1/2, α| (1/√2)(-r_{+1} + r_{-1}) |1/2, -1/2, β⟩    (4.25)
 = -(1/√2) ⟨1/2, 1/2, α| r_{+1} |1/2, -1/2, β⟩

where the r_{-1} term drops out because its J_3 value does not match.
Now we could plug this into the formula, and you could find the Clebsch-Gordan coefficients in a table. But I'll be honest with you. I can never remember what the definitions in the formula are long enough to use it. Instead, I try to understand what the formula means, and I suggest that you do the same. We could also just use what we have already done, decomposing 1/2 ⊗ 1 into irreducible representations. For example, we know from the highest weight construction that

|3/2, 3/2⟩ = r_{+1} |1/2, 1/2, β⟩    (4.26)

is a 3/2, 3/2 state because it is the highest weight state that we can get as a product of an r_ℓ operator acting on an |1/2, m⟩ state. Then we can get the corresponding |3/2, 1/2⟩ state in the same representation by acting with the lowering operator J^-:

√(3/2) |3/2, 1/2⟩ = J^- |3/2, 3/2⟩
 = r_0 |1/2, 1/2, β⟩ + (1/√2) r_{+1} |1/2, -1/2, β⟩    (4.27)

But we know that this spin 3/2 state has zero matrix element with any spin 1/2 state, and thus

0 = ⟨1/2, 1/2, α | 3/2, 1/2⟩
 = √(2/3) ⟨1/2, 1/2, α| r_0 |1/2, 1/2, β⟩ + √(1/3) ⟨1/2, 1/2, α| r_{+1} |1/2, -1/2, β⟩    (4.28)
so

⟨1/2, 1/2, α| r_{+1} |1/2, -1/2, β⟩ = -√2 ⟨1/2, 1/2, α| r_0 |1/2, 1/2, β⟩ = -√2 A    (4.29)

so

⟨1/2, 1/2, α| r_1 |1/2, -1/2, β⟩ = A    (4.30)

Although we did not need it here, we can also conclude that

√(1/3) r_0 |1/2, 1/2, α⟩ - √(2/3) r_{+1} |1/2, -1/2, α⟩    (4.31)
is a 1/2, 1/2 state. This statement is actually a little subtle, and shows the power of the algebra. When we did this analysis for the tensor product of j=1 and j=1/2 states, we used the fact that the |1/2, 1/2⟩ must be orthogonal to the |3/2, 1/2⟩ states to find the form of the |1/2, 1/2⟩ state. We cannot do this here, because we do not know from the symmetry alone how to determine the norms of the states

r_ℓ |1/2, m⟩    (4.32)

However, we know from the analysis with the states and the fact that the transformation of these objects is analogous that

J^+ |1/2, 1/2⟩ = 0    (4.33)

Thus it is a 1/2, 1/2 state because it is the highest weight state in the representation. We will return to this issue later.
There are several ways of approaching such questions. Here is another way. Consider the matrix elements

⟨1/2, m, α| r_a |1/2, m', β⟩    (4.34)

The Wigner-Eckart theorem implies that these matrix elements are all proportional to a single parameter, the k_{αβ}. Furthermore, this result is a consequence of the algebra alone. Any operator that has the same commutation relations with the generators will have matrix elements proportional to those of r_a. But J_a itself has the same commutation relations. Thus the matrix elements of r_a are proportional to those of J_a. This is only helpful if the matrix elements of J_a are not zero (if they are all zero, the Wigner-Eckart theorem is trivially satisfied). In this case, they are not (at least if α = β):

⟨1/2, m, α| J_a |1/2, m', β⟩ = δ_{αβ} (1/2) [σ_a]_{mm'}    (4.35)
Thus

⟨1/2, m, α| r_a |1/2, m', β⟩ ∝ [σ_a]_{mm'}    (4.36)

This gives the same result.
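If you do want the tabulated coefficients, sympy can supply them. A sketch using sympy's CG class (which follows the standard Condon-Shortley phase convention; individual signs are convention dependent, but this ratio agrees with the construction here) checks that the ratio of Clebsch-Gordan coefficients reproduces the -√2 of (4.29):

```python
from sympy import S, simplify
from sympy.physics.quantum.cg import CG

# By Wigner-Eckart, <1/2,1/2|r_{+1}|1/2,-1/2> / <1/2,1/2|r_0|1/2,1/2> equals
# the ratio of the Clebsch-Gordan coefficients <1,1;1/2,-1/2|1/2,1/2> and
# <1,0;1/2,1/2|1/2,1/2>.
half = S(1) / 2
num = CG(1, 1, half, -half, half, half).doit()
den = CG(1, 0, half, half, half, half).doit()
print(simplify(num / den))   # the ratio -sqrt(2), as in (4.29)
```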
4.5 * Making tensor operators
It often happens that you come upon a set of operators which transforms under commutation with the generators like a reducible representation of the algebra

[J_a, Ω_x] = Ω_y [J_a^D]_{yx}    (4.37)

where D is reducible. In this case, some work is required to turn these into tensor operators, but the work is essentially just the familiar highest weight construction again. The first step is to make linear combinations of the Ω_x operators that have definite J_3 values,

[J_3, O_m] = m O_m    (4.38)
This is always possible because D can be decomposed into irreducible representations that have this property. Then we can apply the highest weight procedure and conclude that the operators with the highest weight, j, call them O_{j,α}, are components of a tensor operator with spin j, one for each α. If there are any operators with weight j - 1/2, O_{j-1/2,β}, they will be components of tensor operators with spin j - 1/2. However, things can get subtle at the next level. To find the tensor operators with spin j - 1, you must find linear combinations of the operators with weight j - 1 which have vanishing commutator with J^+ - then they correspond to the highest weights of the spin j - 1 representations:

[J^+, O_{j-1}] = 0    (4.39)
The point is, if you get the operators in a random basis, you have nothing like a scalar product, so you cannot simply find the operators that are "orthogonal" to the ones you have already assigned to representations. I hope that an example will make this clearer. Consider seven operators, a_{±1}, b_{±1} and a_0, b_0 and c_0, with the following commutation relations with the generators:

[J_3, a_{+1}] = a_{+1}    [J_3, b_{+1}] = b_{+1}
[J_3, a_0] = [J_3, b_0] = [J_3, c_0] = 0    (4.40)
[J_3, a_{-1}] = -a_{-1}    [J_3, b_{-1}] = -b_{-1}
[J^+, a_{+1}] = [J^+, b_{+1}] = 0
[J^+, a_0] = a_{+1}    [J^+, b_0] = b_{+1}    [J^+, c_0] = a_{+1} - b_{+1}    (4.41)
[J^+, a_{-1}] = c_0    [J^+, b_{-1}] = (a_0 + b_0 - 3c_0)/2

[J^-, a_{+1}] = (a_0 + b_0 + c_0)/2    [J^-, b_{+1}] = (a_0 + b_0 - c_0)/2
[J^-, a_0] = 2a_{-1} + b_{-1}    [J^-, b_0] = a_{-1} + b_{-1}    [J^-, c_0] = a_{-1}    (4.42)
[J^-, a_{-1}] = [J^-, b_{-1}] = 0

To construct the tensor operators, we start with the highest weight states, and define
A_{+1} = a_{+1}    B_{+1} = b_{+1}    (4.43)
Then we construct the rest of the components by applying the lowering operators:

A_0 = (a_0 + b_0 + c_0)/2    B_0 = (a_0 + b_0 - c_0)/2    (4.44)

and

A_{-1} = 2a_{-1} + b_{-1}    B_{-1} = a_{-1} + b_{-1}    (4.45)

You can check that the raising operators now just move us back up within the representations.

Now there is one operator left, so it must be a spin 0 representation. But which one is it? It must be the linear combination that has vanishing commutator with J^± - therefore it is

G_0 = a_0 - b_0 - c_0    (4.46)
Let me emphasize again that we went through this analysis explicitly to show the differences between dealing with states and dealing with tensor operators. Had this been a set of seven states transforming similarly under the algebra, we could have constructed the singlet state by simply finding the linear combination of J_3 = 0 states orthogonal to the J_3 = 0 states in the triplets. Here we do not have this crutch, but we can still find the singlet operator directly from the commutation relations. We could do the same thing for states, of course, but it is usually easier for states to use the nice properties of the scalar product.
4.6 Products of operators
One of the reasons that tensor operators are important is that a product of two tensor operators, O_{m_1}^{s_1} and O_{m_2}^{s_2}, in the spin s_1 and spin s_2 representations, transforms under the tensor product representation, s_1 ⊗ s_2, because

[J_a, O_{m_1}^{s_1} O_{m_2}^{s_2}] = [J_a, O_{m_1}^{s_1}] O_{m_2}^{s_2} + O_{m_1}^{s_1} [J_a, O_{m_2}^{s_2}]
 = O_{m_1'}^{s_1} O_{m_2}^{s_2} [J_a^{s_1}]_{m_1'm_1} + O_{m_1}^{s_1} O_{m_2'}^{s_2} [J_a^{s_2}]_{m_2'm_2}    (4.47)
Thus the product can be decomposed into tensor operators using the highest weight procedure.
Note that as usual, things are particularly simple for the generator J_3. (4.47) implies

[J_3, O_{m_1}^{s_1} O_{m_2}^{s_2}] = (m_1 + m_2) O_{m_1}^{s_1} O_{m_2}^{s_2}    (4.48)

The J_3 value of the product of two tensor operators is just the sum of the J_3 values of the two operators in the product.
Problems
4.A. Consider an operator O_x, for x = 1 to 2, transforming according to the spin 1/2 representation as follows:

[J_a, O_x] = O_y [σ_a]_{yx}/2

where the σ_a are the Pauli matrices. Given

⟨3/2, -1/2, α| O_1 |1, -1, β⟩ = A

find

⟨3/2, -3/2, α| O_2 |1, -1, β⟩
4.B. The operator (r_{+1})² satisfies

[J_3, (r_{+1})²] = 2 (r_{+1})²    [J^+, (r_{+1})²] = 0

It is therefore the O_{+2} component of a spin 2 tensor operator. Construct the other components, O_m. Note that the product of tensor operators transforms
like the tensor product of their representations. What is the connection of this with the spherical harmonics, Y_{l,m}(θ, φ)? Hint: let r_1 = sin θ cos φ, r_2 = sin θ sin φ, and r_3 = cos θ. Can you generalize this construction to arbitrary l and explain what is going on?
4.C. Find

e^{iα_a J_a^1}

where the J_a^1 are given by (3.31). Hint: There is a trick that makes this one easy. Write

α_a J_a^1 = α α̂_a J_a^1

where

α = √(α_a α_a),    α̂_a = α_a/α

You know that α̂_a J_a^1 has eigenvalues ±1 and 0, just like J_3^1 (because all directions are equivalent). Thus (α̂_a J_a^1)² is a projection operator and

(α̂_a J_a^1)³ = α̂_a J_a^1

You should be able to use this to manipulate the expansion of the exponential and get an explicit expression for e^{iα_a J_a^1}.
Chapter 5
Isospin
The idea of isospin arose in nuclear physics in the early thirties. Heisenberg introduced a notation in which the proton and neutron were treated as two components of a nucleon doublet
N = ( p )
    ( n )    (5.1)
He did this originally because he was trying to think about the forces between nucleons in nuclei, and it was mathematically convenient to write things in
this notation. In fact, his first ideas about this were totally wrong - he re-
ally didn't have the right idea about the relation between the proton and the neutron. He was thinking of the neutron as a sort of tightly bound state of proton and electron, and imagined that forces between nucleons could arise
by exchange of electrons. In this way you could get a force between proton and neutron by letting the electron shuttle back and forth - in analogy with
an H₂⁺ ion, and a force between neutron and neutron - an analogy with a
neutral H2 molecule. But no force between proton and proton.
5.1 Charge independence
It was soon realized that the model was crazy, and the force had to be charge independent - the same between pp, pn and nn to account for the pattern of nuclei that were observed. But while his model was crazy, he had put the p and n together in a doublet, and he had used the Pauli matrices to describe their interactions. Various people soon realized that charge independence would be automatic if there were really a conserved "spin" that acted on the doublet of p and n just as ordinary spin acts on the two J3 components of a spin-1/2 representation. Some people called this "isobaric spin",
which made sense, because isobars are nuclei with the same total number of baryons,¹ protons plus neutrons, and thus the transformations could move from one isobar to another. Unfortunately, Wigner called it isotopic spin and that name stuck. This name makes no sense at all because the isotopes have the same number of protons and different numbers of neutrons, so eventually, the "topic" got dropped, and it is now called isospin.
5.2 Creation operators
Isospin really gets interesting in particle physics, where particles are routinely
created and destroyed. The natural language for describing this dynamics is
based on creation and annihilation operators (and this language is very useful
for nuclear physics, as we will see). For example, for the nucleon doublet in
(5.1 ), we can write
|p, α⟩ = a†_{N,+1/2,α} |0⟩
|n, α⟩ = a†_{N,-1/2,α} |0⟩    (5.2)

where the

a†_{N,±1/2,α}    (5.3)

are creation operators for proton (+1/2) and neutron (-1/2) respectively in the state α, and |0⟩ is the vacuum state - the state with no particles in it. The N
stands for nucleon, and it is important to give it a name because we will soon discuss creation operators for other particles as well. The creation operators are not hermitian. Their adjoints are annihilation operators,
a_{N,±1/2,α}    (5.4)

These operators annihilate a proton (or a neutron) if they can find one, and otherwise annihilate the state, so they satisfy

a_{N,m,α} |0⟩ = 0    (5.5)
The whole notation assumes that the symmetry that rotates proton into neutron is at least approximately correct. If the proton and the neutron were not in some sense similar, it wouldn't make any sense to talk about them being in the same state.
¹Baryons are particles like protons and neutrons. More generally, the baryon number is one third the number of quarks. Because, as we will discuss in more detail later, the proton and the neutron are each made of three quarks, each has baryon number 1.
Because the p and n are fermions, their creation and annihilation operators satisfy anticommutation relations:

{a_{N,m,α}, a†_{N,m',β}} = δ_{mm'} δ_{αβ}
{a†_{N,m,α}, a†_{N,m',β}} = {a_{N,m,α}, a_{N,m',β}} = 0    (5.6)
With creation and annihilation operators, we can make multiparticle states
by simply applying more than one creation operator to the vacuum state. For
example
n proton creation operators:

a†_{N,1/2,α_1} ··· a†_{N,1/2,α_n} |0⟩ ∝ |n protons; α_1, ···, α_n⟩    (5.7)

produces an n proton state, with the protons in states α_1 through α_n. The
anticommutation relation implies that the state is completely antisymmetric in the labels of the particles. This guarantees that the state vanishes if any two of the αs are the same. It means (among other things) that the Pauli exclusion principle is automatically satisfied. What is nice about the creation and annihilation operators is that we can construct states with both protons and neutrons in the same way. For example,
n nucleon creation operators:

a†_{N,m_1,α_1} ··· a†_{N,m_n,α_n} |0⟩ ∝ |n nucleons; m_1, α_1; ···; m_n, α_n⟩    (5.8)

is an n nucleon state, with the nucleons in states described by the m variable (which tells you whether it is a proton or a neutron) and the α label, which tells you what state the nucleon is in. Now the anticommutation relation implies that the state is completely antisymmetric under exchange of the pairs of labels, m and α:

|n nucleons; m_1, α_1; m_2, α_2; ···; m_n, α_n⟩
 = -|n nucleons; m_2, α_2; m_1, α_1; ···; m_n, α_n⟩    (5.9)
If you haven't seen this before, it should bother you. It is one thing to assume that the proton creation operators anticommute, because two protons really cannot be in the same state. But why should proton and neutron creation operators anticommute? This principle is called the "generalized exclusion principle." Why should it be true? This is an important question, and we will come back to it below. For now, however, we will just see how the creation and annihilation operators behave in some examples.
5.3 Number operators
We can make operators that count the number of protons and neutrons by putting creation and annihilation operators together (the summation convention is assumed):

a†_{N,+1/2,α} a_{N,+1/2,α}  counts protons
a†_{N,-1/2,α} a_{N,-1/2,α}  counts neutrons    (5.10)
a†_{N,m,α} a_{N,m,α}  counts nucleons

Acting on any state with N_p protons and N_n neutrons, these operators have eigenvalues N_p, N_n and N_p + N_n respectively. This works because of (5.5) and the fact that for a generic pair of creation and annihilation operators

[a† a, a†] = a†    (5.11)

Notice that the number operators in (5.10) are summed over all the possible quantum states of the proton and neutron, labeled by α. If we did not sum over α, the operators would just count the number of protons or neutrons or both in the state α. We could get fancy and devise more restricted number operators where we sum over some α and not others, but we won't talk further about such things. The total number operators, summed over all α, will be particularly useful.
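These counting rules are easy to check in the smallest example. A sketch assuming numpy: build two fermionic modes (one proton state, one neutron state) as 4×4 matrices with a standard Jordan-Wigner sign string (a construction of ours, not in the text), verify the anticommutators (5.6), and check that the number operators count.

```python
import numpy as np

# Two fermionic modes on a 4-dimensional Fock space; mode basis: index 0 = empty.
sm = np.array([[0, 1], [0, 0]], dtype=float)   # single-mode annihilator
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
a_p = np.kron(sm, I2)                # proton annihilation
a_n = np.kron(sz, sm)                # neutron annihilation (sigma_z string)
for a in (a_p, a_n):                 # {a, a†} = 1 for each mode, as in (5.6)
    print(np.allclose(a @ a.T + a.T @ a, np.eye(4)))
print(np.allclose(a_p @ a_n + a_n @ a_p, 0))   # different modes anticommute
vac = np.zeros(4); vac[0] = 1.0      # the state annihilated by both a's
state = a_p.T @ (a_n.T @ vac)        # one proton and one neutron
Np, Nn = a_p.T @ a_p, a_n.T @ a_n    # number operators, as in (5.10)
print(state @ Np @ state, state @ Nn @ state)
```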
5.4 Isospin generators
For the one-particle states, we know how the generators of isospin symmetry should act, in analogy with the spin generators:
$T_a\, |N, m, \alpha\rangle = |N, m', \alpha\rangle\, [J_a^{1/2}]_{m'm}$  (5.12)
Or in terms of creation operators
$T_a\, a^\dagger_{N,m,\alpha}\, |0\rangle = a^\dagger_{N,m',\alpha}\, |0\rangle\, [J_a^{1/2}]_{m'm}$  (5.13)
Furthermore, the state with no particles should transform like the trivial representation:
$T_a\, |0\rangle = 0$  (5.14)
Thus we will get the right transformation properties for the one particle states if the creation operators transform like a tensor operator in the spin 1/2 representation under isospin:
$[T_a,\, a^\dagger_{N,m,\alpha}] = a^\dagger_{N,m',\alpha}\, [J_a^{1/2}]_{m'm} = \tfrac{1}{2}\, a^\dagger_{N,m',\alpha}\, [\sigma_a]_{m'm}$  (5.15)
It is easy to check that the following form for $T_a$ does the trick:
$T_a = a^\dagger_{N,m',\alpha}\, [J_a^{1/2}]_{m'm}\, a_{N,m,\alpha} + \cdots$
$\quad = \tfrac{1}{2}\, a^\dagger_{N,m',\alpha}\, [\sigma_a]_{m'm}\, a_{N,m,\alpha} + \cdots$
$\quad = \tfrac{1}{2}\, a^\dagger_N\, \sigma_a\, a_N + \cdots$  (5.16)
where $\cdots$ commutes with the nucleon creation and annihilation operators (and also annihilates $|0\rangle$). The last line is written in matrix form, where we think of the annihilation operators as column vectors and the creation operators as row vectors. Let us check that (5.16) has the right commutation relations with the creation operators so that (5.15) is satisfied.
$[T_a,\, a^\dagger_{N,m,\alpha}]$
$\quad = [\,a^\dagger_{N,m',\beta}\, [J_a^{1/2}]_{m'm''}\, a_{N,m'',\beta},\; a^\dagger_{N,m,\alpha}\,]$
$\quad = a^\dagger_{N,m',\beta}\, [J_a^{1/2}]_{m'm''}\, \{a_{N,m'',\beta},\, a^\dagger_{N,m,\alpha}\} - \{a^\dagger_{N,m',\beta},\, a^\dagger_{N,m,\alpha}\}\, [J_a^{1/2}]_{m'm''}\, a_{N,m'',\beta}$
$\quad = a^\dagger_{N,m',\alpha}\, [J_a^{1/2}]_{m'm}$  (5.17)
The advantage of thinking about the generators in this way is that we now immediately see how multiparticle states transform. Since the multiparticle states are built by applying more tensor (creation) operators to the vacuum state, the multiparticle states transform like tensor products - not a surprising result, but not entirely trivial either.
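The tensor-operator relation (5.15) and the closure of the $T_a$ built as in (5.16) can be checked numerically. The sketch below is again my own toy two-mode Fock space (index 0 for $m = +1/2$, index 1 for $m = -1/2$), not anything from the text:

```python
import numpy as np

# Build T_a = (1/2) a^dag sigma_a a from Jordan-Wigner fermion modes and
# check the tensor-operator relation (5.15) and the su(2) algebra.
c = np.array([[0., 1.], [0., 0.]])
Z = np.diag([1., -1.])
I = np.eye(2)
a    = [np.kron(c, I), np.kron(Z, c)]     # a_m for m = +1/2, -1/2
adag = [op.T for op in a]

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

T = [0.5 * sum(adag[i] @ a[j] * s[i, j] for i in range(2) for j in range(2))
     for s in sigma]

def comm(A, B):
    return A @ B - B @ A

# (5.15): [T_a, a^dag_m] = a^dag_{m'} [sigma_a / 2]_{m'm}
ok = all(np.allclose(comm(T[x], adag[m]),
                     sum(adag[mp] * sigma[x][mp, m] / 2 for mp in range(2)))
         for x in range(3) for m in range(2))
print(ok)                                        # True

# The T_a close into su(2): [T_1, T_2] = i T_3
print(np.allclose(comm(T[0], T[1]), 1j * T[2]))  # True
```

The second check works because $[a^\dagger A a,\, a^\dagger B a] = a^\dagger [A, B]\, a$ for fermion bilinears, so the $T_a$ inherit the algebra of the one-particle matrices.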
5.5 Symmetry of tensor products
We pause here to discuss an important fact about the combination of spin states (either ordinary spin or isospin). We will use it in the next section to discuss the deuteron. The result is this: when the tensor product of two identical spin 1/2 representations is decomposed into irreducible representations,
the spin 1 representation appears symmetrically, while the spin 0 appears antisymmetrically. To see what this means, suppose that the spin 1/2 states are
$|1/2, \pm 1/2, \alpha\rangle$  (5.18)
where $\alpha$ indicates whatever other parameters are required to describe the state. Now consider the highest weight state in the tensor product. This is the spin 1 combination of two identical $J_3 = 1/2$ states, and is thus symmetric in the exchange of the other labels:
$|1,1\rangle = |1/2,1/2,\alpha\rangle\,|1/2,1/2,\beta\rangle = |1/2,1/2,\beta\rangle\,|1/2,1/2,\alpha\rangle$  (5.19)
The lowering operators that produce the other states in the spin 1 representation preserve this symmetry because they act in the same way on the two spin 1/2 states.
$|1,0\rangle = \frac{1}{\sqrt{2}}\left(|1/2,-1/2,\alpha\rangle\,|1/2,1/2,\beta\rangle + |1/2,1/2,\alpha\rangle\,|1/2,-1/2,\beta\rangle\right)$
$|1,-1\rangle = |1/2,-1/2,\alpha\rangle\,|1/2,-1/2,\beta\rangle$  (5.20)
Then the orthogonal spin 0 state is antisymmetric in the exchange of $\alpha$ and $\beta$:
$|0,0\rangle = \frac{1}{\sqrt{2}}\left(|1/2,-1/2,\alpha\rangle\,|1/2,1/2,\beta\rangle - |1/2,1/2,\alpha\rangle\,|1/2,-1/2,\beta\rangle\right)$  (5.21)
5.6 The deuteron
The nucleons have spin 1/2 as well as isospin 1/2, so the $\alpha$ in the nucleon creation operator actually contains a $J_3$ label, in addition to whatever other parameters are required to determine the state.
As a simple example of the transformation of a multiparticle state, consider a state of two nucleons in an s-wave - a zero angular momentum state. Then the total angular momentum of the state is simply the spin angular momentum, the sum of the two nucleon spins. Furthermore, in an s-wave state, the wave function is symmetrical in the exchange of the position variables of the two nucleons. Then because the two-particle wave function is proportional to the product of two anticommuting creation operators acting on the vacuum state, it is antisymmetric under the simultaneous exchange of the isospin and spin labels of the two nucleons - if the spin representation is
symmetric, the isospin representation must be antisymmetric, and vice versa. When combined with the results of the previous section, this has physical consequences. The only allowed states are those with isospin 1 and spin 0 or with isospin 0 and spin 1. The deuteron is an isospin 0 combination, and has spin 1, as expected.
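The counting in this argument can be checked directly in a toy model of my own construction: antisymmetrize the two-nucleon (isospin $\otimes$ spin) labels and verify that every surviving state has $T(T+1) + S(S+1) = 2$, which is exactly the statement that it is either isospin 1 with spin 0 or isospin 0 with spin 1.

```python
import numpy as np

# Each nucleon lives in C^2(isospin) x C^2(spin); the two-nucleon space is
# 16-dimensional, and the fermionic (antisymmetric) subspace has dimension 6.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2, I4 = np.eye(2), np.eye(4)

# total isospin and spin components (particle 1 + particle 2)
t = [np.kron(np.kron(s, I2) / 2, I4) + np.kron(I4, np.kron(s, I2) / 2) for s in sig]
S = [np.kron(np.kron(I2, s) / 2, I4) + np.kron(I4, np.kron(I2, s) / 2) for s in sig]
T2 = sum(a @ a for a in t)      # Casimir, eigenvalue T(T+1)
S2 = sum(a @ a for a in S)      # Casimir, eigenvalue S(S+1)

# projector onto the antisymmetric two-particle subspace
e = np.eye(4)
SWAP = sum(np.outer(np.kron(e[i], e[j]), np.kron(e[j], e[i]))
           for i in range(4) for j in range(4))
vals, vecs = np.linalg.eigh((np.eye(16) - SWAP) / 2)
anti = vecs[:, vals > 0.5]      # orthonormal basis of the subspace

print(anti.shape[1])                                                 # 6
# T(T+1) + S(S+1) = 2 on every antisymmetric state:
print(np.allclose(anti.conj().T @ (T2 + S2) @ anti, 2 * np.eye(6)))  # True
```

The six antisymmetric states are exactly the three $(T{=}1, S{=}0)$ states plus the three $(T{=}0, S{=}1)$ states; both assignments give $T(T+1) + S(S+1) = 2$, and no other combination does.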
5.7 Superselection rules
It appears, in this argument, that we have assigned some fundamental physical significance to the anticommutation of the creation operators for protons and neutrons. As I mentioned above, this seems suspect, because in fact, the proton and neutron are not identical particles. What we actually know directly from the Pauli exclusion principle is that the creation operator, $a^\dagger_\alpha$, for any state of a particle obeying Fermi-Dirac statistics satisfies
$(a^\dagger_\alpha)^2 = 0$  (5.22)
If we have another creation operator for the same particle in another state, $a^\dagger_\beta$, we can form the combination $a^\dagger_\alpha + a^\dagger_\beta$, which when acting on the vacuum creates the particle in the state $\alpha + \beta$ (with the wrong normalization). Thus the exclusion principle also implies
$(a^\dagger_\alpha + a^\dagger_\beta)^2 = 0$  (5.23)
and thus
$\{a^\dagger_\alpha,\, a^\dagger_\beta\} = 0$  (5.24)
This argument is formally correct, but it doesn't really make much physical sense if $a^\dagger_\alpha$ and $a^\dagger_\beta$ create states of different particles, because it doesn't really make sense to superpose the states - this superposition is forbidden by a superselection rule. A superselection rule is a funny concept. It is the statement that you never need to think about superposing states with different values of an exactly conserved quantum number because those states must be orthogonal. Anything you can derive by such a superposition must also be derivable in some other way that does not involve the "forbidden" superposition. Thus as you see, the superposition is not so much forbidden as it is irrelevant. In this case, it is possible to show that one can choose the creation operators to anticommute without running into inconsistencies, but there is a much stronger argument. The anticommutation is required by the fact that the creation operators transform like tensor operators. Let's see how this implies the stated result for the two nucleon system.
Call the creation operators for the baryons $a^\dagger_{\pm\pm}$ (dropping the $N$ for brevity) where the first sign is the sign of the third component of isospin and the second is the sign of the third component of spin. Since $(a^\dagger_{++})^2 = 0$, there is no two nucleon state with $T_3 = 1$ and $J_3 = 1$. But this means that there is no state with isospin 1 and spin 1, since the highest weight state would have to have $T_3 = 1$ and $J_3 = 1$. In terms of creation operators, for example
$\{a^\dagger_{++},\, a^\dagger_{+-}\}\, |0\rangle = 0$  (5.25)
Similar arguments show that the operators must anticommute whenever they have one common index and the others are different.
The argument for operators that have no index in common is a little more subtle. First compute
$\left( \{a^\dagger_{+-},\, a^\dagger_{-+}\} + \{a^\dagger_{++},\, a^\dagger_{--}\} \right) |0\rangle = 0$  (5.26)
But the two terms in the sum must separately vanish because they are physically distinguishable. There cannot be a relation like (5.26) unless the two operators
$\{a^\dagger_{+-},\, a^\dagger_{-+}\}\, |0\rangle$  (5.27)
and
$\{a^\dagger_{++},\, a^\dagger_{--}\}\, |0\rangle$  (5.28)
separately vanish, because these two operators, if they did not vanish, would do physically distinguishable things - the creation of a proton with spin up and a neutron with spin down is not the same as the creation of a proton with spin down and a neutron with spin up. Thus the operators (5.27) and (5.28) must separately vanish. Thus, not only does the isospin 1, spin 1 state, (5.26), vanish, but so also does the isospin 0, spin 0 state
$\left( \{a^\dagger_{++},\, a^\dagger_{--}\} - \{a^\dagger_{+-},\, a^\dagger_{-+}\} \right) |0\rangle = 0$  (5.29)
5.8 Other particles
When isospin was introduced, the only known particles that carried it were the proton and neutron, and the nuclei built out of them. But as particle physicists explored further, at higher energies, new particles appeared that are not built out of nucleons. The first of these were the pions, three spinless bosons (that is, obeying Bose-Einstein rather than Fermi-Dirac statistics) with
charges $Q = +1$, $0$ and $-1$, and $T_3 = Q$, forming an isospin triplet.2 The creation and annihilation operators for the pions can be written as
$a^\dagger_{\pi,m,\alpha},\quad a_{\pi,m,\alpha} \qquad \text{for } m = -1 \text{ to } 1$  (5.30)
They satisfy commutation, rather than anticommutation relations
$[a_{\pi,m,\alpha},\, a^\dagger_{\pi,m',\beta}] = \delta_{mm'}\, \delta_{\alpha\beta}$
$[a^\dagger_{\pi,m,\alpha},\, a^\dagger_{\pi,m',\beta}] = [a_{\pi,m,\alpha},\, a_{\pi,m',\beta}] = 0$  (5.31)
so that the particle states will be completely symmetric. They also commute with nucleon creation and annihilation operators.
The isospin generators look like
$T_a = a^\dagger_{\pi,m,\alpha}\, [J_a^1]_{mm'}\, a_{\pi,m',\alpha} + \cdots$  (5.32)
where as in (5.16) the $\cdots$ refers to the contributions of other particles (like nucleons). Again, the creation operators are tensor operators.
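For reference, here is a quick numerical check (illustrative, not from the text) that the spin-1 matrices $[J_a^1]$ appearing in (5.32), written in the basis $m = (+1, 0, -1)$, close into the su(2) algebra, with $J_3$ eigenvalues matching the pion charges:

```python
import numpy as np

# Spin-1 (isospin-1) matrices in the basis m = (+1, 0, -1).
Jp = np.sqrt(2) * np.array([[0, 1, 0],
                            [0, 0, 1],
                            [0, 0, 0]], dtype=complex)   # raising operator
Jm = Jp.conj().T                                          # lowering operator
J1 = (Jp + Jm) / 2
J2 = (Jp - Jm) / 2j
J3 = np.diag([1., 0., -1.]).astype(complex)

print(np.allclose(J1 @ J2 - J2 @ J1, 1j * J3))   # True: [J_1, J_2] = i J_3
print(np.allclose(J3 @ J1 - J1 @ J3, 1j * J2))   # True: [J_3, J_1] = i J_2
print(np.diag(J3).real)                          # [ 1.  0. -1.] = pion charges
```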
There are many many other particles like the nucleons and the pions that participate in the strong interactions and carry isospin. The formalism of creation and annihilation operators gives us a nice way of writing the generators of isospin that act on all these particles. The complete form of the isospin generators is
$T_a = \sum_{\text{particles } x}\ \sum_{\text{states } \alpha}\ \sum_{T_3 \text{ values } m,m'} a^\dagger_{x,m,\alpha}\, [J_a^x]_{mm'}\, a_{x,m',\alpha}$  (5.33)
where $a^\dagger_{x,m,\alpha}$ and $a_{x,m',\alpha}$ are creation and annihilation operators for $x$-type particles satisfying commutation or anticommutation relations depending on whether they are bosons or fermions,
$[a_{x,m,\alpha},\, a^\dagger_{x',m',\beta}]_\pm = \delta_{mm'}\, \delta_{\alpha\beta}\, \delta_{xx'}$
$[a^\dagger_{x,m,\alpha},\, a^\dagger_{x',m',\beta}]_\pm = [a_{x,m,\alpha},\, a_{x',m',\beta}]_\pm = 0$  (5.34)
The rule for the $\pm$ ($+$ for anticommutator, $-$ for commutator) is that the anticommutator is used when both $x$ and $x'$ are fermions; otherwise the commutator is used. The $J^x$ in (5.33) is the isospin of the $x$ particles.
2When these particles were discovered, it was not completely obvious that they were not built out of nucleons and their antiparticles. When very little was known about the strong interactions, it was possible to imagine, for example, that the $\pi^+$ was a bound state of a proton and an antineutron. This has all the right quantum numbers - even the isospin is right. It just turns out that this model of the pion is wrong. Group theory can never tell you this kind of thing. You need real dynamical information about the strong interactions.
5.9 Approximate isospin symmetry
Isospin is an approximate symmetry. What this means in general is that the Hamiltonian can be written as
$H = H_0 + \Delta H$
(5.35)
where $H_0$ commutes with the symmetry generators and $\Delta H$ does not, but in some sense $\Delta H$ is small compared to $H_0$. It is traditional to say in the case of isospin that the "strong" interactions are isospin symmetric while the weak and electromagnetic interactions are not, and so take $H_0 = H_S$ and $\Delta H = H_{EM} + H_W$ where $H_S$, $H_{EM}$ and $H_W$ are the contributions to the Hamiltonian describing the strong interactions (including the kinetic energy), the electromagnetic interactions, and the weak interactions, respectively. From our modern perspective, this division is a bit misleading for two reasons. Firstly, the division between electromagnetic and weak interactions is not so obvious because of the partial unification of the two forces. Secondly, part of the isospin violating interaction arises from the difference in mass between the u and d quarks which is actually part of the kinetic energy. It seems to be purely accidental that this effect is roughly the same size as the effect of the electromagnetic interactions. But this accident was important historically, because it made it easy to understand isospin as an approximate symmetry. There are so many such accidents in particle physics that it makes one wonder whether there is something more going on. At any rate, we will simply lump all isospin violation into $\Delta H$. The group theory doesn't care about the dynamics anyway, as long as the symmetry structure is properly taken into account.
5.10 Perturbation theory
The way (5.35) is used is in perturbation theory. The states are classified into eigenstates of the zeroth order, isospin symmetric part of the Hamiltonian, $H_0$. Sometimes, just $H_0$ is good enough to approximate the physics of interest. If not, one must treat the effects of $\Delta H$ as perturbations. In the scattering of strongly interacting particles, for example, the weak and electromagnetic interactions can often be ignored. Thus in pion-nucleon scattering, all the different possible charge states have either isospin 1/2 or 3/2 (because $1 \otimes 1/2 = 3/2 \oplus 1/2$), so this scattering process can be described approximately by only two amplitudes.
The mathematics here is exactly the same as that which appears in the
decomposition of a spin-1/2 state with an orbital angular momentum 1 into
states with total angular momentum 3/2 and 1/2. The state with one pion and one nucleon can be described as a tensor product of an isospin 1/2 nucleon state with an isospin 1 pion state, just as the state with both spin and orbital angular momentum can be described as a tensor product, having both spin and angular momentum indices.
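The isospin decomposition of the pion-nucleon states can be made explicit with Clebsch-Gordan coefficients. This sketch uses SymPy's `CG` class; the $\pi^- p$ state ($T_3 = -1 + 1/2 = -1/2$) is the standard example:

```python
from sympy import Rational, S
from sympy.physics.quantum.cg import CG

# Decompose the pi^- p state into total isospin 3/2 and 1/2 pieces:
# the 1 (x) 1/2 = 3/2 (+) 1/2 decomposition quoted in the text.
# CG(j1, m1, j2, m2, J, M) is the Condon-Shortley coefficient <j1 m1; j2 m2 | J M>.
half = S.Half
c32 = CG(1, -1, half, half, Rational(3, 2), -half).doit()  # overlap with |3/2, -1/2>
c12 = CG(1, -1, half, half, half, -half).doit()            # overlap with |1/2, -1/2>

print(c32**2, c12**2)    # 1/3 2/3
print(c32**2 + c12**2)   # 1
```

So $|\pi^- p\rangle$ has probability 1/3 of being in the isospin 3/2 channel and 2/3 in the isospin 1/2 channel, which is how the approximate isospin symmetry reduces all the pion-nucleon charge states to just two independent amplitudes.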
Problems
5.A. Suppose that in some process, a pair of pions is produced in a state with zero relative orbital angular momentum. What total isospin values are possible for this state?
5.B. Show that the operators defined in (5.33) have the commutation relations of isospin generators.
5.C. $\Delta^{++}$, $\Delta^+$, $\Delta^0$ and $\Delta^-$ are isospin 3/2 particles ($T_3 = 3/2$, $1/2$, $-1/2$ and $-3/2$ respectively) with baryon number 1. They are produced by strong interactions in $\pi$-nucleon collisions. Compare the probability of producing $\Delta^{++}$ in $\pi^+ p \to \Delta^{++}$ with the probability of producing $\Delta^0$ in $\pi^- p \to \Delta^0$.