Towards an ‘edgier’ perspective on graphs

Standard

With the Foundations of Software Technology and Theoretical Computer Science Conference 2011 just four days away and the prospect of getting a glimpse of the likes of Umesh Vazirani, Moshe Vardi and Madhu Sudan exchanging ideas and insights right on our campus delightfully imminent, Karthik, Surya and I are quite excited about the whole affair. Not to mention that this is going to be the first major conference we’ll be attending. So, the three of us have been warming up for the whirlwind of a week that it’s going to be, with all the talks and workshops, by thinking about one of the most beautifully cruel problems of the trade, one that all three of us have common ground in. Namely, the Graph Isomorphism problem, henceforth referred to as simply GI.

The statement of GI is pretty straightforward – given two finite graphs G and H, determine whether there exists a bijection f from the set of vertices of G to that of H so that given two vertices g_1 and g_2 in G, f(g_1) and f(g_2) are adjacent in H if and only if g_1 and g_2 are adjacent in G. In other words, determine whether the graph G may be turned into the graph H by a mere relabeling of vertices.
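For concreteness, the definition transcribes directly into a brute-force check – hopelessly exponential, of course, which is rather the point of the whole problem. (A sketch in Python, with my own ad hoc encoding of graphs as vertex sets plus sets of two-element frozensets; none of these names are standard.)

```python
# Try every bijection between the vertex sets and check that it carries
# the edge set of G exactly onto the edge set of H.
from itertools import permutations

def are_isomorphic(vertices_g, edges_g, vertices_h, edges_h):
    if len(vertices_g) != len(vertices_h) or len(edges_g) != len(edges_h):
        return False
    vg = list(vertices_g)
    for image in permutations(vertices_h):
        f = dict(zip(vg, image))          # a candidate relabelling of vertices
        if {frozenset({f[a], f[b]}) for a, b in edges_g} == edges_h:
            return True
    return False

# A hexagon and a Star of David: six vertices, all of degree two,
# yet not isomorphic (one is connected, the other is two triangles).
hexagon = {frozenset({i, (i + 1) % 6}) for i in range(6)}
star = {frozenset({0, 2}), frozenset({2, 4}), frozenset({4, 0}),
        frozenset({1, 3}), frozenset({3, 5}), frozenset({5, 1})}
```

This also previews the hexagon/Star-of-David example that comes up below: degree sequences alone cannot tell them apart, but the brute-force search can.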

In a sense, graph isomorphism is to graph theory what coordinate transformations are to physics – mathematically relevant and meaningful results ought to be independent of how the vertices are labeled just as physically meaningful statements about natural phenomena ought to be independent of how our coordinate grid is laid out. GI, as a problem, is hence one of great interest, and this is all the more accentuated by the fact that we don’t know where it fits in the grand landscape of algorithmic complexity. Is it solvable in polynomial time? Is it NP-complete? As of now, these questions still remain unanswered.

So, this is the problem that we (as in, we the Triumvirate) have taken up. Of course, everyone would like to see label-independent descriptions of graphs à la manifestly covariant formulations of dynamical laws in physics, but it’s hard to do that in an unambiguous and general way. For instance, a graph with ‘six vertices, all of degree two’ describes a hexagon as well as a Star of David, which are clearly non-isomorphic. Of course, we could impose further conditions of connectedness et cetera, but that doesn’t adhere to the ideal of generality. With the above in mind, the Triumvirate has decided to begin by developing alternative ways of looking at graphs which shall hopefully shed some light on this matter, if we are lucky enough. And this is what we came up with.

Consider an object which I shall call grof (no, nothing to do with the psychiatrist) that consists of finitely many things called groficles. I’m deliberately choosing highly nonstandard, not to mention, funky, terminology to obscure how things tie up to the usual notion of graphs. Maybe I’m being too hard-nosed as a mathematician but then the prospect of preexisting notions getting in the way is too habitual a risk when we’re trying to look at things in new ways. Now, back to grofs. So, a grof \mathfrak{G} consists of elements called groficles that satisfy the following axioms.

  1. For every groficle \mathfrak{g}\in\mathfrak{G}, there is a partition \Pi(\mathfrak{g}) of \mathfrak{G}\smallsetminus\{\mathfrak{g}\} into three equivalence classes, of which one is singled out as the zero class \zeta(\mathfrak{g})\in \Pi(\mathfrak{g}).
  2. Given two distinct groficles \mathfrak{f} and \mathfrak{g} in \mathfrak{G}, if \mathfrak{f}\in \zeta(\mathfrak{g}), then \mathfrak{g}\in \zeta(\mathfrak{f}).
  3. Given three pairwise distinct groficles \mathfrak{f}, \mathfrak{g} and \mathfrak{h} in \mathfrak{G}, if \mathfrak{g}\approx_{\mathfrak{f}}\mathfrak{h} and \mathfrak{h}\not\approx_{\mathfrak{g}}\mathfrak{f}, then \mathfrak{g}, \mathfrak{h}\in \zeta(\mathfrak{f}),  where \approx_{\mathfrak{f}} is the equivalence relation induced by the partition \Pi(\mathfrak{f}) (in other words, \mathfrak{g}\approx_{\mathfrak{f}}\mathfrak{h} amounts to saying that \mathfrak{g} and \mathfrak{h} belong to the same equivalence class in the partition \Pi(\mathfrak{f})).
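To make sure the axioms aren’t vacuous, here is one concrete family of grofs, built computationally (a spoiler for what’s coming, admittedly, and all the names are mine): take the edges of a simple graph as groficles, let the two nonzero classes of \Pi(\mathfrak{g}) collect the other edges through each endpoint of \mathfrak{g}, and let the zero class collect the edges touching neither endpoint. The three axioms can then be verified by brute force.

```python
def make_grof(edges):
    """Groficles are edges; Pi(g) sorts the other edges by which endpoint
    of g they touch, the zero class collecting those touching neither."""
    grof = {}
    for g in edges:
        u, v = tuple(g)
        through_u = {e for e in edges if e != g and u in e}
        through_v = {e for e in edges if e != g and v in e}
        zero = {e for e in edges if e != g and u not in e and v not in e}
        grof[g] = (through_u, through_v, zero)
    return grof

def same_class(grof, f, g, h):
    """g ~_f h: g and h lie in the same class of the partition Pi(f)."""
    return any(g in c and h in c for c in grof[f])

def check_axioms(grof):
    edges = set(grof)
    for g in edges:                      # Axiom 1: Pi(g) partitions G \ {g}
        cu, cv, zero = grof[g]
        assert cu | cv | zero == edges - {g}
        assert cu.isdisjoint(cv) and zero.isdisjoint(cu | cv)
    for f in edges:                      # Axiom 2: the zero class is symmetric
        for g in edges - {f}:
            assert (f in grof[g][2]) == (g in grof[f][2])
    for f in edges:                      # Axiom 3
        for g in edges - {f}:
            for h in edges - {f, g}:
                if same_class(grof, f, g, h) and not same_class(grof, g, h, f):
                    assert g in grof[f][2] and h in grof[f][2]
    return True

hexagon = {frozenset({i, (i + 1) % 6}) for i in range(6)}
assert check_axioms(make_grof(hexagon))
```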

And now for the recipe for turning the confounding mess above into something more palatable. Let’s start by picking an arbitrary groficle \mathfrak{g} in \mathfrak{G}. Now, it is quite possible that one or more of the equivalence classes in \Pi(\mathfrak{g}) are empty, but that really isn’t much of a bother, as all the assertions that are to follow hold trivially for empty sets; so we needn’t treat the empty case separately. Also, we’ll let, as the standard convention goes, [\mathfrak{g}]_{\mathfrak{f}} denote the equivalence class in \Pi(\mathfrak{f}) that contains \mathfrak{g}. And now, we make the assertion:

If [\mathfrak{g}]_{\mathfrak{f}}\ne\zeta(\mathfrak{f}), then [\mathfrak{g}]_{\mathfrak{f}}\cup\{\mathfrak{f}\}=[\mathfrak{f}]_{\mathfrak{g}}\cup\{\mathfrak{g}\}.

Proof: The antecedent in the above conditional, [\mathfrak{g}]_{\mathfrak{f}}\ne\zeta(\mathfrak{f}), is just another way of saying that \mathfrak{g}\not\in\zeta(\mathfrak{f}). The groficles \mathfrak{f} and \mathfrak{g} belong to both [\mathfrak{g}]_{\mathfrak{f}}\cup\{\mathfrak{f}\} and [\mathfrak{f}]_{\mathfrak{g}}\cup\{\mathfrak{g}\}, so, we consider a groficle \mathfrak{h}\in[\mathfrak{g}]_{\mathfrak{f}}\cup\{\mathfrak{f}\} which is different from both \mathfrak{f} and \mathfrak{g}. Since \mathfrak{h} is different from \mathfrak{f}, it must belong to [\mathfrak{g}]_{\mathfrak{f}}, which means \mathfrak{g}\approx_{\mathfrak{f}}\mathfrak{h}. Now, if \mathfrak{h}\not\approx_{\mathfrak{g}}\mathfrak{f}, then, by virtue of Axiom 3, we would have \mathfrak{g}\in\zeta(\mathfrak{f}) which is a contradiction. Hence, we conclude, \mathfrak{h}\approx_{\mathfrak{g}}\mathfrak{f}, or in other words, \mathfrak{h}\in[\mathfrak{f}]_{\mathfrak{g}}, which further implies,  \mathfrak{h}\in[\mathfrak{f}]_{\mathfrak{g}}\cup\{\mathfrak{g}\}. Thus, [\mathfrak{g}]_{\mathfrak{f}}\cup\{\mathfrak{f}\}\subseteq[\mathfrak{f}]_{\mathfrak{g}}\cup\{\mathfrak{g}\}. Now, by Axiom 2, we know \mathfrak{g}\not\in\zeta(\mathfrak{f}) implies \mathfrak{f}\not\in\zeta(\mathfrak{g}), which means we can interchange \mathfrak{f} and \mathfrak{g} in the above argument to obtain [\mathfrak{f}]_{\mathfrak{g}}\cup\{\mathfrak{g}\}\subseteq[\mathfrak{g}]_{\mathfrak{f}}\cup\{\mathfrak{f}\}. Putting the two together, we end up with  [\mathfrak{g}]_{\mathfrak{f}}\cup\{\mathfrak{f}\}=[\mathfrak{f}]_{\mathfrak{g}}\cup\{\mathfrak{g}\}. Q. E. D.

What we have essentially shown here is that we can take one of the two nonzero equivalence classes of any groficle, add the groficle in question to it to obtain a set which we’ll call v for now, remove another groficle from it and get one of the nonzero equivalence classes of the second groficle in question. The set v represents a rather nice democratic situation, which calls for some more funky terminology – all the groficles which are elements of v are said to canoodle with one another at the canoodlarium v.

A few quick observations: since for any given groficle \mathfrak{g}, there are exactly two disjoint nonzero equivalence classes in the partition \Pi(\mathfrak{g}), there are exactly two distinct canoodlaria associated with it. The two canoodlaria are necessarily nonempty – they both contain at least the groficle \mathfrak{g}. Given two arbitrary canoodlaria u and v, they may either be disjoint or have exactly one groficle in common. What if two distinct canoodlaria had at least two distinct groficles, say \mathfrak{f} and \mathfrak{g}, in common? Then the two nonzero equivalence classes in \Pi(\mathfrak{f}) would be u\smallsetminus\{\mathfrak{f}\} and v\smallsetminus\{\mathfrak{f}\}. But then, \mathfrak{g}\in \left(u\smallsetminus\{\mathfrak{f}\}\right)\cap\left (v\smallsetminus\{\mathfrak{f}\}\right). The two equivalence classes in the partition \Pi(\mathfrak{f}) are thus not disjoint, which is a contradiction.

So this is how this whole grof business we’ve been at ties up with the familiar notion of graphs. The canoodlaria become the vertices, with disjoint canoodlaria corresponding to non-adjacent vertices, and the groficles in the intersection of the two canoodlaria  corresponding to the edges. Axiom 1 says that an edge may or may not be linked with another edge, and if linked, it can do so in two ways (i.e. through its two vertices). Axiom 2 says that the relation of being linked is a symmetric one. Axiom 3 says (in a very disguised fashion) that if the edges in question link through the same vertex, then the relation of being linked is also a transitive one. However, there are a few important differences that have to be kept in mind as well. Firstly, our construction doesn’t allow for the possibility of loops (since there have to be exactly two canoodlaria associated with each groficle) or multiple edges (since there can be at most one groficle for every pair of canoodlaria). And secondly, isolated vertices are meaningless as they correspond to empty canoodlaria and all empty canoodlaria are set theoretically the same. This isn’t undesirable at all, for as far as GI is concerned, loops, multiple edges and isolated vertices only lead to increased paperwork and are as useful as complimenting a girl you’ve just chatted up over a drink about how she is topologically equivalent to a five-holed blob.
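The dictionary above can be tested in a few lines (a standalone sketch with my own encoding: the edges of a hexagon as groficles, and the sets of edges through a common vertex as canoodlaria):

```python
edges = {frozenset({i, (i + 1) % 6}) for i in range(6)}   # a hexagon

# Each canoodlarium is the set of all edges through one vertex.
canoodlaria = set()
for g in edges:
    for endpoint in g:
        canoodlaria.add(frozenset(e for e in edges if endpoint in e))

# One canoodlarium per (non-isolated) vertex ...
assert len(canoodlaria) == 6
# ... two distinct canoodlaria share at most one groficle (no multi-edges) ...
assert all(len(u & v) <= 1 for u in canoodlaria for v in canoodlaria if u != v)
# ... and adjacency is recovered: canoodlaria meet iff the vertices are adjacent.
recovered_edges = {frozenset({u, v}) for u in canoodlaria
                   for v in canoodlaria if u != v and u & v}
assert len(recovered_edges) == 6
```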

In fact, I think I’ll go the whole hog and tout our grof formalism as being better adapted for the analysis of GI than the plain old ‘vertex-ey’ way of doing things. This, by itself, is of course a rather dangerous statement to make for multiple reasons, a heightened probability of a nuclear holocaust not being one of them. The notion of functions on grofs, where edges aka groficles are mapped to edges aka groficles, is markedly different from the notion of functions on graphs, where vertices are mapped to vertices, and it makes no sense to directly compare them. What is legitimate is to first define what a grof-isomorphism is in parallel with a graph-isomorphism.

Given two (finite) graphs G and H, a graph-isomorphism f_{G,H} is a bijection from the set of vertices of G to that of H such that given two vertices g_1 and g_2 in G, f_{G,H}(g_1) and f_{G,H}(g_2) are adjacent in H if and only if g_1 and g_2 are adjacent in G.

Given two (finite) grofs \mathfrak{G} and \mathfrak{H}, a grof-isomorphism f_{\mathfrak{G},\mathfrak{H}} is a bijection from the set of groficles of \mathfrak{G} to that of \mathfrak{H} such that given some groficles \mathfrak{g}_i (where i belongs to a finite indexing set) in \mathfrak{G}, f_{\mathfrak{G},\mathfrak{H}}(\mathfrak{g}_i) canoodle with one another in \mathfrak{H} if and only if \mathfrak{g}_i canoodle with one another in \mathfrak{G}.

And then we can ask, given two graphs G and H which may be recast as the grofs \mathfrak{G} and \mathfrak{H} respectively, whether the existence of a graph-isomorphism between G and H implies the existence of a grof-isomorphism between \mathfrak{G} and \mathfrak{H}, and vice versa. The answer is of course affirmative, and the argument is pretty straightforward and constructive. Because between two adjacent vertices (aka non-disjoint canoodlaria) g_1 and g_2 there is exactly one edge (aka the lone groficle \mathfrak{g} in g_1\cap g_2), given a graph-isomorphism f_{G,H} we may define a corresponding grof-isomorphism f_{\mathfrak{G},\mathfrak{H}} by its action f_{\mathfrak{G},\mathfrak{H}}(\mathfrak{g})\in f_{G,H}(g_1)\cap f_{G,H}(g_2). This is well-defined as graph-isomorphisms take adjacent vertices to adjacent vertices. Conversely, given a grof-isomorphism f_{\mathfrak{G},\mathfrak{H}}, we may define a corresponding graph-isomorphism f_{G,H} by the action [\mathfrak{g}_1]_{\mathfrak{g}_2}\cup\{\mathfrak{g}_2\}\mapsto[f_{\mathfrak{G},\mathfrak{H}}(\mathfrak{g}_1)]_{f_{\mathfrak{G},\mathfrak{H}}(\mathfrak{g}_2)}\cup\{f_{\mathfrak{G},\mathfrak{H}}(\mathfrak{g}_2)\}. This is well-defined as grof-isomorphisms take canoodling groficles to canoodling groficles.
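Here is the first direction of that argument in executable form (a sketch under my own naming; for a pair of groficles, canoodling just means the corresponding edges meet at a common vertex):

```python
# Graphs as sets of two-element frozensets; all names here are mine.

def induced_edge_map(f, edges):
    """Lift a vertex bijection f (a dict) to the corresponding map on edges."""
    return {e: frozenset(f[x] for x in e) for e in edges}

def canoodle(e1, e2):
    """Two distinct groficles canoodle iff the edges share an endpoint."""
    return bool(e1 & e2)

# A 4-cycle and a relabelling of it: f is a graph-isomorphism,
# and the lifted map preserves canoodling in both directions.
edges = {frozenset({0, 1}), frozenset({1, 2}),
         frozenset({2, 3}), frozenset({3, 0})}
f = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}
phi = induced_edge_map(f, edges)
assert all(canoodle(e1, e2) == canoodle(phi[e1], phi[e2])
           for e1 in edges for e2 in edges if e1 != e2)
```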

And finally it’s time to ask the hard questions. What is the point of the above? Why do I think an ‘edgier’ approach has an edge over the usual vertex-based ones? To be honest, the most convincing answer that I’ve been able to come up with is that this is quite, what is it that the hipsters say… ah! ‘Underground’! And the second most convincing answer (trivially so; I don’t find it very convincing) is that since grof-isomorphisms are far more stringent than any self-respecting graph-isomorphism could ever hope to be, there are far more ways in which the former may fail to exist. Computationally, this means things are easier to weed out – indeed, most non-isomorphic graph-pairs don’t even have the same number of edges to begin with! But whether this translates to some nice and hopefully earth-shattering insights into the richness of mathematical structure that graph theory has to offer remains to be seen.

A note of clarification


It seems that I’ve befuddled a fair share of people already by placing the previous post in the categories ‘Objects and Morphisms’ and ‘Functors and Natural Transformations’. Though yes, Schur’s lemma can be garbed in categorical livery, there was no mention of that anywhere, and now, everybody feels duped. So, first of all, I offer my sincerest apologies. All this was intended to be merely a sardonic commentary on the lamentable state of affairs today where the ordinary folk out there hijack perfectly reasonable mathematical terminology, corrupt its meaning and use it for various diabolical purposes in their day-to-day lives. And to clear things up, I’ll briefly talk about the scheme (oops!) according to which I’m categorizing posts here.

All posts that have something to do with mathematics (as in, mathematics-for-mathematics’-sake) are placed in ‘Functors and Natural Transformations’. Likewise, all those that have something to do with physics, are placed in ‘Objects and Morphisms’. And just to miff the chemists, biologists, economists, anthropologists et al, posts relating to their respective disciplines shall be placed in ‘Sets and Functions’. :D

The subcategories are given more conventional names such as ‘Gravitation and Cosmology’ and ‘Algebraic Geometry’. Of course, this becomes an issue when I actually want to talk about Categories, with a capital C, that is. For that, I shall use the more descriptive ‘General Abstract Nonsense’ or ‘Diagram Chasing’.

I still have no idea what to do about everyone’s enfant terrible – philosophy.

Arpan Saha
IIT Bombay

P. S.: I’m keeping my fingers crossed and hoping a fortnightly schedule works out. And thank you, everyone, for your overwhelming response. :)

Schur’s lemma and Wigner’s classification


While I was investigating the restricted Lorentz group SO(1,3) as a gauge group for gravity over the summer, the following problem cropped up pretty often: given a bunch of matrices (say for instance, the Pauli spin matrices \sigma^k) and the commutators of an unknown matrix \xi with all of them, how many solutions for the unknown matrix exist and how might they (assuming there are multiple solutions) be characterized?

The first step is straightforward and is basically a rip-off of the way we analyze the solution spaces of inhomogeneous systems of linear equations – we take the difference of two solutions \xi - \xi^\prime. Since both solutions have the same commutator with any fixed matrix chosen from among the given ones, the commutator of the difference of solutions with each of the given matrices will vanish identically.

[\sigma_k, \xi - \xi^\prime] =  [\sigma_k, \xi] - [\sigma_k, \xi^\prime] = 0

Trivial so far. The whole problem thus reduces to that of finding a single solution for \xi and all solutions of [\sigma_k, \cdot] = 0. The first part is not something we are really going to bother with in this post; all we are interested in are matrices which commute with all the elements in a given set of matrices.

Actually, that’s not quite it. The Pauli matrices aren’t just any run-of-the-mill bunch of matrices but are subject to certain constraints. Firstly, they, together with the identity matrix I_2 and overall factors of \pm 1 and \pm i, are closed under multiplication and matrix inversion, and secondly, there does not exist any common similarity transformation that will bring all three Pauli matrices into block diagonal form. There is a better way of looking at the two properties above, of course – the first simply says that the matrices form a representation of some group, and the second says that they form an irreducible representation. The latter requires a bit of clarification – a representation in terms of linear transformations (i.e. matrices) on a vector space V is said to be irreducible if and only if the only subspaces left invariant under the action of the said matrix group are the trivial subspaces V and \{0\}. In other words, the representation contains no nontrivial subrepresentation.
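This irreducibility can be probed numerically: vectorizing X \mapsto [\sigma_k, X] via vec(SX - XS) = (I\otimes S - S^T\otimes I)vec(X) turns the commutant into the nullspace of a stacked matrix, which turns out to be one-dimensional, spanned by the identity. A quick NumPy sketch (variable names are mine):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# vec([S, X]) = (I (x) S - S^T (x) I) vec(X), with column-major vec
M = np.vstack([np.kron(I2, S) - np.kron(S.T, I2) for S in (s1, s2, s3)])

# The nullspace of M consists of the matrices commuting with all three
# Pauli matrices; its dimension is the count of (near-)zero singular values.
singular_values = np.linalg.svd(M, compute_uv=False)
null_dim = int(np.sum(singular_values < 1e-10))
assert null_dim == 1   # only scalar multiples of the identity survive
```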

So our problem reduces to that of finding all matrices that commute with every element of a group of matrices that form an irreducible representation of a group (possibly the matrix group itself). This is where I would have liked to thrust my hand down a top hat and exclaim with theatrical gusto, ‘Voila! We have Schur’s lemma!’ but sadly, a few notational shorthands remain to be discussed before I can give the statement of this elementary yet very powerful result. (This is something that has plagued mathematicians since God-knows-when; topologists have it particularly rough.)

Given a group G, we denote a representation of the same by (V, \phi), where V is a vector space and \phi: G \rightarrow GL(V) is a homomorphism taking each element of G to its corresponding matrix representative, which essentially are invertible linear transformations on the vector space V.

Given two representations (V, \phi) and (W, \psi) of a group G, a linear map \alpha: V\rightarrow W is said to be equivariant with respect to (V, \phi) and (W, \psi) if and only if \alpha\circ\phi(g) = \psi(g)\circ\alpha for all g\in G. (Go ahead. Take a moment to see that the domains and codomains are consistent – \phi(g) is a linear map from V to itself and, likewise, \psi(g) is a linear map from W to itself .) Informally, this means that \alpha preserves, in some sense, the group structure of the representation – when we map a basis of V onto a basis of W, the matrix representatives of the elements of G in GL(V) get mapped to the corresponding matrix representatives in GL(W).

Now all Schur’s lemma says is this.

If (V, \phi) and (W, \psi) are two irreducible representations of a group G and \alpha: V\rightarrow W is equivariant with respect to (V, \phi) and (W, \psi), then \alpha is either the zero map (mapping everything in V to the zero of W) or is invertible.

It’s not hard to see why. Let’s begin by taking a look at the kernel \ker(\alpha) i.e. the subspace of all elements in V that get mapped to the zero in W and how the matrix representatives of the elements of G act on it. Say, we have g\in G and v\in \ker(\alpha)\sqsubseteq V (the symbol \sqsubseteq means ‘is a subspace of’). Then

(\alpha\circ\phi(g))(v) = (\psi(g)\circ\alpha)(v) = (\psi(g))(\alpha(v)) = (\psi(g))(0) = 0

In other words, (\phi(g))(v) belongs to \ker(\alpha) as well, irrespective of our choice of g – the kernel is closed under the action of the matrix group formed by the representatives of elements of G in GL(V). But, the representation is irreducible, so the kernel, being an invariant subspace, must either be V (i.e. \alpha is the zero map) or \{0\} (i.e. \alpha is injective – if two distinct elements in V were mapped to the same element in W, their nonzero difference would be mapped to the zero of W).

To get from an injection to a bijection, we next look at the image \mathrm{im}(\alpha) i.e. the subspace of all elements in W that elements in V get mapped to and how, just as in the previous step, the matrix representatives of the elements of G act on it. Say, we have g\in G and w\in \mathrm{im}(\alpha)\sqsubseteq W. Because w lies in the image of \alpha, there is an element v\in V such that w = \alpha(v). In that case

(\psi(g))(w) = (\psi(g)\circ\alpha)(v)= (\alpha\circ\phi(g))(v)

In other words, (\psi(g))(w) belongs to \mathrm{im}(\alpha) as well, irrespective of our choice of g – the image is closed under the action of the matrix group formed by the representatives of elements of G in GL(W). But, again, the representation is irreducible, so the image, being an invariant subspace, must either be \{0\} (this is again the zero map case, which we’ve already dealt with) or W (i.e. \alpha is surjective).

When the two pieces are put together, we see that \alpha is a bijection, or in other words, it is invertible. Q. E. D.

How this links up to our original problem is already evident (or maybe not; I’m just being a snob). If the two representations in question are the same, say (V, \phi), where the dimension of V is n, then \phi(g), g being a generic element of the group G, and \alpha may both be written as n\times n matrices, with the operation of composition being replaced by usual matrix multiplication. The condition of equivariance of \alpha with respect to (V, \phi) is tantamount to a statement about the commutativity of \phi(g) and \alpha. In other words

[\phi(g), \alpha] = 0

Of course, given the above holds, the following must also be true.

[\phi(g), \alpha - \lambda I_n] = 0

where I_n is the n\times n identity matrix and \lambda is a scalar belonging to the ground field.

Here, we will require an additional piece of hitherto unspecified information – the ground field must be algebraically closed (all non-constant polynomials over the field must have at least one zero). In most cases, the ground field happens to be the field of complex numbers which indeed is algebraically closed, so no worries! Once this condition is assured, we can regard \det(\alpha -\lambda I_n) as a polynomial in \lambda and claim that at least one such \lambda exists such that \det(\alpha-\lambda I_n) = 0.

So, we have the equivariant map, (\alpha -\lambda I_n) (that this is equivariant with respect to (V, \phi) follows from the fact that its commutator with \phi(g) vanishes for any g\in G) and we know that the representation (V, \phi) is irreducible. By Schur’s lemma, we have that (\alpha -\lambda I_n) is either invertible or is the zero matrix. We can always choose \lambda such that  \det(\alpha-\lambda I_n) = 0, meaning that \alpha - \lambda I_n is not invertible. Thus it has to be the zero matrix, or in other words, \alpha = \lambda I_n. Therefore, any matrix commuting with all the matrices of an irreducible representation of a group has to be a scalar multiple of the identity (provided the ground field is algebraically closed, of course).

And we have thus characterized all possible solutions for an unknown matrix given its commutation relationships with a bunch of matrices satisfying certain conditions – the solution is unique up to addition of a scalar multiple of the identity matrix.
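To see this characterization in action, here is a small NumPy experiment (my own setup, continuing the Pauli-matrix example): pick a \xi_0, record its commutators with the Pauli matrices, solve the resulting vectorized linear system, and check that the recovered solution differs from \xi_0 by a scalar multiple of the identity.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
paulis = (s1, s2, s3)

# Some "unknown" matrix, and the commutators we are handed as data
rng = np.random.default_rng(0)
xi0 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rhs = np.concatenate([(S @ xi0 - xi0 @ S).flatten(order='F') for S in paulis])

# Solve the stacked system M vec(xi) = vec([sigma_k, xi0]) for all k
M = np.vstack([np.kron(I2, S) - np.kron(S.T, I2) for S in paulis])
xi = np.linalg.lstsq(M, rhs, rcond=None)[0].reshape(2, 2, order='F')

# The two solutions differ by a scalar multiple of the identity
diff = xi - xi0
assert np.allclose(diff, diff[0, 0] * np.eye(2))
```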

So far, that takes care of slightly-more-than-half of what this post is going to deal with. Obviously, Wigner has to come in somewhere, and now that we have dwelt on Schur long enough, it must be here. Firstly, we note that if we were to replace every instance of closure as a group by closure as an algebra, group-homomorphisms by algebra-homomorphisms, and representations of groups by representations of algebras, there is nothing really stopping the above arguments from still holding water. We thus have a version of Schur’s lemma (and the corollary that followed it) for algebras. And Eugene Wigner devised an ingenious way of using this fact to classify representations of Lie algebras.

Let us first review what makes something a Lie algebra. Informally speaking, you take a set of elements, give it a vector space structure (modules would  do, but we’ll stick to vector spaces) and define a nonassociative binary operation on the elements called the Lie bracket [\cdot, \cdot], satisfying the following conditions:

  1. It is linear in both its arguments i.e. [pW + qX, rY + sZ] = pr[W,Y] + qr[X, Y] + ps[W, Z] + qs[X, Z], where p, q, r and s are scalars.
  2. It is antisymmetric under exchange of its arguments i.e. [X, Y] = -[Y, X].
  3. It obeys the Jacobi relation, [X,[Y,Z]]  + [Y,[Z,X]]  + [Z,[X,Y]] = 0.
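Since the bracket we will actually be using is the matrix commutator, it is worth convincing oneself that the commutator satisfies these axioms – a throwaway NumPy check with random matrices (names mine):

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))
p, q = 2.0, -3.0

def br(a, b):
    """The matrix commutator, which realizes a Lie bracket."""
    return a @ b - b @ a

# 1. bilinearity (checked in the first slot; the second is analogous)
assert np.allclose(br(p * X + q * Y, Z), p * br(X, Z) + q * br(Y, Z))
# 2. antisymmetry under exchange of arguments
assert np.allclose(br(X, Y), -br(Y, X))
# 3. the Jacobi relation
assert np.allclose(br(X, br(Y, Z)) + br(Y, br(Z, X)) + br(Z, br(X, Y)), 0)
```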

Given a Lie algebra \frak{g}, it is always possible to construct a unital associative algebra called the universal enveloping algebra U(\frak{g}), which contains \frak{g} together with all formal products of its elements, with the product defined in a manner such that the Lie bracket is realized as the commutator with respect to the product (this really isn’t much of a problem in physics, where Lie algebras almost always arise as tangent spaces of Lie groups at the identity). And once we have our universal enveloping algebra, we can construct elements in it called Casimir invariants which are polynomial in the generators of \frak{g} and which commute with everything else in U(\frak{g}). Is that always possible? As long as the Lie algebra is semi-simple (which is equivalent to saying that it admits a nondegenerate Killing form B(x, y) for x, y\in \frak{g}, defined by B(x, y) = \mathrm{tr}(\mathrm{ad}_x\circ\mathrm{ad}_y) where \mathrm{ad}_x(\cdot) stands for [x,\cdot]), it is. We won’t prove it here, but don’t worry, it’s not really central to the discussion that is to follow.
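For the smallest interesting case, the Killing form can be computed straight from structure constants. A sketch for su(2) in the basis [T_i, T_j] = \epsilon_{ijk}T_k (my normalization), whose Killing form comes out as -2\delta_{ij} – nondegenerate, so su(2) is indeed semi-simple:

```python
import numpy as np

# Structure constants of su(2): [T_i, T_j] = eps_{ijk} T_k
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[j, i, k] = -1.0

# ad_{T_i} as a 3x3 matrix: (ad_i)[k, j] = eps_{ijk}
ad = [eps[i].T for i in range(3)]

# Killing form B_{ij} = tr(ad_i o ad_j)
B = np.array([[np.trace(ad[i] @ ad[j]) for j in range(3)] for i in range(3)])

assert np.allclose(B, -2 * np.eye(3))   # B = -2*delta_{ij}
assert abs(np.linalg.det(B)) > 0        # nondegenerate Killing form
```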

Say, we have an irreducible representation (V,\phi) of the universal enveloping algebra U(\frak{g}) of a given Lie algebra \frak{g} whose ground field is algebraically closed, and also say that \Omega is a Casimir invariant of U(\frak{g}). Then \phi(\Omega) is a matrix that commutes with every other matrix representative in the said representation. It follows from Schur’s lemma that \phi(\Omega) has to be a scalar multiple of the identity matrix in GL(V). But what that scalar multiple happens to be depends on the specific representation chosen. So, Wigner’s idea was essentially this: consider all the Casimir invariants of the universal enveloping algebra of a given Lie algebra, and classify all the irreducible representations on the basis of what the scalar multiples corresponding to the Casimir invariants are in the irreducible representation (let’s just abbreviate that to ‘irrep’, shall we?) concerned.

Let’s do this for the Poincaré algebra, which is the Lie algebra associated with the Poincaré group, the group of isometries of Minkowski spacetime. The Lie algebra is spanned by four generators of translations P_\mu (the momentum operators) and six generators of rotations/Lorentz boosts J_{\mu\nu} (the angular momentum operators; antisymmetric under exchange of indices), which satisfy the following commutation relationships:

[P_\mu,P_\nu] = 0
\frac{1}{i}[J_{\mu\nu}, P_{\rho}] = \eta_{\mu\rho}P_\nu -\eta_{\nu\rho}P_\mu
\frac{1}{i}[J_{\mu\nu}, J_{\rho\sigma}] =\eta_{\mu\rho}J_{\nu\sigma}-\eta_{\mu\sigma}J_{\nu\rho} - \eta_{\nu\rho}J_{\mu\sigma}+\eta_{\nu\sigma}J_{\mu\rho}

where \eta_{\mu\nu} is the usual Minkowski metric that may be used for lowering or raising spacetime indices. We will also construct, out of the above, another set of elements which form the components of a 4-pseudovector – the Pauli-Lubanski vector W_\mu = \frac{1}{2}\epsilon_{\mu\nu\rho\sigma}J^{\nu\rho}P^{\sigma}, where \epsilon_{\mu\nu\rho\sigma} is the Levi-Civita tensor and where Einstein’s summation convention has been used. These obey the following commutation relations, as may be readily verified:

[P_\mu,W_\nu] = 0
\frac{1}{i}[J_{\mu\nu},W_\rho] = \eta_{\nu\rho}W_\mu - \eta_{\mu\rho}W_\nu
\frac{1}{i}[W_\mu,W_\nu] = \epsilon_{\mu\nu\rho\sigma}P^\rho W^\sigma

The two Casimir invariants that we now propose for the purpose of classification of the irreps are P^\mu P_\mu and W^\mu W_\mu. That these are Casimir invariants is something that has to be checked and this I will leave to you. (No, seriously, do it. Science is faith in the ignorance of the expert and I am far from an expert.)

Great! We have our Casimir invariants and we have classified all the irreps of the Poincaré algebra (and by exponentiation, the Poincaré group as well) in terms of the scalar multiples \lambda_P and \lambda_W corresponding to the two Casimir invariants. But now what? It turns out that all this hasn’t just been an idle exercise in abstraction – there is a deep physical interpretation to it. To see this, let’s define two (possibly complex) numbers m and s as follows:

\lambda_P = m^2
\lambda_W = - m^2s(s+1)

These numbers m and s (the positive values thereof, once we restrict them to be real) can be given the interpretation of – or, if we’re willing to be bold, be defined to be – the mass and spin. Of course, the obvious question is, mass and spin of what? Now this is a beautiful insight from Wigner again – particles can be regarded as irreps of symmetry groups of spacetime and other internal spaces. So, the mass and spin we have conjured here are simply those of the particles corresponding to the irreps in question. This is a very profound idea and the only way the profundity can be done justice is by having a whole post devoted to it. There are some technical catches that I don’t understand very well at this moment and moreover, the fact that representations can have physical meaning is something that I’m still getting used to. So, if you excuse me, I’ll take leave here – I’m feeling a little too disproportionately happy about all this and I need my private ‘Booyeah, Universe!’ moment.

Arpan Saha
IIT Bombay

Addendum: In retrospect, there are several loose ends and loopholes in the post above. Firstly, particles  are required to be unitary irreps. But if that is the case, we can’t have a finite-dimensional representation of the Poincaré group, which means our proof of the corollary to Schur’s lemma using determinants will no longer work here. So, we’ll need additional notions of convergence and compactness (the linear maps need to be compact and self-adjoint operators on a Hilbert space) for things to sail through. And finally, our definition of spin falls apart when the particle in question is massless. I’ll address these issues in a follow-up article some time later.

Imprimis


Procrastination is the mother of all invention.

There is that time when you get all introspective and begin to suspect there’s more to life than procrastination (and filling up entire paragraphs with single sentences), which is almost immediately followed by the realization that something is terribly amiss when you have raised the whole thing to practically an art form in its own right and you haven’t hit graduate school yet. And thus, this blog.

This needs a little explaining, of course. Lately, I’ve been channeling a significant proportion of my already limited faculties into certain emotionally exhausting engagements and as a result, I’m clearly not doing as much physics/math as I would like.

My case is pretty much the same. Though Unicorn Enveloping Algae-bras definitely make an entry somewhere.

And so, this then is the purpose of the blog. I’ll keep posting (highly unstructured) tidbits of stuff that I will be reading up or discussing with other folk (my roommate, for instance; he has a pretty awesome blog up and running). And the hope is that this will be enough motivation to keep the grey matter churning at all times and that writing about the stuff I come across will lead to deeper understanding.

And finally, I’ll end my ‘Hello, World’ with the usual disclaimer. The posts here are intended to be neither complete nor comprehensive; they serve only to chronicle my epic struggle to come to terms with the Universe. I will try to make them as free of errors as my own understanding permits, but I cannot guarantee that some won’t lurk around. Indeed, if you find mistakes (there will be a lot, I’m certain), do point them out – after all, I’m still (?) learning.

Arpan Saha
IIT Bombay