Increasingly I’ve found the Levi-Civita symbol to be incredibly useful for deducing equalities involving cross products. In this post I’ll go through some basic applications of the symbol in linear algebra and multivariable calculus; we will discuss more advanced applications in part 2.

Consider vectors \(\mathbf{u}\) and \(\mathbf{v}\). We can write them in terms of their components as \(\mathbf{u} = \mathbf{u}_1 \mathbf{e}_1 + \mathbf{u}_2 \mathbf{e}_2 + \mathbf{u}_3 \mathbf{e}_3\) and \(\mathbf{v} = \mathbf{v}_1 \mathbf{e}_1 + \mathbf{v}_2 \mathbf{e}_2 + \mathbf{v}_3 \mathbf{e}_3\), where the \(\mathbf{e}_i\) are the standard unit column vectors.

Introducing the Levi-Civita symbol,

\[\varepsilon _{ijk}={\begin{cases}+1&{\text{if }}(i,j,k){\text{ is }}(1,2,3),(2,3,1),{\text{ or }}(3,1,2),\\-1&{\text{if }}(i,j,k){\text{ is }}(3,2,1),(1,3,2),{\text{ or }}(2,1,3),\\\;\;\,0&{\text{if }}i=j,{\text{ or }}j=k,{\text{ or }}k=i\end{cases}}\]

The cases for \(+1\) and \(-1\) come from whether \((i,j,k)\) is an even or odd permutation of \((1, 2, 3)\); when an index repeats, \((i,j,k)\) is not a permutation at all and the symbol is zero.
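
If it helps to experiment with the symbol, here is a minimal Python sketch (the function name and structure are my own, not from any library) that computes \(\varepsilon_{ijk}\) by counting the swaps needed to sort \((i, j, k)\):

```python
def levi_civita(i, j, k):
    """Levi-Civita symbol for indices in {1, 2, 3}."""
    if len({i, j, k}) < 3:          # a repeated index gives 0
        return 0
    seq, swaps = [i, j, k], 0
    # bubble-sort the indices, counting adjacent swaps (transpositions);
    # an even count means (i, j, k) is an even permutation of (1, 2, 3)
    for _ in range(2):
        for a in range(2):
            if seq[a] > seq[a + 1]:
                seq[a], seq[a + 1] = seq[a + 1], seq[a]
                swaps += 1
    return 1 if swaps % 2 == 0 else -1

assert levi_civita(1, 2, 3) == 1   # identity permutation, even
assert levi_civita(2, 1, 3) == -1  # one swap away, odd
assert levi_civita(1, 1, 2) == 0   # repeated index
```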

This definition may seem to come out of nowhere, so it is helpful to connect it with some linear algebra.

Notation

From now on, we will use the variables \(i\), \(j\), \(k\) to represent integers in \(\{1, 2, 3\}\). We will write \(\sum_i\) for \(\sum^3_{i = 1}\), \(\sum_{ij}\) for \(\sum^3_{i=1} \sum^3_{j=1}\), and so on. We will use \(\mathbf{e}_i\) to denote the standard unit column vectors. Given some vector \(\mathbf{v}\), we will use \(\mathbf{v}_i\) to denote its \(i\)-th coordinate, i.e. \(\mathbf{v} = \sum_i \mathbf{v}_i \mathbf{e}_i\).

Dot product

We can write \(\mathbf{v} \cdot \mathbf{w}\) as \(\sum_i \mathbf{v}_i \mathbf{w}_i\). Sometimes it is more helpful to write it as \(\sum_{ij} \delta_{ij} \mathbf{v}_i \mathbf{w}_j\) instead, where \(\delta\) is the Kronecker delta (\(\delta_{ij} = 1\) if \(i = j\) and \(0\) otherwise).
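
As a quick numerical sanity check (the test vectors below are arbitrary choices of mine), the Kronecker-delta form agrees with the usual dot product:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, -1.0, 0.5])

delta = np.eye(3)  # delta_ij as the 3x3 identity matrix
dot_via_delta = sum(delta[i, j] * v[i] * w[j]
                    for i in range(3) for j in range(3))

assert np.isclose(dot_via_delta, v @ w)
```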

Vector product

Connecting this to the cross product, notice that

\[\mathbf{e}_i \times \mathbf{e}_j = \sum_k \varepsilon_{ijk} \mathbf{e}_k\]
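
For example, \(\mathbf{e}_1 \times \mathbf{e}_2 = \sum_k \varepsilon_{12k} \mathbf{e}_k = \varepsilon_{123} \mathbf{e}_3 = \mathbf{e}_3\), while \(\mathbf{e}_2 \times \mathbf{e}_1 = \varepsilon_{213} \mathbf{e}_3 = -\mathbf{e}_3\) and \(\mathbf{e}_1 \times \mathbf{e}_1 = \mathbf{0}\), exactly as expected.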

As such, we can express the cross product entirely in terms of the symbol.

Claim: \(\mathbf{v} \times \mathbf{w} = \sum_{ijk} \varepsilon_{ijk} \mathbf{v}_j \mathbf{w}_k \mathbf{e}_i\)

Proof:

\[\begin{align*} &\mathbf{v} \times \mathbf{w} \\ &= \sum_{ij} \mathbf{v}_i \mathbf{e}_i \times \mathbf{w}_j \mathbf{e}_j & \text{distributivity}\\ &= \sum_{ij} \mathbf{v}_i \mathbf{w}_j (\mathbf{e}_i \times \mathbf{e}_j)& \text{linearity} \\ &= \sum_{ijk} \varepsilon_{ijk} \mathbf{v}_i \mathbf{w}_j \mathbf{e}_k \\ &= \sum_{ijk} \varepsilon_{jki} \mathbf{v}_j \mathbf{w}_k \mathbf{e}_i & \text{permute indices}\\ &= \sum_{ijk} \varepsilon_{ijk} \mathbf{v}_j \mathbf{w}_k \mathbf{e}_i & (\varepsilon_{jki} = \varepsilon_{ijk})\\ &\phantom{= \sum \sum \varepsilon \mathbf{v}_i \mathbf{w}_j \mathbf{e}_k}\square \end{align*}\]

Alternatively, we can say that the \(i\)-th component of \(\mathbf{v} \times \mathbf{w}\) is \(\sum_{jk} \varepsilon_{ijk} \mathbf{v}_j \mathbf{w}_k\).
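
Here is a small numpy sketch of this component formula (the test vectors are arbitrary); it packs \(\varepsilon\) into a \(3 \times 3 \times 3\) array and compares against numpy's built-in cross product:

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array (0-based indices here)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[2, 1, 0] = eps[0, 2, 1] = eps[1, 0, 2] = -1.0

v = np.array([1.0, 2.0, 3.0])
w = np.array([-4.0, 0.5, 2.0])

# i-th component of v x w: sum_{jk} eps_{ijk} v_j w_k
cross_via_eps = np.einsum('ijk,j,k->i', eps, v, w)

assert np.allclose(cross_via_eps, np.cross(v, w))
```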

We will connect the vector product to determinants in the appendix.

Multivariable calculus

The symbol can be used to deduce identities in multivariable calculus as well.

We write the gradient operator in components as \(\nabla = \mathbf{e}_1 \partial_1 + \mathbf{e}_2 \partial_2 + \mathbf{e}_3 \partial_3\), where \(\partial_i = \frac{\partial}{\partial x_i}\). Let \(\phi\) be a scalar-valued function and \(\mathbf{u}\) a vector field; we assume both are smooth enough that mixed partial derivatives commute.

Claim: \(\nabla \cdot (\nabla \times \mathbf{u}) = 0\)

Proof:

\[\begin{align*} & \nabla \cdot (\nabla \times \mathbf{u}) \\ &= \sum_ i \partial_i ([\nabla \times \mathbf{u}]_i)\\ &= \sum_{ijk} \varepsilon_{ijk} \partial_i \partial_j \mathbf{u}_k \\ &= \sum_{ijk} \varepsilon_{jik} \partial_j \partial_i \mathbf{u}_k & \text{swap } i \text{ and } j \\ &= \sum_{ijk} \varepsilon_{jik} \partial_i \partial_j \mathbf{u}_k & (\partial_i \partial_j = \partial_j \partial_i) \\ &= -\sum_{ijk} \varepsilon_{ijk} \partial_i \partial_j \mathbf{u}_k & (\varepsilon_{jik} = -\varepsilon_{ijk})\\ &= - \nabla \cdot (\nabla \times \mathbf{u}) \end{align*}\]

An expression equal to its own negative must vanish, so \(\nabla \cdot (\nabla \times \mathbf{u}) = 0\). \(\square\)
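
As a sanity check, here is a sympy sketch of the same computation; the component names u1, u2, u3 are arbitrary undefined functions standing in for a sufficiently smooth vector field:

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
u = [sp.Function(f'u{i + 1}')(*x) for i in range(3)]  # generic smooth field

# i-th component of curl u: sum_{jk} eps_{ijk} d_j u_k (0-based indices)
curl = [sum(sp.LeviCivita(i, j, k) * sp.diff(u[k], x[j])
            for j in range(3) for k in range(3))
        for i in range(3)]

# divergence of the curl: sum_i d_i [curl u]_i
div_curl = sum(sp.diff(curl[i], x[i]) for i in range(3))

assert sp.simplify(div_curl) == 0  # mixed partials commute, so all terms cancel
```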

Claim: \(\nabla \times (\nabla \phi) = \mathbf{0}\)

Proof: Consider the \(i\)-th component,

\[\begin{align*} &[\nabla \times (\nabla \phi)]_i \\ &= \sum_{jk} \varepsilon_{ijk} \partial_j [\nabla \phi]_k\\ &= \sum_{jk} \varepsilon_{ijk} \partial_j \partial_k \phi \\ &= \sum_{kj} \varepsilon_{ikj} \partial_k \partial_j \phi & \text{swap } j \text{ and } k \\ &= \sum_{jk} \varepsilon_{ikj} \partial_j \partial_k \phi & (\partial_j \partial_k = \partial_k \partial_j) \\ &= -\sum_{jk} \varepsilon_{ijk} \partial_j \partial_k \phi & (\varepsilon_{ikj} = -\varepsilon_{ijk})\\ &= - [\nabla \times (\nabla \phi)]_i \end{align*}\]

So \([\nabla \times (\nabla \phi)]_i=0\) for each \(i\), and we’re done. \(\square\)

Claim: \(\nabla \times (\phi \mathbf{u}) = (\nabla \phi \times \mathbf{u}) + \phi(\nabla \times \mathbf{u})\)

Proof: Consider the \(i\)-th component,

\[\begin{align*} &[\nabla \times (\phi \mathbf{u})]_i \\ &= \sum_{jk} \varepsilon_{ijk} \partial_j ([\phi \mathbf{u}]_k)\\ &= \sum_{jk} \varepsilon_{ijk} \partial_j (\phi \mathbf{u}_k)\\ &= \sum_{jk} \varepsilon_{ijk} \big[(\partial_j \phi)\mathbf{u}_k + \phi (\partial_j \mathbf{u}_k)\big]\\ &= [(\nabla \phi \times \mathbf{u})]_i + [\phi(\nabla \times \mathbf{u})]_i \\ & \phantom{[(\nabla \phi \times \mathbf{u})]_i + [\phi(\nabla \times \mathbf{u})]_i}\square \end{align*}\]

Claim: \(\nabla \cdot (\phi \mathbf{u}) = (\nabla \phi) \cdot \mathbf{u} + \phi(\nabla \cdot \mathbf{u})\)

Proof:

\[\begin{align*} & \nabla \cdot (\phi \mathbf{u}) \\ &= \sum_i\partial_i ([\phi \mathbf{u}]_i) \\ &= \sum_i\partial_i (\phi \mathbf{u}_i) \\ &= \sum_i\big[(\partial_i \phi) \mathbf{u}_i + \phi (\partial_i \mathbf{u}_i)\big] \\ &= \sum_i(\partial_i \phi) \mathbf{u}_i + \sum_i\phi (\partial_i \mathbf{u}_i) \\ &= (\nabla \phi) \cdot \mathbf{u} + \phi(\nabla \cdot \mathbf{u}) \\ & \phantom{= (\nabla \phi) \cdot \mathbf{u} + \phi(\nabla \cdot \mathbf{u})}\square \end{align*}\]
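
The same style of check works for this product rule; again \(\phi\) and the components of \(\mathbf{u}\) are arbitrary undefined functions (the names are placeholders):

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
phi = sp.Function('phi')(*x)
u = [sp.Function(f'u{i + 1}')(*x) for i in range(3)]

lhs = sum(sp.diff(phi * u[i], x[i]) for i in range(3))         # div(phi u)
rhs = sum(sp.diff(phi, x[i]) * u[i] for i in range(3)) \
    + phi * sum(sp.diff(u[i], x[i]) for i in range(3))         # grad(phi).u + phi div(u)

assert sp.simplify(lhs - rhs) == 0
```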

Going further

To go further and handle terms such as \(\mathbf{u} \times (\mathbf{v} \times \mathbf{w})\), we need to understand how to multiply Levi-Civita symbols together. We will do so in part 2.

Acknowledgements

I’m directly influenced by articles written by Patrick Guio, Jim Wheeler and R. L. Herman. Thanks to Henry Yip for helpful comments and providing an alternate proof.

Appendix: Determinants

The main result of this section is the following identity, which we prove by expressing the determinant in terms of Levi-Civita symbols.

\[\mathbf{u} \cdot (\mathbf{v} \times \mathbf{w}) = \det (\mathbf{u} ~\mathbf{v} ~\mathbf{w})\]

We first introduce the Leibniz formula for determinants, which expresses the determinant of a square matrix as a signed sum over permutations of its entries.

\[\det A = \sum _{\tau \in S_{n}}\operatorname {sgn}(\tau )\prod _{m=1}^{n}a_{m,\,\tau (m)}=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{m=1}^{n}a_{\sigma (m),\,m}\]

One can prove that the determinant is the only alternating multilinear function \(F: M_n(\mathbb{R}) \to \mathbb{R}\) (alternating and multilinear in the columns) such that \(F(I) = 1\). You can find a general proof here.

It’s annoying to have the indexing set be the symmetric group. Restricting ourselves to \(3 \times 3\) matrices over the reals, and considering how \(S_3\) acts on the set \(\{1,2,3\}\), we can view \(S_3\) as a subset of the functions from \(\{1,2,3\}\) to \(\{1,2,3\}\). There is a bijection from such functions to ordered triples \(\{(i,j,k) : i,j,k\in\{1,2,3\}\}\) given by \(f \mapsto (f(1), f(2), f(3))\), which is much easier to work with.

As such, we have

\[\begin{align*} \det A &= \sum _{\tau \in S_{3}}\operatorname {sgn}(\tau )a_{1\tau (1)}a_{2\tau (2)}a_{3\tau (3)} \\ &= \sum _{ijk} \varepsilon_{ijk} a_{1i} a_{2j} a_{3k} \end{align*}\]

or equivalently

\[\begin{align*} \det A &= \sum _{\sigma \in S_{3}}\operatorname {sgn}(\sigma )a_{\sigma (1)1}a_{\sigma (2)2}a_{\sigma (3)3}\\ &= \sum _{ijk} \varepsilon_{ijk} a_{i1} a_{j2} a_{k3} \end{align*}\]

since \(\varepsilon_{ijk}\) is zero if and only if \((i,j,k)\) doesn’t correspond to a permutation in \(S_3\), and equals the sign of the corresponding permutation otherwise, extending the sum over all ordered triples only adds zero terms.
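
Here is a quick numerical check of this formula (the matrix is an arbitrary example); the \(\varepsilon\) array is built directly from permutation parity, mirroring the bijection above:

```python
import numpy as np
from itertools import permutations

# build eps[i, j, k] from permutation parity (0-based indices)
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    inversions = sum(p[a] > p[b] for a in range(3) for b in range(a + 1, 3))
    eps[p] = (-1) ** inversions

A = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

# det A = sum_{ijk} eps_{ijk} a_{1i} a_{2j} a_{3k}, with a_{1i} the first row
det_via_eps = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])

assert np.isclose(det_via_eps, np.linalg.det(A))
```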

In particular, if we write \(A = (\mathbf{A}^1 ~\mathbf{A}^2 ~\mathbf{A}^3)\), where \(\mathbf{A}^i\) denotes the \(i\)-th column vector of \(A\), we get

\[\det A = \sum _{ijk} \varepsilon_{ijk} \mathbf{A}^1_i \mathbf{A}^2_j \mathbf{A}^3_k\]

Applying this to the standard unit vectors,

\[\begin{align*} & \det (\mathbf{e}_l ~\mathbf{e}_m ~\mathbf{e}_n) \\ &= \sum_{ijk} \varepsilon_{ijk} (\mathbf{e}_l)_i (\mathbf{e}_m)_j (\mathbf{e}_n)_k \\ &= \sum_{ijk} \varepsilon_{ijk} \delta_{il} \delta_{jm} \delta_{kn} \\ &= \varepsilon_{lmn} \end{align*}\]

Finally, we can prove the claim:

\[\begin{align*} &\mathbf{u} \cdot (\mathbf{v} \times \mathbf{w}) \\ &= \sum_i \mathbf{u}_i [\mathbf{v} \times \mathbf{w}]_i &\text{def of dot product}\\ &= \sum_i\mathbf{u}_i \sum_{jk} \varepsilon_{ijk} \mathbf{v}_j \mathbf{w}_k \\ &= \sum_{ijk} \varepsilon_{ijk} \mathbf{u}_i \mathbf{v}_j \mathbf{w}_k\\ &= \det (\mathbf{u} ~\mathbf{v} ~\mathbf{w}) \\ &\phantom{= \det (\mathbf{u} ~\mathbf{v} ~\mathbf{w})}\square \end{align*}\]
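
And a one-line numerical confirmation (with arbitrary vectors of my choosing):

```python
import numpy as np

u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0, 1.0, 2.0])
w = np.array([0.0, 4.0, -1.0])

triple = np.dot(u, np.cross(v, w))               # u . (v x w)
det = np.linalg.det(np.column_stack([u, v, w]))  # det(u v w)

assert np.isclose(triple, det)
```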