Levi-Civita Symbol (Part II)
Continuing from part I, we’d like to go further and handle objects of the form \(\mathbf{u} \times (\mathbf{v} \times \mathbf{w})\). However, this involves the product of two Levi-Civita symbols. Fortunately, such a product can be written as the determinant of a product of two matrices.
How do we multiply two Levi-Civita symbols?
We shall leave the proofs of \(\det (AB) = \det A \det B\) and \(\det A^T = \det A\) to the reader. Taking these for granted, we can prove the following.
Claim:
\[\varepsilon_{ijk}\varepsilon_{lmn} = \det \begin{pmatrix} \delta_{il} & \delta_{im} & \delta_{in} \\ \delta_{jl} & \delta_{jm} & \delta_{jn} \\ \delta_{kl} & \delta_{km} & \delta_{kn} \\ \end{pmatrix}\]Proof: Recall from part I that \(\varepsilon_{ijk} = \det (\mathbf{e}_i ~\mathbf{e}_j ~\mathbf{e}_k)\), the determinant of the matrix whose columns are \(\mathbf{e}_i, \mathbf{e}_j, \mathbf{e}_k\). As such we can write
\[\begin{align*} &\varepsilon_{ijk}\varepsilon_{lmn} \\ &= \det (\mathbf{e}_i ~\mathbf{e}_j ~\mathbf{e}_k)^T \det (\mathbf{e}_l ~\mathbf{e}_m ~\mathbf{e}_n) \\ &= \det [(\mathbf{e}_i ~\mathbf{e}_j ~\mathbf{e}_k)^T (\mathbf{e}_l ~\mathbf{e}_m ~\mathbf{e}_n)] \\ &= \det \begin{pmatrix} \delta_{il} & \delta_{im} & \delta_{in} \\ \delta_{jl} & \delta_{jm} & \delta_{jn} \\ \delta_{kl} & \delta_{km} & \delta_{kn} \\ \end{pmatrix} \; \square \end{align*}\]In particular, we shall use the following result in computations involving vector products:
Claim: For fixed \(i, j, l, m\),
\[\sum_k \varepsilon_{ijk}\varepsilon_{lmk} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl} = \det \begin{pmatrix} \delta_{il} & \delta_{im} \\ \delta_{jl} & \delta_{jm} \\ \end{pmatrix}\]Proof:
\[\begin{align*} &\sum_k \varepsilon_{ijk}\varepsilon_{lmk} = \sum_k\det \begin{pmatrix} \delta_{il} & \delta_{im} & \delta_{ik} \\ \delta_{jl} & \delta_{jm} & \delta_{jk} \\ \delta_{kl} & \delta_{km} & 1 \\ \end{pmatrix}\\ &= \sum_k \bigg( \det \begin{pmatrix} \delta_{il} & \delta_{im} \\ \delta_{jl} & \delta_{jm} \end{pmatrix} - \delta_{jk} \det \begin{pmatrix} \delta_{il} & \delta_{im} \\ \delta_{kl} & \delta_{km} \end{pmatrix} + \delta_{ik} \det \begin{pmatrix} \delta_{jl} & \delta_{jm} \\ \delta_{kl} & \delta_{km} \end{pmatrix} \bigg) \\ &= 3 \det \begin{pmatrix} \delta_{il} & \delta_{im} \\ \delta_{jl} & \delta_{jm} \\ \end{pmatrix} - \det \begin{pmatrix} \delta_{il} & \delta_{im} \\ \delta_{jl} & \delta_{jm} \end{pmatrix} + \det \begin{pmatrix} \delta_{jl} & \delta_{jm} \\ \delta_{il} & \delta_{im} \end{pmatrix}\\ &= \det \begin{pmatrix} \delta_{il} & \delta_{im} \\ \delta_{jl} & \delta_{jm} \\ \end{pmatrix} \qquad \square\end{align*}\]Here the bottom-right entry of the \(3 \times 3\) matrix is \(\delta_{kk} = 1\) because \(k\) is held fixed inside the sum (no summation over \(k\) is implied within the determinant), and the second line is a cofactor expansion along the third column. [Remark: I suspect there’s a more geometric / intuitive way of proving the above.]
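Both identities are easy to check by brute force. Below is a minimal sketch in Python using NumPy; the array `eps` and the helper `delta` are ad-hoc names introduced only for this check.

```python
import numpy as np
from itertools import product

# Levi-Civita array: eps[i, j, k] is the sign of the permutation (i, j, k).
eps = np.zeros((3, 3, 3))
for i, j, k in product(range(3), repeat=3):
    eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

delta = np.eye(3)  # Kronecker delta

# eps_ijk eps_lmn equals the determinant of the 3x3 matrix of deltas.
for i, j, k, l, m, n in product(range(3), repeat=6):
    D = np.array([[delta[i, l], delta[i, m], delta[i, n]],
                  [delta[j, l], delta[j, m], delta[j, n]],
                  [delta[k, l], delta[k, m], delta[k, n]]])
    assert np.isclose(eps[i, j, k] * eps[l, m, n], np.linalg.det(D))

# Contraction: sum_k eps_ijk eps_lmk = delta_il delta_jm - delta_im delta_jl.
for i, j, l, m in product(range(3), repeat=4):
    lhs = sum(eps[i, j, k] * eps[l, m, k] for k in range(3))
    rhs = delta[i, l] * delta[j, m] - delta[i, m] * delta[j, l]
    assert np.isclose(lhs, rhs)
```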
General Applications
Claim: \(\mathbf{u} \times (\mathbf{v} \times \mathbf{w}) = (\mathbf{u} \cdot \mathbf{w}) \mathbf{v} - (\mathbf{u} \cdot \mathbf{v}) \mathbf{w}\)
Proof: Consider the \(i\)-th component,
\[\begin{align*} &[\mathbf{u} \times (\mathbf{v} \times \mathbf{w})]_i \\ &=\sum_{jk}\varepsilon_{ijk} \mathbf{u}_j (\mathbf{v} \times \mathbf{w})_k \\ &=\sum_{jklm}\varepsilon_{ijk} \mathbf{u}_j \varepsilon_{klm} \mathbf{v}_l \mathbf{w}_m \\ &=\sum_{jklm}\varepsilon_{ijk} \varepsilon_{lmk} \mathbf{u}_j \mathbf{v}_l \mathbf{w}_m \\ &= \sum_{jlm} (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}) \mathbf{u}_j \mathbf{v}_l \mathbf{w}_m \\ &= \sum_{jlm} \delta_{il}\mathbf{v}_l \delta_{jm} \mathbf{u}_j \mathbf{w}_m - \sum_{jlm} \delta_{im} \mathbf{w}_m \delta_{jl} \mathbf{u}_j \mathbf{v}_l \\ &= \mathbf{v}_i \sum_{jm} \big( \delta_{jm} \mathbf{u}_j \mathbf{w}_m \big) - \mathbf{w}_i \sum_{jl} \big(\delta_{jl} \mathbf{u}_j \mathbf{v}_l \big) \\ &= (\mathbf{u} \cdot \mathbf{w}) \mathbf{v}_i - (\mathbf{u} \cdot \mathbf{v}) \mathbf{w}_i \\ &\phantom{= (\mathbf{u} \cdot \mathbf{w}) \mathbf{v}_i - (\mathbf{u} \cdot \mathbf{v}) \mathbf{w}_i} \square \end{align*}\]
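As a quick numerical sanity check (not a substitute for the proof), one can compare both sides on random vectors; the snippet below is a minimal sketch using NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))  # three random 3-vectors

lhs = np.cross(u, np.cross(v, w))
rhs = np.dot(u, w) * v - np.dot(u, v) * w
assert np.allclose(lhs, rhs)  # u x (v x w) = (u . w) v - (u . v) w
```

Claim: \( (\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c}) (\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}) \)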
Proof: The computation is rather tedious. We expand
\[\begin{align*} &(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) \\ &= \bigg(\sum_{ijk} \varepsilon_{ijk} \mathbf{a}_j \mathbf{b}_k \mathbf{e}_i\bigg) \cdot \bigg(\sum_{imn} \varepsilon_{imn} \mathbf{c}_m \mathbf{d}_n \mathbf{e}_i\bigg)\\ &= \sum_i \bigg[ \big( \sum_{jk} \varepsilon_{ijk} \mathbf{a}_j \mathbf{b}_k \big) \big( \sum_{mn} \varepsilon_{imn} \mathbf{c}_m \mathbf{d}_n \big)\bigg] \\ &= \sum_{ijkmn} \varepsilon_{ijk} \varepsilon_{imn} \mathbf{a}_j \mathbf{b}_k \mathbf{c}_m \mathbf{d}_n \\ &= \sum_{jkmn} (\delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km})\mathbf{a}_j \mathbf{b}_k \mathbf{c}_m \mathbf{d}_n \\ &= \sum_{jkmn} \big( \delta_{jm}\delta_{kn} \mathbf{a}_j \mathbf{b}_k \mathbf{c}_m \mathbf{d}_n \big) - \sum_{jkmn} \big( \delta_{jn}\delta_{km} \mathbf{a}_j \mathbf{b}_k \mathbf{c}_m \mathbf{d}_n \big) \\ &= \sum_{jk} \big( \mathbf{a}_j \mathbf{b}_k \mathbf{c}_j \mathbf{d}_k \big) - \sum_{jk} \big(\mathbf{a}_j \mathbf{b}_k \mathbf{c}_k \mathbf{d}_j \big) \\ &= \sum_j \big(\mathbf{a}_j \mathbf{c}_j \big) \sum_k \big( \mathbf{b}_k \mathbf{d}_k \big) - \sum_j \big(\mathbf{a}_j \mathbf{d}_j \big) \sum_k \big( \mathbf{b}_k \mathbf{c}_k \big) \\ &= (\mathbf{a} \cdot \mathbf{c}) (\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}) \\ &\phantom{= (\mathbf{a} \cdot \mathbf{c}) (\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})} \square \end{align*}\]Here the contraction \(\sum_i \varepsilon_{ijk}\varepsilon_{imn} = \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}\) follows from the earlier claim together with the cyclic property \(\varepsilon_{ijk} = \varepsilon_{jki}\).
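The same kind of numerical spot check works here; again a sketch with NumPy on random vectors, not a proof.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))  # four random 3-vectors

lhs = np.dot(np.cross(a, b), np.cross(c, d))
rhs = np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c)
assert np.isclose(lhs, rhs)
```

Vector Calculus Applications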
Claim:
\[\nabla \cdot (\mathbf{u} \times \mathbf{v}) = (\nabla \times \mathbf{u}) \cdot \mathbf{v} - (\nabla \times \mathbf{v}) \cdot \mathbf{u}\]Proof:
\[\begin{align*} &\nabla \cdot (\mathbf{u} \times \mathbf{v}) \\ &= \sum_{ijk} \partial_k(\varepsilon_{ijk} \mathbf{u}_i \mathbf{v}_j) \\ &= \sum_{ijk} \varepsilon_{ijk} \partial_k(\mathbf{u}_i) \mathbf{v}_j + \sum_{ijk} \varepsilon_{ijk} \mathbf{u}_i \partial_k(\mathbf{v}_j) \\ &= \sum_j \bigg( \sum_{ik} \varepsilon_{jki} \partial_k(\mathbf{u}_i) \bigg) \mathbf{v}_j - \sum_i \mathbf{u}_i \bigg( \sum_{jk} \varepsilon_{ikj} \partial_k(\mathbf{v}_j) \bigg) \\ &= (\nabla \times \mathbf{u}) \cdot \mathbf{v} - (\nabla \times \mathbf{v}) \cdot \mathbf{u} \qquad \square \end{align*}\]In the third line we used the cyclic property \(\varepsilon_{ijk} = \varepsilon_{jki}\) on the first sum and the antisymmetry \(\varepsilon_{ijk} = -\varepsilon_{ikj}\) on the second, so that each bracket is a component of a curl, e.g. \((\nabla \times \mathbf{u})_j = \sum_{ki} \varepsilon_{jki} \partial_k (\mathbf{u}_i)\).
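The vector calculus identities can also be spot-checked symbolically, for instance with SymPy's vector module; the fields `u` and `v` below are arbitrary examples chosen only for the check.

```python
from sympy import simplify
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Two arbitrary smooth vector fields, used only as a spot check.
u = x*y*N.i + y*z*N.j + z*x*N.k
v = x**2*N.i + (y - z)*N.j + x*z*N.k

lhs = divergence(u.cross(v))
rhs = curl(u).dot(v) - curl(v).dot(u)
assert simplify(lhs - rhs) == 0
```

Claim: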
\[\nabla (\mathbf{u} \cdot \mathbf{u}) = 2 \big[(\mathbf{u} \cdot \nabla) \mathbf{u} + \mathbf{u} \times (\nabla \times \mathbf{u})\big]\]Proof: Consider the following 3 equations.
\[\begin{align} [\nabla(\mathbf{u} \cdot \mathbf{u})]_i &= 2 \sum_j \mathbf{u}_j \partial_i(\mathbf{u}_j) \\ [(\mathbf{u} \cdot \nabla) \mathbf{u}]_i &= \sum_j \mathbf{u}_j \partial_j(\mathbf{u}_i) \\ [\mathbf{u} \times (\nabla \times \mathbf{u})]_i &= \sum_j \big[ \mathbf{u}_j \partial_i(\mathbf{u}_j) - \mathbf{u}_j \partial_j(\mathbf{u}_i) \big] \end{align}\]The result follows immediately from these. We shall leave the proofs of the first two equations to the reader. The third is similar to the triple-product claim above, but more care is needed because \(\nabla\) is a differential operator and does not commute with the components of \(\mathbf{u}\). Consider
\[\begin{align*} &[\mathbf{u} \times (\nabla \times \mathbf{u})]_i \\ &= \sum_{jklm} \varepsilon_{ijk} \varepsilon_{lmk} \mathbf{u}_j \partial_l (\mathbf{u}_m) \\ &= \sum_{jlm} (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}) \mathbf{u}_j \partial_l (\mathbf{u}_m) \\ &= \sum_{jlm} \big[\delta_{il} \partial_l (\mathbf{u}_m) \delta_{jm} \mathbf{u}_j \big] - \sum_{jlm} \big[ \delta_{im} \partial_l (\mathbf{u}_m) \delta_{jl} \mathbf{u}_j \big] \\ &= \sum_{jm} \big[\delta_{jm} \mathbf{u}_j \partial_i (\mathbf{u}_m)\big] - \sum_{jl} \big[\delta_{jl} \mathbf{u}_j \partial_l (\mathbf{u}_i)\big] \\ &= \sum_j \big[ \mathbf{u}_j \partial_i(\mathbf{u}_j) - \mathbf{u}_j \partial_j(\mathbf{u}_i) \big] \\ &\phantom{= \sum_j \big[ \mathbf{u}_j \partial_i(\mathbf{u}_j) - \mathbf{u}_j \partial_j(\mathbf{u}_i) \big]} \square \end{align*}\]
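A symbolic spot check of the full identity, again with SymPy; the field `u` is an arbitrary example, and \((\mathbf{u} \cdot \nabla)\mathbf{u}\) is assembled component by component.

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, curl, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
base = (N.i, N.j, N.k)

# An arbitrary smooth vector field, used only as a spot check.
u = x*y*N.i + y*z*N.j + z*x*N.k
ux, uy, uz = (u.dot(e) for e in base)

# (u . grad) u, assembled component by component.
adv = Vector.zero
for comp, e in zip((ux, uy, uz), base):
    adv += (ux*sp.diff(comp, x) + uy*sp.diff(comp, y) + uz*sp.diff(comp, z)) * e

lhs = gradient(u.dot(u))
rhs = 2 * (adv + u.cross(curl(u)))
assert all(sp.simplify((lhs - rhs).dot(e)) == 0 for e in base)
```

The above could be extended to \(\nabla(\mathbf{u} \cdot \mathbf{v})\) by making use of the identity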
\[4 (\mathbf{u} \cdot \mathbf{v}) = (\mathbf{u+v}) \cdot (\mathbf{u+v}) - (\mathbf{u-v}) \cdot (\mathbf{u-v})\]
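Carrying this out (apply the previous claim to \(\mathbf{u+v}\) and \(\mathbf{u-v}\), expand both sides by bilinearity, and divide by 4; the pure \(\mathbf{u}\) and pure \(\mathbf{v}\) terms cancel in the difference) yields the standard identity

\[\nabla (\mathbf{u} \cdot \mathbf{v}) = (\mathbf{u} \cdot \nabla) \mathbf{v} + (\mathbf{v} \cdot \nabla) \mathbf{u} + \mathbf{u} \times (\nabla \times \mathbf{v}) + \mathbf{v} \times (\nabla \times \mathbf{u})\]

Exercises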
i. Prove that
\[\nabla \times (\nabla \times \mathbf{u}) = \nabla(\nabla \cdot \mathbf{u}) - \nabla^2 \mathbf{u}\](source and solution on page 3 of an article by Patrick)
ii. Prove that
\[\nabla \times (\mathbf{v} \times \mathbf{w}) = (\mathbf{w} \cdot \nabla) \mathbf{v} + (\nabla \cdot \mathbf{w}) \mathbf{v} - (\nabla \cdot \mathbf{v}) \mathbf{w} - (\mathbf{v} \cdot \nabla) \mathbf{w}\](source and solution on the last page of an article by Wheeler)
iii. Prove that
\[\det (\mathbf{b} ~\mathbf{c} ~\mathbf{d}) \mathbf{a} - \det (\mathbf{c} ~\mathbf{d} ~\mathbf{a}) \mathbf{b} + \det (\mathbf{d} ~\mathbf{a} ~\mathbf{b}) \mathbf{c} - \det (\mathbf{a} ~\mathbf{b} ~\mathbf{c}) \mathbf{d} = 0\]and
\[(\mathbf{a} \times \mathbf{b}) \times (\mathbf{c} \times \mathbf{d}) = \det (\mathbf{a} ~\mathbf{b} ~\mathbf{d}) \mathbf{c} - \det (\mathbf{a} ~\mathbf{b} ~\mathbf{c}) \mathbf{d}\]
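For exercise iii, a quick numerical sanity check (not a proof) can be done with NumPy; `det3` is just an ad-hoc helper for the determinant of three column vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = rng.standard_normal((4, 3))  # four random 3-vectors

def det3(u, v, w):
    """Determinant of the 3x3 matrix with columns u, v, w."""
    return np.linalg.det(np.column_stack((u, v, w)))

# det(b c d) a - det(c d a) b + det(d a b) c - det(a b c) d = 0
zero = det3(b, c, d)*a - det3(c, d, a)*b + det3(d, a, b)*c - det3(a, b, c)*d
assert np.allclose(zero, 0)

# (a x b) x (c x d) = det(a b d) c - det(a b c) d
lhs = np.cross(np.cross(a, b), np.cross(c, d))
rhs = det3(a, b, d)*c - det3(a, b, c)*d
assert np.allclose(lhs, rhs)
```

Going further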
Observe that identities involving cross products tend to have one positive and one negative term, largely due to the identity \(\sum_k \varepsilon_{ijk}\varepsilon_{lmk} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}\). For more vector identities of this kind, one could look up the quadruple products.
One could also learn the Einstein summation convention in order to systematically drop all the summation signs we have been using: repeated indices are understood to be summed over.
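For instance, NumPy's `einsum` implements exactly this convention: repeated indices in the subscript string are summed over automatically. A small sketch, rebuilding the `eps` array from the first snippet:

```python
import numpy as np
from itertools import product

# Levi-Civita array, as before.
eps = np.zeros((3, 3, 3))
for i, j, k in product(range(3), repeat=3):
    eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

rng = np.random.default_rng(3)
u, v, w = rng.standard_normal((3, 3))

# (u x v)_i = eps_ijk u_j v_k, with the repeated j, k summed implicitly.
cross_uv = np.einsum('ijk,j,k->i', eps, u, v)
assert np.allclose(cross_uv, np.cross(u, v))

# u x (v x w) in one call: eps_ijk u_j eps_klm v_l w_m.
triple = np.einsum('ijk,j,klm,l,m->i', eps, u, eps, v, w)
assert np.allclose(triple, np.cross(u, np.cross(v, w)))
```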