So this is more for my own reference on how to derive various vector identities, but who knows, it might prove useful to someone else on the internet, so I figured I'd post it anyway. I'm going to be using Einstein notation to keep things compact.
Einstein notation is not hard to grasp; it is just a set of conventions that eliminate the writing of sum signs. When an index is repeated you sum over it, namely \[ \sum_i a_i b_i = a_i b_i \] In this post the sum always runs from $i=1$ to $i=3$. There are a couple of symbols that are very useful. First there is the Kronecker delta, defined as \[ \delta_{ij} = \begin{cases} 1 &: i = j \\ 0 &: i \not= j \end{cases} \] There is also the Levi-Civita symbol, which is defined as \[ \varepsilon_{ijk} = \begin{cases} +1 &\text{if} \ ijk \ \text{is an even permutation of} \ 123\\ -1 &\text{if} \ ijk \ \text{is an odd permutation of} \ 123\\ 0 &\text{if} \ i=j \ \text{or} \ j=k \ \text{or} \ k=i \end{cases} \] So for an even (cyclic) permutation such as $\{1,2,3\}$ or $\{2,3,1\}$, $\varepsilon = 1$; for an odd permutation such as $\{3,2,1\}$, $\varepsilon = -1$; and if any index is repeated then $\varepsilon = 0$.
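If you want something concrete to play with, here is a minimal sketch (assuming NumPy, which the post doesn't otherwise require) of both symbols as explicit arrays; the names `delta` and `eps` are just my own choices:

```python
import numpy as np

# Kronecker delta: the 3x3 identity matrix.
delta = np.eye(3)

# Levi-Civita symbol: +1 for even permutations of (0, 1, 2),
# -1 for odd permutations, 0 whenever an index is repeated.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even (cyclic) permutations
    eps[i, k, j] = -1.0  # odd permutations

print(delta[0, 0], delta[0, 1])    # 1.0 0.0
print(eps[0, 1, 2], eps[2, 1, 0])  # 1.0 -1.0
```

(Indices run from 0 to 2 rather than 1 to 3, as is usual in code.)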
These might seem rather arbitrary, however the Kronecker delta and Levi-Civita symbol make it very easy to write down vector operations. First let's consider the dot product, which in 3-dimensional Cartesian coordinates is given by: \[ \mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{3} \sum_{j=1}^{3} \delta_{ij} a_i b_j \] Rather conveniently the Kronecker delta is 0 everywhere except for $i=j$, so the dot product in Cartesian coordinates can be rewritten in its more familiar form \[ \mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{3} a_i b_i \] Expressed in Einstein notation this is simply \[ \mathbf{a} \cdot \mathbf{b} = a_i b_i \] Not much of a saving in writing for the dot product, but what about the cross product?
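As a quick sanity check, here is a sketch (assuming NumPy) showing that $\delta_{ij} a_i b_j$ really is the ordinary dot product; `np.einsum` performs the repeated-index summation, and the vectors are arbitrary choices of mine:

```python
import numpy as np

delta = np.eye(3)  # Kronecker delta
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

dot_via_delta = np.einsum("ij,i,j->", delta, a, b)  # sum over i and j of delta_ij a_i b_j
dot_direct = np.einsum("i,i->", a, b)               # sum over i of a_i b_i

print(dot_via_delta, dot_direct, a @ b)  # all give 32.0
```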
We can write the cross product by making use of the Levi-Civita symbol \[ \mathbf{a} \times \mathbf{b} = \sum_{i=1}^{3}\sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon_{ijk} \hat{\mathbf{e}}_i a_j b_k \] Now the merits of Einstein notation become more apparent: cumbersome expressions like this can be written much more compactly, the same thing in Einstein notation being simply \[ \mathbf{a} \times \mathbf{b} = \varepsilon_{ijk} \hat{\mathbf{e}}_i a_j b_k \] As an aside, writing the cross product in this way shows how it can be generalised. This formula is essentially the definition of the wedge product, a generalisation of the cross product that is valid in any number of dimensions rather than just the 3 (or 7) dimensions in which the cross product exists.
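And the same sort of sketch (again assuming NumPy) for the cross product as $\varepsilon_{ijk} a_j b_k$, checked against `np.cross`:

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Component i of the cross product: sum over j and k of eps_ijk a_j b_k.
cross_via_eps = np.einsum("ijk,j,k->i", eps, a, b)

print(cross_via_eps)   # [-3.  6. -3.]
print(np.cross(a, b))  # [-3.  6. -3.]
```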
Let's start with the absolute basics: div, grad and curl. \[ \begin{eqnarray} \nabla \phi &=& \partial_i ( \phi ) \ \hat{\mathbf{e}}_i \\ \nabla \cdot \mathbf{F} &=& \partial_i F_i \\ \nabla \times \mathbf{F} &=& \varepsilon_{ijk} \hat{\mathbf{e}}_i \partial_j F_k \end{eqnarray} \]
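If you prefer something executable, here is a rough sketch (assuming SymPy) of these three operators written straight from the index-notation definitions; the helper names `grad`, `div` and `curl` are my own, not any library API:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)

def eps(i, j, k):
    # Levi-Civita symbol for indices 0, 1, 2.
    return (i - j) * (j - k) * (k - i) // 2

def grad(phi):   # (grad phi)_i = d_i phi
    return [sp.diff(phi, c) for c in coords]

def div(F):      # div F = d_i F_i
    return sum(sp.diff(F[i], coords[i]) for i in range(3))

def curl(F):     # (curl F)_i = eps_ijk d_j F_k
    return [sum(eps(i, j, k) * sp.diff(F[k], coords[j])
                for j in range(3) for k in range(3))
            for i in range(3)]

F = [x*y, y*z, z*x]
print(grad(x*y*z))  # [y*z, x*z, x*y]
print(div(F))       # x + y + z
print(curl(F))      # [-y, -z, -x]
```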
So let's consider the gradient of a product $\nabla (\psi \phi)$ \[ \begin{eqnarray} \nabla (\psi \phi) &=& \partial_i ( \phi \psi ) \ \hat{\mathbf{e}}_i \\ &=& ( \phi \partial_i \psi + \psi \partial_i \phi ) \ \hat{\mathbf{e}}_i \\ &=& \phi \nabla \psi + \psi \nabla \phi \end{eqnarray} \] The divergence of a scalar times a vector $\nabla \cdot ( \phi \mathbf{F} )$ \[ \begin{eqnarray} \nabla \cdot (\phi \mathbf{F}) &=& \partial_i (\phi F_i )\\ &=& \phi \partial_i F_i + F_i \partial_i \phi \\ &=& \phi \nabla \cdot \mathbf{F} + \mathbf{F} \cdot \nabla \phi \end{eqnarray} \] The curl of a scalar times a vector $\nabla \times (\phi \mathbf{F})$ \[ \begin{eqnarray} \nabla \times (\phi \mathbf{F}) &=& \varepsilon_{ijk} \hat{\mathbf{e}}_i \partial_j (\phi F_k) \\ &=& \varepsilon_{ijk} \hat{\mathbf{e}}_i ( \phi \partial_j F_k + F_k \partial_j \phi ) \\ &=& \phi \ \varepsilon_{ijk} \hat{\mathbf{e}}_i \partial_j F_k + \varepsilon_{ijk} \hat{\mathbf{e}}_i ( \partial_j \phi) F_k \\ &=& \phi \nabla \times \mathbf{F} + \nabla \phi \times \mathbf{F} \end{eqnarray} \] The scalar triple product $\nabla \cdot (\mathbf{A} \times \mathbf{B} )$ \[ \begin{eqnarray} \nabla \cdot (\mathbf{A} \times \mathbf{B}) &=& \partial_i (\varepsilon_{ijk} A_j B_k) \\ &=& \varepsilon_{ijk} \partial_i (A_j B_k) \\ &=& \varepsilon_{ijk} ( A_j \partial_i B_k + B_k \partial_i A_j ) \\ &=& \varepsilon_{ijk} A_j \partial_i B_k + \varepsilon_{ijk} B_k \partial_i A_j \\ &=& - \varepsilon_{jik} A_j \partial_i B_k + \varepsilon_{kij} B_k \partial_i A_j \\ &=& -\mathbf{A} \cdot (\nabla \times \mathbf{B}) + \mathbf{B} \cdot (\nabla \times \mathbf{A}) \end{eqnarray} \] There are more vector identities that I could derive, however these are the only ones needed to establish the integral identities below, so I'll stop here.
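These product rules are easy to spot-check symbolically. A minimal sketch, assuming SymPy, using an arbitrary $\phi$ and $\mathbf{F}$ of my choosing and the divergence rule as the example:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)

phi = x*y*z
F = [x**2, sp.sin(y), x*z]

grad = lambda f: [sp.diff(f, c) for c in coords]
div = lambda G: sum(sp.diff(G[i], coords[i]) for i in range(3))
dot = lambda A, B: sum(a*b for a, b in zip(A, B))

# div(phi F) should equal phi div F + F . grad phi.
lhs = div([phi*Fi for Fi in F])
rhs = phi*div(F) + dot(F, grad(phi))
print(sp.simplify(lhs - rhs))  # 0
```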
Let's start with an easy one; it's really just a definition. The divergence of a gradient is the Laplacian, $\nabla \cdot (\nabla \phi) = \nabla^2 \phi$, which takes a scalar field and returns another scalar field.
The curl of a gradient, $\nabla \times (\nabla \phi)$, is a slightly trickier identity to prove. Let's start with the basics and write it in Einstein notation: \[ \nabla \times (\nabla \phi) = \varepsilon_{ijk} \hat{\mathbf{e}}_i \partial_j \partial_k \phi \] Now we can use the antisymmetry of the Levi-Civita symbol and swap two of its indices to arrive at \[ \nabla \times (\nabla \phi) = -\varepsilon_{ikj} \hat{\mathbf{e}}_i \partial_j \partial_k \phi \] But $j$ and $k$ are just dummy indices; we can relabel them as we please, so let's do the following \[ \begin{align} k \to j \\ j \to k \end{align} \] So now we have \[ \nabla \times (\nabla \phi) = -\varepsilon_{ijk} \hat{\mathbf{e}}_i \partial_k \partial_j \phi \] But if the second order mixed partial derivatives are continuous, then the order in which we take them is irrelevant as they commute. From this we find that \[ \varepsilon_{ijk} \partial_j \partial_k \phi = -\varepsilon_{ijk} \partial_j \partial_k \phi \] This is only satisfied by $0$, so we conclude that \[ \nabla \times (\nabla \phi) = 0 \]
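A quick symbolic spot-check of this one, assuming SymPy and an arbitrary smooth $\phi$ of my choosing:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
phi = sp.exp(x)*sp.sin(y)*z**2

# Gradient of phi, then the curl of that gradient component by component.
g = [sp.diff(phi, c) for c in (x, y, z)]
curl_grad = [sp.diff(g[2], y) - sp.diff(g[1], z),
             sp.diff(g[0], z) - sp.diff(g[2], x),
             sp.diff(g[1], x) - sp.diff(g[0], y)]
print([sp.simplify(c) for c in curl_grad])  # [0, 0, 0]
```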
The divergence of the curl, $\nabla \cdot (\nabla \times \mathbf{F})$. This is very similar to the curl of a gradient, so I shall omit the full proof. Let's write it out fully first \[ \nabla \cdot (\nabla \times \mathbf{F}) = \partial_i (\varepsilon_{ijk} \partial_j F_k) \] The Levi-Civita symbol is constant, so we can move the partial derivative past it to yield \[ \nabla \cdot (\nabla \times \mathbf{F}) = \varepsilon_{ijk} \partial_i \partial_j F_k \] This has the same form as the previous identity and exactly the same logic applies; as a general rule, the contraction of a symmetric object (here $\partial_i \partial_j$) with an antisymmetric one (here $\varepsilon_{ijk}$) is 0. So this identity is \[ \nabla \cdot (\nabla \times \mathbf{F}) = 0 \]
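And the corresponding spot-check for this identity, again assuming SymPy and an arbitrary $\mathbf{F}$:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
F = [x*y**2, sp.cos(z), x*sp.exp(y)]

# Curl of F, then the divergence of that curl.
curlF = [sp.diff(F[2], y) - sp.diff(F[1], z),
         sp.diff(F[0], z) - sp.diff(F[2], x),
         sp.diff(F[1], x) - sp.diff(F[0], y)]
div_curl = sum(sp.diff(curlF[i], c) for i, c in enumerate((x, y, z)))
print(sp.simplify(div_curl))  # 0
```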
The final second order identity is the curl of a curl, $\nabla \times (\nabla \times \mathbf{F})$. I'm going to resolve it into components to avoid dealing with multiple unit vectors; this does not change the result of the derivation. \[ [ \nabla \times (\nabla \times \mathbf{F}) ]_i = \varepsilon_{ijk} \partial_j \varepsilon_{klm} \partial_l F_m \] This can be rearranged into a far more useful form \[ [ \nabla \times (\nabla \times \mathbf{F}) ]_i = \varepsilon_{ijk} \varepsilon_{klm} \partial_j \partial_l F_m \] There is a useful identity that we can apply here; I'm not going to prove it as the proof is rather long, although relatively simple. \[ \varepsilon_{ijk} \varepsilon_{klm} = \delta_{il} \delta_{jm} - \delta_{jl} \delta_{im} \] Making use of this identity we now have \[ [ \nabla \times (\nabla \times \mathbf{F}) ]_i = ( \delta_{il} \delta_{jm} - \delta_{jl} \delta_{im} )\partial_j \partial_l F_m \] The definition of the Kronecker delta makes this really easy to simplify, as only two pairs of indices give a non-zero combination of Kronecker deltas: $l = i , \ m=j$ and $l=j, \ m=i$. So we now have \[ [ \nabla \times (\nabla \times \mathbf{F}) ]_i = \partial_i \partial_j F_j - \partial_j \partial_j F_i \] So from here we find that \[ \nabla \times (\nabla \times \mathbf{F}) = \nabla (\nabla \cdot \mathbf{F}) - \nabla^2 \mathbf{F} \]
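Incidentally, the $\varepsilon$-$\delta$ contraction identity quoted above is easy to verify by brute force over all index values. A sketch assuming NumPy:

```python
import numpy as np

delta = np.eye(3)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# lhs[i,j,l,m] = sum_k eps_ijk eps_klm ; rhs[i,j,l,m] = delta_il delta_jm - delta_jl delta_im
lhs = np.einsum("ijk,klm->ijlm", eps, eps)
rhs = np.einsum("il,jm->ijlm", delta, delta) - np.einsum("jl,im->ijlm", delta, delta)
print(np.allclose(lhs, rhs))  # True
```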
So as a quick refresher, the divergence theorem is given as: \[ \oint_{S} \mathbf{F} \cdot \mathbf{dS} = \int_{V} (\nabla \cdot \mathbf{F}) dV \]
Now consider the situation where the vector can be represented as the product of some constant vector $\mathbf{A}$ and a scalar function of position $\phi \equiv \phi(\mathbf{r})$, so: $\mathbf{F} = \phi \mathbf{A} $. Now let's apply the divergence theorem. \[ \oint_{S} ( \phi \mathbf{A} ) \cdot \mathbf{dS} = \int_{V} \nabla \cdot ( \phi \mathbf{A} ) dV \] This can be simplified by making use of some of the earlier vector identities. First let's consider the LHS. \[ \oint_{S} ( \phi \mathbf{A} ) \cdot \mathbf{dS} = \mathbf{A} \cdot \oint_{S} \phi \mathbf{dS} \] And now let's consider the RHS, by applying an earlier vector identity: \[ \int_{V} \nabla \cdot ( \phi \mathbf{A} ) dV = \int_{V} [ (\nabla \phi) \cdot \mathbf{A} + \phi (\nabla \cdot \mathbf{A}) ] dV \] But remember that $ \mathbf{A} $ is a constant vector, so $\nabla \cdot \mathbf{A} = 0$; this simplifies our expression for the RHS to: \[ \int_{V} (\nabla \phi) \cdot \mathbf{A} \ dV = \mathbf{A} \cdot \int_{V} \nabla \phi \ dV \] Now we can subtract one side from the other and take the constant vector outside, which yields: \[ \mathbf{A} \cdot \Bigg( \oint_{S} \phi \mathbf{dS} - \int_{V} \nabla \phi \ dV \Bigg)= 0\] But $\mathbf{A}$ is an arbitrary non-zero constant vector, so the expression inside the brackets must be zero, which leads to the new identity: \[ \oint_{S} \phi \mathbf{dS} = \int_{V} \nabla \phi \ dV \]
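This identity can be spot-checked on a simple region. A rough sketch assuming SymPy, using the unit cube and an arbitrary $\phi$ of my choosing:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
phi = x*y + z**2

# Volume side: integrate each component of grad(phi) over the cube [0,1]^3.
vol = [sp.integrate(sp.diff(phi, c), (x, 0, 1), (y, 0, 1), (z, 0, 1))
       for c in (x, y, z)]

# Surface side: integrate phi times the outward unit normal over the six faces.
def face(var, value, sign):
    others = [c for c in (x, y, z) if c != var]
    return sign * sp.integrate(phi.subs(var, value), (others[0], 0, 1), (others[1], 0, 1))

surf = [face(x, 1, +1) + face(x, 0, -1),
        face(y, 1, +1) + face(y, 0, -1),
        face(z, 1, +1) + face(z, 0, -1)]

print(vol, surf)  # both [1/2, 1/2, 1]
```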
Now what if the vector $\mathbf{F}$ can be represented as the cross product of a constant vector $\mathbf{A}$ and a vector function of position $\mathbf{B} \equiv \mathbf{B} (\mathbf{r}) $, so: $\mathbf{F} = \mathbf{A} \times \mathbf{B}$? Applying the divergence theorem to this yields \[ \oint_{S} ( \mathbf{A} \times \mathbf{B} ) \cdot \mathbf{dS} = \int_{V} \nabla \cdot ( \mathbf{A} \times \mathbf{B}) dV \] As before let's first consider the LHS; this is easy to rearrange this time, just use the cyclic relation of the scalar triple product: \[ \oint_{S} ( \mathbf{A} \times \mathbf{B} ) \cdot \mathbf{dS} = - \mathbf{A} \cdot \oint_{S} \mathbf{dS} \times \mathbf{B} \] And now let's consider the RHS, by applying an earlier vector identity: \[ \int_{V} \nabla \cdot ( \mathbf{A} \times \mathbf{B}) dV = \int_{V} [ \mathbf{B} \cdot (\nabla \times \mathbf{A}) - \mathbf{A} \cdot (\nabla \times \mathbf{B}) ] dV \] As before, remember that $\mathbf{A}$ is a constant vector so $\nabla \times \mathbf{A} = 0$, thus simplifying the RHS to: \[ \int_{V} - \mathbf{A} \cdot (\nabla \times \mathbf{B}) dV = - \mathbf{A} \cdot \int_{V} (\nabla \times \mathbf{B}) dV \] Now we combine both sides and factor out $\mathbf{A}$ \[ \mathbf{A} \cdot \Bigg( - \oint_{S} \mathbf{dS} \times \mathbf{B} + \int_{V} (\nabla \times \mathbf{B}) dV \Bigg) = 0 \] And as before $\mathbf{A}$ is an arbitrary non-zero constant vector, so the expression inside the brackets must be zero, which leads to the new identity: \[ \oint_{S} \mathbf{dS} \times \mathbf{B} = \int_{V} (\nabla \times \mathbf{B}) dV \]
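The same sort of spot-check works here. A sketch assuming SymPy, again on the unit cube, with an arbitrary $\mathbf{B}$ of my choosing:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)
B = [y*z, x**2, x*y]

cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                      a[2]*b[0] - a[0]*b[2],
                      a[0]*b[1] - a[1]*b[0]]

# Volume side: integrate curl(B) over the cube [0,1]^3.
curlB = [sp.diff(B[2], y) - sp.diff(B[1], z),
         sp.diff(B[0], z) - sp.diff(B[2], x),
         sp.diff(B[1], x) - sp.diff(B[0], y)]
vol = [sp.integrate(c, (x, 0, 1), (y, 0, 1), (z, 0, 1)) for c in curlB]

# Surface side: integrate n x B over the six faces, n being the outward unit normal.
surf = [0, 0, 0]
for axis, var in enumerate(coords):
    others = [v for v in coords if v != var]
    for value, sign in [(1, +1), (0, -1)]:
        n = [0, 0, 0]
        n[axis] = sign
        integrand = cross(n, [b.subs(var, value) for b in B])
        for i in range(3):
            surf[i] += sp.integrate(integrand[i], (others[0], 0, 1), (others[1], 0, 1))

print(vol, surf)  # both [1/2, 0, 1/2]
```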
So as a quick refresher, Stokes' theorem is given as: \[ \oint_{C} \mathbf{F} \cdot \mathbf{ds} = \int_{S} (\nabla \times \mathbf{F}) \cdot \mathbf{dS} \]
As in the divergence theorem case, we assume that the vector function $\mathbf{F}$ can be written as the product of a scalar function of position $\phi \equiv \phi(\mathbf{r})$ and a constant vector $\mathbf{A}$. So $\mathbf{F} = \phi \mathbf{A}$; now we apply Stokes' theorem. \[ \oint_{C} \phi \mathbf{A} \cdot \mathbf{ds} = \int_{S} (\nabla \times \phi \mathbf{A}) \cdot \mathbf{dS} \] Now consider the LHS of the equation; it's very similar to the divergence theorem case. \[ \oint_{C} \phi \mathbf{A} \cdot \mathbf{ds} = \mathbf{A} \cdot \oint_{C} \phi \mathbf{ds} \] And now let's consider the RHS \[ \int_{S} (\nabla \times \phi \mathbf{A}) \cdot \mathbf{dS} = \int_{S} [ (\nabla \phi) \times \mathbf{A} + \phi (\nabla \times \mathbf{A}) ] \cdot \mathbf{dS} \] Now this simplifies as $\mathbf{A}$ is a constant, so $\nabla \times \mathbf{A} = 0$. This reduces the equation to \[ \int_{S} [ (\nabla \phi) \times \mathbf{A}] \cdot \mathbf{dS} = \mathbf{A} \cdot \int_{S} \mathbf{dS} \times ( \nabla \phi ) \] Now combining both sides \[ \mathbf{A} \cdot \Bigg( \oint_{C} \phi \mathbf{ds} - \int_{S} \mathbf{dS} \times \nabla \phi \Bigg) = 0 \] $\mathbf{A}$ is an arbitrary non-zero constant vector, so we arrive at the identity \[ \oint_{C} \phi \mathbf{ds} = \int_{S} \mathbf{dS} \times \nabla \phi \]
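A spot-check of this one, assuming SymPy, on the unit square in the $z=0$ plane (so $\mathbf{dS} = \hat{\mathbf{z}} \, dx \, dy$) with an arbitrary $\phi$ of my choosing:

```python
import sympy as sp

x, y, t = sp.symbols("x y t")
phi = x*y

# RHS: z_hat x grad(phi) = (-d_y phi, d_x phi, 0), integrated over the square.
rhs = [sp.integrate(-sp.diff(phi, y), (x, 0, 1), (y, 0, 1)),
       sp.integrate(sp.diff(phi, x), (x, 0, 1), (y, 0, 1)),
       0]

# LHS: integrate phi ds around the boundary, traversed anticlockwise
# (consistent with the +z normal). Each edge is (x(t), y(t)), t from 0 to 1,
# with (dx/dt, dy/dt) alongside.
edges = [((t, 0), (1, 0)),       # bottom: y = 0
         ((1, t), (0, 1)),       # right:  x = 1
         ((1 - t, 1), (-1, 0)),  # top:    y = 1, traversed backwards
         ((0, 1 - t), (0, -1))]  # left:   x = 0, traversed backwards
lhs = [0, 0, 0]
for (xt, yt), (dx, dy) in edges:
    f = phi.subs({x: xt, y: yt})
    lhs[0] += sp.integrate(f*dx, (t, 0, 1))
    lhs[1] += sp.integrate(f*dy, (t, 0, 1))

print(lhs, rhs)  # both [-1/2, 1/2, 0]
```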
Now what if the vector $\mathbf{F}$ can be represented as the cross product of a constant vector $\mathbf{A}$ and a vector function of position $\mathbf{B} \equiv \mathbf{B} (\mathbf{r}) $, so: $\mathbf{F} = \mathbf{A} \times \mathbf{B}$? Applying Stokes' theorem to this yields \[ \oint_{C} ( \mathbf{A} \times \mathbf{B} ) \cdot \mathbf{ds} = \int_{S} [\nabla \times ( \mathbf{A} \times \mathbf{B}) ] \cdot \mathbf{dS} \] Consider the LHS \[ \oint_{C} ( \mathbf{A} \times \mathbf{B} ) \cdot \mathbf{ds} = - \mathbf{A} \cdot \oint_{C} \mathbf{ds} \times \mathbf{B} \] Now consider the RHS. It has two cross products and a dot product, however we can manipulate it by treating the bracketed cross product as a single vector; this reduces the expression to a scalar triple product that can easily be rearranged. First let's consider $(\mathbf{A} \times \mathbf{B})$ as a single term. \[ [\nabla \times (\mathbf{A} \times \mathbf{B})] \cdot \mathbf{dS} = (\mathbf{dS} \times \nabla) \cdot (\mathbf{A} \times \mathbf{B}) \] Now consider $\mathbf{dS} \times \mathbf{\nabla}$ as a single term \[ (\mathbf{dS} \times \nabla) \cdot (\mathbf{A} \times \mathbf{B}) = -\mathbf{A} \cdot [(\mathbf{dS} \times \mathbf{\nabla}) \times \mathbf{B}] \] So now we can rewrite the RHS as \[ \int_{S} [\nabla \times ( \mathbf{A} \times \mathbf{B}) ] \cdot \mathbf{dS} = -\mathbf{A} \cdot \int_{S} ( \mathbf{dS} \times \mathbf{\nabla}) \times \mathbf{B} \] Now combining these terms we arrive at the now familiar form \[ \mathbf{A} \cdot \Bigg( \oint_{C} \mathbf{ds} \times \mathbf{B} - \int_{S} ( \mathbf{dS} \times \mathbf{\nabla}) \times \mathbf{B} \Bigg) = 0 \] And again, as $\mathbf{A}$ is an arbitrary non-zero constant vector, we arrive at the identity \[ \oint_{C} \mathbf{ds} \times \mathbf{B} = \int_{S} ( \mathbf{dS} \times \mathbf{\nabla}) \times \mathbf{B} \]
I'm only deriving two of the identities here as I do not yet understand the third one. To derive Green's identities we start with the vector identities \[ \begin{eqnarray} \nabla \cdot (\psi \nabla \phi) = \psi \nabla^2 \phi + \nabla \psi \cdot \nabla \phi \\ \nabla \cdot (\phi \nabla \psi) = \phi \nabla^2 \psi + \nabla \phi \cdot \nabla \psi \end{eqnarray} \] Now if we subtract the second equation from the first we have \[ \nabla \cdot (\psi \nabla \phi - \phi \nabla \psi) = \psi \nabla^2 \phi - \phi \nabla^2 \psi \] Now we consider the closed surface integral of the vector field $\psi \nabla \phi$, apply the divergence theorem and use the first of the identities above. This is Green's first identity. \[ \oint_{S} \psi \nabla \phi \cdot \mathbf{dS} = \int_{V} (\psi \nabla^2 \phi + \nabla \psi \cdot \nabla \phi) dV \] Now we consider the closed surface integral of the vector field $\psi \nabla \phi - \phi \nabla \psi$, apply the divergence theorem again and use the subtracted identity. This is Green's second identity. \[ \oint_{S} (\psi \nabla \phi - \phi \nabla \psi) \cdot \mathbf{dS} = \int_{V} (\psi \nabla^2 \phi - \phi \nabla^2 \psi) dV \]
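Both of the vector identities this starts from (and hence the subtracted form) are easy to confirm symbolically. A minimal sketch, assuming SymPy, with arbitrary $\psi$ and $\phi$ of my choosing:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)
psi = x*sp.sin(y) + z
phi = sp.exp(x)*y*z

grad = lambda f: [sp.diff(f, c) for c in coords]
div = lambda F: sum(sp.diff(F[i], coords[i]) for i in range(3))
lap = lambda f: div(grad(f))
dot = lambda A, B: sum(a*b for a, b in zip(A, B))

# div(psi grad phi) - (psi lap phi + grad psi . grad phi) should be 0,
# and likewise for the subtracted form.
first = div([psi*g for g in grad(phi)]) - (psi*lap(phi) + dot(grad(psi), grad(phi)))
second = (div([psi*g for g in grad(phi)]) - div([phi*g for g in grad(psi)])
          - (psi*lap(phi) - phi*lap(psi)))
print(sp.simplify(first), sp.simplify(second))  # 0 0
```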