This has very little to do with geometric algebra. We just need a couple of things:
- It is paramount that we keep track of what is being differentiated. For instance, if $f$ is a scalar-valued function and $A$ is a vector-valued function, rather than simply
$$
\nabla\times(fA) = (\nabla f)\times A + f\nabla\times A
$$
we will instead write
$$
\dot\nabla\times(\dot f\dot A) = (\dot\nabla\dot f)\times A + f\dot\nabla\times\dot A.
$$
The dots indicate precisely what each $\dot\nabla$ is differentiating. This allows us to write things like
$$
\nabla\times A = \dot\nabla\times\dot A = -\dot A\times\dot\nabla.
$$
- After doing this, all vector identities involving $\nabla$ are valid as long as it appears in a linear slot. More precisely, if $L_x(v)$ and $M_x(v)$ are two expressions linear in $v$ such that $L_x(v) = M_x(v)$ for all $v$, then symbolically
$$
L_{\dot x}(\dot\nabla) = M_{\dot x}(\dot\nabla).
$$
- The derivative of an expression is the sum of the derivatives of its subexpressions. To illustrate, we directly prove your identity as follows:
$$
\dot\nabla\times(\dot f\dot A) = \dot\nabla\times(\dot fA) + \dot\nabla\times(f\dot A) = (\dot\nabla\dot f)\times A + f\dot\nabla\times\dot A.
$$
The last equality follows simply because $f$ is a scalar quantity. (A symbolic check of this identity appears right after this list.)
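For that check, here is a minimal SymPy sketch verifying $\nabla\times(fA) = (\nabla f)\times A + f\,\nabla\times A$ for one concrete choice of fields; the particular `f` and `A` below are illustrative choices, not anything from the question:

```python
# Verify curl(f*A) == grad(f) x A + f*curl(A) for one concrete choice
# of scalar field f and vector field A (illustrative example only).
from sympy import sin, exp, simplify
from sympy.vector import CoordSys3D, curl, gradient

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

f = sin(x*y) + z                      # arbitrary scalar field
A = y*z*R.i + exp(x)*R.j + x*y*R.k    # arbitrary vector field

lhs = curl(f*A)
rhs = gradient(f).cross(A) + f*curl(A)

# Compare component-wise; an all-zero column means the identity holds here.
print(simplify((lhs - rhs).to_matrix(R)))
```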
As one more example, we derive an expression for $\nabla\times(A\times B)$. Recall that we have the vector identity
$$
u\times(v\times w) = (u\cdot w)v - (u\cdot v)w.
$$
Using this we see
$$\begin{aligned}
\dot\nabla\times(\dot A\times\dot B)
&= (\dot\nabla\cdot\dot B)\dot A - (\dot\nabla\cdot\dot A)\dot B
\\
&= (\dot\nabla\cdot\dot B)A + (\dot\nabla\cdot B)\dot A - (\dot\nabla\cdot\dot A)B - (\dot\nabla\cdot A)\dot B
\\
&= (\nabla\cdot B)A + (B\cdot\nabla)A - (\nabla\cdot A)B - (A\cdot\nabla)B.
\end{aligned}$$
In the last step, we have dropped the dots and used the standard convention of "differentiate directly to the right".
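This expansion can be checked the same way. Rather than relying on any built-in operator, the sketch below builds $(B\cdot\nabla)A$ by hand, as the directional derivative of each component of $A$ along $B$ (again with arbitrarily chosen fields):

```python
# Check curl(A x B) == (div B)A + (B.grad)A - (div A)B - (A.grad)B
# for one concrete pair of fields (illustrative choices only).
from sympy import sin, cos, simplify
from sympy.vector import CoordSys3D, Vector, curl, divergence, gradient

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

def dir_deriv(B, A):
    """(B . nabla)A: differentiate each component of A along B."""
    return sum(
        (B.dot(gradient(A.dot(e))) * e for e in (R.i, R.j, R.k)),
        Vector.zero,
    )

A = x*y*R.i + cos(z)*R.j + y*R.k
B = z*R.i + sin(x)*R.j + x*z*R.k

lhs = curl(A.cross(B))
rhs = divergence(B)*A + dir_deriv(B, A) - divergence(A)*B - dir_deriv(A, B)
print(simplify((lhs - rhs).to_matrix(R)))   # all-zero column
```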
**Elaboration and Well-Definedness**
The operator $\nabla$ together with overdot notation
satisfies all generic and linear properties that vectors satisfy.
Generic means the property has to be true for all vectors.
You can justify this with index notation;
we simply start with a tensor identity
and then contract each side with $\partial_i$.
Alternatively, see the third section of my answer
here.
Note that that answer also includes a chain rule, which I have not covered here.
Let's use the $u\times(v\times w)$ identity in my answer as an example.
It is generic in $u$ since it is true for any $u$,
and it is linear since both sides of the identity are linear functions of $u$.
Thus we may replace $u$ with $\dot\nabla$ and arrive at a valid identity;
in fact we get four identities:
$$
\dot\nabla\times(v\times w) = (\dot\nabla\cdot w)v - (\dot\nabla\cdot v)w,
\tag1
$$$$
\dot\nabla\times(\dot v\times w)
= (\dot\nabla\cdot w)\dot v - (\dot\nabla\cdot\dot v)w,
\tag2
$$$$
\dot\nabla\times(v \times\dot w)
= (\dot\nabla\cdot\dot w)v - (\dot\nabla\cdot v)\dot w,
\tag3
$$$$
\dot\nabla\times(\dot v\times\dot w)
= (\dot\nabla\cdot\dot w)\dot v - (\dot\nabla\cdot\dot v)\dot w.
\tag4
$$
We may consistently interpret $\dot\nabla$ with no other dotted variables as
differentiating the constant $1$, or symbolically $\dot\nabla = \dot\nabla\dot
1$; thus equation (1) is simply saying $0 = 0$.
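For comparison with conventional notation, undotting (2) and (3) gives
$$
\dot\nabla\times(\dot v\times w) = (w\cdot\nabla)v - (\nabla\cdot v)w,
\qquad
\dot\nabla\times(v\times\dot w) = (\nabla\cdot w)v - (v\cdot\nabla)w,
$$
and adding these two recovers the expansion of $\nabla\times(v\times w)$ derived above; (4) is that same expansion before the product rule splits the dots.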
In index notation the vector identity is
$$
\epsilon^i_{jk}\epsilon^k_{lm}u^jv^lw^m = u^jw_jv^i - u^jv_jw^i.
$$
Each of the equations above then says
$$
\epsilon^i_{jk}\epsilon^k_{lm}(\partial^j1)v^lw^m
= (\partial^j1)w_jv^i - (\partial^j1)v_jw^i,
\tag1
$$$$
\epsilon^i_{jk}\epsilon^k_{lm}(\partial^jv^l)w^m
= w_j(\partial^jv^i) - (\partial^jv_j)w^i,
\tag2
$$$$
\epsilon^i_{jk}\epsilon^k_{lm}v^l(\partial^jw^m)
= (\partial^jw_j)v^i - v_j(\partial^jw^i),
\tag3
$$$$
\epsilon^i_{jk}\epsilon^k_{lm}\partial^j(v^lw^m)
= \partial^j(w_jv^i) - \partial^j(v_jw^i).
\tag4
$$
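All of these rest on the standard Levi-Civita contraction $\epsilon^i_{jk}\epsilon^k_{lm} = \delta^i_l\delta_{jm} - \delta^i_m\delta_{jl}$, which can be brute-forced in SymPy (a sketch, with all indices running over three values and the metric taken to be Euclidean so index placement is immaterial):

```python
# Brute-force check of  sum_k eps_{ijk} eps_{klm} = d_{il} d_{jm} - d_{im} d_{jl}
# over every index combination in three dimensions.
from sympy import LeviCivita, KroneckerDelta

ok = all(
    sum(LeviCivita(i, j, k) * LeviCivita(k, l, m) for k in range(3))
    == KroneckerDelta(i, l) * KroneckerDelta(j, m)
     - KroneckerDelta(i, m) * KroneckerDelta(j, l)
    for i in range(3) for j in range(3)
    for l in range(3) for m in range(3)
)
print(ok)  # True
```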
Higher derivatives are much the same,
but each derivative needs to separately track what it differentiates.
An expression like
$\dot\nabla\cdot\dot\nabla(\dot u\cdot\dot v)$ is ambiguous:
the dots alone cannot say which $\nabla$ differentiates which factor.
We will use a check $\check\nabla$ exactly like an overdot,
but having both symbols
allows us to write all the possible (non-trivial) second derivatives:
$$
\check\nabla\cdot\dot\nabla(\check{\dot u}\cdot v),\quad
\check\nabla\cdot\dot\nabla(\dot u\cdot\check v),\quad
\check\nabla\cdot\dot\nabla(\check u\cdot\dot v),\quad
\check\nabla\cdot\dot\nabla(u\cdot\check{\dot v}),\quad
\check\nabla\cdot\dot\nabla(\dot u\cdot\check{\dot v}),\quad
\check\nabla\cdot\dot\nabla(\check u\cdot\check{\dot v}),\quad
\check\nabla\cdot\dot\nabla(\check{\dot u}\cdot\dot v),\quad
\check\nabla\cdot\dot\nabla(\check{\dot u}\cdot\check v),\quad
\check\nabla\cdot\dot\nabla(\check{\dot u}\cdot\check{\dot v}).
$$
The last expression is the Laplacian $\nabla^2(u\cdot v)$.
The fact that partial derivatives commute
means that e.g. $\check{\dot u} = \dot{\check u}$.
The rule for higher derivatives is the same as for first derivatives:
each $\nabla$ must appear in a generic, linear slot.
Continuing with our example
$$
u\times(v\times w) = (u\cdot w)v - (u\cdot v)w,
$$
we see, taking $u = \check\nabla$, $v = \dot\nabla$, and $\check{\dot w}$ in place of $w$,
$$
\nabla\times(\nabla\times w)
= \check\nabla\times(\dot\nabla\times\check{\dot w})
= (\check\nabla\cdot\check{\dot w})\dot\nabla
- (\check\nabla\cdot\dot\nabla)\check{\dot w}
= \nabla(\nabla\cdot w) - \nabla^2w.
$$
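This last identity can be checked in SymPy as well; the vector Laplacian is applied component-wise (once more an illustrative field choice):

```python
# Check curl(curl(w)) == grad(div(w)) - laplacian(w) for one field.
from sympy import sin, exp, simplify
from sympy.vector import CoordSys3D, Vector, curl, divergence, gradient

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

w = sin(y*z)*R.i + x*exp(z)*R.j + x*y*z*R.k   # arbitrary vector field

def vector_laplacian(v):
    """nabla^2 v, taken component-wise in Cartesian coordinates."""
    return sum(
        (divergence(gradient(v.dot(e))) * e for e in (R.i, R.j, R.k)),
        Vector.zero,
    )

lhs = curl(curl(w))
rhs = gradient(divergence(w)) - vector_laplacian(w)
print(simplify((lhs - rhs).to_matrix(R)))   # all-zero column
```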