For a differentiable one-dimensional function $f: \mathbb{R} \rightarrow \mathbb{R}$ or $f: \mathbb{C} \rightarrow \mathbb{C}$, after finding a root $x_0$ it's possible to split off a differentiable linear factor via $f(x) = f^*(x)(x - x_0)$, allowing one to search for subsequent roots of $f^*$ instead, since $f(x) = 0 \iff x = x_0 \vee f^*(x) = 0$, while $f^*(x_0) \neq 0$ unless $f'(x_0) = 0$ (the quotient $f(x)/(x - x_0)$ tends to $f'(x_0)$ as $x \to x_0$).
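For concreteness, here is a minimal numerical sketch of this one-dimensional deflation (the helper names `newton` and `f_star` and the example polynomial are my own illustration, not part of the question):

```python
def newton(f, df, x, tol=1e-12, max_iter=100):
    """Basic Newton iteration for a scalar function f with derivative df."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^2 - 3x + 2 has the roots 1 and 2.
f  = lambda x: x**2 - 3*x + 2
df = lambda x: 2*x - 3

x0 = newton(f, df, 0.0)            # converges to the first root, x0 = 1

# Deflated function f*(x) = f(x) / (x - x0); its only remaining root is 2.
# Near x = x0 the quotient tends to f'(x0), so it stays well defined
# as long as f'(x0) != 0, i.e. as long as x0 is a simple root.
f_star  = lambda x: f(x) / (x - x0)
# Derivative of f* via the quotient rule, needed for Newton on f*.
df_star = lambda x: (df(x) * (x - x0) - f(x)) / (x - x0)**2

x1 = newton(f_star, df_star, 0.0)  # converges to the remaining root, 2
print(x0, x1)
```

Restarting the search on $f^*$ from the same initial guess no longer re-converges to $x_0$, which is exactly the behaviour I'd like to reproduce in higher dimensions.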
Is a similar mechanism possible in the multidimensional case, e.g. for differentiable $f: \mathbb{R}^n \rightarrow \mathbb{R}^n$, or at least for some class of multidimensional functions? That is, can a differentiable function $f^*$ be derived from $f$ and a known root $x_0$ such that $f(x) = 0 \iff x = x_0 \vee f^*(x) = 0$ holds, but $f^*(x_0) \neq 0$ unless the Jacobian matrix of $f$ at $x_0$ is also zero?