Surprisingly, the answer is pretty straightforward, though it took some time to find. We do indeed have $\Phi(f) = \Phi(x)f'$, and the proof only uses relatively elementary facts about $\mathcal{C}^1$ functions (well, elementary if you consider the existence of test functions with prescribed support elementary anyway).
I initially took the same route as M W in the comments above with the difference quotient, which does give the desired result, but only for twice-differentiable functions (and sadly didn't seem to generalise, at least not easily). I'd like to think that their discussion with Ben Grossmann is what led me to attempt this answer, so "thank you!" to both of them.
Let's start first with what I call a "localisation" property:
If two functions $f$ and $g$ are equal on an open interval $(a,b) \subset \mathbb{R}$, then $\Phi(f)$ and $\Phi(g)$ are also equal on that interval.
By linearity of $\Phi$, it suffices to check that $f = 0$ on $(a,b)$ implies $\Phi(f) = 0$ on $(a,b)$.
To prove this, fix $c \in (a,b)$ and choose a test function $\varphi_c$ supported in $(a,b)$ with $\varphi_c(c) \neq 0$, so that $f\varphi_c = 0$ everywhere. Using the Leibniz formula and the fact, which OP already proved, that $\Phi(\text{const}) = 0$, we then have:
$$0 = \Phi(0)(c) = \Phi(f\varphi_c)(c) = \varphi_c(c) \Phi(f)(c) + \underbrace{f(c)}_{= \, 0} \Phi(\varphi_c)(c) = \underbrace{\varphi_c(c)}_{\neq\,0} \Phi(f)(c)$$
which indeed yields $\Phi(f)(c) = 0$ for all $c \in (a,b)$.
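As a quick numerical sanity check (not part of the proof), any operator of the form $\Phi(f) = a f'$ — the shape we will end up proving $\Phi$ must have — is linear, satisfies the Leibniz rule, and is local in the above sense. The sketch below illustrates this with the arbitrary choices $a = \cos$, $f = \sin$, $g = \exp$; none of these names come from the question.

```python
import math

def Phi(f, df, a=math.cos):
    # A concrete operator of the conjectured form Phi(f) = a * f',
    # here with the (arbitrary) choice a(x) = cos(x).
    # df must be the derivative of f, supplied by hand.
    return lambda x: a(x) * df(x)

# Sample pair f, g with known derivatives (arbitrary choices).
f, df = math.sin, math.cos
g, dg = math.exp, math.exp

# Leibniz rule: Phi(fg) = f * Phi(g) + g * Phi(f), checked pointwise.
prod = lambda x: f(x) * g(x)
dprod = lambda x: df(x) * g(x) + f(x) * dg(x)  # (fg)' by the product rule
for x in [-1.0, 0.0, 0.5, 2.0]:
    lhs = Phi(prod, dprod)(x)
    rhs = f(x) * Phi(g, dg)(x) + g(x) * Phi(f, df)(x)
    assert abs(lhs - rhs) < 1e-12

# Localisation: if f vanishes on an interval, so does f' there,
# hence so does Phi(f) = a * f'; e.g. with f = 0 identically:
zero = lambda x: 0.0
assert Phi(zero, zero)(1.23) == 0.0
```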
The goal is then to use this localisation property to show that $\Phi(f)(x_0) = 0$ whenever $f'(x_0) = 0$. Since $f \mapsto f'(x_0)$ and $f \mapsto \Phi(f)(x_0)$ are linear functionals, a standard linear algebra lemma (a generalisation of which can be found as Lemma $3.9$ in Rudin's Functional Analysis) would then give a scalar $\alpha_{x_0}$ such that $\Phi(f)(x_0) = \alpha_{x_0}f'(x_0)$ for all $f \in \mathcal{C}^1$ (note that this does not rely on any topology or continuity argument).
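For completeness, here is a sketch of that lemma in the one-functional case we need, specialised to $\lambda(f) = f'(x_0)$ and $\mu(f) = \Phi(f)(x_0)$ (Rudin's Lemma $3.9$ is the general statement for finitely many functionals):

```latex
\textbf{Lemma.} If $\lambda, \mu$ are linear functionals on a vector space with
$\ker\lambda \subseteq \ker\mu$, then $\mu = \alpha\lambda$ for some scalar $\alpha$.

\textbf{Proof sketch.} Here $\lambda$ is non-zero, e.g. $\lambda(\mathrm{id}) = 1$,
so take $f_0 = \mathrm{id}$ and set $\alpha = \mu(f_0)$. For any $f$,
\[
  \lambda\bigl(f - \lambda(f)\,f_0\bigr) = \lambda(f) - \lambda(f)\,\lambda(f_0) = 0,
\]
so $f - \lambda(f)\,f_0 \in \ker\lambda \subseteq \ker\mu$, and by linearity
\[
  \mu(f) = \lambda(f)\,\mu(f_0) = \alpha\,\lambda(f). \qquad\blacksquare
\]
```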
We could then conclude, looking at $\Phi(x)$ (where $x$ is the function $x \mapsto x$), that we'd have $\Phi(f) = \Phi(x) f'$ for all $f \in \mathcal{C}^1$, which is what we wanted.
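Spelling out that last step:

```latex
% Apply the pointwise identity \Phi(f)(x_0) = \alpha_{x_0} f'(x_0)
% to f = \mathrm{id} (the map x \mapsto x):
\Phi(x)(x_0) = \alpha_{x_0} \cdot \mathrm{id}'(x_0) = \alpha_{x_0},
% so for every f \in \mathcal{C}^1 and every x_0:
\Phi(f)(x_0) = \Phi(x)(x_0)\, f'(x_0),
\quad\text{i.e.}\quad \Phi(f) = \Phi(x)\, f'.
```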
Consider a function $f \in \mathcal{C}^1$ such that $f'(x_0) = 0$. Since $\Phi$ vanishes on constants, we may assume $f(x_0) = 0$ as well (replace $f$ by $f - f(x_0)$).
Now, consider the function $g$ defined as follows:
$$g : x \in \mathbb{R} \,\longmapsto\, \begin{cases} f(x) &\text{ if } x < x_0\\ 0 &\text{ if } x = x_0\\ -f(x) &\text{ if } x > x_0\end{cases}$$
Just think of this as flipping one half of the graph of $f$ across the $x$-axis while leaving the other half in place.
$g$ is $\mathcal{C}^1$. This is not entirely obvious, since piecewise definitions are a priori not differentiation-friendly, but it is not difficult either. The key is that $f'(x_0) = 0$, hence:
$$\lim_{x \to x_0^+} g'(x) = \lim_{x \to x_0^+} -f'(x) = -f'(x_0) = 0 = f'(x_0) = \lim_{x \to x_0^-} f'(x) = \lim_{x \to x_0^-} g'(x)$$
which allows us to use what is stated and proved in this thread: https://math.stackexchange.com/q/257907/1104384.
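A concrete instance may help picture the flip (with the hypothetical choices $x_0 = 0$ and $f(x) = x^2$, which do satisfy $f(x_0) = f'(x_0) = 0$):

```latex
f(x) = x^2, \qquad
g(x) = \begin{cases} x^2 & \text{if } x < 0\\ 0 & \text{if } x = 0\\ -x^2 & \text{if } x > 0\end{cases},
\qquad
g'(x) = \begin{cases} 2x & \text{if } x \le 0\\ -2x & \text{if } x > 0\end{cases}
% g' is continuous with g'(0) = 0, so g is C^1; but g'' does not exist at 0,
% so the flipped function is generally not twice differentiable -- which is
% why an argument restricted to C^2 functions would not suffice here.
```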
Moreover we have, thanks to the localisation property and the linearity of $\Phi$:
$$\begin{cases}\Phi(g)(x) = \Phi(f)(x) &\text{ if } x < x_0 \\ \Phi(g)(x) = -\Phi(f)(x) &\text{ if } x > x_0\end{cases}$$
But $\Phi(g)$ and $\Phi(f)$ are continuous at $x_0$, so letting $x \to x_0^-$ and $x \to x_0^+$ in the identities above yields:
$$\Phi(g)(x_0) = \Phi(f)(x_0) \quad\text{and}\quad \Phi(g)(x_0) = -\Phi(f)(x_0)$$
so that $\Phi(f)(x_0) = -\Phi(f)(x_0)$, i.e. $\Phi(f)(x_0) = 0$.
Therefore we have indeed:
$$f'(x_0) = 0 \,\,\,\Longrightarrow\,\,\, \Phi(f)(x_0) = 0$$
hence we are done by our previous observations.