
I was wondering whether a function $\Phi:\mathcal C^1(\Bbb R)\to\mathcal C(\Bbb R)$ that satisfies

  • $\Phi(\alpha f+\beta g)=\alpha\Phi(f)+\beta\Phi(g)$ for all $\alpha,\beta\in\Bbb R$ and $f,g\in\mathcal C^1(\Bbb R)$.
  • $\Phi(fg)=\Phi(f)g+f\Phi(g)$.

is necessarily differentiation (that is, $\Phi(f)=f'$).

My try:

For non-zero constant functions $k$ we have $\Phi(k)=k\Phi(1)$ by linearity (of course, I mean $1(x)=1$) and

$$\Phi(1)=\Phi\left(k\cdot\frac1k\right)=k\Phi\left(\frac1k\right)+\frac1k\Phi(k)=k\cdot\frac1k\,\Phi(1)+\frac1k\cdot k\,\Phi(1)=2\Phi(1)$$

so $\Phi(1)=0$ and hence $\Phi(k)=k\cdot 0=0$. So far, so good.

Now, for $i(x)=x$ we have $$\Phi(i)(x)=\Phi(1\cdot i)(x)=1\cdot\Phi(i)(x)+\Phi(1)(x)\cdot i(x)=\Phi(i)(x)$$ which leads nowhere.

I have tried induction and $f(x)=x^2$, but it only gets me so far (see the sketch below), so I'm stuck. And I even suspect that perhaps the derivative is not the only solution.
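Here is how far the induction goes, using only the two axioms above: $$\Phi(i^2)=\Phi(i\cdot i)=i\,\Phi(i)+\Phi(i)\,i=2i\,\Phi(i),\qquad\text{and inductively}\qquad\Phi(i^n)=n\,i^{n-1}\,\Phi(i),$$ so by linearity $\Phi(p)=\Phi(i)\cdot p'$ for every polynomial $p$. The axioms alone say nothing about $\Phi(i)$ itself, which is where this approach stalls.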

My question: Is differentiation the only solution?

EDIT: Ben Grossmann has pointed out that $\Phi(f)=a\cdot f'$ is also a solution. I still wonder if there are more solutions.

ajotatxe
  • We could also have $\Phi(f) = a f'$ for any constant $a \in \Bbb R$ – Ben Grossmann Oct 25 '23 at 16:16
  • @BenGrossmann Please don't delete your comment, so my edit makes sense. – ajotatxe Oct 25 '23 at 16:19
  • By the way, for any $x_0 \in \Bbb R$, the function $D_{\Phi,x_0}:\mathcal C^\infty(\Bbb R) \to \Bbb R$ given by $D_{\Phi,x_0}(f) = \Phi(f)(x_0)$ is a derivation – Ben Grossmann Oct 25 '23 at 16:21
  • An interesting family of solutions: for any fixed $\alpha \in \mathcal C^1$, we can define the map $$ \Phi(f)(x) = \frac {d}{dx}\bigl(f(\alpha(x))\bigr) = \alpha'(x)f'(\alpha(x)). $$ – Ben Grossmann Oct 25 '23 at 16:26
  • I'm beginning to suspect that there is some strong barrier between algebra and calculus. We can define the derivative of polynomials inside the algebra, and I expected that these two axioms would suffice. The chain rule would be very welcome, but we can't even get past degree one with it – ajotatxe Oct 25 '23 at 16:34
  • @BenGrossmann ... Does your interesting family of $\Phi$ satisfy the identity $\Phi(fg)=\Phi(f)g+f\Phi(g)$? – GEdgar Oct 25 '23 at 16:39
  • @GEdgar Oh you're right, I had missed something. We get $$ (f \circ \alpha)'(g \circ \alpha) + (f \circ \alpha)(g \circ \alpha)' = \Phi(f)'(g \circ \alpha) + \Phi(f)(g \circ \alpha)' $$ which is not quite what we wanted – Ben Grossmann Oct 25 '23 at 18:10
  • A modification of my earlier answer: if $\alpha$ is $\mathcal C^1$ and invertible (i.e. one-to-one and onto) then it seems that $\Phi$ as given by $$ \Phi(f) = \left[\frac d{dx} (f \circ \alpha) \right] \circ \alpha^{-1}, $$ which satisfies $\Phi(f): x \mapsto \alpha'(\alpha^{-1}(x))f'(x)$, should do the trick. – Ben Grossmann Oct 25 '23 at 18:45
  • @BenGrossmann I think we don't even need to bother with smoothness or compositions or invertibility - your example can be generalized as: let $h$ be any continuous function; then $\Phi(f):=hf'$ satisfies the given conditions (a short verification appears after this comment thread). Your example is the case $h=\alpha'\circ\alpha^{-1}$. – M W Oct 25 '23 at 20:02
  • @MW I'm not sure how I missed that, but you're absolutely right – Ben Grossmann Oct 25 '23 at 20:28
  • One could almost prove that this is the only example. If we knew that the difference quotient extended to a $C^1$ function then we could always write $f(x)=f(x_0)+g(x)(x-x_0)$, from which it would follow that $\Phi(f)=\Phi(x)f'$, where $x$ is the identity function. But I can't recall if that fact is true and I think it is probably false. – M W Oct 25 '23 at 21:04
  • @MW Your $g$ is extendable to a $\mathcal{C}^1$ function if (I don't know whether the "only if" direction holds) $f$ is twice-differentiable at $x_0$, with $g'(x_0) = f''(x_0)$. You just use the basic form of Taylor's theorem and the result presented here: Prove that $f'(a)=\lim_{x\rightarrow a}f'(x)$. Hence we'll have $\Phi(f)(x_0) = \Phi(x)(x_0) \cdot f'(x_0)$ at every point at which $f$ is twice-differentiable, and in particular this means we have $\Phi(f) = \Phi(x) \cdot f'$ on the space of twice-differentiable functions. – Bruno B Nov 11 '23 at 13:03
  • If we could find for every $f \in \mathcal{C}^1$ a twice-differentiable $h$ such that $fh$ is also twice-differentiable and $h(x) \neq 0$ for all $x \in \mathbb{R}$ (or at least on a set dense in $\mathbb{R}$), then we could extend the result to $f$ by using the Leibniz formula for $\Phi$ and the fact it's already true for $h$ and $fh$ to get $h \cdot \Phi(f) = (\Phi(x)f') \cdot h$, and then simplifying by $h$ (even if only on a dense set, since then we can use the continuity of the functions involved to extend the equality to $\mathbb{R}$). However I don't know if that's feasible? – Bruno B Nov 11 '23 at 19:03
  • Final update unless I actually solve the question (I wouldn't want to comment here too much): through a straightforward calculation, one can show that the difference quotient of $f$ at $x_0$ is $\mathcal{C}^1$ at $x_0$ if and only if $f$ is twice-differentiable at $x_0$, in which case we have $f''(x_0) = 2g'(x_0)$ (I missed the $2$ last time, oops). This means that MW's method won't work for functions $f$ that aren't twice-differentiable sadly, but I still think it's nice to have $\Phi(f) = \Phi(x) f'$ on such a "wide" space at least. By the way, in case that's useful, we do have that (1/2) – Bruno B Nov 13 '23 at 11:51
  • ... if $f = g$ on an interval $(a,b)$ then $\Phi(f) = \Phi(g)$ on that same interval, the same property of "localisation" that the $f \mapsto h f'$ maps have. To show this, it suffices to show that $f = 0$ on an interval means $\Phi(f) = 0$ on that interval. To that end, for $c \in (a,b)$ you pick $\varphi_c \in \mathcal{C}^1$ (smooth even if you so want) such that $f \varphi_c = 0$ and $\varphi_c(c) \neq 0$, then apply the Leibniz formula on $f \varphi_c$. (2/2) – Bruno B Nov 13 '23 at 11:56
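For completeness, here is the short verification referenced in M W's comment: for any fixed $h \in \mathcal C(\Bbb R)$, the map $\Phi(f) := hf'$ sends $\mathcal C^1(\Bbb R)$ into $\mathcal C(\Bbb R)$, is clearly linear, and satisfies the Leibniz identity: $$\Phi(fg) = h\,(fg)' = h\,(f'g + fg') = (hf')\,g + f\,(hg') = \Phi(f)\,g + f\,\Phi(g).$$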

1 Answer


Surprisingly, the answer is actually pretty straightforward, though it did take some time to think about. We do have $\Phi(f) = \Phi(x)f'$, and the proof only uses some relatively elementary facts about $\mathcal{C}^1$ functions (well, elementary if you consider the existence of test functions with prescribed support elementary, anyway).

I initially took the same route as M W in the comments above, via the difference quotient; it does give the desired result, but only for twice-differentiable functions, and it didn't seem to generalise easily. I'd like to think that their discussion with Ben Grossmann is what made me attempt to answer this question, so "thank you!" to both of them.

Let's start with what I call a "localisation" property:

If two functions $f$ and $g$ are equal on an open interval $(a,b) \subseteq \mathbb{R}$ (bounded or not), then $\Phi(f)$ and $\Phi(g)$ are also equal on that interval.

By linearity of $\Phi$, it suffices to check that $f = 0$ on $(a,b)$ implies $\Phi(f) = 0$ on $(a,b)$.
To prove this, for each $c \in (a,b)$ choose a test function $\varphi_c$ supported in $(a,b)$ with $\varphi_c(c) \neq 0$; since $f$ vanishes on $(a,b)$ and $\varphi_c$ vanishes outside it, $f\varphi_c = 0$ on all of $\mathbb{R}$. Using the Leibniz formula and the fact, which OP already proved, that $\Phi(\text{const}) = 0$, we then have: $$0 = \Phi(0)(c) = \Phi(f\varphi_c)(c) = \varphi_c(c) \Phi(f)(c) + \underbrace{f(c)}_{= \, 0} \Phi(\varphi_c)(c) = \underbrace{\varphi_c(c)}_{\neq\,0} \Phi(f)(c)$$ which indeed yields $\Phi(f)(c) = 0$ for all $c \in (a,b)$.
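For concreteness, one standard choice of such a test function when $(a,b)$ is bounded (any smooth bump supported in $(a,b)$ and non-vanishing at $c$ works just as well) is $$\varphi(x) = \begin{cases} \exp\left(-\dfrac{1}{(x-a)(b-x)}\right) &\text{ if } a < x < b,\\ 0 &\text{ otherwise,}\end{cases}$$ which is $\mathcal{C}^\infty$ on $\mathbb{R}$, vanishes outside $(a,b)$, and is strictly positive on all of $(a,b)$. For an unbounded interval, take a bump supported in any bounded subinterval containing $c$.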

The goal is then to use this localisation property to show that $\Phi(f)(x_0) = 0$ whenever $f'(x_0) = 0$. Note that $f \mapsto f'(x_0)$ and $f \mapsto \Phi(f)(x_0)$ are linear functionals on $\mathcal{C}^1$, so this goal amounts to an inclusion of the kernel of the first into the kernel of the second; by a linear algebra lemma (a generalisation of which appears as Lemma $3.9$ in Rudin's Functional Analysis), such an inclusion implies that there exists a scalar $\alpha_{x_0}$ such that $\Phi(f)(x_0) = \alpha_{x_0}f'(x_0)$ for all $f \in \mathcal{C}^1$ (note that this does not rely on any topology or continuity argument).
Taking $f = x$ (where $x$ denotes the function $x \mapsto x$) then gives $\alpha_{x_0} = \Phi(x)(x_0)$, and hence $\Phi(f) = \Phi(x) f'$ for all $f \in \mathcal{C}^1$, which is what we wanted.
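For completeness, the two-functional case of that lemma is elementary: if $\Lambda$ and $\Lambda_1$ are linear functionals on a vector space $V$ with $\ker \Lambda_1 \subseteq \ker \Lambda$, then $\Lambda = \alpha \Lambda_1$ for some scalar $\alpha$. Indeed, if $\Lambda_1 = 0$ then $\Lambda = 0$ and any $\alpha$ works; otherwise pick $v \in V$ with $\Lambda_1(v) = 1$ and set $\alpha := \Lambda(v)$. Then for any $u \in V$, $$u = \Lambda_1(u)\,v + \bigl(u - \Lambda_1(u)\,v\bigr)$$ where the second summand lies in $\ker \Lambda_1 \subseteq \ker \Lambda$, so $\Lambda(u) = \Lambda_1(u)\,\Lambda(v) = \alpha\,\Lambda_1(u)$.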

It remains to prove the kernel inclusion, so consider a function $f \in \mathcal{C}^1$ such that $f'(x_0) = 0$. Since $\Phi$ vanishes on constants, we can assume that $f(x_0) = 0$ too (replace $f$ by $f - f(x_0)$).
Now, consider the function $g$ defined as follows: $$g : x \in \mathbb{R} \,\longmapsto\, \begin{cases} f(x) &\text{ if } x < x_0\\ 0 &\text{ if } x = x_0\\ -f(x) &\text{ if } x > x_0\end{cases}$$ Think of this as flipping the sign of one half of the graph of $f$ while leaving the other half in place.
$g$ is $\mathcal{C}^1$. This is not entirely obvious, since piecewise definitions are a priori not very differentiation-friendly, but it is not difficult either: using the fact that $f'(x_0) = 0$, we get $$\lim_{x \to x_0^+} g'(x) = \lim_{x \to x_0^+} -f'(x) = -f'(x_0) = 0 = f'(x_0) = \lim_{x \to x_0^-} f'(x) = \lim_{x \to x_0^-} g'(x)$$ which allows us to use what is stated and proved in this thread: https://math.stackexchange.com/q/257907/1104384.

Moreover, thanks to the localisation property (applied to the open intervals $(-\infty, x_0)$ and $(x_0, +\infty)$) and the linearity of $\Phi$, we have: $$\begin{cases}\Phi(g)(x) = \Phi(f)(x) &\text{ if } x < x_0 \\ \Phi(g)(x) = -\Phi(f)(x) &\text{ if } x > x_0\end{cases}$$ Yet $\Phi(g)$ and $\Phi(f)$ are continuous at $x_0$, thus so are $\Phi(f) \pm \Phi(g)$, and taking one-sided limits at $x_0$ yields: $$\Phi(f)(x_0) = \Phi(g)(x_0) = -\Phi(f)(x_0),$$ hence $\Phi(f)(x_0) = 0$. Therefore we have indeed: $$f'(x_0) = 0 \,\,\,\Longrightarrow\,\,\, \Phi(f)(x_0) = 0$$ and we are done by our previous observations.
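Combining this with M W's observation from the comments (verified after the comment thread above), we get the full characterisation of the solutions: $$\Phi(f) = h\,f' \quad\text{for all } f \in \mathcal{C}^1(\Bbb R), \qquad\text{where } h = \Phi(x) \in \mathcal{C}(\Bbb R),$$ and conversely every map of this form satisfies both axioms.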

Bruno B