
The function* $$f : \mathbb{R} \rightarrow \mathbb{R}, \quad f(x) = [x > 0]\exp\left(-\frac{1}{x^2}\right)$$

is everywhere differentiable (indeed infinitely differentiable), and furthermore it satisfies $f(x)=0$ for all $x \leq 0$. Yet somehow, at $x=0$, it manages to "change its mind" and start growing.
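As a quick sanity check, the smoothness claim can be illustrated with SymPy (the names below are just for this sketch): each derivative of the right-hand branch $\exp(-1/x^2)$ is a rational function of $x$ times $\exp(-1/x^2)$, so it tends to $0$ as $x \to 0^+$, while the left-hand branch is identically zero.

```python
import sympy as sp

x = sp.symbols('x', real=True)
g = sp.exp(-1/x**2)  # right-hand branch of f; the left-hand branch is identically 0

# Each derivative of exp(-1/x^2) is (rational function of x) * exp(-1/x^2),
# and the exponential factor dominates as x -> 0+, so every one-sided limit is 0.
for n in range(5):
    print(n, sp.limit(sp.diff(g, x, n), x, 0, dir='+'))  # prints 0 for each n
```

In particular every derivative of $f$ at $0$ vanishes, so the Taylor series of $f$ at $0$ is identically zero even though $f$ is not.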

Under what further assumptions can we conclude that, if a function $f : \mathbb{R} \rightarrow \mathbb{R}$ satisfies $f(x)=0$ on a non-degenerate interval, it must be everywhere zero?


*The square brackets denote the Iverson bracket.


1 Answer


This holds for real analytic functions, and hence of course for subclasses such as polynomial functions.

The property that being zero on a nondegenerate interval implies being zero on the connected component $C$ of the domain containing that interval (which if the domain is $\Bbb R$ means $C=\Bbb R$) is easy to deduce from the definition of an analytic function. Suppose $f$ is nonzero somewhere in $C$ to the right of our interval, and let $x_0$ be the infimum of the points where this happens, so that $f$ vanishes everywhere between our interval and $x_0$. Then the power series $\sum_{i\geq0}a_i(x-x_0)^i$ giving $f(x)$ in a neighbourhood of $x_0$ must converge to zero for all $x<x_0$ sufficiently close to $x_0$, which forces all coefficients $a_i$ to be zero; but then $f(x)=0$ in a whole neighbourhood of $x_0$, contradicting the choice of $x_0$. A similar argument works on the left.
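For completeness, the coefficient step spelled out: $f$ vanishes identically on an interval with right endpoint $x_0$, so all derivatives of $f$ at $x_0$ (computed as limits from the left) are zero, and the coefficients of the power series are determined by those derivatives:

$$a_i = \frac{f^{(i)}(x_0)}{i!} = 0 \qquad \text{for all } i \geq 0.$$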

More generally (cited from the linked article), if the set of zeros of an analytic function $f$ has an accumulation point inside its domain, then $f$ is zero everywhere on the connected component containing the accumulation point. The argument is similar, with the accumulation point playing the role of $x_0$.
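A standard example showing why the accumulation point must lie inside the domain: the function

$$g : \mathbb{R}\setminus\{0\} \to \mathbb{R}, \qquad g(x) = \sin\frac{1}{x}$$

is analytic, and its zeros $\frac{1}{n\pi}$ accumulate at $0$; but $0$ is not in the domain, and indeed $g$ is not identically zero on either connected component.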