16

Let $f:\mathbb R\to \mathbb R$ be such that for every sequence $(a_n)$ of real numbers,

$\sum a_n$ converges $\implies \sum f(a_n)$ converges

Prove that there is some open neighborhood $V$ with $0\in V$ such that $f$ restricted to $V$ is a linear function.

This was given to a friend at his oral exam this week. He told me the problem is hard. I tried it myself, but I haven't made any significant progress.

Gabriel Romon

4 Answers

9

I have added an answer to collect various resources where this fact is proven. This answer is community-wiki - feel free to add further references you are aware of.

Papers:

Books:

  • Teodora-Liliana Radulescu, Vicentiu D. Radulescu and Titu Andreescu: Problems in Real Analysis: Advanced Calculus on the Real Axis, Problem 2.5.30, page 108

Online resources:

5

Here is a proof in the general case. We begin with the following

Lemma: Let $-\varepsilon<y<0<x<\varepsilon$ and $N\in\mathbb{N}$. Then there are some $M\in\mathbb{N}$ and $z_{1},\dots,z_{M}\in\left\{ x,y\right\} $ with $$ \left|\left\{ i\in\left\{ 1,\dots,M\right\} \,\mid\, z_{i}=x\right\} \right|=N $$ and $$ \left|\sum_{i=1}^{K}z_{i}\right|<\varepsilon\qquad\forall K\in\left\{ 1,\dots,M\right\} .\qquad\qquad\left(\ast\right) $$

Proof: We first inductively construct an infinite sequence $\left(w_{n}\right)_{n\in\mathbb{N}}\in\left\{ x,y\right\} ^{\mathbb{N}}$ with $\left|\sum_{i=1}^{n}w_{i}\right|<\varepsilon$ for all $n\in\mathbb{N}$. It is clear that the stated condition holds for $w_{1}:=x$, since $-\varepsilon<x<\varepsilon$. Now, let $w_{1},\dots,w_{n}$ be already constructed. Distinguish two cases:

  1. We have $\alpha:=\sum_{i=1}^{n}w_{i}\geq0$. Then choose $w_{n+1}:=y$. Then $$ \varepsilon>\alpha>\alpha+y=\sum_{i=1}^{n+1}w_{i}\geq y>-\varepsilon, $$ i.e. $\left|\sum_{i=1}^{n+1}w_{i}\right|<\varepsilon$.

  2. We have $\alpha:=\sum_{i=1}^{n}w_{i}<0$. Then we choose $w_{n+1}:=x$. Similarly to the above, it follows that $$ -\varepsilon<\alpha<\alpha+x=\sum_{i=1}^{n+1}w_{i}\leq x<\varepsilon $$ and hence $\left|\sum_{i=1}^{n+1}w_{i}\right|<\varepsilon$.

Now assume that $\left\{ i\in\mathbb{N}\,\mid\, w_{i}=x\right\} $ is finite. This implies $w_{i}=y$ for all $i\geq N_{0}$, for a suitable $N_{0}\in\mathbb{N}$. But this implies $$ \sum_{i=1}^{N}w_{i}=\sum_{i=1}^{N_{0}-1}w_{i}+\sum_{i=N_{0}}^{N}w_{i}=\sum_{i=1}^{N_{0}-1}w_{i}+\left(N-N_{0}+1\right)y\xrightarrow[N\to\infty]{}-\infty, $$ in contradiction to $\left|\sum_{i=1}^{N}w_{i}\right|<\varepsilon$ for all $N$.

Thus, $\left\{ i\in\mathbb{N}\,\mid\, w_{i}=x\right\} $ is infinite. It is easy to see that this implies the existence of some $M\in\mathbb{N}$ with $\left|\left\{ i\in\left\{ 1,\dots,M\right\} \,\mid\, w_{i}=x\right\} \right|=N$, so that we can set $z_{i}:=w_{i}$ for $i\in\left\{ 1,\dots,M\right\} $. $\square$

Remark: 1. A completely similar argument shows that we can choose $z_{1},\dots,z_{M}\in\left\{ x,y\right\} $ with $$ \left|\left\{ i\in\left\{ 1,\dots,M\right\} \,\mid\, z_{i}=y\right\} \right|=N $$ and such that $\left(\ast\right)$ still holds.

  2. For the above sequence, we also have $$ \left|\sum_{i=K}^{M}z_{i}\right|=\left|\sum_{i=1}^{M}z_{i}-\sum_{i=1}^{K-1}z_{i}\right|\leq\left|\sum_{i=1}^{M}z_{i}\right|+\left|\sum_{i=1}^{K-1}z_{i}\right|<2\varepsilon\qquad\qquad\left(\square\right) $$ for all $K\in\left\{ 1,\dots,M\right\} $.
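
The construction in the proof of the Lemma is just a greedy rule: append $y$ whenever the running sum is nonnegative and $x$ otherwise. Here is a minimal Python sketch of it (my own illustration; the values of $x$, $y$, $\varepsilon$, $N$ below are arbitrary), which also checks condition $\left(\ast\right)$ along the way.

```python
# Minimal sketch of the Lemma's greedy construction (illustrative values only).
def build_z(x, y, eps, N):
    """Interleave copies of x > 0 and y < 0, applying the case distinction of
    the proof at every step, until x has been used exactly N times."""
    assert -eps < y < 0 < x < eps
    z, partial, count_x = [], 0.0, 0
    while count_x < N:
        step = y if partial >= 0 else x   # case 1 / case 2 of the proof
        partial += step
        z.append(step)
        count_x += (step == x)
        assert abs(partial) < eps         # condition (*) of the Lemma
    return z

zs = build_z(x=0.3, y=-0.2, eps=0.5, N=5)
print(len(zs), zs.count(0.3), max(abs(sum(zs[:k + 1])) for k in range(len(zs))))
```

Stopping instead once $y$ has been used $N$ times gives the variant from Remark 1.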

By the argument given by @zhw., there are some $\delta>0$ and $C>0$ with $\left|\frac{f\left(x\right)}{x}\right|\leq C$ for all $x\in\left(-\delta,\delta\right)\setminus\left\{ 0\right\} $. We will need this constant $C$ below.

First note that $f\left(0\right)=0$ is trivial (consider the sequence $a_{n}=0$). Now, let us assume towards a contradiction that $f$ is not linear on any neighborhood of $0$. This implies that for every $n\in\mathbb{N}$ there are $x_{n}\in\left(0,\frac{1}{n^{2}}\right)$ and $y_{n}\in\left(-\frac{1}{n^{2}},0\right)$ with $$ \alpha_{n}:=\frac{f\left(x_{n}\right)}{x_{n}}\neq\frac{f\left(y_{n}\right)}{y_{n}}=:\beta_{n}. $$ Indeed, if there were no such $x_{n},y_{n}$ for some $n\in\mathbb{N}$, we could fix some $y_{n}\in\left(-\frac{1}{n^{2}},0\right)$ and would get $$ \gamma:=\frac{f\left(y_{n}\right)}{y_{n}}=\frac{f\left(x\right)}{x} $$ for all $x\in\left(0,\frac{1}{n^{2}}\right)$. Likewise, fixing an arbitrary $x_{n}\in\left(0,\frac{1}{n^{2}}\right)$, we would get $$ \frac{f\left(y\right)}{y}=\frac{f\left(x_{n}\right)}{x_{n}}=\frac{f\left(y_{n}\right)}{y_{n}}=\gamma $$ for all $y\in\left(-\frac{1}{n^{2}},0\right)$. All in all, this implies $f\left(y\right)=\gamma\cdot y$ for all $y\in\left(-\frac{1}{n^{2}},\frac{1}{n^{2}}\right)$, since we have $f\left(0\right)=0$. Thus, $f$ is linear on a neighborhood of $0$ after all, a contradiction.

Note that (by choice of $C$ above, since we may assume $x_{n},y_{n}\in\left(-\delta,\delta\right)$ after discarding finitely many $n$) we have $\left|\alpha_{n}\right|\leq C$ and $\left|\beta_{n}\right|\leq C$. Let $\varepsilon_{n}:=\frac{1}{n^{2}}$ and choose $N_{n}\in\mathbb{N}$ with $N_{n}\geq\frac{1+C\varepsilon_{n}}{\left|\alpha_{n}-\beta_{n}\right|\left|x_{n}\right|}$. Note that this is possible, since $\left|\alpha_{n}-\beta_{n}\right|>0$ and $\left|x_{n}\right|>0$. By the Lemma above, we find some $M_{n}\in\mathbb{N}$ and $z_{1}^{\left(n\right)},\dots,z_{M_{n}}^{\left(n\right)}\in\left\{ x_{n},y_{n}\right\} $ with $$ N_{n}=\left|\left\{ i\in\left\{ 1,\dots,M_{n}\right\} \,\mid\, z_{i}^{\left(n\right)}=x_{n}\right\} \right| $$ and $$ \left|\sum_{i=1}^{K}z_{i}^{\left(n\right)}\right|<\varepsilon_{n}=\frac{1}{n^{2}}\qquad\forall K\in\left\{ 1,\dots,M_{n}\right\} .\qquad\qquad\left(\dagger\right) $$ In particular (for $K=M_{n}$), we get $$ \left|N_{n}x_{n}+\left(M_{n}-N_{n}\right)y_{n}\right|=\left|\sum_{i=1}^{M_{n}}z_{i}^{\left(n\right)}\right|<\frac{1}{n^{2}}.\qquad\qquad\left(\ddagger\right) $$

Now define a sequence $\left(a_{k}\right)_{k\in\mathbb{N}}$ by $$ a_{k}=z_{\ell}^{\left(K\right)}\qquad\text{ for }k=\sum_{i=1}^{K-1}M_{i}+\ell\text{ with }K\in\mathbb{N}\text{ and }\ell\in\left\{ 1,\dots,M_{K}\right\} . $$ It is not too hard to see that this is well-defined. Furthermore, the series $\sum_{k}a_{k}$ is convergent, since it is Cauchy; indeed, for $\sum_{i=1}^{T-1}M_{i}+p=t\geq k=\sum_{i=1}^{K-1}M_{i}+\ell$ with $T>K$ (the case $T=K$ is handled exactly as in $\left(\square\right)$), we have \begin{eqnarray*} \left|\sum_{n=k}^{t}a_{n}\right| & \leq & \left|\sum_{n=\ell}^{M_{K}}z_{n}^{\left(K\right)}\right|+\sum_{S=K+1}^{T-1}\left|\sum_{n=1}^{M_{S}}z_{n}^{\left(S\right)}\right|+\left|\sum_{n=p}^{M_{T}}z_{n}^{\left(T\right)}\right|\\ & < & 2\varepsilon_{K}+\sum_{S=K+1}^{T-1}\varepsilon_{S}+\varepsilon_{T}\\ & \leq & 2\sum_{n=K}^{T}\frac{1}{n^{2}}\leq2\sum_{n=K}^{\infty}\frac{1}{n^{2}}\xrightarrow[K\to\infty]{}0. \end{eqnarray*}

By assumption on $f$, this implies convergence of $\sum_{n}f\left(a_{n}\right)$. In particular, this series is Cauchy. But for $K\in\mathbb{N}$, we have \begin{eqnarray*} \left|\sum_{n=1+\sum_{i=1}^{K-1}M_{i}}^{\sum_{i=1}^{K}M_{i}}f\left(a_{n}\right)\right| & = & \left|\sum_{n=1}^{M_{K}}f\left(z_{n}^{\left(K\right)}\right)\right|\\ & = & \left|N_{K}f\left(x_{K}\right)+\left(M_{K}-N_{K}\right)f\left(y_{K}\right)\right|\\ & = & \left|N_{K}\alpha_{K}x_{K}+\left(M_{K}-N_{K}\right)\beta_{K}y_{K}\right|\\ & = & \left|N_{K}\left(\alpha_{K}-\beta_{K}\right)x_{K}+\beta_{K}\left(N_{K}x_{K}+\left(M_{K}-N_{K}\right)y_{K}\right)\right|\\ & \geq & N_{K}\left|\alpha_{K}-\beta_{K}\right|\left|x_{K}\right|-\left|\beta_{K}\right|\cdot\left|N_{K}x_{K}+\left(M_{K}-N_{K}\right)y_{K}\right|\\ & \overset{\text{see }\left(\ddagger\right)}{\geq} & N_{K}\left|\alpha_{K}-\beta_{K}\right|\left|x_{K}\right|-C\varepsilon_{K}\\ & \geq & 1 \end{eqnarray*} by choice of $N_{K}$, which shows that $\sum_{n}f\left(a_{n}\right)$ is not Cauchy and hence not convergent, a contradiction.
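
To see the whole argument at work numerically, here is a small Python sketch (my own illustration, not part of the answer) with the concrete non-linear choice $f(x)=x$ for $x\ge 0$ and $f(x)=2x$ for $x<0$, so that $\alpha_{n}=1\neq2=\beta_{n}$ for every $n$; the choices $x_{n}=\frac{1}{2n^{2}}$, $y_{n}=-\frac{1}{2n^{2}}$ and $C=2$ are also mine. The partial sums of $\sum_{k}a_{k}$ stay small, while every block contributes at least $1$ in absolute value to $\sum_{k}f\left(a_{k}\right)$.

```python
# Numerical illustration (invented example): f(x) = x for x >= 0, f(x) = 2x for x < 0,
# so f is not linear on any neighbourhood of 0 and alpha_n = 1 != 2 = beta_n.
def f(x):
    return x if x >= 0 else 2 * x

def lemma_block(x, y, eps, N):
    """Greedy construction from the Lemma: N copies of x, the rest copies of y,
    with every partial sum staying inside (-eps, eps)."""
    z, s = [], 0.0
    while z.count(x) < N:
        step = y if s >= 0 else x
        s += step
        z.append(step)
        assert abs(s) < eps
    return z

C = 2.0                                  # |f(x)/x| <= 2 near 0 for this f
a = []                                   # the concatenated sequence (a_k)
for n in range(1, 6):                    # only the first few blocks, for illustration
    eps_n = 1.0 / n**2
    x_n, y_n = 0.5 / n**2, -0.5 / n**2
    N_n = int((1 + C * eps_n) / (1.0 * x_n)) + 1   # N_n >= (1 + C*eps_n)/(|alpha_n - beta_n| |x_n|)
    block = lemma_block(x_n, y_n, eps_n, N_n)
    print(n, "block sum of f:", sum(f(z) for z in block))   # magnitude >= 1 for every block
    a.extend(block)

# The series sum(a_k) has small partial sums, yet each block pushes the
# partial sums of sum(f(a_k)) by at least 1 in absolute value.
print("max |partial sum of (a_k)|:", max(abs(sum(a[:k + 1])) for k in range(len(a))))
```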

PhoemueX
4

A start: There exists $\delta > 0$ such that $\frac{f(x)}{x}$ is bounded in $\{0<|x|<\delta\}.$ Proof: Otherwise there is a sequence of nonzero terms $x_n \to 0$ such that

$$\left|\frac{f(x_n)}{x_n}\right|> n$$

for all $n.$ We can pass to a subsequence $n_k$ such that $|x_{n_k}| < 1/k^2$ for all $k.$ We then have

$$(1)\,\,\,\,\left|\frac{f(x_{n_k})}{x_{n_k}}\right|> n_k \ge k.$$

Consider now the case where $x_{n_k}, f(x_{n_k})> 0$ for all $k.$ Then for each $k,$ there is a positive integer $m_k$ such that

$$(2)\,\,\,\,\frac{1}{k^2} < m_kx_{n_k} < \frac{2}{k^2}.$$

Think about the series

$$x_{n_1} + x_{n_1} + \cdots + x_{n_1} + x_{n_2} + \cdots + x_{n_2} + x_{n_3} + \cdots ,$$

where $x_{n_1}$ occurs $m_1$ times, $x_{n_2}$ occurs $m_2$ times, etc. By $(2)$ this series converges. But note $(1)$ shows the $k$th grouping in the "$f$-series" sums to

$$m_kf(x_{n_k}) > m_k\cdot k\cdot x_{n_k} > \frac{1}{k}.$$

Thus the $f$-series diverges, a contradiction. So we are done in the case $x_{n_k}, f(x_{n_k})> 0.$ The other cases follow by similar arguments.
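
A quick numerical check of this construction (my own illustration, with invented choices): take $f(x)=\sqrt{|x|}$, so that $f(x)/x$ is unbounded near $0$, and put $x_{n_k}=\frac{1}{2k^4}$ and $m_k=3k^2$, which satisfy the inequalities used in $(1)$ and $(2)$.

```python
import math

# Illustration (invented example): f(x) = sqrt(|x|) has f(x)/x unbounded near 0.
# Take x_k = 1/(2 k^4), so x_k < 1/k^2 and f(x_k)/x_k = sqrt(2) k^2 >= k, and
# m_k = 3 k^2, which gives 1/k^2 < m_k * x_k < 2/k^2 as in (2).
f = lambda x: math.sqrt(abs(x))

series_sum, f_blocks = 0.0, []
for k in range(1, 2001):
    x_k = 1.0 / (2 * k**4)
    m_k = 3 * k**2
    assert 1 / k**2 < m_k * x_k < 2 / k**2        # condition (2)
    series_sum += m_k * x_k                       # partial sum of the original series
    f_blocks.append(m_k * f(x_k))                 # k-th grouping of the "f-series"

print("sum of the original series :", series_sum)      # converges (to 1.5 * pi^2 / 6)
print("smallest f-block sum       :", min(f_blocks))   # each grouping exceeds 1/k; here ~2.12
print("partial sum of the f-series:", sum(f_blocks))   # grows without bound
```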

zhw.
4

Here is a proof in the case that $f$ is differentiable at $0$.

Let $f(x) = rx + \epsilon(x)$, where $\lim_{x \to 0} {\epsilon(x)/x}=0$. I claim $\epsilon = 0$ in a neighborhood of $0$. Suppose not. Then, up to extracting subsequences and making a few sign choices that are without loss of generality, we may assume that there exists a sequence $a_n$ decreasing monotonically to $0$ with $\epsilon(a_n)>0$, and $\epsilon(a_n)/a_n$ decreasing monotonically to $0$. Up to extracting another subsequence, we are left with two cases: either $\epsilon(-a_n) \geq 0$ for all $n$, or $\epsilon(-a_n) \leq 0$ for all $n$. I claim that neither is actually possible.

First, the easy case: assume $\epsilon(-a_n) \geq 0$ for all $n$. Let $k_n$ be the least integer greater than $1 \over \epsilon(a_n)$. Define $b_i$ to be alternatingly $a_1$ and $-a_1$ for the first $2k_1$ entries. Then define $b_i$ to be alternatingly $a_2$ and $-a_2$ for the subsequent $2k_2$ entries, and so forth. It is easy to check that $\sum b_i$ converges (not absolutely), but $\sum f(b_i)$ diverges: the $n$-th block of $2k_n$ terms contributes $k_n\left(f(a_n)+f(-a_n)\right) = k_n\left(\epsilon(a_n)+\epsilon(-a_n)\right) \geq k_n\,\epsilon(a_n) \geq 1$, so the partial sums of $\sum f(b_i)$ are not Cauchy.
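
The easy case is simple to simulate; here is a sketch with an invented concrete example (not from the answer): $f(x)=x+x^2$, i.e. $r=1$ and $\epsilon(x)=x^2$, with $a_n=\frac{1}{n}$, so that $\epsilon(a_n)>0$, $\epsilon(a_n)/a_n\downarrow 0$ and $\epsilon(-a_n)\ge 0$. Each block of $2k_n$ alternating terms contributes $k_n\left(\epsilon(a_n)+\epsilon(-a_n)\right)\ge 1$ to $\sum f(b_i)$, while the partial sums of $\sum b_i$ stay bounded by $a_1$.

```python
# Sketch of the "easy case" with an invented example: f(x) = x + x^2,
# i.e. r = 1 and eps(x) = x^2, and a_n = 1/n (so eps(a_n) > 0, eps(-a_n) >= 0).
def f(x):
    return x + x * x

b = []
for n in range(1, 50):
    a_n = 1.0 / n
    k_n = n * n + 1                      # least integer greater than 1/eps(a_n) = n^2
    b.extend([a_n, -a_n] * k_n)          # the n-th block: 2*k_n alternating entries

partial, max_partial = 0.0, 0.0
for x in b:
    partial += x
    max_partial = max(max_partial, abs(partial))

print("max |partial sum of b|:", max_partial)           # <= a_1 = 1; within-block deviation a_n -> 0
print("sum of f(b_i) so far  :", sum(f(x) for x in b))  # each block adds 2*k_n*a_n^2 >= 2, so it diverges
```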

Now, the hard case: assume $\epsilon(-a_n) \leq 0$ for all $n$. The index manipulation here is pretty awful, but here goes. Let $l_1 = 1$, let $m_{1,1}=k_1$, and let $b_i = a_{l_1}$ for $i \in \{1,...,m_{1,1}\}$. Now, given $l_\alpha$, $m_{\alpha,1}$, and $b_i$ for $i \in \{1,...,m_{\alpha,1}\}$, choose $j_\alpha > l_\alpha$ such that $a_{j_\alpha} < 1/2^\alpha$ and $${\epsilon(-a_{j_\alpha}) \over - a_{j_\alpha}} < {1 \over 4^\alpha} \epsilon(a_{l_\alpha}).$$ Now, choose an integer $p_\alpha > 0$ such that $0 < p_\alpha a_{j_\alpha} - k_{l_\alpha} a_{l_\alpha} < a_{j_\alpha}$. Let $m_{\alpha,2} = m_{\alpha,1} + p_\alpha$, and let $b_i = -a_{j_\alpha}$ for $i \in \{m_{\alpha,1} + 1,...,m_{\alpha,2}\}$. Now pick $l_{\alpha + 1} > j_\alpha$, define $m_{\alpha + 1,1} = m_{\alpha,2} + k_{l_{\alpha + 1}}$, and define $b_i = a_{l_{\alpha + 1}}$ when $m_{\alpha,2} < i \leq m_{\alpha + 1,1}$.

To check that indeed $\sum b_i$ converges and $\sum f(b_i) = \infty$, note that $$\left|\sum\limits_{i=m_{\alpha - 1, 2} + 1}^{m_{\alpha,2}} b_i\right| = |p_\alpha a_{j_\alpha} - k_{l_\alpha} a_{l_\alpha}| < a_{j_\alpha} < {1 \over 2^\alpha},$$ whence $\sum b_i$ converges. Edit: well, we need to bound the partial sums within each $\sum\limits_{i=m_{\alpha - 1, 2} + 1}^{m_{\alpha,2}} b_i$, but this can be done by rearranging terms since the terms tend to zero as $\alpha \to \infty$. On the other hand, $$\sum\limits_{i=m_{\alpha - 1, 2} + 1}^{m_{\alpha,2}} f(b_i) \geq 1 - |r|\, a_{j_\alpha} + p_\alpha \epsilon(-a_{j_\alpha}).$$ Since $a_n \to 0$, we only need to check that $p_\alpha \epsilon(-a_{j_\alpha})$ is not too negative. By our choice of $j_\alpha$, $\epsilon(-a_{j_\alpha}) \geq {-1 \over 4^\alpha} a_{j_\alpha} \epsilon(a_{l_\alpha})$. By our choice of $p_\alpha$, $p_\alpha a_{j_\alpha} < 2 k_{l_\alpha} a_{l_\alpha}$, whence $p_\alpha \epsilon(-a_{j_\alpha}) \geq -2 a_{l_\alpha} k_{l_\alpha} \epsilon(a_{l_\alpha}) \geq -4 a_{l_\alpha}$. For $\alpha$ large, $a_{l_\alpha}$ is very small, so $$\sum\limits_{i=m_{\alpha - 1, 2} + 1}^{m_{\alpha,2}} f(b_i) \geq 1/2.$$ This proves the claim for $f$ differentiable.

By zhw.'s answer, the difference quotients near $0$ are bounded. By Bolzano-Weierstrass, for each sequence $a_n \to 0$, there is a subsequence $a_{n_k}$ such that $f(a_{n_k})/a_{n_k}$ converges. I'm sure that using arguments similar to the ones so far, you could show that all the subsequential limits are the same. This would show that, in fact, $f(x)/x$ converges as $x \to 0$, which would complete the proof.

Justthisguy