
If $$\lim_{x\to\infty}(f(x)+f'(x))=l$$ then prove that $$\lim_{x\to\infty}f(x)=l \text{ and } \lim_{x\to\infty}f'(x)=0 $$


I assume four cases $$\begin{array}{c|c|c} \text{Case} & f(x) & f'(x) \\ \hline 1 & \infty & -\infty \\ \hline 2 & -\infty & \infty \\ \hline 3 & l & 0 \\ \hline 4 & 0 & l \\ \hline \end{array}$$ and eliminating the impossible cases (1, 2, 4) would leave the answer (3).


My work is not a correct or complete proof. What would be a correct one, or is mine already correct?

RE60K

5 Answers


This is the solution coming straight from Hardy's Pure Mathematics and is truly beautiful. Before solving this problem Hardy solves a simple problem:

If $f(x) \to L$ and $f'(x) \to L'$ as $x \to \infty$ then $L' = 0$.

This is easy to do via the mean value theorem, as we have $f(x) - f(x/2) = (x/2)f'(c)$ for some $c \in (x/2, x)$. If $L' \neq 0$ then we get a contradiction when we take limits with $x \to \infty$ in the above relation.

[If $x \to \infty$ then $x/2 \to \infty$ and hence the LHS $f(x) - f(x/2) \to L - L = 0$. Again, since $x/2 < c < x$, when $x \to \infty$ then $c$ also tends to $\infty$ and hence $f'(c) \to L'$. The RHS $(x/2)f'(c)$ then behaves like $(x/2)L'$, i.e. it tends to $+\infty$ if $L' > 0$ and to $-\infty$ if $L' < 0$. Hence there is no choice for $L'$ except $L' = 0$.]
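As a numerical sanity check (not part of Hardy's argument), one can verify the mean value relation on a sample function of one's own choosing; here I take $f(x) = 5 + 1/x$, so $L = 5$ and $L' = 0$, and locate the mean-value point $c$ by bisection:

```python
import math

# Check f(x) - f(x/2) = (x/2) f'(c) for the sample choice f(x) = 5 + 1/x,
# which has f -> 5 and f' -> 0 as x -> infinity.
def f(x):
    return 5 + 1.0 / x

def fprime(x):
    return -1.0 / x ** 2

x = 1000.0
lhs = f(x) - f(x / 2)  # equals -1/x, small for large x

# Locate c in (x/2, x) by bisection; g(c) = (x/2) f'(c) - lhs
# is increasing in c for this particular f.
lo, hi = x / 2, x
for _ in range(100):
    c = 0.5 * (lo + hi)
    if (x / 2) * fprime(c) < lhs:
        lo = c
    else:
        hi = c
```

For this $f$ the mean-value point works out to $c = x/\sqrt{2}$, which indeed lies in $(x/2, x)$.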

Next we attack the current problem. We are given that $f(x) + f'(x) \to L$ when $x \to \infty$ and we need to show that $f(x) \to L$ and $f'(x) \to 0$ as $x \to \infty$. Hardy simplifies the problem by setting $f(x) = \phi(x) + L$ so that $f'(x) = \phi'(x)$ and $\phi(x) + \phi'(x) \to 0$ as $x \to \infty$.

[Hardy defines a new function $\phi(x) = f(x) - L$ and then $f(x) + f'(x) = \phi(x) + L + \phi'(x) \to L$ and cancel $L$ from both sides to get $\phi(x) + \phi'(x) \to 0$].

To solve the problem we need to show that both $\phi(x)$ and $\phi'(x)$ tend to $0$ as $x \to \infty$.

[If we show $\phi(x) \to 0$ then $f(x) = L + \phi(x) \to L$ and $f'(x) = \phi'(x) \to 0$ so that we get the original version of the problem.]

Now the argument by Hardy is beautiful and I quote him verbatim:

"If $\phi'(x)$ is of constant sign, say positive, for all sufficiently large values of $x$, then $\phi(x)$ steadily increases and must tend to a limit $A$ or to $\infty$.

[If the derivative $\phi'(x)$ is positive then the function $\phi(x)$ is strictly increasing for all large values of $x$. And there is a very standard theorem that if a function $\phi(x)$ is increasing then $\phi(x)$ tends to a finite limit or to $\infty$ as $x \to \infty$.]

If $\phi(x) \to \infty$ then $\phi'(x) \to -\infty$ [because $\phi(x) + \phi'(x) \to 0$] which contradicts our hypothesis [we have assumed that $\phi'(x) > 0$ for large values of $x$ and this is incompatible with $\phi'(x) \to -\infty$].

If $\phi(x) \to A$ then $\phi'(x) \to -A$ [because $\phi(x) + \phi'(x) \to 0$] and this is impossible unless $A = 0$ [from our previous result mentioned in beginning of answer].

Similarly we may dispose of the case in which $\phi'(x)$ is ultimately negative.

[If $\phi'(x)$ is negative then $\phi(x)$ is decreasing for large $x$. Now there is again a very standard theorem which says that if $\phi(x)$ is decreasing for all large $x$ then it either tends to a limit or to $-\infty$ as $x \to \infty$. If $\phi(x) \to -\infty$ then because of the relation $\phi(x) + \phi'(x) \to 0$ we get that $\phi'(x) \to \infty$ and this is incompatible with the fact that $\phi'(x)$ is negative for large $x$. If on the other hand we have $\phi(x) \to B$ then again $\phi'(x) \to -B$ and by the result mentioned in the beginning of answer this is possible only when $B = 0$.]

If $\phi'(x)$ changes sign for values of $x$ which surpass all limit, then these are the maxima and minima of $\phi(x)$.

[Note that when a derivative changes sign it must also vanish somewhere in between (by Darboux's theorem), and hence we obtain points where $\phi'(x) = 0$ with the derivative $\phi'(x)$ of opposite signs before and after. Such points are the maxima and minima of $\phi(x)$, and $\phi(x)$ takes local minimum and maximum values there. Since $\phi'(x)$ changes sign for values of $x$ which go beyond any limit, this leads to infinitely many points $x$ which are maxima or minima of $\phi(x)$.]

If $x$ has a large value corresponding to a maximum or a minimum of $\phi(x)$, then $\phi(x) + \phi'(x)$ is small [because it tends to zero] and $\phi'(x) = 0$ [because derivative vanishes at points of maxima/minima], so that $\phi(x)$ is small. A fortiori the other values of $\phi(x)$ are small when $x$ is large. [We see that the maximum and minimum values of $\phi(x)$ are small and hence all the other intermediate values of $\phi(x)$ are also small so that for large $x$ all values of $\phi(x)$ are small. And thus $\phi(x) \to 0$ in this case also.]"
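A concrete function realizing this oscillating case (my own illustrative choice, not from Hardy) is $\phi(x) = e^{-x}\sin x$: here $\phi(x) + \phi'(x) = e^{-x}\cos x \to 0$, while $\phi'(x) = e^{-x}(\cos x - \sin x)$ changes sign at every $x = \pi/4 + k\pi$. A short numerical check:

```python
import math

# phi(x) = e^(-x) sin x satisfies phi + phi' = e^(-x) cos x -> 0,
# while phi'(x) = e^(-x)(cos x - sin x) changes sign at x = pi/4 + k*pi.
def phi(x):
    return math.exp(-x) * math.sin(x)

def phiprime(x):
    return math.exp(-x) * (math.cos(x) - math.sin(x))

# the extrema of phi sit where phi' vanishes
extrema = [math.pi / 4 + k * math.pi for k in range(3, 8)]
# sign of phi' just before and just after each extremum
sign_products = [phiprime(x - 0.1) * phiprime(x + 0.1) for x in extrema]
# absolute values of phi at successive extrema
extreme_values = [abs(phi(x)) for x in extrema]
```

The extreme values shrink to $0$, which is exactly Hardy's point: once the maxima and minima are small, all intermediate values are small too.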

Thus in all cases $\phi(x) \to 0$ and hence $\phi'(x) \to 0$.

Update: Based on a comment from the OP I have added details inline, enclosed in [].

  • It took me a long time figuring it out from your solution and am still a little confused; can you please explain a little bit more, for I am on the level of high-school mathematics. – RE60K Jun 21 '14 at 05:42
  • @Aditya: I have tried to explain the answer in detail by putting additional explanation in []. However you will need to know some standard results from theory of differential calculus like "increasing / decreasing nature of functions based on sign of derivative", "mean value theorem", "maxima/minima". – Paramanand Singh Jun 21 '14 at 06:38
  • That's perfect +1 . I like $\phi$ tending to 0 also . :) – Koro May 25 '21 at 16:55
  • While revisiting the proof, some confusions came to my mind: in the last case when f' (using f instead of $\phi$) oscillates between positive and negative values, how does it follow f will indeed have a maximum or minimum at large x? (I'm confused because $f'(t)=0 $ doesn't necessarily imply that f has an extremum at t). Then, even if there is a point of extremum at large x, then x is local extremum so how does it follow that "A fortiori the other values of f are small when x is large"? – Koro Dec 02 '21 at 19:23
  • @Koro: assume that $f'(a) <0,f'(b)>0$ and then prove that there is a point $c\in(a, b) $ such that $f'<0$ to the left of $c$ and $f'\geq 0$ to the right of $c$. Then $c$ is a minimum. – Paramanand Singh Dec 03 '21 at 00:46
  • It's likely that I didn't express my confusion fully so I may ask that as a separate question to get more space for writing my train of thoughts. Re: your last comment, I agree that if $f'(a)f'(b)<0$ then there is a $c\in (a,b)$ s.t. $f'(c)=0$ (Darboux's theorem) and if f retains sign on left nbd. of c and retains opposite sign on some right nbd. of c then yes, c is a point of extremum, else there isn't much we can say, I think. For example: if $f(x\ne 0):=2x^2+x^2\sin \frac 1x$ and $f(0):=0$ then $f'$ doesn't "retain" sign in any nbd. of $0$. – Koro Dec 03 '21 at 04:41

Lemma. Let $h:[a,+\infty)\to\mathbb{R}$ be a continuous function such that $\lim_{x\to\infty}h(x)=0$, and let $F$ be defined by $$F(x)=e^{-x}\int_a^xe^t h(t)\,dt$$ Then $\lim_{x\to\infty}F(x)=0$.

Proof. Indeed, for $x>b>a$ we have $$\eqalign{ |F(x)|&=\left|e^{-x}\int_a^be^t h(t)\,dt+e^{-x}\int_b^xe^t h(t)\,dt\right|\cr &\leq (b-a)\,e^{-x}\max_{t\in[a,b]}{(e^t|h(t)|)}+\left(\sup_{t\in[b,x]}|h(t)|\right)e^{-x}\int_b^xe^t\,dt\cr &\leq (b-a)\,e^{-x}\max_{t\in[a,b]}{(e^t|h(t)|)}+\sup_{t\in[b,x]}|h(t)| }\tag{1} $$ since $e^{-x}\int_b^xe^t\,dt=1-e^{b-x}\leq 1$. Now, for $\epsilon>0$, there is $b>a$ such that $$ \forall\,t>b,\qquad |h(t)|<\frac{\epsilon}{2}\tag{2} $$ With $b$ fixed as above, there is $x_0>b$ such that $$ \forall\,x>x_0,\qquad (b-a)\,e^{-x}\max_{t\in[a,b]}{(e^t|h(t)|)}\leq\frac{\epsilon}{2}\tag{3} $$ Combining $(2)$ and $(3)$ in $(1)$ we see that $|F(x)|<\epsilon$ for $x>x_0$. This proves the lemma since $\epsilon>0$ is arbitrary.$\qquad\square$

Now, let us apply the lemma to $h(x)=f(x)+f'(x)-\ell$. We have $$F(x)=e^{-x}\int_a^x((e^tf(t))'-\ell e^t)\,dt=f(x)-e^{a-x}f(a)-\ell(1- e^{a-x})$$ By the lemma $\lim_{x\to\infty}F(x)=0$, that is, $\lim_{x\to\infty}f(x)=\ell$, and then $f'(x)=h(x)+\ell-f(x)\to 0$, so the desired conclusion follows.$\qquad\square$
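A rough numerical illustration of the lemma (the sample integrand $h(t) = 1/(1+t)$ and the trapezoidal scheme are my own choices, purely for demonstration):

```python
import math

# F(x) = e^(-x) * integral_a^x e^t h(t) dt, approximated by the
# trapezoidal rule, for the sample choice h(t) = 1/(1+t) (which -> 0).
def F(x, a=0.0, n=100000):
    h = lambda t: 1.0 / (1.0 + t)
    dt = (x - a) / n
    # trapezoidal rule: half-weight endpoints, full-weight interior nodes
    total = 0.5 * (math.exp(a) * h(a) + math.exp(x) * h(x))
    for k in range(1, n):
        t = a + k * dt
        total += math.exp(t) * h(t)
    return math.exp(-x) * total * dt
```

For this $h$ one finds $F(x)$ decaying like $h(x)$ itself, consistent with $F(x)\to 0$.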

Omran Kouba
  • You seem to use that $f'(x)$ is continuous. Is it possible to remove this limitation? – Paramanand Singh Jun 21 '14 at 17:27
  • The lemma can be proved for $h:[a,+\infty)\to\mathbb{R}$ measurable instead of continuous, you just use the Dominated Convergence Theorem to prove it. So, this version is just a simplification of the proof. – Omran Kouba Jun 21 '14 at 21:13

The problem is easy if one assumes that

$$\lim_{x \to \infty} f'(x) = k$$ exists.

Indeed, in this case, it is easy to prove $k=0$: By L'Hospital we have

$$\lim_{x \to \infty} \frac{f(x)}{x} = k \,.$$

Then, if $k= \pm \infty$ we get that $\lim_{x \to \infty} f(x)$ is the same infinity, which contradicts the given condition; while if $k$ is finite and non-zero, we get that $\lim_{x \to \infty} f(x)$ is infinite, which again contradicts the given condition.
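A quick numerical illustration of the L'Hospital step, $f'(x) \to k$ forcing $f(x)/x \to k$ (the sample function below is my own choice):

```python
import math

# Sample choice: f(x) = 3x + log x, so f'(x) = 3 + 1/x -> k = 3,
# and L'Hospital predicts f(x)/x -> 3 as well.
def f(x):
    return 3 * x + math.log(x)

def fprime(x):
    return 3 + 1.0 / x

x = 1.0e8  # a large evaluation point
```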

Now, since $$\lim_{x \to \infty} f'(x) = 0$$ we get $$\lim_{x \to \infty} f(x) =\left( \lim_{x \to \infty} f(x)+f'(x)\right) -\lim_{x \to \infty} f'(x)=l $$

To get around the existence of the limit $\lim_{x \to \infty} f'(x)$, I would try to look to $\limsup f'(x)$ and $\liminf f'(x)$.

N. S.

If one were to assume that both of the limits $\lim_{x\to\infty}f(x) \text{ and } \lim_{x\to\infty}f'(x)$ exist, then the limit of $f'$ is some number $k$. If $k\ne0$, then for large enough $x$ the value $f'(x)$ stays further away from $0$ than $|k|/2$. By the mean value theorem, that means that as $x$ increases by $1$, $f(x)$ changes by at least $|k|/2$, and always in the same direction. But if $f(x)$ changes by at least $|k|/2$ and always in the same direction, then $f(x)$ does not approach any limit, and there is a contradiction, so it must be that $k=0$.
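The mean-value-theorem step can be checked numerically on a sample function (my own choice) with $f'(x) \to k = 1$: the unit increment $f(x+1) - f(x)$ equals $f'(c)$ for some $c \in (x, x+1)$ and so stays near $k$.

```python
import math

# Sample choice: f(x) = x + atan(x), so f'(x) = 1 + 1/(1+x^2) -> k = 1.
def f(x):
    return x + math.atan(x)

x = 1.0e6
increment = f(x + 1) - f(x)  # equals f'(c) for some c in (x, x+1)
```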

The harder problem is showing that if the limit of the sum exists, then the two limits separately exist. Maybe I'll come back to that . . . . . .


Given: $$f'(x) = \lim_{h\to 0} \frac{f(x+h)-f(x)}{h}$$

$$l = \lim_{x\to \infty} f(x) + f'(x) = \lim_{x\to \infty}\left[ f(x) + \left[\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}\right]\right]$$

$$l = \lim_{x\to\infty}\left[\lim_{h\to 0}\frac{f(x+h)+f(x)(h-1)}{h}\right]$$

Assuming $f(x)$ has a limit, we can write $|f(x+h) - f(x)| \leq \epsilon$ for large enough $x$. This means that we can bound the quantity inside the limits.

$$\frac{hf(x) - \epsilon}{h} \leq \frac{f(x+h)+f(x)(h-1)}{h} \leq \frac{hf(x) + \epsilon}{h}$$

$$f(x) - \frac{\epsilon}{h} \leq \frac{f(x+h)+f(x)(h-1)}{h} \leq f(x) + \frac{\epsilon}{h}$$

For any $|h| > 0$ we can find an $N$ so that for $x > N$, $\,|f(x+h) - f(x)| \leq h^2$. This is equivalent to setting $\epsilon = h^2$.

$$f(x) - h \leq \frac{f(x+h)+f(x)(h-1)}{h} \leq f(x) + h$$

Take the limit as $h\to 0$. Notice that doing this implies that $\epsilon \to 0$ which forces $x \to \infty$.

$$\lim_{x\to\infty} f(x) \leq \lim_{x\to\infty} f(x)+f'(x) \leq \lim_{x\to\infty} f(x) $$

The rest is clear from here.
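One can check the squeeze numerically for a sample function of one's own choosing; here I take $f(x) = 2 + e^{-x}$, for which $f(x) + f'(x) = 2$ exactly:

```python
import math

# The middle quantity (f(x+h) + f(x)(h-1))/h equals f(x) plus the
# difference quotient, so for small h and large x it should be close
# to f(x) + f'(x).  Sample choice: f(x) = 2 + e^(-x), f + f' = 2.
def f(x):
    return 2 + math.exp(-x)

def middle(x, h):
    return (f(x + h) + f(x) * (h - 1)) / h

x, h = 10.0, 1e-4
```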

I'm not going to pretend this is formal, as we essentially interchanged the limit operators, but it should give you a good idea of why this works.

Brad
  • @Aditya it should be better now but I will still take any constructive criticism. I don't see how it can be done without essentially interchanging limits. – Brad Jun 20 '14 at 19:02