
I need a "big picture" of how those things relate to each other, and probably a list of fundamental theorems that glue them together.

My current (quite limited) understanding:

1) The fact that a function $f(x)$ has a derivative at a point $P$ means that it behaves nearly like a straight line infinitely close to $P$, and thus can be approximated by such a straight line with infinitely small error (which becomes exactly 0 at at least two points $P, P_{neighbor}$, since both are shared by the given line and the original $f(x)$).

2) As such, $f(x)$ has to be continuous (not necessarily uniformly continuous) at $P$; otherwise, as $P_{neighbor} \rightarrow P$, $f(x)$ kind of "jumps" rather than being smooth and thus cannot be estimated by a straight line to an infinitely small tolerance (though it could still be approximated by a line, which obviously won't be precise or practically valuable, but is still theoretically possible).

3) As such, does the fact that $f(x)$ is indeed continuous at $P$ guarantee that it can be "well" approximated by a straight line?

Am I right or wrong?


UPDATE:

I think the answer given below helped me to discover one mistake I wasn't able to see before. Let me try to formulate a new statement, and please correct me in case I am still mistaking things.

There are 2 different ways to approximate any (or only continuous?) $f(x)$:

1) when you pick any 2 points belonging to $f(x)$ and graph what is called a "secant line"; thus an interval $[P_1, P_2]$ emerges on which your linear estimate can be as precise as one wishes (as long as $P_2$ approaches $P_1$).

2) But when things come down to the limit of such a constantly decreasing distance between both points, $P_2$ vanishes and merges with $P_1$ so that only a single point exists. From this moment on, the given line ceases to be a secant and becomes a "tangent line", which is guaranteed to share at least the single point $P_1$ with the original $f(x)$ (and thus the error there must be precisely zero). Such an approach approximates $f(x)$ around $P_1$: to be more concrete, on some $[P_1 - \delta, P_1 + \delta]$. So whenever $$x \in [P_1 - \delta, P_1 + \delta]$$ there exists $\epsilon \gt 0$ such that: $$|L(x) - f(x)| \lt \epsilon$$

4) Now my question: $[P_1 - \delta, P_1 + \delta]$ actually contains two points, which brings me back to the initial point: there is a secant line crossing two points belonging to the original $f(x)$.
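As a numerical aside, the way secant slopes settle toward a single tangent slope can be sketched in a few lines of Python (the choice of $f(x)=x^2$ and $P_1=1$ here is my own illustration, not taken from the discussion):

```python
# Secant slopes of f(x) = x^2 at P1 = 1 as the second point approaches P1.
# Each slope still uses two genuinely distinct points, yet the values
# approach the tangent slope f'(1) = 2.

def f(x):
    return x * x

def secant_slope(p1, h):
    """Slope of the line through (p1, f(p1)) and (p1 + h, f(p1 + h))."""
    return (f(p1 + h) - f(p1)) / h

for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, secant_slope(1.0, h))
# slopes are 2 + h: they approach 2 but never use a single point
```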

  • The fact is, differentiability implies continuity, and uniform continuity implies continuity. – Reza Habibi Dec 20 '17 at 09:52
  • @RezaHabibi why not the other way around? Continuity implies differentiability? – 52heartz Dec 20 '17 at 09:53
  • Continuity does not imply differentiability (counterexample: $f(x)=|x|$) – Peter Dec 20 '17 at 09:53
  • I even heard of a function being continuous everywhere, but differentiable nowhere (but I do not remember the details) – Peter Dec 20 '17 at 09:57
  • @Peter, but the 3 statements made above are correct? – 52heartz Dec 20 '17 at 09:59
  • Related: https://math.stackexchange.com/questions/140428/continuous-versus-differentiable – Hans Lundmark Dec 20 '17 at 10:00
  • $3)$ is not correct. We cannot approximate $|x|$ well with a straight line near the origin. And in $2)$, how would you approximate a function having a jump with a straight line? – Peter Dec 20 '17 at 10:03
  • Why do we need two points? Let the line pass through $P$ and be such that it offers the best linear approximation to the function near $P$. Then the slope of the line is the same as the derivative at the point under consideration. – Paramanand Singh Dec 20 '17 at 10:04
  • @ParamanandSingh But we then have to specify in which sense we determine the optimal line. Then we would have something like a derivative where the derivative actually doesn't exist, but only if we find a method that also gives the correct result when the derivative actually exists. – Peter Dec 20 '17 at 10:08
  • @ParamanandSingh well, according to the derivative's formula, there are always 2 points, which are infinitely close to each other. The fact that the 2nd point becomes as close as one wishes doesn't erase its existence; they remain 2 different points. – 52heartz Dec 20 '17 at 10:09
  • The tangent is guaranteed to pass through one point but not necessarily through two points. – Paramanand Singh Dec 20 '17 at 10:18
  • @ParamanandSingh, then why does the derivative's formula rely heavily on two points, one of which approaches the other? – 52heartz Dec 20 '17 at 10:24
  • The derivative always involves one point, the symbol being $f'(a)$, but its definition involves not just two points, but all points in a neighborhood of $a$. – Paramanand Singh Dec 20 '17 at 10:29
  • Unless you grasp the fundamental and simple idea that if $a, b$ are two distinct points then they possess distinct neighborhoods, it is difficult to say how derivative is related to only one point. – Paramanand Singh Dec 20 '17 at 10:31
  • Your definition of limit (at the end) is reversed. The $\epsilon >0$ is arbitrary and there exists a $\delta>0$ corresponding to the $\epsilon$. Moreover as you can see that the definition does not involve any constantly decreasing stuff. It just involves inequalities related to all positive numbers $\epsilon $. – Paramanand Singh Dec 20 '17 at 12:13
  • @ParamanandSingh, the very question of a limit arises only when there is a sequence of something. In my particular example, a monotonically decreasing distance between $P_1$ (which you call $a$) and $P_2$, which as a sequence converges to $P_1$. In other language, $P_2 \rightarrow P_1$. As such, I see no way to arrive at the definition you mentioned while bypassing the constant decreasing you denied. – 52heartz Dec 20 '17 at 12:23
  • No. And plain no! Limit is a statement about the truth of certain inequalities for every positive number $\epsilon$. That's what the definition says. – Paramanand Singh Dec 20 '17 at 14:07
  • @ParamanandSingh could you then provide the complete definition of a limit? You probably agree that those inequalities are insufficient unless you specify something else, which is exactly the thing we are arguing about: the sequence, typically ordered and indexed, which obeys the given inequality under certain requirements. Taking the complete definition, you cannot bypass sequences. Otherwise, please provide a sequence-independent definition. – 52heartz Dec 20 '17 at 14:14
  • You may refer any textbook to get $\epsilon, \delta$ definition of limit. An online resource is Wikipedia and you can see that the definition does not contain any ideas about sequences. The idea that we take values of $x$ successively getting closer and closer to $a$ and then figuring out the values of $f(x) $ is at best a vague attempt to explain the notion of limit. – Paramanand Singh Dec 20 '17 at 15:28

1 Answer


Let me give you a list, with counterexamples and theorems gluing the three concepts together:

First of all, differentiability always implies continuity.

The converse is not true; take for example $f(x)=|x|$. Here you cannot find a unique tangent to the graph at zero.

Uniform continuity is a bit more tricky. It always implies continuity, but the converse is only true in certain situations. One theorem is as follows: if $f:K\rightarrow \mathbb{R}$ is continuous and $K\subset\mathbb{R}$ is compact, then $f$ is uniformly continuous.

You cannot drop the compactness assumption in this theorem. Take for example $f(x)=e^x$ on the whole real line. Assume it is uniformly continuous. Then for every $\varepsilon>0$ you find a $\delta>0$ such that $$|x-y|<\delta\Rightarrow |e^x-e^y|<\varepsilon.$$ Now take $x\in\mathbb{R}$ arbitrary and $y=x+\frac{\delta}{2}$. Then $$\varepsilon>|e^x-e^y|=|e^x-e^{x+\frac{\delta}{2}}|=|e^x||1-e^{\frac{\delta}{2}}|\rightarrow \infty$$ for $x\rightarrow \infty$, a contradiction.
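This failure of uniform continuity can also be seen numerically; here is a quick Python sketch, with $\delta = 0.1$ as my own arbitrary choice:

```python
import math

# For a fixed delta, the gap |e^x - e^(x + delta/2)| = e^x * (e^(delta/2) - 1)
# grows without bound as x grows, so no single delta can serve every x:
# e^x is not uniformly continuous on the whole real line.
delta = 0.1
gaps = [abs(math.exp(x) - math.exp(x + delta / 2)) for x in (0.0, 5.0, 10.0, 20.0)]
print(gaps)  # strictly increasing, with no upper bound as x grows
```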

Now to your understanding:

1) Your understanding is not quite correct, since there is no concept of a neighbour such that the error of the linear approximation would be zero. The definition of differentiability is as follows: $f$ is differentiable at $x$ if the limit $$\lim_{y\rightarrow x}\frac{f(y)-f(x)}{y-x}$$ exists. Now take for example $f(x)=x^2$. The derivative at zero is $f'(0)=0$, so your linear function approximating $f$ at $x=0$ is the zero function. If you now pick any $x\neq 0$, then $f(x)\neq 0$, and therefore $\frac{f(x)-0}{x-0}\neq 0$. Hence you cannot find a neighbouring point at which the error is zero.
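To make this concrete, a small Python sketch of that $f(x)=x^2$ example: the tangent line at $0$ is the zero function, the pointwise error $x^2$ never vanishes for $x \neq 0$, yet the error divided by $x$ still tends to $0$, which is what differentiability actually asserts:

```python
def f(x):
    return x * x

def L(x):
    return 0.0  # tangent line of f(x) = x^2 at x = 0, since f'(0) = 0

for x in [0.5, 0.1, 0.01]:
    abs_err = f(x) - L(x)   # stays strictly positive for every x != 0
    rel_err = abs_err / x   # shrinks like x itself as x approaches 0
    print(x, abs_err, rel_err)
```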

To 2): You're right, differentiability implies continuity, but this notion of $f$ "jumping" is not really precise. There can be very strange examples of functions which are not continuous, for example $$f(x)=\left\{\begin{array}{cc}\sin\frac{1}{x},&x\neq 0\\0,&x=0\end{array}\right.$$ Here you also see that you cannot find a line approximating $f$ at zero in any good sense, such that the error would be controllable.
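The oscillation of that function near zero can be sampled numerically; in this Python sketch the sample points $x_n = \frac{2}{(2n+1)\pi}$ are my own choice, picked so that $\sin\frac{1}{x_n} = \pm 1$:

```python
import math

# f(x) = sin(1/x) for x != 0, with f(0) = 0, is discontinuous at 0:
# arbitrarily close to 0 the function keeps hitting +1 and -1 instead
# of approaching f(0) = 0, so it neither jumps nor settles down.
def f(x):
    return math.sin(1.0 / x) if x != 0 else 0.0

xs = [2.0 / ((2 * n + 1) * math.pi) for n in range(1, 6)]  # points shrinking to 0
print([round(f(x), 6) for x in xs])  # alternating -1.0 and 1.0, never near 0
```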

To 3): $f(x)=|x|$ is continuous but not differentiable, as mentioned above. You cannot find any good linear approximation, since there is no unique tangent at zero.
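A short Python sketch of why $f(x)=|x|$ fails at zero: the secant slopes from the right are all $+1$ and from the left all $-1$, so the two-sided limit defining $f'(0)$ cannot exist:

```python
def slope_at_zero(h):
    """Slope of the secant of f(x) = |x| through (0, 0) and (h, |h|)."""
    return abs(h) / h

right = [slope_at_zero(h) for h in (0.1, 0.01, 0.001)]   # approach from the right
left = [slope_at_zero(-h) for h in (0.1, 0.01, 0.001)]   # approach from the left
print(right, left)  # [1.0, 1.0, 1.0] [-1.0, -1.0, -1.0]
```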

Let me give you one last piece of advice: try to rely less on graphical intuition and start working with the abstract definitions. Then your understanding will improve.