
I'm trying to learn calculus for myself. To do this, I'm mostly following the Calculus I course on Khan Academy.

I understand this question is rudimentary and that it probably reflects an incomplete understanding of the epsilon-delta definition of limits. If that's the case, please correct me; I'm trying to learn :)

In this video, Sal Khan defines a limit as existing if the following is true:

Alice wants to prove that $\lim_{x\to n}f(x)=L$. Bob then picks any arbitrary $\epsilon$ and says that Alice must find a $\delta$ with which given that $x$ is within $\delta$ of $n$, $f(x)$ will always be within $\epsilon$ of $L$. If Alice can find such a $\delta$, the limit exists.

This is my understanding of the definition. When I first learned it, I wanted to play around with it a little... however, it's troubling to me that I can only make sense of it when it's applied to linear functions.

Here's my problem: I can't seem to find a $\delta$ when $f$ is nonlinear, even though I know the limit exists. For example, take the following function: $$f(x)=x^3$$ I know for sure, intuitively, that this is true: $$\lim_{x\to 1}f(x)=1$$ So I challenge myself with an arbitrary $\epsilon$, say $\epsilon=\frac{1}{2}$. The problem is that it seems impossible to find a $\delta$. The function is nonlinear, so its rate of change is constantly changing! There doesn't seem to be a fixed $\delta$ that you can shift around $x=1$, because the "time" the function takes to get from $y=1-\epsilon$ to $y=1$ is different from the time it takes to get from $y=1$ to $y=1+\epsilon$! Here's a graph of what I mean: [graph omitted]

The blue dashed lines are both $\epsilon$ from $y=1$. My problem should be clear: the $\delta$ I would need are, well... two different $\delta$'s.

What part of my understanding is incorrect?

javan.g
  • It usually takes some amount of more or less hard (and, above all, tedious and exhausting) work, with evaluations, estimations, and so on involved in the process. – DonAntonio Jul 05 '19 at 07:38
  • The main point is that $\delta$ can depend on $\epsilon$, so something like $\delta = \epsilon^3$ for example would be completely ok. – Dirk Jul 05 '19 at 07:40
  • "My problem should be clear: the $\delta$ I would need are, well... two different $\delta$'s." -> just pick the smallest of those. The definition does not require you to find the exact boundary point at which $x$ becomes close enough to $1$ for $f(x)$ to be within $1/2$ of $1$; you can err on the side of caution. If you found a good $\delta$, then $\delta/100$ is also good. – Wouter Jul 05 '19 at 07:45
  • The $\delta$ will not only depend on $\epsilon$ but also on $n$. – Wolfgang Kais Jul 05 '19 at 08:10

6 Answers


If $|x-1| <\delta$ then $|x^{3}-1|=|x-1| |x^{2}+x+1|\leq |x-1| |(x-1)^{2}+3(x-1)+3| < \delta (\delta^{2}+3\delta+3)$. So as long as we choose a $\delta <1$ we have $|x^{3}-1|<7\delta$. Hence we can choose $\delta=\frac {\epsilon} 7$ if $\frac {\epsilon} 7<1$ and $\delta$ to be any number less than $1$ otherwise.
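As a numerical sanity check of this choice (an illustration only, not part of the proof; the helper `check_delta` and the grid size are made up here):

```python
# Sanity check: with delta = min(epsilon / 7, 0.9), every x within
# delta of 1 should satisfy |x**3 - 1| < epsilon.

def check_delta(epsilon, delta, n=10_000):
    """Scan x in (1 - delta, 1 + delta) and verify |x^3 - 1| < epsilon."""
    for i in range(1, n):
        x = (1 - delta) + 2 * delta * i / n
        if abs(x**3 - 1) >= epsilon:
            return False
    return True

for eps in (0.5, 0.01, 3.0, 100.0):
    delta = min(eps / 7, 0.9)   # "any number less than 1" -> 0.9 here
    assert check_delta(eps, delta)
```

The scan is of course no substitute for the inequality argument above; it only confirms that no grid point violates the bound.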


Observe that if $\;|x-1|<\delta\;$, then

$$|x^3-1|=|x-1|\,|x^2+x+1|<\delta|x^2+x+1|\;\;(**)$$

and also

$$x^2+x+1=\left(x+\frac12\right)^2+\frac34\implies |x^2+x+1|=\left(x+\frac12\right)^2+\frac34$$

and since

$$|x-1|<\delta\iff 1-\delta<x<1+\delta\implies\left(\frac32-\delta\right)^2<\left(x+\frac12\right)^2<\left(\frac32+\delta\right)^2$$

(taking $\;\delta\le\frac12\;$, so that both sides stay positive), we get as a continuation of (**):

$$|x^3-1|<\delta\left(\left(\frac32+\delta\right)^2+\frac34\right)\le\delta\left(4+\frac34\right)=\frac{19}4\,\delta<5\delta$$

where the middle inequality follows from requiring $\;\delta\le\frac12\;$. Thus choosing

$$\delta=\min\left\{\,\frac12\,,\,\,\,\frac\epsilon5\,\right\}$$

gives $\;|x^3-1|<5\delta\le\epsilon\;$, and no matter what $\;\epsilon>0\;$ is, with the above you get your corresponding $\;\delta>0\;$ .
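As a numerical spot-check (an illustration only), the conservative choice $\delta=\min\{\frac12,\frac\epsilon5\}$ can be scanned by brute force; the helper `works` and the grid size are made up here:

```python
# Spot-check: for delta = min(1/2, epsilon/5), all x with |x - 1| < delta
# should give |x**3 - 1| < epsilon.

def works(epsilon, delta, n=10_000):
    return all(abs(((1 - delta) + 2 * delta * i / n) ** 3 - 1) < epsilon
               for i in range(1, n))

for eps in (0.5, 0.1, 2.0, 50.0):
    assert works(eps, min(0.5, eps / 5))
```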

DonAntonio

The definition demands that you find a $\delta$ such that $$|x-n|<\delta\implies |f(x)-f(n)|<\epsilon$$ It does not demand that you find a $\delta$ such that $$|x-n|<\delta\iff |f(x)-f(n)|<\epsilon$$ The latter would require you to find the asymmetric $\delta$'s in your figure, but the former, which is the actual definition, does not.

The limit definition does not define a unique value of $\delta$ as a function of $\epsilon$. If you find some $\delta(\epsilon)$ that satisfies the limit definition, then any smaller $\delta'(\epsilon)<\delta(\epsilon)$ also satisfies it.

Wouter

The smallest of your “two deltas” (or any positive number smaller than that) is a $\delta$ that fulfills the requirement.

Indeed, with such a value of $\delta$ it's true that for all $x$ in the symmetric punctured interval $(n-\delta,n) \cup (n,n+\delta)$, the value $f(x)$ is at most $\epsilon$ away from the limit $L$. It's irrelevant that $f(x)$ may also happen to lie close to $L$ for some other values of $x$.
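For the question's concrete example ($f(x)=x^3$ at $x=1$ with $\epsilon=\frac12$), the two one-sided distances can be computed exactly with cube roots and the smaller one verified numerically; this sketch (grid size arbitrary) just illustrates the answer's point:

```python
# For f(x) = x**3 at x = 1 with epsilon = 1/2, the two one-sided
# "deltas" from the question's graph are the distances from 1 to the
# points where x**3 hits 1 + 1/2 and 1 - 1/2.
eps = 0.5
delta_right = 1.5 ** (1 / 3) - 1        # about 0.1447
delta_left  = 1 - 0.5 ** (1 / 3)        # about 0.2063
delta = min(delta_right, delta_left)    # a symmetric delta that works

# every x within delta of 1 now lands within eps of 1:
assert all(abs(((1 - delta) + 2 * delta * i / 10_000) ** 3 - 1) < eps
           for i in range(1, 10_000))
```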

Hans Lundmark

We want to show that $\lim_{x\to 1}f(x)=1.$ For this, only values of $x$ "near" $1$ are relevant. Thus we can assume that $0<x<2$. Then we have

$$|f(x)-1|=|x^3-1|=|x-1| \cdot |x^2+x+1|=|x-1|(x^2+x+1) \le |x-1|(4+2+1)=7|x-1|.$$

Conclusion: if $ \epsilon >0$ is given, then $\delta=\min\left\{1,\frac {\epsilon} 7\right\}$ will do the job (the minimum with $1$ keeps $x$ inside $(0,2)$).

Fred

You should look at my answer here:

What is the intuition behind uniform continuity?

discussing continuity first (don't worry about the later material on uniform continuity if you don't get it; right now we only need the first part, about simple continuity). I actually think it may be more profitable to start with continuity before limits, since we can then introduce limits in terms of continuity instead of the other way around. Continuity relates more immediately to our prior experience with functions, calculators, and measurements, when the proper intuition is used to frame it. Moreover, bringing in "rates of change" means you're getting a bit ahead of yourself: that's differentiation, and here we need limits, which come first.

Basically, when we say a function $f$ is "continuous" at a point $x$ in its domain, what that means is that we can approximate the value of $f$ at that point, i.e. $f(x)$, to within any given tolerance $\epsilon$, by choosing a suitably accurate approximation $x'$ of the input. This "suitable accuracy" of the approximant $x'$ is what $\delta$ is. Note that this is effectively "why you can use a calculator", as I discuss in that answer: calculators must of necessity work with inputs of limited precision, so if functions like sine and square root were not continuous, there would be no reason to even hope that feeding in an imprecise input would give you an accurate result for the sine or square root of the precise value you actually wanted. And then we'd be in a very bad way!

So, without further ado, let's consider $f_1(x) := x^3$ and think about approximating its value at $x = 1$. An $\epsilon$ of $\frac{1}{2}$ basically means that we are asking to know the value of $f_1$ at $1$ to within $0.5$. Of course, here we already know the exact value of $f_1$ at $1$: it is just $1$. Moreover, we know "$1$" exactly as well! But we're going to ignore that and instead imagine we are approximating it anyway.

Finding a $\delta$ for this given $\epsilon$ basically means asking: "how close does my approximation of the input $1$ need to be, so that the output $f_1(1)$ is approximated to within $\frac{1}{2}$ of the true value?" Since we know the exact value, it is not hard to see how accurate our approximations are by comparison. Let's take, say, $x' := 2$. Now $f_1(x') = 8$. Is $8$ within $\epsilon = \frac{1}{2} = 0.5$ of $1$? Clearly, very much no! So let's try a closer input: $x' := 1.5$. Now $f_1(x') = 3.375$, still not within $0.5$. How about $x' := 1.1$? Then we get $f_1(x') = 1.331$, which at last IS within $0.5$, i.e. within $\epsilon$, of $f_1(1) = 1$. Moreover, if you take any $x'$ closer than $1.1$, say $1.01$, then $f_1(x')$ will be even closer to $f_1(1)$!

Hence, one possible value of $\delta$ for this given $\epsilon$ is $\delta = 0.1$. "Rates of change" do not matter: what you are after is whether you can make $x'$, your approximant, close enough to the input $1$ that the output lands within $0.5$ of $1$ (the output $1$, from $1^3$, not the input $1$, just to keep things clear), and moreover, once it gets that accurate, it stays at least that accurate as you make the approximant better. And this must happen for every nonzero tolerance $\epsilon$ on the desired approximation of $f(x)$.
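The trial approximations above can be tabulated in a few lines of code (an illustration only; the function name `f1` mirrors the text):

```python
# A tiny table of the trial approximations discussed above.
def f1(x):
    return x ** 3

target, eps = f1(1), 0.5

for x_approx in (2, 1.5, 1.1, 1.01):
    err = abs(f1(x_approx) - target)
    print(f"x' = {x_approx}: f1(x') = {round(f1(x_approx), 6)}, "
          f"error = {round(err, 6)}, within eps? {err < eps}")
```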

Of course, wait - as said, we were talking limits, not continuity. Well, limits are just this: If we say

$$\lim_{x \rightarrow c} f(x) = L$$

what that means is that $L$ is the value $f$ would need to take at $c$ if it were to be continuous there, regardless of whether it is or isn't, or is even defined there, i.e. it is the value $L$ that makes the "modified" $f$

$$f^{*}(x) := \begin{cases} f(x),\ x \ne c\\ L,\ x = c \end{cases}$$

continuous at $x = c$, if such a value exists. Alternatively, a limit is the value that $f$ "looks like it is trying to approximate", if it does in fact do such a thing, as you tighten the tolerance on an approximation $x$ of the point $c$ at which the limit is taken. In terms of the formal definitions, note the similarity: the condition for continuity is

$$|x - c| < \delta \ \rightarrow\ |f(x) - f(c)| < \epsilon$$

(nb. what we were calling $x'$ earlier is called "$x$" here, and what we called $x$ earlier, the "input to be approximated", is called $c$, as this has been written in its standard form) and the condition for a limit, in the way it's usually presented in beginners' texts, is:

$$0 < |x - c| < \delta \ \rightarrow\ |f(x) - L| < \epsilon$$

so the understanding transfers immediately when thinking of $L$ as a "suitable" $f(c)$. The point about limits is that we can have them even if $f(c)$ is not defined, or is defined to be something other than $L$. This is crucial when you get to derivatives, because there you will be finding the limit of a quotient which, if you were to substitute the value $c$ directly, would give the nonsense expression $\frac{0}{0}$ and hence be undefined!

And since $f_1(x)$ is continuous at $x = 1$, you also know the limit value should be $f_1(1) = 1$!


ADD: You may want to ask how we can then prove that $\delta = 0.1$ works in this case - i.e. how we know that for all tighter approximations $x'$ than $1.1$, the value of $f_1(x')$ will still not exceed the desired $\epsilon = 0.5$ tolerance, i.e. there aren't any hidden "spikes" or "jumps" in there that might screw it up. Well, that's just a bit of algebra with inequalities. Suppose that we have

$$1 < x' < 1.1 = 1 + \delta$$

Now we use inequality rules to apply $f_1$, i.e. cube all these:

$$1^3 < x'^3 < 1.1^3$$

giving

$$1 < f_1(x') < 1.331$$

hence

$$f_1(1) < f_1(x') < 1.331 < 1.5 = f_1(1) + \epsilon$$

i.e. $f_1(1)$ is approximated to within the $\epsilon$-tolerance no matter the $x'$, so long as it meets the $\delta$-tolerance. Similarly, we can and should finish by checking the left-hand side as well: $f_1(1 - \delta) = f_1(1 - 0.1) = f_1(0.9) = 0.729$, which is again within $0.5$ of $1$, since $0.729 > 1 - 0.5 = 0.5$. Cheers!
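Both sides can be checked at once with a short scan (a sketch, not part of the original answer; the grid size is arbitrary):

```python
# For delta = 0.1 and eps = 0.5, every x in [0.9, 1.1] satisfies
# |x**3 - 1| < 0.5, so there are no hidden "spikes" or "jumps".
delta, eps = 0.1, 0.5
xs = [0.9 + 0.2 * i / 10_000 for i in range(10_001)]
assert all(abs(x**3 - 1) < eps for x in xs)
# endpoints: 0.9**3 is about 0.729 and 1.1**3 about 1.331,
# both within 0.5 of 1
```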

  • Wow! Thank you for an excellent answer. This went beyond what I was expecting. I never saw $\epsilon$ and $\delta$ as variables of approximation, and never realised that in $\lim_{x\to c}f(x)=L$, $L$ is the value that would make $f$ continuous at $x=c$. Again, thank you for the background and clear explanation. – javan.g Jul 05 '19 at 09:20
  • @javan.g : Yup, because sadly most math texts aren't anywhere close to as good as they could be. – The_Sympathizer Jul 05 '19 at 09:25