
In my search for this statement I wrote this question a few years ago: "When $\delta$ decreases should $\epsilon$ decrease? (In the definition of a limit, when $x$ approaches $a$, should $f(x)$ approach its limit $L$?)" However, it had the deltas and epsilons all backwards and wrong. Then I thought I had finally solved what I was looking for when I wrote this answer. Since then I've been trying to fix my proof in that answer, with no success. My list of questions about my proof just piles up. The proof is as follows:

Theorem: if $\epsilon_2 < \epsilon_1 \implies \delta^*_{\epsilon_2} \leq \delta^*_{\epsilon_1}$ for any limit in consideration $\lim_{x \rightarrow p} f(x) = q$.

Formal proof:

Let $\epsilon_2 < \epsilon_1$. Let the set of all deltas that work for a given $\epsilon_i$ be defined as follows:

$$ D_{\epsilon_i} = \{ \delta \in R \mid \forall x \in E, 0< d(x,p) < \delta \implies d(f(x),q) < \epsilon_i \} $$

now define the largest radii that stay within their corresponding tolerance $\epsilon_i$ as follows:

$$ \delta^*_{\epsilon_1} = \sup D_{\epsilon_1} $$ $$ \delta^*_{\epsilon_2} = \sup D_{\epsilon_2} $$
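(As a quick sanity check on these definitions, here is a concrete illustration that is not part of the proof, with an example function chosen just for this purpose: take $f(x) = 2x$, $p = q = 0$ and $E = \mathbb{R}$. Every $\delta \leq 0$ belongs to $D_{\epsilon}$ vacuously, and a positive $\delta$ belongs to $D_{\epsilon}$ exactly when $\delta \leq \epsilon/2$, since $0 < |x| < \delta \leq \epsilon/2$ forces $|2x| < \epsilon$, while any $\delta > \epsilon/2$ admits an $x$ with $\epsilon/2 \leq |x| < \delta$ and hence $|2x| \geq \epsilon$. So $D_{\epsilon} = (-\infty, \epsilon/2]$ and $\delta^*_{\epsilon} = \epsilon/2$: here a tighter tolerance really does give a smaller maximal radius.)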

notice that for $ \delta^*_{\epsilon_2} $ (corresponding to the tighter tolerance level) we have the following condition (notice the last inequality):

$$ \delta^*_{\epsilon_2} = \sup \{ \delta \in R \mid \forall x \in E, 0< d(x,p) < \delta \implies d(f(x),q) < \epsilon_2 < \epsilon_1 \} $$

the tighter inequality implies that the set of deltas satisfying it must be smaller, and thus the largest such $\delta$ must also be smaller (or equal), since if $\delta_{\epsilon_2} \in D_{\epsilon_2}$ then $\forall x \in E, 0 < |x - p| < \delta_{\epsilon_2} \implies |f(x) - L| < \epsilon_2$; but since this same delta $\delta_{\epsilon_2}$ then satisfies $\forall x \in E, 0 < |x - p| < \delta_{\epsilon_2} \implies |f(x) - L| < \epsilon_2 < \epsilon_1$, it must mean that $\delta_{\epsilon_2} \in D_{\epsilon_1}$.

So we have:

$$ D_{\epsilon_2} \subseteq D_{\epsilon_1} $$

since $ D_{\epsilon_1} $ contains all points of $ D_{\epsilon_2} $ or less, it cannot introduce a larger number by accident. Therefore the following holds for their supremums:

$$ \sup D_{\epsilon_2} \leq \sup D_{\epsilon_1} $$

which is equivalent to:

$$ \delta^*_{\epsilon_2} \leq \delta^*_{\epsilon_1} $$

as required.

In essence my question is, why does my proof not work?

But to answer my question I think I've collected a few questions to guide what is specifically bothering me and where I think the mistake is:

  1. Does the constant function provide a counterexample to the type of proof I am after?
  2. What class of functions satisfies the property that I want, and why? Why is monotonicity so important? At what step would monotonicity make my proof work?
  3. For what class of functions does my proof not work?
  4. I know that there are examples where one can have a decreasing epsilon but an increasing delta (which is counter to my intuition of how limits should work). If that is true, how does that not invalidate my statement that $ D_{\epsilon_2} \subseteq D_{\epsilon_1}$? How does that not invalidate $\delta^*_{\epsilon_2} \leq \delta^*_{\epsilon_1}$? I just don't understand why that counterexample works, because my intuition is that if epsilon decreases then the tolerance becomes tighter and so there should be fewer deltas that work. But apparently there is a delta that not only works but is larger... what is going on?
  5. My proof as it's written is for a general function, so what I am really interested in is working out what exact step of my proof requires monotonicity, or what step would break the proof if I am not careful, and what type of function or argument would break that step. That is really what I care about, more than more examples in general.

any help appreciated!

  • I am aware of the constant function example; it's the first bullet point in my question. If you plan to mention it, it would be most helpful to mention it in the context of the questions I already have around that specific counterexample and related counterexamples (e.g. bounded functions where the epsilons are larger than the upper bound on $f(X)$). – Charlie Parker Jul 09 '18 at 15:20

3 Answers

4
  1. Your proof is essentially correct. The key idea is that the "deltas" that work for some epsilon also work for any larger epsilon.

  2. Your statement is slightly flawed. The main problem is that it uses notations that are not yet defined, and values that may not be well-defined (primarily because of constant functions, as other answerers have observed).

Let's make it right:

Theorem: if $\epsilon_2 < \epsilon_1 \implies \delta^*_{\epsilon_2} \leq \delta^*_{\epsilon_1}$ for any limit in consideration $\lim_{x \rightarrow p} f(x) = q$.

That doesn't make sense, so let's revise:

Theorem: Suppose $f: \Bbb R \to \Bbb R$ satisfies $\lim_{x \to p} f(x) = q.$ For any $\epsilon > 0$, let $$ D_{\epsilon} = \{ \delta \in \Bbb R \mid \forall x \in \Bbb R, 0< d(x,p) < \delta \implies d(f(x),q) < \epsilon \}, $$ and let $\delta_\epsilon = \sup D_\epsilon$, if this supremum exists, and $\infty$ otherwise. Further suppose that $0 < \epsilon_1 < \epsilon_2.$ Then if $\delta_{\epsilon_2} < \infty$, we have $$ \delta_{\epsilon_1} \le \delta_{\epsilon_2}.$$ If we agree to say that $\infty \le \infty$, then, regardless of the value of $\delta_{\epsilon_2}$, we can write $$ \delta_{\epsilon_1} \le \delta_{\epsilon_2}.$$

That's not a very pretty statement of the theorem, but the case of constant functions requires that messiness in the conclusion. I've also replaced $E$, in your definition of the set $D_\epsilon$, with $\Bbb R$, because that's what seemed right. I've also swapped the roles of $\epsilon_1$ and $\epsilon_2$, because I like indices to increase reading left to right, but that's a matter of taste more than mathematical correctness. I'd certainly understand if you wanted to swap them back. On the good side, swapping them means we have to look at every line of the proof carefully. One more thing: I've gotten rid of the superscript "*" on your deltas, because it wasn't needed, and was a pain to generate in LaTeX.

Now let's move on to the proof. Because I've defined the terms in the theorem, we can get rid of some junk.

To begin, let's see how $D_{\epsilon_1}$ and $D_{\epsilon_2}$ are related: $$ D_{\epsilon_1} = \{ \delta \in \Bbb R \mid \forall x \in \Bbb R, 0< d(x,p) < \delta \implies d(f(x),q) < \epsilon_1 \}. $$ Suppose that $u \in D_{\epsilon_1}$. Then $$ 0< d(x,p) < u \implies d(f(x),q) < \epsilon_1. $$ But $\epsilon_1 < \epsilon_2$, so $$ 0< d(x,p) < u \implies d(f(x),q) < \epsilon_2, $$ which tells us that $u \in D_{\epsilon_2}$. Thus every element of $D_{\epsilon_1}$ is also in $D_{\epsilon_2}$, hence $$ D_{\epsilon_1} \subset D_{\epsilon_2}. $$

Now any upper bound of the second set is also an upper bound of the first. Hence if the second set has a finite upper bound, $b$, then $\delta_{\epsilon_2} = \sup D_{\epsilon_2}$ exists and $\delta_{\epsilon_2} \le b$; moreover, $\delta_{\epsilon_2}$ is itself an upper bound of $D_{\epsilon_2}$, hence also of $D_{\epsilon_1}$, so $$ \delta_{\epsilon_1} \le \delta_{\epsilon_2}. $$ On the other hand, if the second set has no upper bound, then $\delta_{\epsilon_2} = \infty$, and if (as in the theorem statement) we agree that all numbers, and $\infty$, are less-than-or-equal-to $\infty$, then we again have $$ \delta_{\epsilon_1} \le \delta_{\epsilon_2}. $$
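If it helps to see the inequality concretely, here is a minimal numerical sketch (an illustration only, not part of the proof; the test function $f(x) = x^2$, the grids, and the helper name `delta_sup` are my own choices) that brute-forces an approximation of $\delta_\epsilon$ on a grid and shows it shrinking together with $\epsilon$:

```python
import numpy as np

# Minimal numerical sketch (illustration only): approximate
# delta_eps = sup D_eps for f(x) = x**2 at p = 0, q = 0 by a grid search.
# The function, grids, and names here are illustrative choices.

def f(x):
    return x ** 2

p, q = 0.0, 0.0

def delta_sup(eps, deltas=np.linspace(1e-3, 2.0, 2000), n_x=4001):
    """Largest candidate delta with 0 < |x - p| < delta  =>  |f(x) - q| < eps."""
    best = 0.0
    for d in deltas:
        x = np.linspace(p - d, p + d, n_x)
        mask = (np.abs(x - p) > 0) & (np.abs(x - p) < d)   # punctured ball around p
        if np.all(np.abs(f(x[mask]) - q) < eps):
            best = d                                        # d works; keep the largest so far
    return best

for eps in [0.5, 0.1, 0.01]:
    print(eps, delta_sup(eps))   # roughly sqrt(eps): smaller eps, smaller delta
```

For this particular $f$ the exact value is $\delta_\epsilon = \sqrt{\epsilon}$, so the printed values just approximate $\sqrt{0.5}$, $\sqrt{0.1}$, $\sqrt{0.01}$.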

You might want to compare my proof, line by line, with yours. Yours says things like "let the set of all deltas that work for a given epsilon be ...". That's what motivated you to define $D_{\epsilon_i}$, but "works for" is not meaningful mathematically. Fortunately, your actual definition of the set is correct (aside from some formatting issues arising from quoting). In the same way, "it cannot introduce a larger number by accident" doesn't really mean anything, so you need to get rid of it.

I think, although I haven't really checked, that you got the order of epsilons swapped at some point ... but you can check that.

I hope my rewrite is helpful to you.

John Hughes
  • 93,729
  • shouldn't your statement about $u \in D_{\epsilon_2}$ be $\subseteq$ rather than $\subset$? Regardless, your answer was incredibly helpful. Thanks so much! – Charlie Parker Jul 09 '18 at 16:23
  • random (seemingly unrelated) question. Is the infimum of the set $D_{\epsilon}$ zero? – Charlie Parker Jul 09 '18 at 16:33
  • I know this might be redundant but I just want to triple check that I do not have a misunderstanding, monotonicity is not required ever, right? – Charlie Parker Jul 09 '18 at 16:37
  • 1
    (1) Regarding $\subset$: many mathematicians allow $A \subset A$, i.e., the subset relation includes equality. I was following that convention. (2) No; the infimum of $D_\epsilon$ is $-\infty$, because all negative values of delta are (vacuously) included in each such set. We could change the def'n of $D$ to say $\{ \delta \in \Bbb R^{+} \ldots \}$, and then the infimum would be zero; nothing else in the proof would need changing. (3) There's no requirement that $f$ be monotone, locally monotone, or anything like it. You can see that because we never compare $f(a)$ to $f(b)$ for any $a,b$. – John Hughes Jul 09 '18 at 17:28
  • One final point: your question asks for "the correct proof", but that's a bad goal. There may be dozens of correct proofs, and no one that's better than the others. – John Hughes Jul 09 '18 at 18:51
  • that's just a minor point, more about the way I used language. I just wanted any proof that clarified why my proof was not correct. When I wrote that I was under the impression my proof was completely wrong; now, thanks to you, I understand what was missing. That's what I was truly after. Thanks! :D – Charlie Parker Jul 09 '18 at 18:53
0

This is not correct. Example: Constant functions. For a constant function any $\delta$ satisfies the definition of limit.

If $f$ is a constant function, then $$\forall \epsilon>0\ \forall \delta>0:\ 0<|x-a|<\delta\implies |f(x)-f(a)|=0<\epsilon$$


---

Any function for which the limit exists at $x=a$ and for which there is no punctured neighborhood of $a$ on which the function is constant will satisfy the condition that decreasing $\epsilon$ eventually forces the maximal $\delta$ to decrease (no monotonicity needed).

It is clear that if there is a punctured neighborhood of $a$ on which $f$ is constant, then taking $\delta>0$ such that $\{x:\ 0<|x-a|<\delta\}$ is inside that neighborhood is enough to satisfy the definition of limit for every $\epsilon$.

On the other hand, assume that no such neighborhood exists, and suppose toward a contradiction that some single $\delta_0>0$ (say, the infimum of the deltas that work, taken over all $\epsilon$, if it were positive) satisfies the definition of limit for every $\epsilon$. Then on the punctured neighborhood $\{x:\ 0<|x-a|<\delta_0\}$ we have $|f(x)-L|<\epsilon$ for every $\epsilon>0$, hence $|f(x)-L|=0$, i.e. $f$ is constant on that punctured neighborhood. Contradiction.
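As a concrete pair of examples to illustrate the dichotomy (my own illustration, not part of the original argument): if $f \equiv L$ is constant, then every $\delta > 0$ satisfies the definition for every $\epsilon$, so the largest admissible $\delta$ is $\infty$ for all $\epsilon$ and never decreases. If instead $f(x) = x$ with $a = L = 0$, then $f$ is constant on no punctured neighborhood of $0$, the largest admissible $\delta$ for tolerance $\epsilon$ is exactly $\epsilon$, and it is forced to shrink to $0$ as $\epsilon \to 0$.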

  • what do you mean by "this" when you say in your first sentence that "This is not correct."? I already know there is something wrong with my proof; I wrote that in my question. – Charlie Parker Jul 09 '18 at 15:07
  • sorry for being dense but I don't understand what you're trying to show. Perhaps stating it as a claim, theorem, or lemma might help? – Charlie Parker Jul 09 '18 at 15:12
  • 1
    @Pinocchio Below the line I characterize (necessary and sufficient condition) the functions for which decreasing the $\epsilon$ eventually (strictly) decreases the maximum $\delta$ that makes the definition of limits true. The condition is that there is no punctured neighborhood of the point at which the limit is taken, on which the function is constant. Above the line is the proof that not all functions require decreasing the $\delta$. –  Jul 09 '18 at 23:11
  • ok! that makes sense. Edit answer and I'm happy to upvote. – Charlie Parker Jul 10 '18 at 15:38
0

Let's address some of the points in my question.

Addressing point #1: the constant function does not provide a counterexample to my proof. My proof defines $\delta^*_{\epsilon}$ to be the largest delta for which $0<|x-p|<\delta$ forces $|f(x) - L| < \epsilon$ (which is what I mean by "works"). Since I am talking about supremums we always need to compare the supremums of $D_{\epsilon}$ vs $D_{\epsilon'}$ for $\epsilon \neq \epsilon'$. Otherwise it does not count as a counterexample. The key thing to notice is that for $\epsilon_1 < \epsilon_2$ we always have:

$$ D_{\epsilon_1} \subseteq D_{\epsilon_2}$$

which implies:

$$ \sup D_{\epsilon_1} \leq \sup D_{\epsilon_2}$$ $$ \delta^*_{\epsilon_1} \leq \delta^*_{\epsilon_2}$$

this is universal for any function, and therefore monotonicity is irrelevant, addressing my points #2 and #3. The key is that the supremum of the "smaller" set $D_{\epsilon_1}$ can at most rise to equal the supremum of the larger set, but never exceed it. An important fact that professor John Hughes pointed out is that when $D_{\epsilon}$ is unbounded, for the statement of the theorem to work we indeed require $\infty \leq \infty$, which is totally reasonable, as he compactly and beautifully put it:

On the other hand, if the second set has no upper bound, then $\delta_{\epsilon_2} = \infty$, and if (as in the theorem statement) we agree that all numbers, and $\infty$, are less-than-or-equal-to $\infty$, then we again have $$ \delta_{\epsilon_1} \le \delta_{\epsilon_2}. $$

this addresses points #2 and #3 in my question.

To address my point #4: I simply must not confuse the $\delta$ that we choose in the $\epsilon$-$\delta$ definition with the largest one, $\delta^*$. They are very different. For example, since $D_{\epsilon_1} \subseteq D_{\epsilon_2}$, despite the choice of $\epsilon_1 < \epsilon_2$ we can always choose $\delta_1, \delta_2 \in D_{\epsilon_1}$ such that $\delta_1 > \delta_2$. This is because these deltas satisfy the epsilon-delta definition of the limit, which does not restrict the ordering between them, and since both "work" we could in principle choose them "backwards".
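A concrete illustration of this point (my own numbers, not taken from the answers above): take $f(x) = x$, $p = q = 0$, and $\epsilon_1 = 0.1 < \epsilon_2 = 1$. Both $\delta_1 = 0.09$ and $\delta_2 = 0.05$ lie in $D_{\epsilon_1} \subseteq D_{\epsilon_2}$, so nothing stops us from using the larger $\delta_1 = 0.09$ for the tighter tolerance $\epsilon_1$ and the smaller $\delta_2 = 0.05$ for the looser tolerance $\epsilon_2$: a smaller epsilon paired with a larger chosen delta. The maximal deltas, by contrast, stay ordered: $\delta^*_{\epsilon_1} = 0.1 \leq \delta^*_{\epsilon_2} = 1$.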

The last point I already addressed, but I will emphasize that my proof applies to general functions; however, it required more careful writing for it to apply to functions whose $D_{\epsilon}$ does not have a finite least upper bound. We just need the portion of professor John Hughes's answer that I quoted here for it to work.

Thanks to everyone! :)

  • In your inclusion $D_{\epsilon_1}\subseteq D_{\epsilon_2}$ the equality may hold for arbitrarily small $\epsilon$'s. This happens for constant functions, for example. This makes the supremum of the $\delta$'s that work, not decrease, but stay constant. –  Jul 09 '18 at 23:20
  • @cactus ok. I know. I'm not sure why everyone is so obsessed about constant functions. They've already been addressed, especially by Hughes's answer. – Charlie Parker Jul 10 '18 at 15:36
  • Everyone is telling you about constant functions because you asked for it "largest $\delta$ should also decrease". It is your job to ask the question that you want answered. The answers that you are getting address what you asked. Whether that is what you wanted to ask is another story. –  Jul 10 '18 at 23:07