16

I've been delving into the concept of limits and the Epsilon-Delta definition. The most basic definition, as I understand it, states that for every real number $\epsilon \gt 0$, there exists a real number $\delta \gt 0$ such that if $0 \lt |x - a| \lt \delta$ then $|f(x) - L| \lt \epsilon$, where $a$ is the limit point and $L$ is the limit of the function $f$ at $a$.
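For concreteness, here is how the definition plays out in one simple case (the function is just an illustrative choice): take $f(x) = 3x + 1$, $a = 2$, $L = 7$. Given any $\epsilon > 0$, the choice $\delta = \epsilon/3$ works, because $$0 < |x - 2| < \delta \implies |f(x) - 7| = |3x - 6| = 3|x - 2| < 3\delta = \epsilon.$$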

While I grasp the formal definition, I'm grappling with the philosophical aspect of it. Specifically, I'm questioning whether this definition truly encapsulates our intuitive understanding of what a limit is. The idea of a limit, as I see it, is about a function's behavior as it approaches a certain point. However, the Epsilon-Delta definition seems to be more about the precision of the approximation rather than the behavior of the function.

In the book "The Philosophy of Mathematics Today" by Matthias Schirn, on page 159, it is stated that: "At one point, Etchemendy asks: 'How do we know that our semantic definition of consequence is extensionally correct?' He goes on to say: 'That [this question] now strikes us odd just indicates how deeply ingrained is our assumption that the standard semantic definition captures, or comes close to capturing, the genuine notion of consequence' (Etchemendy 1990, 4-5). I do not think that this diagnosis is correct for some people: for some logicians, the question is similar to: How do we know that our epsilon-delta definition of continuity is correct?".

This quote resonates with my current dilemma. Does the Epsilon-Delta definition truly capture the essence of what we mean by a 'limit'? Though the epsilon-delta definition is a mathematical construct, what evidence do we have that it accurately reflects our intuitive concept of a limit? How can we be sure it is not merely a useful formalism, but a true representation of the limit as a variable approaching some value? Are there alternative definitions or perspectives that might align more closely with our intuitive understanding of limits? I would appreciate any insights or resources that could help me reconcile these aspects of the concept of limits. Thank you in advance for your help.


Edit: I think I should add my motivation for asking the question. What I really want is an argument demonstrating that this definition of limit is the definition, one that no better definition could replace. I can accept the definition as it stands, within its own axiomatic system and on its own terms, but what certainty is there that a hundred years from now we won't come up with a better definition still? It's not that we can't understand the current definition; I am more worried that there is something outside our sphere of recognition that we are failing to take note of, because everybody just seems to accept the definition without any further doubt or examination.

  • 21
    Although the $\varepsilon$-$\delta$ definition is said to "capture" our intuitive idea of a limit, and I think this is true to a large extent, there is something to be said for the converse: we should adjust our intuitive idea of what a limit is to conform to the $\varepsilon$-$\delta$ definition. – Joe Jun 20 '23 at 15:50
  • 5
    "Evidence that it accurately reflects our intuitive concept of a limit"... I think there are potentially more than one "intuitions" about limits. The comment of @Joe is true in my experience: when I understood the epsilon-delta definition, I felt like I matched the definition to my intuition, but not exactly my intuition about a functions values "approaching" something. It was more of a match with my intuition about error and measurement in a lab experiment, say (I read from Colley's "Vector Calculus"). – Alex Ortiz Jun 20 '23 at 15:54
  • The way that I look at the problem is that, firstly, the definition is a human construction that tries to capture the essence of a phenomenon. I like to see it this way: a perfect definition of limit might exist somewhere at hand, so why should we adjust our whole understanding based on a human-made construction unless it is proven to be the best definition possible? It might hinder our understanding of what a limit really is – thomas graceman Jun 20 '23 at 15:58
  • 3
    " However, the Epsilon-Delta definition seems to be more about the precision of the approximation rather than the behavior of the function" but the precision of the approximation is a measure of the behavior of the function, right? – Sarvesh Ravichandran Iyer Jun 20 '23 at 17:03
  • 8
    The onus is on you to give a specific example where the Epsilon-Delta definition does not agree with your intuitive understanding of limit. – Somos Jun 20 '23 at 19:52
  • 13
    The question is meaningless because there's no such thing as "our intuitive understanding of what a limit is." Different people have different intuitive understandings; whose intuition are you asking about? My intuitive definition of what a limit is is that it's what the epsilon-delta definition says it is, so the definition captures my intuitive understanding perfectly. You might counter, "That's your learned understanding of what a limit is; I meant your intuitive understanding of what a limit is." And my honest response to that would be that I have no idea what you mean by that. – Tanner Swett Jun 21 '23 at 03:20
  • 2
    Now, it would be meaningful to ask, "Does the epsilon-delta definition of a limit capture our intuitive understanding of what $f(b)$ would be if $f$ were well-behaved at $b$?" Or to ask, "Does the epsilon-delta definition capture our intuitive understanding of continuing the pattern set by $f(x)$ until $x$ reaches $b$?" Or to ask, "Does the epsilon-delta definition capture our intuitive understanding of where most of the outputs of $f$ are for inputs near $b$?" Out of these several possible questions, I have no way to tell which one, if any, you're asking. – Tanner Swett Jun 21 '23 at 03:25
  • Any proof that it is the best definition possible would be a human constructed proof. Why would you trust that human construction? – JonathanZ Jun 21 '23 at 03:50
  • @TannerSwett, the question is not meaningless at all. Our intuitive understanding of continuity, for example, fits much better with the original definition given by Cauchy (see answer https://math.stackexchange.com/a/4722832/72694) than with its long-winded epsilontic paraphrase. Obviously one can't give a definition of what constitutes "intuitive understanding", but one can certainly analyze it in a meaningful way. – Mikhail Katz Jun 21 '23 at 11:42
  • 3
    @MikhailKatz Well, I admit that continuity over an interval is a concept that humans do have an intuitive notion of. But I maintain that there are several plausible interpretations of the phrase "our intuitive understanding of what a limit is," some of which I listed in my second comment, and the question is ambiguous until the asker specifies which of the possible meanings is the one that they mean. – Tanner Swett Jun 21 '23 at 12:35
  • 1
    Thomas, can you maybe try to be more specific on how the epsilon-delta definition is not about the behaviour of a function as it approaches a point? Or simply, what is your idea of limit? – Lorenzo Pompili Jun 21 '23 at 13:02

13 Answers

19

You are in deep philosophical waters when you ask about

our understanding of what a limit really is.

That statement assumes that there is a Platonic reality somewhere "out there" where limits and other mathematical notions have an essence or existence independent of our knowledge of them.

Some philosophers of mathematics agree, some don't. Many mathematicians act as if it were true and don't really care about the controversy.

The epsilon delta definition of a limit is, as you say in a comment, a human construction. It was designed to replace the controversial idea of two numbers that are infinitely close to one another with infinitely many individual statements each of which uses just ordinary inequalities. Replacing "infinitely close" by "infinitely many" allowed us to prove theorems.

Nonstandard analysis provides definitions with which we can talk about infinitely close directly. But that's still just another human construct. It doesn't address the reality of limits. Whether the definition of limit there matches your intuition better depends on your particular intuition.

Ethan Bolker
  • 95,224
  • 7
  • 108
  • 199
  • 3
    There is nothing in the question as posed about "reality" or "Platonic reality". You are answering a strawman. – Mikhail Katz Jun 21 '23 at 11:48
  • 14
    @MikhailKatz: As the opening words of this answer suggest, the question and comments by the OP open this door by their own wording, asking for a definition that "truly captures the essence", that is a "true representation" of our intuition, by referring to "our understanding of what a limit really is". This answer gets right to the point. – Lee Mosher Jun 21 '23 at 12:40
  • 1
    @LeeMosher, we have to agree to disagree about this. The essence of continuity (and its reformulation in terms of limits) is captured by Cauchy's original definition "infinitesimal change in $x$ produces infinitesimal change in $f$" (see answer https://math.stackexchange.com/a/4722832/72694). The user who posted this question legitimately feels that the epsilon-delta paraphrase lacks in intuitive appeal. The issue of mathematical Platonism is a transverse issue that amounts to changing the subject. There being already 2 "closing" votes indicates that people are uncomfortable with this fact – Mikhail Katz Jun 21 '23 at 12:45
  • 1
    I think there are broader reasons for close votes (I did not cast one). – Lee Mosher Jun 21 '23 at 13:01
18

Clearly we want the limit of $f(x)=2x$ to be $0$ at $x=0$. What about the limit of $g(x)=\sqrt{\lvert x\rvert}$ at $x=0$? What if the function is $h(x)=x\sin\frac1x$? What about $p(x)=\sqrt{\lvert x\rvert}\sin\frac1x$?

The function $f$ approaches $0$ as $x\to 0$ in a very simple, orderly way. The others are less well behaved: $g$ has a very sharp cusp at $x=0$, and the other two functions "wiggle" infinitely many times on the way toward $x=0$. But we have decided (with the epsilon-delta definition of a limit) that all four functions have the limit $0$ at $x=0$.
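To see how the definition delivers this verdict in, say, the case of $g$ (a quick sketch): given $\epsilon > 0$, take $\delta = \epsilon^2$; then $$0 < |x - 0| < \delta \implies |g(x) - 0| = \sqrt{\lvert x\rvert} < \sqrt{\delta} = \epsilon.$$ Similar choices handle the others: $\delta = \epsilon/2$ works for $f$, and the bounds $|h(x)| \le |x|$ and $|p(x)| \le \sqrt{\lvert x\rvert}$ reduce $h$ and $p$ to the cases already done.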

Clearly there is a lot about a function's behavior as it approaches a limit that is not captured by the epsilon-delta definition of a limit. This definition does (in my opinion) capture the notion that the value of the function must inevitably home in on a single numeric value (the limit) as the input of the function homes in on a particular value (which need not necessarily be a value at which the function is defined), which is how I intuitively understand what a mathematical "limit" is.

Exactly how the value "homes in" on the limit -- smoothly, with lots of "wiggles", even in infinitely many discontinuous jumps -- is irrelevant; if we care about further details like that, we have other concepts to describe them.

Ultimately, it comes down to the idea that in order to do mathematics, we must choose things that we want to be able to do in our mathematics, and then we must precisely describe those things. There is a particular concept that we have found to be useful and that we have decided to call a "limit"; the epsilon-delta formulation is how we define that concept in standard analysis.

If we had decided that the word "limit" should denote the kind of property we now call "differentiability at a point", only $f$ (among the four functions I described in my first paragraph) would have a limit at $x=0$. Differentiability at a point is a much stronger statement about a function's behavior as its input approaches a certain value.

David K
  • 98,388
5

Another potential support for the epsilon-delta definition is that non-standard analysis came up with a characterisation of limits based on infinite and infinitesimal quantities. For example, $f$ is continuous at $x$ if $f(x)-f(y)$ is infinitesimal whenever $x-y$ is infinitesimal. The infinite and infinitesimal quantities involved can be defined rigorously.

This isn't independent support -- we presumably wouldn't have taken a concept as being the correct non-standard definition of limits if it had been different from the epsilon-delta definition. It's still of some interest that the definition that's equivalent to epsilon-delta turns out to be a more precise version of the pre-formal definitions in terms of infinitesimals -- the 'microcontinuity' definition from non-standard analysis is Cauchy's original definition.
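For reference, a sketch of how this is usually made precise (standard nonstandard-analysis notation, not spelled out above): writing $f^*$ for the natural extension of $f$ to the hyperreals and $y \approx x$ for "$y - x$ is infinitesimal", a real function $f$ is continuous at a real point $x$ if and only if $$y \approx x \implies f^*(y) \approx f^*(x) \quad \text{for every hyperreal } y.$$ The right-hand condition is the 'microcontinuity' property, and its equivalence with the $\epsilon$-$\delta$ definition at standard points is a standard theorem of the subject.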

  • 3
    Thomas, you may want to point out that the definition you gave was, in fact, Cauchy's original definition of continuity. – Mikhail Katz Jun 21 '23 at 11:27
5

Von Neumann: “Young man, in mathematics you don't understand things. You just get used to them.”

My short answer is that the definition of limit is the way it is not because it “matches our intuition”, but because it helps to describe mathematical facts in a nice way and allows us to build intuition on mathematics itself. If you “question” it, you should probably start questioning more fundamental objects, like real numbers (or even the axioms of set theory).


I think there are at least two kinds of “intuition” one can think of:

  • type I: intuition about philosophical aspects of mathematics (how mathematics should be; this could often include connections with the physical world, but also with logic, abstract geometric ideas,…),
  • type II: intuition about mathematics itself (how math actually is; how math works, given the axioms).

In my experience, mathematicians develop a lot of intuition of type II (because they need it to work), while they do not necessarily build that of type I (it depends on the topic of course). Despite what one could think, definitions are very often (not always) used to develop type II intuition: we need definitions that are suitable for studying mathematics itself. I personally don't have much type I intuition about limits.

Your question of course is about intuition of type I. If you wanted to question the definition of limit from the point of view of type I intuition, you should maybe start by questioning more fundamental things, like the definition of functions and real numbers, or the axioms of set theory. If you take the basic axioms for granted, then many things actually boil down to type II intuition, because once you accept the basic rules of the mathematical world (or, say, of mathematical analysis), you can just build everything you need in the language of that world, using your type I intuition (philosophy/logic/reality…) just as a guideline, having faith that the math you build is then useful. And you study this new reality in the same way you would explore a new land, without necessarily sticking to what you already know about the rest of the planet.

In practice, how I would think of limits is in a more type II way:

Ok, I have defined functions. Now (vague question), what are all possible behaviours of a function around a point $a$? It either satisfies the epsilon-delta definition, or the function approaches different values depending on the subsequence I choose… Mhhh. You know what, I’ll just define the limit in this way. […] Ohhhh great, the limit has all these nice properties (linearity, product and composition rule…), it must be a very good definition. Well, I’ll just keep that, unless I find a better one.

And once you accept this new definition, you begin to develop type II intuition on that, and you use it all the time and you realize that it works really well and it is “natural”, in the sense that you really cannot find a better one, to the point that you don’t question the definition anymore.


To sum up, I will repeat what I said in the beginning (using types I and II). My point is that the definition of limit is the way it is not because it “matches our (type I) intuition”, but because it describes a mathematical truth in a nice way and allows us to build (type II) intuition on mathematics itself.


Edit. P.s. You said that (the idea of) the epsilon-delta definition is more about the precision of the approximation rather than the behaviour of the function. Maybe the definition itself is counterintuitive, but I wanted to add that the purpose of the definition is actually all about the behaviour a function has around a point. It is a very natural way of describing what a function could or could not do close to a point $a$. It is a qualitative property; there is no quantitative information about the approximation of $f(a)$ in terms of the values of $f$ near $a$.

Lorenzo Pompili
  • 4,182
  • 8
  • 29
3

This is going to be a long post, but I think it will be useful. Imagine the following discussion, in the Socratic style:

Teacher: What does it mean when we write $$\lim_{x \to a} f(x) = L?$$

Student: It means that the limit of the function $f(x)$ as $x$ approaches $a$ equals $L$.

Teacher: Yes, but what does that actually mean? What are we saying about the behavior of $f$?

Student: [Pauses to think.] Well, I guess what we are saying is that for values of $x$ "close to" $a$, the function $f(x)$ becomes "close to" $L$.

Teacher: Okay. So how are you defining the concept of "close to?" In particular, what is the notion of "closeness" in a mathematical context? Does it mean that $x = a$?

Student: No--well, maybe sometimes! Of course, if $f(a)$ is well-defined, then we just have $f(a) = L$, but that's not interesting. The whole point of limits is to have a way to describe the function's behavior around the point $x = a$ even when $f$ is not defined at $a$.

Teacher: Right. So..."closeness." How would you define this idea mathematically?

Student: [Very long pause.] I'm not sure. Well, hold on. I think I have it, but it's a sort of geometric argument. When a number $x$ is "close to" another number $a$, we are really talking about the distance between these numbers being small. Like, $2.00001$ is "close to" $2$ because the difference is $0.00001$.

Teacher: But that difference, which you call "distance," isn't necessarily "small" in and of itself, is it? After all, isn't $10^{-10^{100}}$ much, much smaller than $10^{-5}$? "Small" is relative.

Student: [With a little irritation] Yeah, but you know what I mean! If the difference is small enough, then the limit exists!

Teacher: [Chuckles] Yes, I see what you're getting at, but so far, all you've been doing is choosing different vocabulary to describe the same concept. What is "distance?" What is "small enough?" We are mathematicians--when language is insufficiently precise, how do we communicate? Take your time to think about this.

Student: [Sighs] So...what I was doing before, I was calculating a difference between $x$ and $a$ and calling it "small" if I thought it looked like a small number. But really, it's not the signed difference, but the absolute difference $|x - a|$ that matters; and since, as you put it, "small is relative," let's instead use a variable, say $\delta$ (for the "difference"), to represent some bound.... [trails off]

Teacher: Go on....

Student: All right. So if $|x-a| < \delta$, then $x$ is "close to" $a$. Where $\delta$ is some number that we choose in some way that quantifies the extent of closeness.

Teacher: Okay. Is $\delta$ allowed to be zero?

Student: Oh, of course not, no. I forgot. No, we need $$0 < |x - a| < \delta.$$ Then $x$ is, say, "delta-close" to $a$, or in a "delta-neighborhood" of $a$.

Teacher: All right. Now how are you going to tie that to the behavior of $f$?

Student: [Exasperated] Yes, yes, I'm getting to that part. Well, as I had said before, the limit is something where if $x$ is "close to" $a$, then $f(x)$ is "close to" $L$. Obviously, it's not necessarily the case that $f(x)$ has to have the same extent of "closeness" to $L$ as $x$ does to $a$. For example, if $f(x) = 2x$, then when $x$ is within $\delta$ units of, say, $1$, then $f(x)$ is only bounded within $2\delta$ units of $2$, since $0 < |x-1| < \delta$ implies that $0 < |2x - 2| = |f(x) - 2| < 2\delta$. But functions can be arbitrarily (although not infinitely) steep. I don't see how we can quantify the relationship between the closeness of $x$ to $a$ as it impacts the closeness of $f(x)$ to $L$.

Teacher: You actually kind of touched on it already when you said that functions can be arbitrarily but not infinitely steep. Stated informally another way, it means that the function's value can change very rapidly--in fact, as rapidly as you please--but only finitely so, for some fixed change in $x$. So if you wanted to ensure that the difference between $f(x)$ and $L$, while not necessarily zero, can be made as small as you please, how would you do it?

Student: [Long pause.] I think I need a little more help.

Teacher: So far, you've been thinking about using, as you put it, "delta-closeness" to force $f(x)$ to be "close to" $L$. But what if you turned it around and instead said, "I'm going to force $f(x)$ to be as close as I please to $L$; then what does that say about how close $x$ is to $a$?" That way, you are guaranteeing that $f(x)$ becomes close to $L$, but the cost of that guarantee is that we need to be sure that--

Student: [Interrupts] Oh, oh! I get it now! Yes. What we need to say is that for a given amount of "closeness" of $f(x)$ to $L$, there is a $\delta$-neighborhood around $a$ where, if you pick any $x$ in that neighborhood, then $f(x)$ will be..."close enough" to $L$--that it will be within that given amount of closeness. In other words, we pick some "tolerance" or error bound between $f(x)$ and the limit $L$ that is our criterion for "close enough." And for that closeness, there is some set of corresponding $x$-values close to $a$ for which we are guaranteed that $f(x)$ meets the closeness criterion.

Teacher: Good, good. But how do we formalize this?

Student: Well it's clear that we need another variable to describe the extent of closeness between $f(x)$ and $L$...let's use $\epsilon$, for "error." And as we did before, we use the absolute difference $|f(x) - L|$ to describe the "distance" between $f(x)$ and $L$. So our criterion has to be $$|f(x) - L| < \epsilon,$$ and this time, we get to pick $\epsilon$ freely, because it represents our tolerance for how much error we will accept between the function's value and its limit, and we must be able to choose this to be arbitrarily small, but not zero.

Teacher: [Looks on silently, smiling]

Student: So let's define a procedure. Pick some $\epsilon > 0$. Then whenever $0 < |x - a| < \delta$--in other words, for every $x$ in a $\delta$-neighborhood of $a$, then $|f(x) - L| < \epsilon$. But I feel like something is missing, because there might not be such a $\delta$. Like if $$f(x) = \begin{cases}-1, & x < 0 \\ 1, & x > 0 \end{cases}$$ then if I pick $\epsilon = 1/2$, the "jump" in $f$ at $x = 0$ is of size $2$. So no matter how small I make the $\delta$-neighborhood around $a = 0$, it will always contain $x$-values that are negative, as well as $x$-values that are positive, and that means any such $\delta$-neighborhood will have points where the function has values $1$ and $-1$. It would be impossible to pick a limit $L$ that is simultaneously within $1/2$ unit of $1$ and $-1$, let alone simultaneously arbitrarily close to $1$ and $-1$.

Teacher: Correct. So if there's an example of a function that has no such $\delta$, what made it so?

Student: I don't get what you mean.

Teacher: Remember how we were talking about ensuring that the (absolute) difference between $f(x)$ and $L$ can be made as small as you please? What consequence or implications does that have on the $\delta$-neighborhood?

Student: Well, there has to be some relationship there. I mean, as our error tolerance decreases, we have to imagine that, in general, there would be fewer $x$-values around $a$ that will satisfy that tolerance, right? So $\delta$ must depend in some way on our choice of $\epsilon$. Well, except in trivial cases like if $f(x)$ is a constant, then any $\delta$ works. But the point is the existence of a $\delta$. It doesn't have to be the largest one, or even unique. We just have to be able to find a sufficiently "small" neighborhood for which all $x$-values in that neighborhood around $a$ will have function values $f(x)$ within the error tolerance we specified to $L$.

Teacher: Right. So if you were to put all of this together, how would you propose we define the concept of a limit?

Student: I'd say something like this:

We say that $$\lim_{x \to a} f(x) = L$$ if, for any $\epsilon > 0$, there exists some $\delta > 0$ such that for every $x$ satisfying $0 < |x - a| < \delta$, one also has $|f(x) - L| < \epsilon$.
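To close the loop on the $f(x) = 2x$ example from earlier in the dialogue: to verify $\lim_{x \to 1} 2x = 2$ with this definition, given $\epsilon > 0$ one may take $\delta = \epsilon/2$, since $0 < |x - 1| < \delta$ implies $|2x - 2| = 2|x - 1| < 2\delta = \epsilon$; this is exactly the dependence of $\delta$ on $\epsilon$ that the student's earlier computation pointed toward.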

heropup
  • 135,869
  • Lot of fun reading it! X) – Lorenzo Pompili Jun 23 '23 at 13:24
  • 1
    @LorenzoPompili Thanks! I hope it illustrates how I think that the formal definition arises from an intuitive notion of limit, even down to the dependence of $\delta$ on $\epsilon$ and not the other way around; that these properties of the definition "follow naturally" in some sense. In fact, I would imagine that any competent teacher could motivate the $\epsilon$-$\delta$ definition from a proficient student who has not yet learned about limits; it might not exactly follow this hypothetical conversation, but the logical steps would be similar. – heropup Jun 23 '23 at 15:40
2

I would actually argue that it is wrong-headed to insist that the $\varepsilon$-$\delta$ definition is the only "correct" way to convert our intuitive idea of a limit into a precise definition. Most intuitions which precede definitions are not precise enough to correspond to one and only one mathematical concept. If I tell you that a limit gives you information about what happens to a function when the input gets closer and closer to a point, then there's more than one way to convert that into a precise mathematical definition. There is surely a sense in which the function $x^2$ gets "closer and closer" to $-2$ as $x$ gets "closer and closer" to $0$.

But proving things in mathematics relies upon having precise definitions for each term, and so clearly this issue needs to be "resolved" somehow. What we can do is allow our intuitive idea of "limits" and "closeness" to give birth to a range of related concepts in calculus, including limits as they are defined with $\varepsilon$'s and $\delta$'s, but also continuity, differentiability, monotone convergence, and so on and so forth. In my eyes, the fact that one of these precise concepts happens to be called "the limit" does not imply that this is the one true precise concept corresponding to our intuitive ideas.

These precise concepts also give us a common language. If your pre-rigorous intuition for limits is different to mine, and it actually drove you towards the concept of "continuity", or "monotone convergence", then that's okay. You should just need to remember to use the words "limit", "continuity", and "monotone convergence" as they are defined in modern mathematical practice. That is not an admission that your intuition was wrong: it is just a convention which helps prevent mix-ups in communication.

Precise definitions also have the advantage that they lead to more precise intuitions. After learning analysis, you can develop a separate intuition for each of the terms relating to convergence. You can appreciate their differences as well as their similarities. This gives you a rock-solid foundation upon which you can learn more involved concepts, such as uniform continuity, or the definition of a limit in a general topological space.

Joe
  • 19,636
1

While I grasp the formal definition, I'm grappling with the philosophical aspect of it. Specifically, I'm questioning whether this definition truly encapsulates our intuitive understanding of what a limit is. The idea of a limit, as I see it, is about a function's behavior as it approaches a certain point. However, the Epsilon-Delta definition seems to be more about the precision of the approximation rather than the behavior of the function.

The definition says that the approximation, the limit, gets more and more precise as you get closer and closer to the limit point. This certainly matches my understanding of what a limit is.

Of course, being mathematicians, we have to turn the statement inside out:

We can make the approximation arbitrarily precise ($<\epsilon$) by looking at a small enough interval ($<\delta$).

Even with this rewrite, I feel comfortable with saying that this is what a limit is.

1

The jarring contrast between the intuitive notion of continuity, on the one hand, and the alternating-quantifier bugaboo that is the epsilon-delta definition, on the other, is painfully familiar to successive generations of calculus students, and has been commented on extensively by scholars in mathematics education.

The epsilon-delta definition of continuity is in fact an after-the-fact long-winded paraphrase of the original definition due to Cauchy, which read as follows: every infinitesimal change $\alpha$ of the independent variable $x$ produces an infinitesimal change $f(x+\alpha)-f(x)$ in the function (here $\alpha$ is Cauchy's notation for an infinitesimal); see e.g., this recent article in the British Journal for the History of Mathematics: https://doi.org/10.1080/26375451.2020.1770015 It is indeed Cauchy's original definition that captures what you describe as "our intuitive understanding".

The definition via infinitesimals arguably corresponds to the intuitive notion of continuity. On the other hand, it is an established mathematical fact that the epsilon-delta definition and the definition via infinitesimals are strictly equivalent; see e.g., this recent article in Real Analysis Exchange: https://arxiv.org/abs/2305.09672 (see Section 6 there). Such mathematical equivalence provides evidence that the epsilon-delta definition does in fact correspond to the intuitive notion of continuity.

Mikhail Katz
  • 42,112
  • 3
  • 66
  • 131
  • Thanks, I look forward to these kinds of resources; I will take a look. – thomas graceman Jun 21 '23 at 13:22
  • 4
    @thomasgraceman Note that this definition, in order to be fully rigorous, requires an attendant formal construction of a number system containing infinitesimal elements; the apparent simplicity isn't free. Such a definition isn't hard to provide, but its need is worth noting: assuming we're interested in the usual no-infinitesimals version of the real line, why should expanding to include infinitesimals be needed to understand continuity? Personally I found that the $\epsilon$/$\delta$ definition did match my own intuitions. Of course at the end of the day they're appropriately equivalent. – Noah Schweber Jun 22 '23 at 03:09
  • @thomasgraceman Schweber's comment is somewhat misleading since it fails to mention the existence of an axiomatic approach to infinitesimal analysis, which more faithfully reflects the mathematical practice as found in Leibniz, Euler, and Cauchy. – Mikhail Katz Jun 22 '23 at 09:04
  • 2
    @MikhailKatz It is my understanding that Cauchy put forward that definition roughly half a century before the real numbers were rigorously defined (by Cantor, Dedekind etc.). So one could argue that this definition merely reflects Cauchy's own intuition about real numbers. How do you know that your intuition, for example, matches Cauchy's? (TBC) –  Jun 22 '23 at 20:42
  • 1
    (Cont'd) My definition is of course learned, but not via any set of axioms but in childhood, via observing a ruler and thinking about decimals and just extrapolating to numbers with an infinite number of digits after a decimal point. As we know, those are equivalent to the "standard" real numbers where there are no infinitesimals. So I can for certain say that I have a very vague intuition about infinitesimals (mostly gathered on a course of nonstandard analysis attended many years ago) - certainly not satisfactory enough to be able to understand what Cauchy meant in his definition. –  Jun 22 '23 at 20:42
  • 1
    @StinkingBishop : The idea that arbitrary real numbers are characterized by their relation to the rational numbers was already present at the start of the 19th century. Turning this idea on its head, Dedekind made that characterization into a definition of real numbers in the middle of the 19th century. – Lutz Lehmann Jun 23 '23 at 06:39
  • @St, the question you raise about Cauchy is a separate question. The modern definition of continuity via infinitesimals is arguably a better match for our intuitive notion of continuity (see my revised answer). Meanwhile, what Cauchy may have had in mind is an interesting historical question that is analyzed in a number of recent publications; see e.g., https://u.math.biu.ac.il/~katzmik/cauchy.html – Mikhail Katz Jun 23 '23 at 10:32
  • @LutzLehmann: quite right. In fact, the view of numbers as represented by unending decimals is already in Simon Stevin, a few centuries earlier; see e.g., this publication: http://doi.org/10.1007/s10699-011-9228-9 – Mikhail Katz Jun 23 '23 at 10:44
1

I'm questioning whether this definition truly encapsulates our intuitive understanding of what a limit is. The idea of a limit, as I see it, is about a function's behavior as it approaches a certain point. However, the Epsilon-Delta definition seems to be more about the precision of the approximation rather than the behavior of the function.

I think you need to ask yourself* these two things:

  • is there any case in which the $\epsilon$-$\delta$ definition says "$L$ is the limit of $f$ at $x$", but my intuitive understanding says "no, $L$ isn't the limit of $f$ at $x$"?
  • is there any case in which the $\epsilon$-$\delta$ definition says "$L$ isn't the limit of $f$ at $x$", but my intuitive understanding says "no, $L$ is the limit of $f$ at $x$"?

If the answers are "no" and "no", then yes, the $\epsilon$-$\delta$ definition does exactly encapsulate your intuitive understanding. If not, then go ahead and ask another question about that specific case - "Why can't I find an $\epsilon$-$\delta$ limit for this thing which 'clearly' has a limit?", or contrariwise.

* you need to ask yourself because only you have exactly your "intuitive understanding of what a limit is". There is no "our intuitive understanding", since we don't share a mind...
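One concrete case you might run through this test (purely as an illustration): let $f(x) = 1$ for rational $x$ and $f(x) = 0$ for irrational $x$. The $\epsilon$-$\delta$ definition says $f$ has no limit at any point $a$: for any candidate $L$, take $\epsilon = 1/4$; every punctured $\delta$-neighbourhood of $a$ contains both rational and irrational points, so we would need both $|1 - L| < 1/4$ and $|0 - L| < 1/4$, which is impossible since $|1 - L| + |L - 0| \ge 1$. If your intuition agrees that this function approaches no single value anywhere, that is one point of agreement; if it disagrees, that is exactly the kind of specific case worth asking about.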

AakashM
  • 574
0

For ease of exposition, let us first restrict ourselves to limits of sequences of numbers $(x_n)$.

Intuitively, a number $a$ is not the limit of $(x_n)$ if there is a positive gap between $a$ and $x_n$ for infinitely many $x_n$. In other words, if there is an integer $N$, so that, for every integer $M$, there is an $n>M$ so that $|a-x_n|\geq (1/N)$.

Turning this around, the only way a number $a$ can be the limit of $(x_n)$ is if, for all integers $N$, there is an integer $M$, so that we have $|a-x_n|<(1/N)$ for all $n>M$.

From the latter statement one sees that the limit of a sequence $(x_n)$ of numbers need not exist.

Secondly, the above is a requirement that the limit needs to satisfy. One may not consider it as sufficient to define a limit. Some definitions of limit, such as the requirement that $|a-x_n|<1/n$ for all $n$, appear to be stronger than this definition. However, in the context of standard set theory and first order logic, one can show that these stronger definitions lead to similar notions of limit as the given one.

Finally, it is not difficult to see that the above definition coincides with the usual $\epsilon$-$\delta$ definition in the context of standard set theory and first order logic.
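To spell out that last step briefly: given any real $\epsilon > 0$, the Archimedean property provides a positive integer $N$ with $1/N < \epsilon$, so an $M$ that works for $1/N$ in the definition above also works for $\epsilon$; conversely, every $1/N$ is itself an admissible $\epsilon$. Hence the "for all integers $N$" formulation and the usual "for all real $\epsilon > 0$" formulation pick out exactly the same limits.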

Kapil
  • 1,286
  • Isn't the requirement that $|a-x_n|<1/n$ much stronger than the requirement that $(x_n)$ converges to $a$, in the sense that there are sequences satisfying the latter but not the former? – KGM Jun 21 '23 at 13:19
  • @KGM Yes. It was wrong to say that these conditions are equivalent. However, these stronger notions do not lead to a conceptually very different theory as every limit in the weaker sense gives rise to a limit in the stronger sense for a subsequence. A change has been made to reflect your comment. – Kapil Jun 21 '23 at 17:47
0

Does the Epsilon-Delta definition truly capture the essence of what we mean by a 'limit'?

Hang on, because I'm about to get philosophical!

In the philosophy of language, there is the notion of semantic grounding, which is the idea that language can somehow be explained in terms of something else and that doing so reduces cognitive dissonance. Without getting into epistemological jargon or dipping into ontological debate, it suffices to say that the fundamental concept involved in the 'limit' is the notion of 'continuity'. A limit is understood as a non-discrete or continuous approach to something else, say, the real value 3 from above and below. So, when you ask whether Epsilon-Delta captures the intuition of limit, you are asking about what it means to a person for something to be continuous.

Our grounding of continuity, on an account of embodied cognition (SEP), my school of thinking, is that the language of Epsilon-Delta and the continuity it captures come indirectly from our other senses. So, in Where Mathematics Comes From, linguist, cognitive scientist, and philosopher George Lakoff and his coauthor Raphael Nunez argue that continuity is a fundamental experience of the brain rooted in motion along a path, something that extends back into evolutionary history. The human brain has visual and kinesthetic neurological computations that allow people to inherently and intuitively understand and use motion, a notion in philosophy called naive physics, and therefore there are in the cognitive architecture, such as the visual cortex, neural pathways that are repurposed for what are known as conceptual metaphors; Motion Along a Path is one of the fundamental metaphors we use to construct semantic content from syntactic expression.

That is, Epsilon-Delta "makes sense" because it uses some arithmetic notation (an indexed set of intervals on a metric space) to allow us to "move" through the metric space by changing an interval so that the point represented as an infinitesimal interval is always contained by a complete neighborhood. If you give me an index (given by epsilon), then I can give you a complete interval (given by an expression involving delta) that is a containing interval about the point-as-infinitesimal, allowing us both to reason that the point must be complete, because it is a subset of the neighborhood which we know to be complete. That's a proof of continuity that allows us to use our definition of slope on a single point by allowing us to define a point as an infinitely small interval, and why it is a product of Newtonian/Leibnizian genius and so exciting to understand!

Think about when you graphically understand the limit. You trace two points approaching in motion from opposite directions (maybe even with your fingers) to approximate a secant line on a curve to capture the notion of slope. Since, strictly speaking, you cannot calculate the slope of a point tangent to a curve, you use the difference quotient (just rise/run) and then you create a fiction called an infinitesimal so that you can pretend the two points are one point (because there is no real value that separates them), while at the same time claiming there is SOME distance between them, arriving at a contradiction. Like the root of -1, you are dealing with what some call a surd being.

But Motion Along a Path isn't the only use of this conceptual metaphor. Think about how you understand time as approaching you from the front and then moving behind you. Time flies. Time passes. We head into the future. The Ancient Greeks claimed time overtook you from behind. The winds of time invoke motion. All of these linguistic artifacts are a reflection of the neural circuits that are used for motion being repurposed for language, and in this case, a formal system of syntax. We experience motion as a continuous event, and so when we want to talk about continuity, we talk in terms of change and motion. Epsilon-delta uses the absolute value of a difference to express spatial dimension, and it appeals to the Continuum, the reals, for the notion of continuity. Thus, continuous motion in space and time semantically grounds Epsilon-Delta.

This accords with Kantian notions of space and time being constructs of the mind, as discussed in his Critique of Pure Reason and his transcendental idealism, which is an effort to reconcile the discrete and subjective monads of Leibniz with the continuous and objective laws of motion of Newton. A good introduction and explanation of how TI reconciles these two metaphysical approaches is Kant and the Exact Sciences.

So, today, the notion of essences (SEP) is a bit suspect to a group of mathematical thinkers called mathematical constructivists (SEP), a tradition that goes back most recently to topological godfather Brouwer. A nominalist and constructivist would hold that the Epsilon-Delta definition is a way to specify language to construct a fiction, for instance, to specify the necessary and sufficient definitions in a syntax which ultimately serves the function of referring to the experience of continuity of motion. Of course, a neo-Platonic realist would scoff and hold an entirely different set of views, so you'd have to square away your philosophy before you might be able to accept or reject a nominalist explanation such as the one offered here. But a clever observer would note that the primary purpose of calculus was to deal with continuous motion. In fact, acceleration, velocity, and position and rates of change are some of the first applications of the Calculus taught to students, and their relation to the domain of space, time, and motion is no coincidence. It's literally an expression of language that captures our intuitions about continuity and motion.

J D
  • 115
  • 1
    J D, you wrote: "A nominalist and constructivist would hold that the Epsilon-Delta definition is a way to specify language to construct a fiction, for instance, to specify the necessary and sufficient definitions in a syntax which ultimately serves the function of referring to the experience of continuity of motion." But what does this mean exactly and how does this answer the question posed? ... – Mikhail Katz Jun 23 '23 at 11:34
  • ... Try to perform a mental experiment and modify your answer by replacing "epsilon-delta definition" by "definition via infinitesimals". Your entire lengthy analysis apparently still applies, but what insight is gained thereby? – Mikhail Katz Jun 23 '23 at 11:34
  • @MikhailKatz A real definition (including that of a limit) is an attempt to organize intuitions into words (as opposed to stipulative, ostensive, lexical definitions, etc.) and if Quine's argument is extended, is an instance of semantic ascent. To answer the question, 'does the definition of limit respect intuitions?' requires an understanding of what a definition is and how it relates to intuitions. Robinson's book Definitions provides a taxonomy, so you'd have to be familiar with that to make better sense 'real definition'... – J D Jun 23 '23 at 14:32
  • but for our purposes, we can take the real definition as one that presumes the reality of what it describes, where real roughly means physically existing. That's Platonic mathematics (mathematical realism); a constructivist like Brouwer doesn't see mathematical entities as physically existing, but interprets them as words that represent what goes on in the mind, and presumes no physical existence. For more information, you'd have to read Nominalism in the Philosophy of Mathematics (SEP). Since limits aren't real in this view... – J D Jun 23 '23 at 14:39
  • they are fictions, like the word 'unicorn' which doesn't denote anything real either. (See Frege's Über Sinn und Bedeutung for a better understanding of how definitions refer.) So, to recap, first, a definition might be seen through the lens of a realist (Plato) or an anti-realist (Brouwer), and we have to decide which notion of definition suits us. Then, we have to understand what it means to have an intuition (SEP). I'm suggesting something along the lines of sui generis... – J D Jun 23 '23 at 14:45
  • To simplify, just appreciate this quotation from the article: "When you have an intuition that A, it seems to you that A. Here ‘seems’ is understood, not in its use as a cautionary or “hedging” term, but in its use as a term for a genuine kind of conscious episode. For example, when you first consider one of de Morgan’s laws, often it neither seems true nor seems false; after a moment’s reflection, however, something happens: it now just seems true. (Bealer 1998: 207)" – J D Jun 23 '23 at 14:45
  • You asked "how does this answer the question posed?" What I have done is stripped away the appearance that "How does this definition match an intuition?" can be answered without examining both 'definition' and 'intuition' by taking a position in mathematical philosophy on what happens when we make a definition, and to point in the direction of what it means to have an intuition. Mathematica can do math, but the philosophy of math is about explaining what it means to do math, and while one can fumble around in the dark on the meaning of 'definition' and 'intuition', instead we apply... – J D Jun 23 '23 at 14:49
    philosophical rigor by adducing two analyses of the concepts 'definition' and 'intuition'. The other answers here are coming from intuitions on what a definition is, but there's a body of mathematical philosophy one can appeal to, which is how it helps explain how the Epsilon-Delta definition captures our intuition. – J D Jun 23 '23 at 14:51
  • So, let me try again, since you've asked me to clarify: The Epsilon-Delta definition captures our intuitions because it creates a formal syntax for describing a continuous metric on a space (the absolute value of a difference of two reals) which is meaningful (according to Lakoff and Nunez) because the parts of our brain that calculate how to move through space also are involved in determining if something is 'continuous'. Any definition of a 'limit' that is intuitively meaningful must arithmetically be related to motion in space... – J D Jun 23 '23 at 14:56
  • and the Epsilon-Delta definition is a mathematical tool that allows us to 'move' through a metric space by selecting at will an increasingly small interval of points on the metric space, in effect allowing us to 'zoom in' on a curve approaching an infinitely small, arbitrary interval, right? We do the same in constructing the reals with Dedekind cuts out of rationals. Limits are using fundamental arithmetic operations to demonstrate completeness. – J D Jun 23 '23 at 15:00
  • So, to what end 'limits are rigorous, syntactical definitions by infinitesimals to describe motion in space'? Excellent question. Epsilon-Delta gives an analytically rigorous linguistic artifact that maps nicely to our notion that space is analog and complete. Real analysis rests on definitions that aren't merely naive appeals to intuition; they explicitly state what mathematics is necessary to capture the fundamental experience of moving in space by giving us a real-number index set of intervals on a metric space. As we move along the real continuum of the index set, intervals change... – J D Jun 23 '23 at 15:06
  • on the metric set getting smaller and smaller. – J D Jun 23 '23 at 15:06
  • Of course, you can accept the Platonic explanation, which might be "our intuitions are some magical facility that put us in touch with a Pure Form of an entity called a Limit that floats around in an alternative reality which is just as real as ours but isn't empirically available." :D Of course, my characterization of the Platonic view isn't charitable. – J D Jun 23 '23 at 15:08
  • I guess I should explicitly draw attention to the fact for the OP, that doing math and explaining how math is done are two different fields. The former is mathematics, but the latter is the philosophy of mathematics, and to have a good grip on the philosophy of mathematics is to have a good grip on philosophical notions like epistemology, ontology, and metaphysics, and that a rigorous conceptual analysis is not something that can be done without having a solid notion of what it means to be a concept... – J D Jun 23 '23 at 15:16
  • if you have any additional questions on the philosophy of math, it might make sense to post them on Philosophy Stack Exchange. – J D Jun 23 '23 at 15:16
  • J D: You wrote: Of course, you can accept the Platonic explanation which might be "our intuitions are some magical facility that put us in touch with a Pure Form of an entity called a Limit that floats around in an alternative reality which is just as real as ours but isn't empirically available. However, one needn't be a Platonist to ask to what extent the epsilon-delta definition of continuity matches our intuitive notion of continuity. – Mikhail Katz Jun 25 '23 at 13:51
0

If you take the same epsilon-delta definition of continuity and restrict yourself to rational numbers, then the rational step function (with the step at $\sqrt{2}$ or any other irrational number) is continuous.
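To spell this out (a short verification): define $f:\mathbb{Q}\to\mathbb{Q}$ by $f(x)=0$ for $x<\sqrt{2}$ and $f(x)=1$ for $x>\sqrt{2}$ (every rational falls into one of the two cases). At any rational $a$, given $\epsilon>0$, take $\delta=|a-\sqrt{2}|>0$; every rational $x$ with $|x-a|<\delta$ lies on the same side of $\sqrt{2}$ as $a$, so $|f(x)-f(a)|=0<\epsilon$. Thus $f$ satisfies the epsilon-delta definition of continuity at every point of $\mathbb{Q}$.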

This clashes with our intuitive notion that a continuous function can be drawn in one stroke (if it is not too oscillatory), and with the intuition that the rational numbers are everywhere (dense in $\mathbb{R}$).

My feeling is that there should be an intrinsic way to define continuity for functions from $\mathbb{Q}$ to $\mathbb{Q}$ without resorting to $\mathbb{R}$. And that definition might be 'the right one' for $\mathbb{R}$ (probably its extension to $\mathbb{R}$ would be logically equivalent to the current one).

lalala
  • 333
-4

Limits in general actually do have some "bad" properties due to the existence of incomputable functions:

Basically, if you cannot compute when a certain quality of approximation is reached, then even if a sequence converges, you cannot use that convergence to find an $n$ where that quality of approximation is reached; computationally, such a mathematically convergent sequence is about as useful as a nonconvergent one.

Following this line of thought, such a pathological sequence is elaborated below, but first, some uncomputable functions from which this sequence will be derived need to be defined:

Let $T_1,\dots$ be a computable ordering of the Turing machines (in the sense that you can, for a given $n$, compute the "code" of $T_n$).

Let furthermore $H$ denote the set of Turing machines that eventually halt.

Define $f_1(n):=\max \limits_{T\in\{T_1,...,T_n\}\cap H} runtime(T)$

(with the convention that the max. of no numbers is $0$)

Now this well-known function (I believe it is essentially a variant of the busy beaver function) has the very peculiar property that no computable function is in $\Omega(f_1)$, because:

Assume $f\in \Omega(f_1)$ is computable; then one can choose $n_0\in\mathbb{N}$ and $A\in\mathbb{N}$ such that for $n>n_0$ also $Af(n)>f_1(n)$.

Then consider the following Algorithm:

given any $n\in\mathbb{N}$, run $T_n$ for $Af(\max(n,n_0))$ time (time:=steps)

But $Af(\max(n,n_0))>f_1(\max(n,n_0))=\max \limits_{T\in\{T_1,...,T_{\max(n,n_0)}\}\cap H} runtime(T)\geq runtime(T_n)$ if $T_n\in H$, so if the machine $T_n$ halts at all, it will have halted by then.

So this algorithm solves the halting problem for all Turing machines, which is not possible, so the assumption that such a computable $f$ exists was false.

Intuitively this function is not computable because it grows too fast.

Now this function can be used to find a function that is not computable because, while it grows to infinity, it does so, intuitively speaking, too slowly:

Define $g_1(N):=\max \{n\in\mathbb{N}|\forall m\leq n:f_1(m)< N\}$

This is finite because $f_1$ is obviously unbounded.

Now there is no computable unbounded function $g$ with $g\in O(g_1)$, because:

Assume one has a computable unbounded function $g$ with $g\in O(g_1)$.

Then there are $n_0\in\mathbb{N},A\in\mathbb{Q}$ so that for all $n>n_0$ also $g_1(n)>Ag(n)$

But then consider $f(n):=\min \{N\in\mathbb{N}|Ag(N)>n\}$

This is well defined for all $n$ because $g$ is unbounded.

Also it can obviously be computed if $g$ can be computed by successively trying larger $N$, and it satisfies $Ag(f(n))>n$, which for $n>n_0$ implies:

$g_1(f(n))>n$ => $f_1(n)<f(n)$ by the property of $g$ and the definition of $g_1$.

Therefore, one would obviously have $f\in \Omega(f_1)$ with $f$ computable if such a $g$ existed, which we showed can't happen.

so there can't be a computable unbounded $g$ with $g\in O(g_1)$.

Now these preliminaries can be used to create a pathological convergent sequence:

Consider $(a_n:=1/(g_1(n)+1))_{n\in\mathbb{N}}$.

Because:

  • $g_1$ obviously can't be bounded (because for each $n\in\mathbb{N}$ we have $g_1(f_1(n)+1)\geq n$).
  • $g_1$ is obviously nonnegative.
  • $g_1$ is obviously monotonically increasing.

this sequence obviously goes to $0$, even monotonically.

However, because there is no computable unbounded $g$ with $g\in O(g_1)$, there can't be any computable sequence $(b_n)_{n\in\mathbb{N}}\rightarrow 0$ with $b_n \in \Omega(a_n)$ and the $b_n$ in a representation in which they can actually be compared to rational numbers in the standard representation computationally.

This is because such a sequence could, by properly "rounding up" to rational numbers, be used to create a computable sequence $(b_n')_{n\in\mathbb{N}}\rightarrow 0$ in standard rational representation with $b_n' \in \Omega(a_n)$, which could in turn be used to construct a computable unbounded $g$ with $g\in O(g_1)$ via $g(n):=\lceil 1/b_n'\rceil$, which we know does not exist.

So to sum it up, the above $(a_n)_{n\in\mathbb{N}}$ is a sequence of rational numbers that monotonically converges to $0$, yet there is no $(b_n)_{n\in\mathbb{N}}\rightarrow 0$ with $b_n \in \Omega(a_n)$ and the $b_n$ in a representation in which they can actually be compared to rational numbers in standard representation computationally; so, computationally, you in some sense cannot "majorize" that sequence by any other sequence convergent to $0$.

In particular, if you had some numerical formulas computing some $a$, with $a_n$ as a bound on the error of the $n$th formula, then, intuitively speaking, you could not use that bound to construct an algorithm to approximate $a$ to arbitrary precision, because you cannot bound the $a_n$ by something computable that goes to zero, even though the $a_n$ actually go to zero. So knowing that the error of the formulas converges, bounded above by $a_n$, is no better than not knowing whether the error converges at all, if you want to use the formulas in an algorithm: even though the $a_n$ technically go to zero, that does not help you formulate an algorithm. And it's not just that you can't compute any $a_n$ (which, it turns out, you can't); you also cannot derive any other bounds allowing for arbitrary-quality approximation from the $a_n$ bound, because all such derived bounds would also be practically uncomputable (or computable only in an unusable form), by the property that there is no sequence $(b_n)\rightarrow 0$ computable in a sensible representation with $b_n \in \Omega(a_n)$.

At first it would seem that bounding the error of a numerical approximation, dependent on some $n$, by any sequence $(e_n)$ convergent to $0$ would suffice to construct an algorithm evaluating the sought-after quantity to any given precision. But that does not work the naive way if the sequence $(e_n)$ is not computable; and if $(e_n)$ is not only non-computable but converging so slowly that there are no $b_n$, computable in a sensible representation, with $b_n \in \Omega(e_n)$, then, intuitively speaking, the information that $(e_n)$ bounds the error becomes completely useless for creating arbitrarily accurate approximations.
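To make the contrast concrete, here is a minimal sketch (the function names and the example sequence are made up for illustration) of the naive strategy that does work when a computable error bound is available; it is exactly this loop that cannot be carried out for the $a_n$ above, because no computable bound that dominates them and still tends to $0$ exists.

```python
from fractions import Fraction

def approximate(x, err, eps):
    """Naive arbitrary-precision approximation.

    x:   a computable sequence n -> Fraction converging to some limit L
    err: a computable bound with |x(n) - L| <= err(n) and err(n) -> 0
    eps: the desired precision, a positive Fraction

    Returns x(n) for the first n with err(n) < eps, which is therefore
    within eps of the limit L.
    """
    n = 0
    while err(n) >= eps:  # terminates only because err is computable and tends to 0
        n += 1
    return x(n)

# Hypothetical example: x(n) = 1/(n+1) converges to 0, and err(n) = 1/(n+1)
# is a computable bound on the error |x(n) - 0|.
print(approximate(lambda n: Fraction(1, n + 1),
                  lambda n: Fraction(1, n + 1),
                  Fraction(1, 1000)))
```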

This sequence is thus, intuitively speaking, a sequence which, albeit convergent, converges so slowly that its convergence in some sense can't be grasped computationally. There is, for each $\epsilon>0$, an $N_\epsilon$ such that for all $n>N_\epsilon$ you have $|a_n-0|<\epsilon$, but the $N_\epsilon$ can't be computed by a single algorithm for all $\epsilon$. So these $N_\epsilon$, required for the sequence to be convergent, do exist, but you cannot find them because they grow too fast as $\epsilon\to 0$, faster than anything computable, e.g. faster than $e^{1/\epsilon}$, $e^{e^{1/\epsilon}}$, etc. This philosophically yields the question: why does it even make a difference that these $N_\epsilon$ exist if we cannot find them in general? Why does it make a difference that this sequence converges when its convergence can't be grasped?

KGM
  • 131
  • 5
    I can't really understand what you are trying to convey – thomas graceman Jun 21 '23 at 16:54
  • 1
    Look, usually convergent sequences are used to create arbitrary-precision approximations computationally, but this here is an example where the sequence is convergent, but you can't use it to approximate the limit to arbitrary precision computationally, because the necessary "computations" are impossible. – KGM Jun 21 '23 at 20:09
  • 1
    Somehow from a practical perspective, this case is however a little useless because the elements of the sequence $a_n$ (and any similar sequence) are not computable themselves, so just because of this you could not use it or any similar sequences for approximation anyway, but this can partially be salvaged by considering one of these troubling sequences as an estimate for the error of an actually computable sequence, and then noting that this estimate cannot be used in an algorithm, even though it is convergent to 0, because of the outlined computability issues. – KGM Jun 21 '23 at 20:14
  • The upshot is, if a sequence is convergent, but you cannot compute when it reaches a certain quality of approximation, it is, from a computational standpoint, almost like if the sequence was not convergent at all. – KGM Jun 21 '23 at 20:16
  • 1
    What????????????? – Accelerator Jun 22 '23 at 21:21
  • As I see it, this is a general problem with the continuum and doesn't have anything to do with the limit definition in particular. Things get weird when we consider incomputable functions and incomputable numbers, which are the vast majority (all but countably many) of all functions and numbers in R. – Era Jun 23 '23 at 00:04