I'm trying to understand the difference between the 1st and 2nd parts of the Fundamental Theorem of Calculus.
Let's start from the definitions:
The first part says that if $f$ is continuous on $[a,b]$, then the function $g$ defined by $$g(x) = \int_a^x f(t)\, dt, \quad a \le x \le b$$
is continuous on $[a,b]$ and differentiable on $(a,b)$, and $$\frac{d}{dx}g(x) = f(x)$$
The second part says that:
If $f$ is continuous on $[a,b]$, then $$\int_a^b f(x) dx = F(b) - F(a)$$
where $F$ is any antiderivative of $f$, that is, a function $F$ such that $$\frac{d}{dx}F(x) = f(x)$$
So the part I don't understand is: why in the first equation do we have the integral of the function expressed as a SINGLE antiderivative, when in the second equation we have a difference of antiderivatives? What is the connection between $F(x)$ and $g(x)$?

- Well, $F$ and $g$ are both antiderivatives of $f$, and any two antiderivatives of $f$ (on some interval) can only differ by an additive constant, so... – Hans Lundmark Dec 10 '22 at 18:40
- @HansLundmark but why do we need two antiderivatives if the integral can be expressed as one? – Vanconts Dec 10 '22 at 18:54
- @HansLundmark indeed, then why do we need F(x) if we have g(x)? Why do we need F(b) - F(a) if we have g(x)? – Vanconts Dec 10 '22 at 19:02
- $g(a)$ is zero while $F(a)$ is not necessarily zero. Apart from that they are the same. – Kurt G. Dec 10 '22 at 19:02
- @KurtG. g(a) is 0 since the integral from a to a is zero, but what is F(a)? – Vanconts Dec 10 '22 at 19:11
- There are many $F$s. They all differ -as Hans Lundmark wrote- by an additive constant. A personal note: I find indefinite integrals and antiderivatives a totally redundant concept. – Kurt G. Dec 10 '22 at 19:13
- @KurtG. but again, if F is an antiderivative of f, isn't it an integral then? – Vanconts Dec 10 '22 at 19:14
- It is an indefinite integral. Add to it a constant $C$ and it won't change anything. Neither its derivative nor the difference $F(x)-F(a)$. – Kurt G. Dec 10 '22 at 19:16
- In response to Vanconts's question to @KurtG, I'll give an example. Let's think about $$ I = \int_2^x \cos(t)\, dt. $$ An antiderivative of $f(t) = \cos(t)$ is $F(t) = \sin(t)$, so we can compute $$ I = \sin(x) - \sin(2).$$ Now $g(x)$ is the very special antiderivative $g(x) = \sin(x)-\sin(2)$. Good luck coming up with $g(x)$ without first finding the simpler antiderivative $F(x)$. – Jamie Radcliffe Dec 10 '22 at 19:36
- @JamieRadcliffe do you know any sources that could give me a better understanding of this topic? – Vanconts Dec 10 '22 at 20:07
- Since these results are at the heart of calculus there are many, many sources attempting to make them clear. Different people benefit from different presentations. An elementary but rigorous and exceptionally well presented book is Spivak's Calculus. Others swear by "Calculus Made Easy" by Silvanus P. Thompson (written in 1910). That is available in book form with modern typesetting, but also free online at https://calculusmadeeasy.org/ – Jamie Radcliffe Dec 10 '22 at 22:09
2 Answers
I like to understand these theorems as kind of a 1-2 punch, where the first theorem sets things up, and the second theorem knocks them down (where "knocking things down" = "evaluating definite integrals".)
So the First Theorem defines a function $g(x)$ more-or-less explicitly: What's, say, $g(7)$? Well (assuming $7$ is between $a$ and $b$), it is $\int_a^7 f(t)\,dt$. Okay, how do you find that? Well, you've got to construct a bunch of Riemann sums, and then prove that they converge to a limit as the mesh gets smaller, and then that limit is the value of $g$ at $7$.
$g$ is explicitly defined, but it's a real pain in the ass to evaluate $g$ at even one point - a Riemann sum and a limit each time. But the First Theorem does give us some information about how $g$ behaves, and that's going to help us in proving the Second Theorem. Also notice that one of the things that's true about $g$, which appears to be too obvious to mention, is that $g(a) = 0$.
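To make that pain concrete, here is a minimal Python sketch of evaluating $g$ at a single point the hard way; $f = \cos$ and $a = 2$ are my own illustrative choices (echoing a comment above), not part of the theorem:

```python
# Approximate g(7), the integral of cos(t) from 2 to 7, by a left Riemann sum.
# Even a million subintervals only buys an approximation of one value of g.
import math

def g_riemann(x, a=2.0, n=1_000_000):
    """Left Riemann sum for the integral of cos from a to x."""
    dt = (x - a) / n
    return sum(math.cos(a + i * dt) for i in range(n)) * dt

print(g_riemann(7))               # about -0.25231, after a million terms
print(math.sin(7) - math.sin(2))  # -0.25231..., the limit the sums crawl toward
```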
In the Second Theorem, we have $F(x)$. How is $F$ defined in terms of $f$? It's not, at least not like $g$ was. It can be any wild-ass function, except it does have to pass a test: $F$'s derivative has to be equal to $f$ at each point in $[a,b]$. The Second Theorem tells us that if we have such an $F$, then (and here's where the sun breaks through the clouds and a chorus of angels starts singing), we can evaluate integrals of $f$ by just evaluating $F$ at two points and subtracting. And at this point we're absolute whizzes at finding the derivatives of functions.
This is amazing - no Riemann sums, no limits, just find an $F$ that passes the test, and you can evaluate definite integrals with two function evaluations and a subtraction. This is what is going to let us go forward and start actually evaluating integrals. If we didn't have the Second Theorem, our calculus class might end right here, with your prof saying "Well, that's how an integral is defined, but there aren't many we can evaluate, and it's a pain to try to evaluate new ones, as we have to look for neat, tricky patterns in the Riemann sums each time." Instead we just throw out a guess, check that it has the right derivative, and we are done.
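As a sketch of this guess-and-check workflow (sympy is my choice of tool here, not something the answer prescribes):

```python
# Guess an antiderivative, check that it passes the test F' = f, then
# evaluate the definite integral with two evaluations and a subtraction.
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)
F = sp.sin(x)                                # the guess

assert sp.simplify(sp.diff(F, x) - f) == 0   # F passes the test

a, b = 2, 7
print(F.subs(x, b) - F.subs(x, a))           # -sin(2) + sin(7), no Riemann sums
```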
(If the Second Theorem is the meat of the matter, you might ask why we even bothered with the First Theorem. Well, it is used in proving the Second Theorem. As to why it's pulled out separately, and isn't just a step in the Second Theorem, I'm not sure, but it's pretty well established as a separate theorem by now, so it might just be historical tradition.)
As to your question about the relation between the $g$ of the First Theorem and the $F$ of the Second Theorem: Note that $F$ isn't uniquely defined; we know that we can add or subtract a constant to any existing $F$ that passes the 2nd thm's test, and get another function that also passes the same test. So the 2nd thm actually gives us a whole family of $F(x)$ functions that will work, and $g(x)$ is one of them. In fact, it is the unique one that has the value $0$ at $a$.
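A small numeric check of that whole family, with $f = \cos$ and $a = 2$ again my own illustrative choices:

```python
# Every F(x) + C passes the 2nd thm's test, and all give the same difference;
# g(x) = F(x) - F(a) is the unique member of the family with g(a) = 0.
import math

a, b = 2.0, 7.0
F = math.sin                          # one antiderivative of cos

for C in (0.0, 1.0, -3.5):
    print((F(b) + C) - (F(a) + C))    # same value every time: C cancels

g = lambda x: F(x) - F(a)             # the special member of the family
print(g(a))                           # 0.0
print(g(b))                           # the definite integral from a to b
```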
So if you look back at the equation in the 2nd thm, and imagine that you also know that for your $F$, $F(a) = 0$, notice that the equation
$$\int_a^b f(x) dx = F(b) - F(a)$$
becomes
$$\int_a^b f(x) dx = F(b) - 0$$
which is the same as the equation from the 1st thm
$$g(x) = \int_a^x f(t) dt$$
with just a bit of renaming and re-arranging.
If you now understand this, and in particular understand the difference between
- a definite integral, defined via limits of Riemann sums, and written $ \int_a^b f(t)\,dt $, and
- an indefinite integral, defined by the "has the correct derivative" test, and written $ \int f(t) dt$,
then you might enjoy my favorite statement of the (two) Fundamental Theorems of Calculus, which is
$$ \int_a^b f(t) dt = \left. \int f(t)dt \, \right|_a^b$$
It looks like almost nothing, just shifting two variables over from a long 'S' onto a vertical line, but the notation hides, and encapsulates, all the details you've just worked through, and is incredibly powerful, as I hope I've convinced you.

- Hi, thank you for your answer, could you please also check the previous answer's comments section, maybe you could answer? – Vanconts Dec 13 '22 at 17:40
- Great answer. The fact that we use the word "integral" and the $\int$ notation to mean two things whose definitions are completely different, but are connected only through the FTC, hides the significance of this answer. The FTC connects two very different definitions, and allows us to use the term "integral" for both - but that very act obscures the FTC itself. – SRobertJames Jun 11 '23 at 20:39
- @SRobertJames - Bingo! And glad you liked it. I read somewhere that when a mathematical fact is very important and very basic, it gets absorbed into the notation, and people don't even notice it anymore. It's fun when you can "unwrap the package" and see the connection again. – JonathanZ Jun 11 '23 at 21:37
- It's worth quoting (abridged) Courant What is Mathematics p.438 "In certain textbooks the point of FTC is obscured by poorly chosen nomenclature. Many [e.g. OpenStax] define the 'indefinite integral' simply as the inverse of the derivative, immediately combining differentiation with the word 'integral'. Only later is 'definite integral' as an area or limit of a sum introduced, without emphasizing that the word 'integral' now means something totally different. The main fact of the theory is smuggled in by the back door. We [therefore] prefer [the term] 'primitive function'." – SRobertJames Jun 11 '23 at 22:19
- Unfortunately, instead of simply dropping the term "indefinite integral," Courant goes on to redefine it (p.437) "Sometimes the integral $F(x)$ with a variable upper limit is called an 'indefinite integral'." giving two very different definitions and usages of the same term. – SRobertJames Jun 11 '23 at 22:25
Note that $F(b) - F(a)$ is not two antiderivatives, but the difference of the values of one antiderivative at two numbers $a$ and $b$.
Now, a key point about antiderivatives is that
the antiderivatives of a function $f$ on an interval $I$ differ by a constant. In other words, if $F'(x)=G'(x)=f(x)$ for every $x$ in $I$, then there exists a constant $C$ such that $G(x)=F(x)+C$ for every $x$ in $I$.
For example, $F(x)=\frac{x^3}{3}$ is an antiderivative of $f(x)=x^2$, as are $G_1(x)=\frac{x^3}{3}+1$, $G_2(x)=\frac{x^3}{3}+\pi$, etc., and the theorem above (a consequence of the Mean Value Theorem) tells you that every antiderivative $G$ of $f(x)$ will have the form $G(x)=\frac{x^3}{3}+C$ for some constant $C$.
A consequence of the theorem above is that, if $a$ and $b$ are numbers in $I$ and $F$ and $G$ are any antiderivatives of $f$, then
$$F(b)-F(a) = G(b) - G(a)$$
Indeed, we can write $G(x)=F(x)+C$ so $G(b)-G(a)=(F(b)+C)-(F(a)+C)=F(b)-F(a)$, the constant canceling out. That's why in the Fundamental Theorem of Calculus part 2, the choice of the antiderivative is irrelevant since every choice will lead to the same final result.
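Here is a quick numeric check of that cancellation; the endpoints $a=1$ and $b=4$ are my own choices for illustration:

```python
# F and G are two antiderivatives of x^2 differing by the constant pi;
# the difference F(b) - F(a) does not depend on which one we pick.
import math

F = lambda x: x**3 / 3
G = lambda x: x**3 / 3 + math.pi

a, b = 1.0, 4.0
print(F(b) - F(a))   # 21.0
print(G(b) - G(a))   # 21.0 as well: the constant cancels
```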
On the other hand, $g(x)=\int_a^x f(t)\; dt$ is a special antiderivative of $f$: it is the antiderivative of $f$ whose value at $a$ is $0$. So $g(a)=0$ by definition of $g$. Therefore, $g(b)-g(a)=g(b)$.
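For $f(t)=t^2$ and $a=1$ (the case worked out in the comments below), a short sympy sketch makes this special antiderivative explicit:

```python
# g(x) = integral of t^2 from 1 to x is the antiderivative vanishing at 1.
import sympy as sp

x, t = sp.symbols('x t')
a = 1
g = sp.integrate(t**2, (t, a, x))

print(g)                # x**3/3 - 1/3
print(g.subs(x, a))     # 0: g vanishes at a, by construction
print(sp.diff(g, x))    # x**2: g really is an antiderivative of f
```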

- So if antiderivatives differ by some constant, and for example if I have $F(x)=\frac{x^3}{3}$ as an antiderivative of $f(x)=x^2$ and I take $a=0$, in this case my particular F(x) is the g(x) I'm looking for? If $a$ were $a=1$, the antiderivative I would look for would be $F(x)=\frac{x^3}{3} -\frac{1}{3}$? – Vanconts Dec 11 '22 at 21:11
- @Vanconts: I am not sure I understand your second comment. In FTC part 2, you are looking for the value of a definite integral $\int_a^b f(x)\; dx$ and you are using an antiderivative (any one of them) to find it. FTC part 1 solves the problem of the existence of antiderivatives of continuous functions: every continuous function $f$ has an antiderivative, hence many, and $g(x)=\int_a^x f(t)\; dt$ is one of them (and the other ones are $g(x)+C$) – Taladris Dec 11 '22 at 23:09
- I mean that if I'm looking for the antiderivative of f(x) which will have value g(a) = 0, then my antiderivative (with g(a) = 0) for $f(x)=x^2$ (if I'm looking for the area from 1 to x) is $g(x)=\frac{x^3}{3}-\frac{1}{3}$, since g(1) = 0, isn't it? I mean that this particular antiderivative would fit the 1st part of FTC – Vanconts Dec 12 '22 at 07:48
- @Vanconts - Yes, the function $g$ changes (by plus/minus a constant) if the value of $a$ changes. And that is the correct $g$ for that $f$ and the case where $a=1$. I should also note that finding the particular $g$ that equals $0$ at $a$ is usually not worried about, and is only of concern inside the guts of Theorem One. That's because we know that eventually we are going to subtract the values of our antiderivative at two points, so if they are both, say, bigger by 1/3, that won't make any difference to the final answer. – JonathanZ Dec 13 '22 at 17:57