
Below, I have approached the proof of the Fundamental Theorem of Calculus in a way that makes sense to me. I would like to know whether this approach is correct:


Let the integral function $y(l)$, defined as $\int_{a(x)}^{l}f(\mathrm{t})\mathrm{dt}$, be continuous on $[a(x),l]$ and differentiable on $(a(x),l)$; here $l$ is a function of $x$. Also, let the functions $a(x)$ and $b(x)$ be differentiable for all $x$ in the intersection of their domains.

Since $y(l)$ depends on $x$ both through the upper limit $l$ and through the lower limit $a(x)$, differentiating with respect to $x$ (this is Leibniz's rule for differentiation under variable limits) gives $$\frac{\mathrm{d}}{\mathrm{dx}}y(l)=l'.f(l)-a'(x).f(a(x))$$

This is the first part of the Fundamental Theorem of Calculus.
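
As a quick sanity check (a concrete example of my own, not part of the theorem's statement), take $f(\mathrm{t})=\mathrm{t}$, $a(x)=x$ and $l=x^{2}$. Then $$y(l)=\int_{x}^{x^{2}}\mathrm{t}\,\mathrm{dt}=\frac{x^{4}-x^{2}}{2},\qquad \frac{\mathrm{d}}{\mathrm{dx}}y(l)=2x^{3}-x,$$ which agrees with $l'.f(l)-a'(x).f(a(x))=(2x).(x^{2})-(1).(x)=2x^{3}-x$.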

Now, from our knowledge of the antiderivative,$$\int \frac{\mathrm{d}}{\mathrm{dx}}y(l)\mathrm{dx}=y(l)+C$$

Say $g(l)=y(l)+C=\int_{a(x)}^{l}f(\mathrm{t})\mathrm{dt}+C$, where the argument $l$ may in particular take the values $a(x)$ and $b(x)$.

We have,

$g(a(x))=y(a(x))+C=\int_{a(x)}^{a(x)}f(\mathrm{t})\mathrm{dt}+C=0+C=C$

$g(b(x))=y(b(x))+C=\int_{a(x)}^{b(x)}f(\mathrm{t})\mathrm{dt}+C$

On subtracting $g(a(x))$ from $g(b(x))$ we get,

$$g(b(x))-g(a(x))=\int_{a(x)}^{b(x)}f(\mathrm{t})\mathrm{dt}=y(b(x))$$

This is the second part of the Fundamental Theorem of Calculus.
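
Continuing the same concrete example as above ($f(\mathrm{t})=\mathrm{t}$, $a(x)=x$, $b(x)=x^{2}$): whatever the value of $C$, it cancels in the difference, and $$g(b(x))-g(a(x))=\left(\frac{x^{4}-x^{2}}{2}+C\right)-(0+C)=\frac{x^{4}-x^{2}}{2}=y(b(x)).$$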

Taking the derivative of the equation in the second part with respect to $x$ we get,

$g'(b(x)).b'(x)-g'(a(x)).a'(x)=\frac{\mathrm{d}}{\mathrm{dx}}y(b(x))=b'(x).f(b(x))-a'(x).f(a(x))$

Since $a(x)$ and $b(x)$ are arbitrary differentiable functions, the two sides of this identity can be compared term by term (the special case written out below makes this explicit), which gives

$f(a(x))=g'(a(x))$ and $f(b(x))=g'(b(x))$

In general, $f(x)=g'(x)$
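
For concreteness, here is one such special case written out (the particular choice of limits is my own illustration): take $a(x)=c$ constant and $b(x)=x$, so that $a'(x)=0$ and $b'(x)=1$, and the identity above reduces to $$g'(x).1-g'(c).0=f(x).1-f(c).0,\qquad\text{i.e.}\quad g'(x)=f(x).$$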

Since $g' = y'$ as well, $y$ is also an antiderivative of $f$. This concludes the theorem: $g$ is an antiderivative of $f$.

Edit: There are cases where $f$ has no elementary antiderivative. For example, $e^{x^{2}}$ has no elementary antiderivative. In this case, the second part of the Fundamental Theorem of Calculus is of no help in evaluating $y(b(x))$ in closed form.
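
For instance (my own illustration of this point): the first part still applies and gives $$\frac{\mathrm{d}}{\mathrm{dx}}\int_{0}^{x}e^{\mathrm{t}^{2}}\,\mathrm{dt}=e^{x^{2}},$$ but the integral itself can, if I recall correctly, only be written through the non-elementary function $\operatorname{erfi}$, as $\int_{0}^{x}e^{\mathrm{t}^{2}}\,\mathrm{dt}=\frac{\sqrt{\pi}}{2}\operatorname{erfi}(x)$, so one has to resort to such special functions or to numerical methods to evaluate $y(b(x))$.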

  • Intuitively, I'd say "no", because it makes things too complicated. That fundamental theorem may be fundamental, but the concept is simple: integrals are just limits of sums (of differences), analogous to a telescoping series. Having variable bounds of integration is a second step, and should be treated as a second step, imho. – Jun 19 '17 at 09:33
  • Let $C^0([a,b])$ be the set of continuous functions on $[a,b]$. What you really want to say is there is a linear operator $L : C^0([a,b]) \to C^1([a,b])$ such that $(Lf)(a) = 0$ and $\frac{d}{dx}Lf = f$ and $\sup_{x \in [a,b]} |Lf| \le (b-a)\sup_{x \in [a,b]} |f(x)|$. Then the real job is to show $Lf = \int_a^x f(t)dt$ where the last is the Riemann integral, defined by the limit of Riemann sums. – reuns Jun 19 '17 at 09:43
  • @ProfessorVector, my approach was indeed not intuitive. But I see some sense in my abstractness, which is why I put it forward, hoping it would be acceptable. – R004 Jun 19 '17 at 09:54

0 Answers