In calculus we learnt to find the derivative as
$$ f'\left(x\right) = \lim_{h \rightarrow 0} \frac{f\left(x+h\right)-f\left(x\right)}{h} $$
and integration as
$$ \int_a^b f\left(x\right)\,dx=\lim_{n \rightarrow \infty} \sum_{r=1}^n f\left(t_r\right)\phi_r $$
where $\phi_r$ is the width of the $r$-th subinterval and $t_r$ is a point in it.
But these definitions in no way suggest that the two operations are inverses of each other, and yet they are.
Can we prove that these two processes are inverses of each other?
I.e. if we put $f'\left(x\right)$ in place of $f\left(x\right)$ in the second expression, will we get $f\left(x\right)$ back?
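To make the question concrete, here is a rough numerical sanity check (not a proof; the test function, step size, and helper names below are just illustrative choices): differentiate $f$ with the difference quotient, then integrate the result with a Riemann sum, and compare with $f(b)-f(a)$.
```python
# Numerical illustration only (not a proof): integrate a finite-difference
# approximation of f' over [a, b] and compare with f(b) - f(a).
import math

def f(x):
    return math.sin(x) + x**2          # any smooth test function

def fprime(x, h=1e-6):
    # forward difference quotient approximating the limit definition
    return (f(x + h) - f(x)) / h

def riemann_integral(g, a, b, n=100_000):
    # right-endpoint Riemann sum: one particular choice of the points t_r,
    # with every subinterval having the same width (b - a) / n
    width = (b - a) / n
    return sum(g(a + r * width) for r in range(1, n + 1)) * width

a, b = 0.0, 2.0
print(riemann_integral(fprime, a, b))  # approximately f(b) - f(a)
print(f(b) - f(a))
```
Up to the constant $f(a)$, the two printed values agree, which is exactly what the Fundamental Theorem of Calculus asserts.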
-
You can show that the derivative of the integral is the function itself. – Sonal_sqrt May 11 '18 at 04:40
-
but that isn't a proof – Abhishek Choudhary May 11 '18 at 04:43
-
Yes it is. Integration and differentiation are linear transformations on the vector space of functions. To show that they are inverses of one another, it suffices to show that successive application of these to a function leaves the function unchanged. – Sonal_sqrt May 11 '18 at 04:45
-
You might want to read about the Fundamental Theorem of Calculus. – angryavian May 11 '18 at 04:45
-
This is the Fundamental Theorem of Calculus. Any rigorous calculus book should have a proof. This question should probably be closed since providing full proofs of standard theorems isn't generally how this site works; in the meantime you should take a look at the recommended textbooks here. If you can't get books, look for proofs of the FTC online. Not sure why this question is getting downvoted, though. – Jack M May 11 '18 at 05:28
1 Answer
Indeed they are.
For a sequence of samples on a grid with spacing $h$, namely $u =(u_0,u_1,\dots)$, its derivative is given by the application of the operator $D$: $$D = \frac{1}{h}\left[\begin{matrix} 1 & & & \\ -1 & 1 & &\\ & -1 & 1 & \\ & & \ddots& \ddots \end{matrix}\right]$$ This implicitly sets $u_0=0$ (as usual, a derivative operator must come with a boundary condition), so it is applied to $\tilde{u}=(u_1,u_2,\dots)$.
Conversely, the integral operator is defined as: $$I = h\left[\begin{matrix} 1 & & & \\ 1 & 1 & &\\ 1 & 1 & 1 & \\ \vdots & \ddots & \ddots& \ddots \end{matrix}\right]$$ You can see that each is the inverse of the other, i.e. $\mathbb{I} = ID=DI$, where $\mathbb{I}$ is the identity operator.
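A minimal sketch of this check, assuming truncated $n \times n$ versions of the infinite matrices above (the variable names and the choice $n=5$, $h=0.1$ are mine):
```python
# Build the backward-difference operator D and the cumulative-sum operator I
# on a grid of spacing h, then verify that their products are the identity.
import numpy as np

n, h = 5, 0.1
D = (np.eye(n) - np.eye(n, k=-1)) / h   # 1 on the diagonal, -1 on the subdiagonal, scaled by 1/h
I = h * np.tril(np.ones((n, n)))        # lower-triangular matrix of ones, scaled by h

print(np.allclose(D @ I, np.eye(n)))    # True
print(np.allclose(I @ D, np.eye(n)))    # True
```
The $h$ and $1/h$ factors cancel, and the difference of consecutive partial sums recovers each entry, which is why both products collapse to the identity.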

-
Even I can't understand this, let alone the OP. Are these the matrices of $D$ and $I$ on the space of analytic functions or something? Proving that they have this form, and justifying that analytic functions are a concept worth studying, is probably equivalent to proving the FToC five times in terms of difficulty. – Jack M May 11 '18 at 06:42
-
This is nothing strange. The operator $D$ is the definition of the derivative when applied to $u$ (the OP's first equation) with $u_0=0$, and the operator $I$ is the definition of the integral of $u$. One can easily check it by performing these multiplications and seeing what results. I don't understand you when you say $D$ and $I$ belong to the space of analytic functions; they are operators, aren't they? – HBR May 11 '18 at 07:01
-
Your operators aren't even defined on a space of functions, they're defined on a space of sequences. This can't be a definition of the derivative unless you explain how you're mapping sequences to functions. – Jack M May 11 '18 at 07:33
-
I see... so is it sufficient to say that $u_i=u(x_i)$, where $u(x)$ is smooth enough and the $x_i$ are the support of the discretised function, with consecutive $x_i$ separated by $h$? – HBR May 11 '18 at 07:37