19

One of the things I found curious in many texts is how, in certain cases, they interchange the $\sum$ operator with $\int$. What are the "terms" for such a swap? I understand that, in the early days, integration was seen as an approximation of the area under a curve, using the very definitions of multiplication and area with very small increments, where the number of samples goes to infinity.

Beyond the original question: is this also the reason why we keep the right-hand $dx$ (or any other infinitesimal variable), just to remind us of the origin, because it "multiplies against the function" and hence gives area? Or is there more to it?

Hints, answers, references to books... I'd appreciate anything you can give me.

  • 3
    Do you mean $\sum \int f_i \, dx = \int \sum f_i \, dx$ as in here or $\int f_i \, dx = \sum f_i \, \Delta x$ as in here? – J.D. Apr 25 '12 at 15:53
  • @J.D. Not the first one (sum rule, right?)... That's just addition in a loop, which is justified. It's the second one; I would like to understand that more. – Fractal Resurgence Apr 25 '12 at 16:02
  • Have a look at http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula which essentially says $\sum = \int + \text{error term}$, where the error term is hopefully controllable for nice functions. A derivation is sketched here http://math.stackexchange.com/questions/123633/pseudo-proofs-that-are-intuitively-reasonable/123895#123895 (a small numerical check of this relation follows these comments). – Matthew Towers Apr 25 '12 at 17:18
  • 1
    Summation of positive terms is a special case of the integral with respect to the counting measure. – sdcvvc Jun 23 '12 at 13:11
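
To make the Euler–Maclaurin relation mentioned in the comments concrete, here is a minimal numerical sketch; the choice $f(x)=1/x$ on $[10,1000]$ and the use of only the first two correction terms are arbitrary, purely for illustration.

```python
# A small numerical check of the Euler-Maclaurin relation "sum = integral + error term":
#     sum_{k=a}^{b} f(k)  ~  integral_a^b f(x) dx + (f(a)+f(b))/2 + (f'(b)-f'(a))/12 + ...
# The choice f(x) = 1/x on [10, 1000] is arbitrary, purely for illustration.
import math

def f(x):
    return 1.0 / x

def f_prime(x):
    return -1.0 / x**2

a, b = 10, 1000
direct_sum = sum(f(k) for k in range(a, b + 1))
integral = math.log(b / a)                       # closed form of the integral of dx/x over [a, b]
estimate = integral + (f(a) + f(b)) / 2 + (f_prime(b) - f_prime(a)) / 12

print(f"sum      = {direct_sum:.9f}")            # about 4.656502607
print(f"estimate = {estimate:.9f}")              # agrees with the sum to roughly 1e-6
```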

3 Answers

14

In elementary analysis, a Riemann/Darboux integral is defined (among other equivalent definitions) as a suitable limit of a (finite) sum. Whence the folklore according to which "an integral is essentially a series". This is rather false, but you know, in elementary analysis/calculus you can almost say whatever you wish.

The $\mathrm{d}x$ is clearly a deformation of the $\Delta x$ in Riemann sums. Nowadays, it denotes the measure with respect to which the integral is defined. If the integral is just a Riemann integral, some authors suggest writing $\int_a^bf$ instead of $\int_a^bf(x)\, \mathrm{d}x$. They are right, since the Riemann integral depends only on $a$, $b$, and the function $f$; the variable of integration is a dummy one.
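
As a small illustration of "$\mathrm{d}x$ as a deformation of $\Delta x$", here is a minimal numerical sketch (the function $x^2$ and the interval $[0,1]$ are arbitrary choices) showing left Riemann sums approaching the integral as the mesh $\Delta x$ shrinks.

```python
# A minimal sketch of "dx as the limit of Delta x": left Riemann sums of an
# arbitrarily chosen f (here f(x) = x^2 on [0, 1], exact integral 1/3)
# approach the integral as the partition is refined.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n                              # the Delta x that "becomes" dx
    return sum(f(a + k * dx) for k in range(n)) * dx

f = lambda x: x ** 2
for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(f, 0.0, 1.0, n))         # tends to 1/3 = 0.333...
```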

Finally, remember that $\int$ is a calligraphic deformation of an "S", while $\sum$ is the Greek "S" (sigma). Hence many pioneers tended to blur $\sum$ and $\int$ in their manuscripts. But, honestly, contemporary textbooks should not swap the two signs, since we live in 2012 and Cauchy died many years ago ;-)

Siminore
  • 35,136
1

IMHO, the integral that is closest in spirit to the sum is the Lebesgue integral. First, a sum is a Lebesgue integral with respect to an appropriate measure, i.e. $$ \sum\limits_{i=1}^n a_i = \int\limits_1^n a(x)\;\mu(\mathrm dx), $$ where $a(x)$ is any function with the only restriction $a(i) = a_i$ for $i=1,\dots,n$, and the measure $\mu$ is concentrated at the points $1,2,\dots,n$ with unit masses, $\mu(\{1\}) = \mu(\{2\}) = \dots = \mu(\{n\}) = 1$.
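
A toy numerical version of this identity, with the measure stored as explicit point masses (the helper name `integrate_discrete` is ad hoc, not from any library), might look like the following sketch.

```python
# Toy illustration of "a sum is an integral against point masses".
def integrate_discrete(f, measure):
    """Integrate f against a discrete measure given as a dict {point: mass}."""
    return sum(f(x) * mass for x, mass in measure.items())

a = [3.0, 1.5, 4.0, 1.0]                          # a_1, ..., a_n
mu = {i: 1.0 for i in range(1, len(a) + 1)}       # unit point mass at each i = 1, ..., n
f = lambda i: a[i - 1]                            # any f with f(i) = a_i

print(integrate_discrete(f, mu), sum(a))          # both print 9.5
```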

Since the sum is an object with many nice properties, it is always useful when the integral shows similar properties. E.g. if $a_i\geq 0$ and $$ \sum\limits_i a_i = 0, $$ then $a_i = 0$ for all $i$. For the Lebesgue integral you have almost the same: if the function $f$ satisfies $f(x)\geq 0$ and $$ \int\limits_X f(x)\,\mu(\mathrm dx) = 0, $$ then the set $\{x:f(x)\neq 0\}$ has $\mu$-measure zero.

SBF
  • 36,041
  • But probably the Riemann–Stieltjes integral is even closer to a sum, as Rudin points out in his celebrated book Principles of Mathematical Analysis. – Siminore Jul 03 '12 at 14:31
  • The key point is that interchanging a series and an integral is the same as interchanging two integrals, and Fubini's theorem applies. – Carl Mummert Jul 03 '12 at 15:02
0

I would like to show an example of how we can change $\int$ into $\sum$.

Suppose the real-valued function $f(t)$ is represented by its Maclaurin series on $0 \leq t \leq x$ (for instance, $f$ is given by a power series about $0$ whose radius of convergence exceeds $x$); as the comments below point out, being infinitely differentiable on $[0,x]$ is not enough by itself.

Firstly, we write the Maclaurin series of $f(t)$ at the point $t=0$:

$ f(t) =f(0)+\frac{f'(0)\,t}{1!}+\frac{f''(0)\,t^2}{2!}+\cdots=\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, t^n $

Integrating the series term by term (which is legitimate inside the interval on which the series represents $f$): $$\int_0^x f(t)\, dt=\int_0^x\left(\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} t^n\right)dt=\sum_{n=0}^{\infty} \left(\frac{f^{(n)}(0)}{n!}\int_0^x t^n\, dt\right)=\sum_{n=0}^{\infty} \left(\frac{f^{(n)}(0)}{n!}\cdot\frac{x^{n+1}}{n+1}\right),$$ so $$\int_0^x f(t)\, dt=\sum_{n=0}^{\infty} \frac{f^{(n)}(0)\,x^{n+1}}{(n+1)!}.\tag{1}$$
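
As a quick check of identity $(1)$, take the (arbitrary) analytic example $f(t)=e^t$, for which $f^{(n)}(0)=1$ for every $n$, so the right-hand side becomes $\sum_{n\ge 0} x^{n+1}/(n+1)! = e^x-1 = \int_0^x e^t\,dt$:

```python
# Numerical check of identity (1) for the (arbitrary) analytic example f(t) = exp(t):
# all derivatives at 0 equal 1, so the series is sum_{n>=0} x^(n+1)/(n+1)!,
# while the integral over [0, x] is exp(x) - 1.
import math

x = 1.3                                           # arbitrary upper limit
series = sum(x ** (n + 1) / math.factorial(n + 1) for n in range(40))
integral = math.exp(x) - 1.0                      # closed form of the integral

print(series, integral)                           # both are about 2.669296668
```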


Secondly, we evaluate the Riemann-sum limit. Using the Maclaurin series again, for $k=1,\dots,n$ $$f\!\left(\frac{kx}{n}\right)=\sum_{m=0}^{\infty} \frac{f^{(m)}(0)}{m!} \left(\frac{kx}{n}\right)^m,$$ and we use the power-sum formula $$\sum \limits_{k=1}^{n} k^m=\frac{n^{m+1}}{m+1}+a_mn^m+\cdots+a_1n=\frac{n^{m+1}}{m+1}+\sum \limits_{j=1}^m a_jn^j,$$ where the $a_j$ are constants. More information about such summations: http://en.wikipedia.org/wiki/Summation
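
A quick sanity check of this power-sum formula, here for the arbitrary choice $m=4$ (whose coefficients also appear in the computation below):

```python
# Sanity check of the power-sum formula quoted above, for m = 4:
#     sum_{k=1}^{n} k^4 = n^5/5 + n^4/2 + n^3/3 - n/30
n = 100
lhs = sum(k ** 4 for k in range(1, n + 1))
rhs = n**5 / 5 + n**4 / 2 + n**3 / 3 - n / 30
print(lhs, rhs)                                   # both 2050333330 (rhs up to float rounding)
```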

$$\begin{aligned}
\lim_{n\to\infty} \frac{x}{n}\sum \limits_{k=1}^n f\!\left(\frac{kx}{n}\right)
&=\lim_{n\to\infty} \frac{x}{n}\sum \limits_{k=1}^n \sum_{m=0}^{\infty} \frac{f^{(m)}(0)}{m!} \left(\frac{kx}{n}\right)^m
=\lim_{n\to\infty} \frac{x}{n}\sum_{m=0}^{\infty} \frac{x^m}{n^m} \frac{f^{(m)}(0)}{m!} \sum \limits_{k=1}^n k^m\\
&=\lim_{n\to\infty} \frac{x}{n}\left[f(0)n+\frac{f'(0)x}{n\,1!}\left(\frac{n^2}{2}+\frac{n}{2}\right)+ \frac{f''(0)x^2}{n^2\,2!}\left(\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}\right)+\frac{f'''(0)x^3}{n^3\,3!}\left(\frac{n^4}{4}+\frac{n^3}{2}+\frac{n^2}{4}\right)+\frac{f^{(4)}(0)x^4}{n^4\,4!}\left(\frac{n^5}{5}+\frac{n^4}{2}+\frac{n^3}{3}-\frac{n}{30}\right)+\cdots \right]\\
&=\lim_{n\to\infty} \left[f(0)x+\frac{f'(0)x^2}{n^2\,1!}\left(\frac{n^2}{2}+\frac{n}{2}\right)+ \frac{f''(0)x^3}{n^3\,2!}\left(\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}\right)+\frac{f'''(0)x^4}{n^4\,3!}\left(\frac{n^4}{4}+\frac{n^3}{2}+\frac{n^2}{4}\right)+\frac{f^{(4)}(0)x^5}{n^5\,4!}\left(\frac{n^5}{5}+\frac{n^4}{2}+\frac{n^3}{3}-\frac{n}{30}\right)+\cdots \right]\\
&=f(0)x+\frac{f'(0)x^2}{2!}+ \frac{f''(0)x^3}{3!}+\frac{f'''(0)x^4}{4!}+\frac{f^{(4)}(0)x^5}{5!}+\cdots
\end{aligned}$$

$$\lim_{n\to\infty} \frac{x}{n}\sum \limits_{k=1}^n f\!\left(\frac{kx}{n}\right)=\sum_{m=0}^{\infty} \frac{f^{(m)}(0)\,x^{m+1}}{(m+1)!}\tag{2}$$

The right-hand sides of equations $(1)$ and $(2)$ are equal, so the two left-hand sides agree and the proof is complete:

$$\int_0^x f(t)\, dt=\lim_{n\to\infty} \frac{x}{n}\sum \limits_{k=1}^n f\!\left(\frac{kx}{n}\right)$$
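
As a final numerical sanity check, with the arbitrary analytic example $f(t)=\cos t$ (so $\int_0^x f(t)\,dt=\sin x$), the Riemann sums on the right indeed approach the integral:

```python
# Numerical check of the final formula for the (arbitrary) analytic example
# f(t) = cos(t), whose integral over [0, x] is sin(x).
import math

f = math.cos
x = 1.3
for n in (10, 100, 1000, 10000):
    riemann = (x / n) * sum(f(k * x / n) for k in range(1, n + 1))
    print(n, riemann)                             # approaches sin(1.3), about 0.9635582
```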

Mathlover
  • 10,058
  • I have a doubt. For a continuous function $f$, is this not simply the definition of the integral by means of (uniform) Riemann sums? You are simply using partitions whose nodes are $0+\frac{x-0}{n}k$, as $k=0,\ldots,n$. – Siminore Jul 03 '12 at 08:19
  • More generally: $$\int_a^x f(t)\, dt=\lim_{n\to\infty} \frac{x-a}{n}\sum \limits_{k=1}^n f\!\left(a+\frac{k(x-a)}{n}\right)$$ http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus – Mathlover Jul 03 '12 at 08:26
  • So, your answer is a definition with useless assumptions? ;-) – Siminore Jul 03 '12 at 08:29
  • It is an example of how to change an integral into a sum. Why do you think it is useless? Please tell me which point is wrong in my answer. – Mathlover Jul 03 '12 at 08:32
  • No, there is (probably) nothing wrong. But your answer sounds like "Any integral is (the limit of) a sum". The only useless thing is the high regularity of $f$: the formula is actually true whenever $f$ is Riemann-integrable. Ah, one more thing: by definition, $f$ equals its Taylor series if and only if it is real analytic. Differentiability at a single point, as you write, is not enough. – Siminore Jul 03 '12 at 08:35
  • My answer is based on the Fundamental Theorem of Calculus. Thanks a lot for your comment. If you have any advice on how to improve my answer, it is all welcome. – Mathlover Jul 03 '12 at 08:42
  • @Siminore: I have updated my answer; I would like to hear your criticism if it needs further improvement. Thanks. – Mathlover Jul 03 '12 at 13:54
  • $f$ needs to be much more than differentiable: it needs to be smooth (infinitely differentiable) to be able to even talk about higher-order derivatives; and it needs the much stronger property of being analytic on $[0,x]$ to be able to equate $f$ with its Maclaurin series (in general the series need not converge anywhere except in 0, and even when it does it might not converge on all of $[0,x]$). – Generic Human Jul 03 '12 at 14:16
  • @GenericHuman: Thanks a lot for the comment. I have updated my answer. Please feel free to update it if you believe something must be changed or added. Thank you very much for your contribution to my answer. – Mathlover Jul 03 '12 at 14:29
  • No, you're still wrong. You are not grasping the difference between smooth and analytic functions. This is perhaps difficult, before you see the following counter example: http://en.wikipedia.org/wiki/Non-analytic_smooth_function – GraduateStudent Sep 04 '14 at 18:24