852

As I have heard, people did not trust Euler when he first discovered the formula (the solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$ However, Euler was Euler, and he gave other proofs.

I believe many of you know some nice proofs of this; can you please share them with us?

Jam
  • 10,325

54 Answers

384

OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from THE BOOK" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9 (EDIT: ...which is actually the proof that I read in Aigner & Ziegler).

When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus $$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$ Note that $1/\tan^2 x = 1/\sin^2 x - 1$. Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$: $$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$ Denoting the sum on the right-hand side by $S_n$, we can write this as $$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$

Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with, $$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$ Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short, $$S_n = 4 S_{n-1} + 2.$$ Since $S_1=2$, the solution of this recurrence is $$S_n = \frac{2(4^n-1)}{3}.$$ (For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)

We now have $$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$ Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!

Hans Lundmark
  • 53,395
  • 31
    I might add that, as an alternative, one can evaluate the equivalent sum $\sum_{m=0}^{\infty} (2m+1)^{-2}=\pi^2/8$ by summing only over the odd-numbered gridpoints. Then the midpoint $\pi/4$ never enters the computation, and one gets an even simpler recurrence, of the form $T_n = 4 T_{n-1}$. – Hans Lundmark Oct 30 '10 at 21:20
  • 2
    So Euler calculated a limit?! – Downvoter Nov 12 '11 at 09:42
  • 8
    @Downvoter: Well, yes, at least from a modern perspective, since we define series using limits. I don't know if Euler thought about it that way. What's your point? – Hans Lundmark Nov 12 '11 at 10:13
  • Oh not much, I just found it kind of interesting that he would be doing something calculus-y. :) – Downvoter Nov 14 '11 at 01:05
  • 37
    @Downvoter: it's hard to know whether you're really serious, but if so...Euler probably did more calculus-y things than any other mathematician in history (including Newton and Leibniz). – Pete L. Clark Mar 04 '12 at 19:36
  • 31
    @Downvoter Are you confusing Euler with Euclid? – Akiva Weinberger Sep 30 '14 at 03:26
  • 24
    @AkivaWeinberger: Just saw this (sorry it's 3 years late), but I must have been, because I'm not sure what else I could've been thinking either... – Downvoter Feb 19 '17 at 22:20
257

We can use the function $f(x)=x^{2}$ with $-\pi \leq x\leq \pi $ and find its expansion into a trigonometric Fourier series

$$\dfrac{a_{0}}{2}+\sum_{n=1}^{\infty }(a_{n}\cos nx+b_{n}\sin nx),$$

which is periodic and converges to $f(x)$ in $[-\pi, \pi] $.

Observing that $f(x)$ is even, it is enough to determine the coefficients

$$a_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }f(x)\cos nx\;dx\qquad n=0,1,2,3,...,$$

because

$$b_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }f(x)\sin nx\;dx=0\qquad n=1,2,3,... .$$

For $n=0$ we have

$$a_{0}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }x^{2}dx=\dfrac{2}{\pi }\int_{0}^{\pi }x^{2}dx=\dfrac{2\pi ^{2}}{3}.$$

And for $n=1,2,3,...$ we get

$$a_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }x^{2}\cos nx\;dx$$

$$=\dfrac{2}{\pi }\int_{0}^{\pi }x^{2}\cos nx\;dx=\dfrac{2}{\pi }\times \dfrac{ 2\pi }{n^{2}}(-1)^{n}=(-1)^{n}\dfrac{4}{n^{2}},$$

because

$$\int x^2\cos nx\;dx=\dfrac{2x}{n^{2}}\cos nx+\left( \frac{x^{2}}{ n}-\dfrac{2}{n^{3}}\right) \sin nx.$$

Thus

$$f(x)=\dfrac{\pi ^{2}}{3}+\sum_{n=1}^{\infty }\left( (-1)^{n}\dfrac{4}{n^{2}} \cos nx\right) .$$

Since $f(\pi )=\pi ^{2}$, we obtain

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+\sum_{n=1}^{\infty }\left( (-1)^{n}\dfrac{4}{ n^{2}}\cos \left( n\pi \right) \right) $$

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+4\sum_{n=1}^{\infty }\left( (-1)^{n}(-1)^{n} \dfrac{1}{n^{2}}\right) $$

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+4\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}.$$

Therefore

$$\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}=\dfrac{\pi ^{2}}{4}-\dfrac{\pi ^{2}}{12}= \dfrac{\pi ^{2}}{6}$$
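(My addition, not part of the answer.) The truncated Fourier series can be checked numerically; the sketch below confirms that it reproduces $x^2$ and that the resulting series sums to $\pi^2/6$:

```python
import math

def fourier_x2(x, N=10000):
    """Partial Fourier series of f(x) = x^2 on [-pi, pi]."""
    return math.pi**2 / 3 + 4 * sum((-1)**n * math.cos(n * x) / n**2
                                    for n in range(1, N + 1))

# The tail is bounded by 4 * sum_{n>N} 1/n^2 ~ 4/N, so 1e-3 accuracy is safe.
assert abs(fourier_x2(1.0) - 1.0) < 1e-3
assert abs(fourier_x2(math.pi) - math.pi**2) < 1e-3

zeta2 = sum(1 / n**2 for n in range(1, 100000))
assert abs(zeta2 - math.pi**2 / 6) < 1e-3
```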


Second method (available on-line a few years ago) by Eric Rowland. From

$$\log (1-t)=-\sum_{n=1}^{\infty}\dfrac{t^n}{n}$$

and making the substitution $t=e^{ix}$ one gets the series expansion

$$w=\text{Log}(1-e^{ix})=-\sum_{n=1}^{\infty }\dfrac{e^{inx}}{n}=-\sum_{n=1}^{ \infty }\dfrac{1}{n}\cos nx-i\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx,$$

whose radius of convergence is $1$. Now if we take the imaginary part of both sides, the RHS becomes

$$\Im w=-\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx,$$

and the LHS

$$\Im w=\arg \left( 1-\cos x-i\sin x\right) =\arctan \dfrac{-\sin x}{ 1-\cos x}.$$

Since

$$\arctan \dfrac{-\sin x}{1-\cos x}=-\arctan \dfrac{2\sin \dfrac{x}{2}\cdot \cos \dfrac{x}{2}}{2\sin ^{2}\dfrac{x}{2}}$$

$$=-\arctan \cot \dfrac{x}{2}=-\arctan \tan \left( \dfrac{\pi }{2}-\dfrac{x}{2} \right) =\dfrac{x}{2}-\dfrac{\pi }{2},$$

the following expansion holds

$$\dfrac{\pi }{2}-\frac{x}{2}=\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx.\qquad (\ast )$$

Integrating the identity $(\ast )$, we obtain

$$\dfrac{\pi }{2}x-\dfrac{x^{2}}{4}+C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}\cos nx.\qquad (\ast \ast )$$

Setting $x=0$, we get the relation between $C$ and $\zeta (2)$

$$C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}=-\zeta (2).$$

And for $x=\pi $, since

$$\zeta (2)=2\sum_{n=1}^{\infty }\dfrac{(-1)^{n-1}}{n^{2}},$$

we deduce

$$\dfrac{\pi ^{2}}{4}+C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}\cos n\pi =\sum_{n=1}^{\infty }\dfrac{(-1)^{n-1}}{n^{2}}=\dfrac{1}{2}\zeta (2)=-\dfrac{1}{ 2}C.$$

Solving for $C$

$$C=-\dfrac{\pi ^{2}}{6},$$

we thus prove

$$\zeta (2)=\dfrac{\pi ^{2}}{6}.$$

Note: this 2nd method can generate all the zeta values $\zeta (2n)$ by repeatedly integrating $(\ast\ast )$. This is the reason why I appreciate it. Unfortunately, it does not work for $\zeta (2n+1)$.

Note also that $C=-\dfrac{\pi ^{2}}{6}$ can be obtained by integrating $(\ast\ast )$ and substituting $x=0$ and $x=\pi$, respectively.
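(A sketch of mine, not part of the method.) Both $(\ast)$ and $(\ast\ast)$ with $C=-\pi^2/6$ are easy to confirm numerically at an interior point:

```python
import math

def sine_series(x, N=100000):
    """Partial sum of sum_{n>=1} sin(nx)/n, which should equal (pi - x)/2 on (0, 2 pi)."""
    return sum(math.sin(n * x) / n for n in range(1, N + 1))

def cosine_series(x, N=100000):
    """Partial sum of -sum_{n>=1} cos(nx)/n^2, the right-hand side of (**)."""
    return -sum(math.cos(n * x) / n**2 for n in range(1, N + 1))

x = 1.0
assert abs(sine_series(x) - (math.pi - x) / 2) < 1e-3      # identity (*)
C = -math.pi**2 / 6
assert abs(math.pi * x / 2 - x**2 / 4 + C - cosine_series(x)) < 1e-3  # identity (**)
```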

  • 4
    Would using fractional calculus to integrate $0.5$ times allow you to obtain $\zeta(2n+1)$? – Alice Ryhl Feb 17 '15 at 15:45
  • @KristofferRyhl I actually don't know. – Américo Tavares Feb 17 '15 at 16:29
  • 4
    Definitely the best answer! Awesome job. I never really understood a proof of this until I read your post. – Neil Apr 06 '15 at 05:51
  • Thank you for Fourier series proof! It's a very useful method in general – Yuriy S Mar 20 '16 at 12:31
  • 2
    @KristofferRyhl Sorry to revive a year old comment, but... I tried to integrate $(\ast\ast)$ $0.5$ times and (if I did it correctly) got a denominator of $n^{2.5}$, so that doesn't work. Integrating $(\ast\ast)$ once gives us a denominator of $n^3$, but that also gives us $\sin(nx)$, which equals $0$ when $x=0$ - so that seems to be why this method doesn't work for $\zeta(2n+1)$ – zerosofthezeta Apr 27 '16 at 19:23
  • 1
    @zerosofthezeta I played a bit around with it myself just now, I can see why it only works for $\zeta(2n)$ and not $\zeta(2n+a)$ for any $0<a<2$: in order to do the trick you need to substitute $x=\text{something}$ such that $f=d^n\sin/dx^n$ applied as $f(nx) = 1$, for any integer $n$. However it is only when the argument to $\zeta$ is even that you can find such an $x$. – Alice Ryhl Apr 27 '16 at 20:01
  • 1
    It is far from clear how you can integrate the expression (*), which doesn't seem to converge uniformly, or to converge at all, for $|x|<1$. – Masacroso Sep 15 '18 at 15:02
  • @Masacroso The series on the right of (*) seems to converge to a periodic function with period $2\pi$ as suggested by this plot of the 10th partial sum. The discontinuities of the periodic function are located at $2k\pi$. – Américo Tavares Sep 16 '18 at 18:40
  • @Masacroso ... And at these points ($x=2k\pi$) the series value is $0$, which is the arithmetic mean of $-1,+1$ (the lateral limits of the periodic function at the discontinuities). – Américo Tavares Sep 16 '18 at 19:52
220

Here is another one, which is more or less what Euler did in one of his proofs.

The function $\sin x$, where $x\in\mathbb{R}$, is zero exactly at $x=n\pi$ for each integer $n$. If we factor it as an infinite product, we get

$$\sin x = \cdots\left(1+\frac{x}{3\pi}\right)\left(1+\frac{x}{2\pi}\right)\left(1+\frac{x}{\pi}\right)x\left(1-\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1-\frac{x}{3\pi}\right)\cdots =$$ $$= x\left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{2^2\pi^2}\right)\left(1-\frac{x^2}{3^2\pi^2}\right)\cdots\quad.$$

We can also represent $\sin x$ as a Taylor series at $x=0$:

$$\sin x = x - \frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots\quad.$$

Multiplying out the product, the coefficient of $x^3$ is $-\sum_{n\ge 1}\frac{1}{n^2\pi^2}$; comparing it with the coefficient $-\frac{1}{3!}$ from the Taylor series (the minus signs on both sides cancel) we see that

$$\frac{x^3}{3!}=x\left(\frac{x^2}{\pi^2} + \frac{x^2}{2^2\pi^2}+ \frac{x^2}{3^2\pi^2}+\cdots\right)=x^3\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}$$ or $$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$
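(Not part of the original answer.) The truncated product already matches $\sin x$ well, and the $x^3$-coefficient comparison can be tested numerically; a quick Python sketch:

```python
import math

def sin_product(x, N=100000):
    """Truncated Euler product x * prod_{n=1}^{N} (1 - x^2 / (n^2 pi^2))."""
    p = x
    for n in range(1, N + 1):
        p *= 1 - x**2 / (n**2 * math.pi**2)
    return p

assert abs(sin_product(1.0) - math.sin(1.0)) < 1e-4

# The x^3 coefficient comparison: sum of 1/(n^2 pi^2) should approach 1/3! = 1/6.
coeff = sum(1 / (n**2 * math.pi**2) for n in range(1, 100000))
assert abs(coeff - 1 / 6) < 1e-4
```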


  • 46
    This is a very cool peek into the way math was done in the 18th century. I love the total kamikaze approach of the initial assumption, which, as the Sandifer paper discusses on p. 6, is obviously not strictly justifiable. Sandifer gives $e^x\sin x$ as an alternative function with the same zeroes. –  Feb 11 '12 at 15:47
  • Alfredo Z has given a similar presentation of this below with some interesting differences. –  Feb 11 '12 at 16:14
  • @BenCrowell I love this argument too. But as you say one should be aware that is it not "Cauchy stringent", I mean Euler did not have any $\epsilon$-$\delta$ arguments to his proofs, however his "feeling" is often correct. – AD - Stop Putin - Feb 11 '12 at 17:27
  • 28
    @BenCrowell I think that Euler would argue that $e^x\sin x$ has an infinite-degree zero at $-\infty$, requiring $(1+\frac x\infty)^\infty$ to be appended to the infinite product… which is correct, when interpreted correctly. – Akiva Weinberger Jan 15 '16 at 17:11
  • @AkivaWeinberger Could you spell out this interpretation? :-) – Ant Jun 20 '16 at 21:50
  • 1
    @Ant Append $\lim_{N\to\infty}(1+\frac xN)^N=e^x$ to the infinite product – Akiva Weinberger Jun 20 '16 at 23:08
  • @AkivaWeinberger I was thinking of something more rigorous.. that does not add much to the intuition that $e^x$ has a "zero" at $-\infty$ :-) – Ant Jun 21 '16 at 12:43
  • 4
    @Ant I do not think Euler was known for his rigor. :) – Akiva Weinberger Jun 21 '16 at 13:09
  • @AkivaWeinberger Yeah absolutely! That is why I was intrigued by your comment and thought that you were talking about some profound and rigorous interpretation that could make that statement rigorous by modern standard :-) It was interesting nonetheless! :) – Ant Jun 21 '16 at 13:26
  • 1
    Sir, proof is very beautiful and very easy to understand. Thank you so... much for sharing. – Akash Patalwanshi Oct 27 '19 at 03:06
  • 1
    @Ant This can, like so many things Euler did, be rigorously expressed in the language of Nonstandard Analysis: pick a positive unlimited number $\omega\in {}^*\mathbb R$, i.e. $\omega>x\forall x\in\mathbb R$. Then one can show that $(1+\tfrac{x}{\omega})^{\omega}$ is infinitesimally close to $\exp(x)=\sum_{k\in\mathbb N} \tfrac{1}{k!}x^k$ for all $x\in\mathbb R$, and indeed it does have a zero of infinite order at $x=-\omega$. Which $\omega$ you pick is irrelevant, since the behaviour of the function on the reals is the same up to infinitesimal change. – Hyperplane Jul 20 '21 at 11:48
  • "Multiplying the product and identifying the coefficient of $x^3$ we see that" Can you please add the steps how is it done ? – An_Elephant Mar 26 '23 at 07:03
170

Define the following series for $ x > 0 $

$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+\cdots\quad.$$

Now substitute $ x = \sqrt{y}\ $ to arrive at

$$\frac{\sin \sqrt{y}\ }{\sqrt{y}\ } = 1 - \frac{y}{3!}+\frac{y^2}{5!}-\frac{y^3}{7!}+\cdots\quad.$$

If we find the roots of $\frac{\sin \sqrt{y}\ }{\sqrt{y}\ } = 0 $, we find that

$ y = n^2\pi^2\ $ for every nonzero integer $ n $.

With all of this in mind, recall that for a polynomial

$ P(x) = a_{n}x^n + a_{n-1}x^{n-1} +\cdots+a_{1}x + a_{0} $ with $ a_0 \neq 0 $ and roots $ r_{1}, r_{2}, \cdots , r_{n} $, we have

$$\frac{1}{r_{1}} + \frac{1}{r_{2}} + \cdots + \frac{1}{r_{n}} = -\frac{a_{1}}{a_{0}}$$

Treating the above series for $ \frac{\sin \sqrt{y}\ }{\sqrt{y}\ } $ as a polynomial, we see that

$$\frac{1}{1^2\pi^2} + \frac{1}{2^2\pi^2} + \frac{1}{3^2\pi^2} + \cdots = -\frac{-\frac{1}{3!}}{1}$$

then multiplying both sides by $ \pi^2 $ gives the desired series.

$$\frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6}$$
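(My addition.) The finite-degree identity invoked above is just a Vieta-type formula; here is a tiny exact sanity check of it on a concrete cubic of my own choosing:

```python
from fractions import Fraction as F

# P(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6,
# so a_1 = 11, a_0 = -6, and the roots are 1, 2, 3.
roots = [F(1), F(2), F(3)]
a1, a0 = F(11), F(-6)
recip_sum = sum(1 / r for r in roots)   # 1 + 1/2 + 1/3 = 11/6
assert recip_sum == -a1 / a0 == F(11, 6)
```

Whether the identity may be applied to the power series of $\sin\sqrt{y}/\sqrt{y}$ is exactly the point questioned in the comments below.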

  • 3
    Does the formula you use actually hold for all entire functions defined by power series? Are there conditions that need to be present for this to work? As an entire function is not determined by its roots (e.g. $f(z)$ versus $e^{g(z)}f(z)$), is it clear that making such a change wouldn't affect the answer? Or does this rely on Euler's formula for $\sin x$ as an infinite product? This is certainly an interesting idea, but I fear it could be a misleading coincidence. – Aaron Aug 14 '11 at 01:45
  • 12
    This is closely related to the method of Euler already described above by AD. –  Feb 11 '12 at 16:14
  • @BenCrowell Yes, but slightly different anyway, love this one too.. :) – AD - Stop Putin - Feb 11 '12 at 17:29
  • @Alfredo Z. Crazy that we think exactly alike (⊙o⊙) Must upvote! – Vim Feb 20 '15 at 03:01
  • However I am confused with this problem (take a look at my question here if you don't mind): How to rectify that the fundamental theorem of algebra also holds for an infinite polynomial? – Vim Feb 20 '15 at 03:27
159

This method apparently was used by Tom Apostol in $1983$. I will outline the main ideas of the proof; the details can be found here or in this presentation (page $27$).

Consider

$$\begin{align} \int_{0}^{1} \int_{0}^{1} \frac{1}{1 - xy} dy dx &= \int_{0}^{1} \int_{0}^{1} \sum_{n \geq 0} (xy)^n dy dx \\ &= \sum_{n \geq 0} \int_{0}^{1} \int_{0}^{1} x^n y^n dy dx \\ &= \sum_{n \geq 1} \frac{1}{n^2} \\ \end{align}$$

The interchange of the sum and the integral is justified because all terms are nonnegative (Tonelli's theorem). You can verify that the left-hand side is indeed $\frac{\pi^2}{6}$ by letting $x = u - v$ and $y = v + u.$

user91500
  • 5,606
IAmNoOne
  • 3,274
  • 13
    This proof is provided in the book, Topics in number theory, volume 1, William Judson Leveque, 1956. Read page 122: https://ia601900.us.archive.org/34/items/in.ernet.dli.2015.134692/2015.134692.Topics-In-Number-Theory-Volume-1.pdf – FDP Jul 14 '17 at 09:39
100

I have two favorite proofs. One is the last proof in Robin Chapman's collection; you really should take a look at it.

The other is a proof that generalizes to the evaluation of $\zeta(2n)$ for all $n$, although I'll do it "Euler-style" to shorten the presentation. The basic idea is that meromorphic functions have infinite partial fraction decompositions that generalize the partial fraction decompositions of rational functions.

The particular function we're interested in is $B(x) = \frac{x}{e^x - 1}$, the exponential generating function of the Bernoulli numbers $B_n$. $B$ is meromorphic with poles at $x = 2 \pi i n$, $n \in \mathbb{Z}\setminus\{0\}$, and at these poles it has residue $2\pi i n$. It follows that we can write, a la Euler,

$$\frac{x}{e^x - 1} = \sum_{n \in \mathbb{Z}\setminus\{0\}} \frac{2\pi i n}{x - 2 \pi i n} = \sum_{n \in \mathbb{Z}\setminus\{0\}} - \left( \frac{1}{1 - \frac{x}{2\pi i n}} \right).$$

Now we can expand each of the terms on the RHS as a geometric series, again a la Euler, to obtain

$$\frac{x}{e^x - 1} = - \sum_{n \in \mathbb{Z}\setminus\{0\}} \sum_{k \ge 0} \left( \frac{x}{2\pi i n} \right)^k = \sum_{n \ge 1} (-1)^{n+1} \frac{2 \zeta(2n)}{(2\pi )^{2n}} x^{2n}$$

because, after rearranging terms, the sum over odd powers cancels out and the sum over even powers doesn't. (This is one indication of why there is no known closed form for $\zeta(2n+1)$.) Equating terms on both sides, it follows that

$$\frac{1}{(2n)!} B_{2n} = (-1)^{n+1} \frac{2 \zeta(2n)}{(2\pi)^{2n}}$$

or

$$\zeta(2n) = (-1)^{n+1} \frac{B_{2n} (2\pi)^{2n}}{2(2n)!}$$

as desired. To compute $\zeta(2)$ it suffices to compute that $B_2 = \frac{1}{6}$, which then gives the usual answer.
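(Not part of the answer.) Taking the values $B_2 = \frac16$ and $B_4 = -\frac{1}{30}$ as given (hard-coded below, not derived), the final formula is easy to check numerically:

```python
import math
from fractions import Fraction as F

def zeta_even(n, B2n):
    """zeta(2n) = (-1)^(n+1) * B_{2n} * (2 pi)^(2n) / (2 * (2n)!)."""
    return (-1)**(n + 1) * float(B2n) * (2 * math.pi)**(2 * n) / (2 * math.factorial(2 * n))

assert abs(zeta_even(1, F(1, 6)) - math.pi**2 / 6) < 1e-12    # zeta(2)
assert abs(zeta_even(2, F(-1, 30)) - math.pi**4 / 90) < 1e-12  # zeta(4)
assert abs(zeta_even(1, F(1, 6)) - sum(1 / k**2 for k in range(1, 100000))) < 1e-4
```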

Qiaochu Yuan
  • 419,620
  • 5
    This is my favorite proof and the one I was going to post, although Qiaochu's explanation is better than mine would have been. :) Instead, I will just add that there's a nice discussion in Concrete Mathematics (2nd edition, pp 285-286) that relates this argument to proof #7 in Robin's list. – Mike Spivey Oct 30 '10 at 19:59
  • In your last equation, shouldn't it be $(2\pi)^{2n}$? See https://en.wikipedia.org/wiki/Riemann_zeta_function#Specific_values – zerosofthezeta Nov 29 '13 at 08:54
  • @evil: yes, thanks for the correction. Edited. – Qiaochu Yuan Nov 29 '13 at 22:30
  • Actually, your partial fraction decomposition of $\frac{x}{\mathrm e^x - 1}$ does not converge :/ – Célestin Aug 15 '17 at 15:52
  • @Phoenix: yes, that's what makes this proof "Euler-style." – Qiaochu Yuan Aug 16 '17 at 02:12
  • "Euler-style" is rather weird ^^'... Another way to prove the $\zeta(2n)$ formula is integrate $\pi\cot \pi z/z^{2n}$ over a rectangle which encloses the integers singularites ;) ($B_{2n}$ appears within residues...) – Célestin Aug 16 '17 at 02:40
  • @QiaochuYuan You say Euler-style: what needs to be done to make the argument more rigorous? I can see clearly that it is unrigorous, but I ask as a student to find out how it can be strengthened. – FShrike Feb 05 '22 at 21:45
  • May I ask how do you get the factor 2 in $\sum_{k \ge 0} (-1)^{n+1} \frac{2 \zeta(2n)}{(2\pi )^{2n}} x^{2n}$. The one multiplying $\zeta(2n)$ – Ivan Gonzalez Nov 15 '22 at 23:33
  • Replying to myself. Be careful because $\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$ and not over $\mathbb Z$. That is how you have the 2. I am still missing what happens when $n=0$ in the term $\frac{2\pi in}{x-2\pi in}$ – Ivan Gonzalez Nov 16 '22 at 04:30
87

Here is one more nice proof, I learned it from Grisha Mikhalkin:

Lemma: Let $Z$ be a complex curve in $\mathbb{C}^2$. Let $R(Z) \subset \mathbb{R}^2$ be the projection of $Z$ onto its real parts and $I(Z)$ the projection onto its imaginary parts. If these projections are both one to one, then the area of $R(Z)$ is equal to the area of $I(Z)$.

Proof: There is an obvious map from $R(Z)$ to $I(Z)$, given by lifting $(x_1, x_2) \in R(Z)$ to $(x_1+i y_1, x_2 + i y_2) \in Z$, and then projecting to $(y_1, y_2) \in I(Z)$. We must prove this map has Jacobian $1$. WLOG, translate $(x_1, y_1, x_2, y_2)$ to $(0,0,0,0)$ and let $Z$ obey $\partial z_2/\partial z_1 = a+bi$ near $(0,0)$. To first order, we have $x_2 = a x_1 - b y_1$ and $y_2 = a y_1 + b x_1$. So $y_1 = (a/b) x_1 - (1/b) x_2$ and $y_2 = (a^2 + b^2)/b x_1 - (a/b) x_2$. So the derivative of $(x_1, x_2) \mapsto (y_1, y_2)$ is $\left( \begin{smallmatrix} a/b & - 1/b \\ (a^2 + b^2)/b & -a/b \end{smallmatrix} \right)$ and the Jacobian is $1$. QED

Now, consider the curve $e^{-z_1} + e^{-z_2} = 1$, where $z_1$ and $z_2$ obey the following inequalities: $x_1 \geq 0$, $x_2 \geq 0$, $-\pi \leq y_1 \leq 0$ and $0 \leq y_2 \leq \pi$.

Given a point on $e^{-z_1} + e^{-z_2} = 1$, consider the triangle with vertices at $0$, $e^{-z_1}$ and $e^{-z_1} + e^{-z_2} = 1$. The inequalities on the $y$'s state that the triangle should lie above the real axis; the inequalities on the $x$'s state that the horizontal base should be the longest side.

Projecting onto the $x$ coordinates, we see that the triangle exists if and only if the triangle inequality $e^{-x_1} + e^{-x_2} \geq 1$ is obeyed. So $R(Z)$ is the region under the curve $x_2 = - \log(1-e^{-x_1})$. The area under this curve is $$\int_{0}^{\infty} - \log(1-e^{-x}) dx = \int_{0}^{\infty} \sum \frac{e^{-kx}}{k} dx = \sum \frac{1}{k^2}.$$

Now, project onto the $y$ coordinates. Set $(y_1, y_2) = (-\theta_1, \theta_2)$ for convenience, so the angles of the triangle are $(\theta_1, \theta_2, \pi - \theta_1 - \theta_2)$. The largest angle of a triangle is opposite the largest side, so we want $\theta_1$, $\theta_2 \leq \pi - \theta_1 - \theta_2$, plus the obvious inequalities $\theta_1$, $\theta_2 \geq 0$. So $I(Z)$ is the quadrilateral with vertices at $(0,0)$, $(0, \pi/2)$, $(\pi/3, \pi/3)$ and $(\pi/2, 0)$ and, by elementary geometry, this has area $\pi^2/6$.
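(My sketch, not part of the proof.) Both areas can be confirmed numerically: a midpoint rule for the area of $R(Z)$, and the shoelace formula for the quadrilateral $I(Z)$:

```python
import math

def area_R(M=20000, X=40.0):
    """Midpoint-rule estimate of the area under x2 = -log(1 - e^(-x1))."""
    h = X / M
    return sum(-math.log(1 - math.exp(-(i + 0.5) * h)) * h for i in range(M))

def shoelace(vertices):
    """Area of a polygon from its vertices, listed in order."""
    return abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2)
                   in zip(vertices, vertices[1:] + vertices[:1]))) / 2

area_I = shoelace([(0, 0), (math.pi / 2, 0), (math.pi / 3, math.pi / 3), (0, math.pi / 2)])
assert abs(area_I - math.pi**2 / 6) < 1e-12
assert abs(area_R() - math.pi**2 / 6) < 1e-2
```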

  • 1
    Very nice indeed! (Although it took me a while to understand that the triangle lives in its own complex plane, not related to the $z_1$ and $z_2$ planes.) But I think it should be $x_1\ge 0$, $x_2\ge 0$, $e^{-x_1}+e^{-x_2} \le 1$, and the quadrilateral should have vertices at $(0,0)$, $(0,\pi/2)$, $(\pi/3,\pi/3)$ and $(\pi/2,0)$. – Hans Lundmark Oct 31 '10 at 09:35
  • Thanks for the corrections! I still think $e^{- x_1} + e^{- x_2} \geq 1$ is right, but I've fixed the others. – David E Speyer Oct 31 '10 at 12:12
  • Ah, you're right about that one, of course. Sorry. – Hans Lundmark Oct 31 '10 at 14:44
  • 2
    I have another comment too, which I posted as a separate answer because it was too long, and also because I wanted to include an image: http://math.stackexchange.com/questions/8337/different-methods-to-compute-sum-n1-infty-frac1n2/8516#8516 – Hans Lundmark Nov 01 '10 at 12:37
  • @DavidSpeyer , do you think your method (or a similar) can be applied here: http://math.stackexchange.com/questions/1284161/visual-proof-of-sum-n-1-infty-frac1n4-frac-pi490 ? – VividD May 24 '15 at 08:00
84

I'll post the one I know since it is Euler's, and is quite easy and stays in $\mathbb{R}$. (I'm guessing Euler didn't have tools like residues back then).

Let

$$s = {\sin ^{ - 1}}x$$

Then

$$\int\limits_0^{\frac{\pi }{2}} {sds} = \frac{{{\pi ^2}}}{8}$$

But then

$$\int\limits_0^1 {\frac{{{{\sin }^{ - 1}}x}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{{\pi ^2}}}{8}$$

Since

$${\sin ^{ - 1}}x = \int {\frac{{dx}}{{\sqrt {1 - {x^2}} }}} = x + \frac{1}{2}\frac{{{x^3}}}{3} + \frac{{1 \cdot 3}}{{2 \cdot 4}}\frac{{{x^5}}}{5} + \frac{{1 \cdot 3 \cdot 5}}{{2 \cdot 4 \cdot 6}}\frac{{{x^7}}}{7} + \cdots $$

We have

$$\int\limits_0^1 \frac{\sin^{-1}x}{\sqrt{1-x^2}}\,dx = \int\limits_0^1 \left( x + \frac{1}{2}\frac{x^3}{3} + \frac{1\cdot 3}{2\cdot 4}\frac{x^5}{5} + \cdots \right)\frac{dx}{\sqrt{1-x^2}}$$

But

$$\int\limits_0^1 {\frac{{{x^{2n + 1}}}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{2n}}{{2n + 1}}\int\limits_0^1 {\frac{{{x^{2n - 1}}}}{{\sqrt {1 - {x^2}} }}dx} $$

which yields

$$\int\limits_0^1 {\frac{{{x^{2n + 1}}}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{\left( {2n} \right)!!}}{{\left( {2n + 1} \right)!!}}$$

since all powers are odd.

This ultimately produces:

$$\frac{{{\pi ^2}}}{8} = 1 + \frac{1}{2}\frac{1}{3}\left( {\frac{2}{3}} \right) + \frac{{1 \cdot 3}}{{2 \cdot 4}}\frac{1}{5}\left( {\frac{{2 \cdot 4}}{{3 \cdot 5}}} \right) + \frac{{1 \cdot 3 \cdot 5}}{{2 \cdot 4 \cdot 6}}\frac{1}{7}\left( {\frac{{2 \cdot 4 \cdot 6}}{{3 \cdot 5 \cdot 7}}} \right) \cdots $$

$$\frac{{{\pi ^2}}}{8} = 1 + \frac{1}{{{3^2}}} + \frac{1}{{{5^2}}} + \frac{1}{{{7^2}}} + \cdots $$

Let

$$1 + \frac{1}{{{2^2}}} + \frac{1}{{{3^2}}} + \frac{1}{{{4^2}}} + \cdots = \omega $$

Then

$$\frac{1}{{{2^2}}} + \frac{1}{{{4^2}}} + \frac{1}{{{6^2}}} + \frac{1}{{{8^2}}} + \cdots = \frac{\omega }{4}$$

Which means

$$\frac{\omega }{4} + \frac{{{\pi ^2}}}{8} = \omega $$

or

$$\omega = \frac{{{\pi ^2}}}{6}$$
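(My addition.) Each term of the series above collapses to $1/(2n+1)^2$, which can be checked exactly with rational arithmetic; the helper names below are mine:

```python
import math
from fractions import Fraction as F

def dfact(n):
    """Double factorial n!!, with (-1)!! = 0!! = 1."""
    p = 1
    while n > 1:
        p *= n
        n -= 2
    return p

def term(n):
    """n-th term of the series for pi^2/8 as it appears above (n = 0, 1, 2, ...)."""
    return F(dfact(2 * n - 1), dfact(2 * n)) * F(1, 2 * n + 1) * F(dfact(2 * n), dfact(2 * n + 1))

assert all(term(n) == F(1, (2 * n + 1)**2) for n in range(50))

odd_sum = sum(1 / (2 * n + 1)**2 for n in range(5000))
assert abs(odd_sum - math.pi**2 / 8) < 1e-3

omega = odd_sum / (1 - F(1, 4))   # from omega/4 + pi^2/8 = omega
assert abs(omega - math.pi**2 / 6) < 1e-3
```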

Pedro
  • 122,002
73

The most recent issue of The American Mathematical Monthly (August-September 2011, pp. 641-643) has a new proof by Luigi Pace based on elementary probability. Here's the argument.

Let $X_1$ and $X_2$ be independent, identically distributed standard half-Cauchy random variables. Thus their common pdf is $p(x) = \frac{2}{\pi (1+x^2)}$ for $x > 0$.

Let $Y = X_1/X_2$. Then the pdf of $Y$ is, for $y > 0$, $$p_Y(y) = \int_0^{\infty} x p_{X_1} (xy) p_{X_2}(x) dx = \frac{4}{\pi^2} \int_0^\infty \frac{x}{(1+x^2 y^2)(1+x^2)}dx$$ $$=\frac{2}{\pi^2 (y^2-1)} \left[\log \left( \frac{1+x^2 y^2}{1+x^2}\right) \right]_{x=0}^{\infty} = \frac{2}{\pi^2} \frac{\log(y^2)}{y^2-1} = \frac{4}{\pi^2} \frac{\log(y)}{y^2-1}.$$

Since $X_1$ and $X_2$ are equally likely to be the larger of the two, we have $P(Y < 1) = 1/2$. Thus $$\frac{1}{2} = \int_0^1 \frac{4}{\pi^2} \frac{\log(y)}{y^2-1} dy.$$ This is equivalent to $$\frac{\pi^2}{8} = \int_0^1 \frac{-\log(y)}{1-y^2} dy = -\int_0^1 \log(y) (1+y^2+y^4 + \cdots) dy = \sum_{k=0}^\infty \frac{1}{(2k+1)^2},$$ which, as others have pointed out, implies $\zeta(2) = \pi^2/6$.
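(Not part of the answer.) The key identity $P(Y<1)=\frac12$ can be checked with a simple midpoint rule; the singularity of the integrand at $y=1$ is removable (the value there is $\frac12 \cdot \frac{4}{\pi^2}$):

```python
import math

def integrand(y):
    """(4/pi^2) * log(y) / (y^2 - 1); removable singularity at y = 1."""
    return 4 / math.pi**2 * math.log(y) / (y * y - 1)

M = 200000
h = 1.0 / M
prob = sum(integrand((i + 0.5) * h) * h for i in range(M))
assert abs(prob - 0.5) < 1e-4

# The equivalent odd-square series sums to pi^2/8:
assert abs(sum(1 / (2 * k + 1)**2 for k in range(10000)) - math.pi**2 / 8) < 1e-3
```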

Mike Spivey
  • 55,550
68

Just as a curiosity, a one-line real-analytic proof I found by combining different ideas from this thread and this question:

$$\begin{eqnarray*}\zeta(2)&=&\frac{4}{3}\sum_{n=0}^{+\infty}\frac{1}{(2n+1)^2}=\frac{4}{3}\int_{0}^{1}\frac{\log y}{y^2-1}dy\\&=&\frac{2}{3}\int_{0}^{1}\frac{1}{y^2-1}\left[\log\left(\frac{1+x^2 y^2}{1+x^2}\right)\right]_{x=0}^{+\infty}dy\\&=&\frac{4}{3}\int_{0}^{1}\int_{0}^{+\infty}\frac{x}{(1+x^2)(1+x^2 y^2)}dx\,dy\\&=&\frac{4}{3}\int_{0}^{1}\int_{0}^{+\infty}\frac{dx\, dz}{(1+x^2)(1+z^2)}=\frac{4}{3}\cdot\frac{\pi}{4}\cdot\frac{\pi}{2}=\frac{\pi^2}{6}.\end{eqnarray*}$$


Update. By collecting pieces, I have another nice proof. By Euler's acceleration method or just an iterated trick like my $(1)$ here we get: $$ \zeta(2) = \sum_{n\geq 1}\frac{1}{n^2} = \color{red}{\sum_{n\geq 1}\frac{3}{n^2\binom{2n}{n}}}\tag{A}$$ and the last series converges pretty fast. Then we may notice that the last series comes out from a squared arcsine. That just gives another proof of $ \zeta(2)=\frac{\pi^2}{6}$.
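(My addition.) Series $(\mathrm{A})$ indeed converges very quickly; forty terms already give $\zeta(2)$ to machine precision, while the defining series is still off after a thousand terms:

```python
import math
from math import comb

# Accelerated series (A): zeta(2) = sum_{n>=1} 3 / (n^2 * C(2n, n)).
s = sum(3 / (n**2 * comb(2 * n, n)) for n in range(1, 40))
assert abs(s - math.pi**2 / 6) < 1e-12

# For comparison, the defining series converges like 1/N:
direct = sum(1 / n**2 for n in range(1, 1000))
assert abs(direct - math.pi**2 / 6) > 1e-4   # still off after 1000 terms
```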


A proof of the identity $$\sum_{n\geq 0}\frac{1}{(2n+1)^2}=\frac{\pi}{2}\sum_{k\geq 0}\frac{(-1)^k}{2k+1}=\frac{\pi}{2}\cdot\frac{\pi}{4}$$ is also hidden in tired's answer here. For short, the integral $$ I=\int_{-\infty}^{\infty}e^y\left(\frac{e^y-1}{y^2}-\frac{1}{y}\right)\frac{1}{e^{2y}+1}\,dy $$ is clearly real, so the imaginary part of the sum of residues of the integrand function has to be zero.


Still another way (and a very efficient one) is to exploit the reflection formula for the trigamma function: $$\psi'(1-z)+\psi'(z)=\frac{\pi^2}{\sin^2(\pi z)}$$ immediately leads to: $$\frac{\pi^2}{2}=\psi'\left(\frac{1}{2}\right)=\sum_{n\geq 0}\frac{1}{\left(n+\frac{1}{2}\right)^2}=4\sum_{n\geq 0}\frac{1}{(2n+1)^2}=3\,\zeta(2).$$


2018 update. We may consider that $\mathcal{J}=\int_{0}^{+\infty}\frac{\arctan x}{1+x^2}\,dx = \left[\frac{1}{2}\arctan^2 x\right]_0^{+\infty}=\frac{\pi^2}{8}$.
On the other hand, by Feynman's trick or Fubini's theorem $$ \mathcal{J}=\int_{0}^{+\infty}\int_{0}^{1}\frac{x}{(1+x^2)(1+a^2 x^2)}\,da\,dx = \int_{0}^{1}\frac{-\log a}{1-a^2}\,da $$ and since $\int_{0}^{1}-\log(x)x^n\,dx = \frac{1}{(n+1)^2}$, by expanding $\frac{1}{1-a^2}$ as a geometric series we have $$ \frac{\pi^2}{8}=\mathcal{J}=\sum_{n\geq 0}\frac{1}{(2n+1)^2}. $$

Jack D'Aurizio
  • 353,855
67

Note that $$ \frac{\pi^2}{\sin^2\pi z}=\sum_{n=-\infty}^{\infty}\frac{1}{(z-n)^2} $$ from complex analysis and that both sides are analytic everywhere except $n=0,\pm 1,\pm 2,\cdots$. Then one can obtain $$ \frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}=\sum_{n=1}^{\infty}\frac{1}{(z-n)^2}+\sum_{n=1}^{\infty}\frac{1}{(z+n)^2}. $$ Now the right hand side is analytic at $z=0$ and hence $$\lim_{z\to 0}\left(\frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}\right)=2\sum_{n=1}^{\infty}\frac{1}{n^2}.$$ Note $$\lim_{z\to 0}\left(\frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}\right)=\frac{\pi^2}{3}.$$ Thus $$\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}.$$
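(Not part of the answer.) The limit as $z\to 0$ is easy to check numerically; the error behaves like $O(z^2)$, as the Laurent expansion $\pi^2/\sin^2 \pi z = z^{-2} + \pi^2/3 + O(z^2)$ suggests:

```python
import math

def g(z):
    """pi^2 / sin^2(pi z) - 1/z^2, which tends to pi^2/3 as z -> 0."""
    return math.pi**2 / math.sin(math.pi * z)**2 - 1 / z**2

for z in (1e-2, 1e-3, 1e-4):
    assert abs(g(z) - math.pi**2 / 3) < 50 * z**2   # O(z^2) error

# And the conclusion: 2 * zeta(2) = pi^2/3.
zeta2 = sum(1 / n**2 for n in range(1, 200000))
assert abs(2 * zeta2 - math.pi**2 / 3) < 1e-4
```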

xpaul
  • 1
67

This is not really an answer, but rather a long comment prompted by David Speyer's answer. The proof that David gives seems to be the one in How to compute $\sum 1/n^2$ by solving triangles by Mikael Passare, although that paper uses a slightly different way of seeing that the area of the region $U_0$ (in Passare's notation) bounded by the positive axes and the curve $e^{-x}+e^{-y}=1$, $$\int_0^{\infty} -\ln(1-e^{-x}) dx,$$ is equal to $\sum_{n\ge 1} \frac{1}{n^2}$.

This brings me to what I really wanted to mention, namely another curious way to see why $U_0$ has that area; I learned this from Johan Wästlund. Consider the region $D_N$ illustrated below for $N=8$:

[Figure: the staircase-shaped region $D_8$, whose area is a sum of reciprocal squares]

Although it's not immediately obvious, the area of $D_N$ is $\sum_{n=1}^N \frac{1}{n^2}$. Proof: The area of $D_1$ is 1. To get from $D_N$ to $D_{N+1}$ one removes the boxes along the top diagonal, and adds a new leftmost column of rectangles of width $1/(N+1)$ and heights $1/1,1/2,\ldots,1/N$, plus a new bottom row which is the "transpose" of the new column, plus a square of side $1/(N+1)$ in the bottom left corner. The $k$th rectangle from the top in the new column and the $k$th rectangle from the left in the new row (not counting the square) have a combined area which exactly matches the $k$th box in the removed diagonal: $$ \frac{1}{k} \frac{1}{N+1} + \frac{1}{N+1} \frac{1}{N+1-k} = \frac{1}{k} \frac{1}{N+1-k}. $$ Thus the area added in the process is just that of the square, $1/(N+1)^2$. Q.E.D.

(Apparently this shape somehow comes up in connection with the "random assignment problem", where there's an expected value of something which turns out to be $\sum_{n=1}^N \frac{1}{n^2}$.)

Now place $D_N$ in the first quadrant, with the lower left corner at the origin. Letting $N\to\infty$ gives nothing but the region $U_0$: for large $N$ and for $0<\alpha<1$, the upper corner of column number $\lceil \alpha N \rceil$ in $D_N$ lies at $$ (x,y) = \left( \sum_{n=\lceil (1-\alpha) N \rceil}^N \frac{1}{n}, \sum_{n=\lceil \alpha N \rceil}^N \frac{1}{n} \right) \sim \left(\ln\frac{1}{1-\alpha}, \ln\frac{1}{\alpha}\right),$$ hence (in the limit) on the curve $e^{-x}+e^{-y}=1$.
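(My sketch; the function name is hypothetical.) The box-matching identity behind the recurrence, and the resulting area formula, can be verified exactly with rational arithmetic:

```python
import math
from fractions import Fraction as F

def diagonal_matches(N):
    """For each k, the k-th new column rectangle plus the k-th new row rectangle
    in the step D_N -> D_{N+1} exactly matches the k-th removed diagonal box."""
    return all(F(1, k) * F(1, N + 1) + F(1, N + 1) * F(1, N + 1 - k)
               == F(1, k) * F(1, N + 1 - k)
               for k in range(1, N + 1))

assert all(diagonal_matches(N) for N in range(1, 40))

# Hence area(D_N) = sum_{n=1}^{N} 1/n^2, which tends to pi^2/6.
area = sum(F(1, n * n) for n in range(1, 300))
assert abs(float(area) - math.pi**2 / 6) < 1e-2
```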

Hans Lundmark
  • 53,395
  • 10
    That's a neat observation. – David E Speyer Nov 01 '10 at 14:53
  • 6
    Tracing through the proof that $D_N = \sum_{d=1}^{N} 1/d^2$, I discovered the following curiosity: If you look at all the rectangle in $D_N$ of the form $1/j \times 1/k$ with $GCD(j,k)=d$, their total area is $1/d^2$. In particular, if you look at the rectangles of the form $1/j \times 1/k$ with $GCD(x,y)=1$, in the limit they are spread everywhere across the region $e^{-x} + e^{-y} \geq 1$, with density equal to the probability that two randomly chosen integers are relatively prime, namely $6/\pi^2$. – David E Speyer Aug 22 '14 at 00:46
  • @DavidSpeyer: That's also a neat observation! :-) – Hans Lundmark Aug 22 '14 at 09:43
  • @HansLundmark What about applying the same idea here: http://math.stackexchange.com/questions/1284161/visual-proof-of-sum-n-1-infty-frac1n4-frac-pi490 ? – VividD May 24 '15 at 08:03
  • 3
    @VividD: Feel free to try, I'm not going to stop you! ;-) – Hans Lundmark May 24 '15 at 08:18
  • 3
    Kiran Kedlaya pointed out the following to me: In the interval $[0,1]$, consider all fractions $p/q$ with $q \leq N$. For example, when $N=3$, look at $(0/1, 1/3, 1/2, 2/3, 1/1)$. Look at the blocks of $D_3$ of size $1/q \times 1/q'$ with $GCD(q,q')=1$. The pairs $(q,q')$ which occur are precisely the successive denominators. For example, $1 \times 1/3$, $1/3 \times 1/2$, $1/2 \times 1/3$, $1/3 \times 1$ in $D_3$. We have $1/(q q') = (p/q)-(p'/q')$ (this is a well known property of Farey fractions) so the areas add up to $1$ because this is the length of the interval. – David E Speyer Jul 08 '16 at 19:35
    Wastlund has another (?) paper in 2010 explicitly constructing a circle whose size approaches infinity. It was recently made into a video by 3blue1brown with very illuminating (pun intended) animations. – Lee David Chung Lin Mar 14 '18 at 09:10
63

Here is a complex-analytic proof.

For $z\in D=\mathbb{C}\setminus\{0,1\}$, let

$$R(z)=\sum\frac{1}{\log^2 z}$$

where the sum is taken over all branches of the logarithm. Each point in $D$ has a neighbourhood on which the branches of $\log(z)$ are analytic. Since the series converges uniformly away from $z=1$, $R(z)$ is analytic on $D$.

Now a few observations:

(i) Each term of the series tends to $0$ as $z\to0$. Thanks to the uniform convergence this implies that the singularity at $z=0$ is removable and we can set $R(0)=0$.

(ii) The only singularity of $R$ is a double pole at $z=1$ due to the contribution of the principal branch of $\log z$. Moreover, $\lim_{z\to1}(z-1)^2R(z)=1$.

(iii) $R(1/z)=R(z)$.

By (i) and (iii) $R$ is meromorphic on the extended complex plane, therefore it is rational. By (ii) the denominator of $R(z)$ is $(z-1)^2$. Since $R(0)=R(\infty)=0$, the numerator has the form $az$. Then (ii) implies $a=1$, so that $$R(z)=\frac{z}{(z-1)^2}.$$

Now, setting $z=e^{2\pi i w}$ yields $$\sum\limits_{n=-\infty}^{\infty}\frac{1}{(w-n)^2}=\frac{\pi^2}{\sin^2(\pi w)}$$ which implies that $$\sum\limits_{k=0}^{\infty}\frac{1}{(2k+1)^2}=\frac{\pi^2}{8},$$ and the identity $\zeta(2)=\pi^2/6$ follows.

The proof is due to T. Marshall (American Mathematical Monthly, Vol. 117(4), 2010, P. 352).
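As an illustration (truncating the bilateral sum), one can check the identity $\sum_{n=-\infty}^{\infty}(w-n)^{-2}=\pi^2/\sin^2(\pi w)$ numerically:

```python
import math

# Truncated numerical check of: sum over all integers n of 1/(w - n)^2
# equals pi^2 / sin^2(pi w).
def partial(w, N):
    return sum(1.0 / (w - n) ** 2 for n in range(-N, N + 1))

w = 0.3
approx = partial(w, 100000)
exact = math.pi ** 2 / math.sin(math.pi * w) ** 2
print(approx, exact)  # agree to within the truncation error ~ 2/N
```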

46

In response to a request here: Compute $\oint z^{-2k} \cot (\pi z) dz$ where the integral is taken around a square of side $2N+1$. Routine estimates show that the integral goes to $0$ as $N \to \infty$.

Now, let's compute the integral by residues. At $z=0$, the residue is $\pi^{2k-1} q$, where $q$ is some rational number coming from the power series for $\cot$. For example, if $k=1$, then we get $- \pi/3$.

At $z=m$, for integer $m \neq 0$, the residue is $m^{-2k} \pi^{-1}$. So $$\pi^{-1} \lim_{N \to \infty} \sum_{\substack{-N \leq m \leq N \\ m \neq 0}} m^{-2k} + \pi^{2k-1} q=0$$ or $$\sum_{m=1}^{\infty} m^{-2k} = -\pi^{2k} q/2$$ as desired. In particular, $\sum m^{-2} = -\pi^2 \cdot (-1/3)/2 = \pi^2/6$.

Common variants: We can replace $\cot$ with $\tan$, with $1/(e^{2 \pi i z}-1)$, or with similar formulas.

This is reminiscent of Qiaochu's proof but, rather than actually establishing the relation $\pi^{-1} \cot(\pi z) = \sum (z-n)^{-1}$, one simply establishes that both sides contribute the same residues to a certain integral.
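A numerical illustration of the residue bookkeeping: the Laurent series of $\cot$ gives $q=-1/3$ for $k=1$ and $q=-1/45$ for $k=2$ (the $k=2$ value is my addition, read off the same power series), so the sums should equal $-\pi^2(-1/3)/2=\pi^2/6$ and $-\pi^4(-1/45)/2=\pi^4/90$:

```python
import math

# Partial sums of sum m^(-2k) compared with -pi^(2k)*q/2 for k = 1, 2.
def zeta_partial(s, N=200000):
    return sum(m ** -s for m in range(1, N + 1))

z2, z4 = zeta_partial(2), zeta_partial(4)
print(z2, math.pi ** 2 / 6)   # k = 1, q = -1/3
print(z4, math.pi ** 4 / 90)  # k = 2, q = -1/45
```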

39

A short way to get the sum is to use the Fourier expansion of $x^2$ on $(-\pi,\pi)$. Recall that the Fourier expansion of $f(x)$ is $$ \tilde{f}(x)=\frac{1}{2}a_0+\sum_{n=1}^\infty(a_n\cos nx+b_n\sin nx), \quad x\in(-\pi,\pi)$$ where $$ a_0=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx,\quad a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\, dx,\quad b_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\, dx,\quad n=1,2,3,\cdots $$ and $$ \tilde{f}(x)=\frac{f(x-0)+f(x+0)}{2}. $$ Easy calculation shows $$ x^2=\frac{\pi^2}{3}+4\sum_{n=1}^\infty(-1)^n\frac{\cos nx}{n^2},\quad x\in[-\pi,\pi]. $$ Letting $x=\pi$ on both sides gives $$ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$

Another way to get the sum is to use Parseval's Identity for the Fourier expansion of $x$ on $(-\pi,\pi)$. Recall that Parseval's Identity is $$ \frac{1}{\pi}\int_{-\pi}^{\pi}|f(x)|^2dx=\frac{1}{2}a_0^2+\sum_{n=1}^\infty(a_n^2+b_n^2). $$ Note $$ x=2\sum_{n=1}^\infty(-1)^{n+1}\frac{\sin nx}{n},\quad x\in(-\pi,\pi), $$ so $b_n=\frac{2(-1)^{n+1}}{n}$. Using Parseval's Identity gives $$ 4\sum_{n=1}^\infty\frac{1}{n^2}=\frac{1}{\pi}\int_{-\pi}^{\pi}x^2dx=\frac{2\pi^2}{3}$$ or $$ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$
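As a quick numerical check (not part of the proof), the Fourier expansion $x^2=\frac{\pi^2}{3}+4\sum_{n\ge1}(-1)^n\frac{\cos nx}{n^2}$ can be evaluated at a test point with a truncated series:

```python
import math

# Truncated check of x^2 = pi^2/3 + 4 * sum (-1)^n cos(n x)/n^2 at x = 1.
def series(x, N=100000):
    return math.pi ** 2 / 3 + 4 * sum(
        (-1) ** n * math.cos(n * x) / n ** 2 for n in range(1, N + 1)
    )

val = series(1.0)
print(val)  # close to 1.0
```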

xpaul
  • 1
39

Another variation. We make use of the following identity (proved at the bottom of this note):

$$\sum_{k=1}^n \cot^2 \left( \frac {2k-1}{2n} \frac{\pi}{2} \right) = 2n^2 - n. \quad (1)$$

Now $1/\theta > \cot \theta > 1/\theta - \theta/3 > 0$ for $0< \theta< \pi/2$ (the last inequality uses $\pi/2 < \sqrt{3}$), and so $$ 1/\theta^2 - 2/3 < \cot^2 \theta < 1/\theta^2. \quad (2)$$

With $\theta_k = (2k-1)\pi/4n,$ summing the inequalities $(2)$ from $k=1$ to $n$ we obtain

$$2n^2 - n < \sum_{k=1}^n \left( \frac{2n}{2k-1}\frac{2}{\pi} \right)^2 < 2n^2 - n + 2n/3.$$

Hence

$$\frac{\pi^2}{16}\frac{2n^2-n}{n^2} < \sum_{k=1}^n \frac{1}{(2k-1)^2} < \frac{\pi^2}{16}\frac{2n^2-n/3}{n^2}.$$

Taking the limit as $n \rightarrow \infty$ we obtain

$$ \sum_{k=1}^\infty \frac{1}{(2k-1)^2} = \frac{\pi^2}{8},$$

from which the result for $\sum_{k=1}^\infty 1/k^2$ follows easily.

To prove $(1)$ we note that

$$ \cos 2n\theta = \text{Re}(\cos\theta + i \sin\theta)^{2n} = \sum_{k=0}^n (-1)^k {2n \choose 2k}\cos^{2n-2k}\theta\sin^{2k}\theta.$$

Therefore

$$\frac{\cos 2n\theta}{\sin^{2n}\theta} = \sum_{k=0}^n (-1)^k {2n \choose 2k}\cot^{2n-2k}\theta.$$

And so setting $x = \cot^2\theta$ we note that

$$f(x) = \sum_{k=0}^n (-1)^k {2n \choose 2k}x^{n-k}$$

has roots $x_j = \cot^2 (2j-1)\pi/4n,$ for $j=1,2,\ldots,n,$ from which $(1)$ follows since ${2n \choose 2n-2} = 2n^2-n.$
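Identity $(1)$ is easy to confirm numerically; the sketch below checks it for a few values of $n$:

```python
import math

# Numerical check of identity (1): sum_{k=1}^n cot^2((2k-1)pi/(4n)) = 2n^2 - n.
def cot2_sum(n):
    return sum(math.tan((2 * k - 1) * math.pi / (4 * n)) ** -2
               for k in range(1, n + 1))

checks = {n: cot2_sum(n) for n in (5, 50, 500)}
for n, s in checks.items():
    print(n, s, 2 * n * n - n)
```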

35

Theorem: Let $\lbrace a_n\rbrace$ be a nonincreasing sequence of positive numbers such that $\sum a_n^2$ converges. Then both series $$s:=\sum_{n=0}^\infty(-1)^na_n,\,\delta_k:=\sum_{n=0}^\infty a_na_{n+k},\,k\in\mathbb N $$ converge. Moreover $\Delta:=\sum_{k=1}^\infty(-1)^{k-1}\delta_k$ also converges, and we have the formula $$\sum_{n=0}^\infty a_n^2=s^2+2\Delta.$$ Proof: Konrad Knopp, Theory and Application of Infinite Series, page 323.

If we let $a_n=\frac1{2n+1}$ in this theorem, then we have $$s=\sum_{n=0}^\infty(-1)^n\frac1{2n+1}=\frac\pi 4$$ $$\delta_k=\sum_{n=0}^\infty\frac1{(2n+1)(2n+2k+1)}=\frac1{2k}\sum_{n=0}^\infty\left(\frac1{2n+1}-\frac1{2n+2k+1}\right)=\frac{1}{2k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)$$ Hence, $$\sum_{n=0}^\infty\frac1{(2n+1)^2}=\left(\frac\pi 4\right)^2+\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)=\frac{\pi^2}{16}+\frac{\pi^2}{16}=\frac{\pi^2}{8}$$ and now $$\zeta(2)=\frac4 3\sum_{n=0}^\infty\frac1{(2n+1)^2}=\frac{\pi^2}6.$$

Note: $$\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)=\frac{\pi^2}{16}$$

comes from the fact that

$$\left(\frac\pi4\right)^2=\left(\sum_{n=1}^\infty(-1)^{n-1}\frac1{2n-1}\right)\cdot\left(\sum_{n=1}^\infty(-1)^{n-1}\frac1{2n-1}\right)$$
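As a numerical illustration of the note above, one can sum the alternating series directly; since it alternates, averaging two consecutive partial sums gives a much better estimate:

```python
import math

# Check: sum_{k>=1} (-1)^(k-1)/k * (1 + 1/3 + ... + 1/(2k-1)) -> pi^2/16.
K = 100000
odd_harmonic = 0.0
s_prev = s = 0.0
for k in range(1, K + 1):
    odd_harmonic += 1.0 / (2 * k - 1)
    s_prev, s = s, s + (-1) ** (k - 1) * odd_harmonic / k
estimate = (s + s_prev) / 2  # average of consecutive partial sums
print(estimate, math.pi ** 2 / 16)
```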

user91500
  • 5,606
  • 1
    This is the most elementary proof here. +1 – Anshuman Agrawal Apr 02 '23 at 20:39
  • $\sum\limits_{k=1}^\infty\frac{(-1)^{k-1}}{k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)=\frac{\pi^2}{16}$ might benefit from some explanation. – robjohn May 18 '23 at 06:10
  • @robjohn I've added some explanation. Thanks. – user91500 May 19 '23 at 16:59
  • $\sum\limits_{k=1}^\infty\frac{(-1)^{k-1}}{k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)$ has terms with even denominators, whereas $\left(\sum\limits_{n=1}^\infty(-1)^{n-1}\frac1{2n-1}\right)\cdot\left(\sum\limits_{n=1}^\infty(-1)^{n-1}\frac1{2n-1}\right)$ does not. So while both of the last two equations are true, I don't think the former follows directly from the latter. – robjohn May 19 '23 at 22:42
  • @robjohn But the former obtains by the Cauchy product of the latter, doesn't it? – user91500 May 20 '23 at 13:41
  • See $\text{(1e)}$ from this answer. I believe that the double series on the right side of the plus sign is your series, and the following steps show how to get $\frac{\pi^2}{16}$. – robjohn May 21 '23 at 05:32
  • @robjohn Yes, You got $\frac{\pi^2}{16}$ directly without use of the Cauchy product and the Abel theorem. – user91500 May 24 '23 at 16:58
29

At risk of contravening group etiquette w.r.t. old questions, I'm going to take this opportunity to post my own version. I don't see it in a transparent form in any of the other posts or in Robin Chapman's article, so I invite anyone to point out the correspondence if it's there. I like this argument because it's physical and can be followed without mathematical formalism.

We start by assuming the well-known series for $\pi/4$ in alternating odd fractions. We can recognize it as the sum of the Fourier series of the square wave, evaluated at the origin:

$\cos(x) - \cos(3x)/3 + \cos(5x)/5 - \cdots$

It is easily argued on physical grounds that this adds up to a square wave; and that the height of the wave is $\pi/4$ follows from the alternating sequence already mentioned. Now we are going to interpret this wave as an electric current flowing through a resistor. There are two ways of calculating the power and they must agree. First, we can just take the square of the amplitude; in the case of this square wave, this is obviously a constant and it is just $\,\,\pi^2/16$. The other way is to add up the power of the sinusoidal components. These are the squares of the individual amplitudes:

$1 + 1/9 + 1/25 .... = (?)\, \pi^2/16 \,\,??$

No, not quite; I've been a little sloppy and neglected to mention that when calculating the power of a sine wave, you use its RMS amplitude and not its peak amplitude. This introduces a factor of two; so in fact the series as written adds up to $\,\pi^2/8.$ This isn't quite what we want; remember we've just added up the odd fractions. But the even fractions contribute in a rather picturesque way; it's easy to group them by powers of two into a geometric sum leading to the desired result of $\,\,\pi^2/6.$
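The final grouping step can be illustrated numerically: every $n$ factors uniquely as $2^j m$ with $m$ odd, so $\sum 1/n^2 = \left(\sum_{m\text{ odd}}1/m^2\right)\left(1+\tfrac14+\tfrac1{16}+\cdots\right)=\frac{\pi^2}{8}\cdot\frac43=\frac{\pi^2}{6}$:

```python
import math

# Partial sums illustrating odd-part / power-of-two grouping.
N = 200000
odd_sum = sum(1.0 / n ** 2 for n in range(1, N, 2))
full_sum = sum(1.0 / n ** 2 for n in range(1, N))
print(odd_sum, math.pi ** 2 / 8)   # odd terms give pi^2/8
print(full_sum, odd_sum * 4 / 3)   # full sum is 4/3 of that
```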

learner
  • 6,726
Marty Green
  • 1,917
  • 4
    At the risk of being rude, you've used "It is easily argued on physical grounds" in place of a theorem on pointwise convergence of fourier series, and a particular physical manifestation/application of Plancherel's theorem. You gain "intuition" for why the result is plausible (assuming you have the corresponding physics background), but you lose both rigor and clarity. The problem with making a physical argument for any mathematical fact is that even if you know that certain calculations work for physically relevant examples, it's hard to say what condition "physically relevant" imposes. – Aaron Aug 14 '11 at 01:14
  • 2
    Thanks for the feedback. I'm understanding that my argument wasn't so sketchy that you weren't able to fill in the details as necessary. I am blown away by the mathematical sophistication of the people who post here but I still wish I would see more arguments made the way I do. – Marty Green Aug 14 '11 at 01:51
  • 4
    Well, you lucked out that I had seen the argument before (though not phrased with such language), and I remembered enough physics to understand what you were doing. I appreciate how you feel: technical arguments can be difficult to digest and sometimes offer no intuition about the result. A heuristic explanation, even if it isn't fully rigorous, is often a wonderful addition. However, for mathematics, the heuristic cannot be everything, as the mathematical battleground is littered with the bodies of proofs which are simple, intuitive, and wrong. – Aaron Aug 14 '11 at 02:08
27

I like this one:

Let $f\in Lip(S^{1})$, where $Lip(S^{1})$ is the space of Lipschitz functions on $S^{1}$. Then for each $k\in \mathbb{Z}$ the number (the $k$th Fourier coefficient of $f$) $$\hat{f}(k)=\frac{1}{2\pi}\int f(\theta)e^{-ik\theta}d\theta$$ is well defined.

By the inversion formula, we have $$f(\theta)=\sum_{k\in\mathbb{Z}}\hat{f}(k)e^{ik\theta}.$$

Now take $f(\theta)=|\theta|$, $\theta\in [-\pi,\pi]$. Note that $f\in Lip(S^{1})$.

We have $$ \hat{f}(k) = \left\{ \begin{array}{rl} \frac{\pi}{2} &\mbox{ if $k=0$} \\ 0 &\mbox{ if $|k|\neq 0$ and $|k|$ is even} \\ -\frac{2}{k^{2}\pi} &\mbox{if $|k|\neq 0$ and $|k|$ is odd} \end{array} \right. $$

Using the inversion formula, we have on $\theta=0$ that $$0=\sum_{k\in\mathbb{Z}}\hat{f}(k).$$

Then,

\begin{eqnarray} 0 &=& \frac{\pi}{2}-\sum_{k\in\mathbb{Z}\ |k|\ odd}\frac{2}{k^{2}\pi} \nonumber \\ &=& \frac{\pi}{2}-\sum_{k\in\mathbb{N}\ |k|\ odd}\frac{4}{k^{2}\pi} \nonumber \\ \end{eqnarray}

This implies $$\sum_{k\in\mathbb{N}\ |k|\ odd}\frac{1}{k^{2}} =\frac{\pi^{2}}{8}$$

If we multiply the last equation by $\frac{1}{2^{2n}}$ with $n=0,1,2,...$ ,we get $$\sum_{k\in\mathbb{N}\ |k|\ odd}\frac{1}{(2^{n}k)^{2}} =\frac{\pi^{2}}{2^{2n}8}$$

Now $$\sum_{n=0,1,...}(\sum_{k\in\mathbb{N}\ |k|\ odd}\frac{1}{(2^{n}k)^{2}}) =\sum_{n=0,1,...}\frac{\pi^{2}}{2^{2n}8}$$

The sum on the left is equal to $\sum_{k\in\mathbb{N}}\frac{1}{k^{2}}$.

The sum on the right is equal to $\frac{\pi^{2}}{6}$.

So we conclude: $$\sum_{k\in\mathbb{N}}\frac{1}{k^{2}}=\frac{\pi^{2}}{6}$$

Note: This is problem 9, page 208 from the book by Michael E. Taylor, Partial Differential Equations, Volume 1.
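A numerical check (midpoint rule; illustration only) of the Fourier coefficients of $f(\theta)=|\theta|$ computed above, using the symmetry $\hat f(k)=\frac1\pi\int_0^\pi\theta\cos(k\theta)\,d\theta$:

```python
import math

# Midpoint-rule check of the coefficients of f(theta) = |theta| on [-pi, pi].
def fhat(k, M=100000):
    h = math.pi / M
    return sum((j + 0.5) * h * math.cos(k * (j + 0.5) * h)
               for j in range(M)) * h / math.pi

c0, c2, c3 = fhat(0), fhat(2), fhat(3)
print(c0, math.pi / 2)          # k = 0: pi/2
print(c2)                       # even k != 0: 0
print(c3, -2 / (9 * math.pi))   # odd k: -2/(k^2 pi)
```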

Tomás
  • 22,559
26

Here's a proof based upon periods and the fact that $\zeta(2)$ and $\frac{\pi^2}{6}$ are periods forming an accessible identity.

The definition of periods below and the proof is from the fascinating introductory survey paper about periods by M. Kontsevich and D. Zagier.

Periods are defined as complex numbers whose real and imaginary parts are values of absolutely convergent integrals of rational functions with rational coefficient over domains in $\mathbb{R}^n$ given by polynomial inequalities with rational coefficients.

The set of periods is therefore a countable subset of the complex numbers. It contains the algebraic numbers, but also many of famous transcendental constants.

In order to show the equality $\zeta(2)=\frac{\pi^2}{6}$ we have to show that both are periods and that $\zeta(2)$ and $\frac{\pi^2}{6}$ form a so-called accessible identity.

First step of the proof: $\zeta(2)$ and $\pi$ are periods

There are a lot of different proper representations of $\pi$ showing that this constant is a period. In the paper referred to above, the following expressions (among others) for $\pi$ are stated:

\begin{align*} \pi= \iint \limits_{x^2+y^2\leq 1}dxdy=\int_{-\infty}^{\infty}\frac{dx}{1+x^2} \end{align*}

showing that $\pi$ is a period. The known representation

\begin{align*} \zeta(2)=\iint_{0<x<y<1} \frac{dxdy}{(1-x)y} \end{align*}

shows that $\zeta(2)$ is also a period.


Second step: $\zeta(2)$ and $\frac{\pi^2}{6}$ form an accessible identity.

An accessible identity between two periods $A$ and $B$ is given, if we can transform the integral representation of period $A$ by application of the three rules: Additivity (integrand and domain), Change of variables and Newton-Leibniz formula to the integral representation of period $B$.

This implies equality of the periods and the job is done.

In order to show that $\zeta(2)$ and $\frac{\pi^2}{6}$ are accessible identities we start with the integral $I$

$$I=\int_{0}^{1}\int_{0}^{1}\frac{1}{1-xy}\frac{dxdy}{\sqrt{xy}}$$

Expanding $1/(1-xy)$ as a geometric series and integrating term-by-term,

we find that

$$I=\sum_{n=0}^{\infty}\left(n+\frac{1}{2}\right)^{-2}=(4-1)\zeta(2),$$

providing another period representation of $\zeta(2)$.

Changing variables:

$$x=\xi^2\frac{1+\eta^2}{1+\xi^2},\qquad\qquad y=\eta^2\frac{1+\xi^2}{1+\eta^2}$$

with Jacobian $\left|\frac{\partial(x,y)}{\partial(\xi,\eta)}\right|=\frac{4\xi\eta(1-\xi^2\eta^2)}{(1+\xi^2)(1+\eta^2)} =4\frac{(1-xy)\sqrt{xy}}{(1+\xi^2)(1+\eta^2)}$, we find

$$I=4\iint_{0<\eta,\xi\leq 1}\frac{d\xi}{1+\xi^2}\frac{d\eta}{1+\eta^2} =2\int_{0}^{\infty}\frac{d\xi}{1+\xi^2}\int_{0}^{\infty}\frac{d\eta}{1+\eta^2},$$

the last equality being obtained by considering the involution $(\xi,\eta) \mapsto (\xi^{-1},\eta^{-1})$ and comparing this with the last integral representation of $\pi$ above we obtain: $$I=\frac{\pi^2}{2}$$

So, we have shown that $\frac{\pi^2}{6}$ and $\zeta(2)$ are accessible identities and equality follows.
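A small numerical check tying the two period representations together: the series form of $I$ found above, $I=\sum_{n\ge0}(n+\frac12)^{-2}=3\zeta(2)$, should equal $\pi^2/2$:

```python
import math

# Partial sum of I = sum_{n>=0} (n + 1/2)^(-2), compared with pi^2/2.
N = 200000
I = sum((n + 0.5) ** -2 for n in range(N))
print(I, math.pi ** 2 / 2)
```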

Markus Scheuer
  • 108,315
24

As taken from my upcoming textbook:

There is yet another solution to the Basel problem as proposed by Ritelli (2013). His approach is similar to the one by Apostol (1983), where he arrives at

$$\sum_{n\geq1}\frac{1}{n^2}=\frac{\pi^2}{6}\tag1$$

by evaluating the double integral

$$\int_0^1\int_0^1\dfrac{\mathrm{d}x\,\mathrm{d}y}{1-xy}.\tag2$$

Ritelli evaluates in this case the definite integral shown in $(4)$. The starting point comes from realizing that $(1)$ is equivalent to

$$\sum_{n\geq0}\frac{1}{(2n+1)^2}=\frac{\pi^2}{8}\tag3$$

To evaluate the above sum we consider the definite integral

$$\int_0^\infty\int_0^\infty\frac{\mathrm{d}x\,\mathrm{d}y}{(1+y)(1+x^2y)}.\tag4$$

We evaluate $(4)$ first with respect to $x$ and then to $y$

$$\begin{align} \int_0^\infty\left(\frac{1}{1+y}\int_0^\infty\frac{\mathrm{d}x}{1+x^2y}\right)\mathrm{d}y &=\int_0^\infty\left(\frac{1}{1+y}\left[\frac{\tan^{-1}(\sqrt{y}\,x)}{\sqrt{y}}\right]_{x=0}^{x=\infty}\right)\mathrm{d}y\\ &=\frac\pi2\int_0^\infty\frac{\mathrm{d}y}{\sqrt{y}(1+y)}\\ &=\frac\pi2\int_0^\infty\frac{2u}{u(1+u^2)}\mathrm{d}u=\frac{\pi^2}{2},\tag5 \end{align}$$

where we used the substitution $y\leadsto u^2$ in the last step. If we reverse the order of integration one gets

$$\begin{align} \int_0^\infty\left(\int_0^\infty\frac{\mathrm{d}y}{(1+y)(1+x^2y)}\right)\mathrm{d}x&=\int_0^\infty\frac{1}{1-x^2}\left(\int_0^\infty\left(\frac{1}{1+y}-\frac{x^2}{1+x^2y}\right)\mathrm{d}y\right)\mathrm{d}x\\ &=\int_0^\infty\frac{1}{1-x^2}\ln\frac1{x^2}\mathrm{d}x=2\int_0^\infty\frac{\ln x}{x^2-1}\mathrm{d}x.\tag6 \end{align}$$

Hence since $(5)$ and $(6)$ are the same, we have

$$\int_0^\infty\frac{\ln x}{x^2-1}\mathrm{d}x=\frac{\pi^2}{4}.\tag7$$

Furthermore

$$\begin{align} \int_0^\infty\frac{\ln x}{x^2-1}\mathrm{d}x&=\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x+\int_1^\infty\frac{\ln x}{x^2-1}\mathrm{d}x\\ &=\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x+\int_0^1\frac{\ln u}{u^2-1}\mathrm{d}u,\tag8 \end{align}$$

where we used the substitution $x\leadsto1/u$. Combining $(7)$ and $(8)$ yields

$$\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x=\frac{\pi^2}{8}.\tag{9}$$

By expanding the denominator of the integrand in $(10)$ into a geometric series and using the Monotone Convergence Theorem,

$$\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x=\int_0^1\frac{-\ln x}{1-x^2}\mathrm{d}x=\sum_{n\ge0}\int_0^1(-x^{2n}\ln x)\mathrm{d}x.\tag{10}$$

Using integration by parts one can see that

$$\int_0^1(-x^{2n}\ln x)\mathrm{d}x=\left[-\frac{x^{2n+1}}{2n+1}\ln x\right]^1_0+\int_0^1\frac{x^{2n}}{2n+1}\mathrm{d}x=\frac{1}{(2n+1)^2}\tag{11}$$

Hence from $(10)$, and $(11)$

$$\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x=\sum_{n\geq0}\frac{1}{(2n+1)^2},\tag{12}$$

which finishes the proof. $$\tag*{$\square$}$$
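Equation $(9)$ lends itself to a direct numerical check (midpoint rule; the integrand extends continuously to $x=1$ with value $\tfrac12$, and the logarithmic singularity at $x=0$ is integrable):

```python
import math

# Midpoint-rule check of (9): integral_0^1 ln(x)/(x^2 - 1) dx = pi^2/8.
M = 400000
h = 1.0 / M
val = h * sum(math.log((j + 0.5) * h) / (((j + 0.5) * h) ** 2 - 1)
              for j in range(M))
print(val, math.pi ** 2 / 8)
```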

References:

Daniele Ritelli (2013), Another Proof of $\zeta(2)=\frac{\pi^2}{6}$ Using Double Integrals, The American Mathematical Monthly, Vol. 120, No. 7, pp. 642-645

T. Apostol (1983), A proof that Euler missed: Evaluating $\zeta(2)$ the easy way, Math. Intelligencer 5, pp. 59–60, available at http://dx.doi.org/10.1007/BF03026576.

  • This seems to be a reordering of Mike Spivey (http://math.stackexchange.com/users/2370/mike-spivey), Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$, URL (version: 2011-08-13): http://math.stackexchange.com/q/57301 – Job Bouwman Nov 26 '16 at 14:10
23

I saw this proof in an extract of the College Mathematics Journal.

Consider the integral $I = \int_0^{\pi/2}\ln(2\cos x)\,dx$.

From $2\cos(x) = e^{ix} + e^{-ix}$ , we have:

$$\int_0^{\pi/2}\ln\left(e^{ix} + e^{-ix}\right)dx = \int_0^{\pi/2}\ln\left(e^{ix}(1 + e^{-2ix})\right)dx=\int_0^{\pi/2}ixdx + \int_0^{\pi/2}\ln(1 + e^{-2ix})dx$$

The Taylor series expansion of the logarithm is $\ln(1+x)=x -\frac{x^2}{2} +\frac{x^3}{3}-\cdots$

Thus , $\ln(1+e^{-2ix}) = e^{-2ix}- \frac{e^{-4ix}}{2} + \frac{e^{-6ix}}{3} - \cdots $, then for $I$ :

$$I = \frac{i\pi^2}{8}+\left[-\frac{e^{-2ix}}{2i}+\frac{e^{-4ix}}{2\cdot 4i}-\frac{e^{-6ix}}{3\cdot 6i}+\cdots\right]_0^{\pi/2}$$

$$I = \frac{i\pi^2}{8}-\frac{1}{2i}\left[\frac{e^{-2ix}}{1^2}-\frac{e^{-4ix}}{2^2}+\frac{e^{-6ix}}{3^2}-\cdots\right]_0^{\pi/2}$$

Evaluating at the limits (using $e^{-ik\pi}=(-1)^k$) gives

$$I = \frac{i\pi^2}{8}-\frac{1}{2i}\left[\frac{-2}{1^2}-\frac{0}{2^2}+\frac{-2}{3^2}-\cdots\right]$$

Hence

$$\int_0^{\pi/2}\ln(2\cos x)dx=\frac{i\pi^2}{8}-i\sum_{k=0}^\infty \frac{1}{(2k+1)^2}$$

So now we have a real integral equal to a purely imaginary number; this is only possible if both are zero, so the value of the integral is zero and the bracketed coefficient $\frac{\pi^2}{8}-\sum_{k=0}^\infty \frac{1}{(2k+1)^2}$ vanishes.

Thus, $\sum_{k=0}^\infty \frac{1}{(2k+1)^2}=\frac{\pi^2}{8}$

Now let $\sum_{k=1}^\infty \frac{1}{k^2}=E$. We get $\sum_{k=0}^\infty \frac{1}{(2k+1)^2}=\frac{3}{4} E$

And as a result $$\sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}$$
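The key step, that the real integral $I=\int_0^{\pi/2}\ln(2\cos x)\,dx$ vanishes, can be checked numerically (midpoint rule; the logarithmic singularity at $x=\pi/2$ is integrable):

```python
import math

# Midpoint-rule check that integral_0^{pi/2} ln(2 cos x) dx = 0.
M = 400000
h = (math.pi / 2) / M
I = h * sum(math.log(2 * math.cos((j + 0.5) * h)) for j in range(M))
print(I)  # close to 0
```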

John Bentin
  • 18,454
Meadara
  • 601
22

This popped up in some reading I'm doing for my research, so I thought I'd contribute! It's a more general twist on the usual pointwise-convergent Fourier series argument.


Consider the eigenvalue problem for the negative Laplacian $\mathcal L$ on $[0,1]$ with Dirichlet boundary conditions; that is, $\mathcal L f:=-f''$, and we seek solutions of $-f_n'' = \lambda_n f_n$ with $f_n(0) = f_n(1) = 0$. By inspection (the eigenfunctions are $f_n(x)=\sqrt{2}\sin(n\pi x)$), the admissible eigenvalues are $\lambda_n = n^2\pi^2$ for $n=1,2,\ldots$

One can verify that the integral operator $\mathcal Gf(x) = \int_0^1 G(x,y)f(y)\,dy$, where $$G(x,y) = \min(x,y) - xy = \frac{1}{2}\left( -|x-y| + x(1-y) + y(1-x) \right)~~,$$ inverts the negative Laplacian, in the sense that $\mathcal L \mathcal G f = \mathcal G \mathcal L f = f$ on the admissible class of functions (twice weakly differentiable, satisfying the boundary conditions). That is, $G$ is the Green's function for the Dirichlet Laplacian. Because $\mathcal G$ is a self-adjoint, compact operator, we can form an orthonormal basis for $L^2([0,1])$ from its eigenfunctions, and so may express its trace in two ways: $$ \sum_n \langle f_n,\mathcal G f_n\rangle = \sum_n \frac{1}{\lambda_n} $$ and $$\sum_n \langle f_n,\mathcal G f_n\rangle = \int_0^1 \sum_n f_n(x) \langle G(x,\cdot),f_n\rangle\,dx = \int_0^1 G(x,x)\,dx~~.$$

The latter quantity is $$ \int_0^1 x(1-x)\,dx = \frac 1 2 - \frac 1 3 = \frac 1 6~~.$$

Hence, we have that $$\sum_n \frac 1 {n^2\pi^2} = \frac 1 6~~\text{, or}~~ \sum_n \frac 1 {n^2} = \frac {\pi^2} 6~~.$$
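A numerical illustration (midpoint rule) that $G(x,y)=\min(x,y)-xy$ really does invert the operator on an eigenfunction, and that the trace integral equals $\tfrac16$:

```python
import math

# Check (G f_n)(x0) = f_n(x0)/(n pi)^2 for f_n = sin(n pi x), and the trace.
def G(x, y):
    return min(x, y) - x * y

M = 20000
h = 1.0 / M
pts = [(j + 0.5) * h for j in range(M)]

n, x0 = 3, 0.3
Gf = h * sum(G(x0, y) * math.sin(n * math.pi * y) for y in pts)
print(Gf, math.sin(n * math.pi * x0) / (n * math.pi) ** 2)

trace = h * sum(G(t, t) for t in pts)  # integral of x(1-x) over [0,1]
print(trace, 1 / 6)
```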

22

Here is Euler's Other Proof by Gerald Kimble

\begin{align*} \frac{\pi^2}{6}&=\frac{4}{3}\frac{(\arcsin 1)^2}{2}\\ &=\frac{4}{3}\int_0^1\frac{\arcsin x}{\sqrt{1-x^2}}\,dx\\ &=\frac{4}{3}\int_0^1\frac{x+\sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!}\frac{x^{2n+1}}{2n+1}}{\sqrt{1-x^2}}\,dx\\ &=\frac{4}{3}\int_0^1\frac{x}{\sqrt{1-x^2}}\,dx +\frac{4}{3}\sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!(2n+1)}\int_0^1x^{2n}\frac{x}{\sqrt{1-x^2}}\,dx\\ &=\frac{4}{3}+\frac{4}{3}\sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!(2n+1)}\left[\frac{(2n)!!}{(2n+1)!!}\right]\\ &=\frac{4}{3}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}\\ &=\frac{4}{3}\left(\sum_{n=1}^{\infty}\frac{1}{n^2}-\frac{1}{4}\sum_{n=1}^{\infty}\frac{1}{n^2}\right)\\ &=\sum_{n=1}^{\infty}\frac{1}{n^2} \end{align*}
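A quick numerical check (not part of the proof) of the double-factorial series for $\arcsin$ used in the third line above, at a test point:

```python
import math

# arcsin(x) = x + sum_{n>=1} ((2n-1)!!/(2n)!!) * x^(2n+1)/(2n+1)
def arcsin_series(x, N=200):
    total, ratio = x, 1.0  # ratio holds (2n-1)!!/(2n)!!
    for n in range(1, N + 1):
        ratio *= (2 * n - 1) / (2 * n)
        total += ratio * x ** (2 * n + 1) / (2 * n + 1)
    return total

val = arcsin_series(0.5)
print(val, math.asin(0.5))  # both about pi/6
```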

Markus Scheuer
  • 108,315
19

I have another method as well. From skimming the previous solutions, I don't think it is a duplicate of any of them.

In complex analysis, we learn that $\sin(\pi z) = \pi z\prod_{n=1}^{\infty}\Big(1 - \frac{z^2}{n^2}\Big)$, an entire function with simple zeros at the integers. We can differentiate term by term thanks to uniform convergence. So by logarithmic differentiation we obtain a series for $\pi\cot(\pi z)$. $$ \frac{d}{dz}\ln(\sin(\pi z)) = \pi\cot(\pi z) = \frac{1}{z} - 2z\sum_{n=1}^{\infty}\frac{1}{n^2 - z^2} $$ Therefore, $$ -\sum_{n=1}^{\infty}\frac{1}{n^2 - z^2} = \frac{\pi\cot(\pi z) - \frac{1}{z}}{2z} $$ We can expand $\pi\cot(\pi z)$ as $$ \pi\cot(\pi z) = \frac{1}{z} - \frac{\pi^2}{3}z - \frac{\pi^4}{45}z^3 - \cdots $$ Thus, \begin{align} \frac{\pi\cot(\pi z) - \frac{1}{z}}{2z} &= \frac{- \frac{\pi^2}{3}z - \frac{\pi^4}{45}z^3-\cdots}{2z}\\ -\sum_{n=1}^{\infty}\frac{1}{n^2 - z^2}&= -\frac{\pi^2}{6} - \frac{\pi^4}{90}z^2 - \cdots\\ -\lim_{z\to 0}\sum_{n=1}^{\infty}\frac{1}{n^2 - z^2}&= \lim_{z\to 0}\Big(-\frac{\pi^2}{6} - \frac{\pi^4}{90}z^2 - \cdots\Big)\\ -\sum_{n=1}^{\infty}\frac{1}{n^2}&= -\frac{\pi^2}{6}\\ \sum_{n=1}^{\infty}\frac{1}{n^2}&= \frac{\pi^2}{6} \end{align}
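The Laurent expansion of $\pi\cot(\pi z)$ used above can be sanity-checked numerically at a small argument:

```python
import math

# Check pi*cot(pi z) - 1/z = -(pi^2/3) z - (pi^4/45) z^3 - ... for small z.
z = 0.01
lhs = math.pi / math.tan(math.pi * z) - 1 / z
rhs = -(math.pi ** 2 / 3) * z - (math.pi ** 4 / 45) * z ** 3
print(lhs, rhs)  # agree up to the next term of order z^5
```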

dustin
  • 8,241
19

Consider the function $\pi \cot(\pi z)$, which has simple poles at the integers $z=n$. Using L'Hôpital's rule you can see that the residue at each of these poles is $1$.

Now consider the integral $\int_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz$, where the contour $\gamma_N$ is the square with corners $\pm(N + 1/2) \pm i(N + 1/2)$, so that the contour avoids the poles of $\cot(\pi z)$. The integral is bounded in the following way: $\left|\int_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz\right|\le \max_{\gamma_N} \left|\frac{\pi\cot(\pi z)}{z^2}\right| \cdot \operatorname{length}(\gamma_N)$. It can easily be shown that on the contour $\gamma_N$ we have $|\pi \cot(\pi z)|< M$ for some constant $M$ independent of $N$. Then we have

$$\left|\int_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz\right|\le M \max_{\gamma_N}\left|\frac{1}{z^2}\right| \operatorname{length}(\gamma_N) = \frac{(8N+4)\,M}{(N+1/2)^2},$$

where $8N+4$ is the length of the contour and $N+1/2$ is the distance from the origin to the nearest point of $\gamma_N$ (the midpoint of a side), where $|1/z^2|$ is largest. As $N\to\infty$ this bound tends to $0$, so $\lim_{N\to\infty}\int_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz =0$.

By the Cauchy residue theorem we have $2\pi i\operatorname{Res}(z = 0) + 2\pi i\sum \operatorname{Res}(z\ne 0) = 0$. At $z=0$ we have $\operatorname{Res}(z=0)=-\frac{\pi^2}{3}$, and $\operatorname{Res}(z=n)=\frac{1}{n^2}$, so we have

$$\operatorname{Res}(z = 0) + \sum \operatorname{Res}(z\ne 0) = -\frac{\pi^2}{3}+2\sum_{n=1}^{\infty} \frac{1}{n^2} =0,$$

where the factor $2$ in front of the sum appears because the residue $\frac{1}{n^2}$ occurs twice, at $z=\pm n$.

We now have the desired result $\sum_{1}^{\infty} \frac{1}{n^2}=\frac{\pi^2}{6}$.

  • can you please explain why you divide $M \cdot \text{Length}(\gamma_{N})$ by half the diagonal of $\gamma_{N}$? The only thing I can think of is that it's some kind of bounds on $Max \vert \frac{1}{z^{2}} \vert$. But if you could explain it that would be great. –  May 04 '16 at 23:15
16

I would like to present a method I found recently here.

Let $A_n=\int_0^{\pi/2}\cos^{2n}x\;\mathrm{d}x$ and $B_n=\int_0^{\pi/2}x^2\cos^{2n}x\;\mathrm{d}x$.

The first integral is well known; integration by parts gives the recurrence relation:

$$A_{n}=\frac{2n-1}{2n}A_{n-1}\tag{1}$$

Integrating by parts in the second integral:

$$A_n=\int_0^{\pi/2}\cos^{2n}x\;\mathrm{d}x=x\cos^{2n}x\bigg{|}_0^{\pi/2}-\frac{x^2}{2}(\cos^{2n}x)'\bigg{|}_0^{\pi/2}+\frac{1}{2}\int_0^{\pi/2}x^2(\cos^{2n}x)''\;\mathrm{d}x$$

The first two terms vanish, so we are left only with the integral, and since $(\cos^{2n}x)''=2n(2n-1)\cos^{2n-2}x-4n^2\cos^{2n}x$ we have:

$$A_n=(2n-1)nB_{n-1}-2n^2B_{n}\tag{2}$$

for $n\geq 1$. Rearranging and substituting $(2n-1)=2n\frac{A_n}{A_{n-1}}$ from $(1)$ we get:

$$\frac{1}{n^2}=2\frac{B_{n-1}}{A_{n-1}}-2\frac{B_n}{A_n}\tag{3}$$

Summing from $1$ to a natural number $k$, we get by the telescoping property

$$\sum_{n=1}^k\frac{1}{n^2}=2\frac{B_0}{A_0}-2\frac{B_k}{A_k}=\frac{\pi^2}{6}-2\frac{B_k}{A_k}\tag{4}$$

Next, using the inequality $\sin x\geq \frac{2x}{\pi}$ on $(0,\frac{\pi}{2})$ and by $(1)$ :

$$\frac{4}{\pi^2}B_{n-1}=\frac{4}{\pi^2}\int_0^{\pi/2}x^2\cos^{2n-2}x\;\mathrm{d}x<\int_0^{\pi/2}\sin^2x\cos^{2n-2}x\;\mathrm{d}x=A_{n-1}-A_n=\frac{A_{n-1}}{2n}$$

so in the limit the last term vanishes by the squeeze theorem, and we are left with

$$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}\tag{5}$$

This concludes the proof.
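The telescoping identity $(3)$ can be verified numerically by computing $A_n$ and $B_n$ with a simple midpoint rule:

```python
import math

# Check 1/n^2 = 2*B_{n-1}/A_{n-1} - 2*B_n/A_n for a few n.
M = 50000
h = (math.pi / 2) / M
pts = [(j + 0.5) * h for j in range(M)]

def A(n):
    return h * sum(math.cos(x) ** (2 * n) for x in pts)

def B(n):
    return h * sum(x * x * math.cos(x) ** (2 * n) for x in pts)

diffs = {n: 2 * B(n - 1) / A(n - 1) - 2 * B(n) / A(n) for n in (1, 2, 5)}
for n, d in diffs.items():
    print(n, d, 1 / n ** 2)
```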

Machinato
  • 2,883
  • I think you mean $\sin x\geq 2x/\pi$. But this is a clever approach. Another reason to appreciate integration by parts. –  Jun 09 '17 at 00:51
  • Thanks for sharing, very easy to follow and totally new to me. Fixed the typo mentioned by @Bryan. – AD - Stop Putin - Jun 09 '17 at 10:11
16

Applying the usual trick 1 transforming a series to an integral, we obtain

$$\sum_{n=1}^\infty\frac1{n^2}=\int_0^1\int_0^1\frac{dxdy}{1-xy}$$

where we use the Monotone Convergence Theorem to integrate term-wise.

Then there's this ingenious change of variables 2, which I learned from Don Zagier during a lecture, and which he in turn got from a colleague:

$$(x,y)=\left(\frac{\cos v}{\cos u},\frac{\sin u}{\sin v}\right),\quad0\leq u\leq v\leq \frac\pi2$$

One verifies that it is bijective between the rectangle $[0,1]^2$ and the triangle $0\leq u\leq v\leq \frac\pi2$, and that its Jacobian determinant is precisely $1-x^2y^2$, which means $\frac1{1-x^2y^2}$ would be a neater integrand. For the moment, we have found

$$J=\int_0^1\int_0^1\frac{dxdy}{1-x^2y^2}=\frac{\pi^2}8$$ (the area of the triangular domain in the $(u,v)$ plane).


There are two ways to transform $\int\frac1{1-xy}$ into something $\int\frac1{1-x^2y^2}$ish:

  • Manipulate $S=\sum_{n=1}^\infty\frac1{n^2}$: We have $\sum_{n=1}^\infty\frac1{(2n)^2}=\frac14S$ so $\sum_{n=0}^\infty\frac1{(2n+1)^2}=\frac34S$. Applying the series-integral transformation, we get $\frac34S=J$ so $$S=\frac{\pi^2}6$$

  • Manipulate $I=\int_0^1\int_0^1\frac{dxdy}{1-xy}$: Substituting $(x,y)\leftarrow(x^2,y^2)$ we have $I=\int_0^1\int_0^1\frac{4xydxdy}{1-x^2y^2}$ so $$J=\int_0^1\int_0^1\frac{dxdy}{1-x^2y^2}=\int_0^1\int_0^1\frac{(1+xy-xy)dxdy}{1-x^2y^2}=I-\frac14I$$ whence $$I=\frac43J=\frac{\pi^2}6$$

(It may be seen that they are essentially the same methods.)
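One can verify the claimed Jacobian determinant $1-x^2y^2$ of the change of variables numerically, e.g. by finite differences at a sample interior point:

```python
import math

# Finite-difference Jacobian of (x, y) = (cos v / cos u, sin u / sin v).
def xy(u, v):
    return math.cos(v) / math.cos(u), math.sin(u) / math.sin(v)

u, v, eps = 0.4, 0.9, 1e-6
x, y = xy(u, v)
dxu = (xy(u + eps, v)[0] - xy(u - eps, v)[0]) / (2 * eps)
dxv = (xy(u, v + eps)[0] - xy(u, v - eps)[0]) / (2 * eps)
dyu = (xy(u + eps, v)[1] - xy(u - eps, v)[1]) / (2 * eps)
dyv = (xy(u, v + eps)[1] - xy(u, v - eps)[1]) / (2 * eps)
jac = dxu * dyv - dxv * dyu
print(jac, 1 - x ** 2 * y ** 2)
```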


After looking at the comments it seems that this looks a lot like Proof 2 in the article by R. Chapman.

See also: Multiple Integral $\int\limits_0^1\!\!\int\limits_0^1\!\!\int\limits_0^1\!\!\int\limits_0^1\frac1{1-xyuv}\,dx\,dy\,du\,dv$

1 See e.g. Proof 1 in Chapman's article.
2 It may have been a different one; maybe as in the above article. Either way, the idea to do something trigonometric was not mine.

Bart Michels
  • 26,355
  • 2
    The second proof is by Beukers, Kolk and Calabi, which is in here https://pdfs.semanticscholar.org/35be/01e63c0bfd32b82c97d58ccc9c35471c3617.pdf – Vivek Kaushik Jun 28 '17 at 15:19
15

This is by no means the best nor the simplest approach, but I think the approach is pretty peculiar.

We estimate the number $N(x)$ of integer solutions to $a^2+b^2+c^2+d^2\leq x$ as $x\rightarrow\infty$. On one hand, this is the number of lattice points inside the $4$-ball of radius $\sqrt{x}$, which has volume $\frac{1}{2}\pi^2x^2$, hence $N(x)=\frac{\pi^2}{2}x^2+O(x^{3/2})$.

On the other hand, let $r_4(n)$ be the number of solutions to $a^2+b^2+c^2+d^2=n$. Following the derivation in the book by Iwaniec-Kowalski, by Jacobi's four-square identity we can write $$N(x)=\sum_{n\leq x}r_4(n)=8\sum_{m\leq x}(2+(-1)^m)\sum_{dm\leq x,d\text{ odd}} d \\ =8\sum_{m\leq x}(2+(-1)^m)\left(\frac{x^2}{4m^2}+O\left(\frac{x}{m}\right)\right)\\ =2x^2\sum_{m\leq x}(2+(-1)^m)m^{-2}+O(x\log x)\\ =3x^2\zeta(2)+O(x\log x)$$ (I have copied the steps as they were in the book, it's a neat exercise to justify every transition). In particular, we have $$\zeta(2)=\lim\limits_{x\rightarrow\infty}\frac{N(x)}{3x^2}=\frac{\pi^2}{6}.$$
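A brute-force numerical illustration (small $x$, so agreement is only approximate): count the lattice points directly and compare $N(x)/(3x^2)$ with $\pi^2/6$:

```python
import math

# Count integer solutions of a^2 + b^2 + c^2 + d^2 <= x.
def N(x):
    r = math.isqrt(x)
    count = 0
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            s = a * a + b * b
            if s > x:
                continue
            for c in range(-r, r + 1):
                t = s + c * c
                if t > x:
                    continue
                count += 2 * math.isqrt(x - t) + 1  # admissible d's
    return count

x = 2000
ratio = N(x) / (3 * x ** 2)
print(ratio, math.pi ** 2 / 6)
```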

Chappers
  • 67,606
Wojowu
  • 26,600
  • (+1) I wonder if one can do the same by only exploiting the fact that the average value of $r_2(n)$ is $\pi$ by Gauss circle problem. – Jack D'Aurizio Nov 09 '17 at 04:50
14

See evaluations of the Riemann zeta function $\zeta(2)=\sum_{n=1}^\infty\frac{1}{n^2}$ at mathworld.wolfram.com and a solution by D. P. Giesy in Mathematics Magazine:

D. P. Giesy, Still another elementary proof that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$, Math. Mag. 45 (1972) 148–149.

Unfortunately I could not find a link to this article, but there is a link to a note by Robin Chapman that seems to me to be a variation of Giesy's proof.

Elias Costa
  • 14,658
13

There is a simple way of proving that $\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$ using the following well-known series identity: $$\left(\sin^{-1}(x)\right)^{2} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{(2x)^{2n}}{n^2 \binom{2n}{n}}.$$ Substituting $x \mapsto \sin(x)$, we have that $$x^2 = \frac{1}{2}\sum_{n=1}^{\infty}\frac{(2 \sin(x))^{2n}}{n^2 \binom{2n}{n}}$$ for $x \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$. Integrating over $\left[0, \frac{\pi}{2}\right]$, and using the symmetry $\int_{0}^{\pi/2} \left(\sin(x)\right)^{2n} dx = \frac{1}{2}\int_{0}^{\pi} \left(\sin(x)\right)^{2n} dx$, we thus have that $$\frac{\pi^3}{24} = \int_{0}^{\pi/2} x^2 dx = \frac{1}{4}\sum_{n=1}^{\infty}\frac{\int_{0}^{\pi} (2 \sin(x))^{2n} dx}{n^2 \binom{2n}{n}}, \quad\text{i.e.}\quad \frac{\pi^3}{12} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{\int_{0}^{\pi} (2 \sin(x))^{2n} dx}{n^2 \binom{2n}{n}}.$$ Since $$\int_{0}^{\pi} \left(\sin(x)\right)^{2n} dx = \frac{\sqrt{\pi} \ \Gamma\left(n + \frac{1}{2}\right)}{\Gamma(n+1)},$$ we thus have that: $$\frac{\pi^3}{12} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{ 4^{n} \frac{\sqrt{\pi} \ \Gamma\left(n + \frac{1}{2}\right)}{\Gamma(n+1)} }{n^2 \binom{2n}{n}}.$$ Since $\binom{2n}{n} = \frac{4^n \Gamma\left(n+\frac{1}{2}\right)}{\sqrt{\pi}\,\Gamma(n+1)}$, the summand simplifies, giving $$\frac{\pi^3}{12} = \frac{1}{2}\sum_{n=1}^{\infty}\frac{\pi}{n^2},$$ and we thus have that $\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$ as desired.
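As a quick numerical check of the series identity used above (an addition, not part of the argument), one can compare both sides at $x=1/2$:

```python
import math

x = 0.5
lhs = math.asin(x)**2  # (arcsin x)^2
# partial sum of (1/2) * sum (2x)^(2n) / (n^2 * C(2n, n)); at x = 1/2 the
# terms decay roughly like 4^(-n), so 40 terms are far more than enough
rhs = 0.5 * sum((2*x)**(2*n) / (n**2 * math.comb(2*n, n)) for n in range(1, 41))
print(lhs, rhs)
```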

13

By using the Fourier series of $f(x)=1,\ x\in[0,1]$: $$1=\sum_{n=1}^\infty\frac{4}{(2n-1)\pi}\sin (2n-1)\pi x$$ Integrate both sides from $x=0$ to $x=1$: $$\int_{0}^{1}1\,dx=\int_{0}^{1} \sum_{n=1}^\infty\frac{4}{(2n-1)\pi}\sin (2n-1)\pi x\, dx$$ $$1=\sum_{n=1}^\infty\frac{8}{(2n-1)^2\pi^2}$$ $$\sum_{n=1}^\infty\frac{1}{(2n-1)^2}=\frac{\pi^2}{8}$$ Then we use the splitting $$\sum_{n=1}^\infty\frac{1}{n^2}=\sum_{n=1}^\infty\frac{1}{(2n-1)^2}+\sum_{n=1}^\infty\frac{1}{(2n)^2}$$ and simplify it to get $$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{4}{3}\sum_{n=1}^\infty\frac{1}{(2n-1)^2}$$ so, $$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{4}{3}\cdot\frac{\pi^2}{8}=\frac{\pi^2}{6}$$
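A quick numerical check of the two partial results (an addition, not part of the proof):

```python
import math

# sum over odd reciprocal squares, then the 4/3 rescaling derived above
odd = sum(1 / (2*n - 1)**2 for n in range(1, 100001))
print(odd, math.pi**2 / 8)
print(4 * odd / 3, math.pi**2 / 6)
```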

E.H.E
  • 23,280
13

$$ \begin{align} \log(2\cos(x)) &=\log\left(e^{ix}+e^{-ix}\right)\tag{1a}\\ &=ix+\log\left(1+e^{-2ix}\right)\tag{1b}\\ &=-ix+\log\left(1+e^{2ix}\right)\tag{1c}\\ &=\cos(2x)-\frac{\cos(4x)}2+\frac{\cos(6x)}3-\cdots\tag{1d} \end{align} $$ Explanation:
$\text{(1a)}$: $2\cos(x)=e^{ix}+e^{-ix}$
$\text{(1b)}$: factor out $e^{ix}$
$\text{(1c)}$: factor out $e^{-ix}$
$\text{(1d)}$: average $\text{(1b)}$ and $\text{(1c)}$ using the power series for $\log(1+x)$ $$ \begin{align} \sum_{k=1}^\infty\frac1{k^2} &=\frac1{2\pi}\int_0^{2\pi}\sum_{k=1}^\infty\frac{e^{ikx}}k\sum_{k=1}^\infty\frac{e^{-ikx}}k\,\mathrm{d}x\tag{2a}\\ &=\frac1{2\pi}\int_0^{2\pi}\left|\log(1-e^{ix})\right|^2\,\mathrm{d}x\tag{2b}\\ &=\frac1{2\pi}\int_{-\pi}^\pi\left|\log(1+e^{ix})\right|^2\,\mathrm{d}x\tag{2c}\\ &=\frac1{2\pi}\int_{-\pi}^\pi\left|\,\log\left(2\cos\left(\frac x2\right)\right)+\frac{ix}2\,\right|^{\,2}\,\mathrm{d}x\tag{2d}\\ &=\frac1{2\pi}\int_{-\pi}^\pi\left(\log\left(2\cos\left(\frac x2\right)\right)^2+\frac{x^2}4\right)\,\mathrm{d}x\tag{2e}\\ &=\frac{\pi^2}{12}+\frac1{2\pi}\int_{-\pi}^\pi\left(\cos(x)-\frac{\cos(2x)}2+\frac{\cos(3x)}3-\cdots\right)^2\,\mathrm{d}x\tag{2f}\\ &=\frac{\pi^2}{12}+\frac12\sum_{k=1}^\infty\frac1{k^2}\tag{2g}\\ &=\frac{\pi^2}6\tag{2h} \end{align} $$ Explanation:
$\text{(2a)}$: use the orthogonality of $e^{ijx}$ and $e^{ikx}$ when $j\ne k$
$\text{(2b)}$: use the power series for $\log(1+x)$
$\text{(2c)}$: substitute $x\mapsto x+\pi$
$\text{(2d)}$: $1+e^{ix}=2\cos(x/2)e^{ix/2}$
$\text{(2e)}$: $\left|\,x+iy\,\right|^2=x^2+y^2$
$\text{(2f)}$: apply $(1)$
$\text{(2g)}$: use the orthogonality of $\cos(jx)$ and $\cos(kx)$ for $j\ne k$
$\text{(2h)}$: subtract the original from twice $\text{(2g)}$
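A quick numerical check of the expansion $\text{(1d)}$ (an addition, not part of the proof): partial sums of $\sum_{k\ge1}(-1)^{k+1}\cos(2kx)/k$ should approach $\log(2\cos(x))$ for $|x|<\pi/2$.

```python
import math

x = 0.5
lhs = math.log(2 * math.cos(x))
# the series converges only conditionally, so take many terms
rhs = sum((-1)**(k + 1) * math.cos(2*k*x) / k for k in range(1, 1000001))
print(lhs, rhs)
```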

robjohn
  • 345,667
12

Another proof I have (re?)discovered.

I want to prove that,

$\displaystyle J:=\int_0^1 \frac{\ln(1+x)}{x}dx=\frac{\pi^2}{12}$

Let $f$, be a function, such that, for $s\in[0;1]$,

$\displaystyle f(s)=\int_0^{\frac{\pi}{2}} \arctan\left(\frac{\sin t}{\cos t+s}\right)\,dt$

Observe that,

$\begin{align} f(0)&=\int_0^{\frac{\pi}{2}}\arctan\left(\frac{\sin t}{\cos t}\right)\,dt\\ &=\int_0^{\frac{\pi}{2}} t\,dt\\ &=\left[\frac{t^2}{2}\right]_0^{\frac{\pi}{2}}\\ &=\frac{\pi^2}{8} \end{align}$

For $t$ in $\left[0,\frac{\pi}{2}\right]$,

$\begin{align} \frac{\sin t}{\cos t+1}&=\frac{2\sin\left(\frac{t}{2}\right)\cos\left(\frac{t}{2}\right)}{\cos^2\left(\frac{t}{2}\right)-\sin^2\left(\frac{t}{2}\right)+1}\\ &=\frac{2\sin\left(\frac{t}{2}\right)\cos\left(\frac{t}{2}\right)}{2\cos^2\left(\frac{t}{2}\right)}\\ &=\tan\left(\frac{t}{2}\right) \end{align}$

Therefore,

$\begin{align} f(1)&=\int_0^{\frac{\pi}{2}}\arctan\left(\frac{\sin t}{\cos t+1}\right)\,dt\\ &=\int_0^{\frac{\pi}{2}}\arctan\left(\tan\left(\frac{t}{2}\right)\right)\,dt\\ &=\int_0^{\frac{\pi}{2}} \frac{t}{2}\,dt\\ &=\left[\frac{t^2}{4}\right]_0^{\frac{\pi}{2}}\\ &=\frac{\pi^2}{16} \end{align}$

For $s$ in $[0,1]$,

$\begin{align} f^\prime(s)&=-\int_0^{\frac{\pi}{2}}\frac{\sin t}{1+2s\cos t+s^2}\,dt\\ &=\left[\frac{\ln(1+2s\cos t+s^2)}{2s}\right]_0^{\frac{\pi}{2}}\\ &=\frac{1}{2}\frac{\ln\left(1+s^2\right)}{s}-\frac{\ln\left(1+s\right)}{s} \end{align}$

Therefore,

$\begin{align} f(1)-f(0)&=\int_0^1 f^\prime(s)ds\\ &=\frac{1}{2}\int_0^1\frac{\ln\left(1+s^2\right)}{s}\,ds-\int_0^1 \frac{\ln\left(1+s\right)}{s}\,ds\\ \end{align}$

In the first integral perform the change of variable $y=s^2$, therefore,

$\displaystyle f(1)-f(0)=-\frac{3}{4}J$

But,

$\begin{align} f(1)-f(0)&=\frac{\pi^2}{16}-\frac{\pi^2}{8}\\ &=-\frac{\pi^2}{16} \end{align}$

Therefore,

$\boxed{\displaystyle J=\frac{\pi^2}{12}}$

PS:

To obtain the value of $J$ knowing that $\displaystyle \zeta(2)=-\int_0^1 \frac{\ln(1-x)}{x}dx$

$\begin{align} \int_0^1 \frac{\ln(1+t)}{t}\,dt+\int_0^1 \frac{\ln(1-t)}{t}\,dt=\int_0^1 \frac{\ln(1-t^2)}{t}\,dt \end{align}$

Perform the change of variable $y=t^2$ in RHS integral,

$\begin{align} \int_0^1 \frac{\ln(1+t)}{t}\,dt+\int_0^1 \frac{\ln(1-t)}{t}\,dt=\frac{1}{2}\int_0^1 \frac{\ln(1-t)}{t}\,dt \end{align}$

Therefore,

$\begin{align} \int_0^1 \frac{\ln(1+t)}{t}\,dt=-\frac{1}{2}\int_0^1 \frac{\ln(1-t)}{t}\,dt \end{align}$

$\boxed{\displaystyle \int_0^1 \frac{\ln(1+t)}{t}\,dt=\frac{1}{2}\zeta(2)}$
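As a numerical sanity check (an addition, not part of the answer), the boxed value $J=\frac{1}{2}\zeta(2)=\frac{\pi^2}{12}$ can be compared with both the alternating series $\sum(-1)^{n+1}/n^2$ (obtained by expanding $\ln(1+x)$ in the integrand) and a direct midpoint-rule quadrature of the integral:

```python
import math

series = sum((-1)**(n + 1) / n**2 for n in range(1, 100001))
N = 100000  # midpoint rule for ∫₀¹ ln(1+x)/x dx
quad = sum(math.log1p((k + 0.5) / N) / ((k + 0.5) / N) for k in range(N)) / N
print(series, quad, math.pi**2 / 12)
```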

FDP
  • 13,647
  • 1
    I have never seen such a solution - well done! It reminds me of a similar trick (substitutions and taking the difference) as done in the evaluation of $\int_{0}^{\pi/2}\ln\sin x \;\mathrm{d}x=-\frac{\pi}{2}\ln 2$ – Machinato Aug 25 '17 at 12:15
10

Here is an interesting solution that evaluates three sums, one of which is $\zeta(2).$

Let us start with the double integral

\begin{equation} \tag{1}\label{Double Integral} \int_{0}^{1} \int_{0}^{\sqrt{1-x^2}} \frac{1}{\sqrt{x^2+y^2} \ (1+x^2+y^2)} \ dy \ dx. \end{equation} A quick polar coordinates transformation $x=r \cos(\theta), y= r \sin(\theta)$ transforms \eqref{Double Integral} into $$\int_{0}^{\frac{\pi}{2}}\int_{0}^{1} \frac{1}{1+r^2} \ dr \ d \theta=\frac{\pi^2}{8}.$$

Hence, \eqref{Double Integral} is equal to $\frac{\pi^2}{8}.$

Now integrate \eqref{Double Integral} with respect to $y$ using the fact

$$\int \frac{1}{\sqrt{x^2+y^2} (1+x^2+y^2)} \ dy = \frac{\tanh^{-1} \left( \frac{y}{\sqrt{1+x^2}\sqrt{x^2+y^2}} \right)}{\sqrt{1+x^2}}$$ to see that \eqref{Double Integral} becomes

\begin{equation} \tag{2} \label{arctanh} \int_{0}^{1} \frac{\tanh^{-1} \left( \frac{\sqrt{1-x^2}}{\sqrt{1+x^2}} \right)}{\sqrt{1+x^2}} \ dx. \end{equation}

Next, observe that \eqref{arctanh} is equal to the double integral

\begin{equation} \tag{3} \label{double integral 2} \int_{0}^{1} \int_{0}^{\sqrt{1-x^2}} \frac{1}{1+x^2-y^2} \ dy \ dx, \end{equation} which can be confirmed by integrating the inner integral of \eqref{double integral 2} with respect to $y.$ Now here comes the interesting part. Use polar coordinates with $x=r\cos(\theta),y=r\sin(\theta)$ on \eqref{double integral 2} and then with $x=r\sin(\theta),y=r\cos(\theta)$ on \eqref{double integral 2} and average the two to see that \eqref{double integral 2} is the same as

$$\frac{1}{2}\int_{0}^{\frac{\pi}{2}} \int_{0}^{1} \frac{r}{1+r^2\cos(2\theta)} \ dr \ d \theta + \frac{1}{2}\int_{0}^{\frac{\pi}{2}} \int_{0}^{1} \frac{r}{1-r^2\cos(2\theta)} \ dr \ d \theta \,$$ which simplifies down to \begin{align} \int_{0}^{\frac{\pi}{2}} \frac{\ln(1+\cos(2\theta))}{4\cos(2\theta)}-\frac{\ln(1-\cos(2\theta))}{4\cos(2\theta)} \ d \theta & = \int_{0}^{\frac{\pi}{2}} \frac{\ln \left(\frac{1+\cos(2\theta)}{1-\cos(2\theta)} \right)}{4\cos(2\theta)} \ d \theta\\ & = \int_{0}^{\frac{\pi}{2}} -\frac{\ln(\tan^2(\theta))}{4\cos(2\theta)} \ d \theta \tag{4} \label{double angle} \\ & = \int_{0}^{\infty} \frac{\ln(u)}{2(u^2-1)} \ du \tag{5} \label{pi^2/4} \end{align} with \eqref{double angle} following from simplifying the logarithmic term with the double angle formulas $$\sin^2(\theta)=\frac{1-\cos(2\theta)}{2}, \quad \cos^2(\theta)=\frac{1+\cos(2\theta)}{2},$$ and \eqref{pi^2/4} following from the substitution $u=\tan(\theta).$

Splitting \eqref{pi^2/4} into $$ \int_{0}^{1} \frac{\ln(u)}{2(u^2-1)} \ du + \int_{1}^{\infty} \frac{\ln(u)}{2(u^2-1)} \ du,$$ a substitution $u=\frac{1}{t}$ on the second term shows \eqref{pi^2/4} is equal to $$ 2 \int_{0}^{1} \frac{\ln(u)}{2(u^2-1)} \ du = \int_{0}^{1} \frac{\ln(u)}{u^2-1} \ du.$$ Hence, we have \begin{align} \tag{6} \label{pi^2/8} \int_{0}^{1} \frac{\ln(u)}{u^2-1} \ du = \frac{\pi^2}{8} \end{align}

Now following the other users' answers, convert the integrand in the left hand side of \eqref{pi^2/8} into a geometric series, apply the Monotone Convergence Theorem to interchange sum and integral, to see we have $$\sum_{n=0}^{\infty} \frac{1}{(2n+1)^2} = \frac{\pi^2}{8},$$ and observing \begin{align} \zeta(2) & =\sum_{n=1}^{\infty} \frac{1}{(2n)^2}+ \sum_{n=0}^{\infty} \frac{1}{(2n+1)^2} \\ & = \frac{1}{4} \zeta(2) + \frac{\pi^2}{8}, \end{align} we see $$\zeta(2)=\frac{\pi^2}{6}.$$

Those are the first two sums. We refer back to \eqref{arctanh}. Make the substitution $u=\sqrt{\frac{{1-x^2}}{{1+x^2}} }$ and simplify to see that \eqref{arctanh} becomes: \begin{equation} \tag{7} \label{complicated sub} \int_{0}^{1} \frac{\sqrt{2} u\tanh^{-1}(u)}{\sqrt{1-u^2}(1+u^2)}\ du. \end{equation} Substituting $u=\tanh(\theta)$ transforms \eqref{complicated sub} into \begin{align} \sqrt{2}\int_{0}^{\infty}\frac{\theta e^{\theta}(e^{2\theta}-1)}{e^{4\theta}+1}\ d\theta \end{align} and substituting $z=e^{\theta}$ shows that \eqref{arctanh} is the same as $$\sqrt{2}\int_{1}^{\infty}\frac{(z^2-1)\ln(z)}{z^4+1}\ dz,$$ and splitting the region of integration as with \eqref{pi^2/4} to get \eqref{pi^2/8}, we see \eqref{arctanh} is

\begin{equation} \tag{8} \label {crazy integral} \sqrt{2}\int_{0}^{1}\frac{(t^2-1)\ln(t)}{t^4+1}\ dt. \end{equation}

Expanding this integrand into a geometric series and integrating term by term, we see that \begin{align}\frac{\pi^2}{8} & =\sqrt{2}\left(\sum_{n=0}^{\infty}\frac{(-1)^n}{(4n+1)^2}-\frac{(-1)^n}{(4n+3)^2}\right) \\ & =\sqrt{2}\sum_{n=0}^{\infty}\frac{(-1)^n}{(4n+1)^2} +\sqrt{2} \sum_{n=-\infty}^{-1}\frac{(-1)^n}{(4n+1)^2} \\ & = \sqrt{2}\sum_{n=-\infty}^{\infty}\frac{(-1)^n}{(4n+1)^2}. \end{align} Thus, \begin{align} \sum_{n=-\infty}^{\infty}\frac{(-1)^n}{(4n+1)^2}=\frac{\pi^2}{8\sqrt{2}}, \end{align} which is the third sum.

10

Let $X$ be a Laplace random variable, $X\sim L(0,1)$, with density $\frac12 \exp{(-|x|)}$; then its characteristic function is:

$$\varphi_X(t)=\mathbb{E}[e^{itX}]=\frac{1}{1+t^2} \newcommand{\var}[1]{\mathrm{var}\left[#1\right]}$$

By symmetry, $\mathbb{E}[X]=0$, so we may write (in general):

$$\varphi_X(t)=\mathbb{E}[e^{itX}]=\mathbb{E}\left[1+itX-\frac{t^2X^2}{2}+\cdots\right]=1-\frac{\var{X}}{2}t^2+O(t^3)\tag{A}$$

since $\var{X}=\mathbb{E}[X^2]-\mathbb{E}[X]^2=\mathbb{E}[X^2]$. For our case: $$\frac{1}{1+t^2}=1-t^2+O(t^3) \rightarrow \var{X}=2$$ Now consider a set of such variables $X_n$, independent of each other, and construct a new random variable $Y$ as follows:

$$Y=\sum_{n=1}^{\infty}\frac{X_n}{n}$$

Then, taking the variance of both sides:

$$\var{Y}=\var{\sum_{n=1}^{\infty}\frac{X_n}{n}}=\sum_{n=1}^{\infty}\var{\frac{X_n}{n}}=\sum_{n=1}^{\infty}\frac{\var{X_n}}{n^2}=\var{X}\,\zeta(2)=2\zeta(2)\tag{B}$$

On the other hand, using the properties of characteristic functions:

$$\varphi_Y(t)=\varphi_{\sum_{n=1}^{\infty}X_n/n}\left(t\right)=\prod_{n=1}^\infty \varphi_{X}\left(\frac{t}{n}\right) = \prod_{n=1}^\infty \frac{1}{1+\frac{t^2}{n^2}}=\frac{\pi t}{\sinh \pi t} = 1-\frac{\pi^2}{6}t^2+O(t^3)\tag{C}$$

Combining $(C)$ with $(A)$ and $(B)$ we get: $$2\zeta(2)=\var{Y}=\frac{\pi^2}{3}\quad\Longrightarrow\quad\zeta(2)=\frac{\pi^2}{6}$$

NOTE : As long as the set $\{X_n\}_{n\in\mathbb{N}}$ consists of independent variables with identical pdf, the steps are the same up to the computation of $\var{X}$. So there might be other distributions for which the product in $(C)$ is easily evaluable.
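As a numerical sanity check of the product evaluation in $(C)$ (an addition, not part of the answer):

```python
import math

t = 1.5  # arbitrary test point
prod = 1.0
for n in range(1, 200001):
    prod /= 1 + (t / n)**2  # partial product of 1/(1 + t^2/n^2)
print(prod, math.pi * t / math.sinh(math.pi * t))
```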

NOTE2 : By mistake I originally posted just the first sentence of this answer, so after deleting it, this is the second copy.

Machinato
  • 2,883
  • This is equivalent to Euler's infinite product with sin(x)/x, no? Take the reciprocal and let $x=i\pi t$.

    Otherwise I am not sure how you conclude the product equals $\frac{\pi t}{\sinh{\pi t}}$

    – nimish Feb 26 '21 at 06:11
10

I propose a solution... Consider for $n\in\mathbb N^*$ : $$(1) : \int_0^\pi \left(\alpha t+\beta t^2\right)\cos(nt)\,\mathrm dt = \dfrac{1}{n^2} $$ Integrate by parts : $$ \int_0^\pi t\cos(nt)\,\mathrm dt = \underbrace{\left.\dfrac{t\sin(nt)}{n}\right\vert_0^\pi}_{=\,0} -\int_0^\pi \dfrac{\sin(nt)}{n}\,\mathrm dt = -\underbrace{\int_0^{n\pi}\dfrac{\sin x}{n^2}\,\mathrm dx}_{\mathrm{substitution\;by\;}x=nt} = \dfrac{\cos(n\pi)-1}{n^2}$$ and $$ \begin{split}\int_0^\pi t^2\cos(nt)\,\mathrm dt &= \underbrace{\left.\dfrac{t^2\sin(nt)}{n}\right\vert_0^\pi}_{=\,0} - \int_0^\pi \dfrac{2t\sin(nt)}{n}\,\mathrm dt = \left.\dfrac{2t\cos(nt)}{n^2}\right\vert_0^\pi - \int_0^\pi \dfrac{2\cos(nt)}{n^2}\,\mathrm dt \\&= \dfrac{2\pi\cos(n\pi)}{n^2} - \underbrace{\int_0^{n\pi}\dfrac{2\cos x}{n^3}\,\mathrm dx}_{\mathrm{substitution\;by\;}x=nt} = \dfrac{2\pi\cos(n\pi)}{n^2}- \underbrace{\left.\dfrac{2\sin x}{n^3}\right\vert_0^{n\pi}}_{=\,0} \\&=\dfrac{2\pi\cos(n\pi)}{n^2} \end{split}$$ Thus $$ \int_0^\pi \left(\alpha t+\beta t^2\right)\cos(nt)\,\mathrm dt = \alpha \cdot \dfrac{\cos(n\pi)-1}{n^2} + \beta\cdot\dfrac{2\pi\cos(n\pi)}{n^2} $$ We deduce that $\alpha = -1$ and $\beta = 1/2\pi$ satisfy $(1)$.

Since for $x\in\mathbb R\backslash 2\pi\mathbb Z$ : $$ \sum_{k=1}^n \cos(kx) =-\dfrac{1}{2} + \dfrac{\sin(nx+x/2)}{2\sin(x/2)} $$ we have $$ \begin{split}\sum_{k=1}^n \dfrac{1}{k^2} &= \sum_{k=1}^n \int_0^\pi \left(\dfrac{t^2}{2\pi}-t\right)\cos(kt)\,\mathrm dt \\&= \int_0^\pi \left(\dfrac{t^2}{2\pi}-t\right)\sum_{k=1}^n \cos(kt)\,\mathrm dt\\ &= -\dfrac{1}{2}\int_0^\pi \left(\dfrac{t^2}{2\pi}-t\right)\mathrm dt + \int_0^\pi \left(\dfrac{t^2}{2\pi}-t\right)\cdot \dfrac{\sin(nt+t/2)}{2\sin(t/2)}\,\mathrm dt \end{split}$$ However $\sin(nt+t/2) = \sin(t/2)\cos(nt)+\sin(nt)\cos(t/2)$. Let $\phi$ and $\psi$ be the functions defined by $$\phi(t) = \dfrac{t^2}{4\pi}-\dfrac{t}{2} \;\mathrm{and}\; \psi(t) = \left(\dfrac{t^2}{2\pi}-t\right)\cdot\dfrac{\cos(t/2)}{2\sin(t/2)}$$ so that $$\int_0^\pi \left(\dfrac{t^2}{2\pi}-t\right)\cdot \dfrac{\sin(nt+t/2)}{2\sin(t/2)}\,\mathrm dt = \int_0^\pi \phi(t)\cos(nt)\,\mathrm dt + \int_0^\pi \psi(t)\sin(nt)\,\mathrm dt$$ $\phi$ is continuous on $[0,\pi]$. And $\psi$ can be extended at $t=0$. Indeed as $t\to 0$ $$ \psi(t) = \underbrace{\dfrac{t\cos(t/2)}{2\pi}}_{\to\, 0}\cdot\underbrace{\dfrac{t/2}{\sin(t/2)}}_{\to \ 1}- \underbrace{\cos(t/2)}_{\to \, 1}\cdot\underbrace{\dfrac{t/2}{\sin(t/2)}}_{\to\, 1} \xrightarrow[t\to 0]{} -1 $$ Therefore, $\psi$ is continuous (by extension) on $[0,\pi]$.

It remains to apply the Riemann–Lebesgue lemma, which tells us that : $$ \int_0^\pi \phi(t)\cos(nt)\,\mathrm dt \xrightarrow[n\to \infty]{} 0\;\;\mathrm{and}\;\;\int_0^\pi \psi(t)\sin(nt)\,\mathrm dt\xrightarrow[n\to \infty]{} 0$$ Consequently $$ \int_0^\pi \left(\dfrac{t^2}{2\pi}-t\right)\cdot \dfrac{\sin(nt+t/2)}{2\sin(t/2)}\,\mathrm dt \xrightarrow[n\to \infty]{} 0 $$ and $$ \sum_{k=1}^n \dfrac{1}{k^2} \xrightarrow[n\to \infty]{} -\dfrac{1}{2}\int_0^\pi \left(\dfrac{t^2}{2\pi}-t\right)\mathrm dt$$ Now, we can evaluate this integral : $$-\dfrac{1}{2}\int_0^\pi \left(\dfrac{t^2}{2\pi}-t\right)\mathrm dt = -\dfrac{1}{2}\left[\dfrac{t^3}{6\pi}-\dfrac{t^2}{2}\right]_0^\pi = -\dfrac{1}{2}\left[\dfrac{\pi^2}{6}-\dfrac{\pi^2}{2}\right] = \dfrac{\pi^2}{6} $$ Then... the desired result : $$ \boxed{\sum_{k=1}^\infty \dfrac{1}{k^2} = \dfrac{\pi^2}{6}}$$
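A quick numerical check of the key identity $(1)$ with $\alpha=-1$, $\beta=1/2\pi$ (an addition, not part of the proof), using a simple midpoint rule:

```python
import math

def coeff(n, N=50000):
    """Midpoint rule for the integral in (1): ∫₀^π (-t + t²/(2π)) cos(nt) dt."""
    h = math.pi / N
    return sum((-t + t*t / (2*math.pi)) * math.cos(n*t)
               for t in ((k + 0.5) * h for k in range(N))) * h

for n in (1, 2, 5):
    print(n, coeff(n), 1 / n**2)
```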

SuperFoxy
  • 151
10

This is a similar proof as posted by Hans Lundmark, but I find it to be a little simpler. I ran across this approach in a Dover copy of The USSR Olympiad Problem Book. It is also based on the observation that $$\cot^2x<\frac{1}{x^2}<\csc^2x\,.$$

We first have the trig identity $$\sin(2n+1)\alpha=\sum_{k=0}^n(-1)^k\binom{2n+1}{2k+1}\cos^{2(n-k)}\alpha\sin^{2k+1}\alpha$$ which is arguably the hardest part of this proof. This directly manipulates into $$\sin(2n+1)\alpha=\sin^{2n+1}\alpha\sum_{k=0}^n(-1)^k\binom{2n+1}{2k+1}\cot^{2(n-k)}\alpha\,.$$ This formula reveals that the $n$ distinct quantities below $$\cot^2\frac{\pi}{2n+1},\quad\cot^2\frac{2\pi}{2n+1},\quad\ldots,\quad\cot^2\frac{n\pi}{2n+1}$$ are the roots of the polynomial $$\sum_{k=0}^n(-1)^k\binom{2n+1}{2k+1}x^{n-k}\,.$$ After scaling by the lead coefficient, Viete's Formulas then imply that $$\sum_{k=1}^n \cot^2\frac{k\pi}{2n+1}=\frac{n(2n-1)}{3}$$ By another elementary trig identity, we also get $$\sum_{k=1}^n \csc^2\frac{k\pi}{2n+1}=\frac{2n(n+1)}{3}$$ The inequality above then gives us $$\frac{n(2n-1)}{3}<\frac{(2n+1)^2}{\pi^2}\left(1+\frac{1}{2^2}+\cdots+\frac{1}{n^2}\right)<\frac{2n(n+1)}{3}$$ which gives us the desired conclusion after taking limits.
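A quick numerical check of the cotangent sum above (an addition, not part of the proof):

```python
import math

n = 50
# sum of cot²(kπ/(2n+1)) over k = 1..n, compared with n(2n-1)/3
s = sum(1 / math.tan(k * math.pi / (2*n + 1))**2 for k in range(1, n + 1))
print(s, n * (2*n - 1) / 3)
```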

  • I later found out that this is one of the three different proofs of the Basel Problem that was presented in "Proofs from the Book". –  May 01 '18 at 15:54
10

The sum can be written as the integral: $$\int_0^{\infty} \frac{x}{e^x-1} dx $$ (expand $\frac{1}{e^x-1}=\sum_{n\ge1}e^{-nx}$ as a geometric series and integrate term by term, using $\int_0^\infty xe^{-nx}\,dx=\frac1{n^2}$). This integral can be evaluated using a rectangular contour from $0$ to $\infty$ to $\infty + \pi i$ to $0$.
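A quick numerical check (an addition, not part of the answer): the cutoff at $x=40$ below is arbitrary, the neglected tail being of order $e^{-40}$.

```python
import math

N = 200000
h = 40.0 / N  # midpoint rule on [0, 40]; expm1 keeps accuracy near x = 0
I = sum(x / math.expm1(x) for x in ((k + 0.5) * h for k in range(N))) * h
print(I, math.pi**2 / 6)
```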

bob
  • 2,167
Asier Calbet
  • 2,480
8

The following proof relies on this integral identity :

$$\int_{a}^{1}\frac{\arccos x}{\sqrt{x^2-a^2}}\mathrm{d}x=-\frac{\pi}{2}\ln a\qquad ;\,a\in(0,1]$$

We will prove it later on. Now, let's set up a power series :

$$\zeta(2)=\sum_{n=1}^{\infty}\frac{1}{n^2}=\int_0^1\frac{1}{x}\sum_{n=1}^{\infty}\frac{x^n}{n}\,\mathrm{d}x=-\int_0^1\frac{\ln(1-x)}{x}\,\mathrm{d}x=-\int_0^1\frac{\ln x}{1-x}\,\mathrm{d}x$$

Inserting the formula above we get :

$$\zeta(2)=\frac{2}{\pi}\int_0^1\int_{x}^{1}\frac{\arccos y}{(1-x)\sqrt{y^2-x^2}}\,\mathrm{d}y\,\mathrm{d}x$$

Interchanging the order of integration :

$$\zeta(2)=\frac{2}{\pi}\int_0^1\int_{0}^{y}\frac{\arccos y}{(1-x)\sqrt{y^2-x^2}}\,\mathrm{d}x\,\mathrm{d}y\tag{A}$$

But, with help of substitution $x=y \cos{\theta}$ and universal $t=\tan\frac{\theta}{2}$ : $$\int_{0}^{y}\frac{\mathrm{d}x}{(1-x)\sqrt{y^2-x^2}}=\int_{0}^{\frac{\pi}{2}}\frac{\mathrm{d}\theta}{1-y \cos{\theta}}=\int_{0}^{1}\frac{\frac{2\mathrm{d}t}{1+t^2}}{1-y\frac{1-t^2}{1+t^2}}=\frac{\pi-\arccos{y}}{\sqrt{1-y^2}}$$
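A quick numerical check of this closed form (an addition, not part of the proof), at the arbitrary point $y=0.7$:

```python
import math

def lhs(y, N=100000):
    """Midpoint rule for ∫₀^{π/2} dθ / (1 - y cos θ)."""
    h = (math.pi / 2) / N
    return sum(1 / (1 - y * math.cos((k + 0.5) * h)) for k in range(N)) * h

y = 0.7
rhs = (math.pi - math.acos(y)) / math.sqrt(1 - y*y)
print(lhs(y), rhs)
```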

Plugging this to $(A)$ we get :

$$ \begin{align*}&\zeta(2)=\frac{2}{\pi}\int_{0}^{1}\frac{\pi\arccos{y}-\arccos^2 y}{\sqrt{1-y^2}}\,\mathrm{d}y=\frac{2}{\pi}\left(\frac{\pi}{2}\arccos^2 y- \frac{1}{3}\arccos^3 y\right)\bigg{|}_{1}^{0}= \\ \\ &\frac{2}{\pi}\left(\frac{\pi}{2}\left(\frac{\pi}{2}\right)^2-\frac{1}{3}\left(\frac{\pi}{2}\right)^3\right) = \frac{2}{\pi}\left(\frac{\pi}{2}\right)^3 \left(1-\frac{1}{3}\right) =\frac{\pi^2}{6} \end{align*}$$

ADDENDUM : Proof of the apriori integral :

$$\begin{align*}&\int_{a}^{1}\frac{\arccos x}{\sqrt{x^2-a^2}}\mathrm{d}x=\int_{a}^{1}\frac{\arccos\left(\frac{x}{y}\right)}{\sqrt{x^2-a^2}}\bigg{|}_{y=x}^{y=1}\mathrm{d}x=\int_{a}^{1}\int_{x}^{1}\frac{x}{y}\frac{\mathrm{d}y\,\mathrm{d}x}{\sqrt{x^2-a^2}\sqrt{y^2-x^2}} = \\ \\ & \int_{a}^{1}\int_{a}^{y}\frac{x}{y}\frac{\mathrm{d}x\,\mathrm{d}y}{\sqrt{x^2-a^2}\sqrt{y^2-x^2}} = \frac{\pi}{2}\int_{a}^{1}\frac{\mathrm{d}y}{y} = -\frac{\pi}{2}\ln a \end{align*}$$

Here the inner integral was computed via the substitution $x^2=a^2\cos^2\theta+y^2\sin^2\theta$; taking the differential, $2x\,\mathrm{d}x=2\left(y^2-a^2\right)\sin\theta\cos\theta\,\mathrm{d}\theta$, and then :

$$(x^2-a^2)(y^2-x^2)=(a^2\cos^2\theta+y^2\sin^2\theta-a^2)(y^2-a^2\cos^2\theta-y^2\sin^2\theta)=(y^2\sin^2\theta-a^2\sin^2\theta)(y^2\cos^2\theta-a^2\cos^2\theta)=(y^2-a^2)^2\sin^2\theta\cos^2\theta$$

Or $$\sqrt{x^2-a^2}\sqrt{y^2-x^2}\,\mathrm{d}\theta= \left(y^2-a^2\right)\sin\theta\cos\theta\,\mathrm{d}\theta = x\,\mathrm{d}x$$

Therefore :

$$\int_{a}^{y}\frac{x\,\mathrm{d}x}{\sqrt{x^2-a^2}\sqrt{y^2-x^2}}=\int_{0}^{\frac{\pi}{2}}\mathrm{d}\theta=\frac{\pi}{2}$$

Machinato
  • 2,883
8

Define $f$ on $[0;2\pi]$,

$\displaystyle f(a)=\int_0^1 \dfrac{\ln(x^2-2x\cos(a)+1)}{x}dx\tag 0$

Theorem:

For all $a\in [0;2\pi]$,

$f(a)=-\dfrac{1}{2}a^2+\pi a-\dfrac{\pi^2}{3}\tag 1$

For all $a\in [0;2\pi]$,

$\displaystyle f\left(\frac{a}{2}\right)+f\left(\pi-\frac{a}{2}\right)=\frac{f(a)}{2}\tag 2$

Proof:

$\begin{align} f\left(\frac{a}{2}\right)+f\left(\pi-\frac{a}{2}\right)&=\int_0^1 \frac{\ln\left(\left(x^2-2x\cos\left(\frac{a}{2}\right)+1\right)\left(x^2+2x\cos\left(\frac{a}{2}\right)+1\right) \right)}{x}dx\\ &=\int_0^1 \frac{\ln\left(x^4-2x^2\cos(a)+1\right)}{x}dx\\ \end{align}$

Perform the change of variable $y=x^2$ in the latter integral to obtain (2).

According to standard theorems about functions defined by integrals, $f^{\prime\prime}$ exists and is continuous.

Differentiating (2) twice,

For all $a\in [0;2\pi]$,

$\displaystyle f^{\prime\prime}\left(\frac{a}{2}\right)+f^{\prime\prime}\left(\pi-\frac{a}{2}\right)=2f^{\prime\prime}(a)\tag 3$

$f^{\prime\prime}$ is continuous on $[0;2\pi]$, therefore this function attains a maximum $M$ and a minimum $m$.

Therefore there exists $a_0\in[0;2\pi]$ such that $f^{\prime\prime}(a_0)=M$.

Plug $a_0$ into (3),

$\displaystyle f^{\prime\prime}\left(\frac{a_0}{2}\right)+f^{\prime\prime}\left(\pi-\frac{a_0}{2}\right)=2f^{\prime\prime}(a_0)=2M$

But $f^{\prime\prime}\left(\frac{a_0}{2}\right)\leq M$ and $f^{\prime\prime}\left(\pi-\frac{a_0}{2}\right)\leq M$ according to the definition of $M$.

Therefore $f^{\prime\prime}\left(\frac{a_0}{2}\right)=f^{\prime\prime}\left(\pi-\frac{a_0}{2}\right)=M$.

By induction, for every natural number $n\geq 1$,

$f^{\prime\prime}\left(\frac{a_0}{2^n}\right)=M\tag 4$

$f^{\prime\prime}$ is continuous at $0$, therefore letting $n$ tend to infinity in (4) one obtains,

$M=f^{\prime\prime}(0)$.

Considering the minimum $m$ of $f^{\prime\prime}$, the same argument shows that,

$m=f^{\prime\prime}(0)$

Since $m=M$, $f^{\prime\prime}$ is a constant function.

Therefore, there exist real numbers $\alpha,\beta,\gamma$ such that,

For all $a\in[0;2\pi]$,

$f(a)=\alpha a^2+\beta a+\gamma\tag 5$

Plugging (5) into (2), one obtains:

$\alpha\pi+\dfrac{\beta}{2}=0$ and $\alpha \pi^2 +\beta \pi+\dfrac{3}{2}\gamma=0$

On the other hand, for all $a\in [0;2\pi]$,

$\displaystyle f^\prime(a)= 2\sin a\int_0^1\dfrac{1}{x^2-2x\cos a+1}dx$

If $a=\dfrac{\pi}{2}$ one obtains,

$\begin{align} f^\prime\left(\dfrac{\pi}{2}\right)&=2\int_0^1 \dfrac{1}{x^2+1}dx\\ &=2\times \dfrac{\pi}{4}\\ &=\dfrac{\pi}{2} \end{align}$

Taking derivative of (5), one obtains for all $a\in [0;2\pi]$,

$f^\prime(a)=2\alpha a+\beta$

Therefore,

$\alpha \pi +\beta=\dfrac{\pi}{2}$

One has obtained a linear system of three equations in $\alpha,\beta,\gamma$.

Solving it completes the proof of the theorem.

To get the value of $\zeta(2)$, apply the theorem with $a=0$; one obtains,

$\displaystyle \int_0^1 \dfrac{\ln(1-x)}{x}dx=-\dfrac{\pi^2}{6}$

And then, continue in usual way, expand the integrand...

From: Euler's integrals, H. Haruki and S. Haruki, The Mathematical Gazette, 1983.
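As a numerical sanity check of the theorem (an addition, not part of the original argument): at $a=\pi/2$ the theorem predicts $f(\pi/2)=-\frac{\pi^2}{8}+\frac{\pi^2}{2}-\frac{\pi^2}{3}=\frac{\pi^2}{24}$.

```python
import math

a = math.pi / 2
N = 200000  # midpoint rule for f(a) = ∫₀¹ ln(x² - 2x cos a + 1)/x dx
h = 1.0 / N
f = sum(math.log(x*x - 2*x*math.cos(a) + 1) / x
        for x in ((k + 0.5) * h for k in range(N))) * h
theorem = -a*a / 2 + math.pi * a - math.pi**2 / 3
print(f, theorem)  # both should be close to pi^2/24
```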

FDP
  • 13,647
  • This one is very simple and straightforward to follow. However, I would like to see the argument for the continuity of the second derivative $f''(x)$ explained in more detail (or am I too blind to see it?). – Machinato Aug 25 '17 at 12:20
  • 1
    It uses general theorems about differentiation under the integral sign, such as Lebesgue's dominated convergence theorem (but weaker theorems do exist for the Riemann integral). – FDP Aug 25 '17 at 17:21
7

Using the Fourier expansion of $f(x)=x(1-x)$ on $[0,1]$, we get $$a_{0}=\frac{1}{6},\qquad a_{n}=-\frac{1}{n^2\pi^2},\qquad b_{n}=0.$$ Therefore, we have $$x(1-x)=\frac{1}{6}-\sum_{n=1}^{\infty}\frac{\cos 2\pi n x}{(n\pi)^2}$$ Putting $x=0$ we get $$\sum _{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}$$

vito
  • 1,893
7

I really like this one. Consider $f(x)=x^2-\pi^2$. Compute its Fourier expansion to obtain $$f(x)=-\frac{2}{3}\pi^2+4\sum_{n=1}^\infty\frac{(-1)^n}{n^2}\cos nx.$$ Now let $x=\pi$; since $\cos n\pi=(-1)^n$, it quickly follows that $$4\zeta(2)=\frac{2}{3}\pi^2\implies \zeta(2)=\frac{\pi^2}{6}.$$

pshmath0
  • 10,565
5

First, we relate the sum of the reciprocals of the squares of the odd natural numbers with the alternating sum of their reciprocals: $$ \begin{align} \sum_{k=0}^\infty\frac1{(2k+1)^2} &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2-\sum_{k=0}^\infty\sum_{j=k+1}^\infty\frac{(-1)^{j+k}}{(2k+1)(2j+1)}\\ &\phantom{{}=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2}-\sum_{k=0}^\infty\sum_{j=0}^{k-1}\frac{(-1)^{j+k}}{(2k+1)(2j+1)}\tag{1a}\\ &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2-2\sum_{k=0}^\infty\sum_{j=k+1}^\infty\frac{(-1)^{j+k}}{(2k+1)(2j+1)}\tag{1b}\\ &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2+2\sum_{k=0}^\infty\sum_{j=0}^\infty\frac{(-1)^j}{(2k+1)(2j+2k+3)}\tag{1c}\\ &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2+2\sum_{j=0}^\infty\sum_{k=0}^\infty\frac{(-1)^j}{2j+2}\left(\frac1{2k+1}-\frac1{2j+2k+3}\right)\tag{1d}\\ &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2+2\sum_{j=0}^\infty\sum_{k=0}^j\frac{(-1)^j}{2j+2}\frac1{2k+1}\tag{1e}\\ &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2+2\sum_{k=0}^\infty\sum_{j=k}^\infty\frac{(-1)^j}{2j+2}\frac1{2k+1}\tag{1f}\\ &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2+2\sum_{k=0}^\infty\sum_{j=0}^\infty\frac{(-1)^{j+k}}{2j+2k+2}\frac1{2k+1}\tag{1g}\\ &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2+2\sum_{k=0}^\infty\sum_{j=0}^\infty\frac{(-1)^{j+k}}{2j+1}\left(\frac1{2k+1}-\frac1{2j+2k+2}\right)\tag{1h}\\ &=\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2+\sum_{k=0}^\infty\sum_{j=0}^\infty\frac{(-1)^{j+k}}{(2j+1)(2k+1)}\tag{1i}\\ &=2\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2\tag{1j} \end{align} $$ Explanation:
$\text{(1a):}$ break up $\left(\sum\limits_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2=\sum\limits_{k=0}^\infty\sum\limits_{j=0}^\infty\frac{(-1)^{j+k}}{(2k+1)(2j+1)}$
$\phantom{\text{(1a):}}$ $\sum\limits_{k=0}^\infty\frac1{(2k+1)^2}$ accounts for $j=k$
$\phantom{\text{(1a):}}$ $\sum\limits_{k=0}^\infty\sum\limits_{j=k+1}^\infty\frac{(-1)^{j+k}}{(2k+1)(2j+1)}$ accounts for $j\gt k$
$\phantom{\text{(1a):}}$ $\sum\limits_{k=0}^\infty\sum\limits_{j=0}^{k-1}\frac{(-1)^{j+k}}{(2k+1)(2j+1)}$ accounts for $j\lt k$
$\text{(1b):}$ changing the order of summation then swapping $j$ and $k$ yields
$\phantom{\text{(1b):}}$ $\sum\limits_{k=0}^\infty\sum\limits_{j=0}^{k-1}\frac{(-1)^{j+k}}{(2k+1)(2j+1)}=\sum\limits_{k=0}^\infty\sum\limits_{j=k+1}^\infty\frac{(-1)^{j+k}}{(2k+1)(2j+1)}$
$\text{(1c):}$ substitute $j\mapsto j+k+1$
$\text{(1d):}$ change the order of summation then partial fractions
$\text{(1e):}$ evaluating a telescoping series yields
$\phantom{\text{(1e):}}$ $\sum\limits_{k=0}^\infty\left(\frac1{2k+1}-\frac1{2j+2k+3}\right)=\sum\limits_{k=0}^j\frac1{2k+1}$
$\text{(1f):}$ change the order of summation
$\text{(1g):}$ substitute $j\mapsto j+k$
$\text{(1h):}$ partial fractions
$\text{(1i):}$ average $\text{(1g)}$ and $\text{(1h)}$
$\text{(1j):}$ $\sum\limits_{k=0}^\infty\sum\limits_{j=0}^\infty\frac{(-1)^{j+k}}{(2k+1)(2j+1)}=\left(\sum\limits_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2$


Next, apply the Leibniz Formula: $$ \begin{align} \sum_{k=0}^\infty\frac1{(2k+1)^2} &=2\,\left(\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}\right)^2\tag{2a}\\ &=\frac{\pi^2}8\tag{2b} \end{align} $$ Explanation:
$\text{(2a):}$ summarize $(1)$
$\text{(2b):}$ apply the Leibniz Formula: $\sum\limits_{k=0}^\infty\frac{(-1)^k}{2k+1}=\frac\pi4$


Finally, use a geometric series to answer the Basel Problem: $$ \begin{align} \sum_{n=1}^\infty\frac1{n^2} &=\sum_{k=0}^\infty\sum_{j=0}^\infty\frac1{\left(2^j(2k+1)\right)^2}\tag{3a}\\ &=\frac43\sum_{k=0}^\infty\frac1{(2k+1)^2}\tag{3b}\\ &=\frac{\pi^2}6\tag{3c} \end{align} $$ Explanation:
$\text{(3a):}$ every positive integer is uniquely representable
$\phantom{\text{(3a):}}$ as the product of a power of $2$ and an odd integer
$\text{(3b):}$ apply the geometric series $\sum\limits_{j=0}^\infty\frac1{4^j}=\frac43$
$\text{(3c):}$ apply $(2)$

robjohn
  • 345,667
4

I found this proof on YouTube but I made a few changes:

Let's start with

\begin{align} I&=\int_0^{\pi/2}\ln(2\cos x)\ dx=\int_0^{\pi/2}\ln\left(e^{ix}(1+e^{-2ix})\right)\ dx\\ &=\int_0^{\pi/2}ix\ dx-\sum_{n=1}^\infty \frac{(-1)^n}{n}\int_0^{\pi/2}e^{-2inx}\ dx\\ &=\frac{\pi^2}{8}i-\sum_{n=1}^\infty\frac{(-1)^n}{n}\left(-\frac{(-1)^n-1}{2in}\right)\\ &=\frac{\pi^2}{8}i-\frac12i\left(\zeta(2)-\operatorname{Li}_2(-1)\right)\\ &=\frac{\pi^2}{8}i-\frac12i\left(\zeta(2)+\frac12\zeta(2)\right)\\ &=i\left(\frac{\pi^2}{8}-\frac34\zeta(2)\right) \end{align}

Since $I$ is real, comparing the imaginary parts, we have

$$0=\frac{\pi^2}{8}-\frac34\zeta(2)\Longrightarrow\zeta(2)=\frac{\pi^2}{6}$$
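A side check (an addition, not part of the answer): the computation above also implies that the real integral $I=\int_0^{\pi/2}\ln(2\cos x)\,dx$ itself vanishes, which is easy to verify numerically:

```python
import math

N = 1000000  # midpoint rule; the log singularity at pi/2 is integrable
h = (math.pi / 2) / N
I = sum(math.log(2 * math.cos((k + 0.5) * h)) for k in range(N)) * h
print(I)  # close to 0
```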

Ali Shadhar
  • 25,498
4

Since $\int_0^1 \frac{dx}{1+x^2}=\frac{\pi}{4}$, we have

$$\frac{\pi^2}{16}=\int_0^1\int_0^1\frac{dydx}{(1+x^2)(1+y^2)}\overset{t=xy}{=}\int_0^1\int_0^x\frac{dtdx}{x(1+x^2)(1+t^2/x^2)}$$

$$=\int_0^1\int_t^1\frac{dx\,dt}{x(1+x^2)(1+t^2/x^2)}\overset{x^2\to x}{=}\frac12\int_0^1\left(\int_{t^2}^1\frac{dx}{(1+x)(x+t^2)}\right)dt$$

$$=-\frac12\int_0^1\frac{\ln\left(\frac{4t^2}{(1+t^2)^2}\right)}{1-t^2}dt\overset{t=\frac{1-x}{1+x}}{=}-\frac12\int_0^1\frac{\ln\left(\frac{1-x^2}{1+x^2}\right)}{x}dx$$

$$\overset{x^2\to x}{=}-\frac14\int_0^1\frac{\ln\left(\frac{1-x}{1+x}\right)}{x}dx=-\frac14\int_0^1\frac{\ln\left(\frac{(1-x)^2}{1-x^2}\right)}{x}dx$$

$$=-\frac12\int_0^1\frac{\ln(1-x)}{x}dx+\frac14\underbrace{\int_0^1\frac{\ln(1-x^2)}{x}dx}_{x^2\to x}$$

$$=-\frac38\int_0^1\frac{\ln(1-x)}{x}dx\Longrightarrow \int_0^1\frac{-\ln(1-x)}{x}dx=\frac{\pi^2}{6}$$


Remark:

This solution can be considered a proof that $\zeta(2)=\frac{\pi^2}{6}$ as we have $\int_0^1\frac{-\ln(1-x)}{x}dx=\text{Li}_2(x)|_0^1=\text{Li}_2(1)=\zeta(2)$
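A quick numerical check of one of the intermediate steps (an addition, not part of the solution): the single integral in $t$ obtained after the partial-fraction step should already equal $\pi^2/16$.

```python
import math

N = 1000000  # midpoint rule; the log singularity at t = 0 is integrable
h = 1.0 / N
val = -0.5 * sum(math.log(4*t*t / (1 + t*t)**2) / (1 - t*t)
                 for t in ((k + 0.5) * h for k in range(N))) * h
print(val, math.pi**2 / 16)
```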

SSS
  • 451
3

Looking through the answers I'm honestly surprised that no one has posted this method yet, so I guess I shall do it, since I recently did a write-up on some material that included it.

MSE doesn’t have the cancel package smh


Consider the Basel Problem $$S=\sum^{\infty}_{n=1}\frac{1}{n^2}$$ we may note that the Laplace transform of $x$ gives a similar form to the series as follows $$\mathcal{L}\left\{x\right\}(n)=\frac{1}{n^2}=\int^{\infty}_{0}xe^{-nx}\text{ d}x,\qquad n>0$$ Since our sum violates no restrictions, we may substitute this integral into it giving \begin{align} S&=\sum^{\infty}_{n=1}\int^{\infty}_{0}xe^{-nx}\text{ d}x\\ &=\int^{\infty}_{0}x\sum^{\infty}_{n=1}e^{-nx}\text{ d}x\tag{2}\\ &=\int^{\infty}_0\frac{x\text{ d}x}{e^x-1}=I\tag{3} \end{align} Showing that the interchange of sum and integral in $(2)$ is valid is easy, since the integrand is nonnegative on the interval of integration, so the Fubini–Tonelli theorem applies; $(3)$, on the other hand, is a trivial geometric series.

How should we evaluate this integral? To do this, we will consider the rather counterintuitive function $$f(z)=\frac{z^2}{e^z-1}$$ This function has poles at every $z=2\pi i n$, $n\in\mathbb{Z}$, but notice that the "pole" at the origin actually is a removable singularity, which can be shown by taking the limit of the function as it goes to the origin. This means that when we set up our rectangular contour, we don't actually need to take the principal value to the origin through a circular indent, because the function is well behaved there and the integral along that circular indent would go to $0$ regardless.

Hence, consider the following contour

(Figure: the positively oriented rectangular contour $\mathcal{C}$ with corners $0$, $R$, $R+2\pi i$, $2\pi i$, consisting of the bottom $B$, right side $\mathcal{R}$, top $T$, and left side $L$, with a quarter-circle indent $r$ of radius $\epsilon$ around the pole at $2\pi i$.)

$$\oint_{\mathcal{C}}f(z)\text{ d}z=\int_B+\int_{\mathcal{R}}+\int_T+\int_r+\int_Lf(z)\text{ d}z=0$$ Parameterization of the contour gives \begin{alignat*}{5} B&:\text{ }z=x,\qquad &\text{d}z&=\text{d}x,\qquad &x&\in[0, R\,]\\ \mathcal{R}&:\text{ }z=R+iy,\qquad &\text{d}z&=i\text{ d}y,\qquad &y&\in[0, 2\pi]\\ T&:\text{ }z=2\pi i+x,\qquad &\text{d}z&=\text{d}x,\qquad &x&\in[R, \epsilon\,]\\ r&:\text{ }z=2\pi i+\epsilon e^{i\theta},\qquad &\text{d}z&=i\epsilon e^{i\theta}\text{ d}\theta,\qquad &\theta&\in\left[0, -\frac{\pi}{2}\right]\\ L&:\text{ }z=iy,\qquad &\text{d}z&=i\text{ d}y,\qquad &y&\in[2\pi-\epsilon, 0] \end{alignat*} Instead of evaluating these integrals one by one however, this time we will first begin by adding two of them and cancelling terms, which is the reason why we increased the numerator power by $1$ in our $f(z)$. We have it such that \begin{align} \lim_{R\to+\infty,\,\epsilon\to+0}\int_B+\int_Tf(z)\text{ d}z&=\lim_{R\to+\infty,\,\epsilon\to+0}\int^R_0\frac{x^2\text{ d}x}{e^x-1}+\int_R^{\epsilon}\frac{(2\pi i+x)^2\text{ d}x}{{e^{2\pi i}}e^x-1}\\ &=\int^{\infty}_0\frac{x^2\text{ d}x}{e^x-1}-\int_0^{\infty}\frac{(2\pi i+x)^2\text{ d}x}{e^x-1}\\ &={\int^{\infty}_0\frac{x^2\text{ d}x}{e^x-1}}-{\int^{\infty}_0\frac{x^2\text{ d}x}{e^x-1}}-\int^{\infty}_0\frac{4\pi i x\text{ d}x}{e^x-1}+\int^{\infty}_0\frac{4\pi^2\text{ d}x}{e^x-1}\\ &=-4\pi iI+\int^{\infty}_0\frac{4\pi^2\text{ d}x}{e^x-1} \end{align} Next, we have \begin{align} \left|\int_{\mathcal{R}}f(z)\text{ d}z\right|&\le\int^{2\pi}_0\frac{\left|R+iy\right|^2\cdot{|i|}\text{ d}y}{\left|\displaystyle e^R e^{iy}-1\right|}\\ &\le\int^{2\pi}_0\frac{R^2+y^2}{\left|\left|\displaystyle e^R\right|{\left|e^{iy}\right|}-|1|\right|}\text{ d}y\\ &=\int^{2\pi}_0\frac{R^2+y^2}{e^R-1}\text{ d}y=\frac{2\pi\left(4\pi^2+3R^2\right)}{3\left(e^R-1\right)} \end{align} So this integral is just $0$ since $$\frac{2\pi}{3}\lim_{R\to+\infty}\frac{4\pi^2+3R^2}{e^R-1}=0$$ The integral about $r$ gives 
$$\int^{-\frac{\pi}2}_0\frac{\left(2\pi i + \epsilon e^{i\theta}\right)^2}{{e^{2\pi i}}e^{\epsilon e^{i\theta}}-1}\cdot i\epsilon e^{i\theta}\text{ d}\theta$$ Fix a $-\frac{\pi}2\le\theta\le0$, and we can bound the denominator and then the whole integrand by a constant since the integral is over a finite interval, which allows us to swap the $\epsilon$ limit and the integral. Meanwhile, the limit itself can be solved by L'Hôpital's rule, which yields $$\lim_{\epsilon\to+0}\int^{-\frac{\pi}2}_0\frac{\left(2\pi i + \epsilon e^{i\theta}\right)^2}{e^{\epsilon e^{i\theta}}-1}\cdot i\epsilon e^{i\theta}\text{ d}\theta=\int^{-\frac{\pi}2}_0\lim_{\epsilon\to+0}\frac{\left(2\pi i + \epsilon e^{i\theta}\right)^2}{e^{\epsilon e^{i\theta}}-1}\cdot i\epsilon e^{i\theta}\text{ d}\theta=4i\pi^2\int^0_{-\frac{\pi}{2}}\text{d}\theta=2i\pi^3$$ We have one more integral to go, where we can see that $$\lim_{\epsilon\to+0}\int_Lf(z)\text{ d}z=\lim_{\epsilon\to+0}\int^{0}_{2\pi-\epsilon}\frac{-y^2\cdot i\text{ d}y}{e^{iy}-1}=\int^{2\pi}_{0}\frac{iy^2\text{ d}y}{e^{iy}-1}$$ All in all, we end up with $$\oint_{\mathcal{C}}f(z)\text{ d}z=\int_B+{\int_{\mathcal{R}}}+\int_T+\int_r+\int_Lf(z)\text{ d}z=-4\pi iI+\int^{\infty}_0\frac{4\pi^2\text{ d}x}{e^x-1}+2i\pi^3+\int^{2\pi}_0\frac{iy^2\text{ d}y}{e^{iy}-1}=0$$ Note that the two unsolved integrals that remain are divergent, and will actually cancel each other out if we take a principal value "over" the two integrals, but we do not need to deal with them. Instead, we can pass the equation through the imaginary part function and equate the results, which gives $$-4\pi I+2\pi^3+\int^{2\pi}_0\Im\left(\frac{iy^2}{e^{iy}-1}\right)\text{ d}y=0$$ We can separate real and imaginary parts of the integrand of the last integral with some simple conjugate multiplication, and we would end up with $$-4\pi I+2\pi^3+\int^{2\pi}_0\frac{-y^2\text{ d}y}{2}=0$$ $$I=S=\frac{-2\pi^3+\frac{4}{3}\pi^3}{-4\pi}=\boxed{\frac{\pi^2}{6}}$$

Max0815
  • 3,505
2

Let $f(x)=\frac 12-x$ on the interval $[0, 1)$, and extend $f$ to be periodic on $ \mathbb{R} $.

By definition, \begin{align*} \hat f(0)=\int_0^1 f(x)dx=\int_0^1 \left(\frac 12-x\right)dx=0. \end{align*} And for $ \kappa\ne 0 $: \begin{align*} \hat f(\kappa)&=\int_{0}^{1}f(x)e^{-2\pi i\kappa x }dx=\int_0^1\left( \frac 12 -x \right)e^{-2\pi i\kappa x}dx=-\int_0^1xe^{-2\pi i \kappa x}dx\\ &=\frac{1}{2\pi i\kappa }\int_{0}^{1}xd(e^{-2\pi i\kappa x})=\left.\frac{1}{2\pi i\kappa}xe^{-2\pi i\kappa x}\right|_0^1+\frac{1}{2\pi i\kappa}\int_0^1 e^{-2\pi i\kappa x}dx\\ &=\frac{1}{2\pi i\kappa}. \end{align*}

By the Parseval identity \begin{align*} \int_{0}^{1}|f(x)|^2dx=\sum_{k=-\infty}^{\infty}|\hat{f}(k)|^2=|\hat{f}(0)|^2+2\sum_{k=1}^{\infty}|\hat{f}(k)|^2=2\sum_{k=1}^{\infty}\frac{1}{4\pi^2 k^2}. \end{align*} On the other hand, \begin{align*} \int_{0}^{1}|f(x)|^2dx&=\int_{0}^{1}\left( \frac{1}{2}-x \right)^2 dx=\frac 14-\frac 12+\frac 13=\frac 1{12}. \end{align*} Hence $\frac{1}{2\pi^2}\sum_{k=1}^{\infty}\frac{1}{k^2}=\frac{1}{12}$, and so $$ \sum_{k=1}^{\infty}\frac{1}{k^2}=\frac{\pi^2}{6}. $$
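The two sides of the Parseval identity here can be compared numerically (a quick sketch; the truncation at $10^6$ terms is an arbitrary choice):

```python
import math

# Numeric check of Parseval for f(x) = 1/2 - x on [0, 1):
# left side: integral of |f|^2 = 1/12; right side: 2 * sum_{k>=1} 1/(4 pi^2 k^2).
lhs = 1/4 - 1/2 + 1/3          # integral_0^1 (1/2 - x)^2 dx, computed exactly
rhs = 2 * sum(1 / (4 * math.pi**2 * k**2) for k in range(1, 1_000_000))
print(lhs, rhs)                # both approach 1/12
```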


Remark: This is an exercise (Chapter 8.13, page 254) in Folland's book.

Bach
  • 5,730
  • 2
  • 20
  • 41
2

Here's mine. I know I'm answering late, but I'm still answering it.
We'll use the expansion of $\tanh^{-1}$: $$\frac{1}{2}\log\frac{1+y}{1-y}=\sum_{n\geq0}\frac{y^{2n+1}}{2n+1},\quad|y|<1$$ We start with this equality (Fubini):
$$\int_{-1}^{1}\int_{-1}^{1}\frac{1}{1+2xy+y^2}dy\,dx=\int_{-1}^{1}\int_{-1}^{1}\frac{1}{1+2xy+y^2}dx\,dy$$ The LHS of this equality gives: \begin{align} \int_{-1}^{1}\int_{-1}^{1}\frac{1}{1+2xy+y^2}dy\,dx&=\int_{-1}^{1}\left[\frac{\arctan \frac{x+y}{\sqrt{1-x^2}}}{\sqrt{1-x^2}}\right]_{y=-1}^{y=1}dx\\ &=\int_{-1}^{1}\frac{\pi}{2\sqrt{1-x^2}}dx=\frac{\pi^2}{2} \end{align} The RHS of the former equality yields: \begin{align} \int_{-1}^{1}\int_{-1}^{1}\frac{1}{1+2xy+y^2}dx\,dy&=\int_{-1}^{1}\left[\frac{\log(1+2xy+y^2)}{2y}\right]_{x=-1}^{x=1}dy\\ &=\int_{-1}^{1}\frac{\log\frac{1+y}{1-y}}{y}dy\\ &=2\int_{-1}^{1}\sum_{n\geq0}\frac{y^{2n}}{2n+1}dy\\ &=4\sum_{n\geq0}\frac{1}{(2n+1)^2} \end{align} Hence, $$\sum_{n\geq0}\frac{1}{(2n+1)^2}=\frac{\pi^2}{8}$$ Now $$\frac{3}{4}\zeta(2)=\zeta(2)-\frac{1}{4}\zeta(2)=\sum_{n\geq 1}\frac{1}{n^2}-\sum_{m\geq1}\frac{1}{(2m)^2}=\sum_{r\geq0}\frac{1}{(2r+1)^2}=\frac{\pi^2}{8}$$ Solving this we get $$\zeta(2)=\frac{\pi^2}{6}$$ as desired. Source: https://www.emis.de/journals/GM/vol16nr4/ivan/ivan.pdf
Here are more proofs.
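The reduced one-variable integral in the middle of this argument can be checked numerically (a sketch, not part of the proof; the midpoint grid is an arbitrary choice):

```python
import math

# Numeric check: integral_{-1}^{1} ln((1+y)/(1-y))/y dy = pi^2/2, midpoint rule.
def f(y):
    return math.log((1 + y) / (1 - y)) / y

N = 1_000_000
step = 2.0 / N
val = sum(f(-1 + (i + 0.5) * step) for i in range(N)) * step
print(val, math.pi**2 / 2)
```

The midpoints never land exactly on $y=0$ (where the integrand has a removable singularity with value $2$) or on $y=\pm1$ (integrable logarithmic singularities), so no special handling is needed.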

2

The following proof is by Khalaf Ruhemi (he is not an MSE member).

By partial fraction decomposition, we have $$\frac{y}{(1+y^2)(y^2+x^2)}=\frac{1}{x^2-1}\left(\frac{y}{1+y^2}-\frac{y}{y^2+x^2}\right).$$ Integrate both sides from $y=0$ to $y=\infty$, \begin{gather*} \int_0^\infty\frac{y}{(1+y^2)(y^2+x^2)}\mathrm{d}y=\frac{1}{x^2-1}\int_0^\infty\left[\frac{y}{1+y^2}-\frac{y}{y^2+x^2}\right]\mathrm{d}y\\ =\frac{1}{x^2-1}\left[\frac12\ln(1+y^2)-\frac12\ln(y^2+x^2)\right]_0^\infty=\frac{1}{2(x^2-1)}\left[\ln\left(\frac{1+y^2}{y^2+x^2}\right)\right]_0^\infty\\ =\frac{1}{2(x^2-1)}\left[\ln(1)-\ln\left(\frac{1}{x^2}\right)\right]=\frac{1}{2(x^2-1)}\left[2\ln(x)\right]=\frac{\ln(x)}{x^2-1}. \end{gather*} Next, integrate both sides from $x=0$ to $x=\infty$ \begin{gather*} \int_0^\infty\frac{\ln(x)}{x^2-1}\mathrm{d}x=\int_0^\infty\int_0^\infty\frac{y}{(1+y^2)(y^2+x^2)}\mathrm{d}y\,\mathrm{d}x\\ \{\text{change the order of integration}\}\\ =\int_0^\infty\frac{1}{1+y^2}\left[\int_0^\infty\frac{y\,\mathrm{d}x}{y^2+x^2}\right]\mathrm{d}y\\ =\int_0^\infty\frac{1}{1+y^2}\left[\arctan\left(\frac{x}{y}\right)\right]_0^\infty dy=\int_0^\infty\frac{1}{1+y^2}\left[\frac{\pi}{2}-0\right] \mathrm{d}y\\ =\frac{\pi}{2}\int_0^\infty\frac{1}{1+y^2} dy=\frac{\pi}{2}\arctan(y)\bigg|_0^\infty=\frac{\pi}{2}\cdot\frac{\pi}{2}=\frac{\pi^2}{4}. 
\end{gather*} Thus, \begin{gather*} \frac{\pi^2}{4}=\int_0^\infty\frac{\ln(x)}{x^2-1}\mathrm{d}x=\left(\int_0^1+\int_1^\infty\right)\frac{\ln(x)}{x^2-1}\mathrm{d}x\\ =\int_0^1\frac{\ln(x)}{x^2-1}\mathrm{d}x+\underbrace{\int_1^\infty\frac{\ln(x)}{x^2-1}\mathrm{d}x}_{x\to1/x}\\ =2\int_0^1\frac{\ln(x)}{x^2-1}\mathrm{d}x=-\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x-\int_0^1\frac{\ln(x)}{1+x}\mathrm{d}x\\ \left\{\text{use $\frac{1}{1+x}=\frac{1}{1-x}-\frac{2x}{1-x^2}$ in the second integral}\right\}\\ =-2\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x+2\underbrace{\int_0^1\frac{x\ln(x)}{1-x^2}\mathrm{d}x}_{x^2\to x}\\ =-2\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x+\frac12\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x\\ =-\frac32\int_0^1\frac{\ln(x)}{1-x}\mathrm{d}x\overset{1-x\to y}{=}-\frac32\int_0^1\frac{\ln(1-y)}{y}\mathrm{d}y\\ \{\text{expand $\ln(1-y)$ in series}\}\\ =\frac32\sum_{n=1}^\infty \frac{1}{n}\int_0^1 y^{n-1}\mathrm{d}y=\frac32\sum_{n=1}^\infty\frac{1}{n^2}=\frac{3}{2}\zeta(2). \end{gather*} So we have $$\frac{\pi^2}{4}=\frac{3}{2}\zeta(2)\Longrightarrow \zeta(2)=\frac{\pi^2}{6}.$$
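The pivotal value $\int_0^\infty\frac{\ln(x)}{x^2-1}\mathrm{d}x=\frac{\pi^2}{4}$ can be checked numerically through its fold onto $(0,1)$ (a sketch with arbitrary quadrature choices, not part of the argument):

```python
import math

# Numeric check: integral_0^inf ln(x)/(x^2-1) dx = 2 * integral_0^1 ln(x)/(x^2-1) dx
# (by the substitution x -> 1/x), which should equal pi^2/4.  Midpoint rule;
# the singularity at x=1 is removable (the integrand tends to 1/2 there).
N = 1_000_000
h = 1.0 / N
half = sum(math.log((i + 0.5) * h) / (((i + 0.5) * h)**2 - 1) for i in range(N)) * h
print(2 * half, math.pi**2 / 4)
```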

Ali Shadhar
  • 25,498
  • This looks identical to Daniele Ritelli's solution to the Basel Problem. We would like to refer you to our joint AMS Publication for a generalization of this particular proof: https://www.ams.org/journals/qam/2018-76-03/S0033-569X-2018-01499-3/ – Vivek Kaushik Mar 05 '21 at 23:02
  • @Vivek Kaushik can you provide a link to his solution? – Ali Shadhar Mar 06 '21 at 18:13
  • Here is his paper: https://arxiv.org/abs/1208.5981. The joint publication listed earlier also references some other authors with similar solutions. – Vivek Kaushik Mar 06 '21 at 18:59
  • @Vivek Kaushik thanks – Ali Shadhar Mar 07 '21 at 00:09
1

I thought I'd just add a rigorous, yet only slightly more involved, version of @QiaochuYuan's proof:

So the generating function of the Bernoulli numbers, given by $$ g(z) = \frac{z}{e^z - 1} $$ does have a partial fraction decomposition, but we have to divide by $z$ in order to make the pole sum converge: $$ \frac{g(z)}{z} = \frac{1}{e^z - 1} = -\frac{1}{2} + \frac{1}{z} + \sum_{n=1}^\infty \frac{2z}{z^2 + (2\pi n)^2}, $$ where we have already collected the terms for each $n$ and $-n$ together. This is justified because the difference between $g(z)/z$ and the partial fraction sum is entire and bounded, hence constant by Liouville's theorem, and the constant may easily be computed from the limit at $0$. Near $0$ all of this converges absolutely, so that we are justified in permuting the order of summation in order to obtain the expression $$ \sum_{n=1}^\infty \frac{2 (-1)^k}{(2\pi n)^2(2\pi n)^{2k}} $$ for the coefficient of $z^{2k+1}$ in the power series expansion of $g(z)/z$. Equating this with the Bernoulli expansion $\frac{1}{e^z-1}=\frac1z-\frac12+\sum_{k\ge1}\frac{B_{2k}}{(2k)!}z^{2k-1}$, the case $k=0$ gives $\frac{B_2}{2!}=\frac{1}{12}=\sum_{n=1}^\infty\frac{2}{(2\pi n)^2}$, which is the desired formula $\zeta(2)=\frac{\pi^2}{6}$.
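The pole expansion can be spot-checked numerically at a real point, say $z=1$ (my own test value; the truncation at $10^6$ poles is arbitrary):

```python
import math

# Numeric check of the pole expansion
#   1/(e^z - 1) = -1/2 + 1/z + sum_{n>=1} 2z/(z^2 + (2 pi n)^2),
# evaluated at the (arbitrary) test point z = 1.
z = 1.0
lhs = 1 / math.expm1(z)   # 1/(e - 1)
rhs = -0.5 + 1/z + sum(2*z / (z*z + (2*math.pi*n)**2) for n in range(1, 1_000_000))
print(lhs, rhs)
```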

Cloudscape
  • 5,124
1

The extraction of the desired values will also consider the simple fact that $\displaystyle \sum_{n=1}^{\infty}\frac{1}{n^2}= \sum_{n=1}^{\infty}\frac{1}{(2n)^2}+ \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}=\frac{1}{4} \sum_{n=1}^{\infty}\frac{1}{n^2}+ \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$ that leads to $\displaystyle \sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{4}{3}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$. Exploiting the power series $\displaystyle \operatorname{arctanh}(x)=\sum_{n=1}^{\infty}\frac{x^{2n-1}}{2n-1}$, we write

$$4\sum_{n=1}^{\infty} \frac{1}{(2n-1)^2}=4\int_0^1 \frac{\operatorname{arctanh}(x)}{x}\textrm{d}x =-\int_0^1\left( \int_0^1\frac{\partial}{\partial x} \left(\frac{\log((1+y)^2-4 x y)}{y}\right)\textrm{d}x\right) \textrm{d}y $$ $$ =4\int_0^1\left( \int_0^1 \frac{1}{(y+1-2x)^2+(2 \sqrt{x (1-x)})^2}\textrm{d}y\right) \textrm{d}x $$ $$=4\int_0^1\frac{1}{2 \sqrt{x (1-x)}}\arctan\left(\frac{y+1-2x}{2 \sqrt{x (1-x)}}\right)\biggr|_{y=0}^{y=1}\textrm{d}x$$

\begin{equation*} =2\int_0^1\frac{1}{\sqrt{x (1-x)}}\left(\arctan\left(\frac{\sqrt{1-x}}{ \sqrt{x}}\right)-\arctan\left(\frac{1-2x}{2 \sqrt{x (1-x)}}\right)\right)\textrm{d}x \end{equation*} \begin{equation*} \overset{x=\cos^2(t)}{=}4 \int_0^{\pi/2}t\, \textrm{d}t +4 \underbrace{\int_0^{\pi/2}\arctan(\cot(2t))\textrm{d}t}_{\displaystyle 0 \text{ by symmetry}}=\frac{\pi^2}{2}, \end{equation*} which gives $\displaystyle \sum_{n=1}^{\infty} \frac{1}{(2n-1)^2}=\frac{\pi^2}{8}$, and combined with $\displaystyle \sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{4}{3}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$, we arrive at the desired result, and the proof is complete.
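The starting identity $4\int_0^1\operatorname{arctanh}(x)/x\,\textrm{d}x=\pi^2/2$ can be checked numerically (a sketch with an arbitrary midpoint grid, not part of the proof):

```python
import math

# Numeric check: 4 * integral_0^1 arctanh(x)/x dx = pi^2/2, midpoint rule.
# The logarithmic blow-up of arctanh at x = 1 is integrable, and the midpoint
# rule never evaluates at the endpoint itself.
N = 1_000_000
h = 1.0 / N
val = 4 * sum(math.atanh((i + 0.5) * h) / ((i + 0.5) * h) for i in range(N)) * h
print(val, math.pi**2 / 2)
```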

Another creative solution may be found in (Almost) Impossible Integrals, Sums, and Series (2019), Chapter $3$, Section $3.1$, pp. $55-57$.

user97357329
  • 5,319
0

I present my two proofs here; the second is simpler but relies heavily on the Weierstrass factorization. The first is slightly more involved but more rigorous:

Proof 1):

We know that:

$\arctan(x)=x-\dfrac{x^3}{3}+\dfrac{x^5}{5}-\dfrac{x^7}{7}+\cdots$

Now,$$\int x^1\arctan(x) \,dx=\dfrac{x^3}{1\cdot3}-\dfrac{x^5}{3\cdot5}+\dfrac{x^7}{5\cdot7}-\cdots--(1)$$

$$\int x^3\arctan(x) \,dx=\dfrac{x^5}{1\cdot5}-\dfrac{x^7}{3\cdot7}+\dfrac{x^9}{5\cdot9}-\cdots--(2)$$

$$\int x^5\arctan(x) \,dx=\dfrac{x^7}{1\cdot7}-\dfrac{x^9}{3\cdot9}+\dfrac{x^{11}}{5\cdot11}-\cdots--(3)$$

Notice that if one were to add all these equations and replace $x$ with $i$, they would get $i$ times the residual (cross-term) value from the square of the Leibniz pi formula, i.e.:

$$\dfrac{\pi}{4}\cdot\dfrac{\pi}{4}=\left(1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\dfrac{1}{9}+\cdots \right)\left(1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\dfrac{1}{9}+\cdots \right)$$

$$\dfrac{\pi^2}{16}=\dfrac{1}{1^2}+\dfrac{1}{3^2}+\dfrac{1}{5^2}+\dfrac{1}{7^2}+\cdots+2\left(-\dfrac{1}{1\cdot3}+\dfrac{1}{1\cdot5}-\dfrac{1}{1\cdot7}+\cdots-\dfrac{1}{3\cdot5}+\dfrac{1}{3\cdot7}+\cdots \right)$$

$$\dfrac{\pi^2}{16}=\left(\dfrac{1}{1^2}+\dfrac{1}{3^2}+\dfrac{1}{5^2}+\dfrac{1}{7^2}+\cdots \right)+2K(Say)--(@)$$

Now $iK$ arises as $$iK=\sum_{n=0}^{\infty} I^{2n+1}\bigg|_{x=i},\qquad I^{2n+1}=\int x^{2n+1}\arctan(x)\, dx,$$ where $I^{1}=(1)$, $I^{3}=(2)$, $I^{5}=(3)$, $\cdots$

$$I^{1}=\int x^1\arctan(x) \,dx=\dfrac{(x^2+1)\arctan(x)-x}{2}$$ $$I^{3}=\int x^3\arctan(x) \,dx=\dfrac{(3x^4-3)\arctan(x)-x^3+3x}{12}$$ $$I^{5}=\int x^5\arctan(x) \,dx=\dfrac{(15x^6+15)\arctan(x)-3x^5+5x^3-15x}{90}$$

On replacing $x=i$ in all these equations, we get: $I^{1}=-\dfrac{i}{2}; I^{3}=\dfrac{i}{3}; I^{5}=-\dfrac{23i}{90}; I^{7}=\dfrac{22i}{105}\cdots$

Upon adding these values we get:

$$iK=-\dfrac{i}{2}+\dfrac{i}{3}-\dfrac{23i}{90}+\dfrac{22i}{105}-\cdots--(*)$$

Now, we know the Taylor series expansion for $\arctan^2(x)=x^2 - \dfrac{2}{3}x^4 + \dfrac{23}{45}x^6 - \dfrac{44}{105}x^8 + \dfrac{563}{1575}x^{10} + \cdots$

By observation, we can see that $(*)$ is equal to $-\frac{i}{2}$ times the $\arctan^2(x)$ expansion at $x=1$

Therefore, $$\dfrac{\arctan^2(1)}{2}=\frac{(*)}{-i}=-K=\dfrac{\pi^2}{32}=(-1)\cdot\left(-\dfrac{1}{1\cdot3}+\dfrac{1}{1\cdot5}-\dfrac{1}{1\cdot7}+\cdots-\dfrac{1}{3\cdot5}+\dfrac{1}{3\cdot7}+\cdots \right)$$

This result can be substituted in $(@)$, to get: $$\dfrac{\pi^2}{16}=\dfrac{1}{1^2}+\dfrac{1}{3^2}+\dfrac{1}{5^2}+\dfrac{1}{7^2}+\cdots+2K=\dfrac{1}{1^2}+\dfrac{1}{3^2}+\dfrac{1}{5^2}+\dfrac{1}{7^2}+\cdots-\dfrac{\pi^2}{16}$$

Which results in,$$\dfrac{\pi^2}{8}=\dfrac{1}{1^2}+\dfrac{1}{3^2}+\dfrac{1}{5^2}+\dfrac{1}{7^2}+\dfrac{1}{9^2}+\dfrac{1}{11^2}+\dfrac{1}{13^2}+\dfrac{1}{15^2}+\cdots--(!)$$

Now consider:$$L=\dfrac{1}{1^2}+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\dfrac{1}{4^2}+\dfrac{1}{5^2}+\dfrac{1}{6^2}+\cdots=(!)+(L/4)$$

$$\Rightarrow\dfrac{3L}{4}=(!)=\dfrac{\pi^2}{8}$$

Therefore,$$L=\dfrac{\pi^2}{6}=\dfrac{1}{1^2}+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\dfrac{1}{4^2}+\dfrac{1}{5^2}+\dfrac{1}{6^2}+\cdots$$
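The claimed value $K=-\pi^2/32$ of the cross-term sum can be checked numerically straight from the defining relation $(\pi/4)^2=\sum_k 1/(2k+1)^2+2K$ (a sketch; the truncations are arbitrary choices):

```python
import math

# Numeric check: squaring the Leibniz series gives
#   (pi/4)^2 = sum_k 1/(2k+1)^2 + 2K,  so K should be -pi^2/32.
N = 1_000_000
leibniz = sum((-1)**k / (2*k + 1) for k in range(N))          # -> pi/4
odd_squares = sum(1 / (2*k + 1)**2 for k in range(N))         # -> pi^2/8
K = (leibniz**2 - odd_squares) / 2
print(K, -math.pi**2 / 32)
```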



Proof 2):

This is by no means rigorous but I found it was enough to give me a general intuition to approach problems like these.

Consider a polynomial: $a_nx^n+a_{n-1}x^{n-1}+\cdots+a_2x^2+a_1x+a_0$

Now the summation of the reciprocal products of the roots (two at a time) is:

$$\sum{\dfrac{1}{r_i\cdot r_j}}=\dfrac{a_2}{a_0}$$

Consider $\cos(x)$ as the "polynomial" function. The Taylor expansion is:

$1-\dfrac{x^2}{2!}+\dfrac{x^4}{4!}-\cdots$

The roots of $\cos(x)$ are: $\dfrac{\pi}{2},\dfrac{-\pi}{2},\dfrac{3\pi}{2},\dfrac{-3\pi}{2},\dfrac{5\pi}{2},\cdots$

Here, $\sum{\dfrac{1}{r_i\cdot r_j}}=\dfrac{a_2}{a_0}=\dfrac{-1}{2}$

So,$$\dfrac{-1}{2}=\dfrac{1}{\dfrac{\pi}{2}}(\dfrac{1}{\dfrac{3\pi}{2}}+\dfrac{1}{\dfrac{5\pi}{2}}+\cdots+\dfrac{1}{\dfrac{-\pi}{2}}+\dfrac{1}{\dfrac{-3\pi}{2}}+\cdots)+\dfrac{1}{\dfrac{3\pi}{2}}(\dfrac{1}{\dfrac{\pi}{2}}+\dfrac{1}{\dfrac{5\pi}{2}}+\cdots+\dfrac{1}{\dfrac{-\pi}{2}}+\dfrac{1}{\dfrac{-3\pi}{2}}+\cdots)+\dfrac{1}{\dfrac{5\pi}{2}}(\dfrac{1}{\dfrac{\pi}{2}}+\dfrac{1}{\dfrac{3\pi}{2}}+\dfrac{1}{\dfrac{7\pi}{2}}+\cdots+\dfrac{1}{\dfrac{-\pi}{2}}+\dfrac{1}{\dfrac{-3\pi}{2}}+\dfrac{1}{\dfrac{-5\pi}{2}}\cdots)+\cdots$$

This results in:

$$\dfrac{1}{\dfrac{\pi^2}{2}}(\dfrac{1}{\dfrac{3}{2}}+\dfrac{1}{\dfrac{5}{2}}+\cdots+\dfrac{1}{\dfrac{-1}{2}}+\dfrac{1}{\dfrac{-3}{2}}+\cdots)+\dfrac{1}{\dfrac{3\pi^2}{2}}(\dfrac{1}{\dfrac{1}{2}}+\dfrac{1}{\dfrac{5}{2}}+\cdots+\dfrac{1}{\dfrac{-1}{2}}+\dfrac{1}{\dfrac{-3}{2}}+\cdots)+\cdots$$

Resulting in:

$$-\dfrac{1}{2}=\dfrac{1}{\dfrac{\pi^2}{2}}\dfrac{1}{(\dfrac{-1}{2})}+\dfrac{1}{\dfrac{3\pi^2}{2}}\dfrac{1}{(\dfrac{-3}{2})}+\dfrac{1}{\dfrac{5\pi^2}{2}}\dfrac{1}{(\dfrac{-5}{2})}+\cdots$$

Therefore, $$\dfrac{\pi^2}{8}=\dfrac{1}{1^2}+\dfrac{1}{3^2}+\dfrac{1}{5^2}+\dfrac{1}{7^2}+\cdots$$

Consequently, $$\dfrac{\pi^2}{6}=\dfrac{1}{1^2}+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\dfrac{1}{4^2}+\cdots$$
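The key claim, that $e_2=\sum 1/(r_ir_j)=-\frac12$ forces $\sum_{\text{roots}}1/r^2=1$ (by Newton's identity $p_2=p_1^2-2e_2$ with $p_1=0$ by the symmetry of the roots), can be checked numerically against the actual roots of $\cos x$:

```python
import math

# Numeric check: cos x has roots +/-(2k+1)*pi/2, and the Vieta-style claim
# e2 = a2/a0 = -1/2 forces p2 = sum over roots of 1/r^2 = p1^2 - 2*e2 = 1
# (Newton's identity; p1 = 0 by the +/- symmetry of the roots).
p2 = sum(2 * (2 / ((2*k + 1) * math.pi))**2 for k in range(1_000_000))
print(p2)   # should approach 1, i.e. (8/pi^2) * sum 1/(2k+1)^2 = 1
```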

LithiumPoisoning
  • 1,164
  • 1
  • 16
  • If I were to replace $\cos x - 0$ with, say, $\cos x-1$, $\cos x-\frac12$ or $\cos x-\frac{\sqrt3}{2}$, $\cos x-\sin x$, etc., I would get a whole family of equations analogous to the Basel problem. These results can seemingly be generalized as (where $X_e=\dfrac{\pi}{n}$): $$\dfrac{X_e^2}{2}\csc^2(X_e)(1+\cos(X_e))=\dfrac{\pi^2}{2 n^2}\cdot\dfrac{1}{1-\cos(\frac{\pi}{n})}=\dfrac{1}{1^2}+\dfrac{1}{(2n-1)^2}+\dfrac{1}{(2n+1)^2}+\dfrac{1}{(1-4n)^2}+\dfrac{1}{(1+4n)^2}+\cdots$$ (for $n>1$) :) – LithiumPoisoning Sep 19 '23 at 17:17
0

This proof comes from here: The triangle inequality implies $\displaystyle 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6} $
Pick three random points $A$, $B$, $C$ on the unit circle. Let the sides of triangle $ABC$ have length $a$,$b$,$c$. The probability that $a+b\gt xc$ is $$\mathbb{Pr}(\frac{a+b}c\gt x)=\frac4{\pi^2}\int_{-\pi/2}^{\pi/2} \arcsin(\frac{\cos\phi}x)d\phi$$ which has the series $$\frac8{\pi^2}\sum_{n=0}^{\infty}\frac1{(2n+1)^2x^{2n+1}}$$ By the triangle inequality, the limit of the probability as $x\to1$ is $1$, which establishes the sum.
Proof:
The angles at $A$, $B$ and $C$ are half the angles subtended at the centre, so $(\alpha,\beta,\gamma)$ is uniformly distributed over the flat region bounded by $(\pi,0,0),(0,\pi,0),(0,0,\pi)$.
Since $\frac{\sin\alpha}a=\frac{\sin\beta}b=\frac{\sin\gamma}c$, the key ratio equals $$\frac{a+b}c=\frac{\sin\alpha+\sin\beta}{\sin\gamma}\\=\frac{2\sin\frac{\alpha+\beta}2\cos{\frac{\alpha-\beta}2}}{\sin(\alpha+\beta)}\\ =\frac{\cos\frac{\alpha-\beta}2}{\cos\frac{\alpha+\beta}2} $$ Define $$\theta =\frac{\alpha+\beta}2,\phi=\frac{\alpha-\beta}2$$ The relevant region is $$0\le\theta\le\frac\pi2,\\-\theta\le\phi\le\theta$$ which has area $\pi^2/4$.
The inequality we want is equivalent to $$\cos\theta\le\frac{\cos\phi}x\\ \arccos\frac{\cos\phi}x\le\theta\le\frac\pi2$$ Integrate to find the region's area, then divide by the domain's area to get $$\mathbb{Pr}(\frac{a+b}c\ge x)=\frac4{\pi^2} \int_{-\pi/2}^{\pi/2}\arcsin\frac{\cos\phi}xd\phi$$ To establish the series, differentiate to get $$\frac4{\pi^2}\int_{-\pi/2}^{\pi/2}\frac{-\cos\phi d\phi}{x^2\sqrt{1-\frac{\cos^2\phi}{x^2}}}\\ =\frac{-4}{x\pi^2}\int_{-\pi/2}^{\pi/2} \frac{\cos\phi d\phi}{\sqrt{x^2-1+\sin^2\phi}}\\ =\frac{-8}{x\pi^2}\int_0^{1/\sqrt{x^2-1}}\frac{du}{\sqrt{1+u^2}}\\ =\frac{-8}{x\pi^2}\operatorname{asinh}(1/\sqrt{x^2-1})\\ =\frac{-8}{x\pi^2}\operatorname{atanh}(1/x)\\ =\frac{-8}{\pi^2} \sum_{n=0}^{\infty}\frac1{(2n+1)x^{2n+2}} $$ Integrate this series to get the original probability.
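The claimed probability can be compared against a Monte Carlo simulation (a sketch; the test value $x=2$, the sample size, and the seed are arbitrary choices):

```python
import math, random

# Monte Carlo check: estimate Pr((a+b)/c > x) for a triangle inscribed in the
# unit circle with three uniform random vertices, and compare with the series
#   (8/pi^2) * sum_{n>=0} 1/((2n+1)^2 * x^(2n+1)).
random.seed(0)

def side(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

x = 2.0            # arbitrary test value with x > 1
trials = 200_000
hits = 0
for _ in range(trials):
    A, B, C = [(math.cos(t), math.sin(t))
               for t in (random.uniform(0, 2 * math.pi) for _ in range(3))]
    a, b, c = side(B, C), side(C, A), side(A, B)
    if a + b > x * c:
        hits += 1

series = (8 / math.pi**2) * sum(1 / ((2*n + 1)**2 * x**(2*n + 1)) for n in range(60))
print(hits / trials, series)
```

With these settings the empirical frequency and the series value agree to within Monte Carlo noise (a few times $10^{-3}$).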

Gary
  • 31,845
Empy2
  • 50,853