
In calculating $$\int_{0}^{\infty}e^{-x^{2}}\, dx = \frac{\sqrt{\pi}}{2},$$ what is the motivation behind defining $$F(x) = \int_{0}^{\infty}\frac{e^{-x(1+t^{2})}}{1+t^{2}}\,dt,$$ and where does the idea come from?

Two theorems we proved prior to this example:

Dominated Convergence Theorem (but for continuous functions)

If $f_{n}$ is a sequence of continuous functions on the closed interval $[a,b]$ converging uniformly to $f$, then, fixing $c \in [a,b]$, $\lim_{n\rightarrow \infty} \int_{c}^{x}f_{n}(t)\, dt = \int_{c}^{x}\lim_{n\rightarrow \infty}f_{n}(t)\, dt = \int_{c}^{x}f(t)\,dt$.
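As a quick numerical illustration of this theorem (my own sketch, not from the textbook): $f_n(t)=\frac{\sin(nt)}{n}$ converges uniformly to $0$ on $[0,1]$, since $|f_n|\le \frac1n$, so the integrals are forced to converge to $\int_0^1 0\,dt = 0$.

```python
import math

# f_n(t) = sin(n t)/n converges uniformly to 0 on [0, 1] (|f_n| <= 1/n),
# so the theorem forces int_0^1 f_n(t) dt -> int_0^1 0 dt = 0.
def integral_fn(n, steps=10000):
    """Midpoint Riemann sum of f_n over [0, 1]."""
    h = 1.0 / steps
    return sum(math.sin(n * (k + 0.5) * h) / n for k in range(steps)) * h

for n in (1, 10, 100, 1000):
    print(n, integral_fn(n))  # shrinks toward 0 as n grows
```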

Leibniz Rule: Assume $f(x,t)$ and $\partial_{x}f(x,t)$ are continuous on $[a,b] \times [c,d]$. If $F(x) = \int_{c}^{d} f(x,t)\,dt$, then $\frac{dF(x)}{dx} = \int_{c}^{d} \partial_{x}f(x,t)\, dt$.
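The Leibniz Rule can likewise be sanity-checked numerically (again a sketch of my own, with an example integrand I picked for illustration): differentiate $F(x)=\int_0^1 e^{-xt^2}\,dt$ by a central difference and compare against the integral of $\partial_x f$.

```python
import math

def riemann(f, a, b, n=10000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# F(x) = int_0^1 e^{-x t^2} dt; the Leibniz Rule predicts
# F'(x) = int_0^1 (-t^2) e^{-x t^2} dt.
def F(x):
    return riemann(lambda t: math.exp(-x * t * t), 0.0, 1.0)

def F_prime_leibniz(x):
    return riemann(lambda t: -t * t * math.exp(-x * t * t), 0.0, 1.0)

x, h = 1.0, 1e-5
central_difference = (F(x + h) - F(x - h)) / (2 * h)
print(central_difference, F_prime_leibniz(x))  # the two should agree closely
```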

I ask because, when solving this problem, we use this function together with the necessary theorems (the Leibniz Rule and Dominated Convergence) to find our result, but it feels as if $F(x)$ was pulled out of thin air.

D.C. the III

  • Are we expected to know what proof you are talking about? There are many ways to prove this. – Peter Foreman Jul 28 '19 at 19:17
  • @PeterForeman, I got this from an example in the textbook I am working with. To arrive at the claim I'm making, the author had just previously proven Leibniz's Rule and Dominated Convergence... I'll edit my post to include those two theorems; perhaps that will make the question more concrete. – D.C. the III Jul 28 '19 at 19:21
  • Check this post out: https://math.stackexchange.com/questions/9286/proving-int-0-infty-mathrme-x2-dx-frac-sqrt-pi2?noredirect=1&lq=1 – coreyman317 Jul 28 '19 at 19:26
  • I think the first expression should be equal to $\frac{\sqrt{\pi}}{2}$. – Kinheadpump Jul 28 '19 at 19:27
  • @Kinheadpump Yes, thanks. – D.C. the III Jul 28 '19 at 19:32
  • Your second integral is a function of $x$ only, so you can't call it $F(x,t)$. – TonyK Jul 28 '19 at 19:48
  • @TonyK Edited. – D.C. the III Jul 28 '19 at 19:54
  • It was indeed pulled out of thin air, by a very clever person after drinking lots of coffee. That's how mathematics is done. – TonyK Jul 28 '19 at 19:55
  • In physics, these moves are called "Feynman / Schwinger tricks," since they were used with great success in their quantum field theory calculations. This type of trick goes back to the usual 19th-century suspects, though, I think, even though it wasn't taught widely enough to avoid being rediscovered. – spaceisdarkgreen Jul 28 '19 at 19:56
  • @spaceisdarkgreen I assume by Schwinger you mean this, but with Feynman do you mean this or this? – J.G. Jul 28 '19 at 19:58
  • @J.G. "Feynman trick" usually refers to the loop integral parameter in the first one, but of course he was known for using creative differentiation under the integral sign more generally. I've seen the very first link attributed both to Schwinger alone and to both. – spaceisdarkgreen Jul 28 '19 at 20:08
  • @spaceisdarkgreen It's also common for people to conflate the differentiation trick with Schwinger parameterization, where the two give very similar-looking proofs, such as in evaluating $\int_0^\infty\frac{\sin x}{x}\,dx$. – J.G. Jul 28 '19 at 20:15

2 Answers


If you'll pardon me, it'll help with what comes next if I instead write $$F(t):=\int_0^\infty\frac{\exp(-t(1+y^2))}{1+y^2}dy.$$

Now, there are many ways to evaluate $J:=\int_0^\infty e^{-x^2}dx$; the best compilation of them I know is here. I'll refer to proofs therein with its numbering. Almost all the proofs use a double integral, sometimes by squaring one integral. So if you try inventing your own proof, it's often wise to look for why the original integral squared should have a nice behaviour (especially in view of its value being a nice square root).

The OP asks about the motivation behind proof 4. We start with $$-2Je^{-t^2}=-2e^{-t^2}\int_0^\infty e^{-x^2}dx=-2te^{-t^2}\int_0^\infty e^{-t^2y^2}dy=\frac{d}{dt}F(t^2).$$(It's a little inconvenient for our purposes that the variable labelling I've borrowed from the above link differs from that of the OP.) Integrating from $t=0$ to $t=\infty$, $-2J^2=F(\infty)-F(0)=-\frac{\pi}{2}.$So we can state the motive of $F$'s definition as writing $J$ times the function we're integrating as the derivative of something we can evaluate at the ends of the integration range. More generally, $$A:=\int_a^b f(t)dt\implies A^2=\int_a^b Af(t) dt,$$so it'd be nice to somehow write $Af$ as a derivative.
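For what it's worth, this whole chain can be checked numerically (a sketch of my own, with hand-picked finite truncation points standing in for the infinite limits): $F(0)=\frac{\pi}{2}$, $F(t)\to 0$ as $t\to\infty$, and $-2J^2 = F(\infty)-F(0)$.

```python
import math

def riemann(f, a, b, n=100000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# F(t) = int_0^inf exp(-t(1+y^2)) / (1+y^2) dy, truncated at y = 1000.
def F(t):
    return riemann(lambda y: math.exp(-t * (1 + y * y)) / (1 + y * y), 0.0, 1000.0)

# J = int_0^inf e^{-x^2} dx; the tail beyond x = 10 is negligible.
J = riemann(lambda x: math.exp(-x * x), 0.0, 10.0)

print(F(0.0))      # close to pi/2
print(F(20.0))     # close to 0, playing the role of F(infinity)
print(-2 * J * J)  # close to F(20) - F(0) = -pi/2, so J = sqrt(pi)/2
```

The truncation points are crude but adequate here: the $t=0$ integrand decays like $1/y^2$, while all the others decay exponentially.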

J.G.

  • Correct me if I'm wrong in my thinking, but this process of finding something based on the derivative of something we can evaluate gives me a feeling of partial differential equations. To arrive at such a technique definitely must have taken quite a few cups of something... – D.C. the III Jul 28 '19 at 20:06
  • @dc3rd Good eye; look at proof 3. – J.G. Jul 28 '19 at 20:08

I don't know about the details of this proof, but I can imagine this: if you use differentiation under the integral sign, you will have the integrand

$$e^{-x(1+t^2)}=e^{-x}e^{-xt^2}.$$

After pulling the first factor out, rescaling the variable will make a factor $1/\sqrt x$ appear. The term $+1$ in $t^2+1$ is introduced to avoid a singularity at $t=0$.
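Concretely (my own elaboration of this sketch), differentiating under the integral sign and substituting $u=\sqrt{x}\,t$ gives $$F'(x) = -\int_0^\infty e^{-x(1+t^2)}\,dt = -e^{-x}\int_0^\infty e^{-xt^2}\,dt = -\frac{e^{-x}}{\sqrt{x}}\int_0^\infty e^{-u^2}\,du = -\frac{e^{-x}}{\sqrt{x}}\,J,$$ which is where the factor involving $\sqrt x$ appears. One way to finish from here (not necessarily the book's): integrating over $x$ from $0$ to $\infty$ and substituting $x=u^2$ in $\int_0^\infty \frac{e^{-x}}{\sqrt{x}}\,dx = 2\int_0^\infty e^{-u^2}\,du = 2J$ recovers $F(\infty)-F(0) = -2J^2 = -\frac{\pi}{2}$.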