
Notation: For $\varphi \in [0,\frac{\pi}{2}]$ and $k \in [0,1)$ the definitions $$ \operatorname{F}(\varphi,k) = \int \limits_0^\varphi \frac{\mathrm{d} \theta}{\sqrt{1-k^2 \sin^2(\theta)}} = \int \limits_0^{\sin(\varphi)} \frac{\mathrm{d} x}{\sqrt{(1 - k^2 x^2)(1-x^2)}} $$ and $\operatorname{K}(k) = \operatorname{F}(\frac{\pi}{2},k)$ will be used for the elliptic integrals of the first kind.

When answering this question, I came across the function $$ \psi \colon [0,1) \to (0,\infty) \, , \, \psi(k) = \int \limits_0^1 \frac{\operatorname{K}(k x)}{\sqrt{(1-k^2 x^2)(1-x^2)}} \, \mathrm{d}x = \int \limits_0^{\pi/2} \frac{\operatorname{K}(k \sin(\theta))}{\sqrt{1-k^2 \sin^2 (\theta)}} \, \mathrm{d} \theta \, .$$
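For readers who want to experiment numerically: the following is a minimal Python/scipy sketch (not part of the argument) that evaluates both integral representations of $\psi$ and checks that they agree. Note that scipy's elliptic routines take the *parameter* $m = k^2$ rather than the modulus $k$; the $\theta$-form is the more convenient one numerically because it avoids the endpoint singularity at $x=1$.

```python
# Sanity check: the two integral representations of psi(k) agree numerically.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk  # parameter convention: ellipk(m) with m = k^2

def psi_x(k):
    # x-form: K(k x) / sqrt((1 - k^2 x^2)(1 - x^2)) integrated over [0, 1]
    f = lambda x: ellipk((k * x)**2) / np.sqrt((1 - (k * x)**2) * (1 - x**2))
    return quad(f, 0, 1, limit=200)[0]

def psi_theta(k):
    # theta-form: K(k sin t) / sqrt(1 - k^2 sin^2 t) integrated over [0, pi/2]
    f = lambda t: ellipk((k * np.sin(t))**2) / np.sqrt(1 - (k * np.sin(t))**2)
    return quad(f, 0, np.pi / 2)[0]

for k in (0.1, 0.5, 0.9, 0.99):
    print(k, psi_x(k), psi_theta(k))  # the two columns should coincide
```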

While $\psi(k) = \frac{\pi^2}{4} [1+ \frac{3}{8} k^2 + \mathcal{O}(k^4)]$ near $k=0$ is readily found using Maclaurin series, the expansion at $k=1$ is more elusive.
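A quick numerical comparison with this Maclaurin approximation (again just a scipy sketch) shows the expected $\mathcal{O}(k^4)$ discrepancy for small $k$:

```python
# Check of the small-k expansion psi(k) ≈ (pi^2/4) (1 + 3 k^2 / 8).
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk  # parameter convention: ellipk(m) with m = k^2

def psi(k):
    integrand = lambda t: ellipk((k * np.sin(t))**2) / np.sqrt(1 - (k * np.sin(t))**2)
    return quad(integrand, 0, np.pi / 2)[0]

for k in (0.05, 0.1, 0.2):
    series = np.pi**2 / 4 * (1 + 3 * k**2 / 8)
    print(k, psi(k), series, psi(k) - series)  # difference should shrink like k^4
```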

The naive attempt of replacing $\operatorname{K}(kx)$ by $\operatorname{K}(k)$ (since the largest contribution to the integral comes from the region near $x=1$) yields $\psi(k) \simeq \operatorname{K}^2 (k)$, which according to plots is not too far off but also not quite right. Integration by parts reproduces this term, but the remaining integral does not look very nice: \begin{align} \psi(k) &= \operatorname{K}^2 (k) - k \int \limits_0^1 \operatorname{K}'(k x) \operatorname{F}(\arcsin(x),k) \, \mathrm{d} x \\ &= \operatorname{K}^2 (k) - \int \limits_0^1 \left[\frac{\operatorname{E}(k x)}{1-k^2 x^2} - \operatorname{K}(kx)\right] \frac{\operatorname{F}(\arcsin(x),k)}{x} \, \mathrm{d} x \, . \end{align} The expansion $\operatorname{K}(k) = -\frac{1}{2} \log(\frac{1-k}{8}) + \mathcal{o}(1)$ is useful for the final steps, but I do not know how to extract all the leading terms, so:

How can we find the asymptotic expansion (ideally up to and including the constant term) of $\psi(k)$ as $k \nearrow 1$?
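For reference, the integration-by-parts identity above can at least be verified numerically; here is a minimal scipy sketch, where `ellipkinc(phi, m)` is the incomplete integral $\operatorname{F}(\varphi, k)$ in scipy's parameter convention $m = k^2$.

```python
# Numerical check of psi(k) = K(k)^2 - int_0^1 [E(kx)/(1-k^2x^2) - K(kx)] F(arcsin x, k) / x dx.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe, ellipkinc  # all take the parameter m = k^2

def psi(k):
    integrand = lambda t: ellipk((k * np.sin(t))**2) / np.sqrt(1 - (k * np.sin(t))**2)
    return quad(integrand, 0, np.pi / 2)[0]

def psi_by_parts(k):
    def integrand(x):
        m = (k * x)**2
        return (ellipe(m) / (1 - m) - ellipk(m)) * ellipkinc(np.arcsin(x), k**2) / x
    return ellipk(k**2)**2 - quad(integrand, 0, 1)[0]

for k in (0.3, 0.7, 0.95):
    print(k, psi(k), psi_by_parts(k))  # the two columns should agree
```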

  • I actually spent a couple of hours last night trying to crack your integral and wound up either going in circles or making things worse. One thing I haven't looked at yet is taking the real function $\psi$ you've defined, examining its derivatives, and finding an ODE that it satisfies. – David H Feb 16 '19 at 08:00
  • @DavidH Thanks for your efforts! Taking the derivative looks promising at first, but unfortunately I always get stuck after a few steps. – ComplexYetTrivial Feb 16 '19 at 19:14

3 Answers

3

EllipitcElena's answer and Maxim's correction show that we have $$ \psi(k) = - \frac{1}{4} \operatorname{K}(k) \log(1-k^2) + 2 \log(2) \operatorname{K}(k) + \chi(k) \, ,$$ where ($\psi_0$ is the digamma function, so $\psi$ was not the best choice) $$ \chi (k) = \sum \limits_{m=1}^\infty \frac{\left(\frac{1}{2}\right)_m^2}{m!^2} \int \limits_0^1 \frac{(1-k^2 x^2)^{m-\frac{1}{2}}}{\sqrt{1-x^2}} \left[-\frac{1}{2} \log(1-k^2 x^2) + \psi_0(m+1)-\psi_0 \left(m + \frac{1}{2}\right)\right] \, \mathrm{d} x \, . $$ Near $k=1$ we find $$ \psi (k) = \frac{1}{8} \left[\log^2 \left(\frac{1-k}{32}\right) - 4 \log^2 (2)\right] + \chi(1) + \mathcal{o}(1) \, .$$ $\chi(1)$ can be computed exactly: \begin{align} \chi(1) &= \sum \limits_{m=1}^\infty \frac{\left(\frac{1}{2}\right)_m^2}{m!^2} \int \limits_0^1 (1-x^2)^{m-1} \left[-\frac{1}{2} \log(1- x^2) + \psi_0(m+1)-\psi_0 \left(m + \frac{1}{2}\right)\right] \, \mathrm{d} x \\ &= \frac{1}{2} \sum \limits_{m=1}^\infty \frac{\left(\frac{1}{2}\right)_m^2}{m!^2} \left[-\frac{1}{2} \partial_1 \operatorname{B}\left(m,\frac{1}{2}\right) + \left(\psi_0(m+1)-\psi_0 \left(m + \frac{1}{2}\right)\right) \operatorname{B}\left(m,\frac{1}{2}\right) \right] \\ &= \frac{1}{2} \sum \limits_{m=1}^\infty \frac{\left(\frac{1}{2}\right)_m^2}{m!^2} \operatorname{B}\left(m,\frac{1}{2}\right) \left[\frac{1}{2}\left(\psi_0 \left(m + \frac{1}{2}\right) - \psi_0(m)\right) + \psi_0(m+1)-\psi_0 \left(m + \frac{1}{2}\right) \right] \, . \end{align} Using $\operatorname{B}(m,\frac{1}{2}) = \frac{(m-1)!}{\left(\frac{1}{2}\right)_m}$, $\left(\frac{1}{2}\right)_m = \frac{(2m)!}{4^m m!}$ and special values of $\psi_0$ this expression can be simplified: $$ \chi(1) = \sum \limits_{m=1}^\infty \frac{{2m \choose m}}{2m 4^m} \left[\log(2) + H_m - H_{2m-1}\right] = \frac{1}{2} \log^2 (2) \, .$$ The final sum follows from the series $$ \sum \limits_{m=1}^\infty \frac{{2m \choose m}}{2m 4^m} x^m = \log(2) - \log(1+\sqrt{1-x}) \, , \, x \in [-1,1] \, , $$ and is discussed in this question as well.
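A quick numerical check of the last series (a sketch in plain Python) confirms the value $\frac{1}{2}\log^2(2) \approx 0.240227$:

```python
# Partial sums of sum_{m>=1} binom(2m,m)/(2m 4^m) [log 2 + H_m - H_{2m-1}],
# which should approach log(2)^2 / 2.
import math

c = 1.0       # c = binom(2m, m) / 4^m, updated iteratively
H_m = 0.0     # harmonic number H_m
H_2m1 = 0.0   # harmonic number H_{2m-1}
total = 0.0
for m in range(1, 5001):
    c *= (2 * m - 1) / (2 * m)
    H_m += 1.0 / m
    H_2m1 += 1.0 / (2 * m - 1) + (1.0 / (2 * m - 2) if m > 1 else 0.0)
    total += c / (2 * m) * (math.log(2) + H_m - H_2m1)

print(total, math.log(2)**2 / 2)
```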

Putting everything together we obtain the rather nice result $$ \boxed{\psi (k) = \frac{1}{8} \log^2 \left(\frac{1-k}{32}\right) + \mathcal{o} (1)} $$ as $k \nearrow 1$.
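Numerically, the boxed approximation indeed closes in on $\psi$; a small scipy sketch for illustration:

```python
# The difference psi(k) - log((1-k)/32)^2 / 8 should tend to 0 as k -> 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk  # parameter convention: ellipk(m) with m = k^2

def psi(k):
    integrand = lambda t: ellipk((k * np.sin(t))**2) / np.sqrt(1 - (k * np.sin(t))**2)
    return quad(integrand, 0, np.pi / 2, limit=200)[0]

for k in (0.99, 0.999, 0.9999, 0.99999):
    print(k, psi(k) - np.log((1 - k) / 32)**2 / 8)  # should decrease towards 0
```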

2

Some heuristics: From the DLMF we get the asymptotic expansion of $K(z)$ to all orders around $z=1_-$ $$ K(z)\sim-\frac12\log(1-z^2)+2\log(2)+O((1-z^2) \log(1-z^2)) $$
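This can be checked numerically in a few lines (a sketch using scipy, whose `ellipk` takes the parameter $m = z^2$):

```python
# K(z) + log(1 - z^2) / 2 should approach 2 log 2 ≈ 1.386294 as z -> 1.
import numpy as np
from scipy.special import ellipk  # parameter convention: ellipk(m) with m = z^2

for z in (0.9, 0.99, 0.999, 0.9999):
    print(z, ellipk(z**2) + 0.5 * np.log(1 - z**2), 2 * np.log(2))
```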

Disregarding the terms which vanish at $z=1_{-}$, we obtain the following (unfortunately I do not see a way of making a precise estimate of the $O$-term, but it should definitely be $o(1)$):

$$ \psi(k)\sim -\frac{1}{2}\int_0^1\frac{\log(1-k^2x^2)}{\sqrt{1-x^2}\sqrt{1-(kx)^2}}\,dx-2 \log(2)\int_0^1\frac{dx}{\sqrt{1-x^2}\sqrt{1-(kx)^2}} $$

or (a proof of the first integral can be found here in Section 7; the second is just the very definition of $K(k)$)

$$ \psi(k)\sim -\frac12K(k)\log(1-k^2)-2\log(2)K(k) $$

which fits pretty well (the relative error for $k=0.999999999$ is about $2.1\%$ and for $k=0.9999999$ it is about $2.5\%$), so we are most likely only off by a constant.

Using the asymptotic expansion from the beginning of the post again, we could simplify the above further, but I will leave it like that...

  • There is a typo in the last formula: the first integral contains a $1/2$ factor and $\ln k' = \ln \sqrt {1 - k^2}$ gives another $1/2$ factor, so the first term should be $-K(k) \ln(1 - k^2)/4$. This gives $\psi(k) \sim \ln^2(1 - k)/8$. – Maxim Feb 16 '19 at 18:31
  • Thank you! It is quite surprising that the first integral has such a nice value. In addition to Maxim's comment, your second term should have a positive sign. The terms of higher order do in fact lead to an additional constant, but it can be computed explicitly (see my own answer). – ComplexYetTrivial Feb 16 '19 at 20:09
1

This is not an answer.

Interested by the answer you gave in the linked post, I was asking myself the same question and did not arrive at anything.

However, using numerical integration, it seems that, close to $k=1$, the result can be approximated by something like $a(1-k)^{-b}$.

Using $100$ data points between $k=0.9900$ and $k=0.9999$, a quick and dirty regression gives $$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ a & 3.34787 & 0.02566 & \{3.29694,3.39880\} \\ b & 0.19918 & 0.00125 & \{0.19671,0.20166\} \end{array}$$ The problem is that, doing the same between $k=0.99900$ and $k=0.99999$ the results are quite different $$\begin{array}{clclclclc} \text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\ a & 4.42245 & 0.03192 & \{4.35909,4.48581\} \\ b & 0.16323 & 0.00087 & \{0.16151,0.16496\} \end{array}$$
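For illustration, here is a minimal sketch of this kind of quick-and-dirty fit with `scipy.optimize.curve_fit`; the exact estimates of $a$ and $b$ depend on the chosen range of $k$, which is exactly the drift seen in the two tables.

```python
# Power-law fit psi(k) ≈ a (1-k)^(-b) on a grid near k = 1 (sketch only).
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk
from scipy.optimize import curve_fit

def psi(k):
    integrand = lambda t: ellipk((k * np.sin(t))**2) / np.sqrt(1 - (k * np.sin(t))**2)
    return quad(integrand, 0, np.pi / 2, limit=200)[0]

ks = np.linspace(0.9900, 0.9999, 100)
vals = np.array([psi(k) for k in ks])
model = lambda k, a, b: a * (1 - k)**(-b)
(a, b), _ = curve_fit(model, ks, vals, p0=(3.0, 0.2))
print(a, b)  # the estimates drift as the fit window moves closer to k = 1
```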

Maybe this could give you some ideas.