
Recall the Riemann–Siegel θ-function $$\theta(z) = \arg\Gamma\left(\frac{1}{4}+\frac{i\,z}{2}\right) - \frac{z\,\log \pi}{2},$$ which describes the complex phase of the Riemann $\zeta$-function on the critical line.

There is a known approximation for its inverse: $$\theta^{-1}(x)=\frac{\pi+8\,x}{4\,W\!\left(\frac{\pi+8\,x}{8\,\pi\,e}\right)}+o(1),$$ where $W(x)$ is the Lambert W-function; the approximation becomes more precise as $x$ grows.
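As a quick numerical sanity check, one can verify that applying $\theta$ to the approximate inverse nearly recovers $x$. This is a sketch using Python's mpmath library (its `siegeltheta` and `lambertw` functions are my choice of tooling, not something referenced in the question):

```python
from mpmath import mp, siegeltheta, lambertw, pi, e

mp.dps = 30  # working precision in decimal digits

def theta_inv_approx(x):
    """Leading-order Lambert-W approximation of the inverse theta function."""
    return (pi + 8*x) / (4 * lambertw((pi + 8*x) / (8*pi*e)))

# The round trip theta(theta_inv_approx(x)) should recover x up to the o(1) error
for x in [10, 100, 1000]:
    t = theta_inv_approx(x)
    print(x, siegeltheta(t))
```

For moderate $x$ the round trip already agrees to several decimal places, consistent with the error term decaying as $x$ grows.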

I wonder if it is possible to improve this approximation by including higher-order terms, so that the remaining error term decays as $o(x^{-1})$, $o(x^{-2})$, etc. Can those higher-order terms be expressed using only elementary functions and $W(x)$?

3 Answers


We start with the asymptotics $$ \theta (t) = \frac{t}{2}\log \frac{t}{{2\pi }} - \frac{t}{2} - \frac{\pi }{8} + \frac{1}{{48t}} + \mathcal{O}\!\left( {\frac{1}{{t^3 }}} \right), $$ i.e., $$ \frac{{\theta (t)}}{\pi } + \frac{1}{8} = \frac{t}{{2\pi }}\log \frac{t}{{2\pi }} - \frac{t}{{2\pi }} + \frac{1}{{48\pi t}} + \mathcal{O}\!\left( {\frac{1}{{t^3 }}} \right). $$ This may be re-written in the form $$ \frac{{\theta (t)}}{\pi } + \frac{1}{8} = \left( {\frac{t}{{2\pi }} + g(t)} \right)\log \left( {\frac{t}{{2\pi }} + g(t)} \right) - \left( {\frac{t}{{2\pi }} + g(t)} \right), $$ where $$ g(t) = \frac{1}{{48\pi t\log \frac{t}{{2\pi }}}} + \mathcal{O}\!\left( {\frac{1}{{t^3 \log t}}} \right). $$ Thus, $$ \frac{1}{e}\left( {\frac{{\theta (t)}}{\pi } + \frac{1}{8}} \right) = \frac{{\frac{t}{{2\pi }} + g(t)}}{e}\log \frac{{\frac{t}{{2\pi }} + g(t)}}{e}, $$ i.e., $$ \frac{{\frac{{\theta (t)}}{\pi } + \frac{1}{8}}}{{W\!\left( {\frac{1}{e}\left( {\frac{{\theta (t)}}{\pi } + \frac{1}{8}} \right)} \right)}} = \frac{t}{{2\pi }} +g(t)= \frac{t}{{2\pi }} + \frac{1}{{48\pi t\log \frac{t}{{2\pi }}}} + \mathcal{O}\!\left( {\frac{1}{{t^3 \log t}}} \right). $$ Iterating this once yields $$ \frac{{\frac{{\theta (t)}}{\pi } + \frac{1}{8}}}{{W\!\left( {\frac{1}{e}\left( {\frac{{\theta (t)}}{\pi } + \frac{1}{8}} \right)} \right)}} = \frac{t}{{2\pi }} + \frac{1}{{96\pi ^2 \left[ {\frac{{\frac{{\theta (t)}}{\pi } + \frac{1}{8}}}{{W\left( {\frac{1}{e}\left( {\frac{{\theta (t)}}{\pi } + \frac{1}{8}} \right)} \right)}}} \right]\log \left[ {\frac{{\frac{{\theta (t)}}{\pi } + \frac{1}{8}}}{{W \left( {\frac{1}{e}\left( {\frac{{\theta (t)}}{\pi } + \frac{1}{8}} \right)} \right)}}} \right]}} \\ + \mathcal{O}\!\left( {\frac{{\log ^2 \theta (t)}}{{\theta ^3 (t)}}} \right). 
$$ By solving for $t$, simplifying and introducing the inverse function, we find $$ \theta ^{ - 1} (t) = \frac{{8t + \pi }}{{4W\!\left( {\frac{{8t + \pi }}{{8\pi e}}} \right)}} - \frac{{W\!\left( {\frac{{8t + \pi }}{{8\pi e}}} \right)}}{{6 (8t + \pi )\left( {\log \left( {\frac{{8t + \pi }}{{8\pi }}} \right) - \log W\!\left( {\frac{{8t + \pi }}{{8\pi e}}} \right)} \right)}} + \mathcal{O}\!\left( {\frac{{\log ^2 t}}{{t^3 }}} \right). $$ For $t=100$ this, without the error term, gives $108.5639773824\ldots$ whereas the exact value is $108.5639773815\ldots$. It is possible to obtain higher terms by using more terms from the asymptotics of $\theta(t)$, obtaining more terms for $g(t)$ and so on. But this leads to elaborate computations once one starts iterating.
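The corrected formula is easy to check numerically. The sketch below (again assuming mpmath; the use of `findroot` to obtain the "exact" inverse is my addition, not part of the answer) reproduces the comparison at $t=100$:

```python
from mpmath import mp, siegeltheta, lambertw, findroot, log, pi, e

mp.dps = 30

def theta_inv_corrected(x):
    """Lambert-W main term plus the first correction term derived above."""
    s = 8*x + pi
    w = lambertw(s / (8*pi*e))
    main = s / (4*w)
    corr = w / (6*s*(log(s/(8*pi)) - log(w)))
    return main - corr

x = 100
approx = theta_inv_corrected(x)
# "Exact" inverse by numerical root-finding, seeded with the approximation
exact = findroot(lambda t: siegeltheta(t) - x, approx)
print(approx)  # ~108.5639773824 per the answer
print(exact)   # ~108.5639773815 per the answer
```

The two values agree to about nine significant figures after the decimal point, matching the error estimate $\mathcal{O}(t^{-3}\log^2 t)$.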

Gary

The modified approximation $$\theta^{-1}(x)=\frac{8 x+\pi }{4 W\left(\frac{8 x+\pi }{8 e \pi }\right)}-\frac 1{8}\left(\frac{8 x+\pi }{4 W\left(\frac{8 x+\pi }{8 e \pi }\right)} \right)^{-3/2}$$ seems to be a slight improvement: $$\left( \begin{array}{cccc} x & \text{first approximation}& \text{second approximation} & \text{exact}\\ 1 & 19.67670118 & 19.67526905 & 19.67484567 \\ 2 & 21.36685143 & 21.36558582 & 21.36525782 \\ 3 & 22.95388274 & 22.95274610 & 22.95248141 \\ 4 & 24.46021637 & 24.45918309 & 24.45896286 \\ 5 & 25.90107407 & 25.90012579 & 25.89993815 \\ 6 & 27.28736031 & 27.28648338 & 27.28632040 \\ 7 & 28.62720976 & 28.62639366 & 28.62624986 \\ 8 & 29.92688609 & 29.92612257 & 29.92599401 \\ 9 & 31.19133680 & 31.19061924 & 31.19050300 \\ 10 & 32.42455244 & 32.42387543 & 32.42376931 \\ 20 & 43.56093755 & 43.56050278 & 43.56044353 \\ 30 & 53.35930910 & 53.35898840 & 53.35894405 \\ 40 & 62.37144533 & 62.37119157 & 62.37115427 \\ 50 & 70.84503043 & 70.84482081 & 70.84478766 \\ 60 & 78.91754646 & 78.91736816 & 78.91733781 \\ 70 & 86.67507580 & 86.67492089 & 86.67489261 \\ 80 & 94.17593155 & 94.17579478 & 94.17576813 \\ 90 & 101.4618807 & 101.4617584 & 101.4617331 \\ 100 & 108.5641121 & 108.5640016 & 108.5639774 \end{array} \right)$$
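The first two columns of the table can be reproduced with a few lines of Python (a sketch assuming mpmath; computing the "exact" column would additionally require inverting $\theta$ numerically, which is omitted here):

```python
from mpmath import mp, lambertw, pi, e

mp.dps = 20

def first_approx(x):
    """Plain Lambert-W approximation of the inverse theta function."""
    return (8*x + pi) / (4 * lambertw((8*x + pi) / (8*e*pi)))

def second_approx(x):
    """Empirical correction: subtract (1/8) * t^(-3/2) from the first approximation."""
    t = first_approx(x)
    return t - 1 / (8 * t**mp.mpf(1.5))

for x in (1, 10, 100):
    print(x, first_approx(x), second_approx(x))
```

The printed values match the corresponding table rows to the displayed precision.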


(This is not an answer, but it is too long for a comment.)

(+1) Interesting discussion and answers! Three years ago I searched for the best constant $C$ in the following approximation of the imaginary part of the $n$-th non-trivial zero (obtained from your initial expression, of course): $$\;t_n\approx 2\pi\,\exp\bigl(W((n-7/8-C)/e)+1\bigr)=2\pi\,\dfrac{n-7/8-C}{W((n-7/8-C)/e)},$$ and conjectured that $C$ had to be exactly $\dfrac 12$ (by computing various moving averages and so on). Moreover, the actual error does not exceed $\pm 1$ for the first $2$ million zeros, as illustrated:

[Plot: errors of the approximation for the first 2 million zeros]

Notice the vertical symmetry around $0$ and the slow decrease of the variance of the error with $n$ (a correction term depending on $n$ appears less useful here than in your question, if it is needed at all, since the mean error remains near $0$ for indices as large as $10^{22}$, using Andrew Odlyzko's tables).

Anyway, I found this a neat illustration of the gentle statistical distribution of the zeros.
We further seem able to locate the $n$-th zero for $n$ as large as we want with an error of less than one (the error for the $10^4$ zeros following the $10^{22}$-th is less than $0.21$).
For $\,n=10^{22}+1\,$, for example, the formula gives
$t_n\approx 1370919909931995308226.770224$, while the actual zero is at $t_n= 1370919909931995308226.680161\cdots$
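For small indices, the conjectured formula with $C=\tfrac12$ can be checked directly against mpmath's `zetazero` (my choice of reference data; Odlyzko's tables would be needed for the very large indices quoted above):

```python
from mpmath import mp, lambertw, zetazero, pi, e

mp.dps = 25

def t_approx(n):
    """Lambert-W estimate of the n-th zero's imaginary part, with C = 1/2."""
    m = n - mp.mpf(11)/8   # n - 7/8 - C, taking C = 1/2
    return 2*pi*m / lambertw(m/e)

# Compare against the actual zeros computed by mpmath
for n in (10, 50, 100):
    exact = zetazero(n).imag
    print(n, t_approx(n), exact, t_approx(n) - exact)
```

Even for these small $n$, the error stays well below $1$, consistent with the plot above.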

Raymond Manzoni
  • That's interesting. I am currently experimenting with $Z\!\left(\theta^{-1}(x)\right)$, a version of the Riemann–Siegel $Z$-function dilated so as to have the same average distance between its zeros, regardless of how large $x$ is. If we think of the offsets of the zeros (relative to idealized equidistant positions) as random values, they appear to obey a normal distribution whose standard deviation I am trying to determine. – Vladimir Reshetnikov Sep 12 '20 at 03:55
  • @VladimirReshetnikov: I updated my answer with 'larger' numerical results: the formula appears to hold near the $10^5,10^6,10^{12},10^{21},10^{22}$-th zeros (with mean values in the $\pm 6\cdot 10^{-5}$ range and slowly decreasing variances of $0.0453, 0.0335, 0.00989, 0.00359, 0.00328$, respectively, for the $10^4$ following zeros). Not sure this will help much, since you are interested in the behaviour of the whole Riemann $Z$-function while I considered only the zeros. Excellent continuation anyway! – Raymond Manzoni Sep 12 '20 at 23:53
  • An approximation of the variance (restricted to the provided values) is $1/(\ln(n)^2/10+\ln(n)-3)$. – Raymond Manzoni Sep 14 '20 at 07:52
  • Could you please clarify what you mean by the variance that depends on $n$? Is it the variance of the sample containing the first $n$ values? Or is it some kind of variance density? I'm not familiar with it. – Vladimir Reshetnikov Sep 17 '20 at 16:24
  • I considered $s$ to be one of the values $10^5,10^6,10^{12},10^{21},10^{22}$ and computed the mean value and ordinary variance of the $10^4$ imaginary parts of the zeros of index $s+1,s+2,\cdots,s+10^4$ (i.e., the sum of (imaginary part minus mean value) squared, divided by $10^4-1$). In the approximation of the variance I took $n=s+10^4/2$. So nothing standard (should such a thing exist for infinite sequences), just an arbitrary choice of mine linked to the size of the tables provided by Odlyzko. – Raymond Manzoni Sep 17 '20 at 22:59
  • (Not really related, but:) The other answers show that the correction terms are much smaller than the reciprocal of the main term and could indeed be ignored for large $s$. – Raymond Manzoni Sep 17 '20 at 22:59
  • (Here "imaginary part" should be read as "the error in the estimate of the imaginary part", i.e. the Lambert-W estimate of $t_n$ minus the actual imaginary part of the $n$-th zero, as in the illustration.) – Raymond Manzoni Sep 18 '20 at 09:35