
Given a series of the type:

$$Q(s,n) = \ln(1)^s + \ln(2)^s + \ln(3)^s + \cdots+ \ln(n)^s $$

How does one evaluate it?

Something I noticed was:

$$Q(1,n) = \ln(1) + \ln(2) + \ln(3)+ \cdots+\ln(n) = \ln(1\cdot 2\cdot 3 \cdots n) = \ln(n!) $$

I also noticed that:

$$\int^{n}_{1}\ln(x)^s\, dx\quad\sim\quad\sum^{n}_{i = 1}\ln(i)^s$$

But I am really interested in an exact formula, or at least one whose difference from the actual value progressively decreases, as opposed to one whose ratio to the actual value merely tends to $1$.
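As a quick numerical illustration of that distinction, here is a minimal Python/mpmath sketch (parameter choices are purely illustrative): the ratio of the integral to the sum tends to $1$, while their difference keeps growing.

    from mpmath import mp, log, quad, fsum

    mp.dps = 20
    s = 2
    for n in (10, 100, 1000):
        total = fsum(log(i)**s for i in range(1, n + 1))   # Q(s, n)
        integral = quad(lambda x: log(x)**s, [1, n])       # int_1^n ln(x)^s dx
        print(n, integral / total, total - integral)       # ratio -> 1, difference grows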

  • I've solved this using the idea of an "indefinite summation" formula and with an ansatz which should mimic how the Bernoulli polynomials were found for sums of like powers. It's too long to write down here, but perhaps you find it instructive to read http://go.helms-net.de/math/divers/BernoulliForLogSums.pdf – Gottfried Helms Jul 06 '13 at 20:04
  • The sum is divergent; regularization is needed, so $(-1)^{s}\zeta^{(s)}(0)$. – Jose Garcia Jul 06 '13 at 20:06
  • I would just like the $n$-th value, not an infinite one – Sidharth Ghoshal Jul 06 '13 at 20:07
  • @frogeyedpeas: Please note that \ln produces the correct upright notation $\ln$, whereas ln just means "the product of two variables named $l$ and $n$". – Zev Chonoles Jul 06 '13 at 20:30
  • oh, I'll keep that in mind next time :) – Sidharth Ghoshal Jul 06 '13 at 20:31
  • Related: http://math.stackexchange.com/questions/207455 asks about the exponential of your Q(s,n)-function (but with no answer), and in http://math.stackexchange.com/questions/279401 I looked at the convergence-radius of the power-series solution for $t_1(x)$ in my answer – Gottfried Helms Jul 07 '13 at 05:50

3 Answers

2

Here is a short excerpt of the discussion to which I've linked in my first comment.

For $s=1$ (which is somehow nearly trivial) we can define the function $$ t_1(x)=-\zeta '(0)-\ln(\Gamma(\exp(x)))$$ which gives for instance $$ t_1(\ln(2)) - t_1(\ln(4)) = \ln(2)+\ln(3) $$ and in general $$ t_1(\ln(a)) - t_1(\ln(b)) = \sum _{k=a}^{b-1} \ln(k) $$ The key is that this artificial-looking version of $t_1(x)$ represents the (formal) infinite series $$ t_1(\ln(x)) = \sum _{k=x}^\infty \ln(k) = \ln(x) + \ln(x+1) + \ldots $$

The coefficients of the power series of $t_1(x)$ can easily be computed, for instance using Pari/GP:

t_1(x) + O(x^8)
%1321 = 0.91893853 + 0.57721566*x - 0.53385920*x^2 - 0.32557879*x^3 
      - 0.12527414*x^4 - 0.033725651*x^5 - 0.0068593536*x^6 - 0.0011726081*x^7
      + O(x^8)   

where the coefficients can be described exactly by compositions of Stirling numbers of the second kind and $\zeta(\cdot)$-values at positive integer arguments, and where moreover $\zeta(1)$ is replaced by the Euler constant $\gamma$ (which, by the way, indicates that something like the Ramanujan-style zeta-regularization is at work here).

The first answer is then $$ Q(1,n) = t_1(\ln(1)) - t_1(\ln(n+1)) $$
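As a sanity check, here is a small Python/mpmath sketch (the helper name t1 is illustrative; it uses the known value $\zeta'(0)=-\tfrac12\ln(2\pi)$) comparing this closed form with the direct sum:

    from mpmath import mp, log, loggamma, exp, fsum, pi

    mp.dps = 25
    zeta_prime_0 = -0.5 * log(2 * pi)             # zeta'(0) = -ln(2*pi)/2

    def t1(x):
        # t_1(x) = -zeta'(0) - ln Gamma(exp(x))
        return -zeta_prime_0 - loggamma(exp(x))

    n = 10
    closed_form = t1(log(1)) - t1(log(n + 1))     # proposed formula for Q(1, n)
    direct_sum = fsum(log(k) for k in range(1, n + 1))
    print(closed_form, direct_sum)                # both ~ 15.1044... = ln(10!)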


For $s=2$ $$ t_2(\ln(x)) = \sum_{k=x}^{\infty} \ln(k)^2 $$

such that analogously $$ Q(2,n) = t_2(\ln(1)) - t_2(\ln(n+1)) = \sum_{k=1}^n \ln(k)^2 $$

I don't have an exact representation for this power series in terms of zetas and Euler-$\gamma$; here is an approximation, where the constant term is $\zeta''(0)$ (the generation scheme allows arbitrary precision, depending on the size of the matrices involved):

t_2(x) = -2.006356455908585 - 0.1456316909673534*x + 0.6345699670487060*x^2 
        - 0.3868588771980126*x^3 - 0.2407113770463571*x^4 - 0.09916202534448954*x^5
        - 0.02847303775799426*x^6 - 0.005923792714748150*x^7 - 0.0009884022636657563*x^8
        - 0.0001620035246035620*x^9 - 0.00002414672567100699*x^10 
        - 0.000001216451660450317*x^11 + 0.0000001409130267444575*x^12
        - 0.0000001437552825860954*x^13 - 0.00000003587528042872192*x^14 
        + 0.00000001359539422026695*x^15 + O(x^16)

and $$Q(2,n) = - \sum_{k=1}^\infty c_k \cdot \ln(n+1)^k $$ where $c_k$ are the coefficients of the power series and the index $k$ begins at $1$ such that the constant term is skipped.

The numbers and the generation-scheme (even for the higher $s$) can be taken from the discussion to which I've linked in my first comment.
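To illustrate how the truncated series above can be used, here is a short Python sketch (the function name Q2_series is illustrative; the coefficients are those quoted above, truncated at $x^{15}$) that evaluates $Q(2,n)$ for a small $n$:

    import math

    # coefficients c_1 .. c_15 of the t_2 series quoted above (constant term skipped)
    c = [-0.1456316909673534, 0.6345699670487060, -0.3868588771980126,
         -0.2407113770463571, -0.09916202534448954, -0.02847303775799426,
         -0.005923792714748150, -0.0009884022636657563, -0.0001620035246035620,
         -0.00002414672567100699, -0.000001216451660450317, 0.0000001409130267444575,
         -0.0000001437552825860954, -0.00000003587528042872192, 0.00000001359539422026695]

    def Q2_series(n):
        # Q(2, n) = -sum_{k>=1} c_k * ln(n+1)^k, truncated at k = 15
        x = math.log(n + 1)
        return -sum(ck * x**(k + 1) for k, ck in enumerate(c))

    n = 3
    exact = sum(math.log(k)**2 for k in range(1, n + 1))
    print(Q2_series(n), exact)   # both approximately 1.6874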

1

I hope you don't mind if I use $\log$ as you use $\ln$ (it is more standard in analytic number theory).

Since $\log$ is monotonic increasing: $$\int_{1}^{n}\log^s x \ dx < Q(s,n) < \int_{1}^{n+1}\log^s x \ dx$$ (using left/right endpoints and $Q(s,1)=0$).

This shows that $Q(s,n)=\int_1^n\log^s x \ dx+O(\log^s n)$, which is already a pretty good asymptotic formula; the error term is massively dwarfed by the main term (even though it is not a decreasing function, as you have asked for - that requirement might be too strict).

Evaluating integrals, we obtain: $$Q(s,n)=\int_{1}^{n}\log^s x \ dx+O(\log^s n)=n\left(\sum_{k=0}^{s}(-1)^{s-k}\frac{s!}{k!}\log^k n\right)+O(\log^s n)\qquad (*)$$ when $s$ is an integer, and an analogous expression in terms of the incomplete gamma function otherwise.

Suffice it to say, as far as you are probably concerned, $$Q(s,n)=n\log^s(n)+O\left[n\log^{s-1}(n)\right]$$ The full version is $(*)$.
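Here is a brief Python sketch (the helper names Q_star and Q_exact are illustrative) of the main term of $(*)$ for integer $s$, compared against the direct sum; it reproduces the values quoted in the comments below.

    import math

    def Q_star(s, n):
        # main term of (*): n * sum_{k=0..s} (-1)^(s-k) * s!/k! * log(n)^k
        return n * sum((-1)**(s - k) * math.factorial(s) / math.factorial(k)
                       * math.log(n)**k for k in range(s + 1))

    def Q_exact(s, n):
        return sum(math.log(i)**s for i in range(1, n + 1))

    print(Q_star(1, 5), Q_exact(1, 5))       # ~3.047  vs ~4.787
    print(Q_star(2, 115), Q_exact(2, 115))   # ~1727.8 vs ~1737.1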

pre-kidney
  • Can this be transformed to a series expansion of some type? for example $Q(s,n) = a_1n\ln^s(n) + a_2n\ln^{s-1}(n)... a_sn\ln(n) + a_{s+1}n + a_{s+2} + a_{s+3}n/\ln(n) + a_{s+4}n/\ln(n)^2...$ – Sidharth Ghoshal Jul 06 '13 at 23:05
  • Your latex got a little messed up so I'm not sure what you mean, but I think what you are getting at is covered by the integral I evaluated. Terms like $n\log^k(n)$ with $k<0$ don't appear, because they are dwarfed by the $O(\log^s n)$ term already. And the $k\geq 0$ terms appear explicitly in the integral evaluation $(*)$. – pre-kidney Jul 06 '13 at 23:08
  • I corrected the latex. The idea was that I had a series that went from $\log^k(n)$ to $\log^0(n)$ and then all the negative powers of $\log(n)$, each multiplied by some coefficient and the value of $n$ – Sidharth Ghoshal Jul 06 '13 at 23:10
  • I edited my answer to clarify your question. See $(*)$, which gives the series expansion you desire. – pre-kidney Jul 06 '13 at 23:10
  • Just gave it a try. Printing the approximate value followed by the explicit summation (in parentheses) I got this: $$ Q(1,5)= 3.04718956217 \quad ( 4.78749174278) \\ Q(1,15)=25.6207530165 \quad (27.8992713838) \\ Q(2,15)=58.7615323423 \quad ( 60.4520312916) \\ Q(2,115)= 1727.81941430 \quad ( 1737.07712499) $$ – Gottfried Helms Nov 07 '14 at 13:12
  • Seems like a pretty decent approximation. Also did you try plugging in terms from the series I labelled (*)? – pre-kidney Dec 15 '14 at 07:13
1

You may use the Euler-Maclaurin formula to evaluate $Q(s,x)=\sum_{n\leq x}(\log n)^s$. That would give $x(\log x)^s-s\int_{1}^{x}(\log t)^{s-1} \ dt+O((\log x)^s)$. It should be a fine approximation for your work!
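For concreteness, a rough Python sketch of the first few Euler-Maclaurin correction terms for $f(x)=\log(x)^s$, assuming the closed form of the integral for positive integer $s$ (the helper names are illustrative):

    import math

    def em_approx(s, n):
        # Euler-Maclaurin: integral + (f(1)+f(n))/2 + (f'(n)-f'(1))/12, for f(x) = log(x)^s
        f = lambda x: math.log(x)**s
        fp = lambda x: s * math.log(x)**(s - 1) / x
        # antiderivative of log(x)^s for integer s >= 1
        F = lambda x: x * sum((-1)**(s - k) * math.factorial(s) / math.factorial(k)
                              * math.log(x)**k for k in range(s + 1))
        return (F(n) - F(1)) + (f(1) + f(n)) / 2 + (fp(n) - fp(1)) / 12

    s, n = 2, 50
    exact = sum(math.log(k)**s for k in range(1, n + 1))
    print(em_approx(s, n), exact)   # the approximation should be close to the exact sum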

Kunnysan
  • You should expand your answer. Using just the first 3 terms of the Euler-Maclaurin formula, you can show that the error is less than $\frac{s}{12n}(\log(n))^{s-1}$ for $s\gt 1$. Unfortunately the error bound doesn't start going down until $n \gt e^{s-1}$. – Apprentice Queue Jul 07 '13 at 18:16
  • @Apprentice Queue: That's a good observation. But, are you sure that the error term will be $O((\log n)^{s-1}/n)$, I mean does that go to zero? – Kunnysan Jul 07 '13 at 18:23
  • Yes, it does approach zero, because if you calculate the error term of Euler-Maclaurin, you will see it is bounded by the first derivative (i.e. $s(\log(n))^{s-1}/n$). – Apprentice Queue Jul 07 '13 at 22:10