
This post is a continuation of Generalization of the Bernoulli polynomials (in relation to the index). There, the definition of the Bernoulli polynomial $B_t(x)$ with $|x|<1$ is extended through $B_t(x+1)=B_t(x)+t x^{t-1}$.

Two equivalent definitions for $B_t(x)$ with $|x|<1$:

$$B_t(x):=-t\zeta(1-t,x)$$ or

\begin{align*} B_t(x+1):=&-\frac{2\Gamma(1+t)}{(2\pi)^t}\cos \left( \frac{\pi t}{2} \right) \sum_{k=0}^\infty (-1)^k \frac{(2\pi x)^{2k}}{(2k)!}\zeta(t-2k) \\ &-\frac{2\Gamma(1+t)}{(2\pi)^t}\sin \left( \frac{\pi t}{2} \right) \sum_{k=0}^\infty (-1)^k \frac{(2\pi x)^{2k+1}}{(2k+1)!}\zeta(t-1-2k) \end{align*}

with $-t\in\mathbb{R}\setminus\mathbb{N}$.

With https://www.researchgate.net/publication/238803313_Bernoulli_numbers_and_polynomials_of_arbitrary_complex_indices , page 86, Theorem 5, using equation (11) with the lower limit $1$ instead of $0$ ($k=1$ instead of $k=0$), the formula for the sum of fractional powers is $$S_x(t):=\sum\limits_{k=1}^x k^t =\frac{B_{t+1}(x+1)-B_{t+1}(1)}{t+1}$$ with $x\in\mathbb{N}_0$ and $t\in\mathbb{R}_0^+$ (in general $t$ can be complex, but I don't need that possibility here).
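For integer $x$ this formula can be spot-checked numerically. Using the first definition $B_t(x)=-t\zeta(1-t,x)$, the right-hand side becomes $\zeta(-t,1)-\zeta(-t,x+1)$; a short sketch with Python's mpmath (the helper name `S` is mine):

```python
from mpmath import mp, zeta, mpf

mp.dps = 30

def S(x, t):
    # S_x(t) = (B_{t+1}(x+1) - B_{t+1}(1))/(t+1), rewritten with
    # B_t(x) = -t*zeta(1-t, x) as zeta(-t, 1) - zeta(-t, x+1)
    return zeta(-t, 1) - zeta(-t, x + 1)

# for integer x the formula must reproduce the ordinary power sum,
# also for a fractional exponent t
x, t = 7, mpf('2.5')
direct = sum(mpf(k) ** t for k in range(1, x + 1))
print(S(x, t) - direct)  # ~ 0 to working precision
```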

The right-hand side can be differentiated with respect to $x$, so one can write $$\frac{\partial}{\partial x} S_x(t)=B_t(x+1)$$ Differentiating with respect to $t$ instead, and using the definition $M_x(t):=\prod\limits_{k=1}^x k^{k^t}$, one gets $$\ln M_x(t)=\frac{\partial}{\partial t}S_x(t)=\frac{\partial}{\partial t}\frac{B_{t+1}(x+1)-B_{t+1}(1)}{t+1}$$
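Both derivative relations can be checked numerically for integer $x$, again via the Hurwitz-zeta form of $S_x(t)$ (a sketch with mpmath, not part of the derivation itself):

```python
from mpmath import mp, zeta, log, mpf, diff

mp.dps = 25

def S(x, t):
    # Hurwitz-zeta form of (B_{t+1}(x+1) - B_{t+1}(1))/(t+1)
    return zeta(-t, 1) - zeta(-t, x + 1)

x, t = 6, mpf('1.5')

# d/dt S_x(t) should equal ln M_x(t) = sum_{k<=x} k^t * ln k  (integer x)
lnM = sum(mpf(k) ** t * log(k) for k in range(1, x + 1))
err_t = diff(lambda s: S(x, s), t) - lnM

# d/dx S_x(t) should equal B_t(x+1) = -t * zeta(1-t, x+1)
Bt = -t * zeta(1 - t, x + 1)
err_x = diff(lambda y: S(y, t), mpf(x)) - Bt

print(err_t, err_x)  # both ~ 0
```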

Combining the two (and exchanging the order of differentiation, which is permissible here) one gets $$\frac{\partial}{\partial t}B_t(x+1)=\frac{\partial}{\partial x}\ln M_x(t)$$
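The identity can be tested numerically at a non-integer point by computing both sides from the Hurwitz-zeta form of $S_x(t)$ (a sketch with mpmath; the symmetry of mixed partials does the work):

```python
from mpmath import mp, zeta, diff, mpf

mp.dps = 25

def S(x, t):
    # S_x(t) = zeta(-t,1) - zeta(-t,x+1); the two sides of the identity
    # are its mixed second partials, taken in opposite order
    return zeta(-t, 1) - zeta(-t, x + 1)

x, t = mpf('2.3'), mpf('1.7')            # arbitrary non-integer test point
lhs = diff(S, (x, t), (1, 1))            # d/dx d/dt S_x(t) = d/dx ln M_x(t)
rhs = diff(lambda s: -s * zeta(1 - s, x + 1), t)  # d/dt B_t(x+1)
print(lhs - rhs)  # ~ 0
```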

Note:

Perhaps this equation becomes a bit clearer if one looks at $$\frac{\partial}{\partial t}\Delta B_t(x)=\frac{\partial}{\partial x}\Delta \ln M_{x-1}(t)$$ with $\Delta B_t(x):=B_t(x+1)-B_t(x)=tx^{t-1}$ and $\Delta \ln M_x(t):=\ln M_{x+1}(t)-\ln M_x(t)=(x+1)^t\ln(x+1)$.

The problem now is:

I need a formula for $\ln M_x(t)$ or $M_x(t)$ that is independent of $B_t(x)$ (otherwise it's a trivial identity), where $x$ and $t$ are variable. It could be a series of (more or less known) functions of $x$ (or perhaps of $x$ and $t$) which reduces to a finite sum/term for $t\in\mathbb{N}$ - similar to $B_t(x)$.

Alternative: to prove that the two definitions above for $B_t(x)$ are indeed equivalent (a link to the literature is enough).

Note:

The Euler-Maclaurin formula can perhaps give a formula for $\ln M_x(t)$. Does someone know a link where this is computed?

Addition:

Maybe https://www.sciencedirect.com/science/article/pii/S0377042798001927 , page 198, equation (21), can help. (I will see.)


An application example with $\ln M_x(1)$:

The Fourier series of $B_t(x)$ is $$ \Re \left( \sum\limits_{k=1}^{\infty}{\frac{e^{i2\pi kx}}{\left( ik \right) ^t}} \right) =\frac{\left( 2\pi \right) ^t}{2\Gamma \left( 1+t \right)}B_t\left( x \right) $$ for $|x|<1$ and $t>0$.

It is known that $\frac{d}{dx}\ln M_x(1)=-\ln\sqrt{2\pi}+\frac{1}{2}+x+\ln\Gamma(1+x)$.
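This closed form can be confirmed numerically by computing $\frac{d}{dx}\ln M_x(1)$ as the mixed partial $\partial_x\partial_t S_x(t)|_{t=1}$ of the Hurwitz-zeta form of $S_x(t)$ (a sketch with mpmath):

```python
from mpmath import mp, zeta, log, loggamma, pi, sqrt, diff, mpf

mp.dps = 25

def S(x, t):
    # Hurwitz-zeta form of S_x(t); d/dx ln M_x(1) is its mixed
    # partial d/dx d/dt, evaluated at t = 1
    return zeta(-t, 1) - zeta(-t, x + 1)

x = mpf('0.6')
lhs = diff(S, (x, mpf(1)), (1, 1))
rhs = -log(sqrt(2 * pi)) + mpf(1) / 2 + x + loggamma(1 + x)
print(lhs - rhs)  # ~ 0
```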

Using
$$\frac{\partial}{\partial t}B_t(x)|_{t=1}=\frac{d}{dx}\ln M_{x-1}(1)$$ and differentiating the Fourier series of $B_t(x)$ (above) with respect to $t$, taking into account $(\ln\Gamma(1+t))'|_{t=1}=1-\gamma$, one gets

$$\sum_{k=1}^{\infty}{\frac{\ln k}{k}}\sin \left( 2\pi kx \right) =\frac{\pi}{2}\left( \ln \frac{\Gamma \left( x \right)}{\Gamma \left( 1-x \right)}-\left( 1-2x \right) \left( \gamma +\ln \left( 2\pi \right) \right) \right) $$

which can be seen in http://reader.digitale-sammlungen.de/en/fs1/object/display/bsb10525489_00011.html?zoom=1.0 (top of page 4) and in http://arxiv.org/pdf/1309.3824.pdf (page 30, formula (65)).
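At $x=\frac14$ the series reduces to an alternating sum over odd $k$, which makes a quick numerical check easy (plain Python floats; averaging consecutive partial sums accelerates the alternating series):

```python
from math import log, pi, lgamma

# At x = 1/4, sin(2*pi*k/4) vanishes for even k and equals (-1)^j for
# k = 2j+1, so the left-hand side is sum_{j>=0} (-1)^j ln(2j+1)/(2j+1).
N = 200000
partial = 0.0
for j in range(N):
    partial += (-1) ** j * log(2 * j + 1) / (2 * j + 1)
# average the last two partial sums: for an alternating series with
# eventually decreasing terms this halves the truncation error
prev = partial - (-1) ** (N - 1) * log(2 * N - 1) / (2 * N - 1)
lhs = 0.5 * (partial + prev)

gamma_euler = 0.5772156649015329
rhs = (pi / 2) * (lgamma(0.25) - lgamma(0.75)
                  - 0.5 * (gamma_euler + log(2 * pi)))
print(abs(lhs - rhs))  # small (~1e-5)
```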


A second application example where I use $\frac{d}{dx}\ln M_x(m+1)|_{x=0}$ with $m\in\mathbb{N}_0$:

Adamchik computed $$\zeta'(-m)=\frac{B_{m+1}H_m}{m+1}-\ln A_m$$ where $B_n$ are the Bernoulli numbers, $H_n$ the harmonic numbers, and $A_n$ the generalized Glaisher-Kinkelin constants. See e.g. http://www.sciencedirect.com/science/article/pii/S0377042798001927 (last page, equation (24)).
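For $m=0,1$ this can be confirmed numerically with mpmath ($A_1$ is Glaisher's constant, available as `mpmath.glaisher`; for $m=0$ I use the usual normalization $A_0=\sqrt{2\pi}$):

```python
from mpmath import mp, zeta, log, glaisher, pi, mpf, diff

mp.dps = 25

# m = 1: zeta'(-1) = B_2*H_1/2 - ln A_1 = 1/12 - ln A  (A = Glaisher)
err1 = diff(zeta, mpf(-1)) - (mpf(1) / 12 - log(glaisher))

# m = 0: H_0 = 0, and with the normalization A_0 = sqrt(2*pi)
# the formula gives zeta'(0) = -ln sqrt(2*pi)
err0 = diff(zeta, mpf(0)) + log(2 * pi) / 2

print(err1, err0)  # both ~ 0
```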

Solving equation (5.4) on page 36 of
https://www.fernuni-hagen.de/analysis/docs/bachelorarbeit_aschauer.pdf
for $\ln M_x(k)$, using $\frac{B_{k+1}(x+1+w_2)- B_{k+1}(1+w_2)}{k+1}$ instead of $\sum\limits_{j=1}^x (w_2+j)^k$, and setting $(w_1;w_2):=(1;0)$, results in

\begin{align*} \ln M_x(m)&=H_m\frac{B_{m+1}(x+1)- B_{m+1}(1)}{m+1}+\ln Q_m(x)+ \\ &+\sum_{k=0}^{m-1}\binom{m}{k}(-x)^{m-k}\sum_{v=0}^k \binom{k}{v}x^{k-v}(\ln A_v -\ln Q_v(x)) \end{align*}

The definition of $Q_m(x)$ is (4.2) on page 13; it is something like a modified multiple Gamma function. $\frac{d}{dx}\ln M_x(m)$ can be computed by applying the differentiation rule (4.4) to the equation above.

Now, with $B_t(1)=-t\zeta(1-t)$ and $\frac{d}{dt}B_t(1)|_{t=m}=\frac{d}{dx}\ln M_x(m)|_{x=0}$, one gets the equation chain $$\frac{B_{m+1}(1)}{m+1}+(m+1)\zeta'(-m)= -\zeta(-m)+(m+1)\zeta'(-m)=(-t\zeta(1-t))'|_{t=m+1}$$ $$=\frac{d}{dt}B_t(1)|_{t=m+1}=\frac{d}{dx}\ln M_x(m+1)|_{x=0}=H_{m+1}B_{m+1}(1)-(m+1)\ln A_m$$ Solving this for $\zeta'(-m)$ and taking into account that $H_{m+1}-\frac{1}{m+1}=H_m$ and $H_m B_{m+1}(1)=H_m B_{m+1}$ for $m\in\mathbb{N}_0$, one gets Adamchik's result.
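The two ends of this chain can be compared numerically, e.g. for $m=1$ (a sketch with mpmath; here $H_2=\frac32$, $B_2(1)=\frac16$, and $A_1$ is Glaisher's constant):

```python
from mpmath import mp, zeta, log, glaisher, mpf, diff

mp.dps = 25

# endpoints of the chain for m = 1:
# d/dt [-t*zeta(1-t)] at t = 2   vs   H_2*B_2(1) - 2*ln A_1
lhs = diff(lambda s: -s * zeta(1 - s), mpf(2))
rhs = mpf(3) / 2 * mpf(1) / 6 - 2 * log(glaisher)
print(lhs - rhs)  # ~ 0
```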


Simplest solution for proving $\displaystyle \frac{\partial}{\partial t}B_t(x+1)=\frac{\partial}{\partial x}\ln M_x(t)$

by using the 2nd development of G Cab with the Hurwitz Zeta function:

$\zeta(a,b):= \sum\limits_{k=0}^\infty (b+k)^{-a}$

$\displaystyle \frac{B_{t+1}(x+1)-B_{t+1}(1)}{t+1}=S_x(t)=\zeta(-t,1)-\zeta(-t,x+1)$ and therefore
$\displaystyle \frac{\partial}{\partial t}S_x(t)=\ln M_x(t)=\sum\limits_{k=0}^\infty (k+1)^t\ln(k+1) - \sum\limits_{k=0}^\infty (k+x+1)^t\ln (k+x+1)$

$\displaystyle \frac{\partial}{\partial x}S_x(t)= B_t(x+1)=-t\zeta(1-t,x+1)\,$ (as mentioned by gammatester, first link above)

\begin{align*} \frac{\partial}{\partial t}B_t(x+1)&= \frac{\partial}{\partial t}\frac{\partial}{\partial x}(\zeta(-t,1)-\zeta(-t,x+1)) \\ &=\frac{\partial}{\partial x}\frac{\partial}{\partial t}(\zeta(-t,1)-\zeta(-t,x+1))=\frac{\partial}{\partial x}\ln M_x(t) \end{align*}

Note:

Substituting other formulas for $B_t(x)$ and $\ln M_x(t)$ leads to non-trivial equations (as shown in the application examples above).

user90369
  • When it comes to defining things you should be a bit more careful. For example $\sum_{k=1}^x$ carries no meaning as a sum when $x$ is not an integer. I would instead define $M_x$ by the $\partial_t B_{t+1}(x+1) + \ldots$ formula and then note that when $x$ is an integer we have this product formula. Same for $S_x(t)$. – Winther Sep 06 '16 at 12:38
  • ...I would also explicitly add the generalized $B_t(x)$ formula you assume in the linked question to the question here as it's relevant here. – Winther Sep 06 '16 at 12:41
  • I know (of course) that this sum has no meaning by itself. :-) But it becomes a meaning by using the Bernoulli-polynomials. I will add the definition, thanks for the hint. The sense of the application examples is to make the text understandable. – user90369 Sep 06 '16 at 12:53
  • You say that but then you also say “The problem now is I need a formula for $M_x$ independent of $B_t(x)$ (otherwise it's a trivial identity)”. As I read it this seems to be asking us to try to extract meaning from the product formulation of $M_x$ independent of the definition given by the formula involving $B_t(x)$. I fear this question is ill-defined; at least I don't understand what you are really asking for. – Winther Sep 06 '16 at 13:31
  • In the ideal case an answer is a formula for $\ln M_x(t)$ which contains $\ln M_x(m)$, mentioned in the second application example. But any other formula which leads to a nontrivial relation between $B_t(x)$ and $\ln M_x(t)$ is welcome - I don't expect only one solution; of course different points of view exist. E.g. G Cab gave me another idea. - Nobody is begged, nobody is forced to understand what I mean. But perhaps someone is interested in such "exotic" problems. And I am glad and thankful about any idea. – user90369 Sep 06 '16 at 14:04

2 Answers


This is just a "what if?" consideration, not an answer, but I guess it might be of some help for your purpose. So, alongside the analysis you are conducting, you may consider this alternative development for $S_x(t)$.

  • 1st development
$$S_x (t) = \sum\limits_{k = 1}^x k^t = \sum\nolimits_{k = 1}^{x + 1} k^t = \frac{B_{t+1}(x+1) - B_{t+1}(1)}{t+1} \tag{1}$$
$$= \sum\nolimits_{k = 0}^{x} (k+1)^t = \sum\nolimits_{k = 0}^{x} \sum_{0 \le j} \binom{t}{j} k^j = \sum_{0 \le j} \binom{t}{j} \sum\nolimits_{k = 0}^{x} k^j = \sum_{0 \le j} \binom{t}{j} \frac{B_{j+1}(x) - B_{j+1}(0)}{j+1} \tag{2}$$
$$= \sum\nolimits_{k = 0}^{x} \sum_{0 \le l \le j} \binom{t}{j} \left\{ {j \atop l} \right\} k^{\underline{l}} = \sum_{0 \le l \le j} \binom{t}{j} \left\{ {j \atop l} \right\} \frac{x^{\underline{l+1}}}{l+1} = \sum_{0 \le l \le j} \frac{t^{\underline{j}}}{j!} \left\{ {j \atop l} \right\} \frac{x^{\underline{l+1}}}{l+1} \tag{3}$$
where the symbol $\sum\nolimits_{k = 1}^{x + 1}$ indicates the indefinite sum, computed between the indicated bounds, and the curly brackets denote the Stirling numbers of the second kind.
    For the purpose of differentiating with respect to $t$ and $x$, you may replace the falling factorials $t^{\underline{j}}$ and $x^{\underline{l+1}}$ with the corresponding Stirling developments in powers $t^n$ and $x^m$, or with their expressions through the Gamma function.
  • 2nd development
    You can also write $S_x(t)$ in terms of the Hurwitz zeta function:
$$S_x (t) = \sum\limits_{k = 1}^x k^t = \sum\nolimits_{k = 1}^{x + 1} k^t = \sum\nolimits_{k = 1}^{\infty} k^t - \sum\nolimits_{k = x + 1}^{\infty} k^t = \sum\nolimits_{k = 0}^{\infty} (k+1)^t - \sum\nolimits_{j = 0}^{\infty} (j+x+1)^t = \zeta(-t, 1) - \zeta(-t, x+1) \tag{4}$$
  • Note concerning the handling of sums and products with non-integer bounds
    First let's note that
$$S_x (t) = \sum\limits_{k = 1}^x k^t \quad \Rightarrow \quad (x+1)^t = S_{x+1}(t) - S_x(t) = \left( S_{x+1}(t) + c(x+1) \right) - \left( S_x(t) + c(x) \right)$$
and
$$M_x (t) = \prod\limits_{k = 1}^x k^{k^t} = \prod\nolimits_{k = 1}^{x + 1} k^{k^t} = \prod\nolimits_{k = 0}^{x} (k+1)^{(k+1)^t} \quad \Rightarrow \quad (x+1)^{(x+1)^t} = \frac{M_{x+1}(t)}{M_x(t)} = \frac{c(x+1)\,M_{x+1}(t)}{c(x)\,M_x(t)}$$
with $c(x)$ any periodic function with period $1$. Then let's take for example the starting base of your development,
$$S_x (t) = \frac{B_{t+1}(x+1) - B_{t+1}(1)}{t+1} = \sum\limits_{k = 1}^x k^t = \sum\nolimits_{k = 1}^{x + 1} k^t = \sum\nolimits_{k = 0}^{\infty} (k+1)^t - \sum\nolimits_{k = 0}^{\infty} (k+x+1)^t$$
from which we get the following two different "definitions" for $B_t(x+1)$. Differentiating in $x$:
$$\frac{\partial}{\partial x} S_x (t) = \frac{1}{t+1} \frac{\partial}{\partial x} B_{t+1}(x+1) = B_t(x+1) = -t \sum\nolimits_{k = 0}^{\infty} (k+x+1)^{t-1}$$
while starting from the difference of the two regularized sums:
$$\frac{B_{t+1}(x+1)}{t+1} = f(t+1) - \sum\nolimits_{k = 0}^{\infty} (k+x+1)^t \quad \Rightarrow \quad B_t(x+1) = t\,f(t) - t \sum\nolimits_{k = 0}^{\infty} (k+x+1)^{t-1}$$
where
  • the derivative in $x$ is first taken by extending the known integer-index property to real index, and then by differentiating the expression of $S_x(t)$ as a difference of the two sums;
  • $f(t)$ can be any function of $t$; in particular it could be $B_t(1)$, which in turn can be taken as $-t\,\zeta(1-t)$, as in many papers concerning the extension of the Bernoulli polynomials.

Thus it is evident that such mathematical entities must be handled with great care, especially when taking derivatives.
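As a supplement, formula (3) of the first development can be spot-checked for integer $t$ with exact integer arithmetic (a sketch in plain Python; the helper names are mine):

```python
from math import comb, factorial

def stirling2(j, l):
    # Stirling numbers of the second kind via the explicit formula
    return sum((-1) ** (l - i) * comb(l, i) * i ** j for i in range(l + 1)) // factorial(l)

def falling(x, n):
    # falling factorial x^(underline n) = x (x-1) ... (x-n+1)
    p = 1
    for i in range(n):
        p *= x - i
    return p

def S_formula(x, t):
    # formula (3); for integer t the binomial cuts the sum off at j = t,
    # and x^(underline l+1) is divisible by l+1, so everything stays integral
    total = 0
    for j in range(t + 1):
        for l in range(j + 1):
            total += comb(t, j) * stirling2(j, l) * (falling(x, l + 1) // (l + 1))
    return total

x, t = 9, 4
print(S_formula(x, t), sum(k ** t for k in range(1, x + 1)))  # equal
```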

G Cab
  • Thank you for your efforts! - I need a term of functions for $\ln M_x(t)$ so that we get new insights by combining it with the Bernoulli polynomials. I think merely developing series doesn't help to prove the equation between $B_t(x+1)$ and $\ln M_x(t)$, but who knows, perhaps it works (I will try). Anyway, you have helped me to make my question at least a bit clearer, thanks. – user90369 Sep 03 '16 at 14:04
  • I have added a comment at the end of my post above. The problem is that I don't get a closed form for the special case $t\in\mathbb{N}$. – user90369 Sep 06 '16 at 11:54
  • @user90369 a) I checked formulas 1 & 2 & 3 with my old CAS for 0 <= integer $t$ and $x$ <= 10 and re-verified the various steps: could not find errors b) added to my answer the derivation in terms of Hurwitz zeta c) cannot follow you in the use of $M_x(t)$ etc., specially because I am missing the final scope: is your aim to find a formulation for the derivative of $S_x(t)$ vs. $x$ and $t$ , or to find the properties of $M_x(t)$, or else ? – G Cab Sep 06 '16 at 14:39
  • Thank you for your addition! (1) For the proof of $\frac{\partial}{\partial t}B_t(x+1)=\frac{\partial}{\partial x}\ln M_x(t)$ I need an expression for $\ln M_x(t)$ with variable $x$ and $t$. (2) The ideal case of an answer is a formula for $\ln M_x(t)$ which contains $\ln M_x(m)$ mentioned in the second application example. But of course any idea is welcome. Perhaps the term with Hurwitz zeta function will work, I will see. :-) – user90369 Sep 06 '16 at 14:55
  • Thanks again! You fill with $c(x)$ the space between two natural numbers, that's o.k. . But please define $f(t)$ and $g(t)$ before the first use and it's better to bring the lines in an order ( I cannot see the equations clearly because the lines of the left and right part of the equations are not on the same height), I am a bit confused. :-) – user90369 Sep 07 '16 at 14:53
  • @user90369, ok I try and put the last table in a more understandable way and adding some clarifications. Hope it is clear now that starting from the expressions for $S(x)$ in the top row, by derivating in $x$ we get the results in first bottom column, while starting from the difference in $B$s and in sums, we get the bottom second column, which completes the result with a $f(t)$ , that in fact should be added upon taking the partial derivative in $x$. – G Cab Sep 07 '16 at 18:48
  • Thank you very much for your patient elaboration of your ideas. - Note: Choosing $f(t)$ any function irritates me. :-) And I still don't know what I have done wrong in the last part of my post using your idea. It will need some days till I answer/understand. – user90369 Sep 08 '16 at 11:13
  • @user90369 it's a pity you get irritated, that's however what is to be paid for entering in the otherwise very interesting but very delicate subject of infinite sum and products. Just take like that: you are going to "interpolate" bernoulli polynomials for the index, same as Gamma "interpolates" $n!$. But Gamma is not the only function interpolating them (gamma + period 1 function does as well). To define Gamma took much efforts from our venerable ancestors , and needs to add conditions concerning its "smoothness" (concavity) to select it out of the many. Not speaking of the definition of psi – G Cab Sep 09 '16 at 19:15
  • Perhaps "irritated" was the wrong word - I meant "confused" (English is not my mother language). I am sorry. I am still trying to understand and to prove and to find my mistake. I think the Hurwitz zeta function is (also) a good idea, as you have written. I will write more in a few days. Thank you for your ideas and efforts! – user90369 Sep 09 '16 at 19:21
  • After checking the ideas and possibilities: The relation of $S_x(t)$ with $B_t(x)$ doesn't answer the question (I turn in a circle) - I need two independent series/expressions. I think my problem can be solved by a proof of what I had written at the beginning of the post: two definitions for $B_t(x)$ - that they are indeed equivalent. One is the Hurwitz zeta function (the variable is in the denominator of each member of the series), the other one is a Taylor series (the variable is in the numerator of each member of the series). – user90369 Sep 11 '16 at 15:38
  • I have used the 2nd development for the proof (last part of my post above). I think this works - thanks again for your ideas. :-) – user90369 Sep 13 '16 at 08:49
  • Glad to have been of help ! – G Cab Sep 13 '16 at 11:16

Applying the Euler-Maclaurin Sum Formula

The Euler-Maclaurin Sum Formula can be applied to $k^t$ to get the approximation $$ \sum_{k=1}^nk^t=\zeta(-t)+\frac1{t+1}n^{t+1}+\frac12n^t+\frac{t}{12}n^{t-1}-\frac{t^3-3t^2+2t}{720}n^{t-3}+O\!\left(n^{t-5}\right) $$ When $t\lt-1$, this describes how the series for $\zeta(-t)$ converges.
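A quick numerical check of this expansion for a non-integer exponent, comparing against the direct partial sum (a sketch with mpmath):

```python
from mpmath import mp, zeta, mpf

mp.dps = 30

t, n = mpf('2.5'), 200
direct = sum(mpf(k) ** t for k in range(1, n + 1))
nn = mpf(n)
# Euler-Maclaurin approximation up to the n^(t-3) term
approx = (zeta(-t) + nn ** (t + 1) / (t + 1) + nn ** t / 2
          + t / 12 * nn ** (t - 1)
          - (t ** 3 - 3 * t ** 2 + 2 * t) / 720 * nn ** (t - 3))
err = abs(direct - approx)
print(err)  # should be O(n^(t-5)), i.e. very small for n = 200
```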


Possible Extension to Non-Integral Summation Limits

Consider $$ \begin{align} \lim_{\delta\to0}\frac1\delta\left(\sum_{k=1}^{n+\delta}k^t\color{#C00000}{-\sum_{k=1}^{m+\delta}k^t}\right) &=\lim_{\delta\to0}\frac1\delta\sum_{k=m+1+\delta}^{n+\delta}k^t\\ &=\lim_{\delta\to0}\frac1\delta\sum_{k=m+1}^n(k+\delta)^t\\ &=t\sum_{k=m+1}^nk^{t-1} \end{align} $$ Thus, if we give a meaning to taking a derivative with respect to the upper limit of summation, it would give $$ \frac{\mathrm{d}}{\mathrm{d}n}\sum_{k=1}^nk^t=t\sum_{k=1}^nk^{t-1}\color{#C00000}{+C} $$ where $C$ is related to the behavior near $m=0$.
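This limit computation is easy to confirm numerically with a small $\delta$ (a sketch with mpmath):

```python
from mpmath import mp, mpf

mp.dps = 30

t, m, n = mpf('1.5'), 3, 9
delta = mpf('1e-10')
# difference quotient of the shifted sum vs. the claimed limit
lhs = sum(((k + delta) ** t - mpf(k) ** t) / delta for k in range(m + 1, n + 1))
rhs = t * sum(mpf(k) ** (t - 1) for k in range(m + 1, n + 1))
err = abs(lhs - rhs)
print(err)  # O(delta)
```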

robjohn
  • Thank you for answering the question in my note! But with this formula I still have the problem of differentiating with respect to $n$. – user90369 Sep 03 '16 at 16:10
  • @user90369: I have added a section about a possible extension that might allow differentiation with respect to a limit of summation. – robjohn Sep 03 '16 at 16:38
  • An interesting addition, thanks. But differentiating $\sum\limits_{k=1}^n k^t$ with respect to $n$ is not the problem, because this can be done with the help of the Bernoulli polynomials. What I meant was the differentiation of the Euler-Maclaurin sum formula for this sum (in your case: the $O$ term). – user90369 Sep 03 '16 at 19:39
  • I have added a comment at the end of my post above. – user90369 Sep 06 '16 at 11:57
  • Certainly, you can take the derivative of the polynomial whose values at the integers agree with the sum, but taking the derivative of a sum with respect to the upper limit is not quite the same. The upper and lower limits of a sum generally differ by an integer. It is not immediately apparent how to define $\sum\limits_{k=1}^{5.2}k^2$. – robjohn Sep 06 '16 at 12:07
  • But e.g. $\sum\limits_{k=1}^{5.2}k^2$ is exactly what I need. :-) I extended my post to clarify my motivation and background. – user90369 Sep 06 '16 at 12:20
  • $\sum\limits_{k=1}^{5.2}k^2$ has no clear meaning by itself. We can find a polynomial that agrees with the sum when the upper limit is an integer, and apply that to $5.2$; however, there are many functions that agree with the sum when the upper limit is an integer. Why should the polynomial be chosen? – robjohn Sep 06 '16 at 12:35
  • I know (of course) that this sum has no meaning by itself. :-) But it becomes a meaning by using the Bernoulli-polynomials. The definition for that can be seen in of http://math.stackexchange.com/questions/1911400/generalization-of-the-bernoulli-polynomials-in-relation-to-the-index (mentioned at the beginning of the post). Maybe my text is not good enough but I think with the application examples can be seen very well what I mean. I am glad about every comment which helps me to make the text understandable. – user90369 Sep 06 '16 at 12:48
  • Winther gave me the advice to add the definition of $B_t(x)$ here. Maybe it's better now to understand, why the extension of $\sum\limits_{k=1}^x k^t$ related on $x$ works. – user90369 Sep 06 '16 at 13:08
  • Hi. I went through this for the sum of $k^{5.5}$, so your $t = 5.5$. After the $t n^{t-1}/12$ term, the next exponents should be $n^{t-3}$ and $n^{t-5}$ in the $O$ term. – Will Jagy Aug 21 '21 at 18:49
  • @WillJagy: thanks. Fixed. I don't know how that happened; I have Mathematica code that gives me the right answer, but I must have miscopied. – robjohn Aug 21 '21 at 20:38