
For every $x>0$, consider the sequence $(x_n)$ defined by $x_0=x$ and, for every $n\geqslant0$, $$x_{n+1} = \sqrt{x_n + \frac12}$$ Then $x_n\to x_*=\frac{1+\sqrt3}2\ne0$ hence the sequence $$S_n(x)=\sum_{k=1}^n(-1)^kx_k^4$$ diverges. Consider its Cesàro sums, defined by $$C_n(x)=\frac1n\sum_{k=1}^nS_k(x)$$

The question is to prove that $C_n(x)\to C(x)=\frac18-x^2$.

One can probably use telescoping and/or differentiation techniques.

As safety checks, note that the proposed limit $C(x)$ satisfies the relations $$C(x_*)=-\frac12x_*^4\qquad C\left(x^2-\frac12\right)=-x^4-C(x)$$
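As a further safety check, the claimed limit can be tested numerically. A quick Python sketch (the function and variable names are mine; the tolerance is only indicative):

```python
import math

def cesaro_mean(x, n):
    """C_n(x) = (1/n) * sum_{k=1}^n S_k(x), where
    S_k(x) = sum_{j=1}^k (-1)^j x_j^4 and x_{j+1} = sqrt(x_j + 1/2), x_0 = x."""
    xj = x
    partial = 0.0   # running partial sum S_k
    total = 0.0     # running sum of the S_k
    for k in range(1, n + 1):
        xj = math.sqrt(xj + 0.5)          # x_k
        partial += (-1) ** k * xj ** 4
        total += partial
    return total / n

for x in (0.5, 1.0, 2.0):
    print(x, cesaro_mean(x, 200000), 0.125 - x * x)
```

The two printed columns agree to roughly $10^{-5}$ at $n=200000$; the convergence of $C_n$ is of order $1/n$.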

Did
mick
  • As $n\to\infty$, $f(x,n)\to{1+\sqrt3\over2}$ regardless of the initial value, so the terms in your infinite sum do not tend to 0, so the sum can't converge at all. In particular, it can't converge to ${1\over8}-x^2$. – Ivan Neretin Dec 13 '16 at 12:41
  • I don't understand. Is $f$ a function of one variable or of several variables? This is terrible notation. – Dan Rust Dec 13 '16 at 14:15
  • Even a series with alternate signs does not converge unless its general term converges to zero, hence the LHS of the identity you ask to prove, is undefined. Please explain. – Did Dec 14 '16 at 08:23
  • @Did see the edit i made. Cesàro summation is used. – mick Dec 16 '16 at 22:46
  • Sorry but what does that even mean? The Cesaro sum of a series $\sum(-1)^nx_n$ such that $x_n\to\left(\frac12(1+\sqrt3)\right)^4$ is... $\frac18-x^2$? Would you be trying to "save" your question by throwing words like "Cesaro summation" in the air, just in case? – Did Dec 17 '16 at 01:07
  • No Did. Check the claim on a computer if you want. You only considered the "tail" in your argument. – mick Dec 17 '16 at 02:04
  • Try x = 1, as an example. – mick Dec 17 '16 at 02:06
  • "Check the claim on a computer if you want" No need to, your claim is clearly bogus. (If ever you find the time, please explain why the case x=1 should prove anything at all... And next time, please use @.) – Did Dec 21 '16 at 18:44
  • @Did: I checked it on the computer. It does exactly what mick says it does. You have to take the whole series into account, not just the tail behavior as $n \rightarrow \infty$. – The_Sympathizer Jan 03 '17 at 06:11
  • Edited the post to use proper notation, cleaner wording -- hopefully that makes the question "clearer" and maybe could be re-opened. I think it's a reasonable question. For reference: $f^n(x)$ was denoted $f(x, n)$ before but ran into conflict with notation $f(x)$ as function of single variable. – The_Sympathizer Jan 03 '17 at 06:36
  • Voted to re-open the question. – The_Sympathizer Jan 03 '17 at 06:37
  • @mike4ty4 "Voted to re-open the question" If you want to be serious about it, you should produce the computation that "does exactly what mick says it does". – Did Jan 03 '17 at 08:52
  • @Did: In a comment? – The_Sympathizer Jan 03 '17 at 12:48
  • @Did: OK, I decide on a screenshot. Here: https://drive.google.com/file/d/0ByblVJgMQHggX3NHQ05ORnE0RHM/view?usp=sharing – The_Sympathizer Jan 03 '17 at 12:59
  • @Did: That tests 300 partial sums on both $x = 0.5$ and $x = 1.0$, and compares against $1/8 - x^2$. Agreement is on the order of $10^{-3}$. I could get better agreement, but it would require a table cache due to the slow convergence and nested loops. I posted this version because the code is more transparent. – The_Sympathizer Jan 03 '17 at 13:08
  • @OP Does the revised version of the question, added before the line, correspond to what you actually wanted to ask, after the comments showed the original version was inaccurate? – Did Jan 03 '17 at 13:49
  • @mike4ty4 Thanks for your work, which did a lot to clear things up. – Did Jan 03 '17 at 14:01
  • @Did: You're welcome. – The_Sympathizer Jan 03 '17 at 14:09
  • As a generalization we could multisect similar sums like $a(x) = x + f^2(x) + f^4(x) + f^6(x) + ...$ and then take the solution as $( a(x) - a(f(x)))/2$. However, we get an issue with divergent sums again... but there is probably a way around it. Maybe this makes a good follow-up in a new question. Then again, nobody seems interested in these kinds of sums and I consider removing my account and deleting my questions. – mick Jan 08 '17 at 17:17
  • Hmm generalized Cesàro summation might do the trick. – mick Jan 08 '17 at 17:23
  • @mick Are you not interested in cleaning the mess on this page? This would involve: 1. Erasing the part of the question below the line (aka the version so unclear it took several users and countless comments to understand what you wanted). 2. Unaccepting the currently accepted answer, which does not address the question. 3. Posting a solution, following mike4ty4's precise hint. (On further thought, I proceeded with point 1. myself. Remain points 2. and 3., that are all yours...) – Did Jan 23 '17 at 12:38

3 Answers


Define $A(x)=-f(x)^4 + f(x,2)^4 - f(x,3)^4 + f(x,4)^4 - \cdots$, where $f(x,n)$ denotes the $n$-th iterate of $f$ and $f(x)=f(x,1)$.

The derivative of $A$ with respect to $x$ is

$B(x)=- 4f(x)^3f'(x) + 4f(x,2)^3f'(x,2) - 4f(x,3)^3 f'(x,3)+...$

It can be proved that

$f'(x,n)=f'(x,n-1)\times \frac{1}{2f(x,n)}$
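This recursion is just the chain rule applied to $f(x,n)=\sqrt{f(x,n-1)+\frac12}$. It can be spot-checked numerically against a finite difference (a Python sketch; the helper names are mine):

```python
import math

def f_iter(x, n):
    """n-th iterate f(x, n) of f(t) = sqrt(t + 1/2), with f(x, 0) = x."""
    for _ in range(n):
        x = math.sqrt(x + 0.5)
    return x

def f_prime(x, n):
    """f'(x, n) via the recursion f'(x, n) = f'(x, n-1) / (2 f(x, n))."""
    d = 1.0                          # f'(x, 0) = 1
    for k in range(1, n + 1):
        d /= 2.0 * f_iter(x, k)
    return d

# compare against a central finite difference
x, n, h = 1.0, 5, 1e-6
fd = (f_iter(x + h, n) - f_iter(x - h, n)) / (2 * h)
print(f_prime(x, n), fd)
```

The two values agree to many digits, as expected from the chain rule.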

Also, the first term in $B(x)$ can be rewritten as

$- 4f(x)^3f'(x)=-2f^2(x,1)f'(x,0)$

Using the two new relations, we get

$B(x)=-2(f^2(x,1)f'(x,0)-f^2(x,2)f'(x,1)+f^2(x,3)f'(x,2)-...)$

Writing the same thing in a compact way:

$B(x)=-2\sum_{n=0}^\infty f^2(x,2n+1)f'(x,2n)+2\sum_{n=1}^\infty f^2(x,2n)f'(x,2n-1)$

Then another relation helps. Substituting $f^2(x,n+1)=f(x,n)+\frac{1}{2}$ in the last equation gives

$B(x)=-\sum_{n=0}^\infty f'(x,2n-1)-\sum_{n=0}^\infty f'(x,2n)+\sum_{n=1}^\infty f'(x,2n-2)+\sum_{n=1}^\infty f'(x,2n-1)$

Now pair the first and the last summations together, and the two summations in the middle together; everything telescopes, leaving

$B(x)=-f'(x,-1)$

The iterative relation gives

$f(x,0)^2=f(x,-1)+\frac{1}{2}$

Therefore

$f(x,-1)=x^2-\frac{1}{2}$

Now, having $f'(x,-1)=2x$

$A'(x)=B(x)=-2x$

Therefore

$A(x)=-x^2+c$

Now, to find the constant $c$, you may notice a trick (it is the second safety-check relation from the question, at $x=0$):

$A(0)=-A(-\frac{1}{2})$

which is

$c=-c+\frac{1}{4}$

Finally

$c=\frac{1}{8}$
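The endpoint trick can be checked against the Cesàro means themselves, assuming the Cesàro interpretation of $A$ (a Python sketch; the helper name `cesaro_A` is mine):

```python
import math

def cesaro_A(x, n):
    """Cesàro value of A(x) = -x_1^4 + x_2^4 - ..., with x_{k+1} = sqrt(x_k + 1/2)."""
    xk, partial, total = x, 0.0, 0.0
    for k in range(1, n + 1):
        xk = math.sqrt(xk + 0.5)
        partial += (-1) ** k * xk ** 4
        total += partial
    return total / n

n = 200000
a0 = cesaro_A(0.0, n)    # should be close to c = 1/8
ah = cesaro_A(-0.5, n)   # should be close to -1/4 + c = -1/8
print(a0, ah)
```

Numerically $A(0)\approx\frac18$ and $A(0)+A(-\frac12)\approx0$, consistent with $A(0)=-A(-\frac12)$. (The value $x=-\frac12$ is admissible here because $f(-\frac12)=0$.)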

Med
  • A little bit more detail would be nice. +1 already. I wonder if the idea of a derivative came naturally to you, or whether you followed my OP. Also, to all readers, I wonder if a proof based on telescoping alone is possible. At the moment it is also not clear to me how to find or prove *similar* statements (I have similar identities as in the OP, with the bisquares replaced by other powers). – mick Dec 13 '16 at 22:31
  • Well, I just prevented a long solution. I tried to use your hints, as I thought you are told to use derivatives, but you were not sure how to use it. – Med Dec 14 '16 at 00:32
  • I assumed derivatives were helpful because of the polynomial. I was not told so. This is not a homework thing. – mick Dec 14 '16 at 00:38
  • @mick, Then you have a good intuition in math. May I ask where did you get the problem from? – Med Dec 14 '16 at 08:21
  • The series $A(x)$ and $B(x)$ considered in this answer diverge for every $x\geqslant0$. Thus the conclusions that are reached in this answer by manipulating $A(x)$ and $B(x)$ do not hold. Give me a divergent series and I can "prove" that $1=0$ pretty easily... (As an aside, I find it slightly annoying that this point, essentially made by @IvanNeretin quite clearly in a comment posted 11 minutes after the question was asked, has been virtually ignored since then.) – Did Dec 15 '16 at 09:13
  • It reminds me of Ramanujan summation, which assigns a number to the series for each $x$, although it is not well-defined. – Med Dec 15 '16 at 18:36
  • The LHS uses Cesàro summation. Sorry, I was sloppy. I edited the OP. I assume all issues are now resolved for the question. Not sure about the answer tbh (sorry again). Thanks guys! – mick Dec 16 '16 at 22:33
  • The derivative of a Cesàro sum = the Cesàro sum of the derivative [*]. I also accept the telescoping trick within the context of Cesàro sums. Hence I will accept the answer. I had to think about these details, so forgive my hesitation. If anyone disagrees with [*] or the telescoping, let me know. But imho the proof is correct. As an interesting remark, this proof gives insights into generalisations, although info about generalisations is still welcome (a nonalgebraic one!). My mentor is upset about the closure and finds it remarkable that nobody else is. There is even a delete vote!?! – mick Dec 28 '16 at 21:52
  • @mick -- the problem is you need to be clear and precise with your notation. Maths is not tolerant of ambiguity! :) – The_Sympathizer Jan 03 '17 at 06:06
  • @mick I discover only now your comment from Dec 28 '16 at 21:52. It is very strange. How can an answer not even touching on the Cesaro aspect of the revised version of the question (the only one that makes sense), provide a proof? – Did Jan 03 '17 at 14:00
  • @Did: I just found a real proof. By expanding the iterates via $f^n(x) = f(f^{n-1}(x))$ you can expand the 4th powers into sums of $f^{n-2}$ and $f^{n-1}$ and then you get telescoping cancellation and the final (Cesaro) sum can be obtained with no differentiation at all. Telescoping series was the right answer. – The_Sympathizer Jan 03 '17 at 14:16
  • @mike4ty4 You mean, using that $x_n^4=(x_{n-1}+\frac12)^2=x_{n-1}^2+x_{n-1}+\frac14=x_{n-2}+x_{n-1}+\frac34$ for every $n\geqslant2$? Indeed, this leads to a fully rigorous solution... – Did Jan 03 '17 at 14:50
  • @Did: Yep, exactly. – The_Sympathizer Jan 04 '17 at 01:42
  • @mike4ty4: mick has pointed me in the tetrationforum to this question and to an older question of mine in the forum. As I discussed exactly this problem of a closed form for this alternating sum, it would be helpful if a proof could be appended either here or there, for completeness of these questions/threads... (forum: http://math.eretrandre.org/tetrationforum/showthread.php?tid=245&pid=5896&highlight=Easter#pid5896 ) – Gottfried Helms Feb 05 '17 at 19:53
  • @Gottfried Helms: Yes, I can post the proof now -- the question had been closed and I had to wait to get it reopened. – The_Sympathizer Feb 06 '17 at 01:38

Here is the complete proof. I had wanted to post this before but had to wait for all close voters to rescind their votes after trying to save this question.

First off, I think the key step in the proof is made a little clearer by using the iterated-function notation instead of the subscript notation now in use, so we first rephrase the question as follows: Let $f(x) = \sqrt{x + \frac{1}{2}}$. Then $x_n = f^n(x)$ and we want to prove that

$$\sum_{k=1}^{\infty} (-1)^k x_k^4 \stackrel{\mathrm{pseudo}}{=} \frac{1}{8} - x^2$$

where the left is divergent but reinterpreted using Cesaro summability (hence the pseudo-equality), i.e. that

$$\lim_{n \rightarrow \infty} C_n(x) = \frac{1}{8} - x^2$$

with $$C_n(x) = \frac{1}{n} \sum_{k=1}^{n} \left(\sum_{l=1}^{k} (-1)^l x_l^4\right)$$.

To do this, first replace $x_l$ by the corresponding iterated functions $f^l(x)$:

$$ \begin{align} C_n(x) &= \frac{1}{n} \sum_{k=1}^{n} \left(\sum_{l=1}^{k} (-1)^l [f^l(x)]^4\right)\\ &= \frac{1}{n} \sum_{k=1}^{n} \left(-[f^1(x)]^4 + [f^2(x)]^4 - \cdots + (-1)^k [f^k(x)]^4\right)\\ \end{align} $$

Now we note that, by definition of iterated functions, $f^l(x) = f(f^{l-1}(x))$, and this allows us to expand out $[f^l(x)]^4$ as follows (for $l \geqslant 2$):

$$ \begin{align} [f^l(x)]^4 &= [f(f^{l-1}(x))]^4 \\ &= \left(\sqrt{f^{l-1}(x) + \frac{1}{2}}\right)^4 \\ &= \left(f^{l-1}(x) + \frac{1}{2}\right)^2 \\ &= [f^{l-1}(x)]^2 + f^{l-1}(x) + \frac{1}{4} \\ &= [f(f^{l-2}(x))]^2 + f^{l-1}(x) + \frac{1}{4} \\ &= \left(\sqrt{f^{l-2}(x) + \frac{1}{2}}\right)^2 + f^{l-1}(x) + \frac{1}{4} \\ &= f^{l-2}(x) + \frac{1}{2} + f^{l-1}(x) + \frac{1}{4} \\ &= f^{l-2}(x) + f^{l-1}(x) + \frac{3}{4} \end{align} $$
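A quick numerical spot-check of this expansion (a Python sketch; `f_iter` is my name for the iterate):

```python
import math

def f_iter(x, n):
    """n-th iterate of f(t) = sqrt(t + 1/2)."""
    for _ in range(n):
        x = math.sqrt(x + 0.5)
    return x

x = 1.0
for l in range(2, 7):
    lhs = f_iter(x, l) ** 4
    rhs = f_iter(x, l - 2) + f_iter(x, l - 1) + 0.75
    print(l, lhs, rhs)   # the two columns should agree
```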

We now plug this back into the previous series. (For $l=1$ the formula still holds with $f^{-1}(x) = x^2 - \frac{1}{2}$, since $x^2 - \frac{1}{2} + x + \frac{3}{4} = \left(x + \frac{1}{2}\right)^2 = [f(x)]^4$.) This gives

$$ \begin{align} C_n(x) &= \frac{1}{n} \sum_{k=1}^{n} \left(-[f^1(x)]^4 + [f^2(x)]^4 - \cdots + (-1)^k [f^k(x)]^4\right)\\ &= \frac{1}{n} \sum_{k=1}^{n} \left(-\left[f^{-1}(x) + f^0(x) + \tfrac{3}{4}\right] + \left[f^0(x) + f^1(x) + \tfrac{3}{4}\right] - \left[f^1(x) + f^2(x) + \tfrac{3}{4}\right] + \cdots + (-1)^k \left[f^{k-2}(x) + f^{k-1}(x) + \tfrac{3}{4}\right]\right) \end{align} $$

In the inner sum, each iterate $f^l(x)$ with $0 \leqslant l \leqslant k-2$ appears exactly twice, with opposite signs, so everything cancels except the two boundary terms $-f^{-1}(x)$ and $(-1)^k f^{k-1}(x)$, plus the alternating constants. Thus the inner sum telescopes down to

$$\begin{align} C_n(x) &= \frac{1}{n} \sum_{k=1}^{n} \left(-f^{-1}(x) + (-1)^k f^{k-1}(x) - [k \mod 2 = 1]\, \frac{3}{4}\right) \end{align} $$

(the bracket is Iverson notation). Now of course $f^{-1}(x) = x^2 - \frac{1}{2}$, so

$$ C_n(x) = \frac{1}{n} \sum_{k=1}^{n} \left(-x^2 + \frac{1}{2} + (-1)^k f^{k-1}(x) - [k \mod 2 = 1]\, \frac{3}{4}\right) $$

This then splits into four separate Cesàro means

$$ C_n(x) = \left(\frac{1}{n} \sum_{k=1}^{n} (-x^2)\right) + \left(\frac{1}{n} \sum_{k=1}^{n} \frac{1}{2}\right) + \left(\frac{1}{n} \sum_{k=1}^{n} (-1)^k f^{k-1}(x)\right) - \left(\frac{1}{n} \sum_{k=1}^{n} [k \mod 2 = 1]\, \frac{3}{4}\right) $$

Now we take the limit as $n \rightarrow \infty$. The first two means are means of identical numbers, so they equal $-x^2$ and $\frac{1}{2}$ respectively. The third mean vanishes: writing $f^{k-1}(x) = x_* + e_k$ with $e_k \rightarrow 0$, we get $\left|\frac{1}{n}\sum_{k=1}^{n} (-1)^k f^{k-1}(x)\right| \leqslant \frac{x_*}{n} + \frac{1}{n}\sum_{k=1}^{n} |e_k| \rightarrow 0$. The last one is like the mean of Grandi's series, which has limit $\frac{1}{2}$, so it tends to $\frac{3}{8}$. Thus the final Cesàro sum is

$$ \begin{align} C(x) &= \lim_{n \rightarrow \infty} C_n(x)\\ &= -x^2 + \frac{1}{2} - \frac{3}{8}\\ &= \frac{1}{8} - x^2 \end{align} $$.

or

$$\sum_{k=1}^{\infty} (-1)^k x_k^4 \stackrel{\mathrm{pseudo}}{=} \frac{1}{8} - x^2$$

QED.
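As an extra sanity check, the telescoped form of the partial sums, including the boundary term $(-1)^k f^{k-1}(x)$ whose Cesàro mean vanishes in the limit, agrees with the direct computation for every finite $k$ (a Python sketch; the helper names are mine):

```python
import math

def partial_sum(x, k):
    """S_k(x) = sum_{j=1}^k (-1)^j x_j^4, computed directly."""
    xj, s = x, 0.0
    for j in range(1, k + 1):
        xj = math.sqrt(xj + 0.5)
        s += (-1) ** j * xj ** 4
    return s

def telescoped(x, k):
    """Closed form after telescoping:
    S_k = -(x^2 - 1/2) + (-1)^k x_{k-1} - (3/4 if k is odd else 0)."""
    xk1 = x
    for _ in range(k - 1):
        xk1 = math.sqrt(xk1 + 0.5)   # x_{k-1}
    return -(x * x - 0.5) + (-1) ** k * xk1 - (0.75 if k % 2 else 0.0)

x = 1.3
for k in (1, 2, 5, 10):
    print(k, partial_sum(x, k), telescoped(x, k))
```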

  • Very nice. Meanwhile it seems that the two-way-infinite series is Cesaro-/Euler-summable as well, and my numerical evaluations for $x\lt t$, where $t$ is the fixpoint $f(t)=t$, always give zero... – Gottfried Helms Feb 06 '17 at 06:26
  • @Gottfried Helms: Yes. I also wonder if Med's proof with differentiation can be made to work too -- I think one just needs to be clear in the notation that one is not working with normal summation of the divergent series, and add a few details to prove that the grouping of terms and differentiation termwise do not affect the Cesaro means. Since the conclusion is valid, I think it can be done. (I think if the Cesaro means converge uniformly, it should work and the interchange of sum and derivative is justified. But I haven't tested that convergence.) – The_Sympathizer Feb 06 '17 at 07:21

Update 18.10.2017: In the previous version of my answer I had assumed the series begins at $x_0$, instead of at $-x_1$ as it was defined in the question. I have now adapted my results and matrices accordingly.


Giving examples in response to questions in the comments.
I used Pari/GP; it has a Cesàro-sum-compatible procedure, sumalt(). With this I got the following table:

f(x)=sqrt(x+1/2)
p=4                      \\ used as exponent for the series
list=vectorv(20)         \\ takes 20 solutions   
\\ --------- put the following commands of the loop in a "bracketed block"
{ for(q=1,20,  x0=x1=q-1;  
       su=sumalt(k=1,
              (-1)^k * ( x1 = f(x1))^p); \\ for iteration (k>0)
       list[q] = [ x0 , su , 1.0/8 - x0^2  ]
    ); }
  \\ ---------
  printp(Mat(list)); 

...

  x0   !  sum by "sumalt" !  1/8 - x0^2
       !   (Cesàro sum)   !  equals "sum"
  -----+------------------+------------------
     0   0.125000000000   0.125000000000
     1  -0.875000000000  -0.875000000000
     2   -3.87500000000   -3.87500000000
     3   -8.87500000000   -8.87500000000
     4   -15.8750000000   -15.8750000000
     5   -24.8750000000   -24.8750000000
     6   -35.8750000000   -35.8750000000
     7   -48.8750000000   -48.8750000000
     8   -63.8750000000   -63.8750000000

The differences are only in the digits near the software epsilon.
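The same table can be reproduced without Pari/GP: repeated pairwise averaging of the partial sums (binomial/Euler-style acceleration, a crude stand-in for what sumalt does much more cleverly) already regularizes the series. A Python sketch (the names and term count are mine):

```python
import math

def f(x):
    return math.sqrt(x + 0.5)

def euler_sum(x0, terms=60):
    """Regularize  -x_1^4 + x_2^4 - x_3^4 + ...  by repeated pairwise
    averaging of the partial sums (a crude stand-in for Pari/GP's sumalt)."""
    xk, s, partials = x0, 0.0, []
    for k in range(1, terms + 1):
        xk = f(xk)                     # x_k
        s += (-1) ** k * xk ** 4       # partial sum S_k
        partials.append(s)
    while len(partials) > 1:           # average neighbours until one value remains
        partials = [(a + b) / 2 for a, b in zip(partials, partials[1:])]
    return partials[0]

for x0 in range(5):
    print(x0, euler_sum(x0), 0.125 - x0 * x0)
```

The fully averaged value matches $\frac18 - x_0^2$ to many digits, in line with the table above.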


[Update] There are some more examples of interesting sums, see the following answer in another MSE-thread

There is also an empirical/heuristic/conjectured result using my earlier-discussed matrix approach, which employs Carleman matrices and, for the summation of the alternating iteration series (when this is possible at all), the Neumann series of that Carleman matrix.
My Pari/GP-tools give me the following

pc_f=polcoeffs(f(x),32)~   \\ put the (leading) coefficients of the
                           \\ powerseries-expansion of f(x) into a vector.
                           \\ Because of finite size we can always -at best- 
                           \\ get approximative solutions  
\\ show the first few coefficients:(actually I work with first 32 coeffs.)
[0.707106781187, 0.707106781187, -0.353553390593, 0.353553390593,        
 -0.441941738242, ... ]

F = mkCarlemanmatrix(pc_f)   \\ user defined procedure 

\\The top-left of F is

  1   0.707106781187  0.500000000000   0.353553390593  0.250000000000   0.176776695297
  0   0.707106781187   1.00000000000    1.06066017178   1.00000000000   0.883883476483
  0  -0.353553390593               0   0.530330085890   1.00000000000    1.32582521472
  0   0.353553390593               0  -0.176776695297               0   0.441941738242
  0  -0.441941738242               0   0.132582521472               0  -0.110485434560
  0   0.618718433538               0  -0.132582521472               0  0.0662912607362

With a vector $\small V(x)=[1,x,x^2,x^3,...]$ we can then evaluate the series by doing the dotproduct

  V(x) * F = [1, f(x), f(x)^2, f(x)^3, f(x)^4 , ... ]
            = V(f(x)) 

Here the series which occur at odd indices ($\small f(x),f(x)^3,f(x)^5$) have convergence radius $\small \rho\leqslant1$.
Note that in the fifth column (the column with index 4) we get the 4th power of the function, $\small f(x)^4$, which is of course what shall interest us below for our problem.
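The construction of $F$ can be sketched without my Pari/GP tools (plain Python; all names are mine, and only the defining property $V(x)\,F = V(f(x))$ is checked here, not any summation claim):

```python
import math

N = 32  # truncation order

# power-series coefficients of f(x) = sqrt(x + 1/2) = (1/sqrt(2)) * (1 + 2x)^(1/2),
# via the binomial-series recursion binom(1/2, k) = binom(1/2, k-1) * (1/2 - (k-1)) / k
c = [1 / math.sqrt(2)]
for k in range(1, N):
    c.append(c[-1] * (0.5 - (k - 1)) / k * 2)

def mul(a, b):
    """Truncated Cauchy product of two power series (lists of N coefficients)."""
    out = [0.0] * N
    for i, ai in enumerate(a):
        for j in range(N - i):
            out[i + j] += ai * b[j]
    return out

# column m of the Carleman matrix F holds the coefficients of f(x)^m
cols = [[1.0] + [0.0] * (N - 1)]
for m in range(1, N):
    cols.append(mul(cols[-1], c))

x = 0.1
# (V(x) * F)_m = sum_n x^n * F[n][m] = f(x)^m
w = [sum(x ** n * col[n] for n in range(N)) for col in cols]
print(w[:4])
```

With $x=0.1$ (well inside the radius of convergence), `w[:4]` matches $[1, f(x), f(x)^2, f(x)^3]$ to near machine precision, and the leading coefficients of `c` reproduce the values listed above.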


Now we try to get a meaningful matrix $A$ via the Neumann series, which interprets the alternating geometric series for matrices (as far as this is possible at all). Because your series begins at $x_1$ and not at $x_0$, I omit the first term, which would be the identity matrix $F^0$:

$$\small A = -F + F^2 - F^3 + \cdots = -F\,(I + F)^{-1} $$

That matrix inversion must be done with much care; in cases like this I use an LDU decomposition, exact(!) inversion of the components, and construction of the inverse from the product of the inverses of the L, D, U factors, applying Euler summation when convergence in the dot products is bad. But even the naive inversion procedure in Pari/GP gives a seemingly meaningful approximate solution:

  A = -F * (matid(32) + F)^-1 

 \\ the top-left of A
     -1/2   -1.3477094     1.0977094    -0.57076948  0.12500000    0.11653321
        0   -1.0048085  0.0048084741  -0.0024627863           0  0.0013720680
        0    9.3831760    -9.3831760      4.4965232  -1.0000000   -0.94229215
        0  -0.36697128    0.36697128     -1.1877337           0    0.10440858
        0   -5.4154218     5.4154218     -3.2908264           0    0.49966112
        0   -3.9408033     3.9408033     -2.0077436           0    0.11044718
        0    2.4551648    -2.4551648      1.4308645           0    -1.1883271
        0    1.5382960    -1.5382960     0.88397114           0   -0.55144081
        0   -3.4084780     3.4084780     -1.6751387           0    0.95787115
        0   -2.8478962     2.8478962     -1.3608389           0    0.80723231

The dot products with a $V(x)$-vector should then give approximately $$ V(x) \cdot_{\mathfrak E} A = [ a_0 , a_1(x), a_2(x) , a_3(x), a_4(x), ... ] $$ The ${\mathfrak E}$ means that Euler summation may be involved wherever the occurring summations are not convergent, or converge only badly.

But we have two interesting columns here: they have only finitely many nonzero entries, so the alternating series for those exponents can be evaluated as finite polynomials:

  column     represents                           gives value
                                                  by evaluation
  -----------------------------------------------------------------
   a_0    =  - x_1^0 + x_2^0 - ... + ...      =  -1/2
   a_4(x) =  - x_1^4 + x_2^4 - ... + ...      =   1/8 -1*x^2   

The value for $\small a_0$ agrees with the Euler-summation evaluation of $\small -1+1-1+\ldots$, and the values for $\small a_4(x)$ agree with the results obtained by series summation of $\small -x_1^4 + x_2^4 - \cdots$


Such heuristics via Neumann series of Carleman matrices appear in many places in tetration and iteration-series work; however, I have never had the time and energy to sit down and write formal proofs of the properties concluded this way...