
Consider the sequence $a(n)$ defined by $a(n) = \sin(nx) \exp(-nt)$, where $n = 0, 1, 2, 3, 4, \ldots$, the parameter $x$ is a real number, and the parameter $t$ is a positive real number. It is clear that the sequence $a(n)$ converges to $0$ as $n \rightarrow \infty$.

We define $S(k)$ as the partial sum of the sequence $a(n)$ from $n = 0$ to $k$. It is straightforward to show that $S(k)$, in the limit $k \rightarrow \infty$, converges to the following value $S$:

$$S = \frac{\sin(x)}{2 - 2\cos(x) + \left(e^{t/2} - e^{-t/2}\right)^2}$$
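(One way to see this, sketched here for completeness, is the standard geometric-series argument.) Writing $\sin(nx)\,e^{-nt} = \operatorname{Im}\, e^{n(ix - t)}$ and summing the geometric series gives

$$\sum_{n=0}^{\infty} e^{n(ix-t)} = \frac{1}{1 - e^{-t}e^{ix}}, \qquad S = \operatorname{Im}\,\frac{1}{1 - e^{-t}e^{ix}} = \frac{e^{-t}\sin(x)}{1 - 2e^{-t}\cos(x) + e^{-2t}};$$

multiplying numerator and denominator by $e^{t}$ and using $e^t + e^{-t} - 2 = (e^{t/2} - e^{-t/2})^2$ yields the expression above.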

We see that $S = 0$ for $x = 0$. Now consider $S$ in the limit $t \rightarrow 0^+$. In general, except for the case $x = 0$, the result becomes:

$$S = \tfrac{1}{2}\cot(x/2) \qquad x \neq 0$$

It is worth noting that for $t > 0$ there is not actually a hyperbolic divergence at $x = 0$. The correct limit of $S$ when both $x$ and $t$ are small is $S = x/(x^2 + t^2)$. So $S$ is in fact continuous, and the result $S = 0$ for $x = 0$ is re-confirmed. Furthermore, $S$ has a maximum $0.5/t$ at $x = t$ and a minimum $-0.5/t$ at $x = -t$. Only in the strict limit $t \rightarrow 0$ does this interfacial region vanish.
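(As a quick check of this expansion: for small $x$ and $t$ one has $\sin(x) \approx x$, $2 - 2\cos(x) \approx x^2$, and $(e^{t/2} - e^{-t/2})^2 \approx t^2$, so

$$S \approx \frac{x}{x^2 + t^2},$$

and setting $\partial S/\partial x = 0$ gives the extrema $\pm 1/(2t)$ at $x = \pm t$.)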

Question 1: Suppose we had defined $a(n)$ with a different convergence factor; instead of the exponential factor $\exp(-nt)$ we could have used e.g. $(1 + nt)\exp(-nt)$, $1/(\exp(nt) - nt)$, or a Gaussian. Would this lead to the same result for $S$ in the limit $t \rightarrow 0^+$, or to a different one?

Question 2: Under which conditions is it mathematically allowed to extend the result for $S$ in the limit $t \rightarrow 0^+$ to the case $t = 0$, where the sequence $a(n)$ becomes $\sin(nx)$, which is no longer convergent?

EDIT: I now understand that the term "convergence factor" is rarely used in mathematics, and that the preferred terminology is "tempered distribution". I have been informed by Strants that the summation method used above is known as "Abel summation".

M. Wind
  • "Mathematically allowed" depends on what you want to claim. If you want to claim that $\sum \sin nx$ converges to $\frac12\cot\frac x2$, that's not "allowed" because it's false. If you want to claim that the sum's value at $t$ has a limit as $t\to0^+$, and that limit equals $\frac12\cot\frac x2$, then that's "allowed" because it's true. If you have another statement in mind, then clearly stating it should allow you to determine whether it's true or false. – Greg Martin Nov 23 '14 at 22:04
  • The expression $\sum_{n=0}^{\infty} \sin(nx)$ is not defined because $\sin(\infty)$ is not defined. It is possible to give meaning to the sum by choosing a proper representation. The obvious choice is to incorporate a convergence factor. The summation can then be performed without ambiguity. The result is the function $S$. Since it is analytic, we can perform any operation on it. We can also set $t = 0$. The method of transforming something to an analytic function is of course standard practice. A well-known example is the Gamma function. – M. Wind Nov 25 '14 at 03:49
  • I disagree. Yes, if you change the sum, then you can change it to something that unambiguously converges. But there are many choices one can make, and they lead to different answers. Also, "the expression $\sum \sin nx$ is not defined because $\sin \infty$ is not defined" doesn't make sense. No function on the real numbers is defined when one "plugs in $\infty$"; yet plenty of series converge. – Greg Martin Nov 25 '14 at 05:31
  • I challenge you to come up with a different choice for the infinite sum, which leads to a different answer that is (equally) meaningful. Furthermore I invite you to come up with an example of an oscillatory function of which the amplitude does not drop to zero for large values of the argument, but nevertheless the sum of the series converges. – M. Wind Nov 25 '14 at 13:45
  • Strants is addressing the first "challenge" in their answer. As for the second, of course such an example does not exist - a necessary condition for a series to converge is that the terms tend to $0$ in the limit. I never claimed that was possible. If that is what you meant by saying "$\sin \infty$ is not defined", then I for one don't find that terminology clear or accurate. – Greg Martin Nov 25 '14 at 23:53
  • You comment on my English and my terminology. I was hoping you would focus on the mathematics. E.g. generalizations and analytic extensions of sums; the connection between sums/series and real functions via their series expansions (Taylor; Fourier). – M. Wind Nov 26 '14 at 01:08

2 Answers


EDIT: I have learned that the definition of "convergence factor" I assumed below is not the definition M. Wind intended.

Question 1

For question 1, the answer is that (at least for some not unreasonable choices of $x$) we can define a convergence factor such that $\lim_{t\to 0} S \not= \frac{1}{2}\cot\left(\frac{x}{2}\right)$. To see this, let us consider the following question:

Question (1'): Does there exist a function $g:\mathbb{N} \times \mathbb{R}^{\ge 0} \to \mathbb{R}$ such that $g(n,0) = 0$ for all $n$, $\sum \sin(nx)g(n,t)$ converges for all $t > 0$, and $$\lim_{t \to 0} \sum_{n=0}^\infty \sin(nx) g(n,t) \not= 0\,?$$

The answer to this question is related to the answer to your question 1: if such a $g$ exists, then we can take the new convergence factor $\exp(-nt) + g(n,t)$ and get a new limit $\lim_{t \to 0} S$; alternatively, if the answer to question 1' is no, then for any convergence factor $f(n,t)$ we have, with $g(n,t) = \exp(-nt) - f(n,t)$, $$\lim_{t \to 0} \sum_{n=0}^\infty \sin(nx) g(n,t) = 0,$$ so $$\lim_{t \to 0} \sum_{n=0}^\infty \sin(nx) \exp(-nt) = \lim_{t \to 0} \sum_{n=0}^\infty \sin(nx) f(n,t).$$

I claim such a $g$ exists. Specifically, define $g$ by

$$g(n,t) = \begin{cases} 1 & \text{if } t \neq 0 \text{ and } n \text{ is the least positive integer} \ge \frac{1}{t} \text{ such that } \sin(nx) \in \left[\frac{1}{2} - t, \frac{1}{2} + t\right],\\ 0 & \text{otherwise.} \end{cases}$$

For $\frac{x}{2\pi}$ irrational, $g$ is well-defined, since $\left\{\sin(nx) \mid n \in \mathbb{N}\right\}$ is dense in $[-1,1]$ (a consequence of the equidistribution of $nx$ modulo $2\pi$).

Then, $$\lim_{t \to 0} \sum_{n=0}^\infty \sin(nx)g(n,t) = \frac{1}{2}.$$
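(To make this concrete, here is a minimal numerical sketch in Python; the helper name `single_term_sum`, the test value of $x$, and the cutoff `n_max` are my own illustrative choices, not part of the original answer. For each $t$, the sum reduces to the single surviving term, which lies within $t$ of $\frac{1}{2}$.)

```python
import math

def single_term_sum(x, t, n_max=10**7):
    """Evaluate sum_n sin(n*x) * g(n, t) for the g defined above:
    g(n, t) = 1 only for the least integer n >= 1/t with sin(n*x)
    within t of 1/2, so the whole sum is that single term."""
    n = max(1, math.ceil(1.0 / t))
    while n <= n_max:
        s = math.sin(n * x)
        if abs(s - 0.5) <= t:
            return s  # the single term picked out by g
        n += 1
    raise RuntimeError("no qualifying n found below n_max")

# x/(2*pi) irrational, so {sin(n*x)} is dense in [-1, 1]
x = 1.0
for t in (0.1, 0.01, 0.001):
    print(t, single_term_sum(x, t))  # values approach 1/2 as t -> 0
```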

Question 2

As a thought on question 2: a series $\sum a_n$ is defined to be Abel summable if $$\lim_{r \to 1^-} \sum_{n=0}^\infty a_n r^n$$ exists and is finite. If we set $r = e^{-t}$, the condition $r \to 1^-$ becomes $t \to 0^+$, so we are left with $$\lim_{t \to 0^+}\sum_{n=0}^{\infty}a_n e^{-nt},$$ which, if we let $a_n = \sin(nx)$, is exactly what you have. So, you can say the series is Abel summable to the sum $\frac{1}{2}\cot\left(\frac{x}{2}\right)$. In fact, this result is mentioned in the last exercise of this document.

  • Thank you for your input. I am afraid that your choice for $g(n, t)$ does not meet the criteria for a convergence factor. Specifically, it is not a monotonically decreasing function of $y = nt$. – M. Wind Nov 25 '14 at 19:31
  • Could you include a definition of convergence factor in your question? I followed the broader definition provided here. – Strants Nov 25 '14 at 20:41
  • Yes, of course! On general grounds and in accordance with the way convergence factors are commonly used in physics, I selected the following 4 criteria: [1] $g(0) = 1$; [2] $g(\infty) = 0$; [3] $g$ is monotonically decreasing; [4] $g$ is smooth. – M. Wind Nov 26 '14 at 01:50
  • I see, thank you! I admit, the $g$ I found did feel a little like cheating. I also added a partial answer to question 2: apparently, the method you use to sum $\sum \sin(nx)$ is equivalent to Abel summation. – Strants Nov 26 '14 at 04:54
  • Thank you very much! I am certainly going to read about Abel summation. I hope to learn when it is applicable and how it relates to other fields (real functions; Fourier transforms). – M. Wind Nov 26 '14 at 06:14

I performed numerical tests on several convergence factors $g(n, t)$. Note that the parameter $t$ appears only in the product with $n$; it is therefore convenient to introduce $y = nt$. For the convergence factor $g(y)$ I set the following 4 criteria: [1] $g(0) = 1$; [2] $g(\infty) = 0$; [3] $g$ is monotonically decreasing; [4] $g$ is smooth.

I selected a broad range of candidate functions $g(y)$ and performed the summation in double precision, for different values of $t$, for both the sine series and the cosine series (for which the exact result is $S = 1/2$). In every case I observed convergence to the exact value.

The functions $g$ that gave the fastest and most accurate convergence, even for fairly large values of $t$ ($0.001$ for $x < 0.3$ and $0.002$ otherwise), were of the following type: for small values of $y$ they behave like $g(y) = 1 - y^N$, with $N = 8$. Two excellent choices were found to be $g(y) = 1/(1 + y^8 + 0.25y^{16})$ and $g(y) = \exp(-y^8)$.
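(Below is a minimal reconstruction of this experiment in Python/NumPy, not the original code; the truncation point `n_terms` and the test values of $x$ and $t$ are illustrative assumptions. It compares a few convergence factors against the exact limit $\frac{1}{2}\cot(x/2)$.)

```python
import numpy as np

def damped_sum(x, t, g, n_terms=200_000):
    """Sum sin(n*x) * g(n*t) for n = 0 .. n_terms."""
    n = np.arange(n_terms + 1)
    return np.sum(np.sin(n * x) * g(n * t))

x, t = 0.5, 0.001
exact = 0.5 / np.tan(x / 2)  # the t -> 0+ limit, (1/2) cot(x/2)

factors = {
    "exp(-y)":                  lambda y: np.exp(-y),
    "exp(-y**8)":               lambda y: np.exp(-(y**8)),
    "1/(1 + y**8 + 0.25y**16)": lambda y: 1.0 / (1.0 + y**8 + 0.25 * y**16),
}
for name, g in factors.items():
    print(f"{name:26s} error = {damped_sum(x, t, g) - exact:.3e}")
```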

Summarizing, I can now state with confidence that the answer to Question 1 is:

For every well-behaved convergence factor $g$, the sum $S$ converges, in the limit $t \rightarrow 0^+$, to the same result as is obtained with the exponential convergence factor $g(n, t) = \exp(-nt)$.

M. Wind
  • Could you possibly include a list of the functions $g(y)$ you tested? (Just to satisfy my curiosity.) – Strants Nov 26 '14 at 03:58
  • This was my initial set of 10 candidates: [1] $\exp(-y)$ [2] $1/(1+y)$ [3] $1/(1 + y + 0.5y^2)$ [4] $2/(1 + \exp(2y))$ [5] $\exp(-y^2)$ [6] $(1+y)\exp(-y)$ [7] $1/(1+y^2)$ [8] $2/(\exp(y)+\exp(-y))$ [9] $(1+y+0.5y^2)\exp(-y)$ [10] $1/(1+y^3)$. The best four were nos. 5, 6, 8, 9. I kept those and adjusted the other six, improving the set at each step. Classifying them according to their Taylor series, $g = 1-y^N$ for small $y$, I noticed a clear preference for higher values of $N$; roughly $N=8$ turned out to be optimal. All in all I tested around 50 functions. – M. Wind Nov 26 '14 at 17:06