
I have two questions:

A. I know the Taylor series for $\arctan(x)$ and for $e^x$. How do I find the series for $\arctan(x)\cdot e^x$?

B. Say I want to find the series for $\arctan(g(x))$, do I just substitute $x$ with $g(x)$, or do I have to start the process of finding the series from the beginning?

Thanks!

yotamoo

2 Answers


For product: Suppose that the Taylor series for $f(x)$ about $x=0$ is $a_0+a_1x+a_2x^2 +a_3x^3+\cdots$, and converges if $|x|<A_f$. Suppose also that the series for $g(x)$ is $b_0+b_1x+b_2x^2+b_3x^3 +\cdots$, and converges if $|x|<A_g$. Then the series for $f(x)g(x)$ is in principle not hard to compute, and converges at least when $|x|<\min(A_f,A_g)$.

Just do what comes naturally, and multiply the two series as if they were long polynomials. Explicitly, the series for $f(x)g(x)$ is $$c_0+c_1x+c_2x^2+c_3x^3+\cdots, \quad\text{where}\quad c_n=\sum_{i=0}^n a_i b_{n-i}.\qquad\qquad(\ast)$$ In some cases, we may be able to find a closed form for the $c_n$. The series $\sum c_nx^n$ is called the convolution of the two series $\sum a_nx^n$ and $\sum b_n x^n$. The convolution operation is of great importance in many areas of mathematics.

Often, for approximation purposes, we only want to find the first few terms of the power series expansion for $f(x)g(x)$. Then the computations are easy.
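The question's own example, $\arctan(x)\cdot e^x$, can be worked out this way. Here is a minimal sketch in Python using exact rational arithmetic (the helper name `cauchy_product` is mine, not standard):

```python
from fractions import Fraction as F
from math import factorial

def cauchy_product(a, b):
    """Coefficients of the product series: c_n = sum_i a_i * b_{n-i}."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

# arctan(x) = x - x^3/3 + x^5/5 - ...
atan = [F(0), F(1), F(0), F(-1, 3), F(0), F(1, 5)]
# e^x = sum_n x^n / n!
exp_ = [F(1, factorial(n)) for n in range(6)]

print(cauchy_product(atan, exp_))
# arctan(x) e^x = x + x^2 + x^3/6 - x^4/6 + 3x^5/40 + ...
```

Note that with only the six coefficients above, the product coefficients are already correct through $x^5$, since higher-order terms of either factor cannot affect them.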

We said that the series for $f(x)g(x)$ converges at least when $|x|<\min(A_f, A_g)$. But the convergence behaviour may be far better than that. For example let $f(x)=1-x$ and $g(x)=\frac{1}{1-x}$. The series for $f(x)$ is simply $1-x$, and converges everywhere. The series for $g(x)$ only converges when $|x|<1$. But the series for $f(x)g(x)$ is simply $1$, and converges everywhere.
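This cancellation is easy to see by convolving the coefficients directly; every coefficient past the constant term vanishes. A quick check in Python:

```python
from fractions import Fraction as F

a = [F(1), F(-1)] + [F(0)] * 8   # 1 - x, padded with zeros
b = [F(1)] * 10                  # 1/(1-x) = 1 + x + x^2 + ...

# Cauchy product: c_k = sum_i a_i * b_{k-i}
c = [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(10)]
print(c)  # every coefficient after the first cancels: the product series is 1
```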

Only a minor modification of $(\ast)$ is needed when we know the Taylor expansions of $f(x)$ and $g(x)$ in powers of $x-a$ instead of powers of $x$: just replace $x$ everywhere by $x-a$.

For composition: Again, we do what comes naturally, and simply substitute. Well, as pointed out by Robert Israel, the process is not quite that simple. The series for $f(x)$ usually tells us the behaviour of $f(x)$ when $x$ is near $0$, and may not be valid if $x$ is some distance from $0$. So substitution will work when $g(x)$ is near $0$ for $x$ near $0$. In terms of the Taylor series for $g(x)$, this means that the constant term in the expansion of $g(x)$ should be $0$. Thus $g(x)=\arctan(x)$ is generally OK, but $g(x)=1+x^2$ is not. If we attempt to substitute $1+x^2$ for $u$ in the usual series for $\frac{1}{1-u}$, we definitely will not obtain a power series expansion for $\frac{-1}{x^2}$, since this function does not have a power series expansion about $x=0$.

Unfortunately, the substitution process, even when it is valid, can be tedious. However, since we only consider $g(x)$ whose series has zero constant term, the expansion of $(g(x))^k$ has no powers of $x$ lower than $x^k$. So finding the first few terms of the power series expansion of $f(g(x))$ is quite easy.
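That observation also gives a mechanical way to compute a truncated composition: accumulate truncated powers of $g$, and each power $g^k$ only affects coefficients from $x^k$ on. A sketch in Python (the helper `compose` is my own, for illustration):

```python
from fractions import Fraction as F

def compose(f, g, n):
    """First n coefficients of f(g(x)), assuming g[0] == 0.

    Since g has zero constant term, g(x)^k contributes nothing
    below x^k, so only f[0..n-1] and the powers g^0..g^{n-1} matter.
    """
    assert g[0] == 0, "substitution needs a zero constant term"
    result = [F(0)] * n
    power = [F(1)] + [F(0)] * (n - 1)   # g(x)^0 = 1
    for k in range(n):
        for j in range(n):
            result[j] += f[k] * power[j]
        # multiply `power` by g, truncated to n terms
        power = [sum(power[i] * g[j - i] for i in range(j + 1)) for j in range(n)]
    return result

f = [F(1)] * 5                      # 1/(1-u) = 1 + u + u^2 + ...
g = [F(0), F(0), F(1), F(0), F(0)]  # g(x) = x^2, constant term 0
print(compose(f, g, 5))             # 1/(1-x^2) = 1 + x^2 + x^4 + ...
```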

There are some simple but important cases. For example, if $f(u)=\frac{1}{1-u}$, then the series expansion of $f(u)$ is $$1+u+u^2+u^3+\cdots.$$ We want the series expansion for $\frac{1}{1+x^2}$. So our function is $f(g(x))$, where $g(x)=-x^2$. Just substitute $-x^2$ every time that you see $u$. We get $$\frac{1}{1+x^2}=1-x^2+x^4-x^6+\cdots.$$ The series for $\arctan(x)$ is usually obtained in this way. Find the series for $\frac{1}{1+x^2}$ as we just did, and integrate term by term.
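The term-by-term integration is one line on the coefficient lists: $a_n x^n$ integrates to $\frac{a_n}{n+1}x^{n+1}$, and $\arctan(0)=0$ fixes the constant of integration. A sketch:

```python
from fractions import Fraction as F

# 1/(1+x^2) = 1 - x^2 + x^4 - x^6 + ...  (substitute u = -x^2 into 1/(1-u))
geom = [F(1), F(0), F(-1), F(0), F(1), F(0), F(-1)]

# integrate term by term: a_n x^n -> a_n/(n+1) x^{n+1}; arctan(0) = 0
atan = [F(0)] + [c / F(n + 1) for n, c in enumerate(geom)]

print(atan)  # x - x^3/3 + x^5/5 - x^7/7: the usual arctan series
```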

André Nicolas
    In general, to get the series for $f(g(x))$ about $x=x_0$ you want to use the series for $g(x)$ about $x=x_0$ and the series for $f(y)$ about $y=g(x_0)$. So knowing the series of $\arctan(t)$ in powers of $t$ you can get the series for $\arctan(g(x))$ in powers of $x$ if $g(0) = 0$. – Robert Israel Feb 07 '12 at 17:27
  • For composition of power series, is the radius of convergence the minimum of that of the individual series, or is it like for multiplication and addition of power series where the resultant interval of convergence may be larger than for the individual series?

    If the radius of convergence is the minimum as described, then is the interval of convergence the intersection of the two individual intervals of convergence, or is behaviour at the end-points not guaranteed (like when differentiating a power series to obtain another)?

    – ryang Mar 07 '13 at 13:07
  • I really don't know general answers here! That might make a good question, or perhaps more than one. – André Nicolas Mar 07 '13 at 17:25
  • @AndréNicolas: I am thinking about your claim in the 5th paragraph that the series for $f(x)g(x)$ converges everywhere. Multiplying the two series gives $(1-x)\sum x^k$, which by the ratio test still has radius of convergence 1. What is going on here? I think the problem is that $f(x)g(x)=\frac{1-x}{1-x}$, which is undefined at 1. – symplectomorphic Aug 16 '16 at 16:23
  • @symplectomorphic: The function $f(x)g(x)$ is identically equal to $1$ in a suitable neighbourhood of the origin, so the Maclaurin series is very simple. The fact that $f(x)g(x)$ is not defined at $x=1$ plays no role; the Maclaurin series is determined by local behaviour. – André Nicolas Aug 16 '16 at 17:21
  • @AndréNicolas: ah, right. But then we also have a theorem that says the Taylor series operator is multiplicative: the Taylor series of $fg$ is the product of the Taylor series of $f$ and the Taylor series of $g$. In this case, we get $(1-x)\sum x^k$, and the radius of convergence of that series is 1, right? Perhaps there is a hypothesis I'm missing from the multiplicativity theorem. – symplectomorphic Aug 16 '16 at 17:23
  • @symplectomorphic: As you know, when we do the multiplication $(1-x)\sum x^k$ formally, we get $1$. The relevant result is that the convergence radius of the product is at least the min of the individual radii of convergence. – André Nicolas Aug 16 '16 at 17:28
  • @AndréNicolas: yes, I know. But my claim is that the following three statements cannot all be true. (1) the Taylor series of a product is the product of the Taylor series. (2) the radius of convergence of the series $\sum(1-x)x^k$ is 1. (3) the radius of convergence of the Taylor series of $f(x)g(x)=(1-x)(1-x)^{-1}$ is $\infty$. – symplectomorphic Aug 16 '16 at 17:39
  • @AndréNicolas: I realized my error; apologies for my questions. It is true that $\sum(1-x)x^k$ has radius of convergence 1, but this infinite series is not written as a power series, so there is no contradiction, as I thought there was. Taking the Cauchy product of $1-x$ and $\sum_{k=0}^\infty x^k$ does give 1. – symplectomorphic Aug 17 '16 at 01:48
  • You talked about Cauchy product. If $\sum_{n=0}^{\infty} a_n$ converges to $A$ and $\sum_{n=0}^{\infty} b_n$ converges to $B$ and at least one of these infinite series converges absolutely, then the Cauchy product of these infinite series converges to $AB$. This was proved by Franz Mertens, as the Wikipedia Cauchy product page says. – user236182 Sep 17 '17 at 12:47
  • This direct substitution (in composition of taylor series) does not make sense when compared to computation of derivatives by chain rule. For example: $d(e^x)=e^xdx$ but if we substitute $x$ by $g(x)=x^2$, we'll notice that $d(e^{x^2})\neq e^{x^2}dx$. What do you think about this? Thanks! – Gaurang Tandon Feb 11 '18 at 05:35

You can check some theory on two matters:

A Cauchy product rule:

Let $$A = \sum {{a_k}} \quad\text{and}\quad B = \sum {{b_k}}.$$ Then $$A \cdot B = C = \sum {{c_k}}, \quad\text{where}\quad {c_k} = \sum\limits_{n = 0}^k {{a_n}{b_{k - n}}}$$

(the last expression is a discrete convolution)

The theorem is valid for finite sums, and for series if one series converges and the other converges absolutely.
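A quick numerical illustration of this, pairing an absolutely convergent geometric series with the conditionally convergent alternating harmonic series (the truncation length `N` is an arbitrary choice for the sketch):

```python
import math

N = 500
a = [0.5 ** n for n in range(N)]             # geometric, absolutely convergent, sum = 2
b = [(-1) ** n / (n + 1) for n in range(N)]  # alternating harmonic, sum = ln 2

# Cauchy product coefficients c_k = sum_i a_i * b_{k-i}
c = [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(N)]

print(sum(c), 2 * math.log(2))  # the Cauchy product converges to the product of the sums
```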

B For the second matter, the composition, you should consider the properties of the Taylor series. Since there is a broad scope of functions we can compose, you should always pay attention to continuity among other issues. However, in the simplest cases, you can compose a Taylor polynomial with another polynomial, or elementary functions. For example, you have that

$${e^{\sin x}} = \sum {\frac{{{{\sin }^k}x}}{{k!}}} $$ or

$$\frac{1}{{1 - {{\sin }^2}x}} - 1 = {\tan ^2}x = \sum\limits_{k = 1}^\infty {{{\sin }^{2k}}x} $$

and each converges quite well with only a few terms.
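The second example is easy to check numerically: for a fixed $x$ it is a geometric series in $\sin^2 x$, and a handful of terms already match $\tan^2 x$ closely. A small sketch (the point $x = 0.3$ and the cutoff of seven terms are arbitrary choices):

```python
import math

x = 0.3
target = math.tan(x) ** 2

# partial sums of sum_{k>=1} sin(x)^{2k}, a geometric series in sin^2(x)
s = 0.0
for k in range(1, 8):
    s += math.sin(x) ** (2 * k)

print(s, target)  # already very close after seven terms
```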

Pedro
  • If $\sum_{n=0}^{\infty} a_n$ converges to $A$ and $\sum_{n=0}^{\infty} b_n$ converges to $B$ and at least one of these infinite series converges absolutely, then the Cauchy product of these infinite series converges to $AB$. This was proved by Franz Mertens, as the Cauchy product Wikipedia page says. – user236182 Sep 17 '17 at 04:25
  • @user236182 Yes: "The theorem is valid for finite sums, and for series if one series converge[s] and the other converges absolutely." Didn't mention Mertens though, so thanks for that. – Pedro Sep 17 '17 at 11:29