3

In Taylor series we have $x-a$, which usually means shifting the function $a$ units to the right (if $a$ is negative then it's a left shift, and if $a=0$ then there's no shift at all).

What I don't understand is why this term is there at all. The explanation I hear is "to approximate around a different point," but this makes no sense to me.

If I have a function $f(x)$ and I want to evaluate it at a point $a$, then I just plug in $a$, that is, evaluate $f(a)$. I don't understand the purpose of shifting the function over, or what that accomplishes that I couldn't do by just leaving $a=0$ in the first place. The function itself does not change; we're just picking it up and moving it over a bit.

user525966
  • 5,631
  • Taylor series are hugely useful and helpful, not so much for polynomials (though even in that case they can help a lot) as for other kinds of functions. And in advanced mathematics things get way more interesting and messy than simply "plugging" some value into some function. – DonAntonio Feb 24 '18 at 15:34
  • 3
    Because Taylor series are not used for polynomials. They are used to approximate more complicated functions – Yuriy S Feb 24 '18 at 15:34
  • 2
    The one use for a Taylor series for a polynomial is that you can express a polynomial in $x$ in powers of $x -a$ very easily. – ncmathsadist Feb 24 '18 at 15:35
  • @ncmathsadist: I used to think this also (at least before 2001 or so), but see my comment here for a much easier way that only uses basic high school algebra. (Well, maybe in some complicated and higher power cases, Taylor series might be quicker computationally.) – Dave L. Renfro Feb 24 '18 at 15:42
  • @DonAntonio Sorry, by polynomial I meant function; I changed the post. It doesn't really change the nature of my question, though, since it technically applies to polynomials too, same general idea. – user525966 Feb 24 '18 at 16:13
  • @user525966, it completely changes the nature of your question. How do you 'plug' something into, say, $\cos x$? If you do not have a calculator, how do you compute $\cos 25$, for example? – Yuriy S Feb 24 '18 at 20:53

5 Answers

2

A Taylor expansion of $f$ around $a$ lets you simply read off the derivatives of arbitrary order at $a$. So it's just a neat way of writing the function that encodes a lot of information about how this function behaves near that point.

This is useful, for example, when calculating limits of functions as $x\to a$ (where one would otherwise reach for l'Hospital's rule).
As an example, if you have the polynomials $f(x)=2(x-1)+3(x-1)^2$ and $g(x)=x-1$, then the limit $$\lim_{x\to 1}\frac {f(x)}{g(x)}=\lim_{x\to 1} \frac{2(x-1)+3(x-1)^2}{x-1}=2$$ is easy to see. However, if we start with $f(x)=3x^2-4x+1$ (the same polynomial, just expanded in powers of $x$), then calculating the limit is (still easy, as the functions are easy, but) no longer trivial.


Edit: Calculating the limits:

$$\lim_{x\to 1}\frac {f(x)}{g(x)}= \lim_{x \to 1}\frac{2(x-1)+3(x-1)^2}{x-1}= \lim_{x \to 1}\frac{2(x-1)}{x-1} + \lim_{x \to 1} \frac{3(x-1)^2}{x-1}\\ = \lim_{x \to 1} 2+ \lim_{x \to 1}3(x-1)= 2+0 = 2, $$ so the limit is just the coefficient in front of the linear term of the Taylor series of $f$.

If we don't already know the Taylor series, we have to calculate: $$ \lim_{x\to 1}\frac {f(x)}{g(x)} = \lim_{x\to 1}\frac {3x^2-4x+1}{x-1}.$$ To my knowledge, there is no way to simply read off the limit here, so we have to use some tricks.
Note that $\lim_{x\to 1} (3x^2-4x+1) =0 = \lim_{x\to 1} (x-1)$, so we can use l'Hospital: $f'(x) = 6x-4$, so $f'(1)=2$; $g'(x)=1$, so $g'(1)=1$. Thus we get: $$\lim_{x\to 1}\frac {f(x)}{g(x)} = \lim_{x\to 1}\frac {f'(x)}{g'(x)}= \frac 2 1 =2.$$

As I said before, this calculation is not that hard because $f$ and $g$ were chosen to be easy; but I guess it still illustrates the point that the second calculation was far more cumbersome than the first one.
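
To see both routes mechanically, here is a minimal Python sketch (my addition, assuming `sympy` is available; it is not part of the original answer):

```python
# Verify both limit computations with sympy.
import sympy as sp

x = sp.symbols('x')
f = 3*x**2 - 4*x + 1
g = x - 1

# The "cumbersome" route, done symbolically in one call:
print(sp.limit(f / g, x, 1))        # 2

# The Taylor route: rewriting f in powers of (x - 1) exposes the limit
# as the coefficient of the linear term.
print(sp.expand(f.subs(x, x + 1)))  # 3*x**2 + 2*x, i.e. f = 2(x-1) + 3(x-1)^2
```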

klirk
  • 3,634
  • If it were easy to see then I probably wouldn't be asking this question, since I am not seeing it to begin with! I don't understand why that limit is easy to see or why the other one is "not trivial" anymore. Can you elaborate on both? – user525966 Feb 24 '18 at 16:18
  • @DonAntonio: My expansion should be correct, the first derivative at $1$ is $+2$ – klirk Feb 24 '18 at 17:46
  • @klirk Of course, you are correct. Thanks. – DonAntonio Feb 24 '18 at 17:47
  • @user525966 the "easy" limit you get by splitting the fraction into two fractions corresponding to the summands of the numerator, then calculating the limits. The "non-trivial" calculation needs l'Hospital's rule, so it is more cumbersome. In fact, as the coefficients of the Taylor series correspond to the derivatives, if you have a Taylor expansion of a function, you can think of it as if somebody had already done most of the work you need for calculating such a limit. – klirk Feb 24 '18 at 17:51
  • I didn't really understand any of that honestly, I don't see why one is easier than the other – user525966 Feb 24 '18 at 18:35
  • @user525966 I included a detailed calculation – klirk Feb 24 '18 at 19:12
  • For some U.S. calculus 2 level Taylor series applications, including evaluating limits, see my answer to What are power series used for? (a reference request). – Dave L. Renfro Feb 25 '18 at 09:21
0

You are correct.

If our function $f(x)$ is a polynomial, it really does not matter whether we find the Taylor polynomial around $x=0$ or around any other point.

As you know, Taylor polynomials approximate a function $f(x)$ and use the derivatives of $f(x)$ to evaluate the coefficients.

Some functions, such as $f(x) = \sqrt x$ or $f(x)=\log(x)$, are not differentiable at $x=0$.

Thus it is not possible to write the Taylor polynomials for these types of functions around $x=0.$

We use Taylor polynomials around another point, such as $x=1$, where our derivatives exist.
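
As an illustration (my sketch, not part of the original answer, assuming `sympy`), here is the degree-$3$ Taylor polynomial of $\sqrt x$ centered at $x=1$, which works fine even though centering at $x=0$ would fail:

```python
# Taylor polynomial of sqrt(x) about x = 1, where all derivatives exist.
import sympy as sp

x = sp.symbols('x')
T3 = sp.series(sp.sqrt(x), x, 1, 4).removeO()   # terms up to (x - 1)**3
print(T3)  # 1 + (x - 1)/2 - (x - 1)**2/8 + (x - 1)**3/16
# Near the center the approximation is already quite good:
print(T3.subs(x, 1.1), sp.sqrt(sp.Float(1.1)))  # ~1.0488125 vs ~1.0488088
```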

  • Sure but here we're plugging in $x=1$ instead of $x=0$. Why are we shifting the function over instead with $x-a$? – user525966 Feb 24 '18 at 16:21
  • 1
    When the function is not differentiable around $x=0$, the formula is designed around $x=a$ and the polynomial is found in terms of powers of $(x-a)$. Now if later we want to shift the polynomial back to be centered at $x=0$, that is perfectly OK. – Mohammad Riazi-Kermani Feb 24 '18 at 16:45
0

A Taylor polynomial uses derivative data at a point, say $a \in \mathbb{R}$, to generate an approximation of a function. In this case "data" means derivatives of a function, say $f$. The term $x-a$ doesn't merely shift a polynomial from $0$ to $a$; rather, it indicates where the data is centered.

For instance, we write for a Taylor polynomial centered at zero:

$$T_n(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \cdots + \frac{f^{(n)}(0)}{n!} x^n.$$

The function $T_n(x)$ is designed so that its derivatives up to order $n$ match those of $f$ at $0$. You can verify this by calculating $T_n(0)$, $T'_n(0)$, etc. This polynomial depends very much on the center point. For instance, if we had instead:

$$T_n(x) = f(0) + f'(0) (x+1) + \frac{f''(0)}{2!} (x+1)^2 + \frac{f'''(0)}{3!} (x+1)^3 + \cdots + \frac{f^{(n)}(0)}{n!} (x+1)^n$$ and we examine $T_n(0)$, we get $f(0) + f'(0) + \frac{f''(0)}{2!} + \cdots + \frac{f^{(n)}(0)}{n!} \neq f(0)$ in general, so this wouldn't be a good estimate of $f$ at the origin.

Instead, what we would be looking for would be data at $a=-1$, and we would write:

$$T_n(x) = f(-1) + f'(-1) (x+1) + \frac{f''(-1)}{2!} (x+1)^2 + \frac{f'''(-1)}{3!} (x+1)^3 + \cdots + \frac{f^{(n)}(-1)}{n!} (x+1)^n$$

and here we have $T_n(-1) = f(-1)$, etc.

So the center point indicates what data we should use from $f$, and also where we have an exact match for the function estimation. How well a Taylor polynomial approximates a function away from that point varies depending on the function estimated.
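
A quick way to convince yourself of this derivative-matching property is to build $T_n$ for a concrete function and center and compare derivatives. A sketch of my own, assuming `sympy`, with $f=e^x$, $a=-1$, and $n=3$ as arbitrary choices:

```python
# Check that the Taylor polynomial centered at a matches f's derivatives
# at a up to order n.
import sympy as sp

x = sp.symbols('x')
f, a, n = sp.exp(x), -1, 3
Tn = sum(sp.diff(f, x, k).subs(x, a) / sp.factorial(k) * (x - a)**k
         for k in range(n + 1))
for k in range(n + 1):
    # Difference of the k-th derivatives at the center; prints 0 each time.
    print(sp.simplify(sp.diff(Tn, x, k).subs(x, a) - sp.diff(f, x, k).subs(x, a)))
```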

One example where a Maclaurin series (a Taylor series at the origin) would not work is the estimation of the natural logarithm.

In particular, $\ln(x)$ is not defined at the origin and has a vertical asymptote there. In this case, $a=1$ is used for Taylor expansions.


As a demonstration, let's derive the Taylor series for $\ln(x)$. For now, call it $f$.

$$f'(x) = \frac{1}{x} = \frac{1}{1-(1-x)} = \sum_{n=0}^\infty (1-x)^n = \sum_{n=0}^\infty (-1)^n (x-1)^n.$$

Note that this series expansion is valid provided that $|x-1| < 1$. Integrating term by term, $f(x) = C + \sum_{n=0}^\infty \frac{(-1)^n}{n+1} (x-1)^{n+1}.$ We now need to find $C$; since we know that $f(1) = C$ and $\ln(1) = 0$, we have $C = 0$.

Thus, $$\ln(x) = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n} (x-1)^{n},$$ and we have the Taylor expansion of $\ln(x)$ at $1$.
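
As a quick numeric sanity check (my sketch, plain Python, not from the answer), the partial sums do converge to $\ln(x)$ inside the interval of validity:

```python
# Partial sums of the Taylor series of ln(x) at 1, versus math.log.
import math

def ln_series(x, terms=60):
    # sum_{n=1}^{terms} (-1)^(n-1)/n * (x - 1)^n, valid for |x - 1| < 1
    return sum((-1)**(n - 1) / n * (x - 1)**n for n in range(1, terms + 1))

print(ln_series(1.5), math.log(1.5))  # both ~0.4054651081
```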

This comes with certain side benefits. For instance we know that a Taylor expansion is written as $$f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!} (x-a)^n.$$

Now say we want to know the $10000$-th derivative of $\ln(x)$ at $1$. We know that the coefficient of the $10000$-th term is $f^{(10000)}(1)/10000!$, but also that for the natural logarithm in particular this coefficient is $\frac{(-1)^{9999}}{10000}$.

Equating the two, we find: $\left.\frac{d^{10000}}{dx^{10000}} \ln(x) \right|_{x=1} = -9999!$
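
The same pattern can be verified symbolically at a smaller order (my sketch, assuming `sympy`; order $10$ keeps it instant):

```python
# d^n/dx^n ln(x) at x = 1 equals (-1)^(n-1) * (n-1)!; check n = 10.
import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.log(x), x, 10).subs(x, 1))  # -362880
print(-sp.factorial(9))                      # -9! = -362880
```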

Joel
  • 16,256
0

The Taylor series is sometimes an important way to reduce the degree of a polynomial equation without using long division. For example, suppose we have a third-degree equation and we know one of its roots, $x=a$, and we seek the remaining two roots. We only have to rewrite the equation as its Taylor series at $x=a$: since $a$ is a root, the constant term vanishes, and dividing by $(x-a)$ gives a new second-degree equation.
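
For a concrete (hypothetical) instance, take the cubic $x^3-4x^2+x+6$ with the known root $x=2$. A sketch assuming `sympy`:

```python
# Deflating a cubic via its Taylor expansion at a known root.
import sympy as sp

x, t = sp.symbols('x t')
p = x**3 - 4*x**2 + x + 6              # roots: -1, 2, 3; suppose we know x = 2
shifted = sp.expand(p.subs(x, t + 2))  # expansion of p in powers of t = x - 2
print(shifted)                         # t**3 + 2*t**2 - 3*t, no constant term
quadratic = sp.expand(shifted / t)     # divide out the known root
print(sp.solve(quadratic, t))          # roots t = -3 and t = 1, i.e. x = -1, 3
```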

E.H.E
  • 23,280
0

Assume you have to compute a "complicated function", containing $\sqrt{\cdot}$s, $\exp$s, $\sin$s, etcetera, a million times at points $x$ near some interesting point $a$. Computing $f(x)$ exactly each time by plugging in $x$ would be time-consuming and expensive; but maybe you are willing to put up with a good approximation. That's where the Taylor polynomial comes in: it is a certain polynomial $p(X)=c_0+c_1X+c_2X^2+\ldots+c_nX^n$ in the increment variable $X:=x-a$, with coefficients $$c_j={f^{(j)}(a)\over j!}\ ,$$ computed once and for all, that gives you an approximate value of $f(x)$ at any nearby point $x:=a+X$ (note that $|X|\ll1$ here) with a well-controlled error, in no time.
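
A minimal sketch of this workflow (my code; $f=\exp$, $a=0.5$, and degree $4$ are arbitrary choices): compute the coefficients once, then evaluate each $f(x)$ with a few multiplications via Horner's rule.

```python
# Precompute Taylor coefficients once, then evaluate cheaply many times.
import math

a = 0.5
# c_j = f^(j)(a)/j! for f = exp, whose derivatives are all exp:
coeffs = [math.exp(a) / math.factorial(j) for j in range(5)]

def taylor_eval(x):
    X = x - a                    # the increment variable
    result = 0.0
    for c in reversed(coeffs):   # Horner: (((c4*X + c3)*X + c2)*X + c1)*X + c0
        result = result * X + c
    return result

print(taylor_eval(0.52), math.exp(0.52))  # agree to ~10 digits near a
```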