
Let $$y = mx + c \tag 1$$ be the equation of a straight line.

Suppose it passes through the point $$(x_0, y_0).$$

Then from (1) we get $$c = y_0 - mx_0. \tag 2$$ On the other hand, from the slope formula, $$m=\frac {y_1-y_0}{x_1-x_0}. \tag 3$$ Substituting (3) into (2) gives $$c = y_0 - {\frac {y_1-y_0}{x_1-x_0}} x_0. \tag 4$$ Substituting (3) and (4) into (1) gives $$y = y_1 \frac {x-x_0}{x_1-x_0} + y_0 \frac {x-x_1}{x_0-x_1},$$ that is, $$P(x) = y_1 \frac {x-x_0}{x_1-x_0} + y_0 \frac {x-x_1}{x_0-x_1}. \tag 5$$

Putting $x = x_0$ into (5) gives $$P(x_0) = y_0. \tag 6$$ Putting $x = x_1$ into (5) gives $$P(x_1) = y_1. \tag 7$$ Substituting (6) and (7) into (5) gives $$P(x) = P(x_1) \frac {x-x_0}{x_1-x_0} + P(x_0) \frac {x-x_1}{x_0-x_1}. \tag 8$$ Now, how can one derive the general formula:
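As a quick numerical sanity check of the two-point formula (5), here is a minimal Python sketch; the sample points (on the line $y = 2x + 1$) are chosen arbitrarily:

```python
# Two-point interpolation as in (5): P(x) = y1*(x-x0)/(x1-x0) + y0*(x-x1)/(x0-x1).
def interp2(x, x0, y0, x1, y1):
    return y1 * (x - x0) / (x1 - x0) + y0 * (x - x1) / (x0 - x1)

# Arbitrary sample points on the line y = 2x + 1.
x0, y0 = 1.0, 3.0
x1, y1 = 4.0, 9.0

assert interp2(x0, x0, y0, x1, y1) == y0    # (6): P(x0) = y0
assert interp2(x1, x0, y0, x1, y1) == y1    # (7): P(x1) = y1
assert interp2(2.0, x0, y0, x1, y1) == 5.0  # 2*2 + 1, another point on the line
```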

$$P(x)=\sum_{i=0}^{n} y_i \prod_{\substack{j=0 \\ j\neq i}}^{n}\frac{x-x_j}{x_i-x_j}\,? \tag 9$$

ViktorStein
  • Actually, the formula is $$P(x) = a_0\frac{x-x_1}{x_0-x_1} + a_1\frac{x-x_0}{x_1-x_0}$$ to get $P(x_0)=a_0,P(x_1)=a_1$. – Rory Daulton Aug 08 '15 at 22:58
  • No offence but an absolutely trivial google search taking less than 5 seconds would allow you to find detailed information about Lagrange interpolation, including a proof of the interpolating formula. – wltrup Aug 08 '15 at 23:23
  • How to find Lagrange's interpolation ? What do you mean? Computing it? – Bernard Aug 09 '15 at 00:16
  • @wltrup no offense, but you seem to not understand the difference between a derivation and a proof – bruno Feb 20 '20 at 23:52

3 Answers


The idea, which also works for higher-degree polynomial interpolation, comes from linear algebra (dual bases). Instead of solving the problem directly through systems of linear equations to find the coefficients, you first solve a simpler problem: find two polynomials $p_0(x)$ and $p_1(x)$ such that: $$\begin{cases} p_0(x_0)=1\\p_0(x_1)=0 \end{cases}\qquad \begin{cases} p_1(x_0)=0\\p_1(x_1)=1 \end{cases}$$ Once you have them, it is trivial that the initial problem has the solution: $$p(x)=a_0p_0(x)+a_1p_1(x).$$ Now, from the second condition on $p_0(x)$ we know $\;p_0(x)=c(x-x_1),\enspace c\in\mathbf R$. Normalising it with the first condition, we obtain $c=\dfrac1{x_0-x_1}$.

Similarly, $\;p_1(x)=\dfrac{x-x_0}{x_1-x_0}$, whence finally: $$p(x)=a_0\frac{x-x_1}{x_0-x_1}+a_1\frac{x-x_0}{x_1-x_0}.$$
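This dual-basis construction is easy to sanity-check numerically; a small sketch, with nodes and values chosen arbitrarily:

```python
# Dual basis for two arbitrary nodes x0, x1.
x0, x1 = 1.0, 4.0
p0 = lambda x: (x - x1) / (x0 - x1)  # p0(x0) = 1, p0(x1) = 0
p1 = lambda x: (x - x0) / (x1 - x0)  # p1(x0) = 0, p1(x1) = 1
assert (p0(x0), p0(x1)) == (1.0, 0.0)
assert (p1(x0), p1(x1)) == (0.0, 1.0)

# The interpolant through (x0, a0) and (x1, a1) is then immediate.
a0, a1 = 3.0, 9.0
p = lambda x: a0 * p0(x) + a1 * p1(x)
assert (p(x0), p(x1)) == (a0, a1)
```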

The general case goes along the same lines: say, for instance, we want to find a quadratic polynomial that takes prescribed values $(y_0, y_1, y_2)$ at given (distinct) points $x_0, x_1, x_2$. We first find three quadratic polynomials that each satisfy one of these sets of conditions: $$\begin{cases} p_0(x_0)=1\\p_0(x_1)=0\\p_0(x_2)=0 \end{cases}\qquad \begin{cases} p_1(x_0)=0\\p_1(x_1)=1\\p_1(x_2)=0\end{cases}\qquad \begin{cases} p_2(x_0)=0\\p_2(x_1)=0\\p_2(x_2)=1 \end{cases}$$ Once we have them, the solution to the original problem is trivial: $$p(x)=y_0p_0(x)+y_1p_1(x)+y_2p_2(x).$$

Now each $p_i(x)$ must vanish at $x_j$ and $x_k$, hence it has the form: $$p_i(x)=c(x-x_j)(x-x_k),\quad c\in\mathbf R,$$ and it is normalised by the condition $p_i(x_i)=1$, hence $c=\dfrac1{(x_i-x_j)(x_i-x_k)}$, so that the final solution is: $$p(x)=y_0\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}+y_1\frac{(x-x_2)(x-x_0)}{(x_1-x_2)(x_1-x_0)}+y_2\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}.$$
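The final quadratic formula can likewise be checked on sample data; a sketch in Python (the nodes and values below are arbitrary):

```python
def lagrange_quadratic(x, xs, ys):
    # Quadratic Lagrange interpolant through three distinct nodes.
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x2) * (x - x0) / ((x1 - x2) * (x1 - x0))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

xs, ys = (0.0, 1.0, 3.0), (2.0, 0.0, 8.0)  # arbitrary sample data
for xk, yk in zip(xs, ys):
    assert abs(lagrange_quadratic(xk, xs, ys) - yk) < 1e-12
```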

Bernard
  • @anonymous: I've added the example of the quadratic interpolation polynomial. Is it clear enough so that you can extrapolate? – Bernard Aug 09 '15 at 01:01
  • Please, never mind, let me ask you one thing: Both of the answerers, including you, have answered. But, your answers look incomprehensible to me. Why didn't you start as I did? Just curious and no offence. –  Aug 09 '15 at 02:43
  • @anonymous: You mean a method with indeterminate coefficients: $\;p(x)=a_nx^n+\dots+a_1x+a_0$? First, because if $n$ is big, it's hard to prove there is a solution. Imagine if $n=100$. Second, because it hides the essential fact: it is a problem of linear algebra. Third, because well-employed ‘abstraction’ is, in the opinion of many, more efficient than brute-force computation. – Bernard Aug 09 '15 at 08:16

For the case where $n = 2$: Take fixed numbers $x_0, x_1, a_0$, and $a_1$. Let's look at a few special cases.

Say we want a polynomial $Q$ such that $Q(x_0)$ is $a_0$ while $Q(x_1) = 0$.

Since $x_1$ is a root, we can say that $Q(x) = c(x-x_1)$ for some coefficient $c$. Then we have $a_0 = Q(x_0) = c(x_0-x_1)$. Solve this for $c$, and we find that $$Q(x) = c(x-x_1) = \frac{a_0}{x_0-x_1}(x-x_1) = a_0\frac{x-x_1}{x_0-x_1}.$$

Now say we want a polynomial $R$ such that $R(x_0)$ is $0$ while $R(x_1)$ is $a_1$. Through similar reasoning, we get that $$R(x) = a_1\frac{x-x_0}{x_1-x_0}.$$

Notice that $Q(x) + R(x)$ satisfies the conditions that you put on your polynomial $P_1(x)$. So we finally have $$P_1(x) = Q(x) + R(x) = a_0\frac{x-x_1}{x_0-x_1} + a_1\frac{x-x_0}{x_1-x_0}.$$

(Furthermore, it's possible to prove that there is at most one polynomial that fits the chosen points, and hence this solution is unique.)

EDIT: For anyone reading this now, the author of this question had asked for a derivation of the formula for a Lagrange interpolating polynomial given two points. Now the author has edited his question to ask for a derivation of the general formula. The derivation I gave here can easily be extended to this situation.

Say we want a polynomial $P_0$ such that $P_0(x_0) = a_0$ while $P_0(x_1), P_0(x_2), \ldots , P_0(x_n)$ are roots. Knowing all these roots, we can say that $P_0(x) = c(x-x_1)(x-x_2)\ldots(x-x_n)$. Then we have $a_0 = P_0(x_0) = c(x_0-x_1)(x_0-x_2)\ldots(x_0-x_n)$. Solve for $c$ to find that $$P_0(x) = c(x-x_1)(x-x_2)\ldots(x-x_n) = a_0\frac{(x-x_1)(x-x_2)\ldots(x-x_n)}{(x_0-x_1)(x_0-x_2)\ldots(x_0-x_n)}.$$

Do the same for each of the other points -- that is, for each natural number $k$ where $0 \leq k \leq n$, find the equation for a polynomial $P_k$ such that $P_k(x_k) = a_k$ and all the other $x_0, x_1, \ldots, x_n$ points are roots. Then simply sum all these polynomials to get a final polynomial that hits each of the desired points.
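The summing procedure described above is straightforward to sketch in Python; the node and value arrays here are arbitrary sample data:

```python
from math import prod

def lagrange(x, xs, ys):
    # Sum of basis polynomials P_k: each hits (x_k, a_k)
    # and has every other node as a root. Nodes must be distinct.
    return sum(
        yk * prod((x - xj) / (xk - xj) for xj in xs if xj != xk)
        for xk, yk in zip(xs, ys)
    )

xs = [0.0, 1.0, 2.0, 4.0]  # distinct nodes (arbitrary)
ys = [1.0, 3.0, 2.0, 5.0]  # prescribed values (arbitrary)
for xk, yk in zip(xs, ys):
    assert abs(lagrange(xk, xs, ys) - yk) < 1e-12
```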

Tom Tseng
  • Sorry guys! I have changed my question a little bit. –  Aug 08 '15 at 23:55
  • The derivation I gave can be extended to work for more points. I've edited my post to add a brief explanation. Hope that helps. – Tom Tseng Aug 09 '15 at 00:20

The idea is:

  • $y(x)=a+bx=c_0(x-x_1)+c_1(x-x_0)$

for some choice of $c_0$ and $c_1$ s.t. $y(x_i)=y_i, (i=0,1)$.

  • $y(x)=a+bx+cx^2=c_0(x-x_1)(x-x_2)+c_1(x-x_0)(x-x_2)+c_2(x-x_0)(x-x_1)$ for some choice of $c_0,c_1$ and $c_2$ s.t. $y(x_i)=y_i, (i=0,1,2)$.

Now generalize this to a polynomial of degree $n$.
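In the linear case, solving for the coefficients is a one-liner; a small sketch (sample points are arbitrary, and each factor is chosen to vanish at the other node):

```python
# Linear case: y(x) = c0*(x - x1) + c1*(x - x0),
# where each term vanishes at the other node.
x0, y0 = 2.0, 5.0
x1, y1 = 6.0, 1.0

c0 = y0 / (x0 - x1)  # from y(x0) = c0*(x0 - x1)
c1 = y1 / (x1 - x0)  # from y(x1) = c1*(x1 - x0)

y = lambda x: c0 * (x - x1) + c1 * (x - x0)
assert (y(x0), y(x1)) == (y0, y1)
```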

Nitin Uniyal