
The McGraw Hill Precalculus textbook gives several good examples of solving partial-fraction decompositions, and it justifies all but one step with established mathematical properties.

In the 4th step of Example 1, having reached

$$1x + 13 = (A+B)x+(4A-5B)$$

they say to "equate the coefficients", writing the linear system

$$A+B = 1$$

$$4A-5B=13$$

It is a simple step, color-coded in the textbook for easy understanding, but McGraw Hill does not justify it with any mathematical property, postulate, or theorem. The addition and multiplication properties of equality don't seem to apply directly.

Can someone help me justify this step?!
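For reference, solving the system gives $A=2$, $B=-1$. Here is a quick Python check with exact arithmetic; the original fraction is not quoted above, so the reconstruction $(x+13)/\big((x-5)(x+4)\big)$ in the sanity check is an assumption inferred from the coefficients:

```python
from fractions import Fraction

# Solve the 2x2 system from the textbook step:
#   A + B   = 1
#   4A - 5B = 13
# Eliminate A: 4*(first) minus (second) gives 9B = 4 - 13 = -9.
B = Fraction(4 - 13, 9)          # B = -1
A = 1 - B                        # A = 2
assert A + B == 1 and 4*A - 5*B == 13

# Sanity check (the original fraction is assumed, from the coefficient
# match, to be (x + 13) / ((x - 5)(x + 4))):
for x in map(Fraction, [2, 7, -3]):
    assert A/(x - 5) + B/(x + 4) == (x + 13)/((x - 5)*(x + 4))
print(A, B)  # 2 -1
```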

  • They're using the fact that two polynomials are equal when their coefficients are equal. Hence the system. – rubik Feb 03 '17 at 21:51
  • Also, take a look at a proof of the complex partial fraction decomposition (then adapt it to the less easy real case) – reuns Feb 03 '17 at 22:13

5 Answers


Lemma: If $px+q=0$ for all values of $x$, then $p=q=0$.

Proof: In particular, taking $x=0$ gives $p\cdot 0+q=0$, which means that $q=0$. So $px=0$ for all $x$; taking $x=1$ gives $p\cdot 1=0$, and so $p=0$.

Theorem: If $px+q$ and $rx+s$ are equal for all values of $x$, then $p=r$ and $q=s$.

Proof: If $px+q$ and $rx+s$ are equal for all values of $x$, then $$ px+q-(rx+s)=0 $$ for all values of $x$. But this expression can be rewritten as $$ (p-r)x+(q-s) $$ and so, by the lemma, $p-r=0$ and $q-s=0$. That is, $p=r$ and $q=s$.
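As a quick illustration of the theorem: a linear function's values at $x=0$ and $x=1$ already pin down its coefficients, so two linear functions that agree everywhere must share them. A minimal Python sketch (function names are illustrative):

```python
# Micah's theorem in miniature: q = f(0) and p = f(1) - f(0), so the
# values at x = 0 and x = 1 determine the coefficients uniquely.
def coeffs_of_linear(f):
    """Recover (p, q) of f(x) = p*x + q from its values at 0 and 1."""
    q = f(0)                     # the lemma's x = 0 step
    p = f(1) - f(0)              # the lemma's x = 1 step
    return p, q

f = lambda x: 3*x + 7
g = lambda x: 7 + x + 2*x        # the same function, written differently
assert coeffs_of_linear(f) == coeffs_of_linear(g) == (3, 7)
```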

Micah

Presumably it is stated that this equation must hold for all $x$. That means that in particular it must hold for $x=0$. If we set $x=0$ in your equation, we find the equation

$13=4A-5B$

which is the second equation you wrote. Let us now look at $x=1$, for which we find

$1+13=(A+B)+(4A-5B)$

We already know that $4A-5B=13$ from our previous step, which we can thus put into the equation, leading to $1+13=(A+B)+13$. Subtracting $13$ from either side leaves us with

$1=A+B$

which is your first equation.

Note that this can be done more generally, i.e. even if the equation only has to hold for a few values of $x$, not necessarily including $0$ or $1$. If you want a proof that does not use $x=0$ and $x=1$, let me know.
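The substitutions above can be replayed mechanically. A small Python sketch of the $x=0$ and $x=1$ steps, using exact arithmetic (names are illustrative):

```python
from fractions import Fraction

def lhs(x):                       # the left side of the identity, 1x + 13
    return x + 13

# Substituting x = 0 kills the x-term: 4A - 5B = lhs(0).
eq2 = lhs(0)                      # 4A - 5B = 13
# Substituting x = 1: (A + B) + (4A - 5B) = lhs(1); subtract eq2.
eq1 = lhs(1) - eq2                # A + B = 1
assert (eq1, eq2) == (1, 13)

# Solving the two equations gives A = 2, B = -1, and the identity
# then holds at every x, not just at 0 and 1:
A, B = 2, -1
for x in map(Fraction, [-7, Fraction(1, 3), 100]):
    assert (A + B)*x + (4*A - 5*B) == lhs(x)
```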

Shinja

The general principle is: two polynomials are equal at every point if and only if their coefficients are equal. "If their coefficients are equal then the polynomials are equal" is clear.

Proving the reverse is not so easy in general. It follows from a stronger result in linear algebra: the Vandermonde matrix of $d+1$ distinct real numbers is invertible, so there is a unique polynomial of degree at most $d$ passing through any $d+1$ points with distinct $x$-coordinates. This may not be accessible at your level, but it is probably the best way to see it overall.

Another way to see it, though making this rigorous requires some calculus, is to note that if two polynomials are equal at each point, then their constant terms must be the same. Subtracting off the constant term from each and dividing by $x$, you have two polynomials that now again have to be equal at each point. So you plug in $x=0$, which gives agreement of the linear coefficients of the original polynomials. Doing this a total of $d+1$ times gives the desired result.

Where the lack of rigor comes in is in saying that $x/x=1$ even when $x=0$, which is not properly true. What we are really doing here is noticing that if two differentiable functions are equal everywhere then their derivatives are equal everywhere, and that if $p(x)=\sum_{k=0}^n a_k x^k$ then $a_k=\frac{p^{(k)}(0)}{k!}$, where $p^{(k)}$ denotes the $k$th derivative of $p$.
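The Vandermonde route can be sketched concretely: sample an unknown polynomial of degree at most $d$ at $d+1$ distinct points, then solve the resulting (invertible) linear system for the coefficients. A minimal Python sketch using exact rational arithmetic (function names are illustrative):

```python
from fractions import Fraction

def coeffs_from_values(f, d):
    """Recover coefficients a_0..a_d of a degree-<=d polynomial f from its
    values at the d+1 points x = 0, 1, ..., d (a Vandermonde system)."""
    xs = [Fraction(i) for i in range(d + 1)]
    # Augmented Vandermonde matrix: rows [x^0, x^1, ..., x^d | f(x)].
    M = [[x**j for j in range(d + 1)] + [f(x)] for x in xs]
    n = d + 1
    # Gauss-Jordan elimination; the Vandermonde matrix of distinct
    # points is invertible, so a nonzero pivot always exists.
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# A "black-box" linear function: 4x - 5(x - 1) - 5 simplifies to -x,
# and indeed the recovered coefficient list is [a_0, a_1] = [0, -1].
p = lambda x: 4*x - 5*(x - 1) - 5
assert coeffs_from_values(p, 1) == [0, -1]
```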

Ian
  • I think it can be a little more simple than this. The converse is equivalent to proving: if a polynomial $p(x)$ takes the value $0$ for every $x$, then all the coefficients of $p(x)$ are $0$. This follows, for example, from knowing the limits of polynomials as $x\to\infty$. – Greg Martin Feb 03 '17 at 22:04

The property one uses is that two polynomials

$$ p(x) = a_n x^n + \dots + a_1 x + a_0, \\ q(x) = b_m x^m + \dots + b_1 x + b_0 $$

are equal to each other if and only if $a_k = b_k$ for all $k \in \mathbb{N}_0$ (where coefficients beyond the degree are taken to be $0$). In other words, two polynomials are equal if and only if their coefficients are equal (and in particular, they must be of the same degree).

How one would justify this actually depends on how you think of polynomials. From an algebraic point of view, a polynomial is a formal expression of the form $p(x) = a_n x^n + \dots + a_1 x + a_0$ and two polynomials are defined to be equal if their coefficients are equal.

However, an alternative approach would be to define a polynomial as a function $p \colon \mathbb{R} \rightarrow \mathbb{R}$ for which there exist real numbers $a_0,\dots,a_n$ such that $p(x) = a_nx^n + \dots + a_1 x + a_0$ for all $x \in \mathbb{R}$. From this point of view, it is not clear that if you have two polynomial functions $p(x),q(x)$ such that $p(x) = q(x)$ for all $x \in \mathbb{R}$ then their coefficients must be equal (and in fact this is not true if one replaces $\mathbb{R}$ with an arbitrary field or ring).

To prove this, one can give an algebraic argument using the Vandermonde determinant, but for real polynomials it is easier to note that the $k$-th coefficient of $p(x) = a_n x^n + \dots + a_1 x + a_0$ is given by $a_k = \frac{p^{(k)}(0)}{k!}$, where $p^{(k)}(0)$ is the $k$-th derivative of $p$ at $x = 0$. Then if two polynomials are equal as functions, it is clear that all their derivatives of all orders must be equal, and so the coefficients must be equal.
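The parenthetical caveat about arbitrary fields is easy to witness concretely: over the two-element field $\mathbb{F}_2$, the polynomial $x^2+x$ vanishes at every point yet has nonzero coefficients, so functional equality does not imply coefficient equality there. A minimal Python check:

```python
# Over GF(2) = {0, 1}, the polynomial x^2 + x is zero at every point,
# yet its coefficient list (0, 1, 1) is not the zero list.
p = 2  # work modulo 2, i.e. in GF(2)
values = [(x*x + x) % p for x in range(p)]
assert values == [0, 0]            # the zero *function* on GF(2)
coeffs = (0, 1, 1)                 # constant, linear, quadratic -- not all 0
assert any(c != 0 for c in coeffs)
```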

levap
  1. In the equations $$\tan(2x)\equiv\frac{2\tan (x)}{1-\tan^2(x)}\\\tan(2x)=\tan(x),$$ the symbol $\equiv$ means that the first equation is an identity, that is, its two sides are identically equal (equal whenever both are defined), rather than merely conditionally equal (equal for only some values of its variables), as in the second.

  2. Let $n\in\mathbb N.$ In general, equating coefficients relies on $$\color{red}{C_1}v_1+\color{red}{C_2}v_2+\ldots+\color{red}{C_n}v_n\equiv0\iff \color{red}{C_1}=\color{red}{C_2}=\dots=\color{red}{C_n}=0,$$ which is precisely what it means for $v_1,v_2,\ldots,v_n$ to be linearly independent.

    How can we be sure that we can equate coefficients?

  3. Consider the following partial-fraction decomposition. \begin{align}&\frac{-8x-8}{x^2+2x}\equiv\frac{A}{x}+\frac{B}{x+2}\\\iff{}&\forall x{\in}\mathbb R{\setminus}\{-2,0\}\quad -8x-8=A(x+2)+Bx\\\iff{}&\forall x{\in}\mathbb R{\setminus}\{-2,0\}\quad (A+B+8)x+(2A+8)=0\\\implies{}&(A+B+8)x+(2A+8) \;\text{ has more than one root}\\\implies{}&(A+B+8)x+(2A+8) \;\text{ cannot be a degree-$1$ polynomial}\\\implies{}&(A+B+8)x+(2A+8) \;\text{ must be the zero polynomial};\end{align} hence, in fact, \begin{align}&\frac{-8x-8}{x^2+2x}\equiv\frac{A}{x}+\frac{B}{x+2}\\\iff{}&(A+B+8)x+(2A+8)\equiv0\tag1\\\iff{}&(A+B+8)x+(2A+8) \;\text{ is the zero polynomial};\end{align} that last line means that all the coefficients are $0,$ that is, we can equate coefficients in $(1).$

  4. Let $n\in\mathbb N.$ By similar reasoning, the monomials $x^0,x^1,x^2,\ldots,x^n$ are linearly independent: $$\color{red}{C_n}x^n+\color{red}{C_{n-1}}x^{n-1}+\ldots+\color{red}{C_0}\equiv0\iff \color{red}{C_n}=\color{red}{C_{n-1}}=\cdots=\color{red}{C_0}=0.\tag2$$

    This justifies $$1x + 13 \equiv (A+B)x+(4A-5B)\quad\iff\quad (A+B = 1)\;\text{and}\;(4A-5B=13),$$ that is,

    $$1x + 13 = (A+B)x+(4A-5B)$$ $$A+B = 1\\4A-5B=13.$$

  5. Alternatively, the coefficients in the identity in $(2)$ (or in $(1)$) can be determined simply by substituting as many values of $x$ as necessary, then solving the resulting system of equations.
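Point 5 can be sketched in a few lines: substitute two convenient values of $x$ into the cleared equation from point 3 to read off $A$ and $B$ (substituting the excluded values $0$ and $-2$ is legitimate precisely because, by point 3, the cleared equation is a polynomial identity), then verify the decomposition elsewhere. A Python sketch with exact arithmetic (names are illustrative):

```python
from fractions import Fraction

# Determine A and B in  (-8x - 8)/(x^2 + 2x) = A/x + B/(x + 2)
# from the cleared form  -8x - 8 = A(x + 2) + B*x.
lhs = lambda x: -8*x - 8
A = lhs(Fraction(0)) / 2           # x = 0  kills the B term: -8 = 2A
B = lhs(Fraction(-2)) / -2         # x = -2 kills the A term:  8 = -2B
assert (A, B) == (-4, -4)

# The decomposition now holds wherever both sides are defined:
for x in map(Fraction, [1, 3, Fraction(-1, 2)]):
    assert lhs(x) / (x*x + 2*x) == A/x + B/(x + 2)
```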

ryang