
In my textbook, under solving linear homogeneous recurrence relations, it says that the basic approach for solving them is to look for a solution of the form $a_n = r^n$, which yields the characteristic equation. Then there is a proof that the solution of a linear homogeneous recurrence relation of degree 2 with two distinct roots is $a_n = \alpha_1 r_1^n + \alpha_2 r_2^n$. After looking at the proof I understand why this is so. What I don't understand is the more intuitive idea behind it. Why in the first place do we start looking for a solution of the form $r^n$? For such an equation of degree 1, where $a_n = c a_{n-1}$, if you start to express each term through the previous ones:

$$ a_1 = c a_0, \quad a_2 = c a_1 = c^2 a_0, \quad \ldots, \quad a_n = c a_{n-1} = c^n a_0 $$

you can clearly see that the solution is of the form $a_n = \alpha r^n$. Is there a similar way of looking at a solution of an equation of higher degree? Also, I understand that when you substitute $a_n = \alpha_1 r_1^n + \alpha_2 r_2^n$ into $c_1 a_{n-1} + c_2 a_{n-2} = a_n$ and do a little math, you prove that the solution satisfies the equation, but why in the first place assume that $a_n = \alpha_1 r_1^n + \alpha_2 r_2^n$ would satisfy the equation?

2 Answers


Short answer 1: we know that exponentials satisfy linear recurrences. So we try to fit the linear recurrence to an exponential. We check the result to justify our work a posteriori.
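To make "fit and check" concrete with the degree-2 recurrence from the question, $a_n = c_1 a_{n-1} + c_2 a_{n-2}$: try $a_n = r^n$ with $r \neq 0$ and divide through by $r^{n-2}$,

$$ r^n = c_1 r^{n-1} + c_2 r^{n-2} \quad\Longrightarrow\quad r^2 = c_1 r + c_2 $$

so the guess succeeds exactly when $r$ is a root of the characteristic equation; and because the recurrence is linear and homogeneous, any linear combination $\alpha_1 r_1^n + \alpha_2 r_2^n$ of two such solutions is again a solution.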

Short answer 2: someone has worked this method out in the past, so we use it.

Medium answer: we can write a linear recurrence relation as a matrix equation

$$ v_{n+1} = A v_n $$

where $v_n$ is a vector whose components are consecutive terms of the linear recurrence. This is easy to solve:

$$ v_n = A^n v_0 $$

If $A$ is diagonalizable, we can obtain a closed form for $v_n$ by diagonalizing and computing the matrix power $A^n$, whose entries are exponentials in the eigenvalues of $A$.
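Concretely, for the degree-2 recurrence from the question, $a_{n+2} = c_1 a_{n+1} + c_2 a_n$ (a sketch of the diagonalization argument), take $v_n = \begin{pmatrix} a_{n+1} \\ a_n \end{pmatrix}$ and

$$ v_{n+1} = \begin{pmatrix} a_{n+2} \\ a_{n+1} \end{pmatrix} = \underbrace{\begin{pmatrix} c_1 & c_2 \\ 1 & 0 \end{pmatrix}}_{A} \begin{pmatrix} a_{n+1} \\ a_n \end{pmatrix} $$

The characteristic polynomial of $A$ is $\lambda^2 - c_1 \lambda - c_2$, which is exactly the characteristic equation of the recurrence, so its eigenvalues are $r_1$ and $r_2$. When these are distinct, $A = P \operatorname{diag}(r_1, r_2) P^{-1}$, hence $A^n = P \operatorname{diag}(r_1^n, r_2^n) P^{-1}$, and every entry of $v_n = A^n v_0$, in particular $a_n$, is a linear combination of $r_1^n$ and $r_2^n$.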

Long answer: there is a theory of difference equations that is quite analogous to the theory of differential equations. Given a function $f$, we define the (forward) difference $\Delta f$ to be the function $(\Delta f)(n) = f(n+1) - f(n)$. Many things in the theory of differential calculus have analogs in this difference calculus.

For example, the analog of the fundamental theorem of calculus:

$$ \sum_{i=0}^{n-1} (\Delta f)(i) = f(n) - f(0) $$

As in calculus, we learn differences of all sorts of functions. Some are nice:

$$ \Delta(a^n) = (a-1) a^n $$

and some are less nice:

$$ \Delta(x^3) = 3x^2 + 3x + 1 $$

actually, we usually use falling factorials instead of powers of $x$: we define

$$ x^{\underline{n}} = x (x-1) (x-2) \ldots (x-(n-1)) $$

and we have

$$ \Delta(x^{\underline{n}}) = n x^{\underline{n-1}} $$

A linear recurrence can be written as an ordinary difference equation.

Define the forward translation (shift) operator $(Ef)(n) = f(n+1)$; in terms of the difference operator, $E = \Delta + 1$. Consider the Fibonacci numbers as an example:

$$ f(n+2) = f(n) + f(n+1) $$

can be rewritten as

$$ E^2 f = f + E f $$

which we can solve much as we would the analogous differential equation, e.g. by rewriting it and factoring:

$$ (E^2 - E - 1)f = 0 $$ $$ (E - r_1)(E - r_2) f = 0 $$

and solve the difference equation in steps.

(we could rewrite this in terms of $\Delta$ by substituting $E = \Delta + 1$, but leaving it in terms of $E$ is fine too)
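"In steps" can be spelled out as follows (a sketch, assuming $r_1 \neq r_2$): write $g = (E - r_2) f$, so the factored equation becomes $(E - r_1) g = 0$, i.e. $g(n+1) = r_1 g(n)$, whose solutions are $g(n) = C_1 r_1^n$. Then solve the first-order equation $(E - r_2) f = g$,

$$ f(n+1) - r_2 f(n) = C_1 r_1^n $$

which has the particular solution $\dfrac{C_1}{r_1 - r_2} r_1^n$, while the homogeneous part $(E - r_2) f = 0$ contributes $C_2 r_2^n$. Altogether $f(n) = \alpha_1 r_1^n + \alpha_2 r_2^n$, the form asked about in the question.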

Of course, we don't actually do that: like with differential equations, we get introduced to a few recipes for handling common equations, and simply memorize the solution technique. Or if we didn't have it memorized, we might use the guess and check recipe, knowing that the analogous differential equation has its solution as a linear combination of exponentials. (and that exponentials in differential equations are analogous to exponentials in difference equations)

  • Hi! I had the same question as the OP, and this is the clearest answer I've been able to find anywhere on the Internet so far. I find that I still don't quite understand how we can go from seeing $(E^2 - E - 1)f = 0$ to $(E - r_1)(E - r_2)f = 0$ -- where $r_1$ and $r_2$ are the roots of the polynomial, I assume. – Eli Rose Sep 12 '15 at 19:49
  • @Eli: The distributive law still applies, because we're working with linear operators: $(E-a)(E-b)$ is the same operator as $E^2 - aE - Eb + ab$. Since $b$ is a scalar, $Eb = bE$, so we have $(E-a)(E-b) = E^2 - (a+b)E + ab$. –  Sep 12 '15 at 19:53
  • Ah, I didn't understand that $1$ was the identity operator in this case. Okay, I now understand how to pass from the first equation to the second. But I don't think I understand "solve the difference equation in steps". How do we get an explicit formula for $f(n)$ from $(E - r_1)(E - r_2)f = 0$? – Eli Rose Sep 13 '15 at 02:27
  • @Eli: You find the complete space of solutions to $(E-r_1)g = 0$, then you solve $(E - r_2)f = g$. –  Sep 13 '15 at 06:17

Suppose $a_{n+1}=ra_n-sa_{n-1}$ and that $p,q$ are the roots of $x^2-rx+s=0$ so that $r=p+q$ and $s=pq$

Then $$a_{n+1}-(p+q)a_n+pqa_{n-1}=(a_{n+1}-pa_n)-q(a_n-pa_{n-1})=0$$

**If we let $b_n=a_{n+1}-pa_n$ then $b_n=qb_{n-1}=Bq^n$

We can split the recurrence the other way (swap $p$ and $q$) to obtain $c_n=a_{n+1}-qa_n=Cp^n$

Then $$b_n-c_n=(q-p)a_n=Bq^n-Cp^n$$ so, when $p\neq q$, dividing by $q-p$ gives $a_n$ as a linear combination of $p^n$ and $q^n$.

For me the ** comment above is the one which most feeds my intuition about this problem.


Quite another way around is to notice that if $p,q$ are the roots of $x^2-rx+s=0$ then they are also roots of $Ax^{n+2}-Arx^{n+1}+Asx^n=0$ so that if we have $a_n=Ap^n+Bq^n$ we can add $$(Ap^{n+2}-Arp^{n+1}+Asp^n)+(Bq^{n+2}-Brq^{n+1}+Bsq^n)=0+0=0$$ to obtain $$a_{n+2}-ra_{n+1}+sa_n=0$$

So the solution of the recurrence comes from studying the roots of the quadratic equation

Mark Bennet