14

Let $\{a_n\}$ and $\{b_n\}$ be two integer sequences such that $a_1=b_1=1,$

\begin{align*} a_n=a_{n-1}+b_{n-1},\qquad\qquad b_n =r\,a_{n-1}+b_{n-1}. \end{align*}

Here $r$ is a fixed natural number greater than $1$.

Write $c_n=b_n/a_n$. How can one prove that the following limit exists, without finding it?

\begin{align*} \lim_{n\to\infty} c_n. \end{align*}
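(Just to see what is happening numerically, here is a quick sketch in Python; it simply transcribes the recurrence and is of course not a proof, and the function name is arbitrary.)

```python
from fractions import Fraction

def c_sequence(r, terms):
    """Return c_1, ..., c_terms for a_n = a_{n-1} + b_{n-1}, b_n = r*a_{n-1} + b_{n-1}."""
    a, b = 1, 1                      # a_1 = b_1 = 1
    cs = []
    for _ in range(terms):
        cs.append(Fraction(b, a))    # c_n = b_n / a_n, kept exact
        a, b = a + b, r * a + b      # advance to (a_{n+1}, b_{n+1})
    return cs

# For r = 2 the values oscillate: odd-indexed terms increase, even-indexed terms decrease.
print([float(c) for c in c_sequence(2, 10)])
```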


Attempt

I used mathematical induction to prove $\{c_{2n}\}$ is monotonically decreasing and $\{c_{2n+1}\}$ is monotonically increasing.

Then, note that

\begin{align*} c_{n+1}=\frac{b_{n+1}}{a_{n+1}}&=\frac{r\,a_n+b_n}{a_n+b_n}\\ &=r-\frac{r-1}{\frac{1}{c_n}+1}<r \end{align*}

Therefore, $0<c_n<r$.

$\vdots$

Instead of continuing with this method (it gets too complicated), are there any faster ways to prove that the limit exists?

I also tried the definition of a Cauchy sequence and the contraction mapping principle (as two separate approaches), but I don't think they work here.

Any hints or tips are appreciated:)

Tianlalu
  • 5,177

5 Answers

10

Write your formulas in a matrix form:

$$\begin{bmatrix}a_n\\b_n\end{bmatrix} = \begin{bmatrix}1&1\\r&1\end{bmatrix}\begin{bmatrix}a_{n-1}\\b_{n-1}\end{bmatrix}$$ i.e. $$w_n=Aw_{n-1}$$

Notice that if you normalize every vector after applying $A$, that is, if you consider $$v_n=\frac{Av_{n-1}}{\|Av_{n-1}\|}$$ instead of $w_n$, then it generates the same sequence $\{c_n\}$. This is because

  • normalization changes the magnitude of the given vector, not its direction
  • $A$ is a linear transformation; changing the magnitude of its input doesn't change the direction of its output
  • and finally, $c_n$ depends only on the direction of the $n$-th vector.
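As a minimal numerical sketch of this equivalence (the function name and the use of the Euclidean norm are just illustrative choices), the following runs the raw recurrence and the normalized iteration side by side; the printed ratios agree:

```python
import numpy as np

def compare_ratios(r, steps):
    """Print c_n from the raw recurrence w_n = A w_{n-1} and from the normalized (power) iteration."""
    A = np.array([[1.0, 1.0], [r, 1.0]])
    w = np.array([1.0, 1.0])              # w_1 = (a_1, b_1)
    v = w / np.linalg.norm(w)             # v_1 = normalized starting vector
    for _ in range(steps):
        w = A @ w                         # raw recurrence
        v = A @ v
        v /= np.linalg.norm(v)            # power iteration: renormalize after applying A
        print(w[1] / w[0], v[1] / v[0])   # same ratio b_n / a_n either way

compare_ratios(r=2, steps=10)
```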

This formula for $v_n$ above is exactly what power iteration does. The following citation is useful (it has been translated to match our chosen names and starting index):

If $A$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector $v_{1}$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence $\{v_n\}$ converges to an eigenvector associated with the dominant eigenvalue.

Think about the eigenvalues of $A$ (say $\lambda$) and how they depend on $r$. You will have to consider a quadratic equation in $\lambda$, whose discriminant is of first degree in $r$. By analyzing the two you will find that for $r>1$ there are always two distinct real eigenvalues (and neither of them is $0$; this will be important in a moment). Apply the appropriate Vieta formula to show that they cannot sum to $0$. They cannot be equal and they cannot be opposite, so one eigenvalue is strictly greater in magnitude for sure.
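Spelled out, the quadratic in question is the characteristic polynomial of $A$: $$\det(A-\lambda I)=(1-\lambda)^2-r=\lambda^2-2\lambda+(1-r),$$ with discriminant $4-4(1-r)=4r>0$ for $r>1$; by Vieta's formulas $\lambda_1+\lambda_2=2\neq0$ and $\lambda_1\lambda_2=1-r\neq0$, so the eigenvalues are real, distinct, nonzero, and not opposite to each other.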

If the starting non-zero vector had zero component in the direction of one of the two eigenvectors, then it would itself be (a multiple of) the other eigenvector. Applying $A$ to it would act as multiplication by the corresponding eigenvalue (which is nonzero). This means $w_1, w_2, w_3, \dots$ would all have the same direction, so $c_1, c_2, c_3, \dots$ would be equal and $\{c_n\}$ would trivially converge. In our case, however, $c_1=1$ and $c_2\neq1$. This means $v_1$ has a non-zero component in the direction of each of the two eigenvectors; in particular, in the direction associated with the dominant eigenvalue.

So the conditions are met, and $\{v_n\}$ converges (up to sign, which does not affect the ratio of the components) to some eigenvector. Its direction corresponds to the limit of $\{c_n\}$.

Q: But wait! If $\begin{bmatrix}0\\1\end{bmatrix}$ was this eigenvector associated with the dominant eigenvalue, wouldn't the limit be $\infty$ or $-\infty$?

A: I think it would, but it doesn't matter. Calculate $A\begin{bmatrix}0\\1\end{bmatrix}$ and you will see it is not an eigenvector.

  • 3
    So far this seems to be the only proof that doesn't at some point have the value of the limit in it, as it never calculates what the eigenvectors actually are. – eyeballfrog Sep 01 '17 at 02:35
  • @eyeballfrog "the only proof that doesn't at some point have the value of the limit in it": My answer proves the limit exists without giving it, too. (Note that $A_2, B_2$ are never calculated there, they are just known to exist - very much like the above doesn't actually calculate the eigenvectors.) – dxiv Sep 01 '17 at 19:57
  • @dxiv Fair point. In fact, unless I'm mistaken, your proof and this one are equivalent. – eyeballfrog Sep 01 '17 at 20:03
  • 2
    @eyeballfrog Indeed, it's the same core idea about reducing it to the case of a linear homogeneous recurrence, but (neatly) done in matrix form. – dxiv Sep 01 '17 at 20:07
6

Not sure if the following fits the requirement to avoid finding the limit, but anyway... Consider the change of variable $$x_n=c_n-\frac{r}{c_n},$$ then $$x_{n+1}=-f(c_n)\,x_n\qquad\text{with}\qquad f(c)=\frac{(r-1)c}{(c+1)(c+r)}.$$ Now, for every $c>0$, $$(c+1)(c+r)=c^2+(r+1)c+r\geqslant(r+1)c,$$ hence $$f(c)\leqslant s\qquad\text{with}\qquad s=\frac{r-1}{r+1}<1.$$ Thus $$|x_n|\leqslant s^{n-1}\cdot|x_1|,$$ which implies that $$x_n\to0,\qquad\text{hence}\qquad c_n\to\text{____}$$
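For completeness, the stated relation $x_{n+1}=-f(c_n)\,x_n$ can be checked directly, using $c_{n+1}=\dfrac{r\,a_n+b_n}{a_n+b_n}=\dfrac{r+c_n}{1+c_n}$: $$x_{n+1}=\frac{r+c_n}{1+c_n}-\frac{r(1+c_n)}{r+c_n}=\frac{(r+c_n)^2-r(1+c_n)^2}{(1+c_n)(r+c_n)}=\frac{(r-1)(r-c_n^2)}{(1+c_n)(r+c_n)}=-\frac{(r-1)\,c_n}{(c_n+1)(c_n+r)}\left(c_n-\frac{r}{c_n}\right).$$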

Did
  • 279,727
4

Again, not sure if this satisfies your criteria, but if we define $f : [0,\infty) \to \mathbb{R}$ by

$$ f(z) = \frac{r+z}{1+z} $$

then $f$ is decreasing, $0\leq f \leq r$, and $c_{n+1} = f(c_n)$. So if we let

$$ \beta = \limsup_{n\to\infty} c_n, \qquad \alpha = \liminf_{n\to\infty} c_n $$

then we have $ \beta = f(\alpha)$ and $\alpha = f(\beta)$, i.e., both $\alpha$ and $\beta$ are non-negative fixed points of $g = f\circ f$. Since this function has a unique non-negative fixed point, this proves that $\alpha = \beta$.
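Here the step $\beta=f(\alpha)$ (and, symmetrically, $\alpha=f(\beta)$) uses that $f$ is continuous and decreasing and that the $c_n$ are bounded: $$\beta=\limsup_{n\to\infty}c_{n+1}=\limsup_{n\to\infty}f(c_n)=f\Bigl(\liminf_{n\to\infty}c_n\Bigr)=f(\alpha).$$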

Sangchul Lee
  • 167,468
  • How do we know it has a unique fixed point? – eyeballfrog Sep 01 '17 at 02:26
  • @eyeballfrog, You can solve $g(x) = x$ directly. It reduces to a quadratic equation with zeros $x = \pm \sqrt{r}$. But certainly solving this equation is related to knowing the possible limit candidate, which I am not sure if OP likes. – Sangchul Lee Sep 01 '17 at 02:30
  • Yeah that was what I was thinking of as the problem. It does require evaluating the fixed point which is the limit. Not super sure why OP wants us to not do that, but still. – eyeballfrog Sep 01 '17 at 02:33
3

The first relation gives $b_{n-1}=a_n-a_{n-1}$; substituting this into the second one:

$$ a_{n+1} = 2 a_n +(r-1)a_{n-1} \tag{1} $$

The second relation gives $r\,a_{n-1}=b_n-b_{n-1}$; substituting this into the first one and simplifying:

$$ b_{n+1} = 2b_n+(r-1)b_{n-1} \tag{2} $$

Therefore $a_n$ and $b_n$ satisfy the same linear homogeneous recurrence with characteristic polynomial $t^2-2t-(r-1)$, only with different initial conditions.

Let $t_{1}=1 - \sqrt{r},\, t_{2}=1 + \sqrt{r}$ be the roots of the characteristic polynomial, with $|t_2| \gt |t_1|$.

Then $a_n=A_1t_1^n+A_2t_2^n$ and $b_n=B_1t_1^n+B_2t_2^n$ for some constants $A_i, B_j$ determined by the initial conditions. It can be easily verified that $A_2 \ne 0$, therefore $c_n = b_n / a_n \to B_2 / A_2$.
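One quick way to see that $A_2\neq0$: all $a_n$ are positive (by induction from $a_1=b_1=1$), while $t_1=1-\sqrt r<0$, so $a_n=A_1t_1^n$ could not stay positive. Then, since $|t_1/t_2|<1$, $$c_n=\frac{B_1t_1^n+B_2t_2^n}{A_1t_1^n+A_2t_2^n}=\frac{B_1(t_1/t_2)^n+B_2}{A_1(t_1/t_2)^n+A_2}\ \xrightarrow[n\to\infty]{}\ \frac{B_2}{A_2}.$$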

dxiv
  • 76,497
3

Note that $$c_{n+1}=\frac{ra_n+b_n}{a_n+b_n}=1+\frac{r-1}{\frac{a_n+b_n}{a_n}} =1+\frac{r-1}{1+c_n}.$$ Wanting to show that the map $$f(x)=1+\frac{r-1}{1+x}$$ is a contraction is a bit problematic. However, $$f(f(x))=1+\frac{r-1}{2+\frac{r-1}{1+x}}=1+\frac{(r-1)(1+x)}{2(1+x)+r-1}=\frac{(r+1)x+2r}{2x+r+1}$$ has derivative $$\frac{(r+1)(2x+r+1)-2((r+1)x+2r)}{(2x+r+1)^2}=\frac{(r-1)^2}{(2x+r+1)^2}<1$$ for non-negative $x$.
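In fact, for $x\geq0$ this bound can be made uniform, which is what being a contraction requires: $$0\le\bigl(f\circ f\bigr)'(x)=\frac{(r-1)^2}{(2x+r+1)^2}\le\left(\frac{r-1}{r+1}\right)^2<1,$$ and $f\circ f$ maps $[0,\infty)$ into itself.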

  • Thanks for the answer. $f\circ f$ is proved to be a contraction. Does it mean we should consider the even and the odd terms of $c_n$ separately? Then, how to prove two fixed points in even and odd situations are equal? – Tianlalu Sep 01 '17 at 13:53