
As the title says, I'm interested in rigorously justifying the process of linearization of systems of non-linear difference equations. I recently started reading a book about dynamical systems, and generally speaking the arguments given seem solid enough, but when I got to the study of stability of steady-state solutions the arguments are a bit too informal for my liking and I haven't been able to fill in the details in a way that I find satisfying.

According to the book, we start by considering a generic difference equation $x_{n+1}=f(x_{n})$. The book assumes that $f$ is at least twice differentiable in a neighborhood of a steady state, which is a reasonable requirement.

Then we consider a value of $x$ such that $x=f(x)$ (i.e. a steady state). To determine stability we start with a small deviation from $x$, say $x_{0}=x+\epsilon$ and look at the behaviour of this value under iteration. Specifically, we would like to know what happens to the difference $x_{n}-x$ as $n$ goes to infinity.

Now, we know that for all $n$, $x_{n+1}=f(x_{n})$. On the other hand we can write $x_{n}=x+\epsilon_{n}$. If we expand $f$ into its Taylor polynomial around $x$ we get $f(x_{n})=f(x)+f'(x)\epsilon_{n}+O(\epsilon_{n}^{2})$. To simplify notation let's just set $a=f'(x)$. Since $f(x)=x$, we end up with $\epsilon_{n+1}=x_{n+1}-x=a\epsilon_{n}+O(\epsilon_{n}^{2})$, which, for sufficiently small $\epsilon_{n}$, is roughly equal to $a\epsilon_{n}$.
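For concreteness, here is a small numerical check I tried (my own toy example, not from the book), using $f(x)=\tfrac{x}{2}+x^{2}$, which has the steady state $x=0$ and $a=f'(0)=\tfrac12$; the true error does seem to track $a^{n}\epsilon_{0}$ reasonably well:

```python
# Toy example (not from the book): f(x) = x/2 + x^2 has the steady state x* = 0
# with a = f'(0) = 0.5. Compare the true error eps_n with the linear prediction a^n * eps_0.
f = lambda x: 0.5 * x + x**2

x = eps0 = 0.01                    # start slightly off the steady state x* = 0
for n in range(10):
    print(n, x, 0.5**n * eps0)     # true error vs. linearized prediction
    x = f(x)
```

But of course a table of numbers is not a proof, which is exactly my problem.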

The next part of the argument is what I find particularly sketchy: regardless of the value of $n$, we neglect the quadratic term and claim that $\epsilon_{n+1}=a\epsilon_{n}$. Why can we simply do this? Even if the individual error terms are tiny, when we iterate using that last equation the errors begin to add up.

So my question is, can we even guarantee that if the starting value is sufficiently close to the steady state, all the successive values remain close to it? I would say no, because if the steady state is unstable the difference may grow bigger and bigger, to the point where we can no longer ignore the error term.

We could suppose then that all the iterates stay sufficiently close to the steady state. Then the remainder terms $O(\epsilon_{n}^{2})$ must also remain close to zero. As a result we could take the supremum and infimum of these remainders, say $M$ and $m$, define $E=\max\lbrace\vert m\vert,\vert M\vert\rbrace$, and claim that for all $n$ the following inequalities hold:

$-E+a\epsilon_{n}\leq\epsilon_{n+1}\leq E+a\epsilon_{n}$

If I could somehow drop the error $E$, then I could easily conclude that if $\vert a\vert<1$ the error approaches zero, but then again I don't know how to rigorously justify that step. On the other hand, if I keep $E$ I can only show that in the limit $\epsilon_{n}$ remains in a neighborhood of zero (assuming $\vert a\vert<1$), which is not enough.
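To spell out that last point (my own unrolling of the recursion): the two inequalities above give $\vert\epsilon_{n+1}\vert\leq\vert a\vert\vert\epsilon_{n}\vert+E$, and iterating this yields
$$\vert\epsilon_{n}\vert\;\leq\;\vert a\vert^{n}\vert\epsilon_{0}\vert+E\,\frac{1-\vert a\vert^{n}}{1-\vert a\vert}\;\xrightarrow[n\to\infty]{}\;\frac{E}{1-\vert a\vert},$$
so the best I can extract this way is confinement to a ball of radius $E/(1-\vert a\vert)$ around zero, not convergence to $0$.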

So the only thing left is to assume that $\epsilon_{n}$ converges to zero. In other words, assume that the steady state $x$ is stable, and even then I have no idea what to do to prove that the error approaches 0.

To summarize, I intuitively see why it's reasonable to assume that as long as the error term in the expansion is small the linearized system should behave in the same way as the original system, but I don't find the argument rigorous. I've seen other proofs that use the mean value theorem to prove the stability criterion, but I don't know if those generalize easily to higher dimensions, so I was hoping this other argument could be formalized.

  • It is rigorous because you can make the neighborhood as small as needed, so if $|a|<1$ then also $|a \epsilon_n|<1$. That is the reason why you need the strict inequality. – Miguel Jun 07 '20 at 07:55
  • I know why it has to be strictly less than $1$, that's not the issue. The problem is that if you start to iterate, the $O(\epsilon_{n}^{2})$ terms start to add up, regardless of how close you are to the steady state. And that's a problem even if the absolute value of $a\epsilon_{n}$ is less than $1$ for all $n$. To me it is not rigorous to take $\epsilon_{n+1}=a\epsilon_{n}+O(\epsilon_{n}^{2})$ and assume that it behaves in exactly the same way as $\epsilon_{n+1}=a\epsilon_{n}$, which is what the book does. I'm sorry if I sound harsh but I've been squeezing my brains for a while now. – Modesto Rosado Jun 07 '20 at 08:05

1 Answer


The derivative is continuous, thus if $|a|<1$, then there is a neighborhood $|x-x^*|<\delta$ of the fixed point $x^*$ with $|f'(x)|\le q<1$ where $|a|<q<1$. Then $q$ is also a Lipschitz constant and you get linear convergence to the fixed point from any point in the neighborhood, as $|f(x)-x^*|=|f(x)-f(x^*)|\le q|x-x^*|$.
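Spelled out (this is just the displayed bound applied repeatedly, no additional assumption): if $|x_0-x^*|<\delta$, then by induction every iterate stays in the neighborhood and
$$|x_n-x^*|\;\le\;q\,|x_{n-1}-x^*|\;\le\;\dots\;\le\;q^n\,|x_0-x^*|\;\xrightarrow[n\to\infty]{}\;0.$$
In particular the neglected quadratic terms never accumulate; their effect is absorbed in the margin between $|a|$ and $q$.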


For increasingly smaller neighborhoods $q$ can be chosen arbitrarily close to $|a|$. It makes sense to ask if one can make a direct connection between the geometric iteration $z_{n+1}=az_n$ and $f$. If $x=x^*+ϕ(z)$ one would thus like to have $$ f(x^*+ϕ(z_n))=f(x_n)=x_{n+1}=x^*+ϕ(z_{n+1})=x^*+ϕ(az_{n}). $$ If (!) Taylor expansions of both functions exist, their coefficients are connected as \begin{align} f(x^*+ϵ)&=x^*+aϵ+f_2ϵ^2+f_3ϵ^3+...\\ ϕ(z)&=z+ϕ_2z^2+ϕ_3z^3+...\\ f(x^*+ϕ(z))&=x^*+az(1+ϕ_2z+ϕ_3z^2+...)+f_2z^2(1+ϕ_2z+...)^2+f_3z^3(1+...)^3\\ &=x^*+ϕ(az)=x^*+az+ϕ_2a^2z^2+ϕ_3a^3z^3+... \end{align} so that in the lower degree coefficients \begin{align} aϕ_2+f_2&=ϕ_2a^2\\ aϕ_3+2f_2ϕ_2+f_3&=ϕ_3a^3\\ &... \end{align} This is solvable degree by degree for $0<|a|<1$, since then $a^k-a\ne0$ for all $k\ge2$. One would have to prove a positive radius of convergence for $ϕ$. But even with partial sums of $ϕ$ one gets increasingly linear conjugates of $f$.
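As a quick numerical sanity check of these coefficient relations (a sketch with made-up toy values $a=\tfrac12$, $f_2=1$, $f_3=0$, not taken from the answer), the defect $f(x^*+ϕ(z))-(x^*+ϕ(az))$ should shrink like $z^4$ once $ϕ_2,ϕ_3$ are chosen from the two equations above:

```python
# Toy coefficients (assumed for illustration): f(x) = a*x + f2*x^2 + f3*x^3, fixed point x* = 0.
a, f2, f3 = 0.5, 1.0, 0.0

# Solve the first two coefficient equations:
#   a*phi2 + f2             = phi2*a^2
#   a*phi3 + 2*f2*phi2 + f3 = phi3*a^3
phi2 = f2 / (a**2 - a)
phi3 = (2 * f2 * phi2 + f3) / (a**3 - a)

f   = lambda x: a * x + f2 * x**2 + f3 * x**3
phi = lambda z: z + phi2 * z**2 + phi3 * z**3

for z in (1e-2, 1e-3, 1e-4):
    print(z, f(phi(z)) - phi(a * z))   # defect should scale like z^4
```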


Another view to see the influence of the non-linear terms of $f$ directly but less organized is to consider the reciprocal $$ \frac1{ϵ_{n+1}}=\frac1{ϵ_n}(a+f_2ϵ_n+O(ϵ_n^2))^{-1}=\frac1{aϵ_n}(1-a^{-1}f_2ϵ_n+O(ϵ_n^2))=\frac1{aϵ_n}-\frac{f_2}{a^2}+O(ϵ_n) \\~\\ \frac{a^{n+2}}{ϵ_{n+1}}=\frac{a^{n+1}}{ϵ_n}-a^{n}f_2+O(a^nϵ_n)=...=\frac{a}{ϵ_0}-\frac{1-a^{n+1}}{1-a}f_2+O(a^{2n}ϵ_0) \\~\\ ϵ_{n}=\frac{(1-a)a^{n+1}ϵ_0}{a(1-a)-f_2(1-a^n)ϵ_0+O(a^{2n}ϵ_0)} $$
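(For what it's worth, a direct comparison of this closed form against the exact iteration, again with assumed toy values $a=\tfrac12$, $f_2=1$ and the $O(a^{2n}ϵ_0)$ term dropped, makes the influence of $f_2$ visible:)

```python
# Toy values (assumed): error recursion eps_{n+1} = a*eps_n + f2*eps_n^2, small initial error eps0.
a, f2, eps0 = 0.5, 1.0, 0.01

eps = eps0
for n in range(10):
    # Closed form from above, with the O(a^{2n} eps0) term dropped.
    approx = (1 - a) * a**(n + 1) * eps0 / (a * (1 - a) - f2 * (1 - a**n) * eps0)
    print(n, eps, approx)              # exact error vs. asymptotic formula
    eps = a * eps + f2 * eps**2
```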

Lutz Lehmann
  • I've seen that proof in several places but it doesn't answer the question. I want to rigorously justify (or disprove) the validity of dropping the higher order terms of the Taylor expansion in the argument I described above, not just a different way to prove the stability criterion. – Modesto Rosado Jun 07 '20 at 18:36
  • Then the key words are implicit function theorem and exponential map. Both connect the linearization back to the original non-linear situation. – Lutz Lehmann Jun 07 '20 at 18:38
  • By exponential map do you mean in the sense of the theory of Lie groups? Or just the regular exponential function? – Modesto Rosado Jun 07 '20 at 19:05
  • On second thought, what you would need is a conjugation theorem, that there exists a map $\phi(x)=x+O(x^2)$ with $f(\phi(x))=\phi(ax)$. This then directly connects the linear and non-linear iterations, $f^n(x_0)=\phi(a^n\phi^{-1}(x_0))$. – Lutz Lehmann Jun 07 '20 at 19:44
  • Is there a particular reason why I would need to evaluate the inverse of $\phi$ at $x_{0}$? Also, is it possible that you meant $f(\phi(x))=\phi(a\phi^{-1}(x))$? I'm sorry if it's a dumb question. I have never heard of or used anything like this in a proof. Is there any source that explains these concepts in detail? – Modesto Rosado Jun 08 '20 at 20:18
  • You can find an extensive discussion of this kind of asymptotic expansion in https://math.stackexchange.com/questions/3246828/asymptotic-expansion-of-u-n-1-frac12-arctanu-n in the answer of Szeto, with the associated keywords of "Schröder's equation" and "Conjugacy of iterated functions". // It is $x_n=f^n(x_0)=f^n(\phi(u_0))=\phi(a^nu_0)$ with $x_0=\phi(u_0)$. – Lutz Lehmann Jun 10 '20 at 11:03
  • And of course it has to be $\phi(x)=x+O((x-x^*)^2)$; the examples I had in mind all had $x^*=0$. – Lutz Lehmann Jun 10 '20 at 11:45
  • Thank you very much for taking the time to respond and redirecting me to that particular post. I haven't read everything thoroughly yet but it seems to be exactly the kind of argument I was looking for and the connection between the behaviour of the linearized system and the original system seems a lot clearer now. – Modesto Rosado Jun 11 '20 at 18:32