
I have to find a fixed point of $x^3-x^2-1=0$. Rearranging gives $x=\frac{1}{x^2}+1$, so I chose $g(x)=\frac{1}{x^2}+1$ and tried to find a fixed point of $g(x)$. Since I don't know the range of $x$, I chose $x\in[1.3,1.6]$ as a suitable range, because in this range $1.3 \le g(x) \le 1.6$ and $|g'(x)| = \frac{2}{x^3} \le \frac{2}{(1.3)^3} = k < 1$. Using this $k$, my hand calculation gives an upper bound of 62 iterations for the desired precision.

My question is: is my chosen range a suitable one? I wrote a Python program to find the fixed point, and it found the fixed point 1.47 in just 13 iterations. Why do I get 13 from the program but 62 from the calculation?

def fixedPoint(f, epsilon):

    guess = 1.3
    count = 0
    p = f(guess)

    # Iterate until two successive estimates agree to within epsilon
    while abs(p - guess) >= epsilon:
        guess = p
        p = f(guess)
        count += 1
        print(guess)

    return "The fixed point is " + str(round(guess, 2)) + " and it took " + str(count) + " iterations"

def f(x):

    # The iteration function g(x) = 1/x**2 + 1
    k = (1 / x**2) + 1
    return k
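The 13-versus-62 discrepancy can be reproduced with a short experiment. The question does not state the tolerance, so the value `epsilon = 1e-3` below is an assumption; with it, the iteration from 1.3 stops after exactly 13 steps, while the worst-case bound derived from $k = 2/(1.3)^3$ is around 60 steps.

```python
import math

def g(x):
    return 1 / x**2 + 1

k = 2 / 1.3**3          # pessimistic upper bound on |g'(x)| over [1.3, 1.6]

epsilon = 1e-3          # assumed tolerance (the question does not state it)
x, count = 1.3, 0
p = g(x)
while abs(p - x) >= epsilon:
    x, p = p, g(p)
    count += 1

# Worst case: the error shrinks by at most a factor k per step, so roughly
# log(epsilon / e0) / log(k) steps could be needed, where e0 is the first step
e0 = abs(g(1.3) - 1.3)
bound = math.ceil(math.log(epsilon / e0) / math.log(k))

print(count, bound)     # actual step count vs. pessimistic bound
```

The gap between the two printed numbers is exactly the phenomenon the accepted explanation below describes: $k$ is an upper bound on the per-step error ratio, not the ratio actually achieved.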
clarkson

2 Answers


We have:

$$x = g(x) = \dfrac{1}{x^2} +1$$

By the way, we could just as well choose a different $g(x)$ and some can be better than others.

If we plot this, it shows:

[Plot of $y = x$ and $y = g(x)$; the curves intersect at the fixed point near $x \approx 1.47$.]

For a starting value of $x=1.3$, the iteration converges to the fixed point:

$$x = 1.4655712319\ldots$$

As to when it works, see notes on Fixed Point Iteration; in brief, if $g$ maps an interval $[a,b]$ into itself and $|g'(x)| \le k < 1$ on that interval, then the iteration converges to the unique fixed point in $[a,b]$ for any starting value in the interval.
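Both hypotheses can be checked numerically for the question's interval $[1.3, 1.6]$ and $g(x) = \frac{1}{x^2}+1$; this is a quick sketch, sampling the interval rather than proving the bounds analytically:

```python
def g(x):
    return 1 / x**2 + 1

def g_prime(x):
    return -2 / x**3

# Sample [1.3, 1.6] and check the two hypotheses of the fixed point theorem
xs = [1.3 + i * 0.3 / 1000 for i in range(1001)]
maps_into = all(1.3 <= g(x) <= 1.6 for x in xs)   # g maps the interval into itself
k = max(abs(g_prime(x)) for x in xs)              # largest |g'|, attained at x = 1.3

print(maps_into, round(k, 4))   # True 0.9103
```

Since $g$ is decreasing, checking the endpoints ($g(1.3) \approx 1.59$, $g(1.6) \approx 1.39$) already confirms the first condition; the sampling just makes both conditions explicit.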

Amzoti

You used an upper bound on the absolute value of the derivative to find an upper bound on the number of iterations required for the desired precision.

The Mean Value Theorem says that if $x_n$ is the $n$-th estimate, and $r$ is the root, then $|x_{n+1}-r|$ is equal to $|x_n-r|$ times the absolute value of $g'(x)$ somewhere between $x_n$ and $r$.

It is true that the derivative has absolute value $\lt \frac{2}{(1.3)^3}$. But that only gives us an upper bound on the "next" error in terms of the previous error.

Furthermore, as $x_n$ gets close to $r$, say around $1.4$ or closer, the relevant derivative has significantly smaller absolute value than your $k$.

A further huge factor in this case is that the derivative is negative. That means that estimates alternate between being too small and being too big. When $x_n\gt r$, the derivative has significantly smaller absolute value than your estimate $k$.

Even at the beginning, the convergence rate is faster than the one predicted from the pessimistic estimate of the derivative, particularly since half the time $x_n\gt r$. After a while, the disparity, for $x_n\lt r$, gets greater.

Remark: You know the root $r$ to high precision. It might be informative to modify the program so that at each stage it prints out $\frac{x_{n+1}-r}{x_n-r}$. That way, you can make a comparison between the upper bound $\frac{2}{(1.3)^3}$ on the ratio, and the actual ratio. Even not very large differences, under repeated compounding, can result in much quicker convergence than the one predicted from the upper bound.
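The experiment suggested in the remark can be sketched as follows, using the root to high precision and the question's starting value; the point is to compare the observed per-step error ratio against the pessimistic bound $\frac{2}{(1.3)^3} \approx 0.9103$:

```python
def g(x):
    return 1 / x**2 + 1

r = 1.4655712319        # the root, known to high precision
k = 2 / 1.3**3          # pessimistic bound on |g'|, about 0.9103

x = 1.3
ratios = []
for n in range(13):
    x_next = g(x)
    # (x_{n+1} - r) / (x_n - r): the actual error ratio at step n
    ratio = (x_next - r) / (x - r)
    ratios.append(ratio)
    print(n, ratio)
    x = x_next
```

Every printed ratio is negative (the estimates alternate around $r$) and smaller in magnitude than $k$, which is why the compounded convergence beats the 62-step bound so decisively.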

André Nicolas
  • Sorry, I don't understand. Is the interval I have chosen correct? And what is meant by "As we get reasonably close to the root, the derivative has significantly smaller absolute value than your k."? – clarkson Nov 15 '13 at 19:10
  • There are two factors, I only mentioned one, so will add to the post. Let $r$ be the root, about $1.47$. The Mean Value Theorem says that the "next" error is equal to the current error times the derivative somewhere between $x_n$ and $r$. You are using the derivative at $1.3$ as an upper bound on the derivative "somewhere between." That is safe, but pessimistic. Furthermore, when $x_n$ is say something like $1.43$, the absolute value of the derivative is actually less than $\frac{2}{(1.43)^3}$, a fair bit better than $\frac{2}{(1.3)^3}$. – André Nicolas Nov 15 '13 at 19:22
  • Thank you for the explanation – clarkson Nov 16 '13 at 07:10
  • You are welcome. You might consider doing the calculation suggested in the remark. It will tell you exactly what is going on. It is quite frequent that error estimates, for fixed point iteration, or numerical integration, or approximation by Taylor polynomials, give "pessimistic" estimates that are some distance from the truth. Your calculation guaranteed that no more than $62$ steps would be needed, but it did not imply that one would need that many. – André Nicolas Nov 16 '13 at 07:28
  • As I get to know $r$ only at the end, how can I make it print out $\frac{x_{n+1}-r}{x_n-r}$ at each stage? – clarkson Nov 16 '13 at 07:42
  • Two answers: (1) I was thinking of it as a tool to find out what went on. You do now know $r$ to high accuracy. So you can write a program that does it, and from scanning the numbers learn how the gains in accuracy actually behaved. (2) You can compare $x_{n+2}-x_{n+1}$ and $x_{n+1}-x_n$ on the fly, without knowing $r$. This comes fairly close to measuring gains in accuracy, and is used in many real world programs. – André Nicolas Nov 16 '13 at 07:52
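Option (2) from the last comment can be sketched like this: estimate the contraction ratio on the fly from three consecutive iterates, with no knowledge of $r$. Near the root the estimate approaches $g'(r) = -2/r^3 \approx -0.635$.

```python
def g(x):
    return 1 / x**2 + 1

x_prev = 1.3
x_curr = g(x_prev)
ests = []
for n in range(12):
    x_next = g(x_curr)
    # (x_{n+2} - x_{n+1}) / (x_{n+1} - x_n) mimics the true error
    # ratio (x_{n+1} - r) / (x_n - r) without requiring r
    est = (x_next - x_curr) / (x_curr - x_prev)
    ests.append(est)
    print(n, est)
    x_prev, x_curr = x_curr, x_next
```

This difference-ratio test is the same idea used by practical stopping criteria: when successive differences shrink by a stable factor, that factor estimates $g'(r)$ and predicts how many further steps a given tolerance needs.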