
In matrix form, Newton's method for a system of non-linear equations is: $$\begin{bmatrix} \frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} & \cdots & \frac{\partial f_1}{\partial z} \\ \frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} & \cdots & \frac{\partial f_2}{\partial z} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_n}{\partial x} & \frac{\partial f_n}{\partial y} & \cdots & \frac{\partial f_n}{\partial z}\end{bmatrix} \begin{bmatrix} x_{i + 1} - x_i \\ y_{i + 1} - y_i \\ \vdots \\ z_{i + 1} - z_i \end{bmatrix} = -\begin{bmatrix} f_1(x_i,y_i,\ldots,z_i)\\f_2(x_i,y_i,\ldots,z_i)\\ \vdots \\f_n(x_i,y_i,\ldots,z_i)\end{bmatrix}$$ where the system is: $$f_1(x,y,\ldots,z)=0 \\ f_2(x,y,\ldots,z)=0 \\ \vdots \\ f_n(x,y,\ldots,z)=0$$ My book states that for this method to have quadratic convergence, the Jacobian $$\begin{bmatrix} \frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} & \cdots & \frac{\partial f_1}{\partial z} \\ \frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} & \cdots & \frac{\partial f_2}{\partial z} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_n}{\partial x} & \frac{\partial f_n}{\partial y} & \cdots & \frac{\partial f_n}{\partial z}\end{bmatrix} = J(x_i , y_i, \ldots, z_i)$$ must satisfy $\det[J(x_i , y_i, \ldots, z_i)] \neq 0$. But I didn't understand the reason.

metamorphy

1 Answer


Let's look at a 1-dimensional example. Let $f(x) = x^3 - 3x + 3$, and suppose we start at some $x_0$ such that, lo and behold, it happens that $x_n = 1$ for some $n$. (There are actually 3 values of $x_0$ which would give $x_1 = 1$.) What is the next step?

$$\begin{align}x_{n+1} &= x_n - \frac{f(x_n)}{f'(x_n)}\\&=1 - \frac{1^3-3\cdot 1+3}{3\cdot 1^2 - 3}\\&= 1-\frac 10\end{align}$$

Well, shucks. What went wrong? The obvious answer is that I hit a place where $f'$ was $0$, so suddenly I couldn't proceed. The problem is actually worse. Obviously, if I started at some random spot, the likelihood of hitting exactly $1$ is negligible. But what happens if I just come close to $1$? Say $x_n = 1.001$? Then (to 3 decimal places) $$x_{n+1} \approx 1.001 - \frac {1.000}{0.006} \approx -165.666$$
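You can check this blow-up numerically. Here is a minimal sketch using the function from the example above (the function names `f`, `fprime`, and `newton_step` are my own):

```python
def f(x):
    return x**3 - 3*x + 3

def fprime(x):
    return 3*x**2 - 3  # vanishes at x = 1 and x = -1

def newton_step(x):
    # One Newton iteration: x - f(x)/f'(x)
    return x - f(x) / fprime(x)

# Starting very close to the point where f' = 0:
print(newton_step(1.001))  # jumps far away from 1
```

Starting at $1.001$, a single step lands near $-165$, confirming the hand computation.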

My iteration, which (presumably) had been increasing quadratically in accuracy, has suddenly gone massively wrong. I can still proceed with the iteration, but it will take a lot of effort to get back on track. All this happens because $f' = 0$ at a point in my search region. I don't even have to hit the place where $f' = 0$ for it to mess me up. If I come anywhere near it, my nice, neat quadratic convergence is ruined.

The same thing happens in higher dimensions. When $\det[J] = 0$, you can't even solve for the next iteration, because $J$ is not invertible. And even when $\det[J]$ is only small at the current iteration, not $0$, it still causes huge jumps in the next iteration - jumps well away from the zero you were after.

Paul Sinclair