Context
Consider the unconstrained optimization problem for the one-dimensional function $f : \mathbb{R} \to \mathbb{R}$: \begin{align} \operatorname{minimize}\ f(x) = x^4-1 \end{align} with \begin{align} f'(x) &= 4x^3 \\ f''(x) &= 12x^2 \end{align} This function is convex (though not strongly convex), with unique minimizer $x^* = 0$ and optimal cost $f(x^*) = -1$. To solve this optimization problem, I apply pure Newton's method (stepsize 1): \begin{align} x_{k+1} &= x_k + \alpha_k d_k \\ \alpha_k &= 1 \\ d_k &= -\nabla_{xx}^2f(x_k)^{-1} \nabla_x f(x_k) \end{align} In this case: \begin{align} x_{k+1} &= x_k - \frac{4x_k^3}{12x_k^2} \\ &= x_k - \frac{1}{3}x_k = \frac{2}{3}x_k \end{align}
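To make the iteration concrete, here is a minimal Python sketch of this pure Newton step (the names `f`, `grad`, `hess`, and `newton_step` are just illustrative choices, not from any library):

```python
def f(x):
    return x**4 - 1

def grad(x):
    return 4 * x**3       # f'(x)

def hess(x):
    return 12 * x**2      # f''(x)

def newton_step(x):
    # Pure Newton step with unit stepsize: x - f'(x) / f''(x), which here equals (2/3) x
    return x - grad(x) / hess(x)
```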
Since it is an unconstrained optimization problem, every $x$ is feasible. As an illustration, I provide three iterations starting with $x_0 = 4$ : \begin{align} x_1 &= 4 - \frac{4}{3} = \frac{8}{3} \\ x_2 &= \frac{8}{3}-\frac{8}{9} = \frac{16}{9} \\ x_3 &= \frac{16}{9}-\frac{16}{27} = \frac{32}{27} \end{align}
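Running the same update numerically reproduces these iterates (a quick self-contained check, nothing more than a sketch):

```python
x = 4.0
for k in range(1, 4):
    x = x - (4 * x**3) / (12 * x**2)   # pure Newton step, equal to (2/3) * x
    print(k, x)
# prints approximately 2.6667 (= 8/3), 1.7778 (= 16/9), 1.1852 (= 32/27)
```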
Question
The rate of convergence to the minimizer $x^* = 0$ is linear: \begin{align} \frac{x_{k+1}-x^*}{x_k-x^*} = \frac{x_k-\frac{1}{3}x_k}{x_k} = \frac{2}{3} \end{align} Contrary to what would be expected for Newton's method, the convergence rate is merely linear, not quadratic.
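Numerically, the error ratio indeed stays pinned at $2/3$ instead of shrinking toward zero, which is the signature of linear rather than quadratic convergence (another small sketch):

```python
x_star = 0.0
x = 4.0
for k in range(5):
    x_next = x - (4 * x**3) / (12 * x**2)        # pure Newton step
    print(k, (x_next - x_star) / (x - x_star))   # prints ~0.6666666666666666 every iteration
    x = x_next
```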
I suspect that the reason for this suboptimal rate of convergence is that the objective function is not strongly convex (the Hessian vanishes at the minimizer: $\nabla_{xx}^2 f(0) = 0$), but I would like an explanation of what precisely is going on.