Consider the problem of minimizing a convex function over $\mathbb{R}^n$:
\begin{align}
\min_{x\in\mathbb{R}^n} f(x).
\end{align}
Consider the damped Newton method (from Nesterov's book *Introductory Lectures on Convex Optimization*):
\begin{align}
x_{t+1} = x_{t} - \frac{1}{1+\lambda(x_t)}[\nabla^2 f(x_t)]^{-1}\nabla f(x_t),
\end{align}
where $\lambda(x) = \left(\nabla f(x)^\top[\nabla^2 f(x)]^{-1}\nabla f(x)\right)^{1/2}$ is the Newton decrement.
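To make the iteration concrete, here is a minimal NumPy sketch of the damped Newton step on a hypothetical test problem; the test function (a positive definite quadratic plus softplus terms), the matrix `A`, and the stopping rule based on the Newton decrement are my own illustrative choices, not from the book:

```python
import numpy as np

def damped_newton(grad, hess, x0, tol=1e-10, max_iter=100):
    """Damped Newton method with step size 1/(1 + lambda(x)),
    where lambda(x) is the Newton decrement."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        H = hess(x)
        d = np.linalg.solve(H, g)        # Newton direction: solve H d = g
        lam = np.sqrt(g @ d)             # Newton decrement lambda(x)
        if lam < tol:                    # illustrative stopping rule
            break
        x = x - d / (1.0 + lam)          # damped Newton step
    return x

# Hypothetical test problem: f(x) = 0.5 x^T A x + sum_i log(1 + exp(x_i)),
# strongly convex (A positive definite) with a Lipschitz continuous Hessian.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # symmetric positive definite

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
grad = lambda x: A @ x + sigmoid(x)
hess = lambda x: A + np.diag(sigmoid(x) * (1.0 - sigmoid(x)))

x_star = damped_newton(grad, hess, x0=10.0 * np.ones(n))
print(np.linalg.norm(grad(x_star)))      # ~0 at the minimizer
```

On this toy example the gradient norm at the returned point is numerically zero even from a far-away starting point, which is consistent with (though of course does not prove) the claim below.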
Claim: Let $f$ be a strongly convex function with a Lipschitz continuous Hessian $\nabla^2 f(x)$. Then the damped Newton method is globally convergent, i.e., the iterates converge to the (unique) minimizer of $f$ from any starting point $x_0$.
Is the above claim true?