
Consider the problem of minimizing a convex function over $\mathbb{R}^n$, \begin{align} \min_{x\in\mathbb{R}^n}f(x), \end{align} and the damped Newton method (from Nesterov's book Introductory Lectures on Convex Optimization) \begin{align} x_{t+1} = x_{t} - \frac{1}{1+\lambda(x_t)}[\nabla^2f(x_t)]^{-1}\nabla f(x_t), \end{align} where $\lambda(x) = \left(\nabla f(x)^\top[\nabla^2 f(x)]^{-1}\nabla f(x)\right)^{1/2}$ is the Newton decrement.
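For concreteness, here is a minimal numerical sketch of this iteration. The function name `damped_newton`, the tolerance, and the toy test function are illustrative choices of mine, not taken from Nesterov's book:

```python
import numpy as np

def damped_newton(grad, hess, x0, tol=1e-10, max_iter=100):
    """Damped Newton iteration with step size 1 / (1 + lambda(x)),
    where lambda(x) is the Newton decrement defined above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        step = np.linalg.solve(hess(x), g)   # [∇²f(x)]⁻¹ ∇f(x)
        lam = np.sqrt(g @ step)              # Newton decrement λ(x)
        if lam < tol:                        # (near-)stationary point reached
            break
        x = x - step / (1.0 + lam)           # damped Newton update
    return x

# Toy strongly convex test: f(x) = Σ_i (e^{x_i} + e^{-x_i}), minimized at x = 0
grad = lambda x: np.exp(x) - np.exp(-x)
hess = lambda x: np.diag(np.exp(x) + np.exp(-x))
print(damped_newton(grad, hess, [3.0, -2.0, 5.0]))  # ≈ [0, 0, 0]
```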

Claim: Let $f(x)$ be a strongly convex function with a Lipschitz continuous Hessian, $\nabla^2f(x)$. Then the damped Newton method is globally convergent.

Is my above claim true?

jonem
    I know that, if the exact line search is used, then you get global convergence (see https://arxiv.org/abs/1601.04737, Theorem 1). I personally doubt that one can prove global convergence (even for strongly convex, smooth, and Lipschitz continuous functions) if no line search is performed and the damping is prescribed explicitly as you have stated. But I could be wrong. – Vítězslav Štembera Dec 15 '21 at 20:25

0 Answers