
I was reading a research paper, https://arxiv.org/pdf/1707.01647.pdf, which discusses the convergence analysis of different machine learning optimisation algorithms (in the convex setting). I am not able to understand how finding an upper bound on the regret guarantees that the algorithm will converge.

R.Ahuja

1 Answer


Let $d_k = f(x^k)-f(x^*)$ for $k = 1, 2, \dots$ Then if $\sum_{k=1}^{\infty}d_k$ is bounded, the terms of the series must go to zero. That is, as $k\to\infty$ we have $f(x^k)- f(x^*)\to 0$.
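To make the link to the regret bound in the question explicit (the notation $R_N$ here is illustrative and may differ from the paper's):

$$R_N = \sum_{k=1}^{N}\bigl(f(x^k)-f(x^*)\bigr) = \sum_{k=1}^{N} d_k, \qquad d_k \ge 0,$$

where $d_k \ge 0$ because $x^*$ is a minimizer. If the regret analysis gives $R_N \le C$ for some constant $C$ and every $N$, then the partial sums of a nonnegative series are bounded above, so $\sum_{k=1}^{\infty} d_k$ converges and hence $d_k \to 0$.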

A detailed proof of the last statement can be found here: If a series converges, then the sequence of terms converges to 0.
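A quick numerical illustration of this behaviour (a minimal sketch using plain gradient descent on a toy quadratic, not one of the algorithms analysed in the paper):

```python
import numpy as np

# Toy convex problem: f(x) = x^2, minimized at x* = 0 with f(x*) = 0.
f = lambda x: x**2
grad = lambda x: 2 * x
f_star = 0.0

x = 5.0      # starting point
eta = 0.1    # constant step size
N = 200      # number of iterations

d = []       # d_k = f(x^k) - f(x^*), the per-step suboptimality
for k in range(N):
    d.append(f(x) - f_star)
    x -= eta * grad(x)   # plain gradient descent step

partial_sums = np.cumsum(d)   # R_N = sum_{k=1}^N d_k

print(f"d_1 = {d[0]:.4f}, d_{N} = {d[-1]:.2e}")              # terms shrink towards 0
print(f"partial sums stay bounded: R_{N} = {partial_sums[-1]:.4f}")
```

Here the partial sums level off at a constant while the individual terms $d_k$ vanish, which is exactly the argument above.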

iarbel84
  • But in the research paper we are only bounding the $N$th partial sum of the series, not the entire series (since the bound is for the summation from 1 to $N$). Will the property still hold? – R.Ahuja Oct 06 '20 at 10:34
  • This is exactly what I explained: if we can bound the partial sum for arbitrary $n$, we can also bound it as $n\to\infty$. Therefore we have $\lim_{n\to\infty}d_n = 0$. – iarbel84 Oct 06 '20 at 11:06