Suppose $f: \mathbb{R}^d \to (-\infty, \infty]$ is convex and differentiable over its effective domain. We do not assume that $f$ is Lipschitz, nor that it is closed. Suppose further that its set of minimizers is nonempty.
Consider using gradient descent to solve the unconstrained minimization problem
\begin{align*} \min_{x} f(x) \end{align*}
with step sizes $t^k = \frac1k$.
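For concreteness, here is a minimal sketch of the iteration I have in mind. The example function $f(x) = \|x\|^4$ is my own choice and is not part of the setup above; it is convex and differentiable everywhere, but its gradient is not Lipschitz.

```python
import numpy as np

# Gradient descent with diminishing step sizes t^k = 1/k, as in the setup above.
def gradient_descent(grad_f, x0, num_iters=1000):
    x = np.asarray(x0, dtype=float)
    for k in range(1, num_iters + 1):
        x = x - (1.0 / k) * grad_f(x)
    return x

# Hypothetical example (my own, not from the setup): f(x) = ||x||^4,
# whose gradient is grad f(x) = 4 ||x||^2 x and is not Lipschitz on R^d.
grad_f = lambda x: 4.0 * np.dot(x, x) * x
print(gradient_descent(grad_f, x0=np.array([0.5, -0.3])))
# Note: starting farther from the minimizer (e.g. x0 = [2, -1]) makes the
# first steps overshoot badly, which is part of why the question is not trivial.
```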
Does gradient descent converge to an optimal point?
If it does not necessarily converge, are there any modifications we can make to the gradient descent algorithm to ensure convergence?
If no modification suffices, what are some minimal additional assumptions on $f$ that ensure convergence?
If it does converge, what is its iteration complexity?
Finally, does gradient descent converge when the step sizes are instead chosen by backtracking line search?
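By "backtracking line search" I mean the standard Armijo rule. A minimal sketch, again using my own hypothetical example function rather than anything from the setup:

```python
import numpy as np

def backtracking_gd(f, grad_f, x0, alpha=0.3, beta=0.5, num_iters=200):
    # At each iterate, start from t = 1 and shrink t <- beta * t until the
    # Armijo sufficient-decrease condition holds:
    #   f(x - t * g) <= f(x) - alpha * t * ||g||^2
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        g = grad_f(x)
        if np.dot(g, g) == 0.0:   # gradient vanishes: already at a minimizer
            break
        t = 1.0
        while f(x - t * g) > f(x) - alpha * t * np.dot(g, g):
            t *= beta
        x = x - t * g
    return x

# Same hypothetical example as above: f(x) = ||x||^4.
f = lambda x: np.dot(x, x) ** 2
grad_f = lambda x: 4.0 * np.dot(x, x) * x
print(backtracking_gd(f, grad_f, x0=np.array([2.0, -1.0])))
```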