8

I have a simple (few variables), continuous, twice-differentiable convex function that I wish to minimize over the unit simplex. In other words, $\min f(\mathbf{x})$, $\text{s.t. } \mathbf{0} \preceq \mathbf{x}$ and $\mathbf{1}^\top \mathbf{x} = 1$. This will have to be performed multiple times.

What is a good optimization method to use? Preferably fairly simple to describe and implement?

There are some really fancy methods (such as those in "Mirror descent and nonlinear projected subgradient methods for convex optimization", Beck and Teboulle) that are tailored specifically to minimization over the unit simplex. But these methods use only the gradient, not the Hessian.

Royi
  • 8,711
dcm29
  • 81
  • If the function is convex, why wouldn't a simple line search method work? Since the objective function is twice-differentiable, I'd think Newton's method would work fine -- if only once-differentiable, gradient descent should work. Both are extremely well documented online, with pseudocode and application-specific code abounding. – jedwards Dec 06 '11 at 03:34
  • How about the "Karush–Kuhn–Tucker conditions"? http://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions – matt Dec 06 '11 at 12:35
  • Thanks for the kind reply. I was wondering if there is a specific algorithm for minimization over the unit simplex that uses the Hessian (since it is easily calculated and the problem size is small). (I am still new to convex optimization) In the end I implemented the Interior Point method and it works well. – dcm29 Dec 12 '11 at 03:05
  • 2
    This answer probably comes too late, but have you already looked into the paper "Projected Newton methods for optimization problems with simple constraints" by Bertsekas (SIAM J. Control and Optimization, Vol. 20, No. 2, March 1982)? As the title says, a projected Newton algorithm is introduced together with an Armijo-like line search. Also, superlinear convergence is proved for simple constraints. The above problem is listed as an example. –  Oct 26 '13 at 11:12
  • See the paper "Efficient Projections onto the l1-Ball for Learning in High Dimensions" — it addresses exactly this problem. – Lia Nov 22 '13 at 13:16
  • No author? No source? This is hardly a helpful answer! – amWhy Nov 22 '13 at 13:40
  • If this is all you require and the problem size is small, I would highly recommend an interior point method! E.g. you can replace the inequality constraint with a barrier term of the form $-\alpha\log(x)$ and send $\alpha \to 0$ so it becomes a hard wall. See Boyd's Convex Optimization, ch. 11 (I believe, haven't taught the class in a while) on interior point methods for a simple implementation (should be no more than 30-50 lines of Python). – Guillermo Angeris Aug 22 '17 at 15:30
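For concreteness, the barrier idea from the last comment can be sketched as follows. This is a rough Python illustration, not production code: the function and parameter names are my own, and plain gradient descent on the barrier objective stands in for the Newton steps a proper interior point method would use. The equality constraint is eliminated by substituting $x_n = 1 - \sum_{i<n} x_i$.

```python
def barrier_minimize(grad_f, n, alpha0=1.0, shrink=0.5, n_outer=20,
                     n_inner=200, step=1e-2):
    # Eliminate the equality constraint via x_n = 1 - sum(x_1..x_{n-1}),
    # and add the barrier -alpha * sum(log x_i); shrinking alpha hardens
    # the soft wall towards the true constraint x >= 0.
    # (All names here are hypothetical; grad_f returns the gradient of f.)
    z = [1.0 / n] * (n - 1)          # free variables x_1 .. x_{n-1}
    alpha = alpha0
    for _ in range(n_outer):         # one barrier stage per alpha value
        for _ in range(n_inner):     # inner gradient descent loop
            xn = 1.0 - sum(z)
            x = z + [xn]
            g = grad_f(x)
            # Chain rule: d/dz_i picks up a -1 through x_n = 1 - sum(z),
            # and the barrier contributes -alpha / x_i for each coordinate.
            gz = [g[i] - g[-1] - alpha / x[i] + alpha / xn
                  for i in range(n - 1)]
            z = [zi - step * gi for zi, gi in zip(z, gz)]
        alpha *= shrink
    return z + [1.0 - sum(z)]
```

Because the iterates never touch the boundary (the log terms blow up there), the returned point is strictly feasible; as $\alpha$ shrinks, it approaches the constrained minimizer.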

2 Answers

3

The Exponentiated Gradient Descent algorithm is another approach that is simple to implement.

The lecture notes "Exponentiated Gradient Descent" from the University of Chicago give a nice overview.
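As a quick sketch of the idea (in Python, with names and step size chosen by me for illustration): each coordinate is multiplied by the exponential of the negative scaled gradient, and the result is renormalized, so every iterate stays in the simplex by construction.

```python
import math

def exp_grad_step(x, g, step):
    # One exponentiated gradient update: a multiplicative step,
    # followed by renormalization back onto the unit simplex.
    w = [xi * math.exp(-step * gi) for xi, gi in zip(x, g)]
    s = sum(w)
    return [wi / s for wi in w]

def exp_grad_descent(grad_f, x0, step=0.5, n_iters=500):
    # grad_f returns the gradient of f at a point (a hypothetical callback).
    x = list(x0)
    for _ in range(n_iters):
        x = exp_grad_step(x, grad_f(x), step)
    return x
```

Note that no explicit projection is needed: positivity comes from the exponential and the sum-to-one constraint from the renormalization.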

Royi
  • 8,711
Milan
  • 131
1

The simplest, yet pretty fast, method (in total running time, not per-iteration time) would be the Accelerated Projected Gradient Descent method.

All you need is to calculate the gradient of the function $ f \left( x \right) $ and project each iterate onto the unit simplex.

A MATLAB code would be:

simplexRadius = 1;    %<! Radius (sum of the elements) of the simplex
stopThr       = 1e-5; %<! Stopping threshold of the projection
stepSize      = 1e-3; %<! Should be at most 1 / L, with L the Lipschitz constant of the gradient
vX = vXInit;
vY = vX;
tK = 1;

for ii = 1:numIterations

    vG     = CalcFunGrad(vY); %<! The gradient at the extrapolated point
    vXPrev = vX;
    vX     = ProjectSimplex(vY - (stepSize * vG), simplexRadius, stopThr);

    %<! Nesterov / FISTA style momentum (this is what makes the method "accelerated")
    tKPrev = tK;
    tK     = (1 + sqrt(1 + (4 * tK * tK))) / 2;
    vY     = vX + (((tKPrev - 1) / tK) * (vX - vXPrev));

end

You can get the Simplex Projection function from the link I pasted (or directly from my Ball Projection GitHub Repository).
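For reference, the standard sorting-based Euclidean projection onto the simplex can be sketched in Python like this (an illustrative translation with names of my own choosing, not the repository's implementation). It finds the threshold $\theta$ such that $\sum_i \max(v_i - \theta, 0)$ equals the simplex radius:

```python
def project_simplex(v, radius=1.0):
    # Euclidean projection onto {x : x >= 0, sum(x) = radius}.
    # Sort descending, then find the largest i such that shifting the
    # top-i entries down by their running-average excess keeps u_i positive.
    u = sorted(v, reverse=True)
    cum = 0.0
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        cum += ui
        t = (cum - radius) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]
```

The sort makes the projection $O(n \log n)$; a point already on the simplex projects to itself ($\theta = 0$).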

Royi
  • 8,711