
Suppose we have the following optimization problem

$$ \min_{0\preceq M \preceq I} y^TMy $$ where $y \in \mathbb{R}^n$ and $M \in \mathbb{R}^{n \times n}$ is a symmetric positive semidefinite matrix. Notice that the optimization variable is a matrix.

Is there an algebraic way to handle this in terms of $M$?

When we write it in standard form, we have the following:

$$ \min y^TMy $$ $$ \text{s.t.}\,\,\,\,\,\, {g_1(M)=-M \preceq 0 } $$ $$ \phantom{\text{s.t.}\,\,\,\,\,\,} {g_2(M)=M-I \preceq 0 } $$ where each constraint is a matrix inequality. If these were vector inequalities it would be doable, but what do we do when they are in matrix form?
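For concreteness, here is a small numerical sketch of the problem (assuming CVXPY with its default SDP solver; the variable names are mine):

```python
import numpy as np
import cvxpy as cp

n = 3
rng = np.random.default_rng(0)
y = rng.standard_normal(n)

# The optimization variable is a symmetric matrix M with 0 <= M <= I
# in the Loewner (semidefinite) order.
M = cp.Variable((n, n), symmetric=True)
constraints = [M >> 0, np.eye(n) - M >> 0]

# y^T M y = tr(y y^T M), which is linear in M.
objective = cp.Minimize(cp.trace(np.outer(y, y) @ M))
prob = cp.Problem(objective, constraints)
prob.solve()

print(prob.value)  # expect ~0, attained at M = 0
```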

I want to write the first-order optimality conditions using the Lagrangian in terms of the gradients of $g_1(M)$ and $g_2(M)$, but the gradients with respect to $M$ are just $-I$ and $I$. The following answer explains it using a different view.

What is the KKT condition for constraint $M \preceq I$?

I want a method that ties these two views together.

I want to know the general case, and I want to handle it directly in terms of $M$.

I would appreciate it if you could point me to a reference that addresses this issue.

1 Answer


This topic is discussed in Convex Optimization by Boyd and Vandenberghe. See section 5.9.

The key idea here is that you need an appropriate inner product associated with the conic inequality. For positive semidefiniteness constraints, the associated inner product is the trace inner product $\langle A, B \rangle=\operatorname{tr}(A^{T}B)$. The Lagrange multiplier for each conic constraint must then be a positive semidefinite matrix $\Lambda \succeq 0$ rather than a scalar. All of the theory for scalar constraints carries over in a straightforward way to this more general setting.
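To tie this back to the problem above, here is a sketch of how the KKT conditions then look, under the usual constraint qualification (the multiplier names $\Lambda_1, \Lambda_2$ are mine). Using $y^TMy = \operatorname{tr}(yy^TM) = \langle yy^T, M \rangle$, the Lagrangian is

$$ L(M,\Lambda_1,\Lambda_2) = \langle yy^{T}, M \rangle - \langle \Lambda_1, M \rangle + \langle \Lambda_2, M - I \rangle, $$

and stationarity, dual feasibility, and complementary slackness read

$$ yy^{T} - \Lambda_1 + \Lambda_2 = 0, \qquad \Lambda_1 \succeq 0, \,\, \Lambda_2 \succeq 0, \qquad \operatorname{tr}(\Lambda_1 M) = 0, \,\, \operatorname{tr}(\Lambda_2 (I-M)) = 0. $$

Since the trace of a product of two positive semidefinite matrices vanishes only when the product itself is zero, the last conditions are equivalent to $\Lambda_1 M = 0$ and $\Lambda_2 (I-M) = 0$. As a sanity check, $M = 0$, $\Lambda_1 = yy^{T}$, $\Lambda_2 = 0$ satisfies all of these, consistent with the optimal value $0$.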