
Given the optimization problem: $$\textrm {min } \:\textrm {max} \{f_1, \dots, f_N\}$$$$\textrm{s.t. } \: h(x) = 0$$ with $f_1, \dots, f_N : \Bbb R^n \to \Bbb R$ and $h: \Bbb R^n \to \Bbb R^p$ continuously differentiable. Let $\bar x$ be a local minimum for this problem and let the vectors $\nabla h_j(\bar x)$ be linearly independent.

I have to show that there exist $\bar \mu \in \Bbb R^N, \bar \lambda \in \Bbb R^p$, such that $$\sum_{1\le i \le N}{\bar \mu _i \nabla f_i (\bar x)} + \sum_{1\le j \le p}{\bar \lambda _j \nabla h_j (\bar x)} = 0$$ $$\bar \mu \ge 0, \: \sum_{1\le i\le N}{\bar \mu _i}=1, \:\: \bar \mu_i \gt 0 \Rightarrow f_i(\bar x) = \textrm{max}\{f_1(\bar x), \dots, f_N(\bar x)\} $$

I think that the KKT conditions (see the top of my earlier post, Questions about constraints and KKT conditions) would give something quite fitting if we use the constraints $g_i(x) = f_i(x) - \textrm{max}\{f_1(\bar x), \dots, f_N(\bar x)\}$, but I don't know what objective function $f$ to minimize when setting up the Lagrangian, nor how to obtain $\sum_{1\le i\le N}{\bar \mu _i}=1$.

Maybe someone has an idea or a tip on how to solve this. Thanks in advance!

  • A simple observation may be useful: $$\max_i f_i=\max_{\mu\in\Delta}\sum_{i=1}^N\mu_i f_i$$ where $\Delta$ is the standard probability simplex $\{\sum\mu_i=1,\ \mu_i\ge 0\}$. – A.Γ. Jan 21 '18 at 10:07

1 Answer


Hint: Try to find a relation between a local optimum of your problem and the following one:

$$\textrm {min } t $$$$\textrm{s.t. } \: h(x) = 0$$ $$ f_{i}(x) \leq t \quad i=1,2,...,N$$
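For completeness, here is one way the hint can be carried out (a sketch filling in the intermediate steps; the multiplier names are my own notation):

If $\bar x$ is a local minimum of the original problem, then $(\bar x, \bar t)$ with $\bar t = \max\{f_1(\bar x), \dots, f_N(\bar x)\}$ is a local minimum of the reformulated problem. A constraint qualification holds at $(\bar x, \bar t)$: the equality gradients $(\nabla h_j(\bar x), 0)$ are linearly independent, and the direction $d = (0,1)$ in the $(x,t)$-space satisfies $(\nabla h_j(\bar x), 0)^T d = 0$ and $(\nabla f_i(\bar x), -1)^T d = -1 < 0$ for every active inequality, so the Mangasarian–Fromovitz condition is met and KKT multipliers $\bar\mu \ge 0$, $\bar\lambda$ exist for the Lagrangian

$$L(x, t, \mu, \lambda) = t + \sum_{i=1}^N \mu_i \,(f_i(x) - t) + \sum_{j=1}^p \lambda_j h_j(x).$$

Stationarity in $x$ gives $\sum_{i} \bar\mu_i \nabla f_i(\bar x) + \sum_{j} \bar\lambda_j \nabla h_j(\bar x) = 0$; stationarity in $t$ gives $1 - \sum_{i} \bar\mu_i = 0$, i.e. $\sum_{i} \bar\mu_i = 1$; and complementary slackness $\bar\mu_i (f_i(\bar x) - \bar t) = 0$ gives $\bar\mu_i > 0 \Rightarrow f_i(\bar x) = \bar t = \max\{f_1(\bar x), \dots, f_N(\bar x)\}$, which is exactly the claimed set of conditions.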

Red shoes