
I'm looking at a typical maximization problem: $f(\Theta) = \sum_i z_i \ln \theta_i$.

It is subject to the following constraints: $\sum_i \theta_i = 1$ and $\theta_i \ge 0$.

The resources I've been reading solve it with a Lagrangian, but they introduce only one multiplier: $$L(\Theta, \lambda) = \sum_i z_i \ln \theta_i + \lambda\Bigl(1 - \sum_i \theta_i\Bigr)$$

I'm a little confused about why the constraint $\theta_i \ge 0$ is ignored. To be more precise, I'm struggling to prove that the full Lagrangian

$$L(\Theta, \lambda, B) = \sum_i z_i \ln \theta_i + \lambda\Bigl(1 - \sum_i \theta_i\Bigr) - \sum_i \beta_i \theta_i$$

has all $\beta_i$ equal to zero at the optimum.

I appreciate your help, guys.
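
For reference, here is my own sketch of the KKT argument (assuming each $z_i > 0$), which is what I'd like to verify:

```latex
% Stationarity of L(\Theta, \lambda, B) in \theta_i:
%   z_i / \theta_i - \lambda - \beta_i = 0,
% together with complementary slackness \beta_i \theta_i = 0.
% If some \beta_i > 0, then \theta_i = 0 and the objective is
% -\infty (since z_i > 0), so no maximizer can have \beta_i > 0.
% Hence \beta_i = 0 and \theta_i = z_i / \lambda; the constraint
% \sum_i \theta_i = 1 then forces \lambda = \sum_j z_j:
\[
  \theta_i^{*} = \frac{z_i}{\sum_j z_j}, \qquad
  \lambda^{*} = \sum_j z_j, \qquad
  \beta_i^{*} = 0 .
\]
```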

  • A quick guess: the log, and thus the Lagrangian, is defined only for positive $\theta_i$, so the additional constraint is in some sense moot. – Macavity Jan 29 '18 at 02:21

1 Answer


Under the extended-real-valued convention, $\ln(x)$ is defined to be $-\infty$ for $x \le 0$, so any point with some $\theta_i \le 0$ can never be a maximizer and the constraint takes care of itself.
(This is somewhat related to How to deal with extended real valued functions in optimization?)
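
To see this concretely, here is a minimal numerical sketch (my own example; the data `z = [3.0, 1.0, 6.0]` is made up, and all $z_i$ are assumed positive). It checks that the closed-form maximizer $\theta_i = z_i / \sum_j z_j$ from the single-multiplier Lagrangian beats random points on the simplex:

```python
import math
import random

# Hypothetical data (assumption: every z_i > 0).
z = [3.0, 1.0, 6.0]

def objective(theta):
    """f(theta) = sum_i z_i * ln(theta_i); -inf if any theta_i <= 0."""
    if any(t <= 0 for t in theta):
        return float("-inf")
    return sum(zi * math.log(ti) for zi, ti in zip(z, theta))

# Closed form from the single-multiplier Lagrangian:
# stationarity z_i / theta_i = lambda plus sum_i theta_i = 1
# give theta_i = z_i / sum_j z_j.
total = sum(z)
theta_star = [zi / total for zi in z]
best = objective(theta_star)

random.seed(0)
for _ in range(10_000):
    # Random point on the simplex (normalized exponentials).
    e = [random.expovariate(1.0) for _ in z]
    s = sum(e)
    candidate = [x / s for x in e]
    assert objective(candidate) <= best

print(theta_star)  # -> [0.3, 0.1, 0.6]
```

Every $\theta_i^*$ comes out strictly positive, so the constraints $\theta_i \ge 0$ are inactive and complementary slackness forces $\beta_i = 0$.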

– max_zorn