
$$\text{maximize } f(\mathbf{x}) \quad\text{subject to } \mathbf{Ax} = \mathbf{b}$$

where

$$f(\mathbf{x}) = \sum_{i=1}^N\sqrt{1+\frac{x_i^4}{\left(\sum_{j=1}^{N}x_j^2\right)^2}},$$

$\mathbf{x} = [x_1,x_2,...,x_N]^T \in \mathbb{R}^{N\times 1}$ and $\mathbf{A} \in \mathbb{R}^{M\times N}$.
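
For concreteness, here is a small NumPy sketch of the objective $f$ (the dimension and test point are arbitrary choices for illustration, not part of the question):

```python
import numpy as np

def f(x):
    """f(x) = sum_i sqrt(1 + x_i^4 / (sum_j x_j^2)^2), assuming x != 0."""
    s = np.sum(x**2)                      # sum_j x_j^2
    return np.sum(np.sqrt(1.0 + x**4 / s**2))

x = np.array([1.0, -2.0, 0.5])            # arbitrary test point, N = 3
print(f(x))
```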

We can see that $f$ is convex, each term being of the form $\sqrt{1+y^2}$. It can also be shown that $f$ is bounded in $[\sqrt{2}, 2]$. I know that a convex maximization problem is NP-hard in general.

However, given the specific structure of this problem, is it possible to solve it using any standard convex optimization software/package?
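
DCP-based packages such as CVXPY will reject a convex *maximization* objective, but a general nonlinear solver can at least search for a good feasible point. Below is a hedged sketch using `scipy.optimize.minimize` (SLSQP) on $-f$ under the linear equality constraint; it only finds a local maximizer, not a certified global one, and the data `A`, `b` are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """sum_i sqrt(1 + x_i^4 / (sum_j x_j^2)^2)"""
    s = np.sum(x**2)
    return np.sum(np.sqrt(1.0 + x**4 / s**2))

# Illustrative problem data (placeholders, not from the question).
rng = np.random.default_rng(0)
M, N = 3, 8
A = rng.standard_normal((M, N))
x_feasible = rng.standard_normal(N)   # pick a feasible starting point ...
b = A @ x_feasible                    # ... and construct b from it

# Maximize f  <=>  minimize -f, subject to the linear equality A x = b.
res = minimize(lambda x: -f(x),
               x0=x_feasible,
               method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda x: A @ x - b}])
print("locally optimal value found:", -res.fun)
```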

  • There are two summations, one inside the other, that use the same "loop variable" $i$. It seems clear from context which uses of $i$ are which, but please fix for clarity. – j_random_hacker Jul 03 '18 at 09:57

1 Answer


Yes, convex maximization with equality constraints is NP-hard in general. However, there are mature techniques that can find good approximate solutions to such problems, such as coordinate descent.

Suppose you use coordinate descent and the matrix $\mathbf{A}$ has rank $k$. If you fix $N-k-1$ coordinates of a feasible solution $\mathbf{x} = (x_1, x_2, \dots, x_N)$, the remaining feasible solutions form a one-dimensional family, uniquely determined by a single coordinate, say $x_i$. In that case, you can take the derivative of $f(\cdot)$ with respect to $x_i$ along this family to find the maximum for this iteration.

We then iteratively fix a different set of $N-k-1$ coordinates and improve the solution until an approximately optimal one is found; a rough sketch of this scheme follows.
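
The sketch below is one way to realize this idea, assuming we parameterize the feasible set with a null-space basis of $\mathbf{A}$ (so every iterate stays feasible) and maximize $f$ along one feasible direction at a time, rather than literally fixing coordinates. The problem data, the fixed iteration count, and the SciPy helpers are illustrative assumptions, not part of the answer:

```python
import numpy as np
from scipy.linalg import null_space, lstsq
from scipy.optimize import minimize_scalar

def f(x):
    s = np.sum(x**2)
    return np.sum(np.sqrt(1.0 + x**4 / s**2))

# Illustrative problem data (not from the answer).
rng = np.random.default_rng(1)
M, N = 3, 8
A = rng.standard_normal((M, N))
b = A @ rng.standard_normal(N)

x = lstsq(A, b)[0]          # some feasible point (A x = b)
Z = null_space(A)           # columns span {d : A d = 0}, dimension N - rank(A)

# Cyclic 1-D maximization: all but one feasible direction are held fixed,
# and f is maximized (locally) along the remaining line x + t * Z[:, j].
for _ in range(50):
    for j in range(Z.shape[1]):
        d = Z[:, j]
        res = minimize_scalar(lambda t: -f(x + t * d))  # local 1-D search
        x = x + res.x * d

print("f at the point found:", f(x), " constraint residual:", np.linalg.norm(A @ x - b))
```

Since every search direction lies in the null space of $\mathbf{A}$, each update preserves $\mathbf{Ax} = \mathbf{b}$ exactly, which is what makes the per-iteration subproblem one-dimensional.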

  • @RodrigodeAzevedo: It's not a contradiction or surprising that LP, a special case of convex optimisation, is easier than the general case. – j_random_hacker May 07 '19 at 20:21