I think the following problem is convex (based on the results of some simulations), but I am not sure:
$\min_x \|e^{Ax}-b\|_2^2$ s.t. $x>0$
where $A$ is $m \times n$, $x$ is $n \times 1$, and $b$ is $m \times 1$; $A$, $x$, and $b$ are all real. The exponential of a vector means taking the exponential of each coordinate (is there a better way to write this?).
My reasoning is as follows:
$Ax$ is affine (hence convex) and $e^t$ is convex.
The composition of a convex function with a nondecreasing convex function is convex, so $e^{Ax}$ is convex.
Subtracting a constant shouldn't change convexity, so $e^{Ax}-b$ is convex.
$\|x\|_2^2$ is convex, so $\|e^{Ax}-b\|_2^2$ is convex by composition.
Restricting the variables to be positive won't change convexity.
Is this correct?
1st EDIT:
As justt indicated, my reasoning does not hold because convexity is not defined for vector-valued functions. I will try other approaches and update on my progress.
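In fact, the objective fails to be convex even in one dimension. Taking the instance $A = [1]$, $b = [2]$ (values I chose just for illustration), the objective becomes $f(x) = (e^x - 2)^2$, and a quick NumPy check shows it violates midpoint convexity:

```python
import numpy as np

# One-dimensional instance of the problem: A = [1], b = [2]
# (values chosen here just for illustration), so the objective is
# f(x) = (exp(x) - 2)^2.
def f(x):
    return (np.exp(x) - 2.0) ** 2

# Convexity would require f((u+v)/2) <= (f(u) + f(v))/2 for all u, v.
u, v = 0.0, -4.0
mid = f((u + v) / 2)           # f(-2) ≈ 3.48
chord = 0.5 * (f(u) + f(v))    # ≈ 2.46
print(mid > chord)             # True: midpoint convexity fails
```

The term $(e^t - b_i)^2$ has second derivative $4e^{2t} - 2b_i e^t$, which is negative when $b_i > 0$ and $e^t < b_i/2$, so any positive entry of $b$ creates a nonconvex region.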
2nd EDIT:
For future reference, if anyone is trying to solve a similar problem: while this problem is not convex, it can be viewed as fitting the parameters of a log-linear model ($e^{Ax}$) to data ($b$). If $e^{Ax}$ is normalized into a probability distribution, then fitting the parameters by maximum likelihood (MLE) yields a convex problem.