This question is related to Minimize variance of a linear function, but with an additional constraint.
Given a matrix $A \in \mathbb{R}^{n\times m}$ and a vector $\vec{b} \in \mathbb{R}^{n}$, let $\vec{m}(\vec{x})=A\vec{x}-\vec{b}$ for $\vec{x}\in\mathbb{R}^{m}$. I want to solve
$$ \min_{\vec{x}} \operatorname{var}[ \vec{m}(\vec{x}) ] \quad \mbox{s.t.} \quad x_i \geq 0 \quad\forall i, \tag{1} $$
where the variance can be written as
$$\operatorname {var} (\vec{m})={\frac {1}{n}}\sum _{i=1}^{n}(m_{i}-\mu )^{2}$$
with $\mu =\frac1n \sum _{i=1}^{n}m_{i}$. Right now I am using the following approach with MATLAB's lsqnonneg:
$$ \min_\vec{x} \| (A\vec{x}-\vec{b} +\delta \cdot \mathbf{1})\|_{2}^{2} \quad \mbox{s.t.} \quad x_i \geq 0 \quad\forall i$$
where I loop through different values of $\delta$ to "probe" the unknown mean of $A\vec{x}-\vec{b}$. Among all the $\delta$ values tested, I then keep the solution for which the variance of $A\vec{x}-\vec{b}$ is smallest.
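For concreteness, a minimal MATLAB sketch of this probing loop is shown below; the range and spacing of the $\delta$ grid are arbitrary placeholders, and `A` and `b` are assumed to be already defined with `b` an $n\times 1$ column vector:

```matlab
% Probe a grid of delta values and keep the solution with the smallest
% variance of A*x - b (the grid limits below are placeholders).
deltas  = linspace(-10, 10, 201);
bestVar = Inf;
bestX   = [];
n = size(A, 1);
for delta = deltas
    % Solve  min ||A*x - (b - delta*1)||_2^2  s.t.  x >= 0
    x = lsqnonneg(A, b - delta*ones(n, 1));
    v = var(A*x - b, 1);          % population variance (1/n normalization)
    if v < bestVar
        bestVar = v;
        bestX   = x;
    end
end
```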
How can I do this optimization more efficiently? Is there a way to solve equation (1) directly?