
I have $n$ points in Euclidean space $\{\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n\}$ and the desired distances to them $\{d_1, d_2, \ldots, d_n\}$. How can I find the optimal point $\mathbf{x}$ that minimizes $\sum_{i=1}^n [\lVert \mathbf{x} - \mathbf{a}_i \rVert - d_i]^2$?

What I am doing now is placing $\mathbf{x}$ somewhere and differentiating the objective. However, gradient descent very often ends up in local minima.
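For reference, the objective can be written down directly, along with a check that it vanishes at a point whose distances are exactly consistent (a minimal NumPy sketch; all names and the toy data are mine):

```python
import numpy as np

def objective(x, anchors, d):
    """Sum of squared differences between actual and desired distances."""
    r = np.linalg.norm(anchors - x, axis=1) - d
    return np.sum(r ** 2)

# Toy data: with exactly consistent distances the objective vanishes
# at the true point and is positive elsewhere.
anchors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
true_x = np.array([1.0, 1.0])
d = np.linalg.norm(anchors - true_x, axis=1)
```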

  • Assume that all the $d_i$ are arranged into a column vector $\mathbf{d}= (d_1, \ldots, d_n)^T$, and define a function $E: \mathbb{R}^n \to \mathbb{R}$ by

    $E(\mathbf{v}) = \lVert \mathbf{v}- \mathbf{d}\rVert^2$

    The minimum of $E$ is attained at $\mathbf{v} = \mathbf{d}$. You can see that $E$ is equivalent to your cost function if we interpret the $i$-th component of $\mathbf{v}$ as the distance between the point $\mathbf{a}_i$ and some point $\mathbf{x}$.

    – TheCoolDrop Jul 22 '19 at 21:36
  • Now the problem reduces to searching for a point $\mathbf{x}$ whose distances to the $\mathbf{a}_i$ are given by the $d_i$.

    From this you can see that it is actually the problem of finding the intersection point of $n$ circles centered at the $\mathbf{a}_i$ with radii given by the respective $d_i$.

    – TheCoolDrop Jul 22 '19 at 21:36
  • I can't solve the problem of the intersection of $n$ circles, thus I posted my answer as comments – TheCoolDrop Jul 22 '19 at 21:36
  • Hi TheCoolDrop, thanks for your comment. I think what you describe is equivalent to this one: https://math.stackexchange.com/questions/100448/finding-location-of-a-point-on-2d-plane-given-the-distances-to-three-other-know The problem is that there might not be a perfect solution; that's why I try to minimize the difference. – Chaoping Jul 22 '19 at 22:19

2 Answers


Say $\mathbf{x} = (x, y)$ and each $\mathbf{a}_i = (x_i, y_i)$. This means that we want to minimize $$\sum_{i=1}^{n} \left(\sqrt{(x-x_i)^2+(y-y_i)^2}-d_i\right)^2$$

Expanding this and dropping constant terms such as $d_i^2$ yields $$n(x^2+y^2)-2x\sum_{i=1}^{n}x_i -2y\sum_{i=1}^{n}y_i -2\sum_{i=1}^{n} d_i\sqrt{(x-x_i)^2+(y-y_i)^2}$$

Taking the partial derivative with respect to $x$ and setting equal to $0$ finds $$nx-\sum_{i=1}^{n}x_i - \sum_{i=1}^{n} d_i\frac{x-x_i}{\sqrt{(x-x_i)^2+(y-y_i)^2}} = 0$$

Taking the partial derivative with respect to $y$ and setting equal to $0$ is the same thing, but with $x$ and $y$ flipped. $$ny-\sum_{i=1}^{n}y_i - \sum_{i=1}^{n} d_i\frac{y-y_i}{\sqrt{(x-x_i)^2+(y-y_i)^2}} = 0$$

Both of these equalities must be satisfied at the optimal point. If the specific $\mathbf{a}_i, d_i$ were known, $x, y$ could be approximated using Mathematica or some other software/website. However, I do not think a closed form exists for the optimal point when $n \ge 3$.
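The two stationarity conditions can be rearranged into a fixed-point iteration, $\mathbf{x} \leftarrow \frac1n\sum_i \left(\mathbf{a}_i + d_i\,\frac{\mathbf{x}-\mathbf{a}_i}{\lVert \mathbf{x}-\mathbf{a}_i\rVert}\right)$, reminiscent of Weiszfeld's algorithm for the geometric median. A minimal NumPy sketch (all names and the toy data are mine; convergence to the global minimum is not guaranteed in general):

```python
import numpy as np

def fixed_point_step(x, anchors, d, eps=1e-12):
    """One step of x <- mean_i( a_i + d_i * (x - a_i)/||x - a_i|| ),
    obtained by solving the stationarity conditions for x."""
    diff = x - anchors                       # shape (n, 2)
    r = np.maximum(np.linalg.norm(diff, axis=1), eps)  # guard r_i > 0
    return np.mean(anchors + d[:, None] * diff / r[:, None], axis=0)

# Toy data with exactly consistent distances to the point (1, 1).
anchors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
true_x = np.array([1.0, 1.0])
d = np.linalg.norm(anchors - true_x, axis=1)

x = anchors.mean(axis=0)                     # start at the centroid
for _ in range(500):
    x = fixed_point_step(x, anchors, d)
```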


Using automaticallyGenerated's answer as a basis for the discussion, you need to minimize $$\Phi(x,y)=\sum_{i=1}^{n} \left(\sqrt{(x-x_i)^2+(y-y_i)^2}-d_i\right)^2$$ which is highly nonlinear and "good" starting values are required.

You can get those easily if, in a preliminary step, you consider that you have $n$ equations $$f_i=(x-x_i)^2+(y-y_i)^2-d_i^2=0$$ Now, write the $\frac {n(n-1)}2$ equations $[(i=1,2,\cdots,n-1) \quad\text{and} \quad(j=i+1,i+2,\cdots, n)]$ $$f_i-f_j=0 \iff 2(x_j-x_i)x+2(y_j-y_i)y=(d_i^2-x_i^2-y_i^2)-(d_j^2-x_j^2-y_j^2)$$ and a simple linear regression (or matrix calculations) will give you estimates for $x$ and $y$.
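This pairwise-difference step can be sketched with ordinary linear least squares (NumPy sketch; the helper name `initial_guess` and the toy data are mine):

```python
import numpy as np
from itertools import combinations

def initial_guess(anchors, d):
    """Linear least-squares estimate of (x, y) from the differences
    f_i - f_j, which eliminate the quadratic terms in x and y."""
    c = d**2 - anchors[:, 0]**2 - anchors[:, 1]**2  # d_i^2 - x_i^2 - y_i^2
    rows, rhs = [], []
    for i, j in combinations(range(len(anchors)), 2):
        rows.append(2.0 * (anchors[j] - anchors[i]))  # 2(x_j-x_i), 2(y_j-y_i)
        rhs.append(c[i] - c[j])
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol

# With exactly consistent distances the regression recovers the point.
anchors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
d = np.linalg.norm(anchors - np.array([1.0, 1.0]), axis=1)
guess = initial_guess(anchors, d)
```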

If you still want to improve these guesses, minimize in a second step $$\Psi(x,y)=\sum_{i=1}^{n} \left({(x-x_i)^2+(y-y_i)^2}-d_i^2\right)^2$$ which is better conditioned than $\Phi(x,y)$. For sure, start with the guesses obtained in the first step and obtain new estimates.
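A sketch of this second step, assuming a Gauss-Newton iteration on the residuals $g_i=(x-x_i)^2+(y-y_i)^2-d_i^2$, whose Jacobian rows are simply $2(x-x_i),\,2(y-y_i)$ (names and toy data are mine):

```python
import numpy as np

def refine_psi(x, anchors, d, steps=10):
    """Gauss-Newton on g_i = ||x - a_i||^2 - d_i^2; no square roots,
    so each step is a small linear least-squares solve."""
    for _ in range(steps):
        diff = x - anchors
        g = np.sum(diff**2, axis=1) - d**2
        step, *_ = np.linalg.lstsq(2.0 * diff, g, rcond=None)
        x = x - step
    return x

anchors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
d = np.linalg.norm(anchors - np.array([1.0, 1.0]), axis=1)
x_psi = refine_psi(anchors.mean(axis=0), anchors, d)
```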

Now, you are ready for the minimization of $\Phi(x,y)$, and this should not present any problem. It could even be done using the Newton-Raphson method for solving $$\frac{\partial \Phi(x,y) }{\partial x }=\frac{\partial \Phi(x,y) }{\partial y }=0$$ even with numerical derivatives.
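The final minimization of $\Phi$ can likewise be sketched with Gauss-Newton steps on the residuals $r_i=\sqrt{(x-x_i)^2+(y-y_i)^2}-d_i$ (a sketch under my own names and toy data, not the answer's exact procedure; it needs a reasonable starting point, as discussed above):

```python
import numpy as np

def gauss_newton_phi(x, anchors, d, steps=20):
    """Gauss-Newton on r_i = ||x - a_i|| - d_i. The Jacobian row for
    residual i is the unit vector (x - a_i)/||x - a_i||."""
    for _ in range(steps):
        diff = x - anchors
        dist = np.linalg.norm(diff, axis=1)
        J = diff / dist[:, None]
        step, *_ = np.linalg.lstsq(J, dist - d, rcond=None)
        x = x - step
    return x

anchors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
d = np.linalg.norm(anchors - np.array([1.0, 1.0]), axis=1)
x_refined = gauss_newton_phi(np.array([0.8, 1.3]), anchors, d)
```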

  • Thank you so much for your answer! Just to confirm: after performing linear regression on the $\frac{n(n-1)}{2}$ equations, should I then minimize $\Psi$ before I can finally minimize $\Phi$? How does doing so guarantee an optimal solution? Is there any literature where I can read more on this? I really appreciate your time! – Chaoping Jul 23 '19 at 06:25
  • @Chaoping. In principle, you do not need to minimize $\Psi$ except if the distances have large errors. Function $\Psi$ is much better conditioned than function $\Phi$ and much easier to minimize. So if, after the preliminary step, you have problems minimizing $\Phi$, go back and minimize $\Psi$ to get better starting values. Taking convexity into account, the minimum of $\Psi$ is unique. I have worked on this type of problem for many years and never faced difficulties, even with large errors in the distances. – Claude Leibovici Jul 23 '19 at 06:51