
As far as I know, the most widely used approach to computing the zeros of a function of one complex variable is based on identifying points where the first derivative changes sign, using numerical methods. I have developed a more direct and theoretically more appealing approach based on nonlinear constrained optimization.

Let Z(x + iy) be the function of interest. The problem of finding its zeros can then be formulated as:

Problem (I): Minimize $|Z| = \operatorname{Modulus}(Z)$

Subject to: (1) $\operatorname{Re}(Z) = 0$; (2) $\operatorname{Im}(Z) = 0$; (3) $a < x < b$, with $b > a$; (4) $c < y < d$, with $d > c$.

Constraints (1) and (2) may seem redundant, but they are there to ensure that the minimum achieved is exactly zero. Constraints (3) and (4) restrict the search to zeros in the region they define.

Problem (I) can be solved numerically as long as Z and its first, second, and third derivatives can be computed numerically, since several powerful nonlinear constrained optimization algorithms are available.
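As an illustration, here is a minimal sketch of Problem (I) in Python using SciPy's SLSQP solver (my choice of solver and of toy function; neither is from the question). The stand-in $Z(z) = z^2 + 1$ has zeros at $\pm i$, and the box constraints restrict the search to the zero at $+i$:

```python
from scipy.optimize import minimize

def Z(x, y):
    # Toy stand-in for the function of interest: Z(z) = z^2 + 1, zeros at +/- i
    z = complex(x, y)
    return z * z + 1

def objective(p):
    # Minimize |Z|^2: the squared modulus is smooth at a zero, unlike |Z|
    return abs(Z(p[0], p[1])) ** 2

constraints = [
    {"type": "eq", "fun": lambda p: Z(p[0], p[1]).real},  # (1) Re(Z) = 0
    {"type": "eq", "fun": lambda p: Z(p[0], p[1]).imag},  # (2) Im(Z) = 0
]
bounds = [(-0.5, 0.5), (0.5, 1.5)]  # (3), (4): search box around z = +i

result = minimize(objective, x0=[0.1, 0.8], method="SLSQP",
                  bounds=bounds, constraints=constraints)
root = complex(result.x[0], result.x[1])
```

Minimizing $|Z|^2$ rather than $|Z|$ is a common trick, since the modulus itself is not differentiable at a zero.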

I am not aware of this optimization approach having been implemented. My limited experience using MATLAB's software on Riemann's zeta function showed it to be very efficient in computing zeta's zeros.

The main question is: what is the "best" method to compute the zeros of Z? A first subsidiary question: are there any references to articles where this optimization approach was implemented and tested against the traditional method (first-derivative sign change) or other methods?

To me, the most theoretically interesting cases are those where the function Z and its components (real and imaginary parts) are defined by integral equations, as in the case of Riemann's zeta function. Hence the second subsidiary question: are there any articles on constrained optimization of integral equations with integral-equation constraints?

Hass
  • Your question is unclear, what is an "integral functional" ? – reuns Oct 28 '16 at 03:08
  • Thanks, I meant integral equation – Hass Oct 28 '16 at 04:23
  • What is an "integral equation"? Did you mean finding (an approximation of) the zeros of a holomorphic function? And what is the "first derivative sign change method" for a function of a complex variable? – reuns Oct 28 '16 at 13:00

1 Answer


The most commonly used algorithm for computing the roots of a real-valued function is Newton's Method. You might also be interested in this blog post on Newton's method and the associated Newton fractals.
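For concreteness, here is a minimal sketch of Newton's method carried out in the complex plane, applied to the illustrative function $f(z) = z^3 - 1$ (my choice of example, not from the question):

```python
def newton(f, fprime, z0, tol=1e-12, max_iter=100):
    """Newton iteration z <- z - f(z)/f'(z), stopping when the step is tiny."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / fprime(z)
        z -= step
        if abs(step) < tol:
            break
    return z

f = lambda z: z**3 - 1
fp = lambda z: 3 * z**2
root = newton(f, fp, 1 + 1j)  # converges to one of the cube roots of unity
```

Which of the three cube roots of unity the iteration reaches depends sensitively on the starting point; that dependence is exactly what the Newton fractals visualize.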

I'm not sure that you would necessarily gain a lot by using the optimization problem that you posed. In general, the difficulty with most of these non-convex optimization problems is "where should I start my algorithm?". Even with Newton's method, there are good and bad choices of initial iterates for most functions. Interestingly enough, some numerical root-finding algorithms can be shown to converge if the initial guess is 'close enough' to a true root.

  • Thank you Thomas and Chris. There are more powerful optimization methods than Newton's method, which is the most simplistic and gets easily stuck at a local minimum, so it may never reach the optimal (zero) solution. Newton's method is also not the most efficient. – Hass Oct 28 '16 at 02:55
  • @Hass If $f(z)$ is holomorphic, then $|f(z)|$ doesn't have any local minima other than its zeros, if you meant holomorphic functions – reuns Oct 28 '16 at 03:03
  • Yes. The problem is how to compute those minima (zeros). – Hass Oct 28 '16 at 04:28
  • @Hass If $f(z)$ is analytic, then $f(z)= \sum_{n=0}^\infty \frac{f^{(n)}(z_0)}{n!}(z-z_0)^n$. Thus, in the neighborhood of one of its zeros $\rho$ of order $n$: $f(z) \sim \frac{f^{(n)}(\rho)}{n!} (z-\rho)^n$, and gradient descent or Newton's method applied to $|f(z)| \sim |C| |z-\rho|^n$ will converge to $z=\rho$. The other possibility, when you are not close enough to a zero, is that it diverges instead to $z = \infty$, for example when applied to $z e^{-z}$ starting from $z = 10$. – reuns Oct 28 '16 at 13:09
  • Now this algorithm is often used together with the argument principle for detecting a zero inside a region. And for $\zeta(z)$, we also want to prove that a zero lies exactly on $\operatorname{Re}(z) = 1/2$ and that there are no zeros off the critical line; in that case we use a different algorithm based on the argument principle and the functional equation – reuns Oct 28 '16 at 13:12
  • Thanks user 1952009. There are more powerful and efficient algorithms for minimizing $|Z|$. And yes, the constraint $a < x \le b$ ($a > 0$, $b > 1$) ensured that $x = 1/2$, and that there are no zeros off the critical line. – Hass Oct 28 '16 at 20:39
  • There are a few comparative analyses of some methods for root finding, although they deal with functions of one variable. The most important are in these links: http://scholarsmine.mst.edu/cgi/viewcontent.cgi?article=3940&context=masters_theses – Hass Oct 29 '16 at 16:32
  • There are a few comparative analyses, although they deal with functions of one variable. A comparative study for multivariable functions would be very informative and useful in identifying which method(s) work best with which type of function. In my opinion, the optimization approach I proposed is most promising due to the availability of very powerful optimization algorithms. – Hass Oct 29 '16 at 16:48
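The argument-principle check mentioned in the comments can also be sketched numerically: $\frac{1}{2\pi i}\oint \frac{f'(z)}{f(z)}\,dz$ around a closed contour counts the zeros inside, with multiplicity, provided $f$ is holomorphic and non-vanishing on the contour. A minimal sketch with a toy $f(z) = z^2 + 1$ (my choice of example) and a simple trapezoid rule:

```python
import numpy as np

def count_zeros(f, fprime, corners, n=2000):
    """Integrate f'/f along the closed polygon through `corners` using the
    trapezoid rule on each edge; the winding number is the zero count."""
    total = 0.0 + 0.0j
    for a, b in zip(corners, corners[1:] + corners[:1]):
        z = a + (b - a) * np.linspace(0.0, 1.0, n)  # points along one edge
        g = fprime(z) / f(z)
        total += np.sum((g[:-1] + g[1:]) / 2 * (z[1:] - z[:-1]))
    return round((total / (2j * np.pi)).real)

f = lambda z: z**2 + 1          # zeros at +i and -i
fp = lambda z: 2 * z
square = [-2 - 2j, 2 - 2j, 2 + 2j, -2 + 2j]
n_zeros = count_zeros(f, fp, square)  # both zeros lie inside the square
```

In practice this count is used to verify that a region contains exactly the zeros an iterative method has found, which complements rather than replaces the root-finding step.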