As far as I know, the most widely used approach to computing the zeros of a function of one complex variable is to identify, by numerical methods, the points where the first derivative changes sign. I have developed a more direct and theoretically more appealing approach based on nonlinear constrained optimization.
Let Z(x + iy) be the function of interest. The problem of finding its zeros can then be formulated as:
Problem (I): Minimize |Z| (the modulus of Z)
Subject to:
(1) Re(Z) = 0;
(2) Im(Z) = 0;
(3) a < x < b;
(4) c < y < d;
where b > a and d > c.
Constraints (1) and (2), individually or together, may seem redundant, but they are there to ensure that the minimum achieved is actually zero. Constraints (3) and (4) restrict the search, so that zeros can be located within the rectangle they define.
Problem (I) can be solved numerically whenever Z and its first, second, and third derivatives can be computed numerically, since several powerful nonlinear constrained optimization algorithms are available.
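To make the formulation concrete, here is a minimal sketch in Python (an illustration of the idea, not the Matlab code mentioned below): |Z|^2 is minimized over the box of constraints (3)-(4) by a simple projected-gradient loop with a backtracking line search. The test function Z(z) = z^2 + 1, the box, and the starting point are all illustrative choices; any off-the-shelf constrained optimizer could be substituted. Constraints (1)-(2) are handled implicitly here, since driving |Z|^2 to zero forces Re(Z) = Im(Z) = 0.

```python
def find_zero(Z, box, start, tol=1e-10, max_iter=5000):
    """Minimize |Z(x+iy)|**2 over the box a < x < b, c < y < d by
    projected gradient descent with a backtracking line search."""
    a, b, c, d = box
    x, y = start
    h = 1e-7                              # finite-difference step

    def g(x, y):                          # objective: Re(Z)^2 + Im(Z)^2
        w = Z(complex(x, y))
        return w.real ** 2 + w.imag ** 2

    for _ in range(max_iter):
        g0 = g(x, y)
        if g0 < tol:                      # |Z| is numerically zero
            break
        gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
        gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
        step = 1.0
        while step > 1e-16:               # backtrack until g decreases
            xn = min(max(x - step * gx, a), b)   # project onto the box
            yn = min(max(y - step * gy, c), d)
            if g(xn, yn) < g0:
                x, y = xn, yn
                break
            step *= 0.5
        else:                             # no descent step found: stop
            break
    return x, y

# Illustrative example: Z(z) = z**2 + 1 has a single zero, z = i,
# inside the box -1 < x < 1, 0 < y < 2.
x, y = find_zero(lambda z: z * z + 1, (-1.0, 1.0, 0.0, 2.0), (0.4, 0.7))
```

With a full-strength optimizer one would instead impose constraints (1) and (2) explicitly, for example as equality constraints in an SQP or interior-point solver, rather than folding them into the objective as done here.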
I am not aware of this optimization approach having been implemented elsewhere. My limited experience applying Matlab's software to Riemann's Zeta function showed it to be very efficient at computing Zeta's zeros.
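For the Zeta case specifically, the idea can be sketched as follows (again a hedged, self-contained illustration, not the Matlab experiment above): Zeta is approximated by a truncated Euler-Maclaurin sum, and |Zeta| is minimized over the illustrative box 0.1 < x < 0.9, 14 < y < 15, with a crude grid-refinement search standing in for a proper constrained optimizer.

```python
def zeta(s, N=25):
    """Euler-Maclaurin approximation of Riemann's Zeta function; accurate
    for moderate |Im(s)|, enough to locate the first nontrivial zeros."""
    total = sum(n ** -s for n in range(1, N))          # partial sum
    total += N ** (1 - s) / (s - 1) + 0.5 * N ** -s    # tail corrections
    bern = [1 / 6, -1 / 30, 1 / 42, -1 / 30]           # B2, B4, B6, B8
    fact = 1.0                                         # running (2k)!
    poly = 1.0 + 0j                                    # s(s+1)...(s+2k-2)
    for k, b in enumerate(bern, start=1):
        fact *= (2 * k - 1) * (2 * k)
        poly = s if k == 1 else poly * (s + 2 * k - 3) * (s + 2 * k - 2)
        total += b / fact * poly * N ** -(s + 2 * k - 1)
    return total

def grid_min(f, box, iters=12, m=11):
    """Crude stand-in for a constrained optimizer: evaluate |f| on an
    m-by-m grid and repeatedly shrink the box around the best point."""
    a, b, c, d = box
    for _ in range(iters):
        hx, hy = (b - a) / (m - 1), (d - c) / (m - 1)
        pts = [(a + i * hx, c + j * hy) for i in range(m) for j in range(m)]
        bx, by = min(pts, key=lambda p: abs(f(complex(p[0], p[1]))))
        a, b = max(bx - hx, box[0]), min(bx + hx, box[1])  # shrink box,
        c, d = max(by - hy, box[2]), min(by + hy, box[3])  # stay feasible
    return bx, by

# Search the box 0.1 < x < 0.9, 14 < y < 15 for a zero of Zeta.
x, y = grid_min(zeta, (0.1, 0.9, 14.0, 15.0))
# Settles near x = 0.5, y = 14.1347..., the first nontrivial zero.
```

The grid search is deliberately naive; its only purpose is to show that minimizing |Zeta| over a box does isolate a zero, as Problem (I) predicts.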
The main question is: what is the "best" method to compute the zeros of Z? A first subsidiary question is: are there references to articles where the optimization approach was implemented and tested against the traditional method (first-derivative sign change) or other methods?
To me, the most theoretically interesting cases are those in which Z and its components (real and imaginary parts) are defined by integral equations, as with Riemann's Zeta function. Hence the second subsidiary question: are there any articles on constrained optimization of integral equations subject to integral-equation constraints?