
I recently saw a lecturer prove the following theorem (assuming the result that every analytic function is locally 1-1 whenever its derivative is nonzero): Let $\Omega \subset \mathbb{C}$ be open, and let $f : \Omega \to \mathbb{C}$ be 1-1 and analytic on $\Omega$. Then $f'(z_0) \not = 0$ for every $z_0 \in \Omega$.

I got the basic idea behind the proof: we assume for contradiction that $f'(z_0) = 0$ and, assuming without loss of generality that $z_0 = f(z_0) = 0$, we have (from the power-series expansion) that $f(z) = z^kg(z)$ for some analytic $g$ in some disk about the origin (i.e., $z_0$) and some $k \ge 2$. Since $z^k$ is not 1-1 in any such disk (it takes each small nonzero value at $k$ points, one for each $k$th root of unity), then $f$ shouldn't be either.
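For what it's worth, the idea can at least be checked numerically in a concrete case. Here is a minimal sketch (the example function $f(z) = z^2(1+z)$, with $k = 2$ and $g(z) = 1+z$, and the tolerances are my own illustrative choices, not from the lecture):

```python
import numpy as np

# Numerical illustration: take f(z) = z^2 (1 + z), so f'(0) = 0 with
# k = 2 and g(z) = 1 + z. For a small w > 0 the equation f(z) = w,
# i.e. z^3 + z^2 - w = 0, has two distinct roots near the origin,
# so f is not 1-1 on any disk about 0.
f = lambda z: z**2 * (1 + z)
w = 1e-6
roots = np.roots([1.0, 1.0, 0.0, -w])   # roots of z^3 + z^2 - w
z1, z2 = sorted(roots, key=abs)[:2]     # the two roots closest to 0
print(abs(z1 - z2))                     # distinct points ...
print(abs(f(z1) - w), abs(f(z2) - w))   # ... with the same value w
```

(The third root of the cubic sits near $z = -1$, far from the origin; the two near-zero roots are roughly $\pm\sqrt{w}$.)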

However, the proof he gave was rather awkward and technical: it involved defining three different auxiliary functions, even though the idea was simple, and I've since forgotten exactly how it worked. In any case, I'm convinced there's a better way.

The problem is that I'm having trouble turning the idea into a real proof: I know that it obviously follows if $g$ is 1-1, but I'm also pretty sure that that is too strong an assumption. Am I missing something, or does the argument just have to be more complicated?

  • It may be hard to answer whether there's a less complicated argument if we don't know what the argument was. One proof is in Theorem 7.4 of J.B. Conway's complex analysis text: http://books.google.com/books?id=9LtfZr1snG0C&pg=PA98#v=onepage&q&f=false – Jonas Meyer Apr 27 '11 at 01:03
  • Google isn't letting me see the page containing the proof. – Calvin McPhail-Snyder Apr 27 '11 at 01:23
  • (Apparently, enter posts comments.)

    I found some notes that I took: the proof involved dividing both sides by $g(0)$ (which seems unnecessary), defining a new function $\psi$ by $\psi(z) = g(z)/g(0)$, so that $f(z)/g(0) = z^k\psi(z)$. $\psi$ takes values in a disc away from $0$, so its log is well-defined. We can then put $\phi(z) = z \exp(\log(\psi(z)) /k)$, so that $\phi(z)^k = z^k \psi(z)$, and then it's easy to show that $f$ isn't 1-1, since $\phi$ is, as $\phi'(0) = 1$.

    That actually isn't as bad as I remembered; I think it's because he didn't assume $z_0 = f(z_0) = 0$.

    – Calvin McPhail-Snyder Apr 27 '11 at 01:31
  • Isn't this just the inverse function theorem? – gary Apr 27 '11 at 01:34
  • In complex-variable form, yes. – Calvin McPhail-Snyder Apr 27 '11 at 01:35
  • @gary: No, the inverse function theorem implies the converse, that if the derivative is nonzero then the function is locally injective. – Jonas Meyer Apr 27 '11 at 01:42
  • I'd argue that although the idea is simple, any proof will have to be at least a little "awkward" or "technical" for the reason that the statement is analogous to other statements that are false. For example $f: \mathbb{R} \to \mathbb{R}$ given by $x \mapsto x^3$ is one-to-one and in many senses it's as nice a function from $\mathbb{R}$ to $\mathbb{R}$ as you might want (e.g. it is real analytic) but $f'(0) = 0$. Of course the $'$ in $f'$ means something different here, but it's certainly analogous. So any proof of your statement will really have to do something. Just my two cents. – anon Apr 27 '11 at 04:51
  • Jonas: I was referring to what I think is the result that an analytic injection has an analytic inverse. Then the inverse would necessarily have a specific formula given by the inverse function theorem, which does not allow for $f'(z_0)=0$ in the region. – gary Apr 27 '11 at 06:38
  • @JonasMeyer But the converse is true here, isn't it? –  Mar 31 '15 at 20:42
  • @Gato: I don't know what you're asking, but I can repeat what I said in my comment: "If the derivative is nonzero then the function is locally injective." That is sort of a converse. Of course it must be qualified with "locally" as $z\mapsto e^z$ demonstrates. This converse direction is where the inverse function theorem is relevant. But I already said that, which is why I don't know what you're asking, and maybe I was just unclear. – Jonas Meyer Mar 31 '15 at 20:46
  • @JonasMeyer Ok, is it obvious that 'if the derivative is nonzero then the function is locally injective'? What 'argument' is needed? –  Mar 31 '15 at 20:52
  • @Gato: Have you looked up the inverse function theorem? – Jonas Meyer Mar 31 '15 at 21:26

3 Answers


I like proving this theorem via its contrapositive rather than by contradiction (though the computations are essentially the same).

Suppose $f:\Omega\to\mathbb{C}$ is analytic with $f'(z_0)=0$; we may also assume $f$ is non-constant, since a constant function is certainly not injective. The goal is to show that every disc about $z_0$ contains distinct $z_1,z_2$ with $f(z_1)=f(z_2)$. We may assume that $z_0=f(z_0)=0$ (replacing $f(z)$ by $f(z+z_0)-f(z_0)$ if necessary, as translation doesn't affect injectivity). Since $f$ is analytic at $z=0$ with $f(0)=f'(0)=0$, $f$ has a power series expansion $$f(z)=a_k z^k + a_{k+1} z^{k+1} + \dots$$ where $k>1$ and $a_k\neq0$. Pulling out a $z^k$ gives $$f(z)=z^k (a_k + a_{k+1}z + \dots) = z^k g(z)$$ where $g$ is analytic with $g(0)=a_k\neq0$. Since $g$ is nonzero on a sufficiently small disc centered at the origin, we can define an appropriate branch of its log there, so that its $k$th root is well-defined. Call this root $h$, so that $h$ is analytic with $h(z)^k=g(z)$ near the origin. Hence $$f(z)=\left(zh(z)\right)^k.$$ Note that $\phi(z)=zh(z)$ is analytic near $z=0$ and non-constant, since $\phi'(0)=h(0)\neq0$. Therefore, for any sufficiently small $\epsilon>0$, $\phi(D(0,\epsilon))$ is open (by the Open Mapping Theorem) and hence contains a disc $D(0,2\delta)$ about $\phi(0)=0$. In particular, there exist $z_1,z_2\in D(0,\epsilon)$ with $\phi(z_1)=\delta$ and $\phi(z_2)=\delta \exp\left(\frac{2\pi i}{k}\right)$; these points are distinct, since $\phi(z_1)\neq\phi(z_2)$. Therefore $$f(z_2)=\phi(z_2)^k=\delta^k \exp\left(\frac{2\pi i}{k}\right)^k = \delta^k = \phi(z_1)^k = f(z_1),$$ as desired.


The proof does look a bit clunky, especially with all the auxiliary functions. However, it's actually fairly simple, and the extra functions are really just there to show why each step is valid. In fact, the gist of the proof is:

  1. Show that $f$ is the $k$th power of some analytic function $\phi$.

  2. Show that you can always find $z_1,z_2$ where $\phi(z_1)$ and $\phi(z_2)$ lie on the same circle and their arguments differ by $\frac{2\pi}{k}$, so that their $k$th powers are equal.
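The two steps can be sketched numerically for a concrete case. This is only an illustration, assuming the example $f(z) = z^2(1+z)$, so that $k = 2$, $g(z) = 1+z$, $h(z) = \sqrt{1+z}$ (principal branch, which is fine near $0$), and $\phi(z) = z\sqrt{1+z}$:

```python
import cmath

# Step 1: f = phi^k for the example f(z) = z^2 (1 + z), with k = 2,
# g(z) = 1 + z, h(z) = sqrt(1 + z), and phi(z) = z * h(z).
f = lambda z: z**2 * (1 + z)
phi = lambda z: z * cmath.sqrt(1 + z)

def solve_phi(target, z_start):
    """Newton's method for phi(z) = target, started near the origin."""
    z = z_start
    for _ in range(50):
        dphi = cmath.sqrt(1 + z) + z / (2 * cmath.sqrt(1 + z))  # phi'(z)
        z -= (phi(z) - target) / dphi
    return z

# Step 2: find z1, z2 with phi(z1) = delta and phi(z2) = delta*exp(2*pi*i/k);
# for k = 2 the second target is just -delta.
delta = 1e-3
z1 = solve_phi(delta, delta)
z2 = solve_phi(-delta, -delta)
print(abs(z1 - z2))         # two distinct points ...
print(abs(f(z1) - f(z2)))   # ... where f takes the same value delta^2
```

Newton's method here is just a convenient way to find preimages under $\phi$; the Open Mapping Theorem is what guarantees in the proof that both target values are actually attained near $0$.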

Mike Pierce
Riley E

Suppose $f$ is analytic at $z_0$ and non-constant, but $f'(z_0) = 0$. Then the order of the zero of $f(z) - f(z_0)$ at $z_0$ is some integer $k > 1$. Take some circle $\Gamma$ around $z_0$ so that $f$ is analytic on and inside $\Gamma$ and there are no other zeros of $f(z) - f(z_0)$ or $f'(z)$ on or inside $\Gamma$. Now the sum of the orders of the zeros of $f(z) - p$ inside $\Gamma$ is $$\dfrac{1}{2\pi i}\int_\Gamma \frac{f'(z)}{f(z) - p}\, dz,$$ which is equal to $k$ for $p$ in a neighbourhood of $f(z_0)$ (the integral is a continuous, integer-valued function of $p$ there, and equals $k$ at $p = f(z_0)$). But for such $p \neq f(z_0)$, the zeros of $f(z) - p$ occur away from $z_0$, where $f'(z)$ has no zeros inside $\Gamma$, so those zeros are simple, i.e. there are $k$ distinct solutions to $f(z) = p$ inside $\Gamma$.
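The counting integral is easy to check numerically. A minimal sketch, assuming the concrete case $f(z) = z^2$ (so $z_0 = 0$, $k = 2$) with $\Gamma$ the unit circle; the sample values of $p$ are illustrative only:

```python
import numpy as np

# Numerically evaluate (1/(2*pi*i)) * integral over Gamma of f'/(f - p) dz
# for f(z) = z^2 and Gamma the unit circle, at several p near f(z_0) = 0.
# The zero count should be k = 2 each time.
f = lambda z: z**2
df = lambda z: 2 * z

N = 4000
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
z = np.exp(1j * t)              # points on Gamma
dz = 1j * z * (2 * np.pi / N)   # dz along the contour

counts = []
for p in [0.0, 0.1, 0.1 + 0.2j]:
    integral = np.sum(df(z) / (f(z) - p) * dz) / (2j * np.pi)
    counts.append(int(round(integral.real)))
print(counts)  # [2, 2, 2]
```

The constant count across nearby $p$ is exactly the continuity-plus-integrality argument used in the proof.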

Robert Israel
  • Robert Israel, which theorem did you use for counting the sum of the orders of zeros of $f(z)-p$? – Yoav Bar Sinai Jan 14 '14 at 08:45
  • I'm not aware that the theorem has an official name. It's just from computing the residue of $f'(z)/(f(z)-p)$ at $z = r$ where $f(z) - p = g(z) (z - r)^m$ and $g(r) \ne 0$. – Robert Israel Jan 14 '14 at 15:59
  • Thanks, I think that this is the argument principle. – Yoav Bar Sinai Jan 18 '14 at 14:30
  • In fact, Rouche's theorem is proven along similar lines. – Teebro Prokash Dec 07 '18 at 17:19
  • How did you get that the number of zeros is equal to $k$ for $p$ near $f(z_0)$? – Homieomorphism Jan 28 '19 at 06:36
  • It's a continuous function of $p$ near $f(z_0)$, but it's $k$ at $f(z_0)$ and its values are integers. – Robert Israel Jan 28 '19 at 13:33
  • @RobertIsrael What is your $p$? And what is $r$? – Bach Jun 12 '19 at 11:55
  • @RobertIsrael How do we know it's a continuous function of $p$? – Sha Vuklia May 27 '20 at 20:46
  • @Bach $p$ is a point in some neighbourhood $N$ of $f(z_0)$ such that $N$ doesn't intersect $\sigma = f \circ \Gamma.$ Hence any two points $z_1,z_2$ in $N$ lie in the same connected component of the complement of $\sigma,$ so the corresponding winding numbers with respect to $\sigma$ are the same. That means $f(z) - z_1$ and $f(z) - z_2$ have the same number of zeros (counting multiplicities) inside $\Gamma.$ Also, if $\zeta \neq z_0$ then $f(z) - f(\zeta)$ has all its zeros simple in a small neighbourhood around $z_0$ in which $f'(z) \neq 0$ for $z \neq z_0.$ – Anil Bagchi. Sep 01 '20 at 05:15

A proof using Rouche's Theorem: if $f$ is analytic on $\Omega$ and $f'(z_0)=0$, then we can assume $f$ isn't constant, so $z_0$ is an isolated zero of $f'$. Let $m\ge 2$ be the order of the zero at $z_0$ of $g(z)=f(z)-f(z_0)$. Let $B_1$ be an open ball about $z_0$ whose closure is contained in $\Omega$ and in which $g$ and $f'$ have no zeroes apart from $z_0$, and let $\delta>0$ be such that $|g(z)|>\delta$ for all $z$ on the boundary of $B_1$ (such a $\delta$ exists because $|g|$ is continuous and nonvanishing on the compact boundary circle, so it attains a positive minimum there). If $h$ is the constant map $z\mapsto -\delta$ on $\Omega$, then $|h|<|g|$ on the boundary of $B_1$, so by Rouche's theorem $g+h$ has the same number of zeroes as $g$ does inside $B_1$, namely $m$ counted with multiplicity. As the zeroes of $g+h$ are simple ($(g+h)'(z)=f'(z)\ne 0$ for any $z\ne z_0$ inside $B_1$, and $z_0$ itself is not a zero of $g+h$ since $(g+h)(z_0)=-\delta$), it follows that $f(z)=f(z_0)+\delta$ for $m$ distinct $z$ inside $B_1$.
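As a sanity check, the Rouche setup can be verified numerically in a concrete case. A minimal sketch, assuming $f(z)=z^2$ and $z_0=0$ (so $g(z)=z^2$ and $m=2$), with $B_1$ the unit disk and $\delta = 0.25$; all of these choices are illustrative:

```python
import numpy as np

# Concrete instance of the Rouche argument: g(z) = f(z) - f(0) = z^2 (m = 2),
# B_1 the unit disk, h the constant -delta with delta = 0.25.
delta = 0.25
t = np.linspace(0.0, 2 * np.pi, 1000)
boundary = np.exp(1j * t)   # points on the circle bounding B_1

# Rouche's hypothesis |h| < |g| on the boundary of B_1:
hypothesis = bool(np.all(delta < np.abs(boundary**2)))
print(hypothesis)

# Hence g + h = z^2 - delta has the same number of zeros in B_1 as g,
# namely m = 2, and they are two distinct simple points +/- sqrt(delta):
roots = sorted(np.roots([1.0, 0.0, -delta]).real.tolist())
print(roots)
```

The two printed roots are the $m$ distinct preimages of $f(z_0)+\delta$ promised by the argument.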

Drooga
  • Excuse me, you wrote “let $\delta>0$ be so that $|g(z)|>\delta$ for all $z$ on the boundary of $B_1$”. I want to ask how we know there is a $\delta$ like that? Why is it bounded? – Remas Jan 13 '24 at 10:44