21

This is very much a soft question, but after seeing Cauchy's integral formula in lecture today I was really struck by how neat complex analysis is. I don't understand how all of these amazing analytic properties (global information extrapolated from local behavior, holomorphic implies infinitely differentiable) can come from just algebraically adjoining a square root of $-1$.

When I asked my professor about this, he said it was a function of the complementary relationship between complex analysis and algebraic topology and didn't really expand on that.

Even not knowing much algebraic topology, this connection does seem clear in some ways (the importance of simple connectedness in Cauchy's theorem, and the constant juggling of paths and informal appeals to homotopy). However, I am still not sure what it is about the complex plane that lends itself to this special link, especially when it comes to functions. From what I understand, $\mathbb{R}^{2}$ is topologically equivalent to $\mathbb{C}$ (at least as far as point-set topology goes), and analysis there definitely isn't as nice.

I would appreciate any sort of discussion or direction towards references (especially for someone who hasn't learned much topology formally; Hatcher is a difficult text for me to grapple with on my own), and I hope this is interesting to other people.

operatorerror
  • 29,103
  • So this isn't an answer, but I'll give my thoughts on why complex analysis is so beautiful. To me, it's best served comparing with analysis on $\Bbb R^2$. The reason they are so different (in my opinion) is simply because you can divide by complex numbers. This property alone can provide the Cauchy-Riemann equations which, from there, gives you everything else. You can't divide by vectors and instead must divide by lengths which causes the analysis in $\Bbb R^2$ to be so messy. – Cameron Williams Apr 27 '16 at 01:00
  • ah I don't know why my instructor skipped doing those. It makes sense that this is important for analytic properties of functions different from $\mathbb{R}^{2}$ – operatorerror Apr 27 '16 at 01:02
  • You could also develop things from the Cauchy integral formula perspective, which, again, I contend is a direct result of being able to divide by complex numbers. – Cameron Williams Apr 27 '16 at 01:09
  • The reason complex analysis is nice is, to me, because it's about solutions to a certain very special kind of PDE called an "elliptic PDE": solutions to $\bar \partial f = 0$, the Cauchy-Riemann equations, are automatically analytic by their very nature of being solutions to an elliptic PDE. Similarly, the integral formula arises and is useful (in different forms) in the study of similar PDE; the maximum principle is also a generally true fact for this sort of PDE, as is (one version of) the identity theorem... the one thing that's especially strong about complex analysis as opposed to... –  Apr 27 '16 at 02:30
  • ...elliptic PDE in general is that they have a power series form $\sum a_i z^i$, which gives us some theorems that are strictly stronger than general elliptic results and rather magical. But for much of the stuff one sees at the beginning of a course, the results are not special to elliptic PDE. –  Apr 27 '16 at 02:31
  • @CameronWilliams: You can divide by quaternions. Should we expect analysis of quaternions to share features of complex analysis? – ziggurism Dec 17 '16 at 20:30
  • @ziggurism are the Quaternions somehow nastier? – operatorerror Dec 17 '16 at 20:34
  • @qbert: well they are not commutative. so I'd say yes, they are nastier. – ziggurism Dec 17 '16 at 20:35
  • @ziggurism what about when you do calculus with them? Do people do calculus with the quaternions? – operatorerror Dec 17 '16 at 20:42
  • @qbert I believe people have tried it, but I don't know much about it – ziggurism Dec 17 '16 at 20:59
  • Differential Topology by Guillemin/Pollack has a really nice discussion about the links between differential topology and complex analysis in the last chapter. This book also covers some basic and relevant algebraic topology from the differentiable viewpoint, e.g., degree theory and cohomology. – Matthew Kvalheim Feb 12 '18 at 03:00
  • You may read James Munkres' Topology book. In the Algebraic topology chapter, he discussed some connections to complex analysis and you will also see the proof of the FTA using algebraic topology. – John Thompson May 03 '19 at 04:19

2 Answers

6

I think one reason Complex Analysis is so nice is that being holomorphic/analytic is an extremely strong condition.

In real analysis, by contrast, differentiability is a rather weak condition, so we have functions that are differentiable once but not twice, and so on. Real analysis is full of nasty counterexamples, like the Weierstrass function, which is continuous everywhere but differentiable nowhere.

Analytic (holomorphic) functions are $C^\infty$, meaning they can be differentiated infinitely many times. Even more than that, an analytic function is locally equal to its own Taylor series.
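Both facts drop out of Cauchy's integral formula. As a rough sketch (the standard argument, with the convergence details omitted): if $f$ is holomorphic on a disc around $z_0$ and $\gamma$ is a small circle in that disc around $z_0$, then differentiating under the integral sign gives
$$f^{(n)}(z_0)=\frac{n!}{2\pi i}\oint_{\gamma}\frac{f(w)}{(w-z_0)^{n+1}}\,dw \quad\text{for every } n\geq 0,$$
and expanding the Cauchy kernel as a geometric series yields
$$f(z)=\sum_{n=0}^{\infty}\frac{f^{(n)}(z_0)}{n!}(z-z_0)^{n}$$
for $z$ near $z_0$. So a single complex derivative already forces derivatives of every order and a convergent power series.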

With regard to Algebraic Topology (AT), Hatcher does not focus much on the link between Complex Analysis and AT. Something interesting is that the Fundamental Theorem of Algebra can be proved in two quite different ways, using either Complex Analysis or Algebraic Topology (the topological proof is in Chapter 1 of Hatcher).
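To give a flavor of the complex-analytic proof (a standard argument via Liouville's theorem, not spelled out in the original post): if a nonconstant polynomial $p$ had no root, then $1/p$ would be entire, and since
$$|p(z)|\to\infty \quad\text{as } |z|\to\infty,$$
$1/p$ would also be bounded; Liouville's theorem would then force $1/p$ to be constant, a contradiction. The topological proof instead uses the fundamental group of the circle and winding numbers.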

yoyostein
  • 19,608
  • You never actually state the fact that analytic iff differentiable holds for complex functions – Stella Biderman Apr 27 '16 at 03:04
  • I saw that proof of the FTA. Do you have a good suggestion for an approachable algebraic topology book that does explore the link? Also, can you expand on why it makes sense to impose so much stronger a condition (analyticity) in complex analysis? Thanks! – operatorerror Apr 27 '16 at 03:31
  • You may want to swap analytic with holomorphic, as we usually start with the latter ("the" main concept of complex analysis) and then prove the equivalences. – YoTengoUnLCD Apr 27 '16 at 03:31
  • @qbert you can try reading up on differential topology and winding numbers. These two have more relation to complex analysis. Traditional algebraic topology (homology/homotopy theory) has nothing much to do with complex analysis, at least at the basic/intermediate level. – yoyostein Apr 27 '16 at 03:53
  • Technically, there are plenty of functions on the complex numbers that are continuous everywhere but differentiable nowhere - the counterexamples only start to disappear once you impose differentiability. – John Gowers Dec 17 '16 at 20:12
4

I'm going to narrow your question down to the following, from your first paragraph:

I don't understand how all of these amazing analytic properties (global information extrapolated from local behavior, holomorphic implies infinitely differentiable) can come from just algebraically adjoining a square root of $-1$.

There are three questions hidden here: Why $-1$? Why the square root (as opposed to, say, the cube root)? And how do these produce "these amazing analytic properties"?

The other answer and the question John Kyon linked in comments give excellent answers to the third question. To quickly summarize, holomorphic functions are nice because Cauchy's formula and shifting contours give us the implication $\text{integrable}\Rightarrow\text{differentiable}$. We get Cauchy's formula and shifting contours from the Cauchy-Riemann equations, and the CR equations arise because we want the derivative of a $\mathbb{C}\to\mathbb{C}$ function at a given point to itself be an element of $\mathbb{C}$.
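To spell out that last step (the standard computation, sketched briefly): write $f=u+iv$ and demand that the difference quotient have the same limit whether $z$ is approached horizontally or vertically,
$$\lim_{\substack{h\to 0\\ h\in\mathbb{R}}}\frac{f(z+h)-f(z)}{h}=u_x+iv_x, \qquad \lim_{\substack{h\to 0\\ h\in\mathbb{R}}}\frac{f(z+ih)-f(z)}{ih}=v_y-iu_y.$$
Requiring a single complex-valued derivative means these must agree, which is exactly the Cauchy-Riemann equations $u_x=v_y$ and $u_y=-v_x$.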

But this leaves the first two questions more mysterious. We can generalize the construction of $\mathbb{C}$ quite substantially: given a commutative ring $R\leq\mathbb{R}$ and an $R$-algebra $A$, we can ask about the functions $A\to A$ with derivatives given by the multiplication action of $A$ on itself. For example, we could always look at numbers of the form $a+b\sqrt{-2}$ instead of $a+b\sqrt{-1}$. Of course, it turns out that those numbers are just $\mathbb{C}$ again…but how can you be sure this isn't just a bad example? Why isn't there just as nice a theory for these other algebras? Why don't we hear about them?
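(To make the "those numbers are just $\mathbb{C}$ again" claim concrete: the map
$$a+b\sqrt{-2}\;\longmapsto\;a+\bigl(b\sqrt{2}\bigr)i$$
is an isomorphism of $\mathbb{R}$-algebras, because $(\sqrt{-2})^{2}=-2=(\sqrt{2}\,i)^{2}$. Adjoining $\sqrt{-2}$ builds the same field, just with a rescaled basis.)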

The answer is that there are (essentially) no other algebras. We need to have some sort of underlying complete field in order to define derivatives, so we need to start with $R=\mathbb{R}$ above. But then abstract algebra tells us that, since $\mathbb{R}$ is a field, any commutative, finite-dimensional $\mathbb{R}$-algebra with no nilpotent elements is a direct sum (as $\mathbb{R}$-algebras) of field extensions of $\mathbb{R}$ (and nilpotents would ruin the division that makes the theory work in the first place). So the $A$ we wanted to analyze above is built out of objects like $\mathbb{C}=\mathbb{R}(\sqrt{-1})$…or $\mathbb{R}(\sqrt{-2})$.
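As a quick illustration of the dichotomy (a standard example, not from the original answer): adjoining a square root of a positive number splits the algebra into a direct sum, while adjoining a square root of a negative number produces a genuine field extension,
$$\mathbb{R}[x]/(x^{2}-1)\;\cong\;\mathbb{R}\oplus\mathbb{R}\ \ (x\mapsto(1,-1)), \qquad \mathbb{R}[x]/(x^{2}+1)\;\cong\;\mathbb{C}\ \ (x\mapsto i).$$
Only the second kind of summand can support a complex-analysis-style theory, since division fails in $\mathbb{R}\oplus\mathbb{R}$ (for instance, $(1,0)\cdot(0,1)=(0,0)$).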

So what sort of numbers can we adjoin to $\mathbb{R}$ to get something bigger than $\mathbb{R}$? By a Galois-theoretic argument (see Dummit and Foote, section 14.6), the answer is precisely "square roots of negative numbers." Moreover, those additions give all of $\mathbb{C}$, by comparing the dimensions as $\mathbb{R}$-vector spaces. So if we create an algebra by adjoining a different square root, we still get $\mathbb{C}$, but with a weird coordinatization that makes no geometric sense. We might as well use the coordinatization that gives good geometry!