136

What are examples of mathematical results that were discovered surprisingly late in history? Maybe the result is a straightforward corollary of an established theorem, or maybe it's just so simple that it's surprising no one thought of it sooner.

The example that prompted me to ask is the 2011 paper John Baez mentioned, called "Two semicircles fill half a circle", which proves a fairly simple geometric fact similar to those that have been pondered for thousands of years.

azimut
  • 22,696
Hrothgar
  • 639
  • 1
    I'm likely off, but isn't the posted problem quickly solved by use of coordinate geometry? – Shivam Sarodia Mar 04 '14 at 23:00
  • 3
    It took a surprisingly long time (17th century) for even very basic concepts of probability theory to be developed, considering that those concepts would have been immensely valuable in real life, given that gambling has been popular forever. – dfan May 01 '14 at 21:57

35 Answers

112

This proof of the irrationality of $\sqrt 2$ appears to have been discovered in 1892 by A.P. Kiselev:

diagram

If $\sqrt2$ is rational, let $\triangle ABO$ be the smallest possible isosceles right triangle whose sides are integers.

Construct $CD$ perpendicular to $AO$ with $AC = AB$. $\triangle OCD$ is another isosceles right triangle.

$AC=AB$, therefore $AC$ is an integer, therefore $OC = OA - AC$ is an integer. $\triangle OCD$ is isosceles, so $OC=CD$ and $CD$ is an integer. $CD$ and $BD$ are equal because they are tangent to the same circle, so $BD$ and $OD = BO-BD$ are integers. But then $\triangle OCD$ is an isosceles right triangle with integer sides, contradicting the assumption that $\triangle ABO$ was the smallest such.

I am amazed that this wasn't found by the Greeks, because it is so much more in their style than the proof that they did find.

(Source: http://www.cut-the-knot.org/proofs/sq_root.shtml#proof7)
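For readers who prefer symbols, the descent translates into arithmetic as follows (added for illustration; it restates the construction above, nothing more). Write $h = OA$ for the hypotenuse and $\ell = AB = OB$ for the legs, so that $h^2 = 2\ell^2$. Then

$$OC = CD = OA - AC = h - \ell, \qquad OD = OB - BD = \ell - (h - \ell) = 2\ell - h,$$

so $\triangle OCD$ has integer legs $h-\ell$ and integer hypotenuse $2\ell - h$, both positive because $\ell < h < 2\ell$, and $(2\ell - h)^2 = 2(h-\ell)^2$ follows from $h^2 = 2\ell^2$. Iterating would produce an infinite strictly decreasing sequence of positive integers, which is impossible.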

MJD
  • 65,394
  • 12
    It is really amazing that the Greeks should not have found this argument while they knew that $\sqrt2$ is irrational, since that implies that Euclid's algorithm applied to lengths $\sqrt2:1$ cannot terminate (and find a $\gcd$) in a finite number of steps. In this particular case that is so because the ratio of the remaining lengths is periodic with period $2$. To wit, subtract the shorter AB from the longer OA in OA:AB to get AB:OC; then again subtract OC=CD=BD from AB=OB to get OC:OD, which as the figure shows is the same ratio as OA:AB, so one gets periodicity, and irrationality. – Marc van Leeuwen Mar 04 '14 at 18:50
  • 2
    It's still a wonderful proof! Wish it were more common. – Sawarnik Mar 04 '14 at 19:08
  • 1
    Did the Greeks do many proofs by infinite descent? – jathd Mar 04 '14 at 21:22
  • 3
    The Greeks must have known the same proof of irrationality for the golden ratio, where the construction is built into the definition. I would be surprised if the proof for sqrt(2) were first discovered by Kiselyev or not noted by one of the ancient Greek mathematicians. – zyx Mar 04 '14 at 21:49
  • 2
    @zyx I would be delighted to see a citation. I first encountered this proof in Thomas Apostol "Irrationality of the Square Root of Two - A Geometric Proof" American Mathematical Monthly Nov 2000 841–842. Apparently neither Apostol nor the reviewers at the Monthly were aware of either its earlier publication by Kiselev, or any hypothetical earlier publication. Of course, there is plenty of Greek mathematics that is completely lost, and among it there are no doubt some results that would be much more surprising than this one. – MJD Mar 04 '14 at 21:52
  • 6
    @MJD Almost surely this proof is much older. There are many variants on such geometrical forms of the standard descent proofs and many of them are very old. I expect the same is true for this variant. – Bill Dubuque Mar 05 '14 at 02:39
  • 1
    "OCD" I love it. – Almo Mar 06 '14 at 20:00
70

This is not a theorem but a result which was discovered "too late".

The number $e$ hiding inside Pascal's triangle was found by Harlan Brothers in 2012, in a very simple way!

[image]
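As a quick numerical illustration (added for illustration, not part of the original answer), one can check that the ratio of products of consecutive rows of Pascal's triangle approaches $e$:

```python
# s_n is the product of the entries in row n of Pascal's triangle;
# the claim is that s_{n-1} * s_{n+1} / s_n^2 tends to e as n grows.
from math import comb, e, prod

def row_product(n):
    """Product of the binomial coefficients C(n, 0), ..., C(n, n)."""
    return prod(comb(n, k) for k in range(n + 1))

for n in (5, 10, 50, 100):
    ratio = row_product(n - 1) * row_product(n + 1) / row_product(n) ** 2
    print(n, ratio, e - ratio)
# The ratio equals (1 + 1/n)^n exactly, so the error shrinks roughly like e/(2n).
```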

Jose Antonio
  • 7,154
  • 19
    John Baez has a post explaining why this works. If we let $u_n = s_n/s_{n+1}$, then it's easy to see that $u_{n-1} = n!/n^n$, so $$\frac{s_{n-1}s_{n+1}}{s_n^2} = \frac{u_{n-1}}{u_n} = \frac{n!}{n^n}\frac{(n+1)^{n+1}}{(n+1)!} = \left(1+\frac1n\right)^n \to e$$ as $n \to \infty$. – ShreevatsaR Mar 06 '14 at 09:21
  • 15
    Personally, I don't think that this result is singular or imperative enough that it should have been discovered much earlier. – azimut Mar 10 '14 at 12:25
  • 2
    For what it's worth, Harlan Brothers actually found this no later than November 2009, as that is when he added this piece of information to the OEIS. – Nathaniel Johnston Mar 30 '14 at 03:09
  • Has anyone had any luck finding pi in Pascal's triangle? I tried it yesterday without any luck. – Neil Apr 10 '15 at 08:56
  • 1
    @Neil Presumably, any such result would require Stirling's approximation in its proof. – Akiva Weinberger Aug 18 '15 at 00:46
  • 1
    @ShreevatsaR Actually, the brief explanation you describe follows the one from my "Math Bite" article in Mathematics Magazine (Feb 2012). The one in Baez's post is, in fact, courtesy of Greg Egan and provides an alternate, neat way of looking at the result. A thorough treatment appears in The Mathematical Gazette (Mar 2012). – Harlan Aug 25 '16 at 05:52
34

An example mentioned by Martin Gardner (in Martin Gardner's New Mathematical Diversions from Scientific American): Morley's trisector theorem is elementary enough to have been proved two millennia ago, but it was unknown until 1899.

  • 5
    I think the use of trisectors was problematic for the Greeks, because they cannot be constructed using a compass and a ruler. – sds Mar 05 '14 at 19:56
  • 10
    @sds: You're absolutely right. Still, between ancient Greece and 1899 lies some room... – Pete L. Clark Mar 05 '14 at 21:43
    And yet the first proof was heavily complicated; I think it was some time before easier geometric proofs were found. – Sawarnik Mar 07 '14 at 04:00
34

Apéry's 1978 proof of the irrationality of $\zeta(3)=\sum_{n=1}^\infty\frac1{n^3}$ took the experts by surprise, and uses tools that had been available for a very long time.

See A Proof that Euler Missed for more details.

Martin Argerami
  • 205,756
31

Informally, the historical definition of an Archimedean solid requires that all its faces are regular polygons and that each vertex of the solid locally looks the same. Since the time of the ancient Greeks it has been common knowledge that, up to scaling, rotating and reflecting, there are 13 types of Archimedean solids.

It was only in the 20th century that it was noticed that there is a 14th solid which complies with the above definition, the Elongated Square Gyrobicupola. It arises from the Rhombicuboctahedron (one of the 13 established Archimedean solids) by twisting one of its octagonal "caps" by 45 degrees. However, those two solids are not identical.

Elongated Square Gyrobicupola

Since this solid doesn't look as symmetric as the "proper" Archimedean solids (its symmetry group doesn't act transitively on its vertices), nowadays the definition is typically sharpened.

azimut
  • 22,696
  • 2
    This reminds me of another example: the Johnson solids, which are polyhedra with regular polygonal faces, were first enumerated by Norman Johnson in 1966, although there was nothing stopping their enumeration by Kepler, Archimedes, or anyone else in the past 2,500 years or so. – MJD Mar 11 '14 at 00:12
  • 1
    Another similar example: There are exactly eight convex polyhedra whose faces are equilateral triangles. Several, such as the icosahedron, have been known since antiquity. But as far as I know the one with twelve faces is first mentioned in a paper of Freudenthal and Van der Waerden from 1947. – MJD Mar 12 '14 at 16:04
  • I find it bizarre that it took mathematicians something like 400 years to realize that they could just rotate one of the "caps" of the rhombicuboctahedron by 45 degrees to obtain the elongated square gyrobicupola, thus not changing the size of any of the faces. – Vortico Mar 30 '14 at 11:53
  • @Vortico: It's not the size of the faces which matters for the historical definition. It's the fact that after the rotation, still every vertex is surrounded by a triangle and 3 squares. – azimut Jan 11 '16 at 16:31
26

The theorem that $$\text{PRIMES is in P}$$ was proven only in 2002 (via the AKS primality test). The algorithm certainly isn't rocket science, making it quite surprising that this wasn't found way earlier. The point is that it was common belief that $\text{PRIMES}$ is not in $\text{P}$, so there was not a big search for such an algorithm.
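AKS itself takes some care to implement, but the much older GRH-conditional polynomial-time test discussed in the comments below (Euler's criterion checked against the Jacobi symbol for all bases up to $2(\ln n)^2$) fits in a few lines. A minimal sketch, assuming that conditional base bound:

```python
# GRH-conditional deterministic primality test: n > 2 (odd) passes iff
# a^((n-1)/2) ≡ (a|n) (mod n) for every base a <= 2*(ln n)^2, where (a|n)
# is the Jacobi symbol. Primes always pass (Euler's criterion); rejecting
# every odd composite within this base bound is conditional on GRH.
from math import log

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def is_prime_conditional(n):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for a in range(2, min(n - 1, int(2 * log(n) ** 2)) + 1):
        j = jacobi(a, n)
        if j == 0 or pow(a, (n - 1) // 2, n) != j % n:
            return False
    return True

print([p for p in range(2, 60) if is_prime_conditional(p)])  # the primes below 60
```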

Frenzy Li
  • 3,685
azimut
  • 22,696
  • The answer by Marco13 already mentions this. Although it contains two answers which makes voting difficult… – MvG Mar 07 '14 at 18:28
  • 12
    I think it is a misconception that "it was the common belief that PRIMES is not in P". It was actually known in the 1980s that if the Generalized Riemann Hypothesis is true then PRIMES is in P (via the not-as-well-known Ankeny's algorithm). Since many people believe GRH, it was believed that PRIMES is in P. The new result of AKS in 2002 is a proof that doesn't depend on GRH. – Yoni Rozenshein Mar 30 '14 at 13:34
  • 5
    Looked up what I wrote in the previous comment, I made a mistake. It's not called "Ankeny's algorithm", it's simply the Euler-Jacobi test. Ankeny's theorem (1950s) guarantees (conditional GRH) that if $n$ is composite then there is a base $a < 2(\log n)^2$ for which $n$ is not an Euler-Jacobi pseudoprime, which yields a trivial polynomial time algorithm (check all bases $a$ up to $2(\log n)^2$...) – Yoni Rozenshein Mar 30 '14 at 15:23
    @YoniRozenshein: Thanks for your interesting remark. So maybe it's more precise to write that nobody expected the existence of an easy explicit algorithm, without any mystery like Riemann... – azimut Mar 31 '14 at 15:36
  • Yes, this was a big surprise :) I don't know if it was more surprising that it was independent of RH than that it was elementary. – Yoni Rozenshein Mar 31 '14 at 19:29
  • 1
    @YoniRozenshein, Ankeny only proved a result of type $O((\log n)^2)$, not with the sharp constant $2$ in it. Look at his Annals paper (1952). That constant is from Eric Bach's thesis in the 1980s. At the time of the AKS paper I was speaking about it with a CS professor who turned out to be unaware that a polynomial-time algorithm was already known, conditional on the Generalized Riemann Hypothesis. – KCd Jun 11 '15 at 11:04
21

This is not a theorem, but a result of the kind you ask for that was discovered "late".

As Erdős mentions:

"At that time,mathematicians of the seventeenth and eighteenth centuries found pairs of odd amicable numbers such as $(12.285,14.595)$.
Curiosly, however,the smallest pair after the one known from antiquity $(1184,1210)$ was found only in 1866 by a 16-year old student Niccolo Paganini".
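Both pairs are easy to verify directly from the definition (a short check added for illustration):

```python
# Two numbers are amicable if each equals the sum of the proper divisors
# of the other. Check the ancient pair and the two pairs mentioned above.
def proper_divisor_sum(n):
    total = 1          # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

for a, b in [(220, 284), (1184, 1210), (12285, 14595)]:
    print(a, b, proper_divisor_sum(a) == b and proper_divisor_sum(b) == a)
# All three lines should print True.
```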

azimut
  • 22,696
21

Another result that was discovered late is the construction of the Heptadecagon (regular seventeen-sided polygon) by compass and unmarked straightedge, found by Gauß in 1796.

Wikipedia describes it as "...the first progress in regular polygon construction in over 2000 years"

This is remarkable enough, considering that it is a purely geometrical result, and the Greeks already had everything necessary for achieving it in their toolbox.

azimut
  • 22,696
Marco13
  • 2,043
18

Theorem $\big($Sylvester-Gallai$\big):$ For any $n$ points in $\mathbb{R}^2$, not all collinear, there exists a line passing through exactly two of them.

$\rm{Proof}:$ Pick any $2$ points and draw a line $\ell$ through them: suppose a $3^{\rm{rd}}$ point lies on the line $($else we are done$)$ and pick the closest point $p\not\in\ell$ to the line, at a distance $\delta$ say. Of our $3$ points $\in\ell$, a pair lies on one side of the foot of the perpendicular from $p:$ draw a line $\ell'$ through $p$ and the furthest point of that pair from $p$. The distance $\delta'$ between $\ell'$ and the second point of the pair is $<\delta$.

[figure]


The statement was proposed first by Sylvester in $1893$ and independently by Erdős in $1943\;($rather surprisingly, Erdős could not find a proof$)$. It was proven the following year by Gallai.

$\big[\mathbf{Note:}$ the statement can fail in other fields: for instance, in $\mathbb{P}^2(\mathbb{C})$ see the Hesse configuration$\big]$
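For finite point sets the statement is easy to test by brute force; here is a minimal sketch (added for illustration, using exact integer arithmetic to avoid rounding issues):

```python
# Given finitely many points with integer coordinates, not all collinear,
# find an "ordinary" line, i.e. one containing exactly two of the points.
from itertools import combinations

def collinear(p, q, r):
    """True if the three points lie on a common line."""
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def ordinary_line(points):
    """Return a pair of points whose joining line contains no third point."""
    for p, q in combinations(points, 2):
        if not any(collinear(p, q, r) for r in points if r not in (p, q)):
            return p, q
    return None  # only happens if all the points are collinear

# A 3x3 grid: plenty of lines contain three points, yet ordinary lines exist.
grid = [(x, y) for x in range(3) for y in range(3)]
print(ordinary_line(grid))  # ((0, 0), (1, 2)) for this ordering of the grid
```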

ocg
  • 612
  • 6
    ... So if a third point lies on $\ell'$, repeat to get two points whose joining line has a closest off-line point at an even smaller non-zero $\delta''$. Since there are only finitely many lines through two points of the set, this has to stop somewhere: there are two points which have no other point on the line joining them.

    Incredible proof.

    – KalEl Mar 30 '14 at 05:20
17

A Dandelin sphere of a conic section will touch the plane of the conic section at a focus of the conic section. This pretty and useful fact was discovered only in 1822, but there is no reason why it could not have been in the treatises of Apollonius of Perga.

16

One possible answer would be the discovery of the Strassen algorithm in 1969. This is a way to multiply large matrices together (slightly) faster than the standard method.

Although it may not be the fastest algorithm for matrix multiplication, the majority of faster variants stem from this one idea.

It may not be a simple proof, but it is very late.
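For concreteness, here is a minimal sketch of the seven-multiplication recursion (added for illustration; it assumes square matrices whose size is a power of two and is not how a production implementation would be written):

```python
# Strassen's recursion: 7 recursive multiplications instead of 8 gives a
# running time of O(n^log2(7)) ≈ O(n^2.81) instead of O(n^3).
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                       # fall back to ordinary multiplication
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True, up to rounding
```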

  • 5
    The whole field of computational complexity could have started earlier by at least a few decades. – zyx Mar 05 '14 at 05:34
  • in contrast, I seem to remember that the FFT method for multiplication of large integers was first discovered by Gauss. – MJD Mar 05 '14 at 20:20
  • 3
    @MJD was the FFT known in those days? Because with a naive DFT, using fourier transforms for multiplication is no better than the standard algorithm. – Tim Seguine Mar 06 '14 at 10:31
    @Tim According to Heideman, Johnson, and Burrus, “Gauss and the History of the Fast Fourier Transform”, Archive for History of Exact Sciences 34 #3, pp. 265–277: “This [Gauss’ algorithm] is also exactly the FFT algorithm derived by Cooley and Tukey in 1965 … Gauss’ algorithm is as general and powerful as the Cooley-Tukey common-factor algorithm”. – MJD Mar 12 '14 at 16:23
  • @MJD I am aware of that; that is not the problem. The problem is that the standard "naive" DFT is $O(n^2)$, but the FFT is $O(n\log{n})$. Since standard multiplication is also $O(n^2)$, this means there is no algorithmic benefit of utilizing the Gauss algorithm for multiplication (it is not faster). Thus, the question of whether the FFT was known at the time would be extremely relevant. The fact that it can be used for multiplication is fairly trivial (the convolution theorem is a basic result which of course Gauss would have known about) – Tim Seguine Mar 12 '14 at 17:23
  • @Tim I understand that, and I provided the reference which I believe says that the Gauss algorithm is the $O(n\log n)$ FFT algorithm essentially as was rediscovered by Cooley and Tukey. Gauss was using it to speed up his own astronomical calculations, and was not so stupid as to use a complicated algorithm which was no faster than the simple one. – MJD Mar 12 '14 at 17:26
  • @MJD The "Gauss algorithm" is merely evaluating the sum in the definition of the DFT. It should be relatively easy to assure you that this requires $O(n^2)$ operations. The source you quoted seems to be saying only that the Cooley-Tukey algorithm is equivalent to Gauss's, not that it is of the same algorithmic complexity. – Tim Seguine Mar 12 '14 at 17:31
14

I'd say the fact that zero was not used in mathematics until sometime after the 600s takes the cake here.

Shane
  • 209
  • 11
    Using zero does not qualify as a result that was discovered late. – Marc van Leeuwen Mar 04 '14 at 20:00
  • 7
    And it's not the mathematical concept of zero that was adopted late, but the written notation for zero in a place-value system. But its late adoption in Europe and elsewhere is not so surprising, because calculation was not done with paper (scarce and expensive) but with abacuses and counting boards. And on an abacus there is a notation for zero. So when you look carefully at just what it was that was adopted late, you find that it is a very specific technical invention that has much more to do with engineering than with mathematics. – MJD Mar 04 '14 at 21:19
  • In line with @MJD 's comment, it's not easy to come up with a situation, prior to positional notation, where having zero or "nothing" as a number makes a big difference, or where treating positive and (what we would now call) negative numbers as a single unified system is critical. Accounting works fine with debit and credit as separate columns. – zyx Mar 05 '14 at 00:37
  • 2
    @MJD: Actually the mathematical concept of zero as a number ("how can nothing be a number?") was also very late in being adopted. AFAIK no one in the Western world (so discounting the abacus) did arithmetic with zero as a number ("adding zero to five gives five", etc.) for a long time. Even when Fibonacci introduced the place-value system in his Liber Abaci, he wrote "the nine digits of the Hindus" and "these nine figures, and with the sign $0$", instead of "the ten digits" as we might. And you can Google for "is zero a number" to see many people who have trouble with the idea even now. :-) – ShreevatsaR Mar 06 '14 at 08:36
  • But why would that be such a big discovery without some form of positional notation? People understood that income and expenses cancel each other, and that one can travel the same distance in two directions (and how to compose trips). Creating a $0$ or minus sign does not seem as revolutionary without the relation to multiplication, or the ability to process larger numbers by positional notation. @ShreevatsaR – zyx Mar 06 '14 at 20:05
  • @zyx: I didn't say anything about whether it was a big discovery or not; I was only countering the idea that the mathematical concept of zero is ancient. The reification of "nothing" as an actual quantity/number is an idea that historically doesn't seem to have come naturally in large parts of the world. [Secondly, negative numbers: even for the part you say, the recognition that income and expenses cancel each other, etc., is there any evidence in the literature? It is not about the notation, it's about treating negative numbers in the same category / on par with positive numbers.] – ShreevatsaR Mar 07 '14 at 09:06
  • 2
    "The introduction of the digit $0$ or the group concept was general nonsense too, and mathematics was more or less stagnating for thousands of years because nobody was around to take such childish steps..." - Grothendieck. – Ragib Zaman Mar 07 '14 at 17:07
10

Pascal's theorem was discovered in 1639 or 1640, but its projective dual, Brianchon's theorem, was only discovered in 1806. So it took about 167 years despite the fact that Pascal already realized the projective nature of his theorem.

MvG
  • 42,596
10

It took 2000 years for anyone to recognize that Euclid's Parallel Postulate didn't follow from his other four axioms. It has always been surprising to me that people tried for so long to prove the parallel postulate followed from the other axioms and no one seemed to consider the possibility that it was independent of them. The ultimate realization that there exist models of geometry in which its negation holds has blossomed into an enormous area of mathematics rife with deep and interesting results.

D Wiggles
  • 2,818
  • 3
    Why should this have been discovered earlier? Spherical geometry was known, but (bidirectionally) interpreting Euclidean geometry in spherical or hyperbolic geometry is complicated, and I would imagine some accumulated experience with spherical trigonometry and calculus was important. Saying "curved surfaces exist and have geodesics as the equivalent of lines" doesn't resolve the issue with the parallel axiom. That several people discovered it at the same time suggests that the effectively necessary concepts were not available much earlier than 1800. – zyx Mar 06 '14 at 20:00
  • @zyx Riemannian geometry isn't necessary to develop a model of the hyperbolic plane. – D Wiggles Mar 07 '14 at 16:12
  • 3
    Something that is neccessary is a clear understanding that the axioms refer to terms with no further meaning than what the axioms say about them. As long as you have Euclidean intuition about what the term “line” refers to, you have trouble imagining anything satisfying the axioms where that role is played by something you usually wouldn't call a line. I'd associate that level of abstraction with Hilbert, though it might be others should be named here as well or even instead. – MvG Mar 07 '14 at 18:18
  • @MvG I understand your point of view, but counterintuitive results are abundant in mathematics. It seems to me that the story is always that [insert geometer here] went about trying to prove Euclid's parallel postulate followed from the others using a proof by contradiction. [Geometer] proved a bunch of theorems (for example about quadrilaterals), but never found a contradiction. Then he gave up or assumed something was wrong because the theorems were counter-intuitive. It seems like people just discarded the results off-hand because they seemed odd. – D Wiggles Mar 07 '14 at 18:59
  • @zyx: That's true, but the claim is why didn't anyone even consider the possibility of independence. Maybe they didn't have the tools to prove it, but why didn't one ancient or medieval philosopher/mathematician/what-have-you ever say something like "Perhaps this statement simply cannot be proved after all?" – The_Sympathizer Oct 26 '14 at 13:46
10

Grothendieck said that he found it embarrassing that it took humanity so long to define a group.

  • If that's true then he was losing it, most people would define a group as 'some things that are near each other' or words to that effect - they're not thinking about the Rubik's cube. –  Apr 11 '16 at 02:06
  • 5
    @mistermarko ...I'm not sure quite how to respond to that. The English language synonym of the word "group" obviously has no effect on when group theory was developed. Do you think groups would have been discovered earlier if they were named differently? – Alexander Gruber Apr 14 '16 at 03:58
  • 3
    To wit, you may be interested to learn that in early 20th century, most of the literature follows Noether in studying "groups of operators" (what we now refer to as "group actions") rather than groups in the abstract. Even Galois was mostly concerned about the action of groups on polynomials, and spent comparatively little time developing groups abstractly. – Alexander Gruber Apr 14 '16 at 04:08
8

Generally, the JSJ-decomposition of 3-manifolds is considered to have been discovered "late" (by $40$ years, but that is a long time in modern maths!).

From Allen Hatcher's Notes on Basic 3-Manifold Topology,

Beyond the prime decomposition, there is a further canonical decomposition of irreducible compact orientable 3 manifolds, splitting along tori rather than spheres [the JSJ-decomposition]. This was discovered only in the mid 1970’s, by Johannson and Jaco-Shalen, though in the simplified geometric version...it could well have been proved in the 1930’s. (A 1967 paper of Waldhausen comes very close to this geometric version.) Perhaps the explanation for this late discovery lies in the subtlety of the uniqueness statement. There are counterexamples to a naive uniqueness statement, involving a class of manifolds studied extensively by Seifert in the 1930’s. The crucial observation, not made until the 1970’s, was that these Seifert manifolds give rise to the only counterexamples.

For context, JSJ-decompositions are the starting point for Perelman's proof of Geometrisation, which implies the Poincaré conjecture. His subsequent shunning of a Fields Medal and a \$1m prize is oft-told...

user1729
  • 31,015
8

The first thing that came to my mind was the Kepler conjecture. While its proof is neither straightforward nor simple, the fact that fruit sellers had been stacking their apples optimally for thousands of years before it was proven that they were doing so is somewhat surprising.

azimut
  • 22,696
Marco13
  • 2,043
  • 2
    I don't think this one qualifies for the question. Which kind of rigorous proof we should have expected much earlier? – azimut Mar 08 '14 at 19:55
  • 1
    @azimut Strictly referring to the question (asking for "simple" results), this is certainly true. Maybe the surprising thing here is that the fact itself is so simple (in contrast to the proof). But admittedly, this would apply to many other open problems (e.g. Goldbach)... and BTW: most results that have been mentioned so far do not really look "simple" for me (I'm not a mathematician). However, I wonder when things like Goldbach and Riemann will be added as answers here ;-) http://abstrusegoose.com/210 – Marco13 Mar 08 '14 at 20:44
7

The Mason-Stothers theorem (aka the $abc$ conjecture for polynomials): if $f(t)$, $g(t)$, and $h(t)$ are nonzero polynomials over a field satisfying $f(t) + g(t) = h(t)$ with $f(t)$ and $g(t)$ being relatively prime and the three polynomials are not all constant, then $$ \max(\deg f, \deg g, \deg h) \leq \deg({\rm rad}(fgh)) - 1, $$ where ${\rm rad}(F)$ for a nonzero polynomial $F$ is the product of its irreducible factors (set it to be $1$ if $F$ is constant). Strictly speaking, the version I wrote is for a field of characteristic $0$. In characteristic $p$, the condition that at least one of the polynomials is nonconstant should be replaced with at least one of them having nonzero derivative.

This theorem was proved only in the 1980s, independently, by Mason and Stothers. Its proof uses just some elementary calculations with derivatives of polynomials. There was a proof given by Silverman that shows this result is, to use the OP's term, a "straightforward corollary" of the Riemann-Hurwitz formula, so in principle the theorem could have been formulated in the 19th century, but the simple statement of this theorem did not appear until Mason and Stothers as far as I'm aware.
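A tiny sanity check of the inequality on a concrete example (added for illustration; the radical is computed from the distinct irreducible factors):

```python
# Check Mason-Stothers on f + g = h with f = t^2, g = 2t + 1, h = (t+1)^2,
# where gcd(f, g) = 1. Here the inequality is in fact an equality.
from sympy import symbols, degree, factor_list, gcd, expand

t = symbols('t')
f, g = t**2, 2*t + 1
h = expand(f + g)                       # (t + 1)^2

def rad_degree(p):
    """Degree of rad(p): sum of the degrees of the distinct irreducible factors."""
    _, factors = factor_list(p, t)
    return sum(degree(q, t) for q, _ in factors)

assert gcd(f, g) == 1
lhs = max(degree(f, t), degree(g, t), degree(h, t))
rhs = rad_degree(f * g * h) - 1
print(lhs, rhs, lhs <= rhs)             # 2 2 True
```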

On somewhat the same theme, another example of the type being requested is Belyi's theorem. It says that for a smooth projective algebraic curve $C$ over the complex numbers, if $C$ is defined over the algebraic numbers then $C$ admits a covering of the Riemann sphere ramified over at most three points. (The converse was known earlier.) I say this is somewhat the same theme because Belyi's theorem is closely related to when the inequality of the Mason-Stothers theorem is an equality. Like the Mason-Stothers theorem, the proof of Belyi's theorem is surprisingly low-tech compared to what anyone would imagine when hearing the statement of the theorem.

KCd
  • 46,062
6

I got three of them for you.

Trisecting the angle, doubling the cube, and squaring the circle. They were three famous geometric problems from ancient Greece, concerned with whether or not these constructions could be done with compass and straightedge alone. All three turned out to be impossible, but that wasn't proven for about 2000 years.

Trisecting the angle was proved impossible in 1837 by Wantzel, using the algebra of field extensions.

Doubling the cube was proved impossible in the same 1837 paper, by showing that $\sqrt[3] 2$ is not a constructible number.

Squaring the circle was proved impossible in 1882, by showing that $\pi$ is transcendental.

Thomas Eding
  • 137
  • 15
    The tools to prove, and even to state, those results were not available until the era in which the proof was accomplished. The first two results are nearly trivial given the idea of degree of a field extension, but this was an unknown concept before approximately 1800. The proofs that pi is irrational and transcendental used ideas of good rational and polynomial approximation that were unheard-of before those breakthroughs, which gave birth to transcendental number theory. – zyx Mar 05 '14 at 05:28
  • Yeah but I still think it fitted nicely with the question. – sicklybeans Mar 05 '14 at 05:31
  • It's also a brilliant example of seemingly unrelated fields of math coming together – sicklybeans Mar 05 '14 at 05:31
  • 2
    @sicklybeans: While this result and the underlying technique is certainly very nice, I agree with zyx that it doesn't qualify for the question. The proofs were not found late, but they were just in pace with the mathematical development. – azimut Mar 07 '14 at 16:00
5

The Perko pair:

[image: the Perko pair]

Ever since the first knot tables were published around 1900, these two were thought to be different knots. The mistake slipped past Alexander's (1920s), Conway's (1960s), and Rolfsen's (1970s) tabulating efforts. It was only in 1974 that Kenneth Perko noticed that these two diagrams belong to one and the same knot.

(for cuteness sake, the picture above is from Perko himself)

Glorfindel
  • 3,955
4

Find an explicit bijection between $\mathbb N$ and $\mathbb Q^{+}$.

The countability of $\mathbb Q^{+}$ had been known for a century, but it was not until $1989$ that Yoram Sagher noticed the following rather simple explicit bijection between $\mathbb N$ and $\mathbb Q^{+}$.

Let $\frac{m}{n} \in \mathbb Q^{+}$ with $\gcd(m,n)=1$, and let $q_1, q_2, \dots, q_k$ be the distinct prime factors of $n$. Then $$f\left(\frac{m}{n}\right) = \frac{m^2n^2}{q_1q_2\cdots q_k}$$ is the desired bijection. Given the simplicity of $f$, it is surprising that no one had noticed it before.

Note that the inverse is (easily) computable, so you can say what the $n$-th rational number in the list is, for any $n$.
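A short computational sketch of the map and its inverse (added for illustration; primes with odd exponent in $f(m/n)$ recover $n$, those with even exponent recover $m$):

```python
# Sagher's map f(m/n) = m^2 n^2 / (product of distinct primes dividing n),
# for m/n in lowest terms, together with its inverse.
from math import gcd, prod
from sympy import factorint

def f(m, n):
    assert gcd(m, n) == 1
    rad_n = prod(int(p) for p in factorint(n)) if n > 1 else 1
    return m * m * n * n // rad_n

def f_inverse(k):
    m, n = 1, 1
    for p, e in factorint(k).items():
        p, e = int(p), int(e)
        if e % 2 == 0:
            m *= p ** (e // 2)          # even exponents come from m^2
        else:
            n *= p ** ((e + 1) // 2)    # odd exponents 2e-1 come from n^2/rad(n)
    return m, n

# Round-trip check on the first couple of thousand positive integers.
print(all(f(*f_inverse(k)) == k for k in range(1, 2000)))  # True
```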

gamma
  • 1,957
  • 4
    Many explicit bijections were known, but not this one. – André Nicolas Aug 29 '15 at 06:35
  • @AndréNicolas Oh, I thought I had read that this was the first. I would be interested if you could point me towards others. Thanks. – gamma Aug 29 '15 at 17:08
  • 2
    All of the usual proofs of countability of $\mathbb{Q}$, or $\mathbb{Q}^+$ use an explicit bijection, or involve an explicit bijection that may not be written down in detail. – André Nicolas Aug 29 '15 at 18:27
4

Brouwer's demonstrations of the difficulties with classical logic and the excluded-middle principle (tertium non datur), which had been used in mathematics and logic for more than 2000 years.

The lateness of discovery was due to the increasing frequency and complexity of infinitary constructions in 19th-century analysis (and the relatively more concrete arguments before that time), and the concomitant pressure to develop precise and consistent ways of handling them. The extension of principles like Excluded Middle from finite to infinite situations is where the problems isolated by Brouwer most clearly appear.

zyx
  • 35,436
  • 1
    Considering that classical logic, including tertium non datur, is alive and kicking, if space allows it, it would IMO be good to present at least one of Brouwer's points in some detail. – Daniel Fischer Mar 05 '14 at 10:29
  • 1
    Your point is not clear. Adding the law of excluded middle cannot generate (additional) contradictions, so there is no possibility of an internal foundational crisis specific to classical logic. On the other hand, many classical proofs work as-is or with insignificant modification without nonconstructive methods, so that no matter which approach one pursues, there is always a growing body of mathematical knowledge exempt from Brouwer's criticism, and one could argue that this (and political inertia) is what keeps things alive and kicking even if things are somehow "wrong" in the framework. – zyx Mar 07 '14 at 04:29
  • 2
    "Can you give an example of the difficulties Brouwer demonstrated?" Since classical logic is still the predominant logic used, I think many people don't know Brouwer's criticism, and it seems the community in the large didn't find it convincing. Having a point explained would allow tentatively judging it by oneself. – Daniel Fischer Mar 07 '14 at 11:18
4

Smale's paradox: in topology, the fact that a sphere in $\mathbb{R}^3$ can be turned completely inside out without tears or sharp creases (self-intersection is allowed) was only indirectly proved in 1957, and an actual example of the process wasn't found until 1961.
See http://en.wikipedia.org/wiki/Smale%27s_paradox and http://www.geom.uiuc.edu/docs/outreach/oi/history.html

3

Various irrationality and transcendence results have already been posted, but it is interesting to see that the mere existence of transcendental numbers was not proven until the nineteenth century. Of course the existence of irrational numbers was already known in ancient Greece, but it took until 1844 before we first knew with certainty that there exist transcendental numbers.

The notion of algebraic and transcendental numbers was not yet available in ancient Greece. It seems that the first mention of the term transcendental was made by Leibniz in the 17th century, although he was more interested in transcendental functions rather than transcendental numbers. As Bourbaki puts it (Elements of the History of Mathematics, page 74):

“The definition that Leibniz gives of "transcendental quantities" [...] seems to apply more to functions than to numbers (in modern language, what he does reduces to defining transcendental elements over the field obtained by adjoining to the field of rational numbers the given numbers from the problem); it is however likely that he had a fairly clear notion of transcendental numbers (even though these latter do not appear to have been defined in a precise way before the end of the XVIIIth century); [...]”

The first proof of the existence of transcendental numbers was given by Liouville in 1844, who constructed a class of numbers which he then proved to be transcendental (now known as the Liouville numbers). The Liouville numbers are a cornerstone of the field of Diophantine approximation, which has since grown into a rich and active area of study in contemporary mathematics.

Nowadays of course we get the existence of transcendental numbers as an easy corollary of a famous theorem of Cantor's, which states that $\mathbb{R}$ is uncountable. Since the set of algebraic numbers is only countable, it follows that there must exist (uncountably many) transcendental numbers. Knowing this simple proof, I was very much surprised to learn that the existence of transcendental numbers had only been settled for 30 years when Cantor first proved $\mathbb{R}$ to be uncountable.

(Interestingly, Cantor's first proof of the uncountability of the real numbers predates his famous diagonal argument by some 17 years. See also Robert Gray, Georg Cantor and Transcendental Numbers, The American Mathematical Monthly, Vol. 101, No. 9 (Nov., 1994), pp. 819-832, at the time of writing also available in its entirety at the website of the Mathematical Association of America.)

3

I'm not sure this is a real answer, but Wikipedia claims that the Möbius strip was invented in 1858, by Möbius. I find it incredible that any property of the strip was still undiscovered by 1858, so whatever fraction of Wikipedia's claim is true, I think it qualifies as a surprisingly late discovery.

MJD
  • 65,394
2

The Meyers–Serrin theorem that $H=W$ (see their paper with exactly this equality as its title) came only decades after people had proven all kinds of things with both $H$ and $W$ (the Sobolev space defined by completing the smooth functions and the one defined via weak derivatives, respectively), and is so elementary that you can ask second-year students to prove it as an exercise (extra credit, to be fair).

Vincent
  • 10,614
Bananach
  • 7,934
2

How about the Jordan curve theorem? It seems obvious at first sight, but it is quite complicated to prove.

ljfa
  • 423
  • 3
    This isn't exactly what the question is asking for. Just because some statement is "obvious", it doesn't necessarily imply that its rigorous proof should have been found early. In this case, the only known proofs are quite advanced and it is hard to imagine that someone should have come up with it much earlier. – azimut Mar 07 '14 at 15:51
  • The Jordan curve theorem is also very easy to prove with the machinery of algebraic topology (the usual modern proof is through Alexander duality); it's not surprising that it wasn't proven before such machinery was developed. – anomaly Nov 10 '14 at 05:46
2

Another result that seems to have appeared late is the AKS primality test ( http://en.wikipedia.org/wiki/AKS_primality_test ). Although the field of computational complexity is relatively new, one could have expected that a proof of the existence of a deterministic polynomial-time primality-proving algorithm would have appeared earlier, given the fact that some of the most intelligent people in the world have studied prime numbers intensively for thousands of years.

Marco13
  • 2,043
  • Given what you mentioned that computational complexity is relatively new, I find the thousands of years comment slightly irrelevant. – Tim Seguine Mar 12 '14 at 18:27
  • 1
    @TimSeguine I just wanted to point out that people have wondered "Is this a prime number or not?" for quite a while, and it is a very important question, and ... yet they did not find an efficient (deterministic, polynomial time) method for answering this question. (BTW: As you can see here, I don't hesitate to write irrelevant comments occasionally ;-) ) – Marco13 Mar 12 '14 at 21:44
  • I was just meaning that it doesn't seem very surprising that people didn't think that much about algorithmic complexity before it was a thing. The traditional method of primality testing was probably just considered good enough (it worked). – Tim Seguine Mar 12 '14 at 21:56
  • 1
    @TimSeguine They certainly did not try to find out whether "PRIMES is in P", but nevertheless, one could imagine that the ability to quickly check whether a number is prime or composite could have been of some interest. (I'm not a mathematician, so apologies if this sounds naive) – Marco13 Mar 12 '14 at 22:05
2

The binary GCD algorithm was only discovered (re-discovered?) in 1967.
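For reference, a minimal sketch of the algorithm (added for illustration): it computes a gcd using only shifts, subtraction and parity tests, with no division.

```python
# Binary GCD (Stein's algorithm) for non-negative integers.
def binary_gcd(a, b):
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:      # factor out common powers of two
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:            # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                   # both odd, so the difference is even
    return a << shift

from math import gcd
print(all(binary_gcd(a, b) == gcd(a, b) for a in range(200) for b in range(200)))  # True
```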

John M
  • 7,293
2

The Futurama Theorem: Regardless of how many mind switches between two bodies have been made, they can still all be restored to their original bodies using only two extra people, provided these two people have not had any mind switches prior (assuming two people cannot switch minds back with each other after their original switch).

KalEl
  • 3,297
2

The one that bumps up in my mind is the Erdős–Mordell inequality. It states that for a point $P$ in a triangle $ABC$, with $\alpha, \beta, \gamma$ denoting the distances from $P$ to the three sides, respectively, $$AP+BP+CP\geq 2(\alpha + \beta +\gamma).$$ This wasn't thought of until Erdős conjectured it in 1935.

P.S. People think Erdős might have been inspired by Euler's theorem for triangles, which is itself a 'shame' for Euclid to have missed.
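A quick numerical sanity check on random interior points (added for illustration; it only tests the inequality, it proves nothing):

```python
# Erdos-Mordell check: for random P inside a random triangle ABC, verify
# PA + PB + PC >= 2 * (sum of distances from P to the three side lines).
import random
from math import hypot

def dist_to_line(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    return abs((by - ay) * (px - ax) - (bx - ax) * (py - ay)) / hypot(bx - ax, by - ay)

def check_once():
    A, B, C = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    u, v = sorted((random.random(), random.random()))
    w1, w2, w3 = u, v - u, 1 - v                    # barycentric weights, all >= 0
    P = (w1 * A[0] + w2 * B[0] + w3 * C[0],
         w1 * A[1] + w2 * B[1] + w3 * C[1])
    lhs = sum(hypot(P[0] - V[0], P[1] - V[1]) for V in (A, B, C))
    rhs = 2 * (dist_to_line(P, B, C) + dist_to_line(P, C, A) + dist_to_line(P, A, B))
    return lhs >= rhs - 1e-9                        # tolerance for rounding

print(all(check_once() for _ in range(10000)))      # True
```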

azimut
  • 22,696
0

Answer

Mirsky's theorem (1971) in order theory was discovered late.

Background

In 1950, Robert Dilworth published what is today known as Dilworth's theorem:

In any finite poset $(X,\leq)$, the size of the largest antichain equals the minimum number of blocks of a partition of $X$ into chains.

By now, several proofs are known, but for the "hard" implication "$\Rightarrow$", all of them are somewhat involved inductive proofs.

There is a pretty obvious "dual" of the statement, today known as Mirsky's theorem:

In any finite poset $(X,\leq)$, the size of the largest chain equals the minimum number of blocks of a partition of $X$ into antichains.

Surprisingly, no one seems to have thought about it for two decades, until it was published in 1971 by Leon Mirsky. All the more surprising as the proof turned out to be much simpler than everything known for Dilworth's theorem. For the hard implication "$\Rightarrow$", there is a direct construction of the partition as the preimages of the map $$X \to \mathbb{N},\quad x\mapsto \text{size of the largest chain with maximal element }x.$$ Nothing comparable is known for Dilworth's theorem.
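The construction is short enough to code directly; a sketch on the divisibility poset of $\{1,\dots,20\}$ (added for illustration):

```python
# Mirsky's construction: partition a finite poset into antichains according
# to the length of the longest chain ending at each element.
from functools import lru_cache

elements = list(range(1, 21))
def less(x, y):                     # strict order: here, proper divisibility
    return x != y and y % x == 0

@lru_cache(maxsize=None)
def height(x):
    """Size of the largest chain with maximal element x."""
    return 1 + max((height(y) for y in elements if less(y, x)), default=0)

blocks = {}
for x in elements:
    blocks.setdefault(height(x), []).append(x)

# Each block is an antichain, and their number equals the longest chain length.
assert all(not less(x, y) for b in blocks.values() for x in b for y in b)
print(len(blocks), max(blocks))     # 5 5   (a longest chain: 1 | 2 | 4 | 8 | 16)
print(blocks)
```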

azimut
  • 22,696
0

The Deduction Meta-Theorem for slews of logical calculi, together with the slews of conditional proofs in ordinary mathematics that can be said to interact with it.

0

James A. Garfield (20th President of the United States) found in 1875 (~2175 years after Euclid) a quite simple, yet clever, trapezoid proof of the Pythagorean theorem. Here is a video explaining it:

https://www.youtube.com/watch?v=EINpkcphsPQ
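In case the link goes stale, here is a short outline of the argument (added for illustration; it summarizes the standard presentation of Garfield's proof). Take a right triangle with legs $a$, $b$ and hypotenuse $c$, and place two copies so that they form a right trapezoid with parallel sides $a$ and $b$ and height $a+b$. Computing its area in two ways gives

$$\tfrac{1}{2}(a+b)(a+b) \;=\; 2\cdot\tfrac{1}{2}ab + \tfrac{1}{2}c^2,$$

where the right-hand side counts the two copies of the triangle plus the isosceles right triangle with legs $c$ between them. Expanding yields $a^2 + 2ab + b^2 = 2ab + c^2$, i.e. $a^2 + b^2 = c^2$.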

Lenar Hoyt
  • 1,062
-2

An example of this that jumps to my mind is Fermat's Last Theorem, which was conjectured by Fermat in 1637 but only proved in 1995, despite many years of work by countless mathematicians. But this proof is (obviously) neither simple nor straightforward. I don't know how important that criterion is for our answers, but it is an example of a result that many believed to be true for a very long time, yet were unable to prove. Fermat's Last Theorem states that,

for $a$, $b$ and $c$ positive integers,

$a^n+b^n\neq c^n$ for $n$ greater than $2$.

user127688
  • 35
  • 12
    The proof is not straightforward, which is the important condition in the question. If you remove this criterion then many more examples come to mind, like the impossibility of squaring the circle. – Sawarnik Mar 04 '14 at 18:54
  • 7
    I don't think this satisfies the primary criterion, which is "surprisingly late". The theorem turned out to be surprisingly difficult, but there is no way that the proof we have could have been discovered by, say, Euler. – MJD Mar 04 '14 at 19:07
  • 3
    Fermat allegedly had a proof, that was too big for him to put in the margin. It would have been much simpler than Wiles' proof. –  Mar 05 '14 at 12:47
  • 3
    http://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem#Did_Fermat_possess_a_general_proof.3F – oakad Mar 06 '14 at 05:50
  • The proof depends on Ribet's proof of the $\epsilon$-conjecture and Wiles' proof of the Taniyama-Shimura-Weil conjecture. Neither is trivial even to state, let alone prove. – anomaly Nov 10 '14 at 05:43