72

Obviously, there are obvious things in mathematics. Why should we prove them?

  • Prove that $\lim\limits_{n\to\infty}\dfrac{1}{n}=0$?
  • Prove that $f(x)=x$ is continuous on $\mathbb{R}$?
  • $\dotsc$

Just to list a few examples.

Kaj Hansen
  • 33,011
x.y.z...
  • 1,150
  • 64
    Because "obvious" isn't justification. –  Apr 18 '14 at 16:22
  • 45
    Would you rather initially practice applying definitions/writing proofs to problems that are unintuitive and extremely difficult? If you were learning to ride a bike would you immediately start flying off 10 foot jumps? – BlueBuck Apr 18 '14 at 16:22
  • 31
    The baby begins to take a small step before running. –  Apr 18 '14 at 16:23
  • 89
    The examples you give are only "obvious" if you believe that the intuitive notions of limit and continuity are captured by their formal versions. But that is not a mathematical argument, and untrained intuitions tend to be wrong. The only way to actually acquire accurate intuition that indicates how well the formal notions relate to their informal counterparts is by seeing in these concrete situations how the arguments go. Only after these are understood is one in a position to really call them obvious (or to realize that they are not). – Andrés E. Caicedo Apr 18 '14 at 17:22
  • 34
    A more interesting example would have been something like $1+1=2$. The point of a proof of such a statement would not necessarily be to verify its truth, but rather to verify properties of our formal systems of axioms. – Andrés E. Caicedo Apr 18 '14 at 17:25
  • 7
    If something is obvious, you should be able to provide an actual proof on demand. Can you? – Mariano Suárez-Álvarez Apr 18 '14 at 17:48
  • 8
    A professor is lecturing and says "obviously we see that..." and writes down a result. A student interrupts to ask "Professor, is it really obvious?" The professor puts down the chalk and thinks silently for two minutes before announcing "Yes, it is," and continues the lecture. If you were the student would you feel like you got value for your tuition money? – Eric Lippert Apr 18 '14 at 17:58
  • 13
    I note also that you begin "obviously there are obvious things". This assertion is by no means obvious to me. Can you prove that there are obvious things? – Eric Lippert Apr 18 '14 at 17:59
  • 1
    @Eric: in the version I heard of that apocryphal story, there's no student asking the question. Rather the lecturer says something is obvious, stops, leaves the theatre, then returns some minutes later to say "yes, it's obvious". In that case aside from the loss of time it could represent good value, since what the lecturer has actually done is satisfied himself both that it is true, and that a proof is within reach of the students. – Steve Jessop Apr 18 '14 at 18:57
  • 1
    Hope people don't start trying to prove Pigeon Hole Principle – evil999man Apr 18 '14 at 19:13
  • 9
    To say that something is "obvious" and accept it without proving it, is to add yet another axiom to your reasoning. Keeping axioms simplified to a minimum is a good idea. – Tim S. Apr 18 '14 at 20:13
  • 2
    If these things are so obvious, then you ought to be able to prove them. If you cannot prove them, then perhaps they are not that obvious after all - oftentimes in math, when a seemingly obvious rule turns out to be very hard to prove, it turns out that there are subtle exceptions to it nobody imagined, or the proof would be extremely complex and present a valuable contribution to mathematics in its own right. Note how proving something is just a way of answering "This seems true, but is it really always true?" – Superbest Apr 18 '14 at 20:43
  • 1
    Also note that "obvious" is subjective - what is obvious to me may not be obvious to you. Mathematics is objective, so "obvious" is not sufficient justification for accepting a rule. – Superbest Apr 18 '14 at 20:45
  • 4
    @Andres: While I agree, I prefer the $2+2=4$. One can always argue that $2$ is defined as the term $1+1$ (e.g. in the language of fields). But it's rare that $2$ is part of the language, so $2+2=4$ will always require you to unwind the definitions of the terms and addition and so on. – Asaf Karagila Apr 18 '14 at 22:50
  • How do you know something is obvious before proving it? – Andy Teich Apr 19 '14 at 04:48
  • 2
    Obviousness is subjective. – user5402 Apr 19 '14 at 09:08
  • 3
    Take a piece of paper and draw any closed curve that doesn't intersect itself on it. That curve divides the paper into two distinct parts, called the inside and the outside of the curve. Is that obvious? Can you prove that no point on the inside is also on the outside? – Eric Lippert Apr 19 '14 at 11:52
  • It's obvious that the Riemann hypothesis is true, isn't it? Does anyone understand all the fuss about finding a proof? – CompuChip Apr 19 '14 at 16:40
  • 2
    @All: Thank you very much for your help. I really appreciate all your comments and answers. @CompuChip: How is the RH obvious? – x.y.z... Apr 19 '14 at 16:48
  • 1
    I would say that mathematicians' balking at "obvious" facts is their version of the scientist's skepticism. – abnry Apr 19 '14 at 21:28
  • 2
    My high school CS textbook waved around the "equation" Bewilderment + Exposure = Obvious. I am terrible at proofs, so would not be able to provide proofs to the example in the OP. However, my exposure to these areas of mathematics make the result obvious to me. Considering something to be obvious does not imply the ability to create a proof of it. – Brian S Apr 21 '14 at 15:44
  • 2
    There are also things which are "obviously" true, and yet provably false: http://en.wikipedia.org/wiki/Monty_Hall_problem – Mooing Duck Apr 21 '14 at 17:18
  • 2
    Several comments imply that truly obvious things should be provable. This is only true of non-axiomatic principles. @TimS.'s comment is more accurate--the alternative to proving the obvious is not to accept it without proof, but to accept it as an axiom. (Also, OP, CompuChip's comment on RH is a joke.) – Kyle Strand Apr 21 '14 at 18:53
  • @x.y.z... my comment about the RH was an absurd example to show why "obvious" is a subjective statement. I could ask you the same thing about the continuity of f(x) := x. – CompuChip Apr 22 '14 at 07:31
  • Personally think this applies to all areas of scientific research, not just mathematics. However, 'prove' might not be the best choice of word in some sciences. – a different ben Apr 22 '14 at 07:41
  • Because some obvious things are false. – jwg Apr 19 '14 at 18:07
  • If you can't explain or prove why you think something is true, then it is arguable that it is true. – oemb1905 Apr 22 '14 at 02:37
  • 1
    Sometimes the real beauty in Mathematics is in the proofs. It's often not enough to simply know that something is true; one needs to understand why it's true. Also, in proving (or trying to prove) what might seem obvious, we might stumble on an important result :) – Shaun Apr 23 '14 at 09:04
  • Obvious is subjective – Amr Apr 27 '14 at 08:53
  • Aren't most mathematical theorems semi-obvious, meaning it is the sum of finitely many obvious steps? If you are allowed to skip some you might as well skip everything. – mez May 09 '14 at 07:16
  • Sadly, I do not remember the source of this quote, but: "Nothing is so obvious that its obvious. The use of the word 'obvious' indicates the lack of a logical argument." – PhoemueX Sep 06 '14 at 20:19
  • I think this quote by Alexander Grothendieck is relevant here, "One should not try to prove something that is not obvious" – Aditya P Aug 07 '18 at 14:32

18 Answers

144

Because sometimes, things that should be "obvious" turn out to be completely false. Here are some examples:

  1. Switching doors in the Monty Hall problem "obviously" should not affect the outcome.
  2. Since Gabriel's horn has finite volume, then it "obviously" has finite surface area.
  3. "Obviously" we cannot decompose a sphere into a finite number of disjoint subsets and reconstruct them into two copies of the original sphere.
  4. Since the Weierstrass function is everywhere continuous, then "obviously" it must have at least a few differentiable points.

Of course, mathematics has shown that switching doors is to the player's advantage, that Gabriel's horn actually has infinite surface area, that you can indeed get two copies of the original sphere (see the Banach-Tarski paradox), and that the Weierstrass function is everywhere continuous but nowhere differentiable. The point being: there are many things out there which are "obvious" but actually turn out to be entirely counterintuitive and the opposite of what we would otherwise expect. This is the point of rigor: to double-check and make sure our intuition is indeed correct, because it isn't always.
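The Monty Hall claim in point 1 is also easy to check empirically. Here is a minimal Python simulation sketch (the function name, the fixed seed, and the deterministic host tie-break are my own choices, not part of the original problem statement; the tie-break doesn't affect the result, since switching wins exactly when the initial pick was wrong):

```python
import random

def monty_hall(trials=100_000, switch=True, seed=0):
    """Estimate the win rate of the stay/switch strategies by simulation."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # player's initial choice
        # Host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials
```

Running it, `monty_hall(switch=True)` comes out near 0.667 and `monty_hall(switch=False)` near 0.333, matching the 2/3 vs. 1/3 analysis.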

Kaj Hansen
  • 33,011
  • 8
    FWIW, it was always obvious to me in the Monty Hall problem that you should switch; it's a simple logical predicate (is the car in this box or one of those boxes?) and then the probability is trivial. Which just goes to show that what's obvious can't always be true as soon as two people disagree about what's obvious to them. The Banach-Tarski decomposition I'll give you. – Steve Jessop Apr 18 '14 at 18:33
  • 1
    I agree with Steve, Monty Hall is/was always obvious to me. On the other hand, the non-mathematician bit in me refuses to believe the Weierstrass function is real. It makes my head hurt. – Guy Apr 19 '14 at 15:27
  • 7
    In fact most (not all) people will find the Monty Hall problem is obvious once you change the numbers. Convince them that you're not doing a magic trick. Take a deck of cards. Ask them to pick one at random. Ask them how likely it is that they're holding the Ace of Spades, and how likely you still have it. Search through the remaining cards and select 50 cards to reveal that are not the AoS. Ask them how likely it is that they're holding the AoS, and how likely that you still have it. 3 is harder than 52, and some get it right for cards but then still answer differently for goats. – Steve Jessop Apr 19 '14 at 16:06
  • 6
    I would argue that it is obviously true that you cannot decompose a 3-dimensional sphere into a finite number of disjoint subsets and reconstruct them into two copies of the original sphere. The point of the Banach-Tarski paradox is not that it tells us something surprising and new about spheres, but that it lets us know that a certain formulation of set theory is not compatible with what we know to be true about spheres. – jwg Apr 19 '14 at 18:15
  • 6
    @jwg: Or rather that the definition of a sphere is incompatible with what we physically and intuitively knew. Mainly because a mathematical sphere is infinite, and a physical sphere is discrete. And infinite objects usually go nuts around the axiom of choice, and they go nuts without the axiom of choice in whole different ways. – Asaf Karagila Apr 19 '14 at 20:26
  • 1
    @jwg: Right, in that sense it tells you that it's important what you mean by "sphere". A sphere in ZFC (with the usual construction of $\mathbb{R}$) has different properties from whatever you know a sphere to be. So it tells you to pick only one from well-behaved sphere dissections or every set to be well-ordered. What it doesn't tell you is how to distinguish something that you believe "is true" but isn't true in ZFC (Banach-Tarski), from you just thinking you know something and being wrong (as if you don't switch boxes with Monty Hall). – Steve Jessop Apr 19 '14 at 20:49
    @SteveJessop You are falling into the same error by talking about "something you believe 'is true' but isn't true in ZFC". The B-T paradox is only a paradox if you think it tells you something about real-life spheres. But if you think this then you are wrong, as anyone who knows the set theory/measure theory as far as being able to understand the statement of B-T must realize. The B-T theorem and an 'analogous' statement about 'decomposing 3-dimensional spheres' have nothing to do with one another - it is perfectly possible for one to be obviously false and the other to be (nonobviously) true. – jwg Apr 19 '14 at 22:44
  • 1
    But I certainly agree with at least part of what you said to begin with, that B-T does not prove that you "can indeed get two copies of the original sphere". Quite aside from the philosophical question of what the sphere really is that the person is reasoning intuitively about, "can indeed" is quite strong for a result that depends on an axiom you can get a fairly long way without. – Steve Jessop Apr 20 '14 at 03:40
  • 3
    @jwg: actually on further reflection I've realised (at least one of the reasons) you're right. I was getting excited about the desirability of the Axiom of Choice and the meaning of "sphere", but the meaning of "dissect" in the theorem is far more directly counter-intuitive. You don't have to go anywhere near the AoC to be not talking about "real spheres". For example the idea of dissecting a glass marble into two pieces: "points with rational co-ordinates relative to its centre" and "the rest" is already physical nonsense and says nothing about real spheres. – Steve Jessop Apr 20 '14 at 10:56
  • @SteveJessop: For switching to be beneficial in the Monty Hall scenario, the probability that one would be offered a chance to switch if one is wrong must be at least half as high as the probability that would be offered a chance to switch if one was right. If someone looked at all 51 non-chosen cards and would only turn up 50 non-ace cards if the ace was the chosen card, and would otherwise turn up 50 cards including the ace ("sorry--you lose"), then switching would turn a 1/52 chance of winning into a 0% chance. – supercat Apr 21 '14 at 19:10
  • @supercat: right, in the Monty Hall problem it must be stated that what has happened is the rules of the game, not the choice of the host or random chance. The probability of being offered a switch is 1 in all cases. However, ambiguity in stating the problem is not the (only) reason why people find it non-obvious. They still find it non-obvious when stated correctly. In the alternate game you describe, switching in fact turns a 100% chance of winning into 0%, since the switch is offered iff the card held is a winning card. But it's not the Monty Hall problem. – Steve Jessop Apr 21 '14 at 19:12
  • 1
    @SteveJessop: If a player selects door #1 and a host says "let's see what's behind door #3" and shows that door #3 is empty, it's not obvious (unless explicitly stated) what prompted that particular choice. Some ways of describing the host's method of selection would make it obvious that switching would win 2/3 of the time; others would be equivalent, but not instantly recognizable as such, and consequently would make the benefit of switching less obvious. Unless it is specified that in the "initial choice was correct" case the host will choose the other doors with equal probability... – supercat Apr 21 '14 at 19:29
  • ...a player's expectation wouldn't go down by switching, but it might not go up either. For example, if after the player chooses #1 the host will alway show #3 unless it contains a prize (in which case he'll show #2) then having the host open door #3 would increase the odds a player's initial guess was right to 50-50, so the probability of a win from switching would also be 50-50. I've very seldom seen explicitly stated the fact that the host's choice must be random when it isn't forced, but it's essential for the claimed conclusion. – supercat Apr 21 '14 at 19:34
  • 1
    @supercat: all that's necessary is that he chooses in a way that gives the player no information (so that the player models it as independent of the location of the car) but sure, specifying that he chooses at random is sufficient to ensure that. It's true that the problem is often mis-stated but that doesn't detract from the fact that it's non-obvious when stated correctly. – Steve Jessop Apr 21 '14 at 19:40
  • @SteveJessop: If one specifies that the choice must be made in such fashion as to give the player no information, then the fact that the player gains no information should be obvious [it's stated in the puzzle!] If one specifies that if the player chooses #1, then host must show door #3 half the time when the prize is behind #1, and all the time when it's behind #2, then the fact that the prize is more likely to be in the location required for the "all the time" case is pretty clear. In any case, I think there are better examples of things that are "obvious but wrong". – supercat Apr 21 '14 at 19:55
  • @supercat: Actually "gains no information" would be (to me) an interesting way to test the problem. I think people would quite possibly still not find it obvious that "gains no information" means "the probability that you have the car remains 1/3" and therefore that they should switch. – Steve Jessop Apr 21 '14 at 20:04
  • 1
    @SteveJessop: That might be interesting to test. Of course, some people's ability to spot the obvious can be limited. One of my favorite examples of that is Eddie Kantar relating his efforts to teach people bridge; when asked where a certain card was after a sequence of plays, some people would reply that it must be in dummy, notwithstanding that all of dummy's cards were visible and the card in question was not among them! – supercat Apr 21 '14 at 20:12
  • Analogously, the Monty Hall problem never gives the player the opportunity of switching to the open box. https://xkcd.com/1282/ – Steve Jessop Apr 21 '14 at 20:14
113

I think the answer has four parts.

  1. If you ask a random person at Walmart what $$\lim_{n\to \infty} \frac{1}{n}$$ is, then you might not get much. If you tell them that it is $0$, then they probably won't think that it is obvious.

    Conversely, if you go to a high level research talk, you will hear "It is obvious that ..." or "It is clear that ..." a lot. And you might not think that it is very obvious.

    The point is: Whether something is obvious or not is relative to the person.

  2. Mathematics is built around proving things. There is a justification for everything. This is the very nature of mathematics. We start with some axioms and then we prove everything. So when you ask, "Why should we prove something?", the answer always contains: "Because we are doing mathematics."

  3. You don't need much experience teaching mathematics before you meet a student who is confused about losing points on an exam because of lack of justification. Often the student will respond that they just thought that it was obvious. When you press them a bit harder it becomes clear that they, in fact, have no idea how to justify what they did. Whether or not the student arrived at the correct answer is irrelevant; the point is: if something is truly obvious, then it shouldn't be hard to prove it.

  4. If you want to get good at proving difficult things, why not get experience with proving things by starting to focus on simple or "obvious" things? I think that the experience gained from proving even simple propositions is valuable later in your career as a mathematician.
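To make part 1 concrete, here is one standard way to write out the proof once the definition of the limit is unwound. Given $\varepsilon > 0$, pick any integer $N > \frac{1}{\varepsilon}$. Then for every $n \ge N$,

$$\left|\frac{1}{n} - 0\right| = \frac{1}{n} \le \frac{1}{N} < \varepsilon,$$

which is exactly the definition of $\lim\limits_{n\to\infty}\frac{1}{n}=0$. The proof is short, but writing it requires the very skill at issue: arguing from the definition rather than from intuition.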

lily
  • 3,727
Thomas
  • 43,555
  • 78
    This answer is proof there are three kinds of mathematicians: those who can count, and those who can't. – Caleb Stanford Apr 19 '14 at 14:03
  • 10
    Everybody knows that mathematicians never use numbers after middle school! – abnry Apr 19 '14 at 21:53
  • 18
    What if a mathematician is shopping at Walmart? – Saturn Apr 20 '14 at 02:40
  • 3
    @Omega: You use your credit card, and move along. – Asaf Karagila Apr 20 '14 at 03:20
  • 3
    "A simple closed curve divides the plane into exactly two parts, a bounded one and an unbounded one" is obvious, but rather hard to prove. – Joker_vD Apr 21 '14 at 06:17
  • 11
    I would like to give an example which satisfies part 4. It's not related to math but it fits quite well. Isaac Newton and Galileo "assumed" that time was a universal standard and flowed on its own, not depending on anything else. It was quite "obvious" to them and a lot of people. Then came along Albert Einstein, and asked "Why is it obvious?". He set out to prove (or disprove?) this "obviously" accepted fact, and in doing so he changed the very fundamentals of how we view our world. So it's actually quite important to understand the "obvious" and the "simple". The truth lies in them. – udiboy1209 Apr 21 '14 at 15:07
  • 11
    The Walmart example is weak. If you asked a random person at Walmart what 8*9 is they probably wouldn't know... That doesn't mean it needs to be proved. At the very lowest end I would have used Target. – blankip Apr 21 '14 at 15:31
  • 8
    Elitist much? I wonder how many grossly underpaid but otherwise mathematically apt people are down at walmart right now. (Including adjunct math faculty, even!) – bright-star Apr 21 '14 at 22:52
  • 1
    a random man at Walmart? What about women? Can this text be replaced by a random person? – A.L Apr 22 '14 at 14:03
  • 2
    @n.1 Keep your cis-normative privilege out of it! – Chris Cudmore Apr 22 '14 at 15:19
  • 1
    @blankip: But the answer to 8*9 does need to be proved. Just like 0.02¢ is not the same as $0.02 needs to be proved. Both because there are people who don't understand it, and because at least someone needs to know for a fact (and be able to vouch to the ignorant) that one can rely on it. That's why you have mathematicians, no? – Amadan Apr 23 '14 at 07:52
42

The main procedural reason is to show that your axioms correctly capture what you want them to capture: that is to say they are both "correct" and sufficient.

If it turned out that under our axioms $\lim\limits_{n\to\infty}\dfrac{1}{n}\neq0$ then we would probably choose a different definition of $\lim$ (or a different name for it), since it would not be describing anything that we'd like to call a "limit". It would not be "correct". Of course that wouldn't be a problem in some unusual topology, since by calling it "unusual" we mean that we don't expect it to behave the same as the usual one, so limits might be different. There may be a fine line between a result that's counter-intuitive but that we stand by our system anyway, and a result that causes us to conclude that our definitions or axioms aren't as useful as we thought they were.

Consider that Euclid tried and failed to prove the "obvious" parallel postulate. Fast forward 2000 years or so, and it's finally proved not to be a theorem of Euclid's other axioms. His first four axioms were not sufficient to describe what was "obvious". Furthermore, non-Euclidean geometries (in which the postulate is not true) are interesting and useful.

It is valuable to know whether or not "obvious" things are provable from your axioms.

When learning mathematics, it's useful to prove "obvious" results in addition to "non-obvious" ones because:

  • you "know" they're true before you start, which can save some frustration
  • the ease or difficulty of proving the obvious teaches you something interesting about the area you're working in
  • you train yourself to reason only using formal axioms, not by assuming any old "obvious" things you like, no matter how tempting they are
  • similarly you train yourself to accept from others only things that are proven, no matter how true they look
  • probably other benefits.

Then when something is stated as "obvious", or you want to state it so yourself, you quickly either prove it to yourself, or at least satisfy yourself that a proof is possible and you could write it out if really needed, or else you question the "obvious". It might turn out to be false (in which case you've avoided an error) or it might turn out to require quite a difficult proof (not so obvious after all despite your intuition being correct). Normally you would want to restrict the use of the word "obvious" to things where the first proof your reader would think of works (and hence anyone can easily prove them if they bother to write it out), not to things where your intuition is correct but the proof is tricky.

Steve Jessop
  • 4,106
  • A better example, also from geometry, would be Pasch’s axiom. It is entirely non-obvious that it is independent — and was thought to be a theorem by Euclid. – kinokijuf Apr 22 '14 at 10:56
  • @kinokijuf: if Euclid thought it was a theorem (i.e. he had an incorrect proof) then I suppose that's a good example of the benefits of checking your proofs for correctness, as well as the benefits of attempting to prove the obvious :-) – Steve Jessop Apr 22 '14 at 12:06
  • He did not have an incorrect proof, but used it implicitly without defining it as an axiom. – kinokijuf Apr 22 '14 at 14:52
38

I always say that the most difficult exercise in my undergrad studies was the first question in linear algebra. We were taught about the axioms of a field. Then we had to prove the following thing:

For every $x$, $x+0=x$.

The catch is that the axioms we were given stated $0+x=x$. So we had to use the axiom of commutativity first, then we could conclude that.

Why was that so difficult? Because of two reasons. The first is that you had to understand not just what you should prove, but also why you should prove it. The second reason is that you had to come up with a proof which was not the single word "obviously" or "trivially".
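Written out, the proof the exercise was after is just two applications of the listed axioms:

$$x + 0 = 0 + x = x,$$

where the first equality is commutativity of addition and the second is the additive-identity axiom $0+x=x$. Two lines, and neither "obviously" nor "trivially" appears in them.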

So why do we have to prove trivial things?

  1. Because it's good practice. It's a great practice for understanding why, how and what to do when you're writing a proof. And the good thing about trivial things is that they are trivial and you know they're true, so you don't have to bust your hump in order to prove something which might not be true after all.

  2. Because it teaches you to sit down and prove everything. Later on in your studies, you might have to prove more complicated things, and sometimes things look obviously true, but since you don't spend time proving them, you will take them for granted, only to waste precious time before realizing they are false.

    If you sit down to prove everything, you'll learn to do that later on in your work, and avoid wasting time on false assumptions, like I've done recently. Several times. (Yes, do as I say, not as I do!)

  3. Because trivial things are generally easy to prove, and proving them makes sure that you understand the process of finding a proof by verifying the definition. To show that $f(x)=x$ is continuous is easy: given $\varepsilon>0$, take $\delta=\varepsilon$, and then if $|x_0-x|<\delta$, we have $|f(x_0)-f(x)|=|x_0-x|<\varepsilon$.

    By doing so, you review the definition of continuity, you understand it better. Using this understanding it is easier to tackle more difficult questions.

  4. Because kihon is as important as, if not more important than, advanced ideas. Let me digress and tell you a short story about my past (it's a true story). Some decade ago or so I used to practice ninjutsu for about a year and a half. It was great, I loved it. We had a great group, and a great sensei whom I haven't seen for the better part of ten years now, but to whose call I would still rally immediately.

    We would often do advanced techniques involving throwing, or evading weapons, or using weapons, or whatever. But he would constantly remind us that kihon is the most important part. Kihon, in Japanese, is basics. In the context of ninjutsu it means that you have to know how to punch properly, how to kick properly, and how to fall properly. If you know that, then you have a much better chance of winning a fight (and in ninjutsu, generally, there's no scoring or rules, the winner is the guy who can walk away).

    What does that mean? It means that the guy who spent a month throwing fifty thousand punches will never punch improperly, and in a fight he will have a better chance to survive than someone who gives half-assed punches but can make a really mean throw.

    So what does all that have to do with mathematics? Kihon. In the context of mathematics kihon means three things. It means being able to understand a definition, it means being able to understand the problem that you have to prove, and it means being able to write a proof.

    If you try to jump, your kihon is weak. And it will haunt you. Trust me on that. You will get stuck later on, and it will trouble you. But if you sit down to write a proof why $x+0=x$, then you understand the definitions of a field (i.e. the axioms), and you understand how to read the problem (i.e. why do we have to prove something here), and you understand how to write a proof (i.e. well, really just how to write a proof).

    These skills, the mathematical kihon, will make you a mathematical ninja at some point. And the better your kihon is, the better you will be.

So sit down to write proofs for trivial things, but remember that as the levels go by, you can allow yourself leeway. When you've mastered one level, it's okay to "trivialize" certain proofs; but from time to time it's also good to repeat them.

Asaf Karagila
  • 393,674
  • Thanks for the short story and the discussion of kihon. Amazing analogy! – Prism Apr 29 '14 at 02:08
  • I remembered it last semester before the exam, and I told it to some students when they asked how to study for the exam (we had a slightly different course from previous years in terms of focus and how much we managed to cover). I agree it's a pretty great analogy. – Asaf Karagila Apr 29 '14 at 08:12
  • Ack. The kihon in this post distracts from the initial excruciating point: "The catch is that the axioms we were given stated 0+x=x ." Is the pain evident? How could we possibly know that this is the axiom and not x+0=x!?!?! The take home message should give us all pause: nothing is provable without a clean, clear list of axioms and already-proved theorems. Kihon, then, is only defined w.r.t. a preexisting list. When asked to prove something, the proper kihonic answer is "It is unprovable --unless and until you divulge your list of your axioms and proved theorems." – Peter Leopold Dec 16 '20 at 22:56
  • 1
    @Peter: I honestly don't know what you're trying to say. I am talking about my very first week in uni, and the axioms were given. I'm not expecting the readers of this post to guess these axioms. The kihon, however, is about being able to understand what you need to prove and how to prove it. That is independent of your axioms. – Asaf Karagila Dec 16 '20 at 23:01
24

Things that are obvious to one person are not necessarily obvious to another. Furthermore, proofs dispel (most) skeptics; just thinking something is true does not make it true. For example, before I entered university, I was under the impression that there were twice as many elements in $\mathbb{Z}$ as in $\mathbb{N}$. I would've called this obvious, but after learning more about it, it is no longer "obvious", as $|\mathbb{N}|=|\mathbb{Z}|$.

If I tell you that $|\mathbb{N}|=|\mathbb{Z}|$, you might not believe me; but if I proved it to you, showing you without a shadow of a doubt that my assertion was true, then you would believe me. So a proof is a sort of argument to show the reader that a claim is true.
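The bijection behind $|\mathbb{N}|=|\mathbb{Z}|$ can be written down explicitly. A short Python sketch of the usual interleaving $0, -1, 1, -2, 2, \dotsc$ (the function names are my own, for illustration):

```python
def z_to_n(z):
    """Map an integer to a natural number: 0, 1, -1, 2, -2, ... -> 0, 2, 1, 4, 3, ..."""
    return 2 * z if z >= 0 else -2 * z - 1

def n_to_z(n):
    """Inverse map, witnessing that z_to_n is a bijection."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2
```

Because `n_to_z` inverts `z_to_n` exactly, every natural number is hit exactly once: the two sets have the same cardinality, even though $\mathbb{Z}$ "looks" twice as big.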

  • 4
    Besides, the numbers of elements in $\mathbb{Z}$ and $\mathbb{N}$ "obviously" have opposite parity because $0$ has no partner to pair up with ;-) – Steve Jessop Apr 18 '14 at 18:27
  • 4
    I would argue that your original interpretation that there are twice as many elements in $\Bbb Z$ as in $\Bbb N$ is still true, even though $|\Bbb Z|=|\Bbb N|$, which is a little more mind-stretching. – Mario Carneiro Apr 19 '14 at 08:15
  • 1
    @MarioCarneiro But our intuitive notion of number of elements, or "size" of a set, does not include the additive structure on that set. To claim that there are twice as many elements in $\mathbb{Z}$ than there are in $\mathbb{N}$ you must introduce additional structures on the set $\mathbb{Z}$ and $\mathbb{N}$ other than just the labels, and then you must give up the idea that the size of a set does not vary depending on how we label it. – Caleb Stanford Apr 19 '14 at 13:54
  • @MarioCarneiro Your statement that there are twice as many elements in $\Bbb Z$ as $\Bbb N$ is exactly wrong. Neither of them has a finite number of elements, so both are infinite. There are many different cardinalities (sizes) of infinity, and there are no two different sizes of infinity where one is exactly twice the other. $\Bbb N$ and $\Bbb Z$ have exactly the same cardinality, exactly the same size, and exactly as many elements as each other. This is exactly what's meant by $| \Bbb Z | = | \Bbb N |$ – Travis Bemrose Apr 20 '14 at 00:43
  • 5
    @Goos I see I need to justify my statement. I am not appealing to any additive properties on $\Bbb Z$ or $\Bbb N$. In fact, I claim that $\Bbb N$ also contains twice as many elements as $\Bbb N$. Obviously it would be naive and wrong to simply claim that $2\cdot\infty=\infty$; rather I mean "twice as many elements" in the same way that equinumerous sets have the same number of elements. If there is a bijection $f:\{0,1\}\times A\to B$, then there are twice as many elements in $B$ as $A$. (con't) – Mario Carneiro Apr 20 '14 at 01:03
  • 5
    @Travis (...con't) Thus $\Bbb N$ has twice as many elements as itself (and $\Bbb Z$), and it also has the same number of elements as itself (and $\Bbb Z$). There is no contradiction, and this is the most consistent definition of "twice as many" that applies to infinite sets. The confusion is perhaps that "of the same size" and "of twice the size" are not necessarily mutually exclusive options. (In the language of cardinals: A cardinal $\frak m$ is twice $\frak n$ if ${\frak m}=2\cdot \frak n=n+n$.) – Mario Carneiro Apr 20 '14 at 01:14
  • @MarioCarneiro Thanks for explaining; that is a nice way to look at it. I had assumed that you were talking about natural asymptotic density or some similar notion. – Caleb Stanford Apr 20 '14 at 04:43
20

Just a couple of minutes ago I provided an answer that was wrong. The question was:

$$\lim_{n\to\infty}(1+2^{-n})^{2^{n+2}}$$

What I thought:

$$\left(1+\frac1{2^{\infty}}\right)^{2^{\infty+2}}=\left(1+0\right)^\infty=1$$

But the answer was actually $e^4$. Even the computer made a mistake (when $n$ got too high). Sometimes, something that seems obvious may be wrong.
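The computer's mistake is a floating-point effect worth making concrete: once $2^{-n}$ falls below the machine epsilon of double precision, $1+2^{-n}$ rounds to exactly $1$ and the computed value collapses. A small Python sketch (the helper name `term` is illustrative):

```python
import math

def term(n):
    # Evaluate (1 + 2^-n)^(2^(n+2)) in ordinary double precision.
    return (1.0 + 2.0 ** -n) ** (2.0 ** (n + 2))

print(term(30))     # close to the true limit e^4
print(term(60))     # exactly 1.0: 1 + 2^-60 rounds to 1.0
print(math.exp(4))  # the actual limit, about 54.598
```

So for moderate $n$ the numerics agree with $e^4$, but for large $n$ the machine "confirms" the obvious-but-wrong answer $1$.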

Shahar
  • 3,302
16

Because obviously a continuous function must be piecewise monotone. And therefore differentiable at all but at most countably many points.

Ampère "proved" this result in 1806 and it was considered a theorem for quite a while.

Then Riemann came up with an example of a function that, when integrated, produces a function (i.e. $x \mapsto \int_a^x f(t)\,dt$) which is not differentiable on a dense set of points.

The final nail in the coffin came in 1872 when Weierstrass published his proof that $x \mapsto \sum_{n=1}^\infty b^n \cos(a^nx)$, $0<b<1$, $ab > 1+3\pi/2$ is continuous (because the series is uniformly convergent) and nowhere differentiable.
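One can even probe Weierstrass's example numerically: evaluate a partial sum of the series and watch the difference quotients at $0$ blow up as the step shrinks, which could not happen for a function differentiable there. A Python sketch of my own (the parameters $a=13$, $b=1/2$ satisfy $ab=6.5>1+3\pi/2\approx 5.71$; the function name is illustrative):

```python
import math

def weier(x, a=13, b=0.5, terms=40):
    # Partial sum of the Weierstrass series  sum_n b^n cos(a^n x).
    return sum(b ** n * math.cos(a ** n * x) for n in range(terms))

# Difference quotients |W(h) - W(0)| / h grow without bound as h -> 0.
for m in [2, 4, 6]:
    h = 13.0 ** -m
    print(m, abs(weier(h) - weier(0)) / h)
```

At $x=0$ every term $\cos(a^n h)-1$ is non-positive, so there is no cancellation, and the quotient for $h=a^{-m}$ grows roughly like $(ab)^m$.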

The history of real analysis putting the results of calculus on a solid footing is peppered with plenty of such stories. Things that sound intuitively obvious but are in fact wrong.

Analysis by Its History (Undergraduate Texts in Mathematics) by Ernst Hairer and Gerhard Wanner is likely to contain more examples, but that's the most famous one that I can think of off the top of my head.

kahen
  • 15,760
  • 1
    +1 for an interesting example, but there are some historical inaccuracies here so I will post a more accurate version of the Ampère story. – Mikhail Katz May 09 '14 at 08:30
13

My old real analysis professor once made the following statement:

Mathematical proofs come in three types: proving the obviously true to be true, proving the not-so-obviously true to be true, and proving the obviously false to be true.

He goes on to give an example of the third kind of proof with the following theorem:

Theorem. Let $I$ be an interval and $f : I \to \mathbb{R}$ strictly monotone. Then the inverse function $f^{-1} : f (I) \to I$ is continuous.

The point is that some "obvious" things are not so obvious at all, and the truth may turn out to be the complete opposite of what you expected.

  • 2
    Another theorem which is of the third kind in my opionion is the following: The abelian groups $(\mathbb R,+)$ and $(\mathbb R^2,+)$ are isomorphic. – Christoph Apr 22 '14 at 13:24
  • @Christoph That is indeed counterintuitive! I imagine the proof is to find a basis for both groups as vector spaces over $\mathbb{Q}$? I wonder if there is a proof not requiring the axiom of choice. – Caleb Stanford Apr 22 '14 at 13:39
  • 3
    That is indeed the way to prove this. The answer to your question about a proof without AC is no. See http://mathoverflow.net/questions/25375/ac-in-group-isomorphism-between-r-and-r2 – Christoph Apr 22 '14 at 13:44
12

Just an addition to the many good answers here: Every axiom is obvious, and every rule of inference is obvious. (Otherwise, they wouldn't be very good axioms.) Thus, every single proof in existence is a long string of trivialities. Thus if you agree that two trivialities make a triviality, then everything in math that is provable is also trivial, and a proof is merely a mechanism for showing how trivial the result is. This may seem like an argument to absurdity, but in fact it is the case that every proof is "trivial", once you understand the steps. That's what the proof is there for.

So, to answer the original question, we prove things in order to show how obvious they are, not the other way around (which is what occurs in the minds of people that would call the fact "obvious"). Everything is obvious under the right mindset; a proof just shows people the right mindset to use, after which they too will find the result obvious. That's how mathematical knowledge is transferred.

  • Your answer presupposes the principle of induction, which was clearly demonstrated as fallacious in this context by the sandpile paradox. :-) – Asaf Karagila Apr 19 '14 at 08:21
  • @Asaf Which is obvious, no? ;) Actually, I would argue that it is a metamathematical argument about proofs, so that I can argue under a standard mathematical framework, even if the proofs themselves do not. Still, it's hard to imagine a definition of what a proof is that doesn't admit some kind of induction, since proofs are themselves inductively defined. – Mario Carneiro Apr 19 '14 at 08:29
  • @Asaf My example has some very strong ties to the sandpile paradox, which I had in mind as well. Really, I would argue that the error is the assumption of transitivity, or else in the interpretation of what "triviality" really means, which is what I wrote about in the answer (I am arguing that there really is an appropriate meaning for triviality that is transitive and hence applies to all proofs, namely "trivial under the right mindset".) – Mario Carneiro Apr 19 '14 at 08:31
  • "Every axiom is obvious" -- what about something like the existence of Grothendieck universes / strongly inaccessible cardinals? You can boil that down to a question of personal taste, I think all that's "obvious" about it is that it clearly follows from itself ;-) – Steve Jessop Apr 19 '14 at 16:21
  • @Steve You are right that as you go up the large cardinal hierarchy, the axioms become less and less obvious, which is exactly why they are seldom used. The best way to understand how "obvious" they are is to agree that their consequences are desirable. That is, it would be nice if there were a hierarchy of universes closed under all these operations and as large as we need; since we can't find any contradictions in so assuming, we may as well take it as an axiom. This is less satisfying to the Platonist, who probably wouldn't want to venture beyond ZFC or maybe even just ZF. – Mario Carneiro Apr 19 '14 at 19:05
  • I consider there to be an important distinction between these sorts of axioms that describe desirable properties and the older axioms, which describe what is "true" (under some Platonist interpretation). In the large cardinal hierarchy, no one can really tell what is "true", since our intuition runs out around $\omega_1$ and we are "flying on instruments" thereafter, so the axioms that result are almost always opaque to our finitistic intuition. – Mario Carneiro Apr 19 '14 at 19:14
10

Because your first example is not even true without suitable conditions.

The limit of $1/n$ as the integer $n$ increases forever depends on your number system. In the standard reals the limit is indeed zero. However, in the reals augmented with infinitesimals (as in the nonstandard reals) the limit does not exist, because every positive infinitesimal is strictly less than every $1/n$ for positive integer $n$, and there is no greatest positive infinitesimal (since $2x > x$ when $x$ is positive).

Anyone claiming it is "obvious" that this limit is $0$ is implicitly claiming it is "obvious" that infinitesimals don't exist. Why should that be obvious?

Setting infinitesimals aside, consider the standard complex numbers. Suitably defined, it is indeed the case that $\lim_{n\to\infty} 1/n$ exists there and is $0$. But how do you define "limit" so as to make this true? Could it not happen that the limit does not exist because the set of complex numbers closer to $0$ than any given $1/n$ cannot be linearly ordered?

One can rule out this possibility with a suitable definition of limit, but "obvious" is no less appropriate for the complex numbers than for the reals augmented with infinitesimals.

7

Here's an obvious one:

There are half as many points on a number line between 0 and 1 as there are between 0 and 2.

(Of course, as "obvious" as that may seem, it's not true. In fact, there are the same number of points between 0 and 1 as there are between 0 and 2. But proving that in a way that would convince a skeptic may be a bit challenging.)
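One standard way to convince the skeptic is to exhibit the pairing explicitly:

```latex
f \colon [0,1] \to [0,2], \qquad f(x) = 2x, \qquad f^{-1}(y) = \tfrac{y}{2}.
```

Since $f$ and $f^{-1}$ are mutually inverse, every point of $[0,1]$ is matched with exactly one point of $[0,2]$ and vice versa; the real work is persuading the skeptic that "same number of points" should mean exactly this.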

That said, the main reason for proving obvious things is that proofs are the fundamental building blocks of mathematics. If something is true, a mathematician should be able to prove it. If something cannot be proven, that will (or should) stick in the mathematician's craw. It is not enough to appear true, it must be proven true.

Two examples that spring to my mind are:

Both of these were widely regarded to be true (no counterexamples could be found), but the proofs were elusive for decades or centuries.

The long and short of it: No proof, no truth – no matter how obvious something may seem.

J.R.
  • 171
  • An even more obvious example in the same vein is that there are more rational numbers than there are natural numbers (this is clear, because the rationals are dense). – user1729 May 09 '14 at 08:47
  • 2
    There are half as many points on a number line between 0 and 1 as there are between 0 and 2. I would say that this is true, since Lebesgue measure is a more natural notion of size than cardinality, for real sets. – Caleb Stanford Aug 04 '15 at 14:23
  • There isn't any "number" of points between 0 and 1 so there certainly can't be the same number between 0 and 1 as between 0 and 2 – Benjamin Lindqvist Jun 14 '16 at 23:44
  • Somehow got to this thread again, and wanted to comment that There are half as many points on a number line between 0 and 1 as there are between 0 and 2 could be argued to be true even in the sense of cardinality, because $\mathfrak{c} = 2 \mathfrak{c}$. So your point was correct, but your choice of statement was unfortunate. – Caleb Stanford Jul 26 '16 at 01:54
6

Someone called Jerry Bona once pointed out the following.

The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?

The point is that all three statements are equivalent, but the obviousness of the statements vary.

The Axiom of Choice states that if you have a collection of (possibly infinitely many) nonempty sets, then you can take an element from each set and make a new set out of these elements. This is obviously true.

The Well-Ordering Principle states that every set has a well-ordering (that is, a total order in which every nonempty subset has a least element). This is clearly false! (It is far too strong.)

Zorn's Lemma states that if you have a partially ordered set $P$ with the property that every chain in $P$ has an upper bound in $P$, then $P$ contains at least one maximal element. I have only recently understood what this actually means.

Informally, the Axiom of Choice is this: if you are in an old-fashioned sweetie shop, with some infinitely big sweetie jars, then you can take one sweetie from each jar and put them all in another jar. Just take the top sweetie, you say? Well, that requires that the sweeties in each jar are well-ordered...

user1729
  • 31,015
4

In the 19th century it took a while to clarify the notion of continuity of a function. Ampère actually published a "proof" that any continuous function is differentiable, period. Even Cauchy seems to have felt that a continuous function should fail to be differentiable at no more than a finite number of points (Boyer in his book erroneously reverses the roles of Ampère and Cauchy). For a more detailed discussion see this article. Such a "theorem" must have seemed intuitively "obvious" at the time.

Mikhail Katz
  • 42,112
3

The obvious is hard to prove and often wrong.

Disintegrating By Parts
  • 87,459
  • I wouldn't say often. But there are certainly times where it is wrong, which is why rigor in proofs is important. – MT_ Apr 18 '14 at 21:55
  • If they're obvious, why can't you explain them very simply? – Disintegrating By Parts Apr 19 '14 at 16:51
    @MichaelT I don't take often to mean the majority, or even a large minority, of the time. I take it to mean that "it crops up often". If one were to consider 1,000 different concepts in a month, and once a month something that seemed obvious at first later turned out not to be so obvious, that once-a-month occurrence would still be often. – Travis Bemrose Apr 20 '14 at 00:54
3

While enough counterexamples have been covered and the subjectivity of obvious has been noted, there is another subjective aspect of "it's obvious" that has to do with the student's own mathematical education. I would like to add this one little thing to the answers here: The advances in one's education proceed on many fronts, but a step forward occurs when the student recognizes that something that used to seem obvious is in fact more complicated.

A friend of mine in graduate school once got a fortune cookie, "Never try to prove what no one doubts." Oh, did that get a laugh! But it is not such a bad principle. A good teacher ought to be able to create such doubt in a student, at the appropriate time. One of the earliest doubts I recall, certainly from high school but maybe earlier, is whether or which things could be proved at all. It was a naive view and included nonmathematical things. Perhaps my interest was idiosyncratic and one of the reasons I came to like mathematics.

I alluded above that timing can be important. There is inevitably an order in which topics are discussed in education. I don't think it helps much to prove properties of arithmetic before the student notices that there are properties of arithmetic. In the case of arithmetic, we usually wait too long, and the moment when the student is curious about the properties passes. By college they are as familiar as the fact that the sky is blue. I don't think it does much good to simply assert that "obvious" is subjective, that if it is obvious then you should be able to prove it easily, or to give them counterexamples that they cannot understand. You have to show them things they can understand. If they are ready, a counterexample can be just the thing.

On the other hand, that obvious things are easily proved is not clear to me. It seems pretty obvious that there is an empty set. Perhaps it can be proved based on other axioms. And perhaps, someday, we can have a system of nonobvious axioms to prove all the obvious things. :) Going further down this road to absurdity won't be useful, so let me turn back. The problem I see is that if the student does not see the need or the usefulness in proving, then the student is free to disregard the teacher's authority or the accusation implicit in the charge "if it's obvious then it's easy." Students have been made to do a lot of boring busy work throughout high school. If doing the proofs seems that way to them, then the teacher is losing their attention and possibly their respect.

My purpose, prompted by the education tag, was simply to add the dimension of the development of human understanding to the answers, which so far have mainly addressed mathematical aspects of the question.

Michael E2
  • 1,569
2

There are at least two reasons:

  1. If you wish to improve or refresh your knowledge of how the definitions work.

  2. If you wish to improve or refresh your knowledge of useful general theorems, proven by others, that have the obvious result as a special case. For example, the second statement trivially follows from (a) the fact that all polynomials are continuous, or (b) the fact that all differentiable functions are continuous, etc. ... I wonder what the most complex theorem is from which it trivially follows?

2

If it's obvious then the proof should be trivial and pose no inconvenience. If you find it difficult to prove then perhaps it's not so obvious.

R R
  • 1,084
0

In my opinion, only axioms should be treated as obvious, above all while a theory is being explained to others. I think it is simply immoral for a mathematician writing the proof of a proposition in a book not to give every smallest detail in the chain of logical inferences, skipping the task of making the subject perfectly clear through a lazy abuse of the magic words trivial, obvious, exercise. If one wants to write mathematics for others, one must be as clear and rigorous as possible, at least while developing theory and examples. If instead one is too lazy to bear such a sacred task, then one should not even begin to write mathematics, because the essence of mathematics is in its logical clarity, not in a pseudomystic vagueness (alas, many important books by great mathematicians are "written" in this terrible style!).

Diogenes
  • 643
  • 1
    Why have you downvoted the answer? Did I go off topic? – Diogenes Apr 18 '14 at 18:08
  • 5
    It is a bit rantish. Don't know about the downvote, though. – Pedro Apr 18 '14 at 18:16
  • I don't completely agree. – Sawarnik Apr 18 '14 at 18:28
  • @Sawarnik I completely disagree. (But I didn't downvote.) The fact that some, or even many, books are written in “this terrible style” has nothing to do with the question. – egreg Apr 18 '14 at 18:29
  • 3
    This is like saying "it is simply immoral for a computer programmer not to write in machine language." – Lee Mosher Apr 18 '14 at 22:04
  • Perhaps I haven't expressed my point of view properly. I am not saying that math books should be written in a formal language. I am simply saying that if a book is written to explain a subject, the author should not omit any logical inference in the proofs, at least while developing the theory. But perhaps I am too radical. I also understand that perhaps my answer is off topic. If you want I'll delete it, no problem. – Diogenes Apr 19 '14 at 11:14
  • +1 for great philosophy. -1 for not really answering the question. – Travis Bemrose Apr 20 '14 at 00:56
  • 1
    Not my downvote, either, but: Immoral? Sacred? For someone professing such strict adherence to rigor, those words are a bit over-the-top. – J.R. Apr 20 '14 at 21:36