
OK, my book has a proof that a continuous function defined on $[0,1]$ attains all values between $f(0)$ and $f(1)$ using some ultra case-bashy stuff, but I have two different proofs. Are these correct?

(a) Let the desired value be $m$. We prove that there exists a sequence of reals $\{a_i\}_{i=0}^{\infty}$ such that $\lim_{i\to\infty} a_i$ exists, with $0 \leq a_i \leq 1$ for all $i$, satisfying $\lim_{i\to\infty} f(a_i) = m$. Then, by the definition of continuity, $f\!\left(\lim_{i\to\infty} a_i\right) = \lim_{i\to\infty} f(a_i) = m$.

Construction: From the definition of continuity, for every $x \in [0,1]$ and any $\epsilon > 0$, there exists a positive $\delta =: h(\epsilon, x)$ such that for all $y$ with $|y - x| < \delta$ we have $|f(y) - f(x)| < \epsilon$. Let $g(\epsilon) := \min\{h(\epsilon, x) \mid x \in [0,1]\}$.

Now, set $\epsilon_0 := 10^{-10000}$. Split $[0,1]$ into $\lfloor \frac{1}{g(\epsilon_0)} \rfloor$ equal intervals $I_1, I_2, \cdots, I_N$ (where $N = \lfloor \frac{1}{g(\epsilon_0)} \rfloor$ is a big number), and let $a_0$ be the left endpoint of an interval $I_i$ for which $\max\{f(x) \mid x \in I_i\} \geq m \geq \min\{f(x) \mid x \in I_i\}$.

Now, set $\epsilon_{1} := \epsilon_0^{100000}$, divide this $I_i$ into $\lfloor \frac{1}{g(\epsilon_1)} \rfloor$ equal intervals, and choose $a_1$ to be the left endpoint of an interval $I_{i_{j}}$ for which $\max\{f(x) \mid x \in I_{i_{j}}\} \geq m \geq \min\{f(x) \mid x \in I_{i_{j}}\}$.

Repeat the process.
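The construction can be illustrated numerically. The sketch below is only a rough model of it: it approximates the min/max over each subinterval by sampling on a grid, uses a fixed number of pieces per stage instead of $\lfloor 1/g(\epsilon_k) \rfloor$, and the function name and parameters are illustrative choices, not part of the proof.

```python
# Rough numerical model of the nested-subdivision construction in (a).
# Assumptions: min/max over a subinterval are approximated by sampling,
# and each stage uses a fixed number of pieces rather than floor(1/g(eps_k)).

def subdivision_sequence(f, m, stages=12, pieces=10, samples=100):
    """Return left endpoints a_0, a_1, ... of nested subintervals of [0, 1]
    on which the sampled values of f straddle the target value m."""
    lo, hi = 0.0, 1.0
    endpoints = []
    for _ in range(stages):
        width = (hi - lo) / pieces
        for i in range(pieces):
            left = lo + i * width
            values = [f(left + j * width / samples) for j in range(samples + 1)]
            if min(values) <= m <= max(values):  # sampled values straddle m
                lo, hi = left, left + width      # keep this piece
                break
        endpoints.append(lo)                     # a_k := left endpoint kept
    return endpoints

# Example: f(x) = x^2 with target m = 0.5; the a_k approach sqrt(0.5).
a = subdivision_sequence(lambda x: x * x, 0.5)
print(a[-1], 0.5 ** 0.5)
```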

(b) Another proof: Assume WLOG $f(0) < f(1)$. Let the desired number be $m$. If $f(0) = m$ or $f(1) = m$, then we're done. Otherwise, divide the reals in $[0,1]$ into sets $L$ and $R$ such that:

  1. $y \in L$ if and only if $\max\{ f(x) \mid 0 \leq x \leq y \} \leq a$
  2. Otherwise, put $y$ in $R$.

Now, $L$ exists because, as $m \neq f(0)$, we can pick a very small $\epsilon > 0$ such that for $0 \leq x \leq \epsilon$ we have $f(x) < m$, and $R$ exists by the analogous argument at $f(1)$.

Now it's well known that a number $y$ exists such that every member of $L$ is smaller than or equal to it, and every member of $R$ is larger than or equal to it. As $y$ must lie inside $[0,1]$, we're done.
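A sketch of the step this still leaves open (that $f(y) = m$ for such a $y$); here $y := \sup L$, and $\sup$ is written in place of $\max$ so as not to assume the extreme value theorem:

If $f(y) > m$, continuity at $y$ gives a $\delta > 0$ with $f > m$ on $(y-\delta, y]$, so no point of $(y-\delta, y]$ lies in $L$ and hence $\sup L \leq y - \delta < y$, a contradiction. If $f(y) < m$, then $y < 1$ (because $f > m$ near $1$), and continuity gives a $\delta > 0$ with $y + \delta \leq 1$ and $f < m$ on $[y, y+\delta]$; since every $z < y$ lies below some member of $L$, we have $\sup\{f(x) \mid 0 \leq x < y\} \leq m$, and together with $f < m$ on $[y, y+\delta]$ this gives $\sup\{f(x) \mid 0 \leq x \leq y+\delta\} \leq m$, so $y + \delta \in L$, contradicting that $y$ is an upper bound of $L$. Hence $f(y) = m$.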

katana_0
    In your title you should change "mean" to "intermediate". – amsmath Oct 24 '17 at 17:45
  • Your second proof is fine in its overall shape, though you mean $m$ instead of $a$, and depending on the context of the proof (first course in analysis?) you might want to be clear why you can WLOG that. You should also clarify that the set in your step 1 exists because the continuous image of a compact set is compact. And the word "exists" in the following paragraph should be replaced by "is non-empty", and you should probably spell out exactly why the number $y$ has $f(y) = m$. – Patrick Stevens Oct 24 '17 at 17:59
  • @PatrickStevens Thanks for your reply! Unfortunately, I still don't know what compactness means :( (BTW, the notation in the first and second proofs is separate..., so there is no $a$ in the second proof.) – katana_0 Oct 24 '17 at 18:06
  • @AlexKChen: "compact" is simply a short way of saying "both closed and bounded". If you have a compact set S and a function f which is continuous everywhere on S then the set of f(x) for all x in S is also compact. – Eric Lippert Oct 24 '17 at 18:17
  • @AlexKChen: There certainly is an a in the second proof, at the end of the line labeled 1. – Eric Lippert Oct 24 '17 at 18:18
  • @EricLippert Oh yeah darn sorry I'm stuipd – katana_0 Oct 24 '17 at 18:19
  • In your first attempt you create a function which is equal to a minimum value, but have not proved that a minimum exists. You've established that h(ϵ,x):=δ is not zero, but given an ϵ, why must there be a value of x which minimizes δ? Maybe the infinite set of values of h(ϵ,x) for a given ϵ contains deltas 0.1, 0.01, 0.001, 0.0001, ... and so on. Can you prove that a minimum exists? Or can you find a counterexample that shows that it might not? – Eric Lippert Oct 24 '17 at 18:26

2 Answers


There are fundamental issues with both approaches. You assume that things like $\min, \max$ exist. They do exist if the function under consideration is continuous but that's another deep theorem (extreme value theorem, EVT) which is at the same level of complexity as the intermediate value theorem (IVT) which you are trying to prove. Also the fact that $g(\epsilon) $ exists and is positive is a property which goes by the name uniform continuity. This seems to suggest that IVT depends on EVT or uniform continuity. This is not true.

The proof strategy works in both cases (I do have a few reservations about the choice of values of $\epsilon$ in the first proof; you need to fix that somehow), but it is undeniably complicated and uses EVT unnecessarily. Moreover, you have to establish that $f(a) = m$ in each of the proofs.

Much easier and simpler-to-understand proofs exist for the IVT, and all of them are based on different notions of completeness. I have presented a few proofs in this blog post.
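To illustrate the flavour of such a completeness-based argument, here is a minimal numerical sketch of the repeated-bisection approach (a standard proof; it is not claimed to be one of the proofs from the linked post, and the example function and target value are arbitrary choices):

```python
# Minimal sketch of the bisection argument for the IVT, assuming f is
# continuous on [a, b] with f(a) <= m <= f(b).  The nested halvings shrink
# to a point c; by continuity, f(c) = m.

def bisection_point(f, m, a=0.0, b=1.0, iterations=60):
    for _ in range(iterations):
        c = (a + b) / 2
        if f(c) <= m:
            a = c   # keep the invariant f(a) <= m <= f(b)
        else:
            b = c
    return (a + b) / 2

# Example: the value c in [0, 1] with c^3 + c = 1 (approximately 0.6823).
print(bisection_point(lambda x: x ** 3 + x, 1.0))
```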

  • To me, the intermediate value theorem is much more about connectedness than completeness (even though completeness is used as a tool in proving $[a,b]$ is connected). – Daniel Schepler Oct 25 '17 at 00:02
  • @DanielSchepler: Fully agree! My point was to highlight that these theorems are all based on the distinguishing property of real numbers which normally people call by the name "completeness". The idea of connectedness and compactness are at a higher level to give different perspectives, but on a lower level all this is the result of the fine structure of real numbers. – Paramanand Singh Oct 25 '17 at 00:09
  • I am not sure I fully understand you two; at first glance, I do not see big problems if we were to construct functions with the intermediate-value property on sets that are not connected. @DanielSchepler –  Oct 25 '17 at 00:17
  • @AntoinePalAdeen: I meant that IVT expresses the idea that continuous functions map connected spaces to connected spaces. – Paramanand Singh Oct 25 '17 at 00:24
  • @ParamanandSingh Do you think that non-continuous functions cannot do that? –  Oct 25 '17 at 00:25
  • @AntoinePalAdeen : well there are derivatives which satisfy IVT and yet derivative need not be continuous everywhere. So mapping connected to connected spaces is not the exclusive privilege of continuous functions. – Paramanand Singh Oct 25 '17 at 00:26
  • @ParamanandSingh Ooops, well, sorry for not mentioning it, but I thought everybody was reading Hardy too :P As Hardy proved EVT long before presenting his proof of IVT, I thought I could use that theorem (that a continuous function attains its minima and maxima) implicitly. (And can the reservation about $h(x)$ be fixed with this?) Also, as always, nice blog post. Thanks! +1 – katana_0 Oct 25 '17 at 18:02
  • @AlexKChen: Hardy's Pure Mathematics has a proof of IVT but it does not make use of EVT. Rather, it uses Dedekind's Theorem, which is basically the same as your second proof with some modifications. Also, the issue with $h(x)$ is different and related to uniform continuity. The fact is that the $\delta$ depends on $\epsilon$ as well as $x$, and it is not necessary that there is one minimum $\delta$ which works for all $x$. – Paramanand Singh Oct 25 '17 at 19:48
  • @AlexKChen: just to confirm I checked my copy of Hardy's Pure Mathematics and he proves IVT before EVT. – Paramanand Singh Oct 25 '17 at 20:03

In your first proof, you define $g(\epsilon):=\min\{h(\epsilon,x)|x\in[0,1]\}$ and then go on to consider $1/g(\epsilon)$. However, to do so you would need to prove that $g(\epsilon)>0$ (which is not always true in your definition).

The correct way of defining $g(\epsilon)$ is through the usual compactness argument: the balls $\{B_{h(\epsilon,x)}(x)\}_{x\in[0,1]}$ form an open cover of $[0,1]$, from which you can extract a finite subcover indexed by points $x_i$. Then redefine $g(\epsilon)$ as the minimum of $h(\epsilon,x_i)$ over this finite set, so that $g(\epsilon)$ is positive.
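Spelled out with the usual half-radius bookkeeping (the $\epsilon/2$ and the halved radii below are standard choices for making the uniform estimate go through, not something fixed by the argument above): cover $[0,1]$ by the balls $B_{\frac{1}{2}h(\epsilon/2,\,x)}(x)$, extract a finite subcover with centers $x_1,\dots,x_n$, and set
$$g(\epsilon) := \min_{1\le i\le n} \tfrac{1}{2}\,h(\epsilon/2,\,x_i) > 0.$$
If $|y - x| < g(\epsilon)$, choose $i$ with $|x - x_i| < \tfrac{1}{2}h(\epsilon/2,\,x_i)$; then $|y - x_i| < h(\epsilon/2,\,x_i)$ as well, so $|f(x) - f(x_i)| < \epsilon/2$ and $|f(y) - f(x_i)| < \epsilon/2$, hence $|f(x) - f(y)| < \epsilon$.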

I also don't understand your choice of $\epsilon_0$. Presumably you meant it to be small, but small is relative to $f(x)$, so I find it hard to believe your $\epsilon_0$ can be independent of $f(x)$.

Alex R.
  • I can let $g(\epsilon)\to 0$ since $h$ is not properly defined. – amsmath Oct 24 '17 at 18:04
  • @amsmath: To be clear I've redefined $g$ over a finite set of $x_i$, so how could it go to 0? – Alex R. Oct 24 '17 at 18:06
  • Can you give me some example where $g(\epsilon)$ is equal to $0$ for nonnegative and nonzero $\epsilon$ ? – katana_0 Oct 24 '17 at 18:10
  • Sorry, what I wanted to say is that $h$ is not properly defined, so $g(\epsilon)$ might be zero. – amsmath Oct 24 '17 at 18:11
  • Why is $h$ not properly defined? – katana_0 Oct 24 '17 at 18:11
  • It could be anything below a certain value of $\delta$. You should prove that you can choose a uniform $\delta$ for given $\epsilon$ (which is a $\delta$ for all $x$). Then you don't have this problem. – amsmath Oct 24 '17 at 18:12
  • Oh OK, so if I define $h(\epsilon, x)$ to be $\max\{ \delta \mid \text{for all } y \text{ with } |y-x| < \delta \text{ we have } |f(x + \delta) - f(x)| < \epsilon \}$, does the proof hold? – katana_0 Oct 24 '17 at 18:15
  • As an example, choose $f(x) = x$ and put $h(x,\epsilon) := \epsilon$ everywhere except at $x = 1/n < \epsilon$, where you put $h(1/n,\epsilon) := 1/n$. – amsmath Oct 24 '17 at 18:15
  • You mean $f(y)$... But yes, that works. Better take the supremum, because the maximum does not exist. However, you would still have to prove that $g(\epsilon) > 0$. – amsmath Oct 24 '17 at 18:18
  • @amsmath OK, so the new deination of $h$ works ? – katana_0 Oct 24 '17 at 18:19
  • Yes, but see my last comment. And please write "definition" and not something like "deination"... – amsmath Oct 24 '17 at 18:20
  • @AlexKChen: Rather than redefining $h(x,\epsilon)$, it's much easier to just formulate $g(\epsilon)$ using the compactness argument I provided. The key is that you can extract a finite collection of $h(x_i,\epsilon)$ that cover $[0,1]$, which allows you to conclude $g(\epsilon)>0$. Then you don't have to worry about defining $h$ in a fancy way: $h$'s job is to provide you with any $\delta>0$. – Alex R. Oct 24 '17 at 18:21