
I am currently working through Chapter 5, on limits, of Spivak's Calculus, and I have come to exercise 15(ix). I feel like I can answer this question correctly, but there is one small step that I know to be true yet can't seem to show concisely.

I would like to note that I am aware similar questions have been asked, but I am asking about a specific part of this question.

The question:

Assume that: $$\lim_{x\to{0}}{}\frac{\sin(x)}{x}=\alpha.$$

Find, in terms of $\alpha$, the following limit:

$$\lim_{x\to1}\frac{\sin(x^2-1)}{x-1}.$$

My attempt:

Firstly, multiplying bottom and top by $x+1$ yields:

$$\lim_{x\to1}\frac{\sin(x^2-1)(x+1)}{(x-1)(x+1)}=\lim_{x\to1}\frac{\sin(x^2-1)(x+1)}{x^2-1}$$

Now, IF $(*):=\lim_{x\to1}\frac{\sin(x^2-1)}{x^2-1}$ exists, we can split the above into:

$$\lim_{x\to1}\frac{\sin(x^2-1)}{x^2-1}\times{}\lim_{x\to1}{x+1},$$

using the algebra of limits. This gives us:

$$\lim_{x\to1}\frac{\sin(x^2-1)}{x^2-1}\times{}2.$$

So I'm essentially left to show that $(*)$ exists and is equal to $\alpha$, which would mean the original limit is equal to $2\alpha$.
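As a sanity check (using the true value $\alpha = 1$, which the exercise of course does not assume), a quick numerical experiment suggests $2\alpha = 2$ is indeed the right value:

```python
# Quick numerical sanity check, not part of the exercise: evaluate the
# quotient at points approaching 1 from the right.
import math

for h in [1e-1, 1e-3, 1e-5]:
    x = 1 + h
    print(h, math.sin(x**2 - 1) / (x - 1))
# h = 0.1     -> ~2.08
# h = 0.001   -> ~2.001
# h = 0.00001 -> ~2.00001
```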

Now clearly as $x\to{1}$ we have that $(x^2-1)\to{0},$ so can we write this limit as:

$$\lim_{x\to0}\frac{\sin(x)}{x}\text{ or }\lim_{x^2-1\to0}\frac{\sin(x^2-1)}{x^2-1}?$$

If so, why exactly? It just feels a bit imprecise.

I guess what I'm asking exactly is:

$$\lim_{x\to0}\frac{\sin(x)}{x}\overset{?}{=}\lim_{x\to1}\frac{\sin(x^2-1)}{x^2-1}$$

kam

2 Answers


There are several exercises in this chapter that require steps that aren't justified fully in the published solutions.

Problem 5-1.(vi) uses $\lim_{h \to 0}\sqrt{a+h} = \sqrt{a}$, without any justification.

5-2.(ii) and (iii) involve $\lim_{x \to 0}\sqrt{1-x^2}$.

5-14 and 5-15 use $\lim_{x \to 0} \cos{x} = 1$ and $\lim_{x \to 0} \sin{x} = 0$ without justification. (This comes from the continuity of $\sin$ and $\cos$, which is assumed in the next chapter but not proven until these functions are defined formally in Chapter 15 [3rd ed.].)

5-15(ix), the exercise asked about here, uses the fact that if $\lim_{x \to 0} f(x) = \ell$, then $\lim_{x \to a} f(x^2 - a^2) = \ell$.

This fact is not yet justified and is worth proving to yourself, using an $\varepsilon$-$\delta$ argument.

IMO it's also worth proving that for $a > 0$, $\lim_{x\to a} \sqrt{x} = \sqrt{a}$ (or prove this statement for $a + h$ with $h \to 0$, if you prefer.)

The trig ones, don't worry about for now :)

Spoiler Alert: Below are proofs for the two postulates mentioned above.

Postulate 1 If $\lim_{x \to 0} f(x) = \ell$, show that $\lim_{x \to a} f(x^2 - a^2) = \ell$.

For the latter limit to be true we need to show that for any $\varepsilon > 0$, there exists some $\delta > 0$ such that for all $x$, if $$0 < |x - a| < \delta \text{ then }|f(x^2 - a^2) - \ell| < \varepsilon$$

From the first limit, we have that for this $\varepsilon$ there exists a $\delta_1 > 0$ such that for all $x$, if

$$0 < |x| < \delta_1 \text{ then } |f(x) - \ell| < \varepsilon$$

Therefore, if $0 < |x^2 - a^2| < \delta_1$, then $|f(x^2 - a^2) - \ell| < \varepsilon$.

So we want to find a $\delta$ such that if $0 < |x - a| < \delta$ then $0 < |x^2 - a^2| < \delta_1$.

Taking the requirement $$0 < |x^2 - a^2| < \delta_1$$

and looking just at the left-hand side, we have

$$0 < |x^2 - a^2|$$

which is met by requiring $x \neq \pm a$.

We can guarantee this with the requirement

$$0 < |x - a| < 2|a|$$

Returning to our requirement $0 < |x^2 - a^2| < \delta_1$, the right-hand side has

$$|x^2 - a^2| < \delta_1$$ $$|x - a||x + a| < \delta_1$$

If you recall Spivak's proof that $\lim_{x \to a} x^2 = a^2$, you can see that the above requirement is similar, only with $\delta_1$ in place of $\varepsilon$. Using the tricks employed in that proof, we find that

$|x - a| < \delta_2 = \min\left(1,\frac{\delta_1}{1+2|a|}\right)$ will guarantee $|x^2 - a^2| < \delta_1$.
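For completeness, here is the estimate behind that choice (just a sketch, using the same trick as in Spivak's $x^2$ proof): if $|x - a| < 1$, then

$$|x + a| = |(x - a) + 2a| \le |x - a| + 2|a| < 1 + 2|a|,$$

so that

$$|x^2 - a^2| = |x - a|\,|x + a| < \frac{\delta_1}{1 + 2|a|}\,(1 + 2|a|) = \delta_1.$$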

Combining all our $\delta$-requirements, we have for all $x$ if $$0 < |x - a| < \delta \text{ then } |f(x^2 - a^2) - \ell| < \varepsilon$$

with $\delta = \min(1, 2|a|, \frac{\delta_1}{1+2|a|})$, where $\delta_1$ is given by the existence of the first limit $\lim_{x \to 0} f(x)$.
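If you want to spot-check the bookkeeping, here is a rough numerical sketch (illustrative values of $a$ and $\delta_1$ only; it is not part of the proof):

```python
# For sampled x with 0 < |x - a| < delta, the argument promises
# 0 < |x^2 - a^2| < delta_1. The values of a and delta_1 are illustrative.
a, delta_1 = 1.0, 1e-2
delta = min(1, 2 * abs(a), delta_1 / (1 + 2 * abs(a)))

for t in (-0.9, -0.5, -0.1, 0.1, 0.5, 0.9):
    x = a + t * delta
    assert 0 < abs(x**2 - a**2) < delta_1
print("delta =", delta, "keeps x^2 - a^2 inside (0, delta_1)")
```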

Postulate 2 Show that for $a > 0$, $\lim_{x\to a} \sqrt{x} = \sqrt{a}$.

Suppose the statement is true. Then for any $\varepsilon > 0$ there exists some $\delta > 0$ such that for all $x$ if

$$0 < |x-a| < \delta \text{ then } |\sqrt{x} - \sqrt{a}| < \varepsilon$$

Let's take the $\varepsilon$ requirement and work backwards to find a suitable $\delta$:

$$|\sqrt{x} - \sqrt{a}| < \varepsilon$$ Multiplying both sides by $|\sqrt{x} + \sqrt{a}|$ (which equals $\sqrt{x} + \sqrt{a}$ since both terms are nonnegative) we have $$(\sqrt{x} + \sqrt{a}) \cdot |\sqrt{x} - \sqrt{a}| < (\sqrt{x} + \sqrt{a}) \cdot \varepsilon$$

$$|x - a| < (\sqrt{x} + \sqrt{a}) \cdot \varepsilon$$

If we require $|x - a| < \sqrt{a} \varepsilon$, this should suffice, since $\sqrt{x} + \sqrt{a} \geq \sqrt{a}$. We also want to prevent $x$ from being negative, which we can do by requiring $|x - a| < a$.

Combining these requirements, we have $\delta = \min(a,\sqrt{a}\varepsilon)$.

Checking that this $\delta$ works: given $\varepsilon > 0$, for all $x$ with

$$0 < |x-a| < \delta = \min(a, \sqrt{a}\varepsilon)$$ we have $x > 0$ (since $|x - a| < a$), and therefore $$|\sqrt{x} - \sqrt{a}| \cdot (\sqrt{x} + \sqrt{a}) = |x - a| < \sqrt{a}\varepsilon$$ $$|\sqrt{x} - \sqrt{a}| < \frac{\sqrt{a}\varepsilon}{\sqrt{x} + \sqrt{a}} \leq \varepsilon$$
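The same kind of rough numerical sketch works here too (again with illustrative values; not part of the proof):

```python
# Sample points in the punctured interval 0 < |x - a| < delta and confirm
# |sqrt(x) - sqrt(a)| < eps, with delta = min(a, sqrt(a) * eps).
import math

a, eps = 4.0, 1e-3
delta = min(a, math.sqrt(a) * eps)

for x in (a - 0.9 * delta, a - 0.5 * delta, a + 0.5 * delta, a + 0.9 * delta):
    assert 0 < abs(x - a) < delta
    assert abs(math.sqrt(x) - math.sqrt(a)) < eps
print("delta =", delta, "passes the sampled points")
```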

You can combine these 2 postulates along with the other properties of limits to fill in some of the gaps in the chapter exercises.

Edit I suspect that exercise 5-15(ix) and the others mentioned were overlooked by Spivak. The published solutions involve steps that are completely without justification.

When I first did this chapter I was pretty discouraged by these lapses. I feared I might have to skip these things and just hope they didn't create flaws in my understanding. (I wasn't on MSE at the time, or I would have come here for answers!)

However, I was eventually able to bridge the gaps myself by using $\varepsilon$-$\delta$ arguments, which are at the heart of the chapter. This came as a huge relief.

Upon seeing kam's question, my instinct was to reassure them "no, it's not just you" and to steer them towards what I did.

In so doing, I failed to account for a few things. First, IIRC, back when I first did the problems in this chapter I probably skipped the parts I wasn't sure of, or maybe even just accepted them and moved on. It wasn't until finishing the other exercises that I came back and attacked them $\varepsilon$-$\delta$ style. Sorry, kam. I should have encouraged you to come back to these later, after finishing the rest of the problems.

Second, Paramanand Singh is right. Though it's maybe not something Spivak intended (I reckon these extra problematic problems were an oversight), this is a good reason to think more generally about limit substitutions, and that's maybe closer to what kam was asking about anyway.

Furthermore, general ideas on such substitutions aren't really in the book aside from a few exercises (6-12, maybe others?) but they are worth being aware of. (Perhaps they show up after Chapter 12, where I am? Seems unlikely.) Update: the proof of an important theorem in chapter 12 (Inverse Functions) uses one of these substitutions, again without quite giving a complete explanation.

Limit Substitutions without continuity, basic gist Suppose we have two functions $f$ and $g$ both with limits

$$\lim_{x \to a} g(x) = b$$ $$\lim_{x \to b} f(x) = \ell$$

Suppose also that there exists an interval around (possibly excluding) $a$ such that $g$ in this interval is never equal to $b$. In other words, there exists some $\delta_a > 0$ such that for all $x$ if $$0 < |x - a| < \delta_a \text{ then } g(x) \neq b$$

Then the composite function satisfies $\lim_{x \to a} f(g(x)) = \ell$. Intuitively, you can see that this should be true: as $x$ approaches $a$, $g(x)$ gets close to $b$, so $f(g(x))$ approaches $\ell$, the limit of $f$ at $b$.

The condition that $g$ never actually reach $b$ within some interval around $a$ is to ensure $g$ properly "mimics" the behavior of $x$ approaching $b$. That is, we know only the behavior of $f(x)$ for $x$ near but not equal to $b$. Without this restriction, we don't know if $f(g(x))$ approaches $\ell$ or not. It might but we can't be certain.

Consider the functions $f$ and $g$ with $$f(x) = 1 \text{ (for $x \neq 0$)}$$ $$f(x) = 50 \text{ (for $x = 0$)}$$ $$g(x) = 0$$

Here, $\lim_{x \to 0} f(x) = 1$, $\lim_{x \to a} g(x) = 0$ (for any $a$), but $\lim_{x \to a} f(g(x)) = 50$. A silly example but it shows the point.
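Written out in code (purely as an illustration of the example above; the definitions are mine):

```python
# f has limit 1 at 0 but value 50 there; g is constantly 0, so f(g(x))
# is identically 50 and cannot approach 1.
def f(x):
    return 50 if x == 0 else 1

def g(x):
    return 0

print([f(g(1 / n)) for n in (10, 100, 1000)])  # [50, 50, 50]
```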

Proof: By definition, for any $\varepsilon > 0$ there exists some $\delta_f > 0$ such that for all $y$ if

$$0 < |y - b| < \delta_f \text{ then } |f(y) - \ell| < \varepsilon$$

For $\delta_f$ there is some $\delta_g$ such that for all $x$ if

$$0< |x-a| < \delta_g \text{ then } |g(x) - b| < \delta_f$$

Therefore there is a $\delta = \min(\delta_a, \delta_g)$ such that for all $x$ if

$$0< |x-a| < \delta \text{ then } 0<|g(x) - b| < \delta_f$$

Thus, for any $\varepsilon > 0$ there exists some $\delta$ such that for all $x$ if

$$0< |x-a| < \delta \text{ then } |f(g(x)) - \ell| < \varepsilon$$

or, $$\lim_{x \to a} f(g(x)) = \lim_{x \to b} f(x) = \ell$$

The consequence of this is that we can sometimes swap out more complicated limits and replace them with simpler ones.

Problem 5-15(ix) presents us with $\lim_{x \to 1} \frac{\sin(x^2 - 1)}{x^2-1}$ which is $\lim_{x \to 1}f(g(x))$ with $f(x) = \frac{\sin x}{x}$ and $g(x) = x^2 - 1$.

We know $$\lim_{x \to 1} x^2 - 1 = 0$$ $$\lim_{x \to 0} \frac{\sin x}{x} = \alpha$$

Furthermore, you can see that as $x$ approaches $1$, $x^2 - 1$ gets close to, but is never equal to, $0$ (where does it equal $0$?). In this way it "mimics" the behavior of $x\to 0$, so we are OK replacing $g(x)$ in the limit.

$$\lim_{x \to 1} \frac{\sin(x^2 - 1)}{x^2-1} = \lim_{x \to 0} \frac{\sin x}{x} = \alpha$$
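Combining this with the $(x+1)$ factor from kam's manipulation, and $\lim_{x \to 1}(x+1) = 2$, the limit the exercise asks for comes out as

$$\lim_{x \to 1} \frac{\sin(x^2 - 1)}{x - 1} = \lim_{x \to 1} \frac{\sin(x^2 - 1)}{x^2-1} \cdot \lim_{x \to 1}(x+1) = \alpha \cdot 2 = 2\alpha.$$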

Finally, note that there are other criteria for making similar substitutions with limits. One thing to look at is the "continuity" of the outer function $f$. This is examined in the next chapter of the book.

Ben
  • Thanks so much. I couldn't quite put my finger on it, but I was certain I was required to utilise something I hadn't seen yet (since I'm going through the book in order). Thank you!! – kam Jan 12 '21 at 20:03
  • You're very welcome. It bothered me exactly the same when I hit this part. An unusual lapse in Spivak's rigor. I'm writing out proofs for those 2 postulates I mention, just in case you or others ever want them. I'll put them at the bottom of my answer to avoid "spoilers" :) – Ben Jan 12 '21 at 20:30
  • Thank you, I'll give them a go, but if you could, like you said, put them at the end, it'd be great so I can check my proofs. Thank you :) – kam Jan 12 '21 at 20:51
  • You are the useful stack exchange people I need but don't deserve :') – kam Jan 12 '21 at 20:52
  • It is better to have access to well-known theorems (and their proofs) instead of proving their special cases. The proofs which you have given correspond to the rule of substitution and the use of continuity. – Paramanand Singh Jan 13 '21 at 04:17
  • @Paramanand Singh That is true: continuity, properties of inverse functions, and other techniques can easily deal with cases like this. However, this question is from Chapter 5 of Spivak, which is the introduction of limits. Those higher-powered tools aren't yet available to the reader. Experience grinding through $\varepsilon$-$\delta$ proofs at this stage of the book is useful. – Ben Jan 13 '21 at 04:31
  • You may disagree, but I consider the $\epsilon, \delta$ exercises as futile. I did such exercises only to help users on this website and not in my study of analysis. The technique is best learnt while studying proofs of standard theorems. – Paramanand Singh Jan 13 '21 at 04:48
  • Hi there @Ben I just had a flick through your rather extensive answer and would like to thank you for this! I absolutely despise going through exercises and feeling like I just have to accept something with no justification, so to read your answer put me at ease so I thank you for that! – kam Jan 18 '21 at 16:56
  • That's heartening to hear. I'm tremendously enjoying making my way (slowly) through the book myself, but there are a few mistakes and oversights in the text. Most of them are just little typos and things, but there are a few that are more substantial, that put me off when I ran into them, and I fear will do the same to others. I've been lurking on MSE, checking for Spivak questions in case there are instances like this where I think I can help fill in gaps. It's certainly helping my own understanding, and paltry LaTex skillz! – Ben Jan 18 '21 at 17:17
  • @Ben Thank you for this extensive answer which is now helping me a lot, too. "When I first did this chapter I was pretty discouraged by these lapses." - This is exactly how I feel! Note that in Problem 5-8 (3rd ed.) he uses $(\lim\limits_{x \to a} f(x)=l) \Leftrightarrow (\lim\limits_{x \to a} -f(x)=-l)$ which is intuitively clear, but also not proven. However, I think this follows directly from Theorem 2.2 considering $fg$, with $g(x)=-1$ for all $x\in\mathbb{R}$. – Carlevaro99 Feb 05 '21 at 13:50
  • @Carlevaro99 IMO if these exercises bother you, it means you're doing a good job with the material. Keep it up! – Ben Feb 05 '21 at 15:16

Yes, it is, as long as we stay within the domain of definition of the function. One way to see this is by making a substitution: substitute $t:=x^2-1$ and observe that $x\to1\implies t\to0$, so

$$\lim_{x\to1}\frac{\sin(x^2-1)}{x^2-1}=\lim_{t\to0}\frac{\sin t}{t}=\alpha$$

DonAntonio