
There are a number of posts about definite integrals with non-injective u-substitutions:

  1. Is it always safe to assume that an integral is zero if it has equal bounds?
  2. How do the limits change with the substitution $u=\sin θ$
  3. Substitution Makes the Integral Bounds Equal
  4. Why does this $u$-substitution zero out my integral?

The conclusion is basically the same in each case. For some reason, I don't remember ever being warned about this situation, and I suspect it's because each time it came up, we just skipped the step of evaluating $u(a)$ and $u(b)$ and went straight to substituting $u(x)$ back into the antiderivative. Is this what people generally do?

That is, if a u-substitution enables us to find an antiderivative $F(u(x))$, can we safely assume that, regardless of whether or not $u(a) = u(b)$, $F(b) - F(a)$ will be the answer?

This is mostly a practical concern: in general I'm not going to remember on which intervals some function is injective, which appears to be the fix proposed in the questions above. In none of those examples do I see the above points failing.

edit: I'll try to add an example different from those in the questions above:

$$\int_0^{2\pi} \cos x \sin^2 x dx$$

Rearrangements using trig identities aside, the simplest approach seems to involve letting $u(x) = \sin x$, so that $F(x) = \frac{1}{3} \sin^3 x$ and $F(b) - F(a) = 0$, which in this case happens to coincide with what would have (erroneously) happened had we written

$$\int_{u(0)}^{u(2\pi)} u^2 du$$
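As a quick numerical sanity check (a sketch using a plain midpoint Riemann sum in Python; the helper `riemann` is my own, not a library function), both forms of the integral do come out to zero here:

```python
import math

def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Original integral: int_0^{2pi} cos(x) sin^2(x) dx
lhs = riemann(lambda x: math.cos(x) * math.sin(x) ** 2, 0.0, 2 * math.pi)

# After u = sin x: int_{u(0)}^{u(2pi)} u^2 du, with u(0) = u(2pi) = 0
rhs = riemann(lambda u: u ** 2, math.sin(0.0), math.sin(2 * math.pi))

print(lhs, rhs)  # both approximately 0
```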

algae

2 Answers


The substitution rule/change of variables theorem says the following:

Suppose $f:[a,b]\to\Bbb{R}$ is continuous, and $u:[\alpha,\beta]\to [a,b]$ is differentiable with Riemann-integrable derivative (or at this point if you don't like remembering various hypotheses, just assume everything is smooth). Then, \begin{align} \int_{\alpha}^{\beta}f(u(x))\cdot u'(x)\,dx &= \int_{u(\alpha)}^{u(\beta)}f(t)\,dt \end{align} "substitute $t=u(x)$"

If you state the theorem like this, there is no need at all for any injectivity assumptions on $u$; this equality follows immediately from the fundamental theorem of calculus and chain rule (if $F$ is a primitive of $f$, then the LHS and RHS are equal to $F(u(\beta))-F(u(\alpha))$). The problem is that people often don't carefully specify the two functions $f$ and $u$; what ends up happening is they misapply the theorem and then impose extra conditions like injectivity (which of course doesn't hurt, but it doesn't really address the issue).

In your example, let $f(t)=t^2$ and $u(x)=\sin x$ (we can define these functions on all of $\Bbb{R}$, so there's no domain issues here at all, and all the compositions make sense etc). Then, \begin{align} \int_0^{2\pi}\sin^2x\cdot \cos x\,dx &=\int_0^{2\pi}f(u(x))\cdot u'(x)\,dx\\ &=\int_{u(0)}^{u(2\pi)}f(t)\,dt\\ &=\int_0^0t^2\,dt\\ &= 0. \end{align} This really is by a direct application of the theorem; I'm not sure why you say it is erroneous.
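To see the theorem at work with a non-injective $u$ and a *nonzero* answer, here is a numerical sketch (the endpoints $0$ and $3\pi/2$ are my own choice for illustration; $\sin$ is not injective on $[0, 3\pi/2]$, yet both sides agree at $-1/3$):

```python
import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation of the signed integral of f from a to b."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: t ** 2   # f(t) = t^2
u = math.sin           # u(x) = sin x, not injective on [0, 3pi/2]
du = math.cos          # u'(x) = cos x
alpha, beta = 0.0, 3 * math.pi / 2

# LHS of the theorem: int_alpha^beta f(u(x)) u'(x) dx
lhs = midpoint(lambda x: f(u(x)) * du(x), alpha, beta)

# RHS of the theorem: int_{u(alpha)}^{u(beta)} f(t) dt = int_0^{-1} t^2 dt
rhs = midpoint(f, u(alpha), u(beta))

print(lhs, rhs)  # both approximately -1/3
```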


Of course, a corollary of the theorem I wrote above is the following:

Suppose $g:[\alpha,\beta]\to\Bbb{R}$ is continuous and $v:[\alpha,\beta]\to [a,b]$ is $C^1$ with $C^1$ inverse. Then, \begin{align} \int_{\alpha}^{\beta}g(x)\,dx &= \int_{v(\alpha)}^{v(\beta)}g(v^{-1}(t))\cdot (v^{-1})'(t)\,dt\\ &=\int_{v(\alpha)}^{v(\beta)}g(v^{-1}(t))\cdot \frac{1}{v'(v^{-1}(t))}\,dt \end{align} "substitute $x=v^{-1}(t)$"

The "advantage" of this formula is that the LHS contains only $g$, i.e. without any change of variables; everything involving the substitution is moved to the other side of the equation (all instances of $v$ appear only on the RHS). Compare this with my first formula, where we made no injectivity assumption and, as a result, $u$ appears on both the LHS and the RHS. The added hypothesis of injectivity is the price we pay for isolating everything to one side.

Sometimes in computations, this second form of the theorem (which is really a special case of the one above) is more useful, which is why people may sometimes insist that injectivity is a must.
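As a numerical sketch of this second form (the choice $v(x) = e^x$ is mine, picked because it is $C^1$ with $C^1$ inverse $\ln$; the helper `midpoint` is not a library function), both sides agree:

```python
import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

g = lambda x: x ** 2
v = math.exp       # v(x) = e^x, invertible
v_inv = math.log   # v^{-1}(t) = ln t
dv = math.exp      # v'(x) = e^x
alpha, beta = 0.0, 1.0

# LHS of the corollary: int_0^1 x^2 dx = 1/3
lhs = midpoint(g, alpha, beta)

# RHS: int_{v(0)}^{v(1)} g(v^{-1}(t)) / v'(v^{-1}(t)) dt = int_1^e (ln t)^2 / t dt
rhs = midpoint(lambda t: g(v_inv(t)) / dv(v_inv(t)), v(alpha), v(beta))

print(lhs, rhs)  # both approximately 1/3
```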

peek-a-boo
  • Thanks. I'm no longer sure why I needed to ask the question. I guess I was conflating one example I had in mind with different examples that were essentially unrelated. – algae Apr 24 '21 at 05:43
    @algae I'm glad this was helpful, and it's always good to ask questions :) the issue is that people often learn[teach] theorems through formulae without knowing[explaining] what they mean/are doing. Case in point is the $u=\sin\theta$ example you link to. Usually these errors can be traced back to the fact that while people identify the $u$ correctly, they misidentify $f$ and as a result they get erroneous results. Or, if they're trying to use the second version of the theorem, they identify $g$ correctly, but don't ensure invertibility of $v$. – peek-a-boo Apr 24 '21 at 05:49

Well, if $u(a)$ and $u(b)$ are the bounds on your integral (hopefully I'm understanding the question correctly), then if $u(a)=u(b)$ the integral is simply going to be zero: there is no horizontal distance being integrated over. But you are correct in saying that, as long as $f(x)$ is well defined, the integral after substitution will indeed evaluate to $$F(u(b))-F(u(a))$$

JayP
  • Regarding the edit you made: with the substitution $u=\sin(x)$, note that $u(2\pi)=\sin(2\pi)=0=\sin(0)=u(0)$, so you are integrating over nothing, as your bounds are $0\to 0$. So you still get the same result. – JayP Apr 24 '21 at 05:02
  • Thanks. Yes, the result is the same here, but that isn't always the case, as pointed out in those other questions. You may find after u-substitution that the bounds of the integral are equal, and yet the value of the integral is not zero. My point is: if $u(a) = u(b)$, does it really matter? Provided we get our antiderivative and can back-substitute, that's all that seems relevant here. – algae Apr 24 '21 at 05:07