
Let's take the equation $y=x^2$. To differentiate this, we have to differentiate both sides of the equation with respect to the same variable (say $x$), giving us $\frac{dy}{dx} = 2x$. It could also be differentiated with respect to $y$, giving $1 = 2x {\frac {dx} {dy}}$. A little algebraic manipulation reveals that this "new" equation is identical to the previously obtained one. (EDIT: From the comments and the answer given by SchrodingersCat, I have now realized that this assertion of mine is correct only in some special cases.)

So, one thing is clear: "Differentiation of both sides of an equation is done with respect to a single variable".

Now here is my question : Is this applicable to integrals too?

In other words, do we have to integrate both sides of the equation with respect to a single variable?

If the answer is "yes", then how is it possible to integrate an expression in one variable with respect to some other variable? (A "reverse chain rule"?)

If the answer is "no", then please provide some concrete examples to support your answer.

I know this is a very elementary question, but it still troubles me a lot.

Any help would be greatly appreciated.

2 Answers


From the modern perspective, differentiation and integration act on functions, not on equations. This fact gets hidden because the most convenient way to write common functions is to use an algebraic formula that expresses the output in terms of the input.

For example, $y = f(x) = x^{2}$ commonly denotes the squaring function, while $x = g(y) = \sqrt{y}$ denotes one branch of the inverse of $f$. (In each equation, $x$ and $y$ are dummy variables; one could write $y = g(x) = \sqrt{x}$ with identical meaning. The benefit of writing $x = \sqrt{y}$ is that these dummy variables are (mostly) compatible with the dummy variables in $y = x^{2}$, allowing us to work with both functions "in the same scope".)

"Differentiating $y = x^{2}$ with respect to $x$, obtaining $\frac{dy}{dx} = 2x$" amounts to the assertion $f'(x) = 2x$; the derivative of the squaring function is the multiplication-by-two function.

"Differentiating $y = x^{2}$ with respect to $y$, obtaining $1 = 2x\frac{dx}{dy}$" amounts to the assertion $g'(y) = \frac{1}{2\sqrt{y}}$.
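Both assertions can be checked numerically. The sketch below (my own illustration, not part of the answer) compares a central-difference approximation of each derivative against the closed forms $f'(x) = 2x$ and $g'(y) = \frac{1}{2\sqrt{y}}$:

```python
import math

def numderiv(fn, t, eps=1e-6):
    """Central-difference approximation of fn'(t)."""
    return (fn(t + eps) - fn(t - eps)) / (2 * eps)

x, y = 3.0, 9.0

# f'(x) = 2x for the squaring function f(x) = x^2
assert abs(numderiv(lambda s: s * s, x) - 2 * x) < 1e-6

# g'(y) = 1/(2*sqrt(y)) for the branch g(y) = sqrt(y)
assert abs(numderiv(math.sqrt, y) - 1 / (2 * math.sqrt(y))) < 1e-6
```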

Leibniz notation blurs an important distinction, denoting a derivative function $f'$ and the value of a derivative $f'(x)$ at an arbitrary point with the same symbol, $\frac{dy}{dx}$. Inevitably this leads to confusion among students who carefully ponder the meaning of notation. The "identity" $$ \frac{dx}{dy} = \frac{1}{dy/dx} \tag{1a} $$ is a perfect example. Written more carefully, the claim is that if the composition $f \circ g$ is the identity function on some open interval $I$ and if $f$ is differentiable on $g(I)$, then $$ g'(y) = \frac{1}{f'(x)} = \frac{1}{f'(g(y))} \tag{1b} $$ for all $y$ in $I$ where $f'(g(y)) \neq 0$. While (1a) looks like manipulation of fractions, closer inspection of (1b) reveals that the functions $f'$ and $g'$ are not reciprocals; rather, their values at suitably-chosen inputs related by $g$ are reciprocals. Equation (1a) is error-prone for both interpretations of Leibniz notation.
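To make the distinction in (1b) concrete, here is a small numerical sketch (my own, with the concrete instance $f(x) = x^2$ on $(0, \infty)$ and $g(y) = \sqrt{y}$): the values of $f'$ and $g'$ at inputs related by $g$ are reciprocals, but the functions themselves are not.

```python
import math

# Concrete instance of (1b): f(x) = x^2 on (0, inf), inverse g(y) = sqrt(y).
def f(x): return x * x
def g(y): return math.sqrt(y)

def fprime(x): return 2 * x                    # f'(x) = 2x
def gprime(y): return 1 / (2 * math.sqrt(y))   # g'(y) = 1/(2*sqrt(y))

y = 9.0
# (1b): g'(y) = 1 / f'(g(y)) -- reciprocal values at inputs related by g.
assert abs(gprime(y) - 1 / fprime(g(y))) < 1e-12   # 1/6 on both sides

# But f' and g' are NOT reciprocal functions at the same input:
assert abs(gprime(y) - 1 / fprime(y)) > 0.1        # 1/6 vs 1/18
```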

In this same spirit, integrals are not taken "with respect to a variable". Spivak's notation, $\int_{a}^{b} f$ instead of the near-universal $\int_{a}^{b} x^{2}\, dx$ (for $f(x) = x^{2}$), explicitly acknowledges this. Unfortunately, this analogue of Newton notation for integrals is inconvenient in practice. As with derivatives, it's much easier to specify a function (and its (anti-)derivatives) by giving the value at a "generic" input $x$ and using compatibly-named dummy variables between multiple functions.

"Integrating the chain rule" (with $f$ continuous and $h'$ continuously-differentiable, say) gives $$ \int_{h(a)}^{h(b)} f(u)\, du = \int_{a}^{b} f(h(x))\, h'(x)\, dx. \tag{2a} $$ This is customarily explained in calculus books by "substituting $u = h(x)$, so that $du = h'(x)\, dx$, and $u = h(a)$ when $x = a$, etc." It should be clear why calling the respective sides "the integral of $y = f(u)$ with respect to $u$" and "the integral of $y = f(u)$ with respect to $x$" would be logically inconsistent, however. The limits of integration differ (as SchrodingersCat notes), and the functions being integrated are not the same.
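Formula (2a) can be verified numerically. The sketch below uses purely illustrative choices not taken from the answer ($f(u) = \cos u$, $h(x) = x^2$, $[a, b] = [1, 2]$) and a simple midpoint rule; both sides approximate $\sin 4 - \sin 1$, with different limits and different integrands:

```python
import math

def midpoint(fn, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of fn over [a, b]."""
    step = (b - a) / n
    return step * sum(fn(a + (k + 0.5) * step) for k in range(n))

# Illustrative choices: f(u) = cos(u), h(x) = x^2, [a, b] = [1, 2].
def f(u): return math.cos(u)
def h(x): return x * x
def hp(x): return 2 * x        # h'(x)

lhs = midpoint(f, h(1), h(2))                    # integral of f over [h(a), h(b)] = [1, 4]
rhs = midpoint(lambda x: f(h(x)) * hp(x), 1, 2)  # integral of (f o h) * h' over [1, 2]

assert abs(lhs - rhs) < 1e-6                     # both approximate sin(4) - sin(1)
```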

In "Newtonian" notation, for the record, the change of variables formula reads $$ \int_{h(a)}^{h(b)} f = \int_{a}^{b} (f \circ h)\, h'. \tag{2b} $$

  • Your answer pinpoints the real issue. It is important to understand the notation and its meaning carefully. I especially like your remark about equations $(1a)$ and $(1b)$. +1 – Paramanand Singh Jan 12 '17 at 13:44

A little algebraic manipulation reveals that this "new" equation is identical to the previously obtained "new" equation.

I realize the context in which you are saying this, but you are wrong. Your first equation gives you $\boxed{\frac{dy}{dx} = 2x}$ and your second equation gives you $1 = 2x {\frac {dx} {dy}}$, which on algebraic manipulation gives, at most, $\boxed{\frac{dx}{dy}=\frac{1}{2x}}$, and they are just not the same.
The relation $\large\frac{dy}{dx} = \frac{1}{\frac{dx}{dy}}$ holds only for functions satisfying certain properties, and your function $f(x)=x^2$ happens to be one of them.


And now for your question, the answer is NO.

Example: $\mathrm{dy}=2x \,\,\mathrm{dx} \implies \int_{y_1}^{y_2}\mathrm{dy}=\int_{x_1}^{x_2}2x \,\,\mathrm{dx} $

And $x$ and $y$ are not the same.
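The example can be checked numerically. The sketch below (my own illustration, with arbitrary limits $x_1 = 1.5$ and $x_2 = 3$) evaluates the left side with respect to $y$ and the right side with respect to $x$, using limits that correspond under $y = x^2$:

```python
# Check that the integral of dy over [y1, y2] equals the integral of
# 2x dx over [x1, x2], when the limits correspond under y = x^2.
x1, x2 = 1.5, 3.0              # illustrative values
y1, y2 = x1**2, x2**2          # corresponding y-limits: 2.25 and 9.0

left = y2 - y1                 # integral of dy from y1 to y2

# Midpoint-rule approximation of the integral of 2x dx from x1 to x2
n = 100_000
h = (x2 - x1) / n
right = h * sum(2 * (x1 + (k + 0.5) * h) for k in range(n))

assert abs(left - right) < 1e-6   # both equal 6.75
```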

Hope this helps you.