13

So I don't know if I'm the only one to feel this, but ever since I was introduced to Calculus, I've had a slight (if not to say major) aversion to differentials.

This sort of "phobia" started from the very first moment I delved into integrals. Riemann sums seemed to make sense, though for me they were not enough for justifying the use of "dx" after the integral sign and the function. After all, you could still do without it in practice (what's the need for writing down the base of these rectangles over and over?). I was satisfied by thinking it was something merely symbolic to remind students what they were doing when they calculated definite integrals, and/or to help them remember with respect to what variable they were integrating (kinda like the reason why we sometimes use dy/dx to write a derivative). Or so I thought.

Having now been introduced to differential equations, I'm starting to realize I was completely wrong! I find "dy" and "dx" scattered around equations! How could that be possible if they are just a fancy way of transcribing derivatives and integrals? I imagined they had no meaning outside of those particular contexts (i.e., dy/dx, and to indicate an integration with respect to x or whatever).

Could anybody help me out? I'm really confused at the moment. I'd really appreciate it :) (P.S.: Sorry to bother you all on Thanksgiving - assuming some of you might be from the US.)

EDIT: I don't think my question is a duplicate of Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?, as that one doesn't address the use of differentials in integrals and in differential equations. Regardless of whether dy/dx is a ratio or not, what I'm really asking is why we use dx and dy separately for integration and diff. equations. Even if they're numbers, if they tend to 0, then dx (or dy) * whatever = 0. Am I wrong in thinking that way?

Matt24
  • 1,010
  • 2
    There is a fair bit of discussion about this on math.se; you could start here and then follow any links you find. – David Nov 26 '15 at 23:56
  • 4
    In differential equations classes, arguments that involve manipulating $ dx $ and $ dy $ individually can be easily rephrased to avoid doing this. – littleO Nov 26 '15 at 23:58
  • 5
    My differential geometry professor told us to change the subject whenever an undergraduate asks what a differential really is. – Matt Samuel Nov 27 '15 at 03:03
  • "Differentials" are like $\infty$... it can be given different meaning in different contexts, sometimes only "symbolic" meaning, sometimes notation-abused by people who actually know what they are doing, and sometimes simply abused by people who don't know what they are doing. – Aloizio Macedo Nov 27 '15 at 03:32
  • @Aloizio you seem to be implying that it's always either abused or symbolic. I don't know of it ever being abused, but certainly when it represents a section of the cotangent bundle of a manifold, or a vector in a particular fiber, it has more than just the symbolic meaning. Or perhaps you're simply taking the advice of my professor! – Matt Samuel Nov 27 '15 at 04:03
  • 1
    @Matt24 Haha I was (and still am) in EXACTLY the same boat as you! And oh boy, just wait until calculus 3, where if you multiply $y$ and $\dfrac{d}{dx}$, you magically get $\dfrac{dy}{dx}$ even though you're not really supposed to "multiply them"! That's why I just bought an analysis book, and I'm hoping a lot of that will be cleared up! If you want a recommendation, I bought Apostol's book, it looks great so far. – Ovi Nov 27 '15 at 09:57
  • 1
    I know $\dfrac{d}{dx}$ is supposed to be an operator, but when you are finding the divergence you are doing the dot product with $\nabla$, so you are kind of literally multiplying $\dfrac{d}{dx}$ by $y$ – Ovi Nov 27 '15 at 10:18
  • @MattSamuel Not at all! Like I said, it is like $\infty$: for instance, you can talk about the one-point compactification, and $\infty$ is really a point in the space. Or you can talk about limits at infinity, which can be either symbolic or an actual limit in some compactification. Or you can say that $\#A=\infty$ when you mean it is not finite. But you do see people arguing things like $\infty/\infty=1$ when talking about limits. I almost agree with your teacher... but I don't think there is a single thing that a differential really is. (cont... – Aloizio Macedo Nov 27 '15 at 11:02
  • (inuing...) For example, $dx$ can mean a one-form or a "symbol for the variable you are integrating" (although I dislike this use), and $\frac{d}{dx}$ can mean an operator or a tangent vector (although a tangent vector can be defined as an operator, it can also be defined in different ways, so this observation is still relevant). And, like $\infty$, you see people arguing things like $du/dx = du/dy \cdot dy/dx$ just due to notation and without proper reasoning or a rigorous background. – Aloizio Macedo Nov 27 '15 at 11:08
  • I think mathematicians overstate that "d/dx" is an operator/not a ratio/etc. If you look at these things from a physical point of view and consider 'd' to be 'delta', 'integral' to be 'sum' and 'dy/dx' to be a ratio, everything will be fine. – Kartik Nov 27 '15 at 13:48
  • Related: https://math.stackexchange.com/questions/27425/what-am-i-doing-when-i-separate-the-variables-of-a-differential-equation – Hans Lundmark Jun 18 '18 at 20:04

2 Answers

5

If you read a real analysis textbook such as Calculus by Spivak, they manage to develop calculus rigorously while avoiding differentials like "$dx$" and "$dy$" entirely. This is the standard way to make calculus rigorous -- you just avoid using differentials. And indeed, in undergrad differential equations classes, arguments that involve manipulating $dx$ and $dy$ as individual quantities can easily be rephrased to avoid doing this.

For example, if a differential equations textbook says: \begin{align} & y \, dy = dx \\ \implies & \int y \, dy = \int \, dx \\ \implies & \frac{y^2}{2} = x + C \\ \end{align} we can rephrase this argument as \begin{align} & y \frac{dy}{dx} = 1 \\ \implies & \frac{y^2}{2} = x + C, \end{align} where in the second step we simply took antiderivatives of both sides, using the chain rule in reverse to find an antiderivative of $y \frac{dy}{dx}$.
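
To spell out the reverse chain rule step: if $h(x) = \frac{1}{2} y(x)^2$, then by the chain rule \begin{equation} h'(x) = \frac{d}{dx} \left( \frac{y(x)^2}{2} \right) = y(x) \, y'(x), \end{equation} so $\frac{y^2}{2}$ is an antiderivative of $y \frac{dy}{dx}$, which is why taking antiderivatives of both sides of $y \frac{dy}{dx} = 1$ gives $\frac{y^2}{2} = x + C$.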

But note: even though a rigorous approach might avoid using differentials entirely, there is no need to throw "differential intuition" out the window, because it makes perfect sense if we just think of $dx$ and $dy$ as being extremely tiny but finite numbers, and if we replace $=$ with $\approx$ in the equations we derive. Perhaps the word "infinitesimal" could be thought of as meaning "so tiny that the errors in our approximations are utterly negligible". We can plausibly obtain exact equations "in the limit" (if we are careful). There is something aesthetically appealing about treating $dx$ and $dy$ symmetrically, which can in some situations give us the feeling that the approach using differentials is the "right" or more beautiful way to do these computations. Compare these two ways of writing an "exact" differential equation:

\begin{equation} I(x,y) \,dx + J(x,y)\, dy = 0 \end{equation} vs. \begin{equation} I(x,y) + J(x,y) \frac{dy}{dx} = 0. \end{equation} The first version is aesthetically compelling, because it's more symmetrical; this might help explain why the second version is not seen more often (despite its being easier to understand, in my opinion).
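
For a concrete instance of both points, one could take $I(x,y) = x$ and $J(x,y) = y$, so the two forms read \begin{equation} x \, dx + y \, dy = 0 \qquad \text{vs.} \qquad x + y \, \frac{dy}{dx} = 0. \end{equation} In the "tiny but finite" picture, the first form says that along a solution curve the increments satisfy $x \, \Delta x + y \, \Delta y \approx 0$, i.e. $\frac{1}{2}(x^2 + y^2)$ is (approximately) unchanged; and indeed the solutions of either form are the circles $x^2 + y^2 = C$, as implicit differentiation confirms.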

Of course, for any results derived using "differential intuition", we must later find a rigorous proof to confirm there is no mistake.

Note also: There are other approaches to making calculus rigorous (based on nonstandard analysis I think) that actually make infinitesimals rigorous. So they manage to embrace $dx$ and $dy$ as legitimate quantities, rather than avoiding them.
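
Roughly, in that framework the derivative of a (standard) function $f$ at a real point $x$ becomes the standard part of a genuine quotient of infinitesimals: \begin{equation} f'(x) = \operatorname{st} \left( \frac{f(x + \varepsilon) - f(x)}{\varepsilon} \right) \quad \text{for any nonzero infinitesimal } \varepsilon, \end{equation} where $\operatorname{st}$ rounds a finite hyperreal number to the nearest real number (this is only a sketch; making it precise is exactly what nonstandard analysis does).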

Additionally, in differential geometry, quantities like $dx$ are defined precisely as "differential forms", and some treatments of calculus (like Hubbard & Hubbard) embrace differential forms at an early stage. But you can understand calculus rigorously without using differential forms.
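
Very roughly, in the differential-forms language $dx$ is the linear map that sends a tangent vector to its $x$-component, and for a smooth function $F(x,y)$ one has \begin{equation} dF = \frac{\partial F}{\partial x} \, dx + \frac{\partial F}{\partial y} \, dy, \end{equation} so calling $I \, dx + J \, dy = 0$ "exact" amounts to saying that the left-hand side is $dF$ for some function $F$, whose level curves $F(x,y) = C$ are then the solution curves.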

littleO
  • 51,938
  • This is a great answer!

    But if you don't mind me asking, I'm not sure I understand entirely how to get around the use of differentials. Why is the integral of $y \, \frac{dy}{dx}$ with respect to $x$ equal to $\frac{y^2}{2}$? If you took the integral of a derivative, shouldn't you get the original function $y$?

    – Matt24 Nov 28 '15 at 16:31
  • Oh, never mind about that, just understood it's nothing more than u-substitution. – Matt24 Nov 28 '15 at 17:16
  • You can also just look at it like this: if $h(x) = (1/2)y(x)^2$, then $h'(x) = y(x) y'(x)$ by the chain rule. – littleO Nov 28 '15 at 20:29
  • @Matt24: You can preserve symmetry by treating all variables as varying with respect to a single separate parameter $t$. So "$I(x,y) \,dx + J(x,y) \,dy = 0$" would actually mean "$I(x,y) \frac{dx}{dt} + J(x,y) \frac{dy}{dt} = 0$". This viewpoint gets rid of annoying special cases, such as when using implicit differentiation to analyze motion on a circle. It of course introduces natural issues, such as having to ensure that $\Delta x$ is eventually nonzero as $\Delta t$ approaches (but never reaches) $0$ before you can analyze $\frac{\Delta y}{\Delta x}$ to determine $\frac{dy}{dx}$. – user21820 Jan 24 '17 at 15:14
  • @littleO Forgive me for my ignorance, but how did you convert $dy = ydx$ to $y {dy\over dx} = 1$? Did you divide by $dx$, or did you take the derivative on both sides? –  Mar 04 '17 at 19:46
  • @A---B Notice I started with $y dy = dx$, not $dy = y dx$. – littleO Mar 04 '17 at 23:18
  • @littleO Yes, how do we get from $ydy = dx$ to $y {dy\over dx} = 1$? Sorry, I messed it up in writing. –  Mar 05 '17 at 01:12
  • 1
    @A---B I just divided both sides by $dx$. By the way, if I were avoiding differentials, I would not write the ODE as $y dy = dx$ in the first place. – littleO Mar 05 '17 at 01:30
-2

If you want to know what a mathematical thing is, you need two things:

  • How it works - what the rules of using it are.
  • A way of expressing it in terms of mathematical things you know and trust.

For differentials, we know what the rules of using them are. And I'm told some clever folks have worked out how to model them (notably Abraham Robinson with non-standard analysis).

There is a whole pyramid of mathematical things that we accept without demur these days that were once as suspect as differentials probably still are. What tends to happen is that we know how the thing works, then we have to find what it is or might be.

If you're used to counting, what's a negative number? ... a rational number?

Our confidence in real numbers is perhaps as naive as that in differentials.

Thumbnail
  • 563
  • 1
    Sorry, but your answer is not mathematically valid. The real numbers are very easily constructed in most foundational systems, including ZFC set theory, whether by decimal representations or by Cauchy sequences or by Dedekind cuts. At least today, ZFC is the standard foundational system, and to claim that a mathematical object of some kind exists is to claim that there is a proof in ZFC. No proof, no go. In particular, non-standard analysis is developed in ZFC and uses AC in a crucial way, whereas the structure of reals uses far less, so there is far less reason to doubt the latter. – user21820 Jan 24 '17 at 15:30
  • @user21820 Our confidence in real numbers may be more reasonable than our confidence in differentials, while remaining equally as naive. I think that's the case for the questioner and me. The former is a formal property of the believed thing; the latter a characteristic of our assent to it. – Thumbnail Jan 24 '17 at 17:12
  • 1
    I think I see what you mean, but then it's about philosophy of mathematics and hence not an answer to the question, or am I still missing something in your answer? Anyway when it comes down to philosophy, I think the reals have a very concrete justification through positions and scalings due to perspective shifts. In contrast, there is no reason to believe one can play with positive infinitesimals as if they are smaller than any positive real number, since there is no corresponding real world phenomenon. But please let me know if I'm misreading your answer. Thanks! – user21820 Jan 25 '17 at 04:12
  • @user21820 I'm making a psychological point, not a philosophical one: you're right - I didn't really answer the question. Intuition needn't correspond to logical simplicity. It's harder to construct the reals from the rationals than the complex numbers from the reals. Yet our intuition for the continuity of the reals far surpasses our intuition for complex numbers. By the way, aren't the perspective transformations on rational space a proper subgroup of those on real space? – Thumbnail Jan 25 '17 at 14:36
  • Continuity of the reals comes in 2 main flavours. One is the completeness of the reals (equivalent to intermediate value theorem; see http://math.stackexchange.com/a/90300). Another is the algebraic notion that every non-negative element has a square-root and every odd-degree polynomial has a root, which when added to the ordered field axioms gives the first-order theory of the ordered reals called the theory of real-closed fields. The former is far stronger, but intuition for IVT 'supports' it. Complex multiplication by $z$ can be viewed as the spiral transform about $0$ mapping $1$ to $z$. – user21820 Jan 25 '17 at 15:52