In single-variable calculus, where we first encounter differentials, we are told fairly often that differentials are not to be treated as infinitesimal quantities/objects (but we are never really told why), and yet we hand-wave and manipulate them as fractions (e.g. $\frac{dy}{dx}$) in techniques like $u$-substitution.
$$\int f(x) \ dx$$
In the above integral, the differential $dx$ seems to do absolutely nothing other than signify where the expression for the integrand finishes. But the differential seems to have more of a part to play in multivariable calculus.
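For instance, the fraction-like manipulation in $u$-substitution is, as I understand it, really just shorthand for the chain rule. Taking $\int 2x\cos(x^2) \, dx$ as a concrete example: we set $u = x^2$ and write $du = 2x \, dx$ as if $\frac{du}{dx}$ were a fraction, but the step that actually justifies this is

$$\int f(g(x)) \, g'(x) \, dx = \int f(u) \, du, \qquad u = g(x),$$

which here gives $\int 2x\cos(x^2) \, dx = \int \cos(u) \, du = \sin(x^2) + C$, with no infinitesimals required anywhere.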
In multivariable calculus we often encounter integrals like
$$\oint \vec{f}(x) \cdot d\vec{X} = \oint \|\vec{f}(x)\| \, \|d\vec{X}\| \cos(\theta)$$
where we can treat vector differentials as if they were infinitesimal vectors and perform normal vector operations on them (such as taking the dot product).
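As I understand it, the vector differential in such a line integral can be made precise through a parametrization: if the curve is $\vec{X} = \vec{r}(t)$ for $t \in [a, b]$, then $d\vec{X}$ abbreviates $\vec{r}\,'(t) \, dt$, so that

$$\oint \vec{f} \cdot d\vec{X} = \int_a^b \vec{f}(\vec{r}(t)) \cdot \vec{r}\,'(t) \, dt,$$

where every operation on the right-hand side is an ordinary dot product of honest (non-infinitesimal) vectors. This is what makes the intuitive picture above work, but it's not obvious to me how far that intuition can be pushed before it breaks.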
Questions and comments
- It doesn't seem all that clear to me why we can't treat either scalar or vector differentials as infinitesimals.
- Furthermore, why do we treat differentials as infinitesimals at all, even when it's not technically rigorous?
- I'm sure there must be conditions that need to be met that allow us to treat differentials as infinitesimals.
It seems like the differentials we encountered in single-variable calculus were just a special case of the differentials we encounter in multivariable calculus, with which we can do more.
I've heard of integration of differential forms (which I'm about to start learning), and I'm assuming that all the differentials we've encountered thus far in single and multivariable calculus must be special cases of this general notion. Am I correct? If not, where do we rigorously define differentials and the operations we can perform on them?