Why do we ignore squares of infinitesimal quantities in differential calculus? Doesn't that cause error when we require very accurate measurements?
-
See ["What is the meaning of infinitesimal?"](https://math.stackexchange.com/questions/455639/what-is-the-meaning-of-infinitesimal?rq=1). Infinitesimals as you are referring to do not really exist in the standard construction of $\mathbb{R}$ and are solely used to provide intuition. Their usage is formalized in nonstandard analysis. Infinitesimals are supposed to be smaller than any positive real number, and so technically they should not contribute to error in this sense. – csch2 Jan 18 '20 at 05:06
-
@csch2 Well, we do actually have a concrete notion of differentials (what formerly were infinitesimals), but I suspect that exterior calculus is a bit much for OP. – Rushabh Mehta Jan 18 '20 at 05:08
3 Answers
When you compute a derivative, you get something like $$\frac{dy}{dx}=\frac{a_0 + a_1\,dx + a_2\,dx^2 + \cdots}{dx},$$ where the coefficients $a_0, a_1, \dots$ can be functions of $x$. If $a_0$ were not zero, this expression would tend to infinity, so we must have $a_0 = 0$. What remains is $$\frac{dy}{dx}=a_1 + a_2\,dx + \cdots.$$ Now you can take $dx$ as small as you like, tending to zero, so no error results: if you are calculating to the billionth decimal place, you can take $dx$ even smaller than that.
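A quick numerical sketch of the point above, using $y = x^2$ as an assumed example: the difference quotient works out to $2x + dx$, so the term we "ignore" contributes an error of exactly $dx$, which you can make as small as your required precision demands.

```python
# For y = x^2, the difference quotient is
#   (y(x + dx) - y(x)) / dx = 2x + dx,
# so the dropped term contributes an error of roughly dx.
def difference_quotient(x, dx):
    return ((x + dx) ** 2 - x ** 2) / dx

x = 3.0
for dx in (1e-1, 1e-3, 1e-5):
    approx = difference_quotient(x, dx)
    # The error |approx - 2x| shrinks like dx as dx -> 0.
    print(dx, approx, abs(approx - 2 * x))
```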

In the standard framework of the real numbers, there is no non-zero number that is infinitesimally small. Instead, there are quantities whose limits tend to $0$ - so "infinitesimal" only takes on a meaning inside of a limit (in the same way that "dx" only takes on a meaning under an integral sign). Thus if $f(x)$ is a quantity satisfying $$ \lim_{x\to a}f(x)=0, $$ we can say that $f(x)$ is "infinitesimally small" as $x$ approaches $a$. Now $f(x)^2$ is infinitesimally smaller than $f(x)$ itself, so even if we multiply $f(x)$ by some quantity $g(x)$ that blows up to $\infty$ (for example, $g(x)=1/f(x)$ has this property) then we can still have $g(x)f(x)^2$ tend to zero.
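To illustrate the last claim numerically (names here are my own, chosen to match the answer): take $f(x) = x$, which tends to $0$ as $x \to 0$, and $g(x) = 1/f(x)$, which blows up. The product $g(x)f(x)^2 = x$ still tends to zero.

```python
# f(x) = x is "infinitesimally small" as x -> 0.
def f(x):
    return x

# g(x) = 1/f(x) blows up to infinity as x -> 0.
def g(x):
    return 1.0 / f(x)

for x in (1e-1, 1e-4, 1e-8):
    # g(x) * f(x)^2 = x, which still tends to 0 even though g blows up.
    print(x, g(x) * f(x) ** 2)
```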

In all empirical sciences there is error, no matter how accurate the measurements are. Measured values are always rational numbers, and the error term is always non-trivial. You have to justify ignoring infinitesimals, and that typically involves showing that the ignored term is smaller than the error of the measurement. In practice this happens routinely, and so it's useful to study the case of negligible terms.
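The justification described above can be sketched concretely (the numbers here are hypothetical): when $dx$ is small, the dropped second-order term $dx^2$ sits far below a realistic instrument precision, so neglecting it is harmless.

```python
# Hypothetical instrument precision and step size, chosen for illustration.
measurement_error = 1e-6   # smallest difference the instrument can resolve
dx = 1e-4                  # size of the small change being considered

# For y = x^2, d(y) = 2x dx + dx^2; the term we drop is dx^2.
dropped_term = dx ** 2

# The dropped term is far smaller than the measurement error,
# so ignoring it cannot affect any measurable result.
print(dropped_term, measurement_error, dropped_term < measurement_error)
```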

-
This is a terrible answer. How does this answer the question at all? OP's question is very valid. – Rushabh Mehta Jan 18 '20 at 05:07
-
@DonThousand I'll elaborate, then. I don't see how you can interpret this question mathematically, though, as trying to mix infinitesimals with real-world measurements seems non-obvious without appealing to philosophical arguments. – CyclotomicField Jan 18 '20 at 05:43
-
@HarshaVeluru That's right, and you should never ignore it in real-world measurements, ever. If you want to ignore them, you have to justify it based on other considerations. Mathematically this case is commonly applicable, so we study it, but it is non-trivial. – CyclotomicField Jan 18 '20 at 05:46