
If we want to determine the error with high precision, we should calculate $f(x+\Delta x) - f(x)$. Why would we prefer an approximation using differentials, $dy = f'(x)\,dx$?

  • Who is saying this? When calculating error of what? – user7530 Sep 09 '22 at 07:54
  • The derivative, if we know it (e.g. if it can be calculated symbolically), tells us the approximate error for every value of $x$ (and sufficiently small $\Delta x$). Calculating $f(x + \Delta x) - f(x)$ for specific values of $x, \Delta x$ only tells you the error at that specific point $x$ that you've chosen. – Qiaochu Yuan Sep 09 '22 at 08:33
  • Hopefully the answer by Bey is sufficient. Because it's still pretty unclear to me what you want. I can guess, but I suppose I prefer OP to spell it out a bit more. – Brian Tung Sep 09 '22 at 17:10
  • @Qiaochu Yuan: You might already realize this (although maybe not everyone here), but I suspect the OP's question arises from elementary calculus exercises asking for using differentials to approximate the value of things like $\sqrt{16.08}$ and $\sqrt[3]{8.02},$ rather than the kind of approximations one frequently sees (at least, this is where I mostly saw this arise, as I'm mostly in pure math) in physics courses (e.g. show that for small oscillations with friction given by $\ldots$ the amplitude is approximately $\ldots$). – Dave L. Renfro Sep 09 '22 at 17:49
  • @DaveL.Renfro you are totally right. – Maroon Racoon Sep 11 '22 at 10:10

2 Answers


So here you are implicitly assuming that the error is solely in the measurement of $x$.

$f(x+\Delta x) - f(x)$ is the actual difference between the function's values at two points. If you know the error distribution of $\Delta x$, then you could, in principle, get the distribution of $f(x + \Delta x)$.
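For instance, here is a minimal numerical sketch of that idea (the choice $f(x)=\sqrt{x}$, the measured value $x = 16$, and the error spread $\sigma$ are purely illustrative): draw samples of $\Delta x$ from an assumed error distribution and push them through $f$.

```python
import numpy as np

# Illustrative setup: f(x) = sqrt(x), measured x = 16, with a
# normally distributed measurement error (sigma chosen arbitrarily).
f = np.sqrt
x = 16.0
sigma = 0.05

rng = np.random.default_rng(0)
dx = rng.normal(0.0, sigma, size=100_000)  # samples of the error in x

# Empirical distribution of the exact difference f(x + dx) - f(x)
dy = f(x + dx) - f(x)
print(dy.mean(), dy.std())
```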

However, in error analysis we typically assume that $|\Delta x| \ll |x|$, and we want an easy way to see how sensitive our estimate of $f(x)$ is to (small) errors in $x$.

If we assume $f(x)$ is smooth, then from Taylor's theorem we have

$$f(x+\Delta x) \approx f(x) + f'(x)\,\Delta x,$$

which will be reasonably accurate if $\Delta x$ is "small" (one of those vague terms that can be made precise, but think of a curve being approximated by its tangent line: there is some neighborhood around the point of tangency where the error stays below whatever tolerance you set). In fact, the error of this linear approximation goes to $0$ faster than $\Delta x$ as $\Delta x \to 0$.

So, we can now understand how our error in $y$ is impacted (approximately) by errors in $x$: $\Delta y \approx f'(x)\Delta x$.
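As a concrete check, here is a small sketch using the $\sqrt{16.08}$ example mentioned in the comments (the numbers are just for illustration):

```python
import math

# Approximate sqrt(16.08) via dy = f'(x) dx at x = 16 with dx = 0.08
f = math.sqrt
x, dx = 16.0, 0.08

fprime = 1 / (2 * math.sqrt(x))   # f'(x) for f(x) = sqrt(x)
dy = fprime * dx                  # differential approximation of the change
exact = f(x + dx) - f(x)          # exact change

print(f(x) + dy)        # 4.01 (differential estimate of sqrt(16.08))
print(f(x + dx))        # 4.00998... (actual value)
print(abs(dy - exact))  # approximation error, on the order of 1e-5
```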

For one-dimensional problems, this may not seem like a big improvement. However, imagine $x \in \mathbb{R}^n$, so that $\Delta x$ is also a vector. Probing the sensitivity coordinate by coordinate with finite differences then requires $n+1$ evaluations of $f$ (or $2n$ for central differences), repeated anew for every point $x$ you care about. This is where the differential approach shines: you can calculate $\nabla f(x)$ once (as a formula) and then get the approximate error as $\Delta y \approx \nabla f(x) \cdot \Delta x$. Now you have a straightforward (linear) formula for the error for any $x$.
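Here is a sketch of that multivariable case (the function, point, and perturbation are all made up for illustration); once $\nabla f$ is known as a formula, the approximate error is a single dot product, whereas a forward-difference estimate of the same sensitivities would need one extra evaluation of $f$ per coordinate:

```python
import numpy as np

def f(x):
    # Illustrative function of x in R^3
    return x[0] * np.exp(x[1]) + np.sin(x[2])

def grad_f(x):
    # Gradient worked out by hand (symbolically)
    return np.array([np.exp(x[1]), x[0] * np.exp(x[1]), np.cos(x[2])])

x = np.array([1.0, 0.5, 0.3])
dx = np.array([1e-3, -2e-3, 5e-4])  # small measurement errors in each coordinate

dy_linear = grad_f(x) @ dx          # linear error formula: grad f(x) . dx
dy_exact = f(x + dx) - f(x)         # exact change, for comparison
print(dy_linear, dy_exact)
```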

Of course, Taylor's theorem can also provide second-, third-, or $n$th-order approximations using higher-degree polynomials. So the main benefit is that it provides clean formulas approximating the error due to measurement, rather than relying on "black box" finite-difference calculations.
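For example, the second-order version of the same error formula (a sketch; the second derivative would have to be computed by hand or symbolically) is

$$\Delta y \approx f'(x)\,\Delta x + \tfrac{1}{2}\,f''(x)\,(\Delta x)^2,$$

which tightens the estimate when $f$ has noticeable curvature near $x$.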

Annika

This was originally written as a comment to follow up on my comment above from yesterday, but because of its length and possible interest to others, I'm giving it as an answer.

At the risk of stating the obvious -- given all that has been written here so far -- the primary applications of these approximations are NOT to calculate specific numerical values, but instead to approximate difficult-to-work-with formulas.

For example, if $x$ is small (i.e. close to $0$), then $\sin x$ is approximately equal to $x$. This is used to replace the exact pendulum equation (a differential equation that cannot be solved in terms of the usual "elementary functions" of calculus) with an approximate pendulum equation (see question/answer here) that is easy to solve exactly (e.g. it can now be solved by methods often introduced in beginning calculus courses) and that still gives you a lot of useful information (e.g. it is useful for timekeeping, although engineering modifications need to be incorporated for accurate timekeeping over longer periods of time).
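Concretely (a standard sketch, writing $\theta$ for the pendulum angle, $g$ for gravitational acceleration, and $L$ for the pendulum length), the substitution $\sin\theta \approx \theta$ turns the exact equation

$$\ddot{\theta} + \frac{g}{L}\sin\theta = 0 \qquad \text{into} \qquad \ddot{\theta} + \frac{g}{L}\theta = 0,$$

whose solutions are ordinary sines and cosines with period $2\pi\sqrt{L/g}$.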

Another example is that $e^t$ is approximately equal to $1 + t,$ which explains why for small unit-time exponential growth rates, such as 2% and 5%, the "growth constant" $k$ in $Ae^{kt}$ is approximately the same as the "growth constant" $r$ in $A(1+r)^t.$
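As a quick numerical check (the 5% rate is just an illustration): matching the two forms gives

$$A(1+r)^t = Ae^{t\ln(1+r)} \quad\Longrightarrow\quad k = \ln(1+r) \approx r, \qquad \ln(1.05) \approx 0.0488 \approx 0.05.$$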

Finally, the meaning of "approximately the same" here is NOT absolute error but instead it is relative error (i.e. percent error). For example, when $x$ is close to $0,$ then the actual difference between $x$ and $x^2$ is small (e.g. for $x=0.01,$ we have $x - x^2 = 0.0099)$; however, the relative difference between $x$ and $x^2$ is large (e.g. for $x=0.01,$ the value of $x$ is $100$ times the value of $x^2).$

Useful Stack Exchange Questions (for linear, quadratic, and higher order polynomial approximations)

Physical applications of higher terms of Taylor series

An introductory example for Taylor series (12th grade)

What are power series used for? (a reference request)