
Consider the program below, which calculates the mathematical constant $e$ from the definition $$e = \lim_{n\to \infty}\left(1+\frac{1}{n}\right)^n. $$

It computes $\left(1+\frac{1}{n}\right)^n$ for $n = 10^k$, $k = 1, 2, \ldots, 20$.

Program:

```fortran
Program EApprox
  implicit none

  integer :: n, k
  real :: f, abs_error

  do k = 1, 20
     n = 10**k                 ! note: overflows a default 32-bit integer for k >= 10
     f = (1.0 + 1.0/real(n))**n
     abs_error = abs(exp(1.0) - (1.0 + 1.0/real(n))**n)
     write(*,*) k, n, f, &
          abs_error
  end do

End Program EApprox
```
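As a rough sanity check (this is my own sketch, not part of the original program), the two error sources can be tabulated directly from the standard model: truncation $\approx e\,h/2$ and roundoff $\approx e\,u/h$, where $h = 1/n$ and $u$ is the single-precision unit roundoff. The program name and loop bounds below are illustrative assumptions.

```fortran
! Sketch: prints the modeled truncation and roundoff errors for h = 1/n.
! The crossover, where roundoff overtakes truncation, lands near
! h ~ sqrt(epsilon), i.e. n of a few thousand in single precision.
Program ErrorModel
  implicit none
  integer :: k
  real :: h, trunc_err, round_err

  do k = 1, 12
     h = 10.0**(-k)                           ! h = 1/n for n = 10**k
     trunc_err = exp(1.0) * h / 2.0           ! leading truncation term, O(h)
     round_err = exp(1.0) * epsilon(1.0) / h  ! roundoff growth, O(1/h)
     write(*,*) k, trunc_err, round_err
  end do
End Program ErrorModel
```

Comparing each column of this model against the actual `abs_error` column should show which regime dominates at each $k$.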

Results: (the program's output table of $k$, $n$, $f$, and the absolute error is omitted here)

Does the error always decrease as n increases?

Here are some points that I would like further clarifications on:

  1. What do you observe about the behavior of the error as $n$ increases?
  2. For the first few results, when $n$ increases by a factor of $10$, what happens to the error? At what rate does it decrease?
  3. Then we reach a point where the error starts to grow. What do you think might be causing that growth? And as $n$ continues to increase by a factor of $10$, what is the rate of increase of the error?
  4. Is it possible to identify that the truncation error behaves like $O(h)$, where $h = 1/n$, and that the roundoff error behaves like $O(1/h)$? If so, how?
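As a hedged sketch of the analysis these questions point toward (with $h = 1/n$ and $u$ the unit roundoff, $u \approx 1.2\times 10^{-7}$ in single precision): the Taylor expansion gives

$$\left(1+h\right)^{1/h} = e\left(1 - \frac{h}{2} + O(h^2)\right),$$

so the truncation error is approximately $e\,h/2 = O(h)$. Forming $1 + 1/n$ in floating point commits an absolute error of up to about $u$, and a perturbation $\delta$ of that argument changes $(1+h)^{1/h}$ by roughly $e\,\delta/h$, so the roundoff contribution behaves like $e\,u/h = O(1/h)$. The total, $e\left(h/2 + u/h\right)$, is minimized near $h \approx \sqrt{2u}$, i.e. $n$ on the order of a few thousand in single precision, which is where the observed error should stop decreasing and begin to grow.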
Arctic Char
