I would like to show here the behavior of a piece of numerical software, Matlab, when asked to compute $\left(1+\frac{1}{10^n}\right)^{10^n}$ for $n=0,1,\dots,19$:
$$\begin{array}{|r|r|}
\hline
n & \left(1+10^{-n}\right)^{10^n}\\
\hline
0& 2.000000000000000\\
1& 2.593742460100002\\
2& 2.704813829421529\\
3& 2.716923932235594\\
4& 2.718145926824926\\
5& 2.718268237192298\\
6& 2.718280469095753\\
7& 2.718281694132082\\
8& 2.718281798347358\\
9& 2.718282052011560\\
10& 2.718282053234788\\
11& 2.718282053357110\\
12& 2.718523496037238\\
13& 2.716110034086901\\
14& 2.716110034087023\\
15& 3.035035206549262\\
16& 1.000000000000000\\
17& 1.000000000000000\\
18& 1.000000000000000\\
19& 1.000000000000000\\
\hline
\end{array}$$
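For reference, here is a minimal Matlab sketch of the computation behind this table (assuming the default double-precision arithmetic; it is a reconstruction for illustration, not necessarily the exact session that produced the values above):

```matlab
% Evaluate (1 + 10^-n)^(10^n) in double precision for n = 0..19
for n = 0:19
    fprintf('%2d  %.15f\n', n, (1 + 10^(-n))^(10^n));
end
```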
Why this sudden collapse to $1$ from $10^n=10^{16}$ onward?
Because in $\left(1+10^{-16}\right)^{10^{16}}$ the increment $10^{-16}$ is smaller than half of the machine epsilon (a classical notion), which is precisely
$$eps=2.220446049250313 \times 10^{-16}$$
and is defined as the distance from $1$ to the next larger double-precision floating-point number (this is exactly what Matlab's built-in constant eps returns).
As a consequence, $1+10^{-16}$ rounds to exactly $1$, so $\left(1+10^{-16}\right)^{N}$ is evaluated as $\left(1+0\right)^{N}=1$ by this software.
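This can be checked directly at the prompt (a small illustration; the displayed form of eps depends on the current format setting):

```matlab
eps               % 2.220446049250313e-16, the spacing of doubles just above 1
(1 + 1e-16) == 1  % true : 1e-16 is below eps/2, so 1 + 1e-16 rounds back to 1
(1 + 2e-16) == 1  % false: 2e-16 is above eps/2, so the sum rounds up to 1 + eps
```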
The fact that the results are never very accurate (never better than an approximation to within about $3 \times 10^{-8}$: recall that $e \approx 2.718281828459045\ldots$) is another matter and deserves a separate analysis.
Remark: for another interesting occurrence of the machine epsilon, see here.