Taylor series are very useful for limits and asymptotics, because a truncated series comes with an error that is controlled in terms of higher powers of the distance to the expansion point. This is a very common application whenever we know that some argument is "small", e.g. because we're considering a discretisation.
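As a small numerical sketch of this error control (using only Python's standard `math` module), truncating $\sin(x)$ after its linear term leaves an error bounded by $|x|^3/6$, the magnitude of the next nonzero Taylor term; the error shrinks much faster than $x$ itself:

```python
import math

# Truncating sin(x) = x + ... after the linear term leaves an error
# bounded by |x|^3 / 6 (the next nonzero Taylor term), so the error
# vanishes much faster than the argument itself.
for x in [0.1, 0.01, 0.001]:
    error = abs(math.sin(x) - x)
    bound = abs(x) ** 3 / 6
    print(f"x = {x}: error = {error:.3e}, bound = {bound:.3e}")
    assert error <= bound
```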
As a practical example, the famous real-valued limit
$$
\lim_{x\to 0} \frac{\sin(x)}{x}
$$
can be evaluated using l'Hôpital's rule. That, however, requires checking several assumptions. The Taylor expansion instead tells us immediately that
$$
\sin(x) = 0 + x + \mathcal{O}(x^2)
$$
yielding
$$
\lim_{x\to 0} \frac{\sin(x)}{x} = \lim_{x\to 0} \frac{x + \mathcal{O}(x^2)} {x} = \lim_{x\to 0} \bigg(\frac{x}{x} + \mathcal{O}(x)\bigg) = 1
$$
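The computation above can be checked symbolically, assuming the `sympy` library is available: `series` produces the truncated expansion with its order term, and `limit` confirms the result.

```python
import sympy as sp

x = sp.symbols("x")

# The truncated expansion sin(x) = x + O(x^2) used above:
expansion = sp.sin(x).series(x, 0, 2)
print(expansion)

# The limit itself:
print(sp.limit(sp.sin(x) / x, x, 0))
```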
If you want to practice this concept, try applying it to
$$
\lim_{x\to 0} \frac{1-\cos(x)}{x^2}
$$
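Once you have worked out the answer by hand, a quick numerical sanity check is to evaluate the quotient at ever smaller arguments and watch where the values settle (a sketch using Python's standard `math` module; for extremely small `x` the subtraction $1 - \cos(x)$ eventually loses precision, so moderate values are used here):

```python
import math

# Evaluate (1 - cos(x)) / x^2 for shrinking x; the printed values
# should approach the limit you derived via the Taylor expansion.
def quotient(x):
    return (1 - math.cos(x)) / x**2

for x in [0.1, 0.01, 0.001]:
    print(f"x = {x}: {quotient(x):.6f}")
```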