The method of inserting $0$ in a clever way by adding and subtracting a term is used many times in analysis. I learned long ago to call this clever use of $0$ a "propitious zero" and others have been taught that term too: look here and here.
I'll apply this idea to products, to reciprocals, and to the terms of a sequence rewritten as a series.
Example 1. Let's prove continuity of multiplication of real numbers. If $x$ is near $a$ and $y$ is near $b$ then we want to show $xy$ is near $ab$. A standard way to do this is to write
$$
xy - ab = (xy - ay) + (ay - ab) = (x-a)y + a(y-b)
$$
and then
$$
(x-a)y = (x-a)(y-b+b) = (x-a)(y-b) + (x-a)b,
$$
so
$$
xy - ab = (x-a)(y-b) + (x-a)b + a(y-b).
$$
On the right side $x$ and $y$ only show up in the context of $x-a$ and $y-b$, so by choosing $x$ and $y$ so that $|x-a|$ and $|y-b|$ are sufficiently small, we can make the right side arbitrarily close to $0$. Thus multiplication as a mapping $\mathbf R^2 \to \mathbf R$ is continuous at each point $(a,b)$ in $\mathbf R^2$.
A similar argument shows other multiplication operations (on $\mathbf C$, on ${\rm M}_n(\mathbf R)$, etc.) are continuous.
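As a quick sanity check, here is a short Python sketch (my own illustration, with an arbitrarily chosen point $(a,b)$, not part of the argument) that confirms the three-term identity numerically and shows $|xy - ab|$ shrinking along with $|x-a|$ and $|y-b|$:

```python
# Numerical check of xy - ab = (x-a)(y-b) + (x-a)b + a(y-b)
# at an arbitrarily chosen point (a, b).
a, b = 3.0, -2.0

for delta in [1e-1, 1e-3, 1e-6]:
    x, y = a + delta, b - delta        # |x - a| = |y - b| = delta
    lhs = x*y - a*b
    rhs = (x-a)*(y-b) + (x-a)*b + a*(y-b)
    assert abs(lhs - rhs) < 1e-12      # the identity holds up to roundoff
    # triangle inequality bound: |xy - ab| <= delta^2 + delta|b| + |a|delta
    bound = delta**2 + delta*abs(b) + abs(a)*delta
    print(f"delta={delta:.0e}  |xy - ab| = {abs(lhs):.3e}  <=  {bound:.3e}")
```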
UPDATE: the answer by CR Drost reminds me that a propitious zero occurs in the proof of the product rule from calculus for the derivative $(u(t)v(t))'$ in exactly the same way as in the last identity above for $xy - ab$. In that identity, replace $a$ and $b$ with $u(t)$ and $v(t)$ and replace $x$ and $y$ with $u(t+h)$ and $v(t+h)$. It tells us that
$u(t+h)v(t+h) - u(t)v(t)$ equals
$$
(u(t+h) - u(t))(v(t+h)-v(t)) + (u(t+h) - u(t))v(t) + u(t)(v(t+h)-v(t)).
$$
Divide by $h$ and let $h \to 0$: the first term tends to $u'(t) \cdot 0 = 0$ since $v$ is continuous at $t$, so in the limit we get
$$
u'(t) \cdot 0 + u'(t)v(t) + u(t)v'(t) = u'(t)v(t) + u(t)v'(t).
$$
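Numerically, the three terms of the difference quotient behave exactly as the limit suggests. A small sketch (my own, with an arbitrary choice of $u$ and $v$):

```python
import math

# The three terms of (u(t+h)v(t+h) - u(t)v(t))/h for u = sin, v = exp at t = 1:
# term1 -> u'(t)*0 = 0, term2 -> u'(t)v(t), term3 -> u(t)v'(t).
u, v = math.sin, math.exp
t = 1.0

for h in [1e-1, 1e-3, 1e-5]:
    du, dv = u(t+h) - u(t), v(t+h) - v(t)
    term1, term2, term3 = du*dv/h, du*v(t)/h, u(t)*dv/h
    print(f"h={h:.0e}  term1={term1:+.6f}  term2={term2:+.6f}  term3={term3:+.6f}")

# compare with the product rule's value u'(t)v(t) + u(t)v'(t)
print("exact:", math.cos(t)*math.exp(t) + math.sin(t)*math.exp(t))
```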
Example 2. Let's prove continuity of inversion on the nonzero real numbers. If $a \not= 0$ and $x$ is close enough to $a$, we want to show $1/x$ is close to $1/a$. To begin, let's suppose $|x-a| < |a|$, so
$x$ is inside the open interval around $a$ of radius $|a|$ and thus $x \not= 0$. We have
$$
\left|\frac{1}{x} - \frac{1}{a}\right| = \frac{|x-a|}{|x||a|}.
$$
On the right side, in the numerator $x$ appears only in the context of $x-a$, which is great.
For the denominator, we want to get a (positive) lower bound on $|x|$ in terms of $|x-a|$ in order to get an upper bound on $1/|x|$. It's time for a propitious zero:
$$
|a| = |a-x+x| \leq |a-x| + |x| \Longrightarrow
|x| \geq |a| - |a-x| = |a| - |x-a|.
$$
As long as $|x-a| < |a|$, that lower bound is positive, so
$$
|x-a| < |a| \Longrightarrow
\left|\frac{1}{x} - \frac{1}{a}\right| = \frac{|x-a|}{|x||a|} \leq \frac{|x-a|}{(|a| - |x-a|)|a|}.
$$
The right side goes to $0$ as $|x-a| \to 0$ (with $a$ fixed). Concretely, sharpening $|x-a| < |a|$ to $|x-a| \leq |a|/2$ gives
$|a| - |x-a| \geq |a| - |a|/2 = |a|/2$, so
$$
\left|\frac{1}{x} - \frac{1}{a}\right| \leq \frac{|x-a|}{|a|^2/2} = \frac{2}{|a|^2}|x-a|.
$$
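Here is a quick randomized check of that final bound (my own sketch, with an arbitrary nonzero choice of $a$):

```python
import random

# Check |1/x - 1/a| <= (2/|a|^2)|x - a| whenever |x - a| <= |a|/2.
random.seed(0)
a = -1.7                                          # arbitrary nonzero base point
for _ in range(10_000):
    x = a + random.uniform(-abs(a)/2, abs(a)/2)   # forces |x - a| <= |a|/2
    assert abs(1/x - 1/a) <= (2/a**2)*abs(x - a) + 1e-12
print("bound verified on 10,000 random samples near a =", a)
```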
A similar argument shows inversion is continuous on $\mathbf C^\times$ and ${\rm GL}_n(\mathbf R)$, although some extra care is needed for the matrix case (when $n > 1$) since matrix multiplication is not commutative.
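For a sense of the matrix case, here is a numpy sketch (my own illustration; the post doesn't spell this argument out) built on the standard identity $X^{-1} - A^{-1} = X^{-1}(A - X)A^{-1}$, which one can verify by expanding the right side and which takes the place of the commutative rearrangements used above:

```python
import numpy as np

# Illustration: continuity of inversion near an invertible A, via the
# identity X^{-1} - A^{-1} = X^{-1}(A - X)A^{-1}.
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [0.0, 3.0]])           # an arbitrary invertible matrix
A_inv = np.linalg.inv(A)

for eps in [1e-1, 1e-3, 1e-6]:
    X = A + eps * rng.standard_normal(A.shape)   # small perturbation; invertible for small eps
    X_inv = np.linalg.inv(X)
    lhs = X_inv - A_inv
    assert np.allclose(lhs, X_inv @ (A - X) @ A_inv)   # the identity, numerically
    print(f"eps={eps:.0e}  ||X^-1 - A^-1|| = {np.linalg.norm(lhs):.3e}")
```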
Example 3: If $\{a_n\}$ is a sequence of numbers where $|a_n - a_{n+1}| \leq 1/2^n$, then $\sum_{n \geq 1} (a_n - a_{n+1})$ converges absolutely by comparison with $\sum_{n \geq 1} 1/2^n$, so the sequence $\{a_n\}$ converges; call its limit $L$. We can write each $a_m - L$ as a telescoping sum of the differences $a_n - a_{n+1}$ for $n \geq m$, which amounts to using infinitely many propitious zeros:
$$
a_m - L = (a_m - a_{m+1}) + (a_{m+1} - a_{m+2}) + (a_{m+2} - a_{m+3}) + \cdots = \sum_{k \geq m} (a_k - a_{k+1}).
$$
This by itself does not seem very interesting, but using this idea with functions in place of numbers is how you prove in measure theory that an $L^1$-convergent sequence of functions has a subsequence that is pointwise convergent almost everywhere. The argument for that is written in the accepted answer here.
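To see the telescoping identity in action, here is a tiny Python sketch (my own, with a concrete sequence satisfying the hypothesis):

```python
# a_n = L + 2^{-n} has |a_n - a_{n+1}| = 2^{-(n+1)} <= 1/2^n and limit L.
L = 5.0                                    # limit of the sequence

def a(n):
    return L + 2.0**(-n)

m, N = 3, 60                               # truncate the infinite series at k = N-1
partial = sum(a(k) - a(k+1) for k in range(m, N))
print(partial, a(m) - L)                   # both equal 2^{-3} = 0.125 up to roundoff
```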