In high school, I learned that the continuity of a function $f: \mathbb{R} \to \mathbb{R}$ at a point $a$ is defined by the following condition:
$$ \lim_{x \to a} f(x) = f(a) \tag{1}$$
However, in my undergraduate analysis book, continuity is defined as follows: a function $f: \mathbb{R} \to \mathbb{R}$ is continuous at a point $a$ if for every $\epsilon > 0$, there exists a $\delta > 0$ such that:
$$ |x-a | < \delta \implies |f(x) - f(a) | < \epsilon \tag{2}$$
In this video by Michael Penn, it's noted that the undergraduate definition is somehow more general, although it is not explained precisely how. I think that they are actually the same: clearly (1) implies (2) [take the definition of the limit], but I am having difficulty understanding why (2) doesn't imply (1). Could someone explain abstractly what the issue is? (Examples are helpful, but please don't base the entire explanation on them, thanks.)
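
To be explicit about the definition of limit I am invoking above: $\lim_{x \to a} f(x) = f(a)$ means that for every $\epsilon > 0$ there exists a $\delta > 0$ such that
$$ 0 < |x - a| < \delta \implies |f(x) - f(a)| < \epsilon, \tag{3}$$
which, as far as I can see, differs from (2) only in the condition $0 < |x - a|$ excluding the point $x = a$ itself.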