You should look at my answer here:
What is the intuition behind uniform continuity?
discussing continuity first (P.S. don't worry about the later material on uniform continuity, etc. if you don't get it - right now we only need the first part, about simple continuity). I actually think it may be more profitable to start with continuity, before limits, since we can then introduce limits in terms of continuity instead of the other way around. This is because continuity, when framed with the proper intuition, relates more directly to our prior experiences with functions, calculators, and measurements. Moreover, bringing in "rates of change" gets a bit ahead of things - that's differentiation, and here we need limits, which come first.
Basically, when we say a function $f$ is "continuous" at a point $x$ in its domain, what that means is that we can approximate the value of $f$ at that point, i.e. $f(x)$, to within any given tolerance $\epsilon$, by choosing a suitably accurate approximation $x'$ of the input $x$. This "suitable accuracy" required of the approximant $x'$ is exactly what $\delta$ is. Note that this is effectively "why you can use a calculator", as I discuss in my answer: since calculators must of necessity work with inputs of limited precision, if functions like sine, square roots, etc. were not continuous, there would be no reason even to hope that plugging in an imprecise input would guarantee an accurate result for the sine or square root of the precise value you actually wanted. And then we'd be in a very bad way!
So, without further ado, let's consider $f_1(x) := x^3$, and think about approximating the value at $x = 1$. An $\epsilon$ of $\frac{1}{2}$ basically means we are asking for the ability to know the value of $f_1$ at $1$ to within $0.5$ accuracy. Of course, here we already know the exact value of $f_1$ at $1$: it is just $1$. Moreover, we know "$1$" exactly as well - it is "$1$"! But we're going to ignore that and instead imagine we are approximating it anyway.
Finding a $\delta$ for this given $\epsilon$ means, basically, that you are asking "how close do I need to approximate $1$, i.e. the input, so that the output, i.e. $f_1(1)$, is approximated to within $\frac{1}{2}$ of the true value?" Since we know the exact value, it is not hard to see how accurate our approximations are by comparison. Let's take, say, $x' := 2$. Now, $f_1(x') = 8$. Is $8$ within $\epsilon = \frac{1}{2} = 0.5$ of $1$? Clearly, very much no! So let's try a closer input: $x' := 1.5$. Now, $f_1(x') = 3.375$. Still not within $0.5$. Now, how about $x' := 1.1$? Then we get $f_1(x') = 1.331$, which, at last, IS within $0.5$, i.e. within $\epsilon$, of $f_1(1) = 1$. Moreover, if you take any $x'$ closer than $1.1$, say $1.01$, then $f_1(x')$ is going to be even closer to $f_1(1)$!
Hence, one possible value of $\delta$ for this given $\epsilon$ is $\delta = 0.1$. "Rates of change" do not matter - what you are after is whether or not you can make $x'$, your approximant, close enough to the input $1$ that the output lands within $0.5$ accuracy of $1$ (that is, the $1$ coming from $1^3$, not the input $1$ - just to keep it clear, since it does look a little confusing), and moreover, once it gets that accurate, it stays at least that accurate as you make the approximant better still. Continuity asks that this be possible for every nonzero tolerance $\epsilon$ on the desired approximation of $f(x)$.
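If you like, here is a small numerical spot-check of that claim - purely illustrative and not a proof (sampling points can't rule out misbehavior between them; the bit of inequality algebra at the end of this answer is what actually proves it), with the names and sampling grid being my own choices:

```
def f1(x):
    return x ** 3

c = 1.0      # the input whose output we want to approximate
eps = 0.5    # tolerance demanded on the output f1(c)
delta = 0.1  # proposed tolerance on the input

# Sample many approximants x' strictly inside (c - delta, c + delta)
# and check that every one of them lands the output within eps of f1(c).
samples = [c - delta + 2 * delta * k / 10_000 for k in range(1, 10_000)]
print(all(abs(f1(xp) - f1(c)) < eps for xp in samples))  # expected: True
```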
Of course, wait - as said, we were talking about limits, not continuity. Well, limits are just this: if we say
$$\lim_{x \rightarrow c} f(x) = L$$
what that means is that $L$ is the value $f$ would need to take at $c$ in order to be continuous there - regardless of whether it actually is continuous there, or is even defined there. That is, it is the value $L$ that makes the "modified" $f$
$$f^{*}(x) := \begin{cases} f(x),\ x \ne c\\ L,\ x = c \end{cases}$$
continuous at $x = c$, if such a value exists. Alternatively, a limit is the value that $f$ "looks like it is trying to approximate", if it does in fact do such a thing, as you tighten up the tolerance on an approximation $x$ of the point $c$ at which the limit is taken. In terms of the formal definitions, note the similarity: the condition for continuity is
$$|x - c| < \delta \ \rightarrow\ |f(x) - f(c)| < \epsilon$$
(N.B. what we were calling $x'$ earlier is called "$x$" here, and what we called $x$ earlier - the "input to be approximated" - is called $c$ here, since this is the standard form used in discussing limits) and the condition for the limit, in the way it's usually presented in beginners' texts, is
$$0 < |x - c| < \delta \ \rightarrow\ |f(x) - L| < \epsilon$$
so the understanding transfers immediately once you think of $L$ as a "suitable" $f(c)$. (The extra condition $0 < |x - c|$ just excludes the point $c$ itself from consideration.) The point about limits is that we can have them even if $f(c)$ is not defined, or is defined to be something other than $L$ - which is crucial when you get to derivatives, because there you will be finding the limit of a quotient which, if you were to substitute directly the value $c$ at which the limit is being taken, would give the nonsense expression $\frac{0}{0}$ and hence be undefined!
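To make that concrete (this particular quotient is just my own illustrative choice, picked to stay with the cube from above - it happens to be the difference quotient you'd meet for the derivative of $f_1$ at $1$):
$$\lim_{x \rightarrow 1} \frac{x^3 - 1}{x - 1} = 3$$
even though plugging in $x = 1$ directly gives $\frac{0}{0}$. For every $x \ne 1$ the quotient equals $x^2 + x + 1$, and the value that makes that expression continuous at $x = 1$ is $1 + 1 + 1 = 3$, so $L = 3$ is exactly the value the "modified" $f^{*}$ above would need to take there.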
And since $f_1(x)$ is continuous at $x = 1$, you also know the limit value should be $f_1(1) = 1$!
ADD: You may want to ask how we can then prove that $\delta = 0.1$ works in this case - i.e. how we know that for all tighter approximations $x'$ than $1.1$, the value of $f_1(x')$ will still differ from $f_1(1)$ by less than the desired tolerance $\epsilon = 0.5$, i.e. that there aren't any hidden "spikes" or "jumps" in there that might screw it up. Well, that's just a bit of algebra with inequalities. Suppose that we have
$$1 < x' < 1.1 = 1 + \delta$$
Now we use the rules for inequalities to apply $f_1$, i.e. cube throughout (cubing preserves order, since it is an increasing function):
$$1^3 < x'^3 < 1.1^3$$
giving
$$1 < f_1(x') < 1.331$$
hence
$$f_1(1) < f_1(x') < 1.331 < 1.5 = f_1(1) + \epsilon$$
i.e. $f_1(1)$ is approximated to within the $\epsilon$-tolerance no matter which $x'$ we take, so long as it meets the $\delta$-tolerance. Similarly, we can and should finish up by working the left-hand side as well, where, for instance, $f_1(1 - \delta) = f_1(0.9) = 0.729$, which is again within $0.5$ of $1$, this time because $0.729$ exceeds $f_1(1) - \epsilon = 1 - 0.5 = 0.5$. Cheers!
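Spelling that left-hand side out the same way as the right: suppose
$$1 - \delta = 0.9 < x' < 1$$
Cubing throughout (again, order-preserving) gives
$$0.729 = 0.9^3 < x'^3 < 1^3 = 1$$
hence
$$f_1(1) - \epsilon = 0.5 < 0.729 < f_1(x') < f_1(1)$$
so every $x'$ meeting the $\delta$-tolerance on this side also keeps the output within the $\epsilon$-tolerance.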