Consider the expression $\frac{2xh+3h^2}{h}$. When $h=0$ we cannot evaluate this, as division by zero is undefined. However, factoring out the $h$ and cancelling gives $2x+3h$, and somehow we can now substitute $h=0$ with no error at all. It feels like some cheap trick has been used to bypass division by $0$.
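To make the cancellation explicit, the algebra I am doing is

$$\frac{2xh+3h^2}{h}=\frac{h(2x+3h)}{h}=2x+3h \qquad \text{for } h\neq 0.$$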
Similarly, when differentiating from first principles in calculus, taking the limit as $h$ approaches $0$ runs into this division by $0$ problem again. However, the terms cancel out, once again allowing us to evaluate the limit by considering $h=0$.
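To illustrate what I mean, here is the simplest concrete case I know (using $f(x)=x^2$ purely as an example, not the expression above):

$$f'(x)=\lim_{h\to 0}\frac{(x+h)^2-x^2}{h}=\lim_{h\to 0}\frac{2xh+h^2}{h}=\lim_{h\to 0}(2x+h)=2x.$$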
My question is: why is this even allowed? I thought that division by $0$ is never defined, so why can we just cancel some terms to avoid it, when in reality we were still dividing by $0$ originally? So what's going on?
Edit: After reading the linked answers, I still don't completely understand what is going on. Using my example above, $\frac{2xh+3h^2}{h}=2x+3h$ is true for all $h$ except $h=0$. I understand this. So for all values arbitrarily close to $h=0$ this is true, and hence the function gets closer and closer to $2x$. However, it never actually reaches $2x$, so what allows us to substitute $h=0$? We have basically said that this statement is true for all $h$ except $0$, and then gone ahead and let $h=0$. This is extremely unintuitive to me.
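For reference, the definition of the limit I have seen quoted is the $\varepsilon$-$\delta$ one, which as I read it only ever considers $h\neq 0$:

$$\lim_{h\to 0} g(h)=L \iff \text{for every } \varepsilon>0 \text{ there is a } \delta>0 \text{ such that } 0<|h|<\delta \implies |g(h)-L|<\varepsilon,$$

so I do not see where actually substituting $h=0$ is justified.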