I still have only very brief experience with calculus and limits, so this might actually be a silly question with a very simple answer.
Imagine you are trying to find the limit of $f(x)$ as $x$ approaches 1, where $f(x) = \frac{\sqrt x-1}{x-1}$.
Replacing $x$ by 1 right away makes this $\frac{0}{0}$.
If you instead rewrite the denominator $x-1$ as $(\sqrt x+1)(\sqrt x-1)$ and continue simplifying,
$\lim \limits_{x \to 1} \frac{\sqrt x-1}{(\sqrt x+1)(\sqrt x-1)} = \lim \limits_{x \to 1} \frac{1}{\sqrt x+1} = \frac{1}{2}$
you get $\frac{1}{2}$, which is, according to my answer sheet, indeed the limit of $f(x)$ as $x$ approaches 1.
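(For reference, I do see why the factoring of the denominator itself is valid; it is just the difference of squares:

$(\sqrt x+1)(\sqrt x-1) = (\sqrt x)^2 - 1^2 = x - 1$.)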
What I don't understand, though, is how I got different results ($\frac{0}{0}$ and $\frac{1}{2}$) just by changing the way the expression is written, and not only in this case, but in almost every basic limit question I've encountered. Clearly, replacing the denominator with another expression that means exactly the same thing drastically changes the final result, but how is this possible?