I'm reading a numerical analysis book, and I find its definitions of stability and backward stability confusing.
It says that an algorithm $f_b$ (the floating-point implementation of a problem $f$) is *stable* if, for each $x$, $$ \frac{\|f_b(x)-f(x_b)\|}{\|f(x_b)\|}=\mathcal{O}(e_{machine}) $$ for some $x_b$ with $\frac{\|x_b-x\|}{\|x\|}=\mathcal{O}(e_{machine})$.
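To convince myself the definition is at least satisfiable, I tried a small experiment (my own sketch, assuming NumPy; not from the book) with $f(x)=\sqrt{x}$: the computed root $y = f_b(x)$ is the exact root of the nearby input $x_b = y^2$, and $x_b$ is within a few machine epsilons of $x$, which is exactly the situation the definition describes.

```python
import numpy as np

# eps is the double-precision machine epsilon, ~2.2e-16
eps = np.finfo(float).eps
rng = np.random.default_rng(0)

x = rng.uniform(0.5, 100.0, size=1000)
y = np.sqrt(x)   # the computed f_b(x)
x_b = y * y      # a nearby input whose exact square root is y (up to one more rounding)

# relative size of the input perturbation, in units of eps
rel_perturbation = np.abs(x_b - x) / np.abs(x)
print(rel_perturbation.max() / eps)  # a small constant, i.e. O(e_machine)
```

So for this $f$ the definition holds with $x_b = y^2$, even though I did not compare $f_b(x)$ to $f(x)$ directly.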
I had assumed stability should measure how far the computed mapping is from the true one, so I don't understand why the definition is not $$ \frac{\|f_b(x)-f(x)\|}{\|f(x)\|}=\mathcal{O}(e_{machine}). $$
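What puzzles me is that the forward-error version seems to fail even for reasonable algorithms. For example (my own experiment, not from the book), Horner's rule is a standard example of a backward stable algorithm — the computed value is the exact value of a polynomial with slightly perturbed coefficients — yet near a triple root the forward relative error it produces is far larger than $e_{machine}$:

```python
import numpy as np

def horner(coeffs, x):
    """Evaluate a polynomial, coefficients given from highest degree down."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# p(x) = x^3 - 3x^2 + 3x - 1 = (x - 1)^3, which has a triple root at x = 1
coeffs = [1.0, -3.0, 3.0, -1.0]

d = np.arange(1, 21) * 1e-6          # small offsets from the root
xs = 1.0 + d
computed = np.array([horner(coeffs, x) for x in xs])
exact = (xs - 1.0) ** 3              # xs - 1.0 is computed exactly here

# forward relative error of the backward stable evaluation
forward_rel_err = np.abs(computed - exact) / np.abs(exact)
print(forward_rel_err.max())         # typically far above e_machine
```

So if stability were defined by forward error, this algorithm would be "unstable", even though the large error comes from the ill-conditioning of the problem near the triple root rather than from the algorithm itself. Is that the motivation for the book's definition?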
Can anyone explain why stability is defined this way, and what the general considerations are in a stability analysis?
Also, regarding backward stability: is it true that every backward stable algorithm is stable?