An algorithm $\tilde{f}$ for a problem $f$ is said to be backward stable if for any input $x$, $$ \tilde{f}(x) = f(\tilde{x}) \ \text{ for some } \tilde{x} \text{ such that } \frac{||\tilde{x} - x||}{||x||} = O(\epsilon) $$ as $ \epsilon \to 0 $, where $\epsilon$ is the machine epsilon.
The first time I saw this definition I was very confused: where does the $ \epsilon $ come from? There is no $ \epsilon $ in the definitions of $ f, \tilde{f}, x, \tilde{x} $, etc.
After some thought, I realized that the algorithm $\tilde{f}$ is meant to be implemented on a family of idealized machines whose machine epsilon $\epsilon$ tends to $0$. Thus the algorithm is really a family of functions $ \{ \tilde{f}_\epsilon \}_{\epsilon >0} $, where each $ \tilde{f}_\epsilon $ corresponds to a different machine epsilon $\epsilon$.
Here is my understanding of backward stability:
An algorithm $ \{ \tilde{f}_\epsilon \}_{\epsilon >0} $ is backward stable if there exist $C, \delta>0$ such that for every input $x$ and every $ 0 < \epsilon < \delta$, there exists $\tilde{x}$ such that $$ \tilde{f}_\epsilon(x) = f(\tilde{x}) \ \text{ and } \ ||\tilde{x} - x|| \leq C \epsilon ||x||. $$
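As a sanity check on this reading, here is a small Python sketch (the helper `chop` and the constant `4` below are my own choices for illustration, not from any reference). It emulates the family of machines by rounding every operation to a $p$-bit significand, i.e. machine epsilon $\epsilon = 2^{-p}$, runs the "algorithm" $\tilde{f}_\epsilon(x, y) = \mathrm{fl}(\mathrm{fl}(x) - \mathrm{fl}(y))$ for $f(x,y) = x - y$, and then exhibits perturbed inputs $\tilde{x}, \tilde{y}$ with $\tilde{x} - \tilde{y}$ exactly equal to the computed result, measuring how large the relative perturbation is compared to $\epsilon$:

```python
import math

def chop(v, p):
    """Round v to a p-bit significand, emulating a machine with epsilon = 2**-p."""
    if v == 0.0:
        return 0.0
    m, e = math.frexp(v)  # v = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2**p) / 2**p, e)

def backward_error_subtraction(x, y, p):
    """Backward relative error of computing x - y on the emulated machine."""
    eps = 2.0 ** -p
    fx, fy = chop(x, p), chop(y, p)   # rounded inputs:  fx = x(1 + d1), fy = y(1 + d2)
    result = chop(fx - fy, p)         # rounded subtraction: result = (fx - fy)(1 + d)
    # Push the final rounding back onto the operands: with xt = fx(1 + d) and
    # yt = fy(1 + d) we get xt - yt = result exactly, so result = f(xt, yt).
    d = result / (fx - fy) - 1.0 if fx != fy else 0.0
    xt, yt = fx * (1.0 + d), fy * (1.0 + d)
    backward_err = max(abs(xt - x) / abs(x), abs(yt - y) / abs(y))
    return backward_err, eps

for p in (8, 12, 16, 20):
    err, eps = backward_error_subtraction(1.0 / 3.0, 0.3141592653589793, p)
    print(f"p={p:2d}  eps={eps:.1e}  backward error={err:.1e}  ratio={err/eps:.2f}")
```

The printed ratio stays bounded (roughly by $2$, from one rounding on each input plus one on the subtraction) as $p$ grows, which is exactly the "$||\tilde{x} - x|| \leq C\epsilon||x||$ uniformly in $\epsilon$" clause above.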
Is my understanding correct? This definition seems too complicated.