Questions on the definition of sharpness in the context of bounds and inequalities have already been asked on this site, for example here and here. However, I am interested in the specific case where a sharp bound is derived for some error measurement.
Let us consider a function $r:\mathbb{R}\rightarrow\mathbb{R}$ which can be interpreted as the size of some error that we are trying to bound; moreover, suppose there exists $x^\star\in\mathbb{R}$ such that $r(x^\star)=0$, that is, the error vanishes at $x^\star$. This is the case, for example, when approximating a function $f(h)$ around zero by its first-order Taylor expansion $f(0)+f^\prime(0)h$: when the shift $h$ is zero, the approximation coincides with the exact value.
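To make this concrete, here is a simple instance of this setup (just an illustration, not taken from any particular paper): take $f(h)=\cos h$, whose first-order Taylor approximation at zero is the constant $1$. The standard estimate \begin{align} r(h)=1-\cos h\leq\frac{h^2}{2},\quad\forall\ h\in\mathbb{R} \end{align} then plays the role of the error bound, and both sides vanish at $h^\star=0$.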
I don't have any reference at hand, but when reading papers where an error is measured, I often find theorems or propositions in which the author(s) establish a "sharp bound" (also referred to as a "sharp estimate") for the error; this takes the form of an inequality. So let us assume we have found a function $g:\mathbb{R}\rightarrow\mathbb{R}$ such that: \begin{align} & r(x)\leq g(x), \quad \forall\ x\in\mathbb{R} \tag{1a}\\ & r(x^\star)=g(x^\star) \tag{1b} \end{align} According to the definitions of sharpness (see for example this answer), the function $g$ provides a sharp upper bound on the error. Yet because we have assumed $r(x^\star)=0$, condition $(1b)$ forces $g(x^\star)=0$, and then any constant $C\geq1$ yields a function $Cg(x)$ that also satisfies $(1)$: indeed $Cg(x)\geq g(x)\geq r(x)$ (since $g\geq r\geq0$) and $Cg(x^\star)=C\times0=r(x^\star)$. Therefore the bound can be made arbitrarily loose while still being "sharp" in this sense.
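To illustrate with the toy cosine example above: the inflated estimate \begin{align} 1-\cos h\leq 100\cdot\frac{h^2}{2},\quad\forall\ h\in\mathbb{R} \end{align} also satisfies both $(1a)$ and $(1b)$, since the right-hand side still vanishes at $h=0$, even though it is a far worse bound in any practical sense.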
Would the above still be considered a sharp bound (in the context of error measurement)? Or does a sharp bound for error estimates mean something else?