Here is a practical example. Two unbiased estimators for the mean $\mu$ of a normal
population are the sample mean $A$ and the sample median $H.$ (See here for unbiasedness of the sample median of normal data.)
That is, $E(A) = E(H) = \mu.$
However, for any particular sample size $n \ge 3$ one has $Var(A) < Var(H),$
so the sample mean is the preferable estimator.
In particular, suppose
we are trying to estimate $\mu$ with $n = 10$ observations from a normal population with $\sigma=1.$ Then
$Var(A_{10}) = \sigma^2/n = 0.1.$ By simulation (and other methods) one can find that $Var(H_{10}) \approx 0.138.$
Therefore, if we were to insist on using the median rather than the mean,
we would need more than ten observations to achieve the same precision
of estimation that the mean provides.
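[A rough large-sample check of how many observations that would take: for normal data the sample median is approximately $\mathsf{Norm}\left(\mu,\, \sigma\sqrt{\pi/(2n)}\right),$ so $Var(H_n) \approx \pi\sigma^2/(2n).$ Setting $\pi/(2n) = 0.1$ gives $n = 5\pi \approx 15.7,$ so roughly 16 observations. This is an asymptotic approximation, not an exact small-sample calculation.]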
set.seed(2020)
h = replicate(10^6, median(rnorm(10)))
mean(h); var(h)
[1] 0.000159509 # aprx E(H) = 0
[1] 0.1384345 # aprx Var(H) > 0.1
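As a follow-up sketch (not part of the original simulation; the sample sizes and $10^5$ replications below are chosen only for illustration), one can rerun the simulation at several sample sizes to see roughly where the median's variance falls to the mean's value of $0.1;$ this should happen at around $n = 15$ or $16.$

set.seed(2021)
for (n in c(10, 12, 14, 16, 18)) {
  v = var(replicate(10^5, median(rnorm(n))))   # aprx Var(H_n) at this n
  cat("n =", n, " aprx Var(H_n) =", round(v, 4), "\n")
}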
Here is a histogram of sample medians of a million samples of
size $n=10.$ The solid red curve shows the density function
of the normal distribution of means of samples of size $n=10,$ which is
$\mathsf{Norm}(\mu = 0, \sigma = 1/\sqrt{10}).$
[There is also a Central Limit Theorem for sample medians
that ensures the histogram is very nearly normal, but with a larger variance.]
hist(h, prob=T, br=50, col="skyblue2",
main="n=10: Histogram of Sample Medians")
curve(dnorm(x, 0, 1/sqrt(10)), add=T, col="red", lwd=2)   # density of sample mean: Norm(0, 1/sqrt(10))
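For comparison (an added overlay, not in the original code), one can also draw a normal density with the simulated mean and standard deviation of the medians; it should track the histogram closely while being visibly wider than the red curve.

curve(dnorm(x, mean(h), sd(h)), add=T, col="blue", lwd=2, lty=2)   # fitted normal for the medians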
