This problem comes from Achim Klenke's *Probability Theory: A Comprehensive Course*, which I am working through on my own. It appears as Exercise 6.2.1 in a chapter on convergence theorems. I have tried several angles but have not been able to crack it.
Let $H \in \mathcal{L}^1(\mu)$ with $H > 0$ $\mu$-almost everywhere, and let $(E,d)$ be a separable metric space. For measurable $f,g : \Omega \to E$, define:
$$ d_H(f,g) := \int_{\Omega} \min\{1 , d(f(\omega),g(\omega)) \} H(\omega) \ \mu(\mathrm{d}\omega)$$
i) Show that $d_H$ is a metric that induces convergence in measure.
ii) Show that $d_H$ is complete if $(E,d)$ is complete.
I have been able to prove part (i) by splitting the integral up in various ways, using the DCT, and so on. But I have not been able to prove (ii). I thought the following corollary, proved earlier in the chapter, could be useful:
Corollary 6.15 Let $(E,d)$ be a separable complete metric space. Let $(f_n)_{n\in \mathbb{N}}$ be a Cauchy sequence in measure in $E$, that is, for any $A \in \mathcal{A}$ with $\mu(A) < \infty$ and any $\epsilon > 0$ we have,
$$\mu(A \ \cap \ \{d(f_m, f_n) > \epsilon \}) \xrightarrow{\ m, n \to \infty } 0 $$
Then $(f_n)_{n\in \mathbb{N}}$ converges in measure.
My idea was to prove that being Cauchy in $d_H$ entails being Cauchy in measure. By the corollary, $(f_n)_{n\in\mathbb{N}}$ would then converge in measure, and so by part (i) it would converge in $d_H$ as well; hence $d_H$ would be complete.
One way I thought I could achieve this is the following. Suppose we have a Cauchy sequence in $d_H$; that is, for arbitrary $\epsilon > 0$ there is some $N$ such that
$$d_H(f_m, f_n) < \epsilon, \ \forall \ m, n \geq N $$
Taking $\delta > 0$ arbitrary we seek to show that (assuming WLOG that $\mu(\Omega) < \infty$),
$$\mu(\{d(f_m, f_n) > \delta \}) < \epsilon $$
Or at least, some one-to-one function of $\epsilon$ (so that we can pick the appropriate $\epsilon$ when invoking the assumption that the sequence is Cauchy in $d_H$). Taking for now $\delta \geq 1$, so that $\{d(f_m,f_n) > \delta\} \subseteq \{d(f_m,f_n) \geq 1\}$, we can see that in fact,
$$d_H(f_m, f_n) = \int_{\{d(f_m, f_n) \geq 1 \}} \min \{1, d(f_m,f_n) \}H \ \mathrm{d}\mu + \int_{\{d(f_m, f_n) < 1 \}} \min \{1, d(f_m,f_n) \}H \ \mathrm{d}\mu < \epsilon \\ \\ \Rightarrow \ \ \int_{\{d(f_m, f_n) \geq 1 \}} H \ \mathrm{d}\mu + \int_{\{d(f_m, f_n) < 1 \}} d(f_m,f_n) \, H \ \mathrm{d}\mu < \epsilon \\ \\ \Rightarrow \ \ \int_{\{d(f_m, f_n) \geq 1 \}} H \ \mathrm{d}\mu < \epsilon$$
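(For general $\delta \in (0,1)$ the same idea seems to give a Markov-type bound, since $\delta \, \mathbf{1}_{\{d(f_m,f_n) > \delta\}} \leq \min\{1, d(f_m,f_n)\}$ pointwise when $\delta < 1$:

$$\delta \int_{\{d(f_m,f_n) > \delta \}} H \ \mathrm{d}\mu \ \leq \ \int_{\Omega} \min\{1, d(f_m,f_n)\} H \ \mathrm{d}\mu \ = \ d_H(f_m,f_n) < \epsilon,$$

so the set $\{d(f_m,f_n) > \delta\}$ carries $H$-integral at most $\epsilon/\delta$, and the obstacle below is the same in both cases.)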
At this point I wanted to use what I know about $H$ to turn this inequality into one involving $\mu(\{d(f_m, f_n) \geq 1 \})$, exploiting the fact that $H \in \mathcal{L}^1$, but I cannot think of how to do it. Intuitively there should be some manipulation that separates the measure of the set from the function being integrated over it. For instance, if $H = c$ were a constant, we would simply divide by that constant to get an estimate for the measure, allowing us to conclude Cauchyness in measure. But I cannot see how to proceed here. Am I on the right track?
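To spell out the constant case I have in mind: if $H \equiv c > 0$, the last inequality above reads

$$c \cdot \mu(\{d(f_m, f_n) \geq 1 \}) \ = \ \int_{\{d(f_m, f_n) \geq 1 \}} H \ \mathrm{d}\mu \ < \ \epsilon, \qquad \text{so} \quad \mu(\{d(f_m, f_n) \geq 1 \}) < \frac{\epsilon}{c},$$

which is exactly the kind of estimate I want. The trouble is that a general $H$, despite being positive almost everywhere, need not be bounded below by any positive constant, so there is nothing obvious to divide by.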