I know the following theorem (see exercise 1.3.3 from Achim Klenke: »Probability Theory — A Comprehensive Course«):
Let $(\mu_n)_{n\in\mathbb{N}}$ be a sequence of finite measures on the measurable space $(\Omega,\mathcal{A})$. Assume that for every $A \in \mathcal{A}$ the limit $\mu(A) := \lim_{n\rightarrow\infty} \mu_n (A)$ exists.
Then $\mu$ is a measure on $(\Omega,\mathcal{A})$.
For every $n\in\mathbb{N}$ the function $F_n(x) = \frac{n x}{n x+1}$ is continuous and monotonically increasing on $[0,\infty)$. So on the measurable space $\bigl((0,\infty),\mathcal{B}\bigl((0,\infty)\bigr)\bigr)$ there are measures $\mu_n$ with $$\mu_n\bigl((a, b]\bigr) = F_n(b) - F_n(a)\, .$$ These measures are finite, because $F_n(x) < 1$ for all $x \ge 0$. It seems that the limit $$\mu\bigl((a,b]\bigr) := \lim_{n\rightarrow\infty} \mu_n\bigl((a,b]\bigr)$$ exists for all $0 \le a < b$.
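To spell this out (as far as I can tell): $F_n(x) \rightarrow 1$ as $n\rightarrow\infty$ for every fixed $x > 0$, while $F_n(0) = 0$, so the limit should be $$\mu\bigl((a,b]\bigr) = \lim_{n\rightarrow\infty}\bigl(F_n(b) - F_n(a)\bigr) = \begin{cases} 0 & \text{if } 0 < a < b,\\ 1 & \text{if } a = 0 < b.\end{cases}$$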
Let $A_m:=\bigl(0,\frac{1}{m}\bigr]$, so $A_m\downarrow\emptyset$. Then $$\lim_{m\rightarrow\infty} \mu(A_m) = \lim_{m\rightarrow \infty} \lim_{n\rightarrow\infty} \mu_n(A_m) =\lim_{m\rightarrow \infty} \lim_{n\rightarrow\infty} \frac{n}{n+m} = 1\, .$$ But if $\mu$ were a measure, it would (being finite) be continuous from above, so $\mu(A_m)$ would have to converge to $\mu(\emptyset) = 0$.
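Here, for completeness, is the computation behind the inner limit (unless I miscalculated): $$\mu_n(A_m) = F_n\bigl(\tfrac{1}{m}\bigr) - F_n(0) = \frac{n/m}{n/m+1} = \frac{n}{n+m}\, .$$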
Can you help me to see what exactly goes wrong here? Why can't the theorem be applied? Thank you!