Let $(X,\mathcal{A},\mu)$ be a measure space, and let $A \in \mathcal{A}$ be such that $\mu(A) = 0$. Define $h\colon X \to [-\infty,\infty]$ by $h(x) = +\infty$ if $x \in A$ and $h(x) = 0$ otherwise. It is easy to see that $\int h \, d\mu = 0$: for example, take $f_n = n \chi_A$, which form an increasing sequence of non-negative, simple, measurable functions with $f_n \to h$ pointwise, so $\int h \, d\mu = \lim_n \int f_n \, d\mu = \lim_n n\,\mu(A) = 0$.

But our function $h$ is not so different from the Dirac delta, is it? If we take $(X,\mathcal{A},\mu) = (\mathbf{R},\mathcal{B},\lambda)$ (the Borel sets equipped with Lebesgue measure) and define $h$ using $A = \{0\}$, then we get the Dirac delta. Yet, according to Wikipedia, the integral of the Dirac delta over $\mathbf{R}$ is $1$, contrary to our result with $h$.
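To state the apparent conflict explicitly (writing $\delta$ for the Dirac delta), the computation above gives
$$\int_{\mathbf{R}} h \, d\lambda = \lim_{n\to\infty} n\,\lambda(\{0\}) = 0, \qquad \text{whereas} \qquad \int_{\mathbf{R}} \delta(x)\, dx = 1$$
is what the Dirac delta is supposed to satisfy.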
If someone could explain what's going on and what I'm misunderstanding, I would be grateful. Thanks.