Basically, my question is about $\int_a^c f(x)\,dx$ where there is a 'singularity' (*) at $x=b$ with $a<b<c$.
Assume that $\displaystyle\int f\,dx = F$. My question is: when is $\displaystyle\int_a^c f(x)\,dx = F(c)-F(a)$?
For examples like this, the general way to deal with them is to integrate from $a$ to $b-\epsilon$ and from $b+\epsilon$ to $c$, sum the two pieces, and take the limit as $\epsilon \to 0$. And we need to do this, otherwise we may get some absurd results; in a famous example, if we just do $F(c)-F(a)$ we get $\displaystyle\int_{-1}^1 \left(\dfrac1x\right)^2 dx = -2$, which is clearly false since the integrand is positive.
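Just to spell that out: the antiderivative is $F(x)=-\frac1x$, so the naive calculation gives $F(1)-F(-1)=-1-1=-2$, while the $\epsilon$ version shows the integral actually diverges:
$$\lim_{\epsilon \to 0^+}\left(\int_{-1}^{-\epsilon}\frac{dx}{x^2}+\int_{\epsilon}^{1}\frac{dx}{x^2}\right)=\lim_{\epsilon \to 0^+}\left(\frac{2}{\epsilon}-2\right)=+\infty.$$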
So I thought that we would always have to do the limit thing whenever integrating across a singularity, but then I did the following question:
$$\int_0^{\infty} \dfrac{1}{x^3-1}\,dx$$
It is quite easy to find the antiderivative of this using partial fractions (there is a worked page in "Inside Interesting Integrals", really cool book btw).
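In case it helps, the decomposition I get (assuming I haven't slipped up in the algebra) is
$$\frac{1}{x^3-1}=\frac13\left(\frac{1}{x-1}-\frac{x+2}{x^2+x+1}\right),$$
which integrates to
$$F(x)=\frac16\ln\frac{(x-1)^2}{x^2+x+1}-\frac{1}{\sqrt3}\arctan\frac{2x+1}{\sqrt3}+C.$$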
So then, as there is a singularity at $1$, I did the routine and long $\epsilon$ calculations to get the final answer of $-\dfrac{\pi}{\sqrt{27}}$, but then I was quite surprised to find out that this was indeed equal to $\displaystyle\lim_{x \to \infty}F(x)-F(0)$.
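Looking back at those calculations, the reason this works (I think) is that the divergent $\frac13\ln\epsilon$ terms coming from the two sides of $x=1$ cancel each other:
$$\bigl[F(1-\epsilon)-F(0)\bigr]+\Bigl[\lim_{x\to\infty}F(x)-F(1+\epsilon)\Bigr]=\lim_{x\to\infty}F(x)-F(0)+\underbrace{\frac16\ln\frac{3+3\epsilon+\epsilon^2}{3-3\epsilon+\epsilon^2}+\bigl(\text{arctan terms, continuous at }x=1\bigr)}_{\to\,0\ \text{as}\ \epsilon\to0},$$
so the $\epsilon$ answer and the naive $F(\infty)-F(0)$ agree here.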
This is what has inspired this question, because I would really like it if there was some way to know when I have to do these boring calculations to get the correct answer and when I do not. Finally, tl;dr: when do we need to do the limit $\epsilon$ thing when integrating over a singularity?
(*) Singularity meaning: the function is not defined at $x=b$; more specifically, for this question, assume $f$ tends to $\pm \infty$ as $x \to b$.