I am having trouble with a step in the proof of the following statement: Let $f$ be integrable on $[a,b]$, and let $a<c<b$. Then $f$ is integrable on both $[a,c]$ and $[c,b]$, and $\int_a^b f = \int_a^c f + \int_c^b f$. The proof is presented in my textbook. In it, we use the Null Partitions Criterion (though I am not sure whether this is a standard result or its usual name, so I have included its statement below).
The criterion states that, given a bounded function $f$ on an interval $[a,b]$ and any sequence $\{P_n\}$ of partitions of $[a,b]$ such that $\|P_n\| \to 0$ as $n \to \infty$:
(a) If $f$ is integrable on $[a,b]$, then, $$\lim_{n \to \infty} L(f,P_n) = \int_a^b f$$ and
$$\lim_{n \to \infty} U(f,P_n) = \int_a^b f$$
(b) If there is a number $I$ such that $\lim_{n \to \infty} L(f,P_n)$ and $\lim_{n \to \infty} U(f,P_n)$ both exist and equal $I$, then $f$ is integrable on $[a,b]$ and $I = \int_a^b f$.
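As a quick sanity check (my own example, not from the book), part (b) can be seen in action for $f(x) = x$ on $[0,1]$ with the uniform partitions $P_n = \{0, \tfrac1n, \tfrac2n, \dots, 1\}$, so $\|P_n\| = \tfrac1n \to 0$:
$$L(f,P_n) = \sum_{i=1}^{n} \frac{i-1}{n}\cdot\frac{1}{n} = \frac{n-1}{2n} \to \frac12, \qquad U(f,P_n) = \sum_{i=1}^{n} \frac{i}{n}\cdot\frac{1}{n} = \frac{n+1}{2n} \to \frac12,$$
so both limits exist and equal $I = \tfrac12$, and the criterion gives $\int_0^1 x\,dx = \tfrac12$, as expected.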
In the proof we start by letting $\{P_n\}$ be a sequence of partitions over the interval such that for all $n\geq 2$, $P_n$ includes the point $c$ as a partition point, and the mesh tends to $0$ as $n \to \infty$.
We define $Q_n$ to consist of those subintervals of $P_n$ that lie in $[a,c]$; thus each $Q_n$ is a partition of $[a,c]$.
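For concreteness (with made-up partition points of my own), if $P_n$ has partition points $\{a, x_1, x_2, c, x_3, b\}$ with $a < x_1 < x_2 < c < x_3 < b$, then $Q_n$ has partition points $\{a, x_1, x_2, c\}$, which is indeed a partition of $[a,c]$; this is why $P_n$ is required to contain $c$ as a partition point, and also why $\|Q_n\| \leq \|P_n\| \to 0$.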
We end up proving the statement: $$U(f,Q_n) - L(f,Q_n) \leq U(f,P_n) - L(f,P_n)$$
The book then states that, since $U(f,P_n) - L(f,P_n) \to 0$ as $n \to \infty$ (which follows from part (a) of the Null Partitions Criterion, because $f$ is integrable on $[a,b]$), the Null Partitions Criterion gives that $f$ is integrable on $[a,c]$.
It is this last application of the Null Partitions Criterion that confuses me. I understand that, by the limit inequality rule, $$\lim_{n \to \infty} \left(U(f,Q_n) - L(f,Q_n)\right) \leq 0.$$
In fact, we can say that $\lim_{n \to \infty} \left(U(f,Q_n) - L(f,Q_n)\right) = 0$, since the difference is also greater than or equal to $0$ for every $n$, by the relationship $L(f,Q_n) \leq U(f,Q_n)$ between lower and upper Riemann sums.
Since we are applying the Null Partitions Criterion, we must show that $\lim_{n \to \infty} L(f,Q_n)$ and $\lim_{n \to \infty} U(f,Q_n)$ both exist and are equal to some common value $I$ in order to apply part (b). However, I do not see the equivalence between this and $$\lim_{n \to \infty} \left(U(f,Q_n) - L(f,Q_n)\right) = 0.$$
Specifically, the combination rules for sequences tell us that if $\lim_{n \to \infty} a_n = m$ and $\lim_{n \to \infty} b_n = l$, then $\lim_{n \to \infty} (a_n + b_n) = m + l$, **but the converse is not necessarily true**. So in the case of the proof, we would have to know a priori that $\lim_{n \to \infty} U(f,Q_n)$ and $\lim_{n \to \infty} L(f,Q_n)$ both converge in order to conclude that $$\lim_{n \to \infty} \left(U(f,Q_n) - L(f,Q_n)\right) = \lim_{n \to \infty} U(f,Q_n) - \lim_{n \to \infty} L(f,Q_n).$$
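To illustrate why I believe the converse can fail in general (this is my own example, not one from the book): take $a_n = b_n = (-1)^n$. Then
$$a_n - b_n = 0 \to 0,$$
so the difference converges, even though neither $\lim_{n \to \infty} a_n$ nor $\lim_{n \to \infty} b_n$ exists.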
Ultimately, my two questions are: (a) Is my bolded statement correct? That is, is it true that the converse of the sequence combination rules does not hold in general? (b) If the converse is not generally true, then why is this a valid step in the proof? How can we assume that the upper and lower Riemann sums over $Q_n$ converge?
I know this seems intuitively obvious, but I don't see rigorously why we can make this assumption. The book also makes a point of mentioning that this theorem is often treated as 'obvious' even though it is not, so I am just trying to make sure I understand the proof completely.