I've been reading a proof that the reduced row-echelon form of a given matrix is unique, but there was one part that made me wonder.
This step of the proof shows that if $B$ and $C$ are row-equivalent and in reduced row-echelon form, where $r$ is the number of non-zero rows in $B$ and $r'$ is the same for $C$, then $r = r'$. Note that $d_k$ is the column of the $k^{th}$ pivot in $B$, and $d'_k$ is the same in $C$. It was previously proved that $d_k = d'_k$, which makes sense to me; this step takes that as given.
Without further ado, the proof (paraphrased from the University of Puget Sound's free textbook on Linear Algebra). If my annotations below are too convoluted, refer to page 34 here.
Suppose $r' < r$. For $1 \leq \ell \leq r'$, we have $[B]_{r d_\ell} = 0$, since $[B]_{k d_\ell} = 0$ whenever $k \neq \ell$ (a pivot column has a $1$ in its pivot row and $0$s elsewhere) and $r > r' \geq \ell$. Because each row of $B$ (including row $r$) is a linear combination of the rows of $C$, we have $0 = [B]_{r d_\ell} = \sum_{k=1}^{m} \delta_{rk} [C]_{k d_\ell}$, where $\delta_{ik}$ is the coefficient by which row $k$ of $C$ is multiplied in its contribution to row $i$ of $B$.
We can decompose this sum as $\sum_{k=1}^{r'} \delta_{rk} [C]_{k d_\ell} + \sum_{k=r' + 1}^{m} \delta_{rk} [C]_{k d_\ell}$, and since $[C]_{k d_\ell} = 0$ for $k > r' \geq \ell$ (those rows of $C$ are zero rows), we can drop the second sum, leaving $\sum_{k=1}^{r'} \delta_{rk} [C]_{k d_\ell}$.
Since we know $d_k = d'_k$, this becomes $\sum_{k=1}^{r'} \delta_{rk} [C]_{k d'_\ell}$. Pulling out the $k = \ell$ term, we have $\delta_{r\ell} [C]_{\ell d'_\ell} + \sum_{k=1,\, k \neq \ell}^{r'} \delta_{rk} [C]_{k d'_\ell}$. Because, for $1 \leq k \leq r'$, $[C]_{k d'_\ell} = 1$ if $k = \ell$ and $0$ otherwise, the previous expression reduces to $\delta_{r\ell}(1) + \sum_{k=1,\, k \neq \ell}^{r'} \delta_{rk}(0) = \delta_{r\ell}.$ Thus, $\delta_{r\ell} = 0$...
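To convince myself of the collapsing step, I checked it numerically on a small example of my own (the matrix and coefficients below are mine, not from the textbook): in a reduced row-echelon form matrix, each pivot column is a standard basis vector, so a sum $\sum_k \delta_{rk} [C]_{k d'_\ell}$ picks out exactly the single term $\delta_{r\ell}$.

```python
# A concrete matrix in reduced row-echelon form (my own example, not from
# the textbook): r' = 2 non-zero rows out of m = 3 rows, with pivots in
# columns d'_1 = 0 and d'_2 = 2 (0-indexed).
C = [
    [1.0, 3.0, 0.0, 5.0],
    [0.0, 0.0, 1.0, 2.0],
    [0.0, 0.0, 0.0, 0.0],  # zero row, k > r'
]
pivots = [0, 2]  # columns d'_1, d'_2

# Each pivot column is a standard basis vector: [C]_{k d'_l} = 1 iff k = l.
for l, col in enumerate(pivots):
    assert all(C[k][col] == (1.0 if k == l else 0.0) for k in range(len(C)))

# Hence, for ANY coefficients delta_{rk}, summing over the rows collapses
# to the single term delta_{r l}.
delta_r = [0.7, -1.2, 4.5]  # arbitrary delta_{r1}, delta_{r2}, delta_{r3}
for l, col in enumerate(pivots):
    total = sum(delta_r[k] * C[k][col] for k in range(len(C)))
    assert total == delta_r[l]
```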
This proof makes sense to me (especially after drawing a diagram), but I wonder why we split off the sum from $r' + 1$ to $m$ in the second paragraph. It's perfectly fine, but wouldn't the proof be shorter like this?
Suppose $r' < r$. For $1 \leq \ell \leq r'$, we have $[B]_{r d_\ell} = 0$, since $[B]_{k d_\ell} = 0$ whenever $k \neq \ell$ and $r > r' \geq \ell$. Because each row of $B$ (including row $r$) is a linear combination of the rows of $C$, we have $0 = [B]_{r d_\ell} = \sum_{k=1}^{m} \delta_{rk} [C]_{k d_\ell}$, where $\delta_{ik}$ is the coefficient by which row $k$ of $C$ is multiplied in its contribution to row $i$ of $B$.
Since we know $d_k = d'_k$, this becomes $\sum_{k=1}^{m} \delta_{rk} [C]_{k d'_\ell}$. Pulling out the $k = \ell$ term, we have $\delta_{r\ell} [C]_{\ell d'_\ell} + \sum_{k=1,\, k \neq \ell}^{m} \delta_{rk} [C]_{k d'_\ell}$. Because $[C]_{k d'_\ell} = 1$ if $k = \ell$ and $0$ otherwise, the previous expression reduces to $\delta_{r\ell}(1) + \sum_{k=1,\, k \neq \ell}^{m} \delta_{rk}(0) = \delta_{r\ell}.$ Thus, $\delta_{r\ell} = 0$...
Note that the second paragraph is now gone, and in the third paragraph the upper limit $r'$ of the sums has been replaced with $m$. My question is ultimately: is my shorter proof correct? If so, why wouldn't the proof be presented this way in the first place?