I'm watching a number theory lecture, and the lecturer proves the proposition that for any $a, b \in \mathbb{Z}$ with $b > 0$, there exist unique $q, r \in \mathbb{Z}$ with $0 \leq r < b$ such that $a = bq + r$. He begins the proof by letting $q$ be the largest integer less than or equal to $\frac{a}{b}$, and proves existence from there.
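To be concrete, my reconstruction of the existence step he uses is:

$$q = \left\lfloor \frac{a}{b} \right\rfloor \implies q \leq \frac{a}{b} < q + 1 \implies bq \leq a < bq + b \implies 0 \leq a - bq < b,$$

so setting $r = a - bq$ gives $a = bq + r$ with $0 \leq r < b$.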
I understand all of the steps of the proof, but is this strategy valid? Division isn't really defined on the integers, except in the sense that this very proposition establishes, so the approach seems somewhat circular. I've always begun this proof by considering the set of non-negative integers of the form $a - bx$ (for $x \in \mathbb{Z}$) and taking its least element, which turns out to be the remainder. I searched to see whether the floor-based approach is standard but couldn't find it discussed.
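For comparison, here is a sketch of the alternate approach I mean. Let

$$S = \{\, a - bx : x \in \mathbb{Z},\ a - bx \geq 0 \,\}.$$

$S$ is non-empty (take $x = -|a|$: since $b \geq 1$, we get $a + b|a| \geq a + |a| \geq 0$), so by the well-ordering principle it has a least element $r = a - bq$ for some $q \in \mathbb{Z}$. If $r \geq b$, then $r - b = a - b(q+1)$ would be a smaller non-negative element of $S$, a contradiction; hence $0 \leq r < b$. This uses only subtraction and well-ordering, never division.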
EDIT: Actually, I think it's wrong for me to call it circular. One can prove the existence of the floor function on the integers, and consequently (I believe) on the rationals, without invoking the division algorithm, and that floor is exactly what $q$ is here. It wasn't proved earlier in the lecture, but I can understand it being taken for granted.