(This answers the general question for $m$ particles in $n$ boxes.)
Bose-Einstein statistics (BES) can be generated by allocating particles one at a time to $n$ boxes numbered $1,...,n,$ repeatedly applying the following rule, starting with no particles yet allocated:
- If $r$ particles have been allocated, producing particle counts $(r_k)_{k=1,..,n}$ in the $n$ boxes, then allocate the next particle to box-number $X_r$, where $X_r$ is chosen from $\{1,...,n\}$ according to the probability distribution specified by $$P[X_r=k]={r_k+1\over r+n}\,\pmb 1_{k\in\{1,...,n\}}.\tag{1}$$
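As a sanity check, the allocation rule can be simulated in a few lines of Python (the function name is mine, not from the source):

```python
import random

def allocate(m, n, rng=random):
    """Allocate m particles to n boxes one at a time, following rule (1):
    box k receives the next particle with probability (r_k + 1)/(r + n)."""
    counts = [0] * n
    for r in range(m):
        # the weights r_k + 1 sum to r + n, so choices() samples exactly (1)
        k = rng.choices(range(n), weights=[c + 1 for c in counts])[0]
        counts[k] += 1
    return tuple(counts)
```

Note that `random.choices` normalizes its weights, so supplying the unnormalized weights $r_k+1$ (which total $r+n$) implements (1) directly.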
Sketch of proof:
Let $R_r=(r_1,...,r_n)$ be the particle-counts when $r$ particles have been allocated according to the above procedure. An allocation $R_r$ obeys BES if $$P[R_r=(r_1,...,r_n)]={r!\,(n-1)!\over (n+r-1)!}$$
for all $(r_1,...,r_n)\in \{0,1,...,r\}^n$ such that $\sum_{k=1}^nr_k=r$ (i.e., if the distribution is uniform on the set of all $\binom{n+r-1}{r}$ such $n$-tuples).
The proof is by induction on $r$: $R_0$ trivially obeys BES, and if $R_{r-1}$ obeys BES then so does $R_{r}.$ Indeed, assuming the induction hypothesis (and noting that summands below with $r_k=0$ vanish, since $R_{r-1}$ cannot have a count of $-1$), we have
$$\begin{align}&P[R_r=(r_1,...,r_n)]\\[1ex]
&=\sum_{k=1}^nP[R_r=(r_1,...,r_n)\mid R_{r-1}=(r_1,..,r_k-1,..,r_n)]\,P[R_{r-1}=(r_1,..,r_k-1,..,r_n)]\\[1ex]
&=\sum_{k=1}^nP[X_{r-1}=k]\,P[R_{r-1}=(r_1,..,r_k-1,..,r_n)]\\[1ex]
&=\sum_{k=1}^n{(r_k-1)+1\over(r-1)+n}\,{(r-1)!(n-1)!\over(n+(r-1)-1)!}\\[1ex]
&=\sum_{k=1}^nr_k{(r-1)!(n-1)!\over(n+r-1)!}\\[1ex]
&={r!(n-1)!\over(n+r-1)!}
\end{align}$$
QED
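The induction can also be checked numerically: the following sketch (function name mine) propagates the exact distribution of $R_r$ forward through rule (1) with rational arithmetic, and confirms it is uniform on the $\binom{n+m-1}{m}$ compositions.

```python
from fractions import Fraction
from math import comb

def exact_distribution(m, n):
    """Exact distribution of R_m under rule (1), by forward recursion."""
    dist = {(0,) * n: Fraction(1)}
    for r in range(m):
        nxt = {}
        for counts, p in dist.items():
            for k in range(n):
                # move to the configuration with one more particle in box k
                new = counts[:k] + (counts[k] + 1,) + counts[k + 1:]
                q = p * Fraction(counts[k] + 1, r + n)  # rule (1)
                nxt[new] = nxt.get(new, Fraction(0)) + q
        dist = nxt
    return dist

m, n = 4, 3
dist = exact_distribution(m, n)
assert len(dist) == comb(n + m - 1, m)  # every composition is reached
assert all(p == Fraction(1, comb(n + m - 1, m)) for p in dist.values())
```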
I adapted the above from a derivation in the following article:
Ijiri Y, Simon H A. Some distributions associated with Bose-Einstein statistics. Proc Natl Acad Sci U S A. 1975;72(5):1654-1657.
Aside: Letting $\ell_r$ be the location (box-number) of the $r$th particle allocated, a calculation similar to the one above shows that $P[\ell_r=k]={1\over n}$ for all $k\in\{1,...,n\};$ i.e., $\ell_1,...,\ell_m$ are identically (though not independently) distributed $\text{Uniform}\{1,...,n\}:$
$$\begin{align}&P[\ell_r=k]\\[1ex]
&=\sum_{\substack{(r_1,...,r_n)\in\{0,...,r\}^n:\\ r_1+...+r_n=r}}P[R_r=(r_1,...,r_n) \ \text{ and }\ R_{r-1}=(r_1,..,r_k-1,..,r_n)]\\[1ex]
&=\sum_{\substack{(r_1,...,r_n)\in\{0,...,r\}^n:\\ r_1+...+r_n=r}}P[R_r=(r_1,...,r_n)\mid R_{r-1}=(r_1,..,r_k-1,..,r_n)] \cdot P[R_{r-1}=(r_1,..,r_k-1,..,r_n)]\\[1ex]
&=\sum_{\substack{(r_1,...,r_n)\in\{0,...,r\}^n:\\ r_1+...+r_n=r}}{r_k\over n+r-1}\cdot {(r-1)!(n-1)!\over(n+r-2)!}\\[2ex]
&\overset{(*)}{=}{(r-1)!(n-1)!\over(n+r-1)!}\sum_{\substack{(r_1,...,r_n)\in\{0,...,r\}^n:\\ r_1+...+r_n=r}}r_k\\[1ex]
&={(r-1)!(n-1)!\over(n+r-1)!}\binom{n+r-1}{n}\\[1ex]
&={1\over n}
\end{align}$$
QED
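This uniformity, too, is easy to verify computationally. The sketch below (function name mine) evolves the exact configuration distribution through the first $r-1$ allocations and then applies rule (1) once more to get the marginal law of $\ell_r$:

```python
from fractions import Fraction

def location_distribution(r, n):
    """Exact P[l_r = k]: the law of the box receiving the r-th particle."""
    dist = {(0,) * n: Fraction(1)}
    # evolve the configuration through the first r-1 allocations
    for s in range(r - 1):
        nxt = {}
        for counts, p in dist.items():
            for k in range(n):
                new = counts[:k] + (counts[k] + 1,) + counts[k + 1:]
                nxt[new] = nxt.get(new, Fraction(0)) + p * Fraction(counts[k] + 1, s + n)
        dist = nxt
    # the r-th particle lands in box k with probability (r_k + 1)/((r-1) + n)
    loc = [Fraction(0)] * n
    for counts, p in dist.items():
        for k in range(n):
            loc[k] += p * Fraction(counts[k] + 1, (r - 1) + n)
    return loc

assert location_distribution(5, 3) == [Fraction(1, 3)] * 3
```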
(I found the sum $\sum_{...}r_k$ in (*) by imagining a matrix whose rows are all the $n$-tuples being summed over. By symmetry, every column has the same sum, which is the very sum $\sum_{...}r_k$ required. Thus the grand total of the column sums is $n\cdot\sum_{...}r_k,$ and this must equal the grand total of the row sums, which is $r\cdot\binom{n+r-1}{r}.$ Hence, the required sum is $\sum_{...}r_k={r\over n}\binom{n+r-1}{r}=\binom{n+r-1}{n}.$)
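For small $n$ and $r$, the counting argument behind (*) can be confirmed by brute-force enumeration of the compositions (helper name is mine):

```python
from itertools import product
from math import comb

def column_sum(r, n, k):
    """Sum of r_k over all n-tuples of nonnegative integers summing to r."""
    return sum(t[k] for t in product(range(r + 1), repeat=n) if sum(t) == r)

r, n = 4, 3
# by the symmetry argument: column sum = (r/n) * C(n+r-1, r) = C(n+r-1, n)
assert column_sum(r, n, 0) == comb(n + r - 1, n)
assert all(column_sum(r, n, k) == column_sum(r, n, 0) for k in range(n))
```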