Let $\{p_j\}_{j = 1}^{\infty}$ be a sequence of nonnegative real numbers with $\sum_{j=1}^{\infty} p_j = 1$, let $\mathcal{B}$ be a $\sigma$-algebra of subsets of the countable sample space $S = \{s_1, s_2, \dots\}$, and let $\{A_i\}_{i=1}^{\infty}$ be a sequence of pairwise disjoint sets in $\mathcal{B}$.
We define $$\mathbf{1}_{A_i}(s_j) = \begin{cases} 1, & s_j \in A_i \\ 0, & s_j \notin A_i\text{.} \end{cases}$$ What justifies interchanging the order of summation, i.e. the equality $$\sum\limits_{j=1}^{\infty}\sum\limits_{i=1}^{\infty}p_j\cdot \mathbf{1}_{A_i}(s_j) = \sum\limits_{i=1}^{\infty}\sum\limits_{j=1}^{\infty}p_j\cdot \mathbf{1}_{A_i}(s_j)\text{?}$$ This answer touches on this briefly, but the link it gives now returns a 404 error.
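My guess (an assumption on my part, not something taken from the linked answer) is that the justification is the standard interchange theorem for nonnegative double series, i.e. Tonelli's theorem for counting measure. Writing $a_{ij} = p_j \cdot \mathbf{1}_{A_i}(s_j) \ge 0$, both iterated sums should equal the supremum of the finite rectangular partial sums, $$\sum\limits_{j=1}^{\infty}\sum\limits_{i=1}^{\infty} a_{ij} = \sup_{M, N \ge 1} \sum\limits_{j=1}^{N}\sum\limits_{i=1}^{M} a_{ij} = \sum\limits_{i=1}^{\infty}\sum\limits_{j=1}^{\infty} a_{ij}\text{,}$$ since any finite double sum may be evaluated in either order, and nonnegativity makes the partial sums monotone in both $M$ and $N$. Is this the right argument, and is nonnegativity the only hypothesis needed here?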