I am reading this passage from Billingsley's *Convergence of Probability Measures*.
Theorem 1.2. Probability measures $P$ and $Q$ on $\mathcal{S}$ coincide if $P f=Q f$ for all bounded, uniformly continuous real functions $f$.
Proof. For the bounded, uniformly continuous $f$ of (1.1), $P F \leq$ $P f=Q f \leq Q F^\epsilon$. Letting $\epsilon \downarrow 0$ gives $P F \leq Q F$, provided $F$ is closed. By symmetry and Theorem $1.1, P=Q$.
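For context, if I am reading (1.1) correctly, the function $f$ referred to in the proof is the Lipschitz approximation to the indicator of the closed set $F$:

```latex
% The function f of Billingsley's (1.1), for a closed set F and \epsilon > 0,
% where \rho(x, F) = \inf_{y \in F} \rho(x, y) is the distance to F:
f(x) = \left(1 - \frac{\rho(x, F)}{\epsilon}\right)^{+}
% f is bounded and uniformly continuous (it is \epsilon^{-1}-Lipschitz), and
% it is sandwiched between the indicators of F and of the \epsilon-neighborhood
% F^\epsilon = \{ x : \rho(x, F) < \epsilon \}:
I_F \leq f \leq I_{F^\epsilon},
% so integrating against P and Q gives the chain used in the proof:
PF \leq Pf = Qf \leq QF^\epsilon .
```

So the hypothesis $Pf = Qf$ is applied to exactly this one family of Lipschitz functions, and letting $\epsilon \downarrow 0$ collapses $QF^\epsilon$ to $QF$ for closed $F$.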
Because of theorems like this, it is possible to work with measures $P A$ or with integrals $P f$, whichever is simpler or more natural. We defined weak convergence in terms of the convergence of integrals of functions, and in the next section we characterize it in terms of the convergence of measures of sets.
Are the "bounded, uniformly continuous" functions analogous to the "test functions" $C_c^\infty(X)$ seen in the theory of distributions? Is this describing that we can think of probability measures as "distributions" or "measures" via the Riesz representation theorem? Why do they use bounded, uniformly continuous real functions instead of the infinitely differentiable functions with compact support used for Schwartz distributions?