Actually, your source only gives a deterministic-environment example (ToZerind), which does not necessarily mean it considers only deterministic environments for the general action-cost function. That concept is rooted in control theory, analogous to the (expected) reward in RL and the loss function in statistics, as the author writes elsewhere in your reference:
> It prevails not only in AI, but also in control theory, where a controller minimizes a cost function; in operations research, where a policy maximizes a sum of rewards; in statistics, where a decision rule minimizes a loss function
In Sutton's RL book the reward (the counterpart of cost) $R_{t+1}$ is a random variable, and they never write or need a deterministic $\mathcal{R}(s_t, a_t, s_{t+1})$; instead, its expected value $r(s_t, a_t, s_{t+1})$ is defined on page 49, because the environment may be stochastic, unlike a typical deterministic grid-world game:
> We can also compute the expected rewards for state–action pairs as a two-argument function $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$:
> $$r(s,a) = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} r \, p(s',r|s,a)$$
> and the expected rewards for state–action–next-state triples as a three-argument function $r : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$,
> $$r(s,a,s') = \sum_{r \in \mathcal{R}} r \, \frac{p(s',r|s,a)}{p(s'|s,a)}$$
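To see how the two expected-reward functions relate to the joint dynamics, here is a minimal Python sketch (not from either book) that computes them from a tabular $p(s',r|s,a)$; the states, actions, rewards, and probabilities are made-up toy values.

```python
# Minimal sketch: computing r(s, a) and r(s, a, s') from tabular dynamics
# p(s', r | s, a).  All states, actions, rewards, and probabilities below
# are made-up toy values, not taken from either book.

# p[(s, a)][(s_next, r)] = probability of landing in s_next with reward r
p = {
    ("s0", "a0"): {("s0", 1.0): 0.3, ("s1", 0.0): 0.7},
    ("s0", "a1"): {("s1", 5.0): 1.0},
}

def expected_reward_sa(p, s, a):
    """r(s, a) = sum_r sum_{s'} r * p(s', r | s, a)."""
    return sum(r * prob for (_, r), prob in p[(s, a)].items())

def expected_reward_sas(p, s, a, s_next):
    """r(s, a, s') = sum_r r * p(s', r | s, a) / p(s' | s, a)."""
    branches = [(r, prob) for (sn, r), prob in p[(s, a)].items() if sn == s_next]
    return sum(r * prob for r, prob in branches) / sum(prob for _, prob in branches)

print(expected_reward_sa(p, "s0", "a0"))         # 0.3*1.0 + 0.7*0.0 = 0.3
print(expected_reward_sas(p, "s0", "a0", "s0"))  # 1.0: the only reward seen when entering s0
```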
In fact, the only place in Sutton's book where the 3-argument expected reward is explicitly used is a nondeterministic finite-state diagram for the recycling-robot example on page 52:

You can see that in such a diagram the expected reward function needs an explicit dependence on the entering (next) state, so that its value can be written conveniently on each of the possibly branching state transitions of the stochastic environment characterized by the Markovian dynamics $p(s',r|s,a)$. Therefore the 3-argument cost (reward) function, in the sense of an expectation, may sometimes be needed even under the usual MDP assumption, in the most general conceivable cases.
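To make this concrete, below is a small sketch of the recycling robot's dynamics as I read them off that transition graph; the numeric values of $\alpha$, $\beta$, $r_{\text{search}}$, and $r_{\text{wait}}$ are hypothetical, since the book leaves them as parameters. It shows that $r(\texttt{low}, \texttt{search}, \cdot)$ genuinely depends on the entering state, while the two-argument $r(\texttt{low}, \texttt{search})$ averages that distinction away.

```python
# Sketch of the recycling-robot dynamics p(s', r | s, a) as read from the
# transition graph; alpha, beta, r_search, r_wait are hypothetical values
# (the book keeps them as parameters).
alpha, beta = 0.8, 0.6          # prob. of keeping a high / low battery while searching
r_search, r_wait = 2.0, 1.0     # expected rewards for searching / waiting

# p[(s, a)][(s_next, r)] = probability, same tabular form as the sketch above
p = {
    ("high", "search"):  {("high", r_search): alpha, ("low", r_search): 1 - alpha},
    ("high", "wait"):    {("high", r_wait): 1.0},
    ("low", "search"):   {("low", r_search): beta, ("high", -3.0): 1 - beta},  # -3: depleted, rescued
    ("low", "wait"):     {("low", r_wait): 1.0},
    ("low", "recharge"): {("high", 0.0): 1.0},
}

def r_sa(s, a):
    """Two-argument r(s, a): averages over every (s', r) branch."""
    return sum(r * prob for (_, r), prob in p[(s, a)].items())

def r_sas(s, a, s_next):
    """Three-argument r(s, a, s'): averages only over branches entering s_next."""
    branches = [(r, prob) for (sn, r), prob in p[(s, a)].items() if sn == s_next]
    return sum(r * prob for r, prob in branches) / sum(prob for _, prob in branches)

print(r_sa("low", "search"))           # 0.6*2.0 + 0.4*(-3.0) = 0.0 -- branches blended together
print(r_sas("low", "search", "low"))   # 2.0  (kept searching on a low battery)
print(r_sas("low", "search", "high"))  # -3.0 (ran out of power and had to be rescued)
```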
There was a very similar question recently; you can refer to my related answer there.