In Q-learning, am I the one who defines how actions let the agent interact with the environment, so that this interaction can vary greatly depending on the problem at hand?
For example, this article, which explains Q-learning, teaches the Smartcab problem: https://www.learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/. It has only 6 actions (move up, down, right, and left, plus pickup and dropoff). The action of moving up makes the agent add +1 to Y, advancing its state. In this Smartcab example, the states are the X and Y positions representing where the agent is.
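To make my understanding of that example concrete, here is a minimal sketch of how I imagine the movement actions updating the (X, Y) state. The action names, the (dx, dy) convention, and the 5x5 grid size are my own assumptions for illustration, not taken from the tutorial's code:

```python
# Hypothetical mapping from movement actions to state changes.
# "up" adds +1 to Y, as described above; the rest are my assumption.
ACTIONS = {
    "up":    (0, +1),
    "down":  (0, -1),
    "right": (+1, 0),
    "left":  (-1, 0),
}

def step(state, action):
    """Apply a movement action to an (x, y) state on an assumed 5x5 grid."""
    x, y = state
    dx, dy = ACTIONS[action]
    # Clamp to the grid so the agent cannot walk off the environment.
    return (min(max(x + dx, 0), 4), min(max(y + dy, 0), 4))

print(step((2, 2), "up"))  # (2, 3): moving up advances Y by 1
```

So in this case the "rules of interaction" are just deterministic coordinate arithmetic chosen by whoever designed the environment.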
But can the way actions let the agent interact with the environment be much broader depending on the problem? Instead of movement actions (such as walking up, down, right, and left), could there be more complex actions that change the agent's state in a very different way than in this Smartcab example?
In Q-learning, does the way actions make the agent interact with the environment depend entirely on the problem at hand, so that each problem has its own interaction rules, and can I define those rules myself according to my needs?
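To test this idea, I sketched a made-up environment where the actions are not movements at all. The whole scenario (a machine that can "produce" or be "repaired", a wear level as state, and the rewards) is entirely my own invention; the point is that tabular Q-learning only needs states, actions, and rewards, not any particular kind of action:

```python
import random

class MachineEnv:
    """A hypothetical environment with non-movement actions:
    'produce' earns reward but wears the machine out; 'repair' resets wear.
    The state is the wear level 0..4; all rules are my own choice."""

    def reset(self):
        self.wear = 0
        return self.wear

    def step(self, action):
        if action == "repair":
            reward = -1                          # repairing costs something
            self.wear = 0
        else:  # "produce"
            reward = 0 if self.wear == 4 else 2  # a worn-out machine earns nothing
            self.wear = min(self.wear + 1, 4)
        return self.wear, reward

# The standard tabular Q-learning update works unchanged on these rules.
ACTIONS = ["produce", "repair"]
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

random.seed(0)
env = MachineEnv()
state = env.reset()
for _ in range(2000):
    if random.random() < eps:                    # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = env.step(action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

Is this the right way to think about it: the Q-learning algorithm itself is generic, and I am free to define the state space, action space, and transition rules however my problem requires?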