Markov Decision Processes (MDPs) - Structuring a Reinforcement Learning Problem
What’s up, guys? In this post, we’re going to discuss Markov decision processes, or MDPs. This topic will lay the bedrock for our understanding of reinforcement learning, so let’s get to it!
Components of an MDP
Markov decision processes give us a way to formalize sequential decision making. This formalization is the basis for structuring problems that are solved with reinforcement learning.
To kick things off, let's discuss the components involved in an MDP.
In an MDP, we have a decision maker, called an agent, that interacts with the environment it's placed in. These interactions occur sequentially over time. At each time step, the agent will get some representation of the environment’s state. Given this representation, the agent selects an action to take. The environment is then transitioned into a new state, and the agent is given a reward as a consequence of the previous action.
Components of an MDP:
- Agent
- Environment
- State
- Action
- Reward
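To make these pieces concrete, here's a minimal Python sketch of a toy environment. The class name, the two states, and the coin-flip dynamics are all made up purely for illustration; the point is just to show where each component (state, action, reward) lives in code.

```python
import random

class ToyEnvironment:
    """A made-up two-state environment used only to illustrate MDP components."""

    def __init__(self):
        self.states = ["low", "high"]      # the set of states
        self.actions = ["wait", "work"]    # the set of actions
        self.state = "low"                 # the environment's current state

    def step(self, action):
        """Take the agent's action, transition to a new state, and hand back a reward."""
        if action == "work":
            next_state = "high" if random.random() < 0.8 else "low"
        else:
            next_state = "low"
        reward = 1.0 if next_state == "high" else 0.0   # the reward for this transition
        self.state = next_state
        return next_state, reward
```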
This process of selecting an action from a given state, transitioning to a new state, and receiving a reward happens sequentially over and over again, which creates something called a trajectory that shows the sequence of states, actions, and rewards.
Throughout this process, it is the agent’s goal to maximize the total amount of rewards that it receives from taking actions in given states. This means that the agent wants to maximize not just the immediate reward, but the cumulative rewards it receives over time.
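Building on the toy environment sketched above, the interaction loop itself might look something like this. The random action choice and the five-step episode are just placeholders; what matters is the repeating cycle of state, action, and reward, the trajectory it produces, and the running total of reward that the agent ultimately cares about.

```python
import random

env = ToyEnvironment()           # the toy environment from the sketch above
state = env.state
trajectory = []                  # will hold (state, action, reward) tuples
total_reward = 0.0

for t in range(5):                                # interact for a handful of time steps
    action = random.choice(env.actions)           # stand-in for a real decision rule
    next_state, reward = env.step(action)         # environment transitions and rewards
    trajectory.append((state, action, reward))    # record this step of the trajectory
    total_reward += reward                        # the agent cares about the cumulative sum
    state = next_state

print(trajectory)
print("total reward:", total_reward)
```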
Alright, let’s get a bit mathy and represent an MDP with mathematical notation. This will make things easier for us going forward.
MDP notation
We’re now going to repeat what we just casually discussed but in a more formal and mathematically notated way.
In an MDP, we have a set of states \(\boldsymbol{S}\), a set of actions \(\boldsymbol{A}\), and a set of rewards \(\boldsymbol{R}\). We'll assume that each of these sets has a finite number of elements.
At each time step \(t = 0,1,2,\cdots\), the agent receives some representation of the environment’s state \(S_t \in \boldsymbol{S}\). Based on this state, the agent selects an action \(A_t \in \boldsymbol{A}\). This gives us the state-action pair \((S_t,A_t)\).
Time is then incremented to the next time step \(t+1\), and the environment is transitioned to a new state \(S_{t+1} \in \boldsymbol{S}\). At this time, the agent receives a numerical reward \(R_{t+1} \in \boldsymbol{R}\) for the action \(A_t\) taken from state \(S_t\).
We can think of the process of receiving a reward as an arbitrary function \(f\) that maps state-action pairs to rewards. At each time \(t\), we have $$f(S_{t}, A_{t}) = R_{t+1}\text{.}$$
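If it helps to see this in code, you can picture \(f\) as nothing more than a lookup from state-action pairs to rewards. The states, actions, and numbers below are invented for illustration only.

```python
# A hypothetical reward function f: (state, action) -> reward.
f = {
    ("low", "work"): 1.0,
    ("low", "wait"): 0.0,
    ("high", "work"): 2.0,
    ("high", "wait"): 0.5,
}

# f(S_t, A_t) = R_{t+1}: the reward received at the next time step.
reward = f[("low", "work")]
```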
The trajectory representing the sequential process of selecting an action from a state, transitioning to a new state, and receiving a reward can be represented as $$S_0,A_0,R_1,S_1,A_1,R_2,S_2,A_2,R_3,\cdots$$
This diagram nicely illustrates this entire idea.
Let's break down this diagram into steps.
- At time \(t\), the environment is in state \(S_t\).
- The agent observes the current state and selects action \(A_t\).
- The environment transitions to state \(S_{t+1}\) and grants the agent reward \(R_{t+1}\).

This process then starts over for the next time step, \(t+1\).
Note that \(t+1\) is no longer in the future, but is now the present. When we cross the dotted line on the bottom left, the diagram shows \(t+1\) transforming into the current time step \(t\), so that \(S_{t+1}\) and \(R_{t+1}\) are now \(S_t\) and \(R_t\).
Transition probabilities
Since the sets \(\boldsymbol{S}\) and \(\boldsymbol{R}\) are finite, the random variables \(R_t\) and \(S_t\) have well-defined probability distributions. In other words, all the possible values that can be assigned to \(R_t\) and \(S_t\) have some associated probability. These distributions depend on the preceding state and action that occurred in the previous time step \(t-1\).
For example, suppose \(s' \in \boldsymbol{S}\) and \(r \in \boldsymbol{R}\). Then there is some probability that \(S_t=s'\) and \(R_t=r\). This probability is determined by the particular values of the preceding state \(s \in \boldsymbol{S}\) and action \(a \in \boldsymbol{A}(s)\). Note that \(\boldsymbol{A}(s)\) is the set of actions that can be taken from state \(s\).
Let’s define this probability.
For all \(s^{\prime } \in \boldsymbol{S}\), \(s \in \boldsymbol{S}\), \(r\in \boldsymbol{R}\), and \(a\in \boldsymbol{A}(s)\), we define the probability of the transition to state \(s^{\prime }\) with reward \(r\) from taking action \(a\) in state \(s\) as \begin{equation*} p\left( s^{\prime },r\mid s,a\right) =\Pr \left\{ S_{t}=s^{\prime },R_{t}=r\mid S_{t-1}=s,A_{t-1}=a\right\} \text{.} \end{equation*}
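To make this definition a bit more tangible, here's a sketch of what the dynamics \(p\left( s^{\prime },r\mid s,a\right)\) might look like as a table in Python, loosely mirroring the toy environment from earlier. The states, rewards, and probabilities are invented for illustration; the important properties are that the entries for each state-action pair sum to \(1\), and that we can sample the next state and reward from them.

```python
import random

# Hypothetical dynamics p(s', r | s, a), keyed by (s, a) -> {(s', r): probability}.
p = {
    ("low", "work"):  {("high", 1.0): 0.8, ("low", 0.0): 0.2},
    ("low", "wait"):  {("low", 0.0): 1.0},
    ("high", "work"): {("high", 1.0): 0.8, ("low", 0.0): 0.2},
    ("high", "wait"): {("low", 0.0): 1.0},
}

# Each (s, a) entry is a proper probability distribution over (s', r) pairs.
for (s, a), outcomes in p.items():
    assert abs(sum(outcomes.values()) - 1.0) < 1e-9

def sample_transition(s, a):
    """Sample (S_t, R_t) given S_{t-1} = s and A_{t-1} = a."""
    outcomes = list(p[(s, a)].keys())
    probs = list(p[(s, a)].values())
    return random.choices(outcomes, weights=probs, k=1)[0]

next_state, reward = sample_transition("low", "work")
```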
Wrapping up
Alright, we now have a formal way to model sequential decision making. How do you feel about Markov decision processes so far? Some of this may take a bit of time to sink in, but if you can understand the relationship between the agent and the environment and how they interact with each other, then you're off to a great start!
Like we discussed earlier, MDPs are the bedrock for reinforcement learning, so make sure to get comfortable with what we covered here, and next time we’ll build on the concept of cumulative rewards. I’ll see ya there!