Reinforcement Learning - Goal Oriented Intelligence

Deep Learning Course 4 of 4 - Level: Advanced

Watch Q-learning Agent Play Game with Python - Reinforcement Learning Code Project

Watch Q-learning agent play Frozen Lake

What's up, guys? In this post, we'll write the code to enable us to watch our trained Q-learning agent play Frozen Lake, so let's get to it!

Last time, we left off having just finished training our Q-learning agent to play Frozen Lake. We trained it for `10,000` episodes, and now it's time to see our agent on the ice in action!
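If you're jumping in here, note that the code below assumes the setup from the training post: the imports, the Frozen Lake environment, and the trained `q_table`. Here's a minimal sketch of those assumed pieces; the `max_steps_per_episode` value is the cap we used during training, so adjust it if yours differs.

```
# Assumed setup carried over from the training post
import numpy as np
import gym
import time
from IPython.display import clear_output

env = gym.make("FrozenLake-v0")

max_steps_per_episode = 100  # same cap used during training

# q_table was filled in over the 10,000 training episodes;
# shown here only for its shape (a fresh table of zeros won't play well!)
q_table = np.zeros((env.observation_space.n, env.action_space.n))
```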

The code to watch the agent play the game

This block of code is going to allow us to watch our trained agent play Frozen Lake using the knowledge it's gained from the training we completed.

```
# Watch our agent play Frozen Lake by playing the best action
# from each state according to the Q-table

for episode in range(3):
    # initialize new episode params

    for step in range(max_steps_per_episode):
        # Show current state of environment on screen
        # Choose action with highest Q-value for current state
        # Take new action

        if done:
            if reward == 1:
                # Agent reached the goal and won episode
            else:
                # Agent stepped in a hole and lost episode

        # Set new state

env.close()
```

We're going to watch the agent play three episodes. Let's look at the start of the outer loop first.

For each episode

```
for episode in range(3):
    state = env.reset()
    done = False
    print("*****EPISODE ", episode+1, "*****\n\n\n\n")
    time.sleep(1)
    ...
```

For each of the three episodes, we first reset the `state` of our environment and set `done` to `False`. This variable serves the same purpose as it did in our training loop last time: it keeps track of whether or not our last action ended the episode.

We then just print to the console what episode we're starting, and we `sleep` for one second so that we have time to actually read that printout before it disappears from the screen.

Now, we'll move on to the inner loop.

For each time-step

```
for step in range(max_steps_per_episode):
    clear_output(wait=True)
    env.render()
    time.sleep(0.3)
    ...
```

For each time-step within the episode, we call the IPython `display` function `clear_output()`, which clears the output from the current cell in the Jupyter notebook. With `wait=True`, it waits to clear the output until there is another printout to overwrite it. This is all done so that the notebook and the screen display remain smooth as we watch our agent play.
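If you'd like to see what `clear_output(wait=True)` does on its own, here's a tiny standalone sketch, independent of Frozen Lake, that animates a counter in a notebook cell:

```
from IPython.display import clear_output
import time

for i in range(5):
    clear_output(wait=True)  # old output is cleared only when new output arrives
    print("count:", i)
    time.sleep(0.3)
```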

We then call `render()` on our `env` object, which will render the current state of the environment to the display so that we can actually visually see the game grid and where our agent is on the grid. We then `sleep` again for `300` milliseconds to give us time to actually see the current state of the environment on screen before moving on to the next time step. Don't worry, this will all come together once we view the final product.

```
action = np.argmax(q_table[state,:])
new_state, reward, done, info = env.step(action)
```

We then set our `action` to be the action that has the highest Q-value from our Q-table for our current `state`, and then we take that action with `env.step()`, just like we saw during training. This will update our `new_state`, the `reward` for our `action`, and whether or not the action completed the episode.

```
if done:
    clear_output(wait=True)
    env.render()
    if reward == 1:
        print("****You reached the goal!****")
        time.sleep(3)
    else:
        print("****You fell through a hole!****")
        time.sleep(3)
    clear_output(wait=True)
    break
```

If our `action` did end the episode, then we `render` the environment to see where the agent ended up from our last time-step. If the `reward` for that `action` was a `1`, then we know the episode ended because the agent reached the frisbee and won the game, so we print that info to the console. If the reward wasn't a `1`, then it must have been a `0`, meaning the agent fell through a hole and lost the episode.

After seeing how the episode ended, we then start a new episode.

Now, if the last action didn't complete the episode, then we skip over the conditional, transition to the new state, and move on to the next time step.

```
state = new_state
```

After all three episodes are done, we then close the environment, and that's it!

```
env.close()
```

Watching the agent play

Alright, now in the video, we run this code and watch the agent play! Here's what we expect. We'll have our grid printed to the screen, the agent will start in the starting state in the top left corner of the grid, and we'll be able to see the actions chosen by the agent displayed above the grid at each time step. We'll also see the agent move around the grid, as indicated with a red marker.

Remember when we introduced Frozen Lake, part of the description noted that the agent won't always move in the direction it chooses because the ice is slippery. Even if the agent chooses to go right, for example, it may slip and go up instead. Keep this in mind as you watch the agent play: you may see the chosen action displayed as right but then see the agent take a step up. The slippery ice is the reason for this.
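You can actually see this slipperiness in the environment's transition table. Here's a hedged sketch, assuming the classic Gym toy-text API, where the unwrapped environment exposes a dictionary `P` that maps each state and action to a list of `(probability, next_state, reward, done)` tuples:

```
# Inspect the transitions for state 0 with action 2 (right)
# Each entry is (probability, next_state, reward, done)
for prob, next_state, reward, done in env.unwrapped.P[0][2]:
    print(f"prob={prob:.2f}  next_state={next_state}  reward={reward}  done={done}")
```

With slippery ice, choosing right actually moves the agent right only about a third of the time; the rest of the time it slips perpendicular to its chosen direction, which is exactly why the agent sometimes drifts off its chosen path.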

Now check out the agent play in the video!

Wrapping up

Alright, that's it! Pretty sweet for our first implementation of reinforcement learning in code, right? If you were able to follow along with the code for this entire implementation of Frozen Lake, then you should definitely feel good, and give yourself a pat on the back!

We'll continue to gain exposure to more difficult and sophisticated games as we progress deeper into reinforcement learning, so if you thought playing Frozen Lake was cool, you'll definitely want to stick around!

In the next post, we're going to become enlightened on how this Q-learning algorithm that uses value iteration, like what we used for Frozen Lake, may not be the absolute best approach, especially when we're dealing with large state spaces. There, we'll see what we can do to make huge efficiency advances.

Hm… I wonder if neural networks might start to show up sometime soon. Can you see where they might fit in with what we've learned so far? Let me know any and all of your ideas in the comments. I'll see ya in the next one!
