Reinforcement Learning - Introducing Goal Oriented Intelligence

with deeplizard.

Watch Q-learning Agent Play Game with Python - Reinforcement Learning Code Project

October 21, 2018


Watch Q-learning agent play Frozen Lake

What’s up, guys? In this post, we’ll write the code to enable us to watch our trained Q-learning agent play Frozen Lake, so let’s get to it!

Last time, we left off having just finished training our Q-learning agent to play Frozen Lake. We trained it for 10,000 episodes, and now it’s time to see our agent on the ice in action!
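Everything below assumes the setup from the training post is still live in your notebook: the imports, the Frozen Lake environment, the trained q_table, and max_steps_per_episode. If you're starting from a fresh notebook, the rough shape of that setup looks like the sketch below. Note that the real q_table comes from actually re-running the training loop, not from the zeros shown here.

import numpy as np
import gym
import time
from IPython.display import clear_output

# Create the Frozen Lake environment (same one used during training)
env = gym.make("FrozenLake-v0")

# Carried over from the training post
max_steps_per_episode = 100

# Placeholder shape only -- the real q_table is the one filled in by the
# 10,000 training episodes from the previous post
q_table = np.zeros((env.observation_space.n, env.action_space.n))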

The code to watch the agent play the game

This block of code will allow us to watch our trained agent play Frozen Lake using the knowledge it gained during training. We'll walk through it piece by piece below.

# Watch our agent play Frozen Lake by playing the best action
# from each state according to the Q-table

for episode in range(3):
    # Initialize new episode params
    state = env.reset()
    done = False
    print("*****EPISODE ", episode+1, "*****\n\n\n\n")
    time.sleep(1)

    for step in range(max_steps_per_episode):
        # Show current state of environment on screen
        clear_output(wait=True)
        env.render()
        time.sleep(0.3)

        # Choose action with highest Q-value for current state
        action = np.argmax(q_table[state,:])

        # Take new action
        new_state, reward, done, info = env.step(action)

        if done:
            clear_output(wait=True)
            env.render()
            if reward == 1:
                # Agent reached the goal and won episode
                print("****You reached the goal!****")
                time.sleep(3)
            else:
                # Agent stepped in a hole and lost episode
                print("****You fell through a hole!****")
                time.sleep(3)
                clear_output(wait=True)
            break

        # Set new state
        state = new_state

env.close()

We're going to watch the agent play three episodes. Let's look at the start of the outer loop first.

For each episode

for episode in range(3):
    state = env.reset()
    done = False
    print("*****EPISODE ", episode+1, "*****\n\n\n\n")
    time.sleep(1)
    ...

For each of the three episodes, we first reset the state of our environment and set done to False. This variable serves the same purpose as it did in our training loop last time: it just keeps track of whether or not our last action ended the episode.

We then just print to the console what episode we’re starting, and we sleep for one second so that we have time to actually read that printout before it disappears from the screen.

Now, we'll move on to the inner loop.

For each time-step

for step in range(max_steps_per_episode):        
    clear_output(wait=True)
    env.render()
    time.sleep(0.3)
    ...

For each time-step within the episode, we're calling the IPython display function clear_output(), which clears the output from the current cell in the Jupyter notebook. With wait=True, it waits to clear the output until there is another printout to overwrite it. This is all just done so that the notebook and the screen display remain smooth as we watch our agent play.

We then call render() on our env object, which will render the current state of the environment to the display so that we can actually visually see the game grid and where our agent is on the grid. We then sleep again for 300 milliseconds to give us time to actually see the current state of the environment on screen before moving on to the next time step. Don't worry, this will all come together once we view the final product.

action = np.argmax(q_table[state,:])        
new_state, reward, done, info = env.step(action)

We then set our action to be the action that has the highest Q-value from our Q-table for our current state, and then we take that action with env.step(), just like we saw during training. This will update our new_state, the reward for our action, and whether or not the action completed the episode.

if done:
    clear_output(wait=True)
    env.render()
    if reward == 1:
        print("****You reached the goal!****")
        time.sleep(3)
    else:
        print("****You fell through a hole!****")
        time.sleep(3)
        clear_output(wait=True)
    break

If our action did end the episode, then we render the environment once more to see where the agent ended up on its last time-step. If the reward for that action was a 1, then we know the episode ended because the agent reached the frisbee and won the game, so we print that info to the console. If the reward wasn't a 1, then we know it was a 0, which means the agent fell through a hole and lost the episode.

After seeing how the episode ended, we break out of the inner loop and start a new episode.

Now, if the last action didn’t complete the episode, then we skip over the conditional, transition to the new state, and move on to the next time step.

state = new_state

After all three episodes are done, we then close the environment, and that’s it!

env.close()
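By the way, three episodes make a fun demo, but they're a tiny sample. If you'd like a rough read on how well the greedy policy actually performs, a stripped-down variant of the same loop, with the rendering removed and a win counter added, does the trick. This is just an illustrative sketch, not part of the original code; it reuses the q_table and max_steps_per_episode from above and makes its own environment so it doesn't matter that the rendering one was closed.

# Rough success-rate check for the greedy policy (illustrative sketch only)
eval_env = gym.make("FrozenLake-v0")
num_eval_episodes = 1000
wins = 0

for episode in range(num_eval_episodes):
    state = eval_env.reset()
    for step in range(max_steps_per_episode):
        action = np.argmax(q_table[state, :])
        state, reward, done, info = eval_env.step(action)
        if done:
            wins += reward  # reward is 1 only when the agent reaches the goal
            break

print("Greedy policy success rate:", wins / num_eval_episodes)
eval_env.close()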

Watching the agent play

Alright, now in the video, we run this code and watch the agent play! Here's what we expect: the grid will be printed to the screen, the agent will start in the starting state in the top-left corner, and the action chosen by the agent will be displayed above the grid at each time step. We'll also see the agent move around the grid, indicated by a red marker.

Remember from when we introduced Frozen Lake that the agent won't always move in the direction it chooses: the ice is slippery, so even if the agent chooses to go right, for example, it may slip and go up instead. Keep this in mind as you watch the agent play, because you may see the chosen action displayed as right but then see the agent take a step up. The slippery ice is the reason for this.
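If you're curious what that slipperiness looks like under the hood, Gym's toy-text Frozen Lake environment keeps its transition dynamics in a dictionary attribute called P, which maps each state and action to a list of (probability, next state, reward, done) tuples. Poking at it is a nice sanity check, but this reaches into Gym internals, so treat it as a sketch that may vary across Gym versions.

# Inspect the transition probabilities for taking action 2 (right)
# from state 0 (the start state). unwrapped skips the TimeLimit wrapper.
for prob, next_state, reward, done in env.unwrapped.P[0][2]:
    print("prob =", prob, "| next state =", next_state,
          "| reward =", reward, "| done =", done)

On the slippery version of the environment, you should see the probability split across the intended direction and its two perpendicular neighbors, rather than a single next state with probability 1.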

Now check out the agent play in the video!

Wrapping up

Alright, that’s it! Pretty sweet for our first implementation of reinforcement learning in code, right? If you were able to follow along with the code for this entire implementation of Frozen Lake, then you should definitely feel good, and give yourself a pat on the back!

We’ll continue to gain exposure to more difficult and sophisticated games as we progress deeper into reinforcement learning, so if you thought playing Frozen Lake was cool, you’ll definitely want to stick around!

In the next post, we're going to become enlightened on how this Q-learning algorithm that uses value iteration, like the one we used for Frozen Lake, may not be the best approach, especially when we're dealing with large state spaces. There, we'll see what we can do to make huge gains in efficiency.

Hm… I wonder if neural networks might start to show up sometime soon. Can you see where they might fit into the scheme of what we've learned so far? Let me know any and all of your ideas in the comments. I'll see ya in the next one!
