Reinforcement Learning Series Intro - Syllabus Overview
Reinforcement Learning series introduction
What's up, guys? Welcome to this series on reinforcement learning! We'll start out by introducing the absolute basics to build a solid foundation for everything that follows.
We'll then progress onto more advanced and sophisticated topics that integrate neural networks into reinforcement learning. We'll also be getting our hands dirty by implementing some super cool reinforcement learning projects in code! Without further ado, let's get to it!
What is reinforcement learning?
Reinforcement learning (RL) is an area of machine learning that focuses on how you, or how something, might act in an environment in order to maximize some given reward. Reinforcement learning algorithms study the behavior of subjects in such environments and learn to optimize that behavior.
A domain commonly used to illustrate the power of reinforcement learning is game playing. I'm sure you've come across all the posts, news stories, and papers about games being played by AI, right?
Take AlphaGo, for example – DeepMind's artificially intelligent Go player that beat the world champion of Go. Even more recently, what's gotten a ton of hype is OpenAI's team of five neural networks, called OpenAI Five, which defeated a team of top professionals in the very complex video game, Dota 2. These human-defeating AIs are powered by reinforcement learning algorithms.
So, back to the description of reinforcement learning we mentioned earlier. We said that reinforcement learning focuses on how a given subject might take actions in an environment in order to maximize some reward. Using a game as an example of an environment, reinforcement learning is concerned with how the player of the game can take actions, like making a move in a certain direction, in order to maximize its reward, which in a game setting, might be points.
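That interaction loop, an agent taking actions in an environment and collecting rewards, can be sketched in a few lines of plain Python. Everything below (the corridor environment, the fixed "always move right" policy) is a made-up toy example for illustration, not code from the series:

```python
# A toy environment: the agent walks a 1-D corridor of states 0..4.
# Reaching the right end (state 4) ends the episode and pays reward +1.

class CorridorEnv:
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action is +1 (move right) or -1 (move left), clipped to the corridor
        self.state = max(0, min(4, self.state + action))
        done = self.state == 4
        reward = 1.0 if done else 0.0
        return self.state, reward, done

env = CorridorEnv()
state = env.reset()
total_reward = 0.0
done = False
while not done:
    action = +1  # a hard-coded policy stands in for a learned one
    state, reward, done = env.step(action)
    total_reward += reward

print(total_reward)  # the goal pays 1.0
```

The point of reinforcement learning is to replace that hard-coded `action = +1` line with a policy the agent learns from the rewards it receives.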
So, what's going to be the approach in this series? Well, if you're anything like me and have tried to jump head first right into reinforcement learning, you may have gotten overwhelmed by all the confusing lingo, unknown acronyms, and abundance of mathematical notation.
I mean, we've got things like Markov Decision Processes, Q-Learning, DQNs, Bellman Equations, Policy Gradients, and, not to mention, a bunch of intimidating mathematical notation to go along with all of it.
Well, fear not! Lucky for us, we're going to tackle this beast of reinforcement learning together step-by-step in a super smooth and intuitive way. As we progress, you'll start to feel more comfortable and confident in your abilities to work with reinforcement learning. Adding this skill to your machine learning toolbox will prove to be immensely useful and valuable!
Through this series, we'll get exposure to and gain an understanding of the intuition, the math, and the coding involved with reinforcement learning. Some parts of the series will be purely intuitive, others are going to be more mathematically involved, and then others are going to be more programmatically involved. We'll first focus more on the intuition and some of the math, and then we'll dive into coding reinforcement learning programs.
Reinforcement learning series syllabus
Here is the syllabus for this course:
- Part 1: Introduction to Reinforcement Learning
- Section 1: Markov Decision Processes (MDPs)
- Introduction to MDPs
- Policies and value functions
- Learning optimal policies and value functions
- Section 2: Q-learning
- Introduction to Q-learning with value iteration
- Implementing an epsilon greedy strategy
- Section 3: Code project - Implement Q-learning with pure Python to play a game
- Environment set up and intro to OpenAI Gym
- Write Q-learning algorithm and train agent to play game
- Watch trained agent play game
- Part 2: Deep Reinforcement Learning
- Section 1: Deep Q-networks (DQNs)
- Introduction to DQNs
- Replay Memory Explained
- Training a Deep Q-Network
- Training a DQN With Fixed Q-Targets
- Section 2: Code project - Implement deep Q-network with PyTorch
- Deep Q-Network Code Project Intro
- Build Deep Q-Network in Code
- DQN Image Processing And Env Management
- Deep Q-Network Training Code
- Solving Cart and Pole With a DQN
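As a small preview of where Part 1 is headed, here is a minimal sketch of tabular Q-learning with an epsilon greedy strategy. The five-state corridor environment, the hyperparameter values, and the variable names are all illustrative assumptions, not code from the course:

```python
import random

# Hypothetical toy task: states 0..4 in a corridor, actions 0 (left) / 1 (right).
# Reaching state 4 ends the episode with reward +1; all other steps pay 0.
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
q = [[0.0, 0.0] for _ in range(5)]      # Q-table: q[state][action]
rng = random.Random(0)                  # seeded for reproducibility

def step(state, action):
    next_state = max(0, min(4, state + (1 if action == 1 else -1)))
    done = next_state == 4
    reward = 1.0 if done else 0.0
    return next_state, reward, done

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon greedy: explore with probability epsilon, otherwise act greedily.
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state
```

After training, the greedy action in every non-goal state should be "move right," since that is the shortest path to the reward. The update rule in the middle is the piece the Q-learning sections of the syllabus unpack in detail.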
If at any point you hit a speed-bump along any of the more technical parts, don't be discouraged! Take a breath, focus, ask questions, and I promise you'll be alright. This stuff takes effort, and we've got you! ❤️