The Reinforcement Learning Workshop

By: Alessandro Palmas, Emanuele Ghelfi, Dr. Alexandra Galina Petre, Mayur Kulkarni, Anand N.S., Quan Nguyen, Aritra Sen, Anthony So, Saikat Basak

Overview of this book

Various intelligent applications such as video games, inventory management software, warehouse robots, and translation tools use reinforcement learning (RL) to make decisions and perform actions that maximize the probability of the desired outcome. This book will help you get to grips with the techniques and algorithms for implementing RL in your machine learning models. Starting with an introduction to RL, you'll be guided through different RL environments and frameworks. You'll learn how to implement your own custom environments and use OpenAI Baselines to run RL algorithms. Once you've explored classic RL techniques such as Dynamic Programming, Monte Carlo, and TD Learning, you'll understand when to apply the different deep learning methods in RL and advance to deep Q-learning. The book will even help you understand the different stages of machine-based problem-solving by using DARQN on a popular video game, Breakout. Finally, you'll find out when to use a policy-based method to tackle an RL problem. By the end of The Reinforcement Learning Workshop, you'll be equipped with the knowledge and skills needed to solve challenging problems using reinforcement learning.

2. Markov Decision Processes and Bellman Equations

Introduction

In the previous chapter, we studied the main elements of Reinforcement Learning (RL). We described an agent as an entity that can perceive the state of an environment and act on it, modifying that state in order to achieve a goal. An agent acts through a policy, which represents its behavior: the policy maps the environment state to the action the agent selects. In the second half of the previous chapter, we introduced Gym and Baselines, two Python libraries that simplify environment representation and algorithm implementation, respectively.
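As a quick refresher, the following minimal sketch shows the Gym interaction loop with a random policy standing in for a learned one. CartPole-v0 is chosen purely for illustration, and the snippet assumes the classic Gym API, in which reset() returns the initial observation and step() returns a 4-tuple:

import gym

# Create an environment; CartPole-v0 is used here purely for illustration.
env = gym.make("CartPole-v0")

state = env.reset()  # perceive the initial environment state
done = False
total_reward = 0.0

while not done:
    # A stand-in policy: sample a random action from the action space.
    action = env.action_space.sample()
    # Acting modifies the environment state and produces a reward.
    state, reward, done, info = env.step(action)
    total_reward += reward

print("Episode return:", total_reward)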

We also mentioned that RL treats problems as Markov Decision Processes (MDPs), without going into detail or giving a formal definition.

In this chapter, we will formally describe what an MDP is, along with its properties and characteristics. When facing a new problem in RL, we have to make sure that the problem can be formalized as an MDP; otherwise, RL techniques cannot be applied.
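The key requirement behind this formalization is the Markov property: the next state must depend only on the current state (and action), not on the full history of states. For a state sequence, this can be written as:

P(S_{t+1} = s' \mid S_t = s_t, S_{t-1} = s_{t-1}, \dots, S_0 = s_0) = P(S_{t+1} = s' \mid S_t = s_t)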

Before presenting a formal definition of MDPs, we need to understand Markov Chains (MCs) and Markov Reward Processes (MRPs). MCs and MRPs are specific, simplified cases of MDPs: an MC only models state transitions, without rewards or actions. Consider the game of snakes and ladders, where the next state depends only on the current square and the number rolled on the dice. An MRP adds a reward component to each state transition. MCs and MRPs are useful for building up an understanding of the characteristics of MDPs gradually; we will look at specific examples of both later in the chapter, as in the sketch below.
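To make the idea of an MC concrete, the following sketch samples a trajectory from a tiny three-state chain using NumPy. The states and transition probabilities are invented purely for illustration:

import numpy as np

# A toy Markov chain; each row of P holds the transition
# probabilities out of one state and sums to 1.
states = ["Sunny", "Cloudy", "Rainy"]
P = np.array([
    [0.8, 0.15, 0.05],  # transitions from Sunny
    [0.3, 0.4, 0.3],    # transitions from Cloudy
    [0.2, 0.3, 0.5],    # transitions from Rainy
])

rng = np.random.default_rng(0)
s = 0  # start in Sunny
trajectory = [states[s]]
for _ in range(10):
    # The next state depends only on the current state (the Markov property).
    s = rng.choice(len(states), p=P[s])
    trajectory.append(states[s])

print(" -> ".join(trajectory))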

Along with MDPs, this chapter also presents the concepts of the state-value function and the action-value function, which are used to evaluate how good a state is for an agent and how good an action taken in a given state is. State-value functions and action-value functions are the building blocks of the algorithms used to solve real-world problems. As we will learn later in this chapter, both functions are closely related to the agent's policy and the environment dynamics.
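To fix notation early (both definitions are developed in detail later in the chapter), for a policy \pi the two functions are defined as:

V^{\pi}(s) = \mathbb{E}_{\pi}[G_t \mid S_t = s], \qquad Q^{\pi}(s, a) = \mathbb{E}_{\pi}[G_t \mid S_t = s, A_t = a]

where G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} is the discounted return and \gamma \in [0, 1] is the discount factor.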

The final part of this chapter presents two Bellman equations, namely the Bellman expectation equation and the Bellman optimality equation. In the context of RL, these equations make it possible to evaluate the behavior of an agent and to find a policy that maximizes the agent's performance in an MDP.
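To preview their shape (both equations are derived step by step later in the chapter), for state values they read:

V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s'} p(s' \mid s, a) \left[ r(s, a, s') + \gamma V^{\pi}(s') \right]

V^{*}(s) = \max_{a} \sum_{s'} p(s' \mid s, a) \left[ r(s, a, s') + \gamma V^{*}(s') \right]

The first (expectation) equation evaluates a given policy \pi; the second (optimality) equation characterizes the value of the best achievable behavior.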

In this chapter, we will practice with some MDP examples, such as the student MDP and Gridworld. We will implement the solution methods and equations explained in this chapter using Python, SciPy, and NumPy.
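As a taste of what those implementations look like, note that the Bellman expectation equation for a small MRP is linear in the state values and can therefore be solved in closed form with NumPy. The transition matrix and rewards below are invented purely for illustration:

import numpy as np

# Solve v = R + gamma * P v, i.e. (I - gamma * P) v = R, for a toy MRP.
P = np.array([
    [0.8, 0.15, 0.05],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])
R = np.array([1.0, 0.0, -1.0])  # expected immediate reward in each state
gamma = 0.9  # discount factor

# (I - gamma * P) is invertible for gamma < 1, so the solve is well defined.
v = np.linalg.solve(np.eye(3) - gamma * P, R)
print("State values:", v)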
