The Reinforcement Learning Workshop

By: Alessandro Palmas, Emanuele Ghelfi, Dr. Alexandra Galina Petre, Mayur Kulkarni, Anand N.S., Quan Nguyen, Aritra Sen, Anthony So, Saikat Basak

Overview of this book

Various intelligent applications such as video games, inventory management software, warehouse robots, and translation tools use reinforcement learning (RL) to make decisions and perform actions that maximize the probability of the desired outcome. This book will help you get to grips with the techniques and algorithms for implementing RL in your machine learning models. Starting with an introduction to RL, you'll be guided through different RL environments and frameworks. You'll learn how to implement your own custom environments and use OpenAI baselines to run RL algorithms. Once you've explored classic RL techniques such as Dynamic Programming, Monte Carlo, and TD Learning, you'll understand when to apply the different deep learning methods in RL and advance to deep Q-learning. The book will even help you understand the different stages of machine-based problem-solving by using DARQN on the popular video game Breakout. Finally, you'll find out when to use a policy-based method to tackle an RL problem. By the end of The Reinforcement Learning Workshop, you'll be equipped with the knowledge and skills needed to solve challenging problems using reinforcement learning.
Table of Contents (14 chapters)
Preface
2. Markov Decision Processes and Bellman Equations

Summary

Monte Carlo methods learn from experience in the form of sample episodes. Without a model of the environment, the agent can learn a policy simply by interacting with the environment; the methods are applicable whenever episodes can be simulated or sampled. We learned about first-visit and every-visit evaluation, as well as the balance between exploration and exploitation, which is achieved by using an epsilon-soft policy. We then learned about on-policy and off-policy learning, and how importance sampling plays a key role in off-policy methods. Finally, we applied Monte Carlo methods to the Blackjack and Frozen Lake environments available in the OpenAI Gym framework.
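The first-visit evaluation mentioned above can be sketched in a few lines. The following is a minimal illustration, not code from the book: it uses a hypothetical three-state "corridor" environment with a fixed policy (always move right, reward +1 on the final step), so the value estimates can be checked by hand.

```python
from collections import defaultdict

def generate_episode():
    """Sample one episode under the fixed 'move right' policy.

    Returns a list of (state, reward) pairs, where the reward is the
    one received after leaving that state. Only the last step pays +1.
    """
    episode = []
    state = 0
    while state < 3:
        reward = 1.0 if state == 2 else 0.0
        episode.append((state, reward))
        state += 1
    return episode

def first_visit_mc(num_episodes=1000, gamma=0.9):
    """First-visit Monte Carlo prediction: V(s) is the average of the
    returns G observed after the FIRST occurrence of s in each episode."""
    returns = defaultdict(list)
    V = defaultdict(float)
    for _ in range(num_episodes):
        episode = generate_episode()
        states = [s for s, _ in episode]
        G = 0.0
        # Walk the episode backwards so G accumulates the discounted return.
        for t in range(len(episode) - 1, -1, -1):
            state, reward = episode[t]
            G = reward + gamma * G
            # First-visit check: record G only if the state does not
            # appear earlier in this episode.
            if state not in states[:t]:
                returns[state].append(G)
                V[state] = sum(returns[state]) / len(returns[state])
    return V
```

With gamma = 0.9, the environment is deterministic, so the estimates converge immediately to V(2) = 1.0, V(1) = 0.9, and V(0) = 0.81; switching the first-visit check off would give every-visit evaluation.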

In the next chapter, we will learn about temporal difference (TD) learning and its applications. TD learning combines the best of dynamic programming and Monte Carlo methods: like Monte Carlo methods, it works when the model is not known, but it provides incremental learning instead...
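To preview the incremental aspect, here is a hedged sketch of TD(0) prediction on the same hypothetical corridor environment used above (three states, +1 reward on the final step, always moving right). Unlike Monte Carlo, it updates V(s) after every single step by bootstrapping from V(s'), rather than waiting for the episode's full return.

```python
from collections import defaultdict

def td0(num_episodes=500, alpha=0.1, gamma=0.9):
    """TD(0) prediction on a toy 3-state corridor: update V(s) toward
    the TD target reward + gamma * V(s') after every step."""
    V = defaultdict(float)
    for _ in range(num_episodes):
        state = 0
        while state < 3:
            reward = 1.0 if state == 2 else 0.0
            next_state = state + 1
            # The value of a terminal state is defined as 0.
            v_next = V[next_state] if next_state < 3 else 0.0
            # Incremental bootstrapped update: the TD target replaces
            # the full Monte Carlo return, so no complete episode is
            # needed before learning can happen.
            V[state] += alpha * (reward + gamma * v_next - V[state])
            state = next_state
    return V
```

With enough episodes the estimates approach the same values the Monte Carlo sketch computes exactly (1.0, 0.9, 0.81), but each update uses only one transition, which is what makes TD methods suitable for long or non-terminating tasks.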

