Hands-On Reinforcement Learning with Python

By Sudharsan Ravichandiran

Overview of this book

Reinforcement Learning (RL) is one of the most promising branches of artificial intelligence. Hands-On Reinforcement Learning with Python will help you master not only the basic reinforcement learning algorithms but also the advanced deep reinforcement learning algorithms. The book starts with an introduction to Reinforcement Learning, OpenAI Gym, and TensorFlow. You will then explore various RL algorithms and concepts, such as the Markov Decision Process, Monte Carlo methods, and dynamic programming, including value and policy iteration. This example-rich guide will introduce you to deep reinforcement learning algorithms such as Dueling DQN, DRQN, A3C, PPO, and TRPO. You will also learn about imagination-augmented agents, learning from human preference, DQfD, HER, and many other recent advancements in reinforcement learning. By the end of the book, you will have the knowledge and experience needed to implement reinforcement learning and deep reinforcement learning in your projects, and you will be all set to enter the world of artificial intelligence.

Dueling network

Now we build our dueling DQN: three convolutional layers followed by two fully connected layers, with the final fully connected layer split into two separate streams, a value stream and an advantage stream. An aggregate layer then combines the value stream and the advantage stream to compute the Q value (a rough sketch of the full network follows the class definition below). The dimensions of these layers are as follows:

  • Layer 1: 32 8x8 filters with stride 4 + ReLU
  • Layer 2: 64 4x4 filters with stride 2 + ReLU
  • Layer 3: 64 3x3 filters with stride 1 + ReLU
  • Layer 4a: 512-unit fully connected layer + ReLU (value stream)
  • Layer 4b: 512-unit fully connected layer + ReLU (advantage stream)
  • Layer 5a: 1-unit fully connected layer + ReLU (state value)
  • Layer 5b: fully connected layer with one unit per action + ReLU (advantage values)
  • Layer 6: aggregate layer, combining V(s) and A(s, a) into Q(s, a)
class QNetworkDueling(QNetwork):

We define the __init__ method to initialize all layers:


def __init__(self, input_size, output_size...
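
The full class definition is not reproduced here, so the following is a minimal, hypothetical sketch of the same architecture, written with TensorFlow's Keras API rather than the book's class-based code. The function name, the input shape, and the use of linear output heads with the standard mean-subtracted aggregation Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)) are assumptions for illustration, not the book's exact implementation:

import tensorflow as tf
from tensorflow.keras import layers

def build_dueling_q_network(num_actions, input_shape=(84, 84, 4)):
    # Input: a stack of preprocessed game frames (shape is an assumption)
    states = layers.Input(shape=input_shape)

    # Layers 1-3: convolutional layers
    x = layers.Conv2D(32, 8, strides=4, activation="relu")(states)
    x = layers.Conv2D(64, 4, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=1, activation="relu")(x)
    x = layers.Flatten()(x)

    # Layers 4a/4b: separate value and advantage streams
    value_hidden = layers.Dense(512, activation="relu")(x)
    advantage_hidden = layers.Dense(512, activation="relu")(x)

    # Layers 5a/5b: V(s) and A(s, a) heads (linear here, as in the
    # standard dueling DQN formulation)
    value = layers.Dense(1)(value_hidden)
    advantage = layers.Dense(num_actions)(advantage_hidden)

    # Layer 6: aggregate layer,
    # Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a));
    # subtracting the mean advantage keeps V and A identifiable
    q_values = value + (advantage - tf.reduce_mean(advantage, axis=1, keepdims=True))
    return tf.keras.Model(inputs=states, outputs=q_values)

For example, a network for an environment with four actions could then be created with build_dueling_q_network(num_actions=4).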
