To check this theoretical conclusion in practice, let's plot the variance of our policy gradients during training, for both the version with the baseline and the version without it. The complete example is in Chapter12/01_cartpole_pg.py, and most of the code is the same as in Chapter 11, Policy Gradients – an Alternative. The differences in this version are the following:
- It accepts the command-line option --baseline, which enables mean subtraction from the reward. By default, no baseline is used.
- To gather only the gradients from the policy loss, and exclude the gradients from the entropy bonus added for exploration, we need to calculate the gradients in two stages. Luckily, PyTorch allows this to be done easily. In the following code, only the relevant part of the training loop is included to illustrate the idea:
...
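Since the listing is elided here, what follows is a minimal sketch of how such a two-stage gradient calculation can look in PyTorch, not the book's actual listing. The names net, optimizer, states_v, actions_t, scales_v, and ENTROPY_BETA are assumptions introduced for illustration:

import numpy as np
import torch.nn.functional as F

# Assumed to already exist in the training loop:
#   net       - the policy network
#   optimizer - its optimizer
#   states_v  - float tensor with the batch of observations
#   actions_t - long tensor with the actions taken in the batch
#   scales_v  - float tensor of discounted rewards, with the mean
#               already subtracted when the baseline is enabled
ENTROPY_BETA = 0.01   # assumed entropy bonus coefficient

optimizer.zero_grad()
logits_v = net(states_v)
log_prob_v = F.log_softmax(logits_v, dim=1)

# Stage 1: backpropagate only the policy loss. retain_graph=True keeps
# the computation graph alive so that the entropy loss can be
# backpropagated through it afterwards.
log_prob_actions_v = scales_v * log_prob_v[range(len(actions_t)), actions_t]
loss_policy_v = -log_prob_actions_v.mean()
loss_policy_v.backward(retain_graph=True)

# At this point, p.grad holds pure policy gradients, so we can snapshot
# them and track their variance over the course of training.
grads = np.concatenate([p.grad.data.cpu().numpy().flatten()
                        for p in net.parameters()
                        if p.grad is not None])
grad_variance = float(np.var(grads))

# Stage 2: backpropagate the entropy bonus. Its gradients are
# accumulated on top of the policy gradients before the update step.
prob_v = F.softmax(logits_v, dim=1)
entropy_v = -(prob_v * log_prob_v).sum(dim=1).mean()
entropy_loss_v = -ENTROPY_BETA * entropy_v
entropy_loss_v.backward()
optimizer.step()

The grad_variance value can then be written to TensorBoard (or any other logger) on every iteration to produce the baseline-versus-no-baseline comparison plot described above.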