In the previous chapter, we completed a comprehensive overview of the major policy gradient algorithms. Because they can handle continuous action spaces, these methods are applied to complex and sophisticated control systems. Policy gradient methods can also use second-order derivatives, as is done in TRPO, or other strategies that limit the policy update, in order to prevent unexpected bad behavior. However, the main concern with this class of algorithms is their poor sample efficiency, that is, the large amount of experience needed to master a task. This drawback stems from their on-policy nature, which requires fresh experience every time the policy is updated. In this chapter, we will introduce a new type of off-policy actor-critic algorithm that learns a deterministic target policy while exploring the environment with a noisy, stochastic behavior policy.
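To make the off-policy idea concrete, the following is a minimal sketch in plain NumPy. The linear policy, the function names, and the noise scale are illustrative assumptions for this sketch, not the book's implementation: the point is only that a deterministic target policy produces one action per state, while a noisy behavior policy gathers transitions that a replay buffer can serve back for many later updates.

import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2

# Hypothetical linear policy parameters (illustrative only).
W = rng.normal(scale=0.1, size=(action_dim, state_dim))

def deterministic_policy(state):
    # Target policy: each state maps to exactly one action, no sampling.
    return W @ state

def behavior_policy(state, noise_scale=0.1):
    # Exploration: the deterministic action plus Gaussian noise.
    return deterministic_policy(state) + rng.normal(scale=noise_scale, size=action_dim)

# Off-policy data collection: transitions gathered with the noisy behavior
# policy go into a replay buffer and can be reused across many updates of
# the target policy, unlike on-policy methods that need fresh data per update.
replay_buffer = []
state = rng.normal(size=state_dim)
for _ in range(5):
    action = behavior_policy(state)
    next_state = rng.normal(size=state_dim)  # stand-in for env.step(action)
    reward = 0.0                             # stand-in reward signal
    replay_buffer.append((state, action, reward, next_state))
    state = next_state

Because the stored transitions do not depend on the current policy parameters, they can be replayed many times; this reuse is where the sample-efficiency gain of off-policy methods such as DDPG and TD3 comes from.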
Reinforcement Learning Algorithms with Python

Overview of this book
Overview of this book
Reinforcement Learning (RL) is a popular and promising branch of AI that involves making smarter models and agents that can automatically determine ideal behavior based on changing requirements. This book will help you master RL algorithms and understand their implementation as you build self-learning agents.
Starting with an introduction to the tools, libraries, and setup needed to work in the RL environment, this book covers the building blocks of RL and delves into value-based methods, such as the application of Q-learning and SARSA algorithms. You'll learn how to use a combination of Q-learning and neural networks to solve complex problems. Furthermore, you'll study policy gradient methods, TRPO, and PPO, to improve performance and stability, before moving on to the DDPG and TD3 deterministic algorithms. This book also covers how imitation learning techniques work and how DAgger can teach an agent to drive. You'll discover evolutionary strategies and black-box optimization techniques, and see how they can improve RL algorithms. Finally, you'll get to grips with exploration approaches, such as UCB and UCB1, and develop a meta-algorithm called ESBAS.
By the end of the book, you'll have worked with key RL algorithms to overcome challenges in real-world applications, and be part of the RL research community.
Table of Contents (19 chapters)
Preface
Section 1: Algorithms and Environments
The Landscape of Reinforcement Learning
Implementing RL Cycle and OpenAI Gym
Solving Problems with Dynamic Programming
Section 2: Model-Free RL Algorithms
Q-Learning and SARSA Applications
Deep Q-Network
Learning Stochastic and PG Optimization
TRPO and PPO Implementation
DDPG and TD3 Applications
Section 3: Beyond Model-Free Algorithms and Improvements
Model-Based RL
Imitation Learning with the DAgger Algorithm
Understanding Black-Box Optimization Algorithms
Developing the ESBAS Algorithm
Practical Implementation for Resolving RL Challenges
Assessments
Other Books You May Enjoy