Up until this point, we have worked with discrete control tasks, such as the Atari games in Chapter 5, Deep Q-Network, and LunarLander in Chapter 6, Learning Stochastic and PG Optimization. Playing these games requires choosing among only a small set of discrete actions, roughly two to five. As we learned in Chapter 6, Learning Stochastic and PG Optimization, policy gradient algorithms can easily be adapted to continuous actions. To demonstrate this, we'll deploy the next few policy gradient algorithms on a new set of environments, called Roboschool, in which the goal is to control a robot in different situations. Roboschool was developed by OpenAI and uses the familiar OpenAI Gym interface that we used in the previous chapters. Its environments are based on the Bullet Physics Engine (a physics engine that simulates soft and rigid body dynamics).
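To see what changes when moving from discrete to continuous control, the snippet below runs a random agent in a Roboschool environment through the usual Gym loop. This is a minimal sketch, assuming the roboschool package is installed alongside a compatible version of gym; RoboschoolHopper-v1 is used purely as an illustrative task. The key point is that the action space is a continuous Box of real values rather than a discrete set of buttons.

import gym
import roboschool  # importing roboschool registers its environments with Gym

# Illustrative environment choice; any Roboschool task would work here
env = gym.make('RoboschoolHopper-v1')

# Unlike Atari or LunarLander, the action space is continuous:
# a Box of real-valued joint torques, not a discrete set of actions
print(env.observation_space)  # e.g., Box(15,)
print(env.action_space)       # e.g., Box(3,)

obs = env.reset()
done = False
while not done:
    # Sample a random continuous action; a trained policy gradient
    # agent would output this real-valued vector instead
    action = env.action_space.sample()
    obs, reward, done, _ = env.step(action)

Because the interface is identical to the one used in the previous chapters, only the policy's output layer has to change: instead of a distribution over a few discrete actions, it must produce a real-valued vector.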

Reinforcement Learning Algorithms with Python
By: Andrea Lonza
Overview of this book
Reinforcement Learning (RL) is a popular and promising branch of AI that involves making smarter models and agents that can automatically determine ideal behavior based on changing requirements. This book will help you master RL algorithms and understand their implementation as you build self-learning agents.
Starting with an introduction to the tools, libraries, and setup needed to work in the RL environment, this book covers the building blocks of RL and delves into value-based methods, such as the application of Q-learning and SARSA algorithms. You'll learn how to use a combination of Q-learning and neural networks to solve complex problems. Furthermore, you'll study policy gradient methods, TRPO, and PPO, to improve performance and stability, before moving on to the DDPG and TD3 deterministic algorithms. This book also covers how imitation learning techniques work and how DAgger can teach an agent to drive. You'll discover evolutionary strategies and black-box optimization techniques, and see how they can improve RL algorithms. Finally, you'll get to grips with exploration approaches, such as UCB and UCB1, and develop a meta-algorithm called ESBAS.
By the end of the book, you'll have worked with key RL algorithms to overcome challenges in real-world applications, and be part of the RL research community.
Table of Contents (19 chapters)
Preface
Section 1: Algorithms and Environments
The Landscape of Reinforcement Learning
Implementing RL Cycle and OpenAI Gym
Solving Problems with Dynamic Programming
Section 2: Model-Free RL Algorithms
Q-Learning and SARSA Applications
Deep Q-Network
Learning Stochastic and PG Optimization
TRPO and PPO Implementation
DDPG and TD3 Applications
Section 3: Beyond Model-Free Algorithms and Improvements
Model-Based RL
Imitation Learning with the DAgger Algorithm
Understanding Black-Box Optimization Algorithms
Developing the ESBAS Algorithm
Practical Implementation for Resolving RL Challenges
Assessments
Other Books You May Enjoy