Reinforcement Learning with TensorFlow

By: Dutta
Overview of this book

Reinforcement learning (RL) allows you to develop smart, quick, and self-learning systems for your business environment. It is an effective method for training learning agents and solving a variety of problems in artificial intelligence, from games, self-driving cars, and robots to enterprise applications such as data center energy saving (cooling data centers) and smart warehousing solutions. The book covers major advancements and successes achieved in deep reinforcement learning by synergizing deep neural network architectures with reinforcement learning. You'll also be introduced to the concept of reinforcement learning, its advantages, and the reasons why it's gaining so much popularity. You'll explore MDPs, Monte Carlo tree searches, dynamic programming such as policy and value iteration, and temporal difference learning such as Q-learning and SARSA. You will use TensorFlow and OpenAI Gym to build simple neural network models that learn from their own actions. You will also see how reinforcement learning algorithms play a role in games, image processing, and NLP. By the end of this book, you will have gained a firm understanding of what reinforcement learning is and how to put your knowledge to practical use by leveraging the power of TensorFlow and OpenAI Gym.

Asynchronous one-step Q-learning


The architecture of asynchronous one-step Q-learning is very similar to that of DQN. In DQN, an agent is represented by a pair of primary and target networks, where the one-step loss is the squared difference between the state-action value of the current state s predicted by the primary network and the one-step target, that is, the observed reward plus the discounted maximum state-action value of the next state as estimated by the target network. The gradients of this loss are calculated with respect to the parameters of the primary network, and the loss is then minimized using a gradient descent optimizer, leading to parameter updates of the primary network.
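As a concrete illustration, here is a minimal TensorFlow sketch of that one-step loss and update. The network shapes, the hyperparameters, and the helper names (build_q_network, one_step_q_loss, train_step) are assumptions made for this example, not code from the book:

```python
import tensorflow as tf

n_states, n_actions, gamma = 4, 2, 0.99  # illustrative sizes and discount factor

def build_q_network():
    # Small fully connected Q-network mapping a state to one value per action
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(n_states,)),
        tf.keras.layers.Dense(n_actions),
    ])

q_primary = build_q_network()                 # updated by gradient descent
q_target = build_q_network()                  # periodically synced copy
q_target.set_weights(q_primary.get_weights())

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

def one_step_q_loss(s, a, r, s_next, done):
    # One-step target: r + gamma * max_a' Q_target(s', a'), no bootstrap at terminal states
    r = tf.cast(r, tf.float32)
    done = tf.cast(done, tf.float32)
    q_next = tf.reduce_max(q_target(s_next), axis=1)
    target = r + gamma * (1.0 - done) * q_next
    # Q_primary(s, a) for the actions actually taken
    q_sa = tf.reduce_sum(q_primary(s) * tf.one_hot(a, n_actions), axis=1)
    return tf.reduce_mean(tf.square(tf.stop_gradient(target) - q_sa))

def train_step(s, a, r, s_next, done):
    # One gradient descent step on the primary network for a (s, a, r, s', done) batch
    with tf.GradientTape() as tape:
        loss = one_step_q_loss(s, a, r, s_next, done)
    grads = tape.gradient(loss, q_primary.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_primary.trainable_variables))
    return loss
```

The tf.stop_gradient call mirrors DQN's treatment of the target as a constant, so gradients flow only through the primary network's prediction.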

The difference in asynchronous one-step Q-learning is that there are multiple such learning agents, that is, learners running and calculating this loss in parallel. Thus, the gradient calculation also occurs in parallel, in different threads, where each learning agent interacts with its own copy of the environment. The accumulation of these gradients in...
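A hedged sketch of this parallel-learner pattern, reusing q_primary, one_step_q_loss, optimizer, and the illustrative constants from the snippet above, might look as follows; the thread count, the CartPole-v0 environment, the epsilon-greedy exploration, the local accumulation length, and the classic Gym step API are all assumptions for illustration, not the book's code:

```python
import threading
import gym  # classic Gym API (reset() -> obs; step() -> obs, reward, done, info) assumed
import tensorflow as tf

def learner(thread_id, n_local_steps=5, epsilon=0.1, n_episodes=100):
    # Each learner owns a private copy of the environment but shares q_primary
    env = gym.make("CartPole-v0")
    for _ in range(n_episodes):
        state = env.reset().astype("float32")
        done = False
        while not done:
            # Accumulate gradients locally for a few steps before touching shared weights
            accumulated = [tf.zeros_like(v) for v in q_primary.trainable_variables]
            for _ in range(n_local_steps):
                if done:
                    break
                # Epsilon-greedy action from the shared primary network
                if tf.random.uniform(()) < epsilon:
                    action = env.action_space.sample()
                else:
                    action = int(tf.argmax(q_primary(state[None, :])[0]))
                next_state, reward, done, _ = env.step(action)
                next_state = next_state.astype("float32")
                with tf.GradientTape() as tape:
                    loss = one_step_q_loss(state[None, :], [action], [reward],
                                           next_state[None, :], [float(done)])
                grads = tape.gradient(loss, q_primary.trainable_variables)
                accumulated = [acc + g for acc, g in zip(accumulated, grads)]
                state = next_state
            # Asynchronous update: apply the accumulated gradients to the shared network
            optimizer.apply_gradients(zip(accumulated, q_primary.trainable_variables))

threads = [threading.Thread(target=learner, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In the original asynchronous methods, the accumulated gradients are typically applied without locks (Hogwild!-style); plain Python threads are used here purely to illustrate the structure, since the interpreter's GIL limits true parallelism.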
