Learn Unity ML-Agents – Fundamentals of Unity Machine Learning
Overview of this book

Unity Machine Learning Agents allows researchers and developers to create games and simulations in the Unity Editor, which serves as an environment where intelligent agents can be trained with machine learning methods through a simple-to-use Python API. This book takes you from the basics of Reinforcement Learning and Q-Learning to building Deep Recurrent Q-Network agents that cooperate or compete in a multi-agent ecosystem. You will start with the fundamentals of Reinforcement Learning and how to apply it to problems. Then you will learn how to build self-learning, advanced neural networks with Python and Keras/TensorFlow. From there, you move on to more advanced training scenarios, where you will learn further innovative ways to train your network with A3C, imitation, and curriculum learning models. By the end of the book, you will have learned how to build more complex environments by building a cooperative and competitive multi-agent ecosystem.
Table of Contents (8 chapters)

Contextual bandits and state

Our next step in understanding RL is to look at the contextual bandit problem. A contextual bandit extends the multi-armed bandit problem to multiple bandits, each producing different rewards. This type of problem has many applications in online advertising, where each user is treated as a different bandit and the goal is to present the best advertisement for that user. To model the context of the bandit, that is, which bandit we are facing, we add the concept of state, where state now identifies each of our different bandits. The following diagram shows the addition of state in the contextual bandit problem and where it lies on our path to glory:

Figure: Stateless, Contextual, and Full RL models

You can see in the preceding diagram that we now need to determine the state before evaluating an action. If you recall from earlier, the Value...
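The contextual bandit setup described above can be sketched in a few lines of Python. This is a minimal, hypothetical example (not from the book): the reward probabilities, number of bandits, and hyperparameters are all made up for illustration. Each state identifies one bandit, and the agent keeps a separate row of action-value estimates per state, learning them with an epsilon-greedy, incremental update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 bandits (states), each a 4-armed bandit with its
# own reward probabilities. The "context" is which bandit we are facing.
reward_probs = np.array([
    [0.1, 0.7, 0.2, 0.1],   # state 0: arm 1 is best
    [0.8, 0.1, 0.1, 0.3],   # state 1: arm 0 is best
    [0.2, 0.2, 0.2, 0.9],   # state 2: arm 3 is best
])
n_states, n_arms = reward_probs.shape

# One row of action-value estimates per state, learned independently.
Q = np.zeros((n_states, n_arms))
epsilon, alpha = 0.1, 0.1    # exploration rate and learning rate

for _ in range(5000):
    state = rng.integers(n_states)        # a random bandit appears
    if rng.random() < epsilon:            # epsilon-greedy action choice
        arm = int(rng.integers(n_arms))
    else:
        arm = int(np.argmax(Q[state]))
    # Bernoulli reward drawn from this bandit's arm probability
    reward = float(rng.random() < reward_probs[state, arm])
    # Incremental update toward the observed reward
    Q[state, arm] += alpha * (reward - Q[state, arm])

print(Q.argmax(axis=1))   # best arm learned for each state
```

Note that the state here does not change in response to our actions; we only observe which bandit we face and pick an arm. That is what separates the contextual bandit from the full RL problem shown in the diagram.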
