Learn Unity ML-Agents - Fundamentals of Unity Machine Learning

Overview of this book

Unity Machine Learning Agents allows researchers and developers to create games and simulations in the Unity Editor, which serves as an environment where intelligent agents can be trained with machine learning methods through a simple-to-use Python API. This book takes you from the basics of reinforcement learning and Q-learning to building Deep Recurrent Q-Network agents that cooperate or compete in a multi-agent ecosystem. You will start with the fundamentals of reinforcement learning and how to apply them to problems. Then you will learn how to build self-learning, advanced neural networks with Python and Keras/TensorFlow. From there, you will move on to more advanced training scenarios, where you will learn further innovative ways to train your network with A3C, imitation, and curriculum learning models. By the end of the book, you will be able to build more complex environments, including a cooperative and competitive multi-agent ecosystem.
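As a rough illustration of the tabular Q-learning that the early chapters start from, here is a minimal sketch in Python/NumPy. The grid size, hyperparameters, and function names are illustrative assumptions, not code taken from the book.

import numpy as np

# Assumed sizes for a small grid world; these values are illustrative only.
n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount factor, exploration rate

Q = np.zeros((n_states, n_actions))      # tabular Q-value estimates

def choose_action(state):
    # Epsilon-greedy exploration: take a random action with probability epsilon.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state, done):
    # Standard Q-learning update: move Q(s, a) toward the bootstrapped target.
    target = reward if done else reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])

Deep networks in later chapters replace the table Q with a neural network, but the update target has the same shape.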

Summary

In this chapter, we took a closer look at the Unity PPO trainer. This training model, originally developed at OpenAI, is the current state-of-the-art model, and it was our focus for starting to build more complex training scenarios. We first revisited the GridWorld example to understand what happens when training goes wrong. From there, we looked at some examples of situations where training performs subpar and learned how to fix some of those issues. Then, we learned how an agent can use visual observations as input to our model, provided the data is processed first. We learned that an agent using visual observations requires CNN layers to process and extract features from images. After that, we looked at the value of using experience replay to further generalize our models. This taught us that experience and memory were valuable to an agent's...
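To make the two recapped ideas concrete, the sketch below shows a small convolutional Q-network for visual observations and a basic experience replay buffer in Python/Keras. The observation shape, layer sizes, and buffer API are assumptions made for illustration, not the ML-Agents internals or the book's exact code.

import random
from collections import deque

import numpy as np
from tensorflow.keras import layers, models

def build_visual_q_network(obs_shape=(84, 84, 3), n_actions=4):
    # CNN layers extract spatial features from the image observation
    # before the dense head maps them to one Q-value per action.
    return models.Sequential([
        layers.Input(shape=obs_shape),
        layers.Conv2D(32, 8, strides=4, activation="relu"),
        layers.Conv2D(64, 4, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=1, activation="relu"),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(n_actions),
    ])

class ReplayBuffer:
    # Fixed-size store of (state, action, reward, next_state, done) tuples.
    # Sampling random minibatches breaks the correlation between consecutive
    # experiences, which is what helps the model generalize.
    def __init__(self, capacity=50000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, dones

In a training loop, the agent would append each transition to the buffer and periodically fit the network on sampled minibatches rather than on the most recent experience alone.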
