Learn Unity ML-Agents – Fundamentals of Unity Machine Learning
Overview of this book

Unity Machine Learning Agents lets researchers and developers create games and simulations in the Unity Editor, which serves as an environment where intelligent agents can be trained with machine learning methods through a simple-to-use Python API. This book takes you from the basics of Reinforcement Learning and Q-Learning to building Deep Recurrent Q-Network agents that cooperate or compete in a multi-agent ecosystem. You will start with the fundamentals of Reinforcement Learning and how to apply them to problems. Then you will learn how to build self-learning, advanced neural networks with Python and Keras/TensorFlow. From there you will move on to more advanced training scenarios, where you will learn further innovative ways to train your network with A3C, imitation, and curriculum learning models. By the end of the book, you will have learned how to build more complex environments by creating a cooperative and competitive multi-agent ecosystem.
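To make the Q-Learning starting point concrete, here is a minimal sketch of the tabular Q-learning update rule in Python. This is an illustration only, not code from the book; the `q_update` helper, the dict-of-dicts table, and the toy states `s0`/`s1` are all assumptions made for the example.

```python
# Minimal tabular Q-learning sketch (illustrative, not the book's code).
# The agent updates Q(state, action) from an observed reward via:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Apply one Q-learning update to the table `q` (a dict of dicts)."""
    best_next = max(q[next_state].values()) if next_state in q else 0.0
    target = reward + gamma * best_next
    q.setdefault(state, {}).setdefault(action, 0.0)
    q[state][action] += alpha * (target - q[state][action])
    return q[state][action]

# Toy example: two states, two actions (hypothetical names).
q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 1.0}}
new_value = q_update(q, "s0", "right", reward=1.0, next_state="s1")
print(new_value)  # 0.1 * (1.0 + 0.9 * 1.0 - 0.0) = 0.19
```

The deep variants covered later in the book replace the lookup table with a neural network that approximates the same Q-values, but the update target has the identical shape.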
Table of Contents (8 chapters)

Exercises

Use the following exercises to improve your understanding of RL and the PPO trainer.

  1. Convert one of the Unity examples to use just visual observations. Hint: use the GridWorld example as a guide, and remember that the agent may need its own camera.
  2. Alter the CNN configuration of an agent using visual observations in three different ways. You can add more layers, take them away, or alter the kernel filter. Run the training sessions and compare the differences with TensorBoard.
  3. Convert the GridWorld sample to use vector observations and recurrent networks with memory. Hint: you can borrow several pieces of code from the Hallway example.
  4. Revisit the Ball3D example and set it up to use multiple asynchronous agent training.
  5. Set up the Crawler example and run it with multiple asynchronous agent training.
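When working on exercise 2, it helps to reason about how a layer or kernel change propagates through the network before launching a training run. The sketch below computes the output spatial size of successive conv layers; the 84×84 input and the (kernel, stride) pairs are illustrative assumptions, not the trainer's exact defaults.

```python
# Hedged sketch: compute the output spatial size of a conv layer with
# "valid" padding, so you can compare the three CNN variants in
# exercise 2 (more layers, fewer layers, different kernel/filter):
#   out = floor((in - kernel) / stride) + 1

def conv_out(size, kernel, stride=1):
    """Spatial output size of one conv layer with valid padding."""
    return (size - kernel) // stride + 1

# An 84x84 visual observation through two conv layers
# (kernel/stride values here are assumptions for illustration):
size = 84
for kernel, stride in [(8, 4), (4, 2)]:
    size = conv_out(size, kernel, stride)
    print(size)  # 20, then 9
```

If a variant shrinks the feature map too aggressively (for example, adding another large-kernel layer to a 9×9 map), training may fail outright, so checking the arithmetic first saves a training session.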

If you encounter problems running through these samples, be sure...
