Accelerate Model Training with PyTorch 2.X

By: Maicon Melo Alves
Enabling AMP

Fortunately, PyTorch provides the methods and tools to perform AMP with only a few changes to our original code.

In PyTorch, AMP relies on three things: enabling a couple of backend flags, wrapping the training process in the torch.autocast context manager, and using a gradient scaler. The more complex case, implementing AMP on a GPU, requires all three parts, while the simplest scenario (CPU-based training) requires only torch.autocast.
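As a quick illustration of that simplest case, here is a minimal sketch of a CPU-based training step wrapped in torch.autocast; the model, optimizer, and data below are toy placeholders, not the book's example:

    import torch

    # Hypothetical toy model and data; any standard training setup works here
    model = torch.nn.Linear(128, 10)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()
    inputs = torch.randn(32, 128)
    targets = torch.randint(0, 10, (32,))

    optimizer.zero_grad()
    # On CPU, autocast runs eligible operations in bfloat16; no gradient
    # scaler is needed in this scenario
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)
    loss.backward()
    optimizer.step()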

Let’s start with the more complex scenario. Follow me to the next section to learn how to activate this approach in our GPU-based code.

Activating AMP on GPU

To activate AMP on GPU, we need to make three modifications to our code (a minimal sketch of how they fit together follows this list):

  1. Enable the CUDA and cuDNN backend flags.
  2. Wrap the training loop with torch.autocast.
  3. Use a gradient scaler.

Let’s take a closer look.
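Before examining each modification in turn, here is a minimal sketch of how steps 2 and 3 fit together in a single GPU-based training step. The model and data are toy placeholders; step 1, the backend flags, is a one-time setup covered in the next subsection:

    import torch

    # Step 1, enabling the backend flags, is a one-time setup performed
    # before training starts (see the Enabling backend flags subsection)

    device = torch.device("cuda")
    model = torch.nn.Linear(128, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    # Step 3: a gradient scaler keeps small float16 gradients from
    # underflowing to zero during the backward pass
    scaler = torch.cuda.amp.GradScaler()

    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    # Step 2: run the forward pass and loss computation under autocast,
    # which executes eligible operations in float16 on the GPU
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)

    # Scale the loss before backpropagation, then let the scaler step the
    # optimizer and update its scaling factor
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()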

Enabling backend flags

As we learned in Chapter 4, Using Specialized Libraries, PyTorch relies on third...
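As a sketch of what this step can look like, assuming the TF32 switches that PyTorch exposes for the CUDA and cuDNN backends are the flags in question (both flags exist in PyTorch, but treating them as the exact ones this chapter enables is an assumption):

    import torch

    # Assumed flags: allow TF32 math in CUDA matrix multiplications and
    # in cuDNN convolution routines
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True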
