Accelerate Model Training with PyTorch 2.X

By: Maicon Melo Alves
4.4 (10)

Overview of this book

This book, written by an HPC expert with over 25 years of experience, guides you through enhancing model training performance using PyTorch. Here you’ll learn how model complexity impacts training time and discover performance tuning levels to expedite the process, as well as utilize PyTorch features, specialized libraries, and efficient data pipelines to optimize training on CPUs and accelerators. You’ll also reduce model complexity, adopt mixed precision, and harness the power of multicore systems and multi-GPU environments for distributed training. By the end, you'll be equipped with techniques and strategies to speed up training and focus on building stunning models.
Table of Contents (17 chapters)

Part 1: Paving the Way
Part 2: Going Faster
Part 3: Going Distributed

A first look at distributed training

We’ll start this chapter by discussing the reasons for distributing the training process among multiple resources. Then, we’ll learn what resources are commonly used to execute this process.

When do we need to distribute the training process?

The most common reason to distribute the training process is to accelerate model building. If the training process is taking a long time to complete and we have multiple resources at hand, we should consider distributing it among these resources to reduce the training time.
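
To make this concrete, here is a minimal sketch of data-parallel training with PyTorch's DistributedDataParallel (DDP), which replicates the model on every process and splits each batch across them. The toy linear model and random dataset are placeholders rather than code from this book, and the NCCL backend assumes GPU nodes:

import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and data, standing in for a real workload.
    model = DDP(nn.Linear(32, 2).to(local_rank), device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 2, (1024,)))

    # DistributedSampler gives each process a disjoint shard of the data.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards every epoch
        for inputs, targets in loader:
            inputs, targets = inputs.to(local_rank), targets.to(local_rank)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()  # DDP averages gradients across processes here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with torchrun --nproc_per_node=<num_gpus> script.py, each process trains on its own shard of the data while DDP keeps the model replicas synchronized, which is what yields the reduction in training time.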

The second motivation for going distributed is related to memory: a large model may not fit into the memory of a single resource. In this situation, we rely on distributed training to allocate different parts of the large model to distinct devices or resources so that the model can be loaded into the system.
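
As a rough illustration of this second motivation, the sketch below splits a toy network across two GPUs and moves the activations between them, a simple form of model parallelism. It assumes two CUDA devices are available and, again, is not code taken from this book:

import torch
import torch.nn as nn

class TwoDeviceModel(nn.Module):
    # A toy model split across two GPUs: each stage lives on its own device.
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU()).to("cuda:0")
        self.stage2 = nn.Linear(64, 2).to("cuda:1")

    def forward(self, x):
        x = self.stage1(x.to("cuda:0"))
        # Move the intermediate activations to the second device.
        return self.stage2(x.to("cuda:1"))

model = TwoDeviceModel()
output = model(torch.randn(8, 32))
print(output.device)  # prints cuda:1, where the final stage lives

Because each device holds only part of the parameters, a model that exceeds the memory of any single GPU can still be loaded and trained.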

However, distributed training is not a silver bullet that solves...
