Learn Amazon SageMaker

By: Julien Simon
Overview of this book

Amazon SageMaker enables you to quickly build, train, and deploy machine learning models at scale without managing any infrastructure. It helps you focus on the machine learning problem at hand and deploy high-quality models by eliminating the heavy lifting typically involved in each step of the ML process. This second edition will help data scientists and ML developers explore new features such as SageMaker Data Wrangler, Pipelines, Clarify, Feature Store, and much more. You'll start by learning how to use various capabilities of SageMaker as a single toolset to solve ML challenges and progress to cover features such as AutoML, built-in algorithms and frameworks, and writing your own code and algorithms to build ML models. The book will then show you how to integrate Amazon SageMaker with popular deep learning libraries, such as TensorFlow and PyTorch, to extend the capabilities of existing models. You'll also see how automating your workflows can help you get to production faster with minimum effort and at a lower cost. Finally, you'll explore SageMaker Debugger and SageMaker Model Monitor to detect quality issues in training and production. By the end of this book, you'll be able to use Amazon SageMaker on the full spectrum of ML workflows, from experimentation, training, and monitoring to scaling, deployment, and automation.
Table of Contents (19 chapters)
Section 1: Introduction to Amazon SageMaker
Section 2: Building and Training Models
Section 3: Diving Deeper into Training
Section 4: Managing Models in Production

Distributing training jobs

Distributed training lets you scale training jobs by running them on a cluster of CPU or GPU instances. It can be used to solve two different problems: very large datasets and very large models.

Understanding data parallelism and model parallelism

Some datasets are too large for training to complete in a reasonable amount of time on a single CPU or GPU. Using a technique called data parallelism, we can distribute the data across the training cluster. The full model is still loaded on each CPU/GPU, but each one receives only an equal share of the dataset, not the full dataset. In theory, this should speed up training linearly with the number of CPUs/GPUs involved, but as you can guess, the reality is often different.
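
As a minimal sketch of how data parallelism can be requested on SageMaker, the estimator's distribution setting can enable the SageMaker data parallelism library. The example below uses the SageMaker Python SDK's PyTorch estimator; the script name, IAM role, framework versions, S3 path, and instance settings are placeholders to adapt to your own environment.

```python
import sagemaker
from sagemaker.pytorch import PyTorch

# Placeholder values: adjust the script, versions, bucket, and instance
# settings to your own environment.
estimator = PyTorch(
    entry_point="train.py",              # your training script
    role=sagemaker.get_execution_role(),
    framework_version="1.8.1",
    py_version="py36",
    instance_type="ml.p3.16xlarge",      # 8 GPUs per instance
    instance_count=2,                    # 16 GPUs in total
    # Enable the SageMaker data parallelism library: each GPU loads the full
    # model and receives an equal shard of each training batch.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"training": "s3://my-bucket/my-dataset/"})
```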

Believe it or not, some state-of-the-art deep learning models are too large to fit on a single GPU. Using a technique called model parallelism, we can split the model and distribute its layers across a cluster of GPUs. Hence, training batches will flow across...
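
As a minimal, framework-level sketch of the idea (plain PyTorch with two local GPUs rather than anything SageMaker-specific), a model's layers can be placed on different devices so that each batch's activations flow from one GPU to the next:

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Toy model whose layers are partitioned across two GPUs."""
    def __init__(self):
        super().__init__()
        # First group of layers lives on GPU 0
        self.stage1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        # Second group of layers lives on GPU 1
        self.stage2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        # Activations computed on GPU 0 are copied to GPU 1 between stages
        x = self.stage1(x.to("cuda:0"))
        return self.stage2(x.to("cuda:1"))

model = SplitModel()
logits = model(torch.randn(32, 1024))  # each batch flows across both devices
```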
