Learn Amazon SageMaker

By: Julien Simon
4.3 (10)
Overview of this book

Amazon SageMaker enables you to quickly build, train, and deploy machine learning (ML) models at scale, without managing any infrastructure. It helps you focus on the ML problem at hand and deploy high-quality models by removing the heavy lifting typically involved in each step of the ML process. This book is a comprehensive guide for data scientists and ML developers who want to learn the ins and outs of Amazon SageMaker. You’ll understand how to use various modules of SageMaker as a single toolset to solve the challenges faced in ML. As you progress, you’ll cover features such as AutoML, built-in algorithms and frameworks, and the option for writing your own code and algorithms to build ML models. Later, the book will show you how to integrate Amazon SageMaker with popular deep learning libraries such as TensorFlow and PyTorch to increase the capabilities of existing models. You’ll also learn to get the models to production faster with minimum effort and at a lower cost. Finally, you’ll explore how to use Amazon SageMaker Debugger to analyze, detect, and highlight problems to understand the current model state and improve model accuracy. By the end of this book, you’ll be able to use Amazon SageMaker on the full spectrum of ML workflows, from experimentation, training, and monitoring to scaling, deployment, and automation.
Table of Contents (19 chapters)
  • Section 1: Introduction to Amazon SageMaker
  • Section 2: Building and Training Models
  • Section 3: Diving Deeper on Training
  • Section 4: Managing Models in Production

Understanding when and how to scale

Before we dive into scaling techniques, let's first discuss the monitoring information that we should consider when deciding whether we need to scale and how we should do it.

Understanding what scaling means

Two sources of information are available: the training log and the infrastructure metrics in Amazon CloudWatch.

The training log tells us how long the job lasted. In itself, this isn't really useful. How long is too long? This feels very subjective, doesn't it? Furthermore, even when training on the same dataset and infrastructure, changing a single hyperparameter can significantly alter training time. Batch size is an example, and there are many more.
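To put a number on job duration, you can read it from the job's metadata rather than eyeballing the log. A minimal sketch, assuming the field names returned by the SageMaker `DescribeTrainingJob` API for completed jobs; the job name in the usage comment is a placeholder:

```python
def training_duration(job_description: dict) -> dict:
    """Pull timing fields out of a DescribeTrainingJob-style response.

    TrainingTimeInSeconds and BillableTimeInSeconds are the fields
    SageMaker reports once a training job has completed; either may
    be absent on a job that is still running.
    """
    return {
        "training_seconds": job_description.get("TrainingTimeInSeconds"),
        "billable_seconds": job_description.get("BillableTimeInSeconds"),
    }

# Typical usage (requires boto3 and AWS credentials; job name is hypothetical):
# import boto3
# sm = boto3.client("sagemaker")
# desc = sm.describe_training_job(TrainingJobName="my-training-job")
# print(training_duration(desc))
```

Comparing `billable_seconds` across runs gives you a like-for-like cost signal even when hyperparameter changes shift the wall-clock time.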

When we're concerned about training time, I think we're really trying to answer three questions:

  • Is the training time compatible with our business requirements?
  • Are we making good use of the infrastructure we're paying for? Did we underprovision...
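One way to start answering the infrastructure question is to average the utilization metrics that SageMaker training jobs publish to CloudWatch (for example, `CPUUtilization` and `GPUUtilization` in the `/aws/sagemaker/TrainingJobs` namespace). A minimal sketch, assuming datapoints shaped like those returned by CloudWatch's `get_metric_statistics` call:

```python
def average_utilization(datapoints):
    """Average the 'Average' statistic across CloudWatch datapoints.

    A consistently low average suggests the instance is overprovisioned;
    a value pinned near 100% suggests the job is compute-bound and may
    benefit from scaling up or out.
    """
    if not datapoints:
        return None
    return sum(p["Average"] for p in datapoints) / len(datapoints)

# Example with hand-made datapoints (real ones come from CloudWatch):
samples = [{"Average": 35.0}, {"Average": 45.0}, {"Average": 40.0}]
print(average_utilization(samples))  # 40.0
```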
