Intelligent Workloads at the Edge

By: Indraneel (Neel) Mitra, Ryan Burke
4.8 (17)
Overview of this book

The Internet of Things (IoT) has transformed how people think about and interact with the world. The ubiquitous deployment of sensors around us makes it possible to study the world at any level of accuracy and enable data-driven decision-making anywhere. Data analytics and machine learning (ML) powered by elastic cloud computing have accelerated our ability to understand and analyze the huge amount of data generated by IoT. Now, edge computing has brought information technologies closer to the data source to lower latency and reduce costs. This book will teach you how to combine the technologies of edge computing, data analytics, and ML to deliver next-generation cyber-physical outcomes. You’ll begin by discovering how to create software applications that run on edge devices with AWS IoT Greengrass. As you advance, you’ll learn how to process and stream IoT data from the edge to the cloud and use it to train ML models using Amazon SageMaker. The book also shows you how to train these models and run them at the edge for optimized performance, cost savings, and data compliance. By the end of this IoT book, you’ll be able to scope your own IoT workloads, bring the power of ML to the edge, and operate those workloads in a production setting.
Table of Contents (17 chapters)
Section 1: Introduction and Prerequisites
Section 2: Building Blocks
Section 3: Scaling It Up
Section 4: Bring It All Together

Deploying your first ML model

Now that you are familiar with remote deployments and loading resources from the cloud, it is time to deploy your first ML-powered capability to the edge! After all, a component that uses ML models is much like the other components we have already deployed: it is a combination of dependencies, runtime code, and static resources hosted in the cloud.
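
To make that structure concrete, the following sketch shows how a minimal AWS IoT Greengrass v2 component recipe ties those three pieces together and how it could be registered with boto3. The component name, S3 URIs, dependency version, and file names are illustrative placeholders, not values from this chapter's example:

```python
import json
import boto3

# A Greengrass v2 recipe combines the three pieces named above:
# dependencies, runtime (Lifecycle) code, and static artifacts hosted in the cloud.
# All names, URIs, and versions below are illustrative placeholders.
recipe = {
    "RecipeFormatVersion": "2020-01-25",
    "ComponentName": "com.example.ImageClassifier",   # hypothetical component
    "ComponentVersion": "1.0.0",
    "ComponentDependencies": {
        # Dependency: a runtime component such as the DLR variant component
        "variant.DLR": {"VersionRequirement": ">=1.6.0"}
    },
    "Manifests": [
        {
            "Platform": {"os": "linux"},
            # Runtime code: the script Greengrass runs on the edge device
            "Lifecycle": {"Run": "python3 {artifacts:path}/inference.py"},
            # Static resources: stored in S3 and pulled down at deployment time
            "Artifacts": [
                {"URI": "s3://my-bucket/components/inference.py"},
                {"URI": "s3://my-bucket/components/model.zip", "Unarchive": "ZIP"},
            ],
        }
    ],
}

# Register the component version in the cloud so it can be deployed to devices.
greengrass = boto3.client("greengrassv2")
greengrass.create_component_version(inlineRecipe=json.dumps(recipe).encode("utf-8"))
```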

Reviewing the ML use case

In this case, the dependencies are the packages and libraries for OpenCV (an open source computer vision (CV) library) and the Deep Learning Runtime (DLR); the runtime code is a preconfigured sample of inference code that uses DLR; and the static resources are a preconfigured model store for image classification along with some sample images. The components deployed in this example are all provided and managed by AWS.
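
Because these are AWS-managed public components, a deployment only needs to reference them by name. Below is a rough boto3 sketch of such a deployment; the component name and version are assumptions based on the AWS-provided DLR image classification component and should be verified against the public component catalog, and the target ARN is a placeholder for your own core device or thing group:

```python
import boto3

greengrass = boto3.client("greengrassv2")

# Target the Greengrass core device (or thing group) that represents the hub.
# The ARN below is a placeholder.
target_arn = "arn:aws:iot:us-east-1:123456789012:thinggroup/hbs-hub-group"

greengrass.create_deployment(
    targetArn=target_arn,
    deploymentName="dlr-image-classification",
    components={
        # AWS-provided inference component; when deployed it pulls in its own
        # dependencies (the DLR runtime and the image classification model store).
        # Name and version are assumptions -- confirm in the public catalog.
        "aws.greengrass.DLRImageClassification": {"componentVersion": "2.1.0"},
    },
)
```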

The solution that you will deploy simulates the use case for our HBS device hub that performs a simple image classification as part of a home security...
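
Under the hood, the preconfigured inference code does roughly what this sketch shows: load a compiled model with DLR, preprocess an image with OpenCV, and classify it. The paths, input tensor name, input size, and simple scaling used here are assumptions for illustration; the AWS-provided component handles these details for you:

```python
import cv2                  # OpenCV for image loading and preprocessing
import numpy as np
from dlr import DLRModel    # Deep Learning Runtime

# Paths, input name, and input shape are illustrative assumptions.
MODEL_DIR = "/greengrass/v2/work/model-store/resnet"
IMAGE_PATH = "/greengrass/v2/work/sample_images/door.jpg"

# Load the compiled model onto the CPU.
model = DLRModel(MODEL_DIR, "cpu")

# Read and preprocess the image: convert to RGB, resize, scale, reorder to NCHW.
img = cv2.imread(IMAGE_PATH)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
batch = np.expand_dims(img.transpose(2, 0, 1), axis=0)

# Run inference; the output is a list of arrays, one per model output.
outputs = model.run({"data": batch})
class_id = int(np.argmax(outputs[0]))
print(f"Predicted class index: {class_id}")
```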
