Intelligent Workloads at the Edge

By: Indraneel (Neel) Mitra, Ryan Burke

Overview of this book

The Internet of Things (IoT) has transformed how people think about and interact with the world. The ubiquitous deployment of sensors around us makes it possible to study the world at any level of accuracy and enable data-driven decision-making anywhere. Data analytics and machine learning (ML) powered by elastic cloud computing have accelerated our ability to understand and analyze the huge amount of data generated by IoT. Now, edge computing has brought information technologies closer to the data source to lower latency and reduce costs. This book will teach you how to combine the technologies of edge computing, data analytics, and ML to deliver next-generation cyber-physical outcomes. You’ll begin by discovering how to create software applications that run on edge devices with AWS IoT Greengrass. As you advance, you’ll learn how to process and stream IoT data from the edge to the cloud and use it to train ML models using Amazon SageMaker. The book also shows you how to train these models and run them at the edge for optimized performance, cost savings, and data compliance. By the end of this IoT book, you’ll be able to scope your own IoT workloads, bring the power of ML to the edge, and operate those workloads in a production setting.
Table of Contents (17 chapters)

Section 1: Introduction and Prerequisites
Section 2: Building Blocks
Section 3: Scaling It Up
Section 4: Bring It All Together

Bringing ML to the edge

ML is a remarkable technology making headway on many of today's hardest problems. The ability to train computers to process vast quantities of information, classify new inputs, and predict results rivals, and in some applications exceeds, what the human brain can accomplish. For this reason, ML is a foundational mechanism for developing artificial intelligence (AI).

The vast computing power made available by the cloud has significantly reduced the time it takes to train ML models. Data scientists and data engineers can train production models in hours instead of days. Advances in ML algorithms have also made the models themselves ever more portable, meaning they can run on computers with smaller compute and memory profiles. The implications of delivering portable ML models cannot be overstated.
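As a concrete illustration of what "portable" can mean, the following Python sketch shows naive symmetric int8 quantization, one common technique for shrinking a model's memory footprint so it fits a smaller compute and memory profile. The function names are illustrative, not from any specific library:

```python
def quantize_int8(weights):
    """Naive symmetric quantization: map float weights onto the
    int8 range [-127, 127], cutting storage from 4 bytes per
    weight (float32) to 1 byte per weight."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights for inference."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.03, 0.55]
q, scale = quantize_int8(weights)
print(q)  # → [82, -127, 3, 55]
```

Real edge toolchains apply far more sophisticated versions of this idea (per-channel scales, calibration data, quantization-aware training), but the storage trade-off is the same: a smaller numeric type in exchange for a small loss of precision.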

Operating ML models at the edge helps us as architects deliver on the design principles of an optimal edge solution. By hosting a portable model at the edge, proximity to the rest of our solution leads to four key benefits, outlined as follows:

  • First, the solution can maximize responsiveness for capabilities that depend on the results of ML inferences by not waiting for the round-trip latency of a call to a remote server. The latency to interpret myriad signals from an engine about to fail can be 10 milliseconds (ms) instead of 100 ms. This degree of latency can make the difference between safe operation and catastrophic failure.
  • Second, it means the functionality of the solution will not be interrupted by network congestion and can run in a state where the edge solution is disconnected from the public internet. This opens up possibilities for ML solutions to run untethered from cloud services. That imminent engine failure can be detected and prevented regardless of connection availability.
  • Third, any time we can process data locally with an ML model and reduce the quantity of data that ultimately needs to be stored in the cloud, we also save on transmission costs. Think of an expensive satellite internet contract: across that kind of transmission medium, IoT architects want to transmit only the data that is absolutely necessary to keep costs down.
  • Fourth, local data processing enables use cases that must conform to regulations requiring data to reside in the local country, or that must observe privacy constraints, such as with healthcare data. Hospital equipment used to save lives arguably needs as much intelligent monitoring as it can get, but the runtime data may not legally be permitted to leave the premises.
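The third benefit, reducing what gets transmitted, can be sketched in a few lines of Python. Here a toy threshold check stands in for a real ML model (all names are illustrative, not from any AWS library); only the windows the local "model" flags are queued for upload:

```python
def detect_anomaly(vibration_samples, threshold=3.0):
    """Toy local 'inference': flag a window of sensor readings whose
    mean amplitude exceeds a threshold. A real edge solution would
    invoke a trained model here instead."""
    mean_amplitude = sum(abs(s) for s in vibration_samples) / len(vibration_samples)
    return mean_amplitude > threshold

def process_at_edge(windows):
    """Run inference locally and keep only the windows worth
    uploading, shrinking the volume sent over the network."""
    return [w for w in windows if detect_anomaly(w)]

normal_traffic = [[0.5, -0.4, 0.6] for _ in range(9)]
fault_signature = [[5.2, -4.8, 6.1]]
to_upload = process_at_edge(normal_traffic + fault_signature)
print(len(to_upload))  # → 1 (only the anomalous window is transmitted)
```

Because the decision is made on the device, it also holds up under the first two benefits: no round trip to a server, and no dependence on the network being up at all.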

These four key benefits are illustrated in the following diagram:

Figure 1.4 – The four key benefits of ML at the edge

Imagine a submersible drone that carries with it an ML model that can classify images coming from a video feed. The drone can operate and make inferences on images away from any network connection and can discard any images that have no value. For example, if the drone's mission is to bring back only images of narwhals, the drone doesn't need extensive storage to save every video clip for later analysis. It can use ML to classify images of narwhals and preserve only those for the trip back home. The cost of storage continues to drop over time, but given the tight bill-of-materials and space constraints of edge solutions such as this one, bringing a portable ML model can ultimately lead to significant cost savings.

The following diagram illustrates this concept:

Figure 1.5 – Illustration of a submersible drone concept processing photographs and storing only those where a local ML model identifies a narwhal in the subject
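The drone's keep-only-narwhals loop can be sketched as follows. Here `classify` is a hypothetical stand-in for a real on-device image classifier; the frames are faked with labels so the example is self-contained:

```python
def classify(frame):
    """Stand-in for a real on-device model compiled for the drone's
    hardware; here we simply read a tag baked into the test frame."""
    return frame["label"]

def filter_frames(frames, keep_label="narwhal"):
    """Discard frames the local model does not classify as the
    target, so only valuable images consume onboard storage."""
    return [f for f in frames if classify(f) == keep_label]

video_feed = [
    {"id": 1, "label": "kelp"},
    {"id": 2, "label": "narwhal"},
    {"id": 3, "label": "fish"},
]
kept = filter_frames(video_feed)
print([f["id"] for f in kept])  # → [2]
```

The structure is what matters: classify locally, decide locally, store (or transmit) only what the mission needs.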

This book will teach you the basics of training an ML model from the kinds of machine data common to edge solutions, as well as how to deploy such models to the edge to combine ML capabilities with the value proposition of running at the edge. We will also teach you about operating ML models at the edge, which means analyzing the performance of models and setting up infrastructure for deploying updates to models retrained in the cloud.

Comprehensive deep dives into the data science driving the fields of ML and AI are outside the scope of this book. You do not need proficiency in that field to understand the patterns of ML-powered edge solutions. An understanding of how to work with input/output (I/O) buffers to read and write data in software is sufficient to work through the ML tools used in this book.

Next, let's review the kinds of tools we need to build and the specific tools we will use to build our solution.
