Applied Machine Learning for Healthcare and Life Sciences using AWS

By: Ujjwal Ratan
Overview of this book

While machine learning is not new, it's only now that we are beginning to uncover its true potential in the healthcare and life sciences industry. The availability of real-world datasets and access to better compute resources have helped researchers invent applications that utilize known AI techniques in every segment of this industry, such as providers, payers, drug discovery, and genomics. This book starts by summarizing the introductory concepts of machine learning and AWS machine learning services. You'll then go through chapters dedicated to each segment of the healthcare and life sciences industry. Each of these chapters has three key purposes: first, to introduce the segment of the industry, its challenges, and the applications of machine learning relevant to it; second, to help you get to grips with the features of the services available in the AWS machine learning stack, such as Amazon SageMaker and Amazon Comprehend Medical; and third, to enable you to apply your new skills to create an ML-driven solution to problems particular to that segment. The concluding chapters outline future industry trends and applications. By the end of this book, you'll be aware of the key challenges faced in applying AI to the healthcare and life sciences industry and will know how to address those challenges with confidence.
Table of Contents (19 chapters)
  • Part 1: Introduction to Machine Learning on AWS
  • Chapter 1: Introducing Machine Learning and the AWS Machine Learning Stack
  • Part 2: Machine Learning Applications in the Healthcare Industry
  • Part 3: Machine Learning Applications in the Life Sciences Industry
  • Part 4: Challenges and the Future of AI in Healthcare and Life Sciences

Introducing ML on AWS

AWS puts ML in the hands of every developer, irrespective of their skill level and expertise, so that businesses can adopt the technology quickly and effectively. AWS focuses on removing the undifferentiated heavy lifting involved in building ML models, such as managing the underlying infrastructure, scaling training and inference jobs, and ensuring high availability of the models. It provides developers with a variety of compute instances and containerized environments to choose from, purpose-built for the accelerated and distributed computing needed for large-scale ML jobs. AWS offers a broad and deep set of ML capabilities for builders that can be connected together, like Lego pieces, to create intelligent applications.

AWS ML services cover the full life cycle of an ML pipeline, from data annotation/labeling and data cleansing to feature engineering, model training, deployment, and monitoring. There are purpose-built services for problems in computer vision, natural language processing, forecasting, recommendation engines, and fraud detection, to name a few, as well as options for automatic model creation and no-/low-code model building. These services are organized into three layers, collectively known as the AWS machine learning stack.

Introducing the AWS ML stack

The following diagram represents the version of the AWS AI/ML services stack as of April 2022.

Figure 1.7 – A diagram depicting the AWS ML stack as of April 2022

The stack can be used by expert practitioners who want to develop a project within the framework of their choice; data scientists who want to use the end-to-end capabilities of SageMaker; business analysts who can build their own model using Canvas; or application developers with no previous ML skills who can add intelligence to their applications with the help of API calls. The following are the three layers of the AWS AI/ML stack:

  • AI services layer: The AI services layer is the topmost layer of the AWS ML stack. It consists of services that require minimal knowledge of ML. In some cases, a service comes with a pre-trained model that can simply be invoked using APIs from the AWS SDK, the AWS CLI, or the console. In other cases, the service allows you to customize the model by providing your own labeled training dataset so that the responses are more appropriate for the problem at hand. In either case, this layer of the AWS AI/ML stack is focused on ease of use. The services are designed for specialized applications in industrial settings, search, business processes, and healthcare, and come with a core set of capabilities in the areas of speech, chatbots, vision, and text and documents. A minimal example of invoking one of these services is sketched after this list.
  • ML services layer: The ML services layer is the middle layer of the AWS AI/ML stack. It provides tools for data scientists to perform all the steps of the ML life cycle, such as data cleansing, feature engineering, model training, deployment, and monitoring. It is driven by the core ML platform of AWS, Amazon SageMaker. SageMaker provides the ability to build a modular, containerized environment that interfaces seamlessly with AWS compute and storage services, and it ships with its own SDK whose APIs are used to interact with the service. It removes the complexity from each step of the ML workflow by providing simple-to-use modular capabilities with a choice of deployment architectures and patterns to suit virtually any ML application. It also contains MLOps capabilities for creating reproducible ML pipelines that are easy to maintain and scale. The ML services layer is suited to data scientists who build and train their own models and maintain large-scale models in production environments. A short SageMaker training sketch also follows this list.
  • ML frameworks and infrastructure layer: The ML frameworks and infrastructure layer is the bottom layer of the AWS AI/ML stack. The services in this layer are for expert practitioners who develop using the framework of their choice. Developers and scientists can run their workloads as a managed experience in Amazon SageMaker or in a self-managed environment on AWS Deep Learning AMIs (Amazon Machine Images) and AWS Deep Learning Containers. The Deep Learning AMIs and containers come fully configured with the latest versions of the most popular deep learning frameworks and tools, including PyTorch, MXNet, and TensorFlow. As part of this layer, AWS provides a broad and deep portfolio of compute, networking, and storage infrastructure services, with a choice of processors and accelerators to meet your unique performance and budget needs for ML. A self-managed PyTorch sketch that would run in such an environment appears at the end of this list.
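To give a flavor of the AI services layer, here is a minimal sketch of calling Amazon Comprehend Medical from Python with the boto3 SDK. The sample clinical sentence and the region are placeholders; the point is that the pre-trained service is invoked with a single API call and no model training on your part.

import boto3

# Create a client for Amazon Comprehend Medical (region is a placeholder)
comprehend_medical = boto3.client("comprehendmedical", region_name="us-east-1")

# Ask the pre-trained service to extract medical entities from free text
response = comprehend_medical.detect_entities_v2(
    Text="Patient was prescribed 20 mg of Lipitor daily for high cholesterol."
)

# Print each detected entity with its category and confidence score
for entity in response["Entities"]:
    print(entity["Text"], entity["Category"], round(entity["Score"], 2))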
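The ML services layer is best illustrated with the SageMaker Python SDK. The following is a minimal sketch, assuming a hypothetical training script (train.py), IAM role ARN, and S3 path; the estimator class, framework version, and instance types you pick will depend on your own workload.

import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role ARN

# Managed training: SageMaker provisions the instance, runs train.py, then tears it down
estimator = SKLearn(
    entry_point="train.py",          # your own training script (placeholder)
    framework_version="1.2-1",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-bucket/train/"})  # hypothetical S3 location

# Managed hosting: deploy the trained model behind a real-time endpoint
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")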
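At the frameworks and infrastructure layer, you manage the environment yourself. The sketch below is a toy PyTorch training loop on random placeholder data; on an AWS Deep Learning AMI or Deep Learning Container, PyTorch is already installed, so a script like this runs as-is and uses a GPU if the instance provides one.

import torch
import torch.nn as nn

# Use a GPU if the instance offers one, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Random data standing in for a real training set
features = torch.randn(256, 10, device=device)
labels = torch.randn(256, 1, device=device)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")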

Now that we have a good understanding of ML and the AWS ML stack, it is a good time to re-read any sections that may not be entirely clear. This chapter only introduces the concepts of ML; if you want to dive deeper into any of the topics touched upon here, there are several trusted online resources you can refer to. Let us now summarize the lessons from this chapter and see what's ahead.
