MLOps with Red Hat OpenShift

By: Ross Brigoli, Faisal Masood
Overview of this book

MLOps with Red Hat OpenShift offers practical insights for implementing MLOps workflows on the dynamic OpenShift platform. As organizations worldwide seek to harness the power of machine learning operations, this book lays the foundation for your MLOps success. Starting with an exploration of key MLOps concepts, including data preparation, model training, and deployment, you'll prepare to unleash OpenShift's capabilities, kicking off with a primer on containers, pods, operators, and more. With the groundwork in place, you'll be guided through MLOps workflows, uncovering the applications of popular machine learning frameworks for training and testing models on the platform. As you advance through the chapters, you'll focus on the open-source data science and machine learning platform, Red Hat OpenShift Data Science, and its partner components, such as Pachyderm and Intel OpenVINO, to understand their role in building and managing data pipelines, as well as deploying and monitoring machine learning models. Armed with this comprehensive knowledge, you'll be able to implement MLOps workflows on the OpenShift platform proficiently.
Table of Contents (13 chapters)

Part 1: Introduction
Part 2: Provisioning and Configuration
Part 3: Operating ML Workloads

Packaging and deploying models as a service

To take advantage of the scalability of OpenShift workloads, the best way to run inferences against an ML model is to deploy the model as an HTTP service. This way, inference calls can be performed by invoking the HTTP endpoint of a model server Pod that is running the model. You can then create multiple replicas of the model server, allowing you to horizontally scale your model to serve more requests.
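Once the model is deployed behind an HTTP endpoint, making a prediction is an ordinary REST call. The sketch below assumes a hypothetical route URL and the KServe v2 REST inference protocol, which servers such as OpenVINO Model Server support; the model name, feature count, and payload values are illustrative:

import requests

# Hypothetical OpenShift route to the model server; the real URL comes
# from the Route or Service created for your model deployment.
URL = "https://wine-quality.apps.example.com/v2/models/wine-quality/infer"

# One wine sample with 11 physicochemical features (assumed count).
payload = {
    "inputs": [
        {
            "name": "input",
            "shape": [1, 11],
            "datatype": "FP32",
            "data": [7.4, 0.7, 0.0, 1.9, 0.076, 11.0, 34.0, 0.9978, 3.51, 0.56, 9.4],
        }
    ]
}

response = requests.post(URL, json=payload, timeout=10)
response.raise_for_status()
print(response.json()["outputs"])

Because each replica of the model server exposes the same endpoint behind a Service, scaling out to more replicas requires no change to this client code.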

Recall that you built the wine quality prediction model in the previous chapter. The first stage of exposing the model is to save it to an S3 bucket. RHODS provides multiple model servers that host your models and allow them to be accessed over HTTP. Think of a model server as an application server, such as JBoss or WebLogic, which takes your Java code and makes it executable and accessible over standard protocols.
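As an illustration, here is a minimal sketch of uploading a saved model to an S3 bucket with boto3. The bucket name, object key, and file path are placeholders, and the connection details are assumed to come from environment variables (in RHODS these are typically injected from a data connection):

import os

import boto3

# Connection details assumed to be provided as environment variables,
# e.g. by a RHODS data connection mounted into the workbench.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

# Hypothetical bucket, key, and local path for the wine quality model.
s3.upload_file(
    Filename="models/wine-quality.onnx",
    Bucket="models",
    Key="wine-quality/1/model.onnx",
)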

The model servers can serve different model formats, such as Intel OpenVINO, which uses the Open Neural Network Exchange...
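As a hedged illustration of the Open Neural Network Exchange (ONNX) format mentioned above, here is a minimal sketch of exporting a scikit-learn model to ONNX with skl2onnx. The estimator below is a stand-in for the wine quality model, and the 11-feature input shape is an assumption:

import numpy as np
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.linear_model import LinearRegression

# Stand-in for the wine quality model trained in the previous chapter.
model = LinearRegression().fit(np.random.rand(50, 11), np.random.rand(50))

# Declare the input signature: batches of 11 float features (assumed count).
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 11]))]
)

with open("models/wine-quality.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())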

