
Learn Amazon SageMaker

Real-life machine learning scenarios often involve more than one model. For example, you may need to run preprocessing steps on incoming data, or reduce its dimensionality with the PCA algorithm.
Of course, you could deploy each model to a dedicated endpoint. However, you would then need orchestration code to pass prediction requests to each model in sequence, and multiplying endpoints would also increase costs.
Instead, inference pipelines let you deploy up to five models on the same endpoint (or use them together for batch transform), and they automatically handle the prediction sequence.
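Before looking at the SageMaker code, it helps to see what "handling the prediction sequence" means: each model's output becomes the next model's input. Here is a minimal, self-contained Python sketch of that chaining; the two steps are toy stand-ins (hard-coded projection vectors and weights), not real PCA or Linear Learner models.

```python
def pca_step(features):
    # Toy "dimensionality reduction": project 4 features onto 2
    # fixed components (a real PCA would learn these from data).
    c1 = [0.5, 0.5, 0.5, 0.5]
    c2 = [0.5, -0.5, 0.5, -0.5]
    return [sum(f * w for f, w in zip(features, c)) for c in (c1, c2)]

def linear_learner_step(features):
    # Toy linear model applied to the reduced features.
    weights, bias = [0.3, -0.2], 0.1
    return sum(f * w for f, w in zip(features, weights)) + bias

def pipeline_predict(features, steps):
    # The endpoint applies each step in order,
    # piping each output into the next model.
    out = features
    for step in steps:
        out = step(out)
    return out

print(pipeline_predict([1.0, 2.0, 3.0, 4.0], [pca_step, linear_learner_step]))
```

This is exactly the orchestration you would otherwise have to write yourself between two dedicated endpoints; an inference pipeline performs it inside a single endpoint.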
Let's say that we wanted to run PCA and then Linear Learner. Building the inference pipeline would look like this: