Chapter 9. Deploy Models to Production

In previous chapters, we demonstrated how to train and optimize models. In this chapter, we shift focus from model development in the research lab to model deployment in production. We demonstrate how to deploy, optimize, scale, and monitor models to serve our applications and business use cases.

We deploy our model to serve online, real-time predictions and show how to run offline, batch predictions. For real-time predictions, we deploy our model via SageMaker Endpoints. We discuss best practices and deployment strategies, such as canary rollouts and blue/green deployments. We show how to test and compare new models using A/B tests and how to shift traffic toward better-performing models using reinforcement learning with multiarmed bandit (MAB) tests. We demonstrate how to automatically scale our model-hosting infrastructure as model-prediction traffic changes. We show how to continuously monitor the deployed model to detect concept drift, drift in model quality or bias, and drift in feature importance. We also touch on serving model predictions via serverless APIs with AWS Lambda and on optimizing and managing models at the edge. We conclude the chapter with tips on how to reduce model size, reduce inference cost, and increase prediction performance using various hardware, services, and tools, such as AWS Inferentia hardware, the SageMaker Neo service, and the TensorFlow Lite library.
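As a concrete preview of the real-time path, the following sketch uses the SageMaker Python SDK to deploy a trained TensorFlow model artifact as a SageMaker Endpoint and invoke it for an online prediction. The S3 path, framework version, instance type, and input payload are illustrative assumptions, not values from this chapter.

    import sagemaker
    from sagemaker.tensorflow import TensorFlowModel

    # Assumed: a SageMaker execution role and a trained model artifact in S3
    role = sagemaker.get_execution_role()

    model = TensorFlowModel(
        model_data="s3://<bucket>/model/model.tar.gz",  # placeholder S3 path
        role=role,
        framework_version="2.3",                        # illustrative version
    )

    # Create a real-time SageMaker Endpoint behind an HTTPS API
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.large",                    # illustrative instance type
    )

    # Invoke the endpoint for an online prediction
    response = predictor.predict({"instances": [[0.5, 0.2, 0.1]]})  # hypothetical input
    print(response)

    # Delete the endpoint when finished to avoid ongoing charges
    predictor.delete_endpoint()

Offline, batch predictions follow a similar pattern but use a SageMaker Batch Transform job (for example, via model.transformer()) instead of a long-running endpoint; we compare the two options in the next section.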

Choose Real-Time or Batch Predictions

We need to understand the application and business ...
