Chapter 9. Deployment: Launching Your AI Application Into Production

So far, we’ve explored the key concepts and tools you need to build the core functionality of an AI application. You’ve learned how to use LangChain to generate LLM outputs, index and retrieve data, and enable memory and agency.

But your application still runs only in your local development environment, so external users can’t access it yet.

In this chapter, you’ll learn best practices for deploying your AI application to production. We’ll also explore tools to debug, test, monitor, and collaborate on your LLM applications.

Let’s get started.

Prerequisites

To deploy your AI application effectively, you need services to host it, to store and retrieve its data, and to monitor it in production.

The deployment example in this chapter incorporates the following services:

Vector store ...
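
To illustrate the first prerequisite, here’s a minimal sketch of connecting to a hosted vector store with LangChain. It assumes a Postgres database with the pgvector extension and the langchain-postgres and langchain-openai packages; the connection string, collection name, and embedding model shown are placeholders, not requirements:

from langchain_openai import OpenAIEmbeddings
from langchain_postgres import PGVector

# Embedding model used to vectorize documents and queries
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Connect to a hosted Postgres instance with the pgvector extension.
# The connection string and collection name below are placeholders.
vector_store = PGVector(
    embeddings=embeddings,
    collection_name="my_docs",
    connection="postgresql+psycopg://user:password@host:5432/dbname",
)

# Expose the store as a retriever for use in a chain or agent
retriever = vector_store.as_retriever(search_kwargs={"k": 4})

Any hosted vector database supported by LangChain would work here; what matters is that the store lives outside your local machine, so your deployed application can reach it.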
