Part III. Outside the Wall

Congratulations! You now know enough about NLP to actually read and understand the latest research and implement every part of the pipeline to solve the most common NLP tasks from scratch.

But when deploying models in production, there are many more things to consider. Where do you run your model—on the client or on a server? How do you handle multiple simultaneous requests? How do you integrate your PyTorch model, which is only accessible from Python, into a JavaScript web app? How do you train on new user data as it comes in? How do you detect and handle errors in your model while it’s in production? How do you scale training across very large datasets and multiple nodes?
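To make the Python-to-JavaScript question concrete, one common pattern is to wrap the model in a small HTTP service that a web app can call with `fetch`. The sketch below uses only the Python standard library, with a placeholder `classify` function standing in for a real PyTorch model (a real deployment would load a trained model, e.g., with `torch.jit.load`, and would use a production server rather than `http.server`):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify(text: str) -> str:
    # Placeholder for a real PyTorch model's forward pass.
    return "positive" if "good" in text.lower() else "negative"

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the "model," and return JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"label": classify(payload["text"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass

def serve(port: int = 0) -> HTTPServer:
    # Port 0 lets the OS pick a free port; the chosen port is in
    # server.server_address. The server runs on a background thread.
    server = HTTPServer(("127.0.0.1", port), PredictHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A browser client would then call it with something like `fetch(url, {method: "POST", body: JSON.stringify({text: "..."})})`. This keeps the model behind a language-neutral interface, which is also a natural place to handle batching, queuing, and error reporting—topics we return to in the chapters ahead.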

Many of these questions do not have a single perfect answer, but in this section we’ll try to shed some light on the tools and technologies that are important for real-world productionization of models.

A lot of the topics discussed in these next few chapters are not, strictly speaking, directly related to NLP. They’re what we called “outside the box” concepts in Chapter 1. Nonetheless, they are important to consider when taking your NLP models from a fun side project to large-scale research and a real-world deployment that has an impact on real humans.
