Chapter 3. NLP Tasks and Applications
In Chapter 2, we gave you a gentle introduction to language models and fine-tuning. Now, let’s explore more of what fine-tuning can actually be used for. It is good for more than just generating better domain-specific language models, as we alluded to in the previous chapter. Fine-tuning can be used to solve meaningful real-world tasks, which serve as the building blocks of complex real-world NLP applications.
In this chapter, we will officially introduce several of these “meaningful real-world tasks” and present popular benchmarks, such as GLUE and SQuAD, for measuring performance on them. We will also highlight standard publicly available datasets for you to use when solving these tasks on your own. And, most importantly, we will solve two of these tasks—named entity recognition (NER) and text classification—together to show just how all of this works.
We hope this chapter gives you a deeper, more applied, hands-on take on performing NLP and serves as a launch pad for building your own real-world NLP applications.
Pretrained Language Models
As we mentioned in Chapter 1, NLP has come a long way over just the past few years. Instead of training NLP models from scratch, it is now possible (and advisable) to leverage pretrained language models to perform common NLP tasks such as NER. Only when you have highly custom needs is it advisable to train your model from scratch. But, before we proceed any further, ...
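To make the idea concrete before the full walkthrough later in the chapter, here is a minimal sketch of leveraging a pretrained model for NER. This example assumes the Hugging Face transformers library (one common choice; the chapter has not named a specific library at this point) and uses its default pretrained NER model, with no training from scratch:

```python
# A minimal sketch, assuming the Hugging Face transformers library.
# The pipeline downloads a default model pretrained for NER, so no
# training from scratch is needed.
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into whole entities
ner = pipeline("ner", aggregation_strategy="simple")

entities = ner("Tim Cook is the CEO of Apple, based in Cupertino.")
for ent in entities:
    # Each result carries the entity text, its predicted type, and a score
    print(ent["word"], ent["entity_group"], round(float(ent["score"]), 2))
```

The key point is that a few lines of inference code replace what used to require collecting labeled data and training a model end to end; fine-tuning, covered next, adapts such a pretrained model to your own domain.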