Chapter 8. Advanced Sequence Modeling for Natural Language Processing
In this chapter, we build on the sequence modeling concepts discussed in Chapters 6 and 7 and extend them to sequence-to-sequence modeling, in which the model takes a sequence as input and produces another sequence, possibly of a different length, as output. Examples of sequence-to-sequence problems are everywhere: given an email, predict a response; given a French sentence, predict its English translation; given an article, write an abstract summarizing it. We also discuss structural variants of sequence models, in particular bidirectional models. To get the most out of the sequence representation, we introduce the attention mechanism and discuss it in depth. Finally, the chapter ends with a detailed walkthrough of neural machine translation (NMT) that implements the concepts described herein.
Sequence-to-Sequence Models, Encoder–Decoder Models, and Conditioned Generation
Sequence-to-sequence (S2S) models are a special case of a general family of models called encoder–decoder models. An encoder–decoder model is a composition of two models (Figure 8-1), an “encoder” and a “decoder,” that are typically jointly trained. The encoder model takes an input and produces an encoding or a representation (ϕ) of the input, which is usually a vector.1 The goal of the encoder is to capture important properties of the input with respect to the task at hand. ...
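To make the composition concrete, here is a minimal sketch of an encoder–decoder pairing in PyTorch. The class names, hyperparameters, and the choice of GRU-based encoder and decoder are illustrative assumptions for this sketch, not the chapter's NMT implementation; the point is only to show an encoder producing a representation ϕ and a decoder conditioned on it, with the two modules trained jointly as one network.

```python
# A minimal encoder-decoder sketch in PyTorch. The module names, the GRU
# choice, and all hyperparameters are illustrative assumptions, not the
# chapter's NMT implementation.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, source_indices):
        # source_indices: (batch, source_len) token indices
        embedded = self.embedding(source_indices)
        _, final_hidden = self.rnn(embedded)
        # final_hidden plays the role of phi, the encoding of the input
        return final_hidden                     # (1, batch, hidden_dim)


class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.output_proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, target_indices, encoder_state):
        # Generation of the output sequence is conditioned on the encoding
        embedded = self.embedding(target_indices)
        outputs, _ = self.rnn(embedded, encoder_state)
        return self.output_proj(outputs)        # (batch, target_len, vocab_size)


class EncoderDecoder(nn.Module):
    """Composes the two models; in practice they are trained jointly."""
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, source_indices, target_indices):
        phi = self.encoder(source_indices)
        return self.decoder(target_indices, phi)


# Tiny smoke test with random token indices
model = EncoderDecoder(Encoder(100, 32, 64), Decoder(120, 32, 64))
src = torch.randint(0, 100, (4, 7))   # batch of 4 source sequences, length 7
tgt = torch.randint(0, 120, (4, 9))   # batch of 4 target sequences, length 9
logits = model(src, tgt)
print(logits.shape)                   # torch.Size([4, 9, 120])
```

In training, the decoder would typically receive the target sequence shifted by one position (teacher forcing), and the resulting logits would be compared against the ground-truth tokens with a cross-entropy loss that updates both encoder and decoder parameters together.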