Chapter 2. Transformers

Many trace the most recent wave of advances in generative AI to the introduction of a class of models called transformers in 2017. Their most well-known applications are the powerful large language models (LLMs), such as Llama and GPT-4, used by hundreds of millions daily. Transformers have become a backbone for modern AI applications, powering everything from chatbots and search systems to machine translation and content summarization. They’ve even branched out beyond text, making waves in fields like computer vision, music generation, and protein folding. In this chapter, we’ll explore the core ideas behind transformers and how they work, with a focus on one of the most common applications: language modeling.

Before we dive into the details of transformers, let’s take a step back and understand what language modeling is. At its core, a language model (LM) is a probabilistic model that learns to predict the next word (or token) in a sequence based on the preceding or surrounding words. In doing so, the model captures the language’s underlying structure and patterns, which allows it to generate realistic and coherent text. For example, given the sentence “I began my day eating”, an LM might predict the next word as “breakfast” with a high probability.
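
To make this concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the small "gpt2" checkpoint, which are illustrative choices rather than anything prescribed by the text: we feed the prompt to a pretrained causal LM and print the five most probable next tokens.

# A minimal sketch: ask a small pretrained causal language model
# which tokens it considers most likely to come next.
# Assumes the Hugging Face `transformers` library and the "gpt2"
# checkpoint (illustrative choices, not prescribed by the text).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I began my day eating"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocab)

# Turn the logits for the last position into a probability
# distribution over the vocabulary, then take the top 5 tokens.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id):>12}  {prob.item():.3f}")

With a completion-style model like GPT-2, tokens such as “ breakfast” typically appear near the top of this list, which is exactly the behavior the example above describes: the model assigns high probability to plausible continuations of the sequence.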

So, how do transformers fit into this picture? Transformers are designed to handle long-range dependencies and complex relationships between words efficiently and expressively. For example, imagine that you want to use ...
