Part II. Methods

In Part II we will dive into the six families of generative models, including the theory behind how they work and practical examples of how to build each type of model.

In Chapter 3 we shall take a look at our first generative deep learning model, the variational autoencoder. This technique will allow us to not only generate realistic faces, but also alter existing images—for example, by adding a smile or changing the color of someone’s hair.
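
To make the idea of latent-space editing concrete, here is a minimal sketch, written in plain NumPy, of how an attribute such as a smile can be added by simple vector arithmetic in the latent space. The linear encode and decode functions and the random "face" arrays are placeholders for illustration only; a real VAE, trained as described in Chapter 3, would supply them.

    import numpy as np

    rng = np.random.default_rng(0)
    LATENT_DIM, IMG_DIM = 32, 64 * 64 * 3

    # Placeholder linear "encoder" and "decoder"; a trained VAE would
    # provide these mappings instead.
    W_enc = rng.normal(size=(IMG_DIM, LATENT_DIM)) / np.sqrt(IMG_DIM)
    W_dec = rng.normal(size=(LATENT_DIM, IMG_DIM)) / np.sqrt(LATENT_DIM)

    def encode(x):
        return x @ W_enc    # image(s) -> latent vector(s)

    def decode(z):
        return z @ W_dec    # latent vector(s) -> image(s)

    # Estimate a "smile" direction: the mean latent code of smiling faces
    # minus the mean latent code of neutral faces (random arrays stand in
    # for real images here).
    smiling_faces = rng.random((100, IMG_DIM))
    neutral_faces = rng.random((100, IMG_DIM))
    smile_vector = encode(smiling_faces).mean(axis=0) - encode(neutral_faces).mean(axis=0)

    # Editing an image then amounts to nudging its latent code along the
    # attribute direction before decoding.
    face = rng.random(IMG_DIM)
    smiling_face = decode(encode(face) + 1.5 * smile_vector)
    print(smiling_face.shape)    # same flattened shape as the input image

The key point is that attributes correspond to directions in the latent space, so "adding a smile" reduces to adding a scaled direction vector to an image's latent code before decoding.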

Chapter 4 explores one of the most successful generative modeling techniques of recent years, the generative adversarial network. We shall see the ways that GAN training has been fine-tuned and adapted to continually push the boundaries of what generative modeling is able to achieve.

In Chapter 5 we will delve into several examples of autoregressive models, including LSTMs and PixelCNN. This family of models treats the generation process as a sequence prediction problem—it underpins today’s state-of-the-art text generation models and can also be used for image generation.
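
To give a flavor of what "generation as sequence prediction" means in practice, the sketch below samples a toy sentence one token at a time. The made-up vocabulary and next-token probability table stand in for the LSTM (for text) or PixelCNN (for pixels) predictors covered in Chapter 5; only the sampling loop is the point here.

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB = ["<start>", "the", "cat", "sat", "on", "mat", "."]
    V = len(VOCAB)

    # A made-up table of next-token probabilities given the previous token.
    # A real autoregressive model conditions on the whole history, but the
    # generation loop is identical.
    transition = rng.random((V, V))
    transition /= transition.sum(axis=1, keepdims=True)

    def sample_sequence(max_len=10):
        tokens = [0]                        # start-of-sequence token
        for _ in range(max_len):
            probs = transition[tokens[-1]]  # distribution over the next token
            tokens.append(rng.choice(V, p=probs))
            if VOCAB[tokens[-1]] == ".":    # stop at end of sentence
                break
        return " ".join(VOCAB[t] for t in tokens[1:])

    print(sample_sequence())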

In Chapter 6 we will cover the family of normalizing flow models, including RealNVP. This family of models is based on the change of variables formula, which allows the transformation of a simple distribution, such as a Gaussian distribution, into a more complex distribution in a way that preserves tractability.
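
The tractability comes from the change of variables formula: if an invertible map f sends data x back to a base variable z with a simple density p_Z, then log p_X(x) = log p_Z(f(x)) + log |det df/dx|. The one-dimensional sketch below, which uses a hypothetical fixed affine map in place of RealNVP's learned coupling layers, shows how this yields an exact log-density.

    import numpy as np

    # A hypothetical affine map stands in for learned coupling layers.
    scale, shift = 2.0, -1.0

    def f(x):
        return (x - shift) / scale                 # invertible map from data x to base z

    def log_det_jacobian(x):
        return -np.log(scale) * np.ones_like(x)    # log |df/dx| of the map above

    def log_prob(x):
        z = f(x)
        log_pz = -0.5 * z ** 2 - 0.5 * np.log(2.0 * np.pi)   # standard Gaussian base density
        return log_pz + log_det_jacobian(x)                  # change of variables formula

    x = np.array([0.5, 1.0, 3.0])
    print(log_prob(x))   # exact log-densities of x under the transformed distribution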

Chapter 7 introduces the family of energy-based models. These models train a scalar energy function to score the validity of a given input. We will explore techniques such as Langevin dynamics for sampling from these models and contrastive divergence for training them.
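
As a rough illustration, the sketch below uses a toy quadratic energy function in place of a trained scoring network, with low energy meaning "more plausible", and draws approximate samples from the implied distribution p(x) proportional to exp(-E(x)) using a short Langevin dynamics loop. The step size and number of steps are illustrative values only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy quadratic energy: low energy near the origin. In Chapter 7 this
    # scalar function is a trained neural network.
    def energy(x):
        return 0.5 * np.sum(x ** 2)

    def grad_energy(x):
        return x    # gradient of the quadratic energy above

    # Langevin dynamics: repeatedly step downhill on the energy surface
    # while injecting Gaussian noise, giving approximate samples from
    # p(x) proportional to exp(-E(x)).
    def langevin_sample(steps=500, step_size=0.01, dim=2):
        x = rng.normal(size=dim)
        for _ in range(steps):
            noise = rng.normal(size=dim)
            x = x - step_size * grad_energy(x) + np.sqrt(2 * step_size) * noise
        return x

    sample = langevin_sample()
    print(sample, energy(sample))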
