Generative Deep Learning

Errata for Generative Deep Learning


This errata list records errors, and their corrections, that were found after the product was released.

The following errata were submitted by our customers and have not yet been approved or disproved by the author or editor. They solely represent the opinion of the customer.

Color Key: Serious technical mistake | Minor technical mistake | Language or formatting error | Typo | Question | Note | Update

Version | Location | Description | Submitted by | Date submitted
Other Digital Version Chapter 1
Figure 1-6

Figure 1-6 renders a map of the world. There are two points labelled B: one in red, consistent with the text, and one in black, which I believe is a mistake.

This is for 2nd edition, kindle.
(sorry, I had this added to the 1st ed. errata by mistake)

Luis Torrao  May 14, 2023 
Other Digital Version Preface
The text says "We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at https://oreil.ly/generative-dl." However, the link takes the reader to the 1st edition website, not the 2nd edition one.

I am on 2nd edition for kindle.

Luis Torrao  May 14, 2023 
PDF Page Core Probability Theory (p. 16 in the pdf version)
Definition of "Probability density function"

Error 1.
"A probability density function (or simply density function)
is a function p(x) that maps a point x in the sample space to a number between 0 and 1."

A pdf can take any positive value as long as it integrates to 1.
Probability mass functions of discrete variables do have this [0,1] restriction (and they must sum up to 1).

Error 2.
"The integral of the density function over all points in the sample space must equal 1, so that it is a well-defined probability distribution."

Strictly speaking, a pdf is not a probability distribution but a tool to _define_ a continuous probability distribution. Quoting the Deep Learning Book (section 3.3.2):
"A probability density function p(x) does not give the probability of a specific state directly; instead the probability of landing inside an infinitesimal region with volume dx is given by p(x)dx"
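The submitter's first point can be sketched numerically (a minimal illustration, not from the book): the uniform density on [0, 0.5] takes the value 2 everywhere on that interval, greater than 1, yet it integrates to 1.

```python
import numpy as np

# Minimal sketch: a valid pdf may exceed 1 pointwise, as long as it
# integrates to 1 over the sample space.
# Example: the uniform density on [0, 0.5], which equals 2 on that interval.
def p(x):
    return np.where((x >= 0.0) & (x <= 0.5), 2.0, 0.0)

xs = np.linspace(-1.0, 1.0, 20001)
dx = xs[1] - xs[0]
integral = np.sum(p(xs)) * dx  # Riemann sum; approximately 1

print(p(np.array([0.25]))[0])  # the density at 0.25 is 2.0, greater than 1
```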

Anonymous  Jun 28, 2023 
PDF Page Covariate Shift (p. 47)
First two paragraphs

I recommend removing the discussion about internal covariate shift (ICS). It has been shown in "How Does Batch Normalization Help Optimization?" (arXiv:1805.11604) that (1) reducing ICS does not guarantee better learning results, and (2) batch normalization does NOT reduce ICS. What BN does (and what improves learning) is smoothing the optimization landscape (please see the mentioned article). As a side note, dropout is no longer as popular as it used to be.

Anonymous  Jul 05, 2023 
Printed Page Page 18
First formula from the top

It seems to me that arg max should go over theta, not X. We're searching through different parameters to find the best one, not tweaking the dataset so that it "suits" a particular parameter.
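The point can be illustrated with a small maximum-likelihood sketch (the data and grid below are hypothetical): the dataset X stays fixed, and the arg max runs over candidate parameters theta.

```python
import numpy as np

# Hypothetical illustration: MLE holds the dataset X fixed and searches
# over candidate parameters theta, taking the arg max over theta.
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=1.0, size=1000)  # fixed dataset

thetas = np.linspace(0.0, 6.0, 601)  # candidate means, unit-variance Gaussian
# Log-likelihood of the whole dataset for each candidate theta (up to a constant)
log_lik = [-0.5 * np.sum((X - t) ** 2) for t in thetas]

theta_hat = thetas[np.argmax(log_lik)]  # arg max over theta, not over X
```

The maximizer lands on the grid point nearest the sample mean of X, as expected for a Gaussian with known variance.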

Anonymous  Sep 01, 2023 
PDF Page p. 117, Wasserstein GAN
Figure 4.12

For the Wasserstein GAN, the critic should be outputting positive values for real images and negative values for fake images, so it seems that the signs/labels on the top and bottom branches of the figure should be flipped.
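For reference, a minimal sketch of the standard WGAN critic objective (the scores below are illustrative, not from the book): the critic is trained to maximize E[D(real)] − E[D(fake)], which pushes real scores positive and fake scores negative.

```python
import numpy as np

# Sketch of the standard WGAN critic loss: minimizing it maximizes
# mean(score_real) - mean(score_fake).
def critic_loss(scores_real, scores_fake):
    return -(np.mean(scores_real) - np.mean(scores_fake))

# A critic scoring real images positively and fakes negatively achieves
# a lower (better) loss than the same critic with the signs flipped.
good = critic_loss(np.array([1.0, 2.0]), np.array([-1.0, -2.0]))
flipped = critic_loss(np.array([-1.0, -2.0]), np.array([1.0, 2.0]))
print(good, flipped)  # -3.0 3.0
```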

Rich Radke  Sep 17, 2023 
Printed Page Page 88
Tables 3.5 and 3.6

The input layer of the VAE shown in the table takes the shape (32, 32, 3), but the image size is stated on page 87 in Example 3-15 as 64 by 64 (not 32 by 32).

Table 3.6 lists 4 sets of Conv2DTranspose-BatchNormalization-LeakyReLU, but the notebook in the accompanying repo creates 5 sets.

Shaolang  Sep 30, 2023 
Printed Page docker.md file
line 131

Hello,

Can I run the book’s Jupyter Notebooks with Python code in an MS VS Code IDE (that has the extensions Docker and Dev Containers) and not in the Jupyter notebook that comes up when I navigate to the address starting 127.0.0.1:8888/lab?token= in a web browser? I ask because I have 17 lab computers with VS Code as the IDE and want to teach students using Jupyter Notebooks in a VS Code IDE environment for this course (after installing Docker and running the foster code-app-1 container on each lab computer). Thank you.

Anonymous  Dec 20, 2023 
ePub, O'Reilly learning platform Page Digital Version
Chapter 1 (Generative Modeling), Core Probability Theory

Under Core Probability Theory, description of Maximum likelihood estimation, the equation for theta_hat should be argmax over theta rather than argmax over x.

Mike Singer  Feb 28, 2024 
Printed Page Chapter 2
Table 2-2

In Chapter 2, in Table 2-2, the number of parameters in the dense layer is listed as 12,810. Is this a typo? I ask because the justification for the number of parameters equaling 12,810 is that the output shape from the flatten layer is 1,280 units multiplied by the 10 units in the dense layer, i.e., 1,280 × 10. Shouldn't the number of parameters be 12,800, because 1,280 × 10 = 12,800? Or am I missing something?
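One possible explanation (assuming the layer uses its default bias term, as a Keras Dense layer does): the count includes one bias per output unit, so the weights and biases together give 12,810.

```python
# Parameter count for a dense layer with a bias term (the Keras default):
# one weight per (input, output) pair, plus one bias per output unit.
input_units = 1280   # flattened output feeding the dense layer
output_units = 10    # units in the dense layer
params = input_units * output_units + output_units  # 12,800 weights + 10 biases
print(params)  # 12810
```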

Stephen S. Chettiath  Mar 25, 2024 
O'Reilly learning platform Page Page 86
2nd paragraph

I'm reaching out because I'm having trouble downloading the datasets for Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play by David Foster, 2nd Edition.

I was able to clone the repo, install Docker, generate an API key from Kaggle, and run my container through Git Bash. However, when I try to run the following through Git Bash:

bash scripts/download.sh faces

I get this:

service "app" is not running

I'm not sure what's going on. I'm running the bash command within the directory that has the requirements.txt file and the env file so I think I'm running it in the right place. I'm just not sure how to proceed.

I also reached out to O'Reilly and was told to run this:

scripts/downloaders/download_kaggle_data.sh

But got the exact same service "app" error.

If anyone could help me out, it would be greatly appreciated.

Anonymous  Apr 29, 2024 
PDF Page 55
Reference 7.

In reference 7 on p. 55, the first two words are omitted from the title. Here's how it _currently_ appears:

7. Hinton et al., “Networks by Preventing Co-Adaptation of Feature Detectors,” July 3,
2012, <link removed because the errata form wouldn't accept it>

The actual title of the article above is "Improving neural networks by preventing co-adaptation of feature detectors."

Steve Wickert  Sep 20, 2023