Errata
The errata list records errors and their corrections that were found after the product was released. If an error was corrected in a later version or reprint, the date of the correction is shown in the column titled "Date Corrected".
The following errata were submitted by our customers and approved as valid errors by the author or editor.
Version | Location | Description | Submitted By | Date submitted | Date corrected |
---|---|---|---|---|---|
Printed | Page 16, 1st paragraph | "There are two folders containing different versions of the notebooks. The full folder contains the exact notebooks used to create the book you're reading now, with all the propose and outputs. The stripped version has the same headings and code cells, but all outputs and prose has been removed." | Andrew Nakamura | Aug 24, 2020 | Sep 18, 2020 |
Printed | Page 28, 5th paragraph | "We have to tell fastai how to get labels from the filenames, which we do by calling from_name_func (which means that the *filenames* can be extracted using a function applied to the filename)..." That should say labels instead of filenames. | John O'Reilly | Sep 13, 2020 | Dec 18, 2020 |
Printed | Page 28, -5 | "and passing x[0].isupper(), which evaluates to True if the first letter is uppercase (i.e., it’s a cat)." | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 136, 3rd paragraph | In the 3rd line of the 3rd paragraph, it should be "for a total of 784 pixels", not "768 pixels". Change 768 to 784. | Mohammed Maheer | Oct 13, 2020 | Dec 18, 2020 |
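The erratum above is simple arithmetic: the book's MNIST images are 28 pixels by 28 pixels, so flattening one yields 28 × 28 = 784 values, not 768. A quick sketch using NumPy (standing in for a PyTorch tensor, which flattens the same way):

```python
import numpy as np

# Each MNIST digit image is 28 pixels by 28 pixels, as the book states.
img = np.zeros((28, 28))

# Flattening gives one value per pixel: 28 * 28 = 784, not 768.
flat = img.flatten()
print(flat.shape)  # → (784,)
```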
Printed | Page 143, l.1 | "If you’ve done numeric programming in PyTorch before, you may recognize these as being similar to NumPy arrays." | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 147, last paragraph | "we'll get back 1,010 absolute values" should read: | Peter Butterfill | Sep 08, 2020 | Sep 18, 2020 |
Printed | Page 147, in the middle | tensor([1,2,3]) + tensor([1,1,1]) | HIDEMOTO NAKADA | Feb 08, 2021 | May 07, 2021 |
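The page 147 erratum quotes the book's element-wise addition example. As a minimal illustration of what that expression computes, here is the same operation with NumPy arrays (PyTorch's `tensor([1,2,3]) + tensor([1,1,1])` behaves identically for this case):

```python
import numpy as np

# Element-wise addition of two rank-1 arrays of equal length;
# each position of the result is the sum of the matching positions.
a = np.array([1, 2, 3])
b = np.array([1, 1, 1])
print(a + b)  # → [2 3 4]
```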
| Page 149, the last two paragraphs | All the vector symbols X and W should be in lowercase in order to be consistent with the function above the second-to-last paragraph. | ZHANG Hongyuan | Dec 18, 2020 | May 07, 2021 |
| Page 149, middle paragraph | While this chapter discusses how to discriminate '3' and '7', this paragraph is talking about '8'. | HIDEMOTO NAKADA | Mar 19, 2021 | May 07, 2021 |
Printed | Page 173, 3rd paragraph | "To decide if an output represents a 3 or a 7, we can just check whether it's greater than 0." | HIDEMOTO NAKADA | Feb 08, 2021 | May 07, 2021 |
| Page 198, 2nd paragraph, last line | "...would then look like something like Figure 5-3" should be "...would then look something like Figure 5-3". | ZHANG Hongyuan | Mar 03, 2021 | May 07, 2021 |
Printed | Page 200, last paragraph | "So, we want to transform our numbers between 0 and 1 to instead be between negative infinity and infinity." | HIDEMOTO NAKADA | Feb 13, 2021 | May 07, 2021 |
Printed | Page 201, last paragraph (not the Sylvain Says section) | "modification" should be "multiplication" in the following: | Peter Butterfill | Sep 08, 2020 | Sep 18, 2020 |
Printed | Page 255, last line of the 2nd paragraph | 'The Last Skywalker' | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 272, 3rd paragraph | The term 'embedding matrices' used here means something different from the one used in p. 268. I believe you are talking about the output of the embedding layer here. | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 335, in the middle list | xxunk | HIDEMOTO NAKADA | Feb 16, 2021 | May 07, 2021 |
Printed | Page 341, 2nd paragraph | "We then cut this stream into a certain number of batches (which is our batch size)." | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 364, last code section | The class SiameseImage is derived from Tuple (note the capital T). The native Python datatype is written all in lowercase, 'tuple', but I think what's meant here is fastuple, the extended tuple in fastcore. I wonder if Tuple was renamed fastuple at some point. | Nils Brünggel | Oct 13, 2020 | Dec 18, 2020 |
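The erratum above concerns fastcore's `fastuple`, which extends the built-in `tuple`. A minimal stand-in sketch of the key convenience it adds (the class name `FastupleLike` is hypothetical, and this is a simplified illustration, not fastcore's actual implementation):

```python
# A simplified stand-in for fastcore's fastuple: unlike the built-in
# tuple, it can be constructed from positional arguments directly.
class FastupleLike(tuple):
    def __new__(cls, *args):
        # tuple((1, 2)) works, but tuple(1, 2) raises TypeError;
        # collecting *args restores the convenient call style.
        return super().__new__(cls, args)

t = FastupleLike(1, 2)
print(t)                     # → (1, 2)
print(isinstance(t, tuple))  # → True
```

Because the subclass is still a `tuple`, indexing, unpacking, and equality all work as usual.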
Printed | Page 399, last itemize | Both "Embedding dropout" and "Input dropout" are listed. | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 401, Questionnaire 33 | "Why do we scale the weights with dropout?" | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 410, first paragraph | [channels_in, features_out, rows, columns] should be read: | vallotton | Oct 03, 2020 | Dec 18, 2020 |
Printed | Page 423, bottom figures | What is shown in these figures is not the red, green, and blue channels of the original image. Instead, you show the same image three times using a different colormap. Indeed, the three images show exactly the same pattern of intensity, which they shouldn't (for example, if the channels were really shown, the green grass should appear more prominently in the green channel than in the red channel). Or am I missing something? Thanks for a great book though! | pascal vallotton | Oct 04, 2020 | Dec 18, 2020 |
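The page 423 erratum hinges on the fact that the separate channels of a real color image have genuinely different intensity patterns. A minimal sketch with a synthetic NumPy image (height × width × 3 layout assumed; displaying each channel with e.g. `plt.imshow` would then show different pictures, unlike re-colormapping the same array):

```python
import numpy as np

# A tiny synthetic RGB image (height x width x 3 channels):
# the top row is pure red, the bottom row is pure green.
img = np.zeros((2, 2, 3))
img[0, :, 0] = 1.0  # red channel lit in the top row
img[1, :, 1] = 1.0  # green channel lit in the bottom row

red, green, blue = img[..., 0], img[..., 1], img[..., 2]

# The channels genuinely differ, which is what separate-channel
# figures should show (e.g. grass bright in green, dim in red).
print(np.array_equal(red, green))  # → False
```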
Printed | Page 427, second paragraph | "We'll use the same as one as earlier..." should read: | pascal vallotton | Oct 04, 2020 | Dec 18, 2020 |
Printed | Page 428, -2nd paragraph | "except in camel_case." | HIDEMOTO NAKADA | Feb 16, 2021 | May 07, 2021 |
Printed | Page 433, 1st paragraph | "The percentage of nonzero weights is getting much better" | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 447, first paragraph | "What if we intitialized gamma to zero for every one of those final batchnorm layers?" | vallotton | Oct 05, 2020 | Dec 18, 2020 |
Printed | Page 447, l.3 | "where conv is the function from the previous chapter that adds a second convolution, then a ReLU, then a batchnorm layer" | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 464, 3rd paragraph | Note: fastai has changed, so this should be: | Conwyn Flavell | Apr 15, 2021 | May 07, 2021 |
Printed | Page 465, just above 'x = self.emb_drop(x)' | "You can pass `emb_drop` to `__init__` to change this value:" | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 487, last list | CancelFitException and CancelBatchException | HIDEMOTO NAKADA | Jan 16, 2021 | Dec 18, 2020 |
Printed | Page 501, -2 | "Scale (1d tensor): (1) 256 x 256" | HIDEMOTO NAKADA | Jan 16, 2021 | Dec 18, 2020 |
Printed | Page 502, rules of Einstein summation | | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 503, l.2 | torch.einsum('bi,ij,bj->b', a, b, c) | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
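The page 503 erratum quotes an einsum call with the subscripts `'bi,ij,bj->b'`. To illustrate what that expression computes, here is the same subscript notation with `numpy.einsum` (which uses the same convention as `torch.einsum`), checked against an explicit double sum; the operand names and values are illustrative only:

```python
import numpy as np

# 'bi,ij,bj->b': for each batch row b, sum over i and j of
# a[b,i] * M[i,j] * c[b,j] (i.e. a[b] @ M @ c[b]).
a = np.array([[1.0, 2.0]])       # shape (1, 2) -> 'bi'
M = np.array([[1.0, 0.0],
              [0.0, 1.0]])       # shape (2, 2) -> 'ij'
c = np.array([[3.0, 4.0]])       # shape (1, 2) -> 'bj'

out = np.einsum('bi,ij,bj->b', a, M, c)

# Explicit check for batch 0: sum_i sum_j a[0,i] * M[i,j] * c[0,j]
manual = sum(a[0, i] * M[i, j] * c[0, j]
             for i in range(2) for j in range(2))
print(out, manual)  # both equal 11.0
```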
Printed | Page 505, l.3 | "the scale of our activations will go from 1 to 0.1, and after 100 layers" | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 509, 5th paragraph | "For the gradients of the ReLU and our linear layer, we use the gradients of the loss with respect to the output (in out.g) and apply the chain rule to compute the gradients of the loss with respect to the output (in inp.g)." | HIDEMOTO NAKADA | Jan 16, 2021 | May 07, 2021 |
Printed | Page 510, the last paragraph of the column | Here, SymPy has taken the derivative of x**2 for us! | HIDEMOTO NAKADA | Feb 19, 2021 | May 07, 2021 |
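The page 510 erratum refers to the book's SymPy example, which symbolically differentiates x**2 to get 2*x. That result can be confirmed without SymPy by a quick finite-difference check in plain Python (a numerical sketch, not the book's symbolic approach):

```python
# The derivative SymPy produces for x**2 is 2*x. A central-difference
# approximation of f'(x) lets us verify this numerically:
def numerical_derivative(f, x, eps=1e-6):
    # Central difference: (f(x+eps) - f(x-eps)) / (2*eps)
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x ** 2
x = 3.0
approx = numerical_derivative(f, x)
print(round(approx, 6), 2 * x)  # → 6.0 6.0
```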
Printed | Page 513, 1st paragraph | The computation of bwd for the Lin(LayerFunction), in the second line, refers to self.inp and self.out. | Kaushik Sinha | Jun 28, 2021 | Nov 05, 2021 |
Printed | Page 521, 3rd paragraph | "To do the dot product of our weight matrix (2 by number of activations) with the | HIDEMOTO NAKADA | Feb 19, 2021 | May 07, 2021 |
Printed | Page 524, 2nd code snippet | x.shape | HIDEMOTO NAKADA | Feb 19, 2021 | May 07, 2021 |
Printed | Page 532, last line | "Before we do, we’ll call a hook, if it’s defined." | HIDEMOTO NAKADA | Feb 19, 2021 | May 07, 2021 |