The errata list records errors, and their corrections, that were found after the product was released. If an error was corrected in a later version or reprint, the date of the correction is shown in the "Date Corrected" column.
The following errata were submitted by our customers and approved as valid errors by the author or editor.
Color key: Serious technical mistake · Minor technical mistake · Language or formatting error · Typo · Question · Note · Update
| Version | Location | Description | Submitted By | Date submitted | Date corrected |
|---|---|---|---|---|---|
| | Chapter 1, multiple instances | While reading, I noticed a couple of minor inconsistencies that I thought might be helpful to point out for future editions or errata listings. | Stevin Wilson | Jul 29, 2025 | Jan 23, 2026 |
| | Pages 6 and 7; page 6, 4th paragraph, and page 7, 1st paragraph | The aforementioned paragraphs mention the framework "TensorFlow"; however, it should be "PyTorch". | Zoheb Khan | Aug 06, 2025 | Jan 23, 2026 |
| | Page 9, subsection heading at the bottom | Text: "Installing Porch in Python" | Jørgen Lang | Sep 22, 2025 | Jan 23, 2026 |
| | Page 62, continuation of code, line 5 (on p. 62) | Text: `print(f'Test Set Accuracy: {100 * correct / total}%')` | Jørgen Lang | Oct 17, 2025 | Jan 23, 2026 |
| | Page 8, 2nd paragraph | Text: "wrapped in the torchvision.models library." | Jørgen Lang | Sep 22, 2025 | Jan 23, 2026 |
| | Page 9, 3rd-last paragraph | Text: "With Python, there are many ways to install frameworks, but the default one supported by the TensorFlow team is pip." | Jørgen Lang | Sep 22, 2025 | Jan 23, 2026 |
| | Page 17, last paragraph | Text: "You’ll see the word tensor a lot in ML; it gives the TensorFlow framework its name." | Jørgen Lang | Oct 13, 2025 | Jan 23, 2026 |
| | Page 18 ff., multiple instances | In multiple instances the variable "y" is written in uppercase (as a capital "Y"), while in the remainder of the chapter it is consistently lowercase. | Jørgen Lang | Oct 13, 2025 | Jan 23, 2026 |
| | Page 28, 4th paragraph, 1st sentence | Text: "If you remember, in Chapter 1 we had a Sequential model to specify that we had […]" | Jørgen Lang | Oct 14, 2025 | Jan 23, 2026 |
| | Page 35, 3rd paragraph, 1st sentence | Text: "We’ll also use the term epoch for a training cycle with all of the data […]" | Jørgen Lang | Oct 15, 2025 | Jan 23, 2026 |
| | Page 35, 5th paragraph (just above "Training the Neural Network"), 1st sentence | Text: "This will simply call the train function we specified five times […]" | Jørgen Lang | Oct 15, 2025 | Jan 23, 2026 |
| | Page 37, 3rd code snippet | Text: | Jørgen Lang | Oct 15, 2025 | Jan 23, 2026 |
| | Page 38, code below "Exploring the Model Output" | Text: | Jørgen Lang | Oct 15, 2025 | Jan 23, 2026 |
| | Page 39, 2nd paragraph, 2nd sentence | Text: "The Softmax function gets the log() of the value, where log(1) is zero and […]" | Jørgen Lang | Oct 15, 2025 | Jan 23, 2026 |
| | Page 51, 5th paragraph | Text: "Finally, these 128 are fed into the final layer (self.fc1) with 10 outputs—that represent the 10 classes." | Jørgen Lang | Oct 16, 2025 | Jan 23, 2026 |
| | Page 51, 4th paragraph, last sentence | Text: "The output is 128, which is the same number of neurons we used in Chapter 2 for the deep neural network (DNN)." | Jørgen Lang | Oct 16, 2025 | Jan 23, 2026 |
| | Page 54, various (3 times) | Text: "dense layers" | Jørgen Lang | Oct 17, 2025 | Jan 23, 2026 |
| | Page 58, 2nd paragraph, last sentence | Text: "[…] and the directory for validation is validation_dir, as specified earlier." | Jørgen Lang | Oct 17, 2025 | Jan 23, 2026 |
| | Page 59, 3rd paragraph | Text: "The theory is that these will be activated feature maps […]" | Jørgen Lang | Oct 17, 2025 | Jan 23, 2026 |
| | Page 60, last paragraph, 3rd sentence | Text: "In the preceding code snippet, you’ve already downloaded the training and vali‐ […]" | Jørgen Lang | Oct 17, 2025 | Jan 23, 2026 |
| | Page 61, 2nd paragraph, last sentence | Text: "Here, you’ll download some additional images for testing the model." | Jørgen Lang | Oct 17, 2025 | Jan 23, 2026 |
| | Page 62, code, top | Text (2x): | Jørgen Lang | Oct 17, 2025 | Jan 23, 2026 |
| | Page 63, 3rd paragraph, 2nd sentence | Text: "I’ve provided a “Horses or Humans” notebook on GitHub that you can open directly in Colab." | Jørgen Lang | Oct 20, 2025 | Jan 23, 2026 |
| | Page 65, 2nd code snippet and text above | Text: "If it’s greater than 0.5, we’re looking at a human […]" | Jørgen Lang | Oct 20, 2025 | Jan 23, 2026 |
| | Page 67, before last paragraph | Text: "missing?" | Jørgen Lang | Oct 20, 2025 | Jan 23, 2026 |
| | Page 73, code example, line 1 | Text: [5x whitespace] `def load_image(image_path, transform):` | Jørgen Lang | Oct 21, 2025 | Jan 23, 2026 |
| | Page 74, 2nd-last paragraph, 2nd sentence | Text: "If you wanted to train a dataset to recognize […]" | Jørgen Lang | Oct 21, 2025 | Jan 23, 2026 |
| | Page 74, 2nd-last paragraph, last sentence | Text: "[…] there’s a simple dataset you can use for this." | Jørgen Lang | Oct 21, 2025 | Jan 23, 2026 |
| | Page 76, 1st code snippet, line 6 | Text: `nn.Linear(1024, 3) # Final layer for binary classification` | Jørgen Lang | Oct 21, 2025 | Jan 23, 2026 |
| | Page 77, 3rd paragraph | Text: "If you explore this a little deeper, you can see that the file named scissors4.png had an output of –2.5582, –1.7362, 3.8465]" | Jørgen Lang | Oct 21, 2025 | Jan 23, 2026 |
| | Page 79, figure | Text: "A neural network with dropouts" | Jørgen Lang | Oct 21, 2025 | Jan 23, 2026 |
| | Page 80, last paragraph, 1st sentence | Text: "Before we explore further scenarios, in Chapter 4, you’ll get an introduction to […]" | Jørgen Lang | Oct 21, 2025 | Jan 23, 2026 |
| | Page 84, 2nd code example | Text: `# Create the FashionMNIST dataset` | Jørgen Lang | Oct 22, 2025 | Jan 23, 2026 |
| | Page 86, 4th paragraph, 1st sentence | Text: "While FakeData only gives image types, you could relatively easily create your own CustomData (as we looked at earlier) […]" | Jørgen Lang | Oct 22, 2025 | Jan 23, 2026 |
| | Page 86, last paragraph | Text: "Thankfully, when using datasets, you can generally do this with an easy and intuitive API." | Jørgen Lang | Oct 22, 2025 | Jan 23, 2026 |
| | Page 88, 1st paragraph, 1st sentence | Text: "One more thing to consider when using custom splits is that the name random doesn’t mean […]" | Jørgen Lang | Oct 22, 2025 | Jan 23, 2026 |
| | Page 88, 5th paragraph, 2nd sentence | Text: "For example, batching, image augmentation, mapping to feature columns, and other such logic […]" | Jørgen Lang | Oct 22, 2025 | Jan 23, 2026 |
| | Page 90, 1st paragraph, 2nd sentence | Text: "Whenever you’re dealing with training or inference and you want the data or model to be on the accelerator, you’ll see something like .to(“cuda”) […]" | Jørgen Lang | Oct 22, 2025 | Jan 23, 2026 |
| | Page 93, last paragraph, 1st sentence | Text: "This chapter covered the data ecosystem in PyTorch and introduced you to the dataset and DataLoader classes." | Jørgen Lang | Oct 23, 2025 | Jan 23, 2026 |
| | Page 100, 2nd paragraph, 1st sentence | Text: "Then, you’ll be given the sequences representing the three sentences." | Jørgen Lang | Oct 24, 2025 | Jan 23, 2026 |
| | Page 120, 1st paragraph, last sentence | Text: "[…] so add 24 to get to 24, 24." | Jørgen Lang | Oct 30, 2025 | Jan 23, 2026 |
| | Page 139, 2nd-last paragraph | Text: "It was a very short sentence, so it’s padded up to 85 characters with a lot of zeros!" | Jørgen Lang | Nov 02, 2025 | Jan 23, 2026 |
| | Page 154, 1st paragraph | Text: "You can then set the loss function and classifier to this. (Note that the LR is 0.001, or 1e–3.):" | Jørgen Lang | Nov 11, 2025 | Jan 23, 2026 |
| | Page 155, last paragraph, last sentence | Text: "[…] while the loss for the test set diverged after 15 […]" | Jørgen Lang | Nov 11, 2025 | Jan 23, 2026 |
| | Pages 155 and 156, Figures 7-9 and 7-10 | Text: | Jørgen Lang | Nov 11, 2025 | Jan 23, 2026 |
| | Page 168, last paragraph | Text: "This model shows a total of 406.817 parameters of which only 6,817 are trainable, so training will be fast!" | Jørgen Lang | Nov 12, 2025 | Jan 23, 2026 |
| | Page 175, last paragraph | Text: "You’ll want to create a single string with all the text and set that to be your data. Use \n for the line breaks. Then, this corpus can be easily loaded and tokenized. First, the tokenize function will split the text into individual words, and then the create_word_dictionary will create a dictionary with an index for each individual word in the text:" | Jørgen Lang | Nov 16, 2025 | Jan 23, 2026 |
| | Page 187, 1st code snippet | (Submitter’s note: the protocol prefix of the URL is marked with [protocol] so the error submission does not throw a tantrum. ¯\\_(ツ)_/¯) | Jørgen Lang | Nov 18, 2025 | Jan 23, 2026 |
| | Page 190, figure caption | Text: "Adding a second LSTM layer" | Jørgen Lang | Nov 18, 2025 | Jan 23, 2026 |
| | Page 209 ff., multiple instances | Text: […] `# features and targets` […] | Jørgen Lang | Nov 24, 2025 | Jan 23, 2026 |
| | Pages 243–244, last sentence (continued on p. 244) | Text: "As discussed in earlier chapters, with dropout, neighboring neurons are randomly dropped out (ignored) during training to avoid a familiarity bias." | Jørgen Lang | Nov 27, 2025 | Jan 23, 2026 |
| | Page 260, 1st and 2nd code snippets | Text: `python3 -m venv chapter12env` | Jørgen Lang | Dec 01, 2025 | Jan 23, 2026 |
| | Page 260, 6th paragraph, 1st sentence | Text: "Then, you’ll be ready to install PyTorch." | Jørgen Lang | Dec 01, 2025 | Jan 23, 2026 |
| | Page 264, 1st paragraph, 1st sentence | Text: "Before running it, make sure you have a model-store (or similar) directory that you will store the archived model in." | Jørgen Lang | Dec 01, 2025 | Jan 23, 2026 |
| | Page 266, 1st code example, lines 3 and 6 | Text: | Jørgen Lang | Dec 02, 2025 | Jan 23, 2026 |
| | Page 341, 1st paragraph, last sentence | Text: "On macOS, the shared RAM with the M-Series chips works well, while running on an M1 Mac with 16 Gb, the Gemma 2B is fast and smooth with Ollama." | Jørgen Lang | Dec 13, 2025 | Jan 23, 2026 |
| | Page 348, 2nd-last paragraph | Text: "Next, we can wrap all this code in a function called analyze, with this signature:" | Jørgen Lang | Dec 14, 2025 | Jan 23, 2026 |
| | Page 352, 1st paragraph, 2nd sentence | Text: "If the process completes successfully, the JobId is updated […]" | Jørgen Lang | Dec 14, 2025 | Jan 23, 2026 |