Errata
The errata list records errors, and their corrections, that were found after the product was released. If an error was corrected in a later version or reprint, the date of the correction is shown in the "Date corrected" field.
The following errata were submitted by our customers and approved as valid errors by the author or editor.
Each entry below lists: Version | Location | Description | Submitted by | Date submitted | Date corrected.
Page web, Figure 1.1
The function in Figure 1.1 is not ReLU but Leaky ReLU. Note from the Author or Editor:
Jaap van der Does | Oct 16, 2019
Chapter 1, Figure 1.18
Shouldn't the symbol in the second blue box be a \sigma rather than a \delta?
Venkatesh-Prasad Ranganath | Nov 10, 2019
Chapter 1, Figure 1.1
The figure says ReLU function, but instead plots Leaky ReLU.
Tamirlan Seidakhmetov | Jan 05, 2020
Chapter 1, "The Fun Part: The Backward Pass" -> "Code" section
In "then increasing x11 by 0.001 should increase L by 0.01 × 0.2489", 0.01 should be changed to 0.001.
Tamirlan Seidakhmetov | Jan 07, 2020
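
For reference, a minimal numerical sketch of the relationship the erratum points at, using a hypothetical scalar function L of a matrix X (a stand-in, not the book's example) and a finite-difference estimate of the partial derivative with respect to the entry x11: increasing x11 by 0.001 changes L by roughly 0.001 times that partial derivative.

```python
import numpy as np

def L(X: np.ndarray) -> float:
    # Stand-in scalar-valued function of a matrix; the book's example differs.
    return float(np.sum(np.tanh(X)))

X = np.array([[0.5, -1.0], [2.0, 0.1]])

# Partial derivative dL/dx11 via a central difference.
eps = 1e-6
E11 = np.zeros_like(X)
E11[0, 0] = 1.0
dL_dx11 = (L(X + eps * E11) - L(X - eps * E11)) / (2 * eps)

# Increasing x11 by 0.001 changes L by approximately 0.001 * dL/dx11,
# not 0.01 * dL/dx11 as the printed text suggests.
delta = 0.001
print(L(X + delta * E11) - L(X), delta * dL_dx11)
```
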
Printed | Page 84, 2nd-to-last paragraph
The default activation is said to be "Linear", but in the code snippet it is actually "Sigmoid". So in the code snippet on p. 91, the linear_regression neural network would need an explicit assignment of the activation to Linear(); otherwise Sigmoid() would be used. Note from the Author or Editor:
Anonymous | Oct 27, 2020
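
A sketch of the adjustment this erratum describes, assuming the book's NeuralNetwork, Dense, Linear, and MeanSquaredError classes from Chapter 3 (the exact argument names in the book's snippet may differ):

```python
# Pass Linear() explicitly so the snippet's default activation (Sigmoid) is not used.
linear_regression = NeuralNetwork(
    layers=[Dense(neurons=1, activation=Linear())],
    loss=MeanSquaredError(),
)
```
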
ePub | https://learning.oreilly.com/library/view/deep-learning-from/9781492041405/ch01.html, "John Cochrane, [Investments] Notes 2006"
The hyperlink for "John Cochrane, [Investments] Notes 2006" is broken. Note from the Author or Editor:
Gökçe Aydos | Sep 22, 2023
ePub | Appendix, "Matrix Chain Rule," at "it isn’t too hard to see that the partial derivative of this with respect to x1"
> it isn’t too hard to see that the partial derivative of this with respect to `x_1` Note from the Author or Editor:
Gökçe Aydos | Sep 28, 2023
Printed | Page 10, return statement in the chain_length_2() function
In the chain_length_2() function, the return statement is f2(f1(x)), but x is undefined. The return statement should be f2(f1(a)), where a is the function's input. Note from the Author or Editor:
Anonymous | Oct 08, 2019
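
A sketch of the corrected function, assuming the chapter's type aliases (a Chain is assumed to be a list of ndarray-to-ndarray callables); only the argument in the return line changes:

```python
from typing import Callable, List
from numpy import ndarray

Array_Function = Callable[[ndarray], ndarray]
Chain = List[Array_Function]

def chain_length_2(chain: Chain, a: ndarray) -> ndarray:
    '''Evaluate two functions in a row, in a "Chain".'''
    assert len(chain) == 2, "Length of input 'chain' should be 2"
    f1 = chain[0]
    f2 = chain[1]
    # The value fed through the chain is the argument `a`, not an undefined `x`.
    return f2(f1(a))
```
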
Page 10, Figure 1-7
The use of "f1 f2" to indicate the composite f2(f1(x)) is confusing and non-standard. If the author wanted to pipe the functions sequentially to create the composite above, there is a standard way of doing this; otherwise it should simply be noted. Note from the Author or Editor:
Bradford Fournier-Eaton | Nov 01, 2021
Page 11, the math formula
1) Page 11 - the math formula. Note from the Author or Editor:
Peter Petrov | Mar 26, 2021
Page 13, in the function chain_deriv_2
# df1/dx Note from the Author or Editor:
Pradeep Kumar | Oct 10, 2020
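
The erratum concerns a comment inside chain_deriv_2. Below is a sketch of the chain-rule computation with comments that match what each line actually computes; `deriv` is assumed to be the chapter's finite-difference derivative helper and is reproduced here so the snippet runs on its own.

```python
from typing import Callable, List
from numpy import ndarray

Array_Function = Callable[[ndarray], ndarray]
Chain = List[Array_Function]

def deriv(func: Array_Function, input_: ndarray, delta: float = 0.001) -> ndarray:
    # Central-difference approximation of the derivative at each input point.
    return (func(input_ + delta) - func(input_ - delta)) / (2 * delta)

def chain_deriv_2(chain: Chain, input_range: ndarray) -> ndarray:
    '''Chain rule for two nested functions: (f2(f1(x)))' = f2'(f1(x)) * f1'(x).'''
    assert len(chain) == 2, "This function requires 'Chain' objects of length 2"
    f1, f2 = chain

    # f1(x): the inner function evaluated on the input range
    f1_of_x = f1(input_range)

    # df1/dx: derivative of the inner function at x
    df1dx = deriv(f1, input_range)

    # df2/du evaluated at u = f1(x): derivative of the outer function
    df2du = deriv(f2, f1_of_x)

    # Chain rule: multiply the two derivatives elementwise
    return df1dx * df2du
```
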
Page 13, in the function chain_deriv_2
It is mentioned nowhere what plot_chain does. No code is given in the chapter for reference, nor is it clear what the function does, yet it is used throughout the first chapter. Note from the Author or Editor:
Pradeep Kumar | Oct 10, 2020
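
Since plot_chain is not defined in the chapter text, here is a minimal sketch of what such a helper might look like (an assumption about its behavior, not the book's implementation): it feeds an input range through the chained functions and plots the composite on a matplotlib axis.

```python
from typing import Callable, List
import numpy as np
import matplotlib.pyplot as plt
from numpy import ndarray

Array_Function = Callable[[ndarray], ndarray]
Chain = List[Array_Function]

def plot_chain(ax, chain: Chain, input_range: ndarray) -> None:
    # Apply each function in the chain in order, then plot the composite.
    output = input_range
    for f in chain:
        output = f(output)
    ax.plot(input_range, output)

# Example usage: square followed by sigmoid, plotted over [-3, 3).
fig, ax = plt.subplots()
chain = [lambda x: np.power(x, 2), lambda x: 1 / (1 + np.exp(-x))]
plot_chain(ax, chain, np.arange(-3, 3, 0.01))
```
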
Printed | Page 25, last paragraph
The text reads "...the gradient of X with respect to X," but it should read "...the gradient of N with respect to X." A gradient is a property of a function, not of a vector.
Jason Gastelum | Dec 25, 2020
Printed | Page 28, Chapter 1
"we compute quantities on the forward pass (here, just N)" Note from the Author or Editor:
Anonymous | Apr 29, 2020
Printed | Page 64, Table 2-1, derivative table for the neural network
The partial derivative dLdP = -(forward_info[y] - forward_info[p]) should be -2 * (forward_info[y] - forward_info[p]), just like the explanation on page 51.
Anonymous | Oct 25, 2019
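
A quick numerical check of the factor of 2, assuming a sum-of-squared-errors loss L = sum((y - p)^2); if the book's loss also divides by the number of observations, the gradient is simply rescaled by the same constant.

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])
p = np.array([0.8, 2.5, 2.9])

def loss(preds: np.ndarray) -> float:
    # Sum-of-squared-errors loss between targets y and predictions preds.
    return float(np.sum((y - preds) ** 2))

# Gradient claimed in the erratum: dL/dP = -2 * (y - p)
analytic = -2 * (y - p)

# Finite-difference gradient, one prediction at a time.
eps = 1e-6
numeric = np.zeros_like(p)
for i in range(len(p)):
    e = np.zeros_like(p)
    e[i] = eps
    numeric[i] = (loss(p + e) - loss(p - e)) / (2 * eps)

# The two gradients agree, confirming the factor of 2.
print(analytic)
print(numeric)
```
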
Printed | Page 65, 2nd
Note from the Author or Editor:
Eugen Grosu | Jan 03, 2021
Printed, PDF | Page 88, section heading
The heading is the same as the chapter title *and* the book title. Note from the Author or Editor:
Anonymous | Feb 29, 2020
Printed | Page 91, NeuralNetwork class invocations in the code
The NeuralNetwork class, when used on page 91, is given a learning_rate parameter, but there is no learning_rate in the __init__ function for that class, and no methods in the class use the learning_rate. This is not surprising, as the learning rate is something the Optimizer class (introduced on the following pages) cares about. Note from the Author or Editor:
David Mankins | Sep 13, 2023
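
A minimal sketch of the division of labor the erratum describes, assuming a simplified SGD optimizer that owns the learning rate and reads parameters and gradients from the network (the method names params() and param_grads() are illustrative, not necessarily the book's exact API):

```python
class SGD:
    '''Stochastic gradient descent; the learning rate lives here, not in NeuralNetwork.'''
    def __init__(self, lr: float = 0.01) -> None:
        self.lr = lr
        self.net = None  # attached later, for example by a Trainer

    def step(self) -> None:
        # Update every parameter in place using its gradient and the learning rate.
        for param, param_grad in zip(self.net.params(), self.net.param_grads()):
            param -= self.lr * param_grad

# Usage: the network is built without a learning_rate argument;
# the rate is passed to the optimizer instead, e.g. SGD(lr=0.01).
```
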
Page 94, __init__ method of the Trainer class
The __init__ method is missing self.optim = optim before the setattr line. Note from the Author or Editor:
Rodrigo Stevaux | Oct 07, 2020
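
A sketch of the corrected constructor as the erratum describes it; the setattr line is assumed to attach the network to the optimizer, and type annotations are omitted so the snippet stands alone.

```python
class Trainer:
    '''Ties together a NeuralNetwork and an Optimizer.'''
    def __init__(self, net, optim) -> None:
        self.net = net
        self.optim = optim                    # the line the erratum says is missing
        setattr(self.optim, 'net', self.net)  # fails if self.optim was never assigned
```
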
ePub | Page 99, 1st paragraph
In the Lincoln library, required to run the code for chapter 4, 'lincoln.utils.np_utils' does not contain the function 'exp_ratios'. Note from the Author or Editor:
Steven Kaminsky | Jan 14, 2020