Errata
The errata list records errors found after the product was released, together with their corrections.
The following errata were submitted by our customers and have not yet been confirmed or rejected by the author or editor. They represent solely the opinion of the customer.
Color Key: Serious technical mistake | Minor technical mistake | Language or formatting error | Typo | Question | Note | Update
Version | Location | Description | Submitted by | Date submitted |
---|---|---|---|---|
Printed | Page 81, equation of the PDF | In the equation of the PDF, the square root of (2*pi) should be in the denominator. | Siqi Li | Jul 06, 2022 |
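For reference, a minimal sketch of the corrected density, with the sqrt(2*pi) factor in the denominator of the coefficient (standard normal defaults assumed):

```python
import math

def normal_pdf(x, mean=0.0, std=1.0):
    # The sqrt(2*pi) factor belongs in the denominator of the coefficient
    coeff = 1.0 / (std * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mean) ** 2) / (2 * std ** 2))

# The standard normal density peaks at 1 / sqrt(2*pi), about 0.3989
print(normal_pdf(0.0))
```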
| Chapter 2, Question 5 | In question 5: "You flipped a coin 19 times and got heads 15 times and tails 4 times." In Appendix B, question 5: "from scipy.stats import beta | Mark Oliver | Jul 08, 2022 |
ePub | Chapter 1, Order of Operations, math formula | An Amazon review indicates a formatting error in the mathematical expressions, likely in the Kindle version. | Thomas Nield | Aug 03, 2022 |
Printed | Chapter 1, page 38, sidebar, 3rd paragraph | "Riemann Sums" is misspelled as "Reimann Sums". | Rolf Würdemann | Nov 30, 2022 |
ePub | Chapter 3, The probability density function, 1st paragraph | I looked up the formula and it should be (including some HTML for the formatting: | Linda Pescatore | Jan 01, 2023 |
ePub | Confidence intervals, second paragraph and Figure 3.15 | This section is about finding the 95% confidence interval for the sample mean of 64.408 relative to the population mean. The mean should be the center of the normal distribution plotted in Figure 3.15, but the figure shows a mean of 18; it is unclear where this number came from. | Linda Pescatore | Jan 02, 2023 |
Other Digital Version | Chapter 2, Conditional Probability and Bayes' Theorem, Equation 8 | When applying Bayes' Theorem to the coffee/cancer problem, P(Coffee) should be in the denominator of P(Cancer|Coffee). It currently multiplies the numerator: P(Coffee|Cancer)*P(Coffee). The corrected numerator should be P(Coffee|Cancer)*P(Cancer). | Tales Ishida | Apr 18, 2023 |
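As a sanity check on the corrected form, a small sketch (the probability values here are hypothetical stand-ins, not necessarily the book's figures):

```python
# Hypothetical inputs -- substitute the book's actual figures
p_cancer = 0.005               # P(Cancer)
p_coffee = 0.65                # P(Coffee)
p_coffee_given_cancer = 0.85   # P(Coffee|Cancer)

# Corrected Bayes' Theorem:
# P(Cancer|Coffee) = P(Coffee|Cancer) * P(Cancer) / P(Coffee)
p_cancer_given_coffee = p_coffee_given_cancer * p_cancer / p_coffee
print(p_cancer_given_coffee)
```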
| Chapter 5, page 181, 1st paragraph | The square root is missing from the "standard error of the estimate" formula. | Muhammad Umar Amanat | May 04, 2023 |
Printed | Page 31, Example 1-24 | A line is missing in the book, which causes the error "x is not defined" to appear. I added the line | Nanda | May 19, 2023 |
| Example 7-11, Derivatives of Weights & Biases | Dear Sir, | Anonymous | Jul 19, 2023 |
Printed | Pages 96-102, on going through the examples | Both in PDF and ePub. | Larisa Seward | Dec 03, 2023 |
Printed | Page 3, last paragraph | The order of operations paragraph omits the critical, but often forgotten, "from left to right." PEMDAS, while helpful, is only correct if one applies multiplication and division from left to right, and addition and subtraction from left to right; omitting this makes the mnemonic unreliable. The example on the following page says "The ordering of these two is swappable," which is technically incorrect: if you divide 25 by 5, you get 5; if you multiply that by 2, you get 10; and if you then subtract 4, you get the wrong answer. PEMDAS only works when multiplication and division are executed from left to right. | Daniel Caron | Aug 07, 2022 |
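The left-to-right rule can be illustrated directly in Python, which evaluates operators of equal precedence left to right (the expression below is chosen for illustration and is not the book's):

```python
# Division and multiplication share precedence and evaluate left to right
left_to_right = 8 / 2 * 4    # parsed as (8 / 2) * 4 = 16.0
grouped = 8 / (2 * 4)        # forcing multiplication first gives 1.0 instead
print(left_to_right, grouped)
```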
Printed | Pages 9-10; p9: headline of example; p10: description of graph | In the headline of Example 1-8 and also in the description of the graph presented in Figure 1-3, the function x^2 + 1 is called an "exponential" function. To my knowledge, an exponential function has the variable in the exponent, e.g. exp(x), while x^2 + 1 is a polynomial, in this case a polynomial of second degree, also called a quadratic function. | Rolf Würdemann | Nov 30, 2022 |
Printed | Page 33, Integrals, 2nd paragraph | "Riemann Sum" is misspelled as "Reimann Sum". | Christoph Jätz | Oct 02, 2022 |
ePub | Page 40, Exercise 1 | 62.6738 is more precisely expressed as 313369/5000 by SymPy. | Austin Smith | Oct 04, 2022 |
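The same exact-rational result can be reproduced with the standard library's fractions module (used here instead of SymPy so the check is self-contained):

```python
from fractions import Fraction

# 62.6738 written as an exact rational reduces to 313369/5000
exact = Fraction(626738, 10000)
print(exact)  # 313369/5000
```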
ePub | Page 41, 3rd paragraph | In reference to using probability with data and statistics, the last sentence of the third paragraph says "We will cover that in Chapter 4 on statistics and hypothesis testing." But hypothesis testing appears in Chapter 3, around page 96. | Austin Smith | Oct 04, 2022 |
Printed | Page 43, second paragraph | The sentence "Conversely, you can turn an odds into a probability..." is followed by an example of turning a probability into odds. The sentence should therefore read, "Conversely, you can turn a probability into an odds." | Daniel Caron | Aug 07, 2022 |
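A minimal sketch of both conversions, matching the corrected sentence:

```python
def probability_to_odds(p):
    # odds in favor of an event: p / (1 - p)
    return p / (1 - p)

def odds_to_probability(o):
    # inverse conversion: o / (1 + o)
    return o / (1 + o)

odds = probability_to_odds(0.70)          # 0.7 / 0.3, about 2.333
print(odds, odds_to_probability(odds))    # round-trips back to 0.70
```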
Printed | Page 48, middle | The labels for the probabilities P(A) and P(B), i.e., Cancer and Coffee, are switched. | Christoph Jätz | Oct 02, 2022 |
Printed | Page 48, Bayes' Theorem example | The coffee and cancer example accidentally switches the names of the coffee and cancer variables in the written equation. P(Coffee) should be the denominator, not P(Cancer). | Chester Hitz | Nov 27, 2022 |
| Page 48, 3rd paragraph | The Bayes' Theorem formula is given as | Muhammad Umar Amanat | Mar 24, 2023 |
Printed | Page 53, the aside with the bird | The sentence "Turn to Appendix A to learn how to build the binomial distribution from scratch without scikit-learn." should read: "Turn to Appendix A to learn how to build the binomial distribution from scratch without SciPy." | Daniel Caron | Aug 07, 2022 |
Printed | Page 53, first paragraph | "We iterate each number of successes x" should read "We iterate each number of successes k." | Daniel Caron | Aug 07, 2022 |
Printed | Page 69, sidebar | The Straight Dope wasn't a publication of its own; it was a syndicated newspaper column started by the Chicago Reader. | Andy Lester | Nov 14, 2022 |
| Page 78, 2nd paragraph | Printed: "The standard deviation for a sample and mean are specified by s and σ, respectively." | Stefan Vanli | Aug 24, 2023 |
Printed | Page 91, 4th paragraph | I might be wrong about this, but from what I understood, the Central Limit Theorem says that the standard deviation of the sample means (the standard error) equals the population standard deviation divided by the square root of the sample size. | Yaroslav Skoryk | Jul 22, 2023 |
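In symbols, the submitter's reading of the theorem is that the standard error of the mean is σ / √n; a one-line check (values hypothetical):

```python
from math import sqrt

population_std = 4.0   # hypothetical population standard deviation
n = 31                 # hypothetical sample size

# Standard error of the mean per the Central Limit Theorem
standard_error = population_std / sqrt(n)
print(standard_error)
```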
| Page 95, inside the code of "def confidence_interval(p, sample_mean, sample_std, n)" | Inside the confidence_interval function, lower_ci should be computed by subtracting the margin from sample_mean, but it is instead added to sample_mean, which is wrong. | Muhammad Umar Amanat | Apr 04, 2023 |
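A sketch of the corrected function using only the standard library (the book's version uses SciPy; NormalDist here is a stand-in, and the sample values below are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def confidence_interval(p, sample_mean, sample_std, n):
    # z-score enclosing the central proportion p of the standard normal
    z = NormalDist().inv_cdf(0.5 + p / 2.0)
    margin = z * sample_std / sqrt(n)
    lower_ci = sample_mean - margin   # subtracted, per the erratum
    upper_ci = sample_mean + margin
    return lower_ci, upper_ci

print(confidence_interval(0.95, 64.408, 2.05, 31))
```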
Printed | Page 101, last paragraph | The sentence "Since 16 is 4 days below the mean, we will also capture the area above 20, which is 4 days above the mean." should read "Since 16 is 2 days below the mean, we will also capture the area above 20, which is 2 days above the mean." | Daniel Caron | Aug 07, 2022 |
Printed | Page 101, bottom paragraph | That paragraph says 16 is 4 days below the mean, but the mean is 18, and 16 is 2 days below 18. It makes the same mistake in the other direction, saying 20 is 4 days above the mean; it is two. | Eric Osborne | Oct 03, 2023 |
| Page 110, 3rd paragraph | The y value of the vector should be 260000, not 2600000, as the valuation figure used in the example is $260,000. | Kaushalya Samarasekera | Sep 10, 2022 |
Printed | Page 113, Figure 4-3 | The graph of the three-dimensional vector could be improved, in my opinion, as i, j, and k in the image do not correspond to lengths 4, 1, and 2. An actual 3D graph would also be nicer, since we are talking about three dimensions. | Daniel Caron | Aug 07, 2022 |
Printed | Page 114, Figure 4-4 | In Figure 4-4, the numerical representations of both vectors include a negative x-value, when in fact the arrows represent positive x-values. The subsequent representations of the same vectors show positive x-values. | Anonymous | Nov 28, 2022 |
Printed | Page 117, Figure 4-8 | The book states 0.5v = [3, 1.5], when it should state 0.5v = [1.5, 0.5]. | Daniel Caron | Aug 07, 2022 |
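The corrected scaling can be verified in a couple of lines (plain Python shown; with NumPy, 0.5 * np.array([3, 1]) gives the same result):

```python
v = [3, 1]
scaled = [0.5 * x for x in v]   # scale each component by 0.5
print(scaled)  # [1.5, 0.5]
```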
Printed | Page 124, first paragraph | In the first paragraph, "shear" is not described, while all the other transforms are. | Daniel Caron | Aug 07, 2022 |
Printed | Page 127, Figure 4-17 | The i_hat and j_hat values should be the other way around in the right-hand example for it to work as a visualisation of Example 4-9. | Anonymous | Mar 08, 2023 |
Printed | Page 129, formula in the 3rd paragraph under the Matrix Multiplication section | In the 2x2 matrix multiplication, the term dy should be dg. | Kirk Damron | Jun 30, 2022 |
| Page 130, 1st code snippet | The code would read better if you swapped the two transformation definitions and did "transformation1 @ transformation2" rather than the opposite. This would also align better with the textual explanation that precedes the code snippet. | Kaushalya Samarasekera | Sep 11, 2022 |
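To make the ordering concern concrete, a small sketch with a hand-rolled 2x2 product (the matrices are illustrative, not the book's): applying one transformation and then another composes with the first-applied matrix as the rightmost factor.

```python
def matmul2(a, b):
    # 2x2 matrix product: each entry is a row of a dotted with a column of b
    return [
        [a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
        [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]],
    ]

rotation = [[0, -1], [1, 0]]   # 90-degree counterclockwise rotation
shear = [[1, 1], [0, 1]]       # horizontal shear

# "rotate, then shear" = shear applied to the result of rotation,
# so rotation sits on the right of the product
composed = matmul2(shear, rotation)
print(composed)  # [[1, -1], [1, 0]]
```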
| Page 130, 2nd code snippet | The variable should be named 'sheared' instead of 'sheered'. | Kaushalya Samarasekera | Sep 11, 2022 |
Printed | Page 135, top graph | The values for i_hat and j_hat are swapped; i_hat should be [3, -1.5] and j_hat should be [2, -1]. | Eric Osborne | Oct 03, 2023 |
Printed | Pages 137, 139, 140, inverse matrix (A^-1) | The inverse matrix needs to have -4/3 in the right center, not 4/3. | Rpf | Mar 08, 2023 |
Printed | Page 151, heading at bottom | The heading reads "Basic Linear Regression with SciPy," when it should read "Basic Linear Regression with scikit-learn." | Daniel Caron | Aug 07, 2022 |
Printed | Page 160, top of page | The top of page 160 should include the heading "Matrix Decomposition," as this page parallels the elaboration of techniques initially listed in the 3rd paragraph of page 157: "Closed Form, Matrix Inversion, Matrix Decomposition, Gradient Descent." | Daniel Caron | Aug 07, 2022 |
Printed | Page 181, formula of the standard error of the estimate | The square root is missing in the formula. | Siqi Li | Jul 19, 2022 |
| Page 181, 1st paragraph | The square root is missing from the "standard error of the estimate" formula. | Muhammad Umar Amanat | May 04, 2023 |
Printed | Page 183, formula of the margin of error | The 'x_0 + x_mean' part should be 'x_0 - x_mean'. | Siqi Li | Jul 19, 2022 |
Printed | Page 198, the minor heading | The heading "Using SciPy" should be "Using scikit-learn." | Daniel Caron | Aug 07, 2022 |
Printed | Page 198, Example 6-3 | When we turn off the penalty for the logistic regression: FutureWarning: `penalty='none'` has been deprecated in 1.2 and will be removed in 1.4. To keep the past behaviour, set `penalty=None`. | Maya | Jul 25, 2023 |
Printed | Page 199, entire page | There are three references to SciPy where these references should be to scikit-learn, as they are different libraries. | Daniel Caron | Aug 07, 2022 |
Printed | Page 200, bird aside text | The three instances of "SciPy" should be changed to "scikit-learn." | Daniel Caron | Aug 07, 2022 |
Printed | Page 201, formula of the joint likelihood | The second factor is missing '1 - '. | Siqi Li | Jul 22, 2022 |
Printed | Page 213, formula of the log likelihood | log is missing in the formula. | Siqi Li | Aug 02, 2022 |
Printed | Page 214, Example 6-13 | The log likelihood in the R^2 formula should be changed from -0.5596 to -14.341. | Maya | Jul 25, 2023 |
Printed | Page 217, first formula | The parentheses around the subtraction of the log likelihood fit and the log likelihood are missing in the formula for the chi-square value. | Julian K. | Mar 13, 2023 |
Printed | Page 221, Figure 6-18 | Predicted and actual should be the other way round in the confusion matrix; negative predictive value should be TN/(TN+FP) instead of TN/(TP+FN). | Siqi Li | Aug 02, 2022 |
Printed | Page 221, Figure 6-18 | Negative predictive value should be TN/(TN+FN) instead of TN/(TP+FN). | Siqi Li | Aug 02, 2022 |
Printed | Page 221, Figure 6-18 | The author uses both sensitivity and recall in the figure, but does not point out that they are the same thing. | RP | Mar 23, 2023 |
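The metrics discussed in these three reports can be written out from raw counts (the counts below are hypothetical):

```python
# Hypothetical counts from a 2x2 confusion matrix
tp, fp, fn, tn = 50, 10, 5, 100

sensitivity = tp / (tp + fn)   # the same quantity as recall
specificity = tn / (tn + fp)
npv = tn / (tn + fn)           # negative predictive value: TN / (TN + FN)
print(sensitivity, specificity, npv)
```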
Printed | Page 222, end of code, before the major heading | The author provides code to demonstrate a confusion matrix but does not include the output of the final print statement (which should print out a confusion matrix). This is surprising, since all previous examples do show the output of their print statements. | Daniel Caron | Aug 07, 2022 |
Printed | Page 225, Class Imbalance section | In the third paragraph, the author states that using the 'stratify' option in scikit-learn can duplicate samples in the minority class until it is equally represented in the dataset. However, that is not what a stratified split does: using 'stratify' retains the same class distribution in the train and test sets as in the original dataset; it does not duplicate samples. | Siqi Li | Aug 02, 2022 |
Printed | Page 225, Class Imbalance section | In the second paragraph, the author states that ROC/AUC can be used when classes are imbalanced, which might be misleading. ROC curves should be used when there are roughly equal numbers of observations for each class; the area under the precision-recall curve (PR-AUC) is more suitable than ROC-AUC for highly imbalanced data. | Siqi Li | Aug 02, 2022 |
Printed | Page 236, Figure 7-8 | The figure shows the logistic function as the activation function of the output layer, while the neural network is supposed to solve a classification problem with 10 classes (digits 0-9). | Yaroslav Skoryk | Aug 07, 2023 |
Printed | Page 238, Example 3-7 | In order to obtain the values the author demonstrates in the calculations on pages 241 f., the code for calculating z1 needs to be as follows: | RP | Mar 24, 2023 |
Printed, PDF | Page 242, 3rd paragraph | In the 2nd line of the 3rd paragraph it must be "dark (0)" instead of "dark (1)". | Frank Langenau | May 18, 2023 |
Printed | Page 244, second major paragraph | The sentence "Let's focus on finding the relationship on a weight from the output layer W_2 and the cost function C." should perhaps read something like: "Let's focus on finding the relationship between a weight (W_2) from the output layer and the cost function C." | Daniel Caron | Aug 07, 2022 |
Printed | Page 245, Example 7-9 | It should say W2, not W1. | Anonymous | Mar 29, 2023 |
Printed | Page 252, first paragraph | The sentence "The activation argument specifies the hidden layer" should read something like "The activation argument specifies which activation function to apply to the nodes in the hidden layers." | Daniel Caron | Aug 07, 2022 |
| Page 310, Appendix B, Chapter 2 Solutions, Exercise 2 | The union probability answer given in the book is: | Fotis Koutoulakis | Sep 24, 2022 |