Errata for Learning TensorFlow

The errata list is a list of errors and their corrections that were found after the product was released. If the error was corrected in a later version or reprint, the date of the correction will be displayed in the column titled "Date Corrected".

The following errata were submitted by our customers and approved as valid errors by the author or editor.


Version | Location | Description | Submitted by | Date submitted | Date corrected
Printed
Page 11
last line

print ans
should be replaced with
print(ans)

As of Python 3.0, the "print" statement was replaced by the function "print()".

Kenneth T. Hall  Sep 11, 2017  Sep 15, 2017
PDF
Page 16
first equation

[xw0...xw9]=xW
->
[xw0, ..., xw9]=xW

Note from the Author or Editor:
Changed accordingly

Anonymous  Oct 31, 2017  Apr 13, 2018
PDF
Page 21
Figure 2-3

The figure is supposed to represent the model described in the text.
However, the graph includes a bias variable node "b" while the text doesn't mention any bias.

Note from the Author or Editor:
add to the image caption:
"Here, b is a bias term that could be added to the model"

Paolo Baronti  Jan 24, 2018  Apr 13, 2018
PDF
Page 26
Table 3-1

The definition of the subtraction shortcut as
tf.subtract() a-b Subtracts a from b, element-wise.

should be

tf.subtract() a-b Subtracts b from a, element-wise.
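
A quick sanity check of the corrected wording, as a minimal TensorFlow 1.x sketch with made-up values:

import tensorflow as tf

a = tf.constant([5.0, 7.0])
b = tf.constant([2.0, 3.0])
with tf.Session() as sess:
    # Element-wise a - b: subtracts b from a.
    print(sess.run(tf.subtract(a, b)))  # [3. 4.]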

Paolo Baronti  Jan 07, 2018  Apr 13, 2018
Printed
Page 36
2nd from the bottom

A = tf.constant ...

print(a.get_shape())

should be print(A.get_shape())

Yevgeniy Davletshin  Sep 19, 2017  Apr 13, 2018
PDF
Page 37
Middle of page, for code

The author creates a 2x2x3 array.

For readers trying to tell the dimensions apart, the two dimensions of length 2 are easily confused. It would be better to create an array whose dimension lengths are all different, such as 2, 3, and 4.

Note from the Author or Editor:
Thanks for the suggestions! We will try and change this in future versions of the book to make it easier to follow.
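
For illustration, a minimal sketch with made-up values of an array whose dimension lengths are all distinct:

import numpy as np
import tensorflow as tf

# 2x3x4: every axis has a different length, so axis 0 (length 2),
# axis 1 (length 3), and axis 2 (length 4) cannot be confused.
A = tf.constant(np.arange(24).reshape(2, 3, 4))
print(A.get_shape())  # (2, 3, 4)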

Clem Wang  Sep 23, 2017  Apr 13, 2018
Printed
Page 41
2nd from the top

In the formula f(xi) = w.T xi + b the transpose is applied to w, but in the code it is applied to x instead:

y_pred = tf.matmul(w,tf.transpose(x)) + b

Note from the Author or Editor:
add this text below the equation:

(w is initialized as a row vector; therefore, transposing x will yield the same result as in the equation above)
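
To see why the two forms agree, a small NumPy sketch with made-up shapes:

import numpy as np

w = np.array([[0.3, 0.5, 0.1]])  # w initialized as a row vector, shape (1, 3)
x = np.random.randn(5, 3)        # 5 samples, 3 features each
b = -0.2
# matmul(w, x.T) computes w.T xi for every sample at once:
y_pred = np.matmul(w, x.T) + b   # shape (1, 5), one prediction per sample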

Yevgeniy Davletshin  Sep 19, 2017  Apr 13, 2018
PDF
Page 45
line 5 (formula)


I'm guessing that the LaTeX formula used was this:
$$ H(p, q) = - \Sigma_x p(x) \log q(x) $$

The correct form, which places the x under the summation sign rather than making it a subscript of a capital Sigma, is:
$$ H(p, q) = - \sum_x p(x) \log q(x) $$

Note from the Author or Editor:
Thanks!

Clem Wang  Sep 23, 2017  Apr 13, 2018
Printed
Page 46
Logistic Regression

The sigmoid function is written as

Pr(y_i = 1 | x_i) = 1/(1 + exp(w*x_i + b))

but it should instead be written as

Pr(y_i = 1 | x_i) = 1/(1 + exp(-w*x_i - b))
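
A short sketch with made-up scores, showing why the sign matters:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# With the corrected sign, a large positive score w*x_i + b drives
# Pr(y_i = 1 | x_i) toward 1, as expected:
print(sigmoid(5.0))   # ~0.993
print(sigmoid(-5.0))  # ~0.007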


THEOPHILUS SIAMEH  Dec 07, 2017  Apr 13, 2018
PDF
Page 50
line 9 ( or line 2 of code on the page)

The line is missing a minus sign for the first term. The text incorrectly has:

loss = y_true*tf.log(y_pred) - (1-y_true)*tf.log(1-y_pred)

but it should be:

loss = - y_true*tf.log(y_pred) - (1-y_true)*tf.log(1-y_pred)



Compare to the documentation:

https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits
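
A minimal TF 1.x sketch of the corrected loss with hypothetical tensors, clipping the predictions so log() stays finite:

import tensorflow as tf

y_true = tf.constant([1.0, 0.0, 1.0])
y_pred = tf.clip_by_value(tf.constant([0.9, 0.2, 0.7]), 1e-7, 1.0 - 1e-7)
loss = tf.reduce_mean(
    -y_true * tf.log(y_pred) - (1 - y_true) * tf.log(1 - y_pred))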


Clem Wang  Sep 24, 2017  Apr 13, 2018
PDF
Page 57
2nd release, 4th paragraph

In
"we will use a tf.Variable and pass one value for train (.5) and another for test (1.0)."

tf.Variable should be tf.placeholder instead.
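
The intended pattern, sketched with illustrative names:

import tensorflow as tf

keep_prob = tf.placeholder(tf.float32)  # fed a new value per run, so a placeholder
# dropped = tf.nn.dropout(some_layer, keep_prob=keep_prob)
# sess.run(train_step, feed_dict={keep_prob: 0.5})  # training
# sess.run(accuracy, feed_dict={keep_prob: 1.0})    # testing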




Paolo Baronti  Jan 28, 2018  Apr 13, 2018
Printed
Page 60
2nd paragraph, 1st sentence

"Next we have two consecutive layers of convolution and pooling, each with 5x5 convolutions and 64 feature maps, followed by a single fully connected layer with 1,024 units."

The first convolution layer has only 32 feature maps, not 64 as written.

Kenneth T. Hall  Oct 16, 2017  Apr 13, 2018
Printed
Page 60
2nd line of code near page bottom

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y_conv,y_))

should be

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_))

Note from the Author or Editor:
Thanks! This will be fixed in future versions.

Kenneth T. Hall  Oct 16, 2017  Apr 13, 2018
Printed
Page 61
1st paragraph (after the code). Also, in the footnote on the same page.

"epoc" should be "epoch"

Kenneth T. Hall  Oct 16, 2017  Apr 13, 2018
Printed, PDF, ePub
Page 68
code example at top of page

code reads:
conv3_drop = tf.nn.dropout(conv3_flat, keep_prob=keep_prob)

full1 = tf.nn.relu(full_layer(conv3_flat, F1))

This doesn't make sense; it should read:

conv3_drop = tf.nn.dropout(conv3_flat, keep_prob=keep_prob)

full1 = tf.nn.relu(full_layer(conv3_drop, F1))

Note from the Author or Editor:
Thank you for catching this! We have already fixed it in the git repo, and will make the change in future versions of the book.

Jeff Kriske  Dec 12, 2017  Apr 13, 2018
Printed
Page 93
Stacking multiple LSTMs


The code does not work: ValueError: Dimensions must be equal ...

Instead of using the lstm_cell instance multiple times:

cell = tf.contrib.rnn.MultiRNNCell(cells=[lstm_cell]*num_LSTM_layers,state_is_tuple=True)

one instance per LSTM layer should be created, for example:

cell = tf.contrib.rnn.MultiRNNCell(cells=[tf.contrib.rnn.BasicLSTMCell(hidden_layer_size, forget_bias=1.0) for i in range(num_LSTM_layers)], state_is_tuple=True)

Note from the Author or Editor:
On page 93, under Stacking multiple LSTMs, replace these two lines:

lstm_cell = tf.contrib.rnn.BasicLSTMCell(hidden_layer_size, forget_bias=1.0)
cell = tf.contrib.rnn.MultiRNNCell(cells=[lstm_cell]*num_LSTM_layers, state_is_tuple=True)

With:

lstm_cell_list = [tf.contrib.rnn.BasicLSTMCell(hidden_layer_size, forget_bias=1.0) for ii in range(num_LSTM_layers)]
cell = tf.contrib.rnn.MultiRNNCell(cells=lstm_cell_list, state_is_tuple=True)

Michal Steuer  Mar 16, 2018  Apr 13, 2018
PDF
Page 131
Last paragraph

here we simply specify 'categorical_crossentreopy'
-->
here we simply specify 'categorical_crossentropy'

Edberg  Feb 01, 2018  Apr 13, 2018
Printed
Page 134
second line of code starting print...

testY --> Y_test

Michal Steuer  Mar 29, 2018  Apr 13, 2018
PDF
Page 158
3rd paragraph

'Note that if we were to run xs.eval() one more time'

xs.eval() --> x.eval()

Note from the Author or Editor:
Typo:
Replace xs.eval() with x.eval()

Edberg  Feb 18, 2018  Apr 13, 2018
PDF
Page 204
middle of the code example

there is a line with only one '.'

Edberg  Feb 25, 2018  Apr 13, 2018