Errata for Generative AI on AWS

The errata list is a list of errors and their corrections that were found after the product was released. If the error was corrected in a later version or reprint, the date of the correction will be displayed in the column titled "Date Corrected".

The following errata were submitted by our customers and approved as valid errors by the author or editor.


Chapter 4. Memory and Compute Optimizations
Section: Memory Challenges, last paragraph

Hi Chris,

From your book Generative AI on AWS, Chapter 4, Memory and Compute Optimizations:

Figure 4-2 says: to train a 1-billion-parameter model, you will need approximately 24 GB of GPU RAM at 32-bit full precision.

whereas the last paragraph of the same section says:
By quantizing your model weights from 32-bit full precision down to 16-bit half precision, you can quickly reduce your 1-billion-parameter-model memory requirement down 50% to only 2 GB for loading and 40 GB for training.

For 1 Billion parameter model, shouldn't the memory at 16-bit precision be approximately half of 24 GB (at 32-bit precision) = 12 GB ?

Please confirm. Thanks !

Note from the Author or Editor:
Yes, you are correct. The last sentence of that paragraph should say "and 12 GB for training." instead of "and 40 GB for training."

Jagadish Kavuturu  Dec 06, 2023 
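For reference, the corrected arithmetic above can be sketched in a few lines (an illustrative approximation only; the 4-bytes-per-parameter figure and the roughly 6x training multiplier for optimizer states, gradients, and activations follow the book's rough estimates):

```python
# Rough GPU memory estimate for a 1-billion-parameter model
# (illustrative sketch; multipliers are approximations, not exact values).

def gpu_memory_gb(params_billion, bytes_per_param, training=False):
    """Approximate GPU RAM in GB needed to load or train a model.

    Training needs roughly 6x the weight memory to also hold optimizer
    states, gradients, and activations alongside the weights.
    """
    weights_gb = params_billion * bytes_per_param  # 1e9 params ~= 1 GB per byte/param
    return weights_gb * 6 if training else weights_gb

# 32-bit full precision (4 bytes per parameter)
print(gpu_memory_gb(1, 4))                 # 4 GB to load
print(gpu_memory_gb(1, 4, training=True))  # 24 GB to train

# 16-bit half precision (2 bytes per parameter)
print(gpu_memory_gb(1, 2))                 # 2 GB to load
print(gpu_memory_gb(1, 2, training=True))  # 12 GB to train, not 40 GB
```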
Chapter 6: Target Modules and Layers
The picture after the third paragraph

The text describes a 64*512 matrix; with LoRA and rank 4 you would have a matrix A = 4*64 and a matrix B = 512*4. The error is in the next image: it shows B as a 64*4 matrix (in the representation of the two low-rank matrices).

Note from the Author or Editor:
Figure 6-5 should be changed to show the horizontal dimension of B = 512 instead of 64.

Described more generically, the number below the blue B bar in the upper left (right below "Example: rank = 4") should be 512 instead of 64.

Gonzalo albornoz  Dec 07, 2023 
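The shapes at issue can be double-checked with a small sketch (plain arithmetic for illustration; the variable names are not from the book's code):

```python
# LoRA: instead of updating a frozen 64 x 512 weight matrix directly,
# train two low-rank matrices whose product has the same shape:
# A with shape (rank x 64) and B with shape (512 x rank).

d_in, d_out, rank = 64, 512, 4

full_params = d_in * d_out                # 32,768 weights in the frozen matrix
lora_params = rank * d_in + d_out * rank  # 4*64 + 512*4 = 2,304 trainable weights

print(full_params, lora_params)
print(f"trainable fraction: {lora_params / full_params:.1%}")  # 7.0%
```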
Page 28
Prompt: <text></text> block

Yup! Hey I, uh, forgot where you live."
has a redundant double quotes(") at its end.

Note from the Author or Editor:
Pls fix

Ryuichi Kubuki  Feb 26, 2024 
Page 29
The first line

XXX: Yup! Hey I, uh, forgot where you live."
has a redundant double quotes (") at its end.

Note from the Author or Editor:
Pls fix

Ryuichi Kubuki  Feb 26, 2024 
Page 85
Table 5-2

#Person2#: May I see some identification, sir,
please? #Person1#: Sure. Here you go.

has "#Person1#: Sure. Here you go." in the same line as the previous utterance by Person 2, is it intended?

Note from the Author or Editor:
Not intended. Pls fix

Ryuichi Kubuki  Feb 26, 2024 
Page 91
5th paragraph

>ROUGE calculates how well the input (dialogue, in this case) compares to the generated output (summary, in this case).

Shouldn’t ROUGE instead compare the human-created baseline example with the generated output?

Note from the Author or Editor:
Yes pls change

Ryuichi Kubuki  Feb 26, 2024 
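For context, ROUGE scores a generated summary against a human-written reference summary, not against the input dialogue. Here is a hand-rolled ROUGE-1 recall sketch (for illustration only; the example strings are made up, and real evaluations typically use a library such as rouge_score):

```python
from collections import Counter

def rouge_1_recall(reference, generated):
    """Fraction of reference unigrams that also appear in the generated text."""
    ref_counts = Counter(reference.lower().split())
    gen_counts = Counter(generated.lower().split())
    overlap = sum(min(n, gen_counts[w]) for w, n in ref_counts.items())
    return overlap / sum(ref_counts.values())

# Hypothetical human baseline vs. model output, for illustration.
human_baseline = "person1 asks person2 for identification"
model_summary = "person1 asks to see identification"

print(round(rouge_1_recall(human_baseline, model_summary), 2))  # 0.6
```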
Page 92
3rd paragraph

The link for "Beyond the Imitation Game (BIG-bench)" goes to GLUE page while it should go to BIG-bench page.

Note from the Author or Editor:
Please fix

Ryuichi Kubuki  Feb 26, 2024 
Page 98
Table 6-1

Increased storage requirements model
should be
Increased model storage requirements

Note from the Author or Editor:
Please fix

Ryuichi Kubuki  Feb 26, 2024 
Page 141
At the end of the Python code

# Save the quantize model to disk
should be
# Save the quantized model to disk

Note from the Author or Editor:
Please update

Ryuichi Kubuki  Feb 26, 2024 
Page 146
2nd code example

s3_model_path = "s3://<your-private-s3-location/"
should be
s3_model_path = "s3://<your-private-s3-location>/"

Note from the Author or Editor:
Please fix

Ryuichi Kubuki  Feb 26, 2024 
Page 179
Prompt template

extra_bananas = 2
should be
extra_bananas = 3

Note from the Author or Editor:
Please fix

Ryuichi Kubuki  Feb 26, 2024 
Page 216
Output

Output has a redundant double quote (“) at the end

Note from the Author or Editor:
Please fix

Ryuichi Kubuki  Feb 26, 2024 
Page 219
2nd section

The section title “Forward Diffusion” seems incorrect, as it is not directly relevant to this section. (The correct “Forward Diffusion” section appears two pages later.)

Note from the Author or Editor:
Change to “Image-to-Text Generative Tasks”

Ryuichi Kubuki  Feb 26, 2024 
Page 229
6th paragraph

the process image information
should be
the processed image information

Note from the Author or Editor:
Yes please update

Ryuichi Kubuki  Feb 26, 2024 
Page 246
1st paragraph

>This guides the prompt templates that will be used as part of the training data, as shown in the code:

What exactly does ‘guides’ mean in this case for the prompt templates?

Note from the Author or Editor:
“This learnable_property is injected into the prompt templates …”

Ryuichi Kubuki  Feb 26, 2024 
Page 116
6th paragraph

drag-and-drop UI interface
should be
drag-and-drop user interface

Note from the Author or Editor:
Yes that’s fine

Ryuichi Kubuki  Apr 02, 2024 
Page 238
2nd paragraph

The 2nd paragraph says

"The code uses the OpenCV library to extract the edges using the Canny edge map
ControlNet control:"

but the code does not use Canny edge map ControlNet control so it should be

"The code uses the OpenCV library to extract the edges:"

This line "from diffusers import StableDiffusionControlNetPipeline" in the code is unused and unnecessary.

Also

# Load the image
image = load_image(""
)

should be

# Load the image
image = load_image("")

Note from the Author or Editor:
Yes that’s fine

Ryuichi Kubuki  Apr 08, 2024 
Page 214
The code sample

The explanation of the code says "Here is the code to implement the chain-of-thought version of this prompt:", making it clear that it is CoT, but the question part of the prompt has only "Who produced the movie that features this character?", lacking "Think step-by-step", unlike the code sample on page 215.

Note from the Author or Editor:
Pls add “think step by step”

Ryuichi Kubuki  May 11, 2024 
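The suggested fix amounts to appending the step-by-step instruction to the question string. A minimal sketch (the template and variable names here are hypothetical, not the book's exact code):

```python
# Chain-of-thought prompting: ask the model to reason through
# intermediate steps before giving its final answer.
question = "Who produced the movie that features this character?"
cot_question = question + " Think step-by-step."

prompt = f"Question: {cot_question}\nAnswer:"
print(prompt)
```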
Page 68
2nd paragraph

..., MQA is particularly useful.

The section refers to GQA, not MQA. I assume it's a typo; please confirm.

Note from the Author or Editor:
Yes you are correct. We will fix this on the next printing.

Thanks for the note!!

Wayne Scarano  Jan 23, 2024 
Page 70
2nd paragraph

The explanation of the ZeRO stages of Figure 4-11 in the second paragraph does not correspond to the image: Stage 1 and Stage 3 should be swapped.

Note from the Author or Editor:
Yes please update

Velimir Graorkoski  Feb 27, 2024 
Page 85
python code example

The `convert_row_to_instruction` Python function formats a prompt like

prompt = prompt_template.format(
...

and never closes the `format` call with a right parenthesis.

Note from the Author or Editor:
We will fix for the next printing. Thanks for the report!

Ari Roffe  Dec 21, 2023 
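For reference, the corrected shape of that call, with the closing parenthesis in place (a sketch with hypothetical template and field names, not the book's exact code):

```python
# A prompt template filled in with str.format; the .format(...) call
# must be closed with a matching right parenthesis.
prompt_template = (
    "Summarize the following conversation.\n\n"
    "{dialogue}\n\n"
    "Summary: {summary}"
)

def convert_row_to_instruction(row):
    # 'row' is a hypothetical dict with 'dialogue' and 'summary' keys.
    prompt = prompt_template.format(
        dialogue=row["dialogue"],
        summary=row["summary"],
    )  # <- the right parenthesis missing from the printed code
    return prompt

print(convert_row_to_instruction({"dialogue": "Hi there.", "summary": "A greeting."}))
```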
Page 100-101
Page 101, 5th and 6th paragraphs; page 102, 1st paragraph

The order of the matrix dimensions is swapped, indicating "column x row" instead of "row x column" (I am inferring the actual dimensions from the figures).

I am not sure if this was intended, but the second option (row x column) is more common in mathematical usage.

Note from the Author or Editor:
We could swap these on the next release of the book. Thanks for pointing this out!

Pablo A. Sampaio  Jun 24, 2024 
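For reference, the usual convention: an m x n matrix has m rows and n columns, so dimensions read "row x column". A quick illustration:

```python
# A 2 x 3 matrix in the "row x column" convention:
# 2 rows, each containing 3 columns.
matrix = [
    [1, 2, 3],
    [4, 5, 6],
]

rows = len(matrix)     # number of rows: 2
cols = len(matrix[0])  # number of columns: 3
print(f"{rows} x {cols} matrix")  # 2 x 3 matrix
```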
Page 135
4th paragraph

"You then learned about some existing classifiers and managed services that can be
used as reward models out of the box without any training"
I assume the existing classifier would be the Meta model, but what is this managed service? SageMaker Ground Truth is not a reward model.

Note from the Author or Editor:
By “managed service”, we meant something like Amazon Comprehend or Rekognition that can be called as an API.

You can just remove “and managed services”

Ryuichi Kubuki  Apr 03, 2024 
Page 165
1st paragraph

Completion prompt:
should be
Completion:

Note from the Author or Editor:
Yes that’s fine

Ryuichi Kubuki  Mar 10, 2024 
Page 167
code snippet

Hi,

In the langchain code on 167, when adding `year` to the document metadata there's a ')' that should be a '}'

`document_fragment.metadata = {"year": year, "source": filename)` should be
`document_fragment.metadata = {"year": year, "source": filename}`

Note from the Author or Editor:
Thanks for catching this! We’ll update for the next printing.

Thanks!

Ari Roffe  Dec 29, 2023 
Page 191
Figure 9-20

The arrow pointing from the Model version 1 to the Evaluation should be in the opposite direction.

Note from the Author or Editor:
Yes please update

Velimir Graorkoski  Mar 04, 2024