Chapter 2. Prompt Engineering and In-Context Learning
In this chapter, you will learn about low-code ways to interact with generative AI models—specifically, prompt engineering and in-context learning. You will see that writing prompts is both an art and a science that helps the model generate better, more applicable responses. We also provide some best practices for defining prompts and prompt templates to get the most out of your generative models.
You will also learn how to use in-context learning to pass multiple prompt-completion pairs (e.g., question-answer pairs) in the “context” along with your prompt input. This in-context learning nudges the model to respond similarly to the prompt-completion pairs in the context. This is one of the more remarkable capabilities of generative models, as it temporarily alters the model’s behavior for the duration of just that single request.
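To make this concrete, here is a minimal sketch of assembling a few-shot prompt. The `Question:`/`Answer:` labels and the helper function name are our own illustrative choices, not a standard format:

```python
# A minimal few-shot (in-context learning) prompt builder. The model sees
# several prompt-completion pairs, then an unanswered prompt it should
# complete in the same style. All labels here are illustrative.

def build_few_shot_prompt(examples, new_input):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = []
    for question, answer in examples:
        lines.append(f"Question: {question}")
        lines.append(f"Answer: {answer}")
    # The final question is left unanswered for the model to complete.
    lines.append(f"Question: {new_input}")
    lines.append("Answer:")
    return "\n".join(lines)

examples = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
prompt = build_few_shot_prompt(examples, "What is the capital of Italy?")
print(prompt)
```

The resulting string would be sent as a single request; the example pairs steer the completion only for that request, leaving the model itself unchanged.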
Lastly, you will learn about some of the most commonly configured generative parameters, such as temperature and top-k, that control the generative model’s creativity when creating content.
Language-based generative models accept prompts as input and generate a completion. These prompts and completions are made up of text-based tokens, as you will see next.
Prompts and Completions
While generative AI tasks can span multiple content modalities, they often involve a text-based input. This input is called a prompt and includes the instructions, context, and any constraints used to accomplish a given task.
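A prompt template can make these three ingredients explicit. The template below is a hypothetical sketch; the field labels are our own, not a standard API:

```python
# Hypothetical prompt template separating the three ingredients named
# above: instruction, context, and constraints.
PROMPT_TEMPLATE = """\
Instruction: {instruction}

Context:
{context}

Constraints: {constraints}
"""

prompt = PROMPT_TEMPLATE.format(
    instruction="Summarize the customer review below.",
    context="The laptop arrived quickly and the battery lasts all day.",
    constraints="Respond in one sentence.",
)
print(prompt)
```

Keeping the template separate from its fill-in values lets you reuse the same structure across many inputs, a pattern that recurs throughout prompt engineering.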