Video description
Sneak Peek
The Sneak Peek program provides early access to Pearson video products and is exclusively available to subscribers. Content for titles in this program is made available throughout the development cycle, so products may be incomplete or not yet finalized, including video post-production editing.
Table of contents
- Introduction
- Lesson 1: Introduction to AI Threats and LLM Security
- Learning objectives
- 1.1 Understanding the Significance of LLMs in the AI Landscape
- 1.2 Exploring the Resources for this Course - GitHub Repositories and Others
- 1.3 Introducing Retrieval Augmented Generation (RAG)
- 1.4 Understanding the OWASP Top 10 Risks for LLMs
- 1.5 Exploring the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework
- Lesson 2: Understanding Prompt Injection and Insecure Output Handling
- Learning objectives
- 2.1 Defining Prompt Injection Attacks
- 2.2 Exploring Real-life Prompt Injection Attacks
- 2.3 Using ChatML for OpenAI API Calls to Indicate to the LLM the Source of Prompt Input
- 2.4 Enforcing Privilege Control on LLM Access to Backend Systems
- 2.5 Best Practices Around API Tokens for Plugins, Data Access, and Function-level Permissions
- 2.6 Understanding Insecure Output Handling Attacks
- 2.7 Using the OWASP ASVS to Protect Against Insecure Output Handling
- Lesson 3: Training Data Poisoning, Model Denial of Service, and Supply Chain Vulnerabilities
- Learning objectives
- 3.1 Understanding Training Data Poisoning Attacks
- 3.2 Exploring Model Denial of Service Attacks
- 3.3 Understanding the Risks of the AI and ML Supply Chain
- 3.4 Best Practices when Using Open-Source Models from Hugging Face and Other Sources
- 3.5 Securing Amazon Bedrock, SageMaker, Microsoft Azure AI Services, and Other Environments
- Lesson 4: Sensitive Information Disclosure, Insecure Plugin Design, and Excessive Agency
- Lesson 5: Overreliance, Model Theft, and Red Teaming AI Models
- Lesson 6: Protecting Retrieval Augmented Generation (RAG) Implementations
Product information
- Title: Securing Generative AI
- Author(s):
- Release date: October 2024
- Publisher(s): Pearson
- ISBN: 0135401801