Securing Generative AI

Video description

Sneak Peek

The Sneak Peek program, available exclusively to subscribers, provides early access to Pearson video products. Content for titles in this program is released throughout the development cycle, so products may not yet be complete, edited, or finalized, including video post-production editing.

Table of contents

  1. Introduction
    1. Securing Generative AI: Introduction
  2. Lesson 1: Introduction to AI Threats and LLM Security
    1. Learning objectives
    2. 1.1 Understanding the Significance of LLMs in the AI Landscape
    3. 1.2 Exploring the Resources for this Course - GitHub Repositories and Others
    4. 1.3 Introducing Retrieval Augmented Generation (RAG)
    5. 1.4 Understanding the OWASP Top-10 Risks for LLMs
    6. 1.5 Exploring the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework
  3. Lesson 2: Understanding Prompt Injection and Insecure Output Handling
    1. Learning objectives
    2. 2.1 Defining Prompt Injection Attacks
    3. 2.2 Exploring Real-life Prompt Injection Attacks
    4. 2.3 Using ChatML for OpenAI API Calls to Indicate to the LLM the Source of Prompt Input
    5. 2.4 Enforcing Privilege Control on LLM Access to Backend Systems
    6. 2.5 Best Practices Around API Tokens for Plugins, Data Access, and Function-level Permissions
    7. 2.6 Understanding Insecure Output Handling Attacks
    8. 2.7 Using the OWASP ASVS to Protect Against Insecure Output Handling
  4. Lesson 3: Training Data Poisoning, Model Denial of Service, and Supply Chain Vulnerabilities
    1. Learning objectives
    2. 3.1 Understanding Training Data Poisoning Attacks
    3. 3.2 Exploring Model Denial of Service Attacks
    4. 3.3 Understanding the Risks of the AI and ML Supply Chain
    5. 3.4 Best Practices when Using Open-Source Models from Hugging Face and Other Sources
    6. 3.5 Securing Amazon Bedrock, SageMaker, Microsoft Azure AI Services, and Other Environments
  5. Lesson 4: Sensitive Information Disclosure, Insecure Plugin Design, and Excessive Agency
    1. Learning objectives
    2. 4.1 Understanding Sensitive Information Disclosure
    3. 4.2 Exploiting Insecure Plugin Design
    4. 4.3 Avoiding Excessive Agency
  6. Lesson 5: Overreliance, Model Theft, and Red Teaming AI Models
    1. Learning objectives
    2. 5.1 Understanding Overreliance
    3. 5.2 Exploring Model Theft Attacks
    4. 5.3 Understanding Red Teaming of AI Models
  7. Lesson 6: Protecting Retrieval Augmented Generation (RAG) Implementations
    1. Learning objectives
    2. 6.1 Understanding RAG, LangChain, LlamaIndex, and AI Orchestration
    3. 6.2 Securing Embedding Models
    4. 6.3 Securing Vector Databases
    5. 6.4 Monitoring and Incident Response

Product information

  • Title: Securing Generative AI
  • Author(s): Omar Santos
  • Release date: October 2024
  • Publisher(s): Pearson
  • ISBN: 0135401801