AI Security and Responsible AI Practices

Video description

Ethical development and responsible deployment of AI and ML systems.

  • Learn the latest techniques in AI and ML security to safeguard against AI-enabled attackers and ensure data integrity and user privacy.
  • Navigate privacy and ethical considerations to gain insights into responsible AI practices and how to address ethical concerns.
  • Explore emerging trends and future directions in AI, ML, security, ethics, and privacy, focusing on key concepts such as threats, vulnerabilities, and attack vectors.
  • Recognize and understand the privacy aspects of AI and ML, including data protection, anonymization, and regulatory compliance.

Get the essential skills to protect your AI systems against cyber attacks. Explore how generative AI and LLMs can be harnessed to secure your projects and organizations against AI cyber threats. Develop secure and ethical systems while remaining mindful of privacy concerns, drawing on real-life examples from tools we use on a daily basis, such as ChatGPT, GitHub Copilot, DALL-E, Midjourney, DreamStudio (Stable Diffusion), and others. Gain a solid foundation in AI and ML principles so you are better prepared to build secure, ethical, and privacy-aware systems. Authors Omar Santos and Dr. Petar Radanliev are industry experts who will guide you and strengthen your AI security knowledge.

About the Instructors:

Omar Santos is a Distinguished Engineer at Cisco focusing on artificial intelligence (AI) security, cybersecurity research, incident response, and vulnerability disclosure. He is a board member of the OASIS Open standards organization and the founder of OpenEoX. His collaborative efforts extend to numerous organizations, including the Forum of Incident Response and Security Teams (FIRST) and the Industry Consortium for Advancement of Security on the Internet (ICASI). Omar is the co-chair of the FIRST PSIRT Special Interest Group (SIG), the lead of the DEF CON Red Team Village, and the chair of the Common Security Advisory Framework (CSAF) technical committee. He is the author of more than 25 books, 21 video courses, and more than 50 academic research papers, and is a renowned expert in ethical hacking, vulnerability research, incident response, and AI security. His dedication to cybersecurity has made a significant impact on technology standards, businesses, academic institutions, government agencies, and other entities striving to improve their cybersecurity programs. Prior to Cisco, Omar served in the United States Marines, focusing on the deployment, testing, and maintenance of Command, Control, Communications, Computers, and Intelligence (C4I) systems.

Dr. Petar Radanliev is with the Department of Engineering Science at the University of Oxford. He is a highly accomplished and experienced cybersecurity professional with more than 10 years of experience in academic and industry settings. He has expertise in cybersecurity research, risk management, and cyber defense, as well as a track record of excellence in teaching, mentoring, and leading research teams. His technical skills include new and emerging cyber/crypto technologies and algorithms, DeFi, blockchain, the Metaverse, and quantum cryptography. Petar obtained his PhD at the University of Wales in 2014 and continued with postdoctoral research at Imperial College London, the University of Cambridge, the Massachusetts Institute of Technology, and the University of Oxford. His awards include the Fulbright Fellowship in the United States and the Prince of Wales Innovation Scholarship in the United Kingdom.

Skill Level:

  • Intermediate

Course requirements:

  • None

Table of contents

  1. Introduction
    1. AI Security and Responsible AI Practices: Introduction
  2. Module 1: Fundamentals of AI and ML
    1. Module introduction
  3. Lesson 1: Overview of AI and ML Implementations
    1. Learning objectives
    2. 1.1 Delving into supervised, unsupervised, and reinforcement learning
    3. 1.2 Diving into applications and use cases
    4. 1.3 Strategies in preprocessing and feature engineering
    5. 1.4 Navigating through popular and traditional ML algorithms
    6. 1.5 Exploring model evaluation and validation
  4. Lesson 2: Generative AI and Large Language Models (LLMs)
    1. Learning objectives
    2. 2.1 Introduction to generative AI
    3. 2.2 Delving into large language models (LLMs)
    4. 2.3 Exploring examples of AI applications we use on a daily basis
    5. 2.4 Going beyond ChatGPT, MidJourney, LLaMA
    6. 2.5 Exploring Hugging Face, LangChain Hub, and other AI model and dataset sharing hubs
    7. 2.6 Modern AI model training environments
    8. 2.7 Introducing LangChain, templates, and agents
    9. 2.8 Fine-tuning AI models using LoRA and QLoRA
    10. 2.9 Introducing retrieval-augmented generation (RAG)
  5. Module 2: AI and ML Security
    1. Module introduction
  6. Lesson 3: Fundamentals of AI and ML Security
    1. Learning objectives
    2. 3.1 Importance of security in AI and ML systems
    3. 3.2 OWASP top 10 risks for LLM applications
    4. 3.3 Exploring prompt injection attacks
    5. 3.4 Surveying data poisoning attacks
    6. 3.5 Understanding insecure output handling
    7. 3.6 Discussing insecure plugin design
    8. 3.7 Understanding excessive agency
    9. 3.8 Exploring model theft attacks
    10. 3.9 Understanding overreliance on AI systems
  7. Lesson 4: How Attackers Are Using AI to Perform Attacks
    1. Learning objectives
    2. 4.1 Exploring the MITRE ATLAS framework
    3. 4.2 AI supply chain security
    4. 4.3 Automated vulnerability discovery and creating exploits at scale
    5. 4.4 Intelligent data harvesting, OSINT, automating phishing, and social engineering attacks
    6. 4.5 Exploring examples of deepfakes and synthetic media
    7. 4.6 Dynamic obfuscation of attack vectors
  8. Lesson 5: AI System and Infrastructure Security
    1. Learning objectives
    2. 5.1 Secure development practices
    3. 5.2 Monitoring and auditing
    4. 5.3 Software Bills of Materials (SBOMs) and AI Bills of Materials (AI BOMs)
    5. 5.4 Using CSAF and VEX to accelerate vulnerability management
  9. Module 3: Privacy and Ethical Considerations
    1. Module introduction
  10. Lesson 6: Privacy and AI Fundamentals
    1. Learning objectives
    2. 6.1 Understanding key privacy considerations in AI implementations
    3. 6.2 Bias and fairness in AI and ML systems
    4. 6.3 Transparency and accountability
    5. 6.4 Understanding differential privacy
    6. 6.5 Exploring secure multi-party computation (SMPC)
    7. 6.6 Understanding homomorphic encryption
    8. 6.7 Understanding the AI data lifecycle management
    9. 6.8 Delving into federated learning
  11. Lesson 7: AI Ethics
    1. Learning objectives
    2. 7.1 Ethical considerations in AI development
    3. 7.2 Responsible AI frameworks
    4. 7.3 Policy frameworks
    5. 7.4 Exploring strategies to mitigate bias
  12. Lesson 8: Legal and Regulatory Compliance
    1. Learning objectives
    2. 8.1 Overview of upcoming regulations and guidelines
    3. 8.2 Ensuring compliance in AI and ML systems
    4. 8.3 Case studies and best practices
  13. Summary
    1. AI Security and Responsible AI Practices: Summary

Product information

  • Title: AI Security and Responsible AI Practices
  • Author(s): Omar Santos / Dr. Petar Radanliev
  • Release date: March 2024
  • Publisher(s): Pearson
  • ISBN: 0138361606