
Responsible AI

Published by Pearson

Content level: Intermediate

Ensure Responsible Development and Deployment of AI and ML Systems

  • Learn the fundamentals of AI & ML before diving into how they affect and change security, ethics, and privacy
  • Expert insights from trainers who use AI in the field and who understand the complexities of AI and ML security
  • Focused attention on future-proof skills so you can stay ahead in the rapidly evolving AI and ML landscape

This highly interactive 2-day training is designed to provide a comprehensive understanding of the fundamentals of artificial intelligence (AI) and machine learning (ML), while emphasizing the importance of security, ethics, and privacy. You will start by learning essential AI and ML concepts, popular algorithms, and generative AI techniques. The course will then explore AI and ML security basics before diving into system and infrastructure security, privacy and ethical considerations, and legal and regulatory compliance. Throughout the course, security and AI experts Omar Santos and Dr. Petar Radanliev emphasize best practices for securing AI and ML systems, using real-world examples.

Come ready to explore common threats, vulnerabilities, and attack vectors, as well as mitigation strategies, along with the ethical considerations and privacy aspects of AI and ML. This course teaches you how to ensure responsible development and deployment of AI and ML systems, drawing on real-life examples from tools we use on a daily basis, such as ChatGPT, GitHub Copilot, DALL-E, Midjourney, DreamStudio (Stable Diffusion), and others.

By the end of the class, you will have a solid foundation in AI and ML principles and be better prepared to develop secure and ethical systems while being mindful of privacy concerns. The course will be highly interactive, featuring real-world examples and discussions to ensure a comprehensive understanding of AI, ML, security, ethics, and privacy.

What you’ll learn and how you can apply it

By the end of the live online course, you’ll understand:

  • The fundamentals of AI and ML, including popular algorithms and their applications
  • Key AI and ML concepts, including common tools, libraries, and frameworks
  • Insights into emerging trends and future directions in AI, ML, security, ethics, and privacy

And you’ll be able to:

  • Understand ethical considerations in AI and ML, including bias and fairness, transparency, and accountability
  • Apply responsible AI practices, such as fairness, transparency, and accountability, in AI and ML applications
  • Recognize and understand the privacy aspects of AI and ML, including data protection, anonymization, and regulatory compliance
  • Understand key concepts in AI and ML security, including threats, vulnerabilities, and attack vectors
  • Apply different techniques for securing AI and ML systems, such as data security, model robustness, and secure infrastructure

This live event is for you because...

  • You want to learn about security, ethics, and privacy in AI and ML systems
  • You are a developer, data scientist, or engineer looking to build secure and ethical AI and ML applications while considering privacy aspects
  • You are an IT professional, security specialist, or privacy officer interested in understanding the unique challenges posed by AI and ML technologies
  • You are a product manager, team leader, or executive looking to integrate secure and responsible AI and ML practices within your organization
  • You want to stay ahead of the curve in the rapidly evolving fields of AI and ML while ensuring that your work aligns with ethical guidelines and privacy regulations

Prerequisites

  • Basic awareness of AI and ML implementations such as ChatGPT, GitHub Copilot, DALL-E, Midjourney, DreamStudio (Stable Diffusion), and others.
  • Familiarity with computer science concepts: Basic knowledge of data structures, algorithms, and computer systems will be beneficial in understanding the underlying mechanisms of AI and ML algorithms and their security implications.
  • Curiosity and willingness to learn: A strong desire to learn about AI, ML, security, ethics, and privacy, and the ability to think critically about the implications of AI and ML technologies on society, is crucial for making the most of the training.

Course Set-up

  • You can follow along during the presentation on any Linux system with Python 3.x installed.
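
If you want to verify your setup ahead of time, a minimal check along the following lines can help. It is only a sketch: Python 3.x is the stated requirement, while numpy and scikit-learn are assumed extras commonly used in ML exercises, not an official course requirement.

    # Illustrative environment check. numpy/scikit-learn are assumed, optional extras.
    import sys

    print(f"Python version: {sys.version.split()[0]}")  # expect a 3.x release

    for pkg, pip_name in (("numpy", "numpy"), ("sklearn", "scikit-learn")):
        try:
            module = __import__(pkg)
            print(f"{pkg} {getattr(module, '__version__', 'unknown')} is available")
        except ImportError:
            print(f"{pkg} is missing; install it with: pip install {pip_name}")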

Schedule

The time frames are only estimates and may vary according to how the class is progressing.

Day 1

Fundamentals of AI and ML: Part I (50 minutes)

  • Overview of AI and ML
  • Types of ML: Supervised, Unsupervised, and Reinforcement Learning
  • AI and ML applications and use cases
  • Examples of AI applications we use on a daily basis: ChatGPT; GitHub Copilot; image generation with DALL-E, Midjourney, and DreamStudio (Stable Diffusion)

Break (10 minutes)

Fundamentals of AI and ML: Part II (50 minutes)

  • Data preprocessing and feature engineering
  • Overview of popular ML algorithms (Linear regression; Logistic regression; Decision trees; Random forests; Support vector machines; Neural networks; k-means clustering)
  • Large Language Models (LLMs)
  • GPT models (e.g., GPT-3, GPT-3.5, GPT-4, and the future)
  • Model evaluation and validation (a brief illustrative sketch follows this list)
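
To give a concrete flavor of the "popular ML algorithms" and "model evaluation and validation" topics above, here is a minimal sketch in Python. The library (scikit-learn), dataset, and parameter choices are illustrative assumptions rather than the instructors' own materials: it trains a logistic regression classifier and reports accuracy on a held-out split.

    # Minimal sketch: supervised training plus hold-out evaluation (assumed libraries/dataset).
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)        # small built-in binary classification dataset
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42          # keep 20% of the data for validation
    )

    model = LogisticRegression(max_iter=5000)         # larger max_iter so the solver converges
    model.fit(X_train, y_train)                       # supervised training on the training split

    print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")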

Break (10 minutes)

Introduction to Generative AI (50 minutes)

  • Overview of generative models
  • Types of generative models
  • Applications of generative AI

Break (10 minutes)

AI and ML Security Overview (50 minutes)

  • Importance of security in AI and ML systems
  • Key challenges in AI and ML security
  • Common threats, vulnerabilities, and attack vectors

Q&A (10 minutes)

Day 2

Fundamentals of AI and ML Security (50 minutes)

  • Risk assessment and management
  • Data security
  • Model security
  • Common attacks (a brief illustrative sketch follows this list)
  • Tactics, Techniques, and Procedures (TTPs)
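
As a concrete illustration of the "common attacks" item above, the following Python snippet shows label-flipping data poisoning, one well-known attack on ML training pipelines. The dataset, classifier, and flip rate are illustrative assumptions, not course materials: it compares held-out accuracy of a model trained on clean labels against one trained on partially corrupted labels.

    # Illustrative sketch: label-flipping data poisoning (assumed dataset/classifier choices).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    def train_and_score(labels):
        """Train on the given (possibly poisoned) labels and score on clean test data."""
        model = DecisionTreeClassifier(random_state=0)
        model.fit(X_train, labels)
        return accuracy_score(y_test, model.predict(X_test))

    # Simulate the attack: flip 30% of the training labels at random.
    rng = np.random.default_rng(0)
    poisoned = y_train.copy()
    flipped = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
    poisoned[flipped] = 1 - poisoned[flipped]   # binary labels, so flipping is 1 - label

    print(f"Accuracy with clean labels:    {train_and_score(y_train):.3f}")
    print(f"Accuracy with poisoned labels: {train_and_score(poisoned):.3f}")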

Break (10 minutes)

System and Infrastructure Security (50 minutes)

  • Secure development practices
  • Monitoring and auditing
  • Supply chain security
  • Secure deployment and maintenance

Break (10 minutes)

Privacy and Ethical Considerations (50 minutes)

  • Bias and fairness in AI and ML systems
  • Transparency and accountability
  • Privacy and data protection

Break (10 minutes)

Legal and Regulatory Compliance (45 minutes)

  • Overview of regulations and guidelines
  • Ensuring compliance in AI and ML systems
  • Case studies and best practices

Q&A (15 minutes)

Your Instructors

  • Omar Santos

    Omar Santos is a Distinguished Engineer at Cisco focusing on artificial intelligence (AI) security, cybersecurity research, incident response, and vulnerability disclosure. He is a board member of the OASIS Open standards organization and the founder of OpenEoX. His collaborative efforts extend to numerous organizations, including the Forum of Incident Response and Security Teams (FIRST) and the Industry Consortium for Advancement of Security on the Internet (ICASI). Omar is the co-chair of the FIRST PSIRT Special Interest Group (SIG), the chair of the Common Security Advisory Framework (CSAF) technical committee, and the author of over 25 books, 21 video courses, and over 50 academic research papers. A renowned expert in ethical hacking, vulnerability research, incident response, and AI security, he employs his deep understanding of these disciplines to help organizations stay ahead of emerging threats. His dedication to cybersecurity has made a significant impact on technology standards, businesses, academic institutions, government agencies, and other entities striving to improve their cybersecurity programs. Prior to Cisco, Omar served in the United States Marines, focusing on the deployment, testing, and maintenance of Command, Control, Communications, Computers, and Intelligence (C4I) systems.

  • Dr. Petar Radanliev

    Dr. Petar Radanliev is based in the Department of Engineering Science at the University of Oxford. He is a highly accomplished cybersecurity professional with more than 10 years of experience in academic and industry settings. He has expertise in cybersecurity research, risk management, and cyber defense, as well as a track record of excellence in teaching, mentoring, and leading research teams. His technical skills cover new and emerging cyber and cryptographic technologies and algorithms, including DeFi, blockchain, the Metaverse, and quantum cryptography. Petar obtained his PhD from the University of Wales in 2014 and continued with postdoctoral research at Imperial College London, the University of Cambridge, the Massachusetts Institute of Technology, and the University of Oxford. His awards include the Fulbright Fellowship in the US and the Prince of Wales Innovation Scholarship in the UK.