AI, ChatGPT, and other Large Language Models (LLMs) Security
Published by Pearson
Understand the Privacy, Ethics, and Security Challenges for Today's AI
- A real-world analysis of security, ethics, and privacy for the development and deployment of LLMs, AI, and ChatGPT
- Practical learning by analyzing real-life AI and machine learning scenarios and extracting valuable insights
- Taught by leading experts in AI and cybersecurity who bring extensive knowledge and experience to create a rich, engaging, and up-to-date course
Discover the critical aspects of Artificial Intelligence (AI) and Large Language Model (LLM) implementations, such as ChatGPT, in this cutting-edge training. With a focus on AI security, ethics, and privacy, industry experts Omar Santos and Petar Radanliev will explore the real-world ramifications of AI. As the world embraces the power of Artificial Intelligence, it's essential to understand the responsibilities and potential pitfalls that accompany this revolution and to ensure a future where technology serves humanity in the most responsible way.
In this training, you will gain valuable insights into the complexities of AI systems, from security risks and attack techniques to ethical considerations and privacy preservation strategies. You will learn about adversarial attacks, data poisoning, model inversion, and other techniques that target AI systems. We will also discuss responsible AI governance, bias mitigation, and privacy protection in AI applications.
Real-world examples and case studies will provide a practical context for understanding these critical concepts, sparking thought-provoking discussions with leading experts in the field. By the end of this training, attendees will possess the skills and knowledge necessary to navigate the ever-evolving AI landscape and contribute to its responsible growth. Don't miss this opportunity to deepen your understanding of AI, LLMs, ChatGPT, and conversational AI while examining the vital aspects of security, ethics, and privacy.
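For a concrete taste of one attack class mentioned above, the sketch below implements an adversarial evasion attack in the style of the Fast Gradient Sign Method (FGSM). It is a minimal illustration, assuming PyTorch is installed and using a tiny untrained classifier as a stand-in for a real model; it is not the exact material presented in the training.

    # Minimal FGSM-style adversarial example (illustrative sketch).
    # Assumes PyTorch; the tiny untrained model stands in for a real classifier.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
    y = torch.tensor([3])                             # its true label

    # Compute the loss gradient with respect to the *input*, not the weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # FGSM: nudge each pixel in the direction that increases the loss.
    epsilon = 0.1
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    print("original prediction:   ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

With a trained image classifier, a small epsilon is often enough to flip the prediction while the perturbation remains nearly invisible to a human.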
What you’ll learn and how you can apply it
- Model inversion, model stealing, and data poisoning in AI systems such as ChatGPT
- Additional threats, vulnerabilities, and attack vectors unique to conversational AI
- Ethical considerations in AI decision-making and how to address bias, transparency, and accountability
- Potential privacy implications and future laws and regulations
And you’ll be able to:
- Understand ethical considerations in AI and ML, including bias and fairness, transparency, and accountability, and apply responsible AI practices in AI and ML applications.
- Recognize and understand the privacy aspects of AI and ML, including data protection, anonymization, and regulatory compliance.
- Understand key concepts in AI and ML security, including threats, vulnerabilities, and attack vectors.
This live event is for you because...
- You want to learn about security, ethics, and privacy in AI and ML systems.
- You are a developer, data scientist, or engineer looking to build secure and ethical AI and ML applications while considering privacy aspects.
- You are an IT professional, security specialist, or privacy officer interested in understanding the unique challenges posed by AI and ML technologies.
- You are a product manager, team leader, or executive looking to integrate secure and responsible AI and ML practices within your organization.
Prerequisites
- Basic awareness of ML and AI implementations such as ChatGPT, GitHub Copilot, DALL-E, Midjourney, DreamStudio (Stable Diffusion), and others.
- Familiarity with computer science concepts: Basic knowledge of data structures, algorithms, and computer systems will be beneficial in understanding the underlying mechanisms of AI and ML algorithms and their security implications.
- Curiosity and willingness to learn: A strong desire to learn about AI, ML, security, ethics, and privacy, and the ability to think critically about the implications of AI and ML technologies on society, is crucial for making the most of the training.
Course Set-up
- You can follow along during the presentation on any Linux system with Python 3.x installed.
Recommended Preparation
- Watch: Catalyst Conference by Jon Krohn
- Read: Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence by Jon Krohn, Grant Beyleveld, and Aglaé Bassens
- Watch: The Complete Cybersecurity Bootcamp, 2nd Edition by Omar Santos
- Watch: Introduction to Transformer Models for NLP: Using BERT, GPT, and More to Solve Modern Natural Language Processing Tasks by Sinan Ozdemir
Recommended Follow-up
- Watch: Deep Learning with TensorFlow, Keras, and PyTorch by Jon Krohn
- Watch: Deep Learning for Natural Language Processing, 2nd Edition by Jon Krohn
- Watch: Machine Vision, GANs, and Deep Reinforcement Learning by Jon Krohn
- Watch: The Art of Hacking (Video Collection) by Omar Santos, Ron Taylor, Jon Sternstein, and Chris McCoy
- Explore: Ethical Hacking Labs by Omar and Derek Santos
Schedule
The time frames are only estimates and may vary according to how the class is progressing.
Segment 1: Introduction to AI, ChatGPT, and Conversational AI (45 minutes)
- Overview of artificial intelligence and language models
- Introduction to ChatGPT and OpenAI's GPT architecture
- Applications and use cases of conversational AI
- Types of ML: Supervised, Unsupervised, and Reinforcement Learning (a minimal supervised learning sketch follows this segment)
- AI and ML applications and use cases
- Examples of AI applications we use on a daily basis: ChatGPT, GitHub Copilot, DALL-E, Midjourney, and DreamStudio (Stable Diffusion)
- Q&A (5 minutes)
- Break (10 minutes)
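As a quick preview of supervised learning, the first ML type listed above, the following sketch fits a classifier on labeled examples and evaluates it on held-out data. It is a minimal illustration assuming scikit-learn is installed, not part of the official course material.

    # Minimal supervised learning sketch (illustrative; assumes scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Labeled data: features X with known labels y.
    X, y = make_classification(n_samples=200, n_features=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The model learns a mapping from features to labels.
    clf = LogisticRegression().fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

Unsupervised learning instead finds structure in unlabeled data, and reinforcement learning learns from rewards earned by acting in an environment.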
Segment 2: Security Challenges in AI and ChatGPT Systems (45 minutes)
- Importance of security in AI and ChatGPT applications
- Adversarial Attacks on AI Systems such as ChatGPT
- Data Poisoning in AI (see the label-flipping sketch after this segment)
- Threats, vulnerabilities, and attack vectors unique to conversational AI
- Model Inversion and Stealing in AI and ChatGPT
- Techniques for robust AI model training and evaluation
- Security of AI infrastructure: cloud platforms, on-premises solutions, and edge computing
- Human-in-the-loop security: addressing insider threats and social engineering
- Developing a comprehensive security policy for AI-driven organizations
- Q&A (5 minutes)
- Break (10 minutes)
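To make the data poisoning topic above concrete, the sketch below flips a fraction of the training labels, one simple poisoning strategy, and compares a model trained on clean data against one trained on the poisoned set. It assumes scikit-learn and NumPy are installed and is a simplified stand-in for the demonstrations in this segment.

    # Minimal label-flipping data poisoning sketch (illustrative only;
    # assumes scikit-learn and NumPy).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clean = LogisticRegression().fit(X_train, y_train)

    # Poison the training set: an attacker flips 30% of the labels.
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]

    poisoned = LogisticRegression().fit(X_train, y_poisoned)

    print("clean model accuracy:   ", clean.score(X_test, y_test))
    print("poisoned model accuracy:", poisoned.score(X_test, y_test))

On a typical run, the poisoned model's held-out accuracy drops noticeably relative to the clean model, which is why the provenance and integrity of training data matter.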
Segment 3: Ethics and Responsible AI Deployment (45 minutes)
- Ethical considerations in AI decision-making
- Addressing bias, transparency, and accountability
- Ethical AI design principles and guidelines
- The role of AI in decision-making: ethical implications and potential consequences
- Establishing responsible AI governance and oversight
- AI in sensitive domains: healthcare, finance, criminal justice, defense, and human resources
- Engaging stakeholders: fostering dialogue and collaboration between developers, users, and affected communities
- Q&A (5 minutes)
- Break (10 minutes)
Segment 4: Privacy Protection and AI (50 minutes)
- Challenges in safeguarding user data and privacy
- Privacy-preserving AI techniques and tools (a minimal differential privacy sketch follows this segment)
- Legal and regulatory aspects of AI and privacy
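As a taste of the privacy-preserving techniques listed above, here is a minimal sketch of the Laplace mechanism from differential privacy: instead of releasing raw user data, it releases an aggregate statistic with noise calibrated to the query's sensitivity and a privacy budget epsilon. The data and parameters are hypothetical, and the sketch assumes only NumPy.

    # Minimal differential privacy sketch: the Laplace mechanism
    # (illustrative only; assumes NumPy; data is hypothetical).
    import numpy as np

    ages = np.array([34, 45, 29, 61, 38, 52, 47])  # hypothetical user data

    def dp_count(data, threshold, epsilon, sensitivity=1.0):
        """Release a noisy count via the Laplace mechanism.

        A counting query has sensitivity 1: adding or removing one
        individual changes the true count by at most 1.
        """
        true_count = np.sum(data > threshold)
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_count + noise

    epsilon = 0.5  # smaller epsilon means stronger privacy but more noise
    print("true count over 40:", int(np.sum(ages > 40)))
    print("DP count over 40:  ", round(dp_count(ages, 40, epsilon), 2))

Smaller epsilon values give stronger privacy guarantees at the cost of noisier answers; real deployments rely on vetted libraries rather than hand-rolled mechanisms like this one.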
Wrap-up and Closing Remarks (10 minutes)
- Recap of key takeaways
- Opportunities for further learning and engagement
- Closing thoughts and next steps
Your Instructors
Omar Santos
Omar Santos is a Distinguished Engineer at Cisco focusing on artificial intelligence (AI) security, research, incident response, and vulnerability disclosure. He is a board member of the OASIS Open standards organization and the founder of OpenEoX. His collaborative efforts extend to numerous organizations, including the Forum of Incident Response and Security Teams (FIRST) and the Industry Consortium for Advancement of Security on the Internet (ICASI). He co-chairs the FIRST PSIRT Special Interest Group (SIG), leads the DEF CON Red Team Village, and chairs the Common Security Advisory Framework (CSAF) technical committee. The author of over 20 books, numerous video courses, and over 50 academic research papers, Omar is a renowned expert in ethical hacking, vulnerability research, incident response, and AI security. His dedication to cybersecurity has made a significant impact on technology standards, businesses, academic institutions, government agencies, and other entities striving to improve their cybersecurity programs.
Dr. Petar Radanliev
Dr. Petar Radanliev is affiliated with the Department of Engineering Science at the University of Oxford. He is a highly accomplished cybersecurity professional with more than 10 years of experience in academic and industry settings. He has expertise in cybersecurity research, risk management, and cyber defense, as well as a track record of excellence in teaching, mentoring, and leading research teams. His technical skills include new and emerging cyber and crypto technologies and algorithms, DeFi, blockchain, the Metaverse, and quantum cryptography. Petar obtained his PhD at the University of Wales in 2014 and continued with postdoctoral research at Imperial College London, the University of Cambridge, the Massachusetts Institute of Technology, and the University of Oxford. His awards include the Fulbright Fellowship in the US and the Prince of Wales Innovation Scholarship in the UK.