
Explainable Machine Learning Models—with Interactivity

Published by O'Reilly Media, Inc.

Intermediate content level

Making sense of opaque and complex models using Python

This live event utilizes Jupyter Notebook technology

Machine learning is a powerful tool that is being used in increasingly varied ways across many industries, and its models now make decisions that affect people’s lives. It is therefore imperative that their predictions are fair rather than biased or discriminatory. One way to work toward fairness in AI is to analyze the predictions a model produces: doing so lets us not only discover a model’s mispredictions but also analyze and fix their underlying causes.

In this course we’ll look at commonly used model explainers such as SHAP values, LIME, partial dependence plots, and ICE plots. After building intuition for these techniques, we’ll learn how to implement them in Python through case studies: probing why a bank denied a particular person a loan, and why another person has a high chance of a heart attack given their vital health statistics. You’ll learn how to explain a model’s prediction by extracting the features and values that most influenced it. Finally, we’ll discuss the vulnerabilities and shortcomings of these methods and the road ahead.

What you’ll learn and how you can apply it

By the end of this live online course, you’ll understand:

  • The fundamental concept behind interpretability and explainability
  • An overview of various explainability techniques
  • How to interpret opaque machine learning models using post hoc techniques in Python
  • The challenges and strengths associated with each technique

And you’ll be able to:

  • Intuit explanations for the predictions of machine learning models
  • Implement techniques like permutation importance, partial dependence plots, SHAP, and LIME (see the permutation importance sketch after this list)
  • Interpret and visualize the output of any machine learning model
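
As a taste of what these implementations look like, here is a minimal permutation importance sketch using scikit-learn’s permutation_importance. The dataset and random forest are illustrative stand-ins, not the course’s own materials:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Illustrative setup: a random forest on a built-in dataset
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much test accuracy drops;
    # a large drop means the model relied heavily on that feature
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} "
              f"+/- {result.importances_std[i]:.3f}")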

This live event is for you because...

  • You’re a data science professional who wants to understand how a machine learning model makes predictions.
  • You’re concerned about the ethical and moral implications of machine learning.

Prerequisites

  • No preparation or local installation needed—all exercises will be provided using Jupyter notebooks
  • Familiarity with Python and machine learning libraries


Schedule

The time frames are only estimates and may vary according to how the class is progressing.

Getting Started (10 minutes)

  • Presentation: Introduction and motivation for explaining opaque models
  • Group Discussion: Discuss some common examples of bias in machine learning
  • Q&A

Understanding the Basics (25 minutes)

  • Presentation: Interpretability vs. explainability
  • Presentation: Differentiating between opaque and transparent models
  • Presentation: Local vs. global explanations
  • Quiz time
  • Q&A
  • Break (5 minutes)

Partial Dependence Plots & ICE Plots (30 minutes)

  • Presentation: Partial dependence plot concepts and variants
  • Jupyter Notebook Exercise: Explaining and visualizing model predictions using partial dependence plots and ICE plots (see the sketch below)
  • Q&A
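
For orientation ahead of the exercise, here is a minimal sketch of partial dependence and ICE plots using scikit-learn’s PartialDependenceDisplay; the fitted model, dataset, and feature name are illustrative assumptions, not the course notebook:

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import PartialDependenceDisplay
    from sklearn.model_selection import train_test_split

    # Illustrative setup: a random forest on a built-in dataset
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # kind="both" overlays the averaged PDP curve on per-sample ICE curves,
    # showing both the global trend and how individual predictions vary
    PartialDependenceDisplay.from_estimator(
        model, X_test, features=["mean radius"], kind="both"
    )
    plt.show()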

Shapley Value Explanations (30 minutes)

  • Presentation: Overview and importance of SHAP values
  • Jupyter Notebook Exercise: Implementing the SHAP technique in Python (see the sketch below)
  • Q&A
  • Quiz time
  • Break (5 minutes)
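
To preview the exercise, here is a minimal SHAP sketch. It assumes a recent version of the shap library and a tree-based binary classifier like the stand-in below; the course notebooks may use a different model or API variant:

    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Illustrative setup: a random forest binary classifier
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # TreeExplainer computes SHAP values efficiently for tree ensembles;
    # for a binary classifier the result has one slice per class
    explainer = shap.TreeExplainer(model)
    shap_values = explainer(X_test)

    # Global view: which features push predictions toward class 1, and how far
    shap.plots.beeswarm(shap_values[:, :, 1])

    # Local view: the feature contributions behind a single prediction
    shap.plots.waterfall(shap_values[0, :, 1])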

LIME: Locally Interpretable Model-Agnostic Explanations (30 minutes)

  • Presentation: Fundamentals and intuition behind LIME
  • Jupyter Notebook Exercise: Usage and implementation of LIME techniques for tabular, text, and image data in Python (see the sketch below)
  • Q&A
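
Here is a minimal sketch of LIME on tabular data using the lime package; the model and dataset are stand-ins, and the course exercise additionally covers text and image data:

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Illustrative setup: a random forest binary classifier
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # LIME perturbs a single instance and fits a simple, interpretable
    # surrogate model that approximates the classifier locally
    explainer = LimeTabularExplainer(
        X_train.values,
        feature_names=X_train.columns.tolist(),
        class_names=["malignant", "benign"],
        mode="classification",
    )
    explanation = explainer.explain_instance(
        X_test.values[0], model.predict_proba, num_features=5
    )
    print(explanation.as_list())  # top local feature contributions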

Occlusion and Recap (40 minutes)

  • Presentation: Understanding the occlusion technique as a means of explaining image classifiers
  • Jupyter Notebook Exercise: Using the occlusion technique to debug an image classifier in PyTorch (see the sketch below)
  • Quiz time
  • Q&A and Wrap Up
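
As a preview, here is a minimal occlusion sketch in PyTorch using the Captum library; Captum is an assumed tool choice, and the model and random stand-in image are illustrative, so the course notebook may implement occlusion differently:

    import torch
    from captum.attr import Occlusion
    from torchvision.models import resnet18, ResNet18_Weights

    # Illustrative setup: a pretrained classifier and a random stand-in image
    # (assumes torchvision >= 0.13 for the weights API)
    net = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    img = torch.rand(1, 3, 224, 224)
    pred_class = net(img).argmax(dim=1).item()

    # Slide a patch across the image, occlude it with a baseline value, and
    # record how much the predicted class score changes at each position
    occlusion = Occlusion(net)
    attributions = occlusion.attribute(
        img,
        target=pred_class,
        sliding_window_shapes=(3, 15, 15),  # channels x height x width patch
        strides=(3, 8, 8),
        baselines=0,
    )
    print(attributions.shape)  # same shape as the input image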

Your Instructor

  • Parul Pandey

    Parul Pandey works as a Machine Learning Engineer at Weights & Biases. Previously, she was a Data Scientist at H2O.ai, an AutoML company. She combines data science and developer advocacy in her work. She is also a Kaggle Grandmaster in the notebooks category and was one of LinkedIn’s Top Voices in the software development category in 2019. Parul has also written articles focused on data science and software development for multiple publications.
