Interpretable AI

Book description

AI doesn’t have to be a black box. These practical techniques help shine a light on your model’s mysterious inner workings. Make your AI more transparent, and you’ll improve trust in your results, combat data leakage and bias, and ensure compliance with legal requirements.

In Interpretable AI, you will learn:

  • Why AI models are hard to interpret
  • Interpreting white-box models such as linear regression, decision trees, and generalized additive models
  • Model-agnostic methods such as partial dependence plots, LIME, SHAP, and Anchors
  • Deep learning interpretability techniques such as saliency mapping, network dissection, and representation learning
  • What fairness is and how to mitigate bias in AI systems
  • Implementing robust AI systems that are GDPR-compliant

Interpretable AI opens up the black box of your AI models. It teaches cutting-edge techniques and best practices that can make even complex AI systems interpretable. Each method is easy to implement with just Python and open source libraries. You'll learn to identify when you can use models that are inherently transparent and how to mitigate opacity when your problem demands the power of a hard-to-interpret deep learning model.
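
For instance, explaining a trained model with the open source shap library follows a pattern like the minimal sketch below. This is an illustration of the kind of workflow the book covers, assuming scikit-learn and shap are installed; the dataset and model choices here are ours, not the book's own examples.

    # Minimal sketch: explaining a tree ensemble with SHAP.
    # The diabetes dataset and random forest are illustrative stand-ins.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)

    # Train the black-box model to be explained
    model = RandomForestRegressor(n_estimators=100, random_state=42).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # The summary plot shows how each feature pushes predictions up or down
    shap.summary_plot(shap_values, X)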

About the Technology
It’s often difficult to explain how deep learning models work, even for the data scientists who create them. Improving transparency and interpretability in machine learning models minimizes errors, reduces unintended bias, and increases trust in the outcomes. This unique book contains techniques for looking inside “black box” models, designing accountable algorithms, and understanding the factors that cause skewed results.

About the Book
Interpretable AI teaches you to identify the patterns your model has learned and why it produces its results. As you read, you’ll pick up algorithm-specific approaches, like interpreting regression and generalized additive models, along with tips to improve performance during training. You’ll also explore methods for interpreting complex deep learning models where some processes are not easily observable. AI transparency is a fast-moving field, and this book simplifies cutting-edge research into practical methods you can implement with Python.
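
As one example of those practical methods, a partial dependence plot takes only a few lines with scikit-learn. This is a minimal sketch under assumed choices; the gradient-boosting model, the diabetes dataset, and the plotted features are illustrative, not the book's own code.

    # Minimal sketch: partial dependence with scikit-learn.
    # Model, dataset, and features are illustrative choices.
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import PartialDependenceDisplay

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = GradientBoostingRegressor(random_state=42).fit(X, y)

    # Average effect of 'bmi' and 'bp' on the model's predictions
    PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
    plt.show()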

What's Inside
  • Techniques for interpreting AI models
  • Counteracting errors from bias, data leakage, and concept drift
  • Measuring fairness and mitigating bias (see the sketch after this list)
  • Building GDPR-compliant AI systems
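
As a taste of the fairness material, the sketch below measures demographic parity, one of the fairness notions covered in chapter 8. The fairlearn library and the toy data are assumptions for illustration; the book may use different tooling.

    # Minimal sketch: demographic parity difference with fairlearn.
    # fairlearn and the toy data are illustrative assumptions.
    import numpy as np
    from fairlearn.metrics import demographic_parity_difference

    # Toy predictions and a binary sensitive attribute
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    sex = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

    # 0.0 means both groups receive positive predictions at the same rate
    dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=sex)
    print(f"Demographic parity difference: {dpd:.2f}")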


About the Reader
For data scientists and engineers familiar with Python and machine learning.

About the Author
Ajay Thampi is a machine learning engineer focused on responsible AI and fairness.

Quotes
A sound introduction for practitioners to the exciting field of interpretable AI.
- Pablo Roccatagliata, Torcuato Di Tella University

Ajay Thampi explains in an easy-to-understand way the importance of interpretability in machine learning.
- Ariel Gamiño, Athenahealth

Effectively demystifies interpretable AI for novice and pro alike.
- Vijayant Singh, Razorpay

Concrete examples help the understanding and building of interpretable AI systems.
- Izhar Haq, Long Island University

Table of contents

  1. Inside front cover
  2. Interpretable AI
  3. Copyright
  4. Dedication
  5. Brief contents
  6. Contents
  7. Front matter
    1. Preface
    2. Acknowledgments
    3. About this book
      1. Who should read this book
      2. How this book is organized: a roadmap
      3. About the code
      4. liveBook discussion forum
    4. About the author
    5. About the cover illustration
  8. Part 1. Interpretability basics
  9. 1 Introduction
    1. 1.1 Diagnostics+ AI—an example AI system
    2. 1.2 Types of machine learning systems
      1. 1.2.1 Representation of data
      2. 1.2.2 Supervised learning
      3. 1.2.3 Unsupervised learning
      4. 1.2.4 Reinforcement learning
      5. 1.2.5 Machine learning system for Diagnostics+ AI
    3. 1.3 Building Diagnostics+ AI
    4. 1.4 Gaps in Diagnostics+ AI
      1. 1.4.1 Data leakage
      2. 1.4.2 Bias
      3. 1.4.3 Regulatory noncompliance
      4. 1.4.4 Concept drift
    5. 1.5 Building a robust Diagnostics+ AI system
    6. 1.6 Interpretability vs. explainability
      1. 1.6.1 Types of interpretability techniques
    7. 1.7 What will I learn in this book?
      1. 1.7.1 What tools will I be using in this book?
      2. 1.7.2 What do I need to know before reading this book?
      3. Summary
  10. 2 White-box models
    1. 2.1 White-box models
    2. 2.2 Diagnostics+—diabetes progression
    3. 2.3 Linear regression
      1. 2.3.1 Interpreting linear regression
      2. 2.3.2 Limitations of linear regression
    4. 2.4 Decision trees
      1. 2.4.1 Interpreting decision trees
      2. 2.4.2 Limitations of decision trees
    5. 2.5 Generalized additive models (GAMs)
      1. 2.5.1 Regression splines
      2. 2.5.2 GAM for Diagnostics+ diabetes
      3. 2.5.3 Interpreting GAMs
      4. 2.5.4 Limitations of GAMs
    6. 2.6 Looking ahead to black-box models
      1. Summary
  11. Part 2. Interpreting model processing
  12. 3 Model-agnostic methods: Global interpretability
    1. 3.1 High school student performance predictor
      1. 3.1.1 Exploratory data analysis
    2. 3.2 Tree ensembles
      1. 3.2.1 Training a random forest
    3. 3.3 Interpreting a random forest
    4. 3.4 Model-agnostic methods: Global interpretability
      1. 3.4.1 Partial dependence plots
      2. 3.4.2 Feature interactions
      3. Summary
  13. 4 Model-agnostic methods: Local interpretability
    1. 4.1 Diagnostics+ AI: Breast cancer diagnosis
    2. 4.2 Exploratory data analysis
    3. 4.3 Deep neural networks
      1. 4.3.1 Data preparation
      2. 4.3.2 Training and evaluating DNNs
    4. 4.4 Interpreting DNNs
    5. 4.5 LIME
    6. 4.6 SHAP
    7. 4.7 Anchors
    8. Summary
  14. 5 Saliency mapping
    1. 5.1 Diagnostics+ AI: Invasive ductal carcinoma detection
    2. 5.2 Exploratory data analysis
    3. 5.3 Convolutional neural networks
      1. 5.3.1 Data preparation
      2. 5.3.2 Training and evaluating CNNs
    4. 5.4 Interpreting CNNs
      1. 5.4.1 Probability landscape
      2. 5.4.2 LIME
      3. 5.4.3 Visual attribution methods
    5. 5.5 Vanilla backpropagation
    6. 5.6 Guided backpropagation
    7. 5.7 Other gradient-based methods
    8. 5.8 Grad-CAM and guided Grad-CAM
    9. 5.9 Which attribution method should I use?
      1. Summary
  15. Part 3. Interpreting model representations
  16. 6 Understanding layers and units
    1. 6.1 Visual understanding
    2. 6.2 Convolutional neural networks: A recap
    3. 6.3 Network dissection framework
      1. 6.3.1 Concept definition
      2. 6.3.2 Network probing
      3. 6.3.3 Quantifying alignment
    4. 6.4 Interpreting layers and units
      1. 6.4.1 Running network dissection
      2. 6.4.2 Concept detectors
      3. 6.4.3 Concept detectors by training task
      4. 6.4.4 Visualizing concept detectors
      5. 6.4.5 Limitations of network dissection
      6. Summary
  17. 7 Understanding semantic similarity
    1. 7.1 Sentiment analysis
    2. 7.2 Exploratory data analysis
    3. 7.3 Neural word embeddings
      1. 7.3.1 One-hot encoding
      2. 7.3.2 Word2Vec
      3. 7.3.3 GloVe embeddings
      4. 7.3.4 Model for sentiment analysis
    4. 7.4 Interpreting semantic similarity
      1. 7.4.1 Measuring similarity
      2. 7.4.2 Principal component analysis (PCA)
      3. 7.4.3 t-distributed stochastic neighbor embedding (t-SNE)
      4. 7.4.4 Validating semantic similarity visualizations
    5. Summary
  18. Part 4. Fairness and bias
  19. 8 Fairness and mitigating bias
    1. 8.1 Adult income prediction
      1. 8.1.1 Exploratory data analysis
      2. 8.1.2 Prediction model
    2. 8.2 Fairness notions
      1. 8.2.1 Demographic parity
      2. 8.2.2 Equality of opportunity and odds
      3. 8.2.3 Other notions of fairness
    3. 8.3 Interpretability and fairness
      1. 8.3.1 Discrimination via input features
      2. 8.3.2 Discrimination via representation
    4. 8.4 Mitigating bias
      1. 8.4.1 Fairness through unawareness
      2. 8.4.2 Correcting label bias through reweighting
    5. 8.5 Datasheets for datasets
      1. Summary
  20. 9 Path to explainable AI
    1. 9.1 Explainable AI
    2. 9.2 Counterfactual explanations
      1. Summary
  21. Appendix A. Getting set up
    1. A.1 Python
    2. A.2 Git code repository
    3. A.3 Conda environment
    4. A.4 Jupyter notebooks
    5. A.5 Docker
  22. Appendix B. PyTorch
    1. B.1 What is PyTorch?
    2. B.2 Installing PyTorch
    3. B.3 Tensors
      1. B.3.1 Data types
      2. B.3.2 CPU and GPU tensors
      3. B.3.3 Operations
    4. B.4 Dataset and DataLoader
    5. B.5 Modeling
      1. B.5.1 Automatic differentiation
      2. B.5.2 Model definition
      3. B.5.3 Training
  23. Index
  24. Inside back cover

Product information

  • Title: Interpretable AI
  • Author(s): Ajay Thampi
  • Release date: July 2022
  • Publisher(s): Manning Publications
  • ISBN: 9781617297649