Chapter 2. Interpretable and Explainable Machine Learning
Scientists have been fitting models to data to learn more about observed patterns for centuries. Explainable machine learning models and post hoc explanation of ML models represent an incremental, but important, advance in this long-standing practice. Because ML models pick up nonlinear, faint, and interacting signals more easily than traditional linear models, humans using explainable ML models and post hoc explanation techniques can now learn about those same signals in their data more easily too.
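As a minimal sketch of this pairing, consider the snippet below. It fits a traditional linear model and a gradient boosting model to synthetic data containing a nonlinear term and an interaction, then applies SHAP as a post hoc explanation technique to the boosted model. The choice of scikit-learn, the shap package, and the synthetic data are assumptions for illustration, not the chapter's own examples.

```python
# A sketch of pairing a more flexible model with post hoc explanation.
# Assumes scikit-learn and the shap package are installed; the synthetic
# data and random seeds here are hypothetical illustrations.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
# Target with a nonlinear term and an interaction a linear model will miss.
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=1000)

# A traditional linear model: its coefficients capture only linear effects.
linear = LinearRegression().fit(X, y)
print("linear model R^2:", linear.score(X, y))

# A gradient boosting model can capture the nonlinearity and interaction...
gbm = GradientBoostingRegressor(random_state=0).fit(X, y)
print("GBM R^2:", gbm.score(X, y))

# ...and a post hoc technique like SHAP lets humans inspect what it learned.
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```

In this toy setting, the linear model's fit score stays low because it cannot represent the sine term or the interaction, while the boosted model's score is high and the mean absolute SHAP values indicate which features drive its predictions, illustrating how post hoc explanation helps humans learn from a more flexible model.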
In this chapter, we’ll dig into important ideas for interpretation and explanation before tackling major explainable modeling and post hoc explanation techniques. We’ll also cover the major pitfalls of post hoc explanation, many of which can be overcome by using explainable models and post hoc explanation together. Next, we’ll discuss applications of explainable models and post hoc explanation that increase accountability for AI systems, like model documentation and actionable recourse for wrong decisions. The chapter will close with a case discussion of the so-called “A-level scandal” in the United Kingdom (UK), where an explainable, highly documented model made unaccountable decisions, resulting in a nationwide AI incident. The discussion of explainable models and post hoc explanation continues in Chapters 6 and 7, where we explore two in-depth code examples related to these topics.