Chapter 2. An Overview of Explainability
Explainability has been a part of machine learning since the inception of AI. The very first AIs, rule-based chaining systems, were specifically constructed to provide a clear account of what led to a prediction. For many decades the field continued to treat explainability as a key property of its models, partly because of a focus on general AI but also to demonstrate that the research was sound and on the right track, until the complexity of model architectures outpaced our ability to explain what was happening. After the introduction of artificial neurons and neural networks in the 1980s,1 research into explainability waned as researchers focused on surviving the first AI winter by turning to techniques that were "explainable" because they relied solely on statistical methods, such as Bayesian inference, that were well proven in other fields. Explainability in its modern form (and what we largely focus on in this book) was revived, now as a distinct field of research, in the mid-2010s in response to the persistent question: "This model works really well…but how?"
In just a few years, the field has gone from obscurity to an area of intense interest and investigation. Remarkably, many powerful explainability techniques have been invented, or repurposed from other fields, in that short time. However, the rapid transition from theory to practice, and the increasing need for explainability from those who interact with ML, such as end users and business stakeholders, ...