Chapter 6. Explainable Boosting Machines and Explaining XGBoost
This chapter explores explainable models and post hoc explanation with interactive examples relating to consumer finance. It also applies the approaches discussed in Chapter 2 using explainable boosting machines (EBMs), monotonically constrained XGBoost models, and post hoc explanation techniques. We’ll start with a concept refresher for additivity, constraints, partial dependence and individual conditional expectation (ICE), Shapley additive explanations (SHAP), and model documentation.
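As a quick refresher on partial dependence and ICE before the worked examples, the sketch below computes both for one feature of a toy gradient boosting classifier. The synthetic data is an assumption standing in for the chapter's credit data; this is an illustration of the general technique, not the book's own code.

```python
# Hedged sketch: partial dependence (PD) and individual conditional
# expectation (ICE) curves for one feature of a fitted model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

# Synthetic stand-in for a credit dataset (features and the binary
# default label here are assumptions, not the chapter's data)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# PD averages the model's response over the data while sweeping one
# feature across a grid; ICE keeps one such curve per row, which can
# reveal heterogeneity that the average hides.
pd_result = partial_dependence(model, X, features=[0], kind="both")
avg_curve = pd_result["average"]        # shape: (n_outputs, n_grid_points)
ice_curves = pd_result["individual"]    # shape: (n_outputs, n_rows, n_grid_points)
```

Plotting `avg_curve` over the grid gives the familiar partial dependence plot; overlaying the rows of `ice_curves` shows whether individual observations follow the average trend.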
We’ll then explore an example credit underwriting problem by building from a penalized regression, to a generalized additive model (GAM), to an EBM. In working from simpler to more complex models, we’ll document explicit and deliberate trade-offs regarding the introduction of nonlinearity and interactions into our example probability of default classifier, all while preserving near-total explainability with additive models.
Note
Recall from Chapter 2 that an interpretation is a high-level, meaningful mental representation that contextualizes a stimulus and leverages human background knowledge, whereas an explanation is a low-level, detailed mental representation that seeks to describe a complex process. Interpretation is a much higher bar than explanation, rarely achieved by technical approaches alone.
After that, we’ll consider a second approach to predicting default that allows for complex feature interactions, but controls complexity ...