Chapter 9. Debugging a PyTorch Image Classifier
Even in the hype-fueled 2010s, deep learning (DL) researchers started to notice some “intriguing properties” of their new deep networks. The fact that a good model with high in silico generalization performance can also be easily fooled by adversarial examples is both confusing and counterintuitive. Similar questions were raised in the seminal paper “Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images,” whose authors asked how a deep neural network could classify images as familiar objects when those images were totally unrecognizable to human eyes. If it wasn’t understood already, it’s now clear that, like all other machine learning systems, DL models must be debugged and remediated, especially for use in high-risk scenarios. In Chapter 7, we trained a pneumonia image classifier and used various post hoc explanation techniques to summarize its results. We also touched on the connection between DL explainability techniques and debugging. In this chapter, we pick up where we left off in Chapter 7 and apply various debugging techniques to the trained model to help ensure it is robust and reliable enough to deploy.
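To make the adversarial-example problem concrete, here is a minimal PyTorch sketch of the fast gradient sign method (FGSM), one common way such inputs are crafted: it nudges every pixel a small step in the direction that increases the model’s loss. This is an illustrative sketch, not the book’s code; it assumes a trained classifier `model` (such as the pneumonia classifier from Chapter 7) with inputs scaled to [0, 1], and the function name `fgsm_attack` and the `epsilon` default are hypothetical.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Craft an adversarial example with the fast gradient sign method.

    `image` is a (1, C, H, W) float tensor with values in [0, 1];
    `label` is a (1,)-shaped tensor holding the true class index.
    `epsilon` (illustrative default) controls the perturbation size.
    """
    model.eval()
    image = image.clone().detach().requires_grad_(True)

    # Forward pass: compute the loss with respect to the true label.
    logits = model(image)
    loss = F.cross_entropy(logits, label)

    # Backward pass: gradient of the loss w.r.t. the input pixels.
    model.zero_grad()
    loss.backward()

    # Step each pixel in the sign of its gradient to increase the loss,
    # then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a perturbation this small, the adversarial image typically looks identical to the original to a human, yet `model(fgsm_attack(model, x, y)).argmax(dim=1)` will often disagree with `model(x).argmax(dim=1)`, which is exactly the gap between in silico accuracy and real-world robustness that this chapter’s debugging techniques target.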
DL represents the state of the art in much of the ML research space today. However, its exceptional complexity also makes it harder to test and debug, which increases risk in real-world deployments. All software, even DL, has bugs, and they ...