Chapter 5. Automated Testing: Move Fast Without Breaking Things
Dana had just sat down at her desk, the aroma of fresh coffee filling the space around her. She and a junior data scientist on the team continued work on the user story they had kicked off yesterday—engineering a new feature that could improve the model.
They made the necessary changes and executed a command to run the tests. This suite of tests helped validate that the entire codebase was still behaving as expected. After 20 seconds, their terminal showed dashes of green—all the tests passed.
Sometimes the terminal went red. Some tests failed, but that was OK—the failing tests caught them just as they were about to fall down a deep rabbit hole, helping them recover easily by tracing a few steps backward. Once the tests were back to green, they gave it another go.
Green or red, the tests gave them fast feedback on code changes. The tests gave them confidence and the occasional dopamine hit, telling them when they were headed in the right direction and stopping them when they weren't. They didn't have to follow a tedious sequence of manual steps to test the code. And when the tests failed, there were only a small number of recent changes that could have caused the failure, not hours of potential suspects to sort through.
When they needed to train the model, they ran a command that triggered training on the cloud and their experiment-tracking dashboard lit up with updated metrics and explainability visualizations, which signaled ...