Chapter 5. The Pitfalls of A/B Testing
Thus far in this report, I’ve mainly focused on introducing the basic concepts of evaluating machine learning models, with an occasional cautionary note here and there. This chapter is just the opposite. I’ll give only a cursory overview of the basics of A/B testing and focus mostly on best-practice tips. This is because there are many books and articles that teach statistical hypothesis testing, but relatively few that discuss what can go wrong.
A/B testing is a widespread practice today. But a lot can go wrong in setting it up and interpreting the results. We’ll discuss important questions to consider when doing A/B testing, followed by an overview of a promising alternative: multiarmed bandits.
Recall that there are roughly two regimes for machine learning evaluation: offline and online. Offline evaluation happens during the prototyping phase, where one tries out different features, models, and hyperparameters. It’s an iterative process: many rounds of evaluation against a chosen baseline, on a chosen set of evaluation metrics. Once you have a model that performs reasonably well, the next step is to deploy it to production and evaluate its performance online, i.e., on live data. This chapter discusses online testing.
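To make the offline regime concrete, here is a minimal sketch of one round of that loop, assuming scikit-learn and synthetic stand-in data; the candidate models, the AUC metric, and the train/validation split are illustrative choices, not prescriptions from this report.

```python
# A minimal sketch of offline evaluation: score candidate models against
# a baseline on a held-out validation set, using the same metric for all.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice this would be your training set.
X, y = make_classification(n_samples=5000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The baseline: a trivial classifier that ignores the features.
baseline = DummyClassifier(strategy="prior").fit(X_train, y_train)
baseline_auc = roc_auc_score(y_val, baseline.predict_proba(X_val)[:, 1])

# Candidate models to compare against the baseline (illustrative choices).
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"{name}: AUC = {auc:.3f} (baseline = {baseline_auc:.3f})")
```

The point is the loop itself: every candidate is judged on the same held-out data and the same metric as the baseline, which is what makes one round of offline evaluation comparable to the next.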
A/B Testing: What Is It?
A/B testing has emerged as the predominant method of online testing in the industry today.
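At its core, an A/B test randomly splits live traffic between the incumbent system (A, the control) and the new variant (B, the treatment), then applies a statistical hypothesis test to the difference in a chosen metric. As a concrete illustration, here is a minimal sketch of a two-proportion z-test on conversion rates, using only the Python standard library; the function name and the example counts are hypothetical.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 1,200 conversions out of 50,000 users in A,
# 1,320 out of 50,000 in B.
z, p = two_proportion_z_test(1200, 50_000, 1320, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value is evidence against the null hypothesis that the two variants perform equally, but running the test is the easy part; as the rest of this chapter argues, setting the experiment up correctly and interpreting the result is where most of the pitfalls lie.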