Chapter 10. Linear Regression: Going Back to Basics
Linear regression, estimated with ordinary least squares (OLS), is the first machine learning algorithm most data scientists learn, but it has become more of an intellectual curiosity with the advent of more powerful nonlinear alternatives, such as gradient boosting regression. Because of this, many practitioners are unaware of several properties of OLS that are very helpful for building intuition about learning algorithms. This chapter goes through some of these important properties and highlights their significance.
What’s in a Coefficient?
Let's start with the simplest setting, with only one feature:

$$y = \alpha + \beta x + \epsilon$$

The first parameter ($\alpha$) is the constant or intercept, and the second ($\beta$) is the slope, as you may recall from the typical functional form for a line.
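To make this concrete, here is a minimal sketch that fits a one-feature OLS model and recovers the two parameters; the simulated data, coefficient values, and variable names are assumptions for illustration, not from the chapter:

import numpy as np
import statsmodels.api as sm

# Simulate y = alpha + beta * x + epsilon (assumed values for illustration)
rng = np.random.default_rng(42)
alpha, beta = 2.0, 0.5
x = rng.normal(size=1_000)
y = alpha + beta * x + rng.normal(size=1_000)

# Fit OLS with a constant term
X = sm.add_constant(x)
model = sm.OLS(y, X).fit()
print(model.params)  # approximately [2.0, 0.5]: intercept first, then slope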
Since the residuals are mean zero, the conditional expectation of the outcome is $E(y \mid x) = \alpha + \beta x$, and by taking partial derivatives you can see that:

$$\frac{\partial E(y \mid x)}{\partial x} = \beta$$

$$E(y \mid x = 0) = \alpha$$
As discussed in Chapter 9, the first equation is quite useful for interpretability reasons, since it says that a one-unit change in the feature is associated with a change of $\beta$ units in the outcome, on average. However, as I will now show, you must be careful not to give it a causal interpretation.
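Continuing the hypothetical example above, you can verify this interpretation directly: increasing the feature by one unit shifts the prediction by exactly the estimated slope.

# Predicted outcomes at x = 0 and x = 1 (each row: constant, x)
x0 = np.array([[1.0, 0.0], [1.0, 1.0]])
preds = model.predict(x0)

# The difference equals the estimated slope, model.params[1]
print(preds[1] - preds[0])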
By substituting the definition of the outcome inside the covariance, you can also show that:

$$\mathrm{Cov}(x, y) = \mathrm{Cov}(x, \alpha + \beta x + \epsilon) = \beta\,\mathrm{Var}(x)$$

since OLS guarantees that the residuals are uncorrelated with the feature, and therefore:

$$\beta = \frac{\mathrm{Cov}(x, y)}{\mathrm{Var}(x)}$$
In a bivariate setting, the slope depends on the covariance between the outcome and the feature, scaled by the inverse of the variance of the feature.
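Again using the hypothetical simulated data from above, this identity can be checked numerically: the ratio of the sample covariance to the feature's variance reproduces the fitted slope.

# Slope as covariance over variance (the ddof convention cancels in the ratio)
C = np.cov(x, y)
print(C[0, 1] / C[0, 0])  # matches model.params[1]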