Gradient descent

In the examples in this chapter, we analytically solved for the values of the model's parameters that minimize the cost function with the following equation, the normal equation:

β = (XᵀX)⁻¹Xᵀy
Recall that X is the matrix of feature values for the training examples. The product XᵀX is a square matrix with dimensions n by n, where n is the number of features. The computational complexity of inverting this square matrix is nearly cubic in the number of features. While the number of features has been small in this chapter's examples, this inversion can be prohibitively costly for problems with tens of thousands of explanatory variables, which ...
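The normal equation can be sketched in a few lines of NumPy. The data below is hypothetical, chosen only to illustrate the computation; in practice one would prefer a numerically stabler solver such as np.linalg.lstsq over an explicit inverse, precisely because inverting the n-by-n matrix XᵀX costs roughly O(n³):

```python
import numpy as np

# Hypothetical training data: a column of ones for the intercept
# followed by a single feature.
X = np.array([[1.0, 6.0],
              [1.0, 8.0],
              [1.0, 10.0],
              [1.0, 14.0],
              [1.0, 18.0]])
y = np.array([7.0, 9.0, 13.0, 17.5, 18.0])

# Normal equation: beta = (X^T X)^{-1} X^T y.
# Forming and inverting the n-by-n matrix X^T X is the step whose
# cost is nearly cubic in the number of features n.
beta = np.linalg.inv(X.T @ X) @ X.T @ y
print(beta)  # intercept and slope of the least-squares fit
```

For the small matrices in this chapter the inverse is harmless, but the cubic cost of this step is what motivates the iterative alternative, gradient descent, discussed next.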
