Chapter 6. Decision Trees
Decision trees are versatile machine learning algorithms that can perform both classification and regression tasks, and even multioutput tasks. They are powerful algorithms, capable of fitting complex datasets. For example, in Chapter 2 you trained a DecisionTreeRegressor
model on the California housing dataset, fitting it perfectly (actually, overfitting it).
Decision trees are also the fundamental components of random forests (see Chapter 7), which are among the most powerful machine learning algorithms available today.
In this chapter we will start by discussing how to train, visualize, and make predictions with decision trees. Then we will go through the CART training algorithm used by Scikit-Learn, and we will explore how to regularize trees and use them for regression tasks. Finally, we will discuss some of the limitations of decision trees.
Training and Visualizing a Decision Tree
To understand decision trees, let’s build one and take a look at how it makes predictions. The following code trains a DecisionTreeClassifier
on the iris dataset (see Chapter 4):
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris(as_frame=True)
X_iris = iris.data[["petal length (cm)", "petal width (cm)"]].values
y_iris = iris.target

tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X_iris, y_iris)
You can visualize the trained decision tree by first using the export_graphviz()
function to ...
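As a minimal sketch of that first step, export_graphviz() can emit the fitted tree in Graphviz .dot format; passing out_file=None makes it return the dot source as a string instead of writing a file (the feature and class names below match the training data used above):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

iris = load_iris(as_frame=True)
X_iris = iris.data[["petal length (cm)", "petal width (cm)"]].values
tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X_iris, iris.target)

# Export the tree structure as Graphviz dot source
dot_data = export_graphviz(
    tree_clf,
    out_file=None,  # return the dot source as a string
    feature_names=["petal length (cm)", "petal width (cm)"],
    class_names=iris.target_names,
    rounded=True,
    filled=True,
)
print(dot_data[:50])
```

The resulting dot source can then be rendered to an image with the graphviz package or the dot command-line tool.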