The second dataset sports an optimal decision boundary that is a diagonal line, one that is not parallel to either axis. Here, we start to see some cool behavior from certain classifiers:
Though all four classifiers handled this dataset reasonably well, we start to see each classifier's personality come out. First, k-NN creates a boundary that closely approximates the optimal one. Logistic regression, whose boundary is linear by construction, lands its line in exactly the right spot.
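To make this concrete, here is a minimal sketch of the comparison, assuming scikit-learn and a synthetic stand-in for the dataset (two point clouds split by the diagonal x₂ = x₁), since the post's actual data isn't reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical stand-in for the second dataset: 2-D points labeled by
# which side of the diagonal line x2 = x1 they fall on.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 1] > X[:, 0]).astype(int)

knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)
logreg = LogisticRegression().fit(X, y)

# Evaluate both boundaries on a grid: k-NN approximates the diagonal
# piecewise, while logistic regression recovers it as a single line.
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = np.c_[xx.ravel(), yy.ravel()]
knn_regions = knn.predict(grid).reshape(xx.shape)
logreg_regions = logreg.predict(grid).reshape(xx.shape)

# For logistic regression, the fitted coefficients should be roughly
# equal in magnitude and opposite in sign, i.e. the line x2 ≈ x1.
print(logreg.coef_, logreg.intercept_)
```

The key contrast to look for: k-NN's predicted regions trace the diagonal as a slightly jagged, locally fitted curve, while logistic regression's single linear boundary sits essentially on top of the optimal one.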