Chapter 4. Nonlinear Classifiers
4.1. Introduction
In the previous chapter we dealt with the design of linear classifiers described by linear discriminant functions (hyperplanes) g(x). In the simple two-class case, we saw that the perceptron algorithm computes the weights of the linear function g(x), provided that the classes are linearly separable. For classes that are not linearly separable, linear classifiers can still be designed optimally, for example by minimizing the squared error. In this chapter we deal with problems that are not linearly separable and for which even an optimally designed linear classifier does not lead to satisfactory performance. The design of nonlinear classifiers thus emerges as an inescapable necessity.
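To make this limitation concrete, the sketch below (not from the book; the function name and parameters are illustrative assumptions) runs a basic reward-punishment perceptron on the XOR truth table, a standard example of two classes that no single hyperplane can separate, so the algorithm never converges.

```python
# Minimal sketch, assuming a NumPy environment; names are illustrative,
# not the book's own implementation.
import numpy as np

def perceptron_train(X, y, rho=1.0, max_epochs=100):
    """Reward-punishment perceptron in the augmented (extended) feature space.
    X: (N, l) data matrix; y: labels in {-1, +1}.
    Returns the weight vector (length l + 1) and a convergence flag."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])  # append a constant 1 for the bias term
    w = np.zeros(X_aug.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x_i, y_i in zip(X_aug, y):
            if y_i * np.dot(w, x_i) <= 0:             # misclassified (or on the hyperplane)
                w += rho * y_i * x_i                  # move the hyperplane toward x_i
                errors += 1
        if errors == 0:                               # a separating hyperplane was found
            return w, True
    return w, False                                   # classes were not separated

# XOR truth table: no hyperplane classifies all four points correctly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, +1, +1, -1])

w, converged = perceptron_train(X, y)
print("converged:", converged)  # False: the updates cycle and the classes are never separated
```

If the labels were changed to those of an OR gate, the same routine would converge in a few epochs; it is exactly this contrast that motivates the nonlinear architectures developed in this chapter.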
4.2. ...