
Quadratic classifier

A quadratic classifier is used in machine learning and statistical classification to separate measurements of two or more classes of objects or events by a quadric surface. It is a more general version of the linear classifier.

Statistical classification considers a set of vectors of observations $x$ of an object or event, each of which has a known type $y$. This set is referred to as the training set. The problem is then to determine, for a given new observation vector, what the best class should be. For a quadratic classifier, the correct solution is assumed to be quadratic in the measurements, so $y$ will be decided based on

$x^{\mathsf T} A x + b^{\mathsf T} x + c$

where $A$ is a matrix of weights, $b$ is a vector of weights and $c$ is a scalar. In the special case where each observation consists of two measurements, this means that the surfaces separating the classes will be conic sections (i.e. either a line, a circle or ellipse, a parabola or a hyperbola). In this sense, the quadratic model is a generalization of the linear model, and its use is justified by the desire to extend the classifier's ability to represent more complex separating surfaces.

Quadratic discriminant analysis (QDA) is closely related to linear discriminant analysis (LDA), where it is assumed that the measurements from each class are normally distributed. Unlike LDA, however, in QDA there is no assumption that the covariances of the classes are identical. When the normality assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is the likelihood ratio test. Suppose there are only two groups (so $y \in \{0, 1\}$), the means of each class are defined to be $\mu_{y=0}, \mu_{y=1}$, and the covariances are defined as $\Sigma_{y=0}, \Sigma_{y=1}$. Then the likelihood ratio is given by

$\mathrm{LR}(x) = \dfrac{\,|2\pi\Sigma_{y=1}|^{-1/2} \exp\!\left(-\tfrac{1}{2}(x-\mu_{y=1})^{\mathsf T} \Sigma_{y=1}^{-1} (x-\mu_{y=1})\right)}{\,|2\pi\Sigma_{y=0}|^{-1/2} \exp\!\left(-\tfrac{1}{2}(x-\mu_{y=0})^{\mathsf T} \Sigma_{y=0}^{-1} (x-\mu_{y=0})\right)}$

and the observation is assigned to one class or the other according to whether this ratio exceeds some threshold $t$. Taking logarithms and rearranging shows that the resulting separating surface between the classes is a quadratic of the form above, with quadratic coefficient matrix $A = \tfrac{1}{2}\left(\Sigma_{y=0}^{-1} - \Sigma_{y=1}^{-1}\right)$. In practice, sample estimates of the mean vectors and variance-covariance matrices, computed from the training set, substitute for the population quantities in this formula.
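The decision rule described above can be illustrated with a short program. The following is a minimal NumPy sketch, not part of the original article: it estimates per-class means and covariances from a training set, evaluates the Gaussian log-likelihood ratio, and compares it with a threshold. The function names (fit_qda, log_gaussian, predict), the synthetic data, and the zero default threshold are illustrative assumptions.

```python
# Minimal sketch of a two-class quadratic (QDA-style) classifier using NumPy.
# Assumes each class is modeled as a multivariate normal with its own mean
# and covariance; class priors are not modeled and can be absorbed into the
# threshold. All names here are illustrative, not from the article.
import numpy as np


def fit_qda(X, y):
    """Estimate (mean vector, covariance matrix) for each class label 0 and 1."""
    params = {}
    for label in (0, 1):
        Xc = X[y == label]
        params[label] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params


def log_gaussian(x, mu, sigma):
    """Log density of a multivariate normal N(mu, sigma) at x, up to the shared (2*pi)^(-d/2) constant."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (logdet + diff @ np.linalg.solve(sigma, diff))


def predict(x, params, log_threshold=0.0):
    """Assign class 1 when the log-likelihood ratio exceeds the threshold."""
    llr = log_gaussian(x, *params[1]) - log_gaussian(x, *params[0])
    return int(llr > log_threshold)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two Gaussian classes with different covariances -> quadratic boundary.
    X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], size=200)
    X1 = rng.multivariate_normal([2, 2], [[2.0, 0.8], [0.8, 0.5]], size=200)
    X = np.vstack([X0, X1])
    y = np.array([0] * 200 + [1] * 200)

    params = fit_qda(X, y)
    print(predict(np.array([0.0, 0.0]), params))  # expected: 0
    print(predict(np.array([2.0, 2.0]), params))  # expected: 1
```

Working with the logarithm of the ratio avoids numerical underflow from the exponentials and makes the quadratic form of the decision boundary explicit, since the log densities are quadratic in $x$.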

[ "Support vector machine", "Classifier (linguistics)", "Classifier (UML)", "Margin (machine learning)" ]