
Mixture model

In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set identify the sub-population to which an individual observation belongs. Formally, a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with 'mixture distributions' relate to deriving the properties of the overall population from those of the sub-populations, 'mixture models' are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information.

Some ways of implementing mixture models involve steps that attribute postulated sub-population identities to individual observations (or weights towards such sub-populations), in which case these can be regarded as types of unsupervised learning or clustering procedures. However, not all inference procedures involve such steps.

Mixture models should not be confused with models for compositional data, i.e., data whose components are constrained to sum to a constant value (1, 100%, etc.). However, compositional models can be thought of as mixture models, where members of the population are sampled at random. Conversely, mixture models can be thought of as compositional models, where the total size of the population has been normalized to 1.

A typical finite-dimensional mixture model is a hierarchical model consisting of the following components: N observed random variables, each assumed to be generated by one of K mixture components (typically distributions from the same parametric family but with different parameters); K mixture weights, which are probabilities that sum to 1; and the parameters of each of the K components. In addition, in a Bayesian setting, the mixture weights and parameters will themselves be random variables, and prior distributions will be placed over them. In such a case, the weights are typically viewed as a K-dimensional random vector drawn from a Dirichlet distribution (the conjugate prior of the categorical distribution), and the parameters will be distributed according to their respective conjugate priors.
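The generative structure described above can be illustrated with a minimal sketch in Python. The component means, standard deviations, and sample sizes below are illustrative assumptions chosen for the example, not values from the text; the weights are drawn from a Dirichlet prior as in the Bayesian setting, and each observation is generated by first choosing a component and then sampling from that component's distribution.

```python
# Minimal generative sketch of a finite Gaussian mixture model.
# All numeric parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

K = 3                                        # number of mixture components
weights = rng.dirichlet(alpha=np.ones(K))    # mixture weights from a Dirichlet prior
means = np.array([-2.0, 0.0, 3.0])           # component means (assumed)
stds = np.array([0.5, 1.0, 0.8])             # component standard deviations (assumed)

N = 1000
# For each observation: pick a component according to the weights,
# then sample from that component's Gaussian.
components = rng.choice(K, size=N, p=weights)
observations = rng.normal(loc=means[components], scale=stds[components])

# The overall density is the weighted sum of the component densities:
#   p(x) = sum_k weights[k] * Normal(x | means[k], stds[k])

# Inference in the opposite direction (recovering sub-population structure
# from the pooled sample, without component labels) can be done, for example,
# with expectation-maximization via scikit-learn:
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=K, random_state=0).fit(observations.reshape(-1, 1))
```

Note that the fitting step only sees the pooled observations, mirroring the point that mixture models infer sub-population properties without sub-population identity information.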

[ "Statistics", "Machine learning", "Artificial intelligence", "Pattern recognition", "MIXTURE COMPONENT", "gaussian mixture regression", "variational learning", "mixture modeling", "Dirichlet process" ]