
Maximum a posteriori estimation

In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity that equals the mode of the posterior distribution. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior distribution (quantifying the additional information available through prior knowledge of a related event) over the quantity one wants to estimate. MAP estimation can therefore be seen as a regularization of ML estimation.

Assume that we want to estimate an unobserved population parameter $\theta$ on the basis of observations $x$. Let $f$ be the sampling distribution of $x$, so that $f(x \mid \theta)$ is the probability of $x$ when the underlying population parameter is $\theta$. Then the function

$$\theta \mapsto f(x \mid \theta)$$

is known as the likelihood function, and the estimate

$$\hat{\theta}_{\mathrm{ML}}(x) = \underset{\theta}{\operatorname{arg\,max}}\; f(x \mid \theta)$$

is the maximum likelihood estimate of $\theta$.

Now assume that a prior distribution $g$ over $\theta$ exists. This allows us to treat $\theta$ as a random variable, as in Bayesian statistics. We can calculate the posterior distribution of $\theta$ using Bayes' theorem:

$$\theta \mapsto f(\theta \mid x) = \frac{f(x \mid \theta)\, g(\theta)}{\int_{\Theta} f(x \mid \vartheta)\, g(\vartheta)\, d\vartheta},$$

where $g$ is the density function of $\theta$ and $\Theta$ is the domain of $g$.

The method of maximum a posteriori estimation then estimates $\theta$ as the mode of the posterior distribution of this random variable:

$$\hat{\theta}_{\mathrm{MAP}}(x) = \underset{\theta}{\operatorname{arg\,max}}\; f(\theta \mid x) = \underset{\theta}{\operatorname{arg\,max}}\; f(x \mid \theta)\, g(\theta).$$

The denominator of the posterior distribution (the so-called marginal likelihood) is always positive and does not depend on $\theta$, so it plays no role in the optimization. Observe that the MAP estimate of $\theta$ coincides with the ML estimate when the prior $g$ is uniform (that is, a constant function). When the loss function is of the form

$$L(\theta, a) = \begin{cases} 0, & \text{if } |a - \theta| < c, \\ 1, & \text{if } |a - \theta| \geq c, \end{cases}$$

then, as $c \to 0$, the sequence of Bayes estimators approaches the MAP estimator, provided that the distribution of $\theta$ is quasi-concave.
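A minimal Python sketch makes the relationship between ML and MAP concrete, assuming a Bernoulli likelihood with a conjugate Beta(a, b) prior; the observations and prior parameters below are made up for illustration. With this prior the posterior is Beta(a + k, b + n − k), whose mode gives the MAP estimate in closed form, and the sketch also verifies it by numerically maximizing $\log f(x \mid \theta) + \log g(\theta)$:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta as beta_dist

# Made-up observations: n Bernoulli(theta) trials with k successes.
x = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
n, k = len(x), int(x.sum())

# ML estimate: argmax_theta f(x | theta) = k / n for the Bernoulli model.
theta_ml = k / n

# MAP estimate under a Beta(a, b) prior. The posterior is
# Beta(a + k, b + n - k), whose mode is (a + k - 1) / (a + b + n - 2)
# for a, b > 1. A uniform prior (a = b = 1) recovers the ML estimate.
a, b = 2.0, 2.0
theta_map = (a + k - 1) / (a + b + n - 2)

# Numerical check: maximize log f(x | theta) + log g(theta). The
# marginal likelihood is dropped, since it does not depend on theta.
def neg_log_posterior(theta):
    log_lik = k * np.log(theta) + (n - k) * np.log(1.0 - theta)
    log_prior = beta_dist.logpdf(theta, a, b)
    return -(log_lik + log_prior)

res = minimize_scalar(neg_log_posterior, bounds=(1e-9, 1 - 1e-9),
                      method="bounded")

print(f"ML estimate:   {theta_ml:.4f}")   # 0.7000
print(f"MAP estimate:  {theta_map:.4f}")  # 0.6667, pulled toward prior mean 0.5
print(f"Numerical MAP: {res.x:.4f}")      # matches the closed form
```

The Beta prior here acts like pseudo-counts of prior successes and failures, which is one way to see MAP estimation as a regularized form of ML estimation.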

[ "Maximum likelihood", "Algorithm", "Artificial intelligence", "Pattern recognition", "MAP solution", "maximum a posteriori algorithm", "maximum a posteriori decoder", "map adaptation", "Minimum chi-square estimation" ]