
Minimum mean square error

In statistics and signal processing, a minimum mean square error (MMSE) estimator is an estimation method which minimizes the mean square error (MSE), a common measure of estimator quality, of the fitted values of a dependent variable. In the Bayesian setting, the term MMSE more specifically refers to estimation with a quadratic loss function; in that case, the MMSE estimator is given by the posterior mean of the parameter to be estimated. Since the posterior mean is often cumbersome to calculate, the form of the MMSE estimator is usually constrained to lie within a certain class of functions. Linear MMSE estimators are a popular choice since they are easy to use, easy to calculate, and very versatile. They have given rise to many popular estimators such as the Wiener–Kolmogorov filter and the Kalman filter.

The basic idea behind the Bayesian approach to estimation stems from practical situations where we often have some prior information about the parameter to be estimated. For instance, we may have prior information about the range the parameter can assume; we may have an old estimate of the parameter that we want to modify when a new observation is made available; or we may know the statistics of an actual random signal such as speech. This is in contrast to non-Bayesian approaches such as the minimum-variance unbiased estimator (MVUE), where nothing is assumed to be known about the parameter in advance and which do not account for such situations. In the Bayesian approach, such prior information is captured by the prior probability density function of the parameters, and based on Bayes' theorem it allows us to make better posterior estimates as more observations become available. Thus, unlike the non-Bayesian approach, where parameters of interest are assumed to be deterministic but unknown constants, the Bayesian estimator seeks to estimate a parameter that is itself a random variable. Furthermore, Bayesian estimation can deal with situations where the sequence of observations is not necessarily independent. Bayesian estimation thus provides yet another alternative to the MVUE, which is useful when the MVUE does not exist or cannot be found.

Let the optimal linear MMSE estimator of $x$ from the observation $y$ be given as $\hat{x} = Wy + b$, where we are required to find expressions for $W$ and $b$. It is required that the MMSE estimator be unbiased. This means $\operatorname{E}\{\hat{x}\} = \operatorname{E}\{x\}$, which forces the intercept to be $b = \operatorname{E}\{x\} - W\operatorname{E}\{y\}$.
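As a concrete sketch of the linear case, the snippet below evaluates the standard closed-form linear MMSE solution, $W = C_{XY}C_{Y}^{-1}$ together with $b = \operatorname{E}\{x\} - W\operatorname{E}\{y\}$, from sample moments. It is plain NumPy with a purely hypothetical synthetic measurement model; the variable names and the model are illustrative assumptions, not part of the article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical model (illustrative assumption): a scalar hidden x observed
    # through two noisy linear measurements y = A*x + noise.
    n_samples = 100_000
    x = rng.normal(loc=1.0, scale=2.0, size=n_samples)      # hidden variable
    A = np.array([1.0, 0.5])                                 # observation gains
    y = x[:, None] * A + rng.normal(scale=1.0, size=(n_samples, 2))

    # Sample moments standing in for the true expectations.
    x_mean, y_mean = x.mean(), y.mean(axis=0)
    C_Y = np.cov(y, rowvar=False)                            # covariance of y
    C_XY = np.array([np.cov(x, y[:, j])[0, 1] for j in range(2)])  # cross-covariance

    # Linear MMSE estimator x_hat = W y + b, with b chosen so the estimator
    # is unbiased: b = E{x} - W E{y}.
    W = C_XY @ np.linalg.inv(C_Y)
    b = x_mean - W @ y_mean

    x_hat = y @ W + b
    print("empirical MSE of the linear MMSE estimate:", np.mean((x_hat - x) ** 2))

Here $W$ and $b$ are estimated from sample moments purely for illustration; in applications they would come from the known or modeled second-order statistics of $x$ and $y$.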
Let $x$ be an $n \times 1$ hidden random vector variable, and let $y$ be an $m \times 1$ known random vector variable (the measurement or observation); the two are not necessarily of the same dimension. An estimator $\hat{x}(y)$ of $x$ is any function of the measurement $y$. The estimation error vector is given by $e = \hat{x} - x$, and its mean squared error (MSE) is given by the trace of the error covariance matrix,

$\operatorname{MSE} = \operatorname{tr}\left\{\operatorname{E}\{(\hat{x}-x)(\hat{x}-x)^{T}\}\right\} = \operatorname{E}\{(\hat{x}-x)^{T}(\hat{x}-x)\},$

where the expectation $\operatorname{E}$ is taken over both $x$ and $y$. When $x$ is a scalar variable, the MSE expression simplifies to $\operatorname{E}\{(\hat{x}-x)^{2}\}$. Note that the MSE can equivalently be defined in other ways, since

$\operatorname{tr}\left\{\operatorname{E}\{ee^{T}\}\right\} = \operatorname{E}\left\{\operatorname{tr}\{ee^{T}\}\right\} = \operatorname{E}\{e^{T}e\} = \sum_{i=1}^{n} \operatorname{E}\{e_{i}^{2}\}.$
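To see this equivalence numerically, the short check below (again hypothetical NumPy code, with an arbitrary made-up error covariance) computes the MSE both as the trace of the empirical second-moment matrix of $e$ and as the average squared norm $\operatorname{E}\{e^{T}e\}$; the two agree up to floating-point rounding.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical zero-mean error vectors e (one per row), n = 3 components each.
    n_samples = 50_000
    cov = np.array([[2.0, 0.3, 0.0],
                    [0.3, 1.0, 0.2],
                    [0.0, 0.2, 0.5]])
    e = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=n_samples)

    # MSE as the trace of the empirical matrix E{e e^T}.
    mse_trace = np.trace(e.T @ e / n_samples)

    # MSE as the expected squared norm E{e^T e} = sum_i E{e_i^2}.
    mse_norm = np.mean(np.sum(e ** 2, axis=1))

    print(mse_trace, mse_norm)   # identical up to rounding; both near tr(cov) = 3.5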

[ "Communication channel", "Estimator", "frequency domain turbo equalization", "minimum mean square error detector" ]