
Cramér–Rao bound

In estimation theory and statistics, the Cramér–Rao bound (CRB), Cramér–Rao lower bound (CRLB), Cramér–Rao inequality, Fréchet–Darmois–Cramér–Rao inequality, or information inequality expresses a lower bound on the variance of unbiased estimators of a deterministic (fixed, though unknown) parameter. It is named in honor of Harald Cramér, Calyampudi Radhakrishna Rao, Maurice Fréchet, and Georges Darmois, all of whom independently derived this limit to statistical precision in the 1940s.

In its simplest form, the bound states that the variance of any unbiased estimator is at least as high as the inverse of the Fisher information. An unbiased estimator that achieves this lower bound is said to be (fully) efficient. Such an estimator achieves the lowest possible mean squared error among all unbiased methods, and is therefore the minimum variance unbiased (MVU) estimator. However, in some cases no unbiased estimator achieves the bound. This may occur either because, for every unbiased estimator, there exists another with strictly smaller variance, or because an MVU estimator exists but its variance is strictly greater than the inverse of the Fisher information.

The Cramér–Rao bound can also be used to bound the variance of biased estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are below the unbiased Cramér–Rao lower bound; see estimator bias.

The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions; these conditions are listed later in this section.

Suppose θ is an unknown deterministic parameter which is to be estimated from measurements x, distributed according to some probability density function f(x; θ). The variance of any unbiased estimator θ̂ of θ is then bounded by the reciprocal of the Fisher information I(θ):

$$\operatorname{var}(\hat{\theta}) \geq \frac{1}{I(\theta)},$$

where the Fisher information I(θ) is defined by

$$I(\theta) = \operatorname{E}\!\left[\left(\frac{\partial \ell(x;\theta)}{\partial \theta}\right)^{2}\right],$$

ℓ(x; θ) = log(f(x; θ)) is the natural logarithm of the likelihood function, and E denotes the expected value (over x).

The efficiency of an unbiased estimator θ̂ measures how close this estimator's variance comes to this lower bound; estimator efficiency is defined as

$$e(\hat{\theta}) = \frac{I(\theta)^{-1}}{\operatorname{var}(\hat{\theta})}.$$
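The scalar bound above is easy to check numerically in a standard case. The sketch below is not part of the original article; the model and parameter values (theta_true, sigma, n, trials) are illustrative assumptions. For i.i.d. Gaussian samples with known variance σ², the Fisher information of n samples is n/σ² and the CRB is σ²/n; the Monte Carlo experiment confirms that the sample mean, an unbiased estimator, has variance essentially equal to that bound, i.e. it is fully efficient.

```python
# Minimal sketch (not from the article): verify the Cramér–Rao bound numerically.
# Assumed model: x_1, ..., x_n ~ N(theta, sigma^2) with sigma known, so the
# Fisher information of the sample is I(theta) = n / sigma^2 and the CRB is
# sigma^2 / n. The sample mean is unbiased and attains the bound.

import numpy as np

rng = np.random.default_rng(0)

theta_true = 2.0     # unknown parameter being estimated (assumed value)
sigma = 1.5          # known noise standard deviation (assumed value)
n = 50               # samples per experiment
trials = 100_000     # Monte Carlo repetitions

# Cramér–Rao lower bound for any unbiased estimator of theta in this model
crb = sigma**2 / n

# Estimate the variance of the sample mean by Monte Carlo
samples = rng.normal(theta_true, sigma, size=(trials, n))
estimates = samples.mean(axis=1)        # unbiased estimator: sample mean
empirical_var = estimates.var(ddof=1)

print(f"CRB              : {crb:.6f}")
print(f"Var(sample mean) : {empirical_var:.6f}")  # ≈ CRB, so the estimator is efficient
```

With these settings the printed empirical variance matches σ²/n = 0.045 to within Monte Carlo error, illustrating the "fully efficient" case discussed above; an estimator with empirical variance noticeably above the CRB would have efficiency e(θ̂) < 1.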

[ "Maximum likelihood", "Upper and lower bounds", "Estimation theory", "Estimator", "bhattacharyya bound", "barankin bound" ]