
Consistency (statistics)

In statistics, consistency of procedures, such as computing confidence intervals or conducting hypothesis tests, is a desired property of their behaviour as the number of items in the data set to which they are applied increases indefinitely. In particular, consistency requires that the outcome of the procedure with unlimited data should identify the underlying truth. Use of the term in statistics derives from Sir Ronald Fisher in 1922.

Use of the terms consistency and consistent in statistics is restricted to cases where essentially the same procedure can be applied to any number of data items. In complicated applications of statistics, there may be several ways in which the number of data items may grow. For example, records for rainfall within an area might increase in three ways: records for additional time periods; records for additional sites within a fixed area; records for extra sites obtained by extending the size of the area. In such cases, the property of consistency may be limited to one or more of the possible ways a sample size can grow.

A consistent estimator is one for which, when the estimate is considered as a random variable indexed by the number n of items in the data set, the estimates converge in probability, as n increases, to the value that the estimator is designed to estimate. An estimator that has Fisher consistency is one for which, if the estimator were applied to the entire population rather than a sample, the true value of the estimated parameter would be obtained.
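The convergence-in-probability property of a consistent estimator can be illustrated with a minimal simulation. This is an illustrative sketch, not part of the article: the sample mean is used as the estimator, and the Normal(2, 1) distribution, sample sizes, and seed are arbitrary choices.

```python
import random

def sample_mean(n, mu=2.0, sigma=1.0, seed=0):
    """Draw n i.i.d. Normal(mu, sigma) values and return their mean,
    i.e. the sample-mean estimator applied to a data set of size n."""
    rng = random.Random(seed)
    return sum(rng.gauss(mu, sigma) for _ in range(n)) / n

# As n grows, the estimate concentrates around the true value mu = 2.0,
# which is what consistency requires.
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```

The standard error of the sample mean shrinks like 1/sqrt(n), so the printed estimates cluster ever more tightly around 2.0 as n increases.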
A consistent test is one for which the power of the test for a fixed untrue hypothesis increases to one as the number of data items increases.

In statistical classification, a consistent classifier is one for which the probability of correct classification, given a training set, approaches, as the size of the training set increases, the best probability theoretically possible if the population distributions were fully known.

Let b be a vector and define its support supp(b) = {i : b_i ≠ 0}, where b_i is the i-th element of b. Let b̂ be an estimator for b. Then sparsistency is the property that the support of the estimator converges to the true support as the number of samples grows to infinity. More formally, P(supp(b̂) = supp(b)) → 1 as n → ∞.
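Sparsistency can be sketched with a toy estimator. This example is not from the article: it assumes each coordinate of b is observed n times with additive Gaussian noise, estimates b̂ by per-coordinate averaging, and recovers the support by thresholding small estimates to zero; the vector, threshold, and noise model are illustrative choices.

```python
import random

def support(b, tol=1e-12):
    """supp(b) = {i : b_i != 0}, up to a numerical tolerance."""
    return {i for i, v in enumerate(b) if abs(v) > tol}

def estimated_support(b_true, n, threshold=0.5, seed=0):
    """Estimate each coordinate of b by the mean of n noisy observations,
    then report the indices whose estimate exceeds the threshold."""
    rng = random.Random(seed)
    b_hat = [sum(v + rng.gauss(0.0, 1.0) for _ in range(n)) / n
             for v in b_true]
    return {i for i, v in enumerate(b_hat) if abs(v) > threshold}

b = [3.0, 0.0, -2.0, 0.0, 0.0]   # true support is {0, 2}
# With enough samples the noise averages out, so the estimated support
# coincides with supp(b) with probability approaching one.
print(estimated_support(b, n=1000))  # → {0, 2}
```

For small n the noisy averages can cross the threshold on zero coordinates (or fall below it on nonzero ones); as n grows, the per-coordinate noise shrinks like 1/sqrt(n) and the event supp(b̂) = supp(b) becomes almost certain, which is exactly the limit statement above.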
