Heteroscedasticity

In statistics, a collection of random variables is heteroscedastic (or heteroskedastic; from Ancient Greek hetero "different" and skedasis "dispersion") if there are sub-populations that have different variabilities from others. Here 'variability' could be quantified by the variance or any other measure of statistical dispersion. Thus heteroscedasticity is the absence of homoscedasticity.

The existence of heteroscedasticity is a major concern in the application of regression analysis, including the analysis of variance, as it can invalidate statistical tests of significance that assume that the modelling errors are uncorrelated and uniform, and hence that their variances do not vary with the effects being modeled. For instance, while the ordinary least squares estimator is still unbiased in the presence of heteroscedasticity, it is inefficient, and the usual estimates of its variance and covariance are distorted. Similarly, in testing for differences between sub-populations using a location test, some standard tests assume that variances within groups are equal. Because heteroscedasticity concerns expectations of the second moment of the errors, its presence is referred to as misspecification of the second order.

Suppose there is a sequence of random variables {Y_t}_{t=1}^{n} and a sequence of vectors of random variables {X_t}_{t=1}^{n}. In dealing with conditional expectations of Y_t given X_t, the sequence {Y_t}_{t=1}^{n} is said to be heteroscedastic if the conditional variance of Y_t given X_t changes with t. Some authors refer to this as conditional heteroscedasticity to emphasize that it is the sequence of conditional variances that changes, not the unconditional variance. In fact, it is possible to observe conditional heteroscedasticity even in a sequence of unconditionally homoscedastic random variables; the opposite does not hold. If the variance changes only because of changes in the value of X, and not because of a dependence on the index t, the changing variance can be described by a scedastic function.

When using some statistical techniques, such as ordinary least squares (OLS), a number of assumptions are typically made. One of these is that the error term has a constant variance. This might not be true even if the error terms are assumed to be drawn from identical distributions. For example, the variance of the error term could grow with each observation, something that is often the case with cross-sectional or time series measurements. Heteroscedasticity is often studied as part of econometrics, which frequently deals with data exhibiting it. While the influential 1980 paper by Halbert White used the term 'heteroskedasticity' rather than 'heteroscedasticity', the latter spelling has been employed more frequently in later works. The econometrician Robert Engle was awarded the 2003 Nobel Memorial Prize in Economic Sciences for his studies on regression analysis in the presence of heteroscedasticity, which led to his formulation of the autoregressive conditional heteroscedasticity (ARCH) modelling technique.
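To make the scedastic-function idea concrete, here is a minimal Python sketch (not taken from any source discussed here) that simulates a linear model in which the conditional standard deviation of the error grows with the regressor x. The coefficients, the variance function, and the cut-off used to compare the two halves of the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
x = rng.uniform(0.0, 10.0, size=n)

# Scedastic function (assumed for illustration): the error standard deviation
# depends on the value of x, not on the observation index t.
sigma = 0.5 + 0.5 * x
errors = rng.normal(0.0, sigma)

beta0, beta1 = 1.0, 2.0          # assumed true coefficients
y = beta0 + beta1 * x + errors

# With homoscedastic errors the spread of the residuals would be roughly the
# same everywhere; here it clearly grows with x.
residuals = y - (beta0 + beta1 * x)
print("residual std for x < 5 :", residuals[x < 5.0].std())
print("residual std for x >= 5:", residuals[x >= 5.0].std())
```

Running the sketch shows a markedly larger residual spread in the upper half of the x range, which is exactly the pattern a constant-variance assumption rules out.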
One of the assumptions of the classical linear regression model is that there is no heteroscedasticity. Breaking this assumption means that the Gauss–Markov theorem does not apply: the OLS estimators are no longer the Best Linear Unbiased Estimators (BLUE), because their variance is not the lowest among all linear unbiased estimators. Heteroscedasticity does not cause ordinary least squares coefficient estimates to be biased, but it can cause ordinary least squares estimates of the variance (and thus the standard errors) of the coefficients to be biased, possibly above or below the true population variance. Thus, regression analysis using heteroscedastic data will still provide an unbiased estimate of the relationship between the predictor variable and the outcome, but the standard errors, and therefore the inferences drawn from the analysis, are suspect. Biased standard errors lead to biased inference, so the results of hypothesis tests may be wrong. For example, if OLS is performed on a heteroscedastic data set, yielding biased standard error estimates, a researcher might fail to reject a null hypothesis at a given significance level even though that null hypothesis does not hold in the actual population (a type II error).

Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data do not come from a normal distribution). This result is used to justify using a normal distribution, or a chi-squared distribution (depending on how the test statistic is calculated), when conducting a hypothesis test, and it holds even under heteroscedasticity. More precisely, the OLS estimator in the presence of heteroscedasticity is asymptotically normal, when properly normalized and centered, with a variance-covariance matrix that differs from the homoscedastic case. In 1980, White proposed a consistent estimator for the variance-covariance matrix of the asymptotic distribution of the OLS estimator. This validates hypothesis testing using OLS estimators together with White's variance-covariance estimator under heteroscedasticity.
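As an illustration of White's approach, the following sketch (an assumed example, not code from White's paper) fits OLS on simulated heteroscedastic data and compares the classical standard errors with heteroscedasticity-consistent (HC0) standard errors computed from the sandwich formula (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}. The data-generating process is the same illustrative one as above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(0.0, 10.0, size=n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + 0.5 * x)   # heteroscedastic errors

X = np.column_stack([np.ones(n), x])    # design matrix with an intercept
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y            # OLS coefficient estimates
resid = y - X @ beta_hat

# Classical OLS covariance s^2 (X'X)^{-1}: correct only under homoscedasticity.
s2 = resid @ resid / (n - X.shape[1])
se_classical = np.sqrt(np.diag(s2 * XtX_inv))

# White (HC0) sandwich covariance: consistent under heteroscedasticity of
# unknown form.
meat = X.T @ (X * resid[:, None] ** 2)
cov_white = XtX_inv @ meat @ XtX_inv
se_white = np.sqrt(np.diag(cov_white))

print("OLS coefficients:       ", beta_hat)
print("classical std. errors:  ", se_classical)
print("White (HC0) std. errors:", se_white)
```

The same robust covariance is available in common statistics libraries, for example via statsmodels' OLS results fitted with cov_type='HC0', which can serve as a cross-check on the manual computation.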

[ "Statistics", "Financial economics", "Econometrics", "Machine learning", "Homoscedasticity", "Heteroscedasticity-consistent standard errors", "heteroscedastic model", "Park test", "heteroscedastic regression" ]