
Covariance

In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (i.e., the variables tend to show similar behavior), the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other (i.e., the variables tend to show opposite behavior), the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret because it is not normalized and hence depends on the magnitudes of the variables. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation.

A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which, in addition to serving as a descriptor of the sample, also serves as an estimated value of the population parameter.

The covariance between two jointly distributed real-valued random variables $X$ and $Y$ with finite second moments is defined as the expected product of their deviations from their individual expected values (p. 119):

$$\operatorname{cov}(X, Y) = \operatorname{E}\bigl[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\bigr],$$     (Eq. 1)

where $\operatorname{E}[X]$ is the expected value of $X$, also known as the mean of $X$. The covariance is also sometimes denoted $\sigma_{XY}$ or $\sigma(X, Y)$, in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:

$$\operatorname{cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y].$$

For jointly distributed random vectors $\mathbf{X}$ and $\mathbf{Y}$, the cross-covariance matrix is defined analogously:

$$K_{\mathbf{X}\mathbf{Y}} = \operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{E}\bigl[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^{\mathrm{T}}\bigr] = \operatorname{E}[\mathbf{X}\mathbf{Y}^{\mathrm{T}}] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^{\mathrm{T}}.$$     (Eq. 2)

The units of measurement of the covariance $\operatorname{cov}(X, Y)$ are those of $X$ times those of $Y$. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
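As a concrete check on the definition and its linearity-based shortcut, the following sketch computes a sample covariance (population-style, dividing by $n$) both ways; the data values are illustrative assumptions, not taken from the article.

```python
# Sample covariance computed two ways; the data below is an illustrative assumption.

def mean(values):
    return sum(values) / len(values)

def cov_definition(xs, ys):
    # E[(X - E[X])(Y - E[Y])]: mean of products of deviations
    mx, my = mean(xs), mean(ys)
    return mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

def cov_shortcut(xs, ys):
    # E[XY] - E[X] E[Y]: the simplified form from linearity of expectation
    return mean([x * y for x, y in zip(xs, ys)]) - mean(xs) * mean(ys)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # increases with xs, so covariance is positive

print(cov_definition(xs, ys))  # 2.5
print(cov_shortcut(xs, ys))    # 2.5
```

Both forms agree, and the positive sign reflects that larger values of `xs` pair with larger values of `ys`.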
The covariance between two complex random variables $Z, W$ is defined as (p. 119):

$$\operatorname{cov}(Z, W) = \operatorname{E}\bigl[(Z - \operatorname{E}[Z])\overline{(W - \operatorname{E}[W])}\bigr] = \operatorname{E}[Z\overline{W}] - \operatorname{E}[Z]\operatorname{E}[\overline{W}],$$

where $\overline{W}$ denotes the complex conjugate of $W$. The conjugation ensures that $\operatorname{cov}(Z, Z)$ is real and nonnegative, since it equals $\operatorname{E}\bigl[|Z - \operatorname{E}[Z]|^{2}\bigr]$.
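The complex case can be sketched the same way; note the conjugate on the second factor, which makes the covariance of a variable with itself real and nonnegative. The sample values here are illustrative assumptions.

```python
# Covariance of complex random variables: E[(Z - E[Z]) * conj(W - E[W])].
# The sample values are illustrative assumptions.

def mean(values):
    return sum(values) / len(values)

def complex_cov(zs, ws):
    mz, mw = mean(zs), mean(ws)
    return mean([(z - mz) * (w - mw).conjugate() for z, w in zip(zs, ws)])

zs = [1 + 1j, 2 - 1j, 3 + 0j]
ws = [2 + 0j, 1 + 1j, 0 - 1j]

c = complex_cov(zs, ws)   # generally a complex number
v = complex_cov(zs, zs)   # real and nonnegative: E[|Z - E[Z]|^2]
print(c, v)
```

Without the conjugate, `complex_cov(zs, zs)` could come out complex, losing the interpretation of a variance.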
