
Covariance matrix

In probability theory and statistics, a covariance matrix (also known as an auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix whose element in the $(i, j)$ position is the covariance between the $i$-th and $j$-th elements of a random vector. A random vector is a random variable with multiple dimensions; each of its elements is a scalar random variable with either a finite number of observed empirical values or a finite or infinite number of potential values, the latter specified by a theoretical joint probability distribution.

Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. For example, the variation in a collection of random points in two-dimensional space cannot be fully characterized by a single number, nor would the variances in the $x$ and $y$ directions alone contain all of the necessary information; a $2 \times 2$ matrix is needed to characterize the two-dimensional variation fully.

Because the covariance of the $i$-th random variable with itself is simply that random variable's variance, each element on the principal diagonal of the covariance matrix is the variance of one of the random variables. Because the covariance of the $i$-th random variable with the $j$-th one equals the covariance of the $j$-th random variable with the $i$-th, every covariance matrix is symmetric. Every covariance matrix is also positive semi-definite. The covariance matrix of a random vector $\mathbf{X}$ is typically denoted by $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ or $\Sigma$.
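The three properties above (variances on the diagonal, symmetry, and positive semi-definiteness) can be checked numerically. A minimal sketch with NumPy, using hypothetical correlated 2-D data:

```python
import numpy as np

# Hypothetical sample: 1000 correlated points in two-dimensional space.
rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])

# 2x2 sample covariance matrix (rows of `points` are observations).
K = np.cov(points, rowvar=False)

# Each diagonal entry is the variance of one coordinate.
assert np.allclose(np.diag(K), points.var(axis=0, ddof=1))

# cov(X_i, X_j) = cov(X_j, X_i), so the matrix is symmetric.
assert np.allclose(K, K.T)

# Positive semi-definite: all eigenvalues are nonnegative (up to rounding).
assert np.all(np.linalg.eigvalsh(K) >= -1e-12)
```

Note that a single $2 \times 2$ matrix captures both the spread along each axis and how the two coordinates vary together, which no single scalar could.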
Throughout this article, boldfaced unsubscripted $\mathbf{X}$ and $\mathbf{Y}$ are used to refer to random vectors, and unboldfaced subscripted $X_i$ and $Y_i$ are used to refer to scalar random variables. If the entries in the column vector $\mathbf{X} = (X_1, \dots, X_n)^{\mathsf{T}}$ are random variables, each with finite variance and expected value, then the covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is the matrix whose $(i, j)$ entry is the covariance
$$\operatorname{K}_{X_i X_j} = \operatorname{cov}(X_i, X_j) = \operatorname{E}\bigl[(X_i - \operatorname{E}[X_i])(X_j - \operatorname{E}[X_j])\bigr],$$
where the operator $\operatorname{E}$ denotes the expected value (mean) of its argument.
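The entrywise definition can be computed directly from sample averages. A minimal sketch, using hypothetical draws of a 3-dimensional random vector and estimating each expectation $\operatorname{E}[\cdot]$ by the sample mean:

```python
import numpy as np

# Hypothetical data: 5000 draws of a 3-D random vector (rows are draws,
# columns are the scalar components X_1, X_2, X_3).
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))

mean = X.mean(axis=0)          # estimate of E[X], componentwise
centered = X - mean            # X_i - E[X_i] for each component

# K[i, j] = E[(X_i - E[X_i]) (X_j - E[X_j])], estimated by averaging.
n_dims = X.shape[1]
K = np.zeros((n_dims, n_dims))
for i in range(n_dims):
    for j in range(n_dims):
        K[i, j] = np.mean(centered[:, i] * centered[:, j])

# This matches NumPy's population covariance (bias=True divides by n,
# consistent with the plain sample means used above).
assert np.allclose(K, np.cov(X, rowvar=False, bias=True))
```

The double loop mirrors the $(i, j)$ indexing of the definition; in practice the whole matrix is computed at once as `centered.T @ centered / n`.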
