
Gaussian process

In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space) such that every finite collection of those random variables has a multivariate normal distribution, i.e. every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g. time or space.

A machine-learning algorithm that involves a Gaussian process uses lazy learning and a measure of the similarity between points (the kernel function) to predict the value for an unseen point from training data. The prediction is not just an estimate for that point, but also carries uncertainty information: it is a one-dimensional Gaussian distribution (the marginal distribution at that point). For some kernel functions, matrix algebra can be used to calculate the predictions using the technique of kriging. When a parameterised kernel is used, optimisation software is typically used to fit a Gaussian process model.

The concept of Gaussian processes is named after Carl Friedrich Gauss because it is based on the notion of the Gaussian distribution (normal distribution). Gaussian processes can be seen as an infinite-dimensional generalization of multivariate normal distributions.

Gaussian processes are useful in statistical modelling, benefiting from properties inherited from the normal distribution. For example, if a random process is modelled as a Gaussian process, the distributions of various derived quantities can be obtained explicitly. Such quantities include the average value of the process over a range of times and the error in estimating the average using sample values at a small set of times.

A time-continuous stochastic process $\{X_t; t \in T\}$ is Gaussian if and only if, for every finite set of indices $t_1, \ldots, t_k$ in the index set $T$, the vector $(X_{t_1}, \ldots, X_{t_k})$ is a multivariate Gaussian random variable. That is the same as saying that every linear combination of $(X_{t_1}, \ldots, X_{t_k})$ has a univariate normal (or Gaussian) distribution.

Using characteristic functions of random variables, the Gaussian property can be formulated as follows: $\{X_t; t \in T\}$ is Gaussian if and only if, for every finite set of indices $t_1, \ldots, t_k$, there are real numbers $\sigma_{\ell j}$ and $\mu_\ell$ with $\sigma_{jj} > 0$ such that the following equality holds for all $s_1, s_2, \ldots, s_k \in \mathbb{R}$:

$$\operatorname{E}\!\left[\exp\!\left(i \sum_{\ell=1}^{k} s_\ell \, X_{t_\ell}\right)\right] = \exp\!\left(-\tfrac{1}{2} \sum_{\ell, j} \sigma_{\ell j} s_\ell s_j + i \sum_{\ell} \mu_\ell s_\ell\right),$$

where $i$ denotes the imaginary unit such that $i^2 = -1$.
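The finite-dimensional definition above can be made concrete with a small numerical sketch: pick a finite set of index points, build the covariance matrix from a kernel, and draw from the resulting multivariate normal distribution. The zero mean, the squared-exponential (RBF) kernel, and the jitter term are illustrative assumptions, not part of the source text.

```python
# Minimal sketch: any finite collection (X_{t_1}, ..., X_{t_k}) of a Gaussian
# process is jointly multivariate normal. Assumes a zero-mean process with an
# RBF covariance; these choices are for illustration only.
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance k(x1, x2)."""
    sq_dist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / length_scale**2)

# Finite set of indices t_1, ..., t_k drawn from a continuous index set.
t = np.linspace(0.0, 5.0, 50)

# Covariance matrix of (X_{t_1}, ..., X_{t_k}); a small jitter keeps it
# numerically positive definite.
K = rbf_kernel(t, t) + 1e-10 * np.eye(len(t))

# Each row is one sample of the process evaluated at the chosen indices,
# i.e. one draw from the k-dimensional Gaussian.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=np.zeros(len(t)), cov=K, size=3)
print(samples.shape)  # (3, 50)
```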
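The kernel-based prediction described earlier (the kriging formulas) also reduces to matrix algebra: the posterior at each test point is a one-dimensional Gaussian with closed-form mean and variance. The sketch below assumes an RBF kernel, a fixed observation-noise variance, and toy training data; none of these specifics come from the source.

```python
# Minimal sketch of Gaussian process regression: posterior mean and marginal
# variance at test points, computed with a Cholesky factorisation rather than
# an explicit matrix inverse. Kernel, noise level and data are assumptions.
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    sq_dist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / length_scale**2)

# Toy training data: noisy observations of an unknown function.
x_train = np.array([0.5, 1.5, 3.0, 4.0])
y_train = np.sin(x_train)
noise_var = 1e-2

# Test points where predictions (with uncertainty) are wanted.
x_test = np.linspace(0.0, 5.0, 100)

K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
K_s = rbf_kernel(x_train, x_test)   # cross-covariance, shape (n_train, n_test)
K_ss = rbf_kernel(x_test, x_test)

# Cholesky solve: alpha = (K + noise)^{-1} y
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

# Posterior mean and marginal variance at each test point.
mean = K_s.T @ alpha
v = np.linalg.solve(L, K_s)
var = np.diag(K_ss) - np.sum(v**2, axis=0)

print(mean[:3], var[:3])
```

In practice the kernel's parameters (length scale, signal variance, noise variance) are not fixed by hand as above but fitted with optimisation software, typically by maximising the marginal likelihood of the training data, which is what the text means by fitting a parameterised kernel.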

[ "Gaussian", "Expectation propagation", "Bayesian optimization", "gaussian scale mixtures", "Gaussian random field", "gaussian process latent variable model" ]