Bayesian linear regression

In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters.

Consider a standard linear regression problem, in which for $i = 1, \ldots, n$ we specify the mean of the conditional distribution of $y_i$ given a $k \times 1$ predictor vector $\mathbf{x}_i$:

$$y_i = \mathbf{x}_i^{\mathsf{T}} \boldsymbol{\beta} + \varepsilon_i,$$

where $\boldsymbol{\beta}$ is a $k \times 1$ vector, and the $\varepsilon_i$ are independent and identically normally distributed random variables:

$$\varepsilon_i \sim N(0, \sigma^2).$$

This corresponds to the following likelihood function:

$$\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\beta}, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}} (\mathbf{y} - \mathbf{X}\boldsymbol{\beta})\right).$$

The ordinary least squares solution is used to estimate the coefficient vector using the Moore-Penrose pseudoinverse:

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}} \mathbf{X})^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y},$$

where $\mathbf{X}$ is the $n \times k$ design matrix, each row of which is a predictor vector $\mathbf{x}_i^{\mathsf{T}}$, and $\mathbf{y}$ is the column $n$-vector $[y_1 \, \cdots \, y_n]^{\mathsf{T}}$. This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about $\boldsymbol{\beta}$.

In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters $\boldsymbol{\beta}$ and $\sigma$. The prior can take different functional forms depending on the domain and the information that is available a priori.

For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically. A prior $\rho(\boldsymbol{\beta}, \sigma^2)$ is conjugate to this likelihood function if it has the same functional form with respect to $\boldsymbol{\beta}$ and $\sigma$. Since the log-likelihood is quadratic in $\boldsymbol{\beta}$, the log-likelihood is re-written such that the likelihood becomes normal in $(\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})$. Write

$$(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^{\mathsf{T}} (\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})^{\mathsf{T}} (\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}) + (\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})^{\mathsf{T}} \mathbf{X}^{\mathsf{T}} \mathbf{X} (\boldsymbol{\beta} - \hat{\boldsymbol{\beta}}),$$

which holds because the OLS residual $\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}$ is orthogonal to the columns of $\mathbf{X}$, so the cross term vanishes.
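As a concrete illustration of the ordinary least squares estimate above, the following minimal Python sketch computes $\hat{\boldsymbol{\beta}}$ via the Moore-Penrose pseudoinverse. The data here are synthetic, invented purely for this example.

```python
import numpy as np

# Synthetic data (assumed for illustration): n observations, k predictors.
rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))                       # n x k design matrix, rows are x_i^T
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)  # epsilon_i ~ N(0, 0.3^2)

# Frequentist point estimate via the Moore-Penrose pseudoinverse:
# beta_hat = (X^T X)^{-1} X^T y when X has full column rank.
beta_hat = np.linalg.pinv(X) @ y
print(beta_hat)  # close to beta_true when there are enough measurements
```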
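The quadratic decomposition that closes the derivation above can be checked numerically. This sketch verifies, for an arbitrary coefficient vector $\boldsymbol{\beta}$, that the sum of squares splits into the OLS residual sum of squares plus the quadratic form in $(\boldsymbol{\beta} - \hat{\boldsymbol{\beta}})$; all data and variable names are invented for the check.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, 0.0, -1.0]) + rng.normal(scale=0.5, size=n)

beta_hat = np.linalg.pinv(X) @ y   # OLS estimate
beta = rng.normal(size=k)          # an arbitrary coefficient vector

def ssq(v):
    """Sum of squares v^T v."""
    return float(v @ v)

# Left-hand side: (y - X beta)^T (y - X beta).
lhs = ssq(y - X @ beta)
# Right-hand side: residual sum of squares plus the quadratic form
# (beta - beta_hat)^T X^T X (beta - beta_hat); the cross term vanishes
# because the OLS residual is orthogonal to the columns of X.
rhs = ssq(y - X @ beta_hat) + (beta - beta_hat) @ X.T @ X @ (beta - beta_hat)
assert np.isclose(lhs, rhs)
```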
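The text above does not reproduce the resulting posterior hyperparameter formulas, so the following sketch should be read as an assumption: it implements the standard normal-inverse-gamma conjugate update for this model, with a hypothetical helper name `conjugate_posterior` and invented example data.

```python
import numpy as np

def conjugate_posterior(X, y, mu0, Lambda0, a0, b0):
    """Standard conjugate update for Bayesian linear regression under the prior
    beta | sigma^2 ~ N(mu0, sigma^2 Lambda0^{-1}), sigma^2 ~ Inv-Gamma(a0, b0).
    Returns the posterior hyperparameters (mu_n, Lambda_n, a_n, b_n)."""
    n = X.shape[0]
    Lambda_n = X.T @ X + Lambda0                       # posterior precision (scaled by sigma^2)
    mu_n = np.linalg.solve(Lambda_n, Lambda0 @ mu0 + X.T @ y)
    a_n = a0 + n / 2.0
    # b_n absorbs the residual sum of squares and the prior-to-posterior shift in the mean.
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)
    return mu_n, Lambda_n, a_n, b_n

# Hypothetical usage with a weakly informative prior centred at zero.
rng = np.random.default_rng(2)
n, k = 100, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=n)

mu_n, Lambda_n, a_n, b_n = conjugate_posterior(
    X, y, mu0=np.zeros(k), Lambda0=np.eye(k), a0=1.0, b0=1.0)
print(mu_n)             # posterior mean of beta
print(b_n / (a_n - 1))  # posterior mean of sigma^2 (valid since a_n > 1)
```

Note how the posterior mean `mu_n` interpolates between the prior mean and the OLS estimate: as the prior precision `Lambda0` shrinks toward zero, `mu_n` approaches $\hat{\boldsymbol{\beta}}$.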

[ "Bayesian inference", "Bayesian vector autoregression", "Bayesian econometrics", "Categorical distribution", "Asymmetric Laplace distribution", "General matrix notation of a VAR" ]