
Quasi-maximum likelihood

A quasi-maximum likelihood estimate (QMLE, also known as a pseudo-likelihood estimate or a composite likelihood estimate) is an estimate of a parameter θ in a statistical model that is formed by maximizing a function related to the logarithm of the likelihood function, while allowing, in discussing consistency and the (asymptotic) variance-covariance matrix, that some parts of the distribution may be misspecified. In contrast, the maximum likelihood estimate maximizes the actual log-likelihood function for the data and model. The function that is maximized to form a QMLE is often a simplified form of the actual log-likelihood function. A common way to form such a simplified function is to use the log-likelihood function of a misspecified model that treats certain data values as independent, even when in actuality they may not be.
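As a minimal illustration of this idea, the sketch below (with hypothetical example values for the mean, autocorrelation, and sample size) generates serially dependent AR(1) data and then estimates the mean by maximizing the log-likelihood of a deliberately misspecified model that treats the observations as i.i.d. normal. Ignoring the dependence still yields a consistent estimate of the mean, which is the essence of a QMLE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: observations are serially dependent,
# with stationary mean mu = 5.0 (hypothetical example values).
n, mu, rho = 20_000, 5.0, 0.6
eps = rng.normal(size=n)
y = np.empty(n)
y[0] = mu + eps[0]
for t in range(1, n):
    y[t] = mu + rho * (y[t - 1] - mu) + eps[t]

# Quasi-log-likelihood: treat the y_t as i.i.d. normal with fixed variance,
# deliberately ignoring the serial dependence (a misspecified model).
def quasi_loglik(m, data):
    return -0.5 * np.sum((data - m) ** 2)  # i.i.d. normal log-lik, up to constants

# Maximizing over m gives (up to grid resolution) the sample mean -- the QMLE.
grid = np.linspace(4.0, 6.0, 2001)
qmle = grid[np.argmax([quasi_loglik(m, y) for m in grid])]
print(qmle)  # close to mu = 5.0 despite the ignored dependence
```

Note that while the point estimate is consistent, the naive i.i.d. standard errors would be wrong here; valid inference for a QMLE uses a robust (sandwich) variance estimator.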
This removes from the model any parameters that are used to characterize these dependencies. Doing so only makes sense if the dependency structure is a nuisance parameter with respect to the goals of the analysis. As long as the quasi-likelihood function that is maximized is not oversimplified, the QMLE (or composite likelihood estimate) is consistent and asymptotically normal. It is less efficient than the maximum likelihood estimate, but may be only slightly less efficient if the quasi-likelihood is constructed so as to minimize the loss of information relative to the actual likelihood. Standard approaches to statistical inference used with maximum likelihood estimates, such as the formation of confidence intervals and statistics for model comparison, can be generalized to the quasi-maximum likelihood setting.

Pooled QMLE

Pooled QMLE is a technique that allows estimating parameters when panel data are available with Poisson outcomes. For instance, one might have information on the number of patents filed by a number of different firms over time. Pooled QMLE does not necessarily contain unobserved effects (which can be either random effects or fixed effects), and the estimation method is mainly proposed for these settings. Its computational requirements are less stringent, especially compared to fixed-effect Poisson models, but the trade-off is the possibly strong assumption of no unobserved heterogeneity. "Pooled" refers to pooling the data over the different time periods T, while QMLE refers to the quasi-maximum likelihood technique.

The Poisson distribution of y_i given x_i is specified as follows:

f(y_i | x_i) = e^{−μ_i} μ_i^{y_i} / y_i!

The starting point for Poisson pooled QMLE is the conditional mean assumption.
Specifically, we assume that for some b_0 in a compact parameter space B, the conditional mean is given by

E[y_t | x_t] = m(x_t, b_0) = μ_t   for t = 1, ..., T.

The compact parameter space condition is imposed to enable the use of M-estimation techniques, while the conditional mean assumption reflects the fact that the population mean of a Poisson process is the parameter of interest. In this particular case, the parameter governing the Poisson process is allowed to vary with the vector x_t. The function m(·) can, in principle, change over time, even though it is often specified as static over time. Note that only the conditional mean function is specified, and we will get consistent estimates of b_0 as long as this mean condition is correctly specified. This leads to the following quasi-log-likelihood for observation i in the pooled Poisson estimation:

ℓ_i(b) = Σ_{t=1}^{T} [ y_{it} log m(x_{it}, b) − m(x_{it}, b) ]

The pooled QMLE maximizes the sum of ℓ_i(b) over i; the first-order condition sets the derivative of this sum with respect to b equal to zero.
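The pooled Poisson QMLE above can be sketched numerically. The example below uses hypothetical values (the exponential mean function m(x, b) = exp(x'b), invented true parameters, and a simulated panel) and maximizes the pooled quasi-log-likelihood by Newton's method on its first-order condition. The outcomes are drawn overdispersed, so the Poisson distribution is misspecified, but the conditional mean is correct, and the estimate remains consistent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pooled panel: N firms, T periods, intercept plus one regressor.
N, T = 500, 4
X = np.column_stack([np.ones(N * T), rng.normal(size=N * T)])
b_true = np.array([0.5, 0.3])  # assumed true parameters for this simulation

# Draw outcomes from an overdispersed (non-Poisson) distribution whose
# conditional mean is still mu = exp(x'b): only the mean is correctly specified.
mu = np.exp(X @ b_true)
y = rng.poisson(mu * rng.gamma(2.0, 0.5, size=N * T))  # gamma mixing, E[y|x] = mu

# Maximize the pooled Poisson quasi-log-likelihood
#   l(b) = sum_{i,t} [ y_it * log m(x_it, b) - m(x_it, b) ],  m = exp(x'b),
# via Newton's method on the first-order condition X'(y - m) = 0.
b = np.zeros(2)
for _ in range(50):
    m = np.exp(X @ b)
    step = np.linalg.solve(X.T @ (m[:, None] * X), X.T @ (y - m))
    b += step
    if np.max(np.abs(step)) < 1e-10:
        break

print(b)  # close to b_true even though y is not Poisson-distributed
```

Because only the conditional mean is trusted, standard errors for b should come from a robust (sandwich) variance estimator rather than the usual Poisson information matrix.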

Related topics: Likelihood function, Expectation–maximization algorithm, Maximum likelihood sequence estimation, Likelihood principle, Likelihood equation, German tank problem, Maximum spacing estimation, Conditionality principle