
Delta method

In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator.

The delta method was derived from propagation of error, and the idea behind it was known in the early 19th century. Its statistical application can be traced as far back as 1928 by T. L. Kelley. A formal description of the method was presented by J. L. Doob in 1935. Robert Dorfman also described a version of it in 1938.

While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables X_n satisfying

\sqrt{n}\,[X_n - \theta] \xrightarrow{D} N(0, \sigma^2),

where θ and σ² are finite-valued constants and \xrightarrow{D} denotes convergence in distribution, then

\sqrt{n}\,[g(X_n) - g(\theta)] \xrightarrow{D} N(0, \sigma^2 [g'(\theta)]^2)

for any function g satisfying the property that g′(θ) exists and is non-zero valued.

Demonstration of this result is fairly straightforward under the assumption that g′(θ) is continuous. To begin, we use the mean value theorem (i.e., the first-order approximation of a Taylor series using Taylor's theorem):

g(X_n) = g(\theta) + g'(\tilde{\theta})\,(X_n - \theta),

where \tilde{\theta} lies between X_n and θ. Note that since X_n \xrightarrow{P} \theta and \tilde{\theta} lies between X_n and θ, it must be that \tilde{\theta} \xrightarrow{P} \theta, and since g′(θ) is continuous, applying the continuous mapping theorem yields

g'(\tilde{\theta}) \xrightarrow{P} g'(\theta),

where \xrightarrow{P} denotes convergence in probability. Rearranging the terms and multiplying by \sqrt{n} gives

\sqrt{n}\,[g(X_n) - g(\theta)] = g'(\tilde{\theta})\,\sqrt{n}\,[X_n - \theta].

Since \sqrt{n}\,[X_n - \theta] \xrightarrow{D} N(0, \sigma^2) by assumption, Slutsky's theorem then yields the stated result,

\sqrt{n}\,[g(X_n) - g(\theta)] \xrightarrow{D} N(0, \sigma^2 [g'(\theta)]^2).
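Because the result above is an asymptotic approximation, a quick Monte Carlo check can make it concrete. The sketch below (Python with NumPy; not part of the original text) takes X_n to be the sample mean of n exponential draws and g(x) = x² as an illustrative assumption, and compares the empirical variance of g(X_n) with the delta-method prediction σ²[g′(θ)]²/n.

```python
import numpy as np

# Monte Carlo check of the univariate delta method (a minimal sketch).
# Assumed setup: X_n is the sample mean of n exponential(rate=1) draws,
# so theta = 1 and sigma^2 = 1, and g(x) = x**2, so g'(theta) = 2.
# The delta method predicts Var[g(X_n)] ~ sigma^2 * g'(theta)**2 / n.

rng = np.random.default_rng(0)

n = 500          # sample size per replication
reps = 20_000    # number of Monte Carlo replications
theta, sigma2 = 1.0, 1.0

# Draw `reps` independent samples of size n and form the sample means X_n.
samples = rng.exponential(scale=1.0, size=(reps, n))
x_bar = samples.mean(axis=1)

g = lambda x: x ** 2           # the transformation g
g_prime_theta = 2.0 * theta    # g'(theta)

empirical_var = np.var(g(x_bar))             # observed variance of g(X_n)
delta_var = sigma2 * g_prime_theta ** 2 / n  # delta-method approximation

print(f"empirical Var[g(X_n)]      : {empirical_var:.6f}")
print(f"delta-method approximation : {delta_var:.6f}")
```

For n = 500 the two numbers should agree to within Monte Carlo error; shrinking n shows the approximation degrading, which is the usual caveat with a first-order expansion.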

[ "Estimator" ]
Parent Topic
Child Topic
    No Parent Topic