Importance sampling

In statistics, importance sampling is a general technique for estimating properties of a particular distribution while only having samples generated from a distribution different from the distribution of interest. It is related to umbrella sampling in computational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, to the process of inference, or to both.

Let $X:\Omega\to\mathbb{R}$ be a random variable in some probability space $(\Omega,\mathcal{F},P)$. We wish to estimate the expected value of $X$ under $P$, denoted $\mathbf{E}[X;P]$. If we have statistically independent random samples $x_1,\ldots,x_n$ generated according to $P$, then an empirical estimate of $\mathbf{E}[X;P]$ is

$$\widehat{\mathbf{E}}_{n}[X;P]=\frac{1}{n}\sum_{i=1}^{n}x_i,$$

and the precision of this estimate depends on the variance of $X$:

$$\operatorname{var}\big[\widehat{\mathbf{E}}_{n};P\big]=\frac{\operatorname{var}[X;P]}{n}.$$

The basic idea of importance sampling is to sample the states from a different distribution in order to lower the variance of the estimate of $\mathbf{E}[X;P]$, or because sampling from $P$ is difficult. This is accomplished by first choosing a random variable $L\geq 0$ such that $\mathbf{E}[L;P]=1$ and $L(\omega)\neq 0$ $P$-almost everywhere. With the variate $L$ we define a probability measure $P^{(L)}$ that satisfies

$$\mathbf{E}[X;P]=\mathbf{E}\!\left[\frac{X}{L};P^{(L)}\right].$$

The variable $X/L$ is thus sampled under $P^{(L)}$ to estimate $\mathbf{E}[X;P]$ as above, and the estimate is improved when

$$\operatorname{var}\!\left[\frac{X}{L};P^{(L)}\right]<\operatorname{var}[X;P].$$

When $X$ is of constant sign over $\Omega$, the best variable $L$ would clearly be $L^*=\frac{X}{\mathbf{E}[X;P]}\geq 0$, so that $X/L^*$ is the sought constant $\mathbf{E}[X;P]$ and a single sample under $P^{(L^*)}$ suffices to give its value. Unfortunately we cannot make that choice, because $\mathbf{E}[X;P]$ is precisely the value we are looking for! However, this theoretical best case $L^*$ gives an insight into what importance sampling does. The quantity $a\,P(X\in[a;a+da])$ is one of the infinitesimal elements that sum up to $\mathbf{E}[X;P]$, and under the optimal change of measure

$$P^{(L^*)}(X\in[a;a+da])=\frac{a\,P(X\in[a;a+da])}{\mathbf{E}[X;P]}.$$

A good change of probability $P^{(L)}$ in importance sampling therefore redistributes the law of $X$ so that its samples' frequencies are sorted directly according to their weights in $\mathbf{E}[X;P]$; hence the name "importance sampling."

Importance sampling is often used as a Monte Carlo integrator. When $P$ is the uniform distribution and $\Omega=\mathbb{R}$, $\mathbf{E}[X;P]$ corresponds to the integral of the real function $X:\mathbb{R}\to\mathbb{R}$.
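To make the variance-reduction idea concrete, here is a minimal sketch in Python with NumPy. Every specific choice in it (the target $P=N(0,1)$, the integrand $X(\omega)=\omega^4$, and the heavier-tailed proposal $N(0,4)$) is an illustrative assumption, not something fixed by the text above; the weight $p(x)/q(x)$ attached to each draw plays the role of $1/L$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: estimate E[X; P] where X(w) = w**4 and P = N(0, 1).
# Plain Monte Carlo draws from P; importance sampling draws from a
# heavier-tailed proposal Q = N(0, 2**2) and reweights each draw by the
# likelihood ratio p(x)/q(x), which plays the role of 1/L in the text.

def p_pdf(x):  # density of the target P = N(0, 1)
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def q_pdf(x):  # density of the proposal Q = N(0, 4)
    return np.exp(-x**2 / 8) / np.sqrt(8 * np.pi)

n = 100_000

# Plain Monte Carlo estimate of E[X; P].
x_p = rng.normal(0.0, 1.0, size=n)
plain = (x_p**4).mean()

# Importance sampling: sample from Q, weight by p(x)/q(x).
x_q = rng.normal(0.0, 2.0, size=n)
w = p_pdf(x_q) / q_pdf(x_q)
is_est = (w * x_q**4).mean()

print(f"plain MC:            {plain:.4f}")   # both near E[Z^4] = 3
print(f"importance sampling: {is_est:.4f}")
print(f"per-sample variance, plain: {np.var(x_p**4):.2f}")
print(f"per-sample variance, IS:    {np.var(w * x_q**4):.2f}")
```

With these particular choices the weighted estimator has a per-sample variance roughly an order of magnitude below plain Monte Carlo, precisely because the proposal puts more mass where $\omega^4\,P(d\omega)$ is large.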

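The theoretical best case $L^*$ can also be checked numerically. In the following sketch (again with illustrative assumptions: $P$ uniform on $[0,1]$ and $X(x)=3x^2$, so $\mathbf{E}[X;P]=1$), the proposal density $3x^2$ on $[0,1]$, i.e. $\mathrm{Beta}(3,1)$, is exactly $L^*\,dP/dx$, so the weighted samples $X/L^*$ are constant and the estimator has zero variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: P uniform on [0, 1], X(x) = 3 x**2, so E[X; P] is
# the integral of 3 x^2 over [0, 1], which equals 1.  The optimal change
# of measure L* = X / E[X; P] corresponds to the proposal density
# q(x) = 3 x^2, i.e. Beta(3, 1); sampling from it makes X / L* constant.

f = lambda x: 3.0 * x**2

n = 10_000
u = rng.uniform(0.0, 1.0, size=n)
plain = f(u).mean()                      # plain Monte Carlo, fluctuates

x = rng.beta(3.0, 1.0, size=n)           # samples from q = L* dP
ratio = f(x) / (3.0 * x**2)              # X / L*, identically 1
optimal = ratio.mean()

print(plain)    # ~1, with sampling noise
print(optimal)  # exactly 1.0: zero variance
```

In practice such a proposal cannot actually be built, since normalizing $q\propto X$ requires the unknown constant $\mathbf{E}[X;P]$; the sketch only demonstrates why proposals that mimic $|X|\,dP$ work well.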
[ "Monte Carlo method", "Sampling (statistics)", "monte carlo rendering", "importance sampling method", "Slice sampling", "importance function", "rare event simulation" ]
Parent Topic
Child Topic
    No Parent Topic