CLs upper limits

In particle physics, CLs represents a statistical method for setting upper limits (also called exclusion limits) on model parameters, a particular form of interval estimation used for parameters that can take only non-negative values. Although the name is said to refer to confidence levels, "The method's name is ... misleading, as the CLs exclusion region is not a confidence interval." It was first introduced by physicists working at the LEP experiment at CERN and has since been used by many high energy physics experiments. It is a frequentist method in the sense that the properties of the limit are defined by means of error probabilities; however, it differs from standard confidence intervals in that the stated confidence level of the interval is not equal to its coverage probability. The reason for this deviation is that standard upper limits based on a most powerful test necessarily produce empty intervals with some fixed probability when the parameter value is zero, and this property is considered undesirable by most physicists and statisticians.

Upper limits derived with the CLs method always contain the zero value of the parameter, and hence the coverage probability at this point is always 100%. The definition of CLs does not follow from any precise theoretical framework of statistical inference and is therefore sometimes described as ad hoc. It has, however, a close resemblance to concepts of statistical evidence proposed by the statistician Allan Birnbaum: "A concept of statistical evidence is not plausible unless it finds 'strong evidence for H_2 as against H_1' with small probability (α) when H_1 is true, and with much larger probability (1 − β) when H_2 is true."

Let X be a random sample from a probability distribution with a real non-negative parameter θ ∈ [0, ∞). A CLs upper limit for the parameter θ, with confidence level 1 − α′, is a statistic (i.e., an observable random variable) θ_up(X) which has the property

    P(θ_up(X) < θ | θ) / P(θ_up(X) < θ | 0) ≤ α′   for all θ.     (1)

The inequality is used in the definition to account for cases where the distribution of X is discrete and an equality cannot be achieved precisely. If the distribution of X is continuous, it should be replaced by an equality. Note that the definition implies that the coverage probability P(θ_up(X) ≥ θ | θ) is always at least 1 − α′.

An equivalent definition can be made by considering a hypothesis test of the null hypothesis H_0: θ = θ_0 against the alternative H_1: θ = 0. The numerator in (1), when evaluated at θ_0, corresponds to the type-I error probability α of the test (i.e., θ_0 is rejected when θ_up(X) < θ_0), and the denominator corresponds to the power 1 − β. The criterion for rejecting H_0 thus requires that the ratio α/(1 − β) be smaller than α′. This can be interpreted intuitively as saying that θ_0 is excluded because observing an outcome as extreme as X is at most α′ times as likely when θ_0 is true as when the alternative θ = 0 is true.

The calculation of the upper limit is usually done by constructing a test statistic q_θ(X) and finding the value of θ for which

    CLs ≡ P(q_θ(X) ≥ q*_θ | θ) / P(q_θ(X) ≥ q*_θ | 0) = α′,

where q*_θ is the observed outcome of the experiment.
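The definition can be made concrete with a toy example. The sketch below assumes a single-bin counting experiment with a known background rate, uses the observed event count itself as the test statistic (so the CLs ratio reduces to a ratio of Poisson tail probabilities), and introduces illustrative function names such as cls_value and cls_upper_limit; it is a simplified stand-in for the profile-likelihood machinery actually used at LEP, the Tevatron and the LHC.

```python
# Minimal sketch: CLs upper limit for a single-bin Poisson counting experiment
# with known background rate b. The observed count n serves as the test
# statistic, so CLs(s) = P(n <= n_obs | s + b) / P(n <= n_obs | b).
from scipy.optimize import brentq
from scipy.stats import poisson


def cls_value(s, b, n_obs):
    """CLs ratio for signal rate s, background rate b and observed count n_obs."""
    p_sb = poisson.cdf(n_obs, s + b)  # tail probability under signal + background
    p_b = poisson.cdf(n_obs, b)       # tail probability under background only
    return p_sb / p_b


def cls_upper_limit(n_obs, b, alpha=0.05, s_max=100.0):
    """Value of s at which CLs(s) = alpha; larger values of s are excluded."""
    # CLs(0) = 1 and CLs decreases monotonically in s, so a root exists
    # in (0, s_max) for any sufficiently large s_max.
    return brentq(lambda s: cls_value(s, b, n_obs) - alpha, 0.0, s_max)


if __name__ == "__main__":
    # Example: 5 events observed on an expected background of 3.
    print(f"95% CLs upper limit on s: {cls_upper_limit(n_obs=5, b=3.0):.2f}")
```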
Upper limits based on the CLs method were used in numerous publications of experimental results obtained at particle accelerator experiments such as LEP, the Tevatron and the LHC, most notably in searches for new particles.

The original motivation for CLs was based on a conditional probability calculation suggested by the physicist G. Zech for an event-counting experiment. Suppose an experiment consists of measuring n events coming from signal and background processes, both described by Poisson distributions with respective rates s and b, namely n ∼ Poiss(s + b). Here b is assumed to be known and s is the parameter to be estimated by the experiment. The standard procedure for setting an upper limit on s, given an experimental outcome n*, consists of excluding the values of s for which P(n ≤ n* | s + b) ≤ α, which guarantees at least 1 − α coverage. Consider, for example, a case where b = 3 and n* = 0 events are observed; one then finds that s + b ≥ 3 is excluded at 95% confidence level. But this implies that s ≥ 0 is excluded, namely all possible values of s (a short numerical check of this is given below).
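The snippet below, under the same illustrative toy-model conventions as the earlier sketch, evaluates P(n ≤ 0 | s + b) for b = 3 and several trial values of s and shows that every one of them falls below α = 0.05, i.e. the standard interval is empty.

```python
# Check (toy counting model as above): with b = 3 and n* = 0, the standard
# procedure excludes every s >= 0, because P(n <= 0 | s + b) = exp(-(s + b))
# is already below 0.05 at s = 0.
from scipy.stats import poisson

b, n_star, alpha = 3.0, 0, 0.05
for s in [0.0, 0.5, 1.0, 2.0, 5.0]:
    p = poisson.cdf(n_star, s + b)  # equals exp(-(s + b)) for n* = 0
    print(f"s = {s:4.1f}: P(n <= 0 | s+b) = {p:.4f}  excluded: {p <= alpha}")
```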
Such a result is difficult to interpret, because the experiment essentially cannot distinguish very small values of s from the background-only hypothesis, and declaring that such small values are excluded (in favor of the background-only hypothesis) therefore seems inappropriate. To overcome this difficulty, Zech suggested conditioning the probability that n ≤ n* on the observation that n_b ≤ n*, where n_b is the (unmeasurable) number of background events. The reasoning behind this is that when n_b is small the procedure is more likely to produce an error (i.e., an interval that does not cover the true value) than when n_b is large, and the distribution of n_b itself is independent of s. That is, it is not the overall error probability that should be reported, but the conditional probability given the knowledge one has on the number of background events in the sample. This conditional probability is

    P(n ≤ n* | n_b ≤ n*, s + b) = P(n ≤ n*, n_b ≤ n* | s + b) / P(n_b ≤ n* | b) = P(n ≤ n* | s + b) / P(n ≤ n* | b),

where the second equality holds because n_b ≤ n, so the event n ≤ n* already implies n_b ≤ n*, and n_b itself follows a Poisson distribution with mean b.
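For b = 3 and n* = 0 this conditional probability takes the simple form exp(−(s + b))/exp(−b) = exp(−s), independent of the background, so the resulting 95% upper limit on s is finite (about 3.0) instead of an empty interval. The sketch below, using the same illustrative toy model as before, checks this numerically.

```python
# Toy model as above: the conditional (CLs) probability for n* = 0 is
# P(n <= 0 | s + b) / P(n <= 0 | b) = exp(-s), independent of b, so the
# 95% upper limit on s is -ln(0.05) ~ 3.0 rather than an empty interval.
import math

from scipy.stats import poisson

b, n_star, alpha = 3.0, 0, 0.05
for s in [0.0, 1.0, 2.0, 3.0, 5.0]:
    ratio = poisson.cdf(n_star, s + b) / poisson.cdf(n_star, b)
    print(f"s = {s:4.1f}: conditional probability = {ratio:.4f}  excluded: {ratio <= alpha}")

print(f"95% upper limit on s: {-math.log(alpha):.3f}")
```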

[ "Coffin–Lowry syndrome", "classical least squares", "chaotic local search", "closed loop stimulation", "CENANI-LENZ SYNDACTYLY" ]
Parent Topic
Child Topic
    No Parent Topic