Fisher's method

In statistics, Fisher's method, also known as Fisher's combined probability test, is a technique for data fusion or "meta-analysis" (analysis of analyses). It was developed by and named for Ronald Fisher. In its basic form, it is used to combine the results from several independent tests bearing upon the same overall hypothesis (H0).

Fisher's method combines the extreme-value probabilities from each test, commonly known as "p-values", into one test statistic (X²) using the formula

    X^2 = -2 \sum_{i=1}^{k} \ln(p_i),

where p_i is the p-value for the i-th hypothesis test. When the p-values tend to be small, the test statistic X² will be large, which suggests that the null hypotheses are not true for every test. When all the null hypotheses are true, and the p_i (or their corresponding test statistics) are independent, X² has a chi-squared distribution with 2k degrees of freedom, where k is the number of tests being combined. This fact can be used to determine the p-value for X², as in the first code sketch below.

The distribution of X² is a chi-squared distribution for the following reason: under the null hypothesis for test i, the p-value p_i follows a uniform distribution on the interval [0, 1]. The negative natural logarithm of a uniformly distributed value follows an exponential distribution. Scaling a value that follows an exponential distribution by a factor of two yields a quantity that follows a chi-squared distribution with two degrees of freedom. Finally, the sum of k independent chi-squared values, each with two degrees of freedom, follows a chi-squared distribution with 2k degrees of freedom.

Dependence among statistical tests is generally positive, which means that the p-value of X² is anti-conservative (too small) if the dependency is not taken into account. Thus, if Fisher's method for independent tests is applied in a dependent setting and the resulting p-value is not small enough to reject the null hypothesis, that conclusion will continue to hold even if the dependence is not properly accounted for. However, if positive dependence is not accounted for and the meta-analysis p-value is found to be small, the evidence against the null hypothesis is generally overstated. The mean false discovery rate, α(k + 1)/(2k), a reduction of α for k independent or positively correlated tests, may suffice to control alpha for useful comparison with an over-small p-value from Fisher's X².

In cases where the tests are not independent, the null distribution of X² is more complicated. A common strategy is to approximate the null distribution with a scaled χ²-distributed random variable; different approaches may be used depending on whether or not the covariance between the different p-values is known. Brown's method can be used to combine dependent p-values whose underlying test statistics have a multivariate normal distribution with a known covariance matrix. Kost's method extends Brown's to allow one to combine p-values when the covariance matrix is known only up to a scalar multiplicative factor. The harmonic mean p-value offers an alternative to Fisher's method for combining p-values when the dependency structure is unknown but the tests cannot be assumed to be independent.
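As a concrete illustration of the basic procedure, here is a minimal Python sketch: it forms X² = −2 Σ ln(p_i) and refers it to a chi-squared distribution with 2k degrees of freedom. The example p-values are made up for illustration; SciPy's scipy.stats.combine_pvalues(pvalues, method='fisher') performs the same computation as a built-in.

```python
# Minimal sketch of Fisher's combined probability test.
# The example p-values are invented for illustration.
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvalues):
    """Combine k independent p-values with Fisher's method.

    Returns the statistic X^2 = -2 * sum(ln(p_i)) and its p-value
    under a chi-squared distribution with 2k degrees of freedom.
    """
    pvalues = np.asarray(pvalues, dtype=float)
    k = pvalues.size
    x2 = -2.0 * np.log(pvalues).sum()     # test statistic
    p_combined = chi2.sf(x2, df=2 * k)    # upper-tail probability
    return x2, p_combined

x2, p = fisher_combine([0.10, 0.02, 0.30, 0.07])
print(f"X^2 = {x2:.3f}, combined p = {p:.4f}")
```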
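The chain of distributional facts in the derivation above can also be checked by simulation. The sketch below draws uniform p-values (i.e., all null hypotheses true), forms the Fisher statistic for each replicate, and compares the resulting sample against the chi-squared distribution with 2k degrees of freedom; the simulation parameters are arbitrary choices.

```python
# Monte Carlo check of the derivation: under H0 for all tests,
# X^2 built from uniform p-values should follow chi2 with 2k df.
import numpy as np
from scipy.stats import chi2, kstest

rng = np.random.default_rng(0)
k, n_sim = 5, 100_000
p = rng.uniform(size=(n_sim, k))      # k independent uniform p-values per replicate
x2 = -2.0 * np.log(p).sum(axis=1)     # Fisher statistic for each replicate

# Kolmogorov-Smirnov comparison against chi-squared with 2k degrees of freedom
stat, pval = kstest(x2, chi2(df=2 * k).cdf)
print(f"KS statistic = {stat:.4f}, p = {pval:.3f}")  # large p: consistent with chi2_2k
```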
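For the dependent case, the text names Brown's method without spelling it out. The sketch below shows only the general moment-matching idea behind that family of approximations: the null distribution of X² is approximated by a scaled chi-squared variable c·χ²_f whose first two moments match those of X². How mean_x2 and var_x2 are obtained (in Brown's method, from the covariance matrix of the underlying test statistics) is assumed given here, and the function name is hypothetical.

```python
# Hedged sketch of the scaled chi-squared approximation used in
# Brown-style corrections: match E[X^2] and Var[X^2] to c * chi2(f).
# mean_x2 and var_x2 are assumed supplied; under independence they are
# 2k and 4k, and under dependence var_x2 also includes covariance terms.
from scipy.stats import chi2

def scaled_chi2_pvalue(x2, mean_x2, var_x2):
    f = 2.0 * mean_x2 ** 2 / var_x2   # effective degrees of freedom
    c = var_x2 / (2.0 * mean_x2)      # scale factor
    return chi2.sf(x2 / c, df=f)

# With independent tests (mean 2k, variance 4k) this reduces to the
# ordinary Fisher reference distribution:
k, x2 = 4, 20.16
print(scaled_chi2_pvalue(x2, mean_x2=2 * k, var_x2=4 * k))
```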

[ "Applied mathematics", "Fisher information", "Statistics", "Econometrics", "Fisher's z-distribution" ]
Parent Topic
Child Topic
    No Parent Topic