Publication bias

Publication bias is a type of bias that occurs in published academic research. It occurs when the outcome of an experiment or research study influences the decision whether to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings and inserts bias in favor of positive results. The study of publication bias is an important topic in metascience.

Studies with significant results can be of the same standard as studies with a null result with respect to quality of execution and design. However, statistically significant results are three times more likely to be published than null results. Multiple factors contribute to publication bias. For instance, once a scientific finding is well established, it may become newsworthy to publish reliable papers that fail to reject the null hypothesis. The most common reason for non-publication, however, has been found to be simply that investigators decline to submit their results, leading to non-response bias. Factors cited as underlying this effect include investigators assuming they must have made a mistake, failure to support a known finding, loss of interest in the topic, or anticipation that others will be uninterested in the null results.

These issues, and the problems they have triggered, have been referred to as the five diseases that threaten science: 'significosis, an inordinate focus on statistically significant results; neophilia, an excessive appreciation for novelty; theorrhea, a mania for new theory; arigorium, a deficiency of rigor in theoretical and empirical work; and finally, disjunctivitis, a proclivity to produce large quantities of redundant, trivial, and incoherent works.'

Attempts to identify unpublished studies often prove difficult or unsatisfactory. In an effort to combat this problem, some journals require that studies submitted for publication be pre-registered (registering a study prior to collection of data and analysis) with organizations like the Center for Open Science. Other proposed strategies to detect and control for publication bias include p-curve analysis and disfavoring small and non-randomised studies because of their demonstrated high susceptibility to error and bias.

Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested and the significance and direction of the effects detected. The subject was first discussed in 1959 by the statistician Theodore Sterling, who noted that in some fields 'successful' research is more likely to be published. As a result, 'the literature of such a field consists in substantial part of false conclusions resulting from errors of the first kind in statistical tests of significance'. In the worst case, false conclusions could become canonized as true if the publication rate of negative results is too low.
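The canonization risk described in the preceding paragraph can be made concrete with a short simulation. The sketch below is illustrative only: the share of true hypotheses, the effect size, and the sample sizes are assumed values chosen for demonstration, not figures drawn from the literature. It 'publishes' only significant results and reports what fraction of the published record consists of Type I errors.

    # Minimal simulation of the file-drawer mechanism: run many hypothetical
    # two-sample studies, 'publish' only the significant ones, and measure
    # how much of the published record consists of false positives.
    # All parameters are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_studies = 10_000       # hypothetical studies
    n_per_group = 30         # participants per arm
    true_effect_share = 0.1  # assume only 10% of tested hypotheses are true
    effect_size = 0.5        # Cohen's d when the effect is real

    published_true = published_false = 0
    for _ in range(n_studies):
        is_true = rng.random() < true_effect_share
        d = effect_size if is_true else 0.0
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(d, 1.0, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:  # only significant results reach publication
            if is_true:
                published_true += 1
            else:
                published_false += 1

    false_share = published_false / (published_true + published_false)
    print(f"Share of published findings that are false positives: {false_share:.2f}")

Under these assumed parameters roughly half of the published findings are errors of the first kind, even though every individual study used a conventional 5% significance level.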
Publication bias is sometimes called the 'file-drawer effect' or 'file-drawer problem.' This term suggests that results not supporting the hypotheses of researchers often go no further than the researchers' file drawers, leading to a bias in published research. The term 'file drawer problem' was coined by Rosenthal in 1979.

Positive-results bias, a type of publication bias, occurs when authors are more likely to submit, or editors are more likely to accept, positive results than negative or inconclusive results. Outcome reporting bias occurs when multiple outcomes are measured and analyzed, but the reporting of these outcomes is dependent on the strength and direction of their results. A generic term coined to describe these post-hoc choices is HARKing ('Hypothesizing After the Results are Known').

There is extensive meta-research on publication bias in the biomedical field. Investigators following clinical trials from the submission of their protocols to ethics committees (or regulatory authorities) until the publication of their results observed that those with positive results were more likely to be published. In addition, studies often fail to report negative results when published, as demonstrated by research comparing study protocols with published articles.
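To make the p-curve strategy mentioned above more concrete: when studies detect a real effect, the significant p-values tend to pile up near zero (a right-skewed curve), whereas a set of significant results produced purely by chance is roughly uniform between 0 and .05. The sketch below is a deliberately simplified version of the idea, applying a binomial test to invented p-values; a full p-curve analysis is more involved.

    # Simplified p-curve check: under no evidential value, a significant
    # p-value is equally likely to fall below or above .025, so a surplus
    # below .025 indicates right skew. The p-values here are made up
    # for illustration; they are not data from any real study.
    from scipy import stats

    significant_p_values = [0.001, 0.012, 0.003, 0.041, 0.008, 0.019, 0.002]

    below = sum(p < 0.025 for p in significant_p_values)
    result = stats.binomtest(below, n=len(significant_p_values), p=0.5,
                             alternative="greater")
    print(f"{below}/{len(significant_p_values)} significant p-values below .025; "
          f"binomial test p = {result.pvalue:.3f}")

A small binomial-test p-value here would suggest that the set of significant results carries evidential value rather than reflecting selective publication alone.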

[ "Confidence interval", "Odds ratio", "Meta-analysis", "Correlation and dependence", "MEDLINE", "Funnel plot", "P-hacking", "Decline effect" ]