
Evidence-based practice

An evidence-based practice (EBP) is any practice that relies on scientific evidence for guidance and decision-making. Practices that are not evidence-based may rely on tradition, intuition, or other unproven methods. Evidence-based practices have been gaining ground since the formal introduction of evidence-based medicine in 1992, and have spread to the allied health professions, education, management, law, public policy, and other fields. In light of studies showing problems in scientific research (such as the replication crisis), there is also a movement to apply evidence-based practices in scientific research itself. Research into the evidence-based practice of science is called metascience.

The movement towards evidence-based practices attempts to encourage, and in some instances to force, professionals and other decision-makers to pay more attention to evidence when making decisions. The goal of evidence-based practice is to eliminate unsound or outdated practices in favor of more effective ones by shifting the basis for decision-making from tradition, intuition, and unsystematic experience to firmly grounded scientific research. For most of history, professions have based their practices on expertise derived from experience passed down in the form of tradition.
Many of these practices have not been justified by evidence, which has sometimes enabled quackery and poor performance. Even when overt quackery is absent, the quality and efficiency of tradition-based practices may not be optimal. As the scientific method has become increasingly recognized as a sound means of evaluating practices, evidence-based practices have been increasingly adopted.

One of the earliest proponents of EBP was Archie Cochrane, an epidemiologist who authored the book Effectiveness and Efficiency: Random Reflections on Health Services in 1972. Cochrane's book argued for the importance of properly testing health care strategies, and was foundational to the evidence-based practice of medicine. Cochrane suggested that because resources will always be limited, they should be used to provide forms of health care shown to be effective in properly designed evaluations. He maintained that the most reliable evidence came from randomised controlled trials.

The term 'evidence-based medicine' was introduced in 1992, marking the first evidence-based practice to be formally established. Some early experiments in evidence-based medicine involved testing primitive medical techniques such as bloodletting, as well as studying the effectiveness of modern, accepted treatments. Insurance providers have pushed for evidence-based practices in medicine, sometimes refusing coverage of practices that lack systematic evidence of usefulness. Most clients now expect medical professionals to make decisions based on evidence and to stay informed about the most up-to-date findings.

Since the widespread adoption of evidence-based practices in medicine, the approach has rapidly spread to other fields. More recently, there has been a push for evidence-based education. Evidence-based learning techniques such as spaced repetition can improve students' rate of learning.
Some commentators have suggested that the putative lack of conspicuous progress in education is attributable to practice resting on the unconnected and noncumulative experience of thousands of individual teachers, each re-inventing the wheel and failing to learn from hard scientific evidence about 'what works'. Opponents of this view argue that 'hard scientific evidence' is a misnomer in education: knowing that a drug works (in medicine) is entirely different from knowing that a teaching method works, since the latter depends on a host of factors, not least the style, personality, and beliefs of the teacher and the needs of the particular children (Hammersley 2013). Some opponents of EBP in education suggest that teachers need to develop their own personal practice, based on personal knowledge garnered through their own experience. Others argue that this must be combined with research evidence, but without the latter being treated as a privileged source.

Evidence-based practice stands in philosophical opposition to tradition. Some degree of reliance on 'the way it was always done' can be found in almost every profession, even when those practices are contradicted by newer and better information. Some critics argue that since research is conducted at the population level, its results may not generalise to each individual within the population; evidence-based practices may therefore fail to provide the best solution for each individual, and traditional practices may better accommodate individual differences. In response, researchers have made an effort to test whether particular practices work better for different subcultures, personality types, and so on. Some authors have redefined EBP to include practice that incorporates common wisdom, tradition, and personal values alongside practices based on evidence.

Evaluating scientific research is extremely complex.
The process can be greatly simplified with the use of a heuristic called a hierarchy of evidence, which ranks the relative strengths of results obtained from scientific research. The design of the study and the endpoints measured (such as survival or quality of life) affect the strength of the evidence. Typically, systematic reviews and meta-analyses rank at the top of the hierarchy, randomized controlled trials rank above observational studies, and expert opinion and case reports rank at the bottom. There is broad agreement on the relative strength of the different types of studies, but there is no single, universally accepted hierarchy of evidence; more than 80 different hierarchies have been proposed for assessing medical evidence.
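Because there is no single accepted hierarchy, any concrete encoding is only an illustration. The sketch below (a hypothetical example, not taken from any particular standard) encodes the typical ordering described above as an ordered list and compares two study designs:

```python
# One common ordering of study designs in a hierarchy of evidence,
# strongest first. This is an illustrative example only; more than 80
# different hierarchies have been proposed.
HIERARCHY = [
    "systematic review / meta-analysis",
    "randomized controlled trial",
    "observational study",
    "case report / expert opinion",
]

def stronger(design_a: str, design_b: str) -> bool:
    """Return True if design_a ranks above design_b in this hierarchy."""
    return HIERARCHY.index(design_a) < HIERARCHY.index(design_b)

# A randomized controlled trial outranks an observational study here.
print(stronger("randomized controlled trial", "observational study"))  # True
```

The point of such a ranking is purely heuristic: it lets a reviewer sort a body of results by design strength before weighing the details of each study.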
