Research and Advice Giving: A Functional View of Evidence-Informed Policy Advice in a Canadian Ministry of Health

2009 
Given the exponential growth in the evidence-based medicine movement, it is perhaps surprising that the use of research and related evidence by career civil servants in health care agencies has received little more than exhortation and theoretical attention. Empirical or even descriptive studies are rare (Davies, Nutley, and Smith 2000; Mitton et al. 2007; Nutley, Walter, and Davies 2007) and far outnumbered in the last twenty years by “implementation research” in clinical practice settings. Apparently it is more rewarding to persuade clinicians to change their behavior in line with evidence than it is to improve the impact of research on the policy advice given by civil servants. The world clearly becomes more complicated when moving from a clinical focus to the management of service delivery and finally to the realm of system policy. Complex forces compete with research for the attention of civil servants and politicians: the interests of stakeholders, the values of the public, the ideologies of governing parties, the constraints of prior policy, and so on. Choosing whether to use public funds to improve children's access to immunization or to insure in-vitro fertilization (IVF), and with what breadth of coverage in what subpopulations, is decidedly more complex than the “treat–don't treat” clinical decision regarding the immunization of this child or IVF for that couple. Nor does the clinical world have to grapple with the added complexity of finding the appropriate metric to compare the benefits of immunization with those of IVF or the benefits of IVF for same-sex couples compared with heterosexual couples and how either fits with prevailing societal values. The receptor capacity for research also is less developed in management and policy than it is in medicine. But even in medicine, the gap between research and practice still is more a chasm than a crack. 
Nevertheless, the medical world tends more than the policy world to subscribe to a common knowledge base with common journals, common training, a common purpose of making patients better, and, through practice guidelines and protocols, common tools of implementation. Indeed, clinical governance structures are increasingly focused on ensuring compliance with and accountability to such evidence-based standards. In contrast, the disciplinary backgrounds of administrators managing health service delivery and civil servants advising elected and politically appointed officials on health policy are diverse, their sources of knowledge and research disparate, their purposes varied, and their accountability diffuse. Their main implementation tool is consensus around the acceptable rather than convergence on a loosely defined research “truth” (Walshe and Rundall 2001). Finally, the views of managers and civil servants regarding what counts as evidence diverge from those of clinicians, as they are broader and more encompassing (Culyer and Lomas 2006; Glasby, Walshe, and Harvey 2007). As one British writer noted in regard to the political realm, “what ministers call ‘evidence’ is what they get from their constituents” (Petticrew et al. 2004, p. 813). In this way, policy advice is less “evidence based” and more “evidence informed” (Bowen and Zwi 2005). Not surprisingly, therefore, attempts by some researchers to apply without reflection the lessons of evidence-based medicine to policy have not been successful (Boaz and Pawson 2005; Davies, Nutley, and Walter 2008; Klein 2003; Lewis 2007). Indeed, Black pointed out that “evidence based policy is not simply an extension of evidence based medicine: it is qualitatively different. Research is considered less as problem solving than as a process of argument or debate” (2001, p. 277). 
This view sits well with the work of Weiss (1979), who holds that a principal use of research for policymaking is conceptual: a source of enlightenment and a way of thinking about an issue, not an instrumental tool defining and then determining the “right” solution to a problem. These are major contrasts in the realistic expectations of how evidence, particularly evidence created by researchers, is treated and can be used by civil servants for policy advice rather than medical authorities for clinical guidance: evidence-informed versus evidence-based decisions, conceptual enlightenment versus instrumental solutions, and a way of thinking and a catalyst for debate versus an attenuation of thinking and diversion around disagreement. The implication is that the tools and programs of evidence-based medicine—critical appraisal, Cochrane-style systematic reviews, practice guidelines, audit and feedback, computer reminders, and so on—are of little relevance to civil servants trying to incorporate evidence in policy advice. What, then, are the best models and tools to encourage more evidence-informed decision making, more research-based dialogue in the policy world? In the remainder of this article, we describe the approach taken by the Ontario Ministry of Health and Long-Term Care, our method for interviewing its senior civil servants, and the specific tools it has implemented in its quest to better equip them for evidence-informed policy advice. Finally, we briefly review the shortcomings of existing classes of models or frameworks in order to understand these civil servants’ use of research before developing our own functional framework based on the interviews. In conclusion, we draw some lessons from the assessment and discuss the challenges for the future.