T07P03 - Expertise and Evidence in Public Policy

Topic : Policy Design, Policy Analysis, Expertise and Evaluation

Panel Chair : Brian Head - brian.head@uq.edu.au

Panel Second Chair : Erik Baekkeskov - erik.baekkeskov@unimelb.edu.au

Panel Third Chair : Justin Parkhurst - j.parkhurst@lse.ac.uk

Objectives and Scientific Relevance of the panel

Call for papers

Session 1

Thursday, June 29th 10:30 to 12:30 (Manasseh Meyer MM 2 - 2 (38))

Trends in evidence-informed policymaking: political and institutional limitations

Brian Head - brian.head@uq.edu.au - University of Queensland - Australia

This paper explores recent experience in OECD countries in relation to the use of evaluation reports in policy and budgetary review processes, and attempts to use expert advisory councils to improve the quality of government decision-making.

The ‘evidence-based policy’ movement has argued that systematic use of the best available evidence is the major route to improved policy and program outcomes. But many critics and sceptics point to the highly selective and politicized use of evidence in real policymaking. They also point to the corruption of public debate by populist and personality-driven media, in which opinions and fake news circulate as untested infotainment.

As a realistic middle ground for strengthening the evidence base of reasoned policy discourse, increasing attempts to institutionalize key features of evidence production and utilization could be attractive to proponents and sceptics alike.

In schematic terms, this might require long-term commitments across six closely related dimensions. The first is substantial public investment in long-term data collection on key social, economic and environmental phenomena. The second is public investment in the analytical skills required to manage and analyze these data collections, ensure quality control, and provide useful information for managers and other stakeholders. Third is developing the capacity to provide performance information for policy options analysis and to use expert information drawn from a variety of internal and external sources.

Fourth is the extensive use of evaluation and review mechanisms, with clear processes for assessing the impact of various programs and interventions and for feeding the results back into the policy development process. Fifth, expert advisory councils or standing committees might be valuable for considering matters where evidence is complex and issues are contentious. Finally, political leaders and legislators need to be supportive of open debate and the sharing of knowledge, so that improved understanding of trends and issues can be joined up with focused deliberation on the merits of various options for action.

 

Is Designing Evidence-based Evaluation for Deliberative Democracy Possible?: An Impossibility Result and the Proposal of the Issue-specific Theories of Deliberation

Ryota SAKAI - sakai.ryota@gmail.com - Waseda University - Japan

[Summary]

Diana Mutz (2008) has called for evidence-based evaluation of deliberative democracy that allows us to use evidence from empirical research both for practice and for normative research. I propose, conversely, that amalgamating varieties of evidence-based evaluation does not allow us to know whether certain procedures and institutions lead to fruitful deliberation. This conclusion is derived from Amartya Sen’s liberal paradox argument (1970) in social choice theory and is especially crucial for researchers and practitioners because it suggests that they can hardly use empirical evidence to form or select appropriate forms of citizens’ deliberative participation in public policy. To alleviate the problem, I propose “issue-specific theories of deliberation” that allow researchers and practitioners to have specifiable norms and policy goals across contexts. This paper depicts how the specification of normative arrangements across contexts facilitates evidence-based policy in deliberative democracy.

 

 

[Literature]

Current research on deliberative democracy has proposed a long list of conditions and consequences of deliberation. Varieties of “mini-publics” and other forums of deliberation have been proposed and implemented without systematic evaluation of their relative effectiveness. Therefore, in her article “Is Deliberative Democracy a Falsifiable Theory?,” Mutz has called for evidence-based evaluation of deliberative democracy that allows researchers and practitioners to achieve a scientifically productive deliberative theory. Following what she calls the “textbook” orthodoxy of good empirical research, Mutz encourages us to (1) streamline the conditions of deliberation down to their essential elements, (2) accumulate empirical evidence (causes and effects) by testing, and (3) evaluate forms of deliberation based on empirical evidence of the functions that normative theorists anticipate. My project is to offer a critical evaluation of the third proposal from a social choice theoretic perspective, which has not been discussed in detail by Mutz or other researchers.

 

 

[My Research Question]

Can deliberative theorists achieve consistent evaluation of forms of deliberation by amalgamating varieties of evidence-based functional evaluation of deliberation? My conclusion is pessimistic.

 

 

[Methodological Framework]

Why so? My concern is that Mutz’s proposal shares the same logical structure as the liberal paradox problem discussed by Amartya Sen (1970). Although originally discussed by social choice theorists as a framework concerning the privilege of liberty, Sen suggests its interpretation is open to other issues that share a similar logical structure. I therefore translate the privilege-of-liberty framework into a privilege-of-evidence framework.

In the same vein as Sen’s logic of the liberal paradox, I propose the impossibility of forming a consistent evaluation of deliberation by amalgamating evidence-based evaluations of deliberation.
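For readers unfamiliar with the result, a standard textbook formulation of Sen’s (1970) impossibility theorem, on which the proposed privilege-of-evidence argument is modelled, can be sketched as follows; the translation into the evidence setting is the author’s and is not reproduced here.

```latex
% A standard statement of Sen's liberal paradox (Sen 1970), given only as
% background to the analogy drawn in the paper; assumes amsthm's theorem env.
\begin{theorem}[Impossibility of a Paretian Liberal]
No social decision function, defined for every profile of individual
preference orderings over the set of social states $X$
(\emph{unrestricted domain}, $U$), can satisfy both of the following:
\begin{enumerate}
  \item \emph{Weak Pareto} ($P$): if every individual strictly prefers $x$
        to $y$, then $y$ is not socially chosen when $x$ is available;
  \item \emph{Minimal liberalism} ($L^{*}$): there exist at least two
        individuals, each decisive over at least one pair of social states,
        i.e.\ their preference over that pair determines the social ranking.
\end{enumerate}
\end{theorem}
% The "privilege of evidence" analogy (a gloss on the abstract, not Sen's
% text): replace decisive individuals with distinct evidence-based
% evaluations, each privileged over one evaluative criterion of deliberation;
% amalgamating them under a Pareto-like consistency requirement yields a
% parallel inconsistency.
```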

 

 

[My Proposal and its Implications]

Instead, I argue that issue-specific normative arrangements and value-ordering formation, known as the “specification” method in applied ethics, will work well as a framework for the institutional arrangements governing evidence use in different settings. In particular, I propose “issue-specific theories of deliberation”, such as the ethics of care in nursing and the ethics of immigration, as instances of specification. They allow researchers and practitioners to build a general framework of evaluation, including setting goals, specifying the informational basis, specifying value standards, and eventually using evidence to form a consistent evaluation.

Nudges and evidence based policy: Fertile ground?

Colette Einfeld - colette.einfeld@unimelb.edu.au - Australia

Nudging is an approach to public policy development which changes the decision-making environment to encourage citizens to make a particular choice. The approach has been eagerly adopted by administrations around the world, with some governments establishing dedicated units, or Behavioural Insights Teams (BITs), to advance the use of nudges.

 

Nudging seems to have positioned itself firmly within evidence-based policy rhetoric. For example, BITs have emphasised and encouraged the use of Randomised Controlled Trials as the best way to determine the effectiveness of a policy, arguing they can be simple to implement, cost-effective and save money for government in the long term. A State Government in Australia has argued that a key reason for using behavioural insights is that it supports evidence-based policy. It has also been suggested that one reason for the popularity of nudges is that the approach is built on evidence in a way that is easily understood by policymakers.

 

There is little empirical understanding of whether its association with evidence-based policy rhetoric has contributed to the popularity of nudges. This research seeks to understand how nudging is understood in relation to the evidence-based movement, from the perspective of those designing, developing and implementing nudge policies.

 

This paper first reviews the literature on how nudges are situated in the evidence-based policy movement. It then introduces the empirical research: in-depth semi-structured interviews undertaken with policy officers, academics and consultants in Australia involved in designing and developing nudge policies. The paper concludes with a discussion of how evidence-based policy rhetoric may have legitimised the nudge approach, and how the intertwining of nudging and evidence-based policy has shaped the adoption of this public policy phenomenon. It also provides insight into how ethical questions around nudges relate directly to ethical questions around the use of evidence-based policy.

 

This paper contributes to the literature and debates on the evidence-based movement through an empirically informed understanding of how evidence-based policy has shaped the legitimacy and adoption of the nudge policy tool.

Science-Led Policy-Making: is actual evidence-based policy best explained by epistemic consensus or by national ideational trajectories?

Erik Baekkeskov - erik.baekkeskov@unimelb.edu.au - University of Melbourne - Australia

Policy-making usually involves multiple kinds of actors, institutions and ideas competing for influence on the actions of government. Yet what logic explains policy-making when the scientific experts inhabiting specialized government agencies are in control? Do policies take shape from developing international epistemic community knowledge? Or do policies actually take shape from developing nationally rooted ideas? The former vision is embodied in much of the prescriptive literature on evidence-based policy, which takes the reasonable stance that scientific method leads to better answers to social problems than politics. The latter vision springs from historical institutionalism and is matched by key works in the philosophy of science, which emphasize that ideational starting points (hypotheses) are context-specific and persist until disproven under scientific standards. This paper explicates these alternative logics, and assesses their plausibility with case studies of actual science-led policy-making in infectious disease control (the 2009 H1N1 pandemic response in Denmark, the Netherlands and Sweden, and antimicrobial resistance strategizing in Australia during 2015).

Session 2

Thursday, June 29th 13:30 to 15:30 (Manasseh Meyer MM 2 - 2 (38))

Strengthening the expert review process: a case study of the WHO’s global malaria programme

Bianca DSouza - bianca.dsouza@lshtm.ac.uk - London School of Hygiene and Tropical Medicine - United Kingdom

Justin Parkhurst - j.parkhurst@lse.ac.uk - London School of Economics and Political Science - United Kingdom

Malaria is a major cause of illness and death in children all over the world, but particularly in sub-Saharan Africa, and in the past decade its prevention and control have benefitted from increased global health attention and resources, particularly from large and influential funders such as the Bill and Melinda Gates Foundation. The resulting growth in the volume and complexity of knowledge generated, both in the form of research results and national malaria control program surveillance information, has made it hard for users of that evidence to keep up and respond effectively to a rapidly changing epidemiological and political landscape. At least this was the perception among stakeholders of the WHO’s malaria department, the Global Malaria Programme (WHO-GMP), in 2010.

 

In 2011, WHO-GMP embarked on a major review and re-design of its policy setting process in order to be more responsive to the rapidly evolving malaria landscape; this culminated in the creation of the Malaria Policy Advisory Committee (MPAC) in early 2012. Formed under the tenets of a “transparent, responsive, and credible” evidence review and policy setting process, and engaging with a wide variety of experts and institutions via its Evidence Review Groups (ERGs) and Technical Expert Groups (TEGs), MPAC is meant to provide independent strategic advice and technical input to WHO for the development of policies related to malaria control and elimination. 

 

MPAC’s very first policy recommendation was the 2012 policy for Seasonal Malaria Chemoprevention (SMC), formerly known as Intermittent Preventive Treatment (IPT) in children (IPTc). Intermittent Preventive Treatment, or IPT, is the delivery of a treatment dose of an anti-malarial drug given at a pre-specified time for the prevention of malaria, regardless of the presence of symptoms or confirmed malaria infection.

 

In this case, research showed that in areas of seasonal malaria, monthly treatment with effective antimalarial drugs (in this case, a combination of amodiaquine and sulfadoxine-pyrimethamine) during the rainy season provided children under five years with a high degree of safe protection at moderate cost. Within a year of the announcement and accompanying policy document from WHO recommending SMC, an implementation guide was published, nine countries included SMC in their strategic plans for malaria control, and SMC was implemented in southern Senegal, in parts of Mali, Chad and Niger, and in a pilot scheme in northern Nigeria.

 

This experience was quite different from a policy setting process for another intermittent preventive treatment that occurred before the formation of the MPAC. That previous policy dealt with Intermittent Preventive Treatment for infants (IPTi, rather than for the older children covered by IPTc/SMC). The IPTi policy development process was widely discussed within the global malaria community as an example of a problematic process in which the inherent tension between researchers, their funders, and policy makers could have been better managed. In comparison to SMC, this tension led to a drawn-out and contentious evidence review and policy process which halted or broke down on several occasions. Although IPTi did eventually become a WHO policy in 2010, three years after its formal evidence review process began, its uptake stalled (only one country, Benin, adopted IPTi), and the political fall-out from that poorly perceived initial policy process was among the factors that prompted WHO-GMP to create an improved policy setting process via the MPAC.

 

In comparing the policy processes for intermittent preventive treatment in infants (IPTi) versus in children (SMC), the results of the study show that ‘good evidence’ from a purely technical perspective, though important, is not sufficient to ensure universal agreement and uptake of recommendations, even within a highly technocratic body such as the WHO-GMP. Interviews undertaken as part of a doctoral research project found that evidence also needs to be relevant to the policy question being asked, and that technical actors retained a concern over the legitimacy of the process by which technical evidence was brought to bear in policy development. In this way we found that the finding of Cash and colleagues (2002, 2003) from the field of sustainable development, that evidence must be credible, legitimate, and salient to be accepted by the public, appears to apply equally within expert technical advisory bodies.

Experiment-based policy making in France: political use of science and practices-based knowledges

Agathe Devaux-Spatarakis - adevaux@quadrant-conseil.fr - France

 

The propagation of the evidence-based policy (EBP) movement since the 1990s has triggered a diversity of practices in institutionalizing the relationship between the supply of and demand for evidence (Rieper 2009; Solesbury 2001). The model presented as the most scientifically grounded, following the standards of evidence-based medicine, was evidence drawn from experiments, or pilot programs, assessed by Randomized Controlled Trials (RCTs) (Cochrane 2011; Coalition for Evidence-Based Policy 2007).

 

This trend within the EBP movement promoted a new kind of scientific legitimacy for policy advice, based on the empirical demonstration of an experimental program’s or project’s impact rather than on general scientific expertise in the policy area (Duflo 2005). In a nutshell, the effectiveness of new policies had to be demonstrated through experiments assessed by counterfactual analysis before the intervention could be generalized to the whole population.

 

This model of EBP made its way to France through the creation of the Experimental Fund for Youth (EFY) within the French government in 2009. Its ambition was to design an array of new policies for disadvantaged French youth, grounded in sound evidence from funded experiments, evaluated preferably by RCTs (Conseil Scientifique du FEJ 2009). This new organization dedicated to EBP raises the questions of what was actually learned from the experiments and, most importantly, how this knowledge was used to inform policymaking.

 

These questions were addressed in a chapter of a PhD study on the French EBP movement between 2006 and 2014. Fifty interviews were conducted with actors on the supply and demand sides of evidence, alongside a thorough analysis of internal documents produced by the EFY, complemented by a cross-analysis of 15 embedded case studies of experiments evaluated by RCTs. This research adopted a broad definition of learning, scrutinizing both scientific knowledge and the practical knowledge of the policy managers implementing the programs (Head 2008; Patton 2010). All types of use were considered, following Carol Weiss’s typology of political uses of evaluation (Weiss 1998).

 

Results show that, far from providing sound scientific evidence of “what works”, experiments were mainly used by policy managers to acquire practical knowledge “as they go” while implementing these programs (1). More importantly, cross-analysis of the cases showed that the RCT protocol meant that scientific knowledge was generated at the expense of practical knowledge, and vice versa (2). Our findings concur with the literature stating that knowledge was only used when it served political interests. Yet, more surprisingly, when knowledge was used, scientific knowledge was always put to the fore, albeit systematically subject to symbolic use, misuse or conceptual use, while practical knowledge, although not produced by this evaluation method, was actually subject to instrumental use to concretely improve policy design.

 

Coalition for Evidence-Based Policy. 2007. “Hierarchy of Study Designs for Evaluating the Effectiveness of STEM Education Project or Practice”. http://coalition4evidence.org/wp-content/uploads/2009/05/study-design-hierarchy-6-4-09.pdf.

Conseil Scientifique du FEJ. 2009. “Guide méthodologique pour l’évaluation des expérimentations sociales à l’intention des porteurs de projets”.

Duflo, Esther. 2005. “Evaluer l’impact des programmes d’aide au développement : le rôle des évaluations par assignation aléatoire”. Revue d’économie du développement 19 (2): 185-226.

Head, Brian W. 2008. “Three Lenses of Evidence-Based Policy”. Australian Journal of Public Administration 67 (1): 1-11.

Higgins, J.P.T., and S. Green, eds. 2011. “Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0”. The Cochrane Collaboration.

Patton, Michael Quinn. 2010. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press.

Rieper, Hansen. 2009. “The evidence movement, the development and consequences of methodologies in review practices”. Evaluation 15 (April): 141-63.

Solesbury, William. 2001. “Evidence Based Policy: Whence it Came and Where it’s Going”. ESRC UK Centre for Evidence Based Policy and Practice, Queen Mary, University of London. http://www.kcl.ac.uk/content/1/c6/03/45/84/wp1.pdf.

Weiss, Carol H. 1998. “Have We Learned Anything New About the Use of Evaluation?” American Journal of Evaluation 19 (1): 21-33.

 

 

Inquiring with evidence: how contemporary public inquiries bring evidence to policy

Sue Regan - sue.regan@anu.edu.au - Australia

 

Evidence is a crucial component of policy-making, yet little is known about its form and function in public inquiries such as Royal Commissions, taskforces, reference groups and commissions of inquiry. Public inquiries are ad hoc and temporary advisory bodies appointed at the discretion of executive government, and represent a resilient feature within evolving governance contexts worldwide. Typically, public inquiries include expert members and undertake (with varying approaches, effort and rigour) processes of evidence production, synthesis and analysis. As such, they can be portrayed as ‘evidence-based’ and offer a useful site for examining how evidence is used, contested and negotiated in policy-making. This paper considers what counts as evidence in a public inquiry process, what forms of ‘expertise’ are included and excluded, and to what policy effect. Drawing on a qualitative analysis of three social policy inquiries (one from the UK and two from Australia), the analysis provides ground-breaking empirical data on how the inquiry process defines different forms of evidence and navigates tensions between them. The case studies offer different strategies for how evidence and expertise can be embedded in policy-making. The paper argues that while public inquiries provide important sites for promoting evidence in policy processes, they pose important evidentiary challenges, including how evidence use is balanced with other policy principles deemed important in the inherently political process of inquiries. The paper informs broader debates on the role and contestation of expertise and evidence in contemporary policy-making.

The Big Bad Wolf’s View: The Evaluation Clients’ Perspective on Independence of Evaluations

Susanne Hadorn - susanne.hadorn@kpm.unibe.ch - Center of competence for public management, University of Bern - Switzerland

Lyn Pleger - Lyn.Pleger@kpm.unibe.ch - Center of Competence for Public Management, University of Bern - Switzerland

The independence of evaluations in general, and the pressure put on evaluators by stakeholders in particular, have gained increasing attention in research. The call by the evidence-based policy movement (EBP) for the use of unbiased evidence within policy making highlights the importance of independent evaluations. Only evaluations that are conducted in the absence of distortion meet the requirements specified by the EBP. In this vein, research has mainly focused on evaluators’ experiences when confronted with pressure. This focus, however, has led to a restricted view of the complex context in which evaluations take place. Specifically, while research has examined how evaluators react in cases of pressure, it is time to also understand how and why pressure is exercised. Therefore, this paper follows the call by previous studies to pay attention to evaluation clients, who have been identified as the most influential stakeholders within evaluation processes. By means of an online survey among Swiss evaluation commissioners, this paper aims to shed light on clients’ notions of the independence of evaluations. The findings aim to improve the dialogue between evaluators and evaluation clients and ultimately to increase the quality of evaluation results. Likewise, the study contributes to the EBP literature by testing whether its theoretical postulates find application in practice.

 
