By Professor Sir Peter Gluckman (NZ), GUEST CONTRIBUTOR: Paediatrician and Endocrinologist; Chief Science Adviser to the Prime Minister of New Zealand; Chairman, International Network on Science Advice to Governments; Co-Chair of the WHO Commission on Ending Childhood Obesity; Elected Foreign Member of the Institute of Medicine, US National Academies of Sciences; Fellow of the UK Academy of Medical Sciences and Fellow of the Royal Society of London; University Distinguished Professor, University of Auckland; Visiting Professor, University College London; and holder of Honorary Chairs at the National University of Singapore and University of Southampton. He is also Chief Scientific Officer, Singapore Institute for Clinical Sciences; Dean Emeritus of the Faculty of Medical and Health Sciences and Director Emeritus of the Liggins Institute, University of Auckland.
At the first global meeting on the practice of providing science advice to governments, held in Auckland in August 2014, there was broad recognition that there are multiple modalities of science advice, including both deliberative (formal) and informal processes17. Where the evidence is of a complex or contested nature, formal processes are essential; but to be effective, any system intended to ensure the integrity of scientific input into policy also requires informal processes. Not only do these two sets of processes differ from each other, but they are also quite separate from the goals and practices of providing policy for the science system per se. Yet these concepts and modalities are often conflated in the minds of the public, of government and, indeed, of the science community.
How science policy-making usually works
Formal deliberative inputs may come from an academy or a commissioned panel of experts or, in some cases, from government experts within a ministry. Academies may choose to undertake studies on their own account, or they may be requested to do so by policy makers. When they act on their own account, however, they may be somewhat disappointed that their work does not have the impact they had hoped for, because it may answer questions that the policy process has not asked and does not perceive as needed. Generally, the robust independence of academies provides confidence in the products of their work, whether such studies are self-initiated or commissioned by governments. The use of ad hoc or standing committees of experts, provided they are appropriately constituted and conducted, offers a similar level of assurance.
One key consideration with regard to such formally constituted and deliberative science advice, however, is the prescribed timing of the committee’s or panel’s input into the policy process. This input comes either quite early (horizon scanning and foresighting) or late in the policy process (as decision makers await expert input). In either case, it nearly always occurs at a single point within the policy cycle, yet the need for evidence, and its use, often evolves throughout the cycle.
Policy formation is a complex, iterative process between policy analysts/advisors, subject matter experts, politicians and external stakeholders. It is easy for science to be quickly ignored or separated from other considerations. Yet the essential need is to try to ensure the place and integrity of science-based input throughout the process. In my view, this role is best served by individual ‘evidence champions’, such as science advisors who are mandated within the policy process yet remain positioned with a level of independence from the other processes of government. This independence is important.
Such informal advice serves many other purposes. More often than not, the genesis of policy ideas is found in the informal conversations that politicians have with colleagues, counterparts and constituents. Sometimes these ideas are supported by a recognised evidence base, but often they are not. In my experience, this is where the individual science advisor can act as a foil and provide a challenge function. She or he can offer a safe and knowledgeable sounding board against which to test ideas and sharpen questions. The advisor can remind others of the data and evidence and flag any issues that the evidence may raise about the various policy options. That informal interaction may be the most important – but often unrecognised – component of the science advisory system.
Public servants who are technical experts play an important part in policy development and can also serve to protect the integrity of science-based advice. However, depending on the culture of the department, this internal expertise is often quite removed from the final decision-making machinery; or the science is too easily blended with other elements of the policy process, so that key points are lost as the advice is filtered across the policy process.
Yet it is becoming increasingly clear that science should have a privileged place in the policy process, because many areas in which policy- and law-makers are now called upon to act require the benefit of science. Robust scientific input is the only reliable way to understand what is known about many issues, what is not known, and what the likely implications and trade-offs are of the policy options being explored. Once these elements are clearly delineated, it is the job of the decision-makers to weigh the trade-offs and to make values-based judgments, which require many other inputs, about one course of action over another.
While it is accepted that science itself is not values-free, the values associated with science (e.g. the choice of methodology and analytical framework, or judgments on the sufficiency of evidence) are quite distinct from the values associated with policy-making, such as those of public opinion and political ideology. Ideally, government departments should have processes to protect the integrity of the evidence through this imprecise and shifting policy process. In New Zealand this will be a key role of the growing cadre of departmental science advisors now being appointed.
Two recent experiences in New Zealand highlight the relevance of a mixed model of formal and informal science advice with iterative capacities. The second case also highlights the limitations that the scientific community must accept as a reality of democracy.
Example 1: Teenage Morbidity
In 2009, a new Government was elected in New Zealand. The new Prime Minister himself shared the public’s concern over the high rates of teenage morbidity and mortality in New Zealand’s multicultural society, the statistics on which are alarming by international comparison. Within the OECD, New Zealand youth had the highest rate of suicide and the second highest rate of teenage pregnancy, and there was high public concern over alcohol and drug use and teenage crime statistics. Media interest was acute, with clear calls for action.
The conventional approach in New Zealand would have been to establish a multi-agency, multi-stakeholder working group to report on the issue. In all likelihood, its outcomes would have represented the inputs of the various vested interests and little progress would have been made, given that on such issues every person and stakeholder has a strong view as to what the solution should be.
Instead, the Prime Minister approached me, as the country’s first Chief Science Advisor (CSA), for my view on how to proceed. I recommended that, as a starting point, we establish a committee of academic experts to review the literature and define what we actually knew about the science of adolescence and its associated morbidity. This was agreed to. I appointed an academic co-chair, a distinguished developmental psychologist, and we initially approached 15 experts from a range of relevant disciplines to form a committee. Ground rules were established: only the peer-reviewed literature was to be considered, and work that reflected a values-based or ideological position was identified and ruled out. What was intended as a six-month process in the end took about 18 months, and the panel of experts grew as new dimensions to the study were identified.
The final report18 comprised a long summary document followed by over 25 chapters written by members of the expert panel and their associates. It was subject to international peer review. It made effectively no specific recommendations but highlighted areas of focus, both with respect to early childhood experiences and the development of resilience, and with respect to youth mental health and the development of the adolescent brain. Both an interim report and the final report received high media interest and led to considerable public conversation.
The Prime Minister established a group of senior officials from both his Office and the relevant ministries to consider the report’s policy implications. The officials recommended a comprehensive suite of new actions, largely in the area of youth mental health promotion and treatment. The Prime Minister again asked me to chair a small group of relevant academics to review the policy recommendations. (As it happened, this group had strong views about one potential class of approach that had been omitted by officials, which I shared with the Prime Minister in informal discussion. As a result, the deficiency was remedied and consequent modifications were made to the final suite of activities for him to consider.)
Throughout this policy formation process, it was the report of the academic experts, and then those of both the officials and the academic review group, that made it clear that the problem was complex and that there would be significant uncertainty as to which, if any, of the proposed interventions would be effective. The academic review group advocated the continual monitoring and evaluation of interventions once implemented – something that is not always common practice within government.
The Prime Minister subsequently announced the funding of the full suite of recommendations at a press conference and, in doing so, acknowledged that this choice of interventions was based on the best evidence available, but that their effect would be uncertain and warranted concurrent monitoring and study. In my experience, it is unusual – and very healthy – for a politician to announce a major initiative without making claims about its expected success. In this case, the Prime Minister was calling for nothing short of a robust policy intervention trial, with multiple activities being tried in parallel and subject to evaluation. This too was well received by the media, and it suggests that well-framed, evidence-informed policy formation of uncertain impact, even in complex areas, is perfectly acceptable to the public and refreshingly devoid of political hubris. The evaluation of these programmes is now being undertaken by an independent agency.
Several important principles were in play in this example. The process was initiated by an informal interaction between the Prime Minister and a trusted independent science advisor, and it led to a quite different process than would otherwise have emerged. This in turn led to a very extensive deliberative exercise with rigorous separation of the science from non-scientific, values-based arguments. The policy analysts were then able to use this as the basis for developing recommendations for action. But it again took shepherding by the science advisor to protect the integrity of those recommendations to the point of final decision-making. The political upside of evidence-informed policy formation in a complex and highly charged area was that contentious debate was readily defused by a focus on a suite of actions whose possible outcomes were not exaggerated by the political process.
Example 2: Recreational Drug Use
New Zealand, like most countries, had taken a standard prohibitive approach to recreational psychotropic drugs – namely, grading them for their perceived risk of harm and addiction and then specifically listing them in legal schedules with a proportional range of penalties. However, with the advent of synthetic cannabinoids, many potentially harmful agents effectively escaped regulation. An arms race was underway between the chemists and the regulators, and the regulators could not win. At the same time there was mounting evidence of, and growing concern at, the number of young people being admitted to emergency rooms with acute drug-related psychotic events. That is when a member of the [then] coalition government proposed a bold experiment: to reverse the approach by banning all such agents unless they were proven to be “safe” according to regulations to be administered by the Ministry of Health (put another way, legalising those recreational psychoactive agents that could meet rigorous safety standards). This approach rapidly gained traction, at least in principle, but the debate became politicised when it came to defining the “sufficiency of evidence” for safe use and its codification into law.
The proponents of liberalising recreational drug use argued for a lower standard of proof than would be needed for a medicinal pharmaceutical agent, contending that the state of synthetic chemistry was such that in vitro testing would be sufficient. Those with experience in pharmacology, by contrast, argued that animal testing in two species was needed, including some forms of acute and chronic toxicology measures. My Office was consulted by both the Ministry of Health and by Ministers. I advised that if the State was to give some affirmation of the relative safety of new psychoactive substances, a pharmacological approach would be required and, in the current state of knowledge, some in vivo testing in animals was unavoidable. Ministry officials concurred with this view.
At this point, both the political and the popular discourse rapidly shifted from the role of the State in ensuring the relative safety of approvable agents to the question of testing recreational drugs on animals. The use of animals in research and development is highly emotive, and there was no political appetite for expanding safety testing in animal models beyond therapeutic agents. While some toxicologists had argued that sufficient evidence could be collected via rodent tests alone (i.e. avoiding larger animal species such as dogs), even that more restricted definition of sufficiency of evidence became an emotive issue. The parliamentary select committee handling the Bill sent it back with amendments that banned animal testing. With the relevant drug safety committee of the Ministry of Health requiring animal testing to certify a pharmacological agent as fit for human use, the outcome was effectively a total ban on the recreational use of synthetic psychoactive agents.
It is not clear whether the parliamentarian who started the debate on animal testing did so with the calculated intention of achieving this total ban. In any event, the outcome has been that the country has ended the race between chemist and regulator.
The nature of the debate in this case rapidly shifted from an evidence-based discussion to a strictly values-based discussion on the use of animals in drug testing. The scientific advice and practice did not change, and the practical outcome was a total ban. It is clear that the appeal to science in this case offered considerable collateral political benefit: politicians appeared sympathetic to a liberal approach; they emphasised concern for public safety (especially for the youth market for the drugs); and they were sensitive to societal concerns about animal testing – all while achieving an effectively total ban, without being seen to be heavy-handed.
These two examples highlight some key points about the use and limits of science advice and the realpolitik of policy formation in emotionally charged areas. The first example demonstrates that there are major areas of complex policy development where science advice can play an important role. But it also demonstrates the synergistic roles of informal and formal advice. It is difficult to imagine the counterfactual in which an academy writes an unsolicited report on adolescence and this leads to major government action. Nor would such a scenario create the nexus of interaction between scientist, policy maker and politician that led to action. But while informal processes were essential to the initiation of the work and to ensuring integrity in the latter phases of policy formation, the deliberative formal approach was essential to the trust in, and validity of, the outcomes achieved.
The second example highlights something quite different: there are instances where science may seem key to resolution when in reality it is marginal to the larger political process. However marginal, in this case it was the integrity of science (i.e. insisting on animal testing) that effectively resolved the values-based debate.
In a democracy there will always be a complex interplay between science and societal values. Successful science advice requires that those individuals or committees acting in the boundary roles between the two worlds of science and policy appreciate the difference between scientific values about methods and the sufficiency of evidence, and societal values. When they enter the world of societal values their voice may be informed by science but they have no more status than any other citizen. Policy makers rightly get concerned (even angry and dismissive of science advice) if those with such intermediary responsibilities are seen to usurp the policy maker’s role as arbiter of the trade-offs between different societal values and concerns.
This distinction is not always easy to make, and a further distinction needs to be appreciated: between the rights of the individual scientist as a citizen and the roles of those who have specific intermediary responsibilities in advising governments. As Roger Pielke19 has pointed out, individual scientists can and do act as advocates, but those in advising roles need to act primarily as honest brokers of knowledge. As science is increasingly called upon to assist society in many complex and contentious areas, it is important to understand how the worlds of the advocate and the honest broker interact and, in some cases, collide. Perceptions of hazard, risk, vulnerability and precaution often vary between the public, the science community and politicians. Any science advisory mechanism must take this into account.
REFERENCES IN THIS ARTICLE
17) See www.globalscienceadvice.org for the meeting report.
19) RA Pielke, The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge University Press, 2007.