By Carthage Smith, Senior Policy Analyst, OECD
On 6 April 2009 a major earthquake hit the Italian city of L'Aquila, causing the death of 309 people. This did not come without warnings from nature - a series of smaller-magnitude tremors occurred in the weeks prior to the main earthquake, and it was even predicted by at least one 'rogue' scientist (although the scientific basis for this prediction was questionable). However, the formal scientific advice from the government-mandated Major Risk Committee, which met a few days earlier, on 31 March, was that the occurrence of a large earthquake in the short term was unlikely. The reassuring message that was subsequently communicated to the media almost certainly contributed to the loss of life. After the earthquake, the four Risk Committee members and the three other scientific experts who attended the 31 March meeting were charged with manslaughter and causing multiple serious injuries. In October 2012 they were found guilty and sentenced to six years' imprisonment. This sent shockwaves across the whole global science community, with many academic organisations raising their voices in protest - after all, any seismologist knows that earthquakes cannot be accurately predicted in the short term. There was huge relief when six of the scientists were acquitted two years later but, in the meantime, a more careful assessment of what had gone wrong raised serious questions about the roles and responsibilities of scientists in providing scientific advice.
On 11 March 2011, a massive tsunami, provoked by an exceptionally large earthquake, affected a vast area of Japan's Fukushima prefecture. Disruption of the cooling systems at the Fukushima Dai-ichi Nuclear Power Plant led to nuclear meltdowns and hydrogen explosions, which caused a release of radioactivity into the ocean and atmosphere. After the accident, the Japanese government experienced difficulties in taking wholly consistent action and in communicating effectively with an understandably anxious and questioning public. A post-crisis analysis found that government officers badly lacked timely access to rigorous scientific data and information. The confusion was exacerbated by the diverging stances taken by other countries as to the dangers for their citizens visiting Tokyo. Both the national scientific advisory mechanisms and the international coordination and exchange mechanisms proved seriously inadequate to cope in a disaster-response situation, where sound scientific information was clearly important for effective decision-making.
It was in the light of the L'Aquila and Fukushima events that, in October 2012, a number of countries asked the OECD Global Science Forum (GSF) to explore what could be done to improve national science advisory mechanisms. The starting point was to understand how science advice for policy-making actually worked in different countries and the legal frameworks within which different advisory systems functioned.
The reality is that science advice functions in different ways in different countries, and there is a large diversity of mechanisms even within countries. The notion of a national science advisory system is somewhat too simplistic to describe the ecosystem of actors, institutions and processes that play a role in translating scientific knowledge into policy processes in most countries. History, culture, legal regime and political context all play a role in shaping this ecosystem and influence its efficiency and effectiveness. For example, some countries have a single Chief Science Advisor, whereas in many other countries the idea of embedding so much responsibility in a single individual would be anathema. As with many OECD studies, one of the first conclusions of the GSF work was that there is no 'one size fits all' solution. However, that does not mean that there are no important lessons to be learned by comparing different national ecosystems and what works well and less well.
There are two important distinctions that became apparent at the outset of the GSF work and that need to be clear in any consideration of how to improve science advice. The first is between the informal or formal role of individual scientific experts on the one hand and the role of deliberative, collective advisory processes on the other. The second distinction is between science for policy and policy for science. In real-world situations these distinctions are not always easy to maintain, but conceptually they have important implications for the roles and responsibilities of individual scientists and expert bodies. In L'Aquila it was one individual in particular who appears to have been responsible for the unfortunate communication that contributed to people not evacuating, but it was all those serving on the Major Risk Committee who were held liable. Whilst the immediate need in the Fukushima crisis was science for policy, the subsequent enquiry made important recommendations on policy for science and the organisation of science institutions. The GSF work focused mainly on deliberative advisory processes, which, for ease of analysis, can be broken down into four key phases:
- Phase 1 is the framing of the question(s) that the advisory process is expected to inform - involving the key stakeholders and end users in this initial phase can be critical in ensuring the relevance of the advice at the end of the process.
- Phase 2 is the selection of the advisors, when involving the right experts from the relevant science disciplines and avoiding conflicts of interest are important.
- Phase 3 is the production of the advice, when its 'independence' needs to be ensured and uncertainties need to be fully considered (scientific evidence is very rarely 100% certain and is frequently quite uncertain).
- Phase 4 is the communication and use of the advice, where clarity, openness and transparency are crucial.
Even when attention is paid to all four phases of an advisory process, and the advice itself is taken up and influences policy, this does not guarantee that it will not be publicly disputed or even, in extreme cases, the subject of legal action. It is important that all actors - individuals and institutions - involved in an advisory process are clear at the outset what their responsibilities and legal liabilities are. In practice this is rarely the case, and if you don't know what you are potentially liable for it is very difficult to reduce your liability. Put simply, the potential legal liability is the product of the nature of the advisory structure (e.g. government-mandated or independent), its specific mandate (e.g. science policy versus risk assessment), its role in decision-making (e.g. advisory versus policy-prescriptive), its role in communication (if any) and the national judicial system under which it operates. Likewise, for individual scientists, their legal liability is similar to that of the advisory body on which they serve but also depends on their own behaviour, particularly with regard to public communication. Whilst the risks associated with civil or criminal liability cannot be eliminated altogether from science advisory processes, being fully aware of the specifics of the key variables that determine these risks can help safeguard against them.
Large-scale crisis situations, such as the Fukushima Nuclear Accident, the Icelandic Eyjafjallajökull volcanic eruption in 2010 or the Ebola epidemic in 2014, raise particular challenges for science advice. When a rare event occurs which has impact at a regional or global scale, routine emergency response systems and science advisory structures are suddenly confronted with new, complex and rapidly changing challenges. Collecting, coordinating and synthesising scientific input from many different fields and translating this into useful policy advice at very short notice can easily become an insurmountable obstacle. The GSF study involved interviews with key players in several recent major crises and revealed a number of important lessons to be learnt. In common with more routine advisory processes, but to an even greater degree, the issues of maintaining an authoritative voice and being clear where responsibilities lie in a complex and distributed scientific and public discourse are critical. There is also a clear need to improve the processes for international data exchange and coordination.
Drawing on the analysis of different advisory (eco-)systems, their exposure to legal risks and the particular requirements of crisis situations, it is possible to define three factors that are particularly important in determining the success or failure of a science advisory process. Hence, an effective and trustworthy scientific advisory process should: 1) have a clear remit, with defined roles and responsibilities; 2) involve the relevant actors; and 3) produce sound, unbiased and legitimate advice. These three meta-factors can be disaggregated into a more detailed check-list for science advice that brings together the various lessons learned in the GSF project.
A check-list for science advice
An effective and trustworthy science advisory process needs to:
Have a clear remit, with defined roles and responsibilities for its various actors. This includes having:
- a clear definition and, insofar as is possible, a clear demarcation of advisory versus decision-making functions and roles
- defined roles and responsibilities and the necessary expertise for communication
- an ex-ante definition of the legal role and potential liability for all individuals and institutions that are involved
- the necessary institutional, logistical and personnel support relative to its remit.
Involve the relevant actors – scientists, policy-makers and other stakeholders, as necessary. This includes:
- engaging all the necessary scientific expertise across disciplines to address the issue at hand
- giving explicit consideration to whether and how to engage non-scientific experts and/or civil society stakeholders in framing and/or generating the advice
- using a transparent process for participation and following strict procedures for declaring, verifying and dealing with conflicts of interest
- having, as necessary, effective procedures for timely exchange of information and co-ordination with different national and international counterparts.
Produce advice that is sound, unbiased and legitimate. Such advice should:
- be based on the best available scientific evidence
- explicitly assess and communicate scientific uncertainties
- be preserved from political (and other vested-interest group) interference
- be generated and used in a transparent and accountable manner.
This check-list, which is deliberately not prescriptive with regard to specific structures, can provide a common basis for countries to evaluate existing advisory mechanisms and processes and/or establish new ones. Whilst its main focus is deliberative processes, it can readily be extrapolated to consideration of the roles of individuals with specific advisory responsibility.
One area of controversy, or strongly held and discordant opinions, that became apparent in the GSF study is the role of non-scientific experts and civil society in science advice. At one end of the spectrum, there are those who argue that the independence and scientific 'purity' of any objective advice must not be tainted by involving other stakeholders. At the other end, there are those who make the case that science itself is value-laden, that scientists are not the only ones with expertise and that ignoring societal views makes any advice irrelevant. The former view certainly has some validity where technical scientific assessments feed into regulatory processes. But in some areas, such as clinical research, the involvement of patient groups in advisory processes has been demonstrated not only to make the final advice more 'sellable' but also to make that advice more rigorous by incorporating the experience and expertise of patients. For topics on which a strong values-based debate is underway in civil society, such as GM foods, stem cells or geo-engineering, scientists who try to present what they see as straightforward evidence, without acknowledging the societal context, are opening the door wide to criticism. As the complexity of an issue increases, the necessity of having input from different science disciplines - including the social sciences - and from different stakeholders - including lay people - also increases.
It is important to remind ourselves that scientific evidence, no matter how good, is only one consideration in most policy decisions. It is frequently the case that many interests have to be balanced in situations where the science itself may be uncertain. The rapid evolution of information and communication technologies and moves towards more participative democratic decision-making have put additional pressure on science to help provide answers and solutions, whilst also opening up the academic enterprise to closer scrutiny and criticism. What used to be 'private' debates between different scientific viewpoints over areas of uncertainty have now become public disputes that can be exploited by different stakeholders to confirm or deny entrenched positions. Science is truly at the centre of many important policy issues, and scientists are increasingly visible and, in many cases, increasingly vulnerable in policy-making processes. More careful consideration of how we formulate, communicate and use science advice is important for all countries as we rapidly advance into the knowledge-based society.
Finally, there is one issue that was not covered in depth in the GSF report but that will also be important in future considerations of science advice. The L'Aquila case revealed in quite dramatic fashion the disincentives for scientists to get involved in science advisory processes. Whilst there has been no systematic assessment, there is some evidence of precautionary or defensive behaviour in recent years, including resignations of highly qualified scientists from advisory posts, fragmentation of mandates to dilute responsibilities, suppression of dedicated advisory services to reduce liabilities and, more globally, an "advice chill". There is little or no reward or credit for an academic who engages with policy-making. In fact, depending on the time commitment, such engagement may have an adverse effect on an individual's academic career and be frowned upon by his/her peers. If we really value good science advice, then there is an increasingly strong case for recognising it as an important and measurable output of science, in the way that publications in scientific journals or patents are currently assessed and accredited at the individual and institutional level. As was said earlier, science for policy and policy for science are distinct but cannot be completely separated.
The full GSF report: Science Advice for Policy Making: The Role and Responsibility of Expert Bodies and Individual Scientists (April 2015) can be found at http://www.oecd.org/sti/sci-tech/science-advice.htm