Climate change poses grave risks. Death, injury, ill health, and disrupted livelihoods for hundreds of millions may result from storm surges, coastal flooding, sea level rise, and extreme weather events. In turn, fresh water shortages, depletion of marine and coastal ecosystem resources, loss of marine and terrestrial ecological services, and agricultural degradation caused by drought, flooding, rising temperatures, and precipitation variability are likely. These harms are to be expected if current trends continue (IPCC 2014). But scientists acknowledge that climate change may accelerate with little warning, significantly increasing its impact.

Rapid shifts in atmospheric carbon and temperatures may occur owing to a number of self-reinforcing feedback loops, including the loss of tropical rainforests caused by warming that inhibits a biome’s capacity to generate its own humid microclimate, the release of frozen gas hydrates from the sea floor and methane from thawing northern permafrost, and extra heat absorbed by the planet when all floating Arctic ice melts—a process occurring much more rapidly than scientists once predicted. In light of such prospects, Lovelock (2009, 7, 44) writes:

Do not expect the climate to follow the smooth path of slowly but sedately rising temperatures…. The real Earth changes by fits and starts with spells of constancy, even slight decline, between the jumps to greater heat. Climate change is not at all like the smooth civil engineering of a major highway that climbs uninterruptedly up a mountain pass, but more like the mountain itself, a concatenation of slopes, valleys, flat meadows, rock steps, and precipices.

The ragged topography of climate change stymies reliable prediction as it “involves uncertainties in a breathtaking number of dimensions” (Heal and Kriström 2002, 34; see also Field 2012). Indeed, we cannot estimate the uncertainties very well because “we don’t know how much we don’t know” (Keith 2013, 33–34).

Climate uncertainties and their accompanying risks currently exceed the governance processes and structures required to address them (Rosa et al. 2014). As these uncertainties and risks are expansive and enduring, the case for robust climate policy is compelling. Instead of responding to current circumstances or a narrowly framed future in an optimal manner, robust policy anticipates varied outcomes with the goal of faring “relatively well, compared to alternatives, across a wide range of plausible futures” (Lempert et al. 2006, 514). Rather than charting a single pathway, the goal is to foresee diverse prospects. Simplicity gets sacrificed, but adaptive capacity is gained (Rosenhead et al. 1972; Dessai et al. 2009; IRGC 2011).

The need for robust climate policy is widely acknowledged (Woodruff 2016; Shortridge and Zaitchik 2018; Marchau et al. 2019). But the means for its achievement remain underdeveloped. And while a wide variety of approaches and techniques have been suggested, two trends persist that limit progress.

First, the impact of emerging technologies on climate change mitigation and adaptation is widely neglected (Dorr 2016). Yet emerging technologies will likely have a large and growing impact—both positive and negative. Second, while polycentric governance has been extensively studied and vindicated in other realms (Ostrom 2010), it tends to be ignored for climate policy. Scholars of policy processes have assessed the impacts of multiple, diverse actors (Kern and Rogge 2018), but the most common assumption is that decision-makers constitute a small, well-defined, stable, and relatively unified group. Procedures for climate policy are then developed for these key decision-makers (Roelich and Giesekam 2019). Consequently, the politics of policymaking is neglected, as are the need for and benefits of broader participation in policy development.

As such, recommendations for robust climate policy development tend to understate the challenge and misconstrue the opportunities. The variability of climate is only one of three interactive arenas of uncertainty that must be confronted. This paper addresses the components of robust climate policy given the convergence of uncertainties in the arenas of climate, politics, and technology.

Climate, politics, and technology do not exhaust the realms of uncertainty to which robust policy might attend. Complex interactions of a host of variables within natural, technological, and social systems are germane (Fuss et al. 2014). Economic disruptions and transformative cultural change, for example, are important indeterminacies, as are diverse Earth system components beyond standard climate variables. Earthquakes no less than economic recessions might significantly alter the trajectories of climate policies. My claim is not that climatic, political, and technological uncertainties are exhaustive of pertinent uncertainties but that they should be of primary concern for robust climate policy development. The means for grappling with their impacts, outlined below, can also be applied to other socioeconomic and Earth system variables.

1 Interactive realms of uncertainty

Like climate change, political life does not follow a linear path. Throughout history, periods of constancy have been punctuated by revolts, revolution, other forms of upheaval, and quickly changing state and non-state actors. Historically, no less than presently, cycles of instability in political regimes are evidenced in interaction with demographic change and economic development (Turchin and Nefedov 2009). In the wake of recent unexpected events—such as Brexit, the rise of populism, and the election of Donald Trump to the US presidency—the predictive capacity of political science has been thrown into question, notwithstanding ever more sophisticated tools for data gathering and analysis.

The vicissitudes of political life—wherein radical change may occur from one election cycle to the next, if not more frequently—appear woefully out of sync with the long-term strategies required to address climate change, wherein atmospheric and geophysical effects may endure for millennia. President Trump’s withdrawal of the USA from the Paris Accord is a case in point. Some hope that the dire consequences of climate change will serve to stabilize politics by putting everyone in the same (sinking) boat. Klaus Töpfer, as the executive director of the United Nations Environment Programme (UNEP), stated that climate change produces no winners, only losers, and with widespread recognition of this truth, global collective action will follow. As Benjamin Franklin wryly noted to his fellow American separatists—if we do not hang together, we shall surely hang separately.

While Franklin was probably right about the consequences of a failed bid for American independence, Töpfer and others are likely wrong about climate change. Long before global warming was perceived to be a serious threat, scientists, state officials, and entrepreneurs speculated on its regional and national benefits. They waxed enthusiastic about the opening of frozen shipping lanes and increases in agricultural yields in northern climes. And they were not wholly wrong. Gains and losses from climate change will be unevenly distributed, globally, regionally, nationally, and intra-nationally.

Some countries, notably Russia and Canada, will probably be able to increase their agricultural yields and can expect significant economic and military benefits from control of shipping lanes through an ice-free Arctic. Such benefits may be more than offset by national forest losses due to pest outbreaks and the economic and political disruption arising from global resource shortfalls and refugee crises. Still, climate change will produce relative winners and losers, as will large-scale mitigation and adaptation efforts. Even if all nations are absolute losers, some will be made worse off than others. Desperate peoples and nations are prone to upheaval. Crises will precipitate conflict. Indeed, most future deaths attributable to global warming may well be the product of political rather than “natural” causes, as warring and failing states grapple with climate disruptions and the accompanying resource depletions (Brand 2009, 282). While we might always hope for the best, there is no assurance that politics will be tamed by global climate change. Volatility, even turmoil, appears as likely.

Even in relatively stable regimes, climate policy is a product of the interactions of multiple actors and agencies bearing different, and often discordant, interests, perspectives, and motivations (Roelich and Giesekam 2019). Policy development reflects the impact of diverse decision-makers, shifting coalitions, bargaining, rivalries, and trade-offs. The politics of policymaking—at international, national, regional, and local levels—heightens uncertainty. Robust climate policy must account for the diversity and flux of actors and interactions and the heightened uncertainties that politics produces.

Technological innovation, like climate and politics, demonstrates spells of constancy punctuated by disruptive change. Emerging technologies, such as artificial intelligence (AI), synthetic biology, nanotechnology, and robotics, will significantly impact climate adaptation and mitigation. Yet, the rate of development of these technologies, their direct effects, and their side effects remain largely unknown. Emerging technologies exhibit non-linear growth. In turn, they undergo convergence as innovation and deployment are stimulated across contiguous fields. Information technology has been a key enabler of the development of other technologies over the last few decades. Nanotechnology promises to be a key enabler in the future. Genomics, synthetic biology, and AI are also likely to demonstrate strong interaction effects. Convergence dramatically accelerates technological development and its uncertainties (Al-Rodhan 2011, 130; Spohrer 2003).

Climate scenarios are typically “underpinned by the implicit and unrealistic assumption of ceteris paribus (all else being equal) with respect to technology” (Dorr 2016, 638). To be sure, scholars are beginning to address the linkage between climate strategies and technological change, typically with a focus on developments in the fields of clean energy (Molina-Perez 2016). For the most part, however, they neglect the impact of a broader range of emerging and converging technologies. Yet, developments in these fast-moving fields will significantly impact climate mitigation and adaptation efforts, political life, and policy development.

Consider a few examples. Nanotechnology has long been implicated in a variety of climate projects, including molecular manufacturing to develop carbon dioxide absorbing crops and forests and energy-producing solar cell paint for roofs and roadways (Drexler et al. 1991). Research in membrane nanotechnology for filters and energy production also appears promising (Olson and Rejeski 2005, 41–52). In turn, artificial photosynthesis that transforms light into energy and atmospheric carbon dioxide into organic material (that can be sequestered) is in development. Artificial photosynthesis operations might be installed in power plants that burn fossil fuels to increase their output while negating carbon emissions and designed into rooftop shingles for homeowners to produce energy while cleaning the air (Logan et al. 2017).

Or consider synthetic biology: bioengineering that employs natural or synthesized biomolecular components to create novel genetic and biochemical structures, processes, and organisms. Synthetic biology could transform deserts into carbon-absorbing greenspaces by engineering cyanobacteria to incorporate a synthetic polymer with enhanced water-retaining capacities. Other climate-impacting developments include synthetic bio-based chemicals, algae, and biomass organisms that could replace fossil fuels, milk, egg, meat, and leather products; the “assisted evolution” of species threatened by climate change by means of genetic and genomic interventions; and the reintroduction of extinct animals, such as woolly mammoths, to combat the thawing of boreal permafrost (National Academies of Sciences, Engineering and Medicine 2017; Church 2018).

Climate mitigation may also be significantly impacted by “intelligent machine labor” that makes carbon capture and storage both economically and practically feasible (Dorr 2016). In turn, geoengineering technologies have the potential for significant impacts. The most prominent—and controversial—form of geoengineering is solar radiation management (SRM), wherein aerosols sprayed into the stratosphere would reflect sunlight back into space. While SRM would not capture or store carbon, it might effectively slow down, stop, or even reverse the heating effect of greenhouse gases in the atmosphere.

Advances may also come from wholly new sources of clean energy, including those that can scale rapidly. Notwithstanding many false starts in the arena of fusion energy, the eventual development of a cost-effective “star in a jar” appears feasible. An international effort to fire up the ITER tokamak fusion reactor in Southern France is set for 2025. The China National Nuclear Corporation’s HL-2M tokamak, dubbed the “artificial sun,” is projected to be operational even sooner.

By the time any of the above emerging technologies get deployed at scale, artificial intelligence will be deeply involved in climate science, politics, and governance. Complex pattern recognition and strategic thinking, long held to be the exclusive provinces of a single species on the planet, are now quickly falling into the domain of artificial intelligence. In the coming years, AI will increasingly be employed by all branches of government, the military, and business corporations to improve or supersede cumbersome, slower, and less rigorous forms of human judgment, analysis, prediction, decision-making, and policy development. Such deployments of AI might radically impact national and global climate change adaptation and mitigation efforts. They will also produce unintended consequences for which state legislators, executives, and administrators are unprepared (Kissinger 2018).

With the risks and costs of various climate-impacting technologies in mind, some scientists argue that human engineering is a viable means to stabilize the climate. Synthetic biology might be employed to reduce the height and body mass of large portions of the human population. Corresponding lower metabolic rates would translate into fewer resources consumed and less greenhouse gas emitted (Liao et al. 2012). This far-fetched example underscores the fact that emerging technologies are not silver bullets that will dispatch climate change in a quick, clean, cheap, or risk-free manner. Many proposals are ethically dubious, prohibitively dangerous, or quite absurd. In turn, those technologies that get deployed will produce unintended consequences, as even bullets with silvery hues ricochet.

2 The epistemology of uncertainty

Climate, politics, and technology are characterized by three kinds of uncertainty: known knowns, known unknowns, and unknown unknowns. These epistemological categories—employed in scenario planning scholarship (e.g., Schoemaker 1995) well before Donald Rumsfeld’s infamous reference to them at a press briefing in 2002—do not indicate forms of ignorance. Ignorance might be thought of as an unknown known that arises when some parties remain unaware of available knowledge, perhaps owing to the silo effect that ensconces policymakers in restricted knowledge domains. The remedy for ignorance is better communication of knowledge. Our primary concern here is not ignorance but uncertainty that persists notwithstanding the best communication of all available knowledge.

Known knowns are uncertainties whose odds can be calculated. In such cases, both the impact and its probability are well understood. Actuarial tables and other analytical methods allow us to predict the probability that people of different ages, genders, races, body masses, and diets will suffer heart attacks from clogged arteries. Absent a crystal ball, no one can tell us when any particular person will face the music, hence the uncertainty of a known known. Still, there are sufficient empirical data and methods of analysis to allow a brisk business for insurance companies that can make financially sound, statistically grounded wagers regarding people’s health and mortality.

In contrast, known unknowns present us with known impacts but unknown probabilities. We can identify the kind of risk but do not know how likely it is. We do not have the empirical data or analytical power to assign probabilities to outcomes. The collapse of a major ice sheet off Antarctica is a known unknown. We understand quite well how high oceans would rise were an ice sheet of a given size to calve from the southern continent and melt, and we know a good deal about the consequences of such an event. The impact can be estimated. But we do not know the probability of this catastrophe. However, more data may become available over time (e.g., as smaller ice sheets collapse). And because we know what questions to ask, we can engage in more research, modeling, simulations, and experimentation with the aim of transforming this known unknown into a known known. At that point, the uncertainty can be subjected to risk-cost-benefit analysis, as is regularly done by insurance and finance companies grappling with the known knowns of actuarial science.

Unknown unknowns present quite a different situation. Both impacts and probabilities remain obscure. We are confronted with risks that we cannot foresee or predict. Technically speaking, risk is the wrong word here. Risk comes into play when there are threats whose level of uncertainty can be assessed. Deep uncertainty exists when threats are unknown or have no reliable mechanisms of assessment. There are no available models that adequately capture the number, types, and interactions of variables involved and the probabilities of specific outcomes. These uncertainties cannot be tamed by sophisticated modeling or simulation or by gathering more data because we have no idea what we are looking for. We do not know what questions to ask.

Consider an example. At the height of the Cold War, in response to the launch of Sputnik in 1957 and the perceived threat of a Soviet attack disabling all US telephone and communications networks, the US Department of Defense Advanced Research Projects Agency (ARPA) teamed up with MIT professor J.C.R. Licklider to propose a secure network of computers that would be able to maintain communications after a nuclear attack. Enhancing communications technology was deemed a crucial means of securing democracy from the threat of totalitarianism. The impact that networked ICT would have on democratic institutions and processes over the coming decades was not investigated. The question never arose.

With the invention of “packet switching” a few years later, information could be broken down into discrete bundles that take separate routes to their common destination, ensuring the security of the communication system notwithstanding the breakdown of any single node. In 1969, the first computer-to-computer message was sent on ARPANet. Within a decade, methods for computers to communicate more easily with each other were developed, enabled by various internet protocols (e.g., TCP/IP). In 1991, Tim Berners-Lee of the European Organization for Nuclear Research (CERN) developed a way for scientists not simply to send messages via computer but to easily access data employing a Hypertext Transfer Protocol (HTTP). The World Wide Web was born.

The next two decades saw the accelerated development of the Internet and the Web and their broad accessibility through home and office computers, laptops, tablets, smartphones, and burgeoning social media platforms such as Facebook, YouTube, Twitter, and Instagram. The impact on politics and democracy has been pronounced, as exemplified by the role of social media in the 2011 Arab Spring, Brexit, and the 2016 US presidential election. ARPANet was designed to enhance communication among a small group of professionals. The broader effects of networked ICT on social and political life could not be foreseen: there were too many unknown unknowns. No one knew, or could have known, what questions to ask.

Unknown unknowns will produce unforeseen but direct side effects. They will also generate non-linear events (IRGC 2015, 12). For illustrative purposes, consider the potential unintended consequences of an effort to mitigate climate change employing synthetic biology. A current project at the Max Planck Institute aims to engineer synthetic bacterial enzymes that transform carbon dioxide into organic matter at twenty times the rate of standard plant photosynthesis, allowing algae or bacteria to be deployed as carbon dioxide removal devices (Max-Planck-Gesellschaft 2016; and see Solé 2016). Unforeseen (negative) impacts might follow one of five trajectories:

  1) Direct impacts: the modified organism exploits its advanced carbon dioxide uptake capacities in a carbon-rich world, reproducing quickly and prolifically and reducing carbon dioxide in the atmosphere to such a degree that the planet is pitched into an ice age.

  2) Immediate side effects: the modified organism releases a toxin, decimating other species sharing its environment.

  3) Chain reactions: the modified organism, though non-toxic, consumes other forms of natural bacteria, disrupting the food source of existing species, whose declining numbers negatively impact their own predators, and so on down the food chain. Eventually, entire ecosystems collapse. Such “cascade effects” produce impacts distant in time and/or space from the initial intervention owing to an unforeseen series of interactions.

  4) Negative synergies: the proliferation of the modified algae in the seas, though insufficient in itself to harm other aquatic species, combines with thermal stress from warming waters to decimate coral reefs and other oceanic ecosystems. Here, a relatively minor side effect magnifies the impact of otherwise unrelated variables, producing a major, unforeseen harm.

  5) Positive feedback loops: the modified organism adapts to thrive on snow, ice, and ocean surfaces while increasing its rate of growth in warmer climates. The planet’s glaciers and poles become covered by a green film that absorbs solar radiation, lowering albedo and causing the ice and snow to melt at accelerating rates. In turn, quickly rising, algae-covered oceans with lowered albedo absorb increased levels of solar radiation. The modified organism’s slow sequestration of carbon dioxide is not able to compensate for the heat-enhancing impacts of its rapid spread across the Earth’s surface, with the net effect being a sharply increasing rate of planetary warming. Here, a direct impact, side effect, chain reaction, or negative synergy comes to feed on itself, creating a vicious cycle.

Whenever a complex system is tweaked, unforeseen impacts are inevitable. Whether modifying a socioeconomic system by means of public policy or an ecological system by means of a technological intervention, we face unknown unknowns.

3 The components of robust policy

Robust climate policy, at a minimum, should confront three interactive realms of uncertainty—climatic, technological, and political. Within each of these realms, three epistemological categories of uncertainty arise—known knowns, known unknowns, and unknown unknowns. Visualize a three-by-three graph: the vertical axis is divided into political known knowns, political known unknowns, and political unknown unknowns, while the horizontal axis is defined by their technological counterparts. Now, transform this two-dimensional graph into a “cube of uncertainty” by adding the third dimension of climate. Observing one of the cube’s 27 cells, we are presented with a political known unknown interacting with a climatic known known and a technological unknown unknown. The 26 remaining cells represent other combinations. Importantly, 19 of the cube’s 27 cells incorporate one or more unknown unknowns. That is to say, in 70% of the cells, we cannot engage in standard risk assessment because we have entered the realm of deep uncertainty.
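The cell count can be checked with a brief enumeration. The sketch below (in Python, with labels drawn from the paper) generates all 27 realm-by-category combinations and counts those containing at least one unknown unknown: all cells except the 2³ = 8 composed solely of known knowns and known unknowns.

```python
from itertools import product

# Three realms of uncertainty, each assigned one of three epistemic categories.
realms = ["climate", "politics", "technology"]
categories = ["known known", "known unknown", "unknown unknown"]

# Every cell of the cube is one assignment of a category to each realm.
cells = list(product(categories, repeat=len(realms)))
assert len(cells) == 27  # 3^3 combinations

# Cells of deep uncertainty: those with at least one unknown unknown.
deep = [cell for cell in cells if "unknown unknown" in cell]
print(len(deep))                             # 19 (i.e., 27 - 2^3)
print(round(len(deep) / len(cells) * 100))   # 70 (percent, rounded)
```

The complement rule does the work: removing the 8 cells built only from the two shallower categories leaves 19 of 27, roughly 70%.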

To adequately grapple with deep uncertainty, robust climate policy needs to exhibit three characteristics: (1) diverse, distributed, and transparent participation; (2) safe-to-fail experimentation; and (3) exploratory foresight.

3.1 Diverse, distributed, and transparent participation

Climate change is heedless of national borders. But, it will produce relative winners and losers. And this year’s winners may be next year’s losers. Those (individuals, groups, or nations) not participating in the crafting of climate policy may feel little if any obligation to abide by its prescriptions when the going gets rough. For climate policy to stand any chance of weathering the storm of varying impacts over time and space, an inclusive coalition of domestic and international partners is required. In short, policymakers addressing complex problems with long-term implications must prioritize the formation and sustaining of (winning) coalitions of actors (Levin et al. 2012).

Diverse, distributed participation will be easier to orchestrate and likely have better results for climate policies with a relatively tight temporal and geographic connection between (those who reap the) benefits and (those who bear the) risks and costs. That is to say, it will be easier and more effective for localized adaptation policies and more difficult and less effective for mitigation policies. Here, as in most realms of life, having skin in the game—quickly and directly suffering the consequences of one’s neglect or mistakes—contributes greatly to the development of effective decisions and policies (Taleb 2018; Berry 1981, 143).

Inclusive partnership in climate policy development creates a sense of risk ownership, and as such provides a crucial means of broadening the field of stakeholders who view themselves as having skin in the game. Without inclusive partnership in the governance of risk, it is unlikely that climate policy will be able to maintain broad support when inequitable harms are experienced. Broad participation in risk governance entails diverse, distributed participation in (1) risk assessment: the statistical, natural, and social scientific analysis of bio-physical dangers and socioeconomic impacts, along with their probabilities and distribution; (2) risk evaluation: the appraisal of the relative tolerability of the aforementioned hazards to diverse populations; and (3) risk communication: the comparative and probabilistic framing of the issue and an explanation of stakeholder participation in the analytic and deliberative process of risk management (OECD 2014, 115; Macnaghten and Chivers 2012; Kearnes 2012; Renn 2008).

Diverse, distributed participation in risk governance and policy development is not solely a mechanism for securing broad support in uncertain times. Neither is it simply a genuflection before the altar of democracy. It can improve the quality of decisions. Far too often, evidence-based policymaking loses out to policy-based evidence making. That is to say, decision-makers search for and deploy only those facts, or fictions, that align with their professional, economic, or ideological interests. Confirmation bias is heavily at play. Policy development gains strength from the tonic of alternative perspectives. Diverse “upstream” engagement by citizens, non-governmental organizations, scientists, industry spokespersons, and government officials within and across nations can improve the quality of policy (Barben et al. 2008; Bellamy 2016).

Studies indicate that (large) groups whose members provide diverse viewpoints, ideas, heuristic approaches, and skill sets outperform more uniform groups in problem-solving, prediction, and creativity. Diversity typically trumps ability in problem-solving situations characterized by complexity and uncertainty, allowing heterogeneous groups to win out over high-ability groups (Bohman 1996; Hong and Page 2001, 2012; Hong et al. 2004; Lamberson and Scott 2012; Ober 2013; Page 2011; Woolley et al. 2010). The heterogeneity of agents and networks facilitates adaptive innovation by providing a greater range of interactions and a deeper reservoir of ideas and approaches while preventing a premature consensus that fails to capture the full range of available options.

Officials and experts who stand to benefit from large-scale projects consistently underestimate the economic costs and time needed for their completion (Flyvbjerg 2014; Flyvbjerg et al. 2003). Once the final price and delayed completion dates of such projects become evident, it is too late to pull the plug. The same dynamics may lead to the overestimation of benefits and underestimation of risks of emerging technologies, including geoengineering or others associated with climate change (Stilgoe 2015; Thiele 2020). By the time the full costs and risks are made evident, societies may experience technological “lock-in.” The process will have gone so far and become so embedded in socio-political and economic processes and structures that post-facto regulation would come at too high a price. This is a variant of the “technology control dilemma” identified by Collingridge (1980).

There is no sure-fire remedy for technological lock-in. But diverse, distributed, and transparent participation helps ensure that costs and risks will be adequately addressed and a broad variety of future options explored; that decisions will be based on evidence more than eminence, analysis more than aspiration, and the welfare of potential victims more than the profits of investors. A narrowly defined group of decision-makers is simply too susceptible to restricted purviews, premature consensus, corporate capture, and other forms of cooptation.

Diverse stakeholder involvement in policy pathways can benefit from models for participatory interaction that facilitate group capacity building, problem-solving, and transition management (van Bruggen et al. 2019; Malekpour et al. 2020). In turn, computer-based decision support tools allow multiple stakeholders to better understand the costs, benefits, and uncertainties of decision alternatives by providing computationally intensive analysis, modeling, and mapping (Wong-Parodi et al. 2020). Such computer-based modeling that accounts for diverse actors and constituencies developing and responding to climate policies can also help decision-makers select policy architectures that shape the conditions of future support and hence increase the probability that long-term goals will be achieved (Isley et al. 2015, 148). This allows policymakers to better navigate an uncertain future by developing and stress testing various transitional pathways associated with the interactions of multiple systems, including natural, technological, and socio-political.

Elinor Ostrom, a Nobel Laureate in economics, assessed the limits of centralized forms of policy creation in a turbulent, complex, quickly changing world. Reflecting on her life’s work, much of it oriented to the governance of common pool resources, Ostrom advocates the development of “diverse polycentric institutions” that stimulate the “innovativeness, learning, adapting, trustworthiness, levels of cooperation of participants, and the achievement of more effective, equitable, and sustainable outcomes at multiple scales” (Ostrom 2010, 24–25). Diverse, distributed, and transparent participation is required for climate policies robust enough to generate sustainable outcomes.

3.2 Safe-to-fail experimentation

Since the agricultural revolution, our species has made unsteady progress in the realms of technology, politics, culture, and economics by regularly opening up Pandora’s box and reacting to the outcomes. Growing global interdependencies and the much-heightened power of emerging technologies make such experimentalism an increasingly dangerous endeavor. Arguably, Pandora should no longer serve as the patron saint of human creativity.

Reviewing the proposal for atmospheric geoengineering, Rajendra Pachauri (2006, 5A), former chairman of the Intergovernmental Panel on Climate Change (IPCC), warns that “If human beings take it upon themselves to carry out something as massive and drastic as this, we need to be absolutely sure there are no side effects.” In a similar vein, the activist group Hands Off Mother Earth (HOME) harshly criticizes geoengineering research, employing the slogan “Our home is not a laboratory.” Pachauri’s plea for absolute certainty and HOME’s forceful declaration are understandable and appear prudent. Of course, the same demands might be made of all emerging technologies, which are commonly called disruptive technologies precisely because they exert large-scale, dramatic, and unpredictable impacts across multiple sectors. Clearly, we do not have the capacity—intellectual, economic, social, or political—to make the requirement of “no side effects” stick. We simply cannot foresee all the unintended consequences of technological developments, and it is naïve to believe that we could design “failsafe” systems to prevent them.

The point is that our planet is a laboratory. For billions of years, it has been a laboratory for evolution and natural selection. For thousands of years, it has been a laboratory for artificial selection and other human technologies. A global population of less than 20 million people some five millennia ago, at the start of the Bronze Age, grew tenfold by the end of the Iron Age. With rising human populations came widespread deforestation, the development of agriculture, and animal husbandry. Each of these activities had a significant impact on biodiversity and produced large amounts of greenhouse gases. Prehistoric humans were significant biome and climate changers (Ruddiman 2005). Planetary modification is an experiment that began in antiquity.

Though individual contributions are by no means equal in impact or scope, each of the world’s nearly eight billion people is participating in planet modification today. With so many hands at work, there is little point in demanding that they all be taken off Mother Earth. Still, we can and should make planetary experiments safer and more responsive to stakeholders, both human and non-human. Experimentation may become a self-conscious, collective responsibility sanctioned and steered by a broad public, rather than the prerogative of scientific, technological, political, or economic elites (Stilgoe 2015, 201; and see Stilgoe 2016). Robust policy is best grounded in stakeholder-sanctioned, safe-to-fail experimentation.

Safe-to-fail experimentation is learning by doing that is both transparent (regarding potential impacts) and well prepared (for unintended consequences). Good experimenters anticipate the unexpected. That is why scientists working in laboratories wear protective clothing, post the phone numbers of emergency services, and have fire extinguishers and other safety devices at hand. Safe experimentation controls its venue as much as possible so that any damage done from an unforeseen event does not wreak havoc on the surrounding environment. Safe-to-fail experimentation entails sound preparation and security precautions to ensure that much-needed learning does not inadvertently produce catastrophic results.

Safe-to-fail experimentation also requires responsible data management and sustained feedback. Consider the European Union’s Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) directive, with its “no data, no market” policy. An absence of data produced through scientific experimentation in laboratories and field trials disallows the large-scale experiment of introducing untested chemicals into the open environment. An experimental orientation to policy is not an excuse for leaping before you look or making waste through haste. The point is to make experimentation safer. And the only way to do so is to conduct more safe-to-fail experiments.

Experimentalism produces progress—technologies, political institutions, and public policies that yield longer, safer, healthier, more prosperous lives for more people. But we have come to a juncture in humanity’s development when it is no longer possible—or at least no longer prudent—to haphazardly experiment our way into greater prosperity. So, we have to analyze, simulate, and model whenever possible and conduct field experiments when necessary, which is to say, whenever analysis, simulations, and models cannot produce the learning we need. That will be quite often.

What is said here of technological experimentation applies equally to political experimentation. For thousands of years, the human species has experimented with different systems of governance. Indeed, democracy itself is a living experiment—and as Dewey (1993) observed, is at its best when understood as such. Public policies should also be understood as experiments in the making. Robust climate policies are safe-to-fail experiments in the arena of adaptive governance.

Adaptive governance, also known as adaptive management, relies on the iterated interactions of multiple stakeholders to generate, implement, and refine strategies for sustaining human-ecological relationships (Holling 1978; Lee 1994; Dietz et al. 2003; Folke et al. 2005; Armitage et al. 2008; Chaffin et al. 2014; Tarrant and Thiele 2016). The goal is to wed policy development to learning and to wed learning to cautious doing.

Surprising failures have always been the surest route to knowledge for our species. As Oakeshott (2001, 57) observes, learning arises out of error, not ignorance. Robust policy embraces the sort of errors and failures that increase knowledge with minimal risk. Learning by doing in the face of uncertainty is not an excuse for recklessness; quite the opposite. However, owing to the inevitability of unforeseen effects—including those produced by direct impacts, immediate side effects, chain reactions, negative synergies, and positive feedback loops—we have no way of making accurate predictions about everything that might go wrong.

Gray (2004, 21–22) observes that “The world today is a vast, unsupervised laboratory, in which a multitude of experiments are simultaneously under way…. We can’t know the risks of these experiments because we don’t understand their interactions with one another.” The first step in making experiments safe-to-fail is to ensure that they do not remain unsupervised. Diverse, distributed, and transparent participation achieves this crucial goal. But the point of supervision is to ensure that experimentation occurs in a sufficiently informed and cautious manner. That entails having some idea of what opportunities can be exploited, what might go wrong, and how to prepare for an uncertain future.

“When the laboratory is the Earth,” writes Schneider (1997, xii), it is “no longer acceptable simply to learn by doing.” Today, we need “to anticipate the outcome of our global-scale experiments before we perform them.” There is no alternative to learning by doing for our species. But we can and should make our “doings” safer-to-fail. That entails informing experimentation—and policies understood as experiments-in-the-making—with foresight.

3.3 Exploratory foresight

Exploratory foresight improves present-day decisions by anticipating potential futures. The goal is both to envision beneficial options and to become less vulnerable to the “tyranny of urgency.” Failure to foresee threats leaves one prey to strident calls for quick fixes in the face of crisis—and the consequences of hasty reactions. The systematic integration of foresight practices into policymaking has been advocated by a wide variety of scholars, practitioners, and agencies, including the National Research Council Committee on Geoengineering Climate (2015, 186), the National Academy of Public Administration (2016), and committees of the National Academy of Sciences (2017) mandated with addressing the impacts, risk pathways, and opportunities related to emerging technologies.

Foresight is not prediction. Prediction presents a singular path to the future. Foresight surveys the temporal topography and scouts out obstacles, dangers, and opportunities. While it is impossible to predict the future in conditions of deep uncertainty, it is possible to foresee many futures. The futility of prediction makes for the necessity of foresight.

The difference between foresight and prediction might be likened to the distinction that military strategists make between planning and plans. Reflecting on his own military career, President Eisenhower (1957, 818) observed that “Plans are worthless, but planning is everything. There is a very great distinction because when you are planning for an emergency you must start with this one thing: the very definition of ‘emergency’ is that it is unexpected, therefore it is not going to happen the way you are planning.” Exploratory foresight is rigorous planning that acknowledges the uselessness, and danger, of rigid plans.

Exploratory foresight can be enhanced by a wide variety of practices, such as alternative reality gaming, backcasting, cross-impact analysis (cross impact systems and matrices), computer modeling and simulations, Delphi methods, dynamic adaptive policy pathways, horizon scanning, scenario analysis (scenario building), prospective hindsight, and trend analysis (Mannermaa 1986; Schoemaker 1995; Swart et al. 2004; Börjeson et al. 2006; Klein 2007; Vergragt and Quist 2011; Marchau et al. 2019). As components of robust policy development, such foresight practices facilitate the stress testing of a wide range of plausible futures (Lempert 2019, 16, 25).

Foresight exercises illuminate viable action paths. They also reveal the pernicious effects, and unintended consequences, of inaction. By way of example, policy participants might imagine a negative outcome and backcast decision trees to better understand how this unfortunate state-of-affairs arose and what options, available at various time junctures, might have produced more favorable results (Morgan and Ricke 2010). Positive future outcomes can also be backcast to determine alternate paths to preferred states of affairs (Mander et al. 2007). Alternatively, one might develop scenarios to populate the 27 cells of the “cube of uncertainty.” Computational simulation models and probabilistic analysis can then be deployed to winnow down the number of plausible futures (Shortridge and Zaitchic 2018; Groves and Lempert 2007; Lempert and Schlesinger 2000). Alternatively, scenarios of “stop-gap” measures may be explored, integrating assessments of the risks, costs, and benefits of interim, emergency interventions that buy time for the development of long-term solutions (Buck et al. 2020).
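The cube-of-uncertainty exercise mentioned above can be illustrated with a minimal sketch: three scenario dimensions, each at three levels, yield 27 cells, which are then winnowed by a plausibility score. The dimensions, weights, and threshold below are hypothetical placeholders for what would, in a real exercise, come from simulation models and probabilistic analysis.

```python
# Minimal sketch of populating the 27 cells of a "cube of uncertainty"
# and winnowing them by plausibility. Axes and weights are hypothetical.
from itertools import product

levels = ["low", "medium", "high"]
dimensions = ["emissions", "climate_sensitivity", "policy_cooperation"]  # illustrative axes

# Every combination of the three levels across three dimensions: 27 cells.
cells = [dict(zip(dimensions, combo)) for combo in product(levels, repeat=3)]

# Hypothetical plausibility weight per level, applied independently per dimension.
weight = {"low": 0.2, "medium": 0.5, "high": 0.3}

def plausibility(cell):
    score = 1.0
    for level in cell.values():
        score *= weight[level]
    return score

# Winnow: keep only scenarios above a plausibility threshold.
plausible = [c for c in cells if plausibility(c) > 0.025]
print(len(cells), len(plausible))
```

The point of the sketch is structural rather than numerical: exhaustive enumeration guarantees that no corner of the uncertainty space is overlooked before analysis narrows attention to the scenarios worth detailed study.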

There are important, but limited, roles for experts to play in foresight practices. Delphi methods, which entail a series of iterative assessments by participants who amend their prognostications based on a review of the contributions of colleagues, have historically been restricted to experts. That is ill-advised, at least in instances where experts represent a relatively homogenous group.
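The mechanics of a Delphi iteration can be caricatured in a few lines: participants submit estimates, review a group statistic, and revise. The estimates and the simple "pull toward the median" revision rule below are hypothetical conveniences; real Delphi exercises rely on reasoned written argument between rounds, not mechanical averaging.

```python
# Minimal sketch of Delphi-style iteration: estimates converge over rounds
# as participants revise toward the group median. Values are hypothetical.
from statistics import median

def delphi_round(estimates, pull=0.5):
    """Each participant moves a fraction `pull` of the way toward the median."""
    m = median(estimates)
    return [e + pull * (m - e) for e in estimates]

# Hypothetical initial forecasts from five participants.
estimates = [2.0, 3.5, 4.0, 6.0, 10.0]
initial_spread = max(estimates) - min(estimates)

for _ in range(3):  # three iterative assessment rounds
    estimates = delphi_round(estimates)

spread = max(estimates) - min(estimates)
print(initial_spread, spread)
```

Even this caricature exhibits the method's signature behavior: the spread of opinion narrows round by round while the central estimate is preserved, which is also why a homogenous panel converges quickly on its shared preconceptions.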

Tetlock (2005) has demonstrated that experts are not particularly good at making predictions or at assessing their own proficiency as forecasters. The best forecasters are methodical in their analyses and do not become prisoners of their own preconceptions. They have a wide base of knowledge drawn from an eclectic array of disciplines, accept ambiguity and contradiction as inevitable features of life, gather evidence from a variety of sources, work well in diverse teams, assess probabilities rigorously, and iteratively revise their forecasts based on the systematic pursuit of new information. Tetlock’s research vindicates the benefits of diverse, distributed, and transparent participation and adaptive learning by doing. It buttresses scholarship on anticipatory governance that underlines the benefits of the upstream engagement of a broad array of stakeholders who challenge unexamined assumptions, the framing of debates, and the social, ethical, and political implications of proposed endeavors (Bellamy 2016; Ramos 2014; Bellamy et al. 2013; Quay 2010; Guston 2008; Barben et al. 2008).

It is likely that forecasting will increasingly be abetted by AI. Competitive chess today is often played by so-called centaur teams of humans working in tandem with AI programs. Centaur teams regularly defeat both the best AI chess programs and the best human players in tournaments. Likewise, the best weather prediction today is achieved by human forecasters working with computer models (Silver 2012, 125). Other arenas of forecasting are likely to follow this model of synthesizing human judgment with AI (Tetlock and Gardner 2015, 23). The natural science of climate change already relies heavily on AI for data analysis. Centaur forecasting might contribute significantly to the robustness of climate policy development.

Notwithstanding any advances in AI and exploratory forecasting, deep uncertainty will remain an enduring challenge. We simply cannot know whence all the surprises will come. In turn, foresight practices have little benefit if responsible parties pay no heed to findings.

Global pandemics, for example, have been frequently forecasted over the last two decades. Indeed, a foresight exercise in the first half of 2019 by the US Department of Health and Human Services anticipated the outbreak of a contagious respiratory virus originating in China and growing to pandemic proportions, eventually killing hundreds of thousands of Americans. This foresight exercise, conducted a year prior to the actual arrival of COVID-19 in the USA, anticipated disorganized and uncoordinated responses from state and federal agencies and critical shortages of medical supplies, including masks and ventilators (Sanger et al. 2020). And so it came to pass.

Exploratory foresight unaccompanied by responsive action is a recipe for regret. One might hope that the catastrophic impact of the COVID-19 pandemic will strengthen the political will for robust climate policies grounded in exploratory foresight while there is still time.

4 Conclusion

The difference between weather and climate, it used to be said, is that the former is changeable and unpredictable, while the latter is stable and knowable. That is a lesson we have to unlearn. Today, with the help of computer models, weather has become much more predictable. And climate is increasingly recognized to be unstable. Developing robust policy that accounts for a changing climate is difficult. Doing so in the face of uncertainties introduced by politics and emerging technologies makes for a much tougher challenge.

Many would argue that climate policy grounded in uncertainty and complexity is a political non-starter. Simplicity sells in the world of politics. Uncertainties can muddy the waters, jeopardizing the path to policy approval. Indeed, there is a personal price to be paid for attending to uncertainties. Scientists who insist that the available data do not (as yet) support causal linkages between some observable phenomena, such as local extreme weather and rising global mean temperatures, are pilloried as “climate confusionists.” Their integrity is attacked and their careers jeopardized (Pielke 2017). Yet in the arena of climate change, as Pielke (2010, 34) observes, “Uncertainties and ignorance are a reality to be lived with and managed. They are not going away.” Uncertainty can be poison to politics. But it is the lifeblood of science. To gloss over climate uncertainty is to ground climate policy in misinformation. It constitutes both a corruption of science and a long-term threat to democratic politics.

Uncertainties will remain with us even if we adequately resource scientific inquiry and technological endeavor. That is because science and technology generate uncertainty and manufacture risk (Lee 2012). To be sure, significant resources—far more than are currently being devoted—should be channeled toward the scientific effort of transforming climatic, political, and technological unknown unknowns into known unknowns and known unknowns into known knowns such that meaningful risk-cost-benefit analyses can be conducted. But scientific and technological enterprise will also produce new risks and uncertainties—possibly in greater quantities than those alleviated by rigorous research. Robust climate policy is grounded in the assumption that, regardless of crucial efforts to reduce uncertainty, we will remain dogged by it—and possibly to an ever-deepening degree.

Hawken et al. (1999, 316) state that “the most unlikely environmental scenario is that nothing unlikely happens. The biggest surprise would be no surprises.” We appear to be involved in a high-stakes game of chance, one played at breakneck speed with long-term consequences of planetary proportions. Faced with so many unknowns of momentous import, one might be tempted to throw up one’s hands and bow down before fate, or blindly to carry on business as usual, effectively burying one’s head in the sand.

There is an alternative. “Since we’re billions of times more ignorant than knowledgeable,” Vitek and Jackson (2008, 1) observe, “why not go with our long suit and have an ignorance-based worldview?” Practically speaking, going with our long suit means taking actions in the near term that preserve the most choices for the long term. In other words, “plan short and option long” (Brand 2009, 107). Robust climate policy plans short and options long by means of diverse, distributed, and transparent participation, safe-to-fail experimentation, and exploratory foresight. It promotes a clear-eyed confrontation with uncertainty in the realms of climate, technology, and politics, allowing us to engage in pliant planning without becoming wedded to rigid plans.

There is an old joke about a young traveler in Ireland arriving at a crossroads who seeks counsel from a local resident approaching from the other direction. “Excuse me sir, but how might I get to Kilkenny?” the wayward traveler asks. The old man furrows his brow and strokes his white whiskers, and then slowly replies: “Well if I were you laddie, I wouldn’t start from here.” The advice seems unhelpful. But the best route to a desired destination may not be among the apparent choices. Sometimes, it is necessary to seek additional counsel, climb a nearby hill to survey the territory, and proceed with the help of new sightlines. That is to say, we stand to benefit from expanded discussion, far-sighted vision, and fresh prospects.

Mencken (1921, 158) quipped that “there is always a well-known solution to every human problem—neat, plausible, and wrong.” There are no clear and simple solutions to climate change. The path will not be straight or steady. We will often need to reorient ourselves and blaze new trails. To reach a stable climate in relatively short order—an existential necessity for our species—it would be preferable not to start from our current crisis condition. Unfortunately, that is where we find ourselves. Given this predicament, our best option is a widely participative and experimental approach that exploits our capacities for foresight. Such robust climate policy will make us more adaptive and innovative and allow us to better navigate the slopes, valleys, flat meadows, rock steps, and precipices ahead.