Toby Ord, The Precipice: Existential Risk and the Future of Humanity. Hachette Books, 2020. 480 p. $30.00
Population and Development Review (IF 4.6), Pub Date: 2021-07-01, DOI: 10.1111/padr.12423
Geoffrey McNicoll

By existential risk the Oxford philosopher Toby Ord means the “permanent destruction of human potential.” Actual human extinction would be existential, but so would the irreversible collapse of civilization. In the latter category, for example, catastrophic climate change through a runaway greenhouse effect could yield such a future, with the human population reduced to a remnant left clinging to life; a world-wide totalitarian regime, self-perpetuating through “technologically enabled indoctrination, surveillance, and enforcement,” would also count. In this book, Ord lays out the range of existential threats, both familiar and novel, and offers a well-documented and (where documents fail) well-reasoned assessment of their various likelihoods. His bottom line: “Given everything I know, I put the existential risk this century at around one in six: Russian roulette” (p. 30). Alarming enough, and, if continued, an unsustainable level of risk, one that, as he says, is “unlikely to last more than a small number of centuries.” For humans to survive over the longer term, that risk will have to be greatly lowered. The period we are living in now, with humanity at high risk of destroying itself, Ord calls the Precipice.
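A back-of-the-envelope illustration of the point (assuming, purely for the arithmetic, a constant and independent one-in-six risk in each century): the probability of humanity coming through $n$ such centuries would be

$$\Pr(\text{survival}) = \left(\tfrac{5}{6}\right)^{n},$$

roughly 16 percent after ten centuries and under 3 percent after twenty.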

Others have trod this ground. John Leslie's The End of the World: The Science and Ethics of Human Extinction (1996)—reviewed in PDR 23, no. 4—was an early entrant in the genre. Leslie's treatment, more casual than Ord's, arrived at a roughly 70 percent overall chance of avoiding extinction over the next five centuries, somewhat higher than Ord's but in the same ballpark. In Ord's enumeration, anthropogenic risks are the main threat to survival, vastly exceeding natural risks. The largest natural existential risk is the eruption of a supervolcano like the ones that created the Yellowstone caldera in Wyoming and Lake Toba in Sumatra, put at a one in 10,000 chance over the next century. (Existential risk from an asteroid collision is far smaller.) In contrast, anthropogenic risks are one or even two orders of magnitude greater. The risks of existential catastrophe this century from nuclear war and from climate change are each assessed at one in 1,000; from a human-engineered pandemic, at one in 30. Without the condition of irreversibility, of course, these risks would be much greater. Most threatening of all in Ord's account, though also the most speculative, is the possible malign consequence of the development of artificial general intelligence (AGI) to a degree that exceeds human levels, a prospect the “expert community” on average evidently considers achievable, more likely than not by the end of the century. An AGI system “optimized toward inhuman values” could arrogate an ever-increasing share of power, with humanity, in effect, ceding its control. We may then face “a deeply flawed or dystopian future locked in forever.” The judged risk for the century: one in ten.

The risk assessment exercise points to where remedial efforts need to be directed, and to their urgency. The agenda is straightforward, calling for improved international coordination on security, for institutions that take greater account of the interests of future generations, and for stronger governance of potentially dangerous new technologies. Such efforts are grossly underresourced: “we can state with confidence that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us” (p. 58).

That might seem to wrap up the author's task. But Ord's vision, spelled out in a final chapter, is more expansive. With existential security attained and humanity's potential secured, “we will be past the Precipice, free to contemplate the range of futures that lie open before us… the grand accomplishments our descendants might achieve with eons and galaxies as their canvas” (pp. 190–191). The time horizon is effectively unlimited: mammal species in the fossil record typically last a million years, and Homo sapiens, at a few hundred thousand years old, would still be near the beginning of even that modest span. For us, therefore, “almost all humans who will ever live have yet to be born” (p. 43). (Interestingly, this directly contradicts an argument of Leslie, who applied a version of the so-called anthropic principle—that we today should be seen as temporally average, not exceptionally early, observers among all humans past and future—to conclude that an ultra-long human future is highly improbable.) Ord's future has no place for mundane demography, which might worry about sustainable net reproduction rates, or for regional differentiation, which might bring in geopolitics. Indeed, a radical impartialism prevails: all lives matter, and not just across space, as in Peter Singer's One World: The Ethics of Globalization (2002) or in Ord's own innovative project on “effective altruism,” but also across time: “people matter equally regardless of their temporal location” (p. 44). Ethical purism accords massive weight to future generations.

The book ends with seven meaty appendices, on topics such as the purported inadmissibility of time discounting, past nuclear weapons accidents, and the value of protecting humanity (with existential risk formalized as a hazard rate, r).
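In the standard survival-analysis terms that notation suggests (a sketch of the usual formulation, not necessarily the book's exact treatment), a constant hazard rate $r$ of existential catastrophe per unit time gives a survival probability of

$$S(t) = e^{-rt}$$

at time $t$, with an expected length of the human future of $1/r$; halving $r$ doubles that expectation, which is one way of seeing why reductions in existential risk carry so much value in Ord's accounting.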



