Vanderbilt philosopher says optimism about existential risk is central to reducing it

There has been increasing public and academic concern about existential risks—those that threaten humanity’s existence. Today, pioneers in artificial intelligence argue for the importance of mitigating existential threats posed by future systems, and leading companies have signed a statement expressing concern about existential risk from AI.

Many scholars and research organizations give pessimistically high estimates of the existential risks faced by humanity; one estimates the likelihood of an existential catastrophe this century at 1 in 6, and another puts the risk of civilizational collapse by the end of this century at 1 in 2: a coin flip. The Future of Humanity Institute at Oxford University estimates the chance of an existential catastrophe occurring this century at nearly 1 in 5.

It’s often assumed that pessimism is a good way to argue for the importance of mitigating existential risk. After all, the higher a risk is, the more important it is to mitigate.

But in a recently published article in the journal Philosophy & Public Affairs, “High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation,” Assistant Professor of Philosophy David Thorstad uses a series of mathematical models to argue that becoming more pessimistic about existential risk significantly weakens the case for reducing it, while becoming more optimistic significantly strengthens that case.
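
A toy model in the spirit of the paper’s approach (a simplification, not a reproduction of its analysis) shows the shape of the argument: suppose humanity faces a constant extinction risk r each century, and each century of survival carries the same value v. The expected value of the future is then roughly v/r, and an intervention that cuts this century’s risk by an amount x adds roughly x × v/r in expected value. If r is 1 in 1,000, that multiplier is 1,000; if r is 1 in 6, it is only 6. On such a model, the more pessimistic the estimate of background risk, the shorter the expected future and the smaller the payoff from protecting it.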

“We cannot entirely diminish the importance of taking existential threats seriously,” Thorstad said. “In general, this paper suggests a need for balanced and moderate approaches to existential risk, an avoidance of panic and exaggeration and a rethinking of recent arguments in favor of prioritizing existential risk over other pressing problems.”

According to Thorstad, this argument has important implications. Surprisingly, those who think that existential risk mitigation is very important would do well to stop trying to convince people that risks are already high, because on these models high risk estimates weaken, rather than strengthen, the case for prioritizing mitigation.

Proponents of effective altruism, a philosophy and movement that uses evidence and reason to maximize the positive impact of efforts to help others, hold that the most important thing we can do right now is to protect the very long-term future of humanity. In practice, this “longtermist” approach has meant that billions in philanthropic funding previously devoted to causes such as global health and development are now being used to address hypothesized existential threats. Thorstad suggests that some of this funding might be better directed toward more traditional causes.

The recent success of ChatGPT and other large language models has brought a wave of public warnings from prominent AI researchers about the existential threats posed by future AI systems. Thorstad thinks some key points are missing from the conversation: pessimistic warnings may undercut the case for prioritizing risk reduction in this area, and, across the board, mitigating near-term AI harms such as bias, privacy violations and labor market distortions may be more important than focusing on more speculative existential risks.