Panda vs. Eagle
Much of Aschenbrenner's analysis is correct, but the policy implications are not.
On Wednesday, Ivanka Trump reshared Leopold Aschenbrenner’s influential Situational Awareness essay.
Aschenbrenner’s essay has taken the AI policy bubble by storm. Aschenbrenner argues that artificial general intelligence (AGI) will be built soon, that we can expect the U.S. government to take control of AGI development by 2028, and that the U.S. should step up its efforts to beat China. According to Aschenbrenner, the stakes are high: “The torch of liberty will not survive Xi getting AGI first.” In my view, the U.S. national interest is much better served by a cooperative rather than an adversarial strategy towards China.
AGI may not be controllable
Aschenbrenner’s recommendation that the U.S. engage in an AGI arms race with China only makes sense if this is a race that can actually be won. Aschenbrenner himself notes that “reliably controlling AI systems much smarter than we are is an unsolved technical problem” and that “failure could easily be catastrophic.” The CEOs of the major corporations currently developing AGI (Sam Altman at OpenAI, Demis Hassabis at Google DeepMind, and Dario Amodei at Anthropic) all believe that their technology poses an existential threat to humanity (rather than just to China), and leading AI researchers such as Yoshua Bengio, Geoffrey Hinton and Stuart Russell have expressed deep scepticism about our ability to reliably control AGI systems. If there is some probability that the U.S. racing towards AGI wipes out all of humanity, including all Americans, then it might be more sensible for the U.S. government to pursue global cooperation around limits to AI development.
China will likely understand its national interest
You may, however, believe that the existential risk is small enough to be outweighed by the risk of permanent Chinese (technological) dominance, or, like Aschenbrenner, feel very bullish on a breakthrough in our understanding of what it would take to control superhuman AI systems. Still, I don’t think this justifies an AI arms race.
In Aschenbrenner’s words: “superintelligence will be the most powerful technology—and most powerful weapon—mankind has ever developed. It will give a decisive military advantage, perhaps comparable only with nuclear weapons.” Clearly, if any of the existing superpowers believe that a rival power is about to gain a “decisive military advantage” over them, this will be hugely destabilising to the international system. To stave off subjugation by the United States, China and Russia will likely initiate preemptive military action to prevent a scenario in which the U.S. becomes the forever hegemon. An AGI arms race could push us to the brink of nuclear war, and this would seem a very strong argument for global cooperation over frenzied competition.
The view from Beijing
It takes two to tango, and pursuing cooperation on AI is foolish if China will race ahead regardless. China certainly has its own equivalents of Marc Andreessen and Yann LeCun, the West’s loud and financially motivated evangelists of unbounded AI development. The Economist recently identified Zhu Songchun, the director of a state-backed programme to develop AGI, and science and technology minister Yin Hejun as two leading voices pushing back against any restraint.
Nevertheless, more safety-minded voices seem to be winning out for now. The summer saw the official launch of a Chinese AI Safety Network, with support from major universities in Beijing and Shanghai. Andrew Yao, the only Chinese citizen ever to have won the Turing Award for advances in computer science, Xue Lan, the president of the state’s expert committee on AI governance, and a former president of Chinese tech company Baidu have all warned that reckless AI development can threaten humanity. In June, China’s President Xi sent Andrew Yao a letter praising his work, and in July the President put AI risk front and centre at a meeting of the party’s Central Committee.
Cold shoulder?
November last year was particularly promising for U.S.-China cooperation on AI. On the first day of the month, U.S. and Chinese representatives quite literally shared a stage at the Bletchley Park AI Safety Summit in the UK. Two weeks later, Presidents Biden and Xi met at a summit in San Francisco and agreed to open a bilateral channel on AI issues specifically. This nascent but fragile coordination was further demonstrated at a subsequent AI Safety conference in South Korea in May.
Clearly, China and the U.S. are at odds over many issues, including the future of Taiwan, industrial policy and export controls. Some issues, such as climate change, nuclear security and AI safety, cannot however be solved within geopolitical blocs. They demand a global response. The moves that nations make over the coming months may well determine the global AI trajectory: towards an AI arms race with a deeply uncertain outcome, or towards some form of shared risk management.
The West (including the U.S.) has two opportunities to keep China at the table and to empower the safety-minded voices in Beijing: the November San Francisco meeting between AI Safety Institutes and the Paris AI Action Summit in February. A substantial part of both summits will deal with safety benchmarks, evaluations, and company obligations. Whilst some of these issues are undoubtedly political, others are not. Ensuring that AI systems remain under human control is as much a Chinese as a Western concern, and a meeting of the safety institutes in particular can be a neutral space where technical experts meet across the geopolitical divide.
First takes on OpenAI's o1 release
My boss, Anthony Aguirre, has shared some of his first impressions of OpenAI’s newest model release. An excerpt is below; you can find the full post on LinkedIn.