Will the White House walk into the Action Plan trap?
Analysis of company submissions suggests the companies seek to distract the administration
By Executive Order, President Trump has decreed that the United States will put in place an “AI Action Plan” by July 22. To inform the plan, the government has just concluded a public consultation in which anyone could share their views. The White House received a whopping 8,755 submissions, and initial analysis shows a widening gulf between what the major AGI corporations push for and what the American public asks for.
Kevin Roose, co-host of the New York Times podcast Hard Fork, analyses the corporate action plan submissions as follows: “You can imagine a world in which the AI labs were saying to the government: hey, we have all these ambitious plans, we want your help. But they're not asking for that stuff. What they're asking for instead is basically leave us alone and let us cook. (...) They are trying to give the government some stuff that they can do that will make them feel like they're helping.”
Big Tech’s submissions that could have been
The leaders of the major AGI corporations are acutely aware of how their technology may disrupt society. OpenAI CEO Sam Altman has famously said that the worst-case scenario for artificial intelligence is “lights-out for all of us”. To a greater or lesser extent, the CEOs have all commented on the potential for AGI to replace labour and on the way in which AI will soon make it impossible to tell whether you are dealing with another human online or with a machine.
Given these genuinely held concerns, the companies could have been expected to propose real solutions. They could have asked the government to prevent their competitors from undercutting them on basic safety standards, to experiment aggressively with universal basic income, or to develop the technology needed to prove that someone online is who they say they are. Indeed, Sam Altman himself is a co-founder not just of OpenAI but also of World Network (formerly Worldcoin), an initiative that aims to provide a reliable way of authenticating humans online.
The CEOs, however, seem determined to avoid any meaningful role for the U.S. government in AI development by (i) elevating secondary problems to pole position, and (ii) invoking China whenever they can.
AI’s hottest issue: ventilation engineers
In their submissions, the major AI players try to distract the White House by presenting secondary problems as massive priorities. The corporate submissions make much noise about access to energy, about the government’s own adoption of AI, and about the risk of a “fragmented regulatory environment” [a euphemism for states passing their own laws]. Google, in particular, makes a point of calling for federal AI regulation that would prevent states from passing their own laws, whilst knowing full well that the U.S. has not passed and enforced a meaningful federal digital law in decades.
The issues raised by the companies do of course matter, but they are not what CEOs talk about at the company town hall or during a free-flowing podcast interview. In the worst example across all submissions, OpenAI pushes the distraction strategy to the limit by recommending, in some detail, that the U.S. expand a tax credit for parents so that more young people can be trained as heating, ventilation and air conditioning (HVAC) engineers. AI data centres will certainly need more ventilation engineers, but this is hardly one of the core issues in U.S. AI policy today.
Using China
OpenAI also invokes the ‘but China’ argument consistently, both in its submission and in the LinkedIn posts by its chief lobbyist. The submission analyses China’s advantages in AI in some detail and essentially argues that it is necessary to abolish (!) intellectual property and copyright laws to “avoid forfeiting our AI lead to the PRC [People’s Republic of China].” If we followed OpenAI’s logic, any U.S. law should be scrapped whenever China isn’t bound by an equivalent. This is an absurd proposition, one that would spell the end of many worker protections and of any and all antitrust action in defence of little tech.
A real Action Plan
The submissions by scientists and think tanks do largely address the core issues (a detailed overview by Just Security is available here). Broadly, the recommendations fall into three major buckets: government capacity, government visibility, and the creation of tools that allow intervention if necessary.
There is broad consensus across these submissions that the U.S. needs some kind of central body to oversee the technology, whether that is the current AI Safety Institute under the National Institute of Standards and Technology or a similar organisation. CNAS and CSET also call for the intelligence community to take a leading role in testing new AI models for chemical, biological, radiological and nuclear (CBRN) vulnerabilities.
The Center for Data Innovation, CNAS, CSET, and my own organisation [FLI] each call for the creation of a national AI incident database to give the government visibility into what accidents happen and how they impact society. Anthropic also has some interesting and detailed recommendations on monitoring the potential workforce impact of AI development, for example by updating the American Time Use Survey to include detailed questions on how people use AI in their jobs, and by getting the relevant agencies to understand how AI may undermine the tax base.
The most interesting parts of the submissions concern the tools the government should create so that it can intervene if necessary. CNAS recommends mandatory on-chip modifications that would allow the administration to track and authorise foreign shipments of the most advanced AI chips. In our own submission, FLI calls upon the government to restrict dangerous self-replication and self-improvement capabilities in AI systems.
Once the Action Plan is issued at the end of July, we’ll learn whether the Administration can set out meaningful AI policy for the U.S. government. The companies are gunning for a vague plan that allows them to do whatever they want. The question is whether the White House can avoid walking into this trap.
Meme Graph of the Month
Suggested reading
Coverage of the California report mandated after last year’s veto of the AI bill SB 1047. The report is likely to lead to the reintroduction of a similar bill.
Keep The Future Human, an essay by my boss Anthony Aguirre on the direction of AI development and what we should do about it. If you prefer to listen rather than read, you can hear him discussing it here.
Kevin Roose, whom I quoted at the beginning of this Substack, in the New York Times on why both optimists and pessimists should take progress towards artificial general intelligence more seriously.
Thanks for reading. Please consider helping others find this newsletter by clicking the like button below or by forwarding it to someone who might enjoy it.
And please do keep the feedback coming; it remains welcome via mark [at] future of life [dot] org.