Recently, the AI policy community has become very excited about track II dialogues: informal gatherings of American and Chinese AI experts. The hope is that this kind of information sharing will reduce uncertainty about the other superpower’s AI capabilities, provide an opportunity to exchange best practices on safety measures, and build trust towards an eventual informal agreement on AI “red lines” that should not be crossed.
All these goals are worthwhile, but increased enthusiasm for track II dialogues may, paradoxically, undermine their success.
What even is a “track II”?
Diplomats tend to define a track II as a curated convening of experts who have a measure of influence over their national governments, either through direct connections or through the role they play in the national debate on a given issue. In other words, not every room full of American and Chinese AI professionals qualifies as a track II. Many events billed as track IIs are, in fact, conferences.
What makes for a good one?
When I worked as a diplomat, I managed a portfolio of track IIs involving various domestic and international parties to the Syria conflict. In my experience, the best track II dialogues satisfy five conditions:
Prior to the convening(s), the organisers have articulated a clear vision of where they want the group to end up and have structured the agenda accordingly (e.g. “by the end of the track II, participants will converge on the ideal government response to an AI accident”).
The facilitator is perceived as a neutral broker. A good AI track II, therefore, cannot be run by Brookings and Tsinghua University alone, but requires a neutral third party from a jurisdiction that is equally respected by both the U.S. and China (e.g. the Emirates or Switzerland).
The facilitator has the requisite standing to convene the desired participants and has some experience in reconciling wildly differing views.
The group of participants is small and high-calibre, and invitations are issued based on a deep analysis of how each person can exert influence over a given policy in the target country or countries.
If there is an outcome document, a clear pathway has been identified to bring this to the attention of relevant policymakers.
Unlike Beyoncé…
…the best track II is probably one you’ve never heard of.
It may seem counterintuitive, but a renowned track II is a pretty rare thing. As soon as there is broad public awareness, participants can be expected to retreat into their national trenches and recycle stale talking points. Intelligence agencies, including those of countries outside the scope of the track II, will also take note and may choose to bug hotel rooms, break into participants’ devices, or otherwise reduce the willingness and ability of participants to make progress on sensitive issues.
Less is more
Track IIs can be incredibly powerful tools for tackling sensitive geopolitical issues when governments themselves are unwilling to engage in direct talks or cannot afford to be seen engaging in “soft” dialogues with the enemy. We definitely need these to be organised, and Oliver Guest over at the Institute for AI Policy and Strategy has laid out some of the important topics these convenings can and should focus on.
At the same time, there is a real risk that too much funding and enthusiasm for track IIs will undermine their impact. Each country has only a small number of key people outside government who matter to AI policy, and their appetite for international travel is likely limited. When track IIs proliferate too widely, key figures from both sides may end up in parallel gatherings rather than meeting each other in the same room. Another risk, which I have unfortunately seen up close in the Syrian context, is that poor facilitation fuels rather than reduces mistrust and diminishes prospects for international cooperation.
The scientific exception
Of course, there is an exception to every rule. Larger, and sometimes even public, engagement between non-state actors can play a role in conflict resolution as long as various initiatives don’t cannibalise one another. Historically, rival powers have given scientists in particular more leeway to collaborate internationally. Their conversations also tend to focus on concrete technical challenges rather than abstract policy positions, making these interactions less politically fraught.
The Pugwash conference series, founded in 1957 in Pugwash, Canada, for example, convened scientists from the major powers to discuss the dangers of nuclear weapons. Eight years later, in 1965, the famous physicist Robert Oppenheimer said of the Pugwash process: “I know it to be true that it had an essential part to play in the Treaty of Moscow, the limited test-ban treaty”. Much more recently, AI professor Stuart Russell has pioneered a similar, Pugwash-style dialogue on AI between scientists from the West and China, as covered in the Economist’s 1843 magazine.
Too much of a good thing
Ultimately, dialogue between China and the West on AI is critical to preventing a reckless arms race in which we all lose, and track IIs can be a powerful complement to direct bilateral diplomacy between governments. As a policy community, however, we should be careful not to jump headfirst into funding and organising ever more dialogues, or to seek public recognition for our efforts in this space.
Instead, we should focus on identifying the small number of truly influential figures in each country, understanding their perspectives and constraints, and creating the conditions for sustained, private dialogue. This means fewer, smaller, more carefully designed convenings facilitated by genuinely neutral parties who understand both the technical and political dimensions of AI governance.
The best track II dialogue about AI may well be one that never makes headlines, and that's exactly as it should be.
Meme of the Month
Suggested reading
The President of Singapore, Tharman Shanmugaratnam, gave a wide-ranging and excellent speech on artificial general intelligence at the opening of Asia Tech X late last month.
The world’s most cited AI scientist, Yoshua Bengio, last week launched LawZero. TIME covers this new initiative, partly funded by my employer FLI, aimed at “advancing scientific progress without rolling the dice on agentic systems.”
To much controversy, Anthropic CEO Dario Amodei has warned of a “white-collar bloodbath” in an interview with Axios. He predicts that AI will wipe out half of entry-level white-collar jobs within the next five years.
Thanks for reading. Please consider helping others find this newsletter by clicking the like button below or by forwarding it to someone who might enjoy it.
And please do keep the feedback coming; it remains welcome at mark [at] future of life [dot] org.