Xiao Qian: US-China AI race must strike a balance between security and openness
Tsinghua scholar says raising barriers to entry in tech in the name of national security could stifle global AI development
The U.S. State Department has ordered a global push to bring attention to what it says are widespread efforts by Chinese companies, including AI startup DeepSeek, to steal intellectual property from U.S. artificial intelligence labs, according to a diplomatic cable seen by Reuters.
The cable, dated Friday April 24 and sent to diplomatic and consular posts around the world, instructs diplomatic staff to speak to their foreign counterparts about “concerns over adversaries’ extraction and distillation of U.S. A.I. models,” the news agency reported on the same day.
Xiao Qian, deputy director of the Centre of International Security and Strategy (CISS) and vice-dean of the Institute for AI International Governance at Tsinghua University, wrote about model distillation on April 23 in the South China Morning Post. Xiao has kindly agreed to let us reproduce her commentary, a timely intervention.
I first came across the Chinese version on CISS's WeChat blog.
US-China AI race must strike a balance between security and openness
Raising barriers to entry in tech in the name of national security could stifle global AI development
The United States House Select Committee on China recently released a report on artificial intelligence. Titled “Buy What It Can, Steal What It Must: China’s Campaign to Acquire Frontier AI Capabilities”, it captures a hardening view in Washington that Beijing’s artificial intelligence rise is closely tied to both market access and security concerns.
Whether fully substantiated or not, such beliefs are increasingly shaping the policy lens through which technology competition between the two countries is understood in the US – less as a matter of innovation, and more as one of national security.
Against this backdrop, recent controversy over model distillation involving leading US firms – including OpenAI, Anthropic and Alphabet – has drawn a great deal of attention. The coordination among these companies, coming soon after Washington’s push to build a “full-stack AI export” system, suggests that what appears to be a technical dispute is in fact part of a broader shift in how AI is governed – and contested – globally.
At first glance, the debate over model distillation concerns technical pathways and intellectual property boundaries. Distillation is a widely used machine learning technique that enables smaller models to approximate the performance of larger ones, reducing computational costs and accelerating adoption. Its legal status remains ambiguous, and even US firms have used similar methods among themselves.
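The mechanics Xiao refers to can be shown in miniature. The sketch below is illustrative only, not any firm's actual method: it computes the textbook distillation loss, the KL divergence between a large "teacher" model's temperature-softened output distribution and a small "student" model's, which the student minimises during training in order to mimic the teacher cheaply.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher temperature yields a softer,
    more informative distribution over classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence KL(teacher || student) over softened distributions.

    Minimising this trains the small student to reproduce the large
    teacher's behaviour, which is what makes distillation cheap."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits already match the teacher incurs zero loss;
# a diverging student incurs a positive penalty.
teacher = [3.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [3.0, 1.0, 0.2])
diverged = distillation_loss(teacher, [0.2, 1.0, 3.0])
print(aligned, diverged)
```

Note that the student never needs the teacher's weights, only its outputs on queries, which is why the technique sits in the legal and policy grey zone the commentary describes.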
However, in today’s geopolitical environment, the issue has been reframed. Some US policymakers and companies argue that distilled models could be misused for cyber operations, disinformation campaigns or even military applications. What was once a question of optimisation has been elevated to one of national security.
This shift reflects a deeper transformation in AI governance in the US. Over the past few years, Washington has moved from a primary focus on AI safety – including ethical risks and algorithmic harms – towards a more security-driven paradigm centred on strategic competition and technological control. Rhetoric around safety has not disappeared, but it is increasingly fused with national security considerations.
This transformation is also taking place against a changing technological reality. The latest report by the Stanford Institute for Human-Centered Artificial Intelligence provides concrete evidence that the gap between the US and China in AI development is narrowing. The United States still leads in producing frontier large language models, but China remains highly competitive in scale and diffusion – accounting for the largest share of global AI publications and patents, and rapidly expanding real-world deployment. Meanwhile, benchmark gaps between leading US and Chinese models have further compressed in the latest evaluation cycles, particularly in applied and multilingual tasks.
This narrowing gap may help explain the growing sense of urgency – and, in some quarters, anxiety – in Washington. The framing of AI development as a race to be won has become deeply embedded in US policy discourse. As competition intensifies, maintaining technological leadership is no longer seen as sufficient; slowing competitors is becoming an equally important objective.
Institutionally, this shift has reshaped the relationship between the government and industry. Through advisory bodies, export controls and standards-setting initiatives, leading US technology companies are being drawn into a governance framework that aligns closely with national security priorities. The result is a form of embedded coordination: firms remain market actors, but they also function, in part, as instruments of strategic policy.
In this evolving system, firms such as OpenAI, Anthropic and Google are no longer just innovation leaders. They are becoming gatekeepers of frontier AI capabilities. Security is no longer only about managing risks; it is also about defining who gets access to advanced technologies, and under what conditions. In the name of national security, those big tech companies are capable of shaping competitive dynamics, raising barriers to entry for potential rivals.
This shift sits uneasily with the long-standing ethos of democratising technology, which has underpinned much of the digital economy’s expansion. As access becomes more tightly controlled, the diffusion of cutting-edge capabilities risks slowing, potentially limiting broader participation in innovation while also reinforcing asymmetries between those who control core technologies and those who depend on them.
A direct consequence of this shift is the narrowing of strategic options for developing countries. For many in the Global South, access to advanced AI capabilities depends on integration into existing technological ecosystems, often dominated by a handful of firms. Participation in these ecosystems comes with embedded rules and standards. Building independent capabilities, meanwhile, requires significant resources and time. The result is a structural constraint that risks deepening global technological divides.
As for global AI governance, international discussions once focused primarily on ethics, transparency and safety; they are now increasingly shaped by geopolitical competition. Governance is no longer only about mitigating risks, but also about managing strategic advantage.
Excessive securitisation risks fragmenting the global technology landscape into competing blocs, raising costs for all and slowing innovation. This transformation complicates international cooperation, even as it makes dialogue in critical areas – such as military AI applications and infrastructure security – more urgent.
For China, these developments present both challenges and opportunities. On one hand, tightening external controls are constraining traditional pathways for acquiring advanced technologies. On the other, they are accelerating efforts to strengthen domestic innovation ecosystems.
In the long run, building a comprehensive AI ecosystem – spanning data, computing power, models and applications – will be essential for enhancing technological resilience. At the same time, China has an interest in preserving an open and inclusive international environment for AI development.
China’s policy approach – emphasising both development and risk management – may offer a useful perspective in this context. The challenge is not simply to respond to external constraints, but to contribute to shaping a governance framework that balances security with openness.
Ultimately, the controversy over model distillation is not an isolated incident. It is a reflection of a broader shift in the logic of technological competition, in which security considerations are increasingly embedded into market and innovation strategies.
Thus, the key question for the international community is not only how to govern AI risks, but how to prevent security narratives from becoming tools of exclusion. If standards, regulations and capability controls continue to be used primarily to raise barriers, global AI development may move towards a trajectory of structural fragmentation: leading systems consolidate their advantages by controlling critical capabilities, while others face growing technological and institutional constraints.
The task ahead is not to choose between security and openness, but to find a sustainable balance between the two. How this balance is struck will determine whether AI becomes a shared engine of global progress or another fault line in an increasingly divided technological world.