AI Race: When a senior Russian tech executive recently told a Moscow audience that artificial intelligence would create a new global “AI club” with power “comparable to nuclear weapons,” he wasn’t just indulging in sci-fi hyperbole. He was describing how many governments now see AI: not as a productivity tool, but as an instrument of sovereignty, coercion and long-term dominance.
From Washington to Beijing, Brussels to Seoul, Delhi to Tokyo and Moscow, policy documents and speeches invoke the same language: race, leadership, sovereign AI. The fear is stark: if you fall behind now, you may never catch up.
But what exactly are the United States, China, Russia, India, Japan, South Korea and the European Union racing for—and what does this scramble mean for everyone else?
1. From “AI arms race” to “geopolitical innovation race”
The phrase “AI arms race” has become a cliché, particularly in discussions of autonomous weapons and superintelligent systems. Scholars such as Noel Sharkey began warning in the late 2000s about an emerging race among high-tech militaries to build autonomous submarines, fighter jets and tanks that could select and attack targets with minimal human control.
More recent research argues that this metaphor is too narrow. Political scientist Stefka Schmid and colleagues propose thinking instead in terms of a “geopolitical innovation race”: a competition where national security, economic power and status all mix together, and where the line between civilian and military AI is constantly blurred.
In this race:
- The United States and China are the undisputed frontrunners in frontier AI models, cloud infrastructure and chips.
- The European Union is trying to legislate while fearing it is “missing the boat.”
- Japan and South Korea are betting on “innovation-first” strategies and alliances.
- India is pushing to turn its vast pool of engineers and data into a late-mover advantage.
- Russia, under heavy sanctions, is determined not to be frozen out of the new “AI club” at any cost.
And increasingly, all of them talk about “sovereign AI”—national AI stacks for compute, data and models that reduce dependence on U.S. platforms or Chinese infrastructure.
The implications are profound: for global security, for the future of work and for countries in the Global South that did not choose this race but must now run on a track designed by others.
2. The United States: Chips, clouds and containment
In Washington, AI is framed as the next decisive general-purpose technology—and as the frontline of strategic competition with China.
Domestically, the CHIPS and Science Act of 2022 pledged roughly $280 billion to boost semiconductor research and manufacturing in the United States, recognizing that advanced chips are the “steel” of the AI age. Export controls first imposed in October 2022 and tightened in later rounds restrict China’s access to cutting-edge GPUs and chip-making tools, and similar measures have been pushed onto allies.
At the same time, President Biden’s 2023 Executive Order 14110 set out a sweeping plan for “safe, secure and trustworthy AI,” directing U.S. agencies to manage risks while preserving innovation leadership.
Yet export controls keep tightening. New rules announced in early 2025 sought to cap advanced AI chip exports to much of the world, while carving out exceptions for close allies such as Japan and the U.K. Cloud providers can apply for global licenses, but only under strict security and human-rights conditions.
Critics warn that this “chips first” strategy risks entangling trade, human rights and national security in ways that are hard to unwind. An Associated Press investigation recently revealed how, even amid tough rhetoric, successive U.S. administrations over decades allowed and sometimes helped domestic firms sell AI-relevant technology to Chinese security agencies, including those involved in mass surveillance in Xinjiang.
In short, the United States wants to lead in AI, limit rivals, and write the rules—a balancing act that grows harder as the technology spreads and as domestic politics swing between regulation and deregulation.
3. China: AI as national rejuvenation—and a new front in regulation
China announced its ambitions early. The New Generation Artificial Intelligence Development Plan, released by the State Council in 2017, set a clear goal: by 2030, China should be the world’s primary AI innovation center, with AI deeply embedded in its economy and society.
Since then, Beijing has poured money into AI research, startups, and surveillance platforms. Consulting firm PwC once estimated that AI could add around $7 trillion to China’s economy by 2030, roughly a quarter of projected GDP.
But China is not just racing on raw capacity. It has also moved aggressively to regulate AI, especially generative models. The Interim Measures for the Management of Generative AI Services, in force since 2023, require providers to ensure that generated content reflects “core socialist values,” avoids threatening state security, and respects intellectual property.
Those rules have two faces. Supporters say they demonstrate that China is ahead of the West in setting guardrails for powerful systems. Critics see them as a blueprint for algorithmic censorship at scale, embedding political red lines inside global AI products.
Meanwhile, U.S. chip export controls have forced China’s AI firms into creative workarounds. Some, like Alibaba and ByteDance, are reportedly training large models in data centers in Southeast Asia to access Nvidia GPUs indirectly, while others rely on domestic chips and stockpiled hardware.
For Beijing, AI is part of a broader project of “national rejuvenation”: it promises economic growth, military strength and ideological control. For its rivals, it is a reason to double down on containment.
4. Europe: Regulator in chief—or AI also-ran?
Europe has taken a different path, attempting to wield regulatory power where it lacks tech giants.
The EU AI Act, adopted in 2024, is the world’s first broad risk-based law for AI, banning certain applications, imposing strict obligations on “high-risk” systems and introducing transparency rules for generative AI. It follows a familiar Brussels pattern: set standards for the single market and hope the world follows.
But inside Europe, anxiety is growing. European Central Bank president Christine Lagarde warned in November 2025 that the EU is “jeopardising its future” by lagging in AI adoption and development, citing fragmented regulations, high energy costs and limited venture capital as obstacles. Simply buying AI from foreign providers, she warned, would deepen dependency and vulnerability.
The result is a quiet shift. Investigations suggest that some enforcement timelines for the AI Act and related tech regulations are being loosened under pressure from industry and member states, even as regulators insist that fundamental rights remain central.
At the same time, European leaders are flirting with “sovereign AI”—national or EU-wide AI stacks that rely on European data centers, chips and models, an idea aggressively marketed by Nvidia CEO Jensen Huang and increasingly discussed in policy circles.
Supporters say this could ensure cultural alignment, industrial jobs and strategic autonomy. Critics ask whether a continent that missed the smartphone and cloud revolutions can realistically afford to build a full AI stack from scratch—and whether the money might be better spent on education, infrastructure and targeted partnerships.
5. Japan and South Korea: Middle powers with big ambitions
Japan: Innovation first, with guardrails
Japan has quietly become one of the most active players in AI governance.
Domestically, the AI Promotion Act frames AI as a strategic asset and sets four pillars: promoting industrial use, mitigating risks through transparency, treating AI as a driver of public welfare, and contributing to international norms.
Internationally, Tokyo led the G7 Hiroshima AI Process, which produced a set of International Guiding Principles and a voluntary Code of Conduct for organizations developing advanced AI systems. Those documents—now backed by a “Friends Group” of nearly 50 countries and monitored through an OECD reporting framework—aim to create a baseline of safety and accountability without freezing innovation.
Prime Minister Fumio Kishida has stressed both sides of the equation: AI’s potential to help aging societies like Japan’s and its risks in the form of deepfakes, disinformation and autonomous weapons.
Japan’s bet is that a blend of innovation-first industrial policy and high-profile norm-setting can carve out influence between the U.S. and China—especially if linked to closer cooperation with Europe on AI-enabled weapons and digital standards.
South Korea: From chip giant to “sovereign AI” benchmark
If AI runs on chips, South Korea sits at the heart of the race.
In 2024, semiconductors—especially high-bandwidth memory used for AI data centers—generated over $140 billion in exports for Korea, with China as the largest buyer. Global demand for AI infrastructure drove a nearly 44% year-on-year jump in chip exports.
The new government has doubled down, launching ambitious plans to expand AI data centers and domestic LLMs. Recent reports describe a working group involving Samsung, SK Hynix, Hyundai and Naver to deploy around 260,000 Nvidia GPUs, potentially increasing national GPU capacity five-fold.
South Korea has also embraced the language of sovereign AI, presenting itself as a democratic alternative to both American corporate dominance and Chinese state-driven models. The country has backed UN resolutions on trustworthy AI and is developing its own security-oriented AI strategy.
But Seoul’s position is precarious. Its chipmakers depend heavily on exports to China even as security ties pull it closer to the United States and its allies. Analysts warn that missteps in this balancing act could leave Korea squeezed between two rival hegemons.
6. India: The world’s biggest democracy wants a seat at the table
India is not yet a frontier AI superpower, but it is too large to be a spectator.
The government’s IndiaAI Mission, approved in 2024, aims explicitly to “democratise” AI—creating shared compute infrastructure (at least 10,000 GPUs), a national datasets platform (AIKosh), application programs for public services, startup financing and a safety/ethics pillar.
New Delhi’s message is that AI should help solve India’s development problems—agriculture, health, logistics, digital identity—rather than simply feeding global platforms. India is already a major contributor to open-source AI projects and has a vast English-speaking developer base; officials argue this gives it a unique advantage in building models tuned to Indian languages and needs.
Yet India is also courted as a strategic partner in AI-related supply chains, from chip manufacturing to data center siting. It must navigate between the U.S.—its main security partner—and Russia, a long-time arms supplier now pivoting to AI as a strategic equalizer; and between Western firms seeking markets and domestic concerns about data sovereignty and labour rights.
Supporters of India’s AI push see it as a chance to avoid past patterns where the country provided talent but not global platforms. Critics worry that without strong privacy laws, labour protections and independent oversight, AI will deepen domestic inequalities and enable new forms of surveillance.
7. Russia: Talent, sanctions and “front-line AI”
Russia’s AI story is shaped by sanctions and war.
Moscow’s National AI Development Strategy envisions Russia as a top AI power by 2030, with applications across civilian and military domains. Analysts note that Russia has world-class mathematical talent and a strong tradition in certain AI subfields, but suffers from limited access to chips and capital.
Sanctions since 2022 have torn at its tech ecosystem. Yet Russian companies such as Sberbank are investing heavily in generative models and claiming performance comparable to Western systems, with plans to open-source some models for influence and resilience.
On the battlefield, Russia has used AI for drone swarming, image recognition and analysis of stolen cyber data, accelerating a global trend toward AI-enabled warfare in Ukraine and beyond.
For Russian officials, AI is a domain where they can punch above their economic weight and offset NATO’s conventional advantages. For the rest of the world, it raises uncomfortable questions about escalation, autonomous weapons and the erosion of human control in war.
8. Beyond the big players: “Sovereign AI” and its discontents
The AI race does not stop at the G7 plus China and Russia. Dozens of mid-sized countries now talk about building their own “sovereign AI”—national models trained on local languages and data, hosted in domestic data centers, governed under local laws.
In just the past year:
- South Korea has become a showcase for sovereign AI tied to semiconductor strength.
- Gulf states such as Saudi Arabia and the UAE are pouring billions into national LLMs and AI research hubs.
- Smaller countries like Singapore and Malaysia are backing regional-language models to reduce dependence on English-centric systems.
Nvidia’s Jensen Huang has aggressively promoted the idea that “every country needs its own sovereign AI”—an argument that happens to align with his company’s interest in selling more GPUs.
Supporters of sovereign AI say it can protect cultural diversity, national security and data privacy. Critics counter that building competitive national models is extraordinarily expensive and energy-intensive, and that many smaller projects risk becoming costly “AI vanity projects” that quickly fall behind frontier systems.
This is where the AI race intersects with global inequality. Wealthy states can afford multiple attempts at sovereign AI and absorb failures. Low- and middle-income countries cannot. For them, the choice is often between renting AI from a handful of U.S. or Chinese platforms—or trying to band together in regional alliances, the digital equivalent of an “Airbus for AI.”
9. Risks: Arms race logic, fragmented norms and collateral damage
As competition accelerates, the risks are no longer theoretical.
9.1 AI-enabled weapons and crisis instability
Military planners see AI as a force multiplier: processing battlefield data, guiding drones, optimizing logistics, even generating combat simulations. But as AI is woven into command-and-control systems, scholars warn of new forms of crisis instability—systems that act faster than human decision-makers can understand, increasing the risk of miscalculation.
A widely cited essay recently compared the race for superintelligent AI to the nuclear arms race, suggesting that a state able to field AI capable of recursive self-improvement could gain nuclear-level geopolitical leverage. The author proposed a strategy of “Mutual Assured AI Malfunction” (MAIM) as a deterrent—a chilling echo of Cold War-era thinking.
The more AI is framed as a winner-takes-all strategic asset, the harder it becomes to cooperate on safety or limit autonomous weapons.
9.2 Surveillance, repression and human rights
China’s use of AI for facial recognition, predictive policing and ethnic profiling—especially in Xinjiang—has become a symbol of what some call “authoritarian AI.” Yet recent investigations show that Western companies and export policies have also helped supply the hardware and software that underpin such systems.
Elsewhere, governments are experimenting with AI for welfare fraud detection, migration control and “social credit”-style risk scores. The temptation to import sophisticated AI tools faster than legal and ethical frameworks can keep up is not limited to autocracies.
9.3 Fragmented standards and regulatory arbitrage
The G7 Hiroshima AI Process and OECD reporting framework are attempts to prevent a complete fragmentation of norms—offering common principles on transparency, safety evaluation and accountability for advanced AI systems.
But global governance remains patchwork. The U.S. leans on voluntary commitments and sector-specific rules; the EU leads on comprehensive regulation but struggles with enforcement and industrial competitiveness; China embeds political control into its AI regulation; Japan and others try to bridge these approaches.
In this environment, AI firms can play governments off against one another, threatening to relocate R&D or data centers in response to regulation, while lobbying to shape export controls, safety standards and intellectual-property rules in their favor.
10. What this race means for the rest of the world
For billions of people outside the core AI blocs—in Africa, Latin America, Southeast Asia and parts of the Middle East—the AI race is a double-edged sword.
On the one hand, competition can drive down costs and push powerful countries to share technology and invest in infrastructure, from submarine cables to cloud regions and training programs. Initiatives framed as development—AI for agriculture, health, education—can bring real benefits if designed around local needs.
On the other hand, the race can turn entire regions into:
- Resource frontiers, providing critical minerals, data and cheap labour for data labeling and content moderation.
- Testing grounds, where lightly regulated AI systems are deployed in welfare programs, policing, or border control without strong democratic oversight.
- Geopolitical battlegrounds, pressured to pick sides in chip supply chains, telecom infrastructure and digital-currency experiments.
Scholars warn that if AI becomes another arena where a few powers dictate terms, the global South risks repeating the experience of earlier technological revolutions: users, not shapers, of the systems that structure their economies and politics.
11. Choosing the metaphor—and the rules
Metaphors matter. Calling this an “AI arms race” encourages secrecy, speed and a view of safety as a luxury. Calling it a “geopolitical innovation race” highlights that states are competing not only on weapons, but on industrial policy, standards and narratives.
But there is still space to reframe the contest as a race to the top: who can build the most reliable, transparent and broadly beneficial AI systems, not just the most powerful ones? That would mean measuring progress not only by model size or GPU counts, but by:
- Reduced fatalities in conflict zones.
- Improved access to education and health care.
- Lower emissions from more efficient infrastructure.
- Stronger protections for workers and minority groups in automated systems.
For that to happen, three things are essential:
- Hard security talks on AI-enabled weapons, cyber capabilities and escalation—comparable in seriousness to nuclear arms control, but adapted to a world where corporations hold much of the critical infrastructure.
- Inclusive governance, where countries beyond the usual G7/China/Russia circle meaningfully shape AI standards, not just implement them.
- Investment in people, from STEM education and digital rights advocacy to independent journalism and civil-society watchdogs that can hold both states and firms accountable.
The AI race between the United States, China, Russia, India, Japan, South Korea and the European Union is real. The danger is not only that someone “wins,” but that everyone else loses—through arms-race instability, deepened inequality and a world where the most powerful systems are accountable to no one.
The technology is still young enough that the rules are not fully written. Whether AI becomes a tool of domination, or a shared infrastructure for human flourishing, will depend less on the brilliance of a model’s architecture and more on the political courage to slow down when necessary, to cooperate where possible, and to put human dignity above national bragging rights.
Discover more from Interdisciplinary Research Journal and Archives