AI has become a battleground for geopolitical competition, national resilience, and societal transformation. The stakes are no longer theoretical, and the window for action is closing fast.
In March, the U.S. escalated its efforts to shape the global technology landscape by expanding export controls on advanced AI and semiconductor technologies. More than 80 entities, over 50 of them based in China, were added to the export blacklist in a bid to control access to critical technologies. The move, justified on national security grounds, seeks to limit the development of high-performance computing, quantum technologies, and AI in certain regions.
As these export controls tighten, reports have surfaced of restricted chips entering China through unofficial channels, including e-commerce platforms. U.S. authorities are working to close these gaps by sanctioning new entities attempting to circumvent the restrictions. The Department of Commerce’s Bureau of Industry and Security (BIS) is also pushing for stricter Know Your Customer (KYC) regulations for cloud service providers to limit unauthorised access to GPU resources across the Asia Pacific region.
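To make the KYC idea concrete, the sketch below shows in Python how a provider might gate GPU allocation behind a screening check. This is a minimal illustration under stated assumptions: the entity names, country codes, and customer fields are hypothetical, not BIS requirements.

```python
from dataclasses import dataclass

# Illustrative denylist; a real provider would sync this with the
# BIS Entity List and other official sanctions data.
DENYLISTED_ENTITIES = {"example-entity-a", "example-entity-b"}
RESTRICTED_JURISDICTIONS = {"XX", "YY"}  # placeholder country codes

@dataclass
class Customer:
    name: str
    jurisdiction: str        # ISO country code
    verified_identity: bool  # passed documentary KYC checks
    end_use_attested: bool   # signed attestation on intended workloads

def may_allocate_gpus(customer: Customer) -> bool:
    """Simplified screening gate run before provisioning GPU capacity."""
    if customer.name.lower() in DENYLISTED_ENTITIES:
        return False
    if customer.jurisdiction in RESTRICTED_JURISDICTIONS:
        return False
    # Require both identity verification and an end-use attestation.
    return customer.verified_identity and customer.end_use_attested

if __name__ == "__main__":
    applicant = Customer("Acme Research", "SG", True, True)
    print("allocate:", may_allocate_gpus(applicant))
```

In practice, the hard part is not the gate itself but keeping the screening data current and detecting shell entities that pass it – which is precisely where enforcement efforts are now focused.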
Geopolitics & the Pursuit of AI Dominance
Bipartisan consensus has emerged in Washington around the idea that leading in artificial general intelligence (AGI) is a national security imperative. If AI is destined to shape the future balance of power, the U.S. government believes it cannot afford to fall behind. This mindset has accelerated an arms-race dynamic reminiscent of the Thucydides Trap, where the fear of being overtaken compels both sides to push ahead, even if alignment and safety mechanisms are not yet in place.
China has built extensive domestic surveillance infrastructure and has access to large volumes of data that would be difficult to collect under the regulatory frameworks of many other countries. Meanwhile, major U.S. social media platforms can refine their AI models using behavioural data from a broad global user base. AI is poised to enhance governments’ ability to monitor compliance and enforce laws that were written before the digital age – laws that previously assumed enforcement would be limited by practical constraints. This raises important questions about how civil liberties may evolve when technological limitations are no longer a barrier to enforcement.
The Digital Battlefield
Cybersecurity Threat. AI is both a shield and a sword in cybersecurity. We are entering an era of algorithm-versus-algorithm warfare, where AI’s speed and adaptability will dictate who stays secure and who gets compromised. Nations are prioritising AI for cyber defence to stay ahead of state actors wielding it for attacks. For example, the DARPA AI Cyber Challenge is funding tools that use AI to identify and patch vulnerabilities in real time – essential for defending against state-sponsored threats.
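The static-analysis core that such tools build on is easy to sketch. The Python example below uses the standard-library ast module to flag two classic injection-prone calls; it is a toy stand-in for one stage of an automated pipeline, and real competition entries pair this kind of analysis with learned models and automated patch generation.

```python
import ast

# Call names commonly flagged as injection-prone; illustrative, not exhaustive.
RISKY_CALLS = {"eval", "exec", "system"}

def scan_source(source: str, filename: str = "<input>") -> list[str]:
    """Flag risky call sites in Python source – a toy stand-in for the
    static-analysis stage of a vulnerability-discovery pipeline."""
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.Call):
            # Handle both bare names (eval) and attributes (os.system).
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in RISKY_CALLS:
                findings.append(f"{filename}:{node.lineno}: risky call to {name}()")
    return findings

if __name__ == "__main__":
    sample = "import os\nos.system('rm -rf ' + user_input)\n"
    for finding in scan_source(sample):
        print(finding)
```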
Yet, a key vulnerability exists within AI labs themselves. Many of these organisations, though responsible for cutting-edge models, operate more like startups than defence institutions. This results in informal knowledge sharing, inconsistent security standards, and minimal government oversight. Despite their strategic importance, these labs lack the same protections and regulations as traditional military research facilities.
High-Risk Domains and the Proliferation of Harm. AI’s impact on high-risk domains like biotechnology and autonomous systems is raising alarms. Advanced AI tools could lower the barriers for small groups or even individuals to misuse biological data. As Anthropic CEO Dario Amodei warns, “AI will vastly increase the number of people who can cause catastrophic harm.”
This urgency for oversight mirrors that of past technological revolutions: the rise of nuclear technology prompted global treaties and safety protocols, and the expansion of railroads drove innovations like block signalling and standardised gauges. Given the pace of AI’s progression, comparable safety measures must be adopted far more quickly.
Meanwhile, AI-driven autonomous systems are growing in military applications. Drones equipped with AI for real-time navigation and target identification are increasingly deployed in conflict zones, especially where traditional systems like GPS are compromised. While these technologies promise faster, more precise operations, they also raise critical ethical questions about decision-making, accountability, and how much room remains for human oversight when engagements unfold at machine speed.
The 2024 National Security Memorandum on AI laid down initial guidelines for responsible AI use in defence. However, significant challenges remain around enforcement, transparency, and international cooperation.
AI for Intelligence and Satellite Analysis. AI also holds significant potential for national intelligence. Governments collect massive volumes of satellite imagery daily – far more than human analysts can process alone. AI models trained on geospatial data can greatly enhance the ability to detect movement, monitor infrastructure, and improve border security. Companies like ICEYE and Satellogic are advancing their computer vision capabilities to increase image processing efficiency and scale. As AI systems improve at identifying patterns and anomalies, each satellite image becomes increasingly valuable. This could drive a new era of digital intelligence, where AI capabilities become as critical as the satellites themselves.
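The core change-detection idea behind such pipelines can be sketched in a few lines. Given two co-registered images of the same area taken at different times, per-pixel differencing against a threshold yields a crude change mask; operational systems use learned models and calibrated sensor data, and the threshold and synthetic inputs below are purely illustrative.

```python
import numpy as np

def change_mask(before: np.ndarray, after: np.ndarray,
                threshold: float = 0.2) -> np.ndarray:
    """Crude per-pixel change detection between two co-registered,
    normalised (0..1) grayscale satellite images."""
    if before.shape != after.shape:
        raise ValueError("images must be co-registered to the same grid")
    diff = np.abs(after.astype(np.float64) - before.astype(np.float64))
    return diff > threshold  # True where the scene changed markedly

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    before = rng.random((128, 128))       # stand-in for a real acquisition
    after = before.copy()
    after[40:60, 40:60] += 0.5            # simulate new construction
    mask = change_mask(before, np.clip(after, 0.0, 1.0))
    print(f"changed pixels: {mask.sum()} of {mask.size}")
```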
Policy, Power, and AI Sovereignty
Around the world, governments are waking up to the importance of AI sovereignty – ensuring that critical capabilities, infrastructure, and expertise remain within national borders. In Europe, France has backed Mistral AI as a homegrown alternative to U.S. tech giants, part of a wider ambition to reduce dependency and assert digital independence. In China, DeepSeek has gained attention for developing competitive LLMs using relatively modest compute resources, highlighting the country’s determination to lead without relying on foreign technologies.
These moves reflect a growing recognition that in the AI age, sovereignty doesn’t just mean political control – it also means control over compute, data, and talent.
In the U.S., the public sector is working to balance oversight with innovation. Unlike the internet, the space program, or the Manhattan Project, the AI revolution was driven primarily by the private sector, with limited state involvement. This has left the public sector in a reactive position, struggling to keep up. Government processes are inherently slow: legislation, interagency reviews, and procurement cycles routinely lag rapid technological developments. While major AI breakthroughs can happen within months, regulatory responses may take years.
To address this gap, efforts have been made to establish institutions such as the AI Safety Institute and to require labs to share their internal safety evaluations. More recently, however, momentum has shifted towards reducing the regulatory burden on the AI sector, prioritising support for innovation over excessive caution.
A key challenge is the need to build both policy frameworks and physical infrastructure in tandem. Advanced AI models require significant computational resources, and by extension, large amounts of energy. As countries like the US and China compete to be at the forefront of AI innovation, ensuring a reliable energy supply for AI infrastructure becomes crucial.
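The scale involved is easy to underestimate. A back-of-envelope estimate – every figure below is an assumption, not a vendor number – shows why grid capacity becomes a planning constraint:

```python
# Back-of-envelope energy estimate for a large training run.
# All inputs are illustrative assumptions, not measured values.
gpus = 20_000            # accelerators in the cluster
watts_per_gpu = 700      # board power under sustained load (assumed)
pue = 1.3                # data-centre overhead: cooling, networking, losses
days = 90                # assumed length of the training run

cluster_mw = gpus * watts_per_gpu * pue / 1e6  # sustained draw in megawatts
energy_gwh = cluster_mw * 24 * days / 1000     # total energy in gigawatt-hours

print(f"sustained draw: {cluster_mw:.1f} MW")
print(f"energy over {days} days: {energy_gwh:.1f} GWh")
```

At these assumed figures, a single run demands roughly 18 MW of continuous grid capacity for three months – on the order of a small industrial facility – which is why siting, power contracts, and generation capacity now sit at the heart of AI strategy.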
If data centres cannot scale quickly or if clean energy becomes too expensive, there is a risk that AI infrastructure could migrate to countries with fewer regulations and lower energy costs. Some nations are already offering incentives to attract these capabilities, raising concerns about the long-term security of critical systems. Governments will need to carefully balance sovereignty over AI infrastructure with the development of sufficient domestic electricity generation capacity, all while meeting sustainability goals. Without strong partnerships and more flexible policy mechanisms, countries may risk ceding both innovation and governance to private actors.
What Lies Ahead
AI is no longer an emerging trend – it is a cornerstone of national power. It will shape not only who leads in innovation but also who sets the rules of global engagement: in cyber conflict, intelligence gathering, economic dominance, and military deterrence. The challenge governments face is twofold. First, to maintain strategic advantage, they must ensure that AI development – across private labs, defence systems, and public infrastructure – remains both competitive and secure. Second, they must achieve this while safeguarding democratic values and civil liberties, which are often the first to erode under unchecked surveillance and automation.
This isn’t just about faster processors or smarter algorithms. It’s about determining who defines the future – how decisions are made, who has oversight, and what values are embedded in the systems that will govern our lives.
