LLM development has progressed exponentially, at a pace far exceeding Moore’s Law. Current frontier models are capable of outperforming scientific experts on domain-specific tests. Some of the fastest progress in the past year has come from improvements in algorithmic efficiency and inference, using models that think longer and in a more structured way about each question, rather than from increasing the amount of data and computing power used to train models. China’s DeepSeek, for instance, demonstrated strong performance in early 2025 despite facing chip export restrictions imposed by the United States, largely owing to its innovation in these areas. Now that models are capable enough to be commercially relevant, this development cycle also exceeds the adaptive capacity of most public institutions.

In addition, uncertainty surrounds the economic geography of AI. It is unclear whether most profits will accrue to a few dominant model providers or to a broader ecosystem of start-ups building applications around them. The implications for national tax bases, data governance and domestic technological capability are significant. Governments must determine whether, and to what extent, to mandate sovereign AI capabilities, and how to ensure data protection in an era when most generative systems rely on transnational cloud infrastructure.

Indeed, the adoption of AI is prompting a reconfiguration of global cloud computing and related supply chains and infrastructure. US providers such as Amazon, Microsoft and Google are racing to establish AI-optimised data centres across geographies, prompting governments to revise their investment-screening mechanisms and to reassess the national-security implications of relying on foreign technology services.

Defence ministries are establishing AI task forces to integrate LLM-enabled technology into command, control and planning processes, raising complex questions about human accountability and system reliability. Governments will need to develop more sophisticated doctrines for integrating AI into lethal weapons systems, for instance, given that the meaning of broad, ex-ante commitments to keep a ‘human in the loop’ becomes clouded in wartime environments, particularly with the employment of autonomous and semi-autonomous systems that are less error-prone than humans.

Procurement systems and policy cycles, often built to prioritise stability and predictability and to appease political constituencies, are proving ill-suited to fast-evolving technology and may slow AI uptake by civil servants in government. Without reform, states risk ceding influence over transformative tools to a handful of private firms concentrated in only a few countries. Striking the right regulatory balance will be critical: over-regulation may stifle innovation and competitiveness, while under-regulation may leave societies exposed to economic dislocation.