Sejong Policy Briefs

(Brief 2025-10) Current Landscape and Key Issues of Global AI Governance in the Context of U.S.–China Competition


Joonkoo Yoo

Director, Center for Security Strategy, Sejong Institute

 

 

 

1. Problem Statement

 

Amid the intensifying and multifaceted rivalry between the United States and China, competition over AI technological supremacy has emerged as a critical pillar of their broader hegemonic contest. Technological competition now shapes the outcome of power struggles, while the global political and economic environment in turn shapes the direction and speed of technological development. The U.S.–China AI competition extends beyond a simple race for technological advancement; it is inherently linked to the restructuring of the international order and thus calls for evaluation through a techno-geopolitical lens. AI technology is permeating every dimension of human life, from states and industries to societies and individuals, bringing about fundamental transformations in the global order.

 

As AI technology rapidly evolves and its applications proliferate, discussions on global governance have gained momentum. Because the discourse on global AI governance began only relatively recently, earlier debates focused primarily on formulating national strategic policies for AI development and innovation, and multilateral discussions have so far produced little more than basic ethical guidelines for AI use. However, the scope of global AI governance is now expanding to encompass core issues such as the establishment of platforms involving states, corporations, and individuals, the creation of new international institutions, the identification of key technical challenges, and the development of international norms.

 

With national-level strategies, policies, and legislative frameworks advancing rapidly, efforts toward multilateralizing AI regulations are also accelerating across countries and regions, including the United States, China, and the European Union. Underlying these governance efforts is an intensifying competition for technological hegemony among major powers. Technology-leading states are increasingly reinforcing standard-setting and technology control regimes, not only to accelerate their own innovation but also as a means of deliberately constraining the technological progress of their rivals. While AI governance has traditionally been discussed by separating commercial and military domains, recent trends reflect a growing effort to subsume these categories within the framework of international security. This shift is largely driven by the highly dual-use nature of AI technologies and the ease with which they can be rapidly securitized during development and deployment. Against this backdrop, this paper analyzes the background and characteristics of U.S.–China AI competition from a diplomatic and security perspective, examines the current landscape and key issues of global governance, and presents prospects and implications for the future.

 

 

2. Background and Characteristics of the U.S.–China AI Competition

 

a. Importance of AI from a National Security Perspective

 

As the technological power rivalry between the United States and China intensifies, their competition in the AI domain has also deepened and widened. In the era of techno-politics, where advanced technology drives geopolitical dynamics, AI has emerged as a key engine of strategic competition. The U.S.–China AI rivalry is expanding across economic, military-security, and scientific-technological spheres. Major technology powers, including the U.S. and China, regard AI as core infrastructure that determines national competitiveness and as a strategic security asset, prompting sustained and substantial investments.

 

Due to its general-purpose nature, AI possesses a vast range of potential applications, making it difficult to predict its long-term impact with precision. For example, recent projections anticipate a shift from the current stage of generative AI, focused on producing images and text, toward an era of "physical AI." Such evaluations underscore the variability in AI's developmental trajectory. Moreover, future AI innovations may evolve toward more narrowly defined, purpose-specific functions. Currently, the AI landscape is governed by economies of scale, requiring massive datasets, large-scale algorithms and models, high-performance computing infrastructure, and significant energy resources. These demands present steep barriers for latecomer nations attempting to build AI ecosystems comparable to those of the U.S. and China. Nevertheless, cases such as DeepSeek suggest that it may become possible to achieve competitive AI performance with relatively limited resources, indicating potential shifts in the existing AI ecosystem.

 

b. Characteristics of U.S. and Chinese National AI Strategies

 

The U.S.–China AI rivalry is increasingly manifested in their respective national AI strategies, which comprehensively integrate technological innovation with national security imperatives. The United States' advancement of its AI strategy reflects not only the fundamental significance of AI but also the strategic imperative to counter China's intensifying challenge in the domain of technological supremacy. This urgency stems from a growing recognition that China has achieved considerable competitiveness across the entire AI ecosystem, including talent cultivation, technological development, production, and commercialization. While several AI-related policies introduced under the Biden administration have since been rescinded or revised, the second Trump administration is expected to pursue a new suite of AI strategies and legislative initiatives. On January 21, 2025, the administration released a key policy document entitled Removing Barriers to American Leadership in Artificial Intelligence, signaling a renewed emphasis on enhancing U.S. global competitiveness in the AI sector. China, for its part, has steadily advanced its domestic AI agenda through nationally coordinated initiatives, most notably the New Generation Artificial Intelligence Development Plan, launched in 2017, and the AI Plus Initiative, introduced in 2024. These efforts reflect a sustained, state-led commitment to positioning AI as a central engine of national development and technological self-reliance.

 

Although both countries actively promote technological innovation, their models differ fundamentally. The United States relies on a market-driven innovation model led by private investors who pursue economic returns, with government support provided through policies and executive orders. In contrast, China, drawing on its tradition of centralized planning, operates a robust scientific and technological innovation framework that links commercial and defense sectors under integrated national strategies and directives. While the U.S. retains long-standing advantages in commercial and research R&D, its defense procurement system suffers from structural limitations that make it ill-suited for incorporating startups. China, by comparison, faces fewer institutional barriers when mobilizing commercial and industrial resources for defense applications, but its state-led model encounters challenges related to long-term sustainability.

 

c. Expansion of the U.S.–China AI Rivalry into Global Governance Competition

 

Although the U.S.–China AI rivalry was initially defined by competing innovation strategies, it has gradually evolved into a full-scale contest over global AI governance. This development is driven by the proliferation of national AI strategies and legislation around the world. Whereas only one AI-specific law existed globally in 2016, in 2022 alone thirty-seven countries enacted AI-related legislation. As of now, more than 1,000 AI-related policies and legal measures have been introduced across approximately seventy countries. Both the United States and China recognize the need to institutionalize AI governance at the global level and are actively vying for leadership in both commercial and military AI governance domains. This emerging contest over global governance between the U.S. and China is expected to remain sharply competitive across all fronts, with geopolitical bloc alignments becoming increasingly complex and contested.

 

 

3. Current Issues and Agenda in Global AI Governance

 

a. International Organizations and Frameworks as Discussion Mechanisms

 

In recent years, global and regional institutions such as the United Nations, UNESCO, OECD, the European Union, and G7/G20 have rapidly advanced discussions on AI governance. However, these efforts remain limited by disjointed and fragmented structures. While AI governance has largely been pursued at the national level through strategic policies and legal frameworks initiated by leading AI powers, discussions at the global level have only recently gained significant traction. These international bodies commonly advocate core ethical principles for AI use and emphasize the need to establish robust governance frameworks. Central to ongoing debates are ethical and normative guidelines designed to address AI-related risks and support effective risk management mechanisms.

 

The UN's Summit of the Future, held in September 2024, adopted the Global Digital Compact, signaling the beginning of formal multilateral deliberations on AI governance. UN Secretary-General António Guterres first underscored the urgency of addressing AI-related risks at the General Assembly level in his 2021 Our Common Agenda report. He has since proposed the establishment of a dedicated international organization under the UN framework to manage such risks, citing the International Atomic Energy Agency (IAEA) and the Intergovernmental Panel on Climate Change (IPCC) as institutional reference models. These bodies are recognized for their robust mechanisms for verification, monitoring, and risk assessment, functions that are increasingly seen as relevant for AI governance. Parallels have been drawn between nuclear technologies (e.g., uranium enrichment and reprocessing) and AI-related domains (e.g., data, advanced chips, and foundation models). These analogies are explicitly referenced in both the interim and final reports of the UN High-Level Advisory Body on AI, underscoring the applicability of established international verification regimes to the regulation of emerging technologies.

 

Currently, competition is intensifying among the United States, China, and other leading AI powers over the establishment of international institutions and governance frameworks for artificial intelligence. In 2024, both the United States and China independently proposed and secured the adoption of AI-related resolutions at the UN General Assembly. A/RES/78/265, initiated by the United States, and A/RES/78/311, introduced by China, illustrate how both countries are seeking to shape the global AI agenda through multilateral channels. Despite their growing rivalry, the two resolutions contain limited yet meaningful elements of cooperation. This dynamic recalls the strategic dialogues that unfolded between the United States and the Soviet Union during the Cold War's nuclear arms race. While both resolutions underscore the role of AI in advancing sustainable development, key differences persist. The United States places greater emphasis on risk assessment and verification mechanisms to ensure responsible AI use. In contrast, China focuses on bridging technological gaps and building capacity, particularly with the goal of strengthening its leadership among Global South countries.

 

b. Key Agendas in Global Governance Discussions

 

The principal agenda in global AI governance focuses on establishing principles for the safe and responsible use of AI, with particular emphasis on human rights, transparency, accountability, and the mitigation of bias. However, the discourse remains uneven and fragmented, revealing ideological divides between Western democracies and the Sino-Russian bloc. Russia and China have criticized Western frameworks of responsibility as implicitly hypocritical and politically driven. The United States promotes a comprehensive approach to governance that addresses the entire AI lifecycle, from design and development to deployment and oversight. This approach is anchored in international law, human rights, data protection, sustainable development, and accountability, and it presumes the need for robust systems of risk assessment and verification. China, advocating from a Global South perspective, places greater emphasis on bridging technological gaps and enhancing national capacity in order to expand its international support base.

 

Global AI governance has traditionally distinguished between commercial and military applications. However, principles such as accountability and transparency, originally applied to commercial use, are increasingly relevant to military contexts. Given the dual-use nature of AI, the boundary between these domains has become increasingly blurred. In terms of innovation, systems are often developed with both applications in mind from the outset, a pattern described in Korean discourse as "spin-up." At the same time, commercial AI is rapidly absorbed into national security agendas through "spin-on." As these trends intensify, the longstanding separation in governance frameworks may give way to a more comprehensive approach centered on international security.

 

Technical standardization in AI governance, particularly in the areas of data, computing, and algorithms, has become a central arena in the technological rivalry between the United States and China. In the domain of data governance, three dominant positions continue to shape global debates. The United States advocates for the free flow of data and open access. China, along with many developing countries, emphasizes data sovereignty. The European Union prioritizes privacy, human rights, and strong data protection standards.

 

Computing governance, now one of the most contentious areas, is closely tied to economic security and global supply chains. The U.S. currently maintains a dominant position and uses export controls, supply chain security, and technical standardization as its principal governance tools. China, responding to U.S. restrictions, is pursuing a more assertive role in global governance by capitalizing on perceived gaps in U.S. leadership.

 

The idea of modeling AI computing governance on the nuclear non-proliferation regime is receiving growing attention. Mechanisms such as verification, inspection, and supply chain control, which have been applied across the entire process of nuclear production and transfer, are now being discussed as key elements in the governance of AI computing. Proposals to establish an international organization for this purpose have also drawn inspiration from the International Atomic Energy Agency (IAEA). Despite notable structural similarities, applying the IAEA model to AI governance presents considerable challenges. In particular, adapting the comprehensive verification system used throughout the uranium enrichment cycle to the full AI computing lifecycle involves both technical and institutional obstacles. Nevertheless, the development of computing governance frameworks based on risk assessment and verification mechanisms is becoming an increasingly realistic prospect.

 

In algorithm and model governance, competition between open and closed models has emerged as a key issue. The United States is generally pursuing a closed-model strategy, while China, as a latecomer, promotes open or hybrid approaches. This divide is exemplified by the competition between ChatGPT and DeepSeek. Governance over this issue is largely driven by private big tech companies and reinforced through national policy, and the approaches of individual companies and states are now expanding into the global governance landscape. As generative AI is expected to generate high added value, technologies previously developed through open-source models are increasingly becoming proprietary. The debate over appropriate levels of algorithmic safety, transparency, and reliability reflects ongoing divergence not only among states but also between major tech firms. These concerns are closely tied to the lack of explainability in AI systems. When systems malfunction, they may execute unintended tasks, prompting regulatory efforts to enhance explainability, including the promotion of more interpretable systems, often described as white-box models.

 

c. Creation of International Norms as a Component of AI Governance

 

AI systems and technologies are characterized more as instrumental tools than as discrete domains or sectors, such as cyberspace or outer space. Although some degree of categorization exists, the formation of international norms related to AI tends to be fragmented and dispersed across multiple sectors and issue areas. In comparison, international norm-building in areas like cyberspace and outer space has developed in a more comprehensive and domain-specific manner. AI, by contrast, exhibits a clear tendency toward segmented and piecemeal norm formation. Within the context of international security, efforts to establish categories for AI governance are beginning to take shape. Recent initiatives are attempting to incorporate safety concerns into broader security-oriented norm development. As normative competition over emerging technologies intensifies, rivalry between the United States and China is becoming increasingly evident in the AI sector as well, particularly within security frameworks. The development of international norms in the security context requires careful consideration of three elements: risk, threat, and vulnerability. As actors begin to define and operationalize these concepts, political and strategic divisions among states and blocs are expected to intensify.

Certain technical elements of AI systems are being disseminated internationally through national policies and legal frameworks introduced by individual countries such as the United States. These efforts are further expanding through bilateral and minilateral normative documents. The key issues of data, computing, and algorithms that constitute AI systems are each undergoing international norm-setting through minilateral channels and international organizations. These processes are unfolding within the broader context of global supply chains and technology standardization. In practice, the United States is actively pursuing a range of legislative measures in related areas, including export controls, technology transfer restrictions, and investment screening. These measures are being extended from the national level to bilateral, minilateral, and multilateral frameworks. Regarding the military use of AI and its integration into weapons systems, major AI powers remain reluctant to promote the creation of binding international legal instruments. Although the early establishment of a treaty appears unlikely, discussions on non-binding international norms are expected to move forward in a more concrete and detailed manner.

 

 

4. Future Prospects and Implications

 

a. Advancing Global AI Governance in the Context of International Security

 

The expansion of AI applications in both commercial and military sectors has brought increasing attention to the need for global governance. Yet discussions on this issue have remained limited, due in large part to uncertainty surrounding the impact of AI and the unpredictable pace of its technological advancement. Nevertheless, significant shifts have begun to emerge. In recent years, discourse on AI governance has gained momentum, and by 2024, a turning point appears to have been reached, with the United Nations and other international platforms producing notable outcomes. Previously, issues related to AI development, safety, international security, military use, and weapons systems were addressed in a fragmented and disjointed manner. These discussions are now expected to increasingly converge within the broader framework of international security.

 

AI governance discussions centered on international security are more likely to develop along competitive rather than cooperative lines. As national interest increasingly overrides efforts to build shared norms, the establishment of a neutral international governance body is expected to face significant challenges. These include a lack of international cooperation and coordination, heightened interstate competition and erosion of trust, fragmentation of governance systems, the concentration of influence among large AI companies, and limited public understanding and education regarding AI-related risks. Regarding core governance agendas, existing debates around the human-centered approach and the risk-based approach are expected to evolve. Going forward, key priorities are likely to include identifying elements that pose threats to international security and developing confidence-building measures and international norms in response.

 

b. Escalating U.S.–China Competition in Global Governance

 

The U.S.–China AI rivalry, which previously focused predominantly on domestic innovation strategies, is expected to intensify further within global governance frameworks. Given the Trump administration's anticipated nationalist and inward-looking stance, short-term delays in global governance discussions may occur. However, increasing emphasis on commercial interests and nationalistic approaches will eventually compel the United States to participate actively in shaping global governance structures to its advantage. Concurrently, China will persistently pursue a proactive governance role, especially if it sees potential gains in technological proliferation and international influence. Thus, the widespread adoption of AI technologies globally will likely transform the ongoing national strategic competition between the U.S. and China into intensified competition for leadership in global AI governance.

 

c. Institutionalizing Governance for Military Applications of AI and Arms Control

 

Artificial intelligence is increasingly regarded as a key instrument of asymmetric power, and its military applications are expected to become more sophisticated, thereby intensifying the arms race. Technologically advanced countries are actively accelerating the development of AI-enabled military capabilities, and the scope and complexity of their applications will likely grow in proportion to national technological capacities. Within the broader framework of global AI governance, the need to establish governance mechanisms and international norms to ensure the safety of military applications is expected to gain traction. Although initial governance efforts focused on commercial uses, recent discussions have begun to address soft norms concerning military applications in a more concrete and structured manner.

 

As military uses of AI continue to expand, the issue of arms race prevention is emerging as a priority in military governance discourse. The inherent technical complexity of AI poses major challenges in devising effective verification mechanisms for arms control. Historically, successful arms control agreements have included detailed technical definitions along with robust verification and enforcement tools to deter violations. In addition, within the context of arms control, preventing the transfer of AI-based military technologies to non-state actors is likely to become a pressing concern.

 

d. Strengthening AI Technology Control Governance from an Economic Security Perspective

 

As the U.S.–China competition for AI technological supremacy intensifies, the United States is expected to expand and reinforce its export control governance at the bilateral, plurilateral, and global levels as a means to secure a competitive edge over China. The new regulatory framework on AI technology controls currently being prepared by the United States is likely to shift toward negotiating individual deals with each country. In this case, preferential treatment previously granted to allied nations such as South Korea and Japan, which have been able to import U.S. AI chips without restrictions as part of allied group arrangements, may be revoked. In this context, the U.S. Bureau of Industry and Security (BIS) is expected to strengthen extraterritorial measures by defining the use of Huawei's Ascend AI chips, developed in China, as a violation of U.S. export controls. In fact, the Trump administration's first-term initiative to exclude Huawei was aggressively advanced through a multi-layered process, starting with domestic legal and policy tools, then expanding to bilateral negotiations, plurilateral platforms, and ultimately the multilateral "Prague 5G Security Conference."

 

In light of Russia's invasion of Ukraine and the deepening strategic rivalry between the United States and China, the international non-proliferation regime stands at a critical crossroads. This context has given rise to discussions on the emergence of a Post-Wassenaar Regime for AI-related technology controls. The Wassenaar Arrangement, which operates on a consensus-based decision-making structure, has shown clear limitations in producing binding agreements due to conflicting interests among member states. In particular, Russia's membership and its favorable stance toward non-member China have made it difficult to adopt the stringent export control measures sought by the United States. Including emerging technologies such as AI and AI-related semiconductors in the Wassenaar control list has been especially challenging, prompting the United States to design a new export control governance framework. One prominent scenario was the formation of a Post-Wassenaar Regime that excludes Russia, but recent shifts in global dynamics have also opened the possibility of mini-lateral or economic security-oriented regimes. The United States has already added a wide range of AI-related software, semiconductor technologies, equipment, and items to new control lists and is actively working to apply these measures across multilateral platforms, including but not limited to the Wassenaar Arrangement. In response, China has also moved to strengthen its own export control mechanisms for emerging technologies and is increasingly positioning such measures as a key agenda within global AI governance discourse.