Commentary
By Barath Harithas
Published December 20, 2024
Introduction
Current analysis of U.S. semiconductor export controls on China often misses the forest for the trees. Analysts fixate on the flicker of every new development without first asking, “What are the real stakes of the competition?”
The Real Stakes: AI, Not Semiconductors
The underlying reason for this confusion is that export controls are not primarily about semiconductors themselves. They are about AI, and specifically, the pursuit of advanced AI systems that require access to exponentially growing amounts of computing power (i.e., compute). Training GPT-4 in 2023 required approximately 25,000 graphics processing units (GPUs); by 2030, experts estimate this could rise to millions of chips for a single frontier AI model.
This perspective refocuses how we should evaluate the effectiveness of export controls. They are a means to an end and, crucially, just one half of a paired strategy. Export controls seek to strangle China's compute pipeline, while U.S. domestic industrial policy and international partnerships seek to widen the absolute compute gap. Together, these measures aim to push China back as far as possible and catapult the United States toward artificial general intelligence (AGI). Regardless of whether one thinks AGI is a misnomer, the thinking among strategic planners goes that the first country to secure the AGI laurel will usher in a hundred-year dynasty.
In this respect, there is broad agreement that China's access to and production of advanced semiconductor technologies must be curbed. However, consensus on the starting point is not enough. Without well-defined success metrics, directionality risks becoming drift. First, the absence of metrics encourages a form of legibility theater, in which analysts measure what is easily quantifiable rather than what matters. Second, it often leads to analytical whiplash, with export controls hailed one moment and vilified the next.
Defining Success in the Compute Arms Race
This paper argues that compute power should be the principal unit of measurement in U.S.-China technological competition and estimates that by the end of 2025, the United States will have roughly 9.7 million more AI accelerators than China (14.3 million vs. 4.6 million), a three-fold advantage that translates into an even larger compute gap given that the United States has more performant chips. But this aggregate metric misidentifies the decisive factor. The real question is whether China can centralize sufficient compute resources to enable at least one domestic AI lab to match U.S. capabilities. National compute totals are not what matter; concentrated capabilities are.
This reframing has significant implications for policy effectiveness. As with nuclear deterrence, China does not need to achieve parity. Current evidence suggests that China has access to enough GPUs to match the scale of leading U.S. training runs (approximately 100,000 GPUs by xAI in Memphis) and possibly to keep pace as U.S. labs scale toward 300,000–500,000 GPU clusters and beyond. In other words, China can achieve critical mass even while operating at a significant overall disadvantage.
However, in the long run, and if current scaling trends continue, training a single frontier AI model by 2030 will require millions of GPUs. As compute requirements grow exponentially, China will need ever-larger resources just to have a seat at the table, let alone compete effectively.
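To give a sense of the scale this implies, the minimal sketch below extrapolates from the roughly 25,000 GPUs used to train GPT-4 in 2023. The doubling of per-model GPU counts each year is an illustrative assumption, not a figure from this paper; actual growth could be faster or slower.

```python
# Illustrative projection of frontier-model GPU requirements.
# Baseline: ~25,000 GPUs for GPT-4 in 2023 (cited above).
# The 2x annual growth rate is an assumption for illustration only.

BASELINE_YEAR = 2023
BASELINE_GPUS = 25_000
ANNUAL_GROWTH = 2.0  # assumed doubling of per-model GPU count each year

for year in range(BASELINE_YEAR, 2031):
    gpus = BASELINE_GPUS * ANNUAL_GROWTH ** (year - BASELINE_YEAR)
    print(f"{year}: ~{gpus:,.0f} GPUs per frontier training run")
# By 2030 this yields ~3.2 million GPUs, i.e., "millions of chips."
```

Even under this conservative assumption, per-model requirements cross the million-GPU threshold well before 2030.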
What Is the Long-Term Chinese Indigenization Playbook?
This paper further aims to challenge two analytical pitfalls in U.S.-China technology competition. The first is short-termism. Given the uncertain timeline for advanced AI systems, policymakers must guard against the comforting instinct to celebrate premature victories in a contest that may still be in its opening chapters. To this end, it is necessary to understand how the Chinese government is girding itself for long-term technological containment.
The second pitfall is a bias that tends to frame the United States as a dynamic protagonist while casting China as a static and predictable foil. This framing erodes the United States' ability to anticipate the countermoves that policymakers should be preparing for.
As such, this section takes the vantage point of a Chinese planner. Taking a first-principles approach, it begins by asking: Given constrained inputs and uncertain breakthroughs, how might China tactically reshape the terms of engagement and redefine the competitive landscape to its advantage? This paper identifies four broad elements:
- Restructuring the Bureaucratic Machine: The first step is taking stock of existing resources and optimizing their use. This requires a dual approach of streamlining inefficiencies while consolidating authority to maximize operational effectiveness.
- Centralizing Compute Resources: To close the absolute compute gap with the United States, China will likely centralize its resources, creating massive GPU clusters (e.g., 100,000+ GPUs) to ensure that at least one Chinese AI lab can achieve near-parity with U.S. labs for training frontier AI models.
- Fog of War: Employing Informational Opacity: By concealing both its weaknesses and strengths, China can deny predictability to the United States and prevent a premature clash before its capabilities are fully developed.
- Bypassing Linear Development Paths: A common fallacy is assuming that China must follow the same technological trajectory as the United States and its allies, i.e., progressing from deep-ultraviolet (DUV) lithography tools to more advanced extreme-ultraviolet (EUV) tools. Instead, China will likely pursue alternative pathways that circumvent existing chokepoints and redefine the competitive landscape on its own terms.
Bureaucratic Triage: Streamlining and Consolidating the Chinese Machine
Streamlining: Beijing realized that the 2018 expansion of the Ministry of Science and Technology (MOST) was a classic case of centralization overreach. MOST's sprawling mandate, absorbed from 15 other state organizations, drowned it in administrative tedium. In 2023, significant responsibilities were therefore reallocated away from MOST, leaving it a leaner agency focused on long-term strategic planning.
Consolidation: Streamlining alone does not confer decisive advantage. To this end, China established a new party body in 2023, the Central Science and Technology Commission (CSTC), at the Politburo Standing Committee (PBSC) level. In fact, the entirety of the trimmed-down MOST was designated as CSTC's executive office. In June 2024, state media revealed that Vice Premier Ding Xuexiang leads the CSTC. Ding is the first-ranked vice premier of China and the sixth-ranked member of the PBSC, as well as its only trained engineer. That President Xi Jinping delegated CSTC leadership to Ding rather than assuming it himself reflects a pragmatic recognition that technical expertise must guide semiconductor policy.
Importantly, Ding holds dual reins for the CSTC and the semiconductor-focused "Leading Small Group," which approves mergers and acquisitions and channels R&D funding to the private sector for "bottleneck technologies." This dual authority prevents the common pitfall of leading groups devolving into mere coordinators of competing agencies. It also ensures technical breakthroughs are rapidly operationalized through targeted investments, industrial consolidation, or resource reallocation.
Centralizing Compute Resources
Launched in 2022, the National Unified Computing Power Network (NUCPN) is China’s most ambitious effort at centralizing compute resources, pooling power across the country much like an electrical grid. It aims to deliver over 300 exaflops of computing power by 2025, with 60 percent concentrated in “national hub node areas.”
For comparison, by the end of 2024, based on shipment data and installed base figures from SemiAnalysis, major U.S. AI labs (OpenAI, Google, Meta, Anthropic, and xAI) are projected to command 2.21 million NVIDIA H100 GPUs, each delivering 4 petaflops, for a combined 8,840 exaflops, roughly 30 times China's target. The gap widens dramatically when all other AI accelerators and access to commercial data centers and neoclouds are included. By the end of 2025, U.S. labs will have access to 14.31 million AI accelerators.
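The arithmetic behind this comparison can be reproduced directly; the sketch below is a back-of-envelope check that uses the 4-petaflops-per-H100 figure and the 300-exaflop NUCPN target exactly as cited above, rather than independently sourced numbers.

```python
# Back-of-envelope check of the exaflops comparison above.
# All inputs are taken from the figures cited in the text.

H100_COUNT = 2_210_000        # projected H100s held by major U.S. AI labs, end of 2024
PFLOPS_PER_H100 = 4           # petaflops per H100, as used above
CHINA_TARGET_EFLOPS = 300     # NUCPN target for 2025, in exaflops

us_lab_eflops = H100_COUNT * PFLOPS_PER_H100 / 1_000   # 1 exaflop = 1,000 petaflops
print(f"U.S. lab H100 compute: ~{us_lab_eflops:,.0f} exaflops")                  # ~8,840
print(f"Multiple of NUCPN target: ~{us_lab_eflops / CHINA_TARGET_EFLOPS:.1f}x")  # ~29.5, i.e., roughly 30 times
```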
A more precise comparison requires examining China's projected compute resources by the end of 2025. According to SemiAnalysis data, China will have roughly 4.6 million AI accelerators. This figure excludes smuggled chips due to quantification challenges and assumes that projected deliveries proceed despite ever-changing export controls. While China has domestic cloud providers, their GPU deployments are already captured in our direct count. Analysis of Chinese access to foreign clouds through intermediaries is beyond the scope of this paper. The full breakdown follows, with a brief arithmetic cross-check after the lists.
U.S. Compute Resources (End of 2025): 14.31 Million Total
- AI lab/hyperscaler resources: 13.26 million
  - NVIDIA accelerators, Google TPUs, and AMD/Intel/AWS chips
- Additional compute access: 1.05 million
  - U.S. commercial data centers and neocloud providers
China Compute Resources (End of 2025): 4.6 Million Total
- Modified GPUs for the Chinese market: 2.69 million
  - A800/H800: 790,000 (pre-2023 ban); confirmed sales to Baidu, Tencent, Alibaba, and ByteDance
  - H20/B20: 1.9 million (2024–2025 projection)
- Domestic production: 1.9 million
  - Huawei Ascend 910B/910C GPUs (2024–2025 projection)
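As a consistency check, the totals above can be reproduced by summing the line items. The minimal sketch below uses only the figures from the breakdown, expressed in millions of accelerators.

```python
# Consistency check of the end-of-2025 accelerator totals listed above.
# All figures (in millions of accelerators) come from the breakdown in the text.

us_labs_hyperscalers = 13.26    # NVIDIA accelerators, Google TPUs, AMD/Intel/AWS chips
us_additional_access = 1.05     # commercial data centers and neocloud providers
cn_modified_gpus = 0.79 + 1.90  # A800/H800 (pre-2023 ban) + H20/B20 (2024-2025 projection)
cn_domestic = 1.90              # Huawei Ascend 910B/910C (2024-2025 projection)

us_total = us_labs_hyperscalers + us_additional_access   # 14.31 million
cn_total = cn_modified_gpus + cn_domestic                # ~4.6 million

print(f"U.S. total:  {us_total:.2f} million accelerators")
print(f"China total: {cn_total:.2f} million accelerators")
print(f"Gap:         {us_total - cn_total:.1f} million")  # ~9.7 million
print(f"Ratio:       ~{us_total / cn_total:.1f}x")         # roughly three-fold
```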