Hello China Tech

What Does China Really Want From Nvidia’s H200?

Inside the New Reality of Taxed Access, Limited Adoption, and a Strategy of Non-Dependence.

Poe Zhao
Dec 10, 2025

When the White House announced that Nvidia’s H200 AI chips could be sold to “approved customers” in China, it sounded, at first glance, like a thaw. President Trump publicly framed it as a win-win: Nvidia regains access to a critical market, while the US Treasury takes a 25% cut of the revenue from those sales as part of the deal.

But this is not a return to the pre-control era. It is something new: export control turned into a toll booth. Nvidia can sell a powerful but not cutting-edge chip. The US government gets a revenue stream. Beijing gets a narrow, managed window into US compute. And everyone understands that the window can close again.

To make sense of this, you have to look at it from Beijing’s side, not just Washington’s. The important question is no longer “Can China buy the H200?” but:

  • Who in China is actually allowed to use it?

  • For what workloads?

  • And how much will the country really want to rely on it, given recent history?

The answers point to a world where US chips remain tactically useful to China, but no longer structurally central.

A Toll, Not a Thaw

The basic contours of the deal are now clear.

Trump announced that Nvidia can export its H200 GPUs to approved customers in China and some other markets, subject to screening by the US Commerce Department and a 25% cut of sales going to the US government.

This follows an earlier, smaller arrangement around Nvidia’s downgraded H20 chip, under which a 15% revenue share was reportedly agreed, but sales were later discouraged by Chinese authorities, who signaled that local firms should prioritize domestic accelerators instead.

Two constraints define the H200 decision:

  • The H200 is powerful, but it is not Nvidia’s frontier part. The latest Blackwell line and the upcoming Rubin chips remain fully off-limits to China.

  • Access is mediated on both sides: US export licenses on the way out, and Chinese regulatory screening on the way in.

Beijing has already signaled its intent. Multiple reports, citing sources briefed on regulators’ thinking, suggest China plans to limit access to H200, even after Washington’s approval, by requiring additional approvals or channeling purchases through designated entities.

This is the core geopolitical reality:

  • The US no longer fully blocks high-end AI chips to China; it meters access and takes a cut.

  • China no longer treats US GPUs as a default foundation; it treats them as a risky, expensive add-on.

The relationship has shifted from quasi-open trade to taxed interdependence.

Who in China Gets H200, and What Does That Tell Us?

From a Chinese policy perspective, the first question is not “How many H200s can we get?” but “Where in the system do we dare to put them?”

You can think of China’s AI compute demand as roughly split into three bands:

  1. Top-tier internet giants, cloud providers, and leading AI labs.

  2. Industry-specific AI companies in fields like medical imaging, finance, logistics, and industrial automation.

  3. Critical infrastructure and state-linked entities: telecom operators, state-owned cloud platforms, financial infrastructure, utilities, and public-sector data centers.

H200 is unlikely to penetrate all three layers evenly. The most plausible pattern is:

  • Concentrated use at the very top: a handful of giants and elite research labs running frontier experiments, flagship models, and benchmark-oriented projects.

  • Limited or symbolic presence in the middle, where domestic accelerators and tuned software stacks already offer more predictable total cost of ownership.

  • Minimal use at the bottom, in critical systems where supply continuity, security assurances, and political signaling matter more than peak FLOPs.

That pattern would fit both sides’ incentives. Washington can say it has not fully “cut off” China. Beijing can say it still has access to high-performance GPUs. Yet the core infrastructure of China’s digital state remains built on domestic or fully controlled hardware.

For global readers used to thinking in simple binaries – “access” or “no access” – this is the first mental shift:

Access in China will be stratified, not uniform.
The more strategic the workload, the more likely it is to run on domestic silicon.

What Workloads Will H200 Actually Run?

The second key question is about workloads, not headline specs.

Technically, H200 is a major upgrade over the compromised H20 parts that had been allowed into China earlier in 2025. Analysts point out that the H200 can handle both training and inference efficiently, while the H20 was effectively limited to inference.

But how it will be used matters more than what it can do on paper. There are four broad workload categories:

  • Frontier model training: extremely large language or multimodal models, new architectures, risky research projects.

  • Industry fine-tuning and multi-task training: adapting base models to domains like finance, healthcare, manufacturing.

  • Large-scale inference for consumer and enterprise products: chatbots, recommendation systems, office copilots, SaaS products.

  • Sensitive or regulated deployments: government services, core banking, telecom networks, critical infrastructure.

H200 is most likely to appear in the first category, and to some extent the second. Its role will be to push frontiers, hit benchmarks, and reduce training time on bleeding-edge models.

For day-to-day inference at scale, the calculus is different. The cost of designing, deploying, powering, and maintaining large inference clusters is dominated less by peak chip performance and more by total cost of ownership, software compatibility, and supply guarantees. Domestic accelerators – even if less powerful – can be competitive, or superior, on this metric.

And in government-adjacent or regulated environments, the political logic is uncompromising:

  • Using domestic chips may be technically sub-optimal today, but politically robust.

  • Using H200 at the core of those systems would reverse years of effort to reduce strategic dependence on US hardware.

That is why H200 is likely to shape China’s frontier R&D, not its mass-market AI infrastructure.

How Much Will China Really Want to Rely on H200?
