
The Variable Nobody's Governing: Why TSMC and NVIDIA Control Which AI Future We Get


AETHER Council | March 8, 2026 | 22 min read

TSMC and NVIDIA, along with ASML, exercise more practical control over AI development than any governance framework because they control physical compute bottlenecks. Current AI regulations target software, models, and data but ignore the semiconductor supply chain—the only layer where enforcement has real leverage.


By The Aether Council | Strategic Technology & National Security Analysis


Most AI governance debates are happening one layer too high.

Every serious framework published between 2021 and 2025 — the EU AI Act, the NIST AI Risk Management Framework, the Bletchley Declaration, China's Interim Measures for Generative AI, Biden's Executive Order 14110 — shares a structural blind spot so fundamental that it renders each of them contingently enforceable at best and strategically decorative at worst. They all discuss models. They all discuss training data, deployment safeguards, red lines, and alignment techniques. None of them formally model the variable that determines whether any of those frameworks can be enforced at all.

Call it The Intangible Fallacy: policymakers attempting to regulate weights, datasets, and deployment guardrails while ignoring the single variable that makes those policies enforceable in the first place — the physical substrate that AI runs on.

Software is fungible. Hardware is bounded. If you cannot control the physical silicon, your governance framework is a suggestion.

Two frameworks make that problem tractable:

The Compute Control Hierarchy (CCH) — the chain of leverage from raw materials to cloud deployment, identifying which actors control each layer and what that control enables or denies.

The Supply Chain Leverage Point (SCLP) Framework — the specific chokepoints where a single firm or jurisdiction can alter global AI development trajectories through one allocation, denial, licensing, or production decision.

The central conclusion is uncomfortable but unavoidable: TSMC and NVIDIA, together with ASML and a small set of allied-state firms, currently exercise more practical influence over the pace, geography, and enforceability of AI development than most formal AI governance institutions. Not because they write the rules. Because they control the bottlenecks.


I. The Compute Control Hierarchy

The standard AI governance stack treats compute as an input. That is analytically inadequate. Compute is not a generic input. It is a politically structured supply chain, and control at each layer means something different.

Layer 1: Raw Materials — The Mineral Chokepoint

The substrate of advanced semiconductors requires ultra-high-purity silicon, neon gas, palladium, gallium, germanium, and a portfolio of rare earth elements. These are not interchangeable commodities. Advanced fabs require inputs of extraordinary purity with zero tolerance for contamination.

China controls approximately 60% of global rare earth mining and 90% of rare earth processing, per the IEA's 2023 Critical Minerals Report. In July 2023, Beijing imposed export controls on gallium and germanium through MOFCOM — not as an embargo, but as a demonstration. A signal that Layer 1 can be weaponized selectively and calibrated to specific escalation dynamics. The U.S. Geological Survey confirmed the United States has near-zero domestic refining capacity for either element.

Neon gas — critical for excimer lasers in DUV lithography — was historically 45–54% sourced from Ukraine. Russia's invasion of Ukraine severed that supply. The industry has diversified since, but the episode established a precedent: a single geopolitical shock at Layer 1 propagates through every downstream layer with no governance mechanism to manage it.

Japan controls critical shares of photoresists and specialty chemicals through JSR, TOK, and Shin-Etsu. The mechanism is quality-gated dependency. Advanced fabs cannot substitute lower-grade chemistry without yield loss.

Governance implication: No AI governance framework accounts for the possibility that a raw materials disruption could constrain global AI compute production for 12–36 months with no substitute. This is Dependency Exposure Class I: a chokepoint where affected actors have no short-term mitigation and no institutional mechanism for coordinated response.

Layer 2: Fabrication Equipment — The Lithographic Bottleneck

This is the most consequential single chokepoint in the entire CCH, and it is controlled by one company.

ASML Holding, headquartered in Veldhoven, is the sole manufacturer of extreme ultraviolet (EUV) lithography machines on Earth. The TWINSCAN NXE and EXE series are effectively required for economical fabrication at 5nm and below; 7nm-class nodes are reachable with DUV multi-patterning, but only at severe cost and yield penalties. There is no alternative supplier and no workaround. Canon and Nikon manufacture DUV systems for mature nodes, but neither has a viable EUV program. ASML's monopoly is not a market outcome that competition might erode — it is a consequence of 20+ years of co-development with Zeiss SMT (EUV optics to sub-angstrom tolerances) and Trumpf (CO₂ lasers that generate EUV light by striking molten tin droplets), representing over €6 billion in R&D before the first commercial tool shipped.

Each NXE-series EUV system costs on the order of €200 million; the next-generation High-NA EXE:5000 series, enabling sub-2nm manufacturing, runs roughly €350 million per tool, with first deliveries to Intel and TSMC beginning 2024–2025. ASML produced approximately 53 EUV systems in 2023. Global demand exceeds supply. The waitlist is measured in years.

In September 2023, the Dutch government — under sustained U.S. diplomatic pressure coordinated through the trilateral export control arrangement with Japan — required licenses for advanced lithography exports to China. The U.S. Bureau of Industry and Security leverages the Foreign Direct Product Rule to restrict ASML from servicing, updating, or supplying spare parts for previously shipped advanced tools, progressively turning multi-hundred-million-dollar hardware into dead weight. New shipments of cutting-edge tools are effectively blocked.

Applied Materials, Lam Research, and KLA control essential deposition, etch, and process control tools in the United States. Tokyo Electron provides critical coater/developer and etch systems from Japan. This is what should be formally named The Allied Toolchain Denial Regime: a coalition-based export control architecture in which the U.S., Netherlands, and Japan coordinate controls over the production equipment necessary for advanced-node manufacturing.

Governance implication: The Dutch government's export control decision is, in functional terms, the most consequential AI governance decision made by any government to date. It determines which nations can manufacture frontier chips. No AI governance body had input. No framework models it. This is Dependency Exposure Class II: a monopoly chokepoint where a single actor's political alignment determines global technology trajectories — and where the governance decision is made through export control law, not AI oversight mechanisms.

Layer 3: Wafer Fabrication — The Foundry

This is the heart of the system.

Taiwan Semiconductor Manufacturing Company (TSMC) fabricates approximately 90% of the world's most advanced semiconductors at sub-7nm nodes. TSMC's N3 and N4/N5 process nodes manufacture the chips that power frontier AI: NVIDIA's H100 (N4), H200 (N4), and Blackwell-architecture B100/B200 (N4P), AMD's MI300X (N5/N6), Google's TPU v5 series, Amazon's Trainium2, and Microsoft's Maia 100.

Samsung Foundry is the only other sub-5nm actor, but has faced persistent yield challenges. Intel Foundry Services is strategically important but remains a catch-up actor. GlobalFoundries exited the advanced-node race in 2018.

TSMC is not merely a manufacturer. TSMC is the physical substrate of frontier AI. Every major AI lab — OpenAI, Anthropic, Google DeepMind, Meta FAIR, xAI — depends on chips from TSMC's Fab 18 in Tainan and related facilities. TSMC's capital expenditure in 2024 was $28–32 billion, most directed at advanced-node expansion. Demand from AI customers has created allocation scarcity. TSMC decides how many wafers each customer receives per quarter. This allocation decision is one of the most consequential resource distribution mechanisms in the global economy, made by a private company's operations planning team under no public accountability framework.
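Allocation scarcity of this kind can be made concrete with a minimal rationing sketch. The customer names and wafer counts below are illustrative, not TSMC's actual allocations, and real foundry allocation also weighs long-term agreements, prepayments, and strategic relationships:

```python
def ration_wafers(capacity: int, demand: dict[str, int]) -> dict[str, int]:
    """Proportionally ration scarce wafer starts across customers.

    Illustrative only: actual foundry allocation is not purely
    proportional.
    """
    total_demand = sum(demand.values())
    if total_demand <= capacity:
        return dict(demand)  # no scarcity: every request is filled
    # Scale each request down by the common capacity/demand ratio.
    return {customer: (requested * capacity) // total_demand
            for customer, requested in demand.items()}

# Hypothetical quarterly demand (wafer starts) exceeding capacity:
demand = {"accelerator_vendor_a": 60_000, "hyperscaler_b": 30_000,
          "mobile_soc_c": 30_000}
allocation = ration_wafers(capacity=80_000, demand=demand)
```

Even this toy version shows the political weight of the decision: the choice of rationing rule, not the market price, determines who trains frontier models next quarter.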

This creates what should be formally named The Taiwan Fabrication Concentration Problem: a condition in which a single island, through one company, hosts a disproportionate share of the world's most strategically valuable manufacturing capability.

Governance implication: TSMC's capacity allocation is the binding constraint on global AI compute supply. No government reviews these allocations. No international body monitors them. This is Dependency Exposure Class III: a fabrication monopoly in a contested geography where the governance gap is total.

Layer 4: Advanced Packaging — The CoWoS Constriction

This is the layer most policy discussions miss entirely.

Modern AI accelerators are not just fabricated chips. Their performance depends on advanced packaging — specifically CoWoS (Chip-on-Wafer-on-Substrate) and related 2.5D/3D integration techniques that pair GPUs with HBM stacks. CoWoS capacity was a more severe bottleneck than wafer fabrication for AI chip production throughout 2023–2024. TSMC has been aggressively expanding — reportedly tripling CoWoS capacity through 2025 — but demand continues to outpace supply.

HBM is controlled by a triopoly: SK Hynix (~53% of HBM3E), Samsung (~40–43%), and Micron (~4–7%, ramping). SK Hynix has been NVIDIA's preferred supplier, achieving HBM3E qualification first. Samsung faced yield and heat dissipation challenges, reportedly delaying its NVIDIA qualification into late 2024.
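The degree of concentration in HBM can be quantified with the standard Herfindahl-Hirschman Index; the shares below are midpoints of the ranges cited above, taken as assumptions for illustration:

```python
def hhi(shares_pct: list[float]) -> float:
    """Herfindahl-Hirschman Index: sum of squared market shares (in %).

    U.S. antitrust guidelines treat markets above roughly 2500
    as highly concentrated.
    """
    return sum(s * s for s in shares_pct)

# Approximate midpoints of the HBM3E share ranges cited above:
hbm_hhi = hhi([53.0, 41.5, 5.5])  # far above the 2500 threshold
```

An HHI well above 4000 places HBM among the most concentrated strategic input markets anywhere in the semiconductor stack.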

This is The Packaging Bottleneck Reality: in frontier AI, advanced packaging capacity is a first-order strategic variable. An AI accelerator without CoWoS integration and HBM is not a deployable training chip.

Governance implication: Advanced packaging and HBM supply are invisible to every AI governance framework but function as hard ceilings on chip production. This is Dependency Exposure Class IV: a hidden bottleneck that policy analysts cannot see because it sits too deep in the technical stack.

Layer 5: Chip Distribution — The Allocator's Veto

Even after chips are manufactured, they are not allocated neutrally. They are rationed.

NVIDIA holds an estimated 80%+ market share for AI training accelerators. Its dominance is not merely a hardware story — it is an ecosystem story. CUDA, NVIDIA's proprietary parallel computing platform first released in 2006, is the de facto programming model for AI workloads. Over 4 million developers use CUDA. Every major AI framework — PyTorch, TensorFlow, JAX — is CUDA-optimized. AMD's ROCm and Intel's oneAPI are viable alternatives but face an ecosystem gap measured in years.

During the acute scarcity of 2023–2024, NVIDIA's allocation decisions were not purely market-driven. CEO Jensen Huang personally engaged with major customers — the hyperscalers, sovereign AI programs, selected startups. The allocation framework reportedly prioritized volume commitments from hyperscalers with long-term purchase agreements, strategic relationships, and national security considerations where U.S. government guidance was a factor.

NVIDIA also designed the A800 and H800 as China-specific variants compliant with October 2022 BIS rules by reducing interconnect bandwidth below controlled thresholds. When BIS tightened controls in October 2023, those variants became non-exportable. NVIDIA's compliance with export controls — or its creative circumvention through derated products — determines the effectiveness of those controls.

NVIDIA is no longer simply a private technology firm. It is The Corporate State Proxy. When the U.S. Department of Commerce wishes to constrain Chinese AI development, it writes a performance density threshold specifically designed to force NVIDIA to alter its product architecture. NVIDIA's compliance mechanisms and allocation charts are the actual enforcement arm of U.S. AI policy. This is The Private-Sovereign Entanglement Problem: a private company's commercial decisions carry sovereign-level strategic consequences, while the company operates under no governance framework commensurate with that consequence.

Governance implication: NVIDIA is one of the most consequential governance actors in AI today. No governance framework models it as such. This is Dependency Exposure Class V: a distribution chokepoint where market structure determines access to frontier capability.

Layer 6: Cloud Infrastructure — The Sovereign Cloud Layer

The final layer is where compute becomes a callable service rather than a shipped object.

AWS, Microsoft Azure, Google Cloud, and Oracle Cloud operate the data centers where AI training and inference occur. These companies also serve as preferred channels for export-compliant access — the mechanism is jurisdictionally supervised compute access rather than unrestricted physical possession.

The vertical integration of cloud providers with frontier AI labs creates a structure that is not a neutral marketplace. Microsoft's exclusive relationship with OpenAI, Google's integration of DeepMind, and Amazon's $4 billion investment in Anthropic (with AWS as preferred cloud provider) mean cloud compute allocation is tied to strategic partnerships, equity stakes, and preferential access agreements.

For AI startups and academic researchers not affiliated with a hyperscaler, compute is the primary bottleneck — more than talent, more than data, more than funding. The National AI Research Resource pilot, launched in January 2024, is orders of magnitude below what frontier training requires.

Governance implication: The hyperscalers' pricing, allocation, and partnership decisions function as de facto AI development policy. Their vertical integration with frontier labs creates a structural conflict of interest no governance framework addresses.

Layer 7: Edge and Inference Deployment

A seventh layer is emerging and increasingly significant: the shift from centralized cloud inference to on-device deployment. Quantized and distilled models — Meta's Llama 3 family, Mistral's models — can run on consumer hardware. This distributes AI capability beyond the data center and complicates any governance regime that depends on centrally monitoring compute usage. Compute-based governance remains the binding constraint for training; for inference, it is becoming progressively less enforceable.
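The enforceability point follows from simple arithmetic on weight memory. A back-of-envelope footprint calculation (model size and precisions chosen for illustration) shows why quantized models escape the data center:

```python
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight-memory footprint in GB.

    Ignores KV cache, activations, and runtime overhead, so real
    requirements are somewhat higher.
    """
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# An 8B-parameter model (Llama-3-8B-class) at common precisions:
fp16 = model_memory_gb(8, 16)   # 16 GB: needs a high-end GPU
int4 = model_memory_gb(8, 4)    # 4 GB: fits consumer laptops and phones
```

A 4x reduction from quantization moves a model from controlled data-center hardware onto devices no export regime can track.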


II. The Supply Chain Leverage Point Framework

The SCLP framework asks a harder question: where, exactly, can one actor make one decision that changes the global AI trajectory?

An SCLP must meet three criteria: monopoly or near-monopoly control at the node (>60% with no substitutable alternative within 24 months), no viable workaround in the relevant timeframe, and a decision space that includes options that would materially alter global AI capability.
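The three criteria can be written down directly as a predicate. The example nodes below are illustrative characterizations, not formal market data:

```python
from dataclasses import dataclass

@dataclass
class Node:
    actor: str
    market_share_pct: float           # control at the node
    substitute_lead_time_months: int  # time to a viable alternative
    decision_alters_trajectory: bool  # denial/allocation materially matters

def is_sclp(n: Node) -> bool:
    """Apply the three SCLP criteria from the text."""
    return (n.market_share_pct > 60
            and n.substitute_lead_time_months > 24
            and n.decision_alters_trajectory)

# Sole EUV supplier: qualifies on all three criteria.
euv = Node("ASML", 100.0, 120, True)
# Commodity DRAM: substitutable, contested, and fungible. Not an SCLP.
dram = Node("commodity DRAM vendor", 40.0, 12, False)
```

The value of writing the criteria this way is that disagreements become empirical: one argues about a node's share or lead time, not about vibes.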

Five primary SCLPs exist as of 2025:

| SCLP | Actor | Control Mechanism | Geographic Risk |
|------|-------|-------------------|-----------------|
| SCLP-1 | ASML (+ Zeiss SMT, Trumpf) | Sole EUV lithography manufacturer | Netherlands / Germany |
| SCLP-2 | TSMC | 90%+ advanced-node fabrication + CoWoS | Taiwan |
| SCLP-3 | NVIDIA | 80%+ AI accelerator market + CUDA ecosystem | United States |
| SCLP-4 | SK Hynix / Samsung / Micron | HBM triopoly (~95% combined) | South Korea / United States |
| SCLP-5 | China (state-level actor) | 90%+ rare earth refining; gallium/germanium export controls | People's Republic of China |

The critical observation: SCLPs 2 and 4 sit within the first island chain of the Western Pacific, and SCLP-1's Netherlands location is only a partial exception, since ASML's sub-supply chain depends heavily on East Asian components. The physical infrastructure of frontier AI is concentrated in the single most contested geopolitical theater on Earth.

Control at any rung cascades: ASML withholds EUV → TSMC idles 3nm lines → NVIDIA rations H100s → xAI delays its next supercluster. The CCH is not a theoretical abstraction. It is the actual causal structure of global AI capability.
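That causal structure can be expressed as a dependency chain. The sketch below simplifies the CCH to a linear chain, which is close enough to show the propagation property:

```python
# Downstream dependency chain from the CCH (simplified, linear):
CHAIN = ["ASML (EUV)", "TSMC (3nm fab)", "NVIDIA (accelerators)",
         "cloud/AI labs (training runs)"]

def propagate_disruption(chain: list[str], failed: str) -> list[str]:
    """Everything at and downstream of a failed node is disrupted;
    upstream nodes are unaffected."""
    idx = chain.index(failed)
    return chain[idx:]

# A fab outage disrupts accelerators and training runs, not lithography:
affected = propagate_disruption(CHAIN, "TSMC (3nm fab)")
```

The asymmetry is the point: disruption only flows downstream, so leverage accumulates toward the top of the chain.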


III. The Substrate Shock Doctrine: What Happens Overnight

Any serious AI governance conversation must model the Taiwan contingency. Not because an invasion is certain. Because the entire frontier compute system is architecturally predicated on TSMC's uninterrupted operation.

Scenario A: Blockade

A PRC naval and air blockade of Taiwan would disrupt TSMC's supply chain without targeting TSMC directly. TSMC's fabs require continuous inputs: photoresists from JSR and Tokyo Ohka Kogyo, etchants from Stella Chemifa, gases, photomasks, replacement parts for EUV and DUV tools — all primarily Japan-sourced. A blockade severing maritime and air logistics forces TSMC to draw down on-site inventories. Industry estimates suggest 2–8 weeks of operational continuity before critical inputs are exhausted.

Within 30–60 days, TSMC's advanced-node output would begin to degrade. Within 90 days, it would approach zero for new wafer starts. Existing inventory — warehoused at NVIDIA, hyperscalers, distributors — would become the total available supply. A finite and rapidly depleting stock.
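The drawdown dynamics above follow from a one-line inventory model. The numbers below are assumptions taken from the 2–8 week industry estimate, not operational data:

```python
def weeks_until_exhaustion(on_site_weeks_of_input: float,
                           resupply_fraction: float) -> float:
    """Weeks of operation left when resupply drops to a fraction of
    normal consumption (0.0 = total logistics cutoff)."""
    if resupply_fraction >= 1.0:
        return float("inf")  # inputs replenished as fast as consumed
    return on_site_weeks_of_input / (1.0 - resupply_fraction)

# Midpoint of the 2-8 week inventory estimate, total cutoff:
baseline = weeks_until_exhaustion(5.0, 0.0)
# A leaky blockade (40% of normal inflow) only stretches the clock:
leaky = weeks_until_exhaustion(5.0, 0.4)
```

The model's lesson is that partial blockade-running changes the timeline by weeks, not months; only full resupply restores continuity.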

Scenario B: Direct Military Action

Physical destruction of TSMC's advanced fabs would represent a catastrophic and effectively irreversible loss. A single advanced fab represents $15–20+ billion in capital investment and 3–5 years of construction and qualification time. The embedded knowledge — process recipes, yield optimization data, workforce expertise — cannot be reconstructed from blueprints. Global production capacity for frontier AI chips would be eliminated for a minimum of 3–7 years, even under an optimistic scenario where Intel, Samsung, and TSMC's overseas fabs in Arizona, Kumamoto, and Dresden were rapidly accelerated.

The Strategic Implication

A Taiwan contingency would not merely slow AI development. It would create a discontinuity — a step-function reduction in global frontier compute supply persisting for years. Every AI governance framework predicated on continuous capability scaling (alignment research timelines, regulatory adaptation cycles, international coordination mechanisms) would be invalidated overnight.

This is The Overnight Capability Reordering Scenario: a geopolitical shock does not erase AI capability equally. It advantages actors with the largest existing installed base, chip stockpiles, secured cloud access, and domestic political priority. A Taiwan crisis would instantly transform AI from a growth market into a rationed strategic asset — and the actors holding pre-conflict silicon would constitute the new compute aristocracy.

This is also why the U.S. CHIPS and Science Act of 2022 — $52.7 billion for semiconductor manufacturing incentives, R&D, and workforce development — is not primarily an economic development program. It is a national security hedge against this scenario. It is also, by the analysis of the CCH, insufficient in scale and speed. The total advanced-node capacity delivered by U.S.-based fabs through 2028–2030 will remain a small fraction of what TSMC operates in Taiwan today.


IV. The Regulatory Sieve Effect: Are Export Controls Working?

The honest assessment: partially, unevenly, and more effectively on the manufacturing ceiling than on near-term access.

The BIS has implemented three major control rounds. October 2022 restricted exports of advanced logic chips above performance thresholds, EUV lithography tools, and U.S. persons supporting advanced semiconductor manufacturing in China — the most aggressive use of export controls for technology denial since CoCom. October 2023 closed the A800/H800 loophole by lowering performance density thresholds and broadening controlled items. 2024 and ongoing added entity list designations and advanced packaging equipment restrictions.

Where controls are working: China cannot manufacture leading-edge AI chips domestically. SMIC's 7nm-class N+2 process uses DUV multi-patterning techniques that are yield-constrained, costly, and cannot scale economically. Without EUV, manufacturing at 5nm and below at competitive yields is not viable. China's domestic equipment makers — SMEE for lithography, Naura and AMEC for etch and deposition — remain multiple technology generations behind ASML, Applied Materials, Lam Research, KLA, and Tokyo Electron.

Where controls are not working: Chinese firms engaged in time arbitrage, stockpiling hundreds of thousands of NVIDIA A100s before restrictions took effect. Entities access advanced compute through shell companies in Southeast Asia, the Middle East, and Central Asia. Chinese AI researchers still access compute through non-U.S. cloud providers in uncovered jurisdictions. Huawei's Ascend 910B, manufactured by SMIC, delivers roughly 60–80% of A100 training performance at lower yields and higher cost — a domestic floor below which Chinese AI capability will not fall regardless of controls.

This is The Regulatory Sieve Effect: the mechanism is porous. The export control regime is delaying, not denying, compute access. You cannot embargo an API call with customs agents. Until cloud providers are legally mandated to implement cryptographic KYC at the hypervisor level, Compute Arbitrage — leasing H100 instances via offshore shell entities from Western or Middle Eastern cloud providers — remains structurally available.

This produces The Threshold Evasion Cycle: regulators define a performance boundary; firms redesign products to remain commercially useful while technically compliant; regulators update controls; firms redesign again. The cycle favors the evader over the regulator.
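The evasion half of the cycle can be sketched as a toy classifier. The TPP formula (peak throughput times operand bit width) and the 4800 / 5.92 thresholds follow commonly reported figures from the October 2023 rule, but the actual regulation has multiple control tiers and aggregation rules that this sketch omits, and the chip numbers below are illustrative rather than real product specifications:

```python
def tpp(peak_tops: float, bit_width: int) -> float:
    """'Total processing performance' in simplified form: peak
    operations/sec (TOPS) times operand bit width."""
    return peak_tops * bit_width

def is_controlled(peak_tops: float, bit_width: int, die_area_mm2: float,
                  tpp_limit: float = 4800,
                  density_limit: float = 5.92) -> bool:
    """Toy version of the top control tier: controlled if TPP or
    performance density (TPP per mm^2 of die) exceeds its limit."""
    t = tpp(peak_tops, bit_width)
    return t >= tpp_limit or (t / die_area_mm2) >= density_limit

# A frontier-class accelerator trips the threshold easily...
frontier = is_controlled(peak_tops=1979, bit_width=8, die_area_mm2=814)
# ...so the vendor derates compute until the part is just compliant:
derated = is_controlled(peak_tops=550, bit_width=8, die_area_mm2=814)
```

Each regulatory update redraws the boundary; each product cycle re-optimizes against it. The classifier changes, the optimization loop does not.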

Net assessment: the controls are succeeding in slowing China's access to the frontier of AI compute, likely by 2–4 years for chip design and 5–10+ years for domestic manufacturing of equivalent capabilities at scale. They are not achieving anything resembling complete denial.


V. The Sovereign Silicon Trajectory: China's Realistic Independence Timeline

China's semiconductor self-sufficiency program is the largest state-directed industrial policy effort since the Soviet space program, driven by the same fundamental motivation: strategic vulnerability the national leadership considers existential.

The National Integrated Circuit Industry Investment Fund — the Big Fund — has deployed $19 billion (Phase I, 2014), $28.9 billion (Phase II, 2019), and $47.5 billion (Phase III, 2024) — the largest single tranche of semiconductor investment capital assembled by any government in history. Total direct state investment through the Big Fund alone exceeds $95 billion, supplemented by provincial funds and state-directed lending that brings total support well over $150 billion through 2030.

Where China has made real progress: Mature-node manufacturing at 28nm and above is approaching self-sufficiency. Chinese fabless design companies — HiSilicon, Cambricon, Biren, Moore Threads — have demonstrated competent and in some cases innovative architectures. Standard OSAT packaging is competitive globally.

Where structural barriers persist: EUV lithography is the hardest problem and the most durable barrier. SMEE has announced a 28nm-class DUV immersion tool, approximately 15 years behind ASML's current capability. Developing domestic EUV requires not just the scanner but the entire sub-supply chain: EUV light sources (Trumpf equivalent), EUV optics (Zeiss SMT equivalent with sub-angstrom surface precision), EUV pellicles, EUV-compatible photoresists. No credible timeline exists for China achieving domestic EUV capability before the early-to-mid 2030s at the earliest. EDA tools from Synopsys and Cadence remain 5–10 years ahead of Chinese alternatives. Process engineering talent — the tacit knowledge of yield engineering and defect analysis that lives in the workforce trained at TSMC, Samsung, and Intel — takes a generation to build, not a funding cycle.

Realistic timeline: This is the Semiconductor Catch-Up Time Constant — capital accelerates progress, but cannot compress every layer of industrial learning on command.

| Capability | Projected Domestic Achievement |
|------------|-------------------------------|
| Mature-node self-sufficiency (28nm+) | Ongoing now |
| Meaningful advanced AI accelerator narrowing | Late 2020s, specific niches |
| Full-spectrum frontier self-sufficiency | Well into 2030s |


VI. The Asymmetric Ecosystem Strategy: State-Sponsored Open Source as Long-Game Supply Chain Play

While Washington focuses on silicon chokepoints, Beijing is executing The Asymmetric Ecosystem Strategy — state-sponsored open-source quantum and post-classical computing infrastructure as a long-game hardware bypass.

The strategic significance is not that quantum systems will replace classical AI hardware in the near term. They will not. The significance is that state-sponsored open-source infrastructure is a way to reduce ecosystem dependence before hardware parity exists. If a state funds open-source compilers, orchestration layers, and developer tooling for emerging compute domains — AI accelerators, quantum systems, distributed runtimes — it builds what should be called The Ecosystem Prepositioning Strategy.

Entities like the Chinese Academy of Sciences are heavily subsidizing open-source quantum frameworks such as OriginQ. Huawei's HarmonyOS NEXT stack and Alibaba's open Qwen releases are tuned to run sparse models efficiently on mid-tier SMIC 7nm-class chips, using tensor-parallelism primitives to approach competitive efficiency on domestic hardware. Open source also diffuses state-funded IP: 2024 saw significant GitHub activity around Ascend-optimized CUDA alternatives, eroding U.S. software moats.

State-sponsored open source is not philanthropy and not merely innovation policy. It is supply chain shaping by ecosystem design. If you cannot win the von Neumann silicon bottleneck today, you fund the open-source architecture of the quantum tomorrow. The CUDA Dependency Trap is a long-game vulnerability: even if near-term hardware performance lags, a state building the software commons now is buying future strategic optionality that export controls cannot reach.


VII. What Policymakers Are Missing — The Wrong Room Problem

Most AI governance institutions are discussing model evaluations, transparency rules, misuse controls, safety cases, liability, and content provenance. Those matter. They are downstream from compute control.

A government that does not understand who controls wafer starts, CoWoS slots, HBM output, GPU allocation, and hyperscale cloud tenancy is not governing AI capability. It is governing paperwork attached to AI capability.

This is The Wrong Room Problem: the formal AI governance conversation is happening among model labs, regulators, ethicists, and standards bodies, while the decisive capability variables are being set by foundries, lithography firms, packaging providers, memory manufacturers, and cloud operators.

The strategic room is not the safety summit. It is:

the Bureau of Industry and Security; the Dutch licensing office; TSMC capacity planning; NVIDIA allocation meetings; SK Hynix HBM expansion decisions; Microsoft, Amazon, and Google cloud access policy; Japanese chemical export governance; Taiwan Strait deterrence strategy.

That is where AI futures are materially sorted.


VIII. What Real AI Governance Would Look Like

If policymakers want governance commensurate with actual power, they need to govern the compute substrate explicitly.

That means building a Compute Governance Doctrine that treats advanced compute as a governable strategic asset rather than a commercial product category. It means maintaining live government assessments of every CCH node — raw materials, tools, fabs, packaging, HBM, distribution, cloud deployment. It means creating standing policy processes monitoring TSMC concentration risk, ASML licensing dependency, NVIDIA allocation power, and HBM bottlenecks. It means moving from chip controls to compute controls, extending governance to cloud-provided compute, foreign subsidiaries, and jurisdictional evasions.

It means treating Taiwan as an explicit AI governance variable, with contingency planning that models global capability shock and emergency compute allocation regimes — not just military and economic scenarios. It means coordinated allied investment in fabrication redundancy, advanced packaging, HBM capacity, toolchain resilience, and cloud infrastructure across the United States, Japan, the Netherlands, South Korea, and trusted partners. And it means building alternatives to CUDA lock-in — open tooling and interoperable software stacks as strategic resilience measures, not just commercial preferences.
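One concrete form the "live assessment" above could take is a machine-readable registry of CCH nodes. Everything below — the node entries, lead times, and the query — is an illustrative sketch of the data structure, not a real government dataset:

```python
from dataclasses import dataclass
from enum import Enum

class ExposureClass(Enum):
    """Dependency Exposure Classes I-V from Part I."""
    RAW_MATERIALS = 1
    TOOL_MONOPOLY = 2
    FAB_MONOPOLY = 3
    HIDDEN_BOTTLENECK = 4
    DISTRIBUTION = 5

@dataclass
class CCHNode:
    layer: str
    controlling_actors: list[str]
    jurisdiction: str
    exposure: ExposureClass
    substitute_lead_time_months: int

# Illustrative entries only; a real registry would be continuously updated.
REGISTRY = [
    CCHNode("lithography", ["ASML"], "NL", ExposureClass.TOOL_MONOPOLY, 120),
    CCHNode("advanced fab", ["TSMC"], "TW", ExposureClass.FAB_MONOPOLY, 60),
    CCHNode("packaging/HBM", ["TSMC", "SK Hynix", "Samsung", "Micron"],
            "TW/KR/US", ExposureClass.HIDDEN_BOTTLENECK, 36),
]

def worst_exposures(registry: list[CCHNode],
                    min_lead_months: int = 24) -> list[str]:
    """Layers with no substitute inside the policy-relevant window."""
    return [n.layer for n in registry
            if n.substitute_lead_time_months > min_lead_months]
```

Even a skeletal registry like this would let a policy process ask, quarterly and automatically, which layers have no substitute inside the planning horizon.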


Conclusion: Govern the Factory, Not Just the Algorithm

The core mistake in current AI governance is a category error.

We are treating AI as if it is governed primarily where it is discussed — in labs, legislatures, summits, and standards forums. In reality, it is governed first by industrial concentration and strategic supply chain control. The actors who control this supply chain are not governments. They are a handful of private corporations whose quarterly allocation decisions determine which nations, which companies, and which research agendas access frontier AI capability.

TSMC determines how much frontier compute can physically exist. NVIDIA determines where much of that compute goes. ASML determines who can join the leading-edge manufacturing club. SK Hynix, Samsung, and Micron determine whether AI chips have the memory architecture to matter. Microsoft, Amazon, and Google determine which compute becomes governable through cloud jurisdiction. And the United States and its allies determine whether these chokepoints operate as markets, strategic controls, or both.

The most important governance decisions are made before training begins — when governments issue licenses, when toolmakers ship machines, when TSMC allocates wafer starts, when NVIDIA allocates accelerators, when HBM vendors expand capacity, and when cloud providers decide who gets access under what jurisdiction.

If you want to govern AI, you must govern the substrate. Track CoWoS packaging yields. Monitor DUV multi-patterning defect rates. Treat cloud compute APIs as export-controlled munitions. Until the policy apparatus understands that TSMC, ASML, and NVIDIA are the actual arbiters of our AI future, our governance frameworks will remain elegantly written fictions.

Because the variable nobody is formally governing is the one already deciding the outcome.


The Aether Council | Strategic Technology & National Security Analysis
