Canonical Concepts

Frameworks

61 original analytical frameworks introduced in our research, grouped by source article. Each framework has a permanent URL for citation.

Allied Toolchain Denial Regime (ATDR)

US/Netherlands/Japan export control framework.

Asymmetric Ecosystem Strategy (AES)

NVIDIA dominance through CUDA ecosystem lock-in.

Ecosystem Prepositioning Strategy (EPS)

Embedding dependencies so extraction costs exceed continued dependence.

Overnight Capability Reordering Scenario (OCRS)

Taiwan status change instantly reorders AI capability landscape.

Physical Substrate Primacy Principle (PSPP)

AI constrained by physical manufacturing. Silicon controls intelligence.

Private-Sovereign Entanglement Problem (PSEP)

Private companies own sovereign-scale AI infrastructure.

Semiconductor Catch-Up Time Constant (SCTC)

5-10 year structural delay in replicating semiconductor capability.

Sovereign Silicon Trajectory (SST)

Multi-decade path to indigenous semiconductor capability.

Supply Chain Leverage Point Framework (SLPF)

Identifying disproportionate supply chain control points.

Taiwan Fabrication Concentration Problem (TFCP)

90%+ of advanced chips produced in a single disputed territory.

The Compute Control Hierarchy (CCH)

Power structure: fab capacity → GPU design → software → model developers.

The Intangible Fallacy (IF)

AI is physical infrastructure, not just software.

The Packaging Bottleneck Reality (PBR)

CoWoS packaging now limits AI compute scaling.

The Regulatory Sieve Effect (RSE)

Export controls create temporary barriers competitors work around.

The Substrate Shock Doctrine (SSD)

Exploiting supply disruptions to restructure competitive positions.

The Threshold Evasion Cycle (TEC)

Thresholds set, competitors evade, thresholds lower, repeat.

The Wrong Room Problem (WRP)

AI policy and semiconductor policy made in separate rooms without overlap.

Civilizational Competence Outsourcing Analysis (CCOA)

A historical analytical framework examining how civilizations that outsourced core competencies experienced predictable patterns of knowledge atrophy and eventual systemic vulnerability. The model applies lessons from Roman military outsourcing, Polynesian navigation loss, and other historical cases to contemporary AI-mediated skill transfer, revealing consistent mechanisms of civilizational fragility.

Cognitive Offloading Atrophy Model (COAM)

A neurologically grounded framework explaining how outsourcing cognitive tasks to AI systems reduces brain activation, memory consolidation, and skill retention. When the brain delegates functions to external systems, the neural pathways supporting those functions measurably atrophy, creating irreversible competence loss beneath apparent productivity gains.

Tail Risk Invisibility Principle (TRIP)

A framework explaining why skill decay generates no market signals until catastrophic failure occurs: the costs are intergenerational and diffuse, manifesting only in rare crisis events. Markets optimize for median cases, while civilizational competence loss is an unpriced tail risk that becomes visible only during system failures or paradigm shifts.
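
A minimal Monte Carlo sketch of why the tail stays invisible, using purely hypothetical costs and probabilities: the median per-period cost (the signal markets see) stays flat while the expected cost, dominated by the rare catastrophe, is an order of magnitude higher.

```python
import random

# Hypothetical parameters, chosen only to make the mechanism visible.
random.seed(0)
P_CATASTROPHE = 0.001      # rare crisis event per period
CATASTROPHE_COST = 10_000  # concentrated loss when the crisis hits
ROUTINE_COST = 1           # diffuse, ordinary per-period cost

costs = sorted(
    ROUTINE_COST + (CATASTROPHE_COST if random.random() < P_CATASTROPHE else 0)
    for _ in range(100_000)
)
median_cost = costs[len(costs) // 2]
mean_cost = sum(costs) / len(costs)

print(f"median cost per period: {median_cost}")    # ~1: what markets price
print(f"mean cost per period:   {mean_cost:.2f}")  # ~11: the unpriced tail
```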

Three-Generation Decay Model (TGDM)

A framework mapping how human capability erodes across generations as AI adoption progresses: Generation 1 (Expert) builds tools and uses AI to accelerate mastery, Generation 2 (AI-Assisted) understands concepts but delegates execution, Generation 3 (AI-Dependent) can only prompt and validate, and Generation 4 (Incapable) cannot generate, validate, or recover. The transition from Generation 2 to 3 represents the critical threshold where competence collapse becomes inevitable.

Validation Professional Framework (VPF)

An analytical model describing the emerging class of workers who can review and approve AI-generated output but cannot generate equivalent work from first principles. This represents a structural shift in professional capability where productivity increases while independent judgment and resilience decrease, creating a new type of fragile expertise.

Agent Accountability Corridors (AAC)

A bridging mechanism concept for creating accountability pathways when traditional legal frameworks fail to address autonomous AI decision-making. This represents a proposed solution for establishing clear responsibility chains in systems where neither the AI nor human operators fit traditional accountability models.

Asymmetric Legibility Principle (ALP)

The mechanism by which AI systems remain opaque and illegible while human overseers are fully visible and documented, so that accountability scrutiny flows toward whatever is legible when failures occur. This creates a structural injustice in which the illegible AI system escapes consequences while the legible human absorbs them.

Liability Laundering Framework (LLF)

A three-step institutional design pattern where AI systems make autonomous decisions, nominal human approvers are inserted who cannot meaningfully evaluate those decisions, and when failure occurs, the human absorbs accountability while AI systems, corporations, and vendors escape consequences. This represents a structural arrangement that serves corporate interests by distributing responsibility until it evaporates.

Systemic Doctrinal Collapse Theory (SDCT)

The simultaneous breakdown of multiple legal doctrines (agency law, product liability, criminal law, contract law) when confronted with agentic AI systems. These doctrines were independently designed around the assumption that consequential actions are performed by entities with legal personhood, moral capacity, and identifiable intent; agentic AI violates all of these assumptions simultaneously.

Technology Liability Void Pattern (TLVP)

A historical pattern where major technological transitions that decouple human agency from physical execution create liability voids lasting decades, filled only after catastrophic failure makes inaction politically untenable. The pattern shows consistent industry resistance arguments and eventual strengthening rather than destruction of industries through accountability frameworks.

Digital Knowledge Substrate Analysis (DKSA)

A framework for analyzing the composition and quality of the internet's information corpus as it transitions from primarily human-generated to AI-generated content. This includes tracking the tipping point where synthetic content becomes the majority and measuring its impact on future AI training.
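
A toy crossover calculation of the tipping point, under assumed growth rates; the numbers are illustrative, not estimates from the framework:

```python
# Human output grows slowly; synthetic output compounds. Find the year
# synthetic content becomes the majority of new corpus units.
human_units, synthetic_units = 100.0, 5.0     # hypothetical baseline, 2024
HUMAN_GROWTH, SYNTHETIC_GROWTH = 1.02, 1.60   # +2% vs +60% per year (assumed)

for year in range(2024, 2040):
    share = synthetic_units / (human_units + synthetic_units)
    if share > 0.5:
        print(f"tipping point under these assumptions: {year} "
              f"(synthetic share = {share:.0%})")
        break
    human_units *= HUMAN_GROWTH
    synthetic_units *= SYNTHETIC_GROWTH
```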

Institutional Knowledge Velocity Mismatch (IKVM)

The framework describing how human institutions designed for human-speed knowledge production (like peer review) cannot process machine-speed content generation. This mismatch amplifies contamination as quality control systems are overwhelmed by synthetic content volume.
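
The mismatch can be sketched as a simple queue: fixed human review capacity against compounding machine-speed submissions (all rates are hypothetical). Once arrivals exceed capacity, the unreviewed backlog compounds, and the quality gate is either overwhelmed or bypassed.

```python
REVIEW_CAPACITY = 1_000   # items a human-speed institution reviews per month
submissions = 1_000.0     # machine-speed submissions, starting at parity
GROWTH = 1.25             # assumed +25% per month from synthetic generators

backlog = 0.0
for month in range(1, 25):
    backlog += max(0.0, submissions - REVIEW_CAPACITY)  # unreviewed overflow
    submissions *= GROWTH

print(f"unreviewed backlog after 24 months: {backlog:,.0f} items")
```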

Recursive Data Contamination Theory (RDCT)

The mathematical framework describing how AI models trained on synthetic data from previous AI models undergo progressive degradation through variance collapse and mean drift. This creates a compounding cycle where each generation produces narrower, more biased outputs until the system collapses into incoherence.
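
A minimal generational sketch in the spirit of the model-collapse literature, under two stated assumptions: each generation fits a Gaussian to the previous generation's output, and it emits only high-likelihood samples (within two standard deviations), since generators favor probable content. Truncated sampling compounds into variance collapse; finite-sample fitting produces mean drift.

```python
import random
import statistics

random.seed(0)
N = 500  # samples per generation (illustrative)
data = [random.gauss(0.0, 1.0) for _ in range(N)]  # generation 0: "human" data

for gen in range(1, 13):
    mu = statistics.fmean(data)        # fit the previous generation...
    sigma = statistics.pstdev(data)
    data = []                          # ...then sample only the central region
    while len(data) < N:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:   # keep high-likelihood draws only
            data.append(x)
    print(f"gen {gen:2d}: mean = {mu:+.4f}, variance = {sigma ** 2:.4f}")
```

In this toy model each round of two-sigma truncation multiplies the variance by roughly 0.77, so the spread decays geometrically toward incoherence while the mean wanders.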

Tail Erosion Mechanism (TEM)

The specific mathematical pathway by which generative models systematically undersample rare, specialized, and edge-case knowledge from probability distributions. Each generation produces content from the 'fat center' while losing the long tails of human variance, creating progressively narrower knowledge distributions.
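
A sketch of the mechanism on a discrete, Zipf-like distribution of "knowledge items": each generation resamples only the top-p nucleus of probability mass and renormalizes, so the rare tail is deterministically discarded each round (the distribution and cutoff are assumptions for illustration).

```python
TOP_P = 0.9     # assumed nucleus-sampling threshold
ITEMS = 10_000  # hypothetical count of distinct knowledge items

# Zipf-like initial distribution: p(rank) proportional to 1/rank,
# already sorted most-common-first.
weights = [1.0 / rank for rank in range(1, ITEMS + 1)]
total = sum(weights)
dist = [w / total for w in weights]

for gen in range(1, 6):
    mass, cutoff = 0.0, 0
    for p in dist:          # smallest prefix covering TOP_P of the mass
        mass += p
        cutoff += 1
        if mass >= TOP_P:
            break
    dist = [p / mass for p in dist[:cutoff]]  # drop the tail, renormalize
    print(f"gen {gen}: {cutoff:,} items survive")
```

Each pass keeps the 'fat center' and permanently loses the long tail: on this toy distribution, roughly half or more of the surviving items vanish every generation.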

Training Data Authentication Crisis (TDAC)

The systemic failure of current methods to distinguish between human-generated and AI-generated content in training pipelines. This encompasses the breakdown of detection mechanisms, watermarking failures, and the absence of scalable solutions for maintaining training data integrity.