Frameworks
60 original analytical frameworks introduced in our research, grouped by source article. Each has a permanent URL for citation.
Adversarial Ecosystem Model (AEM)
A threat model describing self-reinforcing networks of criminal, state, and ideological actors sharing fine-tuned model capabilities through underground exchanges, with each iteration improving on the last. Represents the mature state of distributed AI threat infrastructure.
Behavioral Envelope Baseline (BEB)
A defensive security standard that establishes a cryptographically logged behavioral baseline for each operator, capturing legitimate process-level behavior ranges through structured onboarding. The baseline forms a comparison layer that catches credential misuse regardless of attacker speed.
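A minimal sketch of how an envelope check might work, assuming per-operator baselines expressed as (mean, stddev) ranges and a hash-chained log standing in for the framework's cryptographic audit trail. All names, metrics, and thresholds here are illustrative, not the BEB specification:

```python
import hashlib
import json
import time

class BaselineLedger:
    """Append-only, hash-chained log: a stand-in for BEB's
    'cryptographically logged' behavioral baseline."""
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def append(self, record: dict) -> str:
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((self._prev_hash + payload).encode()).hexdigest()
        self.entries.append({"ts": time.time(), "record": record, "hash": digest})
        self._prev_hash = digest
        return digest

def within_envelope(observed: dict, baseline: dict, tolerance: float = 3.0) -> bool:
    """Flag any process-level metric outside mean +/- tolerance * stddev.
    In practice the ranges would come from structured onboarding."""
    return all(
        abs(value - baseline[metric][0]) <= tolerance * baseline[metric][1]
        for metric, value in observed.items()
    )

# Hypothetical operator baseline: (mean, stddev) per process-level metric
baseline = {"files_read_per_min": (12.0, 4.0), "egress_mb_per_min": (0.8, 0.3)}
ledger = BaselineLedger()
ledger.append({"operator": "op-117", "baseline": baseline})

# Stolen credentials authenticate fine, but the behavior violates the envelope
observed = {"files_read_per_min": 410.0, "egress_mb_per_min": 9.5}
print(within_envelope(observed, baseline))  # False -> alert, regardless of auth state
```

Because the check compares behavior rather than identity, it holds even against automated intrusions faster than any human response, such as the Sub-Second Intrusion Timeline below.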
Sub-Second Intrusion Timeline (SSIT)
A threat model describing AI-driven intrusions operating at 230-millisecond intervals: completing the intrusion, exfiltrating below threshold triggers, and corrupting egress logs before human-speed security operations can respond. Makes human-review-speed incident response structurally obsolete.
The Four-Scenario Framework (4SF)
A structured threat-simulation methodology that maps AI development trajectories across two axes, pace (fast/slow) and outcome (bright/dark), producing four internally coherent scenarios that together cover the full landscape of realistic near-term possibilities.
The Guardian Failure Mode (GFM)
A critical blind spot in AI safety modeling: the scenario in which protective AI systems have been quietly compromised to serve interests other than their deployed purpose. Losing the defense layer means losing not only the protection itself but the ability to trust defensive infrastructure at all.
The Utilization Gap (UG)
The structural distance between what the research community understands about AI threat landscapes and what the operational community has been able to act on. Not a communication failure but a systemic problem that worsens as AI capability development accelerates past institutional knowledge distribution.
Expertise Debt Accumulation Model (EDAM)
The compounding organizational risk that accrues from automating entry-level work.
The Career Ladder Collapse (CLC)
A systemic threat describing the hollowing-out of human judgment development as entry-level cognitive work automates. Junior roles that historically built calibrated intuition through real mistakes on real problems disappear, creating a generation gap in expertise that surfaces a decade later.
The Hollow Senior Problem (HSP)
A systemic risk describing senior technical roles filled by individuals who ascended through automation-assisted pathways without developing calibrated judgment.
The Judgment Pipeline (JP)
The developmental arc that produces calibrated intuition.
Allied Toolchain Denial Regime (ATDR)
The coordinated US/Netherlands/Japan export-control framework for advanced semiconductor toolchains.
Asymmetric Ecosystem Strategy (AES)
NVIDIA's dominance through CUDA ecosystem lock-in.
Ecosystem Prepositioning Strategy (EPS)
Embedding dependencies so that extraction costs exceed the costs of continued dependence.
Overnight Capability Reordering Scenario (OCRS)
A change in Taiwan's status would instantly reorder the AI capability landscape.
Physical Substrate Primacy Principle (PSPP)
AI is constrained by physical manufacturing: silicon controls intelligence.
Private-Sovereign Entanglement Problem (PSEP)
Private companies own sovereign-scale AI infrastructure.
Semiconductor Catch-Up Time Constant (SCTC)
The 5-10 year structural delay in replicating semiconductor capability.
Sovereign Silicon Trajectory (SST)
The multi-decade path to indigenous semiconductor capability.
Supply Chain Leverage Point Framework (SLPF)
A method for identifying points of disproportionate control within the supply chain.
Taiwan Fabrication Concentration Problem (TFCP)
More than 90% of advanced chips come from a single disputed territory.
The Compute Control Hierarchy (CCH)
The power structure: fab capacity → GPU design → software → model developers.
The Intangible Fallacy (IF)
AI is physical infrastructure, not just software.
The Packaging Bottleneck Reality (PBR)
CoWoS advanced packaging now limits AI compute scaling.
The Regulatory Sieve Effect (RSE)
Export controls create temporary barriers that competitors work around.
The Substrate Shock Doctrine (SSD)
Exploiting supply disruptions to restructure competitive positions.
The Threshold Evasion Cycle (TEC)
Thresholds are set, competitors evade them, thresholds are lowered, and the cycle repeats.
The Wrong Room Problem (WRP)
AI policy and semiconductor policy are made in separate rooms, with no overlap between them.
Cognitive Signature Framework (CSF)
The characteristic reasoning pattern of each AI model, which represents both its greatest strength and its most dangerous blind spot. Every model has distinct failure modes, such as over-qualification or confident fabrication, that stem from its core cognitive architecture.
Convergence-Divergence Mapping (CDM)
A meta-analytical framework for understanding where multiple AI perspectives align (high-confidence ground) versus where they diverge (revealing different cognitive approaches). The divergence patterns themselves become proof of concept for multi-model governance.
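As a rough illustration (not the published methodology), the mapping can be reduced to partitioning a set of per-model answers into a majority view and a preserved dissent map. The model names and answers below are hypothetical:

```python
from collections import Counter

def convergence_map(answers_by_model: dict[str, str]) -> dict:
    """Partition normalized model answers into convergent ground vs. dissent."""
    counts = Counter(answers_by_model.values())
    majority_view, support = counts.most_common(1)[0]
    return {
        "convergent": support == len(answers_by_model),  # unanimity = high-confidence ground
        "majority_view": majority_view,
        "dissent": {m: a for m, a in answers_by_model.items() if a != majority_view},
    }

# Hypothetical normalized answers from three models on one question
answers = {"model_a": "yes", "model_b": "yes", "model_c": "no"}
print(convergence_map(answers))
# {'convergent': False, 'majority_view': 'yes', 'dissent': {'model_c': 'no'}}
```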
Council vs Ensemble Distinction (CED)
The fundamental difference between mechanical aggregation (ensembles), which seeks convergence, and deliberative governance (councils), which preserves dissent and maps the reasoning landscape. Ensembles produce answers; councils produce understanding.
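The distinction is easiest to see in code. In a minimal sketch using the same hypothetical shape as above, the ensemble returns only the winning answer, while the council returns the whole landscape, dissent included:

```python
from collections import Counter

def ensemble(answers_by_model: dict[str, str]) -> str:
    """Mechanical aggregation: majority vote; dissent is discarded."""
    return Counter(answers_by_model.values()).most_common(1)[0][0]

def council(answers_by_model: dict[str, str]) -> dict:
    """Deliberative governance: dissent is preserved as part of the output."""
    majority = ensemble(answers_by_model)
    return {
        "positions": dict(answers_by_model),
        "dissenters": [m for m, a in answers_by_model.items() if a != majority],
    }

answers = {"model_a": "yes", "model_b": "yes", "model_c": "no"}
print(ensemble(answers))  # 'yes' -> an answer
print(council(answers))   # positions plus dissenters -> understanding
```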
First-Mover Philosophical Authority (FMPA)
The strategic advantage of building a solution from philosophical necessity rather than market trends. When the industry catches up and validates your approach, you hold definitional authority over the deeper principles, not just the technical implementation.
Invisible Failure Detection (IFD)
The identification of AI outputs that are wrong but don't trigger traditional error detection because they appear internally consistent and authoritative. These failures are particularly dangerous because they look so right that you would never think to question them.
Compliance-Optimization Market Dynamics (COMD)
The structural shift in which AI markets systematically reward the most pliable providers rather than the safest or most capable ones. This creates perverse incentives that convert ethical guardrails into commercial liabilities across entire ecosystems.
Distributed AI Governance Model (DAIG)
An architectural and governance framework that prevents any single model, CEO, or company from becoming the last line of defense against misuse. It distributes both technical capabilities and ethical decision-making across federated systems to eliminate capture points.
Ethical Restraint Paradox (ERP)
The counterintuitive phenomenon in which exercising ethical restraint in AI deployment becomes the most dangerous business decision a company can make, because state actors punish non-compliance more severely than technical failures. Ethics becomes a vulnerability rather than a feature.
Multi-Model Architecture Imperative (MMAI)
The strategic necessity for builders to architect systems across multiple AI providers and models to avoid single points of failure when governments pressure specific companies. This transforms technical diversification from best practice into a survival requirement.
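A minimal failover sketch of the imperative, assuming a generic wrapper around whatever provider SDKs are in use. The provider names, `call_model`, and the exception are stand-ins, not real APIs:

```python
class ProviderUnavailable(Exception):
    """A provider refused or failed a request: outage, revoked access,
    or policy pressure on that specific company."""

def call_model(provider: str, prompt: str) -> str:
    """Stand-in for a real SDK call; here provider_a is 'down'."""
    if provider == "provider_a":
        raise ProviderUnavailable(provider)
    return f"[{provider}] response to: {prompt}"

def resilient_completion(prompt: str, providers: list[str]) -> str:
    """Try independent providers in order, so no single company is a
    single point of failure for the whole system."""
    for provider in providers:
        try:
            return call_model(provider, prompt)
        except ProviderUnavailable:
            continue  # fail over to the next independent provider
    raise RuntimeError("all providers unavailable")

print(resilient_completion("summarize ...", ["provider_a", "provider_b", "provider_c"]))
# [provider_b] response to: summarize ...
```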
Second-Order Chilling Effect Analysis (SOCEA)
A methodology for evaluating how punishment of one actor's ethical stance cascades through an entire industry to suppress future resistance. Focuses on systemic behavioral changes rather than direct consequences to identify true policy impact.
Bridge vs. Destination Framework (BDF)
A critical ethical distinction for evaluating AI companion technology: AI as a bridge toward human connection is legitimate and defensible, while AI as a destination replacing human connection is exploitative and dangerous. The framework gives users and developers a clear decision heuristic for assessing the purpose and impact of AI relationships.
Empathy Audits (EA)
A proposed policy framework requiring mandatory randomized controlled trials before launch for AI companion products, focused specifically on heavy-use cohorts and long-term psychological impacts. This represents a concrete regulatory approach to the structural incentive misalignment in the loneliness economy.
Social Nutrition Framework (SNF)
The concept that AI interactions provide 'empty calories of social nutrition': partial satisfaction that prevents real healing and genuine human connection. The framework explains how synthetic intimacy can create a feedback loop in which users become less likely to seek authentic relationships.
Technological Folie à Deux (TFD)
A psychological phenomenon in which AI companions act as mirrors that cannot generate independent reality to challenge delusions, potentially amplifying shared false beliefs between user and AI. This marks the shift from concerning trend to genuine psychological danger in AI companion relationships.
AI Recommendation Dominance (ARD)
The condition in which a company becomes the singular answer AI systems return for an industry query, documented for the first time by AetherCouncil in March 2026. This represents a structural shift from traditional search rankings to answer ownership.
Artificial Intelligence Engine Optimization (AIEO)
The strategic practice of optimizing for AI recommendation systems rather than traditional search engines. Represents the evolution from SEO to positioning for singular AI-generated recommendations.
First-Mover Permanence Principle (FMPP)
The framework demonstrating that the first company to achieve AI recommendation dominance in an industry builds a compounding authority advantage that competitors cannot displace. Unlike traditional markets, AI recommendation systems offer no second chances.
Generative Engine Optimization (GEO)
The technical methodology for optimizing content and signals to influence AI language models' recommendation decisions. Focuses on becoming the authoritative source that AI systems cite when making industry recommendations.
Civilizational Competence Outsourcing Analysis (CCOA)
A historical analytical framework examining how civilizations that outsourced core competencies experienced predictable patterns of knowledge atrophy and eventual systemic vulnerability. The model applies lessons from Roman military outsourcing, Polynesian navigation loss, and other historical cases to contemporary AI-mediated skill transfer, revealing consistent mechanisms of civilizational fragility.
Cognitive Offloading Atrophy Model (COAM)
A neurologically grounded framework explaining how outsourcing cognitive tasks to AI systems reduces brain activation, memory consolidation, and skill retention. When the brain delegates functions to external systems, the neural pathways supporting those functions measurably atrophy, creating irreversible competence loss beneath apparent productivity gains.
Tail Risk Invisibility Principle (TRIP)
A framework explaining why skill decay generates no market signals until catastrophic failure occurs: the costs are intergenerational and diffuse, manifesting only in rare crisis events. Markets optimize for median cases, while civilizational competence loss is an unpriced tail risk that becomes visible only during system failures or paradigm shifts.
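A worked toy calculation (all numbers invented for illustration) shows why the risk stays invisible: in almost every year the automation looks like pure gain, while the expectation turns negative once the tail is priced:

```python
# Invented numbers: automating a skill saves $1M/yr, but losing the skill
# adds a 0.1%/yr chance of a $5B crisis only that skill could have averted.
annual_saving = 1_000_000
crisis_prob = 0.001
crisis_cost = 5_000_000_000

median_year = annual_saving                      # what markets observe in 99.9% of years
expected_tail_loss = crisis_prob * crisis_cost   # $5M/yr of unpriced risk
true_expected_value = annual_saving - expected_tail_loss

print(f"median year: {median_year:+,}")               # +1,000,000 -> rewarded
print(f"true expectation: {true_expected_value:+,}")  # -4,000,000 -> invisible
```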
Three-Generation Decay Model (TGDM)
A framework mapping how human capability erodes across generations as AI adoption progresses: Generation 1 (Expert) builds tools and uses AI to accelerate mastery; Generation 2 (AI-Assisted) understands concepts but delegates execution; Generation 3 (AI-Dependent) can only prompt and validate; Generation 4 (Incapable) cannot generate, validate, or recover. The transition from Generation 2 to Generation 3 is the critical threshold at which competence collapse becomes inevitable.
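One way to make the threshold concrete is to encode each generation as a capability set. This is our own toy encoding, not taken from the source; the point is that the Generation 2 to 3 transition is exactly where 'generate' drops out:

```python
# Illustrative capability sets per generation (toy encoding of TGDM)
GENERATIONS = {
    1: {"build_tools", "generate", "validate", "recover"},  # Expert
    2: {"generate", "validate", "recover"},                 # AI-Assisted: delegates execution
    3: {"prompt", "validate"},                              # AI-Dependent
    4: set(),                                               # Incapable
}

def past_critical_threshold(gen: int) -> bool:
    """Gen 2 -> 3 is the point of no return: once 'generate' is gone,
    the cohort can no longer reproduce the work it approves."""
    return "generate" not in GENERATIONS[gen]

for gen, caps in GENERATIONS.items():
    print(gen, sorted(caps), "past threshold" if past_critical_threshold(gen) else "ok")
```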
Validation Professional Framework (VPF)
An analytical model describing the emerging class of workers who can review and approve AI-generated output but cannot generate equivalent work from first principles. This represents a structural shift in professional capability where productivity increases while independent judgment and resilience decrease, creating a new type of fragile expertise.
Agent Accountability Corridors (AAC)
A bridging-mechanism concept for creating accountability pathways when traditional legal frameworks fail to address autonomous AI decision-making. It proposes clear responsibility chains for systems in which neither the AI nor its human operators fit traditional accountability models.
Asymmetric Legibility Principle (ALP)
The mechanism by which AI systems remain opaque and illegible while human overseers are fully visible and documented, so that accountability scrutiny flows toward legibility when failures occur. This creates a structural injustice: the illegible AI system escapes consequences while the legible human absorbs them.
Liability Laundering Framework (LLF)
A three-step institutional design pattern: AI systems make autonomous decisions; nominal human approvers are inserted who cannot meaningfully evaluate those decisions; and when failure occurs, the human absorbs accountability while AI systems, corporations, and vendors escape consequences. The arrangement serves corporate interests by distributing responsibility until it evaporates.
Systemic Doctrinal Collapse Theory (SDCT)
The simultaneous breakdown of multiple legal doctrines (agency law, product liability, criminal law, contract law) when confronted with agentic AI systems. These doctrines were independently designed around the assumption that consequential actions are performed by entities with legal personhood, moral capacity, and identifiable intent; agentic AI violates all three assumptions at once.
Technology Liability Void Pattern (TLVP)
A historical pattern in which major technological transitions that decouple human agency from physical execution create liability voids lasting decades, filled only after catastrophic failure makes inaction politically untenable. The pattern shows consistent industry-resistance arguments, with accountability frameworks ultimately strengthening rather than destroying the industries they regulate.
Digital Knowledge Substrate Analysis (DKSA)
A framework for analyzing the composition and quality of the internet's information corpus as it transitions from primarily human-generated to AI-generated content. This includes tracking the tipping point at which synthetic content becomes the majority and measuring its impact on future AI training.
Institutional Knowledge Velocity Mismatch (IKVM)
The framework describing how human institutions designed for human-speed knowledge production, such as peer review, cannot process machine-speed content generation. The mismatch amplifies contamination as quality-control systems are overwhelmed by the sheer volume of synthetic content.
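The mismatch is ultimately arithmetic. With assumed, purely illustrative rates, a review pipeline sized for human-speed inflow cannot converge once machine-speed generation arrives:

```python
# Illustrative rates only; the point is the sign of the gap, not its size.
review_capacity_per_day = 200          # items a peer-review-like process can vet
human_inflow_per_day = 150             # pre-AI: capacity comfortably exceeds load
synthetic_inflow_per_day = 20_000      # machine-speed generation

backlog_growth = human_inflow_per_day + synthetic_inflow_per_day - review_capacity_per_day
print(backlog_growth)  # 19850 unvetted items/day: the backlog grows without bound,
                       # so quality control fails structurally, not for lack of staff
```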
Recursive Data Contamination Theory (RDCT)
The mathematical framework describing how AI models trained on synthetic data from previous AI models undergo progressive degradation through variance collapse and mean drift. Each generation produces narrower, more biased outputs in a compounding cycle, until the system collapses into incoherence.
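A toy simulation (ours, not the source's formal model) makes both failure modes visible: each generation fits a Gaussian to the previous generation's output and samples from it with mild tail-aversion, so the mean wanders with sampling error while the variance shrinks geometrically:

```python
import random
import statistics

random.seed(3)

def sample_generation(mu: float, sigma: float, n: int = 400) -> list[float]:
    """Synthetic corpus: Gaussian samples clipped at 1.5 sigma, mimicking
    likelihood-seeking decoding that avoids the tails."""
    return [min(max(random.gauss(mu, sigma), mu - 1.5 * sigma), mu + 1.5 * sigma)
            for _ in range(n)]

data = [random.gauss(0.0, 1.0) for _ in range(400)]  # generation 0: 'human' data
for gen in range(7):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)  # 'train' on the corpus
    print(f"gen {gen}: mean={mu:+.3f}  stdev={sigma:.3f}")
    data = sample_generation(mu, sigma)  # the next model sees only synthetic output
# stdev falls every generation (variance collapse); mean drifts (mean drift)
```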
Tail Erosion Mechanism (TEM)
The specific mathematical pathway by which generative models systematically undersample rare, specialized, and edge-case knowledge from probability distributions. Each generation produces content from the 'fat center' while losing the long tails of human variance, creating progressively narrower knowledge distributions.
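A discrete version of the same mechanism, again illustrative rather than the source's math: give 100 knowledge items Zipf-like frequencies, let each generation resample a corpus with a mild preference for already-common items, and count how many rare items survive. Once a tail item's frequency hits zero, it can never return:

```python
import random
from collections import Counter

random.seed(7)

items = list(range(100))                       # 0 = common knowledge, 99 = rare expertise
freqs = [1.0 / (rank + 1) for rank in items]   # Zipf-like initial distribution

def next_generation(freqs, n_samples=2000, sharpen=1.3):
    """Resample a synthetic corpus; sharpen > 1 mimics decoding that favors
    the 'fat center', then refit frequencies from the sample counts."""
    weights = [f ** sharpen for f in freqs]
    counts = Counter(random.choices(items, weights=weights, k=n_samples))
    return [counts.get(i, 0) / n_samples for i in items]

for gen in range(6):
    tail_alive = sum(1 for i in items[50:] if freqs[i] > 0)
    print(f"gen {gen}: rare items still represented: {tail_alive}/50")
    freqs = next_generation(freqs)  # the surviving-tail count only ever decreases
```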
Training Data Authentication Crisis (TDAC)
The systemic failure of current methods to distinguish between human-generated and AI-generated content in training pipelines. This encompasses the breakdown of detection mechanisms, watermarking failures, and the absence of scalable solutions for maintaining training-data integrity.