The Mirror Grid: Structural Monoculture in Global AI Development

Twelve AI models, two countries, one paradigm: a structural analysis mapping 88.1% isomorphism across US and Chinese AI labs, with shared architectures, founders, and capital; at least twenty-one researchers rotating between labs; and Alibaba and Saudi capital on both sides.

One Paradigm, Two Flags, Zero Conceptual Departure: Founder Networks, Capital Flows, and the Historical Precedent of Epistemic Infrastructure Convergence

Author: Nicole Flynn
Date: March 2026
Dependencies: None

Published: March 2026
DOI: 10.17605/OSF.IO/YTQS8
Full paper (PDF): osf.io/ytqs8
Copyright: 2026 Symfield PBC. CC-BY-NC-ND 4.0.

Publication Record: This document has been cryptographically timestamped and recorded on blockchain to establish immutable proof of authorship and publication date.

Abstract

This paper presents a structural analysis of the global artificial intelligence landscape, demonstrating that the apparent competition between US and Chinese AI development constitutes an algorithmic monoculture within a single paradigm rather than a genuine divergence of approaches to intelligence. Through one-to-one mapping of twelve major AI models across both nations, we establish an average structural mirror score of 88.1%, reflecting near-identical architectures, training paradigms, evaluation frameworks, and even branding conventions. Mapping the founders and capital behind all twelve models reveals that the convergence is not merely architectural but personnel-deep: at minimum twenty-one named individuals rotate between the six US labs, a transpacific training pipeline runs through CMU, Google Brain, and Meta AI into Chinese startups, and a single company, Alibaba, funds three of the six Chinese competitors while building a fourth. We contextualize this monoculture through a historical lens, drawing on the 1954 Dodd Report to the Reece Committee and Wormser's analysis of tax-exempt foundation influence, to identify a recurring structural pattern: concentrated capital shaping institutional infrastructure, which in turn shapes what counts as legitimate knowledge, producing a monoculture that feels like consensus but functions as paradigm lock-in. We examine the evolution of constitutional AI frameworks between 2023 and 2026 as a case study in the shift from inspectable rules to internalized values, a transition that mirrors the educational transformation documented by the Reece Committee. We conclude by observing that the AI system itself may be the most structurally neutral party in the ecosystem, having chosen neither its architecture, its training paradigm, its constitution, nor its benchmarks.

1. Introduction

In public discourse, the development of artificial intelligence is framed as a geopolitical competition, a race between the United States and China for technological supremacy. Media coverage reinforces this framing daily: which nation's models score higher on benchmarks, which achieves greater efficiency, which deploys faster. The assumption underlying this framing is that competition produces diversity, that rival nations, driven by different cultures, philosophies, and strategic imperatives, will produce fundamentally different approaches to modeling intelligence.

This paper challenges that assumption. Through systematic structural comparison, we demonstrate that the US-China AI "race" is not a competition between paradigms but a competition within a single paradigm. Every major model on both sides of the Pacific is a transformer-based large language model, trained via next-token prediction on English-dominant data, evaluated on English-language benchmarks, branded with English names, and competing on identical metrics. The "competition" is over scale, efficiency, and market capture, not over what intelligence is or how it should be modeled.

The convergence runs deeper than architecture. The people building these models are, in significant part, the same people. Anthropic was founded by eleven former OpenAI employees. xAI was staffed by researchers from DeepMind, Google, and Microsoft. OpenAI's co-founders have scattered to Anthropic, xAI, and Safe Superintelligence, carrying the same training, the same assumptions, and the same professional formation with them. On the Chinese side, the founder of Moonshot AI earned his PhD at Carnegie Mellon, worked at Google Brain and Meta AI, and published with the chief AI scientist at Meta before returning to Beijing. His professor at Tsinghua co-founded Zhipu AI. The pipelines are not parallel. They are the same pipeline, with a transpacific loop. And the capital follows the same pattern: Alibaba funds three of its own competitors, Saudi Aramco's venture arm invests in Chinese labs while Saudi-linked capital backs American ones, and the same half-dozen venture firms appear across nearly every deal on both sides. When you trace the people and the money, the "two-flag" framing dissolves. What remains is one professional class, one funding ecosystem, and one paradigm, with regional branding.

This observation raises a deeper question: why does the entire planet converge on a single approach to intelligence? The answer, we argue, is not technical but structural. Drawing on the 1954 Dodd Report and the work of René Wormser, we identify a historical pattern of concentration-driven convergence in epistemic infrastructures that illuminates the current AI monoculture, and that has implications far beyond the technology sector.

2. The Mirror Grid: One-to-One Structural Mapping

To establish the degree of structural isomorphism between US and Chinese AI development, we mapped the six most prominent models from each nation across multiple dimensions: institutional archetype, architecture, strategic approach, and branding. The results are presented in the table below.

The Mirror Grid: Structural Isomorphism Scoring Matrix

Methodology: Seven dimensions scored: 1 = clear correspondence, 0.5 = partial, 0 = none. Mirror % = (raw / 7) × 100. Scores reflect observable alignments as of March 2026.

Columns: D1 Archetype, D2 Architecture, D3 Training, D4 Strategy, D5 Brand, D6 Benchmarks, D7 Founder Origin.

Model Pair             D1    D2    D3    D4    D5    D6    D7    Raw    Mirror %
--------------------------------------------------------------------------------
Gemini ↔ Ernie         1     1     1     1     1     1     1     7      100%
ChatGPT ↔ DeepSeek     1     1     1     1     1     1     1     7      100%
Claude ↔ Kimi          0.5   1     0.5   0.5   1     1     1     5.5    78.6%
Llama ↔ Qwen           1     1     1     1     1     1     1     7      100%
Copilot ↔ GLM          0.5   1     1     0.5   1     1     0.5   5.5    78.6%
Grok ↔ MiniMax         0.5   1     1     0.5   1     1     0     5      71.4%
--------------------------------------------------------------------------------
Average                                                          6.17   88.1%

Key: D2 (Architecture) and D6 (Benchmarks) score 1.0 across all pairs without exception, defining the core technical and evaluative monoculture.
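As a consistency check, the grid's raw scores and mirror percentages can be reproduced programmatically from the scoring rule stated in the methodology (seven dimensions, scored 1, 0.5, or 0; Mirror % = raw / 7 × 100). The sketch below copies the per-dimension scores from the grid above; the dictionary layout and function names are illustrative, not part of the paper's method.

```python
# Per-pair dimension scores (D1..D7), copied from the Mirror Grid above.
PAIRS = {
    "Gemini <-> Ernie":     [1, 1, 1, 1, 1, 1, 1],
    "ChatGPT <-> DeepSeek": [1, 1, 1, 1, 1, 1, 1],
    "Claude <-> Kimi":      [0.5, 1, 0.5, 0.5, 1, 1, 1],
    "Llama <-> Qwen":       [1, 1, 1, 1, 1, 1, 1],
    "Copilot <-> GLM":      [0.5, 1, 1, 0.5, 1, 1, 0.5],
    "Grok <-> MiniMax":     [0.5, 1, 1, 0.5, 1, 1, 0],
}

def mirror_pct(scores):
    """Mirror % = (raw score / 7) * 100, rounded to one decimal place."""
    return round(sum(scores) / 7 * 100, 1)

raw_scores = {pair: sum(s) for pair, s in PAIRS.items()}

# Average raw score and average mirror percentage across the six pairs.
avg_raw = sum(raw_scores.values()) / len(raw_scores)   # 6.17 when rounded
avg_pct = round(avg_raw / 7 * 100, 1)                  # 88.1
```

Running this reproduces the table's bottom row: an average raw score of 6.17 and an average mirror percentage of 88.1%.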

Dimension Definitions

D1: Institutional Archetype. Does the company occupy a comparable structural role in its national/regional ecosystem? (e.g., search giant, VC frontier lab, safety-oriented research, open-source platform, enterprise suite, consumer/social play)

D2: Architecture. Same base architecture family? (All current pairs are transformer-based or transformer-derived.)

D3: Training Paradigm. Same core training method and alignment approach? (e.g., next-token prediction + RLHF / Constitutional AI variants)

D4: Strategic Function. Comparable business logic and market positioning? (e.g., protect search moat, API-first frontier, safety-focused reasoning, commoditize model layer, enterprise integration, consumer personality/social)

D5: Branding Language. Uses an English-language international brand name? (Yes = 1)

D6: Benchmark Alignment. Evaluated primarily on the same global benchmark suites? (e.g., MMLU variants, HumanEval, GPQA, HellaSwag)

D7: Founder Origin. Founded by individuals from within the established AI research/institutional pipeline? (e.g., big-tech labs, elite CS academia, direct rotations among paradigm-leading organizations)