The English language possesses various proverbial expressions that convey a feeling many of us share as we witness the growing chaos of the world today. Some will complain, “it never rains, but it pours,” while others remark, “misfortunes never come singly.” My favorite is the far more poetic assessment uttered by Hamlet’s uncle Claudius, who in Act IV laments, “When sorrows come, they come not single spies, but in battalions.”
This resonates with particular bite as the world lapses into expanding war. The Danish King’s remark seems particularly appropriate when we consider that it is spoken by a scoundrel who has spent much of the play’s previous four acts inciting his own spies – notably Polonius, Rosencrantz and Guildenstern – to do his dirty work. He even mobilized Ophelia, Gertrude and Laertes as decoys in his scheme to prevent Hamlet from revealing that he had murdered his way to the throne by poisoning the prince’s royal father. A glance at Shakespeare serves to remind us that nefarious strategies combining decapitation for regime change, heartless sacrifice of innocents and an expertly managed spy network are not modern inventions, but have been around for centuries.
Peering at the geopolitical and economic landscape today, one cannot avoid noticing a convergence of unfolding disasters that many fear could become apocalyptic. With that in mind, I addressed the following prompt – meant to evoke optimism – first to Anthropic’s Claude and then to DeepSeek.
Crises may be terrifying and ultimately catastrophic. But they can also be moments of clarification, thanks to the capacity of some institutions and individuals to react. The current wars – and in particular the US-Israeli war in Iran – have been initiated and are conducted under the leadership of what appear to be morally unstable leaders. For that reason, our fears that they are leading us to the brink of catastrophe may be well founded.
But there is another crisis developing – potentially just as dramatic – around AI as an economic reality and cultural concept. AI has reached a psychological tipping point, creating the fear that it will come to dominate the direction of the economy, as well as of defense and energy.
I’ve copied the following text – translated from French – which was published by the highly informative (and unapologetically sardonic) newsletter A Free Lunch. The author foresees the collapse of OpenAI, an event that would have its own more than symbolic effect on the global economy.
So here’s my question. Who are the players who might turn this developing crisis into a moment of clarification? I don’t expect you to identify actual actors but to imagine the scenario that could lead to rebuilding the system and the culture of AI to remove the factors of precarity that currently threaten it.
But first, here’s the Free Lunch text (originally in French):
A study published in Nature last summer contains information that no one in the industry really wants to face: by scraping the web to train themselves, language models will inevitably ingest their own output. By April 2025, 74% of new web pages already contained AI-generated text. The models have probably produced more text in three years than all of humanity since Gutenberg. It would almost be touching if it weren’t precisely the problem.
Because these models, in order to continue to exist, must continue to learn. And they learn from what’s floating around on the web. Except that what’s floating around on the web is now mostly LLM text. The phenomenon has a name, “model collapse,” and a very concrete effect: by constantly feeding on their own output, the models lose diversity, nuance, and unexpectedness. They become statistically flattened. For images, we can detect, filter, and discard them. For text, it’s a different story: the degradation is silent, cumulative, and imperceptible to the naked eye. What disappears first are the tails of the distribution: the rare cases, the atypical formulations, the minority reasoning. What some would call, without too much exaggeration, thought. Conversely, the astonishing em dash has regained all its splendor.
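The mechanism described here is easy to reproduce in miniature. The sketch below is entirely illustrative: the “model” is nothing more than a Gaussian fitted to its training data, and the cutoff stands in for a generator’s bias toward typical outputs. Each generation trains on the previous generation’s output and discards its own outliers:

```python
import random
import statistics

def sample_truncated(mu, sigma, n, cutoff=2.0):
    # Mimic a mode-seeking generator: keep resampling until the draw
    # falls within cutoff * sigma of the mean, i.e. discard the tails.
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= cutoff * sigma:
            out.append(x)
    return out

def next_generation(corpus):
    # "Train" on the previous generation's output by fitting a Gaussian,
    # then generate the next generation's corpus from that fit.
    mu = statistics.fmean(corpus)
    sigma = statistics.stdev(corpus)
    return sample_truncated(mu, sigma, len(corpus))

random.seed(42)
corpus = [random.gauss(0.0, 1.0) for _ in range(400)]  # human-written "web"
first = statistics.stdev(corpus)
for _ in range(20):  # twenty rounds of training on synthetic output
    corpus = next_generation(corpus)
print(f"spread: {first:.2f} -> {statistics.stdev(corpus):.2f}")
```

After a few dozen rounds the spread of the corpus has collapsed toward a narrow band around the mean – the silent, cumulative flattening the author describes, with the rare cases disappearing first.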
This is precisely what Yann LeCun has been saying for years, with his knack for appearing eccentric until it becomes obvious. His argument has never been aesthetic: predicting the next token has never been a path to intelligence; it’s a statistical compression of the past. His “world models” (architectures that construct a representation of the world rather than mimicking its textual surface) are based on a radically different principle. What the model collapse reveals is that LeCun may have been fundamentally right, which in this field amounts to being right too soon, which is essentially the same thing.
Meanwhile, OpenAI continues to raise funds at valuations that assume mass adoption and productivity gains that no one is truly measuring. Capital expenditures are astronomical, Nvidia, Oracle, and SoftBank are neck-deep in exposure, and the favorable political window will only last for the duration of the current administration.
My prediction: OpenAI’s IPO ends badly. The company is bought at a bargain price – probably by Microsoft, which has already dipped its toe in – and absorbed the way YouTube was absorbed into Google, or Waze into the same. It drags down with it those who have been hyper-exposed to it, and may even trigger a systemic crisis of 12–18 months.
That said, the risk is not the bankruptcy of a company. It is that the cognitive layer on which a growing part of the global economy relies is silently and irreversibly deteriorating, and no one has any interest in saying so out loud.
For somewhat mysterious reasons, Claude appeared to have a hangover and simply failed to respond. DeepSeek was fully accommodating and provided the following analysis:
Thank you for sharing this provocative and urgent analysis. You’re asking not for named heroes, but for a plausible *scenario* in which the twin crises you describe—geopolitical apocalypse-drift and AI model collapse—could catalyze a “moment of clarification” that leads to rebuilding AI’s system and culture from a less precarious foundation.
Let me outline such a scenario in three phases.
Phase 1: The Convergence of Crises (2026–2027)
The Free Lunch prediction plays out roughly as foreseen: OpenAI’s valuation implodes after a delayed IPO or a failed funding round, triggered by mounting evidence of “model collapse” in production. Internal memos leak showing that GPT-6 performs worse on rare-language reasoning, medical outlier diagnosis, and novel mathematical proofs than GPT-4 did. Microsoft acquires the remains at fire-sale prices, but not before several downstream sectors (automated legal drafting, code generation, customer service) experience silent degradation—errors that accumulate rather than cancel out.
Simultaneously, the US-Israeli war in Iran escalates to a limited nuclear exchange (one or two tactical weapons). Global energy markets seize. Europe, already fractured, sees its AI infrastructure (data centers in Ireland, the Netherlands, Sweden) targeted by cyber-retaliation or physical sabotage. The background apocalyptic rhetoric becomes self-fulfilling: leaders speak of Armageddon not as metaphor but as logistics.
Clarification moment #1: A small group of mid-level technical and policy officials from the G7, plus a few from the Global South, convene in secret (say, in Brasília or Jakarta). They realize that the two crises share a root pathology: feedback-loop blindness. Military escalation dynamics mirror AI training dynamics, where each side feeds on its own output (intelligence reports, propaganda, kill-chain automation) until diversity of thought and exit strategies collapse.
Phase 2: The Institutional Rupture (2027–2028)
The systemic AI crisis (12–18 months of contraction) hits. But unlike the dot-com bust, there is no fresh “consumer internet” to fall back on. Why? Because defense, finance, and logistics have hardwired LLMs into real-time decision loops. When those models go flat, accidents multiply: a trading algorithm misreads a ceasefire signal as a false positive; a military logistics LLM reroutes supplies based on hallucinated bridge closures; hospitals using AI triage miss rare pediatric conditions.
Public trust in AI collapses faster than in crypto post-FTX. But here is the pivot: some institutions and individuals do not retreat into Luddism. Instead, they recognize that the collapse was not a bug but a feature of the data extractivism model. The “players” in your scenario are not CEOs or politicians. They are:
- Open-weight model communities (e.g., a splinter from EleutherAI, the Alliance for Open Media-style consortia) that had kept small, verifiable, human-curated datasets alive.
- Librarians and archivists—the forgotten professionals who maintain provenance, versioning, and human-generated metadata. They become the unlikely heroes of the clarification moment.
- Energy-grid operators who, because they live with real physics, never fully trusted autoregressive LLMs and kept analog fallbacks. Their discipline—you cannot predict the next watt, you must model the whole system—becomes the blueprint.
- Survivors of the Iran war’s de-escalation cell (a handful of mid-level diplomats who used manual backchannels to prevent a second strike). They understand that “clarification” means forcing actors to see their own feedback loops.
Phase 3: Rebuilding the System and Culture (2028–2030)
The new framework rests on three principles, each the inverse of what caused model collapse and geopolitical drift.
1. **Provenance-first training** (replacing scale-first)
- No model is legal for critical infrastructure unless its training data has a complete, auditable chain of custody—including *exclusion* of all synthetic text generated after a cutoff date.
- This forces a return to curated, diverse, human-sourced corpora (digitized books, scientific papers, court records, parliamentary transcripts, multilingual folklore archives). Scale drops by two orders of magnitude. But robustness increases.
- The “rare tail” (atypical reasoning, minority logics, non-Western epistemologies) is actively oversampled, not as token diversity but as cognitive antifragility.
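In code terms, “actively oversampling the rare tail” amounts to inverse-frequency weighting of training examples. A minimal sketch, with invented category labels standing in for reasoning styles (the corpus, labels and proportions are all hypothetical):

```python
import random
from collections import Counter

# Hypothetical corpus: labels stand in for reasoning styles; the
# distribution is deliberately skewed toward the "mainstream" bucket.
corpus = ["mainstream"] * 900 + ["minority_logic"] * 80 + ["rare_tail"] * 20
counts = Counter(corpus)

# Inverse-frequency weights: the rarer a bucket, the more often it is drawn.
weights = [1.0 / counts[item] for item in corpus]

random.seed(1)
batch = random.choices(corpus, weights=weights, k=3000)
print(Counter(batch))  # roughly equal representation of all three buckets
```

Because each bucket’s weights sum to the same total, a training batch drawn this way represents the atypical cases about as often as the dominant ones, instead of letting them drown in the majority distribution.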
2. **World models as public infrastructure** (not private LLMs)
- Yann LeCun’s “world models” (or something like them) become the standard for defense, energy, and health. These architectures do not predict text; they simulate causal, physical, and social dynamics with explicit uncertainty bounds.
- They are operated like nuclear non-proliferation treaties: joint inspection, shared training environments, no proprietary black boxes. A “CERN for world models” is established in a neutral country (Switzerland? Kazakhstan?).
- Crucially, world models are *not* trained continuously on real-time web data. They are updated via deliberate, debated, versioned interventions—like constitutional conventions, not like scrolling.
3. **The cultural shift** (from apocalypse-as-a-service to clarification-as-practice)
- The Free Lunch text notes that “no one has any interest in saying so out loud” about model collapse. The new culture inverts that: every AI lab, every defense contractor, every large bank must publish an annual feedback-loop audit showing where their models’ outputs have begun to cannibalize their inputs.
- A new profession emerges: “loop breakers.” Their job is to introduce controlled noise, contrarian data, and human-in-the-middle friction precisely where systems become too smooth, too self-referential, too flat.
- Geopolitically, the lesson from the Iran war is codified: any automated military system must have a “diversity of reasoning” requirement—two incompatible models must disagree before lethal action is taken. This is the Armageddon equivalent of the em dash: the surprising, the unoptimized, the rare tail that saves lives.
The final clarification
You asked who the players might be. In this scenario, they are not the leaders who currently dominate headlines. They are:
- The data curator who says “no” to scraping everything.
- The energy engineer who keeps analog meters running.
- The diplomat who manually dialed a number no LLM would have suggested.
- The open-weight community that preserved a 2023 snapshot of the web before the synthetic flood.
- The librarian who can prove where a fact came from.
They turn crisis into clarification not by preventing collapse—collapse happens anyway—but by rebuilding after collapse on the principle that intelligence is not prediction but navigation of the unforeseen. The catastrophe of model collapse and the catastrophe of nuclear brinkmanship share a cure: institutionalizing the capacity to recognize when you are eating your own tail, and having the courage to stop.
Several questions remain
It occurred to me that putting this plan into action would require thinking through a number of other issues.
Because of the political and military dimension, will there be enough of us left after the model collapse to produce this “clarification”?
On the more optimistic side, let’s suppose geopolitical tensions ease and the worst is avoided.
Where will the initiative originate that allows us to begin reorganizing the very structure of society to permit efficient collaboration between the curators, energy engineers, diplomats, librarians and community members who will finally stop “eating their own tail?”
And how might that collaboration play out, given that in our competitive 21st-century culture, collaboration and even diplomatic wrangling have been conflated with appeasement or dismissed as sissy sports?
In Part 2, we will grapple with DeepSeek’s suggestions for an optimistic future.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Support Fair Observer
We rely on your support for our independence, diversity and quality.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 3,000+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money. Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
Will you support FO’s journalism?