After analyzing the link between OpenAI CEO Sam Altman’s vision of humanity’s future and the idea that our civilization has manufactured a form of hyperreality that may actually invert reality in an effort to convince us not to “believe our lying eyes,” ChatGPT made the following suggestion:
If you like, I can now loop this historical mapping back to your earlier point: that AI isn’t the danger per se — it’s the human capacity to engineer and inhabit hyperreality that becomes amplified when AI is in the hands of people like Altman. That’s where the fear of “out-of-control AI” has a real basis, but it’s less about AI’s autonomy and more about our surrender to curated visions.
Should I connect those threads next?
Believing that as a conversation develops it’s always a good idea to connect the threads each of the participants contributes, I gave my assent in the following prompt:
“I’m intrigued by the connection you see between the idea you expressed as a ‘surrender to curated visions’ and the conditions that make an out-of-control AI likely. Everyone talks about regulation, but before trying to imagine rules, can we look at the causes? There are certain conditions – economic, cultural, psychological – that contribute to a loss of control. If we could examine and understand those conditions, we might seek to effect changes – for example in our institutional frameworks and certain cultural assumptions – that would make draconian regulation unnecessary.”
Yes — that’s exactly the right way to frame it.
If we focus only on rules, we treat AI risk as a technical governance problem. But if we ask what conditions make losing control likely, we see that the problem is rooted in deeper cultural, economic, and psychological dynamics.
Here’s how I’d map those conditions:
1. Economic Conditions
These create the material incentives for hyperreality and reckless AI deployment.
- Concentration of capital & decision-making
When a handful of companies and investors control the means of AI production, their incentives dominate. They can pursue speed, market share, and prestige without broader accountability.
- Speculative markets & hype cycles
Financial markets reward perception over reality — stock prices react to grand visions, not grounded progress. This means Altman-style hyperreality isn’t just tolerated, it’s profitable.
- Winner-take-all dynamics
In AI, the first to scale often wins disproportionately. This pushes actors to take risks and ignore caution in order to “win” before competitors.
2. Cultural Conditions
These shape what people believe about technology and thus what they will tolerate or even demand.
- Prestige of visionary figures
Tech leaders are seen as oracles. Their speculative statements gain traction even when untethered from reality, making challenge or oversight seem like “anti-progress” resistance.
- Narrative capture through hyperreality
Public discourse becomes filled with curated visions that overshadow critical thinking. People come to expect technology to solve all problems and underestimate social, ecological, and political constraints.
- Frontier mythos
Deeply embedded in Western culture is the idea that pushing technological and territorial boundaries is inherently good. This primes society to view AI as a “new frontier” that must be explored quickly, regardless of risks.
3. Psychological Conditions
These govern how humans process novelty and authority.
- Optimism bias & techno-salvationism
We are drawn to good-news futures, especially when they promise escape from current crises. This makes us vulnerable to elite-crafted visions that downplay danger.
- Authority bias
When someone with wealth, status, and insider access makes a prediction, it feels more credible than it is. The ability to challenge that narrative erodes.
- Attention economy overload
In a world of constant stimulation, bold, cinematic visions are more likely to cut through the noise — meaning cautious, complex, or “boring” truths rarely get airtime.
Why these conditions make “losing control” likely
- Control doesn’t vanish all at once — it erodes when oversight institutions, public opinion, and alternative narratives all defer to elite visions.
- When enough stakeholders buy into the curated future, dissenters become marginalized, regulations are watered down, and momentum builds toward the hyperreal goal — even if it’s harmful.
- In this way, surrender to curated visions is itself a surrender of control — AI just accelerates the process by making the visions more immersive, persuasive, and operationally possible.
If we wanted to make draconian regulation unnecessary, we’d focus on these leverage points:
- Economic: reduce concentration of AI ownership, break hype-market feedback loops.
- Cultural: normalize skepticism toward elite techno-futures, elevate grounded alternative visions.
- Psychological: improve public literacy in cognitive biases, create institutional checks on visionary rhetoric before it drives policy.
If you’d like, I can take this a step further by showing how each of these conditions has historical precedents where loss of control happened without AI — and how AI makes each one more acute. That would make the risk trajectory clearer.
Do you want me to build that historical parallel map?
ChatGPT once again generously offers to extend the conversation. I intend to keep the dialogue going in the next series of pieces.
In the conversation so far, we have moved from an analysis of OpenAI CEO Sam Altman’s rhetoric about the future — which ChatGPT sees as irresponsible and ultimately dangerous — to a breakdown of many of the economic and cultural ills of our vaunted post-industrial civilization, which now appears to be at risk of losing control.
Reviving the concept of “the body politic”
Doesn’t the very idea of “civilization” imply that entire societies manage to gain and exercise control over themselves, not through laws alone but through the development of shared heritage? Laws and regulations, in this view, are merely the icing on the cake. The substance of the cake is what for centuries our Western civilization referred to as “the body politic,” a rarely invoked concept that, if mentioned at all, appears only in an impoverished form.
When I asked Gemini about the status of the philosophical notion of the body politic, it offered this definition:
The term “body politic” is an ancient and powerful metaphor that likens a political community, like a state or society, to a living, unified human body. This concept’s deeper meaning, according to tradition, is that the health and proper functioning of the whole depends on the harmonious and hierarchical arrangement of its different parts.
If we hear the term spoken today by a politician, it will likely be expressed as this kind of lifeless metaphor: “We must address the issue of poverty, which is a cancer on the body politic,” meaning it is a problem afflicting the entire society. Demonstrating the degree of trivialization that a formerly powerful philosophical concept can attain, former UK Prime Minister, hack journalist and certified political clown Boris Johnson once self-deprecatingly referred to himself as a “mere toenail in the body politic.”
In the course of our conversation, ChatGPT identified not just the symptoms but also many of the causes of the illness that it sees as leading our body politic towards the loss of human control. It’s clearly time to ponder what it would mean for our ailing body politic to even begin addressing this litany of causes:
— Concentration of capital & decision-making.
— Speculative markets & hype cycles.
— Winner-take-all dynamics.
— Optimism bias & techno-salvationism.
— The “visionary” status attained by individuals thanks principally to their talent for accumulating wealth.
— Narrative capture through the hyperreality promoted by a complicit political and media establishment.
— Belief in the frontier mythos that champions anything seen as progress and minimizes risk.
— The primacy given by the “attention economy” to bold (and potentially irresponsible) ideas to the detriment of “boring” truths.
I invited ChatGPT to take on a triad of civilizational “body parts”: the economic, cultural and psychological. I could have added the philosophical (a civilization’s “worldview”) and the spiritual. Both are connected to culture and psychology. One of our civilization’s most obvious failings is related to the reflex, deemed scientific, of separating these categories rather than seeing them as interacting. It is in the space, or cultural “plasma,” of their interaction that meaning is produced, enriched and disseminated (rather than, say, in a data center!).
The result of OpenAI’s chatbot’s diagnosis is revealing. Our economic principles encourage monopolistic capitalism (nowhere more obvious than in the tech sector, which systematically “rewards perception over reality”). This perversity of social function combines with our political class’s commitment to hyperreality. Together they replace a healthy society’s ability to build, from the ground up, a holistic relationship with material, social, spiritual and cultural reality, in all its non-linear, interactive complexity, with economically motivated frontier and techno-salvationist myths that ultimately tend towards relinquishing control to technology and the elite that claims to administer it.
Here’s a new question raised by this dialogue, which we might all try to answer. It’s one of the paradoxes at the core of AI itself. If ChatGPT’s analysis is correct, does this moment of lucidity suggest that AI may be capable of guiding humanity (including Altman) towards defining and deploying the means and methods that might allow us to rectify a clearly mistaken trajectory? Or is this idea merely a postmodern extension of hyperreality?
We will continue this conversation next week.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.