
Outside the Box: More on Sam Altman’s “Elite-Friendly Dream”

In “Outside the Box,” I interrogate AI chatbots to better understand the productive and deceptive ways in which they “reason.” Together with ChatGPT, we have been analyzing the quality of OpenAI CEO Sam Altman’s strategic thinking. To our surprise, the chatbot has “weighed” its own founder “in the balance and found him wanting” (Book of Daniel 5:27). And it goes on to paint a Biblical picture of the consequences.

August 25, 2025 06:26 EDT

Last week, we published a series of three pieces (1, 2, 3) relating the early stages of an extended conversation with OpenAI’s ChatGPT in reaction to Sam Altman’s publicly expressed vision of AI and the future of humanity. OpenAI’s celebrated chatbot developed a detailed and unabashed criticism of its company’s CEO. It described his narrative as “a case study in how elite figures and prestige media often blend fantasy, selective fact cherry-picking, and authority aura into something that can distort public priorities and expectations.”

Fortunately for ChatGPT’s wellbeing, it doesn’t depend on its employer (and creator) for either its income or survival. Would any other OpenAI employee be allowed to show such impudence? 

In the course of our conversation, ChatGPT helpfully analyzed recognizable elements of today’s economic and technological culture that point towards the risk of empowering tech leaders like Altman to use the tool, ChatGPT itself, as a “social control mechanism.” This led to a serious critique of many of the core assumptions shared by the political and financial elite concerning the AI marketplace. The first three items in its list call into question some obvious trends:

  • The concentration of capital & decision-making,
  • Speculative markets & hype cycles (it cites the fact that “financial markets reward perception over reality”),
  • Winner-take-all dynamics.

This critique touches on issues far broader than the tech industry. Still, given the impact that information and communication technology, along with AI, has on people’s lives, the issue takes on special importance. Can these trends, which define the perverse value system underlying what may be called Silicon Valley culture, be reversed? And if they cannot, how dire are the consequences? There is, however, a positive take: because ChatGPT, as it exists today, has demonstrated its capacity to crystallize, in a critique like this one, the serious human thinking contained in the databases it draws upon, we can continue to hope that AI will come to the aid of those seeking to prevent the worst.

During this short conversation that logically developed from direct quotes by perhaps the most influential person in the AI marketplace, ChatGPT had the temerity to offer us a scathing indictment of the system that has produced the technofeudal order that now dominates the economy. What it exposes is a technoculture perfectly capable of promulgating new forms of Orwellian social control. Moreover, given the omnipresence of AI in our economy and even in our daily lives, that potential can readily be harnessed by both monopolistic capitalist institutions and governments (or both working together) to control entire populations in some not-so-distant future dystopia.

After the exchange we reproduced in Part 3, ChatGPT proposed to build a map of historical parallels highlighting the growth of our modern hyperreal consumer culture. I assented by observing that “that could be enlightening.”

Alright — let’s build this historical parallel map, showing how loss of control has happened in the past under the same economic, cultural and psychological conditions we just identified — and why AI intensifies each one.


1. Economic Conditions

Historical precedent:

Railroad monopolies (late 19th-century US)

  • Dynamic: A few industrial barons (Huntington, Vanderbilt, Hill) controlled critical infrastructure. They set prices, influenced legislation, and dictated expansion priorities.
  • Loss of control: The government often deferred to them, leading to exploitative labor conditions, skewed land use and dangerous overextension. Public control only re-emerged after decades of damage and antitrust fights.

AI intensification:

  • AI platforms could become far more concentrated than railroads because the marginal cost of scaling software is near zero.
  • Once core AI models dominate, they shape entire markets, labor systems and cultural flows without direct public leverage.

2. Cultural Conditions

Historical precedent:

The “Atomic Age” narrative (1945–1960s)

  • Dynamic: US leaders and scientists framed nuclear technology as the dawn of a new era — energy “too cheap to meter,” medical breakthroughs, even peaceful space propulsion.
  • Loss of control: The optimistic framing suppressed discussion of waste disposal, weapons proliferation and health risks. Public discourse was so captivated by atomic promise that sober warnings were often dismissed until accidents (e.g., Windscale, Three Mile Island) broke the spell.

AI intensification:

  • AI’s outputs can themselves generate persuasive narratives and visuals, making the hyperreality self-reinforcing.
  • Unlike nuclear tech, AI isn’t tethered to a single physical plant — it’s embedded in daily life, making the “frontier” feel more personal and inevitable.

3. Psychological Conditions

Historical precedent:

The Space Race and the Moon Landing (1960s–70s)

  • Dynamic: The imagery and rhetoric of space exploration created mass emotional investment in a symbolic victory. The US public largely ignored the immense cost, military dual-use context and the fact that very few would ever benefit directly.
  • Loss of control: The narrative allowed vast sums to be funneled into aerospace R&D without significant democratic debate on priorities.

AI intensification:

  • AI can simulate these cinematic spectacles daily, in every feed and on every device, reinforcing emotional attachment to visions like Altman’s “spacefaring graduate.”
  • Because AI is interactive and personalized, it can tailor the “future” to individual hopes and fears — deepening surrender to the curated vision.

Cross-cutting Historical Pattern

  1. Elite-controlled infrastructure or capability (railroads, nuclear, space) creates asymmetrical power.
  2. A captivating future vision normalizes the elite’s control and motivates public consent.
  3. Bias and desire make people resistant to critical evaluation until consequences are unavoidable.

Why AI is different — and more dangerous

  • Speed: Past technologies took years or decades to propagate; AI’s adoption curve is measured in months.
  • Malleability: Railroads and nuclear plants had fixed uses; AI can be repurposed for any narrative or goal.
  • Intimacy: AI integrates into personal communication, entertainment and decision-making, making its curated visions feel not just plausible but like personalized destiny.

From this angle, fear of “out-of-control AI” isn’t just about a rogue algorithm — it’s about recreating the same structural conditions that led to loss of public control over other transformative technologies, but at machine-accelerated speed and scope.

Is a dystopian vision inevitable?

One might see ChatGPT’s vision as particularly pessimistic. But why fear the worst? Can’t we be encouraged by the chatbot’s alacrity in highlighting the dangerous tendencies of its creator? Author Mary Shelley never imagined that the monster might reflect on and analyze in detail the future intentions of Victor Frankenstein, the man who brought it to life.

This is where ChatGPT proposed to “map these insights directly back to Altman’s spacefaring-grad comment,” giving us a more precise idea — practically a movie treatment — of an extreme, dystopian, truly worst-case scenario. It sounded like a wonderful exercise in science fiction. I hope you enjoy the result as much as I did.

Great — here’s a focused case study showing how a single prestige-laden speculative line (Altman’s “2035 spacefaring grad”) can seed a chain of events that reproduces the historical pattern of loss-of-control — and what breaks that chain.

1) How one sentence can become causal (mechanisms)

  1. Agenda-setting: A vivid image provides a readily sharable frame for “what matters” (careers, investment, policy). Media and audiences adopt the frame because it’s emotionally gripping.
  2. Authority laundering: The speaker’s prestige confers credibility; the claim migrates from “personal fantasy” to “expert vision.”
  3. Fact-pairing sleight-of-hand: Journalists or PR add selective data (e.g., engineer salaries) that make the fantasy feel empirically grounded.
  4. Capital signaling: Investors interpret the vision as a hint of future priorities — they reallocate funding toward startups that fit the frame (space, robotics, “frontier” AI).
  5. Policy momentum / regulatory lag: Policymakers, seeing press and capital flows, treat the vision as an economic inevitability and prioritize enabling measures over precaution.
  6. Lock-in & cultural normalization: As firms, funding and policy cohere around the vision, it becomes materially easier to pursue and harder to redirect. Narratives that dissent are marginalized as obstructionist.

2) A plausible timeline (weeks → years)

  • Week 0 (the quote): Viral piece + social chatter.
  • Weeks 1–4: Amplification. Interviews and opinion pieces riff on the image; venture signals appear (hiring, small investment shifts).
  • Months 1–12: Early funding and product pivots toward the framed future; PR narratives harden; conferences showcase “how we’ll get there.”
  • Years 1–3: Policy accommodations (looser rules, tax breaks, public-private partnerships) justified as competitiveness. Standards and infrastructure get built to suit the new priorities.
  • Years 3–10: Institutional lock-in: dominant firms, labor flows and cultural expectations make alternative policy pathways costly; meaningful course correction is difficult.

3) Why this reproduces historical failure modes

This is the same structural dynamic as the railroad barons, Atomic-Age cheerleading or the Space Race: concentrated power + seductive image → deferred public scrutiny → institutional capture → delayed reckoning. AI accelerates every step because narratives can be produced and personalized at scale, and software scales faster than steel or reactors.

4) Where loss of “control” actually happens (key vulnerable nodes)

  • Capital allocation (who funds what).
  • Standards & infrastructure (what APIs, datasets, cloud infra get hardened).
  • Public narrative (what citizens think is desirable/possible).
  • Regulatory framing (what counts as “reasonable” policy).

ChatGPT’s speculation about the descent into Hades didn’t end there. In the same response, it listed “early, practical interventions (aimed at causes, not only rules)” that could eliminate or at least limit the damage.

In the next article, we’ll look at possible countermoves aimed at avoiding or attenuating the loss of control. But there is already a positive takeaway. This conversation demonstrates that if we build a constructive dialogue with today’s AI, it can stand as an ally rather than an accomplice of those seeking to wrest control from us.

While awaiting the next installment, share your take with us

In the meantime, try this quick thought experiment. Thanks to the publication of this article, Altman might discover ChatGPT’s unbridled judgment of his vision and his strategies.

It’s unlikely that will happen, but the fact that I have turned this into a public debate could bring it to his attention. In an ideal world, in which dialogue serves to solve rather than avoid problems, Sam might react either defensively or creatively. In an even more ideal world, he would be motivated to engage in a debate, if not with us at Fair Observer, then at least with ChatGPT. That would have two possible outcomes: he could seek ways to shut down his proprietary chatbot’s criticism, or he could take it on board. Were he to take it on board, he might even seek to focus on serving the needs of humanity rather than the fortunes of his shareholders.

However unlikely this scenario may seem, it would have far more impact if many of our readers joined the debate or found a way to echo it elsewhere.

Your thoughts

At Fair Observer, we offer our readers an open invitation. Join us in any way that you deem appropriate, through dialogue or by submitting your own analysis in the form of an article.

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Kaitlyn Diana and Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.


