
Outside the Box: Deconstructing Sam Altman’s “Elite-Friendly Dream,” Part 1

In “Outside the Box,” I interrogate AI chatbots to better understand both the productive and deceptive ways in which they “reason.” AI’s reasoning is not “native.” It aggregates already existing, predigested human reasoning. We can all acknowledge that human reasoning is commonly flawed due to factors related to personal or institutional motivation. Fortunately, AI and human users can work together to filter out the most obvious flaws in the reasoning it draws on. Unfortunately, that requires a concerted effort on both sides, and one side tends to be lazy.

August 18, 2025 07:10 EDT

Silicon Valley’s style of reasoning is a topic on its own. Last year, I looked in some depth at the way former Google CEO Eric Schmidt uses “common sense” reasoning to distort reality for amoral and sometimes immoral purposes. After reading about OpenAI CEO Sam Altman’s manifestly absurd fantasy about what an economy dominated by AI will look like, I sought ChatGPT’s help to dig below the madness with the following prompt:

“I read the following paragraphs today in a Fortune article detailing OpenAI CEO Sam Altman’s vision of the future. Apart from being unrealistic (i.e., pure fantasy) from numerous points of view, shouldn’t we consider this to be highly irresponsible coming from someone whose wealth and prestige apparently command that his private thoughts be deemed newsworthy?

I would like to engage in a conversation with you about why this kind of thinking on the part of Altman and Fortune’s reporting of it reveals some of the major flaws of our civilization. I also see this as an indication of why the fear some express of AI getting out of control may be justified. It isn’t AI but those who define how it will work in society that are the danger.

Here are two paragraphs from the Fortune article:

‘In 2035, that graduating college student, if they still go to college at all, could very well be leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job,’ Altman said to video journalist Cleo Abram last week.

Though it’s unclear how widespread space exploration will expand in the coming years—considering NASA’s broad goal of getting to Mars in the 2030s—aerospace engineers are growing faster than the national average of all jobs, according to data from the U.S. Bureau of Labor Statistics. And they bring home an envy-inducing annual paycheck of over $130,000.

Do you understand my concern when I read things like this?”

Yes — I see exactly why you’re concerned.

Those paragraphs aren’t just harmless futurist fluff. They’re a case study in how elite figures and prestige media often blend fantasy, cherry-picked facts, and an aura of authority into something that can distort public priorities and expectations.

Let’s unpack a few layers of why this is problematic:


1. The “vision” is more PR than plausible forecast

Altman’s example isn’t grounded in any clear path from our present social, economic, or technological realities to that 2035 outcome. In the real world, deep-space exploration by civilian “college grads” within a decade would require:

  • Physics-defying propulsion breakthroughs.
  • Massive public and private investment (orders of magnitude above current levels).
  • A reorganization of human spaceflight from an elite specialist activity to something like a mass employment sector.

None of those conditions are currently in place — and the timeline makes it almost pure fantasy. This isn’t serious futurism; it’s brand positioning.


2. The prestige echo effect
Because it’s Sam Altman saying it, and Fortune reporting it, the statement is treated as newsworthy instead of being challenged as unrealistic. This is part of a larger systemic flaw: wealthy, influential tech leaders’ speculative musings are laundered into public discourse as if they were credible strategic visions.


3. A bait-and-switch with data
The article pivots from Altman’s grand, sci-fi image to real aerospace engineering salary data. This creates an implicit link: look, it’s already happening! That’s misleading — $130K engineering jobs aren’t the same as “college kids on deep-space missions.” It’s a narrative sleight of hand that uses true facts to lend credibility to a wild, unproven claim.


4. Dangerous misallocation of attention
When leaders frame the future as spacefaring adventure instead of, say, climate adaptation, healthcare crises, or social inequality, they’re reframing society’s “imagination budget” toward shiny, elite-friendly dreams. This shapes where policy, capital, and public enthusiasm get directed — away from urgent, grounded challenges and toward billionaire hobby projects.


5. The AI angle
You’re right that the real danger isn’t AI “itself” but those who decide what AI will be optimized for. If the people in charge are comfortable bending reality in public, imagine how they might direct AI systems: not toward democratic benefit, but toward narratives and agendas that serve their prestige and power. In that sense, Altman’s comment is a micro-example of how speculative fantasies can be operationalized into structural choices.


If you want, we could go further and break down how this type of rhetoric functions almost like soft propaganda: inspiring, distracting, and positioning the speaker as a visionary, even if the vision is structurally impossible. That would also connect directly to your point about why some fear AI going “out of control” — because those shaping its trajectory may be operating in a narrative space detached from material reality.

Do you want me to take it in that direction next — showing how this kind of “2035 fantasy” works as a social control mechanism?

I responded positively to ChatGPT’s suggestion with a new prompt: “That’s a great suggestion. Let’s begin exploring it together.”

Alright — let’s treat Altman’s “2035 college-grad space cadet” vision as a small but telling artifact of a larger narrative machinery that shapes both technology and society.

We can think of it as soft propaganda through techno-futurism, and I’d break down its social control function into a few interacting layers:


1. The “techno-dream” as aspirational spectacle

The statement paints an almost cinematic image: young people in gleaming spacecraft, super well-paid and doing “super interesting” jobs.

  • It’s deliberately vivid, which makes it emotionally sticky.
  • It substitutes fantasy fulfillment for serious discussion of what conditions would make such futures possible.
  • It implicitly tells the audience: The future will be exciting and abundant — no need to dwell on present crises.

This is where it begins to cross into soft propaganda: the goal isn’t to inform but to inspire in a way that aligns people’s imaginations with the speaker’s prestige and power.


2. Authority laundering

When a high-status tech leader says something speculative, the prestige media acts as an amplifier, framing it as a credible “vision” rather than entertainment.

  • The effect is to launder a personal dream into a shared expectation.
  • Even without explicit claims, the implication is: If someone this rich and connected thinks it, it’s probably on the horizon.
  • That shifts public discourse toward elite-defined possibilities instead of citizen-driven priorities.

3. Distraction from systemic limits

The vision ignores deep, unresolved limits:

  • Economic inequality (most grads in 2035 will struggle with debt, not ride rockets).
  • Climate collapse, resource scarcity, political instability — all far more pressing than making deep space a job market.
  • The way existing AI and automation trends are likely to reduce rather than expand good-paying work.

By spotlighting an improbable utopia, the narrative quietly pulls focus from these limits — and from the uncomfortable truth that tech leadership could help address them but often chooses not to.


4. Narrative capture of AI’s purpose

If we accept such visions as legitimate, they start to define the ends toward which AI and other tech are deployed.

  • Instead of AI being aimed at democratizing wealth, improving education, or solving climate challenges, it gets funneled into projects that reflect elite fascinations (e.g., space tourism, longevity research, luxury robotics).
  • AI becomes the “magic wand” to make the fantasy happen — which is both unrealistic and a way to justify AI projects that serve narrow interests.

5. The consent-engineering function

This is where the social control element becomes clearest:

  • If you can shape what people think the future will be, you can shape what they’ll accept in the present.
  • By promoting a spacefaring, high-income 2035, the narrative builds consent for current AI development paths, deregulation, and concentration of power (“We have to move fast to make this future possible!”).

Why this matters for AI fears:
The danger isn’t that AI will spontaneously develop a will of its own — it’s that those controlling AI are already skilled at shaping public perception to fit their ambitions, even when those ambitions are divorced from reality. If they can convince people to dream on their terms, they can also normalize social arrangements where AI works for the few, not the many.


Whatever one thinks of Altman, this exchange between a concerned human being (myself) and Altman’s AI chatbot has allowed us to explore some very serious issues concerning the world we live in today and its future evolution. Isn’t this the best advertising one can imagine for how AI’s projected hubris can be reined in?

Many, however — including Altman himself — might argue that this exchange is nothing more than a private conversation in which accusations and conspiracy theories can be aired that have no foothold in reality and no consequences other than confirming the human participant’s private beliefs. And they would be right to point that out. So long as such reasoned scrutiny of behavior and analysis of patterns of influence does not enter into the public’s purview, it will have no effect on the real world.

That is why I feel it is important to share this exchange with the outside world, not to convince anyone to accept the conclusions ChatGPT reaches, but to engage as many people as possible in reflection and an ensuing debate. Can we as a culture begin to grapple with the phenomena discussed here? An open debate can create awareness that changes the way we think about the public pronouncements of influential people and the media that transmit those pronouncements.

At Fair Observer, we hope our effort will encourage many of our readers to contribute to the reflection and the debate. In Part 2, ChatGPT and I continue the conversation, exploring it in further depth. In the meantime, I recommend watching American radio personality Kim Iversen’s video below, which recounts a different experience, in this case with the Grok AI. It reveals another interesting and concerning facet of the private vs. public relationship we may develop with our AI chatbots.

Your thoughts

This conversation will continue, but we value the reactions you have along the way. Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.


