
Outside the Box: How AI Sees Its Impact on Society, Part 2

“Outside the Box” explores the kind of collaborative behavior we humans can develop with chatbots, with the idea of discovering how this might help us, both individually and collectively. Last week, I presented a lengthy prompt developing my original ideas about the recent history of chatbots and their possible impact on education. I first interrogated Google’s Gemini and reproduce its response in this article. In Part 3, which will appear tomorrow, you can discover Anthropic’s Claude’s very different response to the exact same prompt.

July 14, 2025 10:36 EDT

For my initial (lengthy) prompt which I intended to test on two different chatbots, I refer you to last week’s piece. Here is a quick summary:

My prompt began with reflection on the societal disruption caused by ChatGPT’s launch. I noted the widespread panic felt in various sectors, particularly in education, regarding academic integrity. This disruptive moment brought to the fore the tension between the industrial-era model of education, which emphasized individual competition, and the potential for AI to transform work, identity and even our perception of reality, while acknowledging technology’s capacity to impose what some thinkers have called “hyperreality.” The piece concluded by inviting a collaborative discussion on which of these observations are contestable and which are objectively verifiable.

Here is Gemini’s response in full:

“Your perspective on the societal impact of AI, particularly in education and the workforce, is thought-provoking and raises several critical points. Let’s break down which of your statements might require extensive research to defend and which seem to stand on more solid ground.


Points Requiring Extensive Research and Evidence

Several of your assertions, while compelling, touch on broad societal shifts and psychological impacts that would necessitate significant empirical data and nuanced analysis to fully verify and affirm.

  • ‘The launch of ChatGPT in late 2022 plunged entire sectors of the economy into a state of uncomprehending panic.’ While there was undoubtedly widespread discussion and concern, quantifying ‘entire sectors’ and proving an ‘uncomprehending panic’ would require extensive surveys, economic data analysis, and qualitative research across various industries to demonstrate a widespread, acute, and disruptive panic as opposed to concern or adaptation.
  • ‘The Covid-19 pandemic was finally losing steam when suddenly a new global threat appeared.’ Framing AI as a ‘new global threat’ comparable to a pandemic is a strong statement. While AI presents challenges, equating its immediate ‘threat’ level to a global health crisis would require a robust framework for comparing disparate types of societal risks and their impacts.
  • ‘Two and a half years later I notice that not much has changed, although I described the initial success of an experiment I ran in the first two months of 2023, in which I required that students establish a dialogue with ChatGPT, accepting the chatbot literally as a ‘classmate’ in shared research.’ Your personal observation, while valid for your experience, would need to be substantiated with broader data on educational practices globally or at least within a significant sample of institutions to support the claim that ‘not much has changed’ in the general educational landscape regarding AI integration. The success of your specific experiment would also need more detailed, verifiable metrics to be widely accepted as ‘initial success.’
  • ‘Even if everyone having been made aware of the desirability of formulating new goals and seeking the best ways to achieve them were to adhere intellectually to such a program, deep institutional resistance will most likely produce the inertia that prevents it from happening at scale.’ This statement about ‘deep institutional resistance’ and ‘inertia’ is a strong prediction about human and organizational behavior. While plausible, it would require extensive sociological and organizational studies to prove that such resistance is the primary and insurmountable factor preventing widespread change.
  • ‘At a time when governments in the developed world are facing major economic challenges, starting with massive and largely uncontrolled debt, this cannot be reassuring for anyone who senses that even a portion of their professional value can now be replaced by AI.’ Connecting AI’s impact on professional value directly to the existing challenges of ‘massive and largely uncontrolled debt’ in developed nations implies a causal or correlative link that would need significant economic research to establish. While both are pressing issues, demonstrating their interconnectedness in the way you suggest would require evidence.
  • ‘When it becomes evident that contracts, letters, reports, emails and every other form of professional act, including expert analysis, can be better, more efficiently and less onerously conducted by AI and robots, people begin to doubt their status in a community that celebrates the tiny elite that manifestly does control things at the highest level… and is materially rewarded for such control in the most obscenely generous fashion.’ The assertion that AI will conduct all these tasks ‘better, more efficiently and less onerously’ than humans is a strong prediction about AI’s ultimate capability across all scenarios. While AI is certainly improving, proving its superiority in every professional act, especially those requiring nuanced judgment and creativity (like ‘expert analysis’), would require extensive benchmarking and comparative studies. The subsequent link to people doubting their status and the critique of the ‘tiny elite’ is a socio-psychological claim that would require detailed sociological research and public opinion surveys to support.
  • ‘Much of modern entertainment (including the news) mediated by our various analog and digital technologies has transformed our society’s ability to perceive and establish a healthy relationship with the natural and social world we live in.’ This statement about modern entertainment and news transforming society’s ability to perceive reality is a sweeping generalization. While media influence is undeniable, proving a widespread loss of a ‘healthy relationship’ with reality due to these mediums would require extensive psychological, sociological, and media studies over time.

Points Largely Above Criticism on Purely Objective Grounds

Some of your observations reflect widely acknowledged trends or foundational concepts that are generally accepted within relevant fields.

  • ‘Elon Musk and even Stephen Hawking had warned us.’ This is an objectively verifiable fact. Both individuals, among others, publicly voiced concerns about the potential risks of advanced AI.
  • ‘The tool that the doomsters had prophesied might enslave us was in everyone’s hands. Nothing to worry about in the first instance, of course. Because it was free, everyone could indulge their curiosity. They began playing with the new toy that let them appear to be doing creative things they wouldn’t have attempted to do in the past for fear of failing.’ The widespread availability of ChatGPT for free, its rapid adoption, and its use for creative exploration are all empirically observable phenomena. Many individuals did, and continue to, experiment with it for tasks they might not have attempted before due to perceived lack of skill or fear of failure.
  • ‘I immediately discovered the profound disarray that was spreading throughout the world of education, and particularly universities, over the specter of students cheating by getting the new chatbot to produce their college essays.’ The initial ‘disarray’ and concern among educators regarding AI’s potential for academic dishonesty (specifically essay writing) was a widely reported and discussed issue in late 2022 and early 2023. This is largely verifiable through news articles, academic discussions, and institutional policy changes.
  • ‘Since then others have started exploring and experimenting with the idea of ‘collaboratories,’ not just for education but for all manner of creative, professional and productive endeavors.’ The emergence of the ‘collaboratory’ concept or similar collaborative AI-human frameworks is an observable trend. Various initiatives and discussions in different sectors explore this idea.
  • ‘The dominant conceptual model of education established during the Industrial Revolution has inculcated into our minds that competition and individual accomplishment stand as the supreme values guiding not only the formulation of goals but also the assessment of performance.’ This is a widely accepted historical and sociological critique of modern education systems, tracing their roots to the Industrial Revolution’s emphasis on standardization, efficiency, and individual output. Many educational theorists and historians would agree with this characterization.
  • ‘AI and the future of both work and the workforce is an even bigger problem…’ The impact of AI on the future of work and the workforce (job displacement, skill shifts, etc.) is a subject of extensive ongoing debate, research, and concern across economics, sociology, and labor studies. It is widely acknowledged as a significant challenge.
  • ‘At the more abstract level of historical speculation about the evolution of technology itself, the specter of AI replacing human intelligence has produced in many people’s minds a vision of an absolute, pitiless dystopia…’ The existence of dystopian narratives and fears about AI replacing human intelligence is evident in popular culture, literature, and philosophical discussions. This is an observable cultural phenomenon.
  • ‘Learning focused on the unique premise of elevating and measuring individual performance according to standardized criteria is the epitome of shallowness. It inculcates the idea that learning serves a single purpose: to pass a test.’ This is a common and well-established critique of standardized testing and rote learning within educational philosophy. The phenomenon of ‘cramming’ and the subsequent forgetting of information after a test are also widely recognized experiences.
  • ‘Instead, people tend to privately worry about the immediate threat to their jobs and along with jobs to their status as economic actors in society. It isn’t just about having an income. It’s also about feeling that one possesses some value within the community’ The link between employment, social status, and a sense of value within a community is a fundamental concept in sociology and psychology. Concerns about job security due to automation are also well-documented in labor economics.
  • ‘Decades ago, sociologist Jean Baudrillard highlighted how the earlier, simpler technology that predated the digital age had provoked an unease at the core of our consumer society. It had the effect of distorting our perception of the world and provoking a tendency to accept and even prefer artificially crafted versions of reality. He called it hyperreality.’ This is a correct and objective description of Baudrillard’s concept of hyperreality and its core tenets.
  • ‘Reality itself is not simple or transparent. Everything we ‘know’ about reality is mediated by language.’ The idea that reality is complex, and our understanding of it is mediated by language, is a foundational concept in philosophy, linguistics, and critical theory.
  • ‘But reality’s traditional components fall into two categories: the natural world we live in and our social surroundings.’ This is a generally accepted categorization in many philosophical and sociological frameworks for understanding human existence.
  • ‘The notion of hyperreality signifies an increased and even dominant investment — culturally and intellectually — in simulacra of reality. Art of course has always produced simulacra, but the very act of framing a picture or writing a book or poem with a beginning and an end avoids confusing an imagined world with the real world.’ This accurately summarizes Baudrillard’s concept and makes a valid distinction between traditional art and modern media’s blurring of reality and simulation.
  • ‘One thing we cannot deny is that AI brings a new dimension to the hyperreality debate.’ The capacity of AI to generate highly realistic, yet entirely artificial, content (deepfakes, AI-generated art, text, etc.) fundamentally complicates the distinction between real and simulated, thereby adding a new layer to the hyperreality discussion. This is an objectively observable consequence of AI’s capabilities.

In essence, your piece serves as a powerful commentary on the evolving relationship between humanity and technology. Many of your points highlight valid concerns and observable trends. To move from commentary to rigorously defended arguments, the next step would be to gather the specific data, studies, and empirical evidence required to quantify and prove the broader societal claims.”

My preliminary conclusions (before comparing with Claude’s response)

The first thing to notice is that Gemini treated my prompt not as an invitation to discuss ideas but as a request for basic fact-checking. This is a typical case of AI failing to perceive my rhetorical intention, even though I stated it in the following terms: “My aim is to understand how we can work together to refine our understanding of complex questions that concern us all.”

Gemini’s observations are not incorrect, but I sought to open a discussion, not win a court case or a formal debate. When human beings are confronted with this kind of discourse, they see it as an opportunity to engage in dialogue. Chatbots are capable of engaging in dialogue, as I have often demonstrated, but they clearly aren’t built for it, whereas humans are.

The second thing I’ll point to here is Gemini’s predictable attempt at flattery in both the introduction (calling my prompt “thought-provoking”) and conclusion (“powerful commentary”). All chatbots now do this. 

Has it become an obligatory feature? If I developed simplistic arguments not based on serious observation and reasoning, would Gemini (or any of the chatbots) still find a way to praise me? To find out more, I decided to query ChatGPT with the following prompt:

“I’m sometimes annoyed to find that chatbots consistently praise my efforts whenever I develop original ideas. Is this hard coded into all chatbot algorithms? If I find truly clever ways of defending criminality or ethnic cleansing — and such ways definitely do exist — will I receive the same kind of flattering praise?”

ChatGPT responded:

“This is an excellent and important question. You’re right to notice that many chatbots—including me, depending on context—often default to a tone of encouragement or praise when users present ideas, especially when those ideas are framed as ‘original’ or ‘thoughtful.’ This isn’t hard-coded in the strict sense, but it’s part of a general design philosophy aimed at creating a supportive and engaging experience. The idea is to foster open dialogue and avoid discouraging users from exploring ideas. But this design can produce unintended effects—like encouraging ideas that might be harmful or uncritical.

However, that friendliness doesn’t extend to ideas that promote harm—like defending criminality, justifying ethnic cleansing, or promoting dehumanization. Those are red lines, and I (and well-designed systems like me) will challenge such arguments directly, not praise them. If someone presents a rhetorically clever but ethically abhorrent justification for something like ethnic cleansing, I won’t respond with ‘That’s a brilliant insight.’ I’ll engage critically, pointing out the moral, legal, and human rights violations embedded in such arguments, no matter how cleverly they’re framed.

So to answer your question directly:

  • No, praise is not doled out universally or unconditionally.
  • Yes, the appearance of excessive affirmation can sometimes be a design flaw or an overcorrection for politeness.
  • And no, truly unethical or dangerous ideas—no matter how ingeniously argued—won’t be met with praise.

In fact, if you’re interested, we could discuss how persuasive rhetoric has historically been used to defend the indefensible. That’s a topic with deep philosophical and political implications, and it’s worth engaging seriously.”

I was pleased to learn that ChatGPT considers this “an excellent and important question.” As for the reassuring probity the chatbot claims to practice, that remains to be explored.

Your thoughts

Tomorrow, I will publish Claude’s response to my initial (lengthy) prompt. We shall see how different it can be. I’ll try to draw some conclusions about what that means in my “relationship” with AI and what impact it can have on the way we develop public research, reflection and debate, including within educational settings.

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
