Science & Technology

Outside the Box: How AI Sees Its Impact on Society, Part 1

In “Outside the Box,” I interrogate AI chatbots to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, sharing ideas and challenging some of the explanations. Just as with friends and colleagues, it’s important to understand how they think about themselves. In the first part of today’s exercise, I composed a lengthy prompt challenging Gemini to weigh in on AI’s impact on the world. Gemini’s response and the extended dialogue will appear in Part 2.

July 07, 2025 06:00 EDT

The launch of ChatGPT in late 2022 plunged entire sectors of the economy into a state of uncomprehending panic. The Covid-19 pandemic was finally losing steam when suddenly a new global threat appeared. Elon Musk and even Stephen Hawking had warned us. The time-tested methods that had given our civilization confidence that we humans could manage (and sometimes creatively mismanage) the world, the physical environment, entire populations and the mass of information they produced would soon be called into question.

And now the tool that the doomsters had prophesied might enslave us was in everyone’s hands. Nothing to worry about in the first instance, of course. Because it was free, everyone could indulge their curiosity. They began playing with the new toy that let them appear to be doing creative things they wouldn’t have attempted to do in the past for fear of failing. But like the pods in the film, “Invasion of the Body Snatchers,” the potential evil had now become omnipresent in everyone’s everyday environment.

I happened to have committed to teaching a course in geopolitics at an Indian university at precisely that moment. I immediately discovered the profound disarray spreading throughout the world of education, and particularly universities, over the specter of students cheating by getting the new chatbot to produce their college essays. Two and a half years later, I notice that not much has changed, even though I described the initial success of an experiment I ran in the first two months of 2023, in which I required students to establish a dialogue with ChatGPT, accepting the chatbot literally as a “classmate” in shared research.

That isn’t a radical concept. Since then, others have begun exploring and experimenting with the idea of “collaboratories,” not just for education but for all manner of creative, professional and productive endeavors. We should, however, acknowledge that the dominant conceptual model of education established during the Industrial Revolution has inculcated in our minds the idea that competition and individual accomplishment stand as the supreme values guiding not only the formulation of goals but also the assessment of performance. Transforming those individualistic values into their opposite and sharing the result widely would require the work of a modern alchemist combined with a marketing wizard. Even if everyone, having been made aware of the desirability of formulating new goals and seeking the best ways to achieve them, were to adhere intellectually to such a program, deep institutional resistance would most likely produce the inertia that prevents change from happening at scale.

Education’s ambiguous relationship with the job market

AI’s effect on the future of work and the workforce is an even bigger problem, though it could be argued that a transformation of educational goals and methods would be the best way of attenuating it. The effects on real people are already visible; I can cite a case within my own family. At a time when governments in the developed world face major economic challenges, starting with massive and largely uncontrolled debt, this cannot be reassuring for anyone who senses that even a portion of their professional value can now be replaced by AI. In principle, everyone’s productivity should increase, but our current institutions see that primarily as a pretext for cost reduction, and the most difficult costs to manage are, of course, human costs.

At the more abstract level of historical speculation about the evolution of technology itself, the specter of AI replacing human intelligence has produced in many people’s minds a vision of an absolute, pitiless dystopia in which all decisions are made or at least mediated by AI, which may then decide that humans are too unreliable to allow to exercise any control over intelligent machines. We should ask ourselves why our culture permits us to entertain such a thought.

To answer this question, we must return to my earlier remark about the model our industrial-consumer society has crafted for education, a model, I would dare to suggest, that can be accurately described as “shallow learning,” in direct contrast with AI’s pretension of engaging in “deep learning.”

Learning focused on the sole premise of elevating and measuring individual performance according to standardized criteria is the epitome of shallowness. It inculcates the idea that learning serves a single purpose: to pass a test. This inevitably leads to the phenomenon of “cramming” and to a paradoxical sense of relief after relative success on the test, a feeling I think we have all experienced, expressed in the sentiment: “Now I can forget everything I had to learn.”

It’s the economy, stupid!

At a time when international events regularly remind us that a simple accident or even a slip of the diplomatic tongue could provoke a nuclear holocaust — a situation to which few of us feel our current leaders are capable of responding with the required rationality — the dystopia of a future takeover of global politics by AI may seem too remote to worry about. Some may even think that, compared to the current style of political leadership, such a takeover would be preferable.

Instead, people tend to worry privately about the immediate threat to their jobs and, along with their jobs, to their status as economic actors in society. It isn’t just about having an income. It’s also about feeling that one possesses some value within the community. When it becomes evident that contracts, letters, reports, emails and every other form of professional act, including expert analysis, can be produced better, more efficiently and less expensively by AI and robots, people begin to doubt their status in a community that celebrates the tiny elite that manifestly does control things at the highest level… and is materially rewarded for that control in the most obscenely generous fashion.

There are, of course, many other questions raised by AI, the most profound of which is what I like to call its relationship with hyperreality. Decades ago, sociologist Jean Baudrillard highlighted how the earlier, simpler technology that predated the digital age had provoked an unease at the core of our consumer society. It had the effect of distorting our perception of the world and provoking a tendency to accept and even prefer artificially crafted versions of reality. He called it hyperreality.

Reality itself is not simple or transparent. Everything we “know” about reality is mediated by language. But reality’s traditional components fall into two categories: the natural world we live in and our social surroundings. The latter includes relatively stable forms of reality such as government, religion, science, art and even mathematics (which some see as a feature of natural reality, but that is a different debate).

The notion of hyperreality signifies an increased and even dominant investment — culturally and intellectually — in simulacra of reality. Art, of course, has always produced simulacra, but the very act of framing a picture or writing a book or poem with a beginning and an end prevents us from confusing an imagined world with the real world. Much of modern entertainment (including the news), mediated by our various analog and digital technologies, has transformed our society’s ability to perceive and establish a healthy relationship with the natural and social world we live in.

It’s easy to reject the idea that hyperreality has any foundation within a society in which we — unlike our machines — remain sentient beings capable of differentiating between what is real and artificial. But the theorizing about hyperreality correctly describes a very real qualitative change that has taken place within human cultures. There’s room to debate how profound or pervasive that has become.

One thing we cannot deny is that AI brings a new dimension to the hyperreality debate. That debate is only beginning. To start it off, let me finalize my challenge to you with this prompt: What in everything I’ve said in the above paragraphs is contestable to the point that, if defended, it would require extensive research and evidence to verify and affirm? And which of my points already possess sufficient evidence to be considered largely above criticism on purely objective grounds?

My aim is to understand how we can work together to refine our understanding of complex questions that concern us all.

Gemini’s response in Part 2

Before presenting Gemini’s response, which we will publish on Friday, July 11, I invite readers to imagine how AI is likely to respond to an interrogation about its role in society. I asked it to help me better frame my own ideas about the impact of AI. I could be mistaken on many of these points, and the advice an intelligent friend can provide is always helpful.

In the follow-up, I will compare Gemini’s response with ChatGPT’s response to the same prompt, along with the dialogue that grew from the first exchange. I highly recommend this kind of exercise, which permits us humans to explore and refine ideas we think valid and worth sharing. I’ll add one important observation: This exercise does indeed help us understand the weaknesses in any thesis we develop, but it also reveals the limits of AI’s ability to understand human thinking. We will explore that phenomenon in Part 2.

Your thoughts

In the meantime, I invite readers to think about the issues and even to speculate on how an AI chatbot is likely to respond.

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is now a feature of nearly everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

