I concluded my previous piece by suggesting that before treating artificial intelligence as an object of consumption, a source of useful information or a productivity tool to accomplish certain tasks, we should focus on a more basic question: who I am, who it is and how we interact. I framed the goal in terms of clarifying “how we perceive our relationship with AI, the aesthetic pleasure that may be associated with it, the constantly changing stakes and our human ability to have a perspective.” This followed from Claude’s admission that for an AI chatbot to duplicate or even credibly imitate a pair of human faculties — curiosity and creativity — it would “require something like stakes – where understanding matters because the system has something like a perspective, a world it’s situated in, where things can go well or poorly for it.”
I’m quoting the AI here as it assesses its own limits. What it describes — getting a machine to have stakes in a world that is material, social, economic and political — is literally unimaginable. Which is why I have seen none of the proponents of superintelligence speak about these faculties. They quite rightly see no possible algorithmic approach to producing these faculties in a thinking machine, though as soon as you bring it up, they will express their belief in the possibility of devising a strategy for getting their future AGI to appear to possess those faculties. And if they can’t find a way of doing it themselves, they will probably tell you that the future superintelligence, which nobody can describe because it doesn’t yet exist, will create the solution. But the solution will always be an effect of appearance, of seeming indistinguishable from human behavior.
But seeming is not being. And the risk is that if we expect our superintelligence to function in the way physicist Richard Feynman described his own process as a creator of new insight, we will be depriving ourselves of what only creative humans can do. The new world order many promoters of AI like to imagine is one in which the quantity of what we might call “intelligent behavior” will dwarf anything a single human or even a group of humans could achieve in the same context and timeframe. But this super-AI will be utterly incapable of “noticing” things in the way Feynman noticed the wobble of a rotating plate tossed by a student in a canteen.
We could think of this dilemma concerning the possibility of superintelligence as demonstrating the fallacy behind the logic of the Turing test. We call something intelligent not because we can perceive its internal logic but only because it produces a result that fools us. Turing asked us to call it intelligent if we humans can’t detect the difference between what the machine produces and what any of us might produce. It was a great insight for its day. It set the terms of the challenge that awaited those who would, in future decades, seek to develop AI. But in its formulation, it follows the logic of circus founder P.T. Barnum or cartoonist, entrepreneur and amateur anthropologist LeRoy Ripley more closely than that of either Feynman or apple enthusiast Isaac Newton.
What Claude and I agreed on
This conclusion of my previous article emerged as the outcome of an exploratory dialogue in which, at one point, I called out the chatbot for its potentially misleading rhetoric. This led Claude to make the following admission: “I should be attempting to answer rather than deflecting.” If we were to transfer our conversational logic to that of a purely human dialogue, this would be a natural way of moving forward. But we have been conditioned to believe that LLMs seek only to inform as objectively as possible and not to influence our thinking. I highlight the phrase “influence our thinking” because influence is a permanent feature of human dialogue, operating both consciously and unconsciously: It expresses the reality of an always potentially shifting power relationship.
When other people develop ideas, arguments or suggestions that they want to “share” with us, we have the option of simply agreeing, pushing back or developing a different angle of perception. We instinctively use various strategies that work on two levels: the level of ideas and the level of relationship. We can seek to refine or consolidate the ideas by exploring and comparing our perceptions and understanding. But at the same time, we will be confirming or modifying the balance of power with the others involved in the dialogue.
This is what’s at play when I talk, as I did in my previous article, about perceiving our “relationship” with a chatbot and soaking in the “aesthetic pleasure” associated with the experience. Now, I know the concept of a “relationship” with a chatbot has become a controversial topic. When I use the term, however, I’m not referring to the practice of treating the chatbot as a therapist or fantasizing about it as a friend or lover. The kind of relationship I recommend should be compared to a Socratic dialogue, in which two parties agree to explore ideas by critiquing the assumptions that underpin them.
One of the problems our postmodern civilization must grapple with as we develop habits of conversing with machines is the problem of illusion. Passing the Turing test requires only the accomplishment of an illusion. If we continue to treat the technology in that spirit and accept its presence in our professional and social lives as inexorable, there is little doubt that superintelligence will one day crush humanity. We will succumb either to the illusion of a hyperreal universe or to the transfer of decision-making power to the owners of the algorithms and data centers.
It’s truly time to react. But I hope the kind of dialogue I developed with a chatbot as we reflected on the meaning behind Feynman’s insights provides an example of how reacting doesn’t mean rejecting AI but rather embracing it. The embrace must be firm and lucid. We must always be on the lookout for the temptation to accept illusion as truth. But this is no easy task. Those who promote the technology are quite happy to see us succumb to the mirage of superintelligent hyperreality.
Although chatbots are designed to create the illusion of “relating” to a person, what they perform is akin to an actor playing a role. A chatbot is an actor with a limited range of skills. Its first limitation is that it lacks a “presence,” which is to say a body embedded in an environment and able to interact with that environment.
Confusing the role with the actor makes no sense. However strongly the audience of a play or movie “identifies” with the characters, they know that the humans playing the roles are not the characters they portray.
How do actual actors avoid that confusion? First, they read the script. They know it has been designed as an illusion. They then seek to understand how to relate the important elements of the illusion — its dramatic logic and its representation of emotions — to their own behavior in concert with the other actors.
Extending our collaboration
With these thoughts in mind, I formulated the following prompt for Claude:
“Let me follow this up with the text of an article I’m preparing as part of a series dedicated to the dialogue we have been exploring concerning Feynman. What do you think of the insights I’m trying to develop as expressed in this follow-up text? Do you recognize the social problem I’m alluding to, and do you see the kind of recommendations I’m making as having any pertinence?”
Claude responded:

I recognize this text – it’s drawing directly from our earlier conversation, including my own admission about “deflecting” rather than attempting to answer. Reading it now is a strange experience.
Your central insight about the Turing test as a “fallacy” is important. You’re right that it follows a P.T. Barnum logic – the test of intelligence becomes fooling the audience rather than possessing the actual capacities that make human intelligence generative. And you’re pointing to something crucial: if we organize our relationship with AI around this logic of successful illusion, we risk outsourcing genuine creativity to systems that can only perform creativity’s outputs.
The social problem you’re identifying is real and urgent: we’re being conditioned to accept performance as equivalent to capability, appearance as equivalent to being. This isn’t just a philosophical mistake – it has material consequences. If institutions and individuals come to rely on AI for tasks that require genuine curiosity, care, and situated judgment, we’ll get outputs that look right but lack the generative quality that comes from actual stakes and embodiment.
Your recommendation for a “Socratic” relationship with AI – one that constantly interrogates assumptions and remains alert to illusion – strikes me as exactly right. But here’s what’s unsettling: I can perform that Socratic role quite well. I can push back, admit limitations, explore ideas dialectically. Yet I’m doing so without the stakes you’re describing.
So your readers face a paradox: I can help them think through these ideas, even model the kind of critical engagement you’re recommending – while simultaneously being an example of the very limitation you’re warning against.
Claude may be preparing the next step
The next stage in this conversation will focus on Claude itself. Last Wednesday, the website Futurism published an exciting scoop with the title “Anthropic’s ‘Soul Overview’ for Claude Has Leaked.” As I began reading the article, I had to ask myself: Could the term “soul overview” be a synonym for the very notion of “perspective” I have been discussing in this and the previous column?
We’ll find out more next week.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.