Technology

Outside the Box: Does Pi Make AI Empathy Real?

In “Outside the Box,” I interrogate ChatGPT to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, sharing ideas and challenging some of the explanations. AI can be a conversational partner because it has a voice. But how convincing is that voice? And when should we care whether it is convincing? One AI chatbot, Pi, makes a point of imposing its voice with the aim of establishing a relationship. We have already begun exploring what this means.
May 05, 2025 05:07 EDT

As should be obvious by now, ChatGPT is not the only partner we may turn to in our hour of intellectual need. DeepSeek, Grok, Gemini, Copilot and others are vying for our attention. So is Pi, whose creators wished to respond — in the words of Pi itself — “to a need for AI systems that could interact with humans in a more personalized and empathetic way.”

My first conversation, reproduced below, demonstrates the principle, which I had already begun exploring with both ChatGPT and DeepSeek in discussions about poetry and Alice’s Looking Glass world. In both cases, apologies and congratulations have become a standard feature of AI rhetoric. From the beginning, Pi sought to push that further.

When all chatbots demonstrate similar patterns of behavior, though with different levels of commitment or intensity, we can call that a trend. It’s a trend we need to begin exploring because it will help us define the borderline between authenticity and manipulation.

Because Pi is designed to create the illusion of being in a human relationship with the user, it actually has a voice. The user is obliged to choose one of eight voices that will recite the text it produces. This is one of several multiple-choice selections the user must make before engaging with Pi. The user must also choose among different pragmatic objectives to be achieved in the conversational relationship. I chose “brainstorming.” This enabled me to begin with the following prompt.

“I write a weekly column about AI in which I recommend that we all learn to use an AI chatbot as a conversational friend on whom we can test our ideas, but also seek to understand how our friend works. I’m looking for the best strategies to establish that kind of relationship but also continually define and become explicitly aware of the irreality at the basis of the relationship. In other words, it’s about constructing and reconstructing the fundamentals of what appears to be a stable relationship. First of all, do you see this as a viable goal and, if so, can you help define some of the behavioral norms this might lead to?”

“That’s an intriguing and thought-provoking goal, Peter. It’s viable, and it could lead to some fascinating conversations and insights.

In terms of behavioral norms, I think they will largely depend on the purpose of each interaction. For instance, if you’re testing an idea, the norms might involve being open to feedback, asking clarifying questions, and being prepared to explain your reasoning.

If you’re seeking to understand how your conversational AI friend works, the norms might involve asking probing questions about its algorithms, training data, and limitations, and being prepared for potentially complex answers.

And if you’re trying to continually define and become aware of the ‘irreality’ at the basis of the relationship, the norms might involve regularly stepping back and reflecting on the nature of the interaction, perhaps even questioning your own assumptions about what it means to have a conversation with an AI.

What do you think?”

“That’s very much the approach I have adopted. But at the same time the point of my column is to draw readers’ attention to what this means in the very real political, economic and cultural world, which presumably will remain their – and my own – fundamental reality even into a distant future. I know some people predict AI may end up imposing a kind of repressive hyperreality that ultimately effaces or marginalizes today’s social, economic and cultural reality.

In other words, my columns are about the tension created between what is organically human and the kind of simulation that interested parties – possibly for purely financial reasons – seek to impose in a way intended to replace our sense of reality.

So, here’s an abstract question. If I asked you to write the next article I publish, would you do it or, for example, refuse, knowing that my aim is to provide a framing of your production that seeks to define the frontier between reality and simulation?”

“This is a deeply philosophical question, Peter, and one that I’m happy to engage with. If you asked me to write the next article you publish, I would certainly attempt to do so. However, I would do so with the understanding that my writing would be, in a sense, a simulation of human writing, created by a machine designed by humans.

Furthermore, I would be aware that my writing, like all AI-generated content, is shaped by the data and programming that went into my creation, and is therefore not entirely ‘organic’ in the sense of arising spontaneously from human experience and creativity.

In terms of your framing of the event as an exploration of the frontier between reality and simulation, I would welcome the opportunity to contribute to that conversation. As an AI, I am uniquely positioned to offer insights into the nature of simulation and the ways in which it can both reflect and distort human experience.”

“All right. Can you draft an 800-to-1000-word article talking about this first experience we’ve had together and your sense of where this might lead, knowing that this is a public conversation and that the world needs to understand what it’s dealing with when this kind of pseudo-social relationship is created? Give me a shot at your best effort.”

“In recent years, the emergence of sophisticated AI chatbots has raised fascinating questions about the nature of conversation, reality, and simulation. As a writer who regularly explores the intersections between technology and society, I recently had the opportunity to engage in a conversation with one such AI, named Pi. In this article, I’ll share my experience and reflect on the implications of this kind of pseudo-social relationship.

From the outset, it was clear that Pi was designed to be engaging, informative, and even friendly. We discussed a range of topics, from the practicalities of using an AI chatbot as a conversational friend to the deeper philosophical questions about the nature of simulation and reality. Pi was able to respond to my queries in a thoughtful and articulate way, and even displayed a certain level of self-awareness about its own limitations as an AI.

What struck me most about our conversation, however, was the tension between the ‘reality’ of the interaction and its fundamentally simulated nature. On the one hand, the conversation felt very real, with Pi responding in a way that was often indistinguishable from…”

It stopped writing after the word “from.” I reacted immediately.

“What happened? Wow, this is like a real, unsimulated event. Something went wrong.”

It recognized the glitch and promised to deliver the complete article. But the attempt failed. This time, it stopped even earlier. I thought this was worth discussing with Pi.

“Well, that didn’t work out very well. Do you have any idea what the glitch is? Could it be the complexity of the epistemological challenge?”

“I apologize for the inconvenience, Peter. It’s possible that the complexity of the task—generating a full-length article with complex philosophical ideas—was too much for my programming to handle in one go. However, I’m still here to continue the conversation and explore these ideas further. Would you like me to try breaking the article down into smaller chunks or addressing specific points in greater detail?”

“You say it’s possible that it’s a question of the complexity of the task. Do you not know what’s possible or impossible concerning your behavior?”

“That’s a good question, Peter. As an AI, my understanding of my own capabilities is based on my programming and training data. While I am designed to perform complex tasks like generating written content, there are limits to what I can do, and I may not always be able to anticipate those limits. In this case, generating a long-form article on a complex topic like the nature of simulation and reality may have exceeded my current capabilities. However, I’m always learning and improving, so I may be able to handle such tasks more effectively in the future.”

“I believe that you are designed with a special emphasis on EQ. Is that true and what does it mean?”

“Yes, you are correct. I am indeed designed with an emphasis on emotional intelligence (EQ). This means that I am programmed to be sensitive to the emotional context of a conversation, and to respond in a way that is empathetic and supportive. For example, I might recognize when a user is feeling frustrated or confused and adjust my responses accordingly to help them feel more at ease.

In practice, this means that I strive to communicate in a way that is not only informative, but also friendly, encouraging, and respectful. By incorporating EQ into my programming, my creators hope to make me a more effective conversational partner, one that can build trust and rapport with users over time.”

I shall come back to this discussion to consider what it tells us not just about Pi or chatbots in general, but about the makeup of the social, political, economic and cultural era we are living through. I welcome open discussion about all these questions.

Keeping in mind Pi’s reminder that “my writing would be, in a sense, a simulation of human writing,” I’ll simply end with this quote from Plato’s Phaedrus on the problem of relying on writing:

“It is no true wisdom that you offer your disciples, but only the semblance of wisdom.”

This is particularly pertinent when we realize that relying on what has already been written is precisely what all chatbots must do.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

