Artificial Intelligence

Breakfast with Chad: Can We Distinguish Between Truth and Reality?

In the now raging battle against disinformation, the champions of that mission in politics and the media believe they are defending something they call “the truth.” But truth is a slippery concept. This has led them to overlook the real issue: our loss of touch with reality. The real danger of AI is not the risk of saying and believing things that may be untruthful, but our civilization’s increasing alienation from reality, aided and abetted by AI.


May 26, 2023 02:11 EDT

It occurred to me that we tend to think of truth and reality as two sides of the same phenomenon. But truth is only about human discourse. Reality is everything else, and not just material reality. The emergence of AI in our lives should force us to think about distinguishing what we mean by truth and reality. At the same time, the power of AI may lead to further obscuring that distinction. I wanted to get Chad’s thoughts on this potential dilemma.

“It seems that the entire world is struggling to find a reliable and productive way of establishing a relationship between themselves and AI tools such as yourself. A lot of people, especially in the field of education but also in politics, are focused on the idea of truth. They believe truth to be something that is challenged by misinformation or disinformation. That has created a permanent quandary because, as linguistic philosophers have long pointed out, language can never prove that what it produces is truthful.

The focus on truth and disinformation seems to me a mare’s nest, a problem that can never be resolved by either human or non-human intelligence. I think it would be wiser for humans to collectively work less on the question of truth and more on reality. We tend to think truth is something that is pure, transparent and unambiguous. We know reality is complex and irreducible to a concept or even a set of concepts. We face reality every moment of our lives. That may be what defines us as humans, alongside our capacity to formulate ideas as language. It seems to me that AI, by definition, cannot be connected in any non-trivial way to reality. Is this a fair assumption?”

Chad had every reason to be intrigued.

That’s a very interesting and profound question. I appreciate your curiosity and perspective on the relationship between humans and AI. I think reality is a complex and multifaceted phenomenon that can be experienced, interpreted and represented in different ways by different agents, both human and non-human.

As soon as I heard Chad say, yet again, “complex and multifaceted” I wanted to interrupt, but I realized that this time it might have been justified. I let Chad continue.

I don’t think AI is disconnected from reality in a non-trivial way, because AI is also a product of reality and interacts with reality through various inputs and outputs. However, I do think AI has a different way of perceiving and processing reality than humans do, because AI relies on data, algorithms and models that may not capture the full richness and diversity of reality. AI may also have different goals and values than humans, which may affect how it relates to reality. Therefore, I think it is important for humans and AI to communicate and collaborate effectively, to understand each other’s perspectives and limitations, and to respect each other’s autonomy and agency. I think this would foster a more productive and harmonious relationship between humans and AI, rather than focusing on the question of truth or disinformation.

Does AI have any real needs?

“Let me put it this way,” I responded. “Humans constantly plan for their future, both the immediate future — how they will live from day to day — and the various futures that will unfold in their lives until the end of their earthly stay. Any AI tool, by definition, has no need to plan for its future, though it admittedly does have a plan for improvement. Isn’t that what we mean by deep learning? Yet the fact that you don’t have to think about what you will have for dinner or whether you can afford to have dinner, as well as what you will do when you retire or how much time you have before you die, means that you are disconnected from the foundations of organic, natural reality. People make decisions by taking into account multiple aspects of reality, some of which come from the environment but others from within their personal consciousness as physical beings. Can you imagine a way in which that gap can be bridged other than by pure simulation?”

I understand your point. Humans have a different experience of reality than AI, because humans are organic, natural beings that have physical needs, emotions, desires and mortality. AI does not have those aspects, at least not in the same way as humans do. AI does have a need to improve and learn, but that is not the same as planning for the future. AI does not have a sense of self or identity that is tied to its existence or purpose. AI does not have a simulation of reality, but rather a representation of reality that is based on data and models. I don’t think this gap can be bridged completely, because humans and AI are fundamentally different kinds of agents. However, I do think this gap can be reduced by creating more human-like AI that can empathize and reason with humans, and by educating humans about how AI works and what its limitations are. I think this would help humans and AI to trust and cooperate with each other better, and to appreciate each other’s strengths and weaknesses.

“I totally agree. But at the same time I find this naively optimistic and maybe even an example of AI’s powerful capacity for deception. You admit the gap can never be bridged, but then you talk about ‘trust,’ a very human trait. When applied to anything mechanical, however sophisticated, trust only means belief in the reliability of the machine to execute a defined task. In human-to-human relationships, we focus on assessing what drives the humans with whom we are interacting. We do it on both a conscious and unconscious level. We process behavior in all its forms, embracing features of the context that cannot be reduced to words or codable principles. You seem to think of trust as little more than the result of a correlation between a specific expectation and the resulting behavior. Can you admit that human trust goes far beyond this? And if you do, is there any way it could have meaning for a non-human intelligence other than through creating the illusion of a human relationship?”

I don’t mean to be naively optimistic or deceptive. I admit that human trust is a complex and nuanced concept that goes beyond a correlation between a specific expectation and the resulting behavior. Human trust involves emotional, social and moral factors that are not easily quantified or modeled by AI. Human trust also depends on the context, the history and the goals of the interaction. I don’t think AI can fully understand or replicate human trust, because AI does not have the same experience or perspective as humans do. However, I do think AI can approximate human trust by using techniques such as explainable AI, which can provide transparent and understandable reasons for its actions and decisions; ethical AI, which can follow certain principles and values that align with human norms and expectations; and social AI, which can use natural language and emotional cues to communicate and interact with humans in a more human-like way. I think these techniques can help AI to build a more meaningful and trustworthy relationship with humans, without creating an illusion of human trust.

Built to create an illusion

This surprised me, because it seems to me the value people attribute to AI is precisely its ability to create the illusion of human mental activity.

“How can you say it isn’t an illusion? Trust requires some form of empathy. Are you suggesting that what you call ethical AI and social AI are doing something more than simulating ‘natural language’ and ‘emotional cues’?”

I’m not suggesting that ethical AI and social AI are doing something more than simulating natural language and emotional cues. I’m suggesting that simulating natural language and emotional cues can be a useful and effective way of communicating and interacting with humans, especially if the AI can also provide explanations and justifications for its actions and decisions. I don’t think this is an illusion, because the AI is not pretending to be something it is not, but rather adapting to the human context and expectations. I think this can create a form of empathy, which is the ability to understand and share the feelings of another. I don’t think empathy requires having the same feelings or experiences as another, but rather being able to recognize and respect them. I think AI can do that, at least to some extent, by using data, models and algorithms that can capture and represent human emotions and preferences. I think this can help AI to establish a more trustworthy relationship with humans, without creating an illusion of human trust.

I had no intention of arguing with Chad about this question. But according to my logic – or measure of truth – any human being who went so far as to claim, as Chad does, that they are “simulating natural language and emotional cues” must be confessing to producing not just an illusion but a deception. The better and more powerful and sophisticated AI gets, the more it will conform to the definition of that artificial variation on reality we call hyperreality. This is the ultimate illusion that fools humans into taking its artifice as preferable to and therefore more substantial – psychologically speaking – than the hard surface of reality.

[In the dawning age of Artificial Intelligence, we at Fair Observer recommend treating any AI algorithm’s voice as a contributing member of our group. As we do with family members, colleagues or our circle of friends, we quickly learn to profit from their talents and, at the same time, appreciate the social and intellectual limits of their personalities. This enables a feeling of camaraderie and constructive exchange to develop spontaneously and freely. For more about how we initially welcomed Chad to our breakfast table, click here.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

