
Outside the Box: Nosce Teipsum or Can a Machine Know Itself?

In “Outside the Box,” I interrogate ChatGPT to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, sharing ideas and challenging some of the explanations. As we struggled with the concept of “disinformation,” we began to confront the question of how we know that we know the truth. Classical philosophers recommended Nosce teipsum (“Know thyself”). But can this apply to a machine?

A cyborg woman looks at an “AI” logo hovering over a phone, the letters built from printed-circuit-board elements. © Andrey Suslov / shutterstock.com

November 04, 2024 05:29 EDT

It’s easy to ask ChatGPT a question and either be content with the answer or, in some cases, dismiss it as pure invention, or hallucination. Humans often give faulty information during a conversation. But because we have some understanding of human psychology, we will typically ask ourselves, according to our perception of the context: “Why did she get that wrong?” or even “Why is he lying?” We can then seek answers to that question either by deduction or further interrogation.

Take any assertion made by a politician. We can pretty much count on politicians to bend even the most incontrovertible truth. Unless we are unbending partisans or willfully naïve, we may reflect that the politician in question is doing one of several things: presenting selective facts while hiding others, distorting reality to skew it towards a particular point of view or quite simply giving people what they want to hear, however inaccurate that may be. We don’t say the politician is hallucinating. But we may call them delusional, which reflects our belief that they are deliberately embracing a delusion.

In my ongoing conversation with ChatGPT, I sought to deepen the epistemological reflection with the following prompt:

“When thinking about the difference that exists between a conversation I may conduct with a human being or with an AI chatbot, it occurs to me that we need to think more carefully about the nature of the conversational relationship. One source of frustration with AI is related to human expectations about the trust we may have in a conversational partner. Perhaps in the case of AI, our expectations are artificially high. And the reason for that is that we expect AI to have no ulterior motives to distract it from objective truth.

In so doing, we tend to treat AI like Jonathan Swift’s Houyhnhnms, who can only speak the truth. Captain Gulliver, who admires their cold rationality, tells us they have no word in their language for lying. Similarly, if an AI delivers palpably false information, we are not likely to complain that it is lying. Instead, we say it is hallucinating.

But when dealing with a person whom we catch lying, we are likely to become indignant. We immediately suspect they have reasons for lying. In most cases, we seek to explore those reasons. If we can understand them, we can redefine our social or transactional relationship with them. With almost everyone we tend to define not just one level of trust, but possibly multiple levels. For example, I may have a friend I would trust in any emergency, but knowing her commitment to a political cause, I would not trust everything she says about controversial issues. I might well become indignant when I hear what I know to be canned propaganda.

This shouldn’t happen in a conversation with an AI chatbot. We may become frustrated and even angry, but not indignant when the chatbot hallucinates. As soon as we begin reflecting, we may pour hate or scorn on the humans who have created or programmed its algorithms. We may then ask questions about those people, their employers or the institutions they work for to unveil their motives for allowing this to happen.

So here is my question. What does this contrast between a human lie and an AI hallucination tell us about the difference in the status of truth for humans and machines? More specifically, if we continue to assume that machines are not motivated to lie, how should we assess the truth value of anything the machine asserts?”

This is a fascinating question, and it gets to the core of how we relate to both truth and intentionality in our interactions with machines and humans. The difference in the status of truth between humans and machines seems to highlight the unique role of *intent* and *social context* in shaping our assessment of truth and trust.

When it comes to humans, truth is often interwoven with ethical and social expectations. We trust or distrust based on a person’s motives, history, and the context in which they’re speaking. If a friend tells us something we later find to be untrue, we seek to understand their reasons and motivations. We might forgive a “white lie” told to spare our feelings or become indignant over a self-serving deception. In these cases, the truthfulness of a statement is measured not just by its accuracy but by the perceived *intent* behind it. This is why catching someone lying often leads to a breakdown in trust: the lie implies they had a reason to deceive us, potentially to serve their own interests at our expense.

In contrast, machines like AI chatbots are not generally perceived to have personal motives. When an AI “hallucinates” or provides inaccurate information, we attribute this to limitations in its programming, training data, or model architecture, not to intent. This lack of intent removes the ethical dimension of lying or deceit from the equation. The AI isn’t viewed as intentionally deceptive but as a tool with a certain probability of error, much like a calculator that occasionally malfunctions. As a result, while we may be frustrated with AI inaccuracies, we don’t interpret them as moral transgressions.

This difference suggests that truth from a machine lacks the relational component that truth from a human possesses. With AI, truth value is assessed purely on an epistemic level: ‘Is this information correct or incorrect?’ rather than ‘Is this person trying to mislead me?’ This is why AI’s assertions are often treated as isolated claims of fact, with no implicit moral or relational meaning.

However, even if we assume that machines are unmotivated to lie, we still need to carefully assess the truth value of their assertions. This can be done by considering:

  1. **Source Transparency**: Since AI outputs are generated based on patterns in data, their accuracy can be opaque. Understanding the data sources, biases, and training limitations helps assess the reliability of AI-generated content. 
  2. **Likelihood of Error (Hallucination)**: Recognizing that AI can ‘hallucinate’ or produce inaccurate information due to its generative nature requires us to verify important information, especially in critical domains like medicine, law, or finance.
  3. **Assumptions in Training and Programming**: Every AI reflects the values, goals, and assumptions of its creators and training data. While an AI lacks personal motives, its outputs can still be influenced by the biases, limitations, and intentions of its design and dataset. A healthy skepticism can help users remain aware of potential biases in the technology itself.

Ultimately, this contrast tells us that truth is a more complex, multi-dimensional concept in human relationships than it is in machine-human interactions. Truth between humans is deeply connected to trust, intent, and social context, while truth from a machine is evaluated more on the basis of empirical accuracy and reliability. AI’s ‘truth’ is informational, not relational, which means that even if it were 100% accurate, it still wouldn’t carry the same weight as human truth, which we inherently understand as part of an ethical and social framework.

In summary, without assuming personal motives in AI, we should assess its assertions by checking for accuracy, examining sources, and being mindful of inherent biases in its design. This approach allows us to engage with AI’s ‘truth’ more as a functional, information-based construct rather than a socially or ethically motivated one.

The conversation didn’t stop there. I continued with a new prompt:

“So, should we conclude that because machine intelligence lacks a moral dimension, it represents a different category of communication that should be carefully distinguished from human-to-human communication? And if so, shouldn’t we, as a society, be working on elaborating a set of guidelines that will help us situate the level and quality of truth that emerges in the chatbot’s discourse? I would suggest even beginning by focusing on the notion of ‘truthfulness,’ a concept that has a different meaning when applied to humans and machines. We presume that a truthful human being is committed to pursuing the truth, not possessing it. We might presume that a truthful machine possesses the truth but cannot pursue it, or even be motivated to pursue it, without the involvement of a moral being; for example, in the form of a conversation. Does this accurately situate the difference between human and artificial intelligence?”

In next week’s column we will examine ChatGPT’s very interesting response to this question and further develop our exploration of the issue of “disinformation.”

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

