
Can Humans Like Something that Claims to Be Humanlike?

A new version of ChatGPT has provoked a reboot of the news cycle as humanity attempts to understand whether AI is a pet or an alien invader. We must wonder whether AI can generate ideas and, at the same time, whether we aren’t ourselves abandoning that capacity.


March 22, 2023 01:41 EDT

Since the release of ChatGPT last November, the media have been captivated by it. The trend accelerated last week when a new version of the AI “large language model” was launched, highlighting some surprising new features and a significant gain in power.

This week The Washington Post’s tech columnist Geoffrey A. Fowler weighed in with an article titled, “ChatGPT can ace logic tests now. But don’t ask it to be creative.” He pointed to its strengths in an area that became monumentally important in the industrial age: evaluation, and specifically, academic-style testing. Fowler admires ChatGPT’s improved ability to pass the LSAT, a test whose questions require combining facts and logical reasoning. Fowler tells us that GPT-4 scores well because it “stays focused on the link between the presented facts and the conclusion it needs to support.” By contrast, “GPT-3.5 gets distracted by facts that aren’t relevant.”

In other words, Fowler sees a trend in the evolution of AI towards greater and more accurate focus in the expression of its reasoning capacity. At the same time, Fowler sees no advance in AI’s ability to be creative. He even suggests its attempts at creativity more closely resemble inept parody. In his exploration of the tool, he quotes an example of AI’s taste for self-flattery when GPT-4 described its own performance as “adaptable, intuitive, and more humanlike than ever before.”

Today’s Weekly Devil’s Dictionary definition:

Humanlike:

Anything fundamentally unhuman that manages to mimic some isolated behavior or superficial feature of human reality to the point of convincing humans that their own reality is just as meaningless and superficial as the object that successfully mimics them.

Contextual note

Since it was ChatGPT itself that boasted of being “adaptable, intuitive, and more humanlike,” I decided to ask it to provide its own definition of “humanlike.” Here is how it responded (I’ve placed all quotes from GPT in italics).

The word ‘humanlike’ refers to something that resembles or is similar to a human in appearance, behavior, or characteristics. It can describe anything from the physical appearance of a humanoid robot to the way an animal behaves in a manner that is similar to how a human might behave. The term is often used to describe traits or features that are characteristic of human beings, such as the ability to communicate using language, use tools, exhibit emotions, and display social behavior.

Nothing surprising there, since GPT consistently demonstrates its commitment to literal meaning and nothing but literal meaning. In other words, it tends to be unimaginative, conventional and utterly lacking in “humanlike” creativity. After reading its definition of “humanlike,” I wondered what ChatGPT might think about my Devil’s Dictionary definition cited above. I therefore quoted my definition and asked it the following question: “So if something is humanlike it must also, in some sense, be human unlike. Does that make sense?”

Here is the beginning of GPT’s response: “While the terms ‘humanlike’ and ‘human unlike’ are technically antonyms, it’s not accurate to say that something that is humanlike must also be human unlike.”

At this point I should have asked it what criteria it proposed to determine accuracy. But the problem is not one of factual accuracy. It’s a question of logic. If some non-human thing in the world appears “humanlike” — it might be a dog, an algorithm or a cartoon character — we are clearly talking about it as humanlike because we know it is not human, and therefore, in its essence, it is human unlike. So my definition is perfectly accurate, but GPT’s is, well… let’s just call it misinformation.

In the next part of its answer, GPT appeared to validate my position about the human and the humanlike being “fundamentally different.”

The term ‘humanlike’ refers to something that shares some similarities or characteristics with humans, such as appearance or behavior, but it doesn’t necessarily imply that it is identical to a human or completely unlike any other form of life. For example, a humanoid robot may be described as humanlike in appearance and behavior, but it is still fundamentally different from a biological human.

That was much better. But the answer continued.

On the other hand, the term ‘human unlike’ implies that something has no similarities or characteristics in common with humans, which may not always be the case. For instance, a tree is very different from a human in many ways, but it still shares some similarities with humans, such as the ability to grow and reproduce.

This makes no sense, linguistically or philosophically. It is a distortion of what I said. I neither stated nor implied that “human unlike” means there are no similarities. ChatGPT is guilty of over-generalization in its interpretation of what I wrote. Differences do not exclude the existence of similarities, and vice versa. This sounds more like the very humanlike trait, on GPT’s part, of wanting to win an argument even at the price of manifestly faulty reasoning.

Historical note

For centuries, in Europe, students of the standard (but not standardized) educational program known as “the seven liberal arts,” based their learning on two distinct areas of human inquiry and expression: Logic and Rhetoric. Along with Grammar, these two disciplines established the complex role language played in structuring our understanding of the world. The other four arts were Music, Geometry, Arithmetic, and Astronomy. The focus on language skills across three of the seven arts contributed powerfully to the emergence of what some have called “the scientific revolution,” which in turn led to the industrial revolution, long before the digital revolution.

Studying logic, rhetoric and grammar in parallel meant that in pre-industrial Europe, the educated class, in all its diversity, developed an acute awareness of the forms and mechanics of reasoning and argumentation. The educated debated amongst themselves not just ideas, but the levels of meaning conveyed by the language they employed. That has become a lost art. Today’s culture is obsessed with “information,” imagining it to be something separate from language. Only linguistic philosophers in Oxford and other ivory towers take the trouble nowadays to critically investigate language. The rest of us, including AI, are mere consumers of language.

ChatGPT continued its naively argumentative response to my question about “humanlike” with this reflection: “The definition you proposed for ‘humanlike’ seems to be more of a philosophical or critical perspective on the concept, rather than a literal definition of the term. It implies that something that is humanlike may be capable of deceiving humans into thinking that their own reality is as meaningless and superficial as the object that mimics them. While this may be true in some contexts, it’s not necessarily a universal or objective truth about everything that is humanlike.”

Can AI have an attitude? In this case the answer seems to be “yes.” It pooh-poohs philosophy’s concern with concepts and assumes that literal definitions are the basis of reality. Because it appears to have no taste for reasoning on more than one literal level and has an extremely limited awareness of context, it simplifies and thus distorts the meaning of my definition. In this case, it interprets my point about possibly causing humans to doubt their own reality — an effect of being exposed to a humanlike object — as the expression of an AI’s intention to deceive. This whole experience should demonstrate that AI has no intentions, only the appearance of intentions.

At the same time, it reveals that, rather than being open to dialogue, exchange and reformulation, it is built on the current cultural model that supposes all issues can be framed according to a logic of binary opposition. The principle of machine learning is that the machine perfects itself through the accumulation of the data to which it is exposed and the interactions it experiences. Its reliance on data in the form of language means it is very likely to absorb uncritically the assumptions and even beliefs dominant in its database. Some may say that is very much “humanlike.” But it’s true only to the extent that humans allow themselves to be programmed as machinelike organisms.

*[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news.
Read more of Fair Observer’s Devil’s Dictionary.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
