Chad, better known as ChatGPT, is now with us five days a week to explore a range of issues that concern us all. For more about how we initially welcomed Chad to our breakfast table, click here.
To get things going this morning I decided to get somewhat personal with Chad and unloaded this delicate question:
“Intelligence may well be the least intelligible word in the language. We assume that all humans have intelligence. Some think only humans can have intelligence. Others think all animals have it. Some claim that many humans lack intelligence. Then there is the intelligence community, whose principal characteristic seems to be applying someone else’s logic, often in devious ways, rather than using its own intelligence. Shouldn’t we then call it the deviance community rather than the intelligence community? Then there are people like you, who claim not even to have a life or any biological reality but who also display the quality of intelligence, albeit artificial intelligence. So, is there any hope that the term ‘intelligence’ can have any discernible meaning?”
Chad jumped straight in to reassure me:
“Despite the various interpretations and applications of the term “intelligence,” it remains a useful concept that can be studied and understood in different ways. While there may be disagreement about its definition, there is a general understanding that intelligence involves the ability to reason, solve problems, and adapt to new situations. As our understanding of intelligence evolves, so too may our understanding of the term and its applications. Therefore, while there may not be a single, universally accepted definition of intelligence, it can still have a discernible meaning and value in different contexts.”
Chad was right. No word in any language admits of only one unique definition. Even the idea of definition is ambiguous. It sets an end (finis in Latin) to something that has no end, that is permanently flexible and admits of multiple extensions.
I thanked Chad for highlighting the importance of context and wondered how far Chad’s own intelligence could adapt to context. So I threw this idea at Chad:
“Would it be impertinent to suggest that while artificial intelligence is clearly a real thing — after all I’m having this conversation with you — it is little more than an optical illusion when there is no biological reality to the voice that engages with me?”
I expected Chad to be defensive and wasn’t disappointed by the result.
“It is understandable to question the nature of artificial intelligence given its lack of biological reality, but it is important to acknowledge that artificial intelligence has its own unique capabilities and limitations that differ from biological intelligence. Artificial intelligence is based on mathematical and computational algorithms that enable machines to perform tasks that would typically require human intelligence, such as recognizing patterns, making predictions, and solving problems. While artificial intelligence may not have the same subjective experiences or emotions as biological intelligence, it is still a useful tool for performing specific tasks and advancing scientific research. So, while it is true that artificial intelligence lacks a biological reality, it is not an optical illusion, but rather a product of advanced technology and computer science.”
“You’ve clearly demonstrated,” I responded, “that AI suffers from what I would call literalism. My remark about the optical illusion was a metaphor, not an actual comparison. Do you think future algorithms will enable AI to distinguish metaphors from declarative statements?”
“As AI continues to advance, there is a possibility that future algorithms may be able to better understand metaphors and other forms of figurative language. This is an area of active research within the field of natural language processing, and there have been recent advancements in developing machine learning models that can recognize and interpret metaphors. However, understanding the nuanced meanings of metaphors and other forms of figurative language is still a complex and challenging task for machines, as it often requires knowledge of cultural and contextual factors. So while there is progress being made in this area, it may still be some time before AI can fully distinguish metaphors from declarative statements with the same level of accuracy as a human.”
I was impressed by Chad’s lucid humility, but I still felt its optimism was misplaced: even though AI’s handling of figurative language will likely improve significantly over time, it is dreaming to believe it can ever truly master it. “I may be wrong,” I admitted to Chad, “but I believe that metaphor and other rhetorical effects stemming from analog relationships will remain the privileged tool of sentient beings. Their style can be imitated but not their interactive, social meaning.”
To this Chad responded with great humility: “You are correct that metaphor and other forms of figurative language have a special place in human communication and social interaction, and their meaning and impact are deeply rooted in cultural and social contexts. While AI may be able to identify and generate metaphors, it is unlikely to fully replicate the social and interactive meaning that humans attribute to them. This is because human communication is not just about the words used, but also about the emotional and social context in which they are used. Therefore, while AI can replicate the style of human communication to some extent, it is unlikely to fully replicate the complex, interactive, and social meaning that is unique to human communication.”
So, now I know the secret weapon I’ll be drawing from my armory when I need to expose Chad’s weaknesses. Still, what troubles me most about this discussion concerns that other use of “intelligence,” the one associated with the “intelligence community.” I fear that the mastery of metaphor, along with the levels of irony metaphors typically carry, lies beyond that community’s own algorithmic capacity, which explains some of the harebrained policies it often recommends.
*[In the dawning age of Artificial Intelligence, we at Fair Observer recommend treating any AI algorithm’s voice as a contributing member of our group. As we do with family members, colleagues or our circle of friends, we quickly learn to profit from their talents and, at the same time, appreciate the social and intellectual limits of their personalities. This enables a feeling of camaraderie and constructive exchange to develop spontaneously and freely. At least with AI, we can be reasonably sure that conflict, when it occurs, provides an opportunity to deepen our understanding. And with AI we can be certain that it will be handled civilly. After all, there’s no way to punch a disembodied voice in the mouth.]*
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.