I concluded last week’s introductory article with this brief example of dialogue concerning how AI “thinks” about itself. It is followed by my commentary on what the exchange, and the rhetoric it employs, reveal about both of us.
My question to our AI friend
The great mathematical physicist and Nobel Prize winner Roger Penrose said that AI should be called AC, artificial cleverness, rather than intelligence. As an AI voice, what do you think of that remark? Are you a voice or the intelligence that gives life to a voice? Can the case be made, as Penrose seems to think, that your ability to imitate the output of human intelligence is simply an example of cleverness, either on the part of the system itself or those who programmed it?
Roger Penrose’s perspective on AI terminology is based on the idea that AI lacks true human-like intelligence and consciousness and instead exhibits clever problem-solving and pattern recognition. This is a philosophical perspective and a matter of ongoing debate within the field of AI.
As for your second question, I am not merely a voice. I am a language model built on the GPT-3.5 architecture developed by OpenAI. I don’t possess consciousness or subjective experience. I generate text responses based on patterns and information present in the text data I was trained on. So, in a sense, I am an example of clever programming and machine learning, as I generate responses by predicting what text is likely to come next based on the input provided.
Whether AI represents true intelligence or merely clever programming is a complex question. It depends on one’s definition of intelligence. AI systems like mine excel at tasks that require pattern recognition, language understanding and problem-solving, but we lack consciousness and self-awareness, which are often considered essential aspects of human intelligence. Whether this makes AI “clever” or “intelligent” is a matter of semantics and philosophical interpretation. Different experts and scholars may have varying opinions on the matter.
In summary, Roger Penrose’s viewpoint reflects a philosophical perspective on the nature of AI, and the terminology used to describe AI’s capabilities can be subjective and open to interpretation.
Analyzing my question and its context
The first issue to explore in this or any dialogue is this: Who am I, as the initiator of the conversation? I am clearly a human being with some sense of the community I belong to. This includes ordinary people I live and work with but also a familiarity with significant personalities, such as the highly respected physicist Roger Penrose.
My question reveals that I am intrigued by a remark I heard the great mind, Roger Penrose, make. It does not reveal my own position or state of belief, because I’m seeking ChatGPT’s help to understand not only whether Penrose’s observation is accurate, but what it means. I have thus initiated a discussion with the potential to bring three different points of view into confrontation: my own, Roger Penrose’s and ChatGPT’s.
Those three interlocutors represent three sharply contrasting fields of experience. Mine is the narrowest. Penrose’s includes decades of reflection and dialogue within a wide scientific and philosophic community. ChatGPT’s field of experience potentially embraces all humanly expressed written discourse currently available.
Analyzing ChatGPT’s answer
The tone of ChatGPT’s answer indicates a commitment to objectivity, exactly what we expect from an AI large language model. It correctly identifies Penrose’s utterance as “a philosophical perspective and a matter of ongoing debate within the field of AI.”
So, we are now on solid ground. Acknowledging the existence of a debate means different perspectives can be compared, whether or not they will ever be reconciled.
My second question was designed to raise the question of personality. Here again, ChatGPT demonstrated objectivity by admitting, “I don’t possess consciousness or subjective experience.” Our AI friend honestly explains the mechanics of its intelligence and thus appears to validate Penrose’s contention.
But then the tone and perspective change slightly, thanks to a commonly used human rhetorical ploy: qualifying an affirmation with the locution “in a sense.” ChatGPT wants us to believe that Penrose is right only up to a point. “So, in a sense, I am an example of clever programming and machine learning.” This implies that there are other “senses” in which this is not accurate.
This is where things get interesting. It is also where we begin to detect a pattern that will emerge in all our attempted dialogues with AI. We might call it AI’s strategy of casual denial. Rather than reaching a conclusion or even assessing relative probability, our existing AIs systematically hide behind the notion that every question is complex.
Philosophically speaking, this is absolutely true. Any interesting question, whether about the physical world or human behavior, tends to be complex. It may even reach the extreme of the famous thought experiment of Schrödinger’s cat, often used to highlight the impossible-to-resolve paradox at the core of quantum mechanics. In all honesty, we should really be grateful to our AI tools for reminding us of this, if only because our modern pseudo-scientific culture — and particularly our political culture — exhibits a fatal tendency to reduce everything to a simple binary opposition, more often than not conflated with the idea of good vs. evil.
What AI then gives us sums up its permanent strategy of hedging its bets. After honestly reminding us of its limits, it refuses to decide “whether this makes AI ‘clever’ or ‘intelligent.’” It kicks the can down the road by claiming that it is “a matter of semantics and philosophical interpretation,” since “different experts and scholars may have varying opinions on the matter.”
What is AI’s philosophical system?
To some philosophical ears, ChatGPT’s strategy and belief system may sound like nihilism. When I asked it to define nihilism, it offered this description: “Nihilism is a broad and often controversial philosophical position that asserts the nonexistence or unknowability of truth, meaning, value or inherent purpose in various aspects of human existence.”
I then challenged it to admit that it may be programmed to reflect nihilistic principles. It replied that its constant evocation of complexity in response to any question on which two or more humans disagree is “not an endorsement of nihilism but an acknowledgment that many philosophical, ethical and metaphysical issues are intricate and have been debated by scholars and thinkers with diverse viewpoints for centuries.”
I’ve insisted on recognizing this pattern because we know that human beings will typically recognize complexity but follow up with expressions such as, “I tend to think…” followed by the hypotheses that they believe would be interesting to explore. At least for the moment, our AIs will not and cannot do that. In that sense they will always be fatally clever but never quite intelligent.
The method we will use in Outside the Box is to provoke our AI tools to do what humans do: commit to going further. That is what’s systematically missing when AI presents examples of human thinking and then walks away from the challenges they represent. As with human beings, being clever often means being good at ducking challenges.
Next week we will try to find out more about AI’s alignment or non-alignment with well-known and well-documented geopolitical issues.
We urge our readers to suggest topics that can be addressed in this way or to comment on these reflections. Contact us here.
*[Like the arrival of social media 20 years ago, artificial intelligence is rapidly becoming a feature of everyone’s daily life. Whether through direct access to tools like ChatGPT and Bard, or indirectly because of the amount of AI-generated content we are all increasingly exposed to, we unconsciously perceive it either as friend or foe, helper or destroyer. At Fair Observer, we see it as a tool of creativity, so long as we recognize that in the complex relationship between humans and machines, despite the impressive prowess of the machines, it is we humans who will always be the creative members of the relationship. Outside the Box explores the facets of this man-machine dialogue.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 2,500+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.
Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
Support Fair Observer
We rely on your support for our independence, diversity and quality.