This time instead of trying to be philosophical, I got right to the point in the most pragmatic way possible. I asked ChatGPT4 this simple question: “Where can I see the Carlson Putin interview?”
“You can watch the two-hour interview between Tucker Carlson and Russian President Vladimir Putin on the topic of the war in Ukraine. This interview represents the first time that the Russian leader has been interviewed by a Western journalist since the conflict began nearly two years ago. During the interview, they discuss the justifications for Russia’s invasion of Ukraine. The video is available on Tucker Carlson’s website and is not behind a paywall [1][2].
In this candid conversation, Putin shares his perspective on Russia’s historic claim to parts of western Ukraine. Whether you agree with it or not, it’s a sincere expression of what he believes [1]. The interview also touches on the imprisonment of Wall Street Journal reporter Evan Gershkovich, who has been charged with espionage. Putin’s response sheds light on the complexities of international relations and the actions of special services [1].
Feel free to visit Tucker Carlson’s website to watch the full interview and gain insights into this significant geopolitical moment.”
The two links the chatbot provided led to articles in Variety and Newsweek. They appeared to be commentaries on the interview rather than links to the document itself. Their appearance was deceptive. And so I failed to notice that, had I scrolled to the bottom of the Variety article, I would have found a working link to the interview. In contrast, despite the heading “When and Where To Watch Tucker Carlson’s Putin Interview,” the Newsweek article contained no visible link to the interview. Instead, it offered a series of negative comments about both Carlson and the interview.
In my frustration at not getting the simple information I had requested, I wondered whether I had been insufficiently precise in my initial formulation. I therefore followed up with this question: “Can you give me a link to the interview?”
“Certainly! You can watch the full interview between Tucker Carlson and Russian President Vladimir Putin on Tucker Carlson’s website. The interview covers topics related to the war in Ukraine and provides insights into Putin’s perspective on Russia’s actions. It’s a significant geopolitical moment, and you can find the video here. Enjoy!”
Growing frustration and an abrupt ending
That was reassuring but didn’t answer my question, which I was beginning to think should have been formulated as a command: “Give me the goddamn link.” After some consideration, I thought it might be helpful when speaking to a friend whose intelligence I’ve been invited to admire, simply to explain my frustration and appeal to the chatbot’s artificial empathy. “It’s interesting,” I noted, “that I asked you for a link and you failed to provide it, while at the same time responding ‘certainly’ to my question. Is that an example of bad faith? Or should I assume that, for whatever reason, you may not be authorized to provide the link?”
“I apologize if my previous response caused any inconvenience. I appreciate your feedback, and I’ll strive to improve. Thank you for engaging with me, and have a great day!”
Now I couldn’t help but blurt out, “Why don’t you start improving now and provide me with the link?”
To this I received what appears to be a standard cop-out message, designed to conceal ChatGPT’s strategies.
And so our conversation ended. I wondered what logic could lie behind ChatGPT’s apparent stonewalling. That’s when I took the trouble to examine both of the links in detail. And that was when I discovered that the link was in fact provided at the bottom of the Variety article.
Now this raises a question about the status of both an AI’s knowledge and its factitious personality and set of attitudes. Did ChatGPT know that the link actually existed in the first article but not in the second? If it was “aware” of the link’s existence in the Variety article, why was it incapable of doing what any decent human friend might do for me and respond: “Look at the bottom of the page, you dolt! I told you it was there.” Or, less precisely, it could have prompted me to do a bit of searching, and I would have found it on the page.
As for the Newsweek article, it appears that ChatGPT was taken in by the false promise of Newsweek with its deceptive heading, “When and Where To Watch Tucker Carlson’s Putin Interview.”
Can incompatible intelligences find grounds for understanding?
In other words, AI registers formulations of knowledge and recognizes ideas, but it possesses no actual knowledge. In response to my question, its knowledge was real, but it stopped at the literal surface once it had identified the existence of the link in the Variety article. It remained helplessly “unaware” of the nature of that knowledge and of its physical location on the page.
That is not what people do. In similar circumstances, a person I was addressing, whether a friend or a stranger, would say something like, “You’ll find it somewhere in the Variety article; have a look.” And I would happily do my due diligence and end up getting an answer to my initial request.
In the case of the Newsweek article, ChatGPT was taken in by what I would term a deceitful statement or at least false promise in the article’s title. Humans can also be taken in. But they also know how to respond, once the deceit is pointed out. ChatGPT trusts Newsweek but seems incapable of checking when a problem or even just a misunderstanding arises.
Now, I’m the first to recognize that after the first generation of users endures this kind of frustration, AI will be improved, with new and permanently evolving algorithms that require it to take into account the motivation of the user and to seek to understand the unforeseen factors that may explain the user’s exasperation. It’s fair to say that the entire AI project has built into it the Beatles’ philosophy (in “Sgt. Pepper”): “It’s getting better all the time.”
But this dialogue with AI revealed not just a failure, but a multidimensional problem. The first dimension concerns the knowledge question I’ve already mentioned. AI’s “knowledge” is, by definition, literal and will not spontaneously seek to include other levels of understanding. Humans, on the contrary, are always sensitive to multiple parameters in the communication situation. Like a chess player, they see various possible scenarios developing. The hints as to what these may be exist on many levels: notably tone of voice, facial expression, rhetorical emphasis and everything that falls into what we might call the conversational ambience. This includes awareness of material, psychological, social and cultural factors that may or may not be visible in the immediate context. Some belong to other contexts drawn from the interlocutors’ experience and memory.
All the chatter about AI’s one day surpassing human intelligence makes sense only if we restrict our concept of intelligence to the process of accessing coded information and employing reasoning procedures to account for things that can be formulated linguistically. That falls way short of the way human intelligence works both in everyday life and in challenging situations.
A confession and a few recommendations
I love to talk to AI and long for the day when it will take better account of my needs and expression of feeling. But any intelligence it produces, now or in some much brighter future, will be the cumulative result of the interplay between us. The conversation about Carlson’s interview was truly frustrating. I was eager to access an interview so many people had been talking about. How else might I form my opinion about what it achieved or failed to achieve?
The conversation with ChatGPT nevertheless reminded me that there are three fundamental techniques to make such conversations productive.
Pretend it’s human just to see where it takes the conversation, and then compare it to what humans do to see where it does better and where it does worse;
Find a way of laughing with it about the weird situations it produces;
And use the occasion to laugh at the machine itself, but even more deeply at those who, like ChatGPT itself, tell you it will always be improving.
*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Support Fair Observer
We rely on your support for our independence, diversity and quality.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 2,500+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.

Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
Will you support FO’s journalism?