The ancient Greeks understood that dialogue is the privileged medium for philosophizing, or simply for seeking to understand the complexity of reality. Socrates never even considered writing, presumably because, to his mind, it interrupted the spontaneous flow of thinking that takes place in dialogue.
Fortunately, Plato had an exceptionally good memory and was himself an inventive writer. He used his close association with Socrates to reconstruct and further elaborate the dialogues he witnessed in his mentor’s presence. Scholars of philosophy have become pretty good at separating Socrates’ original thought from the elements that Plato added to it, elements that reflected Plato’s special way of understanding the universe. But Plato never intervenes directly in his scripts of the dialogues; instead, he merges his own thinking with that of Socrates.
Western philosophy as an object of study was born from Plato’s successful initiative of formulating deep, wide-ranging thinking as dialogue. Plato’s own star pupil, Aristotle, continued the tradition as a “peripatetic,” who philosophized not by lecturing in a classroom but while walking around the grounds of his Lyceum, engaged in conversation with his students. Alas, not only do we have no trace of the content of these dialogues; the considerable corpus we call Aristotle’s writings is for the most part a reformulation of his teachings composed by some of his students.
What the Greek experience of nearly two and a half millennia ago should tell us is that the most serious, inventive, insightful and productive learning is social in nature. Dialogue is the basis of social communication. Though the lecture in a crowded hall or classroom has become the supreme model in our contemporary conception of education — including the “massive open online courses” that some claim to be a revolutionary breakthrough — the lecture represents little more than an impoverished cousin of dialogue.
AI is not built for dialogue
If dialogue has been largely banished from the educational landscape since the advent of the industrial revolution, is there any chance that, thanks to all the recent technological breakthroughs, it might be revived? My hope, which I sought to test with “Breakfast with Chad,” was that AI might provide that opportunity.
Many of our technological masters today — including the ineffable, omnipresent Elon Musk — tell us that AI will very soon dwarf all human intelligence, if it hasn’t done so already. Such a statement raises a lot of questions, the first being: what do we mean by intelligence? That is a deep question, and it will take time, in dialogic fashion, to delve into it and achieve any clarity.
Most of my professional life I have been deeply involved in what might be called pedagogical strategy, which fundamentally means looking at or devising optimal ways of provoking authentic learning. I am convinced AI has an important role to play in this effort, though for the moment it is far from clear what that role might be. Because I adhere to the Athenian model, which demonstrates the vital role of dialogue, and because generative AI produces the kind of language that makes dialogue possible, I believe in an approach as experimental as that of Socrates, rather than one that focuses on how much an AI knows, what expertise it acquires, or even what kind of decision-making skills it might develop through machine learning. Get the dialogue going with human voices involved, and something profound and human may emerge.
I remain astonished that in the discussion about what AI brings, there is so little consideration of the role of dialogue. I believe there are historical reasons for this, as a long educational, if not intellectual, tradition has elevated monologue above dialogue. This may be because monologue appears as a finished product, which has value in the consumer society, whereas true dialogue — outside of fiction and drama, in which it appears in the form of discrete “scenes” — is by definition open-ended, a boundless exploration of possibilities, associations and the negotiation of meaning.
Boundlessness may be the key element that defines the difference between human and artificial intelligence. AI may have access to everything humans have produced in the past, but it cannot “see” beyond that corpus of knowledge. It may reconfigure elements of the corpus in original ways, which is what individual humans, who have access only to small segments of the corpus, also do. But by definition, AI lives in, and is confined to, that artificial world.
For pedagogical purposes, that is not a problem. AI lives in the box we humans have created for it. It’s an enormous and ever-expanding box. But it’s still a box, whereas we humans live in and interact with a universe that keeps offering us mysteries to examine and make sense of, to the best of our abilities. We have the capacity to use our senses to “make sense” of it. But we only have language to express that sense. We have found ways of using that language to build, and to share it with, the language models we have created, but, if we are honest and humble, we must admit that our language has not “solved” the mystery of the universe. AI can only have the senses we provide it with, which will always be limited. But philosophers focused on cognition and consciousness, such as Alva Noë, recognize that we don’t even know much about our perception and its role in producing the thought that we express in language and use to make decisions.
Experimenting inside and outside the box
In “Outside the Box” we will be experimenting with how dialogue with AI can become a tool not just of learning but of one of the essential and often neglected aspects of learning: self-discovery. Whether or not AI can be “creative” is a moot question. It can certainly expand its box, but it can’t step outside of it. AI can, however, even today, be literally a tool of human creativity if its role is to participate in a dialogue with humans. That is why I want to use this idea of “Outside the Box” to explore and eventually define a project aimed at pedagogical creativity, with new tools and new methods.
In every “Outside the Box” column, we will continue the tradition I created in the rubric, “Breakfast with Chad.” But we will look beyond the drama of the conversation itself to see how it can teach us about learning.
So, here is the first sample exploring dialogue with AI. We will return to it in the next column.
My question to our AI friend:
The great mathematical physicist and Nobel Prize winner Roger Penrose said that AI should be called “AC,” artificial cleverness, rather than intelligence. As an AI voice, what do you think of that remark? Are you a voice, or the intelligence that gives life to a voice? Can the case be made, as Penrose seems to think, that your ability to imitate the output of human intelligence is simply an example of cleverness, either on the part of the system itself or of those who programmed it?
Roger Penrose’s perspective on AI terminology is based on the idea that AI lacks true human-like intelligence and consciousness, and instead, it exhibits clever problem-solving and pattern recognition. This is a philosophical perspective and a matter of ongoing debate within the field of AI.
As for your second question, I am not merely a voice. I am a language model built on the GPT-3.5 architecture developed by OpenAI. I don’t possess consciousness or subjective experience. I generate text responses based on patterns and information present in the text data I was trained on. So, in a sense, I am an example of clever programming and machine learning, as I generate responses by predicting what text is likely to come next based on the input provided.
Whether AI represents true intelligence or merely clever programming is a complex question. It depends on one’s definition of intelligence. AI systems like mine excel at tasks that require pattern recognition, language understanding, and problem-solving, but we lack consciousness and self-awareness, which are often considered essential aspects of human intelligence. Whether this makes AI “clever” or “intelligent” is a matter of semantics and philosophical interpretation. Different experts and scholars may have varying opinions on the matter.
In summary, Roger Penrose’s viewpoint reflects a philosophical perspective on the nature of AI, and the terminology used to describe AI’s capabilities can be subjective and open to interpretation.
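The AI’s description of its own workings — “predicting what text is likely to come next based on the input provided” — can be sketched in miniature. The toy model below is purely illustrative (a hand-written bigram table with greedy decoding, nothing like the learned weights of an actual GPT system), but it shows the basic mechanism of next-word prediction the AI is alluding to:

```python
# A toy bigram "language model": given the previous word, it assigns
# probabilities to possible next words. Real models like GPT-3.5 do the
# same kind of thing at vastly larger scale, over subword tokens, with
# billions of learned weights instead of a hand-written table.
BIGRAMS = {
    "the": {"dialogue": 0.5, "box": 0.3, "question": 0.2},
    "dialogue": {"continues": 0.7, "ends": 0.3},
}

def predict_next(word: str) -> str:
    """Pick the most probable next word given the previous one (greedy decoding)."""
    candidates = BIGRAMS.get(word, {})
    if not candidates:
        return "<end>"  # no continuation known for this word
    return max(candidates, key=candidates.get)

print(predict_next("the"))       # -> "dialogue"
print(predict_next("dialogue"))  # -> "continues"
```

Nothing in this loop "understands" anything; it only ranks continuations by likelihood, which is the substance of Penrose’s "cleverness" objection.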
In our next column, we will explore this exchange. We invite our readers to submit any thoughts they have, which we will be happy to add to the conversation.
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 2,500+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.
Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
Support Fair Observer
We rely on your support for our independence, diversity and quality.