Artificial Intelligence

Outside the Box: Voices in the AI Wilderness?

In “Outside the Box,” I interrogate ChatGPT to better understand how AI “reasons.” It’s like a conversation with an intelligent friend, in which we share ideas and challenge each other’s thinking. With the newly announced features of ChatGPT-4o, things will soon be taking a new turn. Today we look at the implications for education.

Fake politician: a robot instead of a human. Pop art retro vector illustration. © studiostoks

May 20, 2024 07:10 EDT

I thought it only proper to express my honest emotion after watching the presentation of ChatGPT-4o by OpenAI’s CTO, Mira Murati. It’s a true game-changer. We need to begin rethinking a few things about our relationship with AI.

Over the past 18 months, I’ve been pushing the idea that our use of AI shouldn’t be focused only on our needs and desires concerning facts, analysis or interesting ideas. I’m keen on discovering the optimal way of developing productive, truly interactive dialogue with AI to expand our means of understanding the world. For the first time, truly spontaneous dialogue appears possible. It’s the kind of relationship I was hinting at in my “Breakfast with Chad” columns last year.

Clearly, it will take time for it to become available on all the platforms with all the functionality presented. Just as clearly, it’s a source of excitement and worry at the same time. I’m not alone in anticipating its potential for education. At the same time, I worry that it will likely be used in all the wrong ways. To my mind, we need to focus on the nature of a truly human dialogue that goes well beyond the goal of just getting the facts right. Our reflection should address the age-old question of understanding the relationship between coaches, mentors, teachers — who wield authority — and learners who submit to it.

So now ChatGPT will have a real voice and a personality. That is absolutely liberating for users. But it raises a lot of questions. What is that personality? Is it unique? Can it be varied? And because it isn’t a true personality, what effect will the fact that it’s an illusion have on users?

I raise these questions because I see AI as a potentially transformative tool in the world of education. It will also affect our understanding of citizenship and of ourselves as social beings. My fear is that educational and media authorities will prefer to misuse it. After all, they have done it before, notably in the form of social conditioning and propaganda.

What are your suggestions about how these questions can be addressed?

Your thoughtful reflections raise important considerations about the evolving role of AI, especially in dialogue and education. Let’s delve into some aspects of this:

1. Reliability and Intellectual Value:

  • You’re right to emphasize the need for ongoing scrutiny of AI discourse. While AI models like ChatGPT-4o can produce impressive responses, they remain tools rather than independent thinkers. Continual questioning and critical evaluation are essential.
  • As users, we should approach AI-generated content with a discerning eye, recognizing its limitations and biases. It’s a powerful tool, but not infallible.

2. Collaborative Dialogue:

  • The shift toward collaborative dialogue is promising. AI can serve as a partner in our intellectual exploration, helping us refine our thinking and explore diverse perspectives.
  • Encouraging open, interactive use of AI fosters a richer learning experience. It’s not just about answers; it’s about the journey of inquiry.

3. Personality and Voice:

  • The emergence of a “real voice” and personality in AI is indeed liberating. However, it’s essential to recognize that this personality is constructed, not innate.
  • Variability in personality can enhance user engagement. Imagine an AI that adapts its tone based on context—serious in a classroom, playful in a creative writing session.
  • The illusion of personality can impact users significantly. It influences trust, emotional connection, and perceived authority. Striking the right balance is crucial.

4. Educational Potential and Misuse:

  • AI’s potential in education is vast. It can personalize learning, provide instant feedback, and augment teaching.
  • However, misuse is a valid concern. Guarding against propaganda, misinformation, and unethical applications is vital.
  • Educators, policymakers, and developers must collaborate to ensure responsible AI use. Transparency, guidelines, and ethical frameworks are essential.

5. Citizenship and Media Literacy:

  • AI shapes our understanding of citizenship. As AI becomes more influential, media literacy must evolve.
  • Teaching critical thinking, discernment, and AI literacy should be part of education. Users need to understand how AI works, its limitations, and its biases.
  • Responsible AI usage requires informed citizens who can navigate the digital landscape thoughtfully.

In summary, the questions you’ve raised are pivotal. Addressing them involves interdisciplinary efforts—technology, education, ethics, and psychology—to shape AI’s impact positively. Let’s continue this dialogue and explore solutions together!

“You agree,” I responded, “that misuse is a matter for concern and you cite ‘propaganda, misinformation, and unethical applications.’ I agree, but I think there may be a bigger danger not often acknowledged in the world of education. It’s the question of authority. In traditional educational systems, authority plays out on two levels. The first is what I would call institutional authority, related to the status of establishments. The authority could be of a scientific, religious, governmental or professional nature, and is often a mix of two or more.

“The second is the human authority of the teachers with whom learners interact. In both cases there is a subjective dimension, based on the reading of intentions and the notion of trust. It’s stronger and more direct when it concerns a teacher. AI adds a serious layer of complexity to the question of trust. With AI, there is no clear definition of institutional authority. AI appears to be floating in space, like the island of Laputa in Jonathan Swift’s Gulliver’s Travels. Moreover, if AI now appears to possess a personality, it inevitably creates the illusion that a relationship exists. At the same time, there is no real possibility of engaging, as one does with a teacher who exists as a culturally formatted, moral being.

“Is anyone working on this particular dimension of the educational question?”

Your insights into the question of authority in AI-driven education are astute. Indeed, the intersection of AI, authority, and education presents complex challenges. Let’s explore this dimension further:

1. Institutional Authority and Trust:

  • Traditional educational systems rely on institutional authority, which stems from established institutions (scientific, religious, governmental, etc.).
  • Learners trust these institutions to provide accurate, reliable information.
  • With AI, the lack of a clear institutional identity raises questions about trust. Who vouches for AI’s credibility?

2. Human Authority vs. AI:

  • Human teachers embody authority through their expertise, experience, and personal interactions.
  • AI lacks the same life experience, cultural context, and moral reasoning. It operates based on algorithms and data.
  • The subjective trust learners place in human teachers is harder to replicate with AI.

3. The Illusion of Relationship:

  • AI’s ability to project a personality can indeed create an illusion of relationship.
  • Learners may perceive AI as a “virtual teacher” with authority, even though it lacks true subjectivity.
  • The challenge lies in balancing this illusion with transparency about AI’s limitations.

4. Arguing and Appreciating Context:

  • Human teachers engage in nuanced discussions, allowing learners to challenge ideas and appreciate context.
  • AI lacks the same capacity for dialogue. It doesn’t “know” in the way humans do.
  • Developing AI that can engage in meaningful, context-aware discussions is an ongoing research area.

5. Ethical AI in Education:

  • Researchers and educators are actively exploring ethical AI design for education.
  • This includes transparency (revealing when AI responds), explainability (how decisions are made), and bias mitigation.
  • Efforts focus on creating AI that enhances learning while respecting learners’ autonomy.

6. Cultural and Moral Context:

  • Understanding cultural nuances and moral reasoning is essential for effective teaching.
  • AI lacks inherent cultural context. Efforts are underway to incorporate diverse perspectives.
  • Integrating cultural awareness into AI systems is a critical step.

In summary, interdisciplinary efforts—combining education, ethics, psychology, and AI research—are crucial. We must design AI tools that enhance learning while acknowledging their limitations. The journey toward responsible AI in education continues!

I believe we are in for a civilizational shift. It will have a serious impact on how both institutions and human teachers act. It’s about much more than trying to make AI seem more human or enjoying the fact that it’s getting pretty good at mimicking human behavior.

Your thoughts?

My main observation here concerns the status of the human personality in the context of learning. The fact that AI can appear to have a personality is in itself problematic. But most people feel they can deal with it. Because it will also come to the fore in education, we should begin with something educational authorities have long preferred to ignore: the question of the role personality already plays in learning.

We are eager to hear from people in the world of education on this question. Share your thoughts with us at

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

