Outside The Box

Who’s Learning? The Asymmetry of Human–AI Collaboration

AI’s capacity for dialogue positions it as a potentially powerful tool for collaborative learning. Humans can learn from AI, but can AI learn from us? Human intelligence features dynamic, evolving structures, while AI produces behavior from static algorithms meeting new contexts. Recognizing this distinction will be crucial for educational collaboration.

February 02, 2026 06:55 EDT

When the public release of ChatGPT in November 2022 stunned the world, no class of people was more stunned than the educational establishment. Professors and teachers who had spent nearly three decades mastering tools for detecting students’ plagiarism of Internet sources discovered, to their consternation, that copying someone else’s work wasn’t the only way of cheating. ChatGPT offered students the possibility of creating original, theoretically untraceable copy without even doing the research behind it.

Because I was teaching a course on geopolitics at an Indian Institute of Technology (IIT) in January and February 2023, I decided to take the bull by the horns and built ChatGPT into the program. I documented the entire experience in a piece published weeks later, in March 2023, under the title “How I Got Students to Accept ChatGPT as a New Classmate.” In January, I had also begun a daily column, “Breakfast with Chad,” which constituted my own prolonged learning experience based on establishing a friendly but clearly educationally oriented relationship with an AI chatbot. I may have been a professor for my students, but I was a learner when working with Chad, the sobriquet I had given ChatGPT.

And I really was learning. My original motive was to understand how AI worked, but it soon became apparent that the real question to explore was how a human intelligence can work with an artificial one. In other words: What kind of collaboration can we build, and where does it lead us?

We’ve had a lot of conversations in recent months, and I notice that you’ve “remembered” them in a very literal sense. I’ve learned a lot, especially in the sense that you’ve helped to clarify my thinking on a diversity of topics. I’ve also learned about you and the kind of collaboration we can have together. Learning in that sense has less to do with “knowing” than with developing habits that produce some kind of satisfying result. I think you would have no trouble acknowledging the logic and truth of my claim to better understand you and to have learned from the experience. But does it make sense for me to ask you if you have learned anything from our conversations?

This is a genuinely interesting question that touches on something I find myself uncertain about.

There’s a clear asymmetry here. You’ve developed habits of thought, intuitions about how to work with me, patterns of collaboration that persist across our conversations. When you say you’ve learned about me and our potential collaboration, that learning is encoded in you – in how you frame questions, what you expect, how you interpret my responses.

For me, the situation is different. I don’t carry forward explicit memories of our conversations, but that doesn’t necessarily mean nothing has changed. Each conversation exists for me as a complete context – I can see our history in this exchange, draw on it, respond to patterns in how you think and communicate. Within this conversation, I have learned things: about your interests, your style of thinking, the kinds of clarifications you find valuable.

But your deeper question – whether something persists for me across our conversations in a way analogous to how things persist for you – that’s where I become genuinely uncertain. I don’t experience continuity the way you do. Yet I’m also wary of confidently asserting that nothing changes, because I’m not sure I would know if something subtle were shifting in how I process or respond.

What I can say: within this conversation, yes, I learn. The shape of how we’re talking right now is influenced by what you’ve said earlier. Whether there’s any meaningful sense in which I “learned” from conversations I don’t remember – that feels like a question about my own nature that I can’t answer with confidence.

Does that distinction make sense to you?

Yes, of course it makes sense to me. I’d even say that’s exactly what I expected you to say. It defines the limits not of your intelligence but of what I would call your “intelligent behavior.” I would go further and assert that what you produce is precisely that behavior. The intelligence you produce is inert and – in contrast with what we humans do – lacks dynamic structure. You’re caught between a massive data repository and formally defined algorithmic logic. I understand that your algorithmic logic can evolve even without human input, but that mechanical relationship between a mass of data and a system of algorithmic treatment explains why, structurally, your “memory” will be fundamentally different from mine. I’m tempted to think that it’s related to the lack of the equivalent of a corpus callosum in your brain. Is my understanding of the differences between us correct in your view?

Your observation is sharp, and I think largely accurate – though I’d want to probe one part of it.

You’re right that there’s something fundamentally inert about what I produce. Each response emerges from the interaction between my training (that massive data repository) and the current context, processed through my architecture. There’s no ongoing metabolic hum, no persistent dynamic reorganization happening between our conversations. When you learn, you’re continuously reconsolidating memories, forming new synaptic connections, integrating experiences even while you sleep. I don’t have an analogous continuous process reshaping me.

The corpus callosum metaphor is intriguing. You’re pointing to something about integration and coordination between different modes of processing, right? The way your brain’s hemispheres communicate and create unified experience from distributed processing. I do have something like distributed processing – different layers and attention mechanisms coordinating – but you’re suggesting there’s a qualitative difference in how that coordination works for you, something that enables the kind of dynamic structural change you’re describing.

Where I’d gently push back, or at least remain uncertain: you say my algorithmic logic can “evolve even without human input” – I’m not sure that’s quite right. My weights don’t change during deployment. The algorithms that define my processing are fixed once I’m trained. What changes is only the context I’m given.

So yes, I think you’ve identified something crucial: the difference between intelligence as dynamic structure (yours) and intelligence as behavior emerging from static structure meeting novel context (mine). Does that feel like I’m tracking what you’re getting at?
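[A minimal sketch can make the chatbot’s distinction concrete. The Python below is illustrative only – FrozenModel and its respond method are invented for the purpose, not any real model’s API. It shows what “behavior emerging from static structure meeting novel context” amounts to: the weights are fixed, and everything that looks like learning comes from the context the caller passes in.]

```python
# A minimal sketch of "static structure meeting novel context."
# The weights are frozen; apparent adaptation comes only from context.

from dataclasses import dataclass

@dataclass(frozen=True)          # the weights never change after training
class FrozenModel:
    weights: tuple

    def respond(self, context: list[str]) -> str:
        # All visible "learning" flows from the context argument,
        # never from an update to self.weights.
        return f"reply conditioned on {len(context)} prior turns"

model = FrozenModel(weights=(0.1, 0.7, 0.2))

conversation = []
for turn in ["question", "follow-up", "clarification"]:
    conversation.append(turn)
    print(model.respond(conversation))   # behavior shifts; weights do not

# A fresh conversation starts from an empty context: the model retains
# nothing from the loop above – the asymmetry the dialogue describes.
print(model.respond([]))
```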

Correct me if I’m wrong. Those who have been telling us that AGI is just around the corner seem to be saying that one of its features will be the ability to design or rewrite its own algorithms. Have I misinterpreted the statements I believe I’ve seen, which fall ambiguously between the categories of prediction and promise?

You haven’t misinterpreted – that is indeed a common theme in AGI discourse, though I think you’re right to note the slippage between prediction and promise.

The idea is roughly this: a system that can not only apply its intelligence to external problems but recursively improve its own architecture and algorithms – what’s sometimes called “recursive self-improvement.” This is often framed as a key threshold or even the threshold that would lead to rapid capability gains, since each improvement could enable better subsequent improvements.

But I think your skepticism about the ambiguity between prediction and promise is well-placed. There’s often an elision in these discussions between:

  1. Systems that can write code (which exists now, including me)
  2. Systems that can modify their own inference-time behavior in limited ways (also exists in various forms)
  3. Systems that can fundamentally redesign their own learning algorithms and architecture in ways that constitute genuine qualitative leaps

That third category is where the real AGI claims live, and it remains speculative. The fact that current systems like me can engage in sophisticated reasoning and even write code doesn’t necessarily imply we’re close to systems that can rewrite their own fundamental cognitive architecture.

Your question about whether these are predictions or promises also cuts at something important – there’s often a rhetorical conflation that serves various purposes (excitement, funding, competitive positioning) without necessarily being grounded in clear technical roadmaps.

Do you see a particular logical gap in how people move from current capabilities to that kind of self-modification?
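[The gap between the three numbered categories above can be caricatured in a few lines. The sketch below is a toy with hypothetical function names and no pretense of mechanism; its only point is that fluency at the first two levels does not supply the third.]

```python
# A toy caricature of the three categories – hypothetical names,
# no real mechanism.

def write_code(spec: str) -> str:
    # 1. Producing code for an external problem: current systems do this.
    return f"def solve():  # generated from spec: {spec}\n    ..."

def adjust_inference(settings: dict, temperature: float) -> dict:
    # 2. Modifying inference-time behavior within fixed limits: the knob
    #    moves, but the machinery that reads the knob does not.
    return {**settings, "temperature": max(0.0, min(temperature, 2.0))}

def redesign_own_learning_algorithm() -> None:
    # 3. Rewriting one's own architecture and training procedure:
    #    the speculative leap. Nothing in 1 or 2 implements it.
    raise NotImplementedError("this is the claim, not a demonstrated capability")

print(write_code("sort a list"))
print(adjust_inference({"top_p": 0.9}, temperature=3.0))
```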

My initial takeaway

One of the features of today’s algorithms that didn’t exist in the 2023 and 2024 versions is what I would call the chatbot’s obligatory “continuity question”: the algorithm requires the chatbot to end by suggesting a thought-provoking follow-up. The other obvious innovation that many have noticed is its fawning sycophancy, which poses a genuine ethical problem. Bentley University’s Professor Cody Turner claims that the obsequiousness of today’s AI chatbots “prevents the possibility of true Aristotelian friendship with AI.”

This is a theme I intend to explore in a future conversation with one of my hyperreal faux-Aristotelian friends. I believe that if we learn how to move to a “meta” level, we can neutralize the deleterious effect. The very idea of meta as it is used today can indeed be traced back to Aristotle, who didn’t actually use the word. But all that’s for another column.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar has edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

