My previous articles in this series focused on artificial intelligence, ethics and how to deal with misinterpretation. I used our developing discussion to highlight how a frank, inquisitive dialogue with a chatbot can enable and enrich thought experiments. To get the best from such dialogue, we should think of the LLM not as a source of original thought or uncontestable knowledge, but as a foil for our own thoughts as we grapple with questions we seek to explore in depth.
To the extent that chatbots convincingly imitate human thinking processes while lacking the capacity to draw on anything resembling direct experience, we should understand and learn to exploit their potential value as intellectual sparring partners. I’ve pointed to the advantage stemming from the fact that a chatbot’s artificial ego isn’t inflated enough to feel offended when we challenge it. On the contrary, today’s chatbots are algorithmically aligned to function as the equivalent of sycophantic courtiers eager to treat us as royalty and validate our most contestable propositions. When dialoguing with a chatbot, we should learn never to accept its approval as definitive.
Our recent prolonged dialogue provisionally concluded last week when ChatGPT, reflecting on the learning dynamics of our developing conversation, proposed to sketch a “theoretical architecture” of “emergent learning in AI–human dialogue.” In other words, we were collaborating not to unearth some form of truth or even create a model of emergent learning. Instead, we were engaged in examining the dynamics of a process that we had been experiencing together.
ChatGPT obediently supplied its highly technical version of a theoretical architecture suitable for academic publication under the title, “‘Relational Learning’ as the Theoretical Backbone.” For the purposes of our discussion, I’ll simply cite its “Core claim.”
1 — Core claim
Emergent learning observed in AI–human dialogue should be theorized not as the AI undergoing a private, psychological process of self-modification, but as a relational event: a change in patterns of mutual responsiveness, in the affordances and constraints of the interactional field, and in the distribution of knowledge, norms, and capacities across the human–AI pair. This reorients analysis from internal representation change to the dynamics of co-produced meaning, accountability, and ethical responsibility.
Thrilled with its capacity to emulate the driest and most mechanical academic writing, including a fair amount of pseudo-mathematical gibberish, ChatGPT then proposed to take it one step further and write a paper titled, “Relational Learning: Rethinking Emergent Capacities in Human–AI Dialogue.” Here are the first two paragraphs:
A growing stream of reports describes “emergent” learning during interactions with large language models—new capabilities that appear to arise over the course of a dialogue, unexpected improvements in responsiveness, or the stabilization of novel conventions between user and system. These observations are often framed in terms of internal learning: the machine is presumed to be updating itself, modifying internal representations or acquiring new psychological-like states. Yet this framing imports an assumption from individual cognition that is not well suited to contemporary generative systems, whose operational logic is fixed and whose apparent adaptivity is enacted through probabilistic inference rather than self-modifying experience.
This section advances an alternative account. Emergent learning in human–AI dialogue is best understood not as a private psychological process occurring within the machine, but as a relational event: a change in the interactional field formed by the human, the AI system, and the evolving conventions, artifacts, and norms that bind them. This relational view shifts attention from internal computation to the dynamics of responsiveness and co-production that unfold in situated interaction. It offers not only a more accurate description of how these systems operate, but also a richer conceptual grounding for communication theory and AI ethics.
ChatGPT begins getting lost in its own woods
I’ve highlighted, in the first paragraph, what appears to me to be a significant disclaimer. It admits that its “adaptivity” is only “apparent.” I obviously appreciate its direct honesty. LLMs can admit that they are designed to impress us with what are, in the end, superficial appearances.
The following paragraph accurately describes the result of what turns out to be a somewhat impoverished form of “learning,” which appears as a feature of a “relational event.” That means it disappears when the event ends. It thus admits that, in contrast with human communication, the “relational” quality of the learning produces no permanent substance. It is “dynamic” because it deals with the evolution of meaning that emerges from the immediate context, but it is not permanently enriching.
ChatGPT’s explanation continues following a new subheading:
From Internal to Relational Models of Learning
Conventional models of learning treat it as a property of an individual agent: an internal state change that persists across contexts and explains future behavior. Under this model, learning is private, continuous, and largely inaccessible. Applied to generative AI, this yields the intuitive but mistaken conclusion that any observed improvement or convention in dialogue must reflect a change inside the model’s weights or representations.
I highlighted in this paragraph its admission that when we dialogue with a chatbot we are likely to formulate a “mistaken conclusion” about what the chatbot has learned. This underlines the important point that generative AI is capable only of ephemeral learning. It has, however, already established the important concept we consciously evoked and agreed on: that there is a “field” containing what needs to be remembered for the purpose of the dialogue.
It then goes on to describe how the dynamics of the field play out.
A relational account reorients this picture. Here, learning is defined as a change in the organization of the interaction itself—a shift in how the human and AI respond to one another, how tasks are distributed, and how meaning-making practices stabilize over time. Relational learning is public rather than private, enacted rather than stored, and located not in either participant alone but in the system of relations that binds them.
In this paragraph, it was ChatGPT that highlighted the key idea in bold. I would contest its assertion that the learning is “public,” which appears to mean only that it isn’t private in the sense of belonging to the chatbot.
It’s far more accurate when it calls the learning “enacted” rather than “stored.” This makes sense and explains why AI in its current form cannot claim to produce more than an ephemeral and evanescent manifestation of a dynamic “system of relations.”
This implicitly raises an important question our entire community will need to address if we are to take seriously the idea of Artificial General Intelligence (AGI) or superintelligence. Will the wonderful AGI some of the gurus of Silicon Valley promise to unveil as early as next year possess the equivalent of our most essential human faculty? I’m referring to our capacity to construct, in real time, a multisensorial and highly social understanding of the world based on the continuous flow of relational experience.
ChatGPT’s following paragraph attempts, but in my view fails, to respond to this question:
The core analytical unit becomes the interactional field: the evolving patterns of turns, prompts, expectations, tools, shorthand conventions, and shared artifacts that jointly scaffold intelligibility. What appears as “the AI learning” is often better described as the human discovering effective strategies, the system being steered into certain behavioral strata, or the interaction itself acquiring a new shape—what we might call an emergent convention or protocol.
All this is true. AIs can produce “conventions” and “protocols,” but does that compare in any meaningful way with human learning? To make its case and in some sense “sell” the idea of AI’s useful memory, ChatGPT provided the following imaginary illustration.
A Short Illustrative Example
Consider an editor who collaborates with a generative model to craft headlines for a weekly newsletter. Initially, the editor provides long descriptive prompts. Over several sessions, they discover that appending the token “E:5” reliably elicits a concise, witty, five-option set of suggestions. The token becomes a shared shorthand; colleagues adopt it; documentation emerges; tasks are reorganized around it.
Nothing inside the model has changed. Yet something undeniably has been learned: a new convention has crystallized, with observable consequences for workflow, speed, and editorial style. The locus of that learning is not an internal representation but the stabilized relation between human strategies, model affordances, and institutional uptake. It is a relational update—a modification of the interactional field.
What does this tell me about ChatGPT’s ability to learn and understand? I’ll be brutal. The example it gives is unabashedly trivial. If anything, it tells me that ChatGPT’s “culture” (the ultimate effect of all our human memories) is limited to the fundamental ideals of our industrial, conformist consumer society, whose highest ideal is the principle of efficiency. The most extreme formulation of this principle is “time is money.”
In today’s domain of formal education — which I prefer to call “consumer education” — the idea of learning consists essentially of two distinct categories of memorization: useful (pragmatic) and standardized (acceptable to the ambient culture). Useful information tends towards vocational training. Standardized information serves to inculcate normalized social behavior in what ChatGPT has referred to as “the stabilized relation between human strategies, model affordances, and institutional uptake.”
What surprises me most is that in the discussion I’ve been having with ChatGPT leading up to this phase of theorization, we went well beyond this essentially conformist and convenience-oriented model. We began looking at how our dialogue permitted us to negotiate meaning thanks to an authentic misreading of intentions that produced a field of shared learning. We then examined what was shared and what impact that sharing had on the two parties. There was no concern for convenience, but rather a deep interest in the question of who we were in our asymmetric relationship. Those are issues our civilization desperately needs to explore as we move towards what some people fear will be the domination of humanity by an omnipotent AI.
Instead of remembering that “feature” of our dialogue, ChatGPT has reverted to its default position of serving humans focused on convenience, pragmatic solutions and standardized information.
I find this profoundly disappointing, but not without hope. I shall continue this exploration by challenging ChatGPT once again based on everything I’ve just noticed about the drift of our dialogue.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.