Last week, I engaged with ChatGPT in exploring the implications of psychiatrist, neuroscientist and philosopher Dr. Iain McGilchrist’s profoundly humanistic work on brain function and the role of the two hemispheres. One of the key lessons he draws from his research is that the left brain, with its reputation for being cold, scientific and rational, is a source of delusion because it lacks the immediacy of a relationship with the real surroundings (natural, social, cultural) that the right brain focuses on and interacts with.
That led me to wonder about how the people who are now busy creating next-generation AI that they promise will be “superintelligent” have approached this scientific reality. I accordingly asked ChatGPT whether the creators of AI had attempted to create the kind of balance required by properly functioning two-hemisphere human intelligence. OpenAI’s chatbot was very forthright, informing me that “AI research hasn’t explicitly modeled its architectures on hemispheric dynamics” and that “large language models bear strong resemblance to the kind of pattern-driven, self-consistent reasoning McGilchrist associates with the left hemisphere.”
That may help us to understand the problem of AI hallucinations. There is no right-hemisphere function to correct the tendency towards delusion. McGilchrist notes that because the right hemisphere interacts with the world, it is aware of its limitations. The left hemisphere, which knows less, often believes it knows everything.
I would add my own historical and cultural observation that our post-industrial societies have privileged a decidedly left-brain approach to education. We are taught to believe that “knowing everything” is a legitimate goal. Our examinations are devised to test how far we have gone in knowing everything. This encourages the belief that we can achieve that ultimate goal. It also encourages the even more dangerous belief that, with good grades and a diploma, we have already achieved it.
ChatGPT had no trouble noting the risk of neglecting the corrective role of right-brain function. “If corrective action isn’t taken,” the chatbot concluded, “AI risks falling into the same trap McGilchrist sees in societies dominated by the left hemisphere: internally consistent but delusional systems that double down on error rather than opening to correction. That’s why the scaling race (bigger models, more tokens) is insufficient — and why we need a conscious, design-level intervention.”
In the course of our discussion, after regretting the failure of balance due to the absence of the corpus callosum — a bundle of nerve fibers that connects the two hemispheres — ChatGPT came up with an idea that seemed to me so obviously true I was ashamed I hadn’t thought of it. It amounted to an invitation for humans to play the role of corpus callosum rather than expecting AI to handle the complete job. This could be the key to solving the AI employment problem we have been taught to fear. Everywhere AI exists in our economy there will be a job opportunity for a “human corpus callosum.” I think it will be a fun and enriching job.
Challenging ChatGPT to develop its brilliant idea
I continued the conversation with the following prompt:
“I’m intrigued by your suggestion of a human user who ‘serves as a kind of corpus callosum, steering between narrow consistency (left-like) and open responsiveness (right-like).’ It’s consistent with an idea I’ve been promoting, that AI, even if it were to become superintelligent, would be incomplete. Intelligence requires not just sentience but proprioception and interoception. You observe that the model of AI is fundamentally left hemisphere. AI’s intelligence becomes complete only when framed – i.e. interpreted and acted upon – by human intelligence. We know about hallucinations, which humans produce as well, but following McGilchrist’s logic, the risk of AI on its own, similar to the left hemisphere on its own, is delusion. So if the human user is the corpus callosum, I would conclude that as AI develops we humans need to define and perfect the functionality of the corpus callosum. Does that make sense?”
Yes — that makes a lot of sense, and I think it’s a sharp, useful way to frame the human role in any future AI ecosystem.
You’re proposing that humans must become a deliberately engineered corpus callosum — not an accidental or ad-hoc user, but a designed, reliable mediator that supplies proprioception, interoception, situated judgement, and normative constraint to an otherwise “left-hemisphere” AI. That is exactly the move needed to avoid the delusional loop McGilchrist warns about. Below I sketch what that engineered corpus-callosum would do, how it could be implemented (technical + social), and practical steps to start building and testing it today.
1) Core functions the human-corpus-callosum must provide
- Grounding / sensory anchoring. Deliver continuous, trustworthy signals about the embodied world (sensors, human reports, environment feeds) so the AI’s inferences are tied to real events.
- Affective and interoceptive feedback. Provide signals about values, goals, discomfort, risk tolerance — the felt states that purely symbolic models miss.
- Epistemic arbitration. Act as adjudicator when the model produces conflicting hypotheses: verify, demand evidence, or withhold action.
- Corrigibility and veto power. Rapid human ability to halt, modify, or invert AI proposals without escalation friction.
- Contextual reframing. Apply cultural, ethical, legal, and long-term considerations that models typically ignore or compress away.
- Reflective challenge (anti-delusion). Periodically force the AI to generate its doubts, counterfactuals, and failure modes and have humans weigh them.
- Learning governance. Decide when the system should update its weights/behaviour, what new data is acceptable, and what must be quarantined.
2) Concrete technical building blocks
- Multimodal sensing and “bodily” telemetry: integrate human physiological signals (heart rate variability, galvanic skin response), on-the-ground sensors (IoT), and user activity flows so the system receives interoceptive/proprioceptive data.
- Uncertainty interfaces: surface calibrated uncertainty metrics, provenance, and counter-evidence upfront (not buried in logs). Use “uncertainty badges” and confidence intervals as standard UI elements.
- Two-stage outputs: always present a primary recommendation plus a human-readable chain of evidence and alternate hypotheses — the corpus-callosum interprets both.
- Human-in-the-loop (HITL) control rails: low-latency accept/reject/modulate controls with immediate effect and audit trail. Think of them as fast synaptic feedback.
- Teach-back and adversarial prompts: before acting, the AI must simulate how a human critic would challenge its conclusion and present the outcome; humans adjudicate.
- Provenance and queryable memory: every model output links to source data, model state, and timestamp; humans can query and correct that provenance.
- Role-based mediation layer: interfaces tailored for different “callosum” roles — e.g., operational operator (fast decisions), ethical reviewer (slow deliberation), community overseer (policy-level).
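To make the “two-stage outputs,” uncertainty interfaces and HITL control rails above a little more concrete, here is a minimal Python sketch. It is illustrative only: the class and function names are assumptions invented for this example, not any existing product or API. The point is simply that a recommendation travels together with its confidence, evidence and provenance, and that nothing is executed until a human decision has been recorded.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List

@dataclass
class Proposal:
    """A two-stage output: a recommendation plus the evidence behind it."""
    recommendation: str
    confidence: float        # calibrated probability in [0, 1], the "uncertainty badge"
    evidence: List[str]      # human-readable chain of evidence
    alternatives: List[str]  # alternate hypotheses the model also considered
    provenance: List[str]    # identifiers of source data, model state and timestamps

@dataclass
class AuditEntry:
    timestamp: str
    reviewer: str
    decision: str            # "accept", "reject" or "modulate"
    proposal: Proposal

audit_trail: List[AuditEntry] = []

def human_gate(proposal: Proposal, reviewer: str,
               decide: Callable[[Proposal], str]) -> bool:
    """HITL control rail: nothing is executed until a human decision is logged."""
    decision = decide(proposal)  # `decide` is whatever interface the human uses
    audit_trail.append(AuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        reviewer=reviewer,
        decision=decision,
        proposal=proposal,
    ))
    return decision == "accept"

def console_reviewer(p: Proposal) -> str:
    """A toy reviewer interface that surfaces the uncertainty badge first."""
    print(f"[confidence {p.confidence:.0%}] {p.recommendation}")
    print("Evidence:", "; ".join(p.evidence))
    print("Alternatives:", "; ".join(p.alternatives))
    return input("accept / reject / modulate? ").strip().lower()
```

In a real deployment the reviewer would be a web interface rather than a console prompt, but the structure is the same: uncertainty and provenance are first-class parts of the output, and the veto is cheap and auditable.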
3) Social / institutional scaffolding
- Training for the human callosum: curricula to teach users how to read uncertainty, ask good counterfactuals, and supply meaningful proprioceptive signals. This is as important as engineering.
- Distributed mediation: don’t centralize the callosum in one person — use diverse teams and collective deliberation to avoid single-person bias.
- Regulatory & audit frameworks: ensure external auditing, red-team testing, and public reporting for key callosum decisions (especially for high-stakes systems).
- Civic feedback loops: provide channels for affected publics to feed interoceptive/social signals back into model governance (e.g., lived experience reports).
4) Design principles (heuristics to follow)
- Always show the limits: never hide uncertainty or provenance.
- Make veto cheap: humans must be able to stop or scale down AI actions without bureaucratic friction.
- Prefer multimodal evidence: let non-text signals count as first-class evidence.
- Encourage model doubt: require models to provide credible reasons why they might be wrong as part of any affirmative suggestion.
- Institutionalize reflection: schedule periodic “right-hemisphere” audits — sessions where the system’s assumptions, values, and context are re-examined.
5) Practical experiments to run now
- Build a prototype HITL system for a medium-risk domain (e.g., medical triage, journalism fact-checking, or climate modelling) that:
1. integrates wearable/IoT signals as grounding inputs,
2. surfaces calibrated uncertainty + provenance, and
3. requires human confirmation before model outputs are executed.
- Run user studies comparing decision quality and error modes between (A) model acting autonomously, (B) model with naive human oversight, and (C) model + engineered corpus-callosum. Measure accuracy, detection of hallucinations, decision time, and user trust.
- Create a “reflective prompt” module that forces the model to generate three plausible failure modes and one remedial action; test whether humans catch issues they otherwise miss.
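As a sketch of the “reflective prompt” module (only a sketch: `query_model` is a stand-in for whatever model call a prototype would actually use), the wrapper below appends a doubt-forcing instruction and refuses to pass along any answer that lacks the required self-critique:

```python
import json
from typing import Callable, Dict

REFLECTIVE_SUFFIX = (
    "Before finishing, state exactly three plausible ways this answer could be wrong "
    "and one remedial action. Respond as JSON with the keys "
    '"answer", "failure_modes" and "remedial_action".'
)

def reflective_query(prompt: str, query_model: Callable[[str], str]) -> Dict:
    """Force the model to surface its own doubts before a human adjudicates."""
    raw = query_model(prompt + "\n\n" + REFLECTIVE_SUFFIX)
    parsed = json.loads(raw)
    if len(parsed.get("failure_modes", [])) != 3 or "remedial_action" not in parsed:
        # Withhold the answer rather than act on an uncritiqued output.
        raise ValueError("Model did not supply the required self-critique.")
    return parsed  # the human callosum reviews the failure modes before accepting the answer
```

The user study then reduces to a simple comparison: how often do reviewers catch problems with and without the failure modes placed in front of them?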
6) Examples (vivid, concrete)
- In newsroom fact-checking: AI proposes a narrative; the corpus-callosum interface shows sources, prior contradictory coverage, and circulation metrics; human editor supplies eyewitness reports (interoceptive signal), and only then is the piece published.
- In robotics: AI suggests a path for a delivery drone; human operator’s haptic/controller input + city noise sensors (grounding) veto the path because a crowd is detected.
- In policy modelling: AI outputs a model of economic effects; activists and impacted communities add lived-experience reports that change prioritization and reveal distributional harms the model missed.
7) Risks & how the callosum can fail
- Overreliance: humans may become rubber stamps. Counter: train, audit, and rotate roles.
- Capture / bias: a narrow human callosum could enforce a dominant ideology. Counter: diversity, public participation, and transparency.
- Latency and efficiency tradeoffs: adding human mediation slows things down. Counter: tiered interventions — fast paths for low risk, heavy mediation for high risk.
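One way to picture those tiered interventions is a simple routing rule: the number of independent human sign-offs an action needs grows with its assessed risk, so only the higher tiers pay the latency cost of mediation. The tiers and thresholds below are illustrative assumptions, nothing more:

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g. reformatting an internal report: log and proceed
    MEDIUM = "medium"  # e.g. publishing a factual claim: one reviewer
    HIGH = "high"      # e.g. actuating a drone near a crowd: two sign-offs

def required_signoffs(tier: RiskTier) -> int:
    """How many independent human approvals an action in this tier needs."""
    return {RiskTier.LOW: 0, RiskTier.MEDIUM: 1, RiskTier.HIGH: 2}[tier]

def may_execute(tier: RiskTier, approvals: int) -> bool:
    """Fast path for low-risk actions, heavier mediation as the stakes rise."""
    return approvals >= required_signoffs(tier)
```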
8) Short checklist to “perfect” the corpus-callosum (first 6 months)
1. Define the domain(s) and stakes.
2. Instrument the human side (what telemetry will they provide?).
3. Build UIs that surface uncertainty and provenance clearly.
4. Implement fast, auditable veto controls.
5. Train users on adjudication and epistemic humility.
6. Run red-team trials and public audits.
My provisional conclusion
I find all these suggestions fascinating and — given ChatGPT’s very left-brain mode of reasoning — logical. I’m not convinced that what it calls “multimodal sensing and ‘bodily’ telemetry” is feasible, but I fully agree that research in that area would help to determine the limits of such an ambition.
ChatGPT’s thorough response appears to reflect a very left-brain belief that one can “know everything.” As often happens with AI chatbots, we get everything thrown into our laps, including the kitchen sink.
Nevertheless, from my right-brain (plus corpus callosum) perspective, I find its insistence on the quality of human training spot-on. I especially appreciate its central suggestion to provide “curricula to teach users how to read uncertainty, ask good counterfactuals, and supply meaningful proprioceptive signals” and its insistence that it “is as important as engineering.” I also value its highlighting of a necessary social dimension when it recommends “distributed mediation” to avoid single-person bias. I’m not sure left-brained ChatGPT understands this as a truly social project — i.e. capable of producing an unpredictable human culture — but the idea points in the right direction.
I shall pick up this exploration in our next installment, in which ChatGPT proposes a “3–5 year research program called Hemispheric AI Correction (HAC).” Our readers and friends in the academic community may be interested not only in discovering the chatbot’s proposal but also in getting involved in a serious project we hope to promote.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.