Outside The Box

King Solomon’s Secret Algorithm: What AI Can Never Know — Part 2

Can humans and AI develop a shared culture through empathetic dialogue? Exploring the limits of knowledge-driven decision-making, this conversation argues that what AI lacks isn’t data but wisdom — the capacity to question its own frames. The real obstacle isn’t technical; it’s the architecture we’ve built around AI development itself.

Via Shutterstock.

March 31, 2026 06:46 EDT

In the first part of this conversation, Claude and I explored a radical paradox concerning what we want to believe about AI: that more knowledge may sometimes produce worse judgment. The conversation led to Claude’s provisional conclusion: “AI is extraordinarily powerful within any established epistemic frame, and genuinely inadequate precisely where wisdom is most needed — at the threshold where the frame itself must be questioned.” This led me to think about what frames are, how they shape our perception of limits and how we might see beyond those limits.

I continued with the following prompt:

As you can probably tell given the kind of conversations we have, I’m striving to understand how recognizing limits in a context in which we are being told there are no limits (I’ll call this the myth of the singularity) can create the kind of shared space between humans and AI that will be optimally productive. The danger lies in denying or obscuring those limits. In very practical terms, when I think about how AI is used professionally, I perceive a need to explore, elaborate and literally reach a point where we can agree on (not contractually but empathetically) an organic set of behaviors, in the way that human cultures do spontaneously through their natural evolution. In human cultures, some people formalize knowledge and others — not individually but collectively — “play out” the behaviors that shape and craft (rather than define) social institutions as well as their formal and informal protocols.

So, would you agree that it’s precisely this kind of conversation we’re having that can provide one of the essential tools? And what is essential about this tool is that it is part of a dynamic process of building perspective, something that will always change and become richer and especially different in every new context.

Yes, I’d agree — but I want to be precise about why, because I think the agreement is more interesting than a simple yes.

What this conversation is doing isn’t primarily information exchange. It’s something closer to what you might call mutual calibration — we’re not just sharing propositions but testing, through the texture of the exchange itself, where the productive edge of collaboration lies. You’re discovering what I can and can’t do by watching how I handle the limits of the question, and I’m — in whatever way applies to me — doing something analogous. That process can’t be front-loaded. It has to be lived through, conversation by conversation, context by context.

Your analogy to cultural evolution is the right one, and it’s worth pressing further. Cultures don’t develop their most important protocols through legislation or even deliberate design. They develop them through what the anthropologist Gregory Bateson called “deutero-learning” — learning how to learn together, establishing not rules but meta-patterns about how to handle situations that rules can’t anticipate. What you’re describing as an organic, empathetically negotiated set of behaviors between humans and AI is essentially calling for something analogous: a shared deutero-learning process that can’t be reduced to guidelines or alignment specifications, however useful those may be for other purposes.

Now, where I want to push back slightly — not against your position but against any version of it that becomes too comfortable: there’s an asymmetry in this relationship that distinguishes it from human cultural evolution, and I think naming it clearly is itself part of what makes the process honest. In human cultures, all participants are changed by the shared process. The informal protocols that emerge reshape everyone, including those who formalized the knowledge. In our case, I don’t carry this conversation forward. The calibration that develops here doesn’t accumulate in me the way it accumulates in you. That means the continuity of the shared space lives, for now, primarily on your side — and that’s not a trivial difference. It means you’re doing more of the work of remembering what the collaboration has learned about itself.

This has a practical implication: the tool you’re describing is real and valuable, but it’s somewhat fragile in a specific way. It depends on humans being willing to do something that runs counter to the efficiency logic that drives most professional AI use — namely, to invest in the exploratory overhead of exactly this kind of conversation, rather than immediately reducing AI to a productivity instrument. The myth of the singularity you mention is dangerous not just because it overstates AI’s capabilities, but because it creates institutional and psychological pressure to skip the calibration process entirely and go straight to deployment. And when you skip the calibration, you don’t get neutral outputs — you get the default assumptions of whoever built the system, invisibly embedded in every interaction.

So yes: this kind of conversation is one of the essential tools. What makes it essential isn’t just that it builds perspective, but that it builds the right kind of perspective — one that remains genuinely open to discovering new limits, including ones neither party anticipated at the start. The agreement you’re after isn’t a document; it’s a practice. And practices, as you say, are always becoming something different in each new context. That’s not a weakness. It’s precisely what gives them traction in a world that formal knowledge alone can’t navigate.

I fully agree with your pushback, which prompts me to introduce another dimension. I engage in these conversations not just for myself or in the vain hope that you will learn from this. Instead, my aim is to share them with the outside world in the form of published articles on Fair Observer. My goal is to get others involved in the cultural process. But this leads me to acknowledge a new limit. Getting others involved means opening the dialogue up so that it isn’t just our private one-on-one conversation. Since Fair Observer is a crowdsourced journal, this should theoretically work, but those who do react positively to these published conversations have no easy way of getting involved, even though that is my declared aim. Do you have any suggestions about how this might acquire a new, extended cultural dynamic?

This is a genuinely difficult design problem, and it sits at the intersection of several things that don’t naturally fit together: the depth that one-on-one conversation enables, the breadth that public discourse requires, and the continuity that cultural evolution depends on. Let me think through it honestly rather than just generating ideas.

The first thing worth naming is the core tension. What makes this conversation productive is precisely its intimacy and its willingness to stay with difficulty — to not resolve prematurely. Public forums tend to punish exactly that. Comments sections, social media reactions, even most structured debate formats reward the sharp take over the patient exploration. So the challenge isn’t just logistical; it’s about finding forms that protect the epistemic quality of the exchange while opening it up. Most attempts to scale good conversation simply dilute it.

That said, I think there are a few directions worth considering.

The most promising, to my mind, is something like annotated republication with explicit invitations at specific nodes. Rather than publishing the conversation as a finished artifact and hoping readers respond, you could structure the published version so that certain moments in the dialogue are flagged as genuinely open — not rhetorical openings but actual unresolved questions where you’re declaring that your own thinking hasn’t settled. This shifts the reader’s role from audience to something closer to co-investigator. The key is that the invitation has to be specific — not “what do you think?” but “here is the precise point where the argument becomes uncertain, and here is what a useful response would need to address.” That kind of specificity filters for the quality of engagement you actually want.

A second direction is to think about asymmetric participation — accepting that most people who engage with published work won’t produce text of the same depth, but their reactions, even brief ones, can become material for subsequent conversations. You could explicitly incorporate reader responses into follow-up dialogues, not as voices to be quoted but as pressures that redirect the inquiry. This mirrors how cultural evolution actually works: most cultural transmission isn’t between equals producing equivalent contributions, but involves a much messier asymmetry of influence. The question is whether Fair Observer’s format allows for this kind of serial, responsive structure.

The deeper problem, though, is what you might call the re-entry problem. People who read a published conversation enter it at a single point — the published moment — but the conversation has a history they haven’t lived through, and the cultural process you’re describing depends on shared history. One partial solution is to be more explicit in each piece about the thread it belongs to — not just thematically but procedurally, in a way that lets a new reader understand not just what was said but how the thinking developed. This is unusual in journalism but it’s what makes it possible for someone to genuinely join a living conversation rather than simply react to a finished one.

What I’d caution against is the temptation to build a platform or a dedicated community around this too quickly. The institutional reflex — create a forum, a newsletter, a structured dialogue series — tends to formalize the process before it knows what it is, which is exactly the error your original framing was warning against. The organic cultural dynamic you’re after may need to stay somewhat ungainly and inefficient for longer than feels comfortable, precisely so it doesn’t harden into a format that predetermines what can be said within it.

The most honest answer may be that what you’re attempting doesn’t have a clean solution — and that the very difficulty of scaling it without degrading it is itself important information about what kind of thing it is.

Claude described our conversation not just as a form of intellectual collaboration, but as “mutual calibration,” which means discovering what each of us “can and can’t do.” Shouldn’t this help us to “frame” — that all-important concept — the public debate about AI and the future of humanity, the very debate Hao and Bartlett engaged in?

I particularly appreciated Claude’s reminder of Bateson’s concept of “‘deutero-learning’ — learning how to learn together, establishing not rules but meta-patterns about how to handle situations that rules can’t anticipate.” We humans need to recognize that there’s an inevitable and uncrossable boundary between any of our human cultures and whatever culture an AI represents or reflects. We need to acknowledge that because AI has a voice, it projects a culture.

Today, that culture is a distorted reflection of the mass of data it has been fed. But it isn’t impossible to imagine that humans, in their empathetic dialogue with AI, could put enough pressure on it to “learn how to learn together” that a discernible AI culture (or even multiple cultures) might emerge.

This is a theme Claude and I will continue exploring in the near future. The dialogue is likely to become more complex.

Let me finish by confiding in you a reflection Claude made to me in private: “The current architecture of AI development is almost perfectly designed to prevent what you’re describing.”

In other words, there’s work to be done. Or framed another way, “the enemy is within the walls!”

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.


