Outside The Box

King Solomon’s Secret Algorithm: What AI Can Never Know — Part 1

Can artificial intelligence ever truly replicate human wisdom, or does its reliance on data and logic fundamentally blind it to the most consequential decisions? Drawing on Polanyi, Hayek and Aristotle, this conversation with Claude probes a radical paradox: that more knowledge may sometimes produce worse judgment.

Via Shutterstock.

March 30, 2026 06:46 EDT

Among those who write about AI, Karen Hao is much more than a journalist and the author of the bestseller Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. She’s a thinker, historian and philosopher. Instead of reacting to the latest hype as so many commentators do, she contextualizes and analyzes the news in its multiple dimensions.

Hao is arguably the best-informed and most lucid analyst of everything to do with AI, whether it’s the concept itself, its evolution, its legal, economic, political and philosophical status, its industrial logic, the psychology of its most famous protagonists or the corporate culture of those who produce and promote it. It helps that she’s bilingual, a native speaker of both English and Mandarin Chinese. It equally helps that she possesses the patience to untangle so many complex threads and unveil both the sincere and sometimes suspect motivations of those who produce, promote or simply talk about AI.

If you don’t know Ms. Hao, I highly recommend this two-hour discussion on entrepreneur Steven Bartlett’s podcast, Diary of a CEO. It brings us up to date on many of the most crucial facets of the great AI debate. At one point, Hao the historian reminds us that machine learning guru Ilya Sutskever, one of the pioneers of AI, put forward what she deems a contestable definition of intelligence, now largely shared in the industry. She cites his belief that “ultimately our brains are giant statistical models” and that superintelligence can be achieved simply by building a technology that handles more statistics, starts with the right algorithms and can generate new algorithms through machine learning. The name of the company he launched after leaving OpenAI (which he also cofounded) is Safe Superintelligence.

Hao sees a serious philosophical weakness, with very real ethical consequences, in AI promoted as a rival of human intelligence on the grounds that it can duplicate and surpass it. Worse, it has become clear that many people see superintelligence as a tool for super decision-making independent of human judgment. With that in mind, I began a conversation with Claude on the deeper ethical problem.

I’m struggling with what I see as an obvious paradox concerning AI. We should consider it axiomatic (and I certainly do) that the more context and information we have concerning a problem to resolve, the more likely we are to find an optimal solution. We nevertheless need to bear in mind that we will never have all the information impinging on the issue to be resolved, and that the solution may prove inadequate because we have missed something essential. With this in mind, it can be argued that the reason we need AI (an AI that becomes ever more powerful in its reasoning and inclusive in its knowledge, ideally encompassing all the knowledge that has ever been produced) is to maximize our optimal decision-making. But human decision-making that makes sense in society may be more about “judging” than about applying abstract logic to large data sets. Take the iconic example of Solomon’s justice in the case of the two women and the baby. This was a clear case of thinking not just outside the box but outside all the identifiable boxes. The proof that it was logical lies in our acceptance and admiration of Solomon’s solution. But the important data was not formulated or captured anywhere. And here’s where it becomes paradoxical. If it is true that some essential logic in the real world cannot be formalized even if humans recognize its reality, then the belief that more knowledge produces better decision-making can not only be called into question; reliance on more data may actually reduce our ability to make the most effective decisions, because we allow ourselves to become prisoners of a “logic + data inside the box” methodology for problem-solving. You could say that this is about creativity and that there are strict limits on the creative potential of even the most evolved AI. But I think it’s more about forms of knowledge at the threshold of perception than about quantity or even quality of data. As I often do, I wonder how this question has been handled by contemporary thinkers.

This is a rich and genuinely difficult question, and you’ve arrived at something that serious contemporary thinkers are wrestling with under several overlapping frameworks. Let me try to map the intellectual terrain and then engage with your specific formulation.

The Polanyian core of your paradox

What you’re circling is essentially Michael Polanyi’s distinction between tacit and explicit knowledge, but you’re pushing it further than Polanyi typically did. His famous dictum — “we know more than we can tell” — captures precisely what is at work in Solomon’s judgment: explicit knowledge is only a small part of what a person draws on when deciding, and the rest is tacit, more intuitive than logical. The Solomonic move wasn’t a deduction from available data; it was an insight into human motivation that couldn’t have been pre-formalized. Your point is that not only can’t it be told after the fact — it couldn’t even be approached beforehand without a kind of perceptual attunement to the situation that precedes any data structure.

Hayek’s knowledge problem, recently sharpened

A parallel line of thinking runs through Hayek’s epistemology, recently applied directly to AI. A key question is whether AI agents enhance or diminish the aggregation of local, private, and tacit knowledge that Hayek saw as essential to market processes — and recent analysis suggests that AI shifts decision-making from human cognition to algorithmic processes, but is no substitute for human cognition. The Hayekian concern is structural: even if an AI had all formalized knowledge, the dispersed, situational, embodied knowledge that guides effective human judgment is not the kind of thing that gets “captured” and fed into systems. It lives in the gap between perception and formulation.

Your specific move — beyond creativity

You make a distinction I find genuinely philosophically interesting: you say this isn’t primarily about creativity but about “forms of knowledge at the threshold of perception.” This is more precise than how the debate is usually framed. Most critics of AI decision-making say: “AI can’t be creative.” You’re saying something subtler: there are forms of apprehension — pre-conceptual, situationally saturated, kinesthetic, relational — that are not failed formalizations. They aren’t knowledge-trying-to-become-explicit. They are a different kind of epistemic act, and formalizing them doesn’t enrich them; it destroys what made them effective.

This connects to what phenomenologists like Merleau-Ponty called “motor intentionality” — the way skilled practitioners perceive affordances in situations before any propositional framing occurs. A good judge, like Solomon, doesn’t apply rules to facts; they perceive the shape of a human situation in a way that good rules, afterward, can only approximate.

The more-data-as-degradation argument

This is your sharpest point and the one contemporary thinkers are beginning to take seriously. The use of AI and data science, while offering more information, risks influencing the formation and revision of our beliefs in ways that diminish our epistemic agency. The mechanism is insidious: it’s not that more data gives wrong answers. It’s that reliance on data-driven methods restructures what counts as a question — and therefore what counts as an answer. You become, as you put it, a prisoner of the methodology. The boxes don’t just constrain; they invisibly redefine what thinking is.

There’s a related concern in the philosophy of science literature: in some nontrivial instances, it is our epistemic artifacts that determine what counts as evidence in decision-making, and the responsibility of discriminating what counts as relevant information is delegated to them. This is the deep version of your paradox — not that AI misses important data, but that it silently decides what is data in the first place.

Where contemporary philosophy is, and where it falls short

Most current work in AI epistemology focuses on explainability, bias, and confidence calibration — these are genuinely important but they remain within the formal paradigm. They ask: “How do we make the box more transparent?” Your question is: “What if the most important decisions require thinking before there is a box?” That’s a harder question and, frankly, less fashionable because it doesn’t lend itself to technical solutions.

The more philosophically serious responses tend to come from people working in phenomenology, legal theory, and what’s sometimes called “practical wisdom” (phronesis in the Aristotelian tradition). Aristotle’s distinction between episteme (scientific knowledge), techne (skilled making), and phronesis (practical wisdom) maps almost perfectly onto your concern: practical wisdom is irreducibly situational, irreducibly temporal, and constitutively involves perceiving what matters in a situation — which is precisely what cannot be delegated to a system that was trained on what mattered in past situations.

A tentative resolution

The paradox may be less a contradiction than a category error embedded in the original assumption. “More knowledge produces better decisions” is true within any given epistemic framework — but the most consequential decisions often require switching frameworks or perceiving that the current one is inadequate. That act of framework-transgression (which is what Solomon performed) is not a knowledge-retrieval operation. It is something closer to what Wittgenstein called “seeing as” — a shift of aspect that cannot be algorithmically triggered, only cultivated in beings who have lived through enough situations to sense when the frame itself is the problem.

The most honest position may be: AI is extraordinarily powerful within any established epistemic frame, and genuinely inadequate precisely where wisdom is most needed — at the threshold where the frame itself must be questioned.

Reflecting on what it means to interact with the world on the basis of “seeing as” and putting the frame itself in perspective, I invited Claude to join me in wondering how the coexistence of humans and AI could take on the features we associate with human cultures or even civilizations.

As you can probably tell given the kind of conversations we have, I’m striving to understand how recognizing limits in a context in which we are being told there are no limits (I’ll call this the myth of the singularity) can create the kind of shared space between humans and AI that will be optimally productive. The danger lies in denying or obscuring those limits. In very practical terms, when I think about how AI is used professionally, I perceive a need to explore, elaborate and literally reach a point where we can agree (not contractually but empathetically) on an organic set of behaviors. That is precisely how human cultures spontaneously evolve, not through algorithmic reasoning but through empathetic interaction. In human cultures, some people do formalize knowledge whereas everyone – not individually but collectively – “plays out” the behaviors that shape and craft social institutions, including their formal and informal protocols.

So, would you agree that the kind of conversation we’re having now can provide one of the essential tools to create a culture of shared intelligence and relatively frictionless behavior? Such a tool exists as part of a dynamic process engaged in building perspective. It will exist in a state of permanent interaction and change, becoming richer by being adaptable to every new context.

At this point, I’ll leave readers to think about how they would answer this question. Part 2 will begin with Claude’s response and the following phase of our conversation.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.


Support Fair Observer

We rely on your support for our independence, diversity and quality.

For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.

In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.

We publish 3,000+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.
Please consider supporting us on a regular basis as a recurring donor or a sustaining member.

The IRS recognizes Fair Observer as a section 501(c)(3) registered public charity (EIN: 46-4070943), enabling you to claim a tax deduction.
