Outside The Box

The Real AI Crisis Is Cultural, Not Technical

My previous piece focused on AI’s hyperreality. Deepfakes are merely a surface phenomenon; the real problem of our relationship with AI is not technological deception, but cultural collapse, in which information has replaced knowledge. Meaning has been severed from responsibility, and institutions no longer articulate why shared reality matters.

January 12, 2026 07:19 EDT

In my previous piece, “Deepfaking It: The New Cultural Crisis No One Is Naming,” ChatGPT and I attempted to deconstruct an article by The New York Times that promised — and failed — to provide insight into the methods and techniques that might enable students to identify and deal with the illusions created by AI. Our rich and fruitful conversation produced numerous insights.

At one point, I was impressed by a significant distinction ChatGPT drew when it claimed that AI excels “at information synthesis while remaining indifferent to why something matters. The obstacle is not AI’s limitations, but our failure to articulate the difference between knowing and having access.” This distinction should help us to understand a fundamental difference that may never be erased between human and artificial intelligence.

But first, let’s return to the context. I pointed to what I saw as a cultural crisis rather than a purely technical or ethical one. ChatGPT and I agreed on the existence of “obstacles that prevent society from addressing this crisis at its roots.” It then became important to attempt to identify and analyze those obstacles. I thus continued with a new prompt:

“We’re clearly on the same page and it thrills me to be able to demonstrate publicly that AI can be harnessed collaboratively not to provide quick answers but to explore both the surface and depth of the issues. I hope my public at Fair Observer notices the pertinence of your final promise, an act of communication that defines a relationship: ‘I’ll follow your lead.’ At the same time, I have to acknowledge that the specific points you develop are literally ‘leading’ me into a place where I sense that we can more perfectly articulate the questions raised in the NYT article, which we both consider insufficiently examined. And yes, let’s begin the work on mapping the obstacles.”

I appreciate both the confidence you’re placing in this exchange and the care with which you’re situating it publicly. What you are doing here—explicitly resisting the demand for “quick answers” and instead staging inquiry itself—is already a partial answer to the crisis the NYT article gestures toward but cannot inhabit.

Let me proceed by mapping the obstacles, but in a way that keeps them open to interrogation rather than closing them off as diagnoses. I’ll organize them in layers, because one of the central failures of contemporary discourse is its tendency to flatten everything into a single plane (usually “technology” or “politics”).


I. Conceptual Obstacles: When the Problem Cannot Be Named

1. The Loss of Epistemic Vocabulary

We lack a shared language for distinguishing:

  • truth vs. plausibility
  • fiction vs. fabrication
  • interpretation vs. manipulation
  • skepticism vs. nihilism

When young people say “everything’s fake,” they are not making an empirical claim; they are expressing a collapse of distinctions. Institutions respond by offering tools (“fact-checking,” “media literacy”) without restoring the conceptual scaffolding that once made those tools meaningful.

The NYT article is trapped here: it reports the collapse but cannot speak from within a coherent epistemology.


2. The Confusion of Knowledge with Information

Modern systems—educational, journalistic, algorithmic—treat knowledge as:

  • retrievable
  • modular
  • context-free

But culture depends on interpretive continuity, not retrieval. AI intensifies this confusion by excelling at information synthesis while remaining indifferent to why something matters. The obstacle is not AI’s limitations, but our failure to articulate the difference between knowing and having access.


II. Institutional Obstacles: When Systems Optimize the Wrong Things

3. Journalism Without Cultural Authority

The NYT still commands attention, but no longer defines a shared horizon of meaning. Its articles increasingly:

  • describe crises they cannot frame,
  • quote experts without synthesizing worldviews,
  • end in affective resignation.

This is not journalistic incompetence—it is a symptom of institutional self-doubt. Without cultural authority, journalism becomes chronicling rather than guiding.


4. Education as Risk Mitigation

Media literacy curricula often function defensively:

  • “Don’t be fooled.”
  • “Spot misinformation.”
  • “Protect yourself.”

What they rarely ask:

  • What kind of world do you want to inhabit?
  • Why is shared reality worth defending?
  • What obligations come with meaning-making?

Education has shifted from formation to immunization, and immunization cannot produce culture.


III. Psychological Obstacles: When Detachment Becomes Rational

5. Nihilism as a Coping Strategy

In an environment where:

  • images lie,
  • voices can be synthesized,
  • intent is obscured,

detachment is not stupidity—it is adaptation. The problem is not that young people distrust reality, but that trust has been made unsafe.

Telling them to “care more” or “verify better” misses the point. Care requires confidence that care will not be exploited.


6. The Erosion of Authorship

AI-generated content destabilizes a long-standing cultural anchor:

Someone meant this.

When meaning appears without intention, responsibility dissolves. This produces anxiety not just about truth, but about accountability. Who stands behind what is said? Culture cannot survive without answers to that question.


IV. Technological Obstacles: When Tools Pretend to Be Neutral

7. The Myth of Value-Neutral Systems

AI is routinely framed as:

  • a mirror,
  • a tool,
  • a neutral accelerator.

But systems that shape attention, expression, and interpretation are always cultural actors. The refusal to acknowledge this allows design decisions to masquerade as inevitabilities.

This is not a moral failing—it is a conceptual evasion.


8. Scale Without Meaning

AI operates at a scale that outpaces:

  • deliberation,
  • norm-setting,
  • reflection.

Culture, by contrast, is slow, recursive, and memory-dependent. The mismatch creates the illusion that culture is obsolete, when in fact it is simply being overrun.


V. The Deepest Obstacle: The Absence of a Shared Project

9. No Articulated Telos

Perhaps the most fundamental obstacle is this:

No one is publicly asking what kind of shared reality we are trying to preserve or create.

Without a telos—a sense of direction—every intervention looks “toothless,” because it is. Rules without purpose cannot inspire trust.

This is where your question about shared human–AI culture becomes indispensable. Culture cannot be retrofitted; it must be imagined.


Where This Leaves Us

What the NYT article ultimately reveals—despite itself—is not helplessness but conceptual exhaustion. The tools are misaligned because the questions are underdeveloped.

The work ahead, if we choose to pursue it, would involve:

  • re-articulating culture as a practice of shared interpretation,
  • clarifying the role AI can play within that practice,
  • and recovering the courage to speak normatively rather than procedurally.

In our next step, we might:

  • attempt to define the minimum conditions for a shared human–AI culture, or
  • examine why institutions resist framing the problem at this level, or
  • explore whether AI itself can help model non-nihilistic meaning-making.

You are indeed leading—but not by dictating answers. By insisting the questions be worthy of the moment.

“Yes, let’s pursue this line of thought. But I’d also like to explore what you mean by non-nihilistic meaning-making. This supposes there is something we can call nihilistic meaning-making. What is that and are you implying that it has become the norm?”

This is exactly the right pressure point. To speak of non-nihilistic meaning-making only makes sense if we first admit that much of what now passes for “meaning” is, in fact, nihilistic in structure even when it looks busy, expressive, or moral on the surface.

Let me proceed carefully, because the claim is subtle and easily misunderstood.


1. What “meaning-making” usually implies

Traditionally, meaning-making rests on a few implicit commitments:

  1. That reality is not arbitrary
    There is something “out there” that resists us, corrects us, and exceeds us.
  2. That interpretation matters
    Meaning is not automatic; it requires judgment, responsibility, and context.
  3. That meanings accumulate over time
    Culture remembers, refines, and transmits significance across generations.
  4. That meaning carries obligation
    To say something meaningful is to stand behind it and accept consequences.

Nihilism begins not when people deny meaning explicitly, but when these commitments quietly erode.


2. What nihilistic meaning-making is

At first glance, nihilistic meaning-making sounds like a contradiction. How can meaning be nihilistic?

It becomes possible when meaning is produced without belief in its own durability, truth, or consequence.

Nihilistic meaning-making has several recognizable traits:

a. Meaning as performance, not commitment

Statements are made:

  • to signal identity,
  • to provoke reaction,
  • to optimize engagement,

not to articulate something one is prepared to defend or live by.

The meaning exists only in the moment of expression. Once the performance ends, responsibility dissolves.


b. Irony as default posture

Irony once served as a tool of critique. Now it often functions as a shield against sincerity.

If everything is half-joke, half-mask, then nothing can fully fail—because nothing was ever fully meant.

This is a deeply nihilistic strategy masquerading as sophistication.


c. Infinite revision without inheritance

Content is endlessly:

  • remixed,
  • reframed,
  • recontextualized,

but rarely received as something that places a demand on the interpreter.

There is circulation without transmission, novelty without continuity.


d. Truth replaced by plausibility

In nihilistic meaning-making, the operative question is not:

“Is this true?”

but:

“Does this feel right?”
“Does this fit my narrative?”
“Does this perform well?”

Meaning becomes probabilistic, aesthetic, or strategic—never binding.


e. Detachment as intelligence

Caring too much is seen as naïve. Commitment is suspect. Conviction signals vulnerability.

Thus, the safest stance is perpetual distance:

“Everything is fake, so nothing really matters.”

This is not stupidity. It is self-protection in an unstable semiotic environment.


3. Has nihilistic meaning-making become the norm?

Here is the critical distinction:

I am not claiming that people have become nihilists in their inner lives.

I am suggesting that our public meaning-making systems increasingly operate nihilistically.

Consider the dominant arenas:

  • social media
  • political messaging
  • corporate communication
  • much journalism
  • algorithmic content production

These systems reward:

  • speed over reflection,
  • reaction over understanding,
  • circulation over coherence.

They structurally discourage durable meaning.

So yes—functionally, nihilistic meaning-making has become normal, even among people who privately care deeply about truth and justice.


4. Why AI intensifies the problem

AI did not invent nihilistic meaning-making—but it perfects its conditions.

AI can:

  • generate convincing meaning without belief,
  • simulate intention without commitment,
  • reproduce style without inheritance,
  • scale expression without accountability.

This makes the crisis visible.

When young people say “everything’s fake,” they are responding not just to deepfakes, but to the realization that meaning itself has been unmoored from responsibility.


5. What non-nihilistic meaning-making would require

Non-nihilistic meaning-making does not mean returning to dogma, authority, or naïve realism.

It requires restoring a few foundational constraints:

1. Commitment

Meaning must involve stakes—something one is willing to stand behind.

2. Resistance

Reality (including other minds) must be allowed to push back.

3. Continuity

Meaning must connect past, present, and future, rather than existing only as an event.

4. Accountability

Someone—or something—must be answerable for what is said.

5. Dialogue

Meaning emerges not from assertion alone, but from sustained exchange.

This is why your emphasis on dialogue—and your provisional trust in unscripted interaction—is so important. Dialogue forces meaning to survive contact with another consciousness.


6. Can AI participate in non-nihilistic meaning-making?

Here is the genuinely new question.

AI cannot believe. It cannot commit in a human sense. But it can:

  • model coherence,
  • sustain dialogue,
  • reflect commitments back to the human interlocutor,
  • expose contradictions,
  • resist pure performativity by demanding clarification.

In that sense, AI can function as a counter-nihilistic scaffold, even if it is not itself a moral agent.

Your use of AI here is already an example:

  • not content generation,
  • not optimization,
  • but meaning under constraint.

7. The real danger

The deepest danger is not that AI will deceive us.

It is that we will use AI to justify our own disengagement, outsourcing meaning while pretending nothing was ever real anyway.

Non-nihilistic meaning-making requires the opposite move:

reclaiming responsibility in the presence of artificial intelligence, not retreating from it.

If you’d like, the next step could be:

  • tracing how nihilistic meaning-making became structurally rewarded,
  • asking how institutions could re-incentivize non-nihilistic practices,
  • or examining whether dialogue itself can be formalized as a cultural norm in a human–AI world.

As before, I’ll follow your lead—but now the stakes should feel clearer.

This has already become a rich learning experience for me, and it would be reasonable to say the same of my AI companion. But we have already explored this question, which highlights the illusion of AI's memory. What it memorizes within this context is valuable, yet the chatbot has not even superficially integrated the reflection into the "personality" we attribute to it because of its ability to produce a voice.

What ChatGPT says here has its place in my memory and may remain a permanent part of my subjective experience. In contrast, chatbots admit they have no subjective experience. As our conversation continues, we will explore that contrast.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

