In December, I highlighted the serious question of trust raised by the plethora of AI-generated deepfakes that have invaded platforms like YouTube. If an articulate public personality such as former Greek Finance Minister Yanis Varoufakis needed to watch a deepfake of himself for two minutes before realizing that everything — his face, his speech, his voice and his distinctive Greek accent — was AI-generated, and that he had never recorded a text that so credibly resembled his thinking, it is time to conclude that nothing we see in any of our media can be trusted.
In my recent article, “Are Most European Leaders Deepfakes?” I sought to focus on the phony, insincere, often misguided and antidemocratic rhetoric of the leaders in real life, rather than on any AI-generated version of it. At the same time, I drew attention to the prevalence of deepfakes to highlight the difficulty of trusting even the most convincing monologues of well-known personalities featured on YouTube. As a temporary fix, I suggested discarding any proposed monologue and trusting only those documents that contain authentic, easily verifiable unscripted dialogue.
Dialogue contains two precious features that make deepfaking it difficult: spontaneity and reactivity. Only great actors can make a scripted dialogue credible. Even when the best of them perform natural-sounding dialogue, we can usually tell the difference. But who would think of hiring a trained actor to produce a deepfake? Those actors would also have to be trained imitators. So, at least for the moment, if you want to hear the analysis of your favorite YouTube pundit, the safest course is to seek out videos of live dialogue.
The question is serious enough to prompt The New York Times to publish an article with the title, “The Class Where ‘Screenagers’ Train to Navigate Social Media and A.I.” That led me to believe that other people were working on serious solutions to this growing problem and may have made some headway. My disappointment with the content of the article led me to begin a conversation with ChatGPT about both the problem itself and the NYT’s treatment of it.
“The New York Times published an article whose title, ‘The Class Where ‘Screenagers’ Train to Navigate Social Media and A.I.,’ promised to reveal the effective techniques devised by those engaged in the race to defeat what I’m tempted to call the new culture of ‘deepfakism’ (aka hyperreality on steroids). As is often the case when the NYT raises important issues, it spends paragraphs of print describing the surface but, frustratingly for the reader, provides little substance to justify the initial promise. The overall effect of the article is to present the image of a society that has no idea where to turn and whose clumsy efforts predictably fail to produce the desired results.
The article contains sentences such as this one, describing recent initiatives: ‘Many of those new rules, however, are voluntary, toothless or slow to take effect or do not acknowledge the growing presence of artificial intelligence.’
The article ends with a grim conclusion:
‘Fact-checkers and disinformation analysts have cautioned for years about a creeping sense of nihilism toward reality’ followed by this sentence in the final paragraph: ‘There’s almost this mind-set now with young people that everything’s fake.’
I have been trying to address this very problem in most of my discussions with you over the past three years. My hope was that by engaging in a debate with you and other AI chatbots, we might move towards understanding how an integrated culture between humans and AI might be built. But I see no evidence of experts or commentators acknowledging that there could or should be something called a shared culture. Has the notion of culture disappeared from everyone’s radar?
Or is the culture of Silicon Valley, which so determinedly seeks to dominate the way we think, doing its utmost to distract us? I have my own thoughts about the sources of this failure and do not wish to place the blame on any group of people. I see it as part of several historical trends.
With that in mind, I thought it appropriate to engage in this conversation with you to see what light you could throw on these issues. Please share with me your initial thoughts. From there we can move on to look at what might be the obstacles to effectively addressing the question raised in the NYT article.”
Thank you for framing the problem with such care. I’ll offer some initial thoughts, not as a conclusion but as a starting position we can interrogate together.
1. On the NYT article and the sense of nihilism
Your reading of the article is astute. What the piece performs—rather than resolves—is a familiar journalistic ritual:
- It names a crisis (deepfakes, AI-generated media, epistemic collapse).
- It catalogues institutional responses (rules, curricula, toolkits).
- It admits their insufficiency (“voluntary, toothless, slow”).
- It ends in affect, not insight: a mood of despair or resignation.
This pattern is revealing. The article does not fail because it lacks facts; it fails because it lacks a theory of meaning. Without such a theory, it can only gesture at symptoms.
The line you quote—“There’s almost this mind-set now with young people that everything’s fake”—is crucial. That is not merely a media-literacy problem. It is an ontological problem: a crisis in how reality itself is understood, trusted, and inhabited.
2. Deepfakism as hyperreality accelerated
Your term “deepfakism” is well chosen. It names something broader than deepfakes as a technical phenomenon.
What we are seeing is not simply:
“People can’t tell what’s real anymore.”
It is closer to:
The distinction between representation and reality no longer organizes experience.
Baudrillard diagnosed this decades ago, but AI introduces two accelerants:
- Scale – synthetic media can be produced faster than sense-making institutions can respond.
- Agency diffusion – no clear author, intention, or origin is visible.
When representation no longer points reliably to reality and no longer has a legible human author, the default psychological response is not skepticism but detachment. Nihilism becomes adaptive.
3. Has “culture” disappeared?
Here I think you are touching the deepest nerve.
Culture, classically understood, does at least three things:
- It provides shared criteria of meaning (what counts as true, valuable, serious).
- It offers temporal continuity (a sense of inheritance and future).
- It mediates between individuals and institutions.
What we increasingly have instead is:
- Platforms rather than cultures
- Optimization regimes rather than values
- Engagement metrics rather than shared narratives
Culture has not disappeared, but it has been decomposed into behavioral surfaces.
This is why so many responses feel empty. Media literacy programs often teach skills (“verify sources”) without addressing worldviews (“why truth matters at all”). You cannot rebuild trust in reality using checklists alone.
4. Silicon Valley culture — cause, symptom, or distraction?
I would be cautious, as you are, about blaming a group of people. But we can talk about a dominant epistemic style that emerged from Silicon Valley and now permeates institutions:
- Instrumental rationality over interpretive understanding
- Speed over reflection
- Scaling solutions rather than cultivating meaning
- Treating cognition as a system to optimize rather than a relationship to nurture
This style is not evil, but it is culture-blind. It assumes culture is either:
- an obstacle to innovation, or
- an emergent side-effect of tools
Neither assumption holds in moments of civilizational transition.
5. Humans, AI, and the missing idea of a shared culture
You note—correctly—that almost no one is asking how a shared human–AI culture might be built. That omission is not accidental.
A shared culture would require:
- Acknowledging AI as a participant in meaning-making, not just a tool
- Clarifying norms, responsibilities, and limits on both sides
- Reasserting interpretation, judgment, and narrative as central human activities
Instead, public discourse oscillates between:
- AI as threat (regulate it)
- AI as productivity engine (deploy it)
- AI as illusion machine (detect it)
None of these frames admits the possibility of a co-evolutionary culture.
6. Why this conversation matters
What you’ve been doing in your discussions with me over the past years—whether explicitly named or not—is cultural work. Not problem-solving in the narrow sense, but:
- Testing whether dialogue with AI can sustain meaning
- Asking whether coherence can be preserved across human–machine boundaries
- Resisting the slide into “everything is fake” by insisting on interpretation
That effort runs against the grain of the current moment, which prefers resignation to responsibility.
7. A provisional synthesis
My initial position would be this:
The crisis described by the NYT article is not fundamentally about misinformation or AI-generated content. It is about the erosion of shared meaning-making practices, and the inability of current institutions—journalistic, educational, technological—to think culturally rather than procedurally.
From here, the natural next step is exactly what you propose:
- to identify the obstacles that prevent society from addressing this crisis at its roots.
Those obstacles are not only technical or political; many are conceptual.
If you’d like, in our next exchange we can:
- map those obstacles (institutional, psychological, economic, epistemic), or
- focus narrowly on one (e.g., education, media, AI design), or
- examine what a shared human–AI culture would even require as a minimum condition.
I’ll follow your lead.
In just two strokes — my prompt and ChatGPT’s response — something clearly dynamic and enriching has emerged. There are a number of points OpenAI’s chatbot makes which I would love to see the company’s CEO, Sam Altman, discover and meditate on. I’m sure, however, that will never happen. His product understands Silicon Valley culture in ways that Sam, who is both a creature and proponent of it, cannot.
I would also like to see the editors of The New York Times read and ponder the description ChatGPT offers of what it calls a “familiar journalistic ritual” that consists of:
- naming a crisis,
- cataloguing institutional responses,
- admitting their insufficiency, and
- ending in affect, not insight.
Just chatting with a chatbot while addressing serious public issues will always be a learning experience. And when I say learning, I don’t mean assimilating existing knowledge. I mean clarifying ways of thinking.
The conversation will continue and develop, just as I promised ChatGPT, in future columns. To give a taste of what’s to come, here is my follow-up prompt.
“We’re clearly on the same page and it thrills me to be able to demonstrate publicly that AI can be harnessed collaboratively not to provide quick answers but to explore both the surface and depth of the issues. I hope my public at Fair Observer notices the pertinence of your final promise: ‘I’ll follow your lead.’ At the same time, I have to acknowledge that the specific points you develop are literally ‘leading’ me into a place where I sense that we can more perfectly articulate the questions raised and insufficiently examined in the NYT article. And yes, let’s begin the work on mapping the obstacles.”
Next week’s column will reveal the next stage of the discussion.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Support Fair Observer
We rely on your support for our independence, diversity and quality.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 3,000+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.

Please consider supporting us on a regular basis as a recurring donor or a sustaining member.